[Fleet] Use `unmapped_type: long` and `missing: 0` when sorting datasets that don't include `event.ingested` #136114
Conversation
Pinging @elastic/fleet (Team:Fleet)
@elasticmachine merge upstream
I'm currently testing this PR against the e2e tests:
@kpollich with the steps above, I'm seeing that the agent never comes online. When I SSH into the machine and inspect the agent, I see:
and TBH I don't know where this value comes from. The Kibana response is:
Hi @mdelapenya thanks for looking into this. I'm continuing to work on this today.
The only time I see the
I am not aware of any changes to this environment variable, no. I'm looking at why Cypress tests are failing on this branch now, but I will try to reproduce the above issue once that's resolved.
I pushed a change in 7e07c5d to filter out non-integrations data streams from the Fleet data streams API. It didn't make sense that we returned all data streams in this API anyway, as Elasticsearch has its own methods for getting that information. This should help with the enterprise search documents ingested under the

@mdelapenya I am having trouble following your instructions to test here. I get a docker auth error when trying to fetch the docker image for this PR. Can you help out when you're next online? Thanks.
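The filtering change described above might look roughly like this (a hypothetical sketch, not the actual Fleet code; the regex and function name are assumptions, based on the `<type>-<dataset>-<namespace>` naming scheme integration data streams follow):

```python
import re

# Hypothetical sketch: keep only data streams whose names follow the
# integration naming scheme <type>-<dataset>-<namespace>.
INTEGRATION_PATTERN = re.compile(r"^(logs|metrics|traces|synthetics)-.+-.+$")

def filter_integration_streams(names):
    return [n for n in names if INTEGRATION_PATTERN.match(n)]

streams = [
    "logs-nginx.access-default",
    "metrics-system.cpu-default",
    ".ent-search-engine-documents",  # e.g. an enterprise search stream, filtered out
]
print(filter_integration_streams(streams))
# → ['logs-nginx.access-default', 'metrics-system.cpu-default']
```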
Please run docker login in your stack machine (I will update the description above), as I verified that copying the docker config file alone is not enough.
@kpollich I'm still seeing this error:
Kibana logs:
API call:
If I go to the browser and browse the same endpoint, I get the same error message. Kibana config:

```yaml
server.name: kibana
server.host: "0.0.0.0"
telemetry.enabled: false
elasticsearch.hosts: [ "http://18.222.197.41:9200" ]
elasticsearch.username: admin
elasticsearch.password: changeme
xpack.monitoring.ui.container.elasticsearch.enabled: true
xpack.fleet.registryUrl: "https://epr-staging.elastic.co"
xpack.fleet.agents.enabled: true
xpack.fleet.agents.elasticsearch.host: "http://18.222.197.41:9200"
xpack.fleet.agents.fleet_server.hosts: ["http://18.222.197.41:8220"]
xpack.encryptedSavedObjects.encryptionKey: "12345678901234567890123456789012"
xpack.fleet.agents.tlsCheckDisabled: true
xpack.fleet.packages:
  - name: fleet_server
    version: latest
xpack.fleet.agentPolicies:
  - name: Fleet Server policy
    id: fleet-server-policy
    description: Fleet server policy
    namespace: default
    package_policies:
      - name: Fleet Server
        package:
          name: fleet_server
```
@mdelapenya I'm trying your updated steps above but getting a docker error when trying to spin up kibana in the stack
This is after running
Because Kibana never comes up, the
@mdelapenya Helped me get unblocked on E2E test setup and I discovered something interesting. In the e2e environment, none of the data streams created seem to contain documents that have been run through Fleet's
Example records in E2E:
Example records in local dev environment:
Neither the
I see the expected fleet component templates that set the
I'm also still not sure where the date value throwing the error comes from.
I'm unable to reproduce the specific date parsing error coming from E2E tests in my local development environment, but I will continue investigating.
[Fleet] Use `@timestamp` instead of `event.ingested` in data stream sorting
Building a new docker image off of ce09f35 where I switched to using
So it turns out my assumption that we can use
@kpollich just to add more context, this error happens only on main (8.4), not in 8.3, 8.2, or 7.17.
[Fleet] Use `@timestamp` instead of `event.ingested` in data stream sorting
[Fleet] Use `unmapped_type: long` when sorting datasets that don't include `event.ingested`
@mdelapenya I was finally able to reproduce this by setting up a custom logs integration (which doesn't include `event.ingested`).
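To illustrate the sort semantics the PR title describes (a hypothetical sketch in plain Python, not Kibana or Elasticsearch code): with `missing: 0`, documents lacking `event.ingested` sort as though the value were `0`, so they land last in a descending sort instead of breaking the query.

```python
# Illustration of `missing: 0` sort semantics: an absent event.ingested
# is treated as the numeric value 0 for sorting purposes.
docs = [
    {"_id": "a", "event": {"ingested": 1_658_240_000_000}},  # epoch millis
    {"_id": "b"},  # e.g. a custom logs document without event.ingested
    {"_id": "c", "event": {"ingested": 1_658_250_000_000}},
]

def ingested_or_zero(doc):
    # Mirrors `missing: 0`: a missing field sorts as 0.
    return doc.get("event", {}).get("ingested", 0)

ordered = sorted(docs, key=ingested_or_zero, reverse=True)
print([d["_id"] for d in ordered])  # → ['c', 'a', 'b']
```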
FWIW it's still a concern to me that the documents we ingest during E2E tests seemingly don't go through the
In that sense, we can pair and follow the code in the E2E tests; they basically invoke the install command in the elastic-agent. Is it possible that fleet-server needs to be configured/updated in any other manner?
Ugh, maybe not. A second run of the test resulted in errors. Still looking :/
@nchaulet I re-requested review here since the actual scope of changes has been altered. Title/description updated to match.
I managed to shell into the docker container running Kibana in my E2E stack and isolate the date parsing error to a single data stream. Here's the exact query we run in the data streams API now, and its results on this data stream:
Here are all the documents under that data stream that I see in my E2E cluster:
Realized I didn't provide a
No documents in this data stream have an `event.ingested` value.
Found this Elasticsearch issue that seems relevant: elastic/elasticsearch#81960
The recommendation from the above issue and its related SDH (https://github.com/elastic/sdh-elasticsearch/issues/5352) was to provide an explicit
I manually made the same change by editing the built
Since we're green and approved on the Fleet side (thanks @nchaulet), I'm going to merge this PR, which should unblock the next bump to 8.4 in the E2E repo. Thanks all.
[Fleet] Use `unmapped_type: long` when sorting datasets that don't include `event.ingested`
[Fleet] Use `unmapped_type: long` and `missing: 0` when sorting datasets that don't include `event.ingested`
💚 Build Succeeded
Summary
Ref elastic/e2e-testing#2771
I created a new E2E stack and checked the data streams UI in Fleet before running any tests and saw this error:
This seemed fixed in elastic/elastic-agent#654 (comment) (see QAS comment), but for some reason this issue is still present in the E2E suite. I've opted to swap to `@timestamp`, which should always exist, as a way to unblock E2E tests.

I was able to reproduce this by creating a custom logs integration, ingesting some data, and attempting to load the data streams page in Fleet. Providing an `unmapped_type` allows Elasticsearch to sort documents even if they don't include `event.ingested`.
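Per the final title, the resulting sort clause presumably looks something like this (a sketch of the Elasticsearch query DSL only; the `desc` order and the surrounding query are assumptions, not copied from the PR diff):

```json
{
  "sort": [
    {
      "event.ingested": {
        "order": "desc",
        "unmapped_type": "long",
        "missing": 0
      }
    }
  ]
}
```

`unmapped_type: long` tells Elasticsearch how to treat indices where the field has no mapping at all, and `missing: 0` supplies a sort value for documents that simply lack the field.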