
[Meta] Expand support for populating event.ingested #20073

Closed
2 of 4 tasks
spong opened this issue Jul 20, 2020 · 3 comments · Fixed by #20386
Comments

@spong
Member

spong commented Jul 20, 2020

Within Elastic Security we've recently exposed the ability for users to specify the timestamp field used when a Detection Rule runs, in an effort to minimize gaps in alerts caused by delayed events. The most useful ECS field here is event.ingested (elastic/ecs#453, elastic/ecs#582), so ensuring this field is populated whenever possible would greatly benefit any downstream use case where the system must determine whether an event is stale or was delayed.

Currently it looks like only two modules are setting event.ingested (thanks @leehinman!):

x-pack/filebeat/module/microsoft/defender_atp/ingest/pipeline.yml:    field: event.ingested
x-pack/filebeat/module/gsuite/ingest/common.yml:      field: event.ingested

In discussions it looks like we can add the following to relevant pipelines:

- set:
    field: event.ingested
    value: '{{_ingest.timestamp}}'
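For illustration only, the effect of that `set` processor can be sketched in Python (the real work is done by the Elasticsearch ingest node; the dotted-field handling here is simplified):

```python
from datetime import datetime, timezone

def set_event_ingested(doc):
    """Rough Python equivalent of the ingest `set` processor above:
    stamp the document with the time it entered the pipeline."""
    doc.setdefault("event", {})
    doc["event"]["ingested"] = datetime.now(timezone.utc).isoformat()
    return doc

doc = set_event_ingested({"message": "user login"})
```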

And also update the test modules here:

    # Remove event.ingested from testing, as it will never be the same.
    if obj["event.dataset"] == "microsoft.defender_atp":
        delete_key(obj, "event.ingested")
        delete_key(obj, "@timestamp")

    if obj["event.module"] == "gsuite":
        delete_key(obj, "event.ingested")
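`delete_key` is a helper from the Beats test framework; a minimal stand-in (hypothetical, shown only to make the snippet above self-contained) could be:

```python
def delete_key(obj, key):
    """Drop a field from a flattened event dict if present, so that
    nondeterministic values don't break golden-file comparisons."""
    obj.pop(key, None)

event = {"event.module": "gsuite", "event.ingested": "2020-07-27T00:00:00Z"}
delete_key(event, "event.ingested")
```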

Let's use this as a meta issue for tracking support for event.ingested across modules -- feel free to update this description as things progress.

Currently supported:

  • Filebeat
    • Microsoft Defender
    • GSuite

Yet to be supported:

  • TBD
@elasticmachine
Collaborator

Pinging @elastic/siem (Team:SIEM)

@andrewkroh
Member

In my opinion, an optimal solution would be to have Filebeat install an Elasticsearch pipeline that adds event.ingested, and then put final_pipeline into its index template to force all events through this pipeline. This would ensure that all events, not only module events, have an event.ingested value. However, this would take a bit of effort and might not work so well when modules transition to integration packages.

So I think the easiest way to address this would be to update the modules in bulk and add a set processor to the beginning of every module pipeline.
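A sketch of the final_pipeline approach described above (the pipeline name `filebeat-final` is illustrative, not what Filebeat actually installs): define one pipeline that stamps event.ingested, then reference it from the index template via the `index.final_pipeline` setting so every indexed document passes through it.

```python
import json

# Pipeline that stamps every document with its ingest time.
final_pipeline = {
    "description": "Add event.ingested to all events",
    "processors": [
        {"set": {"field": "event.ingested", "value": "{{_ingest.timestamp}}"}}
    ],
}

# Index template fragment that forces all writes through that pipeline
# (pipeline name here is a placeholder).
index_template = {
    "settings": {"index": {"final_pipeline": "filebeat-final"}}
}

print(json.dumps(final_pipeline, indent=2))
```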

@mark54g

mark54g commented Jul 27, 2020

Would this be available to all beats, if implemented, or would it be just filebeat?

andrewkroh added a commit to andrewkroh/beats that referenced this issue Aug 3, 2020
The event.ingested field defines the time at which the event was ingested into Elasticsearch and is added by the Ingest Node pipeline. This field is important when building alerts for activity that may have been reported long after it occurred (@timestamp is much older than event.ingested). This can happen if an agent was offline for a period of time or processing was delayed.

This adds a test to ensure all modules create event.ingested.

Closes elastic#20073
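The gap the commit message describes can be shown with hypothetical timestamps: an event that occurred an hour ago but was ingested just now is missed by a detection rule windowing on @timestamp, yet caught by one windowing on event.ingested:

```python
from datetime import datetime, timedelta, timezone

now = datetime(2020, 7, 20, 12, 0, tzinfo=timezone.utc)

# Event that occurred an hour ago but only arrived now (agent was offline).
event = {
    "@timestamp": now - timedelta(hours=1),   # when it happened
    "event.ingested": now,                    # when Elasticsearch received it
}

window = timedelta(minutes=5)  # rule looks back 5 minutes each run

seen_by_timestamp_rule = event["@timestamp"] >= now - window
seen_by_ingested_rule = event["event.ingested"] >= now - window

print(seen_by_timestamp_rule, seen_by_ingested_rule)  # False True
```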
andrewkroh added a commit that referenced this issue Aug 4, 2020

Use Filebeat read time instead of ingest time as event.created in Zeek.

Closes #20073
andrewkroh added a commit to andrewkroh/beats that referenced this issue Aug 6, 2020

(cherry picked from commit 829c3b7)
andrewkroh added a commit that referenced this issue Aug 11, 2020

(cherry picked from commit 829c3b7)
melchiormoulin pushed a commit to melchiormoulin/beats that referenced this issue Oct 14, 2020