datasets, next_run_datasets, remove unnecessary timestamp filter #29441

Merged 2 commits on Feb 20, 2023
airflow/www/views.py (5 changes: 1 addition & 4 deletions)
@@ -3713,10 +3713,7 @@ def next_run_datasets(self, dag_id):
     )
     .join(
         DatasetEvent,
-        and_(
-            DatasetEvent.dataset_id == DatasetModel.id,
-            DatasetEvent.timestamp > DatasetDagRunQueue.created_at,
Contributor:
@blag Does this look right to you? re: #26356

blag (Contributor) commented on Feb 9, 2023:
😬 It's been a long minute since I wrote this, but...

I believe when I wrote this the intent with the lastUpdate field was to only show last updates since the last time the dag was queued/run. But yeah, the lastUpdate label isn't descriptive enough for that.

Option 1: Personally, I would consider changing what the lastUpdate field is rendered as, to something like "Last update since last run" or something wordier.

Option 2: But if you don't want to do that, and you want to display the last update for every dataset regardless of whether it has already been "consumed" by a DagRun (e.g. in either the DatasetDagRunQueue or actually scheduled into a DagRun), then yeah, it makes sense to remove this filter. However, I would also remove the and_ around it, since then there would only be one filter condition in that join:

                .join(
                    DatasetEvent,
                    DatasetEvent.dataset_id == DatasetModel.id,
                    isouter=True,
                )

If you go for option 2, I think you should be able to compare the existence and creation time of the DDRQ with the DatasetEvent timestamp to figure out whether or not the last update time has already triggered a DDRQ/DagRun or if it has partially satisfied the conditions of a future DagRun.

Hope this makes sense.
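The comparison blag suggests can be sketched in plain Python. This is a toy model only: classify_last_update, its parameters, and its return strings are invented for illustration, not Airflow APIs.

```python
from datetime import datetime
from typing import Optional


def classify_last_update(
    latest_event_ts: Optional[datetime],
    ddrq_created_at: Optional[datetime],
) -> str:
    """Toy classifier: given the newest DatasetEvent timestamp for a dataset
    and the creation time of its DDRQ row (None if no row exists), decide
    whether the last update already triggered a DagRun or is still pending.
    Illustrative only; not Airflow code."""
    if latest_event_ts is None:
        return "no updates yet"
    if ddrq_created_at is None:
        # DDRQ rows are deleted when a DagRun is created, so the absence
        # of a row means past events were already consumed by a DagRun.
        return "already triggered a DagRun"
    if latest_event_ts > ddrq_created_at:
        # The event landed after the queue row was created, so it has
        # partially satisfied the conditions of a future DagRun.
        return "partially satisfies a future DagRun"
    return "queued, waiting on other datasets"
```

Under this reading, the mere existence of a DDRQ row is what signals "not yet consumed", and the timestamp comparison only refines which event the row corresponds to.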

Author:
> However, I would also remove the and_ around it since then there would only be one filter condition in that join:

Yes, you're right: the and_ becomes unnecessary.

I think there might be some confusion around DDRQ. My understanding is that when a DatasetEvent is created, a DDRQ record is created per consuming DAG. Then, once a DAG has an associated DDRQ record for each Dataset that it depends on, a dag_run is created and then all DDRQ records associated with that DAG are deleted.

> If you go for option 2, I think you should be able to compare the existence and creation time of the DDRQ with the DatasetEvent timestamp to figure out whether or not the last update time has already triggered a DDRQ/DagRun or if it has partially satisfied the conditions of a future DagRun.

As I understand it, if there are DDRQ records for a DAG, we can assume that there hasn't been a DagRun triggered since the last DatasetEvent (because we delete DDRQ records on the creation of a DagRun).
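The lifecycle described above can be illustrated with a toy model. Everything here is invented for illustration (consumers, ddrq, on_dataset_event are hypothetical names); the real logic lives in Airflow's scheduler.

```python
# Toy model of the DatasetDagRunQueue (DDRQ) lifecycle described above.
# Not Airflow's implementation; all names are illustrative.

# Which DAGs consume which datasets.
consumers = {
    "dag_a": {"ds1", "ds2"},  # runs only after both datasets update
    "dag_b": {"ds1"},         # runs after ds1 alone
}

ddrq = set()    # queued (dag_id, dataset_id) rows
dag_runs = []   # DagRuns created, in order


def on_dataset_event(dataset_id):
    """A DatasetEvent adds one DDRQ row per consuming DAG. Once a DAG has
    a row for every dataset it depends on, a DagRun is created and all of
    that DAG's rows are deleted."""
    for dag_id, needed in consumers.items():
        if dataset_id in needed:
            ddrq.add((dag_id, dataset_id))
    for dag_id, needed in consumers.items():
        if {ds for d, ds in ddrq if d == dag_id} == needed:
            dag_runs.append(dag_id)
            ddrq.difference_update((dag_id, ds) for ds in needed)


on_dataset_event("ds1")  # dag_b runs and its row is cleared; dag_a still waits on ds2
on_dataset_event("ds2")  # dag_a now has rows for both datasets: it runs, rows deleted
```

Note how, at any point where ddrq still holds rows for a DAG, no DagRun has fired for it since the last DatasetEvent, which is exactly the invariant the author relies on above.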

Author:
@blag Does that make sense to you?

Contributor:
I just dug through the code in more detail, and yes, your understanding seems to be correct. 😄

-        ),
+        DatasetEvent.dataset_id == DatasetModel.id,
         isouter=True,
     )
     .filter(DagScheduleDatasetReference.dag_id == dag_id, ~DatasetModel.is_orphaned)