
feat(alerts): Support apdex alerts for anomaly detection #76960

Draft · wants to merge 3 commits into master
Conversation

ceorourke (Member)

Add support for apdex-type metric alerts for anomaly detection.

Closes https://getsentry.atlassian.net/browse/ALRT-245
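
For context, the aggregate for this kind of alert lives on the rule's SnubaQuery, e.g. apdex(300) (which is why the query results further down carry an apdex_300 field). A minimal sketch of creating such a rule in a test, assuming the standard create_alert_rule fixture; the 300 ms threshold is illustrative, not prescribed by this PR:

    # Sketch only: a UI-style apdex metric alert. The fixture keyword
    # arguments follow common patterns in this test suite and are
    # assumptions, not code from this PR.
    alert_rule = self.create_alert_rule(
        organization=self.organization,
        projects=[self.project],
        aggregate="apdex(300)",
        query="",
    )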

github-actions bot added the Scope: Backend label (automatically applied to PRs that change backend components) on Sep 4, 2024
        start, end, project, alert_rule.organization, granularity
    )

elif "apdex" in snuba_query.aggregate:
    generator = get_stats_generator(use_discover=True, remove_on_demand=False)
ceorourke (Member Author)

This is pulled from the metrics-estimation endpoint here, which then gets called in organization_events.py here. I think this is all that's needed for the type of apdex metric alert you can create from the UI; all the other stuff is for Discover queries (and, I suspect, custom metric type metric alerts?).
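
To make the data flow concrete, here is a sketch of how the returned generator might be invoked to build the historical timeseries. The callable's parameter names are assumptions borrowed from the events-stats generator convention in this codebase, not code from this PR:

    # Sketch only: calling the stats generator for an apdex aggregate.
    # Parameter names are assumed, not taken from this diff.
    generator = get_stats_generator(use_discover=True, remove_on_demand=False)
    results = generator(
        query_columns=[snuba_query.aggregate],  # e.g. "apdex(300)"
        query=snuba_query.query,
        params={
            "organization_id": alert_rule.organization.id,
            "project_id": [project.id],
            "start": start,
            "end": end,
        },
        rollup=granularity,
        zerofill_results=True,
        comparison_delta=None,
    )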

# Load the rule's query and fetch historical data for the apdex aggregate.
snuba_query = SnubaQuery.objects.get(id=alert_rule.snuba_query_id)
result = fetch_historical_data(alert_rule, snuba_query, self.project)
assert result
assert {"time": int(self.time_1_ts), "count": 1} in result.data.get("data")
ceorourke (Member Author)

I haven't gotten this to pass yet, as I'm not sure what I need to store in order for a count to show up in the results.
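
For what it's worth, apdex is computed from transaction events, so the bucket at self.time_1_ts should only get a count once a transaction with a duration lands inside the queried window. A sketch of that setup, assuming self.time_1 is the datetime behind self.time_1_ts and that the load_data sample helper common in Sentry tests applies here:

    from datetime import timedelta

    from sentry.utils.samples import load_data  # assumed path for the sample-event helper

    # Store one transaction in the window; its ~250 ms duration gives the
    # apdex_300 aggregate a satisfied transaction to count.
    data = load_data(
        "transaction",
        timestamp=self.time_1,
        start_timestamp=self.time_1 - timedelta(milliseconds=250),
    )
    self.store_event(data=data, project_id=self.project.id)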

codecov bot commented Sep 4, 2024

❌ 2 Tests Failed:

Tests completed | Failed | Passed | Skipped
21659           | 2      | 21657  | 209
View the top 2 failed tests by shortest run time
tests.sentry.seer.anomaly_detection.test_store_data.AnomalyDetectionStoreDataTest test_anomaly_detection_fetch_historical_data_apdex_alert
Stack Traces | 4.75s run time
.../seer/anomaly_detection/test_store_data.py:187: in test_anomaly_detection_fetch_historical_data_apdex_alert
    assert {"time": int(self.time_1_ts)} in result.data.get("data")
E   AssertionError: assert {'time': 1724889600} in [{'time': 1723078380}, {'time': 1723078440}, {'time': 1723078500}, {'time': 1723078560}, {'time': 1723078620}, {'time': 1723078680}, ...]
E    +  where [{'time': 1723078380}, {'time': 1723078440}, {'time': 1723078500}, {'time': 1723078560}, {'time': 1723078620}, {'time': 1723078680}, ...] = <built-in method get of dict object at 0x7f100d491640>('data')
E    +    where <built-in method get of dict object at 0x7f100d491640> = {'data': [{'time': 1723078380}, {'time': 1723078440}, {'time': 1723078500}, {'time': 1723078560}, {'time': 1723078620}, {'time': 1723078680}, ...], 'meta': {'fields': {'apdex_300': 'number', 'time': 'date'}}}.get
E    +      where {'data': [{'time': 1723078380}, {'time': 1723078440}, {'time': 1723078500}, {'time': 1723078560}, {'time': 1723078620}, {'time': 1723078680}, ...], 'meta': {'fields': {'apdex_300': 'number', 'time': 'date'}}} = SnubaTSResult(data={'data': [{'time': 1723078380}, {'time': 1723078440}, {'time': 1723078500}, {'time': 1723078560}, {...=datetime.timezone.utc), end=datetime.datetime(2024, 9, 5, 0, 53, 22, 355695, tzinfo=datetime.timezone.utc), rollup=60).data
tests.sentry.incidents.test_subscription_processor.ProcessUpdateTest test_seer_call_performance_rule
Stack Traces | 6.58s run time
.../sentry/incidents/test_subscription_processor.py:634: in test_seer_call_performance_rule
    assert deserialized_body["config"]["seasonality"] == throughput_rule.seasonality
E   KeyError: 'seasonality'

To view individual test run time comparison to the main branch, go to the Test Analytics Dashboard

getsantry bot commented Sep 26, 2024

This pull request has gone three weeks without activity. In another week, I will close it.

But! If you comment or otherwise update it, I will reset the clock, and if you add the label WIP, I will leave it alone unless WIP is removed ... forever!


"A weed is but an unloved flower." ― Ella Wheeler Wilcox 🥀

getsantry bot added and removed the Stale label on Sep 26, 2024
getsantry bot added the Stale label on Oct 19, 2024
getsantry bot commented Oct 19, 2024

This issue has gone three weeks without activity. In another week, I will close it.

But! If you comment or otherwise update it, I will reset the clock, and if you remove the label Waiting for: Community, I will leave it alone ... forever!


"A weed is but an unloved flower." ― Ella Wheeler Wilcox 🥀

getsantry bot removed the Stale label on Oct 20, 2024
getsantry bot commented Nov 10, 2024 with the same stale reminder
getsantry bot added and removed the Stale label on Nov 10, 2024