Add resource leak hero scenario. #455

Merged: 11 commits merged into open-telemetry:main on Oct 18, 2022
Conversation

@austinlparker (Member) commented on Oct 15, 2022

Changes

Adds a 'hero scenario' of a resource leak to the Recommendation Service. The scenario implements a naive cache, controlled by the recommendationCache feature flag, and adds a new span.cache_size span attribute. It also adds a restart policy to this service's container, since the cache will blow up after a few minutes.
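
Below is a minimal sketch of what the leaky cache plus the cache-size span attribute could look like in the Python recommendation service. The helper check_feature_flag, the growth logic, and the tracer name are illustrative assumptions rather than the code merged in this PR, and the attribute key follows the app.cache_size naming suggested later in review:

```python
# Illustrative sketch only; `check_feature_flag` and the growth logic are
# assumptions, not the exact implementation merged in this PR.
from opentelemetry import trace

tracer = trace.get_tracer("recommendationservice")

cached_ids: list[str] = []  # naive module-level cache that is never evicted


def check_feature_flag(name: str) -> bool:
    # Placeholder for the demo's feature-flag lookup.
    return True


def get_product_list(request_product_ids: list[str]) -> list[str]:
    with tracer.start_as_current_span("get_product_list") as span:
        if check_feature_flag("recommendationCache"):
            # The "cache" only ever grows: old entries are duplicated on every call.
            cached_ids.extend(request_product_ids)
            cached_ids.extend(cached_ids[: len(cached_ids) // 4])
            product_ids = cached_ids
        else:
            product_ids = list(request_product_ids)
        span.set_attribute("app.cache_size", len(cached_ids))
        return product_ids
```

With the flag enabled, the cache size climbs on every request until the process exhausts its memory limit and the container's restart policy brings it back up.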

For significant contributions please make sure you have completed the following items:

  • Appropriate CHANGELOG.md updated for non-trivial changes
  • Design discussion issue #

@austinlparker requested a review from a team on October 15, 2022 at 04:27
@julianocosta89 (Member) commented:

Wow! This is cool:
On the left we have recommendationCache disabled.
On the right (from 08:22) we have recommendationCache enabled.

[screenshot: memory usage with recommendationCache disabled (left) vs. enabled from 08:22 (right)]

@julianocosta89 (Member) left a comment

Some minimal adjustments, but it looks great!

Review comments (outdated, resolved): CHANGELOG.md, src/recommendationservice/recommendation_server.py
@austinlparker (Member, Author)

> Wow! This is cool: On the left we have recommendationCache disabled. On the right (from 08:22) we have recommendationCache enabled.
>
> [screenshot]

Yeah -- incidentally, I'm not wedded to the growth rate here (I think it's probably too quick; it takes about 5 minutes at the current random rate plus ^2 growth of the list). I might tweak it to ^1.5 or ^1.2 to get more of a stairstep in growth.
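
As a rough illustration of the curves being discussed (the starting size and the per-step growth function below are made up for illustration, not taken from the PR), raising the cache size to different exponents gives very different behaviour:

```python
# Illustrative only: compare how fast a cache balloons for different exponents.
def grow(size: int, exponent: float) -> int:
    return max(size + 1, int(size ** exponent))

for exponent in (2.0, 1.5, 1.2):
    size = 10
    sizes = []
    for _ in range(6):
        sizes.append(size)
        size = grow(size, exponent)
    print(exponent, sizes)  # ^2 explodes within a few steps; ^1.2 is a gentle stairstep
```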

@austinlparker (Member, Author)

[screenshot: Screen Shot 2022-10-16 at 1.48.47 PM]

Tweaked it a bit and added a memory limit to the service in docker.
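
For context, a memory limit and restart policy on this service in Docker Compose could look roughly like the fragment below; the actual key names and limit value in the demo's docker-compose.yml may differ:

```yaml
# Sketch of a docker-compose.yml fragment; the real service definition may differ.
recommendationservice:
  restart: unless-stopped      # restart automatically once the leak triggers an OOM kill
  deploy:
    resources:
      limits:
        memory: 500M           # cap memory so the leak hits the limit within minutes
```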

@julianocosta89 (Member) left a comment

Please add app.cache_size to the list of manual span attributes.
All looking good!

Review comment (resolved): docs/manual_span_attributes.md
@mviitane (Member) left a comment

Looks great, works fine!

@austinlparker merged commit 177d31c into open-telemetry:main on Oct 18, 2022
@austinlparker deleted the heroScenario branch on October 18, 2022 at 12:23
jmichalak9 pushed a commit to jmichalak9/opentelemetry-demo that referenced this pull request Mar 22, 2024
* add cache leak failure scenario

* add attribute to span

* update changelog

* Update src/recommendationservice/recommendation_server.py

Co-authored-by: Juliano Costa <[email protected]>

* add newline

* add docs

* tweak scenario

* add resource limit to force service overlimit

* review

Co-authored-by: Juliano Costa <[email protected]>