
Metrics for StatefulSet deployments cannot easily be aggregated together #33507

Closed
jbeemster opened this issue Jun 12, 2024 · 4 comments

Comments

@jbeemster

Component(s)

receiver/awscontainerinsight

Is your feature request related to a problem? Please describe.

Currently, all deployment types except StatefulSet expose the PodName as the name of the deployment itself (see: https://github.com/open-telemetry/opentelemetry-collector-contrib/blame/main/receiver/awscontainerinsightreceiver/internal/stores/podstore.go#L590-L597).

We have started exploring StatefulSets, and this difference means we cannot easily set alarms or dashboards against the aggregate metric view, since each PodName value is unique to a pod rather than consistent across the pool of pods in use. For example, we would like to set a single alarm against the CPU utilization of the entire pool of auto-scaling pods, as illustrated below.
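To illustrate (the workload name is hypothetical, and the per-pod values follow the standard StatefulSet pod naming convention rather than output captured from a real cluster):

A Deployment named "checkout" with 3 replicas emits a single aggregated series:
  PodName: "checkout"

A StatefulSet named "checkout" with 3 replicas emits one series per pod:
  PodName: "checkout-0"
  PodName: "checkout-1"
  PodName: "checkout-2"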

Describe the solution you'd like

Ideally, I would like a parameter to be exposed that allows this behavior to be changed, so that the PodName value can be made consistent with other deployment types.

Something like:

prefer_controller_name_for_statefulset: true|false

With the default maintaining the current behavior.
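For reference, a minimal sketch of where the proposed option might sit in a collector configuration; the option name and value come from the proposal above and are not an existing setting of the awscontainerinsightreceiver:

receivers:
  awscontainerinsightreceiver:
    # Proposed, not yet implemented: when true, report the StatefulSet name
    # as PodName so metrics aggregate the same way as other workload types.
    prefer_controller_name_for_statefulset: true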

Describe alternatives you've considered

We are exploring deploying each application into an isolated namespace in order to regain that aggregate alarming capability.

Additional context

No response

@jbeemster jbeemster added the enhancement (New feature or request) and needs triage (New item requiring triage) labels on Jun 12, 2024

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.


This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.


This issue has been closed as inactive because it has been stale for 120 days with no activity.

@github-actions github-actions bot closed this as not planned on Oct 11, 2024
@gthomson31

This is still an issue - awaiting PR Review
