Option to retain or delete the workspace after successful sync #694

Open
EronWright opened this issue Sep 26, 2024 · 1 comment
Labels
kind/enhancement Improvements or new features

Comments


EronWright commented Sep 26, 2024

The stack controller provisions a workspace on demand, and it is reasonable to delete the workspace after a successful sync. The user is, of course, trading off performance/efficiency on later syncs for lower resource usage in the meantime.

Similar to the "reclaim policy" of a PVC.
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaim-policy

For the retain case, we should consider making the workspace pods "burstable", so that they consume minimal resources at rest but are allowed to consume unlimited memory during a deployment operation.

For example:

```yaml
kind: Stack
spec:
  workspaceReclaimPolicy: retain|delete
```
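
A rough sketch of how this could look on a full Stack resource (assuming the existing `pulumi.com/v1` Stack API; the repo, branch, and stack names are illustrative):

```yaml
apiVersion: pulumi.com/v1
kind: Stack
metadata:
  name: my-stack
spec:
  stack: dev
  projectRepo: https://github.com/example/pulumi-project  # illustrative repo
  branch: main
  # Proposed: delete the workspace after a successful sync to reclaim its
  # resources, or retain it so that subsequent syncs start faster.
  workspaceReclaimPolicy: delete
```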
EronWright converted this from a draft issue on Sep 26, 2024
@cleverguy25

Added to epic #586

pulumi-bot added the needs-triage label on Sep 26, 2024
EronWright changed the title from "Option to retain or discard the workspace after successful sync" to "Option to retain or delete the workspace after successful sync" on Sep 26, 2024
blampe added the kind/enhancement label and removed the needs-triage label on Sep 27, 2024
blampe added this to the 0.111 milestone on Oct 1, 2024
EronWright added a commit that referenced this issue on Oct 7, 2024

### Proposed changes

Implements good defaults for the workspace resource, using a
["burstable"](https://kubernetes.io/docs/concepts/workloads/pods/pod-qos/#burstable)
approach. Since a workspace pod's utilization is bursty - low resource usage
while idle and high resource usage during deployment operations - the pod
requests a small amount of resources (64Mi of memory, 100m of CPU) so that it
can idle cheaply. A deployment operation is able to use much more memory - up
to all of the available memory on the host.
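
For illustration, a burstable resources stanza along these lines (a sketch based on the defaults described above, not necessarily the operator's exact generated manifest) looks like:

```yaml
# Small requests let the workspace pod idle cheaply; omitting a memory limit
# keeps the pod in the Burstable QoS class, so a deployment operation can use
# whatever memory is available on the node.
resources:
  requests:
    memory: 64Mi
    cpu: 100m
```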

Users may customize the resources (e.g. to apply different requests
and/or limits). For large/complex Pulumi apps, it might make sense to
reserve more memory and/or use
#694.

The agent takes some pains to stay within the requested amount, using a
programmatic form of the
[GOMEMLIMIT](https://weaviate.io/blog/gomemlimit-a-game-changer-for-high-memory-applications)
environment variable. The agent detects the requested amount via the
Downward API. We don't set `GOMEMLIMIT` itself, both to avoid propagating it to
sub-processes and because the detected value is expressed as a Kubernetes
'quantity', which is not the format `GOMEMLIMIT` expects.
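
A minimal sketch of how the requested amount could be surfaced to the agent via the Downward API (the variable name is illustrative, not the operator's actual manifest):

```yaml
# Expose the container's own memory request as an environment variable; the
# agent can parse this Kubernetes quantity and apply it programmatically
# (e.g. via runtime/debug.SetMemoryLimit) rather than exporting GOMEMLIMIT.
env:
  - name: AGENT_MEMORY_REQUEST   # illustrative name
    valueFrom:
      resourceFieldRef:
        resource: requests.memory
        divisor: "1"             # report the value in bytes
```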

It was observed that zombie processes weren't being reaped, and this was
leading to resource exhaustion. Fixed by using
[tini](https://github.com/krallin/tini/) as the entrypoint process (PID 1).

### Related issues (optional)

Closes #698
mjeffryes removed this from the 0.111 milestone on Oct 30, 2024