
Initial Blue Green Strategy #1

Merged (5 commits) on Jan 7, 2019

Conversation

dthomson25 (Member)

No description provided.

@@ -56,22 +40,23 @@ func main() {
klog.Fatalf("Error building kubernetes clientset: %s", err.Error())
Contributor:

Unless I'm missing something, klog by default sends logs to files, not stdout. It can be configured with the --logtostderr flag, but we need to call klog.InitFlags(nil) first.

Member Author:

Addressed when we switched away from klog: #7

@@ -217,7 +221,7 @@ func (c *Controller) processNextWorkItem() bool {
return nil
}
// Run the syncHandler, passing it the namespace/name string of the
// Foo resource to be synced.
// Rollout resource to be synced.
if err := c.syncHandler(key); err != nil {
// Put the item back on the workqueue to handle any transient errors.
c.workqueue.AddRateLimited(key)
Contributor:

I think we need to reschedule only if the error is transient. I created an invalid rollout (with an invalid active service name) and the controller kept trying to synchronize it forever. I don't know how to distinguish which errors are transient, so I would suggest removing this rescheduling.

// The number of old ReplicaSets to retain.
// This is a pointer to distinguish between explicit zero and not specified.
// +optional
RevisionHistoryLimit *int32 `json:"revisionHistoryLimit,omitempty"`
Contributor:

Looks like this field is never used.

// 2) Max number of pods allowed is reached: deployment's replicas + maxSurge == all RSs' replicas
func NewRSNewReplicas(rollout *v1alpha1.Rollout, allRSs []*appsv1.ReplicaSet, newRS *appsv1.ReplicaSet) (int32, error) {
switch rollout.Spec.Strategy.Type {
case v1alpha1.BlueGreenRolloutStrategyType:
Contributor:

The Strategy field is marked as optional. I think the controller should use the BlueGreen rollout strategy if the Strategy field is not specified (i.e. rollout.Spec.Strategy.Type == "").

Member Author:

My concern with blue-green is that it requires an active service, which won't be set if the Strategy field is empty. What do you think about making rolling-update the default once it's implemented, and in the meantime adding an invalid-spec condition that says the strategy needs to be filled out?
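The invalid-spec condition proposed above could be sketched as a validation step. The struct shapes and the "BlueGreenUpdate" type string here are assumptions for illustration, not the PR's actual v1alpha1 types:

```go
package main

import "fmt"

// Hypothetical mirrors of the spec fields discussed in this thread.
type BlueGreenStrategy struct{ ActiveService string }

type Strategy struct {
	Type      string
	BlueGreen *BlueGreenStrategy
}

// validateStrategy rejects an empty or incomplete strategy instead of
// silently defaulting to blue-green, which cannot work without an
// active service. Until rolling-update exists as a default, the error
// would surface as an InvalidSpec condition on the Rollout.
func validateStrategy(s Strategy) error {
	switch s.Type {
	case "":
		return fmt.Errorf("InvalidSpec: .spec.strategy.type must be set")
	case "BlueGreenUpdate":
		if s.BlueGreen == nil || s.BlueGreen.ActiveService == "" {
			return fmt.Errorf("InvalidSpec: blue-green requires an active service")
		}
		return nil
	default:
		return fmt.Errorf("InvalidSpec: unknown strategy %q", s.Type)
	}
}

func main() {
	// An empty strategy is rejected with a condition, not defaulted.
	fmt.Println(validateStrategy(Strategy{}))
	// A complete blue-green spec passes validation.
	fmt.Println(validateStrategy(Strategy{
		Type:      "BlueGreenUpdate",
		BlueGreen: &BlueGreenStrategy{ActiveService: "active-svc"},
	}))
}
```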


// Wait for the caches to be synced before starting workers
klog.Info("Waiting for informer caches to sync")
if ok := cache.WaitForCacheSync(stopCh, c.deploymentsSynced, c.foosSynced); !ok {
if ok := cache.WaitForCacheSync(stopCh, c.replicaSetSynced, c.rolloutsSynced); !ok {
Contributor:

I tried to create a rollout without a strategy and the controller never starts; WaitForCacheSync is stuck forever:

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: example-rollout
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80
  minReadySeconds: 30
  revisionHistoryLimit: 3
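For contrast, the manifest above omits the strategy stanza entirely. With the blue-green strategy discussed in this thread, the spec would additionally carry something like the following (the field names are assumptions based on the discussion, not the PR's actual schema):

  strategy:
    type: BlueGreenUpdate
    blueGreen:
      activeService: example-rollout-active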


dthomson25 commented Jan 7, 2019

After discussing with the team, I'm merging this PR and will address this feedback in future PRs.

@dthomson25 dthomson25 merged commit 049454f into argoproj:master Jan 7, 2019
@dthomson25 dthomson25 deleted the PR-branch branch January 7, 2019 18:22
miriamssinger referenced this pull request in miriamssinger/argo-rollouts Jan 12, 2023: feat: add support for N nginx ingresses per service
huynhsontung pushed a commit to huynhsontung/argo-rollouts that referenced this pull request Mar 2, 2023
y0ngha pushed a commit to y0ngha/argo-rollouts that referenced this pull request Jul 3, 2024
Labels: none yet
Projects: none yet
2 participants