This repository has been archived by the owner on May 3, 2022. It is now read-only.
The current implementation assumes that once a Pod has been successfully patched to be added to the Load Balancer, the Pod is receiving traffic.
This is not reliable: if the production Service's `.spec.selector` is modified, existing Pods may no longer have matching labels, so traffic will never reach them. Shipper then gets stuck while erroneously reporting that traffic is being routed to those Pods.
Our current traffic controller considers a Pod being successfully
patched to have traffic labels as traffic actually progressing.
This isn't reliable for several reasons:
* A Service's `.spec.selector` can be modified, resulting in Pods no
longer receiving traffic, but the traffic controller will still report
that traffic has been achieved.
* Pods can be in a non-Ready state, in which case they receive no traffic,
yet Shipper will again report traffic as being achieved.
To solve both of those issues, we now watch Endpoints objects and extract
actual traffic information from them. For our purposes, only Ready Pods
count towards traffic. This might make rollouts slightly slower (traffic
now has to wait for Pods to actually be ready to receive it) but also much
more reliable: we won't drain traffic from an incumbent Release until the
contender is actually fully ready.
Closes #23.
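The core of the approach described above can be sketched as follows. This is a minimal illustration, not Shipper's actual code: it uses simplified stand-in structs mirroring the shape of the Kubernetes `Endpoints` type (the real one lives in `k8s.io/api/core/v1`), and the helper name `readyPodCount` is hypothetical.

```go
package main

import "fmt"

// Simplified stand-ins for the Kubernetes Endpoints types; only the
// fields relevant to this sketch are modeled.
type EndpointAddress struct{ IP string }

type EndpointSubset struct {
	Addresses         []EndpointAddress // passed their readiness checks
	NotReadyAddresses []EndpointAddress // exist but are not Ready
}

type Endpoints struct{ Subsets []EndpointSubset }

// readyPodCount counts only addresses in the Ready set: a Pod contributes
// to achieved traffic only once it shows up as a Ready endpoint, rather
// than as soon as its traffic labels were patched.
func readyPodCount(ep Endpoints) int {
	count := 0
	for _, subset := range ep.Subsets {
		count += len(subset.Addresses) // NotReadyAddresses are ignored
	}
	return count
}

func main() {
	ep := Endpoints{Subsets: []EndpointSubset{{
		Addresses:         []EndpointAddress{{IP: "10.0.0.1"}, {IP: "10.0.0.2"}},
		NotReadyAddresses: []EndpointAddress{{IP: "10.0.0.3"}},
	}}}
	fmt.Println(readyPodCount(ep)) // 2: the NotReady address does not count
}
```

Because the count comes from the Endpoints object rather than from the patch succeeding, both failure modes above (a changed Service selector, or non-Ready Pods) are reflected as the endpoint simply not appearing in the Ready set.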