*: handle auth invalid token and old revision errors in watch #14322
Conversation
Codecov Report
@@            Coverage Diff             @@
##             main   #14322      +/-   ##
==========================================
- Coverage   75.39%   75.21%   -0.19%
==========================================
  Files         457      457
  Lines       37207    37235      +28
==========================================
- Hits        28053    28005      -48
- Misses       7394     7455      +61
- Partials     1760     1775      +15
Thanks @mitake, just marked this PR as draft. Please feel free to mark it as ready for review once it's ready.
I cloned it locally and tried a run; the problem was solved, and the previous watch is also re-sent. I also read the committed code: instead of using a string, would you try passing an error code between the client and the server?
err := sws.isWatchPermitted(creq)
if err != nil {
	// handle the specific auth error
}
Thanks for trying this PR @kafuu-chino.
It might be simpler, but the error codes don't have unique integer IDs, so keeping the current string-based type seems better. I'll also change the return value type of isWatchPermitted().
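For illustration, here is a minimal, self-contained sketch of the direction discussed above, with hypothetical names standing in for the etcd internals: isWatchPermitted returns a concrete error instead of a bool, so the server can surface the specific auth failure in the watch cancel reason and the caller can tell which failures are retryable. This is not the exact PR code.

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical stand-ins for the server-side auth errors; per the
// discussion above, the PR keeps string error messages rather than
// introducing integer error codes.
var (
	errInvalidAuthToken = errors.New("etcdserver: invalid auth token")
	errPermissionDenied = errors.New("etcdserver: permission denied")
)

// isWatchPermitted returns an error instead of a bool, so callers can
// tell *why* a watch was rejected, not just that it was.
func isWatchPermitted(tokenValid, keyAllowed bool) error {
	if !tokenValid {
		return errInvalidAuthToken
	}
	if !keyAllowed {
		return errPermissionDenied
	}
	return nil
}

func main() {
	err := isWatchPermitted(false, true)
	switch {
	case errors.Is(err, errInvalidAuthToken):
		// The server would send err.Error() as the cancel reason; only
		// this case would be safe for a client to retry after
		// refreshing its token.
		fmt.Println("retryable:", err)
	case err != nil:
		fmt.Println("fatal:", err)
	}
}
```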
Force-pushed from 427bb49 to ff9c661
Force-pushed from ff9c661 to 94fd161
@ahrtr @kafuu-chino I think this PR is ready to be reviewed; could you check when you have time?
The pipeline failures appear to be caused by this PR. Please fix them.
Signed-off-by: Hitoshi Mitake <[email protected]>
Force-pushed from 94fd161 to c561452
LGTM
Thank you @mitake
LGTM
I think it's okay and the test is fine.
Thanks for solving my problem :)
@ahrtr @kafuu-chino Thanks for reviewing! The failed check caused by the previous version of the PR was e2e IIRC, but I couldn't reproduce the failure on my local env; I guess it was a non-deterministic issue. Let me merge this branch.
I'll open PRs for backporting the change to stable releases later.
In order to fix etcd-io#12385, PR etcd-io#14322 introduced a change in which the client side may retry based on the error message returned from the server side. This is not good: it's too fragile, and it also changed the protocol between client and server. Please see the discussion in kubernetes/kubernetes#114403.

Note: the issue etcd-io#12385 only happens when auth is enabled and the client side reuses the same client to watch. So we decided to roll back the change on 3.5, for two reasons:
1. K8s doesn't enable auth at all, so this has no impact on K8s.
2. It's very easy for a client application to work around the issue: the client just needs to create a new client each time before watching.

Signed-off-by: Benjamin Wang <[email protected]>
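A minimal sketch of the workaround mentioned in point 2, using the public clientv3 API. The endpoint and credentials are placeholders: creating a fresh client before each watch means the watch starts with a freshly issued auth token instead of reusing one that may have been invalidated.

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// watchOnce creates a new client per watch, as the revert message
// suggests, so the watch does not hit the invalid-auth-token
// cancellation described in etcd-io#12385.
func watchOnce(key string) error {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"}, // placeholder endpoint
		DialTimeout: 5 * time.Second,
		Username:    "user",     // placeholder credentials
		Password:    "password", // placeholder credentials
	})
	if err != nil {
		return err
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	for resp := range cli.Watch(ctx, key) {
		if err := resp.Err(); err != nil {
			return err
		}
		for _, ev := range resp.Events {
			fmt.Printf("%s %s -> %s\n", ev.Type, ev.Kv.Key, ev.Kv.Value)
		}
	}
	return nil
}
```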
Fix #12385
This is still a WIP PR; please do not merge. Remaining todos: