mutation failed: cannot decode incoming new object: json: unknown field "subresource" #11448
/area API
I tried Serving v0.23 but it worked fine on my end.

@howardjohn are you able to consistently reproduce this on a clean kind cluster? If so, could you let us know the version?
/needs-user-input

/triage needs-user-input

I ran into the same problem on Kubernetes 1.22. It used to work fine on Kubernetes 1.21.

This is also happening on tektoncd/pipeline with 1.22 (OCP 4.9 nightly, and it seems plain k8s 1.22 as well).

Created #11805 to get 1.22 into CI and see if we can reproduce there.
Discussion with k8s api-machinery on the topic at https://kubernetes.slack.com/archives/C0EG7JC6T/p1629219036101100, wrapping up with:

Let me repeat the other thing I said, too: there's not much point in having a webhook disallow unknown fields -- the only time you'll see "not allowed" fields from the server is if the server has a different idea about what is allowed than the webhook does.
Based on the description in #11848, I think this was being used to prevent unknown fields from being persisted in scenarios where the webhook had a Go type and the CRD did not define a schema, so in that case, for things outside the

I see, that makes sense. Yes, if you're going to do that, I'd remove metadata prior to validating and parse that separately. If space is preventing adding in schemas, you can save a bit of space by not using client-side apply, and if absolutely necessary you can send proto-encoded objects to the apiserver, which may save a little more.

@liggitt that's exactly right. I think now that we have schemas in Knative CRDs we can and should rely on those. Unfortunately Tekton doesn't have schemas yet (and we're worried the size of the needed schemas would exceed limits, based on previous investigations, but this needs revalidating).
Based on the comments above, this is caused by the webhook decoding incoming objects with unknown fields disallowed, while Kubernetes 1.22 added a new subresource field to metadata.managedFields (kubernetes/kubernetes#100970) that the webhook's compiled-in types don't know about.
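For concreteness, here is a minimal, self-contained sketch of that failure mode (the struct below is a simplified stand-in for the real metav1 type, not the actual Knative or Kubernetes code):

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// oldManagedFieldsEntry is a simplified stand-in for the pre-1.22 type the
// webhook was compiled against; it has no Subresource field.
type oldManagedFieldsEntry struct {
	Manager   string `json:"manager,omitempty"`
	Operation string `json:"operation,omitempty"`
}

func main() {
	// A managedFields entry as a 1.22 API server would send it,
	// including the new "subresource" field.
	payload := []byte(`{"manager":"kubectl","operation":"Update","subresource":"status"}`)

	dec := json.NewDecoder(bytes.NewReader(payload))
	dec.DisallowUnknownFields() // strict decoding, as the webhook configures it

	var entry oldManagedFieldsEntry
	if err := dec.Decode(&entry); err != nil {
		fmt.Println(err) // json: unknown field "subresource"
	}
}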
FWIW, so it doesn't get lost in some Slack thread: the hybrid solution (allowing unknown fields only in metadata) could look somewhat like this:

import (
	"bytes"
	"encoding/json"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	"knative.dev/pkg/webhook/resourcesemantics"
)

// decodeIgnoringMetadata decodes JSON into a GenericCRD and, when requested,
// rejects unknown fields everywhere except inside metadata.
func decodeIgnoringMetadata(decoder *json.Decoder, into *resourcesemantics.GenericCRD, disallowUnknownFields bool) error {
	// First decode into a loose map so metadata can be handled separately.
	var intermediate map[string]json.RawMessage
	if err := decoder.Decode(&intermediate); err != nil {
		return err
	}
	// Roundtrip via ObjectMeta to strip incompatible fields, if metadata is present.
	rawMeta := intermediate["metadata"]
	if len(rawMeta) != 0 {
		var meta metav1.ObjectMeta
		if err := json.Unmarshal(rawMeta, &meta); err != nil {
			return err
		}
		var err error
		intermediate["metadata"], err = json.Marshal(&meta)
		if err != nil {
			return err
		}
	}
	// Re-encode the whole object and decode it strictly (if requested)
	// into the target type.
	newBytes, err := json.Marshal(&intermediate)
	if err != nil {
		return err
	}
	newDecoder := json.NewDecoder(bytes.NewBuffer(newBytes))
	if disallowUnknownFields {
		newDecoder.DisallowUnknownFields()
	}
	return newDecoder.Decode(into)
}
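For illustration, a hypothetical call site under assumed names (req as an incoming admissionv1.AdmissionRequest and the concrete Knative Service type; neither appears in this thread):

// Decode an admission request payload strictly, while tolerating
// unknown fields inside metadata.
var obj resourcesemantics.GenericCRD = &v1.Service{} // knative.dev/serving/pkg/apis/serving/v1
decoder := json.NewDecoder(bytes.NewReader(req.Object.Raw))
if err := decodeIgnoringMetadata(decoder, &obj, true); err != nil {
	return fmt.Errorf("cannot decode incoming new object: %w", err)
}

With this split, an unknown field in spec or status still fails strict decoding, while a field like managedFields' subresource is silently dropped by the ObjectMeta roundtrip.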
Would like to confirm: does this issue only happen on k8s 1.22, or are there other k8s versions that could also be affected?

1.22+, but it should be resolved now.
Kubernetes 1.22 added a new subresource field (kubernetes/kubernetes#100970). The knative pkg used by the webhook had an issue that was resolved in knative/serving#11448. This change bumps the knative pkg dependency to include the fix while keeping it pinned on the 0.22 release branch. In addition, it adds 1.22.6 to the k8s testing matrix in GitHub Actions. Fixes vmware-tanzu#214
/area networking
What version of Knative?
0.23
Expected Behavior
Hello world works
Actual Behavior
The KService is stuck, and the logs show a bunch of errors like:

mutation failed: cannot decode incoming new object: json: unknown field "subresource"
Steps to Reproduce the Problem
After manually deleting all the mutating and validating webhooks, things seem to start working.