Adjust API groups for Controller API and CRDs #1122
Comments
We cannot use "internal" as packages under "internal" directory will be treated as internal packages in Golang.
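For context, the Go toolchain enforces this restriction purely from the directory layout; a hypothetical tree (paths illustrative, not Antrea's actual layout) shows the problem:

```
pkg/
  apis/
    internal/          # Go treats this name specially: packages below here
      v1beta1/         # are importable only from code rooted at pkg/apis/
cmd/
  antrea-agent/        # importing pkg/apis/internal/v1beta1 from here fails:
                       #   "use of internal package ... not allowed"
```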
+1 to controlplane
controlplane sounds good.
One problem @tnqn mentioned in the community meeting: we need to use different API groups for CRDs and the Controller/Agent API even when they cover the same kind of information. For example, we have "clusterinformation" for the monitoring CRDs; "system" for system information from the Controller API, which includes the same information exposed via the monitoring CRDs; and "ops" for the Traceflow CRD (while the support bundle API is in the "system" group).
I think the issue is that we cannot use the same API group for both CRDs and an APIService object? Because the k8s apiserver uses the API group to decide which requests to forward through the aggregation layer? That's funny because @tnqn and I were having a conversation in another issue about whether we actually needed APIServices... @tnqn, do you remember why we need an APIService for …
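A sketch of why the two kinds of registration collide (manifests abbreviated; group and resource names follow the thread, everything else is illustrative): the kube-apiserver routes every request for a group/version claimed by an APIService through the aggregation layer, so a CRD registered under that same group could no longer be served natively.

```yaml
# CRD: served natively by the kube-apiserver
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: traceflows.ops.antrea.tanzu.vmware.com
spec:
  group: ops.antrea.tanzu.vmware.com        # must NOT also be claimed by an APIService
  # ... versions, names, schema omitted
---
# APIService: tells the kube-apiserver to forward this entire group/version
# to an extension apiserver (here, the Antrea Controller)
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.system.antrea.tanzu.vmware.com
spec:
  group: system.antrea.tanzu.vmware.com     # distinct group, proxied to the Controller
  version: v1beta1
  # ... service reference, CA bundle omitted
```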
antctl uses the "system" group, not the monitoring CRDs (one reason is that we wanted to support local debugging even when the K8s API is down, though that is probably not really implemented for the Controller API), and the "system" group also includes the support bundle.
Yes but it doesn't seem to me that either of these use cases require an APIService. I'm pretty sure that "antctl supportbundle" connects directly to the Agent / Controller, without going through the k8s apiserver. So I wanted to confirm that the issue @tnqn brought up comes from the fact that we cannot use the same group for CRD + APIService. |
The APIService just enables you to run antctl outside the K8s cluster and reach the Controller via the K8s API. I know we do not do that for supportbundle, but it is the case for Controller information.
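Concretely, an APIService registration is what makes a path like the one below resolvable through the kube-apiserver from outside the cluster (group and resource names as discussed in this thread; a minimal sketch, not Antrea's actual client code):

```go
package main

import "fmt"

// buildAggregatedPath returns the kube-apiserver URL path at which an
// aggregated group/version/resource is served once an APIService registers
// the group. Requests to this path are proxied to the extension apiserver
// (here, the Antrea Controller), e.g. via `kubectl get --raw <path>`.
func buildAggregatedPath(group, version, resource string) string {
	return fmt.Sprintf("/apis/%s/%s/%s", group, version, resource)
}

func main() {
	fmt.Println(buildAggregatedPath(
		"system.antrea.tanzu.vmware.com", "v1beta1", "controllerinfos"))
	// → /apis/system.antrea.tanzu.vmware.com/v1beta1/controllerinfos
}
```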
I am aware of that, but it doesn't seem that useful here. We can use CRDs for the out-of-cluster case and still have a separate implementation for the local use case you mentioned above. I think that's actually better than exposing the same info through 2 different mechanisms, which is what we seem to be doing today. So my point is that removing the …
Ok. I now got your point. |
I was suggesting having …
We can talk about the "system" group separately (I'm still not convinced we must treat it differently). Personally, I would prefer not to use a single group for all CRDs either. On the other hand, I do not like having so many groups; I would like to differentiate CRDs from other APIs by their group names, but I do not have a good idea for how to achieve this.
controlplane sounds good to me as well. Would it make sense to get rid of all APIServices and use a NodePort Service or an exec endpoint to access Antrea APIs from outside the cluster, as discussed in #1137 (comment), given that we will need a way to access non-registered APIs like #1137 anyway?
NodePort is not convenient and in many cases not really exposed to the external network, and Pod exec feels indirect too. So I am not convinced we should remove the APIService just to reduce the number of API groups. Even though we do not have many cases yet, I would not exclude the possibility of exposing public APIs from Antrea directly, and would leverage the power the K8s apiserver provides.
Anyway, from all the discussion, I think at least we can start by moving the "networking" group to "controlplane". I will start to make the change.
Describe what you are trying to solve
Need to finalize the API group definition for Controller API and CRDs. This is also required to support new features like IPAM, egress policy, NodePortLocal.
Describe the solution you have in mind
Per discussions offline, the proposal supported by most people is:
The major change needed is to change the current "networking" group to "internal", and then some new features can start to use the "networking" group.
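A minimal sketch of how the rename would surface to API clients (version string illustrative; note that the comments above converged on "controlplane" rather than "internal" for the new group name):

```yaml
# Today: the Controller/Agent internal API occupies the "networking" group
#   apiVersion: networking.antrea.tanzu.vmware.com/v1beta1

# After the change: the internal API moves to its own group, freeing
# "networking" for user-facing CRDs (new features such as egress policy)
#   apiVersion: controlplane.antrea.tanzu.vmware.com/v1beta1
```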