
feat: collect remote resource status when enabled #1292

Conversation


@hectorj2f hectorj2f commented Oct 2, 2020

What this PR does / why we need it:

This PR aims to implement the feature described in https://github.com/kubernetes-sigs/kubefed/blob/ba8f9a44e85951e460454ab7875cb0a551750124/docs/keps/20200619-federated-resource-status.md. This work is a first iteration towards improving the troubleshooting of the status of federated resources.

In this branch, we add a new property, remoteStatus, to the federated resources; it stores the status of the object created in each federated cluster. An example of a FederatedDeployment and its status is shown below:

- apiVersion: types.kubefed.io/v1beta1
  kind: FederatedDeployment
  metadata:
    finalizers:
    - kubefed.io/sync-controller
    generation: 1
    name: asystem
    namespace: asystem
    resourceVersion: "70174497"
  spec:
    placement:
      clusters:
      - name: cluster1
    template:
      metadata:
        labels:
          app: nginx
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: nginx
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - image: nginx
              name: nginx
  status:
    clusters:
    - name: "cluster1"
      remoteStatus:
        availableReplicas: 3
        conditions:
        - lastTransitionTime: "2020-09-18T11:07:55Z"
          lastUpdateTime: "2020-09-18T11:18:31Z"
          message: ReplicaSet "asystem-f89759699" has successfully progressed.
          reason: NewReplicaSetAvailable
          status: "True"
          type: Progressing
        - lastTransitionTime: "2020-09-24T05:42:11Z"
          lastUpdateTime: "2020-09-24T05:42:11Z"
          message: Deployment has minimum availability.
          reason: MinimumReplicasAvailable
          status: "True"
          type: Available
        observedGeneration: 3
        readyReplicas: 3
        replicas: 3
        updatedReplicas: 3
    conditions:
    - lastTransitionTime: "2020-05-25T20:23:59Z"
      lastUpdateTime: "2020-05-25T20:23:59Z"
      status: "True"
      type: "Propagation"

To enable this feature, you need to add the feature gate to the KubeFedConfig spec:

  featureGates:
  - name: RawResourceStatusCollection
    configuration: "Enabled"

In addition to that step, you need to enable status collection for each FederatedTypeConfig resource whose remoteStatus you wish to collect. To do so, set the statusCollection property to Enabled:

spec:
  federatedType:
    ...
  propagation: Enabled
  statusCollection: Enabled
  targetType:
    ...
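
As a reading aid, here is a minimal Go sketch of how a controller could combine these two settings; the names (`collectRemoteStatus`, `FederatedTypeConfigSpec`, the mode constants) are assumptions for illustration, not the actual kubefed implementation.

```go
// Hypothetical sketch: remote status is collected only when the
// RawResourceStatusCollection feature gate is on AND the FederatedTypeConfig
// opts in via statusCollection. Names are illustrative, not kubefed's code.
package main

import "fmt"

type StatusCollectionMode string

const (
	StatusCollectionEnabled  StatusCollectionMode = "Enabled"
	StatusCollectionDisabled StatusCollectionMode = "Disabled"
)

type FederatedTypeConfigSpec struct {
	StatusCollection *StatusCollectionMode
}

// collectRemoteStatus mirrors the two-level opt-in described above.
func collectRemoteStatus(rawStatusFeatureEnabled bool, spec FederatedTypeConfigSpec) bool {
	if !rawStatusFeatureEnabled {
		return false // feature gate disabled: never collect
	}
	return spec.StatusCollection != nil && *spec.StatusCollection == StatusCollectionEnabled
}

func main() {
	mode := StatusCollectionEnabled
	fmt.Println(collectRemoteStatus(true, FederatedTypeConfigSpec{StatusCollection: &mode}))  // true
	fmt.Println(collectRemoteStatus(false, FederatedTypeConfigSpec{StatusCollection: &mode})) // false: gate off
}
```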

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #

Special notes for your reviewer:
I am still testing some changes in the test suite, so additional polishing is expected.

Obviously, this feature could cause some performance degradation if many resources are federated and status collection is enabled for all of them. We are working on measuring and improving this behavior.

@hectorj2f hectorj2f self-assigned this Oct 2, 2020
@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes label Oct 2, 2020
@k8s-ci-robot k8s-ci-robot added the size/XL label Oct 2, 2020
@k8s-ci-robot k8s-ci-robot added the approved label Oct 2, 2020
@hectorj2f
Contributor Author

I'll have a look at the tests.

@hectorj2f hectorj2f force-pushed the hectorj2f/kubefed_fed_remote_status branch 3 times, most recently from fde81cd to 710f39a on October 5, 2020
@hectorj2f
Contributor Author

It seems the in-memory controllers test is failing 🔍 .

Contributor

@makkes makkes left a comment

just some questions.

@kensipe kensipe left a comment

a few nits and questions along the way

docs/keps/20200619-federated-resource-status.md:

    An example of a `FederatedDeployment` federated in a `cluster1` where the feature
    collected the resource remote status `remoteStatus`.

this is unclear...

  1. Does the example below define the full schema of remoteStatus?
  2. Does the structure in remoteStatus depend on the targetType?
  3. Is there a list of acceptable targetTypes?
  4. Using an example: status.clusters["cluster1"].replicas (not an exact JSONPath, but hopefully clear). What is the value here if the targetType does support replicas?
  5. What does `conditions[*].status == "True"` mean?

Contributor Author

`status: "True"` is part of the `DeploymentStatus` of the core Deployment resource; it is the standard status field of a Kubernetes condition.

@alenkacz alenkacz left a comment

I've done a first pass, it's mostly nits. On Monday I'll do another one as I think I understand the flow of the feature much better now :)

@alenkacz alenkacz left a comment

Alright, did my second pass. I think the general direction makes sense (doing this in the sync controller where propagation status is already updated) - so 👍 .

I have a couple of questions/concerns though, and because of that I'm not ready to approve yet.

@@ -207,18 +207,20 @@ func (f *FederatedTypeConfig) GetFederatedType() metav1.APIResource {
return apiResourceToMeta(f.Spec.FederatedType, f.GetFederatedNamespaced())


Please be a little more verbose with the docs here, especially now that things are a bit more complicated.

Note that statusType is invalid when RawResourceStatusCollection is enabled; also note that without RawResourceStatusCollection, status collection only works for services. I would personally go as far as marking statusType as deprecated.

this kind of information should be available on the type level https://github.com/kubernetes-sigs/kubefed/blob/11cbdf9171b19eeabdc56a6c2e1f29f6c851eee9/pkg/apis/core/v1beta1/federatedtypeconfig_types.go#L49

Contributor Author

Sorry, you propose to mark statusType as deprecated; where did we agree on that? I don't think we mentioned in the KEP that the code of the old mechanism would be removed.


Yes, I just wanted this PR to enhance the docs above that field, not remove anything.

pkg/controller/sync/dispatch/managed.go:

d.unmanagedDispatcher.RemoveManagedLabel(clusterName, clusterObj)
}

func (d *managedDispatcherImpl) RecordClusterError(propStatus status.PropagationStatus, clusterName string, err error) {
d.fedResource.RecordError(string(propStatus), err)
d.RecordStatus(clusterName, propStatus)
d.RecordStatus(clusterName, propStatus, "UnknownClusterError")


Is this a good idea? It means that instead of a status resource of known structure we'll report a string, and that can break people who parse the status expecting it to have the form of the resource status.

Contributor Author

The resource status for the federated resource does not have a known structure.

Contributor

That's not right. The resource status will be an unstructured object which is supposed to be of the status type of that particular resource. Users can be expected to decode it into a typed structure using the type information.


It does not have an explicit schema, but as a consumer of the federated API you would expect it to match the schema of the status of the underlying resource, right?
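
To illustrate the point made in this thread: assuming the remoteStatus payload mirrors the status schema of the underlying resource (as argued above), a consumer could decode it into the matching typed status. This is a hedged sketch, not kubefed code; only the apimachinery converter call is a real API.

```go
// Sketch only: decode an untyped remoteStatus map into the typed status of
// the underlying resource (here a Deployment).
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/runtime"
)

func main() {
	// remoteStatus as it appears under status.clusters[*].remoteStatus of the
	// FederatedDeployment shown in the PR description (abbreviated).
	remoteStatus := map[string]interface{}{
		"replicas":          int64(3),
		"readyReplicas":     int64(3),
		"availableReplicas": int64(3),
		"updatedReplicas":   int64(3),
	}

	// Decode the untyped map into the typed status of the underlying resource.
	var depStatus appsv1.DeploymentStatus
	if err := runtime.DefaultUnstructuredConverter.FromUnstructured(remoteStatus, &depStatus); err != nil {
		panic(err)
	}
	fmt.Printf("ready %d/%d replicas\n", depStatus.ReadyReplicas, depStatus.Replicas)
}
```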

}

func (d *managedDispatcherImpl) recordOperationError(propStatus status.PropagationStatus, clusterName, operation string, err error) util.ReconciliationStatus {
d.recordError(clusterName, operation, err)
d.RecordStatus(clusterName, propStatus)
d.RecordStatus(clusterName, propStatus, "UnknownOperationError")


again, not a big fan of the string here

Contributor Author

I am open to suggestions.


Leaving it empty? :-) Why is the propagation status not enough in this case?

Contributor

nil :-)
and the error should surface in conditions


Yeah, I think empty is much better than a string from the consumer's standpoint.


oh I see, yeah. I like the fact that conditions will be used for such cases! So +1
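
For readers following this thread, a hypothetical sketch of the alternative agreed on here: record a nil remote status on errors (rather than a sentinel string) and let conditions surface the error. The three-argument shape follows the quoted diff; the surrounding types and names are assumptions.

```go
// Hypothetical sketch of the discussed behavior; not kubefed's actual code.
package main

import "fmt"

// PropagationStatus stands in for kubefed's status.PropagationStatus.
type PropagationStatus string

type clusterStatus struct {
	propagation  PropagationStatus
	remoteStatus interface{} // nil on error; the raw status map on success
}

type dispatcher struct {
	statuses map[string]clusterStatus
}

// RecordStatus follows the three-argument shape in the quoted diff: cluster
// name, propagation status, and the collected remote status.
func (d *dispatcher) RecordStatus(clusterName string, propStatus PropagationStatus, remoteStatus interface{}) {
	d.statuses[clusterName] = clusterStatus{propagation: propStatus, remoteStatus: remoteStatus}
}

func main() {
	d := &dispatcher{statuses: map[string]clusterStatus{}}

	// Error path: keep remoteStatus nil so consumers never find a bare string
	// where they expect the resource's own status schema; the error itself
	// would surface through conditions instead.
	d.RecordStatus("cluster1", "CreationFailed", nil)

	// Success path: store the raw status collected from the member cluster.
	d.RecordStatus("cluster2", "OK", map[string]interface{}{"readyReplicas": 3})

	fmt.Printf("%+v\n", d.statuses)
}
```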

Name string `json:"name"`
Status PropagationStatus `json:"status,omitempty"`
RemoteStatus interface{} `json:"remoteStatus,omitempty"`
// Conditions []*metav1.Condition `json:"conditions,omitempty"`


So are conditions part of this PR or not? The KEP still mentions them, but I don't see them implemented, and I think they are a bit redundant now with the full status.

Contributor Author

Yes, it is part of the KEP, but as stated in the PR description, this work is a first iteration. Anyway, I could remove it.


Okay; the second part of my question is whether they are still needed now that we federate the whole status. The original idea was to federate only the conditions, but we stepped away from that.

}
return status.CollectedPropagationStatus{
StatusMap: statusMap,
ResourcesUpdated: d.resourcesUpdated,


should this rather be a field of CollectedPropagationStatus?

Contributor Author

Can you elaborate on the reasoning behind this decision?


Well, it's a field with the same value. Actually, it looks like ResourcesUpdated on CollectedResourceStatus is never used. Not sure if it's a bug or just redundant. It might be redundant because it always has the same value as the other one 🤷
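
A hypothetical reconstruction (based only on this thread, not the final code) of the two collected-status types and the seemingly duplicated ResourcesUpdated field:

```go
// Hypothetical reconstruction of the two collected-status types discussed in
// this thread; field and type names follow the conversation, not kubefed's code.
package main

import "fmt"

type PropagationStatus string

// CollectedPropagationStatus carries per-cluster propagation results.
type CollectedPropagationStatus struct {
	StatusMap        map[string]PropagationStatus
	ResourcesUpdated bool
}

// CollectedResourceStatus carries the per-cluster remote statuses. Its
// ResourcesUpdated field is the one questioned above as possibly redundant,
// since it always mirrors the value in CollectedPropagationStatus.
type CollectedResourceStatus struct {
	StatusMap        map[string]interface{}
	ResourcesUpdated bool
}

func main() {
	prop := CollectedPropagationStatus{
		StatusMap:        map[string]PropagationStatus{"cluster1": "OK"},
		ResourcesUpdated: true,
	}
	res := CollectedResourceStatus{
		StatusMap:        map[string]interface{}{"cluster1": map[string]interface{}{"readyReplicas": 3}},
		ResourcesUpdated: prop.ResourcesUpdated, // same value in both places
	}
	fmt.Println(prop.ResourcesUpdated == res.ResourcesUpdated) // true
}
```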

@hectorj2f hectorj2f force-pushed the hectorj2f/kubefed_fed_remote_status branch from c46c070 to e5b8ce8 on October 13, 2020
@jimmidyson
Contributor

@hectorj2f I noticed that the default federated CRDs don't have remoteStatus in their schema, and it is also not added when you `kubefedctl enable` a new resource for federation. This will also affect the migration to apiextensions.k8s.io/v1 (cc @makkes for #1296).


Name string `json:"name"`
Status PropagationStatus `json:"status,omitempty"`
RemoteStatus interface{} `json:"remoteStatus,omitempty"`
Contributor

This deviates from the proposal (KEP) that you originally posted, the intent of which was to precisely capture the state of a given resource in each cluster; in other words, mainly to derive whether the state is ready or notReady. It is understood that this state can only be derived from the individual resource status per cluster, but I have my apprehensions about storing that status (in etcd) as part of the individual federated resources. What you have added in this PR can already be done via the status controller, as I mentioned in this comment on your KEP, but as a separate status resource. The only change that would be needed would be to remove this condition. I am quite fine with capturing the raw status as a different resource right now.

The concerns that I have with storing the raw status as part of the federated resource:

  1. Performance: This is more of a question than anything else (as I don't have concrete numbers myself). I guess the CRUD on individual resources will be much more expensive, given all the additional statuses stored in them. This can be a problem as the sync reconciler already does a lot and runs very regularly.
  2. Manually unreadable: Think of getting a federated resource list and then trying to figure out the federated fields among all the statuses.
  3. Confusing statuses / too many statuses in the same place: As a user, what I am really interested in is: was my resource propagated fine to all clusters, and is my resource in a ready/not ready state in my clusters, with some place I can go and refer to (for example the separate resourceStatus) if I see a problematic state.

Contributor

Having said this, I would also ask that you please treat this as a trigger for a conversation to arrive at the best solution.
Another problem that I remember with raw status collection is user reports of cluster etcd being filled up to its limits, which is the reason we put this condition in place.
@jimmidyson @RainbowMango


That said, it will be collected only where you want to collect it; you still have to turn it on per type. Also, maybe these perf problems will be partially solved by a solution we were also discussing: not federating the whole status but only a selected subset via JSONPath. That at least makes it possible to lower the performance impact.

But overall I share your concerns regarding performance, even though this is opt-in, so current installations won't be affected.

Contributor Author

Thanks for the review @irfanurrehman

What you have added in this PR can already be done via the status controller as I did mention in this comment on your KEP, but as a separate status resource.

@irfanurrehman Yes, we've deviated from the original KEP due to some recent requirements, but we also updated the KEP as part of this PR. The reason is to store the status of the resources in the target clusters on the federated resource itself, instead of using the original FederatedServiceStatus approach. As mentioned in the description, this PR is a first iteration of the original proposal; fetching the status of the federated resources does not yet determine their readiness, so there is still some work to be done.

@irfanurrehman The main reason to move away from the existing approach is the inefficiency of having to store/manage the status of each resource as an additional resource type (and additional resources) in the datastore.

I will share our thoughts on your concerns below:

Performance: This is more of a question than anything else (as I don't have concrete numbers myself). I guess the CRUD on individual resources will be much more expensive, given all the additional statuses stored in them. This can be a problem as the sync reconciler already does a lot and runs very regularly.

Firstly, the feature is disabled by default for all resources. When enabled, it applies only to the specific resources you opt in. In addition, I decided not to have the status controller carry out this new task, and instead rely on the dispatch controller, which already fetches the resource. Since we already fetch the resource periodically, we can simply reuse the same logic to consume its status.
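
A minimal sketch of the idea described here, reusing the cluster object the dispatcher has already fetched and reading its status subtree. Only the unstructured accessor is the real Kubernetes API; the helper name and wiring are hypothetical.

```go
// Sketch: pull the raw status out of a cluster object the sync/dispatch path
// has already retrieved, so no extra API call is needed.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// remoteStatusOf returns the object's status subtree, or nil if it has none.
func remoteStatusOf(clusterObj *unstructured.Unstructured) (map[string]interface{}, error) {
	status, found, err := unstructured.NestedMap(clusterObj.Object, "status")
	if err != nil || !found {
		return nil, err
	}
	return status, nil
}

func main() {
	obj := &unstructured.Unstructured{Object: map[string]interface{}{
		"kind":   "Deployment",
		"status": map[string]interface{}{"readyReplicas": int64(3)},
	}}
	s, _ := remoteStatusOf(obj)
	fmt.Println(s) // map[readyReplicas:3]
}
```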



Another problem that I remember with raw status collection is user reports of cluster etcd being filled up to its limits, which is the reason we put this condition in place.



This sounds more like a problem we had with etcd v2, where bolt stored data locally on each node. etcd v3 has improved this behavior, and the size of the database has improved with the newer boltdb improvements.

In any case we are planning to evaluate the performance cost of this change.

Manually unreadable: Think of getting a federated resource list and then trying to figure out the federated fields among all the statuses.



I agree, but I assume this is a deliberate decision by the user, who wants to have the status of the resources as part of the federated resources.


Confusing statuses / too many statuses in the same place: As a user, what I am really interested in is: was my resource propagated fine to all clusters, and is my resource in a ready/not ready state in my clusters, with some place I can go and refer to (for example the separate resourceStatus) if I see a problematic state.



Yes, I agree with the terms here. But I see this feature as an addition that gives better visibility into the status of federated resources and lets you consume their status in a centralized manner. I can see value in this feature when the status is part of the FederatedResource.

pkg/controller/sync/controller.go:
collectedStatus := dispatcher.CollectedStatus()
return s.setFederatedStatus(fedResource, status.AggregateSuccess, &collectedStatus)
collectedStatus, collectedResourceStatus := dispatcher.CollectedStatus()
klog.Infof("Setting the federating status '%v'", collectedResourceStatus)
Contributor

It doesn't make sense to have this at loglevel 0.

Contributor

nit: s/federating/federated

return true, nil
}

klog.Infof("Updating status for %s %q", kind, name)
Contributor

Same as above, we don't want a flood of messages at loglevel 0 for each resource status updated.


@k8s-ci-robot k8s-ci-robot added the needs-rebase label Oct 20, 2020
@hectorj2f hectorj2f force-pushed the hectorj2f/kubefed_fed_remote_status branch from e5b8ce8 to 331a14a on November 2, 2020
@k8s-ci-robot k8s-ci-robot removed the needs-rebase label Nov 2, 2020
@irfanurrehman
Contributor

@hectorj2f I see that you have pushed new changes. Have you completed your review rework?

@hectorj2f
Contributor Author

@irfanurrehman Yes. But I want to share some graphs and numbers from a test we performed to compare the existing and the new approach. I'll ping you this week; thanks for keeping an eye on the PR.

@hectorj2f
Contributor Author

hectorj2f commented Nov 27, 2020

As promised, I am sharing the results of some local tests comparing the resource usage of the current kubefed status collection versus the additional remote status collection introduced in this PR.

We ran incremental tests on two different clusters, one running kubefed v0.5.0 and one running a kubefed_fed_remote_status build with the code of this PR. These tests created N federated namespaces, and for each namespace 2 federated services and 2 federated deployments were created.

N FederatedNamespace => 2*N FederatedService and 2*N FederatedDeployments

Both clusters were instrumented with Prometheus as the monitoring tool to collect metrics, and at the same time CPU and heap profiles were captured.
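
For reference, a hedged sketch of one way such CPU and heap profiles can be captured from a Go controller via the standard net/http/pprof endpoint. The port and wiring are assumptions, not kubefed's actual setup.

```go
// Sketch: expose Go's standard pprof endpoints so CPU and heap profiles can
// be captured while the load test runs. Port and wiring are illustrative.
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on DefaultServeMux
)

func main() {
	// With this listener running, profiles can be pulled with, e.g.:
	//   go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30   (CPU)
	//   go tool pprof http://localhost:6060/debug/pprof/heap                 (memory)
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```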

I am sharing a couple of charts extracted from the global resource usage, focusing on the SetFederatedStatus action, whose logic changed slightly here.

In the first graph, I show the CPU usage time (%) of the FederatedStatusAction for each one of the 4 tests:

[screenshot]

A flamegraph of the current kubefed version v0.5.0 using the cpu pprof file with 1000 federated resources:
[screenshot]

And a flamegraph of the proposed solution using the cpu pprof file with 1000 federated resources:

[screenshot]

The second chart highlights the percentage of memory allocated to the FederatedStatusAction. There is a small and expected increase due to the memory allocated to store the remote status of the resources. Overall, this increase amounts to an extra 60 MB of memory consumption when handling 3000 federated resources with their status.

[screenshot]

Both flamegraphs of the allocated memory space, based on the heap pprof file:

[screenshot]

and

[screenshot]

The following chart shows the in-use memory consumed by the json.DecodeState action. This value has to be higher because the size of the data being decoded is bigger as well. However, the growth of both approaches remains parallel, following a similar increment.

Screen Shot 2020-11-27 at 3 08 45 PM

Note that a cool-down period was reserved after each test to give the system some time to stabilize.

To sum up, this new approach can be expected to consume more resources, but it does not significantly affect the performance of kubefed when handling 1500 resources with their respective statuses stored as part of the federated resource. We also measured the state of etcd and did not notice any performance degradation or significant database growth.

@hectorj2f hectorj2f force-pushed the hectorj2f/kubefed_fed_remote_status branch from 331a14a to 02f40ac on November 27, 2020
@jimmidyson
Contributor

@hectorj2f Thank you so much for that in-depth performance benchmarking. I don't see anything of concern there. One thing that I don't see covered though is when resources are propagated to multiple clusters. It would be good to see what happens when propagating resources to 10, 50, 100, 200 clusters. Even without that, I am happy with this approach and would like to see it merged.

Signed-off-by: Hector Fernandez <[email protected]>
@hectorj2f hectorj2f force-pushed the hectorj2f/kubefed_fed_remote_status branch from 02f40ac to 154d5f6 on December 1, 2020
@hectorj2f
Contributor Author

@jimmidyson I agree those tests are missing. But the degradation should be proportional to the number of clusters, scaling linearly with how the system behaves today under these circumstances.
Of course, this feature won't prevent a situation where a FederatedResource placed across 1k clusters faces issues similar to those the core API type Endpoints faced (a consequence that led to the introduction of EndpointSlice). In other words, the scalability of this feature may be limited by the number of clusters to which you federate the resource and by the size of the remote status property.

@hectorj2f
Contributor Author

I will look at the tests; they have been failing since my last rebase 🤔.

@irfanurrehman
Contributor

Thanks @hectorj2f for doing this, and apologies for missing your update earlier.
It is satisfying to know that the overhead increase while collecting status has a linear correlation with the number of resources. The correlation should ideally be similar when collecting status from multiple clusters.
I think this is good to go. I can do a final pass after you confirm that you have finished your changes.

@hectorj2f hectorj2f force-pushed the hectorj2f/kubefed_fed_remote_status branch from 75b03b2 to 1ea9fac on December 6, 2020
@hectorj2f
Contributor Author

@irfanurrehman Yes, it is done. Please, feel free to re-review it. Thanks

@alenkacz alenkacz left a comment

LGTM! I would love to do some follow-ups around naming stuff but otherwise this looks great. I also tested it locally and it's working 👏

Signed-off-by: Hector Fernandez <[email protected]>
@hectorj2f hectorj2f force-pushed the hectorj2f/kubefed_fed_remote_status branch from 1ea9fac to 6c72bac on December 8, 2020
Contributor

@irfanurrehman irfanurrehman left a comment

@hectorj2f Thanks for doing this. I have one observation which remains open. Please update that and then we can merge.

@irfanurrehman
Contributor

Thanks @hectorj2f for doing this.
/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm label Dec 14, 2020
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: alenkacz, hectorj2f, irfanurrehman, makkes

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [hectorj2f,irfanurrehman]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
