New OpenshiftCatalogueValidator #661
Conversation
/assign awgreene
/assign gallettilance
High-level concern: I'm concerned about whether this stuff should really land in the upstream operator-framework project. Are we attempting to explicitly tie the operator-sdk to downstream OCP? There is a downstream version of the SDK; should these bits just land there?
In general, I think having a set of validations that OpenShift cares about is a good idea, but I do have some reservations about the specifics of the design in this proposal. Mainly, as a maintainer of operator-framework, I would not want to maintain OpenShift-specific validators in an operator-framework repository. Our intention from the beginning was for validations to be pluggable so that folks could define and layer new validations that they care about. Currently, this amounts to defining an implementation of the validator interface and adding it to the set of validators to be run. In the future, I was hoping we could offload validation entirely to something like Cue, so that we don't incur the overhead of maintaining custom tooling (IMO, this also keeps us from being brittle).
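To make that pluggability concrete, here is a minimal sketch of a layered, OpenShift-specific validator, assuming the validator contract in operator-framework/api is roughly `Validate(objs ...interface{}) []errors.ManifestResult`. The import paths, the `ValidatorFunc` adapter, and the `WarnInvalidCSV` helper are recalled from that repository and should be treated as assumptions rather than verified API:

```go
// Sketch only: an OpenShift-specific validator layered on top of the default set.
// The interface, adapter, and error helper used here are assumptions about the
// operator-framework/api validation packages, not verified signatures.
package validation

import (
	"github.com/operator-framework/api/pkg/manifests"
	"github.com/operator-framework/api/pkg/validation/errors"
	interfaces "github.com/operator-framework/api/pkg/validation/interfaces"
)

// OpenshiftValidator is just another Validator implementation; callers decide
// whether to append it to the set of validators they run.
var OpenshiftValidator interfaces.Validator = interfaces.ValidatorFunc(validateOpenshiftBundle)

func validateOpenshiftBundle(objs ...interface{}) (results []errors.ManifestResult) {
	for _, obj := range objs {
		bundle, ok := obj.(*manifests.Bundle)
		if !ok {
			continue
		}
		result := errors.ManifestResult{Name: bundle.Name}
		// Hypothetical OpenShift-specific rule: warn when the CSV carries no
		// annotations at all, since OpenShift-specific metadata is expected there.
		if bundle.CSV != nil && len(bundle.CSV.GetAnnotations()) == 0 {
			result.Warnings = append(result.Warnings, errors.WarnInvalidCSV(
				"CSV has no annotations; OpenShift-specific metadata is expected",
				bundle.CSV.GetName()))
		}
		results = append(results, result)
	}
	return results
}
```

Whoever owns the OpenShift rules could ship such an implementation from their own module and simply append it to whatever slice of validators `operator-sdk bundle validate` (or any other caller) runs.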
Hi @njhale, thank you very much for your input. I have some questions that I hope you can help me with.
Hi @kevinrizza, thank you very much for your input. I updated this EP with all the suggestions discussed and added a few open questions, which I hope will lead us to the best approach.
We have a downstream SDK repo but, as far as I know, it does not provide the binary; it is only used to provide the OCP images for Ansible/Helm operators. For SDK users and pipelines, it would be problematic and confusing not to be able to use the SDK binary that is provided upstream. It would also have downsides, such as not getting a bug fix or a new feature as quickly.
A few changes. I will continue to review the second half soon.
@@ -16,7 +16,7 @@ config:
   # We like to use really long lines
   line-length:
-    line_length: 400
+    line_length: 800
Why was this increased?
To allow us to make a table with bigger content. See its comment: "# We like to use really long lines". However, it was not long enough :-P
Please do not change this value. 400 is already too long. I set it that way so that when I added the linter job I did not have to wrap every single line of every file in the repository, but we don't actually want it that long. Please wrap paragraphs to a reasonable width to make reviewing easier.
A markdown table wraps the lines automatically and makes it easier for us to view the data.
However, the linter here is static and does not ignore tables.
Also, I am not sure what the motivation of this check is. A line_length of 400 already requires scrolling, just as 500 or 800 would, so why does it need to be 400 and not 800?
I chose 400 as the smallest value that allowed me to turn on the linter job as required without editing every file in the repository first. 80 or 100 would be my preference, but I haven't wanted to come back and edit all of those files to make that limit work. Raising the value to 800 makes it completely useless as a check and encourages authors to write documentation in a format that is hard to review, hard to track edits, etc.
It looks like the tables in this document could easily be turned into bullet lists and retain the semantic value of special formatting beyond regular paragraphs.
Linter rules that prevent people from using reasonable markdown structures seem like the tail wagging the dog.
Tables have headers and columns. Bullet lists are not the same thing.
Is there a way to flag some lines as "ignore"? Then we could flag the tables as ignore.
Given @kevinrizza's and @njhale's concerns, I think we might need to do what Nick suggested: take this opportunity to figure out how to create pluggable "rules" that can be pulled in dynamically and used by operator-sdk. That way, if someone wants to use an upstream operator-sdk with an OpenShift-specific validation, they can, without our team having to bake it into the upstream. So I see a couple of things coming out of this EP:
Maybe something that knows how to git pull validation files from a git repo. That would allow pulling rules data files without having to rebuild operator-sdk, and they could be used upstream or downstream.
**Cons**

- Effort required to keep a downstream repository for a downstream component and its releases [operator-framework/api][oper-api]
- SDK would probably need to use only the downstream import to avoid being misleading
SDK upstream would NOT be using a downstream o-f/api. So the con is that anyone who wants to target OpenShift MUST use a downstream SDK. That's actually pretty bad, because today it honestly doesn't matter which one you use.
If we must do Option B, then there MUST be a mechanism to dynamically load these downstream rules into an upstream SDK at runtime.
Could we not import both (downstream and upstream) APIs in the SDK repo, as suggested here?
### (Option C) Design and implement Pluggable Validator mechanism for SDK

Instead of we add the OpenshiftValidator to [operator-framework/api][oper-api] we would only provide it via the custom "downstream plugin" and make it available only for SDK binary.
Suggested change:
- Instead of we add the OpenshiftValidator to [operator-framework/api][oper-api] we would only provide it via the custom "downstream plugin" and make it available only for SDK binary.
+ Instead of adding the OpenshiftValidator to [operator-framework/api][oper-api], we would provide it via the custom "downstream plugin" and make it available only for SDK binary.
I don't think it would be "only for SDK binary"
Instead of adding the OpenshiftValidator to [operator-framework/api][oper-api], we would provide it via the custom "downstream plugin" and make it available only for the SDK binary by:
- Implementing an interface for the `operator-sdk bundle validate` command
- Then, we could have plugins that respect this interface and are recognized by the SDK command
- The plugins would be something like Go modules downloaded into a directory; the SDK command would be able to recognize and use them.

So, it cannot be used without the SDK at all.
The EP was clarified. Also, note that the ideas/suggestions made by you and @njhale are described in Option E.
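To illustrate the plugin idea described above, here is a purely hypothetical sketch; none of this is the SDK's actual API. The assumption is that `operator-sdk bundle validate` would look for executables in a well-known plugin directory, run each one against the bundle, and read findings back as JSON. The result schema and exec contract are invented for illustration only:

```go
package plugins

// Hypothetical sketch of how `operator-sdk bundle validate` could discover and
// run external validator plugins dropped into a directory. Names, the result
// schema, and the exec contract are all assumptions, not the SDK's real API.

import (
	"encoding/json"
	"os"
	"os/exec"
	"path/filepath"
)

// Result is the (assumed) JSON document each plugin prints on stdout.
type Result struct {
	Name     string   `json:"name"`
	Errors   []string `json:"errors"`
	Warnings []string `json:"warnings"`
}

// RunExternalValidators executes every file in pluginDir against bundleDir and
// collects the results each plugin reports.
func RunExternalValidators(pluginDir, bundleDir string) ([]Result, error) {
	entries, err := os.ReadDir(pluginDir)
	if err != nil {
		return nil, err
	}
	var results []Result
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		out, err := exec.Command(filepath.Join(pluginDir, e.Name()), bundleDir).Output()
		if err != nil {
			return nil, err
		}
		var r Result
		if err := json.Unmarshal(out, &r); err != nil {
			return nil, err
		}
		results = append(results, r)
	}
	return results, nil
}
```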
So, the way I envision this dynamic validator thingy is that all validations live in git repos: either a single repo with a collection of validations, one repo per validation, or a combination of both. This means that the validations will no longer be Go code. They will be something like CUE, YAML, JSON, whatever they need to be, as was suggested by others. Then in the SDK (or in other places too), there's a system that knows how to load validations from a repo. We load those validations, parse them, and run them. The beauty of this is that you do not have to embed these validations into the binaries, so upstream vs. downstream goes away entirely. It is super easy to add new validations: you could stand up a new repo with your validation in it and then point the tools to use it.
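As a sketch of that "validations as data" idea, assuming a deliberately invented rule schema: rule files are pulled from a git repository ahead of time (for example with a plain `git clone`), and a generic engine evaluates them against the bundle, so nothing OpenShift-specific has to be compiled into the binary:

```go
package rules

// Hypothetical sketch of the "validations as data" idea: rule files live in a
// git repository, are cloned or fetched at runtime, and are evaluated by a
// generic engine instead of being compiled into the binary. The rule schema
// below is invented purely for illustration.

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// Rule declares one required annotation on a ClusterServiceVersion.
type Rule struct {
	Name               string `json:"name"`
	RequiredAnnotation string `json:"requiredAnnotation"`
	Message            string `json:"message"`
}

// LoadRules reads every *.json rule file from a directory that was previously
// cloned from the rules repository.
func LoadRules(dir string) ([]Rule, error) {
	paths, err := filepath.Glob(filepath.Join(dir, "*.json"))
	if err != nil {
		return nil, err
	}
	var rules []Rule
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			return nil, err
		}
		var r Rule
		if err := json.Unmarshal(data, &r); err != nil {
			return nil, err
		}
		rules = append(rules, r)
	}
	return rules, nil
}

// Evaluate applies the rules to a CSV's annotations and returns readable findings.
func Evaluate(rules []Rule, csvAnnotations map[string]string) []string {
	var findings []string
	for _, r := range rules {
		if _, ok := csvAnnotations[r.RequiredAnnotation]; !ok {
			findings = append(findings, fmt.Sprintf("%s: %s", r.Name, r.Message))
		}
	}
	return findings
}
```

Standing up a new validation would then just mean adding a rule file to the repo; no rebuild of operator-sdk, upstream or downstream, would be required.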
[APPROVALNOTIFIER] This PR is NOT APPROVED.
@openshift-bot: Closed this PR.
Description
See that our goal here is: