
New OpenshiftCatalogueValidator #661

Closed · wants to merge 1 commit

Conversation

@camilamacedo86 commented Feb 20, 2021

Description

Our goal here is:

  • We have OLM, and it has rules/criteria for working with operators upstream (Kubernetes) and downstream (OCP). We offer these standard options on operatorhub.io.
  • Here, we propose having a validator for OCP, just as we have one for operatorhub.io.

@camilamacedo86 (Author) commented Feb 20, 2021

/assign awgreene

@camilamacedo86 (Author)

/assign gallettilance
/assign bparees

@camilamacedo86 changed the title from "EP: OpenshiftCatalogueValidator" to "New OpenshiftCatalogueValidator in operator-framework/api" on Feb 20, 2021

@kevinrizza (Member)

High level concern: I'm concerned about whether this stuff should really land in the upstream operator-framework project. Are we attempting to explicitly tie the operator-sdk with downstream OCP? There is a downstream version of the sdk, should these bits just land there?

@njhale (Contributor) left a comment

In general, I think having a set of validations that OpenShift cares about is a good idea, but I do have some reservations about the specifics of the design in this proposal. Mainly, as a maintainer of operator-framework, I would not want to maintain OpenShift-specific validators in an operator-framework repository. Our intention from the beginning was for validations to be pluggable so that folks could define and layer new validations that they care about. Currently, this amounts to defining an implementation of the validator interface and adding it to the set of validators to be run. In the future, I was hoping we could offload validation entirely to something like Cue, so that we don't incur the overhead of maintaining custom tooling (IMO, this also keeps us from being brittle).
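As a rough illustration, "defining an implementation of the validator interface" amounts to something like the Go sketch below. The type shapes only approximate the real operator-framework/api validation packages; ManifestResult's fields, OpenShiftValidator, and RunAll are invented here for illustration:

```go
package validation

// ManifestResult approximates the result type described above; the real
// definition lives in operator-framework/api's validation packages.
type ManifestResult struct {
	Name     string
	Errors   []string
	Warnings []string
}

// Validator is the pluggable hook: anything implementing it can be layered
// into the set of validators that run over a bundle's objects.
type Validator interface {
	Validate(objs ...interface{}) []ManifestResult
}

// OpenShiftValidator is a hypothetical OCP-specific implementation.
type OpenShiftValidator struct{}

func (OpenShiftValidator) Validate(objs ...interface{}) []ManifestResult {
	results := make([]ManifestResult, 0, len(objs))
	for range objs {
		// OCP-specific rules (supported APIs, annotations, etc.) would run here.
		results = append(results, ManifestResult{Name: "OpenShiftValidator"})
	}
	return results
}

// RunAll layers a custom validator alongside the existing set of validators.
func RunAll(validators []Validator, objs ...interface{}) []ManifestResult {
	var all []ManifestResult
	for _, v := range validators {
		all = append(all, v.Validate(objs...)...)
	}
	return all
}
```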


@openshift-ci-robot added the do-not-merge/hold label on Feb 28, 2021
@camilamacedo86 changed the title from "New OpenshiftCatalogueValidator in operator-framework/api" to "WIP: New OpenshiftCatalogueValidator in operator-framework/api" on Feb 28, 2021
@openshift-ci-robot added the do-not-merge/work-in-progress label on Feb 28, 2021
@camilamacedo86 (Author) commented Mar 9, 2021

Hi @njhale,

Thank you very much for your input. I have some questions that I hope you can help me with.

> Our intention from the beginning was for validations to be pluggable so that folks could define and layer new validations that they care about. Currently, this amounts to defining an implementation of the validator interface and adding it to the set of validators to be run. In the future, I was hoping we could offload validation entirely to something like Cue, so that we don't incur the overhead of maintaining custom tooling (IMO, this also keeps us from being brittle).

  • Regarding the OCP concerns, I got that. I tried to add all the possible alternative solutions found so far. 👍
  • Regarding the Cue option, I think we could use the Scorecard, which already exists, instead. However, that means we cannot centralize the code, and we risk more than one place doing the same thing.

@camilamacedo86 (Author) commented Mar 9, 2021

Hi @kevinrizza,

Thank you very much for your input.

I updated this EP and added all the discussed suggestions plus a few open questions, which I hope will lead us to the best approach.

> High level concern: I'm concerned about whether this stuff should really land in the upstream operator-framework project. Are we attempting to explicitly tie the operator-sdk with downstream OCP? There is a downstream version of the sdk, should these bits just land there?

We have a downstream SDK repo but, as far as I know, it does not provide the binary; it is only used to provide the OCP images for Ansible/Helm operators. For SDK users and pipelines, it would be problematic and confusing not to be able to use the SDK binary provided upstream. It would also bring downsides, such as not getting a bug fix or a new feature as quickly.

@camilamacedo86 changed the title from "WIP: New OpenshiftCatalogueValidator in operator-framework/api" to "New OpenshiftCatalogueValidator in operator-framework/api" on Mar 9, 2021
@openshift-ci-robot removed the do-not-merge/work-in-progress label on Mar 9, 2021
@jmrodri (Contributor) left a comment

A few changes. I will continue to review the second half soon.

@@ -16,7 +16,7 @@ config:
 # We like to use really long lines
 line-length:
-  line_length: 400
+  line_length: 800
Contributor:

Why was this increased?

Author:

To allow us to build a table with bigger content. See its comment, "# We like to use really long lines"; however, it was not long enough. :-P

Contributor:

Please do not change this value. 400 is already too long. I set it that way so that when I added the linter job I did not have to wrap every single line of every file in the repository, but we don't actually want it that long. Please wrap paragraphs to a reasonable width to make reviewing easier.

Author:

@dhellmann, @jmrodri,

A markdown table breaks the lines automatically and makes it easier for us to view the data.
However, the linter here is static and does not ignore tables.
Also, I am not sure what the motivation for this check is: line_length: 400 requires scrolling just as 500 or 800 would, so why does it need to be 400 and not 800?

Contributor:

I chose 400 as the smallest value that allowed me to turn on the linter job as required without editing every file in the repository first. 80 or 100 would be my preference, but I haven't wanted to come back and edit all of those files to make that limit work. Raising the value to 800 makes it completely useless as a check and encourages authors to write documentation in a format that is hard to review, hard to track edits, etc.

It looks like the tables in this document could easily be turned into bullet lists and retain the semantic value of special formatting beyond regular paragraphs.

Contributor:

Linter rules that prevent people from using reasonable markdown structures seem like the tail wagging the dog.

Tables have headers and columns. Bullet lists are not the same thing.

Contributor:

Is there a way to flag some lines as "ignore"? Then we could flag the tables as ignore.
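If the linter is markdownlint, as the config above suggests, inline suppression comments do exist; MD013 is its line-length rule, so a table could be exempted along these lines (the table content here is just a placeholder):

```markdown
<!-- markdownlint-disable MD013 -->
| Option | Pros | Cons |
| ------ | ---- | ---- |
| A      | ...  | ...  |
<!-- markdownlint-enable MD013 -->
```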

enhancements/operator-framework-api/catalogue-check.md: 9 outdated review threads (resolved)
@jmrodri (Contributor) commented Mar 11, 2021

> In general, I think having a set of validations that OpenShift cares about is a good idea, but I do have some reservations about the specifics of the design in this proposal. Mainly, as a maintainer of operator-framework, I would not want to maintain OpenShift-specific validators in an operator-framework repository. Our intention from the beginning was for validations to be pluggable so that folks could define and layer new validations that they care about. Currently, this amounts to defining an implementation of the validator interface and adding it to the set of validators to be run. In the future, I was hoping we could offload validation entirely to something like Cue, so that we don't incur the overhead of maintaining custom tooling (IMO, this also keeps us from being brittle).

Given @kevinrizza's and @njhale's concerns, I think we need to do what Nick suggested: take this opportunity to figure out how to create pluggable "rules" that can be pulled in dynamically and used by operator-sdk. That way, if someone wants to use an upstream operator-sdk with an OpenShift-specific validation, they can, without our team having to bake it into the upstream.

So I see a couple of things coming out of this EP:

  1. Define the validation rules needed for an "openshift validator", but using just the "rules", not the current implementation.
  2. Probably another EP to design a pluggable or more dynamic validation engine. It cannot use the kubebuilder-style plugins because those are compiled in.

Maybe something that knows how to git pull validation files from a git repo. That would allow pulling rules data files without having to rebuild operator-sdk, and they could be used upstream or downstream; for example:

operator-sdk bundle validate --optional-validator github.com/myorg/my-custom-validator


**Cons**

- Effort required to keep a downstream repository for a downstream component and its releases [operator-framework/api][oper-api]
- SDK would need to probably use only the downstream import to avoid misleading
Contributor:

SDK upstream would NOT be using a downstream o-f/api. So the con is that anyone that wants to target openshift MUST use a downstream SDK. That's actually pretty bad because today it honestly doesn't matter which one you use.

If we must do Option B, then there MUST be a mechanism to dynamically load these downstream rules into an upstream at runtime.

Author:

Could we not import both APIs (downstream and upstream) in the SDK repo, as suggested here?


### (Option C) Design and implement Pluggable Validator mechanism for SDK

Instead of we add the OpenshiftValidator to [operator-framework/api][oper-api] we would only provide it via the custom "downstream plugin" and make it available only for SDK binary.
Contributor:

Suggested change:
- Instead of we add the OpenshiftValidator to [operator-framework/api][oper-api] we would only provide it via the custom "downstream plugin" and make it available only for SDK binary.
+ Instead of adding the OpenshiftValidator to [operator-framework/api][oper-api], we would provide it via the custom "downstream plugin" and make it available only for SDK binary.

Contributor:

I don't think it would be "only for SDK binary"

@camilamacedo86 (Author) commented Mar 21, 2021

Instead of adding the OpenshiftValidator to [operator-framework/api][oper-api], we would provide it via a custom "downstream plugin" and make it available only for the SDK binary by:

  • Implementing an interface for the operator-sdk bundle validate command
  • Having plugins that respect this interface and are recognized by the SDK command
  • Shipping the plugins as Go modules that are downloaded into a directory; the SDK command would be able to recognize and use them (see the sketch below)

So, it cannot be used without the SDK at all.

The EP was clarified. Also, note that the ideas/suggestions made by you and @njhale are described in option E.
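As a rough sketch of how the SDK command could recognize plugins in a directory: the directory layout, the "validator-" naming convention, and the exec-style contract below are all invented for illustration, not a settled design:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// discoverPlugins scans a well-known directory (invented for this sketch)
// for executables named "validator-*" and returns their paths.
func discoverPlugins(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var plugins []string
	for _, e := range entries {
		if !e.IsDir() && strings.HasPrefix(e.Name(), "validator-") {
			plugins = append(plugins, filepath.Join(dir, e.Name()))
		}
	}
	return plugins, nil
}

func main() {
	// Hypothetical location; the EP would need to settle on a real contract.
	plugins, err := discoverPlugins(filepath.Join(os.Getenv("HOME"), ".operator-sdk", "validators"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, p := range plugins {
		// Each plugin validates the bundle directory passed as an argument
		// and reports findings on stdout (again, an invented contract).
		out, err := exec.Command(p, "./bundle").CombinedOutput()
		fmt.Printf("%s:\n%s(err: %v)\n", p, out, err)
	}
}
```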

@jmrodri (Contributor) commented Mar 12, 2021

So, the way I envision this dynamic validator thingy is that all validations live in git repos: either a single repo with a collection of validations, one repo per validation, or a combination of both. This means the validations will no longer be Go code; they will be something like CUE, YAML, or JSON, whatever they need to be, as was suggested by others.

Then in the SDK (or in other places too), there's a system that knows how to load validations from a repo. We load those validations, parse them, and run them. The beauty of this is that you do not have to embed these validations into the binaries, so upstream vs. downstream goes away entirely. It is super easy to add new validations: you could stand up a new repo with your validation in it, then point the tools at it.
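A hedged sketch of that flow, with an invented JSON rule schema (the real format could be CUE, YAML, or anything else, as noted above; the repo URL reuses the hypothetical one from the earlier command):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// Rule is an invented declarative check: a named rule that requires a
// top-level field to be present in the manifest under validation.
type Rule struct {
	Name          string `json:"name"`
	RequiredField string `json:"requiredField"`
}

// fetchRules shallow-clones a rules repo and reads rules.json from it.
// (Real code would clone into a fresh temporary directory.)
func fetchRules(repoURL, dir string) ([]Rule, error) {
	if err := exec.Command("git", "clone", "--depth=1", repoURL, dir).Run(); err != nil {
		return nil, err
	}
	data, err := os.ReadFile(filepath.Join(dir, "rules.json"))
	if err != nil {
		return nil, err
	}
	var rules []Rule
	return rules, json.Unmarshal(data, &rules)
}

// run applies each rule to a manifest decoded as a generic map.
func run(rules []Rule, manifest map[string]interface{}) {
	for _, r := range rules {
		if _, ok := manifest[r.RequiredField]; !ok {
			fmt.Printf("FAIL %s: missing field %q\n", r.Name, r.RequiredField)
		} else {
			fmt.Printf("PASS %s\n", r.Name)
		}
	}
}

func main() {
	rules, err := fetchRules("https://github.com/myorg/my-custom-validator", filepath.Join(os.TempDir(), "rules"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	run(rules, map[string]interface{}{"apiVersion": "v1"})
}
```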

@openshift-ci-robot

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
To complete the pull request process, please ask for approval from bparees after the PR has been reviewed.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@camilamacedo86 changed the title from "New OpenshiftCatalogueValidator in operator-framework/api" to "New OpenshiftCatalogueValidator" on Mar 21, 2021
@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci bot added the lifecycle/stale label on Jun 19, 2021
@openshift-bot

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci bot added the lifecycle/rotten label and removed the lifecycle/stale label on Jul 19, 2021
@openshift-bot

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-ci bot closed this on Aug 19, 2021
@openshift-ci bot (Contributor) commented Aug 19, 2021

@openshift-bot: Closed this PR.

In response to this:

> Rotten issues close after 30d of inactivity.
>
> Reopen the issue by commenting /reopen.
> Mark the issue as fresh by commenting /remove-lifecycle rotten.
> Exclude this issue from closing again by commenting /lifecycle frozen.
>
> /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
