
feat(cli): lacework query fail_on_count #415

Merged: hazedav merged 14 commits into main from the ALLY-485 branch on Jun 15, 2022
Conversation

@hazedav (Collaborator) commented May 17, 2021

As a Lacework user, I would like the "lacework query" CLI command to return a non-zero exit code when the number of results returned by my query meets a certain criterion.

I believe the cleanest way to implement this would be a "--fail_on_count" flag that accepts values such as ">0", "=1", "<10", etc. This lets us evaluate the number of results returned by the query without altering the semantics of the query itself.

ALLY-485
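For illustration only, here is a minimal Go sketch of how such an expression could be evaluated against a result count; the helper name `failOnCount` and the operator handling are assumptions for this example, not the code that was eventually merged:

```go
// Hypothetical sketch of evaluating a --fail_on_count expression such as
// ">0", "=1", or "<10" against the number of rows a query returned.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// failOnCount reports whether the CLI should exit non-zero, given the
// --fail_on_count expression and the query's result count.
func failOnCount(expr string, count int) (bool, error) {
	expr = strings.TrimSpace(expr)
	if expr == "" {
		return false, nil // flag not set: never fail on count
	}
	op := expr[:1]
	threshold, err := strconv.Atoi(strings.TrimSpace(expr[1:]))
	if err != nil {
		return false, fmt.Errorf("invalid --fail_on_count expression %q: %v", expr, err)
	}
	switch op {
	case ">":
		return count > threshold, nil
	case "<":
		return count < threshold, nil
	case "=":
		return count == threshold, nil
	default:
		return false, fmt.Errorf("unsupported operator %q in %q", op, expr)
	}
}

func main() {
	fail, _ := failOnCount(">0", 3) // query returned 3 rows
	fmt.Println(fail)               // true: the CLI would exit non-zero
}
```

With ">0", any non-empty result set would flip the command to a non-zero exit, which matches the user story above.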

@afiune (Contributor) left a comment

It took a lot of customer interviews and UX sessions to figure out that a fail flag would help. Have you reached out to anyone to verify that this is indeed something we want to add to the query commands? If so, go for it! 🙌🏽

One major blocker, though: the os.Exit().
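(For readers outside the thread: the usual way around a bare os.Exit() deep inside command logic is to return an error that carries the desired exit code and translate it exactly once at the top level, which is the direction the merged change takes with errors.As further down. The names in this sketch, queryFailonError and runQuery, and the exit code value are illustrative assumptions, not the go-sdk's actual identifiers.)

```go
// Hedged sketch: return a typed error carrying an exit code instead of
// calling os.Exit() inside command logic.
package main

import (
	"errors"
	"fmt"
	"os"
)

// queryFailonError carries the exit code the CLI should use.
type queryFailonError struct {
	ExitCode int
	Message  string
}

func (e *queryFailonError) Error() string { return e.Message }

// runQuery stands in for the command handler: it returns an error
// rather than exiting itself, so it stays testable.
func runQuery() error {
	count := 3 // pretend the query returned 3 rows and --fail_on_count was ">0"
	if count > 0 {
		return &queryFailonError{ExitCode: 9, Message: "--fail_on_count expression matched"}
	}
	return nil
}

func main() {
	if err := runQuery(); err != nil {
		var qfe *queryFailonError
		if errors.As(err, &qfe) {
			fmt.Fprintln(os.Stderr, qfe.Message)
			os.Exit(qfe.ExitCode) // the only os.Exit, at the very top of the program
		}
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```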

@hazedav (Collaborator, Author) commented May 17, 2021

> It took a lot of customer interviews and UX sessions to figure out that a fail flag would help. Have you reached out to anyone to verify that this is indeed something we want to add to the query commands? If so, go for it! 🙌🏽

Would the previous interviews and UX sessions not just apply here? I personally haven't reached out to anyone, but we can stash this until we validate that customers would adopt such a thing...

@afiune (Contributor) left a comment

Should we close this PR?

@hazedav (Collaborator, Author) commented May 24, 2022

> Should we close this PR?

I thought this was good to go; we just wanted to refactor the "policy enforcement" flows...

> @hazedav let's refactor both "policy enforcement" workflows.

Also captured in this thread:

https://lacework.slack.com/archives/C01RSNZ0C5V/p1649787895250409?thread_ts=1648222632.313529&cid=C01RSNZ0C5V

@hazedav (Collaborator, Author) commented Jun 4, 2022

Updated to use the new FAIL-ON exit terminology.

@hazedav requested a review from afiune on June 4, 2022
@afiune (Contributor) left a comment

(Tenor GIF reaction)

Commits:
- Enabling tests for production pipelines
- Incorporating feedback for flag description (ALLY-485)
- Fixing up test conditions (ALLY-485)
- Fixing up test conditions (ALLY-485)
- Wiring up exit code (ALLY-485)
Review comment on this diff hunk:

```go
var e *vulnerabilityPolicyError
if errors.As(err, &e) {
	exitwithCode(e, e.ExitCode)
var vpe *vulnerabilityPolicyError
```
@hazedav (Collaborator, Author) commented:

cc @afiune. We needed some of this magic.
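(The "magic" here is errors.As: it walks the wrapped-error chain and fills in the target when it finds a matching concrete type, so an exit code survives wrapping with fmt.Errorf("...: %w", err) on its way up the stack. A small standalone illustration with hypothetical names, not the go-sdk's actual types:)

```go
package main

import (
	"errors"
	"fmt"
)

// policyError is a stand-in for a type like *vulnerabilityPolicyError
// that carries an exit code; the name and value here are hypothetical.
type policyError struct{ ExitCode int }

func (e *policyError) Error() string { return "policy evaluation failed" }

func main() {
	// An error wrapped on its way up the call stack.
	wrapped := fmt.Errorf("running query: %w", &policyError{ExitCode: 9})

	// A plain type assertion only sees the outer wrapper and misses it.
	if _, ok := wrapped.(*policyError); !ok {
		fmt.Println("type assertion misses the wrapped error")
	}

	// errors.As unwraps the chain and sets pe to the inner value.
	var pe *policyError
	if errors.As(wrapped, &pe) {
		fmt.Println("errors.As finds it; exit code:", pe.ExitCode)
	}
}
```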

@hazedav merged commit 0b1de6a into main on Jun 15, 2022
@hazedav deleted the ALLY-485 branch on June 15, 2022