Way to have test pass if compile error raised #3060
Comments
@MarkMacArdle This is such a neat line of thinking. Thanks for opening it as a separate issue. Within dbt integration tests, we use Python test cases that assert an exception is raised. Of course, it's possible to do the same via a bash script that executes an exception-raising test, then inspects the exit code and stdout from the previous command. The current integration testing suite is a little more elegant in that it enables, to a very basic degree, hooking into dbt's main point of entry as a python module (despite the very true fact that dbt does not have a documented or stable python API). We're actually thinking about packaging and releasing that integration testing suite. Your issue encourages a few more imaginative leaps:
I think the latter might be a bigger lift than it's worth. Right now, I think about different personas doing dbt work, and the required skills / languages for each:
I don't know exactly how you'd slot yourself, but at the point of wanting to (a) raise custom compilation exceptions and (b) unit- or integration-test those custom exceptions, I imagine that writing a little bit of python may be necessary. (cc @kwigley: no action needed here, but I think you might find this interesting.)
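As a rough illustration of that "little bit of python" (a minimal sketch, not dbt's actual test framework; it assumes the dbt CLI is on PATH, that a failing compile produces a non-zero exit code and a "Compilation Error" message on stdout, and that `bad_equality_test` is a hypothetical test name):

```python
import subprocess
import unittest


class TestEqualityInputValidation(unittest.TestCase):
    def test_bad_inputs_raise_compile_error(self):
        # Run only the test that is expected to blow up at compile time.
        result = subprocess.run(
            ["dbt", "test", "--models", "bad_equality_test"],
            capture_output=True,
            text=True,
        )
        # dbt exits non-zero when compilation fails.
        self.assertNotEqual(result.returncode, 0)
        # The message dbt prints when a compiler error is raised.
        self.assertIn("Compilation Error", result.stdout)


if __name__ == "__main__":
    unittest.main()
```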
When I first read about dbt and its testing capabilities, this is actually what I presumed would be possible.
This would be fine either way and would allow getting to the same end. It sounds like it might be easier to implement. I think it's important to be able to keep all tests in the yml files: as well as keeping everything in one place, it avoids users having to learn a new syntax if they want to add a test that expects an error. My own experience is mostly SQL and python (although rarely using classes), jinja since I started using dbt, and some unit tests. I'm going to try to look into adding a hook for this. I haven't looked at how dbt actually runs the tests before, so if there's a good place to start, please shout.
This issue has been marked as Stale because it has been open for 180 days with no activity. If you would like the issue to remain open, please remove the stale label or comment on the issue, or it will be closed in 7 days.
Although we are closing this issue as stale, it's not gone forever. Issues can be reopened if there is renewed community interest; add a comment to notify the maintainers.
Describe the feature
A method of specifying that a test should pass if a compile error is raised.
I wanted this when working on modifying the equality test in dbt-utils. There were two places I thought it would be useful:
Validating inputs
The current equality test raises a compile error when its inputs are invalid, but there isn't a way to write an integration test that confirms the compile error is raised when expected.
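For context, input validation in a dbt macro typically looks something like the following (a minimal sketch, not the actual dbt-utils implementation; the macro and argument names are illustrative):

```sql
{% macro validate_equality_inputs(model, compare_model) %}
  {# Fail fast at compile time if a required argument is missing. #}
  {% if compare_model is none %}
    {{ exceptions.raise_compiler_error("equality test requires a compare_model argument") }}
  {% endif %}
{% endmacro %}
```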
Tests involving metadata
The modification I was working on checked whether two tables had the same columns. I added optional arguments for checking column order, capitalisation of names, and data types (PR here). The information schema is used to pull the column metadata, and if the specified parts don't match, compile errors are raised. With four optional arguments and many possible combinations, I would have loved a way to write integration tests to check that the compile errors were being raised when they should have been.
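A sketch of what that kind of metadata check can look like inside a macro (illustrative only, not the code from the PR; it assumes the standard `adapter.get_columns_in_relation` context method and only compares column names):

```sql
{% macro check_same_columns(model, compare_model) %}
  {% if execute %}
    {# Pull column metadata for both relations from the adapter. #}
    {% set cols_a = adapter.get_columns_in_relation(model) | map(attribute="name") | list %}
    {% set cols_b = adapter.get_columns_in_relation(compare_model) | map(attribute="name") | list %}
    {% if cols_a | sort != cols_b | sort %}
      {{ exceptions.raise_compiler_error("Column mismatch: " ~ cols_a ~ " vs " ~ cols_b) }}
    {% endif %}
  {% endif %}
{% endmacro %}
```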
Proposal
I like the discussion in #2982 of having a wrapper test that would consume another test, so I could have something like the sketch below:
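(A hypothetical schema.yml sketch; the `expect_compile_error` wrapper is made up for illustration and doesn't exist in dbt:)

```yaml
models:
  - name: my_model
    tests:
      # Hypothetical wrapper: passes only if the wrapped test
      # raises a compile error.
      - expect_compile_error:
          test: dbt_utils.equality
          compare_model: ref('my_model_bad_input')
```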
Some time could also be saved by not actually running the query: as soon as compilation succeeds (or raises the expected error), the test's outcome is already known, so there's no need to execute anything against the warehouse.
Describe alternatives you've considered
Using a bash/python script that expects to get an error, as suggested by @jtcohen6 here. I think you could get something working that way, but it wouldn't be as nice as defining everything in normal tests. You'd also need some way of differentiating between tests that should be run by the expecting-failure script and those that can be run normally; tags could maybe be used for that, as sketched below.
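(A rough sketch of that alternative, assuming an `expect_compile_error` tag has been applied to the relevant tests and that dbt exits non-zero and prints "Compilation Error" when compilation fails:)

```bash
#!/usr/bin/env bash
# Run only the tests tagged as expecting a compile error.
output=$(dbt test --models tag:expect_compile_error 2>&1)
status=$?

# The script succeeds only if dbt failed with a compile error.
if [ $status -ne 0 ] && echo "$output" | grep -q "Compilation Error"; then
  echo "PASS: compile error raised as expected"
  exit 0
else
  echo "FAIL: expected a compile error"
  exit 1
fi
```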
Additional context
Somewhat related to #2982, which requests a way to specify that a test should fail. The difference is that #2982 reacts to a returned query result, whereas this one would need to react to a compile error.
Who will this benefit?
Anyone using compile errors
Are you interested in contributing this feature?
Yes, though I don't know where to start, or whether this should be a feature of an external library.