TG2-MEASURE_VALIDATIONTESTS_PREREQUISITESNOTMET #134
Comments
This probably doesn't fit into the framework: prerequisites not met is a Result.status, not a Result.value, and it is not clear that a MEASURE can evaluate that metadata.
It might not fit the Framework, but it is important for the tests and fits with the other related MEASURES: the number of VALIDATION tests passed, the number failed, and the number that could not be run because the prerequisites were not met. These are metadata on the tests, and without this one the others make no sense.
@ArthurChapman Sounds like we need to file an issue against the framework for @allankv to evaluate what might be needed there to support this need. For a single record (and we should probably rename this test to include SINGLE in the name, since it is important to clearly distinguish measures operating on single records from those operating on multiple records), the sum of Problems passed, Problems failed, and Problems with prerequisites not met should equal the total number of Problems tested, and should be consistent among records in a single data quality report (likewise for Validations COMPLIANT, Validations NOT_COMPLIANT, and Validations with prerequisites not met).
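A minimal sketch of that consistency constraint, assuming each VALIDATION outcome for a record is available as a (Response.status, Response.result) pair; the function name and data shape here are hypothetical, not part of the framework or any published implementation:

```python
# Hypothetical sketch: the three single-record measures over VALIDATION
# results should partition the set of validations attempted on a record.
PREREQ = {"INTERNAL_PREREQUISITES_NOT_MET", "EXTERNAL_PREREQUISITES_NOT_MET"}

def check_validation_measures(responses):
    """responses: list of (Response.status, Response.result) pairs for the
    VALIDATION tests run on one record."""
    compliant = sum(1 for status, result in responses
                    if status == "RUN_HAS_RESULT" and result == "COMPLIANT")
    not_compliant = sum(1 for status, result in responses
                        if status == "RUN_HAS_RESULT" and result == "NOT_COMPLIANT")
    prereq_not_met = sum(1 for status, _ in responses if status in PREREQ)
    # Passed + failed + prerequisites-not-met should equal the total tested.
    assert compliant + not_compliant + prereq_not_met == len(responses)
```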
I don't think this is a SINGLE record test. This (and the other tests mentioned above) is meant to be a count across a dataset when all of the tests are run on that dataset: a report on the tests run on that dataset at a point in time. I think we originally called it a REPORT rather than a MEASURE.
I don't agree with @ArthurChapman about this test being multi-record: it is definitely single-record. Like any of the assertions, the results can be accumulated across any set of multiple records (or datasets, etc.). Like the other MEASURES, they are additive. In the case of VALIDATIONs, the result will be a count of COMPLIANT/NOT_COMPLIANT. With AMENDMENTS, I presume RUN/FAILED/...?
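As a hedged illustration of that additivity (the helper name is hypothetical, not from any published implementation), a dataset-level figure for this MEASURE is just the sum of the per-record counts:

```python
# Hypothetical sketch: single-record MEASURE values are additive, so the
# dataset-level count is the sum of the per-record counts.
def accumulate_measure(per_record_counts):
    """per_record_counts: this MEASURE's value for each record in a dataset."""
    return sum(per_record_counts)

# Three records with 0, 2, and 1 validations skipped for unmet prerequisites
# give a dataset-level count of 3.
assert accumulate_measure([0, 2, 1]) == 3
```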
I agree with you @Tasilee: it must have been late at night when I was responding to that. Of course it is Single Record.
Slight tweak of Expected Response applied: 'INTERNAL_PREREQUISITES_NOT_MET if no tests of type VALIDATION were run; REPORT the number of tests of output type VALIDATION that did not run because prerequisites for those tests were not met (Result.status="INTERNAL_PREREQUISITES_NOT_MET" or "EXTERNAL_PREREQUISITES_NOT_MET"); otherwise NOT_REPORTED'
I suggest the Description: 'The number of distinct VALIDATION tests that have a Response.status="EXTERNAL_PREREQUISITES_NOT_MET" or "INTERNAL_PREREQUISITES_NOT_MET" for a given record.' in place of: 'The number of VALIDATION type tests run on a record that have a Response.status="EXTERNAL_PREREQUISITES_NOT_MET" or "INTERNAL_PREREQUISITES_NOT_MET".'
From the Zoom meeting of 30th May 2022, change the Expected Response from: 'INTERNAL_PREREQUISITES_NOT_MET if no tests of type VALIDATION were run; REPORT the number of tests of output type VALIDATION that did not run because prerequisites for those tests were not met (Result.status="INTERNAL_PREREQUISITES_NOT_MET" or "EXTERNAL_PREREQUISITES_NOT_MET"); otherwise NOT_REPORTED' to: 'INTERNAL_PREREQUISITES_NOT_MET if no tests of type VALIDATION were run; Report the number of tests of output type VALIDATION that did not run because prerequisites for those tests were not met (Result.status="INTERNAL_PREREQUISITES_NOT_MET" or "EXTERNAL_PREREQUISITES_NOT_MET")'
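A minimal sketch of that revised Expected Response, assuming the Response.status values of the VALIDATION tests run on a single record are available as a list; the function name, the returned dictionary shape, and the RUN_HAS_RESULT packaging of the reported count are illustrative assumptions, not a published API:

```python
# Hypothetical sketch of the revised Expected Response above.
PREREQ_STATUSES = {"INTERNAL_PREREQUISITES_NOT_MET", "EXTERNAL_PREREQUISITES_NOT_MET"}

def measure_validations_prerequisites_not_met(validation_statuses):
    """validation_statuses: Response.status strings of the VALIDATION tests
    run on one record (empty if none were run)."""
    if not validation_statuses:
        # INTERNAL_PREREQUISITES_NOT_MET if no tests of type VALIDATION were run.
        return {"status": "INTERNAL_PREREQUISITES_NOT_MET", "value": None}
    # Otherwise report the count of validations whose prerequisites were not met.
    count = sum(1 for s in validation_statuses if s in PREREQ_STATUSES)
    return {"status": "RUN_HAS_RESULT", "value": count}
```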
Updated wording of Notes to be consistent with #135 and to remove internal GitHub references.
Splitting bdqffdq:Information Elements into "Information Elements ActedUpon" and "Information Elements Consulted". I am unsure about this MEASURE: I opted for "Consulted". Also changed "Field" to "TestField" and "Output Type" to "TestType", and updated "Specification Last Updated".
AllDarwinCoreTerms needs to be replaced by a list of relevant validations, in the form used in the multi-record measures (e.g. bdq:VALIDATION_BASISOFRECORD_NOTEMPTY.Response), as it is the results of validations on the single record that are the information elements for this test, not the Darwin Core terms. We should probably also split this test into one test for each use case, with information elements matching the validations found in that use case.
I agree @chicoreus, but there are two issues. First, we probably should not list CORE VALIDATION tests, as these may change depending on context. Second, we don't agree about splitting on use case. I have changed Information Element Consulted from "All DarwinCoreTerms" to "All CORE tests of type VALIDATION that were run".
Good, we just need to use something machine interpretable as the specific information element. That is the advantage of listing the possible validations as information elements consulted.
We should be able to agree on a machine-readable term that means all tests of a given type that were run on the record.
bdq:AllValidationsRunOnSingleRecord?
Something like that would then be flexible across use cases, would support users who include additional validations in their test suite, and would be consistent with our use of specific Darwin Core terms as information elements.
The framework allows for very generic information elements; one thing we've been doing to aid implementors is to be specific about which input terms should be bound as information elements in each test, rather than using Space/Time/etc. as information elements.
Changed Information Elements Consulted to "bdq:AllValidationTestRunOnSingleRecord" on all relevant MEASURE type tests.