From the model groups & access discussion:
- Users may optionally define their own set of expectations, overriding the defaults, to be checked against every public model in the project.
- These expectations should be defined in a separate file. (Teams can take advantage of .CODEOWNERS rules, e.g. to require reviews from repository maintainers any time these expectations are updated.)
- These rules would be validated during parsing. The idea is not for public models to magically inherit these configurations, but simply to make sure that they match up.
For example, a data team may want to enforce that every public model has persist_docs enabled (for integration with an external data catalog), is materialized as a view (on top of an underlying private table), and has at least a certain number of data quality tests. Imagine something like:
```yaml
# public_models.yml
description: true            # every public model must be described
config:                      # every public model must match these configs
  constraints_enabled: true
  persist_docs:
    relation: true
    columns: true
  materialized: view
columns:
  description: true          # every column must be described
tests:                       # matches 'test_name', with optional package prefix
  unique: 1                  # at least one unique test, on any column
  installed_package.totally_custom_test: 3  # at least 3 of whatever this is
```
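To make "validated during parsing" concrete, here is a minimal sketch of how a check against that file could work (the test-count rules are omitted for brevity). `node` and `validate_public_model` are illustrative stand-ins, not real dbt-core internals:

```python
# Hypothetical sketch only: `node` stands in for a public model's parsed
# representation, and `expectations` for public_models.yml loaded as a dict.
import yaml

def validate_public_model(node: dict, expectations: dict) -> list[str]:
    """Return human-readable violations for one public model."""
    errors = []
    name = node["name"]

    # description: true -> the model itself must be described
    if expectations.get("description") and not node.get("description"):
        errors.append(f"{name}: missing model description")

    # config: -> each expected config key must match the node's config exactly
    for key, expected in expectations.get("config", {}).items():
        actual = node.get("config", {}).get(key)
        if actual != expected:
            errors.append(f"{name}: config {key!r} is {actual!r}, expected {expected!r}")

    # columns: description: true -> every declared column must be described
    if expectations.get("columns", {}).get("description"):
        for col_name, col in node.get("columns", {}).items():
            if not col.get("description"):
                errors.append(f"{name}.{col_name}: column has no description")

    return errors

with open("public_models.yml") as f:  # the file shown above
    expectations = yaml.safe_load(f)

# a real node would come from the parsed project; a literal stands in here
node = {
    "name": "dim_customers",
    "description": "One row per customer.",
    "config": {
        "constraints_enabled": True,
        "persist_docs": {"relation": True, "columns": True},
        "materialized": "table",  # violates the 'view' expectation
    },
    "columns": {"email": {"description": ""}},
}
for problem in validate_public_model(node, expectations):
    print(problem)
```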
For totally custom & complex validation logic (e.g. "every column named email should have a BigQuery policy tag, a dbt pii tag, and a description containing the word 'pseudonymized'"), these rules can already be written today in:

- Jinja macros, enforced at compile/run time via hooks (à la dbt_project_evaluator and dbt_meta_testing)
- Custom scripts that parse dbt metadata artifacts (manifest.json); see the sketch below
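As a rough illustration of the second option, a standalone script over manifest.json might look like the following. The node and column keys follow the manifest artifact schema, but the location of BigQuery policy tags varies by adapter version, so the `meta.policy_tags` lookup here is an assumption:

```python
# Sketch of a custom check over dbt's manifest.json artifact, enforcing the
# example rule above for every column named 'email'.
# Assumption: policy tags live under the column's `meta` key; in some
# dbt-bigquery versions they appear elsewhere, so adjust the lookup.
import json

with open("target/manifest.json") as f:
    manifest = json.load(f)

violations = []
for unique_id, node in manifest["nodes"].items():
    if node["resource_type"] != "model":
        continue  # manifest nodes also include tests, seeds, snapshots, etc.
    for col_name, col in node.get("columns", {}).items():
        if col_name != "email":
            continue
        if not col.get("meta", {}).get("policy_tags"):
            violations.append(f"{unique_id}.email: no BigQuery policy tag")
        if "pii" not in (col.get("tags") or []):
            violations.append(f"{unique_id}.email: missing 'pii' tag")
        if "pseudonymized" not in (col.get("description") or "").lower():
            violations.append(f"{unique_id}.email: 'pseudonymized' not in description")

if violations:
    raise SystemExit("\n".join(violations))
print("all email columns pass")
```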
This is a really cool topic, and something we should go much deeper on in the future. This is more than just a one-off feature; it might be an entire package, plugin, or product.
I'm going to close this issue for now, and kick it out of scope for our nearer-term work on multi-project deployments. In the meantime, it will be possible to write similar rules (in Jinja) following the same pattern used by dbt_project_evaluator and dbt_meta_testing.