[ADAP-522] Three-part identifiers (`catalog.schema.table`) #755
Comments
This issue has been marked as Stale because it has been open for 180 days with no activity. If you would like the issue to remain open, please comment on it; otherwise it will be closed in 7 days.

Although we are closing this issue as stale, it's not gone forever. Issues can be reopened if there is renewed community interest. Just add a comment to notify the maintainers.
Hi everyone, can someone suggest how to use the Apache Nessie catalog with the Spark adapter in dbt? If we modify `generate_schema_name` to prepend the Nessie catalog name, the `list_None_*` API fails, because it builds a `show table extended in nessie.<schema_name> like '*'` query to get the table names in the schema. And we cannot use `generate_database_name` for customisation in the Spark adapter, as it fails with a "Cannot set database in spark!" error.
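For context, here is a minimal sketch of why the catalog-in-schema workaround breaks. The function name and shape are hypothetical, not the adapter's actual code; it only illustrates that the schema-listing statement expects a single-part schema name:

```python
# Hypothetical sketch of how dbt-spark builds its schema-listing query.
# The helper name is illustrative, not the adapter's real API.

def show_tables_sql(schema: str) -> str:
    # dbt-spark lists relations with a SHOW TABLE EXTENDED statement
    # whose IN clause expects a bare (one-part) schema name.
    return f"show table extended in {schema} like '*'"

# Default behaviour: the schema is a single identifier.
print(show_tables_sql("analytics"))
# If generate_schema_name smuggles a catalog prefix into the schema,
# the IN clause now carries a two-part name, which the grammar rejects
# unless the adapter understands multi-catalog identifiers.
print(show_tables_sql("nessie.analytics"))
```

This is exactly why the issue proposes first-class catalog support rather than overloading the schema name.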
Spark >= 3.0 can discover tables/views from multiple catalogs, such as a Hive or Glue catalog. This is a prerequisite for enabling so-called three-part identifiers (`catalog.schema.table`) in dbt-spark. To keep PRs concise for the refactoring plus the implementation, we are splitting this over multiple issues.
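As a rough illustration of what a three-part identifier means for relation rendering (class and method names below are hypothetical stand-ins, not dbt's actual `Relation` API), the catalog becomes an optional leading component ahead of schema and table:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class SparkRelationSketch:
    """Illustrative stand-in for a dbt relation with catalog support."""
    schema: str
    identifier: str
    catalog: Optional[str] = None  # the new, optional third part

    def render(self) -> str:
        # Quote each present part with backticks, Spark SQL's
        # identifier quoting, and join with dots.
        parts = [self.catalog, self.schema, self.identifier]
        return ".".join(f"`{p}`" for p in parts if p is not None)

# Two-part identifier (current behaviour):
print(SparkRelationSketch("analytics", "orders").render())
# Three-part identifier once multi-catalog discovery lands:
print(SparkRelationSketch("analytics", "orders", catalog="glue").render())
```

The point of keeping the catalog as a separate, optional field is that existing two-part behaviour stays unchanged when no catalog is configured.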
- Three-part identifiers
- dict (#751)