[ADAP-522] Three-part identifiers (catalog.schema.table) #755

Open · 1 task
dbeatty10 opened this issue May 8, 2023 · 3 comments
Labels: enhancement (New feature or request)

dbeatty10 (Contributor) commented May 8, 2023
Spark >= 3.0 can discover tables/views from multiple catalogs, such as a Hive or Glue catalog. This is a prerequisite to enable so-called three-part identifiers for dbt-spark (catalog.schema.table).
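As background, Spark 3.x registers additional catalogs through `spark.sql.catalog.<name>` session properties; once a catalog is registered, fully qualified three-part names resolve directly. A minimal Spark SQL sketch (the catalog, schema, and table names are illustrative, not from this issue):

```sql
-- Illustrative only: a second catalog is registered via session config,
-- e.g. with Iceberg's catalog plugin (exact class depends on the plugin):
--   spark.sql.catalog.my_catalog      = org.apache.iceberg.spark.SparkCatalog
--   spark.sql.catalog.my_catalog.type = hive

-- With the catalog registered, Spark >= 3.0 resolves three-part identifiers:
SELECT * FROM my_catalog.my_schema.my_table;

-- Without multi-catalog support, only two-part names work, resolved
-- against the current (default) catalog:
SELECT * FROM my_schema.my_table;
```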

To keep PRs concise for any refactoring plus the implementation, we are splitting this over multiple issues.

Tasks (1 task)

- [ ] Three-part identifiers
dbeatty10 added the enhancement label May 8, 2023
github-actions bot changed the title from "Three-part identifiers (catalog.schema.table)" to "[ADAP-522] Three-part identifiers (catalog.schema.table)" May 8, 2023

github-actions bot commented Nov 5, 2023

This issue has been marked as Stale because it has been open for 180 days with no activity. If you would like the issue to remain open, please comment on the issue or else it will be closed in 7 days.

github-actions bot added the Stale label Nov 5, 2023
github-actions bot commented Nov 12, 2023
Although we are closing this issue as stale, it's not gone forever. Issues can be reopened if there is renewed community interest. Just add a comment to notify the maintainers.

github-actions bot closed this as not planned (stale) Nov 12, 2023
mukeshkumarkulmi commented
Hi everyone, can someone suggest how to use an Apache Nessie catalog with the Spark adapter in dbt? If we modify `generate_schema_name` to prepend the Nessie catalog name, the `list_None_*` call fails, because it generates a `show table extended in nessie.<schema_name> like '*'` query to list the tables in the schema. And we cannot use `generate_database_name` for this customization in the Spark adapter, since it fails with a "Cannot set database in spark!" error.

Also, for incremental models, dbt only generates an `insert into` statement if the table already exists when it runs the `show table extended` query; since `show table extended in nessie.<schema_name> like '*'` fails, dbt always creates a new table in the default catalog.

Please suggest.
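For reference, a minimal sketch of the kind of `generate_schema_name` override described above (the macro signature is dbt's standard one; the `nessie.` prefix mirrors this comment's scenario and is exactly what produces the failing metadata query, not a recommended workaround):

```sql
-- macros/generate_schema_name.sql
-- Sketch of the override described above: prefixing the schema with the
-- Nessie catalog name. dbt-spark then interpolates this value into
-- `show table extended in <schema> like '*'`, which fails for a
-- catalog-qualified namespace like `nessie.<schema_name>`.
{% macro generate_schema_name(custom_schema_name, node) -%}
    {%- if custom_schema_name is none -%}
        nessie.{{ target.schema }}
    {%- else -%}
        nessie.{{ custom_schema_name | trim }}
    {%- endif -%}
{%- endmacro %}
```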

dbeatty10 reopened this May 6, 2024
dbeatty10 removed the Stale label May 6, 2024