Open up catalog impl #127
Conversation
Slightly worried about merge conflicts when pulling in future changes from Apache master. The plan when that happens is to just fix them, I suppose?
val SESSION_STATE_IMPLEMENTATION = buildStaticConf("spark.sql.sessionStateImplementation")
  .internal()
  .stringConf
  .createWithDefault("in-memory")
Should we make this default to the value of spark.sql.catalogImplementation? Otherwise I think we break any consumers that have modified that value with this change.

This is rather simple to manage. The more annoying part is that this isn't stable yet, and downstream we will have to handle that.
Good to merge when build passes
@@ -1036,7 +1036,7 @@ object StaticSQLConf {
   val SESSION_STATE_IMPLEMENTATION = buildStaticConf("spark.sql.sessionStateImplementation")
     .internal()
     .stringConf
-    .createWithDefault("in-memory")
+    .createWithDefault(CATALOG_IMPLEMENTATION.defaultValueString)
I meant to make the actual value of CATALOG_IMPLEMENTATION (not its default value) become the default value for SESSION_STATE_IMPLEMENTATION. But I think it's unlikely anyone was setting spark.sql.catalogImplementation directly rather than using the enableHiveSupport() that you already changed.
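For context, a minimal sketch of the equivalence mentioned above, assuming stock Spark behavior: enableHiveSupport() is shorthand for setting spark.sql.catalogImplementation, which is why users are more likely to go through the helper than to set the config directly.

```scala
import org.apache.spark.sql.SparkSession

// These two builders are equivalent in stock Spark:
// enableHiveSupport() simply sets spark.sql.catalogImplementation to "hive".
val viaHelper = SparkSession.builder()
  .master("local[*]")
  .enableHiveSupport()

val viaConfig = SparkSession.builder()
  .master("local[*]")
  .config("spark.sql.catalogImplementation", "hive")
```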
Force-pushed from c04973c to dd08174
This adds a config that allows us to inject a custom session builder. Internally we use it to build SparkSessions that are highly configured beyond what Spark's built-in configs allow. Most importantly, that includes building and registering our own session catalog (v1) implementation with the SparkSession. You can find how we use this config here [1] and our own SessionStateBuilder here [2]. [1]: https://pl.ntr/1UU [2]: https://pl.ntr/1UT Co-authored-by: Robert Kruszewski <[email protected]> Co-authored-by: Josh Casale <[email protected]> Co-authored-by: Will Raschkowski <[email protected]>
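The config described in the commit message might be wired up roughly like this. This is a sketch, not the fork's actual API: the class name com.example.CustomSessionStateBuilder is made up, and the real builder would extend Spark's internal BaseSessionStateBuilder and override its catalog to install the custom v1 session catalog.

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical usage: point the forked spark.sql.sessionStateImplementation
// config at a custom session state builder class so the session is created
// with a custom session catalog. The class name below is illustrative only.
val spark = SparkSession.builder()
  .master("local[*]")
  .config("spark.sql.sessionStateImplementation",
          "com.example.CustomSessionStateBuilder")
  .getOrCreate()
```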
Would need to absorb upstream breaks as they come.