[SPARK-12579][SQL] Force user-specified JDBC driver to take precedence #10519
Conversation
Test build #48446 has finished for PR 10519 at commit
I'd appreciate any feedback on how we can/should test this change and prevent this behavior from regressing in the future.
Jenkins, retest this please.
Test build #48647 has finished for PR 10519 at commit
  DriverManager.getDriver(url).getClass.getCanonicalName
}
() => {
  userSpecifiedDriverClass.foreach(DriverRegistry.register)
This is the one that registers the right driver on the executor side, right?
Yep, that's right: this function gets shipped to executors, where it's called to create connections.
Looks good to me. I just have a quick clarification question.
() => {
  userSpecifiedDriverClass.foreach(DriverRegistry.register)
  val driver: Driver = DriverManager.getDrivers.asScala.collectFirst {
    case d: DriverWrapper if d.wrapped.getClass.getCanonicalName == driverClass => d
This is the only real bit of trickiness here, and it was the part that was missing from my spark-redshift patch.
LGTM
Merging to master.
Also merging to branch 1.6.
Spark SQL's JDBC data source allows users to specify an explicit JDBC driver to load (using the `driver` argument), but in the current code it's possible that the user-specified driver will not be used when it comes time to actually create a JDBC connection. In a nutshell, the problem is that you might have multiple JDBC drivers on the classpath that claim to be able to handle the same subprotocol, so simply registering the user-provided driver class with our `DriverRegistry` and JDBC's `DriverManager` is not sufficient to ensure that it's actually used when creating the JDBC connection.

This patch addresses this issue by first registering the user-specified driver with the DriverManager, then iterating over the driver manager's loaded drivers in order to obtain the correct driver and use it to create a connection (previously, we just called `DriverManager.getConnection()` directly).

If a user did not specify a JDBC driver to use, then we call `DriverManager.getDriver` to figure out the class of the driver to use, then pass that class's name to executors; this guards against corner-case bugs in situations where the driver and executor JVMs might have different sets of JDBC drivers on their classpaths (previously, there was the (rare) potential for `DriverManager.getConnection()` to use different drivers on the driver and executors if the user had not explicitly specified a JDBC driver class and the classpaths were different).

This patch is inspired by a similar patch that I made to the `spark-redshift` library (databricks/spark-redshift#143), which contains its own modified fork of some of Spark's JDBC data source code (for cross-Spark-version compatibility reasons).

Author: Josh Rosen <[email protected]>

Closes #10519 from JoshRosen/jdbc-driver-precedence.

(cherry picked from commit 6c83d93) Signed-off-by: Yin Huai <[email protected]>
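The core of the selection logic described above can be sketched with plain `java.sql` (a minimal sketch, not Spark's actual implementation; `FakeDriver` and the `jdbc:fake:` URL are hypothetical stand-ins used only to make the example self-contained):

```java
import java.sql.Connection;
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.DriverPropertyInfo;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.util.Enumeration;
import java.util.Properties;
import java.util.logging.Logger;

public class DriverPrecedence {
    // Hypothetical driver: stands in for a real JDBC driver on the classpath.
    public static class FakeDriver implements Driver {
        public Connection connect(String url, Properties info) { return null; }
        public boolean acceptsURL(String url) { return url.startsWith("jdbc:fake:"); }
        public DriverPropertyInfo[] getPropertyInfo(String url, Properties info) {
            return new DriverPropertyInfo[0];
        }
        public int getMajorVersion() { return 1; }
        public int getMinorVersion() { return 0; }
        public boolean jdbcCompliant() { return false; }
        public Logger getParentLogger() throws SQLFeatureNotSupportedException {
            throw new SQLFeatureNotSupportedException();
        }
    }

    // Pick the registered driver whose class name matches the user's choice,
    // instead of letting DriverManager.getConnection() pick by URL alone
    // (which could select a different driver claiming the same subprotocol).
    static Driver findDriver(String driverClass) throws SQLException {
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        while (drivers.hasMoreElements()) {
            Driver d = drivers.nextElement();
            if (d.getClass().getCanonicalName().equals(driverClass)) {
                return d;
            }
        }
        throw new SQLException("Did not find registered driver " + driverClass);
    }

    public static void main(String[] args) throws SQLException {
        DriverManager.registerDriver(new FakeDriver());
        Driver d = findDriver(FakeDriver.class.getCanonicalName());
        System.out.println(d.getClass().getCanonicalName());
        // prints "DriverPrecedence.FakeDriver"
    }
}
```

Once the matching driver is in hand, a connection would be created via `driver.connect(url, properties)` rather than `DriverManager.getConnection(url)`, which is the substance of the change.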
…etConnector() This is a followup to #143 which fixes a corner-case bug in our scanning of registered JDBC drivers: we need to properly handle Spark's `DriverWrapper` drivers, which are used to wrap JDBC drivers in order to make them accessible from the root classloader so that the `DriverManager` can find them. A simpler, reflection-free version of this change was incorporated into apache/spark#10519 Author: Josh Rosen <[email protected]> Closes #147 from JoshRosen/jdbc-driver-precedence-round-2.
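The wrapper corner case can be sketched the same way (again a minimal sketch: this `DriverWrapper` is a hypothetical stand-in for Spark's internal wrapper, which exposes the underlying driver via a `wrapped` field; the real class lives in Spark, not in `java.sql`):

```java
import java.sql.Connection;
import java.sql.Driver;
import java.sql.DriverPropertyInfo;
import java.sql.SQLFeatureNotSupportedException;
import java.util.Properties;
import java.util.logging.Logger;

public class WrapperAwareMatch {
    // Hypothetical stand-in for a real JDBC driver.
    public static class StubDriver implements Driver {
        public Connection connect(String url, Properties info) { return null; }
        public boolean acceptsURL(String url) { return url.startsWith("jdbc:stub:"); }
        public DriverPropertyInfo[] getPropertyInfo(String url, Properties info) {
            return new DriverPropertyInfo[0];
        }
        public int getMajorVersion() { return 1; }
        public int getMinorVersion() { return 0; }
        public boolean jdbcCompliant() { return false; }
        public Logger getParentLogger() throws SQLFeatureNotSupportedException {
            throw new SQLFeatureNotSupportedException();
        }
    }

    // Hypothetical stand-in for Spark's wrapper; it holds the wrapped driver
    // (Driver method bodies are inherited here purely for brevity).
    public static class DriverWrapper extends StubDriver {
        final Driver wrapped;
        DriverWrapper(Driver wrapped) { this.wrapped = wrapped; }
    }

    // When scanning registered drivers, compare against the *wrapped* class
    // name for wrappers; otherwise the user's driver class never matches,
    // because the wrapper's own class name is DriverWrapper, not the driver's.
    static boolean matches(Driver d, String driverClass) {
        if (d instanceof DriverWrapper) {
            return ((DriverWrapper) d).wrapped.getClass()
                .getCanonicalName().equals(driverClass);
        }
        return d.getClass().getCanonicalName().equals(driverClass);
    }

    public static void main(String[] args) {
        String target = StubDriver.class.getCanonicalName();
        Driver wrapper = new DriverWrapper(new StubDriver());
        // A naive class-name comparison misses the wrapped driver:
        System.out.println(wrapper.getClass().getCanonicalName().equals(target));
        // prints "false"
        // The wrapper-aware comparison finds it:
        System.out.println(matches(wrapper, target));
        // prints "true"
    }
}
```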