[SPARK-39773][SQL][DOCS] Update document of JDBC options for `pushDownOffset`

### What changes were proposed in this pull request?
The DS v2 pushdown framework added the new JDBC option `pushDownOffset` for OFFSET pushdown, so sql-data-sources-jdbc.md should be updated to document it.

### Why are the changes needed?
Add documentation for `pushDownOffset`.

### Does this PR introduce _any_ user-facing change?
No. This is a documentation-only update for a new feature.

### How was this patch tested?
N/A

Closes #37186 from beliefer/SPARK-39773.

Authored-by: Jiaan Geng <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>
a0x8o committed Jul 15, 2022
1 parent 2511944 commit 7028144
Showing 2 changed files with 14 additions and 5 deletions.
11 changes: 10 additions & 1 deletion docs/sql-data-sources-jdbc.md
@@ -281,7 +281,16 @@ logging into the data sources.
   <td><code>pushDownLimit</code></td>
   <td><code>false</code></td>
   <td>
-    The option to enable or disable LIMIT push-down into V2 JDBC data source. The LIMIT push-down also includes LIMIT + SORT, a.k.a. the Top N operator. The default value is false, in which case Spark does not push down LIMIT or LIMIT with SORT to the JDBC data source. Otherwise, if set to true, LIMIT or LIMIT with SORT is pushed down to the JDBC data source. If <code>numPartitions</code> is greater than 1, SPARK still applies LIMIT or LIMIT with SORT on the result from the data source even if LIMIT or LIMIT with SORT is pushed down. Otherwise, if LIMIT or LIMIT with SORT is pushed down and <code>numPartitions</code> equals 1, SPARK will not apply LIMIT or LIMIT with SORT on the result from the data source.
+    The option to enable or disable LIMIT push-down into V2 JDBC data source. The LIMIT push-down also includes LIMIT + SORT, a.k.a. the Top N operator. The default value is false, in which case Spark does not push down LIMIT or LIMIT with SORT to the JDBC data source. Otherwise, if set to true, LIMIT or LIMIT with SORT is pushed down to the JDBC data source. If <code>numPartitions</code> is greater than 1, Spark still applies LIMIT or LIMIT with SORT on the result from the data source even if LIMIT or LIMIT with SORT is pushed down. Otherwise, if LIMIT or LIMIT with SORT is pushed down and <code>numPartitions</code> equals 1, Spark will not apply LIMIT or LIMIT with SORT on the result from the data source.
   </td>
   <td>read</td>
 </tr>
+
+<tr>
+  <td><code>pushDownOffset</code></td>
+  <td><code>false</code></td>
+  <td>
+    The option to enable or disable OFFSET push-down into V2 JDBC data source. The default value is false, in which case Spark will not push down OFFSET to the JDBC data source. Otherwise, if set to true, Spark will try to push down OFFSET to the JDBC data source. If <code>pushDownOffset</code> is true and <code>numPartitions</code> is equal to 1, OFFSET will be pushed down to the JDBC data source. Otherwise, OFFSET will not be pushed down and Spark still applies OFFSET on the result from the data source.
+  </td>
+  <td>read</td>
+</tr>
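
For context, here is a minimal read-path sketch showing how the two options documented above could be supplied together. The SparkSession (`spark`), JDBC URL, and table name are hypothetical placeholders, not part of this commit:

```scala
// Hypothetical usage sketch: `spark` is an existing SparkSession; the URL and
// table name are placeholders. With numPartitions set to 1, LIMIT (with
// optional SORT) and OFFSET are both eligible for pushdown per the docs above.
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:postgresql://localhost:5432/testdb")
  .option("dbtable", "employees")
  .option("pushDownLimit", "true")
  .option("pushDownOffset", "true")
  .option("numPartitions", "1")
  .load()
```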
8 changes: 4 additions & 4 deletions sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
@@ -1114,10 +1114,10 @@ object JdbcUtils extends Logging with SQLConfHelper {
    */
   def processIndexProperties(
       properties: util.Map[String, String],
-      catalogName: String): (String, Array[String]) = {
+      dialectName: String): (String, Array[String]) = {
     var indexType = ""
     val indexPropertyList: ArrayBuffer[String] = ArrayBuffer[String]()
-    val supportedIndexTypeList = getSupportedIndexTypeList(catalogName)
+    val supportedIndexTypeList = getSupportedIndexTypeList(dialectName)
 
     if (!properties.isEmpty) {
       properties.asScala.foreach { case (k, v) =>
@@ -1147,8 +1147,8 @@ object JdbcUtils extends Logging with SQLConfHelper {
     false
   }
 
-  def getSupportedIndexTypeList(catalogName: String): Array[String] = {
-    catalogName match {
+  def getSupportedIndexTypeList(dialectName: String): Array[String] = {
+    dialectName match {
       case "mysql" => Array("BTREE", "HASH")
       case "postgresql" => Array("BTREE", "HASH", "BRIN")
       case _ => Array.empty
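
As an illustration, the renamed helper maps a JDBC dialect name to the index types that dialect supports; the return values below are taken directly from the match above ("oracle" stands in for any dialect not listed):

```scala
// Illustrative calls to the renamed helper.
JdbcUtils.getSupportedIndexTypeList("mysql")      // Array("BTREE", "HASH")
JdbcUtils.getSupportedIndexTypeList("postgresql") // Array("BTREE", "HASH", "BRIN")
JdbcUtils.getSupportedIndexTypeList("oracle")     // Array.empty
```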
