[SPARK-48706][PYTHON] Python UDF in higher order functions should not throw internal error

### What changes were proposed in this pull request?

This PR fixes the error messages and classes when Python UDFs are used in higher order functions.

### Why are the changes needed?

To surface proper user-facing exceptions with error classes instead of an internal error.

### Does this PR introduce _any_ user-facing change?

Yes. Previously, using a Python UDF inside a higher order function threw an internal error. For example:

```python
from pyspark.sql.functions import transform, udf, col, array
spark.range(1).select(transform(array("id"), lambda x: udf(lambda y: y)(x))).collect()
```

Before:

```
py4j.protocol.Py4JJavaError: An error occurred while calling o74.collectToPython.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 15 in stage 0.0 failed 1 times, most recent failure: Lost task 15.0 in stage 0.0 (TID 15) (ip-192-168-123-103.ap-northeast-2.compute.internal executor driver): org.apache.spark.SparkException: [INTERNAL_ERROR] Cannot evaluate expression: <lambda>(lambda x_0#3L)#2 SQLSTATE: XX000
	at org.apache.spark.SparkException$.internalError(SparkException.scala:92)
	at org.apache.spark.SparkException$.internalError(SparkException.scala:96)
```

After:

```
pyspark.errors.exceptions.captured.AnalysisException: [INVALID_LAMBDA_FUNCTION_CALL.UNEVALUABLE] Invalid lambda function call. Python UDFs should be used in a lambda function at a higher order function. However, "<lambda>(lambda x_0#3L)" was a Python UDF. SQLSTATE: 42K0D;
Project [transform(array(id#0L), lambdafunction(<lambda>(lambda x_0#3L)#2, lambda x_0#3L, false)) AS transform(array(id), lambdafunction(<lambda>(lambda x_0#3L), namedlambdavariable()))#4]
+- Range (0, 1, step=1, splits=Some(16))
```
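
For reference, here is a minimal sketch of how user code can inspect the surfaced error programmatically after this change (assuming a running `spark` session; the `double` UDF name is purely illustrative):

```python
from pyspark.errors import AnalysisException
from pyspark.sql.functions import array, transform, udf

# Any Python UDF invoked inside the higher-order-function lambda triggers the check.
double = udf(lambda y: y * 2, "bigint")

try:
    spark.range(1).select(transform(array("id"), lambda x: double(x))).collect()
except AnalysisException as e:
    # An error class and SQLSTATE are exposed instead of [INTERNAL_ERROR].
    print(e.getErrorClass())
    print(e.getSqlState())
```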

### How was this patch tested?

A unit test was added to the Scala `PythonUDFSuite` (see the diff below).
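
For illustration only, an equivalent Python-side check might look like the sketch below; the test class name and the `local[1]` session setup are assumptions, not part of this patch:

```python
import unittest

from pyspark.errors import AnalysisException
from pyspark.sql import SparkSession
from pyspark.sql.functions import array, transform, udf


class PythonUDFInHigherOrderFunctionTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.spark = SparkSession.builder.master("local[1]").getOrCreate()

    @classmethod
    def tearDownClass(cls):
        cls.spark.stop()

    def test_udf_in_higher_order_function(self):
        my_udf = udf(lambda y: y)  # illustrative Python UDF
        with self.assertRaises(AnalysisException) as ctx:
            self.spark.range(1).select(
                transform(array("id"), lambda x: my_udf(x))
            ).collect()
        # An error class should be surfaced instead of [INTERNAL_ERROR].
        self.assertIsNotNone(ctx.exception.getErrorClass())
```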

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes apache#47079 from HyukjinKwon/SPARK-48706.

Authored-by: Hyukjin Kwon <[email protected]>
Signed-off-by: Kent Yao <[email protected]>
HyukjinKwon authored and yaooqinn committed Jun 26, 2024
1 parent 169346c commit 07cbba6
Showing 3 changed files with 27 additions and 2 deletions.
5 changes: 5 additions & 0 deletions common/utils/src/main/resources/error/error-conditions.json
@@ -4482,6 +4482,11 @@
"INSERT INTO <tableName> with IF NOT EXISTS in the PARTITION spec."
]
},
"LAMBDA_FUNCTION_WITH_PYTHON_UDF" : {
"message" : [
"Lambda function with Python UDF <funcName> in a higher order function."
]
},
"LATERAL_COLUMN_ALIAS_IN_AGGREGATE_FUNC" : {
"message" : [
"Referencing a lateral column alias <lca> in the aggregate function <aggFunc>."
8 changes: 8 additions & 0 deletions sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala
@@ -254,6 +254,14 @@ trait CheckAnalysis extends PredicateHelper with LookupCatalog with QueryErrorsB
hof.invalidFormat(checkRes)
}

+    case hof: HigherOrderFunction
+        if hof.resolved && hof.functions
+          .exists(_.exists(_.isInstanceOf[PythonUDF])) =>
+      val u = hof.functions.flatMap(_.find(_.isInstanceOf[PythonUDF])).head
+      hof.failAnalysis(
+        errorClass = "UNSUPPORTED_FEATURE.LAMBDA_FUNCTION_WITH_PYTHON_UDF",
+        messageParameters = Map("funcName" -> toSQLExpr(u)))

// If an attribute can't be resolved as a map key of string type, either the key should be
// surrounded with single quotes, or there is a typo in the attribute name.
case GetMapValue(map, key: Attribute) if isMapWithStringKey(map) && !key.resolved =>
16 changes: 14 additions & 2 deletions sql/core/src/test/scala/org/apache/spark/sql/execution/python/PythonUDFSuite.scala
@@ -17,8 +17,8 @@

package org.apache.spark.sql.execution.python

-import org.apache.spark.sql.{IntegratedUDFTestUtils, QueryTest}
-import org.apache.spark.sql.functions.count
+import org.apache.spark.sql.{AnalysisException, IntegratedUDFTestUtils, QueryTest}
+import org.apache.spark.sql.functions.{array, count, transform}
import org.apache.spark.sql.test.SharedSparkSession
import org.apache.spark.sql.types.LongType

@@ -112,4 +112,16 @@ class PythonUDFSuite extends QueryTest with SharedSparkSession {
val pandasTestUDF = TestGroupedAggPandasUDF(name = udfName)
assert(df.agg(pandasTestUDF(df("id"))).schema.fieldNames.exists(_.startsWith(udfName)))
}

test("SPARK-48706: Negative test case for Python UDF in higher order functions") {
assume(shouldTestPythonUDFs)
checkError(
exception = intercept[AnalysisException] {
spark.range(1).select(transform(array("id"), x => pythonTestUDF(x))).collect()
},
errorClass = "UNSUPPORTED_FEATURE.LAMBDA_FUNCTION_WITH_PYTHON_UDF",
parameters = Map("funcName" -> "\"pyUDF(namedlambdavariable())\""),
context = ExpectedContext(
"transform", s".*${this.getClass.getSimpleName}.*"))
}
}
