Commit 6c792a7
[SPARK-31234][SQL][FOLLOW-UP] ResetCommand should not affect static SQL Configuration
### What changes were proposed in this pull request?

This PR is the follow-up PR of #28003:

- add a migration guide
- add an end-to-end test case

### Why are the changes needed?

The original PR made a major behavior change to the user-facing `RESET` command.

### Does this PR introduce any user-facing change?

No.

### How was this patch tested?

Added a new end-to-end test.

Closes #28265 from gatorsmile/spark-31234followup.

Authored-by: gatorsmile <[email protected]>
Signed-off-by: gatorsmile <[email protected]>
1 parent: 44d370d

File tree

6 files changed: +25 −5 lines changed

docs/core-migration-guide.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -25,7 +25,7 @@ license: |
 ## Upgrading from Core 2.4 to 3.0
 
 - The `org.apache.spark.ExecutorPlugin` interface and related configuration has been replaced with
-  `org.apache.spark.plugin.SparkPlugin`, which adds new functionality. Plugins using the old
+  `org.apache.spark.api.plugin.SparkPlugin`, which adds new functionality. Plugins using the old
   interface must be modified to extend the new interfaces. Check the
   [Monitoring](monitoring.html) guide for more details.
```

docs/sql-migration-guide.md

Lines changed: 4 additions & 0 deletions

```diff
@@ -216,6 +216,10 @@ license: |
 
 * The decimal string representation can be different between Hive 1.2 and Hive 2.3 when using `TRANSFORM` operator in SQL for script transformation, which depends on hive's behavior. In Hive 1.2, the string representation omits trailing zeroes. But in Hive 2.3, it is always padded to 18 digits with trailing zeroes if necessary.
 
+## Upgrading from Spark SQL 2.4.5 to 2.4.6
+
+  - In Spark 2.4.6, the `RESET` command does not reset the static SQL configuration values to the default. It only clears the runtime SQL configuration values.
+
 ## Upgrading from Spark SQL 2.4.4 to 2.4.5
 
   - Since Spark 2.4.5, `TRUNCATE TABLE` command tries to set back original permission and ACLs during re-creating the table/partition paths. To restore the behaviour of earlier versions, set `spark.sql.truncateTable.ignorePermissionAcl.enabled` to `true`.
```
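The semantics in the new migration note can be pictured as two configuration layers: static confs are fixed when the session is created, while runtime confs can be set and later cleared by `RESET`. The following is a minimal, hypothetical Python model of that layering (class and key names are illustrative; this is not Spark's actual implementation):

```python
class ConfModel:
    """Toy two-layer configuration model: static confs are fixed at session
    creation, runtime confs can be SET and are restored to defaults by RESET."""

    def __init__(self, static_confs, runtime_defaults):
        self.static = dict(static_confs)        # fixed for the session's lifetime
        self.defaults = dict(runtime_defaults)  # defaults for runtime confs
        self.runtime = dict(runtime_defaults)   # current runtime values

    def set(self, key, value):
        if key in self.static:
            raise ValueError(f"cannot modify static config: {key}")
        self.runtime[key] = value

    def get(self, key):
        return self.static.get(key, self.runtime.get(key))

    def reset(self):
        # RESET only restores runtime confs; static confs survive untouched.
        self.runtime = dict(self.defaults)


conf = ConfModel(
    static_confs={"spark.sql.globalTempDatabase": "global_temp"},
    runtime_defaults={"spark.sql.shuffle.partitions": "200"},
)
conf.set("spark.sql.shuffle.partitions", "10")
assert conf.get("spark.sql.shuffle.partitions") == "10"
conf.reset()
assert conf.get("spark.sql.shuffle.partitions") == "200"          # restored
assert conf.get("spark.sql.globalTempDatabase") == "global_temp"  # untouched
```

Before this change, a `RESET` in the real implementation also wiped the static layer back to hard-coded defaults; the fix keeps the session-creation values intact, which is exactly what the end-to-end test below asserts.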

sql/catalyst/src/main/scala/org/apache/spark/sql/internal/StaticSQLConf.scala

Lines changed: 3 additions & 0 deletions

```diff
@@ -47,6 +47,9 @@ object StaticSQLConf {
     .internal()
     .version("2.1.0")
     .stringConf
+    // System preserved database should not exists in metastore. However it's hard to guarantee it
+    // for every session, because case-sensitivity differs. Here we always lowercase it to make our
+    // life easier.
     .transform(_.toLowerCase(Locale.ROOT))
     .createWithDefault("global_temp")
 
```
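The `.transform(_.toLowerCase(Locale.ROOT))` step above normalizes every value the user supplies for `GLOBAL_TEMP_DATABASE`, including the default. A rough, purely illustrative Python sketch of such a transforming config entry (names are invented, not Spark's API):

```python
def make_conf_entry(default, transform=lambda v: v):
    """Toy config entry builder: every stored value, including the default,
    passes through the transform (mirroring Spark's `.transform(...)` step)."""
    box = {"value": transform(default)}

    def set_value(v):
        box["value"] = transform(v)

    def get_value():
        return box["value"]

    return set_value, get_value


# Lowercasing makes lookups of the global temp database effectively
# case-insensitive, which is why the end-to-end test below expects
# "globaltempdb-spark-31234" after configuring "globalTempDB-SPARK-31234".
set_db, get_db = make_conf_entry("global_temp", transform=str.lower)
assert get_db() == "global_temp"
set_db("globalTempDB-SPARK-31234")
assert get_db() == "globaltempdb-spark-31234"
```

Moving the lowercasing into the config entry itself (rather than doing it at the lookup site in `SharedState`, as the next file's deletion shows) means every reader of the conf sees the normalized value.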

sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala

Lines changed: 0 additions & 3 deletions

```diff
@@ -153,9 +153,6 @@ private[sql] class SharedState(
    * A manager for global temporary views.
    */
   lazy val globalTempViewManager: GlobalTempViewManager = {
-    // System preserved database should not exists in metastore. However it's hard to guarantee it
-    // for every session, because case-sensitivity differs. Here we always lowercase it to make our
-    // life easier.
     val globalTempDB = conf.get(GLOBAL_TEMP_DATABASE)
     if (externalCatalog.databaseExists(globalTempDB)) {
       throw new SparkException(
```

sql/core/src/test/scala/org/apache/spark/sql/SparkSessionBuilderSuite.scala

Lines changed: 16 additions & 0 deletions

```diff
@@ -22,6 +22,7 @@ import org.scalatest.BeforeAndAfterEach
 import org.apache.spark.{SparkConf, SparkContext, SparkFunSuite}
 import org.apache.spark.internal.config.UI.UI_ENABLED
 import org.apache.spark.sql.internal.SQLConf
+import org.apache.spark.sql.internal.StaticSQLConf.GLOBAL_TEMP_DATABASE
 
 /**
  * Test cases for the builder pattern of [[SparkSession]].
@@ -152,4 +153,19 @@ class SparkSessionBuilderSuite extends SparkFunSuite with BeforeAndAfterEach {
       session.sparkContext.hadoopConfiguration.unset(mySpecialKey)
     }
   }
+
+  test("SPARK-31234: RESET command will not change static sql configs and " +
+    "spark context conf values in SessionState") {
+    val session = SparkSession.builder()
+      .master("local")
+      .config(GLOBAL_TEMP_DATABASE.key, value = "globalTempDB-SPARK-31234")
+      .config("spark.app.name", "test-app-SPARK-31234")
+      .getOrCreate()
+
+    assert(session.sessionState.conf.getConfString("spark.app.name") === "test-app-SPARK-31234")
+    assert(session.sessionState.conf.getConf(GLOBAL_TEMP_DATABASE) === "globaltempdb-spark-31234")
+    session.sql("RESET")
+    assert(session.sessionState.conf.getConfString("spark.app.name") === "test-app-SPARK-31234")
+    assert(session.sessionState.conf.getConf(GLOBAL_TEMP_DATABASE) === "globaltempdb-spark-31234")
+  }
 }
```

sql/core/src/test/scala/org/apache/spark/sql/internal/SQLConfSuite.scala

Lines changed: 1 addition & 1 deletion

```diff
@@ -116,7 +116,7 @@ class SQLConfSuite extends QueryTest with SharedSparkSession {
     }
   }
 
-  test("reset will not change static sql configs and spark core configs") {
+  test("SPARK-31234: reset will not change static sql configs and spark core configs") {
     val conf = spark.sparkContext.getConf.getAll.toMap
     val appName = conf.get("spark.app.name")
     val driverHost = conf.get("spark.driver.host")
```
