Commit b330967

gatorsmile authored, cloud-fan committed
[SPARK-20667][SQL][TESTS] Cleanup the cataloged metadata after completing the package of sql/core and sql/hive
## What changes were proposed in this pull request? So far, we do not drop all the cataloged objects after each package. Sometimes, we might hit strange test case errors because the previous test suite did not drop the cataloged/temporary objects (tables/functions/database). At least, we can first clean up the environment when completing the package of `sql/core` and `sql/hive`. ## How was this patch tested? N/A Author: Xiao Li <[email protected]> Closes #17908 from gatorsmile/reset. (cherry picked from commit 0d00c76) Signed-off-by: Wenchen Fan <[email protected]>
1 parent 4b7aa0b commit b330967

File tree

3 files changed: +4 −7 lines changed


sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala

Lines changed: 2 additions & 1 deletion
@@ -1251,9 +1251,10 @@ class SessionCatalog(
         dropTempFunction(func.funcName, ignoreIfNotExists = false)
       }
     }
-    tempTables.clear()
+    clearTempTables()
     globalTempViewManager.clear()
     functionRegistry.clear()
+    tableRelationCache.invalidateAll()
     // restore built-in functions
     FunctionRegistry.builtin.listFunction().foreach { f =>
       val expressionInfo = FunctionRegistry.builtin.lookupFunction(f)

sql/core/src/test/scala/org/apache/spark/sql/test/SharedSQLContext.scala

Lines changed: 1 addition & 0 deletions
@@ -74,6 +74,7 @@ trait SharedSQLContext extends SQLTestUtils with BeforeAndAfterEach with Eventua
   protected override def afterAll(): Unit = {
     super.afterAll()
     if (_spark != null) {
+      _spark.sessionState.catalog.reset()
       _spark.stop()
       _spark = null
     }

sql/hive/src/main/scala/org/apache/spark/sql/hive/test/TestHive.scala

Lines changed: 1 addition & 6 deletions
@@ -488,14 +488,9 @@ private[hive] class TestHiveSparkSession(

     sharedState.cacheManager.clearCache()
     loadedTables.clear()
-    sessionState.catalog.clearTempTables()
-    sessionState.catalog.tableRelationCache.invalidateAll()
-
+    sessionState.catalog.reset()
     metadataHive.reset()

-    FunctionRegistry.getFunctionNames.asScala.filterNot(originalUDFs.contains(_)).
-      foreach { udfName => FunctionRegistry.unregisterTemporaryUDF(udfName) }
-
     // HDFS root scratch dir requires the write all (733) permission. For each connecting user,
     // an HDFS scratch dir: ${hive.exec.scratchdir}/<username> is created, with
     // ${hive.scratch.dir.permission}. To resolve the permission issue, the simplest way is to

0 commit comments
