
Commit 03960fa

dongjoon-hyun authored and gatorsmile committed
[MINOR][SQL] Fix an error message about inserting into bucketed tables
## What changes were proposed in this pull request?

This replaces `Sparkcurrently` with `Spark currently` in the following error message.

```scala
scala> sql("insert into t2 select * from v1")
org.apache.spark.sql.AnalysisException: Output Hive table `default`.`t2` is bucketed but Sparkcurrently does NOT populate bucketed ...
```

## How was this patch tested?

Manual.

Author: Dongjoon Hyun <[email protected]>

Closes #20617 from dongjoon-hyun/SPARK-ERROR-MSG.

(cherry picked from commit 6968c3c)

Signed-off-by: gatorsmile <[email protected]>
1 parent bae4449 commit 03960fa

File tree

1 file changed: +1 -1 lines changed


sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala

Lines changed: 1 addition & 1 deletion
```diff
@@ -172,7 +172,7 @@ case class InsertIntoHiveTable(
     val enforceBucketingConfig = "hive.enforce.bucketing"
     val enforceSortingConfig = "hive.enforce.sorting"

-    val message = s"Output Hive table ${table.identifier} is bucketed but Spark" +
+    val message = s"Output Hive table ${table.identifier} is bucketed but Spark " +
       "currently does NOT populate bucketed output which is compatible with Hive."

     if (hadoopConf.get(enforceBucketingConfig, "true").toBoolean ||
```
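For context, the bug comes from splitting the message across concatenated string literals: the line break itself contributes no whitespace, so the first literal must end with a trailing space. Below is a minimal, self-contained Scala sketch of the pitfall; the table name is just the one from the example error message above, not taken from the patch.

```scala
// Concatenating adjacent string literals adds no whitespace of its own,
// so omitting the trailing space in the first literal fuses the two words.
object MessageSpaceDemo extends App {
  val broken =
    "Output Hive table `default`.`t2` is bucketed but Spark" +
      "currently does NOT populate bucketed output which is compatible with Hive."

  val fixed =
    "Output Hive table `default`.`t2` is bucketed but Spark " +
      "currently does NOT populate bucketed output which is compatible with Hive."

  println(broken.contains("Sparkcurrently"))   // true  (the reported typo)
  println(fixed.contains("Spark currently"))   // true  (after the fix)
}
```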
