
Commit 69459e7

Applying review comments

1 parent 7ca4c6d


4 files changed, +6 -6 lines changed


core/src/main/scala/org/apache/spark/internal/config/package.scala (3 additions, 3 deletions)

@@ -1097,9 +1097,9 @@ package object config {

   private[spark] val SHUFFLE_HOST_LOCAL_DISK_READING_ENABLED =
     ConfigBuilder("spark.shuffle.readHostLocalDisk.enabled")
-      .doc("If enabled (and `spark.shuffle.useOldFetchProtocol` is disabled), shuffle blocks " +
-        "requested from those block managers which are running on the same host are read from " +
-        "the disk directly instead of being fetched as remote blocks over the network.")
+      .doc(s"If enabled (and `${SHUFFLE_USE_OLD_FETCH_PROTOCOL.key}` is disabled), shuffle " +
+        "blocks requested from those block managers which are running on the same host are read " +
+        "from the disk directly instead of being fetched as remote blocks over the network.")
       .booleanConf
       .createWithDefault(true)
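The substantive change here is that the doc string now derives the cross-referenced config name from `SHUFFLE_USE_OLD_FETCH_PROTOCOL.key` instead of hard-coding `spark.shuffle.useOldFetchProtocol`, so the documentation cannot drift if the key is ever renamed. A minimal self-contained sketch of the same pattern, using a plain constant rather than Spark's internal `ConfigEntry` (all names below are illustrative):

    object ShuffleDocDemo extends App {
      // Hypothetical stand-in for SHUFFLE_USE_OLD_FETCH_PROTOCOL.key.
      val useOldFetchProtocolKey = "spark.shuffle.useOldFetchProtocol"

      // Interpolating the constant keeps the doc text and the real key in sync.
      val doc: String =
        s"If enabled (and `$useOldFetchProtocolKey` is disabled), shuffle blocks " +
          "on the same host are read from disk directly."

      println(doc)
    }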

core/src/main/scala/org/apache/spark/storage/BlockManager.scala (1 addition, 1 deletion; the changed line differs only in indentation)

@@ -479,7 +479,7 @@ private[spark] class BlockManager(

     hostLocalDirManager =
       if (conf.get(config.SHUFFLE_HOST_LOCAL_DISK_READING_ENABLED) &&
-        !conf.get(config.SHUFFLE_USE_OLD_FETCH_PROTOCOL)) {
+          !conf.get(config.SHUFFLE_USE_OLD_FETCH_PROTOCOL)) {
         externalBlockStoreClient.map { blockStoreClient =>
           new HostLocalDirManager(
             futureExecutionContext,
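This hunk only fixes indentation, but the condition itself is the interesting part: `hostLocalDirManager` is created only when host-local disk reading is enabled and the old fetch protocol is not in use, presumably because the old protocol predates host-local directory lookups. A self-contained sketch of that gating, with hypothetical stand-ins for Spark's internal types:

    // Hypothetical stand-in for Spark's HostLocalDirManager.
    case class HostLocalDirManagerStub(clientName: String)

    def initHostLocalDirManager(
        readHostLocalDisk: Boolean,   // spark.shuffle.readHostLocalDisk.enabled
        useOldFetchProtocol: Boolean, // spark.shuffle.useOldFetchProtocol
        externalClient: Option[String]): Option[HostLocalDirManagerStub] =
      // Mirror of the BlockManager condition above: both flags must line up.
      if (readHostLocalDisk && !useOldFetchProtocol) {
        externalClient.map(c => HostLocalDirManagerStub(c))
      } else {
        None
      }

    // The manager is never created while the old protocol is forced on.
    assert(initHostLocalDirManager(true, true, Some("client")).isEmpty)
    assert(initHostLocalDirManager(true, false, Some("client")).isDefined)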

docs/core-migration-guide.md (2 additions, 0 deletions)

@@ -36,3 +36,5 @@ license: |
 - Deprecated method `AccumulableInfo.apply` have been removed because creating `AccumulableInfo` is disallowed.

 - Event log file will be written as UTF-8 encoding, and Spark History Server will replay event log files as UTF-8 encoding. Previously Spark writes event log file as default charset of driver JVM process, so Spark History Server of Spark 2.x is needed to read the old event log files in case of incompatible encoding.
+
+- A new protocol for fetching shuffle blocks is used. It's recommended that external shuffle services be upgraded when running Spark 3.0 apps. Old external shuffle services can still be used by setting the configuration `spark.shuffle.useOldFetchProtocol` to `true`. Otherwise, Spark may run into errors with messages like `IllegalArgumentException: Unexpected message type: <number>`.
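As a usage note (not part of the diff), opting back into the old protocol from application code could look like the following sketch; `spark.shuffle.useOldFetchProtocol` is the real key documented above, while the app name and session setup are illustrative:

    import org.apache.spark.SparkConf
    import org.apache.spark.sql.SparkSession

    // Fall back to the pre-3.0 shuffle fetch protocol until the external
    // shuffle services in the cluster have been upgraded.
    val conf = new SparkConf()
      .setAppName("old-fetch-protocol-demo") // illustrative name
      .set("spark.shuffle.useOldFetchProtocol", "true")

    val spark = SparkSession.builder().config(conf).getOrCreate()

The same flag can also be passed at submit time via `--conf spark.shuffle.useOldFetchProtocol=true`.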

docs/sql-migration-guide.md (0 additions, 2 deletions)

@@ -97,8 +97,6 @@ license: |

 - Since Spark 3.0, when Avro files are written with user provided non-nullable schema, even the catalyst schema is nullable, Spark is still able to write the files. However, Spark will throw runtime NPE if any of the records contains null.

-- Since Spark 3.0, we use a new protocol for fetching shuffle blocks, for external shuffle service users, we need to upgrade the server correspondingly. Otherwise, we'll get the error message `IllegalArgumentException: Unexpected message type: <number>`. If it is hard to upgrade the shuffle service right now, you can still use the old protocol by setting `spark.shuffle.useOldFetchProtocol` to `true`.
-
 - Since Spark 3.0, a higher-order function `exists` follows the three-valued boolean logic, i.e., if the `predicate` returns any `null`s and no `true` is obtained, then `exists` will return `null` instead of `false`. For example, `exists(array(1, null, 3), x -> x % 2 == 0)` will be `null`. The previous behaviour can be restored by setting `spark.sql.legacy.arrayExistsFollowsThreeValuedLogic` to `false`.

 - Since Spark 3.0, if files or subdirectories disappear during recursive directory listing (i.e. they appear in an intermediate listing but then cannot be read or listed during later phases of the recursive directory listing, due to either concurrent file deletions or object store consistency issues) then the listing will fail with an exception unless `spark.sql.files.ignoreMissingFiles` is `true` (default `false`). In previous versions, these missing files or subdirectories would be ignored. Note that this change of behavior only applies during initial table file listing (or during `REFRESH TABLE`), not during query execution: the net change is that `spark.sql.files.ignoreMissingFiles` is now obeyed during table file listing / query planning, not only at query execution time.
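The `exists` entry in the hunk above can be checked interactively; a short spark-shell sketch (assuming a Spark 3.0 session named `spark`):

    // Three-valued logic: one null element and no true match => NULL.
    spark.sql("SELECT exists(array(1, null, 3), x -> x % 2 == 0)").show()
    // Returns NULL under the Spark 3.0 default; returns false after
    // SET spark.sql.legacy.arrayExistsFollowsThreeValuedLogic=false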
