Commit 161c4e6
committed: address comments
1 parent 295d163 commit 161c4e6

2 files changed, 5 insertions(+), 16 deletions(-)


core/src/main/scala/org/apache/spark/internal/config/package.scala

Lines changed: 4 additions & 2 deletions

@@ -634,13 +634,15 @@ package object config {
       "in bytes. This is to avoid a giant request takes too much memory. Note this " +
       "configuration will affect both shuffle fetch and block manager remote block fetch. " +
       "For users who enabled external shuffle service, this feature can only work when " +
-      "external shuffle service is newer than Spark 2.2.")
+      "external shuffle service is at least 2.3.0.")
       .bytesConf(ByteUnit.BYTE)
       // fetch-to-mem is guaranteed to fail if the message is bigger than 2 GB, so we might
       // as well use fetch-to-disk in that case. The message includes some metadata in addition
       // to the block data itself (in particular UploadBlock has a lot of metadata), so we leave
       // extra room.
-      .checkValue(_ <= Int.MaxValue - 512, "maxRemoteBlockSizeFetchToMem must be less than 2GB.")
+      .checkValue(
+        _ <= Int.MaxValue - 512,
+        "maxRemoteBlockSizeFetchToMem must be less than (Int.MaxValue - 512) bytes.")
       .createWithDefaultString("200m")

     private[spark] val TASK_METRICS_TRACK_UPDATED_BLOCK_STATUSES =
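The revised `checkValue` above bounds the threshold at `Int.MaxValue - 512`: a fetched message must fit in a frame addressed by a signed 32-bit length, and the 512 bytes leave headroom for per-message metadata such as `UploadBlock`. A minimal standalone sketch of that predicate (not Spark code; `MaxFetchToMemCheck` and its members are hypothetical names for illustration):

```scala
object MaxFetchToMemCheck {
  // Largest threshold the diff's checkValue accepts: Int.MaxValue minus
  // 512 bytes of headroom reserved for message metadata.
  val maxAllowed: Long = Int.MaxValue - 512L

  // The validated predicate: the configured threshold, in bytes,
  // must not exceed maxAllowed.
  def isValid(threshold: Long): Boolean = threshold <= maxAllowed

  def main(args: Array[String]): Unit = {
    println(isValid(200L * 1024 * 1024))  // the 200m default passes
    println(isValid(Int.MaxValue.toLong)) // ~2 GiB fails the check
  }
}
```

This mirrors why the error message was reworded: the real limit is `Int.MaxValue - 512` bytes, slightly under 2 GB, so "less than 2GB" was imprecise.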

docs/configuration.md

Lines changed: 1 addition & 14 deletions

@@ -626,19 +626,6 @@ Apart from these, the following properties are also available, and may be useful
     You can mitigate this issue by setting it to a lower value.
   </td>
 </tr>
-<tr>
-  <td><code>spark.maxRemoteBlockSizeFetchToMem</code></td>
-  <td>Int.MaxValue - 512</td>
-  <td>
-    The remote block will be fetched to disk when size of the block is above this threshold in bytes.
-    This is to avoid a giant request that takes too much memory. By default, this is only enabled
-    for blocks > 2GB, as those cannot be fetched directly into memory, no matter what resources are
-    available. But it can be turned down to a much lower value (eg. 200m) to avoid using too much
-    memory on smaller blocks as well. Note this configuration will affect both shuffle fetch
-    and block manager remote block fetch. For users who enabled external shuffle service,
-    this feature can only be used when external shuffle service is newer than Spark 2.2.
-  </td>
-</tr>
 <tr>
   <td><code>spark.shuffle.compress</code></td>
   <td>true</td>
@@ -1527,7 +1514,7 @@ Apart from these, the following properties are also available, and may be useful
     in bytes. This is to avoid a giant request takes too much memory. Note this
     configuration will affect both shuffle fetch and block manager remote block fetch.
     For users who enabled external shuffle service, this feature can only work when
-    external shuffle service is newer than Spark 2.2.
+    external shuffle service is at least 2.3.0.
   </td>
 </tr>
 </table>
