
Conversation

@turboFei
Member

What changes were proposed in this pull request?

We've seen some shuffle data corruption during shuffle read phase.

As described in SPARK-26089, before PR #23453 (proposed by ankuriitg), Spark only checked small shuffle blocks.

PR #23453 made two changes/improvements:

  1. Large blocks are checked up to maxBytesInFlight/3 bytes in the same way as smaller blocks, so if a
    large block is corrupt at the beginning, that block will be re-fetched, and if that also fails,
    a FetchFailureException will be thrown.
  2. If a large block is corrupt beyond maxBytesInFlight/3 bytes, then any IOException thrown while
    reading the stream is converted to a FetchFailureException. This is slightly more aggressive
    than originally intended, but since the consumer of the stream may have already read and processed some records, we can't just re-fetch the block; we need to fail the whole task. We also considered adding a new type of TaskEndReason that would retry the task a couple of times before failing the previous stage, but given the complexity involved in that solution, we decided not to proceed in that direction.

However, I think there still exists some problems with the current shuffle transmitted data verification mechanism:

  • For a large block, only the first maxBytesInFlight/3 bytes are checked when fetching shuffle data, so corruption beyond that point cannot be detected during the fetch phase. This has been described in the previous section.
  • Only compressed or wrapped blocks are checked; I think we should also check the blocks that are not wrapped.

This PR completes the verification mechanism for shuffle transmitted data:

First, CRC32 is chosen as the checksum for verifying shuffle data.

CRC is also used for checksum verification in Hadoop; it is simple and fast.
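As a rough illustration (not code from this PR), computing such a digest needs nothing beyond the JDK's built-in java.util.zip.CRC32:

```java
import java.util.zip.CRC32;

public class Crc32Demo {
    public static void main(String[] args) {
        // CRC32 over an arbitrary byte payload, exactly as the JDK exposes it.
        CRC32 crc = new CRC32();
        byte[] data = "123456789".getBytes();
        crc.update(data, 0, data.length);
        // 0xCBF43926 is the well-known CRC-32 check value for "123456789".
        System.out.println(Long.toHexString(crc.getValue())); // prints "cbf43926"
    }
}
```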

During the shuffle write phase, after completing the partitioned file, we compute the CRC32 value for each partition and write these digests, together with the offsets, into the shuffle index file.

For SortShuffleWriter and the unsafe shuffle writer, there is only one partitioned file per ShuffleMapTask, so computing the digests (one per partition, based on the offsets of the partitioned file) is cheap.

For the bypass shuffle writer, the number of reduce partitions is smaller than bypassMergeThreshold, so the cost of computing the digests is acceptable.
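The write-phase digest computation described above can be sketched as follows. This is a simplified model, not the PR's actual code: the helper name partitionCrcs, the in-memory byte array standing in for the partitioned file, and the sample offsets are all hypothetical.

```java
import java.util.zip.CRC32;

public class PartitionDigests {
    // Given the partitioned file's bytes and the partition offsets recorded in the
    // index file (numPartitions + 1 entries; partition i spans
    // [offsets[i], offsets[i+1])), compute one CRC32 digest per partition.
    static long[] partitionCrcs(byte[] fileBytes, long[] offsets) {
        long[] digests = new long[offsets.length - 1];
        for (int i = 0; i < digests.length; i++) {
            CRC32 crc = new CRC32();
            int start = (int) offsets[i];
            int len = (int) (offsets[i + 1] - offsets[i]);
            crc.update(fileBytes, start, len);
            digests[i] = crc.getValue();
        }
        return digests;
    }

    public static void main(String[] args) {
        // Three partitions: "aaa", "bbbb", "cc" concatenated into one file.
        byte[] data = "aaabbbbcc".getBytes();
        long[] offsets = {0, 3, 7, 9};
        for (long d : partitionCrcs(data, offsets)) {
            System.out.println(Long.toHexString(d));
        }
    }
}
```

Since the offsets already exist in the index file, each partition's digest is a single sequential pass over its bytes, which is why the cost stays low for the sort-based writers.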

During the shuffle read phase, the digest value is passed along with the block data.

We then recompute the digest of the fetched data and compare it with the original digest value.
Recomputing the digest of the fetched data only needs an additional 2048-byte buffer for the CRC32 computation.
After recomputing, we reset the fetched data's input stream: if it supports mark/reset, we simply reset it; otherwise it is a FileSegmentManagedBuffer and we recreate it.
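The read-phase check can be sketched like this. It is a minimal model under assumptions, not the PR's code: the method recomputeDigest is hypothetical, and a real non-resettable stream would be recreated from its FileSegmentManagedBuffer rather than rewound.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.CRC32;

public class DigestCheck {
    // Recompute the CRC32 of a fetched block stream using a small fixed-size
    // scratch buffer, then rewind the stream so the consumer can still read it.
    static long recomputeDigest(InputStream in) throws IOException {
        if (in.markSupported()) {
            in.mark(Integer.MAX_VALUE);
        }
        CRC32 crc = new CRC32();
        byte[] buf = new byte[2048]; // the 2048-byte buffer from the description
        int n;
        while ((n = in.read(buf)) != -1) {
            crc.update(buf, 0, n);
        }
        if (in.markSupported()) {
            in.reset(); // otherwise: recreate the stream from its managed buffer
        }
        return crc.getValue();
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream("123456789".getBytes());
        System.out.println(Long.toHexString(recomputeDigest(in))); // prints "cbf43926"
        System.out.println(in.read() != -1); // prints "true": the stream was reset
    }
}
```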

So I think the verification mechanism proposed here for shuffle transmitted data is efficient and complete.

How was this patch tested?

Unit test.

@turboFei turboFei changed the title [SPARK-27562][Shuffle] Complete the verification mechanism for shuffle transmitted data [WIP][SPARK-27562][Shuffle] Complete the verification mechanism for shuffle transmitted data May 14, 2020
@turboFei turboFei changed the title [WIP][SPARK-27562][Shuffle] Complete the verification mechanism for shuffle transmitted data [SPARK-27562][Shuffle] Complete the verification mechanism for shuffle transmitted data May 14, 2020
@turboFei
Member Author

cc @jerryshao

@cloud-fan
Contributor

ok to test

@SparkQA

SparkQA commented May 14, 2020

Test build #122618 has finished for PR 28525 at commit d13e1a2.

  • This patch fails build dependency tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@yaooqinn
Member

Sounds like a great and very useful feature.

@dongjoon-hyun
Member

FYI, the dependency failure will be fixed by the following.

@SparkQA

SparkQA commented May 15, 2020

Test build #122640 has finished for PR 28525 at commit e330ca1.

  • This patch fails build dependency tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented May 15, 2020

Test build #122641 has finished for PR 28525 at commit bb15a4d.

  • This patch fails due to an unknown error code, -9.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented May 15, 2020

Test build #122659 has finished for PR 28525 at commit b0fdea8.

  • This patch fails build dependency tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented May 15, 2020

Test build #122664 has finished for PR 28525 at commit 7ed97ec.

  • This patch fails build dependency tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented May 15, 2020

Test build #122667 has finished for PR 28525 at commit c893dc8.

  • This patch fails build dependency tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@jerryshao
Contributor

I would suggest describing the spec of the shuffle index file somewhere in the code, and reducing the magic hard-coded numbers scattered everywhere.

@jerryshao
Contributor

The current implementation uses the shuffle index file to store partition digests. I think this: 1) couples two things together and is hard to evolve; 2) makes the logic unintuitive. I would suggest separating the index file from the CRC file, and using a new file to store the shuffle digests.

@turboFei
Member Author

Thanks for the review. I have modified the solution to save the digests into an independent file.

@turboFei turboFei requested a review from jerryshao May 21, 2020 09:56
@SparkQA

SparkQA commented May 21, 2020

Test build #122924 has finished for PR 28525 at commit 73727c4.

  • This patch fails Scala style tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented May 21, 2020

Test build #122925 has finished for PR 28525 at commit 09c498c.

  • This patch fails Scala style tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented May 21, 2020

Test build #122931 has finished for PR 28525 at commit c59fcd6.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@github-actions

We're closing this PR because it hasn't been updated in a while. This isn't a judgement on the merit of the PR in any way. It's just a way of keeping the PR queue manageable.
If you'd like to revive this PR, please reopen it and ask a committer to remove the Stale tag!
