Conversation

@tuckerh99

Improvement on when to close writers. While implementing an S3 storage engine, we noticed that we want to close the writers sooner rather than later, to ensure shuffle files are stored on S3 by the end of a stage.
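
For context, a minimal sketch of the idea, assuming a remote shuffle writer that spills to a local file and then uploads it to S3; the names S3ShuffleMapOutputWriter, s3Upload, and commitAndClose are hypothetical and this is not the actual diff in this PR:

```scala
import java.io.File
import java.nio.file.Files

// Hypothetical writer for illustration only -- not this PR's actual code.
class S3ShuffleMapOutputWriter(shuffleId: Int, mapId: Long, s3Upload: File => Unit) {

  // Map output is staged in a local spill file before being pushed to S3.
  private val localSpill: File =
    Files.createTempFile(s"shuffle-$shuffleId-$mapId-", ".data").toFile

  def write(records: Iterator[Array[Byte]]): Unit = {
    val out = Files.newOutputStream(localSpill.toPath)
    try records.foreach(r => out.write(r)) finally out.close()
  }

  // Close (and upload) as soon as this map task finishes writing, rather than
  // deferring to the end of the stage. Closing early is what guarantees the
  // shuffle file is durable on S3 before the next stage tries to read it.
  def commitAndClose(): Unit = {
    s3Upload(localSpill)
    localSpill.delete()
  }
}
```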

@hiboyang
Contributor

@tuckerh99, thanks for the PR! Curious whether you run your production Spark workloads with shuffle files stored on S3? If so, how is the performance?

@cpd85

cpd85 commented Aug 29, 2022

@hiboyang Hi, we are actively testing this out right now. In the first iteration the performance impact is very high due to the nature of shuffle data: if we wait to merge the shuffle files until the stage has completed, we need to add some delay before starting the next stage while the shuffle data is uploaded to S3.
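
For intuition, a back-of-envelope sketch of where that delay comes from, assuming uploads can overlap with compute when each task closes its writer eagerly; all numbers are illustrative assumptions, not measurements from our testing:

```scala
// Illustrative numbers only -- not measurements from this thread.
object ShuffleUploadDelay {
  def main(args: Array[String]): Unit = {
    val taskComputeSec   = 60.0 // assumed map-task compute time
    val uploadPerTaskSec = 10.0 // assumed S3 upload time for one task's shuffle output
    val tasksPerSlot     = 8    // assumed tasks running back to back on one executor slot

    // Eager close: each task uploads when it finishes, overlapping with the
    // next task's compute, so only the final upload sits on the critical path.
    val eagerSec = tasksPerSlot * taskComputeSec + uploadPerTaskSec

    // Deferred merge at stage end: the shuffle data is uploaded only after the
    // last task, so the next stage waits for all of it.
    val deferredSec = tasksPerSlot * taskComputeSec + tasksPerSlot * uploadPerTaskSec

    println(f"eager per-task upload: $eagerSec%.0f s until the next stage can start")
    println(f"deferred stage merge:  $deferredSec%.0f s until the next stage can start")
  }
}
```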

@leletan

leletan commented Jan 22, 2023

Hi @tuckerh99, how are things going with the later iterations? Any luck?
