[SPARK-26712] Support multi directories for executor shuffle info recovery in yarn shuffle service #23647
Conversation
Can one of the admins verify this patch?
@srowen @vanzin @squito @HyukjinKwon and other potential reviewers, could anybody give some suggestions?
Did you actually run into this problem? It sounds like such an uncommon thing, and Spark should recover even if it happens (stages would be recomputed, etc.). Aside from that, there are quite a few style and functional problems in the code, but there's no point in going through them since I'm not really convinced this helps much.
+1 for ^
@vanzin @HyukjinKwon Even though we now have #14162 and the application-level blacklist, I think this PR still makes sense for long-running applications (for instance, Spark ThriftServer or Spark Streaming applications). I hope my explanation convinces you.
There will certainly be some resource waste, but we have to balance complexity vs. how often the issue would occur and how bad the simpler behavior would be. If you have a bad disk, you're definitely losing some shuffle data. Furthermore, any other shuffleMapStages would also need to know not to write their output to the bad disk. Blacklisting should kick in here, and if it doesn't, we should figure out why. Yes, there will be some waste until that happens, but I think we can live with that.
I'm very torn on this. It makes sense to try to use a better disk, but the NM itself doesn't do that. So if the recovery dir is bust, the NM will be affected regardless of this. It feels to me like enabling the option in SPARK-16505 is the right thing: if your recovery dir is bad, the NM shouldn't be running until that is fixed. But that also assumes the failure is detected during shuffle service initialization, and not later.

If implementing multi-disk support, I'm also not sure how you'd even do it. Opening the DB may or may not work, depending on how bad the disk is. So if it does not work the first time and you write the recovery DB to some other directory, but then the NM crashes (e.g. because of the bad disk) and the next time opening the DB actually works on the first try, you'll end up reading stale data before you realize you're reading from the bad disk. I see you have checks for the last mod time, but even that can cause trouble in a scenario where the failure may or may not happen depending on when you look...

I tend to think that if your recovery disk is bad, that should be treated as a catastrophic failure, and trying to work around it is kinda pointless. What you could do is try to keep running in spite of the bad disk, e.g. by only keeping data in memory. You'd only see problems when the NM is restarted (you'd lose existing state), but at that point Spark's retry mechanism should fix things.
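A minimal sketch of the in-memory fallback idea from the comment above, using hypothetical names and a plain-file stand-in for the recovery store (the actual service keeps this state in a LevelDB file, which this sketch does not model):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of "keep running in spite of the bad disk": try to open the on-disk
// recovery store, and if that fails, keep executor registrations only in
// memory. State is lost if the NM restarts, but the service stays usable.
class ExecutorInfoStore {
    private final Map<String, String> inMemory = new ConcurrentHashMap<>();
    private final Path recoveryDir;   // null when running memory-only

    private ExecutorInfoStore(Path dirOrNull) {
        this.recoveryDir = dirOrNull;
    }

    static ExecutorInfoStore open(String dir) {
        Path p = Paths.get(dir);
        try {
            Files.createDirectories(p);
            return new ExecutorInfoStore(p);           // persistence available
        } catch (IOException e) {
            return new ExecutorInfoStore(null);        // degrade to memory-only
        }
    }

    void registerExecutor(String execId, String shuffleInfoJson) {
        inMemory.put(execId, shuffleInfoJson);
        if (recoveryDir != null) {
            try {
                Files.write(recoveryDir.resolve(execId + ".json"),
                    shuffleInfoJson.getBytes(StandardCharsets.UTF_8));
            } catch (IOException e) {
                // Disk went bad after open; from now on only memory holds the state.
            }
        }
    }
}
```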
Yes, I think we should enable this option by default, maybe in another PR.
This PR just periodically checks for a bad disk and saves the current executor info held in memory to a new, good directory. The data stays up to date as long as we handle the synchronization well. There is indeed a case that makes the recovery fail (the NM crashing and the disk breaking at the same time), but it should be really, really rare.
I understand what you are saying, but the major problem is that if this happens, long-running applications cannot recover from the resource waste, or the occasional job failure, except by restarting the application. If this problem can be resolved by the current implementation, then I agree with your opinion; but to my understanding, the current implementation cannot solve it.
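A rough sketch of the mechanism described in this comment, with hypothetical names and a plain-file stand-in for the recovery DB (not the PR's actual code): a background task periodically probes the active recovery directory and, if it looks bad, switches to another candidate directory and re-persists the executor info still held in memory.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch only: periodically probe the active recovery dir; on failure, switch
// to the next candidate dir and re-persist everything still held in memory.
class RelocatingRecoveryStore {
    private final Map<String, String> executorInfo = new ConcurrentHashMap<>();
    private final List<Path> candidateDirs;
    private volatile Path activeDir;

    RelocatingRecoveryStore(List<Path> candidateDirs) {
        this.candidateDirs = candidateDirs;
        this.activeDir = candidateDirs.get(0);
        ScheduledExecutorService checker = Executors.newSingleThreadScheduledExecutor();
        checker.scheduleWithFixedDelay(this::checkDisk, 30, 30, TimeUnit.SECONDS);
    }

    synchronized void register(String execId, String infoJson) {
        executorInfo.put(execId, infoJson);
        persist(execId, infoJson);
    }

    private synchronized void checkDisk() {
        if (isHealthy(activeDir)) {
            return;
        }
        for (Path candidate : candidateDirs) {
            if (!candidate.equals(activeDir) && isHealthy(candidate)) {
                activeDir = candidate;
                // Re-write the current in-memory view so the new dir is complete.
                executorInfo.forEach(this::persist);
                return;
            }
        }
        // No healthy dir left: keep serving from memory only.
    }

    private boolean isHealthy(Path dir) {
        try {
            Files.createDirectories(dir);
            Path probe = dir.resolve(".probe");
            Files.write(probe, new byte[]{1});
            Files.delete(probe);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    private void persist(String execId, String infoJson) {
        try {
            Files.write(activeDir.resolve(execId + ".json"),
                infoJson.getBytes(StandardCharsets.UTF_8));
        } catch (IOException e) {
            // Will be retried or relocated by the next disk check.
        }
    }
}
```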
OK, I see; I've reviewed that PR now. But at best, that still doesn't completely handle the problem, as any existing shuffle data written to the bad disks is gone (and as I noted on that PR, it's somewhat complicated to make sure that the ExternalShuffleService and the executor keep a consistent view of good dirs).
I think the current implementation could be enhanced, but I'd prefer a simpler approach. If you just change the current implementation to not save recovery data, what data is lost and how does Spark recover from it, if at all? The shuffle service will need at least the app secret to allow the executors to connect. I'm wondering if after a restart, YARN actually calls the |
My change does save recovery data to a better directory (as explained in the note above) if a disk error happens, so Spark can recover from it.
This secret recovery is done by the YarnShuffleService itself, so maybe we should also change the secret-recovery-related code.
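Presumably the same treatment would apply to the secrets: keep the per-application secrets in memory so they can be re-persisted when the recovery directory moves. A hypothetical sketch (class and method names are made up, not the actual YarnShuffleService API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: keep app secrets in memory alongside executor info, so they can be
// re-written to a new recovery dir together with the executor entries.
class AppSecretsState {
    private final Map<String, String> secretsByAppId = new ConcurrentHashMap<>();

    void appInitialized(String appId, String secret) {
        secretsByAppId.put(appId, secret);
    }

    void appStopped(String appId) {
        secretsByAppId.remove(appId);
    }

    /** Snapshot used when relocating the recovery store to a healthy dir. */
    Map<String, String> snapshot() {
        return new HashMap<>(secretsByAppId);
    }
}
```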
What changes were proposed in this pull request?
Currently, ExecutorShuffleInfo can be recovered from a file if NM recovery is enabled; however, the recovery file lives under a single directory, which may become unavailable if the disk is broken. So if an NM restart happens (caused by a kill or some other reason), the shuffle service cannot start and the ExecutorShuffleInfo is lost even if there are existing executors on the node. This may ultimately cause job failures (if the node or the executors on it are not blacklisted) or, at the least, resource waste (shuffles from this node always fail). For long-running Spark applications this problem can be more serious.
This PR introduces a mechanism to support multiple directories for executor shuffle info recovery, which can improve the robustness of the YarnShuffleService.
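For illustration only, initialization with multiple candidate directories could look roughly like this; the comma-separated directory list and all names here are hypothetical, not an actual Spark or YARN setting:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;
import java.util.Optional;

// Illustration only: choose the first usable directory out of several
// candidates at service start, instead of relying on a single recovery path.
class RecoveryDirSelector {
    static Optional<Path> selectRecoveryDir(String commaSeparatedDirs) {
        List<String> dirs = Arrays.asList(commaSeparatedDirs.split(","));
        for (String dir : dirs) {
            Path p = Paths.get(dir.trim());
            try {
                Files.createDirectories(p);
                Path probe = Files.createTempFile(p, "recovery-probe", ".tmp");
                Files.delete(probe);
                return Optional.of(p);         // first writable dir wins
            } catch (IOException e) {
                // This candidate looks bad; try the next one.
            }
        }
        return Optional.empty();               // fall back to memory-only mode
    }

    public static void main(String[] args) {
        // Hypothetical setting, e.g. "/data1/nm-recovery,/data2/nm-recovery"
        selectRecoveryDir("/data1/nm-recovery,/data2/nm-recovery")
            .ifPresentOrElse(
                dir -> System.out.println("Using recovery dir: " + dir),
                () -> System.out.println("No healthy recovery dir; memory-only"));
    }
}
```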
How was this patch tested?
UT
Please review http://spark.apache.org/contributing.html before opening a pull request.