[SPARK-21535][ML]Reduce memory requirement for CrossValidator and TrainValidationSplit #18733
Conversation
Test build #79944 has finished for PR 18733 at commit
We currently have PR #16774 open. Maybe we should wait for it to be merged first, because after parallelism support is applied, the code will be different.
Nothing in this change depends on #16774. The basic idea is that we should release the driver memory as soon as a trained model has been evaluated. I don't see any conflict.
Features should be merged when they are reasonable and ready, not held back waiting on uncertain changes, especially when there are no conflicts.
    metrics(i) += metric
    i += 1
  }
  trainingDataset.unpersist()
One consideration here is that we're unpersisting the training data only after all models (for a fold) are evaluated. This means the full dataset (train and validation) is in cluster memory throughout, whereas previously only one dataset would be in cluster memory at a time. It's possible the impact of this on cluster resources is greater than the saving on the driver from temporarily storing 1 model instead of numModels models per fold.
It obviously depends on a lot of factors (dataset size, cluster resources, driver memory, model size, etc.).
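For context, a simplified sketch of the previous ordering being referred to, using the same local names as the diff above (this is an approximation, not the exact Spark source): the cached training split could be released before any evaluation touched the validation split.

```scala
// Approximate sketch of the pre-PR ordering inside CrossValidator.fit.
// All models for a fold are fitted first, so the cached training split can be
// dropped before evaluation, and the two splits are rarely cached together.
val models = est.fit(trainingDataset, epm).asInstanceOf[Seq[Model[_]]]
trainingDataset.unpersist()

var i = 0
while (i < epm.length) {
  // Evaluate each already-fitted model against the (still cached) validation split.
  val metric = eval.evaluate(models(i).transform(validationDataset, epm(i)))
  metrics(i) += metric
  i += 1
}
validationDataset.unpersist()
```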
Ah, you're right. I was under the wrong impression that validationDataset is always in memory.
Even though the size of validationDataset is about 1/k of the trainingDataset's (for k folds), and it's only used for transform, not for fit, I still cannot prove that the new implementation is better in all circumstances.
I'll close the PR unless there's a better way to resolve the concern. Thanks.
What changes were proposed in this pull request?
CrossValidator and TrainValidationSplit both use
models = est.fit(trainingDataset, epm) to fit the models, where epm is an
Array[ParamMap]. Even though the training process is sequential, the current implementation consumes extra driver memory to hold all of the trained models, which is not necessary and often leads to memory exceptions for both CrossValidator and TrainValidationSplit. My proposal is to optimize the training implementation so that each model that has already been evaluated can be collected by GC, avoiding the unnecessary OOM exceptions.
E.g. when the grid search space has 12 candidates, the old implementation needs to hold all 12 trained models in driver memory at the same time, while the new implementation only needs to hold 1 trained model at a time; each previous model can be cleared by GC.
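A rough sketch of the proposed per-model loop (names such as est, epm, eval, metrics, trainingDataset, and validationDataset come from the surrounding CrossValidator code; this is illustrative rather than the exact merged diff):

```scala
// Proposed ordering (sketch): fit and evaluate one candidate at a time so the
// previous model becomes unreferenced and can be garbage-collected on the driver.
var i = 0
while (i < epm.length) {
  val model = est.fit(trainingDataset, epm(i)).asInstanceOf[Model[_]]
  val metric = eval.evaluate(model.transform(validationDataset, epm(i)))
  metrics(i) += metric
  i += 1
}
// The training split now stays cached until the last candidate has been evaluated.
trainingDataset.unpersist()
```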
How was this patch tested?
Existing unit tests, since there is no change to the logic.
I've also manually verified that the new implementation allows CrossValidator and TrainValidationSplit to train much larger models with the same max heap memory.
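For reference, a minimal hypothetical setup for the kind of run this affects, a 3 x 4 = 12 candidate grid like the example above (lr, paramGrid, cv, and training are illustrative names; training is assumed to be an existing DataFrame with label and features columns):

```scala
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}

// 3 regParam values x 4 elasticNetParam values = 12 candidate models per fold.
val lr = new LogisticRegression()
val paramGrid = new ParamGridBuilder()
  .addGrid(lr.regParam, Array(0.001, 0.01, 0.1))
  .addGrid(lr.elasticNetParam, Array(0.0, 0.25, 0.5, 1.0))
  .build()

val cv = new CrossValidator()
  .setEstimator(lr)
  .setEvaluator(new BinaryClassificationEvaluator())
  .setEstimatorParamMaps(paramGrid)
  .setNumFolds(3)

// Previously the driver held all 12 fitted models per fold at once; with this
// change only the model currently being evaluated is referenced.
val cvModel = cv.fit(training) // `training`: assumed, pre-existing DataFrame
```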