This repository was archived by the owner on Jan 9, 2020. It is now read-only.

Description
In the first version of submission V2, the uploaded jars aren't cleaned up, so they will accumulate and eventually fill up the disk on the staging server pod.
We should:
- create a default in the staging server config to delete an uploaded resource N minutes after the last pod matching that resource's pod labels terminates
- allow overriding that default on a per-upload basis with an optional field added to the uploadResources endpoint
- consider adding a "maximum disk usage" setting that rejects a resource upload when it would put the staging server over its allotted capacity
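The TTL-based cleanup described above could be sketched as follows. This is a minimal, hypothetical illustration, not the staging server's actual code: the resource bookkeeping (`last_pod_terminated_at`, `ttl_seconds`) and the function name are assumptions made for this sketch.

```python
def resources_to_delete(resources, now, default_ttl_seconds):
    """Return names of uploaded resources that are safe to delete.

    `resources` maps a resource name to a dict with:
      - 'last_pod_terminated_at': epoch seconds when the last pod matching
        the resource's pod labels terminated, or None if pods still run
      - 'ttl_seconds': optional per-upload override of the default TTL
        (corresponding to the proposed optional uploadResources field)
    """
    stale = []
    for name, info in resources.items():
        terminated_at = info.get("last_pod_terminated_at")
        if terminated_at is None:
            # Pods matching the labels are still running; keep the resource.
            continue
        ttl = info.get("ttl_seconds", default_ttl_seconds)
        if now - terminated_at > ttl:
            stale.append(name)
    return stale
```

A periodic task on the staging server could call this and remove the returned entries from disk; the per-upload TTL simply shadows the server-wide default when present.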
https://github.com/apache-spark-on-k8s/spark/pull/212/files#r111496837