This repository was archived by the owner on Jan 9, 2020. It is now read-only.

Description
Currently, Spark jobs use directories inside the driver and executor pods for storing temporary files. For instance, the work dirs for the Spark driver and executors live inside the pods, and the internal shuffle service per executor also uses in-pod dirs.
These in-pod dirs live within the Docker storage backend, which can be slow due to its copy-on-write overhead. Many of the storage backends implement block-level CoW, so each small write incurs a copy of an entire block. The overhead can become very high when files are updated by many small writes, and it is recommended to avoid using the Docker storage backend for such use cases. From the first link above:
> Ideally, very little data is written to a container’s writable layer, and you use Docker volumes to write data.
We should use `emptyDir` volumes for temporary storage to avoid this overhead.
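As a rough sketch of what this would look like, the pod spec below mounts an `emptyDir` volume and points Spark's scratch space at it via `SPARK_LOCAL_DIRS`. The pod/volume names, image, and mount path are illustrative assumptions, not the spec this project actually generates:

```yaml
# Hypothetical executor pod spec; names and paths are illustrative only.
apiVersion: v1
kind: Pod
metadata:
  name: spark-executor-example
spec:
  containers:
    - name: spark-executor
      image: spark-executor:latest
      env:
        # Point Spark's scratch/work space at the emptyDir mount
        # instead of a dir on the container's writable layer.
        - name: SPARK_LOCAL_DIRS
          value: /tmp/spark-local
      volumeMounts:
        - name: spark-local-dir
          mountPath: /tmp/spark-local
  volumes:
    # emptyDir is backed by the node's filesystem (or tmpfs with
    # "medium: Memory"), bypassing the Docker storage backend's CoW.
    - name: spark-local-dir
      emptyDir: {}
```

Because `emptyDir` is created when the pod is scheduled and deleted when the pod goes away, its lifecycle matches that of the temporary files Spark writes there.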