
Description
I might have missed something obvious, but at present I can't find a way to assign smaller CPU slices to the Spark worker containers. Something like
--conf spark.executor.cores=100m
would be a nice way to assign 100 millicores to the worker container.
Shouldn't there also be a way to control the CPU slice for the driver?
I think that at present each of the containers takes up a whole CPU.
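
For reference, a fractional request is easy to express in a plain Kubernetes pod spec, so this is roughly what I'd hope the generated executor (and driver) pods could carry. This is just an illustrative sketch; the pod name and image below are placeholders, not anything this project actually generates:

```yaml
# Illustrative sketch only: a container requesting 100 millicores.
# The pod name and image are hypothetical placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: spark-worker-example
spec:
  containers:
    - name: spark-worker
      image: example/spark-worker:latest  # hypothetical image
      resources:
        requests:
          cpu: 100m   # 100 millicores = 0.1 of a CPU core
        limits:
          cpu: 500m   # optional upper bound, also fractional
```

Since Kubernetes itself already accepts fractional CPU values in this millicore notation, it seems like only the configuration plumbing on the Spark side is missing.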