Cook

Cook is a powerful batch scheduler, specifically designed to provide a great user experience when there are more jobs to run than your cluster has capacity for.

Cook intelligently preempts jobs so that no user has to wait long for quick answers, while helping you achieve 90%+ utilization for massive workloads.

Cook has been battle-hardened to automatically recover from dozens of classes of cluster failures.

Cook can act as a Spark scheduler, and it comes with a REST API and Java client.
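As a hedged illustration of what talking to the REST API can look like, the sketch below builds a single-job submission payload in Python. It assumes a Cook scheduler listening at `http://localhost:12321` and uses the `/rawscheduler` endpoint and job fields (`uuid`, `command`, `max_retries`, `cpus`, `mem`) as described in Cook's REST API docs; check your deployment's docs for the authoritative schema.

```python
import json
import uuid

# Sketch: one Cook job, identified by a client-generated UUID.
# Field names follow Cook's /rawscheduler job schema (assumption --
# verify against your scheduler's REST API documentation).
job = {
    "uuid": str(uuid.uuid4()),   # client picks the job id
    "command": "echo hello",     # shell command to run on the cluster
    "max_retries": 3,            # Cook retries failed instances for you
    "cpus": 1.0,                 # fractional CPUs are allowed
    "mem": 128,                  # memory request, in MiB
}

# Cook accepts a batch of jobs in one request.
payload = json.dumps({"jobs": [job]})

# With a running scheduler, this payload would be POSTed, e.g.:
#   curl -X POST -H "Content-Type: application/json" \
#        --data '<payload>' http://localhost:12321/rawscheduler
print(payload)
```

In production you would use the Java client instead of hand-rolling JSON, but the payload shape is the same either way.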

But you'd probably like to run Spark jobs on Cook, right? To do so:

1. Download the latest Cook scheduler here.
2. Launch the scheduler for testing by running `java -jar cook-release.jar dev-config.edn` (get dev-config.edn here; read more about configuration in scheduler/docs/configuration.asc).
3. Go to the spark subproject and follow the README to patch Spark to support Cook as a scheduler.

If you'd like to learn more or do something different, read on...