Konstantin Boudnik
added a comment - 06/Aug/13 03:40 The recipe should be pretty trivial. Spark standalone cluster doesn't need much, but a master node name and port number (could be the same as cluster head node with standard port 7077).
Every Spark worker will have to specify
{{STANDALONE_SPARK_MASTER_HOST}} in
/etc/spark/conf/spark-env.sh, and the cluster will come up properly.
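A minimal sketch of what that per-worker configuration might look like (the master hostname {{spark-master.example.com}} is a placeholder, and the standard port 7077 is assumed):

```shell
# /etc/spark/conf/spark-env.sh (set on every worker node)
# Placeholder hostname: point this at the actual cluster head node.
export STANDALONE_SPARK_MASTER_HOST=spark-master.example.com
# Standard standalone master port, as mentioned above.
export SPARK_MASTER_PORT=7077
```

With this in place, a worker started via the init script should register with the master at {{spark://spark-master.example.com:7077}}.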