Navigate to Resources > Consumers and select a consumer, such as SampleApplications. After selecting it, click the Consumer Properties tab. Under the Specify slot-based resource groups section, select the check box for the resource group that was just created (gpus) and click Apply.

Navigate to Resources > Resource Planning (Slot) > Resource Plan. Select Resource Group: gpus from the drop-down box and Exclusive as the slot allocation policy. Exclusive means that when IBM Spectrum Conductor with Spark allocates resources from this resource group, it takes all free slots from a host. For example, on a host with 4 GPUs, a request for 1, 2, 3, or 4 GPUs takes the whole host. Click Apply.

Create Anaconda environment

Create an Anaconda environment first; then, during creation of the Spark Instance Group (SIG), select Jupyter Notebook and select the Anaconda distribution and the Anaconda environment name from the drop-down boxes.

Navigate to Workload > Spark > Anaconda Management. Select a ppc64le Anaconda distribution name, such as Anaconda5-1-0-Python3-Linux-ppc64le, and click Deploy.

Specify a name (such as myAnaconda) for the Anaconda distribution instance and a deployment directory (such as /home/egoadmin/myAnaconda). Click Deploy.

Click a ppc64le Anaconda distribution name (such as Anaconda5-1-0-Python3-Linux-ppc64le) to open the Add Conda Environment wizard. In the wizard, click the Anaconda distribution instance myAnaconda and click Add under Conda environments. Then, deselect Create environment from a yaml file and provide an environment name (such as env1). Click Add.

Create the Spark Instance Group (SIG)

To use snap-ml-spark, configure the Spark Instance Group (SIG) in IBM Spectrum Conductor with the specific settings given below.

Click the Configuration link next to Spark 2.3.1 to set the configuration properties SPARK_EGO_CONF_DIR_EXTRA, SPARK_EGO_GPU_EXECUTOR_SLOTS_MAX, and spark.jars:

Set SPARK_EGO_CONF_DIR_EXTRA to /opt/DL/snap-ml-spark/conductor_spark/conf.

Set SPARK_EGO_GPU_EXECUTOR_SLOTS_MAX to the number of GPUs available on each host in the cluster; for example, SPARK_EGO_GPU_EXECUTOR_SLOTS_MAX=4.

Go to Additional Parameters and click Add a Parameter. Add the parameter spark.jars with the value /opt/DL/snap-ml-spark/lib/snap-ml-spark-v1.1.0-ppc64le.jar.
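Taken together, the three settings above amount to the following configuration; the slot count of 4 is this walkthrough's example value, so use the GPU count of your own hosts:

```
SPARK_EGO_CONF_DIR_EXTRA=/opt/DL/snap-ml-spark/conductor_spark/conf
SPARK_EGO_GPU_EXECUTOR_SLOTS_MAX=4
spark.jars=/opt/DL/snap-ml-spark/lib/snap-ml-spark-v1.1.0-ppc64le.jar
```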

In the Spark Instance Group creation page, set the following configuration options:

Under Enable notebooks, select Jupyter 5.4.0.

Provide a shared directory as the base data directory (such as /paie-nfs/data).

Select the Anaconda distribution instance and the Conda environment from the drop-down boxes.

Click the Configuration link for Jupyter 5.4.0, go to the Environment Variables tab, and click Add a variable. Add the variable JUPYTER_SPARK_OPTS with the value --conf spark.ego.gpu.app=true --conf spark.ego.gpu.executors.slots.max=4 --conf spark.default.parallelism=8. In a SIG with 2 hosts and 4 GPUs on each host, this setting lets notebooks use 8 GPUs with 8 partitions.
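As a single line, the variable for this 2-host example reads as follows; spark.ego.gpu.executors.slots.max matches the 4 GPU slots per host, and spark.default.parallelism=8 gives one partition per GPU across both hosts:

```
JUPYTER_SPARK_OPTS=--conf spark.ego.gpu.app=true --conf spark.ego.gpu.executors.slots.max=4 --conf spark.default.parallelism=8
```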

Under the Resource Groups and Plans section, select the gpus resource group for Spark executors (GPU slots) and the ComputeHosts resource group for everything else. Click Create and Deploy Instance Group.

After the SIG is deployed and started, to run Jupyter notebooks, click the SIG, go to the Notebooks tab, and click Create Notebooks for Users. Select users (for example, Admin and other users, as required) and click Create.

Stop and start the Jupyter 5.4.0 notebook that you created so that the sample notebooks (snap_ml_spark_example_notebooks) appear on the home page when you log in. Start this Jupyter 5.4.0 notebook only when Jupyter notebooks are to be executed, so that GPUs are not allocated to the notebook unnecessarily.

How to run snap-ml-spark applications through spark-submit in PowerAI Enterprise 1.1.2

In the Cluster Management Console (GUI), navigate to Workload > Spark > My Applications And Notebooks. Click Run Application for spark-submit.

A sample spark-submit command takes its arguments from the box in the Run Application wizard.
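The exact arguments shown in the wizard are not reproduced here; based on the description below (client mode through ego-client and the shipped Criteo example), an illustrative sketch of what goes in the box might be:

```
# Illustrative sketch only; exact arguments depend on your cluster and on the
# example's own argument handling (see its README for how the data directory,
# such as /tmp/criteoData/data/, is passed).
--master ego-client /opt/DL/snap-ml-spark/examples/example-criteo45m/example-criteo45m.py
```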

In the spark-submit command arguments, replace ego-client with ego-cluster to submit the Spark job in cluster mode instead of client mode.

This spark-submit command uses one of the examples, example-criteo45m.py, which is shipped with the PowerAI base package. The /tmp/criteoData/data/ directory should contain the input Criteo data; it is a directory on the host in the cluster where the selected Spark master (selected on the Run Application page) is running. Details about running this example and its dataset can be found in /opt/DL/snap-ml-spark/examples/example-criteo45m/README.md. More examples are available in /opt/DL/snap-ml-spark/examples.

The running application then appears under My Applications And Notebooks.

How to run Jupyter Notebooks in PowerAI Enterprise 1.1.2 using snap-ml-spark

Click on the Spark Instance Group (SIG) and go to Notebooks tab.

Start the notebook if it is not already in the Started state.

Click the My Notebooks drop-down box and click an entry similar to Jupyter 5.4.0 – owned by Admin. This opens a new window with the login to the Notebooks home page.

Log in as the Admin user, click the snap_ml_spark_example_notebooks folder, and then select any of the sample notebooks to open and run.

Click the New drop-down box on the Jupyter Notebooks home page and select Spark Cluster to create a new IPython notebook in which snap-ml-spark can be imported and its API used.
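Inside such a notebook, a first cell might look like the following sketch. The snap_ml_spark module name and the LogisticRegression estimator are assumptions based on the shipped examples, and the file path is hypothetical; check /opt/DL/snap-ml-spark/examples for the exact API. This runs only inside the SIG's notebook kernel, where a SparkSession named spark is preconfigured with the GPU settings above.

```python
# Sketch of a notebook cell; runs only inside the SIG's Jupyter kernel,
# where the `spark` session is preconfigured with the GPU settings above.
# The snap_ml_spark module and LogisticRegression estimator are assumed
# from the shipped examples; verify against /opt/DL/snap-ml-spark/examples.
from snap_ml_spark import LogisticRegression

# Load a libsvm-format training set from the shared base data directory
# configured for the SIG (the file name here is hypothetical).
train = spark.read.format("libsvm").load("/paie-nfs/data/train.libsvm")

# Train on GPU-backed executors allocated from the `gpus` resource group.
lr = LogisticRegression()
lr.fit(train)
```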