Next we’ll use the vh init wizard to create a new project on Valohai
and bootstrap the valohai.yaml configuration file.

The configuration will have a single step named ‘Execute nvidia-smi’ that
simply prints the status of the server GPU using the nvidia-smi tool.

$ mkdir test-project
$ cd test-project
$ vh init
# Hello! This wizard will help you start a Valohai compatible project.
# First, let's make sure /Users/user/test-project is the root directory of your project.
# Is that correct? [y/N]:
y
# Looks like you don't have a Valohai.yaml file. Let's create one!
# We couldn't find script files in this directory.
# Please enter the command you'd like to run in the Valohai platform.
nvidia-smi
# Is nvidia-smi correct? [y/N]:
y
# Success! Got it! Using nvidia-smi as the command.
# Now let's pick a Docker image to use with your code.
# Here are some recommended choices, but feel free to type in one of your own.
# [ 1] tensorflow/tensorflow:1.0.1-devel-gpu-py3
# ...
# Choose a number or enter a Docker image name.:
1
# Is tensorflow/tensorflow:1.0.1-devel-gpu-py3 correct? [y/N]:
y
# Success! Great! Using tensorflow/tensorflow:1.0.1-devel-gpu-py3.
# Here's a preview of the Valohai.yaml file I'm going to create.
# ...
# Write this to /Users/user/test-project/valohai.yaml? [y/N]:
y
# Success! All done! Wrote /Users/user/test-project/valohai.yaml.
# Do you want to link this directory to a pre-existing project,
# or create a new one? [L/C]:
c
# Project name:
test-project
# Success! Project test-project created.
# Success! Linked /Users/user/test-project to test-project.
#
# **********************************************************************
# All done! You can now create an ad-hoc execution with
# $ vh exec run --adhoc --watch execute
# to see that everything works as it should.
# For better repeatability, we recommend that your code
# is in a Git repository; you can link the repository
# to the project in the Valohai webapp.
#
# Happy (machine) learning!
# **********************************************************************

You can stop watching the execution with Ctrl+C. (This detaches from the log stream but doesn’t stop the execution itself.)
The execution should only take a second or two to finish if the Docker image is already cached on the compute node.

You can see the status of the execution in the web application
or with the command-line client.
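As a rough sketch of the command-line route — the exact subcommands and output vary between valohai-cli versions, so treat these as assumptions and check `vh --help` on your installation:

```
$ vh execution list      # list recent executions of the linked project
$ vh execution logs 1    # print the logs of execution #1
```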

Ad-hoc executions are convenient while you are developing your scripts and learning the platform, but we strongly
recommend keeping your main machine learning code in a version control repository for better collaboration and
repeatability.
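Getting the project under version control takes only a few commands. A minimal Git sketch — the directory, file, and author details below are placeholders standing in for your own project:

```shell
cd "$(mktemp -d)"                # stand-in for your test-project directory
git init -q                      # start tracking the project
touch valohai.yaml               # the configuration file that vh init generated
git add valohai.yaml
git -c user.name="Example" -c user.email="example@example.com" \
    commit -q -m "Add Valohai configuration"
```

Once the repository is pushed to a Git host, you can link it to the project in the Valohai web app so executions run against committed code instead of ad-hoc uploads.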