FloydHub looks like it could be the future of ML; it's very easy to set up and use (I figured out pretty much everything immediately, and I'm an ML novice). The problem for me was that the version of Keras on their Theano instances was old, and I had no end of trouble trying to make it work. This would be fine if I knew what I was doing with Keras and ML, because then I could just rewrite the scripts. I did that, but then it threw some other error and I gave up (for now). I expect to be back soon, though, because FloydHub seems very promising.

Hi! We released a new set of Theano images with Keras 2.0.3 installed. You can try them out by passing --env theano-0.9 or --env theano-0.8 to the run command. We will be updating the documentation soon to pick up all the new images, including tensorflow-1.2.
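For example, to run a job against the updated Theano 0.9 image (the script name below is a placeholder for your own entry point; only the --env flag comes from the comment above):

```shell
# Use the Theano 0.9 image, which ships Keras 2.0.3.
floyd run --env theano-0.9 "python my-script.py"
```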

## Introduction
This is an example of how to set up an instance on FloydHub to run lesson 1 of the awesome deep learning course available at http://course.fast.ai/. This document was revised on March 5th, 2017 and now works with the full dogscats dataset.
## Set up floydhub account and working directory
First, you'll need a FloydHub account and the floyd CLI installed. Follow the online instructions at https://www.floydhub.com/welcome.
Next, create a working directory on your local computer. All files in this directory will be uploaded to the FloydHub cloud instance when you run <code>floyd run</code> (explained in detail later).
<pre><code>
mkdir ~/Projects/
cd ~/Projects/
</code></pre>
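Eventually this directory gets associated with a FloydHub project. With the CLI of this era that was done roughly as follows; the project name is made up, and the exact flags may differ by CLI version:

```shell
cd ~/Projects/
# Link this directory to a (hypothetical) FloydHub project.
floyd init my-fastai-project
# Later, launch a Jupyter job on a Theano 0.9 instance.
floyd run --mode jupyter --env theano-0.9
```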
## Set up necessary files for lesson one
If you would like to skip the details and just get things going, you can clone all the files in this repo.
<pre><code>
git clone https://github.com/YuelongGuo/floydhub.fast.ai.git
cd floydhub.fast.ai/
</code></pre>
Note: if you run the lesson notebook on an instance without a usable GPU, the imports will still work, but Theano will print a warning. For example, this cell:
<pre><code>
import utils; reload(utils)
from utils import plots
</code></pre>
produces:
<pre><code>
WARNING (theano.sandbox.cuda): CUDA is installed, but device gpu is not available (error: Unable to get the number of gpus available: no CUDA-capable device is detected). Using Theano backend.
</code></pre>
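The warning above means no GPU was detected, so Theano falls back to the CPU. Which device Theano targets is controlled by the <code>device</code> setting in the <code>THEANO_FLAGS</code> environment variable. As a small illustration (the helper function here is hypothetical, stdlib-only, and just mimics how that flag string is read):

```python
import os

def theano_device(flags: str) -> str:
    """Extract the device setting from a THEANO_FLAGS-style string.

    Defaults to 'cpu' when no device is given, mirroring Theano's default.
    """
    for part in flags.split(","):
        key, _, value = part.partition("=")
        if key.strip() == "device":
            return value.strip()
    return "cpu"

# Inspect the current environment (empty string if the variable is unset).
print(theano_device(os.environ.get("THEANO_FLAGS", "")))
```

So if your instance sets `THEANO_FLAGS=floatX=float32,device=gpu` but no CUDA device exists, you get exactly the warning shown above.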

I have a question about downloading datasets from AWS S3. My PyTorch models are trained on huge amounts of data that we generate with a Spark process and dump to an AWS S3 location. My Python code loads files of data from there on demand. Of course, when I try to do that from the Floyd instance, it fails because the instance is not authorized to access my S3 data.

How do you suggest getting around this? As far as I can tell, your dataset-creation workflow only covers uploading data from my local computer. It would be great to support downloading from an AWS S3 bucket as well, and if the bucket has access restrictions, there should be a way to supply the necessary credentials (access key / secret key).
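Until something like that is supported, one workaround is to pull the data from S3 yourself at the start of the job, passing explicit credentials (e.g. read from environment variables). A rough sketch using boto3 — the bucket, key, and helper names below are all made up for illustration:

```python
def parse_s3_uri(uri):
    """Split an s3://bucket/key URI into (bucket, key)."""
    if not uri.startswith("s3://"):
        raise ValueError("not an S3 URI: %r" % uri)
    bucket, _, key = uri[len("s3://"):].partition("/")
    return bucket, key

def download_from_s3(uri, dest, access_key, secret_key):
    """Download one object using explicit credentials (requires boto3)."""
    import boto3  # imported lazily so parse_s3_uri works without it
    bucket, key = parse_s3_uri(uri)
    s3 = boto3.client(
        "s3",
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
    )
    s3.download_file(bucket, key, dest)

# Hypothetical example URI:
print(parse_s3_uri("s3://my-bucket/training/part-0001.parquet"))
```

The trade-off is that your credentials have to reach the instance somehow (environment variables or a config file uploaded with the job), which is exactly why built-in support with credential handling would be nicer.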