Installing docker-compose with the system package manager will likely cause dependency conflicts with the system-installed Python. Instead, use virtualenv to create an isolated Python environment in the project directory and install docker-compose with pip.

virtualenv venv
source venv/bin/activate
pip install docker-compose

Install the required node modules for docker-compose

npm install

Create a .env file. This file isn't committed to the repository as it may contain secure keys. Placeholders will get the application running, but you may need to access actual keys to test things like push notifications.

QB_ID - The id for your quickblox instance, must be a number greater than 0.

QB_KEY - The key for your quickblox instance.

QB_SECRET - The secret token for your quickblox instance.
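A placeholder .env covering the variables above might look like the following. The values are illustrative only; real keys come from your QuickBlox dashboard:

```
QB_ID=1
QB_KEY=placeholder-key
QB_SECRET=placeholder-secret
```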

Migrate DB schema

Option 1:

From the root of the project, run docker-compose up. This should build the required images and run them.

Option 2:

The previous option uses the migrate service in docker-compose.yaml to create the db schema. If you wish instead to create the db schema manually:

From the root of the project, run docker-compose run api bash; this drops you into a bash shell inside the api container.

Run cd server/bin/ && node migrateSchema.js to create the db schema.

Access the API

OS X

Run docker-machine ip default to find the IP address of your VM.

Open a browser and navigate to <your ip address>:3000/explorer. (Unfortunately, the arc web interface doesn't work from within Docker, so you'll need to use the CLI for now.)

Linux

On Linux distributions you do not need to run Docker inside a virtual machine, so you should be able to access the API directly at http://0.0.0.0:80

If you already have a service running on port 80, you can change which port is exposed by editing the external port of the haproxy service in docker-compose.yml.
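For example, to expose the proxy on host port 8080 instead of 80, the haproxy service's port mapping could look like this (the service definition is abbreviated; adjust to match your docker-compose.yml):

```yaml
haproxy:
  ports:
    - "8080:80"   # host:container — host port 8080 now forwards to the proxy's port 80
```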

Docker hints

Can't resolve issues

Sometimes on a Mac we've had issues with npm downloading a dependency, or similar connectivity issues. If you experience this, the fix is usually updating the docker-machine's DNS server.

docker-machine ssh default

vi /etc/resolv.conf

Add the line nameserver 8.8.8.8 (resolv.conf requires the nameserver keyword).

Docker machine ip

Run ./server/bin/startproxy.sh to start a proxy pointing to your docker-machine. The proxy allows you to access the docker-machine through your computer's IP address on port 8080. This is important if you'd like to test the service from a phone.

There is no autorestart

It'd be nice to get a watcher installed with StrongLoop, but for now you'll have to kill docker-compose and rerun it when you make a change.

Manual Docker Image Build

Database Backup + Restore

The pgbackups3 service handles database backup. The service is configured in Docker Cloud and performs a daily dump, which is then uploaded to Amazon S3. Backups are located in the ds-pg-backup bucket.

Back-ups can be restored manually using the restore service. Set the environment variable RESTORE_DATE to the desired backup date as found in the S3 bucket, then redeploy the service. It will download the database dump from S3 and load it into the linked database. The service should stop when the restore is finished; check the logs to ensure the restore was successful.
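As a sketch, setting the restore date on the restore service in a stack file might look like the following (the service name and RESTORE_DATE variable come from the description above; the date value and surrounding structure are illustrative):

```yaml
restore:
  environment:
    # illustrative date — use a backup date that actually exists in the ds-pg-backup bucket
    - RESTORE_DATE=2016-05-01
```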

Ubuntu hints

If you have already installed a Postgres DB locally, then running docker-compose run api bash may fail with a port conflict.

To resolve this, you can either stop your local Postgres or change the postgres port mapping in docker-compose.yml.
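One option is remapping the host side of the postgres port in docker-compose.yml, along these lines (the service name and internal port are assumptions; adjust to match your file):

```yaml
postgres:
  ports:
    - "5433:5432"   # host port 5433 avoids clashing with a locally installed Postgres on 5432
```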

Deployment

Overview

The server is deployed using Docker Cloud. Builds are triggered when commits are pushed to master or develop, or when a new tag is created. The master branch produces a Docker image tagged "stable", and the develop branch produces a Docker image tagged "latest". The API builds can be viewed in the Docker Cloud repository timeline.

The production server points to the "stable" tag unless a rollback is required, in which case it can be pointed to a versioned tag. This can be done by editing the "api" service, changing the image tag from "stable" to a specific tag, then redeploying the service.

The staging and production servers are broken up into two stacks: drone-squad-api and drone-squad-api-staging. Each stack contains a list of interconnected services, including redis, postgres, the api, and a static website used for deep linking. It also includes some services that run once and then shut down, such as the migration service.