This is an example of using Puppet to provision a web application with multiple components, with Nginx handling routing. I plan to build this up into a fully fledged microservice solution.

Start

# Gets JS and CSS library resources (AngularJS and Bootstrap) for the site
# using npm and Bower, and compiles the Dropwizard application as a fat jar
./update_dependencies.sh
# Starts up the VM and runs Puppet to provision it, downloading the box if necessary
vagrant up
# On completion Nginx should be running and the AngularJS site accessible
# from the host at http://192.168.33.10

Running and testing locally

The AngularJS site lives in the site_content Puppet module and can be run and tested locally using npm.

It uses a config module ‘appConfig’, held in config.js, to hold the web service URL it uses for calling Dropwizard to get Person JSON. A different config.js file is copied into the app folder depending on whether it is running locally or on the VM, where it uses an Nginx proxy to route calls. When running locally Dropwizard is on http://localhost:8095/api, which will cause cross-domain scripting errors unless you run the browser with reduced security (“--args --disable-web-security” on Chrome).
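A minimal sketch of the two config.js variants (the variable and property names here are my assumptions, simplified from the Angular ‘appConfig’ module; the relative ‘/api’ path on the VM is inferred from the Nginx proxy setup):

```javascript
// config.js variants (sketch; names are assumptions, simplified from the
// Angular 'appConfig' module described above).

// Variant copied in when running locally: call Dropwizard directly.
const localConfig = { apiBaseUrl: 'http://localhost:8095/api' };

// Variant copied in on the VM: a relative URL, so Nginx can proxy the call.
const vmConfig = { apiBaseUrl: '/api' };

console.log(localConfig.apiBaseUrl); // http://localhost:8095/api
console.log(vmConfig.apiBaseUrl);    // /api
```

Keeping only the base URL different between the two files means the rest of the app code is identical in both environments.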

Puppet details

Java

On the version of the Ubuntu box I had, the Java module failed due to missing dependencies, and I needed to run ‘sudo apt-get update’ for it to find them. I added this to the Vagrant script using a shell provisioner, though a better solution would be to either run it in Puppet before the java class or directly include all the required packages.
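As a sketch (box name and most options omitted; only the IP and Puppet paths come from this post), the Vagrantfile addition looks something like this, with the shell provisioner declared before the Puppet one so it runs first:

```ruby
Vagrant.configure("2") do |config|
  config.vm.network "private_network", ip: "192.168.33.10"
  # Refresh the apt package index before Puppet runs, so the Java
  # module can resolve its package dependencies
  config.vm.provision "shell", inline: "apt-get update"
  config.vm.provision "puppet" do |puppet|
    puppet.manifests_path = "puppet/manifests"
    puppet.manifest_file  = "site.pp"
    puppet.module_path    = "puppet/modules"
  end
end
```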

I have created a custom module, dropwizard_service, to copy the Dropwizard jar, config and Upstart script to the server, then ensure the dropwizard service is running and restarts on changes to the jar or config.
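A sketch of what the dropwizard_service manifest might contain (all file paths and names here are assumptions, not the actual module contents):

```puppet
class dropwizard_service {
  # Deploy the fat jar and config; any change notifies the service to restart
  file { '/opt/dropwizard/dropwizard-app.jar':
    source => 'puppet:///modules/dropwizard_service/dropwizard-app.jar',
    notify => Service['dropwizard'],
  }
  file { '/opt/dropwizard/config.yml':
    source => 'puppet:///modules/dropwizard_service/config.yml',
    notify => Service['dropwizard'],
  }
  # Upstart script so the jar runs as a managed service
  file { '/etc/init/dropwizard.conf':
    source => 'puppet:///modules/dropwizard_service/dropwizard.conf',
  }
  service { 'dropwizard':
    ensure  => running,
    require => File['/etc/init/dropwizard.conf'],
  }
}
```

The notify relationships are what give the “restart on changes to the jar or config” behaviour.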

Nginx

I created a custom puppet module site_content (puppet/modules/site_content) which ensures /var/www exists and deploys the AngularJS site.

The manifest site.pp (puppet/manifests/site.pp) configures the nginx module to include a vhost entry for the site content and an upstream proxy to the Dropwizard application. It took a while to understand how the nginx module actually creates nginx conf files: it creates one for the proxy (“nginx::resource::upstream”) and one for the vhost (“nginx::resource::vhost”), with location entries being added to the vhost conf file.
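A sketch of the relevant part of site.pp (resource titles are my assumptions; the port and www root come from this post):

```puppet
class { 'nginx': }

# Upstream proxy pointing at the Dropwizard application,
# written out as its own conf file
nginx::resource::upstream { 'dropwizard':
  members => ['localhost:8095'],
}

# Vhost serving the AngularJS site content from /var/www,
# also written out as its own conf file
nginx::resource::vhost { 'site':
  www_root => '/var/www',
}

# Location entry added into the vhost conf, routing /api calls
# through to the Dropwizard upstream
nginx::resource::location { 'api':
  vhost    => 'site',
  location => '/api',
  proxy    => 'http://dropwizard',
}
```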

Notes

This is a bare-bones example and does not include any security or configuration best practice, so do not use it in production.

Any change to the puppet files requires you to run ‘vagrant provision’ to update.

Possible improvements

Move the site content under the src directory and build an RPM with the content instead of using an explicit Puppet module; that way the site can be versioned and installed directly as an RPM

Package Dropwizard Application as RPM and install using Puppet for versioning

HTTPS

Run the node tests as part of the Gradle build using a Gradle plugin

I think the Nginx config should not live in site.pp, but rather in the site_content module or a different module

Anyone who’s talked to me about Government Digital Services has heard me rant about the world-leading provider… Estonia!

A couple of years ago I had a chance to talk to some of the Estonian developers who worked on their Government services platforms. The impressive part wasn’t the UX or site design (mostly WordPress for frontends) but the way all their services were connected, using their ID card authentication system and common data services (access to health/business records). You could legally register a company in 10 mins compared to weeks in the UK.

They are now leveraging their digital services, combined with secure digital authentication, to make their country attractive to foreign citizens looking to start businesses and invest. This will give them a considerable economic advantage over their neighbours (and a cheeky slap to Russia as its capital starts to flee its depression).

This is the power of digital services, destroying traditional barriers, making it quick, cheap and safe to interact with legal authorities, making trade and business run more effectively.

It’s sad that the UK is playing catch up, but we’re getting there … slowly.

I’m currently on a project which has a testing problem: it has a complex web application with multiple web service integration points, some of which are flaky, require complex data loads or are difficult to set up. This means that our automated web tests (written in Selenium) can be slow, unreliable and tricky to set up (requiring automating the setup steps as well as the tests themselves).

Some of those are problems which need to be solved in the service dependencies themselves and can’t be ignored as part of proper integration tests. At the same time, it would be good to be able to quickly test the web UI in isolation, covering the Javascript components (using AngularJS) and browser compatibility, without these separate integration concerns wrecking entire test runs.

To achieve this I would like to create a simple web application that mocks and mimics various services, allowing the automated tests to set up canned responses from the services, similar to how a mocked service would be used in a unit test. Before an automated test executes, it would post one or more canned responses, which would go into a queue on the mimicking service; then, as the automated test executes its steps, the requests it triggers fire against the mimic service, which responds from its response queue (FIFO). Responses could be added to different queues to mimic different endpoints, resources or services.
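A minimal sketch of the mimic service’s core behaviour (class and method names are mine, not from any existing implementation); a thin HTTP layer would wire these queues up to actual endpoints:

```javascript
// Sketch of the mimic service core: per-endpoint FIFO queues of canned
// responses. Tests enqueue responses before running; the app under test
// consumes them in order as its requests arrive.
class MimicService {
  constructor() {
    this.queues = new Map(); // endpoint path -> array of canned responses
  }

  // Called by a test before it runs, to stage a response for an endpoint
  enqueue(endpoint, response) {
    if (!this.queues.has(endpoint)) this.queues.set(endpoint, []);
    this.queues.get(endpoint).push(response);
  }

  // Called when the app under test hits the endpoint: serve responses FIFO
  respond(endpoint) {
    const queue = this.queues.get(endpoint);
    if (!queue || queue.length === 0) {
      return { status: 404, body: 'No canned response for ' + endpoint };
    }
    return queue.shift();
  }
}

// Usage: a test stages two responses, then requests consume them in order
const mimic = new MimicService();
mimic.enqueue('/api/person', { status: 200, body: '{"name":"Alice"}' });
mimic.enqueue('/api/person', { status: 500, body: 'boom' });
console.log(mimic.respond('/api/person').status); // 200
console.log(mimic.respond('/api/person').status); // 500
console.log(mimic.respond('/api/person').status); // 404 (queue exhausted)
```

Keying the queues by endpoint path is what lets a single mimic instance stand in for several different services at once.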

Objectives for mimic application:

Simple, must be easy to write a canned response or series of responses for tests

Fast, if it can’t respond significantly faster than the real integration points there’s no advantage

Flexible, to handle mocking different types of services

Handle multiple endpoints, so you don’t need to host multiple versions for different services

Disadvantages of this approach:

Adds complexity to web tests (hopefully balanced out by removing more setup code)