
The Raspberry Pi is a great device for running simple web apps at home on a permanent basis, and you can pick up a small touchscreen for it quite cheaply. This makes it easy to build and host a small personal dashboard that pulls important data from various APIs or RSS feeds and displays it - you’ll often see dashboards like this on Raspberry Pi forums and subreddits. As I’m currently between jobs, with some time to spare before the new one starts, I decided to build my own. React.js is an obvious fit for this: it lets you break the user interface up into multiple independent components, keeps the functionality close to the UI, and makes it easy to reuse widgets by passing different props to each instance.

In this tutorial I’ll show you how to start building a simple personal dashboard using React and Webpack. You can then install Nginx on your Raspberry Pi and host it from there. Along the way, you’ll pick up a bit of knowledge about Webpack and ECMAScript 2015 (using Babel). Our initial implementation will have only two widgets, a clock and an RSS feed, but those should cover enough of the basics for you to build whatever other widgets you have in mind.
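Before writing any components, we need a Webpack configuration. Here’s a sketch of a Webpack 1-era webpack.config.js that fits this setup - the loader names, paths and plugin choices are assumptions rather than a definitive config. Save it as webpack.config.js:

var webpack = require('webpack');
var path = require('path');

module.exports = {
  entry: './js/app.js',
  output: {
    // The built bundle referenced by index.html
    path: path.resolve(__dirname, 'static'),
    filename: 'bundle.js'
  },
  module: {
    preLoaders: [
      // Lint all Javascript files before building - the build fails on errors
      { test: /\.js$/, loader: 'eslint-loader', exclude: /node_modules/ }
    ],
    loaders: [
      // Transpile ES2015 and JSX with Babel
      { test: /\.jsx?$/, loader: 'babel-loader', exclude: /node_modules/ },
      // Handle plain CSS and Sass
      { test: /\.css$/, loader: 'style-loader!css-loader' },
      { test: /\.scss$/, loader: 'style-loader!css-loader!sass-loader' },
      // Handle fonts
      { test: /\.(woff|woff2|ttf|eot|svg)(\?.*)?$/, loader: 'url-loader' }
    ]
  },
  plugins: [
    // Reload the application automatically when files change
    new webpack.HotModuleReplacementPlugin()
  ]
};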

Note the various loaders we’re using. We use ESLint to lint our Javascript files for code quality, and the build will fail if they do not match the required standards. We’re also using loaders for CSS, Sass, Babel (so we can use ES2015 for our Javascript) and fonts. Also, note the hot module replacement plugin - this allows us to reload the application automatically. If you haven’t used Webpack before, this config should be sufficient to get you started, but I recommend reading the documentation.

We also need to configure ESLint to our liking. Here is the configuration we will be using, which should be saved as .eslintrc.yml:

rules:
  no-debugger:
    - 0
  no-console:
    - 0
  no-unused-vars:
    - 0
  indent:
    - 2
    - 2
  quotes:
    - 2
    - single
  linebreak-style:
    - 2
    - unix
  semi:
    - 2
    - always
env:
  es6: true
  browser: true
  node: true
extends: 'eslint:recommended'
parserOptions:
  sourceType: module
  ecmaFeatures:
    jsx: true
    experimentalObjectRestSpread: true
    modules: true
plugins:
  - react

We also need a base HTML file. Save this as index.html:

<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Personal Dashboard</title>
  </head>
  <body>
    <div id="view"></div>
    <script src="bundle.js"></script>
  </body>
</html>

We also need to set the commands for building and testing our app in package.json:
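As a sketch (the exact flags are assumptions based on the description below), the scripts section might look like this:

"scripts": {
  "test": "istanbul cover _mocha -- --compilers js:babel-core/register --require test/setup.js test/**/*.js",
  "start": "webpack-dev-server --hot --inline",
  "build": "webpack"
}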

The npm test command will call Mocha to run the tests, but will also use Istanbul to generate test coverage. For the sake of brevity, our tests won’t be terribly comprehensive. The npm start command will run a development server, while npm run build will build our application.

We also need to create the test/ folder and the test/setup.js file:

import jsdom from 'jsdom';
import chai from 'chai';

const doc = jsdom.jsdom('<!doctype html><html><body></body></html>');
const win = doc.defaultView;

global.document = doc;
global.window = win;

Object.keys(window).forEach((key) => {
  if (!(key in global)) {
    global[key] = window[key];
  }
});

This sets up Chai and creates a dummy DOM for our tests. We also need to create the folder js/ and the file js/app.js. You can leave that file empty for now.

Our dashboard component

Our first React component will be a wrapper for all the others. Each of the remaining components will be a self-contained widget that populates itself, without the need for a centralized data store like Redux. Redux is a very useful library, and for larger React applications it makes a lot of sense, but here we’re better off having each widget manage its own data internally rather than have it passed down from a single data store.
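Here’s a first cut of the component, saved as js/components/dashboard.js - a minimal sketch based on the expanded version we’ll see shortly:

import React from 'react';

export default React.createClass({
  render() {
    return (
      <div className="dashboard">
        <h1 ref="title">{this.props.title}</h1>
      </div>
    );
  }
});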

If you run npm test at this point, you may see Istanbul complain that “No coverage information was collected, exit without writing coverage information” - a sign that nothing is actually loading our code yet.

Our first component is in place. However, it isn’t getting loaded. We also need to start thinking about styling. Create the file scss/style.scss, but leave it blank for now. Then save this in js/app.js:

import React from 'react';
import ReactDOM from 'react-dom';
import Dashboard from './components/dashboard';
import styles from '../scss/style.scss';

ReactDOM.render(
  <Dashboard title="My Dashboard" />,
  document.getElementById('view')
);

Note that we’re importing CSS or Sass files in the same way as Javascript files. This is a Webpack feature, and while it takes a bit of getting used to, it has its advantages - if each component imports only the styles that relate to it, you can be sure there are no orphaned CSS files. Here we only have one stylesheet anyway, so it’s a non-issue.

If you now run npm start, our dashboard gets loaded and the title is displayed. With our dashboard in place, we can now implement our first widget.

Creating the clock widget

Our first widget will be a simple clock. This demonstrates changing the state of the widget on an interval. First let’s write a test - save this as test/components/clockwidget.js:
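Here’s a sketch of what the test might look like - the class name and assertion are my assumptions rather than the original test:

import React from 'react';
import TestUtils from 'react-addons-test-utils';
import { expect } from 'chai';
import moment from 'moment';
import ClockWidget from '../../js/components/clockwidget';

describe('Clock widget', () => {
  it('renders the time it is given', () => {
    const time = moment();
    const component = TestUtils.renderIntoDocument(
      <ClockWidget time={time} />
    );
    const node = TestUtils.findRenderedDOMComponentWithClass(component, 'clockwidget');
    expect(node.textContent).to.contain(time.format('YYYY'));
  });
});

And the widget itself, saved as js/components/clockwidget.js - again a sketch, with the display format an assumption:

import React from 'react';
import moment from 'moment';

export default React.createClass({
  getDefaultProps() {
    return {
      // Default to the current time
      time: moment()
    };
  },

  getInitialState() {
    return {
      time: this.props.time
    };
  },

  render() {
    const time = this.state.time.format('dddd, Do MMMM YYYY, h:mm:ss a');
    return (
      <div className="clockwidget widget">
        <h2>{time}</h2>
      </div>
    );
  }
});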

Note that the component accepts a property of time. The getInitialState() method then converts this.props.time into this.state.time so that it can be displayed on render. Note we also set a default of the current time using Moment.js.

We also need to update the dashboard component to load this new component:

import React from 'react';
import ClockWidget from './clockwidget';

export default React.createClass({
  render() {
    return (
      <div className="dashboard">
        <h1 ref="title">{this.props.title}</h1>
        <div className="wrapper">
          <ClockWidget />
        </div>
      </div>
    );
  }
});

Now, if you try running npm start and viewing the dashboard in the browser, you will see that it displays the current time and date, but it’s not being updated. You can force the page to reload every now and then, but we can do better than that. We can set an interval in which the time will refresh. As the smallest unit we show is seconds, this interval should be 1 second.
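Here’s a sketch of the methods we can add inside the React.createClass() call to do that (React.createClass autobinds its methods, so passing this.tick around is safe):

componentDidMount() {
  // Update the clock every second
  this.interval = setInterval(this.tick, 1000);
},

componentWillUnmount() {
  clearInterval(this.interval);
},

tick() {
  this.setState({
    time: moment()
  });
}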

When our component has mounted, we set an interval of 1,000 milliseconds, and each time it elapses we call the tick() method. This method sets the state to the current time, and as a result the user interface is automatically re-rendered. On unmount, we clear the interval.

In this case we’re just calling a single function on a set interval. In principle, the same approach can be used to populate components in other ways, such as by making an AJAX request.

Creating an RSS widget

Our next widget will be a simple RSS feed reader. We’ll fetch the content with jQuery and render it using React. We’ll also reload it regularly. First, let’s create our test:
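A sketch of such a test, saved as test/components/feedwidget.js - the url prop name and the assertion are my assumptions:

import React from 'react';
import TestUtils from 'react-addons-test-utils';
import { expect } from 'chai';
import FeedWidget from '../../js/components/feedwidget';

describe('Feed Widget', () => {
  it('renders the feed wrapper', () => {
    const component = TestUtils.renderIntoDocument(
      <FeedWidget url="http://example.com/rss.xml" size={5} delay={60} />
    );
    const node = TestUtils.findRenderedDOMComponentWithClass(component, 'feedwidget');
    expect(node).to.exist;
  });
});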

Our feed widget will accept an external URL as an argument, and will then poll this URL regularly to populate the feed. It also allows us to specify the size attribute, which denotes the number of feed items, and the delay attribute, which denotes the number of seconds it should wait before fetching the data again.

If we run the tests now, we should see failures along these lines, since the widget doesn’t exist yet:

Warning: React.createElement: type should not be null, undefined, boolean, or number. It should be a string (for DOM elements) or a ReactClass (for composite components). Check the render method of `dashboard`.

1) renders the dashboard

Feed Widget

Warning: React.createElement: type should not be null, undefined, boolean, or number. It should be a string (for DOM elements) or a ReactClass (for composite components).
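Next, the widget itself, saved as js/components/feedwidget.js. Treat this as a sketch: the url prop name and the jQuery-based XML parsing are my assumptions, not necessarily the original implementation:

import React from 'react';
import $ from 'jquery';

// A single feed entry - an anchor wrapped in a list item
const FeedItem = (props) => (
  <li>
    <a href={props.link}>{props.title}</a>
  </li>
);

export default React.createClass({
  getDefaultProps() {
    return {
      size: 5,
      delay: 60
    };
  },

  getInitialState() {
    return {
      feed: []
    };
  },

  componentDidMount() {
    // Fetch the feed now, then again every delay seconds
    this.getFeed();
    this.interval = setInterval(this.getFeed, this.props.delay * 1000);
  },

  componentWillUnmount() {
    clearInterval(this.interval);
  },

  getFeed() {
    // Fetch the feed and keep the most recent entries, up to size items
    $.get(this.props.url).done((response) => {
      const feed = $(response).find('item').slice(0, this.props.size).map((index, item) => ({
        link: $(item).find('link').text(),
        title: $(item).find('title').text()
      })).get();
      this.setState({ feed });
    });
  },

  render() {
    return (
      <div className="feedwidget widget">
        <ul>
          {this.state.feed.map((item, index) => (
            <FeedItem key={index} link={item.link} title={item.title} />
          ))}
        </ul>
      </div>
    );
  }
});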

This is by far the most complex component, so a little explanation is called for. We include jQuery as a dependency at the top of the file. Then we create a component for rendering an individual feed item, called FeedItem. This is very simple, consisting of an anchor tag wrapped in a list item. Note the use of the const keyword - in ES6 this declares a constant.

Next, we move on to the feed widget proper. We set the initial state of the feed to be an empty array. Then, we define a componentDidMount() method that calls getFeed() and sets up an interval to call it again, based on the delay property. The getFeed() method fetches the URL in question and sets this.state.feed to an array of the most recent entries in the feed, with the size denoted by the size property passed through. We also clear that interval when the component is about to be unmounted.

Note that you may run into problems with the Access-Control-Allow-Origin HTTP header when fetching feeds from other domains. It’s possible to disable these cross-origin restrictions in your web browser, so if you want to run this as a dashboard you’ll probably need to do so. On Chrome there’s a useful plugin that allows you to disable them when needed.

Because our FeedWidget has been created in a generic manner, we can then include multiple feed widgets easily, as in this example:
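For instance, the wrapper in the dashboard’s render() method might become something like this (the feed URLs are just examples):

<div className="wrapper">
  <ClockWidget />
  <FeedWidget url="https://news.ycombinator.com/rss" size={5} delay={60} />
  <FeedWidget url="http://feeds.bbci.co.uk/news/rss.xml" size={10} delay={120} />
</div>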

With that done, feel free to add whatever other feeds you want to include.

Deploying our dashboard

The final step is deploying our dashboard to our Raspberry Pi or other device. Run the following command to generate the Javascript:

$ npm run build

This will create static/bundle.js. You can then copy that file over to your web server along with index.html and place both files in the web root. I recommend Nginx if you’re using a Raspberry Pi, as it’s fast and simple to set up for serving static content. If you’re likely to make a lot of changes, you might want to add a command to the scripts section of your package.json to deploy the files more easily.

These basic widgets should be enough to get you started. You should be able to use the feed widget with virtually any RSS feed, and you should be able to use a similar approach to poll third-party APIs, although you might need to authenticate in some way (if you do, you won’t want to expose your authentication details, so ensure that nobody from outside the network can view your application). I’ll leave it to you to see what kind of interesting widgets you come up with for your own dashboard, but some ideas to get you started include:

Public transport schedules/Traffic issues

Weather reports

Shopping lists/Todo lists, with HTML5 local storage used to persist them

If, like me, you’re a web developer who sometimes also has to wear a sysadmin’s hat, then you’ll probably be coming across the same set of tasks each time you set up a new server. These may include:

Provisioning new servers on cloud hosting providers such as Digital Ocean

Setting up Cloudflare

Installing a web server, database and other required packages

Installing an existing web application, such as Wordpress

Configuring the firewall and Fail2ban

Keeping existing servers up to date

These can get tedious and repetitive fairly quickly - who genuinely wants to SSH into each server individually and run updates regularly? Done manually, there’s also a danger of the setup for each server becoming inconsistent. Shell scripts can do this, but they aren’t easy to read, and aren’t necessarily easy to adapt to different operating systems. You need a way to manage multiple servers easily, maintain a series of reusable “recipes”, and do it all in a way that’s straightforward to read - in other words, a configuration management system.

There are others around, such as Chef, Puppet, and Salt, but my own choice is Ansible. Here’s why I went for Ansible:

Playbooks and roles are defined as YAML, making them fairly straightforward to read and understand

It’s written in Python, making it easy to create your own modules that leverage existing Python modules to get things done

It’s distributed via pip, making it easy to install

It doesn’t require you to install anything new on the servers, so you can get started straight away as soon as you can access a new server

It has modules for interfacing with cloud services such as Digital Ocean and Amazon Web Services

Ansible is very easy to use, but you do still need to know what is actually going on to get the best out of it. It’s intended as a convenient abstraction on top of the underlying commands, not a replacement, and you should know how to do what you want to do manually before you write an Ansible playbook to do it.

Setting up

You need to have Python 2 available. Ansible doesn’t yet support Python 3 (Grr…), so if you’re using an operating system that has switched to Python 3, such as Arch Linux, you’ll need to install Python 2 as well. Assuming you have pip installed, run this command to install Ansible:

$ sudo pip install ansible

Or for users on systems with Python 3 as the main Python:

$ sudo pip2 install ansible

For Windows users, you’ll want to drop sudo. On Unix-like OSes that don’t have sudo installed, drop it and run the command as root.

Our first Ansible command

We’ll demonstrate Ansible in action with a Vagrant VM. Drop the following Vagrantfile into your working directory:

# -*- mode: ruby -*-
# vi: set ft=ruby :

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "debian/jessie64"
  config.vm.network "forwarded_port", guest: 80, host: 8080
end

Then fire up the VM:

$ vagrant up

This VM will be our test bed for running Ansible. If you prefer, you can use a remote server instead.

Next, we’ll configure Ansible. Save this as ansible.cfg:

[defaults]
hostfile = inventory
remote_user = vagrant
private_key_file = .vagrant/machines/default/virtualbox/private_key

In this case the remote user is vagrant because we’re using Vagrant, but to manage remote machines you would need to change this to the name of the account you use on the server. The value of private_key_file will also normally be something like /home/matthew/.ssh/id_rsa (the private key itself, not the .pub file), but here we’re using the Vagrant-specific key.

Note the hostfile entry - this points to the list of hosts you want to manage with Ansible. Let’s create this next. Save the following as inventory:

testserver ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222

Note that we explicitly need to set the port here because we’re using Vagrant. Normally it will default to port 22. A typical entry for a remote server might look like this:

example.com ansible_ssh_host=192.168.56.101

Note also that we can refer to hosts by the names we give them, which can be as meaningful (or not) as you want.

Let’s run our first command:

$ ansible all -m ping
testserver | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

We called Ansible with the hosts set to all, therefore every host in the inventory was contacted. We used the -m flag to say we were calling a module, and then specified the ping module. Ansible therefore pinged each server in turn.

We can call ad-hoc commands using the -a flag, as in this example:

$ ansible all -a "uptime"
testserver | SUCCESS | rc=0 >>
17:26:57 up 19 min, 1 user, load average: 0.00, 0.04, 0.13

This command gets the uptime for the server. If you only want to run the command on a single server, you can specify it by name:

$ ansible testserver -a "uptime"
testserver | SUCCESS | rc=0 >>
17:28:21 up 20 min, 1 user, load average: 0.02, 0.04, 0.13

Here we specified the server as testserver. What about if you want to specify more than one server, but not all of them? You can create groups of servers in inventory, as in this example:

[webservers]
testserver ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
example.com ansible_ssh_host=192.168.56.101

You could then call the following to run the uptime command on all the servers in the webservers group:

$ ansible webservers -a 'uptime'

If you want to run the command as a different user, you can do so:

$ ansible webservers -a 'uptime' -u bob

Note that for running uptime we haven’t specified the -m flag. This is because the command module is the default, but it’s very basic and doesn’t support shell variables. For more complex interactions you might need to use the shell module, as in this example:

$ ansible testserver -m shell -a 'echo $PATH'
testserver | SUCCESS | rc=0 >>
/usr/local/bin:/usr/bin:/bin:/usr/games

For installing a package on Debian or Ubuntu, you might use the apt module:
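For example, to ensure Nginx is installed on every server in the webservers group:

$ ansible webservers -m apt -a 'name=nginx state=present' --become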

Here we specify that a particular package should be state=present or state=absent. Also, note the --become flag, which allows us to become root. If you’re using an RPM-based Linux distro, you can use the yum module in the same way.

Finally, let’s use the git module to check out a project on the server:
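For example, something like this (using the repository we’ll see again in the playbook below):

$ ansible testserver -m git -a 'repo=https://github.com/matthewbdaly/matthewbdaly.github.io.git dest=/var/www version=HEAD' --become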

Here we check out a Git repository. We specify the repo, destination and version.

You can call any installed Ansible module in an ad-hoc fashion in the same way. Refer to the documentation for a list of modules.

Playbooks

Ad-hoc commands are useful, but they don’t offer much extra over using SSH. Playbooks allow you to define a repeatable set of commands for a particular use case. In this example, I’ll show you how to write a playbook that does the following:

Installs and configures Nginx

Clones the repository for my site into the web root

This is sufficiently complex to demonstrate some more of the functionality of Ansible, while also demonstrating playbooks in action.

Create a new folder called playbooks, and inside it save the following as sitecopy.yml:

---
- name: Copy personal website
  hosts: testserver
  become: True
  tasks:
    - name: Install Nginx
      apt: name=nginx update_cache=yes

    - name: Copy config
      copy: >
        src=files/nginx.conf
        dest=/etc/nginx/sites-available/default

    - name: Activate config
      file: >
        dest=/etc/nginx/sites-enabled/default
        src=/etc/nginx/sites-available/default
        state=link

    - name: Delete /var/www directory
      file: >
        path=/var/www
        state=absent

    - name: Clone repository
      git: >
        repo=https://github.com/matthewbdaly/matthewbdaly.github.io.git
        dest=/var/www
        version=HEAD

    - name: Restart Nginx
      service: name=nginx state=restarted

Note the name fields - these are comments that will show up in the output when each step is run. First we use the apt module to install Nginx, then we copy over the config file and activate it, then we empty the existing /var/www and clone the repository, and finally we restart Nginx.

Also, note the following fields:

hosts defines the hosts affected

become specifies that the commands are run using sudo

We also need to create the config for Nginx. Create the files directory under playbooks and save this file as playbooks/files/nginx.conf:

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /var/www;
    index index.html index.htm;

    server_name localhost;

    location / {
        try_files $uri $uri/ =404;
    }
}

Obviously if your Nginx config will be different, feel free to amend it as necessary. Finally, we run the playbook using the ansible-playbook command:
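$ ansible-playbook playbooks/sitecopy.yml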

PLAY RECAP *********************************************************************
testserver                 : ok=7    changed=6    unreachable=0    failed=0

If we had a playbook that we wanted to run on only a subset of the hosts it applied to, we could use the -l flag, as in this example:

$ ansible-playbook playbooks/sitecopy.yml -l testserver

Using these same basic concepts, you can invoke many different Ansible modules to achieve many different tasks. You can spin up new servers on supported cloud hosting companies, you can set up a known good fail2ban config, you can configure your firewall, and many more tasks. As your playbooks get bigger, it’s worth moving sections into separate roles that get invoked within multiple playbooks, in order to reduce repetition.

Finally, I mentioned earlier that you can use Ansible to update all of your servers regularly. Here’s the playbook I use for that:

---
- name: Update system
  hosts: all
  become: True
  tasks:
    - name: update system
      apt: upgrade=full update_cache=yes

This connects to all hosts using the all shortcut we saw earlier, and upgrades all existing packages. Using this method is a lot easier than connecting to each one in turn via SSH and updating it manually.

Summary

Ansible is an extremely useful tool for managing servers, but to get the most out of it you have to put in a fair bit of work reading the documentation and writing your own playbooks for your own use cases. It’s simple to get started with, and if you’re willing to put in the time writing your own playbooks then in the long run you’ll save yourself a lot of time and grief by making it easy to set up new servers and administer existing ones. Hopefully this has given you a taster of what you can do with Ansible - from here on the documentation is worth a look as it lists all of the modules that ship with Ansible. If there’s a particular task you dread, such as setting up a mail server, then Ansible is a very good way to automate that away so it’s easier next time.

My experience is that it’s best to make an effort to try to standardise on two or three different stacks for different purposes, and create Ansible playbooks for those stacks. For instance, I’ve tended to use PHP 5, Apache, MySQL, Memcached and Varnish for Wordpress sites, and PHP 7, Nginx, Redis and PostgreSQL for Laravel sites. That way I know that any sites I build with Laravel will be using that stack. Knowing my servers are more consistent makes it easier to work with them and identify problems.

Documenting your API is something most developers agree is generally a Good Thing, but it’s a pain in the backside, and somewhat boring to do. What you really need is a tool that allows you to specify the details of your API before you start work, generate documentation from that specification, and test your implementation against that specification.

Fortunately, such a tool exists. The API Blueprint specification allows you to document your API using a Markdown-like syntax. You can then create HTML documentation using a tool like Aglio or Apiary, and test your implementation against the specification using Dredd.

In this tutorial we’ll implement a very basic REST API using the Lumen framework. We’ll first specify our API, then we’ll implement routes to match the implementation. In the process, we’ll demonstrate the Blueprint specification in action.

Getting started

Assuming you already have PHP 5.6 or better and Composer installed, run the following command to create our Lumen app skeleton:

$ composer create-project --prefer-dist laravel/lumen demoapi

Once it has finished installing, we’ll also need to add the Dredd hooks:

$ cd demoapi
$ composer require ddelnano/dredd-hooks-php

We need to install Dredd. It’s a Node.js tool, so you’ll need to have that installed. We’ll also install Aglio to generate HTML versions of our documentation:

$ npm install -g aglio dredd

We also need to create a configuration file for Dredd, which you can do by running dredd init. Or you can just copy the one below:

dry-run: null
hookfiles: tests/dredd/hooks/hookfile.php
language: php
sandbox: false
server: 'php -S localhost:3000 -t public/'
server-wait: 3
init: false
custom:
  apiaryApiKey: ''
names: false
only: []
reporter: apiary
output: []
header: []
sorted: false
user: null
inline-errors: false
details: false
method: []
color: true
level: info
timestamp: false
silent: false
path: []
hooks-worker-timeout: 5000
hooks-worker-connect-timeout: 1500
hooks-worker-connect-retry: 500
hooks-worker-after-connect-wait: 100
hooks-worker-term-timeout: 5000
hooks-worker-term-retry: 500
hooks-worker-handler-host: localhost
hooks-worker-handler-port: 61321
config: ./dredd.yml
blueprint: apiary.apib
endpoint: 'http://localhost:3000'

If you choose to run dredd init, you’ll see prompts for a number of things, including:

The server command

The blueprint file name

The endpoint

Any Apiary API key

The language you want to use

There are Dredd hooks for many languages, so if you’re planning on building a REST API in a language other than PHP, don’t worry - you can still test it with Dredd, you’ll just get prompted to install different hooks.

Note the hookfiles section, which specifies a hookfile to run during the test in order to set up the API. We’ll touch on that in a moment. Also, note the server setting - this specifies the command we should call to run the server. In this case we’re using the PHP development server.

If you’re using Apiary with your API (which I highly recommend), you can also set the following parameter to ensure that every time you run Dredd, it submits the results to Apiary:

custom:
  apiaryApiKey: <API KEY HERE>
  apiaryApiName: <API NAME HERE>

Hookfiles

As mentioned, the hooks allow you to set up your API. In our case, we’ll need to set up some fixtures for our tests. Save this file at tests/dredd/hooks/hookfile.php:

<?php

use Dredd\Hooks;
use Illuminate\Support\Facades\Artisan;

require __DIR__ . '/../../../vendor/autoload.php';

$app = require __DIR__ . '/../../../bootstrap/app.php';
$app->make(\Illuminate\Contracts\Console\Kernel::class)->bootstrap();

Hooks::beforeAll(function (&$transaction) use ($app) {
    putenv('DB_CONNECTION=sqlite');
    putenv('DB_DATABASE=:memory:');
    Artisan::call('migrate:refresh');
    Artisan::call('db:seed');
});

Hooks::beforeEach(function (&$transaction) use ($app) {
    Artisan::call('migrate:refresh');
    Artisan::call('db:seed');
});

Before the tests run, we set the environment up to use an in-memory SQLite database. We also migrate and seed the database, so we’re working with a clean database. As part of this tutorial, we’ll create seed files for the fixtures we need in the database.

This hookfile assumes that the user does not need to be authenticated to communicate with the API. If that’s not the case for your API, you may want to include something like this in your hookfile’s beforeEach callback:

$user = App\User::first();
$token = JWTAuth::fromUser($user);
$transaction->request->headers->Authorization = 'Bearer ' . $token;

Here we’re using the JWT Auth package for Laravel to authenticate users of our API, and we need to set the Authorization header to contain a valid JSON web token for the given user. If you’re using a different method, such as HTTP Basic authentication, you’ll need to amend this code to reflect that.

With that done, we need to create the Blueprint file for our API. Recall the following line in dredd.yml:
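blueprint: apiary.apib

This is the file we’ll be writing our documentation in.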

Our first route

Dredd is not a testing tool in the usual sense. Under no circumstances should you use it as a substitute for something like PHPUnit - that’s not what it’s for. It’s for ensuring that your documentation and your implementation remain in sync. However, it’s not entirely impractical to use it as a Behaviour-driven development tool in the same vein as Cucumber or Behat - you can use it to plan out the endpoints your API will have, the requests they accept, and the responses they return, and then verify your implementation against the documentation.

We will only have a single endpoint, in order to keep this tutorial as simple and concise as possible. Our endpoint will expose products for a shop, and will allow users to fetch, create, edit and delete products. Note that we won’t be implementing any kind of authentication, which in production is almost certainly not what you want - we’re just going for the simplest possible implementation.

First, we’ll implement getting a list of products:

FORMAT: 1A

# Demo API

# Products [/api/products]

Product object representation

## Get products [GET /api/products]

Get a list of products

+ Request (application/json)

+ Response 200 (application/json)

    + Body

            {
                "id": 1,
                "name": "Purple widget",
                "description": "A purple widget",
                "price": 5.99,
                "attributes": {
                    "colour": "Purple",
                    "size": "Small"
                }
            }

A little explanation is called for. First, the FORMAT section denotes the version of the Blueprint format in use. Then, the # Demo API section denotes the name of the API.

Next, we define the Products endpoint, followed by our first method. Then we define what should be contained in the request, and what the response should look like. Blueprint is a little more complex than that, but that’s sufficient to get us started.

If you run dredd at this point it will fail: our route is returning HTML, not JSON, and is also raising a 404 error. So let’s fix that. First, let’s create our Product model at app/Product.php:

<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Product extends Model
{
    //
}

Next, we need to create a migration for the database tables for the Product model:

$ php artisan make:migration create_product_table
Created Migration: 2016_08_08_105737_create_product_table

This will create a new file under database/migrations. Open this file and paste in the following:

<?php

use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class CreateProductTable extends Migration
{
    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        // Create products table
        Schema::create('products', function (Blueprint $table) {
            $table->increments('id');
            $table->string('name');
            $table->text('description');
            $table->float('price');
            $table->json('attributes');
            $table->timestamps();
        });
    }

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        // Drop products table
        Schema::drop('products');
    }
}

Note that we create fields that map to the attributes our API exposes. Also, note the use of the JSON field. In databases that support it, like PostgreSQL, it uses the native JSON support, otherwise it works like a text field. Next, we run the migration to create the table:

$ php artisan migrate
Migrated: 2016_08_08_105737_create_product_table

With our model done, we now need to ensure that when Dredd runs, there is some data in the database, so we’ll create a seeder file at database/seeds/ProductSeeder.php:

<?php

use Illuminate\Database\Seeder;
use Carbon\Carbon;

class ProductSeeder extends Seeder
{
    /**
     * Run the database seeds.
     *
     * @return void
     */
    public function run()
    {
        // Add product
        DB::table('products')->insert([
            'name' => 'Purple widget',
            'description' => 'A purple widget',
            'price' => 5.99,
            'attributes' => json_encode([
                'colour' => 'purple',
                'size' => 'Small'
            ]),
            'created_at' => Carbon::now(),
            'updated_at' => Carbon::now(),
        ]);
    }
}

You also need to amend database/seeds/DatabaseSeeder to call it:

<?php

use Illuminate\Database\Seeder;

class DatabaseSeeder extends Seeder
{
    /**
     * Run the database seeds.
     *
     * @return void
     */
    public function run()
    {
        $this->call('ProductSeeder');
    }
}

I found I also had to run the following command so that the new seeder class could be found:

$ composer dump-autoload

Then, call the seeder:

$ php artisan db:seed
Seeded: ProductSeeder

We also need to enable Eloquent, as Lumen disables it by default. Uncomment the following line in bootstrap/app.php:

$app->withEloquent();

With that done, we can move onto the controller.

Creating the controller

Create the following file at app/Http/Controllers/ProductController.php:

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Product;

class ProductController extends Controller
{
    private $product;

    public function __construct(Product $product)
    {
        $this->product = $product;
    }

    public function index()
    {
        // Get all products
        $products = $this->product->all();

        // Send response
        return response()->json($products, 200);
    }
}

This implements the index route. Note that we inject the Product instance into the controller. Next, we need to hook it up in app/Http/routes.php:
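$app->get('/api/products', 'ProductController@index');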

Whoops, looks like we made a mistake here. The index route returns an array of objects, but we’re looking for a single object in the blueprint. We also need to wrap our attributes in quotes, and add the created_at and updated_at attributes. Let’s fix the blueprint:
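A corrected response section might look like this - a sketch, with the wildcard timestamps following the same convention as the POST example below:

+ Response 200 (application/json)

    + Body

            [
                {
                    "id": 1,
                    "name": "Purple widget",
                    "description": "A purple widget",
                    "price": 5.99,
                    "attributes": "{\"colour\": \"purple\",\"size\": \"Small\"}",
                    "created_at": "*",
                    "updated_at": "*"
                }
            ]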

That’s our read support done. We just need to add support for POST, PATCH and DELETE methods.

Our remaining methods

Let’s set up the test for our POST method first:

## Create products [POST /api/products]

Create a new product

+ name (string) - The product name
+ description (string) - The product description
+ price (float) - The product price
+ attributes (string) - The product attributes

+ Request (application/json)

    + Body

            {
                "name": "Blue widget",
                "description": "A blue widget",
                "price": 5.99,
                "attributes": "{\"colour\": \"blue\",\"size\": \"Small\"}"
            }

+ Response 201 (application/json)

    + Body

            {
                "id": 2,
                "name": "Blue widget",
                "description": "A blue widget",
                "price": 5.99,
                "attributes": "{\"colour\": \"blue\",\"size\": \"Small\"}",
                "created_at": "*",
                "updated_at": "*"
            }

Note we specify the format of the parameters that should be passed through, and that our status code should be 201, not 200 - 201 Created is the conventional response when a new resource has been created. Be careful of the whitespace - I had some odd issues with it. Next, we add our route:

$app->post('/api/products', 'ProductController@store');

And the store() method in the controller:

public function store(Request $request)
{
    // Validate request
    $valid = $this->validate($request, [
        'name' => 'required|string',
        'description' => 'required|string',
        'price' => 'required|numeric',
        'attributes' => 'string',
    ]);

    // Create product
    $product = new $this->product;
    $product->name = $request->input('name');
    $product->description = $request->input('description');
    $product->price = $request->input('price');
    $product->attributes = $request->input('attributes');

    // Save product
    $product->save();

    // Send response
    return response()->json($product, 201);
}

Note that we validate the attributes, to ensure they are correct and that the required ones exist. Running Dredd again should show that the route is now in place.

Generating an HTML version of your documentation

Now that we have finished documenting and implementing our API, we need to generate an HTML version of it. One way is to use aglio:

$ aglio -i apiary.apib -o output.html

This will write the documentation to output.html. There’s also scope for choosing different themes if you wish.

You can also use Apiary, which has the advantage that they’ll create a stub of your API so that if you need to work with the API before it’s finished being implemented, you can use that as a placeholder.

Summary

The Blueprint language is a useful way of documenting your API, and makes it simple enough that it’s hard to weasel out of doing so. It’s worth taking a closer look at the specification as it goes into quite a lot of detail. It’s hard to ensure that the documentation and implementation remain in sync, so it’s a good idea to use Dredd to ensure that any changes you make don’t invalidate the documentation. With Aglio or Apiary, you can easily convert the documentation into a more attractive format.

You’ll find the source code for this demo API on Github, so if you get stuck, take a look at that. I did have a fair few issues with whitespace, so bear that in mind if it behaves oddly. I’ve also noticed a few quirks, such as Dredd not working properly if a route returns a 204 response code, which is why I couldn’t use that for deleting - this appears to be a bug, but hopefully this will be resolved soon.

I’ll say it again, Dredd is not a substitute for proper unit tests, and under no circumstances should you use it as one. However, it can be very useful as a way to plan how your API will work and ensure that it complies with that plan, and to ensure that the implementation and documentation don’t diverge. Used as part of your normal continuous integration setup, Dredd can make sure that any divergence between the docs and the application is picked up on and fixed as quickly as possible, while also making writing documentation less onerous.

I use Jenkins as my main continuous integration solution at work, largely for two reasons:

It generally works out cheaper to host it ourselves than to use one of the paid CI solutions for closed-source projects

The size of the plugin ecosystem

However, we also use Travis CI for testing one or two open-source projects, and one distinct advantage Travis has is the way you can configure it using a single text file.

With the Pipeline plugin, it’s possible to define the steps required to run your tests in a Jenkinsfile and then set up a Pipeline job which reads that file from the version control system and runs it accordingly. Here’s a sample Jenkinsfile for a Laravel project:
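What follows is a minimal sketch rather than a production configuration - the repository URL is a placeholder and the stage names are my assumptions - but it shows the three steps involved: node, stage and sh:

node {
    // Fetch the source - swap in your own repository URL
    stage 'Checkout'
    git url: 'https://github.com/matthewbdaly/my-laravel-project.git'

    // Install PHP dependencies with Composer
    stage 'Install dependencies'
    sh 'composer install'

    // Run the PHPUnit test suite
    stage 'Test'
    sh 'vendor/bin/phpunit'
}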

Using these three commands it’s straightforward to define a fairly simple build process for your application in a way that’s more easily repeatable when creating new projects - for instance, you can copy this over to a new project and change the source repository URL and you’re pretty much ready to go.

Unfortunately, support for the Pipeline plugin is missing from a lot of Jenkins plugins - for instance, I can’t publish the XML coverage reports. This is something of a deal-breaker for most of my projects, as I use those report plugins a lot - they’re one of the reasons I chose Jenkins over Travis. Still, this is definitely a big step forward, and if you don’t need that kind of reporting there’s no reason not to consider the Pipeline plugin for your Jenkins jobs. Hopefully in future more plugins will be amended to work with Pipeline so that it’s more widely usable.

You may have heard of Google’s AMP Project, which allows you to create mobile-optimized pages using a subset of HTML. After seeing the sheer speed at which you can load an AMP page (practically instantaneous in many cases), I was eager to see if I could apply it to my own site.

I still wanted to retain the existing functionality for my site, such as comments and search, so I elected not to rewrite the whole thing to make it AMP-compliant. Instead, I opted to create AMP versions of every blog post, and link to them from the original. This preserves the advantages of AMP since search engines will be able to discover it from the header of the original, while allowing those wanting a richer experience to view the original, where the comments are hosted. You can now view the AMP version of any post by appending amp/ to its URL.

The biggest problem was the images in the post body, as the <img> tag needs to be replaced by the <amp-img> tag, which also requires an explicit height and width. I wound up amending the renderer for AMP pages to render an image tag as an empty string, since I have only ever used one image in the post body and I think I can live without them.

It’s also a bit of a pain to style, as using Bootstrap would be awkward. I’ve therefore opted to skip Bootstrap for now and write my own fairly basic theme for the AMP pages instead.

It’ll be interesting to see what effect having the AMP versions of the pages available will have on my site in terms of search results. It obviously takes some time before the page gets crawled, and until then the AMP version won’t be served from the CDN used by AMP, so I really can’t guess what effect it will have right now.
