Eyefodder (http://eyefodder.com)

Code Coverage — a simple Rails example
Thu, 18 Sep 2014 13:13:40 +0000
http://eyefodder.com/2014/09/code-coverage-setup-rails.html

My tests are my safety net. With them I can refactor with confidence, knowing that I’m keeping the functionality I intended. With them, I can grow my codebase, knowing that I’m not introducing regression errors. How do I have confidence that my safety net is good enough? One metric I can use to help with this is code coverage. It answers the question “When I run my tests, how much of my application code executed?”. It’s a somewhat crude metric—telling me how broad the net is, not how strong—but it’s a good place to start. Fortunately, setting it up on a Rails project is pretty simple.

See how happy people are when they have a safety net?

Getting Started

I’ve made a simple example app that shows code coverage in action. Check out the source code from the code_coverage branch of my spex repository:

Now, check out the reports folder. You’ll see that there is a coverage/rcov folder. Open the index file in your browser and you’ll see an easy-to-digest code coverage report.
Pretty nifty, huh? You can click on the rows in the table to see each class in more detail, and find out exactly which lines aren’t being executed.
Let’s take a look at how this was all set up…
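The spec_helper.rb snippet itself didn’t survive the feed. Based on the description that follows, it would be along these lines (a sketch only — the exact environment-variable handling is assumed, and the line numbers mentioned below refer to the original file):

```ruby
# spec/spec_helper.rb (sketch)
if ENV['GENERATE_COVERAGE_REPORTS'] == 'true'
  require 'simplecov'
  require 'simplecov-rcov'

  # write reports where the shared folder / CI expects them
  SimpleCov.coverage_dir(ENV['CI_COVERAGE_REPORTS'])
  SimpleCov.formatter = SimpleCov::Formatter::RcovFormatter
  SimpleCov.start 'rails'
end
```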

There are a few things happening here. We have a couple of environment variables: GENERATE_COVERAGE_REPORTS tells us whether we should create reports, and CI_COVERAGE_REPORTS tells us where to put them if we do. If you’ve followed my earlier post on getting Guard to send Growl notifications, you will know to find these in ops/dotfiles/guest_bash_profile, a profile automatically generated when we launch the virtual machine with vagrant up. If not, well, now you do!
The next thing you’ll notice is the SimpleCov.start 'rails' call on line 4. This configures SimpleCov to have a profile that is good for most Rails applications. For example, the spec and config folders are excluded from coverage stats. You can read more about profiles here.
Finally, we tell SimpleCov that we want to format our results with the SimpleCov::Formatter::RcovFormatter. When we get to running our build as part of a continuous integration process with Jenkins, we can use this format to parse results to be viewed in the dashboard.

Viewing Code Coverage Reports generated on a Guest VM

The last thing we have to deal with is the fact that the reports are generated on the guest virtual machine. In our existing setup, we use rsync to push code changes from the host to the virtual machine. But this only works one way, so if we add content within the virtual machine, we won’t see it on the host. We solve this with a couple of lines in the Vagrantfile.
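The original snippet isn’t preserved in the feed; a sketch of the Vagrantfile lines described below might look like this (the folder paths are assumptions):

```ruby
# exclude generated reports from the one-way rsync share...
config.vm.synced_folder '../', '/app',
  type: 'rsync',
  rsync__exclude: ['reports/']

# ...and share the reports folder back with a regular two-way mount
config.vm.synced_folder '../reports', '/reports'
```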

What this does is exclude the reports from the main rsync and instead set up a new (regular) shared folder that maps reports to /reports on the virtual machine (note this is a root-level folder, not inside the /app folder on the guest). This is why we have used an environment variable to tell SimpleCov where to output reports.

Beware the emperor’s new code coverage

One thing to bear in mind is that code coverage really is a very crude metric. There are different types of coverage metrics, and SimpleCov only provides ‘C0’ coverage: lines of code that executed. Other types include branch and path coverage, but as far as I know, there aren’t any tools for these in Ruby. Let me show you an example of where this falls down:
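The gist with the example code is missing from the feed; based on the discussion below, the two methods might look something like this (the say_yes/say_no helpers are assumed for illustration):

```ruby
def say_yes
  'yes'
end

def say_no
  'no'
end

# with if/else, each branch sits on its own line, so C0 coverage
# can see that the say_no branch never executed
def some_method_with_conditionals(answer)
  if answer
    say_yes
  else
    say_no
  end
end

# the same logic as a ternary is a single line: if any test executes
# that line, C0 coverage reports the method as fully covered
def some_method_with_ternary(answer)
  answer ? say_yes : say_no
end
```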

If you look at this report, we can see that some_method_with_conditionals gets called, but only the say_yes path (lines 12 and 13) executes, and we never confirm that ‘no’ gets sent if we pass false to the method. So far, so good, until we look at some_method_with_ternary. This is basically the same method refactored to be more compact, with the same tests run against it. Yet we are told it is totally covered. So is the metric even still useful?

I still think code coverage is a valuable metric, if only to show you where there are holes in your test suite. If you go in with this knowledge and understanding the limitations, then you will be better equipped to maintain the quality of your app over time.

Code Coverage is a temporal metric

The last thing I want to mention about code coverage is that it’s useful to understand how your coverage changes over time. Particularly if you are managing a team of developers, it provides a quick warning if developers are slipping on their test writing. If you have a Continuous Integration machine, you can track these sort of metrics over time, which can really help you get a sense of where things are headed.
In my next post I’ll show how to set up your very own CI machine with just a few clicks…

Getting Growl notifications from your Virtual Machine
Sun, 07 Sep 2014 19:07:17 +0000
http://eyefodder.com/2014/09/growl-guard-virtual-machine.html

As I develop I have Guard running in the background, executing my tests when things change. But I often don’t have the Terminal window front and centre, so I like to have Growl notifications for my test results. Setting up Growl to push notifications from the virtual machine to the host is a little tricky, so here’s a simple example to show how to do it.

Growl isn’t as hairy as ‘old Ephraim’ but is probably more useful

Getting Started

To get started and see Growl notifications, we need to do a little more than normal. Let’s start by checking out the source code:

Prepping Growl

Download Growl if you don’t already have it. Once it’s installed, open up the preferences pane and click on the ‘network’ panel. Check the box marked ‘Listen for incoming notifications’ and enter a password.

Prepping Vagrant

The next thing we need to do is install a plugin for Vagrant that will allow us to execute a script on the host machine when we boot our virtual machine. I’ll explain what that is below the fold, but for now, make sure that you have Vagrant 1.6.4 or above (hint, type vagrant -v to find out) and update if necessary. Next, install the Vagrant Triggers plugin by entering this on the command line:

$ vagrant plugin install vagrant-triggers

Bringing the virtual machine up

Next, run vagrant up as usual. When it runs, it creates a file ops/dotfiles/guest_bash_profile:

# Edit the following values with your own values
# Growl password set in Growl > Preferences > Network
export GROWL_PASSWORD=enter_growl_password
# The following entries are automatically generated
# Do not edit unless you know what you are doing!
# They are regenerated each time the virtual machine is rebooted
export HOST_IP=10.0.1.28

Go ahead and enter your Growl password. Now we’re good to go. Run vagrant rsync-auto to keep things in sync, then in another window vagrant ssh into the machine.

Hey presto! Growl notifications

So let’s get up and running! When you are ssh’d into the guest machine, fire up Guard with cd /app && bundle exec guard -p. Make a change to your code and when the tests run you should see a notification.
You’re good to go now, but read on if you’d like to understand how all this stuff works…

How it works

There are a lot of moving parts to get this thing going. I’m going to work from the Guardfile back, and then the Vagrantfile forwards. I guess we’ll meet somewhere in the middle so bear with me.

Guardfile changes

If you look in the Guardfile you will see a few new lines of code at the top:
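The Guardfile lines themselves aren’t preserved in the feed; based on the description below, they would be something like this sketch (exact notifier options assumed):

```ruby
# Guardfile: send notifications over GNTP to the host's Growl,
# but only when the relevant environment variables have been set
if ENV['HOST_IP'] && ENV['GROWL_PASSWORD']
  notification :gntp, host: ENV['HOST_IP'], password: ENV['GROWL_PASSWORD']
end
```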

Line 23 tells Guard to use GNTP for notifications (assuming HOST_IP and GROWL_PASSWORD have been set). It’s basically a means for sending Growl notifications over a network. You’ll need to add the GNTP gem, so we’ve added this to our Gemfile:

# guard_growl
gem 'ruby_gntp'

We can see that this system relies on a couple of environment variables being set. We got a clue that these get set in ops/dotfiles/guest_bash_profile; let’s see how that file gets created, and how we get it linked into the guest virtual machine.

Creating the guest bash profile

For this setup to work, we need two environment variables set: GROWL_PASSWORD and HOST_IP. The script that creates the file is ops/setup_guest_bash_profile. This script does a few things, so let’s step through it:

Create a profile file if it doesn’t exist

The first thing we want to do is create a profile file if one doesn’t exist already:

# make a profile file if it doesn't exist
bash_file=dotfiles/guest_bash_profile
touch $bash_file

Next, we want to add some comments to the file. As we will run this script every time the machine boots up, we only really want to add these comments if they don’t already exist. The add_comments_once function does this by checking for a match, and only adding the comment if it isn’t already in the file:

The function gets three arguments: the token GROWL_PASSWORD, the default value enter_growl_password, and a message to explain to the user what to do. If the token is found (lines 22 & 23) then it looks for the default value (lines 25 & 26) and prints a change message if it’s there. If the token isn’t found (lines 32-38) then we write that into the file.

Adding the HOST_IP as a system generated value

The next thing we want to do is add the host system’s IP address to the file so that Growl knows where to send notifications:

This script deletes any existing lines with HOST_IP in them (line 65), then uses a little bash trickery to find the current host IP. I found out how to do it from this post, although I needed to change the delimiter from \ to ' ' (that’s from an escaped space character to a space in quotes). Finally we write this out to our guest_bash_profile file. The next step is getting this script to run when we want it.

Running the profile setup script when Vagrant starts

Thanks to the Vagrant Triggers plugin, this is a really simple affair. We just add the following to our Vagrantfile:
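The Vagrantfile snippet isn’t preserved here; with the vagrant-triggers plugin it would be along these lines (the script path is assumed):

```ruby
# run the profile setup script on the host before up and reload
[:up, :reload].each do |command|
  config.trigger.before command do
    run 'bash setup_guest_bash_profile'
  end
end
```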

This will run our script every time we call vagrant up or vagrant reload, ensuring that our host IP address is always up to date in the file. The last piece of the puzzle is to make sure we use this file to actually set environment variables on the guest machine.

Linking the guest bash profile to the guest virtual machine

This is a relatively simple two-part process. First thing we do is share the ops/dotfiles directory on the virtual machine:

config.vm.synced_folder 'dotfiles', '/dotfiles'

Secondly, we want that file symlinked in the guest machine to ~/.bash_profile. I created a new Puppet class to achieve this. Check out ops/puppet/modules/spex/dotfile_symlink.pp:

Super simple here. We tell Puppet to ensure that /home/vagrant/.bash_profile is a symlink to /dotfiles/guest_bash_profile. In ops/puppet/manifests/default.pp we simply include the class with the others:

include spex::base_packages
include spex::postgres_setup
include spex::ruby_setup
include spex::dotfile_symlink

Now we have everything wired up and ready to go!

Conclusion

This wraps up my example for getting Growl notifications from Guard into the host machine. Although there are a bunch of steps to jump through, once it’s working I hope you’ll find it a pretty robust solution.

The goal in all of this is to shorten feedback loops when you develop. This process should give you some confidence that when your code changes the right tests run. The power of this is greatest when you are confident that your tests cover enough of your application such that you will know when you break things. Our next step is to look into the breadth of your tests and setting up code coverage metrics for your app…

Using Spring with RSpec and Guard to speed up testing
Thu, 04 Sep 2014 23:50:58 +0000
http://eyefodder.com/2014/09/using-spring-with-rspec-and-guard.html

In my last post I showed you how to set up Guard and RSpec so you can automatically run tests when things change. Now let’s get things cooking on gas by using the Spring application preloader. This will mean that your app framework will only have to load once, and tests will be super zippy. Setting up Spring with RSpec is simple and has a huge effect on test running speeds.

Things are zippier on Spring(s)

Getting the sample code for Spring with RSpec

As with all of the examples, I’ve setup the simplest possible example to show setting up Spring with RSpec. Go ahead and checkout the code here:

Once you have checked out the code, jump into the ops directory and vagrant up to bring the VM up. When it’s up and running, get syncing going by entering vagrant rsync-auto, then in a new terminal window enter the following:

vagrant ssh
cd /app
bundle exec guard -p

And that is basically it! Now when you run your tests, you will be taking advantage of Spring! Read on to see how to get this working…

How we got it working

So to get this working we did a few things. First off, we added the spring commands for rspec with the spring-commands-rspec gem:

# rspec_guard_spring
gem 'spring-commands-rspec'

Next, we let Spring create the binstub for us by running bundle exec spring binstub rspec (in the example we already did this, so you will see the generated file at bin/rspec).

A little more digging told me that these gems are probably system Ruby packages that appear as gems thanks to rubygems-integration. Long story short, you have to live with this warning message for now. Although startup performance might be affected, overall performance each time you run RSpec should still be great! Now that you have Spring with RSpec running, our next post will show you how to set up notifications with Growl so you don’t have to keep your terminal window visible while you code.

Getting Started with Guard and RSpec
Thu, 04 Sep 2014 22:23:40 +0000
http://eyefodder.com/2014/09/guard-and-rspec.html

As I build out an application I want to ensure it’s behaving as I intend. RSpec is a great framework for testing Ruby code, and is the tool I use most for my testing. But tests are pretty useless if you don’t run them, so rather than manually running tests when I change things, I use Guard and RSpec together. Here’s the simplest possible example for setting it up.

Guard and RSpec : rather fancy

Guard is a command line tool that responds to filesystem change events. You can use it to do all sorts of stuff using one of the many plugins that have been built for it. In our simple example we are going to use it to trigger rspec tests when we change code in our app.

Getting Started

For this we are going to grab an example from the Spex repository where I keep all my code examples. Clone the branch from github:

Confirming the tests run

When you have the code cloned, jump into the ops directory and execute vagrant up to bring up the virtual machine that will run our example. As instructed, run vagrant rsync-auto; this will ensure that if you make changes to the code, they will get synced in the virtual machine (which is where Guard will be running). Now, open a new Terminal window, vagrant ssh into the machine and type the following:

cd /app
rspec

By doing this, we should see our tests run and pass.

So what happens if we break something? Go ahead and change the code in app/helpers/application_helper.rb:

def some_method_to_test
'resultd'
end

Now when we run rspec again we see that four tests have run and one failed, as expected. But really, given we just changed one file, did we have to run all the tests? And did we have to manually re-run the tests? The answer to both is no; Guard and RSpec together are pretty awesome…

If you hit the Enter key, you’ll see all your tests run again. This is pretty awesome, right? Now you can save yourself having to type rspec. But wait, that’s not all. Go ahead and change your application_helper.rb file back to the way it was. Now you should see only the affected specs run, and pass.

Pretty sweet huh? Basically, anytime a file changes Guard figures out what test or tests we need to run. This helps to keep my test suite running lean and fast; aiding rather than slowing down development. Let’s take a look at what we have added in our example to make things work.

How this was set up
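The first snippet of this section is missing from the feed; it presumably added the testing gems to the Gemfile, something like this sketch:

```ruby
# Gemfile (sketch)
group :development, :test do
  gem 'rspec-rails'
  gem 'guard-rspec', require: false
end
```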

Next, we have written some specs which you will find in the spec folder. I’m not going to go into detail on the tests I’ve written, but they should be pretty legible and cover just enough to demonstrate what we need to about Guard. Check out spec/helpers/application_helper_spec.rb, spec/integration/static_pages_spec.rb, and spec/routing/static_routing_spec.rb. The spec/rails_helper.rb and spec/spec_helper.rb files are the ones generated by the rspec-rails gem when we run rails generate rspec:install.

The Guardfile

The real magic of getting Guard and RSpec working together comes from the Guardfile, which is used to tell Guard what to watch for, and then what to do about it. I’ve modified the file generated by guard init rspec to demonstrate some common usage patterns based on the way I test. Watch statements use simple regular expressions to determine when to fire. As you build up your app, you will want to keep this file updated, as you may need to introduce new patterns. Let’s take a look through some examples in the file:

Run any spec that changes

Any file that is in the spec folder and ends with ‘_spec.rb’ will get run if it changes:

# run a spec file if it changes
watch(%r{^spec/.+_spec\.rb$})

Re-run the spec if the helper files change

spec_helper.rb and rails_helper.rb are files that RSpec uses to setup your specs. So if they change for any reason, it would affect all the tests.
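The snippet for this didn’t make it into the feed; the pattern generated by guard init rspec looks like:

```ruby
# the RSpec helpers affect every spec, so re-run the whole suite
watch('spec/spec_helper.rb')  { 'spec' }
watch('spec/rails_helper.rb') { 'spec' }
```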

Re-run the suite if support files change
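This section’s snippet is also missing; assuming support files live under spec/support, a typical watch is:

```ruby
# shared support code can affect any spec, so re-run everything
watch(%r{^spec/support/.+\.rb$}) { 'spec' }
```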

Integration tests for views and controllers

I have found it simpler to test my views and controllers by creating integration tests that mimic user behaviour (although the tests you see here aren’t great examples of that; we’ll be building out better ones in a future example on using Capybara). For this to work, I have a naming structure such that specs related to the foo controller (or views) live in spec/integration/foo_pages_spec.rb. Pluralized controllers are handled well too, so that app/controllers/things_controller.rb will trigger spec/integration/thing_pages_spec.rb:

watch(%r{^app/controllers/(.+)_controller\.rb$}) do |m|
  ["spec/integration/#{m[1].singularize}_pages_spec.rb",
   "spec/routing/#{m[1]}_routing_spec.rb"]
end

And if a view file changes, we run the corresponding integration test using this:

# if something within a view folder changes, run that spec
# eg app/views/static/anyfile runs /spec/integration/static_pages_spec.rb
watch(%r{^app/views/(.+)/}) do |m|
"spec/integration/#{m[1].singularize}_pages_spec.rb"
end

If any of the overall layouts change, it could affect any of our integration tests, so we should re-run them all:

watch(%r{^app/views/layouts/(.*)}) { 'spec/integration' }

Abstract classes

Lastly, you will inevitably end up having abstract classes in your app. If these change, you probably want to run all the specs for classes that inherit from them. I only included an example for application_controller.rb changing, but you can extrapolate from here:
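The example itself is missing from the feed; given the integration-test layout above, it might look like:

```ruby
# every controller inherits from ApplicationController,
# so re-run all the integration specs when it changes
watch(%r{^app/controllers/application_controller\.rb$}) { 'spec/integration' }
```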

A word on Guard with polling

The more astute amongst you will notice that Guard is started up with the -p option. This option forces Guard to poll for changes rather than listen for filesystem change notifications. The reason is that with our setup, using rsync to synchronize changes into the virtual machine causes Guard to not always respond to changes. Basically, an earlier issue with guard-rspec was solved by having RSpec only run when it sees a file as modified rather than added. However, with rsync it seems that the listen gem sometimes thinks the file has been added rather than modified. I’m trying to figure out a resolution for this, but in the meantime, polling it is!

But wait, there’s more!

This super-simple example should help you get up and running with Guard and RSpec. With the setup here you would find that every time the specs run, the Rails application has to load up. This makes every test run take at least a few seconds. In earlier versions of Rails I used to use Spork to pre-load the framework, but now Rails comes with Spring, which is much easier to set up. In my next post I’ll walk you through it; until then, happy testing!

Compiling Ruby from source for your development environment
Sun, 31 Aug 2014 20:45:32 +0000
http://eyefodder.com/2014/08/compiling-ruby-for-dev-environment.html

For most of us, downloading the development package of Ruby for your platform will suffice. For the curious, or those needing a Ruby version that doesn’t have a pre-built package available, you have to resort to compiling Ruby from source code. Fortunately, as we have built ourselves a clean development environment using Vagrant, this is actually a pretty simple task!

Whilst most of us are fine with buying an existing home, some of us want to build our own; and so it is with Ruby

Getting started

For this example we are just going to build a machine and install Ruby, and not worry about installing a Rails app, just like we did for our basic example. To get started, check out the source code from GitHub:

Getting the source code for compiling Ruby

So, the first step is to figure out the URL for grabbing the source code for the version you need. In the example code, we’re using the latest stable version of Ruby: 2.1.2, but that may be different by the time you read this. The best thing to do is to check the Ruby FTP site and find the tarball you need. Our package can be found at http://ftp.ruby-lang.org/pub/ruby/2.1/ruby-2.1.2.tar.gz. Now check out the code in build_ruby_from_source.sh:

curl --remote-name http://ftp.ruby-lang.org/... downloads the Ruby package with the same name as on the FTP site

tar zxf ruby-2.1.2.tar.gz unzips the file; we then jump into the created directory before…

Compiling the code using ./configure, make and finally make install

Compilation can take a good while, but trust that your box will get there eventually. The last step after compiling Ruby is to gem install bundler. We could have done this through Puppet, but as we need to have Ruby installed first, it is simpler just to install it here.

Idempotence, and saving time

In my post on setting up a base Rails environment I talk about the need for idempotence; that is, ensuring your scripts do no harm if they are run multiple times. Strictly speaking, the code above for compiling Ruby can be run multiple times without harm, and is idempotent. However, the build process takes so long that it’s really worth doing a check here and only installing if actually necessary. You see that in action in the following code:
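The original snippet didn’t survive the feed; the shape of the check, as a sketch, is:

```shell
#!/bin/bash
# sketch: skip the slow compile when the desired Ruby already exists
desired="2.1.2"
if ruby -v 2>/dev/null | grep -q "$desired"; then
  echo "ruby $desired already installed, skipping build"
else
  echo "ruby $desired not found, building from source"
  # curl / tar / ./configure / make / make install steps go here
fi
```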

Building A Pristine Rails Virtual Machine
Thu, 28 Aug 2014 16:53:45 +0000
http://eyefodder.com/2014/08/building-pristine-rails-virtual-machine.html

A (thankfully) long time ago in a galaxy far far away I developed web apps in Flash. And when I had to target different versions I had to go through a whole rigamarole of uninstalling and reinstalling plugins. Fast forward to now and I’m often working on projects in Ruby. I’ve found RVM and gemsets to be a lifesaver for managing multiple environments on different projects. But as the environment gets more complicated I’ve started to want each project to live in splendid isolation, in its own virtual machine. My last post described getting a bare-bones Ruby development environment up and running. This post details taking that a little further, and getting a Rails Virtual Machine up and running.

These are some Vagrants on the Rails. Read on for Rails on Vagrants…

Vagrant powered Rails Virtual Machine

In the last post, I showed how the initial machine was installed and configured with the packages we’d need for development. In essence we get as far as having the right version of Ruby and installing Bundler, which we will use to grab all our other dependencies. Now that we have everything we need to get up and running with our Rails app, we will execute one more script to fully provision our box. Check out the base_rails_app branch to get started:

So, after jumping into the /app directory (which is the guest machine path to our root development directory), we do the following:

bundle install — to install / update any gems specified in the Gemfile

rake db:create — to create any databases specified in database.yml

rake db:migrate — to execute any pending migrations

rake db:seed — to populate the database with any data specified in db/seeds.rb
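The script itself isn’t shown in the feed; collecting the steps above gives a sketch like this (the script name is assumed):

```shell
#!/bin/bash
# prep_rails_app.sh (sketch): run the regular app-update tasks
cd /app
bundle install
rake db:create
rake db:migrate
rake db:seed
```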

And that in essence is it! When you run vagrant up, you can be sure you have the environment up and running and the app in the newest state ready to develop on. All you need to do to fire up the server is the following:

vagrant ssh
cd /app
rails s

Now your app will be accessible on http://localhost:3001 and you can start developing. One thing to note is that in the Vagrantfile, we have set the script to run as privileged: false. Scripts run by Vagrant execute as the root user by default (useful for installing software), but for running Bundler and rake tasks this is a bad thing. So we just run the script as the non-privileged vagrant user and we’re good to go. You’ll also notice that the script is set to run: 'always', which means it will run every time we vagrant up. This one needs a little more explanation.
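The Vagrantfile lines in question aren’t preserved here; they would be something along these lines (the script name is an assumption):

```ruby
# provision with the app-prep script as the vagrant user, every time
config.vm.provision 'shell',
  path: 'prep_rails_app.sh', # hypothetical script name
  privileged: false,         # run as the vagrant user, not root
  run: 'always'              # re-run each time the machine comes up
```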

Why prep’ the Rails application every time?

When I have been working on projects with other people, I’ve found that the most common issues people have when running the app are usually caused by not installing required gems or not performing a migration that someone else has committed. What this script helps to do is mitigate that by ensuring that every time you bring the machine up, those regular updating tasks are performed. It’s worked really well for me but does come with one caveat: you have to make sure your scripts are idempotent.

Idempotent what?

When I first started using Puppet I was introduced to a new term – that of idempotence. What it basically means is that after you have applied commands once, they have no further effect. For example, if you run rake db:create multiple times, no harm is done. Same goes for bundle install and rake db:migrate. However, for rake db:seed you have to be a little more careful in how you write your seed code.

Beware multiple seeding

So the only real thing to watch out for in all this is that you write your seeds.rb in such a way that if you run it multiple times you don’t end up with multiple database entries. Gratuitous use of first_or_create! should help here. For example, instead of:
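The before/after gist is missing from the feed; the idea, with a hypothetical User model, is something like:

```ruby
# instead of creating a duplicate row every time the seeds run:
User.create!(email: 'admin@example.com', name: 'Admin')

# look the record up and only create it when it's missing:
User.where(email: 'admin@example.com').first_or_create!(name: 'Admin')
```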

This makes for more cautious code anyway, which, regardless of whether you’re using this process, is generally a good thing.

What’s next

Once I had the Rails Virtual Machine up and running, things were (almost) plain sailing. In an upcoming post I’ll talk about testing using Guard and RSpec, as well as how to get notifications from the Rails Virtual Machine to trigger Growl notifications on your desktop.

Using Puppet and Vagrant to make a one-click development environment
Tue, 26 Aug 2014 22:36:58 +0000
http://eyefodder.com/2014/08/one-click-development-environment.html

Keeping a development environment clean and tidy can be a bitch. When you are working on multiple projects across different platforms it can get messy really fast. And if you’re managing a team of people and need them all to run your app locally, things can get tricky too. Recently I’ve set up a process so that a pristine development environment can be spun up for a project in glorious isolation. And the best thing is, once you have it set up, you can spin up new environments with just one command!

Write once, deploy many times has been a goal for rather a long time

Vagrant, Puppet and VirtualBox, the three amigos

Vagrant, Puppet and VirtualBox—the lesser known nicknames of the three amigos

In order to get this utopian ideal working, we are going to set up a virtual machine and configure it just the way we want it for our project. There are a few great tools we’re going to use to get there. The first of these is VirtualBox. It’s open-source virtualization software from Oracle and is a great way to manage virtual machines on your desktop. Next up is Vagrant, which is designed to allow you to describe the machine (or machines) you need to run your project and how you want it networked. Finally, Puppet is used for managing the actual configuration of the machine. Although this might seem complicated, it’s actually really simple, especially as I’ve got an example set up that you can use.

Before you begin

You just need two things downloaded before you start. I develop on a Mac, so my instructions are based on that, but in theory this stuff should work on Windows or Linux distros too. Hit me up in the comments if you have issues. So without further ado, go get yourself the following:

- VirtualBox
- Vagrant

In case you’re wondering, we don’t need to install Puppet as it will be installed on the virtual machine.

O.K. Go!

The simplest way to show you how to do this is to clone the repo I set up to demonstrate the process: a github repo of simple standalone examples. You can grab the whole thing if you like, but to follow this example, check out the base_ruby_environment branch. If that’s all you want, you can type the following command:
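The original embedded snippet didn’t survive the feed, but assuming a standard GitHub layout for the spex repository (the exact URL is my guess), the command looks something like:

```shell
# Clone just the example branch (repository URL assumed)
git clone -b base_ruby_environment https://github.com/eyefodder/spex.git
cd spex
```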

When you’ve grabbed the repo, cd into the ops folder and type vagrant up. Now wait. First time round it will take about ten minutes or so as the environment is downloaded and configured. And then—well, that’s actually it! You now have, sitting there and ready to use, a brand spanking new virtual machine with sqlite, postgres, git and a bunch of development libraries, ready for you to go do your worst.

Exploring your new VM

I’ll explain everything that was setup in a minute, but first, let’s have a dig around. When the VM loads up you’ll see a welcome message, which tells you what to do. Basically you need to tell vagrant to keep a folder in sync between the host machine and your new guest. Typing vagrant rsync-auto will do this for you. There are other ways of doing this, but as I explain below, this is the best I’ve found for development. Next, to hop into the actual machine, we simply type vagrant ssh and vagrant opens up an ssh connection into the new dev environment. Simples! If you navigate to the /app folder inside your VM, you’ll see a facsimile of the repo. The ops folder itself isn’t copied into the virtual machine though.
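Putting the commands from this section together, a typical first session from the ops folder looks like this:

```shell
# Bring the VM up (the first run downloads and provisions the box)
vagrant up

# Keep the /app folder in sync with the host (leave this running)
vagrant rsync-auto

# In another terminal: hop into the guest machine
vagrant ssh
```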

You can now go off and develop to your heart’s content. In my next post I’ll show you how to set up a rails app running in the virtual machine. If you’d like to understand what went on in setting the VM up, then read on…

How the one-click development environment was setup step-by-step

The rest of this post details how the VM gets to the finished state. It’s certainly not essential reading, but you might find it interesting, or even essential for configuring / debugging. We will be skipping between Vagrant (defining the type of box that we are running) and Puppet (configuring the box to suit our needs).

Vagrant: The Vagrantfile

This is basically the configuration file for the virtual machine. You can check out the full reference material on the Vagrantfile here, but below I’ll walk you through the simple example we have going.

Vagrant Box and Hostname

The config.vm.box parameter tells Vagrant the box that the machine will be brought up against. This is either the name of a box you have already installed, or the name of a box in Vagrant Cloud, a place where boxes can be shared. In fact, you will see that in our Vagrantfile we are using a box I created—eyefodder/precise64-utf8. This is an Ubuntu 12.04 install that I made two minor changes to. The first was to update Puppet to 3.6.2, as there are significant improvements over the version that ships with the default box (2.7.x). The second change was to set the default locale on the box to en_US.UTF-8. I had a hell of a time getting postgres set up, and found that in order for it to work with the default rails encoding of UTF-8, you have to set the locale for the machine to UTF-8 and restart before installing postgres. This extra step seemed counter to my desire to get a development box up and running in one command, so I made the change, packaged up the box, and posted it on Vagrant Cloud for all to share.
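In the Vagrantfile, those two settings are just a couple of lines (the hostname here is an assumption—set it to whatever you like):

```ruby
Vagrant.configure("2") do |config|
  # The base box: Ubuntu 12.04 with Puppet 3.6.2 and a UTF-8 locale
  config.vm.box = "eyefodder/precise64-utf8"

  # The name the machine will be given (hostname assumed for illustration)
  config.vm.hostname = "spex-dev"
end
```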

The hostname is simply the name the machine will be given. You can change it to whatever you like.

Networking

Networking is pretty simple in our example, and the syntax makes things pretty easy to follow. For our box, we expose a couple of ports: port 3000 on the guest is forwarded to 3001 on the host, and port 22 to 2222. For more details on other networking options, check out the docs here.
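In Vagrantfile terms, that port forwarding is just two lines; a minimal sketch:

```ruby
Vagrant.configure("2") do |config|
  # Guest port 3000 (e.g. a rails server) is reachable on host port 3001
  config.vm.network "forwarded_port", guest: 3000, host: 3001

  # ssh: guest port 22 forwarded to host port 2222
  config.vm.network "forwarded_port", guest: 22, host: 2222
end
```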

Synced Folders

Synced folders allow you to work in your familiar desktop environment, yet have the files you’re working on magically appear in the virtual machine. Syncing can happen via a number of methods, and I’ve included two to give you a sense of it. The default when working with VirtualBox machines is a VirtualBox shared folder. This is a two-way sync: if you add files to the folder within the VM, they will appear on your desktop and vice versa. We use this for managing our Puppet install.
The second type we use is rsync, which copies files into the virtual machine. It’s a one-way operation, and in order to keep things in sync we need to tell vagrant to run vagrant rsync-auto to keep watching for changes. Why use this when other synced folder mechanisms seem so much simpler? The reasons are two-fold. First off, I’ve found the performance when running a server to be much better. Secondly, and this one is critical for me: when I’m developing, I use Guard to watch for file changes and use that to trigger selective running of my tests. Rsync seems to be the only way to get file change events triggered in the guest machine to kick off Guard. The net impact is just that I have to run that extra vagrant rsync-auto command, and I can live with that…
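The paths below are assumptions based on the layout described in this post, but the two synced folder styles look something like this side by side:

```ruby
Vagrant.configure("2") do |config|
  # Two-way VirtualBox shared folder, used for the puppet modules
  config.vm.synced_folder "puppet/modules", "/etc/puppet/modules"

  # One-way rsync of the repo into /app; `vagrant rsync-auto` keeps it
  # fresh, and rsync is what lets Guard see file-change events
  config.vm.synced_folder "../", "/app",
    type: "rsync",
    rsync__exclude: ["ops/"]   # the ops folder isn't copied into the VM
end
```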

VirtualBox Specific Configuration

Vagrant is a nice layer on top of many virtualization providers. And for the most part, as we’ve seen above, the differences between platforms are abstracted away. But for fine-tuning, you sometimes need to supply provider-specific configuration options. In our example, we are setting the maximum memory to 1GB and setting the name that appears in the VirtualBox GUI to ‘spex’. You can see all the other config options for VirtualBox here.
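That provider block is short; a sketch of what it looks like:

```ruby
Vagrant.configure("2") do |config|
  config.vm.provider "virtualbox" do |vb|
    vb.name   = "spex"   # the name shown in the VirtualBox GUI
    vb.memory = 1024     # cap the VM's memory at 1GB
  end
end
```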

Provisioning the Box

Provisioning allows you to install the software you need to use your machine for your particular use case in an automated manner. Vagrant gives you a number of options for doing this, including simple shell scripts and popular configuration management systems like Chef and Puppet. For our example, we are going to use a shell script to install some modules required by puppet, then stick with puppet for the rest of the configuration.
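In the Vagrantfile, that two-stage provisioning looks roughly like this (the paths are assumptions based on the folder layout described in this post):

```ruby
Vagrant.configure("2") do |config|
  # Shell first: install the puppet modules the manifest depends on
  config.vm.provision "shell", path: "install_puppet_modules.sh"

  # Then puppet takes over for the rest of the configuration
  config.vm.provision "puppet" do |puppet|
    puppet.manifests_path = "puppet/manifests"
    puppet.manifest_file  = "default.pp"
    puppet.module_path    = "puppet/modules"
  end
end
```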

The shell script

The first thing that is run is a shell script named install_puppet_modules.sh. This does a few things that get us ready for the main event of using Puppet to configure the machine:

A couple of things to note about the puppet module installation. We explicitly set the path the modules are installed into using --modulepath /etc/puppet/modules/, so we know exactly where to check whether they have been installed already. The goal of a config script like this is that you can run it multiple times without issue. I’ve also found that as I use more puppet modules, it’s helpful to look at their source code, and the place we installed the modules just so happens to be one of our shared folders. Neat huh? Now that we have these modules (and their dependencies) installed, we can get to the meat of configuration: the Puppet file.
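A sketch of what such a script might look like—the module names here are illustrative guesses (the post mentions apt repositories and postgres, so modules along these lines are plausible), so check the real script in the ops folder:

```shell
#!/bin/bash
# Install each puppet module only if it isn't already on disk,
# so the script is safe to run repeatedly.
MODULE_PATH=/etc/puppet/modules

install_module() {
  local name=$1                       # e.g. puppetlabs/apt
  local dir="${MODULE_PATH}/${name#*/}"
  if [ ! -d "$dir" ]; then
    puppet module install "$name" --modulepath "$MODULE_PATH"
  fi
}

install_module puppetlabs/apt
install_module puppetlabs/postgresql
```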

Puppet

Puppet is great, but there’s quite a learning curve to it. I’m not going to go into tonnes of detail here, and there is great free training material out there. I’ll walk you through what I’ve used puppet for in our simple example though. To follow the trail, check out ops/puppet/manifests/default.pp, as that’s our entry point into the configuration:

We see three include statements here, and they each refer to a different stage of configuration of our development environment. You will find each class in the ops/puppet/modules/spex folder. I’ll cover each of them in turn:
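Reconstructed from the description above (the class names are assumptions based on the stages and folder layout described), default.pp amounts to little more than:

```puppet
# ops/puppet/manifests/default.pp — one include per configuration stage
include spex::base_packages
include spex::postgres
include spex::ruby
```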

Base Packages

If you take a look inside ops/puppet/modules/spex/base_packages.pp you’ll see the general structure for declaring what packages you want to be present on your machine:

package { "build-essential":
  ensure => installed,
}

The puppet DSL is pretty simple to grok once you’ve seen some examples. Here, we tell puppet the name of the package we want, then tell it we want to ensure that it’s ‘installed’. The rest of the file does the same for other dev packages we want.

Postgres Setup

A little more complex is the setup of Postgres. The first block, trust_local_traffic, adds some rules to the postgres authentication file to allow us to connect easily within our dev environment. Basically the declaration:
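A sketch of what a trust_local_traffic block might look like using the puppetlabs-postgresql module—the exact parameters depend on the module version, so treat this as illustrative rather than the repo’s actual code:

```puppet
# Add a pg_hba.conf rule trusting all local TCP connections,
# so we can connect inside the dev VM without passwords
postgresql::server::pg_hba_rule { 'trust_local_traffic':
  description => 'Trust all connections from localhost',
  type        => 'host',
  database    => 'all',
  user        => 'all',
  address     => '127.0.0.1/32',
  auth_method => 'trust',
}
```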

Ruby Setup

Last but by no means least, we need to have Ruby up and running. This of course assumes you’re going to develop in Ruby; if you’re using another language, you can use this as a guide. The first thing we do is to add the package repository that hosts Ruby packages for Ubuntu releases from 10.04 onwards:
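The repository in question is likely something like the Brightbox PPA, which packages current rubies for Ubuntu—that’s an assumption on my part, so check the manifest. With the puppetlabs-apt module, adding it is one resource:

```puppet
# Add the PPA that hosts up-to-date ruby packages for Ubuntu
# (repository name assumed; verify against the real manifest)
apt::ppa { 'ppa:brightbox/ruby-ng': }
```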

Next, we use Hiera to grab a configuration variable for the package we actually want to install. Hiera is a simple way to separate code from configuration when using puppet. We could have used it more in the setup of the app, but for now, suffice it to say that config variables are kept in /ops/puppet/hieradata/common.yaml. In there you will see that we are installing the ruby2.1-dev package. You could swap this out for a different package if you needed it. The ruby package is installed just like we saw in the base_packages.pp file.
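The lookup itself is a one-liner. The key name here is my guess, but the shape is right: the yaml file holds the package name, and the manifest reads it with hiera():

```puppet
# ops/puppet/hieradata/common.yaml contains something like:
#   spex::ruby_dev_package: 'ruby2.1-dev'
$ruby_dev_package = hiera('spex::ruby_dev_package')

# Install whichever ruby package the config names
package { $ruby_dev_package:
  ensure => installed,
}
```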
Finally, we install bundler:

Note that we tell Puppet that bundler is being installed using gem as a provider rather than the default apt-get. This shows how simple it is to define the packages we want as well as where they come from. The last thing I want to show about this class is how we specify an order of execution. Usually with Puppet, we let it decide the most efficient way to execute the configuration of the machine. But sometimes things have to happen in a specific order. So here, the addition of the repository is specified to execute before ruby is installed; and that before bundler is installed, we require ruby to be present.
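In Puppet syntax, that ordering is expressed with the chaining arrow and the require metaparameter; a sketch, with resource titles assumed to match the earlier snippets:

```puppet
# The ruby repository must be added before the ruby package installs
Apt::Ppa['ppa:brightbox/ruby-ng'] -> Package['ruby2.1-dev']

# Bundler comes from rubygems rather than apt, and needs ruby first
package { 'bundler':
  ensure   => installed,
  provider => 'gem',
  require  => Package['ruby2.1-dev'],
}
```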

And Finally…

Finally, once Puppet has done its magic, we run a simple shell script post_up_message.sh to give you a bit more detail about what to do next. In my next post about using the environment for Rails development, we’ll use this for a little more heavy lifting, but for now, go exploring!

An external keyboard for the iPad; making the right choice

Mon, 13 Feb 2012

As I mentioned in my previous post, I’m traveling for the next few weeks on a sabbatical to Europe and Asia. I’m keeping a journal of my travels on my tumblog, and from time to time I figure I’ll be writing longer text posts here. Because of this, I figured that I should bring an external keyboard with me, and I’ve already found it to be a tremendous help. Even writing short emails is so much easier on a real keyboard. When I was researching, I quickly narrowed it down to two choices: the Apple wireless keyboard (with the Incase origami case) and the Zagg Zaggfolio case and keyboard.

Apple keyboard and Incase origami case


The first setup I looked at was to just use the Apple bluetooth keyboard together with the innovative Incase origami case. There’s not much point in me writing a review of the keyboard as there are already a tonne out there. Suffice it to say that it’s built well, is full sized (minus the number keypad) and feels great. The case is the interesting part of the kit. Based on the Japanese art of origami, the case folds into a stand for the iPad when you want to type. When you first put it together, before the iPad is in place, it seems a bit flimsy, but as soon as the iPad is in, it’s super stable and looks pretty neat too. There are velcro straps to hold it all together; I saw some reports that these might come unstuck after a lot of use, so I’ll have to give an update at the end of the trip.

Zagg Zaggfolio case

Zagg Zaggfolio case & keyboard

The other setup I looked at was the Zaggfolio case from Zagg. The nice thing about this setup is that it combines the case and keyboard in one package. There are also function keys specially programmed for the iPad to search or mimic the home key (something Apple’s offering is lacking). The case replaces the Apple Smart cover if you have one, and the iPad sits face down in the case when not in use. Although this seems like a neat solution, I ultimately decided against the Zagg for a couple of reasons. First up, the keyboard isn’t full sized, but a little shorter as it needs to fit within the dimensions of the iPad. I’m a pretty fat-fingered typist at the best of times, so I was worried that the keyboard would end up just being too annoying. Secondly, because the case and keyboard are all in one, you can’t really go traveling without the keyboard and just the tablet. I think I will only be using the tablet for writing 5-10% of the time, so I don’t want to have to bring the keyboard along for the ride all the time.

First impressions

So far, I’m really happy with the keyboard setup. I really like the key-feel of the Apple keyboard, and writing an article like this is almost as painless on the iPad as it is on my laptop. I’ve found a couple of gripes with using the keyboard so far; one general, and one Apple-related. The first is that keyboard arrow key navigation doesn’t seem to be built in to Safari. If I’m typing a url or search term, auto-complete suggests some options and my muscle memory keeps trying to use the down arrow key to select an option. I assume this would be the same no matter what keyboard I chose, and it’s just something I’ll have to get used to. The second gripe—and it’s minor—is that when I’m done with the keyboard, I have to remember to turn it off, else I don’t get the on-screen keyboard when I’m using the iPad. It’s easily fixed by switching bluetooth off, but nevertheless has been a little bit of a nuisance from time to time. That said, I think an external keyboard is an essential accessory for the tablet if you are going to be writing more than the occasional email; so far my money is on the Apple / Incase combo.

Tablet only living—A geek travelling light

Tue, 31 Jan 2012

In about a week’s time I’m taking off on a sabbatical. I’m fortunate enough to have employers who have agreed to let me off the leash for a few weeks, so I’m making the most of it.

On the road again, and learning to be more like a car than a truck

I’m using the opportunity to take myself off on a journey that will take me from Berlin to Bangkok, from Paris to Paro. But this isn’t just a grand tour where I’m off to ‘find myself’ (although I will be travelling to PBH)—I’m also using this as an experiment in tablet-only living.

For most of us, being connected all the time is as natural as breathing the air around us. A typical day for me involves checking emails on my phone in the morning, working on my laptop during the day, and watching TV on the iPad at night. What happens if you take the laptop away? Do we need to have a laptop to be a functioning (and contributing) member of digital society? When Steve Jobs was interviewed about this a couple of years ago, he likened the relationship of tablets to PCs to that between trucks and cars:

“When we were an agrarian nation, all vehicles were trucks…. but as we moved to a more urban society, now most people use cars, so that now something like one in every twenty five vehicles is a truck. PC’s are like trucks.” —Steve Jobs

I couldn’t agree with this more. I’ve seen how people have moved from having a desktop PC at home to having a laptop—before long, I should imagine, it will be rare to see anything other than a tablet in the house. I can imagine a similar shift for many people at their workplace over time. Right now, the tablet is still in its nascent form in terms of both software and hardware. We are starting to see reasonable content creation apps becoming available (I’m writing this post on one now). As I’ve been planning my trip, I’ve tried to predict what sort of scenarios I might find myself in, and have been setting up my iPad in preparation for them. I’ll share these over the next couple of weeks as I get my stuff together for the trip.

“The greatest reward and luxury of travel is to be able to experience everyday things as if for the first time, to be in a position in which almost nothing is so familiar it is taken for granted.” —Bill Bryson

I’m excited to travel. Not just to parts of the world I haven’t been before, but also to find out what makes for a good tablet lifestyle. As Bill Bryson said in the quote above, one of the great rewards of travel is to experience everyday things anew. I think this is just as true for my digital journey as for the physical trip I’m going on. What I’m looking forward to are the unexpected changes in online behavior—both positive and negative—that will suggest interesting opportunities to differentiate and enhance tablet living away from its PC roots.

Performance – spend it wisely and never raise the debt ceiling

Sat, 06 Aug 2011

The US had its credit rating downgraded yesterday. For many people, it seems at first a little abstract and vaguely ominous—a chance for China to get its own back after being harangued by the US over its undervaluation of the Yuan. Maybe the outcome will be catastrophic, maybe not so much. Only time will tell. What Standard and Poor’s have basically said is that there is more chance than before that America will fail to pay its bills on time. But what does this have to do with quality software and how it gets built? Well, here’s the thing—the way the US government is managing the economy and the debt ceiling is very similar to the way many software projects go slowly south: everyone recognizes that something needs to be done, but no one can agree on what the right course of action actually is.

Don't overpromise and foreclose on your project

Performance as money

A good way I think about performance, and the reason it’s on the bottom of the heap, is sort of like performance is like money, it’s like currency. You say what good does a stack of hundred dollar bills do for you? Would you rather have food or water or shelter or whatever? And you’re willing to pay those hundred dollar bills, if you have hundred dollar bills, for that commodity.

When he talks about performance being on the bottom of the heap, what he’s talking about is that other things are more important than performance: features, security, ease of use and so on.

Performance constraints as the debt limit

I’ve posted before on the non-functional requirements every application should have, and first and foremost amongst those constraints is performance. The deal you need to make with yourself and your users is: how much of that stack of performance bills are you willing or able to spend to get there? All of the decisions you make, from language choice to functionality to visual design, are informed by this decision. Setting your performance constraint is just like setting your household budget (or the government’s debt ceiling).

And the credit rating?

To extend the metaphor a little further than is natural, the credit rating is akin to the likelihood that your project will fail overall. We don’t have a good measure for that in software development, but I do think drawing the parallel is interesting. If you read the S&P downgrade overview, you will see that the decision was in part due to the rising debt burden (technical debt?), but more significantly it was due to the infighting and the lack of a clear plan forward:

More broadly, the downgrade reflects our view that the effectiveness, stability, and predictability of American policymaking and political institutions have weakened…

This sounds so much like the entrenched behavior of stakeholders on a poorly managed software project (and its effect) that I couldn’t pass on the opportunity to use it. Simply put, if you lack alignment and a clear plan about what is acceptable, what is important, and how you are going to spend your resources (or generate income), things aren’t going to be plain sailing.

Performance—Avoid foreclosure

O.K. I really am stretching it here, but I wanted to finish up with how I think you can best avoid getting mired in software rot, or simply building something that no-one wants to use. And it’s really similar to being financially prudent:

- Understand that almost everything you do bears some cost to performance.

- Decide what is an acceptable level of performance…

- …and spend within your means.

- Don’t go into debt (i.e. let performance drop below your acceptable level).

But sometimes it’s going to happen. If it does, get a plan in place quickly to address it.