If you have a GPG keypair, believe in strong passwords and are paranoid (you don’t trust password management tools), then pass is the tool for you. I’ve been using it for so long that I can’t remember when I started, and I have to say, I really like it.

Pass is a FOSS tool that lets you roll your own password management tool-chain, and if that sounds hard, it’s not. It works by storing your passwords, security questions, etc. in version-controlled plain-text files and encrypting them with your keys. You then clone your password repo and copy your GPG keys to the devices on which you would like to access your passwords.


A known downside of pass is that it leaks metadata. The workaround is to store all your passwords in a single file. People in ##crypto on Freenode also recommend keepassxc.

A short overview of pass:

Have a GPG keypair and (not required, but a really good idea) a hosted version control system.

However, this isn’t a post about pass; it’s about how to use pass on iOS, and there’s a tool, Pass for iOS, that does that. Assuming you’re already a pass user, the question is how to get pass working on your iPad, iPhone or whatever.

How do we transfer our SSH and GPG keys?

I think the easiest way would be via iTunes but that doesn’t feel right at all. Why would I trust a 3rd party server with my private key?

What I decided to go with is a tool, asc-key-to-qr-code-gif, that converts ASCII-armored GPG keys to QR codes; I then scan those QR codes with Pass for iOS. It’s all open source tools with no 3rd-party servers involved. Tell me what you think about this “convert your keys to QR code” business via a tweet.


Setup and installing dependencies

First, I had to install some dependencies via homebrew, including zbar, which can decode QR codes and so lets you catch any errors in the generated ones.

Export your GPG keys into ASCII armored files

Generate and scan the GPG gifs:
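The export commands didn’t survive in this copy of the post, so here is a sketch of the export step. It uses a throwaway key in a temporary GNUPGHOME so it is safe to run as-is; in practice you would export your own key ID. The exact asc-key-to-qr-code-gif invocation isn’t shown in the post, so it is only referenced in a comment.

```shell
# Sketch of the export step. Uses a throwaway key in a temporary
# GNUPGHOME so it is safe to run; substitute your own key ID in practice.
export GNUPGHOME="$(mktemp -d)"

# Stand-in keypair (your real keypair already exists, so skip this line).
gpg --batch --passphrase '' --quick-generate-key demo@example.com default default never

# Export the public and secret keys as ASCII-armored files.
gpg --armor --export demo@example.com > pgp-public.asc
gpg --armor --export-secret-keys --pinentry-mode loopback \
    --passphrase '' demo@example.com > pgp-private.asc

# These .asc files are what asc-key-to-qr-code-gif consumes.
head -n 1 pgp-public.asc
```

The resulting gifs cycle through QR codes that Pass for iOS scans back into a key.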

SSH

I prefer to have a different SSH key for each device; that way it’s easy to revoke access for a single device. Moreover, using ed25519 keys on phones often fails because of the OpenSSH versions they ship with, so I just go with RSA, which is the default anyway. In this case it even had to be in PEM format due to the version of GitSSH on iOS. Based on the Supported Unsupported Key Algorithms wiki page and issue 218, generate device keys with:
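The actual command didn’t survive extraction; given the constraints above (RSA, PEM format), it would look something like the following, where the filename and comment are placeholders:

```shell
# Per-device RSA key in PEM format (-m PEM needs OpenSSH 7.8 or newer).
# Filename and comment are placeholders; generate one key per device.
ssh-keygen -t rsa -b 4096 -m PEM -C "ipad" -f ./ipad_rsa -N ""

# PEM-format private keys start with "-----BEGIN RSA PRIVATE KEY-----"
head -n 1 ./ipad_rsa
```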

In this post we shall implement a continuous deployment pipeline using ansible, travis ci and git.

During implementation we don’t have explicit steps such as planning, provisioning and configuration management that we mentioned in the previous post; those are conceptual. The flowchart below represents the actual places where our software should live at all times. Think of each component in the flowchart as a service that exposes an API.

Deploy server

In the diagram above we introduce a deploy server. This is the host from which you can access your other servers, such as production, staging, etc.

Exposes: ansible, ssh

Git (Version Control)

We want to have playbooks, deploy scripts and code in version control.
What we get from version control that is necessary for continuous deployment is:

tags get deployed to the main production environment

master branch gets deployed to the main staging environment

other major branches get deployed to other staging environments of our choosing

Not all these steps need to be done for it to be a continuous deployment pipeline. For example: for this blog, changes that get merged into master go straight into production. This is because the application is really small and simple so before anything goes into master I know it’s error free. Moreover, even if the blog were to experience downtime I have very little to lose compared to a business. This is the same model that github pages uses; what is in master is pushed into the gh-pages branch which is basically a github pages blog’s production environment.

Exposes: git branches and git tags

Ansible (Provisioning and Configuration Management)

Assume you have a fresh server, such as one Digital Ocean would offer, or a fresh EC2 instance. We want an ansible play that creates an unprivileged user with SSH authentication. So we have to do the following locally or on our deploy server:

generate an SSH key pair without a passphrase

add the public key of the generated pair to the deploy user’s authorized_keys file

push the private key of the generated pair to travis ci so that the travis container can authenticate as that user.

Generate an SSH keypair without a passphrase

$ ssh-keygen -t ed25519 -C "travis@travis-ci.org"

Under Enter file in which to save the key... type in travis-ci.
Under Enter passphrase (empty for no passphrase): just press enter.

This will create two files: travis-ci and travis-ci.pub.

Add the public key to the deploy user’s authorized_keys

Write a play to prepare the deploy environment.
Copy the contents of travis-ci.pub file to a vars file in your the playbooks For example here’s my vars file.

This creates the deploy user and adds travis-ci.pub to the deploy user’s ~/.ssh/authorized_keys.
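A minimal sketch of such a play, assuming the user is called deploy; the host group, user name and key filename are placeholders, not taken from the post:

```yaml
# Hypothetical play: host group, user name and key path are placeholders.
- hosts: production
  become: true
  tasks:
    - name: Create an unprivileged deploy user
      user:
        name: deploy
        shell: /bin/bash
        state: present

    - name: Authorize the Travis CI public key for the deploy user
      authorized_key:
        user: deploy
        key: "{{ lookup('file', 'travis-ci.pub') }}"
        state: present
```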

Make your target a git server

For commands like git push to work from travis-ci to your deploy user, your server has to be ready to receive git push commands. I will explain this in a different post, but for now what you need is a play that:

Installs git

Creates a target git repo which we shall push to

Is able to overwrite the current contents of the repo when a change occurs.
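Under the hood, such a play boils down to something like the following shell steps (the repo path, work tree and branch are illustrative, not from the post; installing git itself is distro-specific and omitted here):

```shell
# What the play automates, as plain shell. Paths are illustrative.
BASE="$(mktemp -d)"          # stand-in for the deploy user's home
REPO="$BASE/site.git"        # bare repo that travis pushes to
WORKTREE="$BASE/site"        # where the checked-out code lives

git init --bare "$REPO"
mkdir -p "$WORKTREE"

# post-receive hook: overwrite the working tree with whatever was pushed
cat > "$REPO/hooks/post-receive" <<EOF
#!/bin/sh
GIT_WORK_TREE="$WORKTREE" git checkout -f master
EOF
chmod +x "$REPO/hooks/post-receive"

ls "$REPO/hooks/post-receive"
```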

Travis CI (Continuous Integration and Continuous Deployment)

Travis CI is a mix of open source and some proprietary tools.
To quote them “Travis CI is run as a hosted service, free for Open Source, a paid product for private code, and it’s available as an on-premises version (Travis CI Enterprise).”

Here’s their github page and info page. To learn how to get started with travis in your project, you can read the getting started doc. Moving on, I assume you have (gained) enough experience with travis to continue.

Travis will run tests and/or build our application on every branch, or on specific branches, based on rules that we set. We then build on this functionality to deploy to a target based on various rules, the obvious one being when our tests pass.

In our case we want to run tests and then deploy to the relevant target. In your .travis.yml file you can use one of the following travis ci build phases: after_success or deploy. I prefer after_success when I want to run a deploy script and list all the commands that my script would run, and deploy for already-supported deploy environments. This is because the script deployment feature is experimental at the time of writing.

Exposes: .travis.yml

Continuously deploying to a host

We want to push code from our travis container to our server. Here are some essentials to guide you in creating a .travis.yml file that deploys to your target.

Using after_success

The branches section is essential in this case because it ensures that the .travis.yml file will only be run for the master branch.
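Putting the pieces together, an after_success-based .travis.yml might look like this; the language, test command and deploy script path are placeholders, not from the post:

```yaml
# Illustrative .travis.yml; commands and paths are placeholders.
language: python

script:
  - make test            # continuous integration: run the tests

after_success:
  - ./scripts/deploy.sh  # push to the deploy target when tests pass

branches:
  only:
    - master             # only run this pipeline for master
```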

I just explained how we can set up a project so that the CI tool handles all deploys going forward after the initial setup. If anything goes wrong we can go into the deploy server, run an ansible script and have it roll back to a specific tag/branch.

In the next post we shall talk about continuous deployment in a microservice architecture using the same tools but deploying to AWS ECS.

Pipeline as Code - overview
http://blog.urbanslug.com/posts/2017-10-13-code-pipeline-overview.html
Posted on October 13, 2017

Pipeline in this context refers to the collection of steps software goes through from planning to deployment. Pipeline as code means storing this pipeline in an executable and/or version-controllable way.

Why does this matter? A code pipeline that is executable and/or version controllable:

is easy to keep track of as changes occur

makes it possible to keep track of the actual and all possible agents of change (people and/or hosts)

reduces repetition and consequently saves time

is easy to delegate parts of to tools or completely automate

has clear and consistent history

has an immutable code pipeline history, meaning we can revert to a previous stable state

in case of failure, the broken state can be reproduced and post-mortems performed

is much easier to maintain and keep track of in complex architectures such as microservices

makes it much easier to build tools that lower the bar of entry into ops such as running ansible plays and chatops bots

Pipeline as code is the next step in planning, provisioning, configuration management, application deployment, continuous integration and continuous deployment.

It’s also a great way to manage growing complexity in terms of both the architecture and the teams involved. I just threw a number of buzzwords around, so let me explain each of them and why they matter.

It’s important to note that the tools used in each step overlap a lot, and a tool is likely to show up in multiple sections.

Planning

Since we can’t execute plans as code (yet), we have to settle for version controlling them. Save your execution plans as documentation files in a docs/ directory or a git submodule (or any other format) and put them in version control.

You can also commit .org files you created during meetings, export them into .md and add them as docs.

Tools: version control systems

Provisioning

Provisioning derives from the word provide; in this context it means providing everything that your application will need to run.

It is an implementation of the infrastructure diagram/plan; it involves everything needed to run the software. That is: where to host it, how many servers, OS versions, server requirements, dependencies, file system, directory structure. The answer to whether to use a vendor solution like AWS Lambda or ECS lies here.

You probably need to do this once or at most 3 times ever unless you keep changing core infrastructure. You could put this in an ansible script, ECS task definitions, docker images, Amazon Machine Images, virtual machine images et cetera.

Tools: Packer, Terraform, Ansible, Kubernetes pods, ECS clusters.

Configuration management

Applications today are a collection of tools combined to solve a need. In the example of a simple web application we have a database, an app, an app server and a webserver. Configuration management is basically managing the glue that binds these tools together; which commands to run, which services to start and stop and when, arguments, environment variables, order of running them and so forth.

Tools: ansible vars/vault, ansible plays

Application deployment

This is putting all the parts of the application that need to run on their respective servers, starting them, and making sure they’re all working together correctly. Here you have vendor tools, such as Identity and Access Management from AWS, which you can build on top of. It means having the following in an executable and version-controllable form: the deploy server, the deploy user, the deploy scripts available to them, and the actual deploy commands together with the order in which to run them. You will only need to do this during the first deployment, or when something goes terribly wrong and you have to roll back, but even then it’s still going to be a few commands or just one. You can also use other tools for deployment, such as bots.

Tools: ansible, puppet, chef

Continuous integration

This is running tests, style checks and builds of the application to catch errors either in the code or in the way parts of it integrate with each other.

Tools: travis ci, circle ci, gitlab ci

Continuous deployment

Once the continuous integration tests run and pass, have a tool compile a binary or create a commit, push it to a deploy environment and make sure it’s running.

In this post I’ve explained how the pipeline can be represented as code, but only as separate components, not how these components can be combined to work as one.

In the next post I’ll explain how you can use free tools and some open source tools to create a code pipeline that runs from provisioning, configuration management, version control, continuous integration and continuous deployment requiring very little input from devops and with as little complexity as possible.

dm-crypt, luks, systemd-boot and UEFI on Archlinux
http://blog.urbanslug.com/posts/2016-09-11-dm-crypt-systemd-boot-and-efi-on-archlinux.html
Posted on September 11, 2016

Here I provide a little help for setting up an Archlinux system with full disk encryption, UEFI and systemd-boot as the boot loader. This is really just what I learned from the arch wiki, Mattias Lundberg’s gist and Brandon Kester’s post. I’ll assume you have installed arch before and just need a little help getting everything up and running.

Desired setup:

100M /boot
100G /root
8G swap
the rest for /home

Unlike Mattias Lundberg, I see no reason for separate boot and EFI partitions, although some people have a problem with having /boot on fat32 for permissions reasons.

“The resume= option will enable hibernation on the device. The nice thing about having an encrypted swap partition is that your hibernation data will be encrypted just like the rest of the at-rest data. This makes hibernation a very secure alternative to leaving your machine in stand-by mode, which is vulnerable to the cold boot attack.”

During a brew upgrade I noticed that apple has deprecated the use of OpenSSL.

Apple has deprecated use of OpenSSL in favour of its own TLS and crypto libraries

==> Downloading https://homebrew.bintray.com/bottles/openssl-1.0.2h.el_capitan.bottle.tar.gz
######################################################################## 100.0%
==> Pouring openssl-1.0.2h.el_capitan.bottle.tar.gz
==> Caveats
A CA file has been bootstrapped using certificates from the system
keychain. To add additional certificates, place .pem files in
/usr/local/etc/openssl/certs
and run
/usr/local/opt/openssl/bin/c_rehash
This formula is keg-only, which means it was not symlinked into /usr/local.
Apple has deprecated use of OpenSSL in favor of its own TLS and crypto libraries
Generally there are no consequences of this for you. If you build your
own software and it requires this formula, you'll need to add to your
build variables:
LDFLAGS: -L/usr/local/opt/openssl/lib
CPPFLAGS: -I/usr/local/opt/openssl/include

I know people are getting tired of all the OpenSSL holes, but this sounds like PR or overkill. Why not use something that exists? Why not name the lib they are favouring over OpenSSL? Is this security by obscurity, or do they assume their users won’t understand it? Notice this: “Generally there are no consequences of this for you.”

I’ve heard good things about LibreSSL, which I would assume is what they’d use, since it’s a BSD project and OS X is BSD-derived.

I wonder what the future of OpenSSL is though this has been a long time coming. I also wonder whether we’ll start seeing more of this in the Linux server world.

These bindings should work for emacs from 24 upwards.
My emacs config is in my dotfiles.

| Key binding | Name | Purpose | Package | From emacs version |
|-------------|------|---------|---------|--------------------|
| C-x SPC | (rectangle-mark-mode) | Select a rectangular region. | None | 24.4 |
| C-c SPC | (ace-jump-mode) | Jump to a letter at start of a word. | ace-jump | unknown |
| C-x C-w | (write-file) | Save current file as a different file | None | unknown |
| C-g C-/ | Redo | Redo something you’ve undone. | None | unknown |
| C-/ | Undo | Undo something you’ve done. | None | unknown |
| C-x k | (kill-buffer) | Close the current buffer. | None | unknown |
| C-x C-f | (find-file) | Visit a file | None | unknown |
| C-x C-v | (find-alternate-file) | Visit a different file | None | unknown |
| C-x C-r | (find-file-read-only) | Visit a file as read-only | None | unknown |
| C-x 4 f | (find-file-other-window) | Visit a file in another window/buffer | None | unknown |
| C-x 5 f | (find-file-other-frame) | Visit a file in a new frame | None | unknown |
| C-a | Jump to start of line | Not emacs specific but IBM Home | None | all |
| C-e | Jump to end of line | Not emacs specific but IBM End | None | all |
| C-s M-% | | Queried search and replace | None | all |

Handy information

For redo keep repeating C-/ to keep redoing, C-g isn’t repeated.

If you “visit” a file that is actually a directory, Emacs invokes Dired, the Emacs directory browser. See Dired. You can disable this behavior by setting the variable find-file-run-dired to nil; in that case, it is an error to try to visit a directory.

When the emacs version is unknown it will most likely work for your version of emacs.

Lately I’ve been reading huge haskell code bases quite a lot. One thing I have found helpful, when documented, is the imports section, as well as modules having an explicit list of what they export.

I don’t know whether this is just a non-experienced programmer issue or it cuts across the board.

Documenting imports can happen:

explicitly through:

comments

implicitly through:

uniquely qualified imports.

import qualified A as X
import qualified B as Y

over

import qualified A as X
import qualified B as X

explicit import lists (i.e. using parentheses to specify exactly what one wants to import)
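As a small illustration of explicit import lists (the modules and names here are arbitrary, not from any particular code base):

```haskell
module Main (main) where

-- Explicit imports: a reader sees exactly which names come from where.
import Data.Char (toUpper)
import qualified Data.Map as Map

main :: IO ()
main = do
  -- Count letters, upper-cased, using only the names imported above.
  let counts = Map.fromListWith (+) [(toUpper c, 1 :: Int) | c <- "haskell"]
  print (Map.toList counts)
```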

Basically anything that saves the programmer effort or time in:

Understanding what you’re importing

Why you’re importing it

Seeing the usage of a function and quickly knowing where it’s from

I can’t quantify or explain exactly how this helps me understand the code, but it really does, especially when I can’t hoogle a function name (the internet connections aren’t too fast in these parts). It saves me the time of having to go through several modules trying to figure out where an import is from.

Most of the time we’re in just too much of a hurry to do this, I understand. I’m a victim of some terrible coding practices myself, but I think it’s a good habit to adopt.

Well, the reader can use tools like the repl to query where these imports are from, but when you can save them the time and effort of querying for meta information, please do so. I know it’s not possible to do it all the time and everywhere, but please do it when and where you can.

As you can see one can learn quite a bit just from looking at the imports and module documentation alone.

The issue is that it sometimes takes a while for one to clean up their code like this so it’s okay if your imports aren’t legible before refactoring.

Another thing: I don’t know if it’s just an emacs thing, but I can jump to my imports and move between sections of imports with f12. This is advantageous to both the one writing the code and the one reading it.
The point of all of this is that well-structured and well-documented imports and exports are a win for both the programmer and the reader.

wai-devel final submission
http://blog.urbanslug.com/posts/2015-08-21-wai-devel-final-submission.html
Posted on August 21, 2015

This is the final day of code submissions to Google for Google Summer of Code. So it’s only fair that I give the community a report on the current state of affairs regarding wai-devel.
This is more of a very detailed changelog than a blog post about wai-devel.

What wai-devel expects from your application.

UPDATE: Due to its reliance on ide-backend, you also have to set the environment variable GHC_PACKAGE_PATH.

What PORT is used for:

Your application shall listen for connections on localhost:<PORT>. By default, wai-devel creates a reverse proxy from port 3000 to your application, which is listening on PORT.
You can change the default port 3000 by setting the environment variable PORT yourself.

wai-devel takes PORT and then cycles through port numbers, adding 1 to PORT until it finds one that is free, sets that as the destination port, and changes the PORT environment variable to that destination port. This way we can reverse proxy from PORT to a free port.
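The probing loop can be sketched like this; it is an illustration of the idea using the network package, not wai-devel’s actual code:

```haskell
import Control.Exception (IOException, try)
import Network.Socket

-- Starting from the value of PORT, try to bind successive ports until
-- one is free; that port becomes the destination port.
findFreePort :: PortNumber -> IO PortNumber
findFreePort p = do
  sock   <- socket AF_INET Stream defaultProtocol
  result <- try (bind sock (SockAddrInet p (tupleToHostAddress (127, 0, 0, 1))))
              :: IO (Either IOException ())
  close sock
  case result of
    Right () -> pure p                 -- p is free: use it
    Left _   -> findFreePort (p + 1)   -- p is taken: try the next one
```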

Reverse proxying is important for error reporting, future proofing and other ways of abstracting away the services wai-devel provides to your application.

More reliable dirtiness checking.

wai-devel uses the module you have chosen to find the files to watch for changes. It watches the files that module imports and their Template Haskell dependencies, as well as the cabal file.

Compatibility with Haskell wai-applications.

wai-devel works with your usual yesod scaffold from yesod-bin out of the box and should work with other haskell wai apps as long as they use the PORT environment variable.

You can pass the filepath via the command line argument --path or -p, and the function to run via --function or -f. When these aren’t passed it assumes Application.develMain (borrowed from yesod).

Yet to come.

I will be actively developing wai-devel well after Google Summer of Code is over (which is today).

The host:port pair is expected to be passed in as two environment variables, wai_host and wai_port, for example:

export wai_host=127.0.0.1

export wai_port=3001

Better yet, the application itself should set the environment variables as in the example code below.
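A sketch of what that looks like; this is illustrative, not the post’s original example:

```haskell
import System.Environment (setEnv)

main :: IO ()
main = do
  -- The application announces its own host and port by setting the
  -- environment variables itself, rather than relying on shell exports.
  setEnv "wai_host" "127.0.0.1"
  setEnv "wai_port" "3001"
  putStrLn "wai_host and wai_port set"
```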

wai-devel looks for a function Application.develMain. I have a fork of yesod that builds a yesod binary which generates a scaffold with this function implemented. I recommend using it to generate the scaffold with which to try out wai-devel.

The specifics of how to set the port and host within yesod applications will obviously change. The point of this fork is to generate a scaffold that works with wai-devel out of the box.

During socket creation I made sure that the socket option ReuseAddr is set to 1.
This way the operating system doesn’t hold on to the socket after the program exits, which matters when wai-devel notices file changes and the development server is restarted.

Ignoring files and directories

wai-devel expects that there will be a single Main.main function. In the case of having more than one, for example with yesod, we ignore all but one. Specifically, we ignore the file app/DevelMain.hs. There is no need for app/devel.hs so it has been removed in my fork.

Moreover, wai-devel ignores files in your test/ directory.
This is because wai-devel depends on ide-backend, which will attempt to build all files in the current working directory, including your test directory. This leads to a world of hurt because the test/ directory also has a Main.main function.

Please report an issue if you would like any file ignored during builds.

Moved to stack

Since the Haskell community has moved in this direction, so has wai-devel.
wai-devel only depends on cabal in that stack and ide-backend depend on the Cabal library. Otherwise, the cabal binary is not used and hasn’t been tested to work with wai-devel.

Compatible versions of GHC

Currently wai-devel is built and tested against:

GHC-7.8

GHC-7.10

Regarding file watching

wai-devel watches for file changes on files with the following extensions:

hamlet

shamlet

julius

lucius

hs

yaml

When a change takes place, wai-devel will recompile and re-run your application on localhost:3001, or display any error in the browser at localhost:3000.

If you would like another extension added to the list of file extensions to watch, please report it as an issue.

Command line arguments

Currently wai-devel takes only these two arguments, and both are optional. If you feel the need for more arguments, please report it as an issue on github.

-r turns off reverse proxying. If this flag is passed, you will access your application at an address that is specific to your web application or web framework.

--show-iface [hi file] passes this flag through to ghc; same as ghc --show-iface.