Living is hard, so should your code be

I’ve been using Gmail for a long time, always through the web UI, keyboard shortcuts enabled of course. Then I started realising that it drowned visually among the other tabs in Chrome. So I pinned it to the first tab, which was OK but not awesome.

When I switched to Mac after a lifetime in PC land I found out about the Fluid app (on the Ruby Rogues podcast, I think), which lets you wrap any web app as its own Mac app, with a dock icon and all. That was a step up, but I had challenges there too. I have multiple Gmail accounts and had a hard time finding a nice solution for that.

Along came a Kickstarter project for Kiwi for Gmail. I immediately liked it and backed it. After beta-testing for a while I got it, and I was happy. It worked like a charm and hosted multiple accounts in isolation. In short, all I wanted. Then it started crashing and having problems with preview windows and such. It stayed in that state for a long time. I had some interaction with support and got a ”we know we have stability issues with your version” answer.

I’m not very patient. So, reading Omar Shahine’s blog, I stumbled upon a reference to Mailplane. I tried it out and now it’s my Gmail weapon of choice :). The preview functionality for images, PDFs and such is not as good as Kiwi’s, and neither is the attachment download UI. But it doesn’t crash or hang, which wins out in my book.

I’ll probably switch again within a couple of months, but for the time being Mailplane is my Gmail client of choice.


Updating the firmware on the Particle Photon used to be a bit tricky. You could use the Firmware Updater app on Mac; however, I tried a couple of times and never got it to work. Another way was to use the particle-cli, but it was a little messy. You had to download the firmware, consisting of two files, and then run particle commands with those files in the right order.

However, I just upgraded the CLI tools (> npm install -g particle-cli) and discovered some new (at least to me) utility functions, namely particle upgrade.

So, I’ve been itching to do a little IoT project at home. The last thing I did was a little “information radiator” telling us how long before the bus leaves (here). Now I wanted to do something else fun and useful.

In the basement of our house we have a geothermal heating pump. When the water inside the pump gets warm it expands, and the pump has to dispose of the excess. We don’t have a floor drain in that part of the cellar, so it ejects the water into a jar. When it’s full I empty it and put it back.

So that I don’t have to check at regular intervals whether it’s full, I put together the following solution: when the water level reaches a certain threshold, I get a text message telling me that it’s time.

Parts used

Parts needed

1 Particle Photon microcontroller – An Arduino-ish microcontroller with wifi on the chip and some neat cloud services around it that you can use if you want to (https://www.particle.io/manage). I think they made my solution a bit easier, so I used their cloud. I had one lying around at home.

1 Water sensor for Arduino – A cheap, simple thing that returns a different voltage depending on the water level.

1 particle.io account – for setting up webhooks

Particle CLI tool – I use it to register webhooks and read data from the device. You can probably get by without it, but I’ll use it in this article.

Modular cable with 4 conductors – You can use any cable; I had this lying around from earlier projects.

Soldering gear – You can probably make it work without soldering by using clamps or lab jumper cables instead.

Parts not needed but used

Experiment board – You can use a breadboard or just solder wires to pins and other wires.

PCB connector – Again, you could just solder the wires straight to the pins.

Shrink tubes – I used these to secure some of my amateurish soldering.

Zip tie – just to hold some stuff together

Building the hardware

I actually proofed out the concept using an Arduino Uno, a breadboard and some jumper cables. But when I set out to make the “real stuff” I started by soldering the connectors of the cable to the water sensor. Keep track of the colors! I used red for power, black for ground and green for signal (the signal wire delivers a different voltage depending on the water level). Then I put on some shrink tubing to secure the soldering.

I decided to put a PCB connector on a prototyping circuit board so that I can easily change the length or type of wire used for the water level sensor.

So to clarify the layout:

This is how my first, crude prototype looked.

I made it look a little bit more polished.

Software

There are a few parts to this solution. Since particle.io provides cloud services that work with their devices, along with an SDK, I opted to use those instead of doing HTTP calls directly from the Particle Photon to Twilio, to get it done faster. Here’s an overview of the solution.

Setting up Twilio

First of all you need to set up a Twilio account at https://www.twilio.com/. When you’re logged in you need to deposit some money to be able to use Twilio. Then you need to acquire a phone number enabled for SMS.

You need your Twilio number, account SID and auth token.

Creating the Webhook

Now we’re going to put the Twilio info to use when we create a webhook in our Particle.io account. A Particle webhook is a cloud service that acts as a bridge between your Particle and the rest of the world. The Particle SDK provides nice utility abstractions for these, so all you have to do in your code (as we’ll see later on) is call Particle.publish("webhook-name", "message", 60, PRIVATE); which is kinda neat.
The webhook file is pretty straightforward.
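The original file embed is missing from this page, but based on the Particle webhook format of the era, a Twilio SMS webhook definition could look something like this. The event name, the ALL_CAPS placeholders and the use of Twilio’s Messages endpoint are my assumptions, not the original values:

```json
{
  "event": "send-sms",
  "url": "https://api.twilio.com/2010-04-01/Accounts/YOUR_ACCOUNT_SID/Messages",
  "requestType": "POST",
  "auth": {
    "username": "YOUR_ACCOUNT_SID",
    "password": "YOUR_AUTH_TOKEN"
  },
  "form": {
    "From": "YOUR_TWILIO_NUMBER",
    "To": "YOUR_MOBILE_NUMBER",
    "Body": "{{SPARK_EVENT_VALUE}}"
  },
  "mydevices": true
}
```

You register a file like this with the CLI, along the lines of particle webhook create send-sms.json.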

Code on the device

First off I declare some variables and a method.

DELAY_IN_SECONDS is how often to check the water level, sendSms is the method that calls the webhook, waterLevelPin defines which analog port I connected the water level sensor to, and then there are some values for the water level and for keeping track of text message status. Next up, the setup:

I’m using Particle.variable and Particle.function to expose a variable and a method to the Particle cloud service, for testing purposes, and setting the mode of the water level pin to input using pinMode(..).

A larger chunk of code this time. getWaterLevel() reads the value and puts it in a global variable just so I can monitor it through the particle-cli. checkWaterLevel() does the actual checking and sends an SMS if the current value is higher than the threshold and no message has been sent.
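The original code embeds are missing from this page, so here is a hedged reconstruction of what the full sketch could look like, pieced together from the names described above. The webhook name ("send-sms"), the pin (A0) and the threshold value are my assumptions:

```cpp
// Reconstruction of the water-level sketch described in the text.
// Webhook name, pin and threshold are assumptions, not the original values.
#include "Particle.h"

const int DELAY_IN_SECONDS = 60;        // how often to check the water level
const int waterLevelPin = A0;           // analog pin the sensor is wired to (assumed)
const int WATER_LEVEL_THRESHOLD = 2000; // assumed threshold (Photon ADC range is 0-4095)

int waterLevel = 0;
bool smsSent = false;

int sendSms(String message);

void setup() {
    // Expose state to the Particle cloud so it can be read via particle-cli
    Particle.variable("waterLevel", &waterLevel, INT);
    Particle.function("sendSms", sendSms);
    pinMode(waterLevelPin, INPUT);
}

void getWaterLevel() {
    // Store the reading in a global, just for monitoring through the CLI
    waterLevel = analogRead(waterLevelPin);
}

void checkWaterLevel() {
    if (waterLevel > WATER_LEVEL_THRESHOLD && !smsSent) {
        sendSms("Time to empty the water jar!");
        smsSent = true;
    } else if (waterLevel < WATER_LEVEL_THRESHOLD) {
        smsSent = false; // re-arm once the jar has been emptied
    }
}

int sendSms(String message) {
    // Publishing the event triggers the webhook in the Particle cloud
    Particle.publish("send-sms", message, 60, PRIVATE);
    return 1;
}

void loop() {
    getWaterLevel();
    checkWaterLevel();
    delay(DELAY_IN_SECONDS * 1000);
}
```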

So, no rocket science, but a fun little project.
Then I ended up tweaking it a little, adding a few LEDs and stuff just for fun.


Diving into Docker Compose files (docker-compose.yml), there are a lot of keywords in use. Some are obvious, others not. Here’s a little cheat sheet; not at all total coverage, but hopefully a few nuggets to get started.

docker-compose file

build

build points to your Dockerfile. If it’s named Dockerfile and resides in the same directory as your docker-compose.yml you can specify it with a dot (.); otherwise you can give it a path.

myapp:
build: .
build: /path/to/dir/with/Dockerfile

You can only use build OR image (see below), not both.

image

Names an image, local or remote (if it’s not local, Docker will try to pull it down). Could for example be redis, ubuntu, mongodb or something else.

...
cache:
image: redis
...

links

This is used to set up relationships (links) between docker containers in your compose environment.

webapp:
build: .
links:
- cache
cache:
image: redis

It’s also possible to set up aliases in links, using a service:alias format.

- redis:cache

ports

Pretty self-explanatory: which ports should be mapped out to the outside world. It follows a host:container pattern, so if you have a web server on port 80 and want to expose it on port 80 you write 80:80. If you only give it 80, Docker picks a random port for the outside world that maps to port 80 in the container.

So this example maps host port 80 to container port 80, container port 8080 to a random host port, and the host interval 8000-8030 to container ports 3000-3030.

webapp:
ports:
- "80:80"
- "8080"
- "8000-8030:3000-3030"

volumes

Is used to mount paths as volumes. You can give just a container path (and let Docker manage the volume) or map a path on the host machine into the container.

webapp:
volumes:
- /var/lib/mysql
- ./cache:/tmp/cache

volumes_from

Is used to mount all volumes from another container or service. An example could be a web server mounting volumes from a file server.
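The cheat sheet entries above each have an example, so here is a hypothetical one for this keyword too (service names and the nginx image are made up for illustration):

```yaml
webapp:
  build: .
  volumes_from:
    - fileserver
fileserver:
  image: nginx
  volumes:
    - /srv/files
```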

docker-machine

Earlier known as boot2docker, and the image it boots is still named boot2docker.iso. There are different drivers; below I’ll use VirtualBox, which is very common on Mac OS X.

So on OS X creating a virtualbox docker host can look like this:

$ docker-machine create --driver virtualbox docker-test
Running pre-create checks...
Creating machine...
(docker-test) Copying /Users/nippe/.docker/machine/cache/boot2docker.iso to /Users/nippe/.docker/machine/machines/docker-test/boot2docker.iso...
(docker-test) Creating VirtualBox VM...
(docker-test) Creating SSH key...
(docker-test) Starting VM...
Waiting for machine to be running, this may take a few minutes...
Machine is running, waiting for SSH to be available...
Detecting operating system of created instance...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect Docker to this machine, run: docker-machine env docker-test

docker-registry

Is a repository for Docker image files. The most common one is Docker Hub. If you run an image you don’t have locally, Docker pulls it from the registry:

> docker run -i -p 3000:3000 grafana/grafana

docker-compose

Docker, or the Docker Engine, enables us to run images in containers, which is great. The only thing is, our applications usually consist of more than one box. Here’s where Docker Compose comes in: it lets us specify entire environments.

Let’s take an example: we want to spin up a solution with 3 boxes (a web server with node.js, a Redis server and a Mongo database). We might create a docker-compose.yml file looking something like this:
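The original file embed is missing here; a minimal sketch of what such a docker-compose.yml could look like (service names, the exposed port and the link setup are my assumptions):

```yaml
web:
  build: .
  ports:
    - "3000:3000"
  links:
    - cache
    - db
cache:
  image: redis
db:
  image: mongo
```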

In this case I have a Dockerfile in the same folder as docker-compose.yml that defines my node environment.

Then we build it and start it:

> docker-compose build
> docker-compose up

And it spins up an entire environment with 3 containers. To check, we can open another terminal (don’t forget the eval trick mentioned earlier to set up the Docker environment variables, they are per terminal session) and run docker ps:


This is a post that is probably not of interest to anyone but me. It’s just a way to keep myself accountable and thus, hopefully, achieve more of my goals.

I had a few goals for 2015, let’s see how I did:

Weight 85 kg

Actually, I reevaluated this goal when preparing for the world championship in underwater rugby. I realized, after losing quite a lot of weight, that I need to weigh more than 85 kg. So I’ll mark this as a success.


Trying to use the iothub-explorer node tool (https://www.npmjs.com/package/iothub-explorer) from the Azure npm packages, I ran into some problems. As soon as I touched it I got an error:

env: node\r: No such file or directory
-zsh: list: command not found

I figured out it was due to the different line endings on Unix and Windows. So what I did was open up /usr/local/lib/node_modules/iothub-explorer in my editor (Atom) and convert the line endings with the help of the Atom package named line-ending-converter.

And it works! It should of course be fixed at the source, and supposedly has been, according to this issue: https://github.com/Azure/azure-iot-sdks/issues/149. However, I didn’t know how to install that version, so in the meantime I fixed it with the trick above.

Notice the formData. It infers that the content type will be multipart/form-data, which did not work at all with the 46elks API. All I got was 404 Not Found, all the time.

After trial and error in Postman (a Chrome plugin for building HTTP requests), here’s the thing: use form instead. The content type inferred from form is x-www-form-urlencoded. When I changed to it (after 4 hours of banging my head against the wall), it worked.
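The original code snippets are missing here, so as a hedged sketch: with the `request` npm package, the difference is just which option you put the body in. The helper name, credentials and numbers below are placeholders of mine, not the original code:

```javascript
// Build the options for an SMS call to the 46elks API with `request`.
// Credentials, sender and number are placeholders.
function buildSmsRequestOptions(user, pass, from, to, message) {
  return {
    method: 'POST',
    url: 'https://api.46elks.com/a1/SMS',
    auth: { user: user, pass: pass },
    // `form` makes request serialize the body as
    // application/x-www-form-urlencoded, which the 46elks API accepts.
    // `formData` would send multipart/form-data instead (the 404 culprit).
    form: { from: from, to: to, message: message }
  };
}

// Usage (requires `npm install request`):
// const request = require('request');
// request(buildSmsRequestOptions('API_USER', 'API_PASS', 'MySender',
//   '+46700000000', 'Jar is full!'), (err, res, body) =>
//   console.log(err || body));
```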


This post is just me nerding out with an open API, a Raspberry Pi, a blink(1) and some node.js code.

Background

Outside my house, about 50 meters away, there is a bus stop. In my struggle to not use the car so much, we are trying to take the bus or bike to the kids’ school more. So in the morning I open the app with the commuter timetables and check when an appropriate bus will leave. Then I keep track of the clock to time that bus with the kids.

I wanted to make this more effortless.

The project

When there are 10 minutes left, the blink(1) lights up green, then it changes to yellow, then red, and does some flashing in the last minute.

Node.js – I tried to get nvm up and running but did not succeed, so I did a `sudo apt-get install nodejs`.

Get the code

I got the code from my repo (developed on a Mac) with git clone git@github.com:nippe/when-does-the-bus-leave.git and then ran npm install. (The node-blink1 npm package depends on node-hid, which has different instructions for different node versions, so just be aware and do what’s right for your situation. Read more in the node-blink1 repo.)

Run it

So a simple node busStatus.js does the trick. However, that process dies when the SSH connection goes down, so the correct thing would probably be to set it up as a proper daemon process. I was about to when I stumbled upon screen, a nice little tool that keeps a virtual terminal session going even when the client is not connected.

sudo apt-get update
sudo apt-get install screen

Start it:

> screen
> cd when-does-the-bus-leave
> node busStatus.js

Then leave the session by hitting ctrl + a, then d.

When connecting to the Raspberry Pi again, screen -r reconnects you to the screen session. A nice little utility!

I’ll probably do updates of code and docs on github: https://github.com/nippe/when-does-the-bus-leave


This has become a little pet peeve of mine and it’s a bit of a rant. So be warned and exit now :).

I’m getting tired of people who go around saying that document databases are so nice because you don’t need a schema. ”You can just insert whatever…”

My issue with this is that I feel they can never have maintained a solution like that.

The schema is always there, but if it’s not in the database it’s in the code. I read somewhere that this can be called schema-on-read (as opposed to schema-on-write), which I think sums it up nicely.

Example
I have a document database and I’m storing the full address in a single field. Then we decide to split it up into street address, zip and city. I can start inserting data in the new format right away, but when I read the documents I need some logic in place to handle both formats (a schema, that is).
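A minimal sketch of what that read-time logic could look like. The field names and the address format are made up for illustration:

```javascript
// Schema-on-read: the "schema" lives in code that normalizes old documents.
// Old documents have a single fullAddress string; new ones have street/zip/city.
function readAddress(doc) {
  if (doc.street !== undefined) {
    // New format: already split, pass it through
    return { street: doc.street, zip: doc.zip, city: doc.city };
  }
  // Old format: parse on read (naive split, assumes "street, zip city")
  const [street, rest] = doc.fullAddress.split(', ');
  const [zip, ...cityParts] = rest.split(' ');
  return { street: street, zip: zip, city: cityParts.join(' ') };
}

console.log(readAddress({ fullAddress: 'Storgatan 1, 11122 Stockholm' }));
// → { street: 'Storgatan 1', zip: '11122', city: 'Stockholm' }
```

Every reader of the collection now has to carry this normalization, which is exactly the maintenance cost the "no schema" pitch glosses over.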