Brief

Mostly because I’m cheap (but partly to reduce the scope and therefore increase the chance of success), I added a few constraints:

Periodic still image capture is OK (streaming video will be tackled in a future iteration)

Securely store images in the cloud (so I can check them when I’m away from home)

Be configurable

Support the cheap official Raspberry Pi camera modules, particularly the “NoIR” variants which come without the Infrared filter for night-time captures when complemented with an IR light pack

Let’s get started.

Design

After a quick napkin sketch it becomes clear we need to build 4 key parts:

A .NET Core console application.

This is our entrypoint, and using .NET Core lets us compile to the “linux-arm” runtime target, i.e. Raspbian.

A timer

To periodically take the photos. I have some ideas for other conditions or events that could trigger the camera to take a snapshot (such as a PIR sensor detecting heat), so I’ve implemented the timer as a “TimerTrigger”, which implements an ITrigger interface. That way it’s nicely extensible.

Some code that speaks to the Raspberry Pi camera module

Again I’ve written a “CameraImageSource”, which implements the IImageSource interface.

I also wrote a TestImageSource which loads an image from the current working directory for testing.

A client for the cloud storage API

I’ve chosen Google Drive because it offers 15GB of free storage, a decent SDK that supports .NET Standard (1.3), and good documentation.

As with the other components, this is written as an implementation of the “IDataStore” interface, so it’s easy to expand to other cloud storage providers in the future.

And this is how they all communicate:

Implementation

Once you’ve cloned that repository, there are some things we need to do before we can run the code.

Cloud Storage Credentials

To use the Google Drive API, you need an OAuth2 client ID and client secret. Follow these instructions to get a client ID and client secret, then hit the Download button in the “Credentials” area of the API console and save the file as “client_secrets.json” in the src/PiSpy/ folder of the repository you just cloned.

This file will be copied to the output directory when the project is built, per MSBuild instructions in the PiSpy.csproj project file.

Configure the Raspberry Pi Security Camera

If you want to change the default options, open up the appSettings.json file in the project.

Timer interval

You can change the interval between camera shots by modifying the Triggers:TimerTrigger:Interval setting. The value is in milliseconds (seconds × 1000, so 60 seconds = 60000).

The default is 180 seconds (3 minutes).

Camera output directory

You can change the output directory by modifying the CameraModule:OutputPath setting.

The default assumes you will copy the console app to /home/pi/pispy. If you want to copy it somewhere else, change the path accordingly; otherwise you’ll see an error from the “mmal” process.
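Putting the two settings together, an appSettings.json along these lines should work (the nesting follows the colon-separated setting paths above; the OutputPath value is my assumption based on the stills folder we create later – check the file in the repository for the real layout):

```json
{
  "Triggers": {
    "TimerTrigger": {
      "Interval": 180000
    }
  },
  "CameraModule": {
    "OutputPath": "/home/pi/pispy/stills"
  }
}
```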

Raspbian with a Desktop

For now, we need an operating system on the Pi with an interactive desktop to complete the Google OAuth2 authorization flow.

Follow the steps in my last blog to set up a Pi with Raspbian and the .NET Core runtime – but with one minor difference: grab Raspbian Stretch, NOT Raspbian Stretch Lite. This gives you a Desktop – that’s important for when the Google authorization flow pops open a browser window to enter your Google account details.

(In a future iteration we’ll add a Kestrel HTTP endpoint to the service to negate the need for the non-Lite version of Raspbian with a desktop).

You can either plug in an HDMI-capable screen and keyboard, or you can enable VNC by running:

sudo raspi-config

at the command line (or via SSH) and enabling ‘VNC’ under the ‘Interfacing options’ menu.

I also found I had to set Chromium as my default browser for the authorization flow to work properly – to do that, navigate to chrome://settings, choose “Set Chromium as my default browser” and then restart the Pi by running:

sudo reboot

Deploy and run

As in my last blog, build the project targeting the linux-arm runtime:

dotnet publish -r linux-arm

and copy the bin/Debug/netcoreapp2.0/linux-arm/publish folder via FTP to the Pi. I copied it to /home/pi/pispy.

We’re going to need access to Raspbian’s PIXEL desktop in a moment, so connect to the Pi’s VNC server by following the instructions here. Once you’re in, open a Terminal and navigate to the folder you copied the /publish folder to. Create a new folder within it called stills, which is where the CameraImageSource code will write the pictures to before handing us a Stream.

cd /home/pi/pispy

mkdir stills

(I’m cheating a bit by simply running the “raspistill” executable that comes with Raspbian to take the pictures. More info here.)
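For reference, the kind of raspistill call the CameraImageSource ends up making looks roughly like this – the exact flags PiSpy uses are an assumption on my part, and the guard just lets the snippet fail gracefully on a machine that isn’t a Pi:

```shell
# Capture one still into the stills folder, skipping the on-screen preview (-n).
# raspistill only exists on Raspbian with the camera interface enabled.
if command -v raspistill >/dev/null 2>&1; then
  raspistill -n -o /home/pi/pispy/stills/capture.jpg
  echo "captured"
else
  echo "raspistill not available (not a Raspberry Pi?)"
fi
```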

Next, start the service by running:

dotnet /home/pi/pispy/PiSpy.dll

After the time specified in the Triggers:TimerTrigger:Interval appSetting elapses, a photo will be taken and then the GoogleDriveDataStore will trigger the authorization flow (this only needs to happen once). Once you’ve logged in, future photos will stream up to Google Drive.

Next steps

Pull requests are not only welcome, they are encouraged.

A good place to start: I haven’t had much time this holiday season to debug (same excuse for shoddy blogging 🙂 ), but there appears to be an async bug in the TimerTrigger whereby the subscribed actions are invoked on a separate thread and the Timer is restarted even though the CameraImageSource still hasn’t finished taking its picture.

This means that when the app runs on the Pi it will accept requests to port 5000 from external clients.

Please note, Kestrel is not a supported edge server, it is designed to run behind a reverse proxy such as nginx, Apache HTTP Server or Microsoft IIS when exposed to the outside world. Read and understand this before you open up your Pi to the big bad interwebs.

Deploying the App to the Pi

I’m not offering DevOps perfection here – I’m afraid we’re just going to FTP the app across to the Pi. But first we need to compile the app so it works with the Raspberry Pi’s low-power ARM processor.

On your development machine, drop to the command line, navigate to your project directory and publish your app so it works on Raspbian by executing the following command:

dotnet publish -r linux-arm

This creates a bin/Debug/netcoreapp2.0/linux-arm/publish directory that contains the binaries for your ASP.NET Core app.

Grab your favourite FTP client. If you don’t have one, FileZilla will do the trick.

Connect your FTP client to your Pi by entering the following details:

Host: raspberrypi
Username: pi
Password: raspberry
Port: 22

(Port 22 means you’re connecting over SFTP rather than plain FTP – FileZilla supports this out of the box.)

The FTP client should show you the directory structure on the Pi. Copy the contents of your linux-arm/publish directory to any path on the Pi (I chose /home/pi/piservice/) using the FTP client.

HTTPS

We’re going to use a self-signed certificate to show HTTPS is possible. In a real-world scenario you’d use a relatively short-lived RSA keypair and a certificate signed by a trusted root CA (and also, you’d probably not use a Raspberry Pi and a publicly-exposed Kestrel web server to run your services, but hey ho).

SSH into your Pi again and run the following command to create a public and private key pair that will be valid for a year.
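The original command isn’t reproduced here, but a common OpenSSL invocation that produces a key pair and self-signed certificate valid for one year looks like this – the CN, file names and password are my assumptions, and the .pfx bundling step reflects how Kestrel typically loads certificates on netcoreapp2.0:

```shell
# Generate a 2048-bit RSA key and a self-signed certificate valid for 365 days.
# CN matches the Pi's default hostname so https://raspberrypi:5000/ lines up.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout private.pem -out cert.pem \
  -days 365 -subj "/CN=raspberrypi"

# Bundle key + certificate into a .pfx that Kestrel can load.
openssl pkcs12 -export -in cert.pem -inkey private.pem \
  -out cert.pfx -passout pass:mypassword
```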

Running the app

<yourappname> will be the name of the project you created if you used Visual Studio, or the name of the directory in which you ran dotnet new. Typically this will be something like HelloWorld.dll or Acme.Web.dll, etc.

If all has been successful you’ll see the following echoed in your SSH session on the Pi:

From your development machine you should now be able to make requests to your app hosted on the Pi at https://raspberrypi:5000/

You will get errors about the certificate being untrusted. This is expected – your development machine has no reason to trust the little $30 computer – but you can skip past them, or read the final section of Peter Kelly’s article to learn how to trust the Pi’s self-signed certificate.

Making it Public

This is where things turn a bit vague as it’s up to you how you set up your network.

At the most basic level you need to tell your router to send traffic to port 5000 on your Pi. This usually involves adding a Port-Forwarding rule. You’ll need to know your Pi’s IP address to set up the rule, so it makes sense to give the Pi a static DHCP lease (or configure a static IP address). Please refer to your router’s user guide for specific information.

To call your Pi from the outside world you’ll need your router’s public IP address – it’s best if this is static (ask your ISP if this is an option) – and then you can set up a domain name to point to this IP address.

You can get a proper SSL certificate for your domain name from Let’s Encrypt, or any other certificate provider.

Note: You’ll get SSL certificate errors when using a self-signed certificate, and browsers may stop allowing access to sites where the certificate doesn’t match the public domain name.

Follow Up

I’ll follow up soon with an article on containerising the ASP.NET Core app and running it on Docker on the Pi.

On a recent business trip to New Delhi, I was out for lunch with colleagues when conversation turned to the price of mobile data. Us foreigners at the table bemoaned the price-per-Gb in our home countries, meanwhile, the locals could barely contain their laughter…

Overnight Success

An overnight success in the world’s 2nd most-populated country is nothing short of breathtaking.

In June 2015, IDC estimated there were 5.8 million 4G LTE subscribers in India. In a country with a population of 1.3 billion that’s a rounding error.

Yet, as I type this now in October 2017, India has 81.56% 4G LTE coverage – better than most countries in Europe, and snapping at the heels of the city-state of Singapore.

Isn’t that a huge waste of bandwidth?

Well no… Just months after the IDC estimate was published a new service, Reliance Jio, was launched, and everything changed.

Within a few short years we could find our banking system is bankrupt. No, I’m not trying to predict another subprime mortgage collapse, and this isn’t another anti-Trump message of doom (although his lack of understanding of ‘the cyber’ and affinity with traditional business models will not help the United States of America weather such disruption). Instead, the rise of the ‘Blockchain’ simply renders banks unnecessary.

Why do banks exist?

Banks exist because storing your cash under your mattress isn’t very secure. But what is it about banks that makes them a safer place for your hard-earned wedge?

According to legend, the Knights Templar invented the first form of modern banking in the 12th century. They would take in money from Christian crusaders, pilgrims and travellers in return for a slip of parchment that detailed their deposit. Further along their journey they could swap their parchment at a ‘Templar House’ for gold, silver and whatever-the-hell-myrrh-is up to the value they had deposited. Sound familiar?

As for security, the Knights Templar were some of the most fearsome warriors around. They didn’t need to chain their pens to the desks, if you nicked one you’d do well if you only lost a hand…

The first Templar banking system relied on low literacy levels. Basically, the hope was that the parchment could easily be overlooked by groups of medieval chavs rifling through your pockets looking for gold coins. Eventually, the parchments were written in code (encrypted) to avoid them being tampered with. Ironically, it is encryption which is the basis for Blockchain, which may end up destroying this old style of banking.

Remember when the internet was in its infancy? We all had to put up with little 468 x 90 banner ads everywhere you looked – and sometimes we clicked them because we didn’t know better.

As time went on we grew smarter: we were able to tell the bad adverts from the good, and the maturing of online advertising bumped the ugly out of the marketplace entirely. And now, our brains automatically blank out adverts to keep us focused on the content we came to the site for in the first place. Many of us use ad-blocking tools so our brains don’t even need to perform the mental airbrushing.

But what if those adverts were trying to tell us something really important?

What if the Emergency Broadcast System was hooked into those banner ads trying to give us forewarning of an avoidable cataclysm?

Social Engineering

Social Engineering refers to psychological manipulation of people into performing actions or divulging confidential information.

It is increasingly used by malicious actors (for bank and identity fraud, for example), but is also becoming a core part of many companies’ business models.

It all started innocently enough with the Social Graph. The ability to link people with other people, events, photos and products via rich, meaningful relationships turned the one-size-fits-all internet into a personalised window where the chaos suddenly started to shape itself into something we recognised and could engage with on a more emotional level.

Google do a lot of good things. They host free webfonts to make the web a nicer place to be. Their cloudy PaaS service, App Engine, gets rave reviews. Their maps are better than anyone’s, their mobile OS is the most popular in the world, and their photo hosting offer is second to none. But they can be very evil sometimes too.

The Devil’s In The Detail

For the last few days I’ve been seeing this ‘privacy reminder’ popup whenever I go to Google (including when searching from Chrome’s address bar). And it stops you dead in your tracks. You have to read through all the legalese before it lets you search for pictures of cats. Well, I just don’t have time for that – I need instant cat gratification now!

That sounds so wrong.

Anyway, I had a quick scan through the privacy reminder and immediately smelled a rat… It all seems really un-evil at first, you can choose to switch off some of Google’s invasive behaviour by following the handy-dandy links in the privacy reminder itself. Wowzers! What a nice thing to do. I opted to switch off all the weird adverts-following-you-around settings. They’re here, in case you’re wondering.

This morning, while trying to debug our big ol’ web project in Visual Studio 2015 I encountered a problem – it held me up for a while so I wanted to quickly blog about the solution in case it hits you too. When hitting F5 to start debugging, Chrome launched but then immediately Visual Studio detached from IIS Express and showed the following error:

A process with the ID of <id> is not running

True enough, IIS Express wasn’t running…

Open Wide and Say ‘Ahh!’, Mr Windows

I ran a Repair on IIS Express 10.0 in case it was an issue with that, or with the self-signed SSL certificate it uses to host web projects over a secure connection… but I still had the same problem.

I then created a brand new ASP.NET MVC 5 project and hit F5… but that ran fine. Hmm, curious. That let me know IIS Express was fundamentally OK, and the issue lay with the big ol’ web project.

Microsoft are usually pretty good at logging when things go wrong so I fired up eventvwr, the Windows Event Viewer, and saw the following error being thrown by IIS Express:

The Module DLL C:\Program Files (x86)\Microsoft Web Tools\AspNetCoreModule\aspnetcore.dll failed to load. The data is the error

Today I closed a chapter in my life. After nearly 4 years tenure at a company I wanted to reflect on the things I learned over that time.

I have been very lucky to have a few excellent – world-class even – mentors here who have taught me things that will stay with me for the rest of my life, and I wanted to share the reflection process with you in the hope you gain something valuable too.

Individual Success Isn’t Success

For a long, long time I adopted the ‘aircraft oxygen mask’ approach to my career: I’ll get to where I want to be first, then I’ll help others. This company has taught me that isn’t the right thing to do.

My thinking was always “I’ll be in a better position to help others” once I hit my objectives, but that simply doesn’t work in practice: without respectful, cooperative development across your team(s), you risk not hitting your goals at all, and if you haven’t helped others hit theirs too, nobody wins.

Dare I use the management-bullshit-bingo term ‘synergy’?

My current role here is a technical leadership role – that means I don’t have people reporting to me but I do have authority over technology direction and a remit to ensure conceptual integrity of the solution. I have led project teams before, I have even run small businesses before, but being a leader in a larger company was new to me when I began this chapter of my life, and I wanted to be good at it.

I’ve seen all the memes about the difference between a boss and a leader but for some reason I struggled to enact the differences. However, after some time spent being (in retrospect) a terrible boss, some sage advice from one of those mentors made everything ‘click’, and I was given the mental tools to develop the techniques required to become a good leader instead. (Note, a good mentor won’t give you the answer, but the means of finding it on your own!).

“Take people with you.”

So what does that look like in practice? Last year I was offered the chance to travel to our American HQ to present some new work to 1,500 customers. ‘Prestigious’ isn’t even close – this is a huge event, so compelling that our customers pay us to listen to our plans and roadmap. The trip dripped with a significant amount of attached ‘kudos’ and the opportunity to rub shoulders with the highest of the high in the business. Not only that – the opportunity to ask probing questions to 1,500 customers about our technology direction is such a rare occurrence it was unmissable. The old me would have started packing immediately.