Franklin Heath Ltd – Master Your Information Assets
https://franklinheath.co.uk
Comparing the Security of Low-Power Wide-Area Network Technologies
https://franklinheath.co.uk/2017/05/02/lpwan-security/
Tue, 02 May 2017 18:17:46 +0000

I was recently asked by the GSMA to undertake an independent study looking at the security of various LPWA (Low-Power Wide-Area) network technologies. I took on the project because I find it a very interesting topic: these types of network are targeted at IoT (Internet of Things) devices, an area I have been working on over the last couple of years with IoTUK and the IoT Security Foundation. One of the main challenges of the IoT space is making trade-offs to accommodate low-power, low-cost devices, and security is one of the things that might be traded off.

You can download the 20-page report here.
The obvious question you might expect to be answered is “Which one’s the best?”, but I’m afraid the answer is a resounding “It depends”. The different technologies we looked at have varying security features, but it’s not the case that one is always better than another: two technologies might each have a security feature that the other lacks, or you might not need some of the security features and so make your choice based on other factors such as coverage, power consumption or cost.

The part of the study that I found most interesting was determining the list of different network security features that you might or might not care about, and thinking about how to assess particular use cases to decide whether or not each feature was needed. I didn’t include the detailed working through of that in the report: considering 5 use cases, 5 network technologies and 20 different security features (some of them optional) for each combination of use case and technology makes for a pretty big spreadsheet! I will be talking more about my method at the Mobile 360 Privacy & Security event in The Hague later this month, as I hope it will be useful to others considering the security aspects of deploying IoT devices on a low-power network.

Of course you must have conclusions in a report such as this, and so we have a coloured-in table summarising the suitability of each technology for each use case, but I must emphasise it’s not as simple as that. It very much depends on YOUR particular use case – even if it’s one of the ones listed here, I may have made assumptions that don’t apply to your situation (for example whether it’s feasible to physically access devices to update them, or whether devices are being used in a safety-critical context). That said, the table is reproduced here if you want to “skip to the end”:

                           LTE-M        NB-IoT   EC-GSM-IoT   LoRaWAN      Sigfox
Smart Pallet               Good         Good *   Adequate     Good         Poor
Smart Agriculture          Good         Good     Good         Adequate     Adequate
Smart Street Lighting      Adequate     Good *   Adequate     Adequate *   Adequate
Water Metering             Adequate *   Good *   Adequate *   Adequate     Poor
Domestic Smoke Detectors   Good         Good     Good         Adequate     Adequate

The final point to note here is that the asterisks (*) in the table above indicate assumptions that certain optional features of that technology have been enabled by the network operator; this may or may not be the case for YOUR network operator, so I’m afraid there’s no short-cut to doing your own assessment of the security needs of your use case, and discovering the security features offered by your network operator. If there’s just one thing to take away from this, I would say it’s that network security is “horses for courses” and you need to assess your own specific security needs before locking yourself in to a particular technology choice.

Feedback in the comments below is welcomed, and we will do our best to respond. As a next step, I intend to take the matrix of technologies and security features and put a version of it up on our wiki; it would be great to extend it with information on some of the other technologies we haven’t been able to cover in this report, such as RPMA and Weightless.

Ideal Christmas Present* – Personalised Enigma Logo Mugs!
https://franklinheath.co.uk/2015/11/03/ideal-christmas-present-personalised-enigma-logo-mugs/
Tue, 03 Nov 2015 20:39:19 +0000

Today we’ve launched a new web site, enigmamug.com, and an associated CafePress store. The idea is that you enter your name, or whatever other word(s) you might like on a mug, and it creates a design in the style of the Enigma machine logo; you can then (if you like it!) buy a mug with that design from CafePress. We have other designs in the store too: Enigma machine plugboards, with or without the plugs and cables, which we think look pretty good wrapped around a mug.

* Ideal for people with an interest in World War 2, cryptography and/or information security, that is

Threats, Risks and Vulnerabilities – What do they Mean for Product Development?
https://franklinheath.co.uk/2015/10/14/threats-risks-and-vulnerabilities-what-do-they-mean-for-product-development/
Wed, 14 Oct 2015 15:32:12 +0000

Recently we’ve taken on a client with immense experience of IT product development, but not so much experience with computer security. A report I am writing for them starts by defining terms, to avoid possible confusion; I thought I’d also write this article to discuss more generally why “threats”, “risks” and “vulnerabilities” deserve specific definitions in that context.

The English dictionary definitions don’t help much, as the words represent abstract concepts and have multiple possible meanings depending on context. Even computer security sources don’t agree on a single meaning and can be infuriatingly vague (ISO 27000 includes the spectacularly unhelpful “risk: effect of uncertainty on objectives”), so I’m just going to explain the usage that works for me in the context of a product development life cycle.

One of the dictionary definitions of threat is “A person or thing likely to cause damage or danger” (this is often more specifically referred to in computer security circles as a “threat actor”). For our purposes, we need to define a broad concept of threat which includes why threat actors would be aware of, or interested in, our product and what their capabilities are. This is similar to the concept of threats in business Risk Management, that is, external factors that you cannot control. In the product development context, threats should be the principal factor which determines the product requirements relating to security (there may also be market requirements arising from customer perceptions and expectations).

A typical definition of risk in business Risk Management (and the CISSP syllabus) is

risk = likelihood of adverse event × cost of adverse event
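As a toy illustration of how that formula is usually applied (the numbers here are invented for the example, not taken from any real assessment), a line of shell arithmetic will do:

```shell
# Invented example: a 2% annual likelihood of a breach costing £50,000
likelihood_pct=2
cost=50000
risk=$((likelihood_pct * cost / 100))   # expected annual loss in pounds
echo "£${risk} per year"                # prints "£1000 per year"
```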

Aside from the effective impossibility of measuring either of those factors in advance, it’s not really useful in a product development context, as you don’t usually know the environment in which your customers will be deploying your product, or have much of an idea of the value of any assets they will protect with it. What I find much more useful is what Cigital are referring to with their term Architecture Risk Analysis, although, confusingly, Microsoft generally call this same concept a threat, as in their SDL Threat Modelling Tool. Microsoft’s STRIDE categories are useful, so we will define risks to be all the potential attacks on the security of our product which could result in spoofing, tampering, repudiation of action, information disclosure, denial of service or elevation of privilege.

Finally, vulnerabilities. Some people assume that this is what computer security is all about (usually in articles referring to “cybersecurity”), as finding vulnerabilities is the easiest way for a security researcher to get a journalist excited. Penetration testers, software developers and organisations such as PCI focus on lists of vulnerabilities, such as the OWASP Top Ten, as a way to quantify the security of an implementation, but that misses the point that the best way to ensure security is to design your product properly in the first place! (And why are developers still using SQL, a 40-year-old, seriously flawed database query language, in this day and age? But that’s an article for another day…) For our purposes, we define vulnerabilities as design or implementation flaws which allow security mechanisms to be defeated or bypassed.

Combining the terms with these definitions, a threat actualizes a risk by means of a vulnerability so, for example, large foreign corporations wanting to steal trade secrets (the threat) might be able to extract database records (the risk of information disclosure) using SQL injection (the vulnerability).

Relating these definitions back to the product development life cycle, it should become clear that each of these three concepts are of most interest at different stages:

Term              Definition                                                      Development Stage
Threats           Capable adversaries who may benefit from misusing the product   Requirements
Risks             Potential ways in which the product might be misused            Architecture and Design
Vulnerabilities   Design or implementation flaws allowing misuse to occur         Implementation and Verification

Some of the above may be debatable, particularly the definition of “Risks” as it varies from a CISO or Enterprise Architect’s typical use of the word (they might consider size of exposure to financial losses to be an essential part of it) but I hope this is useful in presenting things specifically from a product perspective. If I could think of a better word than “Risks” for that concept, I would happily use it; do please let us know in the comments if you have any ideas!

Custom Page Sizes for Microsoft Print to PDF
https://franklinheath.co.uk/2015/08/29/custom-page-sizes-for-microsoft-print-to-pdf/
Sat, 29 Aug 2015 13:53:54 +0000

I don’t usually post Windows tips and tricks, but I thought this might be useful as I haven’t seen it mentioned anywhere else. Briefly, the Windows 10 Print to PDF support doesn’t allow custom page sizes as it comes, but there is a simple way to enable it.
I’ve been setting up a new office PC with Windows 10 (while checking out the security and privacy settings, which have been well covered elsewhere). I use PDF a lot, for delivering clean versions of reports to clients and distributing presentation notes, so I was pleased to see Microsoft have built PDF creation in to Windows 10. It would be good if I didn’t need to install Acrobat any more, as it’s expensive, uses a lot of disk space, and is one more thing you have to keep up to date with security patches. However, one thing we currently use PDF for is preparing electronic payslips, for which we use a custom, small page size, and this wasn’t working with the Microsoft implementation.

It turns out that Windows printer drivers need to explicitly specify that they will support custom page sizes, and for some reason Microsoft Print to PDF doesn’t do that. Being an incurable tinkerer, I thought I’d try modifying it and see if it worked.

First you need to find the GPD file for the driver, which is installed under C:\Windows\System32\spool\V4Dirs. I was able to find the folder and file names by using regedit and looking in the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Print\Printers\Microsoft Print to PDF\PrinterDriverData. On my system it is {084F01FA-E634-4D77-83EE-074817C03581}\74e1846.gpd; you will probably find you have the same file name but a different folder name.

Now, make a backup copy of the GPD file, so you can restore it if you fumble the editing. Then you need to edit the original file and add the following section, immediately after the line *DefaultOption: LETTER:
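The section itself was an image in the original post and did not survive extraction; what follows is a hypothetical reconstruction in GPD syntax, based on the description in the next paragraph. The values assume the driver’s master units are microns (A3 is 297 mm × 420 mm, A8 is 52 mm × 74 mm), so do check them against the page sizes already defined in your copy of the file before trusting them:

```
*% Hypothetical reconstruction - verify values against your own GPD file
*Option: CUSTOMSIZE
{
    *rcNameID: =USER_DEFINED_SIZE_DISPLAY
    *MinSize: PAIR(52000, 74000)
    *MaxSize: PAIR(297000, 420000)
    *MaxPrintableWidth: 297000
    *MinLeftMargin: 0
    *CenterPrintable?: FALSE
}
```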

The values for MaxSize are copied from the largest page size already defined (A3) and the values for MinSize are scaled down proportionately from that to represent A8 size. Save the new file somewhere, then copy it over the top of the original file.

Now you can define a custom page size in the normal way: go to Devices and Printers in the control panel, select the printer Microsoft Print to PDF, click Print Server properties on the ribbon menu, tick Create a new form, give it a name and enter the dimensions you want, then click Save Form. Now when you print something, select Microsoft Print to PDF as the printer, click Preferences, then Advanced and you should see your new page size in the pulldown menu.

This seems to be producing correct output for me, so the puzzle is why Microsoft didn’t do this themselves. Perhaps they didn’t want to go through the extra testing for arbitrary page sizes, or maybe they did test it and saw problems with some specific page sizes; I don’t know. If this would be useful for you, please give it a go and let us know how it worked out in the comments!

Imagine, 6 Tons of Punched Cards Every Week!
https://franklinheath.co.uk/2015/05/02/imagine-6-tons-of-punched-cards-every-week/
Sat, 02 May 2015 13:00:51 +0000

An often neglected, but crucial, part of Bletchley Park’s work in World War II was the vast amount of data processing done using punched cards on Hollerith machines. The department which did this was called the “Freebornery”, at first located in Hut 7 (since demolished) and later in Block C (recently restored as the new visitor centre).

There has been very little detail published on the day-to-day operations of the Freebornery, so I recently visited the National Archives and made a copy of a typewritten document they hold: “The Use of Hollerith Punched Card Equipment in Bletchley Park”. With their kind permission, we are now publishing the text on our wiki for the benefit of researchers and other interested readers.
It is a well-known fact that, before electronic computers came along, very large volumes of data were processed using punched cards, for example the US national census data which Hollerith machines were originally designed to deal with. What may not be widely appreciated is the amazing complexity of the computations that could be done on these machines, especially when the ingenious minds at Bletchley Park were applied to customising them and developing new processes.

Although I am (just) old enough to have used card punches to key in mainframe computer programs, I have never seen electromechanical sorters, tabulators and so on in action. As the document describes, seeing an entire building filled with these huge, heavy, noisy machines all operating at full speed, each executing their own part of a massive computation, with stacks of cards being rushed by operators from one machine to another, must have been quite an experience; and then imagine that all day every day, using 2,000,000 cards every week! A back-of-the-envelope calculation suggests they must have had lorry loads of blank cards coming in almost daily, as 2 million cards weigh almost 6 tons.
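The back-of-the-envelope sum checks out if we assume a standard 80-column card weighs about 2.7 g (that per-card weight is my assumption, not a figure from the document):

```shell
# 2,000,000 cards/week at roughly 2.7 g per card (assumed card weight)
cards_per_week=2000000
decigrams_per_card=27                # 2.7 g, scaled by 10 to keep integer maths
kg_per_week=$((cards_per_week * decigrams_per_card / 10 / 1000))
echo "${kg_per_week} kg"             # prints "5400 kg" - just short of 6 short tons
```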

I hope that publishing this document will spark some interest in the forgotten art of large-scale electromechanical data processing. I’d love to see emulators of the different machines made available, and even better to see the real thing in action. The National Museum of Computing does have some surviving punched-card-processing machines (although not the models used in the Freebornery) which are being restored to working order; I look forward to seeing them!

Turning the Tables on Utility Companies with the Data Protection Act
https://franklinheath.co.uk/2014/08/07/turing-the-tables-on-utility-companies-with-the-data-protection-act/
Thu, 07 Aug 2014 12:52:18 +0000

A few years ago I gave talks at Open Tech and Over the Air, including some mobile security ideas that phone manufacturers were unlikely to implement. One of those ideas was what I called “notarised call recording”: a way to hold utility companies to account for what they promise you in telephone calls.

I was listening to the BBC’s You and Yours radio programme yesterday (on my way to Bletchley Park, as it happens) and was delighted to hear some aggrieved customers using the UK Data Protection Act (DPA) to get their utility company to supply them with call recordings. The company in question has complied, including a recording which clearly proves that they did promise what they subsequently denied!
Exercising your DPA rights by making a subject access request is quite easy. It doesn’t have to be in legal wording or in any particular format; if you ask a company which is operating in the UK for a copy of information that they hold about you, they are legally required to give it to you. They can only charge you a maximum of £10 (£50 for health or education records in some circumstances) and they must comply within 40 days. If they don’t, you can report them to the Information Commissioner’s Office (ICO), who have the power to fine them (up to £500,000 although that would be a very extreme case).

The reporter in the radio piece asks: “Why do you think a company would send out a recording that proves the customer right?” and the customer replies “a mistake at their end I think”. I would have preferred the reply “because they’re legally required to”, but it would of course be easy for a company to deny that they had the recording of a particular call. In that case you would have to take it up with the ICO, and it would be interesting to see just how thoroughly they would investigate it.

Recording the call yourself would still be the safest thing to do, but if you don’t have your own recording, a subject access request is easy and not too expensive, so give it a go!

Raspberry Pi Fishcam – The Secure Version
https://franklinheath.co.uk/2013/08/16/raspberry-pi-fishcam-the-secure-version/
Fri, 16 Aug 2013 15:06:27 +0000

Having proved the concept using netcat, we need to add access control and make it accessible via a discoverable external address. The design is essentially the same, running the video capture command on the Pi and routing the output stream over IP to a remote client, but we use ssh (Secure SHell) as the transport to add authentication and encryption.

The first thing to do before exposing your Pi to the outside world is: change the default password! With Raspbian, the default admin user name and password is “pi” and “raspberry”. You should change the password to something that’s not based on a name or word that could be found in a cracking dictionary; best would be a randomly generated password that you write down and keep with you, or you can use initial letters of words in a sentence you can remember but others can’t guess. For extra security you could change the name of the admin account too.

Remote Access User Account

To make sure that any problems with our setup won’t allow bad guys into our home network, we use a dedicated account without super-user privileges for the remote camera access. We also disable normal user name and password access for this account, as remote access will use an RSA private key for authentication. Finally, we change the account’s shell so that login goes directly to the camera software script; even if someone were able to steal the private key, they would still only be able to look at our fish!
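The commands for this step were shown as an image in the original post and are missing here; a minimal sketch, assuming the account is called fishcam and the login script will live at /home/fishcam/fishcam.sh (both names are our choice), might look like:

```shell
# Create an unprivileged account with password login disabled
sudo adduser --disabled-password --gecos "" fishcam

# Make the camera script the account's login shell, so an ssh login
# goes straight to the video stream and nothing else
sudo usermod --shell /home/fishcam/fishcam.sh fishcam
```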

RSA Key Pair

You will need to run the video streaming client on a device that has ssh installed. We use Mac OS Terminal (which has ssh installed by default) or cygwin on Windows (install the openssh package). The default key size and algorithm is 2048-bit RSA, which will be plenty; we use no passphrase, so we will be able to just click on a script to watch the video stream:

Run this command on the client to create the key pair:

ssh-keygen -f ~/.ssh/fishcam -N ""

Then copy the public key to the Pi:

scp ~/.ssh/fishcam.pub pi@<IP address you noted earlier>:/tmp

Video Command Setup

We need to move the public key into the right place so ssh uses it for authentication, and create the script to be run when the fishcam user logs in:
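The listing for this step was also an image in the original post; a plausible sketch, assuming the fishcam account from earlier and a raspivid-based login script (the file names and exact raspivid options are our choice), would be:

```shell
# Install the public key so ssh accepts it for the fishcam account
sudo mkdir -p /home/fishcam/.ssh
sudo mv /tmp/fishcam.pub /home/fishcam/.ssh/authorized_keys
sudo chown -R fishcam:fishcam /home/fishcam/.ssh
sudo chmod 700 /home/fishcam/.ssh
sudo chmod 600 /home/fishcam/.ssh/authorized_keys

# Login script: stream raw H.264 from the camera module to stdout,
# which ssh then carries back to the client
sudo tee /home/fishcam/fishcam.sh >/dev/null <<'EOF'
#!/bin/sh
exec raspivid -t 0 -o -
EOF
sudo chmod 755 /home/fishcam/fishcam.sh
```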

You can now test this out, within your home network, by installing VLC and running the following script on your client device. The path to the VLC binary will be something like /Applications/VLC.app/Contents/MacOS/vlc on MacOS, or /cygdrive/c/Program Files/VideoLAN/VLC/vlc.exe on cygwin:
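The client script itself did not survive extraction; a sketch along the lines described, reusing the key path and an example private IP address from elsewhere in these posts, and the MacOS VLC path mentioned above:

```shell
#!/bin/sh
# Log in as fishcam (whose login shell is the camera script) and pipe
# the raw H.264 stream into VLC for display; adjust the VLC path and
# IP address for your own setup
ssh -i ~/.ssh/fishcam fishcam@192.168.0.22 | \
  "/Applications/VLC.app/Contents/MacOS/VLC" --demux h264 -
```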

Enabling Access from Outside Your Home

Typical home networks, using ADSL or cable broadband connections, have a router which controls the network traffic between all the devices in your home and the outside world. When your device connects to the router, it is given an IP address which is not visible from the outside (usually on the 192.168.xxx.xxx private network). When you connect to outside addresses, such as Google, the router uses Network Address Translation to map from the public IP address of the router to the private IP address of your device. This deals with sessions initiated from the inside out, but not any from the outside in; for that we need to use port forwarding. A further complication is that the public IP address of your router may change each time your broadband starts up; to get around that we need to use Dynamic DNS (DDNS).

The details of configuring the port forwarding depend on your particular router. You need to set up two things: first, an address reservation, so that when your Pi boots up the router always gives it the same private IP address; and second, the port number to be mapped (which is 22 for ssh). You should be able to find these settings in your router’s admin menus; for NETGEAR routers, address reservations are entered under Advanced / LAN Setup and the port number is entered under Security / Port Forwarding.

To use DDNS, you need to set up an account with a DDNS provider, and install a DDNS client on your Pi. We went with No-IP, as they offer a free service for personal use. You choose a user name, a password, and a domain name to use under one of the top level domains they manage (for example, myfishcam.no-ip.org).

Install the open source ddclient on the Pi:

sudo apt-get install ddclient

The installation script prompts you for various settings, but doesn’t include everything needed to work with No-IP over a secure session; after installation you will need to change a few configuration settings:
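The settings shown in the original post were lost with the page images; the fragment below is a hypothetical /etc/ddclient.conf for No-IP over a secure session, with the login, password and host name as placeholders you would replace with your own:

```
# /etc/ddclient.conf - hypothetical example; substitute your own details
protocol=noip
use=web, web=checkip.dyndns.com/, web-skip='IP Address'
ssl=yes
login=your-noip-username
password='your-noip-password'
myfishcam.no-ip.org
```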

To access the stream from devices other than the one you created the RSA key pair on, you can simply copy the private key file (~/.ssh/fishcam) to the other device; do make sure it’s not publicly readable though, as ssh would then refuse to use it.

To fix read permissions on the private key:

chmod 600 ~/.ssh/fishcam

Extra Security Measures

Because we are using ssh port forwarding, that also enables logins to your Pi’s admin account from the outside world, using the admin user name and password. That’s handy if you want to tinker with things remotely, like changing the video bit rate to suit the bandwidth of your broadband upload speed, but it could be considered an unnecessary security risk (you did change the default password before starting on this, like we said above, didn’t you?)

For maximum security, you can configure your Pi to limit ssh connections from the outside world to just the fishcam account, but still allow you to use ssh for admin within your home network. If you’ve changed the admin account name, or you use a different private network IP address range, this will need to be modified appropriately:

Add the following line to the end of the file /etc/ssh/sshd_config on your Pi:

AllowUsers fishcam pi@192.168.*.*

Conclusion

I’m reasonably confident that it will take more effort to break this security than anyone could feasibly want to put into it. I am tempted to publish the domain name and challenge people to hack into it (it would be great to learn if more security improvements could be made), but I’m not going to; that’s not because I don’t trust the security of the Pi, it’s because I don’t trust the security of the other devices connected to our home network, which aren’t open source and which we didn’t configure ourselves! If anyone can spot any flaws in the above configurations, do please let us know and we’ll update the article and credit you profusely.

Tux keeping an eye on the fish

Raspberry Pi Fishcam
https://franklinheath.co.uk/2013/07/16/raspberry-pi-fishcam/
Tue, 16 Jul 2013 17:58:05 +0000

I had security concerns over installing a wireless webcam to keep an eye on our goldfish. Such things are available cheaply off the shelf, typically manufactured in China, but I’m not willing to put a device of questionable provenance on our intranet, especially not with a direct channel out to a server in China.

I started thinking about using a Raspberry Pi and Skype as an alternative solution. As (most of) the software would be open source, that way I would only have to trust Microsoft and the NSA not to interfere with the Skype server ;-).

My Raspberry Pi camera module didn’t arrive until this week (the first production run sold out almost immediately back in May) and, unfortunately for the plan, Microsoft have turned off the ability to register a Skype developer account in the meantime :-(. Using the Skype infrastructure with the Skypekit “headless” client would have taken advantage of all of Skype’s well-established security and routing capabilities, and the remote end could have been any device with a Skype client, but for whatever reason it seems Microsoft have decided that they don’t want people to do that.

There is a Linux security camera package called “motion”, which incorporates webcam functionality; Pi user dozencrows has adapted this to work with the MMAL interface that the Pi camera offers. Unfortunately this turned out to be too heavyweight for my purposes, using most of the CPU and giving only a low frame rate.

All I really need is to turn the camera on when a remote client connects to the Pi, transmit the video stream to be viewed on the client, and turn the camera off when the client disconnects. A venerable and admirably simple utility called “netcat” turns out to be ideal for the job:
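The netcat command itself was shown as an image in the original post; based on the port number and options discussed in the following paragraphs, it would have been something along these lines (a reconstruction, not the original listing):

```shell
#!/bin/sh
# Listen on TCP port 1234; when a client connects, start the camera and
# send the raw H.264 stream to it; loop so the next client can connect
while true; do
  nc.traditional -l -p 1234 -c "raspivid -t 0 -o -"
done
```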

We are using nc.traditional, not the plain nc, because we need the “dangerous” -c option which has been taken out of the newer versions for security reasons; it’s OK, I know what I’m doing, I’m a security professional :-).

To view the video stream, we simply connect to the Pi using the open source VLC client; this runs on most platforms including Android, Linux, MacOS and Windows. Because we are viewing a raw H.264 video stream, we have to tell VLC that by adding “:demux=h264” to the options:
The URL is constructed with the protocol tcp, the IP address of the Pi (you can find this out using ifconfig), and the port number 1234 that we chose in the netcat script above.

[Edit: the MacOS version of VLC lacks the “Show more options” part of the above dialogue. Fortunately there is a simpler way to specify the H.264 format, which also works on Windows, by including it in the protocol part of the URL like this: tcp/h264://192.168.0.22:1234.]

There is still plenty of scope for improvement in usability, and there is no access control, but the basic functionality is working and I’m pleased to do it entirely with open source components!

Security Lessons from Bletchley Park and Enigma
https://franklinheath.co.uk/2013/05/29/security-lessons-from-bletchley-park-and-enigma/
Wed, 29 May 2013 15:32:33 +0000

I had fun presenting at the DC4420 security meetup in London yesterday. The topic was “Security Lessons from Bletchley Park and Enigma” and the slides are now up on SlideShare.

We covered how the Enigma machine works, how Bletchley Park exploited German mistakes, and the five lessons I picked out were:

1. Cryptosystems have subtle flaws
2. Plan for key compromise
3. Users pick poor passwords
4. Pick a good RNG and trust it
5. Don’t underestimate the enemy

It was a friendly and knowledgeable audience, and one gentleman (CJ) suggested a sixth lesson: all cryptosystems have a shelf life. This came out of a discussion of the GSM A5/1 algorithm, and how the breaks in recent years came about probably because it is still in use over 20 years after it was designed; this is similar to the lifespan of Enigma, which was designed in 1918 but still in use by the Germans up to 1945.

It’s worth noting that Fritz Menzer, a cryptologist working for the German military, had developed two potential replacements for Enigma (SG-39 and SG-41, the digits being the year of the design) but they were never widely deployed due to production difficulties.

Visualising a Software Security Initiative
https://franklinheath.co.uk/2013/04/10/visualising-a-software-security-initiative/
Wed, 10 Apr 2013 13:15:09 +0000

Last month I was pleased to attend the BSIMM Europe Open Forum. BSIMM is a model for assessing software security activities within an organisation; I have been following it since its first release in 2009, and over the last several months I’ve been able to use it in earnest at Visa Europe.

For me, the most interesting discussion at the forum was on presenting BSIMM assessment results in a visually compelling way. The BSIMM document uses spider charts, which hide potentially valuable information about activities at lower maturity levels. Sammy Migues presented a format he uses at Cigital, called “equalizer diagrams”, which reveal that information but lack the comparison with a benchmark.

I decided to ask Louise (the other half of Franklin Heath) about this, as data visualisation is one of her principal skills. We’ve come up with something I like to call a “DIP switch diagram”, which I will explain in this post. If you’re familiar with BSIMM and you want to cut to the chase, you can skip straight to the diagram.

First let’s consider a spider chart, as used in BSIMM4.

[Spider chart: sample firm compared with the BSIMM community average.]

The “sample firm” data is the made-up example used in BSIMM4, to spare the blushes of any of the real participants :-).

A full BSIMM assessment contains 111 data points, showing whether a specific software security activity has been observed in the subject firm or not. That’s a lot of information to take in at a glance, so it’s useful to have a chart or diagram which summarises it. BSIMM groups the activities into 12 practices, which are further grouped into 4 domains. Each activity has a defined maturity level of 1, 2 or 3. The blue line in the spider chart above shows the highest level of maturity of any activity observed within each practice. The orange line shows the average of that measure across all 51 members of the BSIMM community.
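To make that scoring rule concrete, here is a minimal Python sketch of how the spider-chart value for each practice is derived; the activity IDs and observation flags below are invented for illustration, not taken from any real assessment.

```python
# Spider-chart score: for each practice, the highest maturity level of
# any observed activity. IDs follow the BSIMM pattern, e.g. "SM2.1"
# means Strategy & Metrics, level 2, activity 1.
observed = {
    "SM1.1": True, "SM2.1": False, "SM3.1": True,  # Strategy & Metrics
    "T1.1": False, "T2.5": True,                   # Training
    "CR1.2": True, "CR2.4": False,                 # Code Review
}

def parse(activity_id):
    """Split an ID like 'SM2.1' into practice 'SM' and maturity level 2."""
    i = next(i for i, c in enumerate(activity_id) if c.isdigit())
    return activity_id[:i], int(activity_id[i])

def spider_scores(observed):
    scores = {}
    for aid, seen in observed.items():
        practice, level = parse(aid)
        scores.setdefault(practice, 0)
        if seen:
            scores[practice] = max(scores[practice], level)
    return scores

print(spider_scores(observed))  # {'SM': 3, 'T': 2, 'CR': 1}
```

Note how a single observed level 3 activity is enough to give a practice the maximum score, regardless of how many other activities go unobserved.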

Looking at that chart, we might immediately conclude that we’re doing a great job at Strategy & Metrics, Training, Security Features & Design, Code Review, and Software Environment: we’ve got the highest possible score in all those practices, so we don’t need to worry about those areas at all. Unfortunately, that might be completely the wrong conclusion. Those 5 practices include 47 activities and, to take an extreme case, we could be doing only 5 of the 47 and still get those impressive maximum scores, if those 5 activities happened to be level 3 ones in different practices.

This is where the equalizer diagram comes in (so-called, I suppose, because it looks a bit like a graphic equalizer display). The following diagram is based on what I remember from Sammy’s presentation:

[Equalizer diagram: observed activities per practice and maturity level.]

This uses the same data as the spider chart, but it tells quite a different story. Here we can see, for example, that there is plenty of room for improvement in the Training practice: although the spider chart made it look like we were already doing the maximum, this shows we are only doing 4 out of 12 activities.
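As a rough sketch of the arithmetic behind the equalizer view, the score per practice becomes a count of observed activities at each maturity level, rather than a single high-water mark; again, the activity data here is invented for illustration.

```python
from collections import defaultdict

# Equalizer view: for each (practice, level), how many activities are
# observed out of how many exist. IDs follow the BSIMM pattern, e.g.
# "T2.5" means Training, level 2, activity 5.
observed = {
    "T1.1": False, "T1.5": True, "T1.6": True,
    "T2.5": True, "T2.6": False, "T3.1": False,
}

def equalizer_counts(observed):
    counts = defaultdict(lambda: [0, 0])  # (practice, level) -> [done, total]
    for aid, seen in observed.items():
        i = next(i for i, c in enumerate(aid) if c.isdigit())
        practice, level = aid[:i], int(aid[i])
        counts[(practice, level)][1] += 1
        counts[(practice, level)][0] += seen
    return dict(counts)

print(equalizer_counts(observed))
# {('T', 1): [2, 3], ('T', 2): [1, 2], ('T', 3): [0, 1]}
```

A practice that scored the maximum on the spider chart can still show plenty of unfilled slots here, which is exactly the information the spider chart hides.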

The equalizer diagram isn’t telling the full story though, because it doesn’t show how we are doing compared to our peers. This is important because, as BSIMM says, “not all organizations need to achieve the same security goals” and “there is no inherent reason to adopt all activities in every level for each practice”. The 3 levels of maturity provide coarse-grained information on how frequently activities are observed: “Level 1 activities (straightforward and simple) are commonly observed, Level 2 (more difficult and requiring more coordination) slightly less so, and Level 3 (rocket science) are much more rarely observed.” But we wondered if we could show a bit more information than that.

Louise created a diagram using Tableau, her current favourite visualisation tool. The basic elements in our diagram resemble DIP switches, and the shade of each switch represents the fraction of all firms in the study which perform that activity. You can click to open an interactive version of the diagram.

[DIP switch diagram: per-activity detail, shaded by how commonly each activity is observed across the BSIMM community.]

As the activities which are more commonly observed are darker in the diagram, it draws our attention to those first. For example, we can see that the Compliance and Policy level 2 activities are quite commonly performed, but we aren’t doing any of them; it would therefore be a good idea to look at the details of our assessment for those activities to find out if there is a good reason for not doing them.
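The shading rule itself is just a ratio: the number of community firms observed doing an activity, divided by the community size (51 firms at the time of BSIMM4). The per-activity counts below are invented for illustration; BSIMM publishes the real ones.

```python
# DIP switch shade: fraction of the BSIMM community firms observed
# performing each activity (darker = more commonly observed).
TOTAL_FIRMS = 51  # size of the BSIMM4 community
firm_counts = {"CP2.1": 38, "CP2.2": 30, "T1.1": 45, "T2.5": 9}  # invented

def shade(activity_id):
    """Return a 0-1 darkness value for one switch in the diagram."""
    return firm_counts[activity_id] / TOTAL_FIRMS

for aid in firm_counts:
    print(f"{aid}: {shade(aid):.2f}")
```

Mapping that 0-1 value onto a light-to-dark colour ramp gives the at-a-glance effect described above: the eye goes first to common activities that the assessed firm isn’t doing.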

Referring to the Training practice, where the spider chart might persuade us we’re doing more than needed, and the equalizer diagram might persuade us we’re doing less than needed, this diagram shows us that we are probably doing OK: few firms in the study are doing level 2 or 3 activities in this practice. We should definitely ask ourselves why we aren’t doing [T1.1] but, apart from that, we probably have more important activities to worry about in other practices.

As well as comparing against the full set of firms in the BSIMM community, we can also use this diagram to compare against a particular industry sector. BSIMM4 includes activity counts for two sectors broken out from the total: Financial and ISV (Independent Software Vendors). Click here for a DIP switch diagram shaded for Financial firms; it’s not drastically different, but it does highlight the gap in the Compliance and Policy practice even more, as you might expect.

This concept could be refined further, but we thought it would be interesting to share what we have and get feedback as to whether this is a useful line of enquiry or not. Please let us know what you think in the comments below, and thanks for reading this far!