Category Archives: Operating Systems

I can’t stand to see an old computer be thrown out. A fact that means that I’ve far too much clutter and plenty of things that are collecting dust in some corner or another. A quick mental audit throws up the following:

And yet I still can’t refuse when someone says: “I’ve replace my desktop / laptop / server do you want the old one ?” To this end, I’m grateful to my Brother-in-Law ( he of the clown ) who, when he replaced his old Mac Book (A1181) asked exactly that. To be fair, he was replacing it because it had slowed to a crawl with OS X, the battery had died (permanent power required) and the optical drive has either been removed or died – possibly both, but not necessarily in that order !

It has also come to pass that my eldest child is just about to finish GCSEs and start on A-Levels, and, with mostly (all!) essay subjects plus a relaxation of the school rules on the use of computers – we had a discussion about giving her a laptop.

The above list of machines are things that I own that are in storage – there are plenty of working, switched on computers around the house – oldest two have desktops, youngest has another old laptop re-purposed to run Ubuntu 14 04 – there is also at least one “communal” Apple laptop and my better half’s and my personal laptops ( a distinction that appears to be completely lost on the kids … ) – because of this, I really don’t feel that buying a new laptop – even for the few hundred pounds that you can pick-up things for from PC World today – is worth it. It’s likely to be kicked around, abused, used and left in a bag in a school locker / shelf / corridor – so something new and shiny isn’t going to be for long. Say what you like about the speed of old computers, but the build quality of them is something else all together – I can use the IBM RS6000 to get things off high shelves by standing on it – I’d like to see you do that with a new Mac Pro trashcan !

So, given the “new” laptop at my disposal, and a new found determination to help everyone see the light about Linux – a solution dawned. A quick survey about user requirements elicited the following:

Word Processing

Music – listening, not composing

Movies – watching, not filming

E-mail

Skype

And the usual suspects of social media … ( Facebook … )

Ok, well there is nothing in the list that should challenge the processor overly, so the plan still looks pretty sound. Ordered a new battery from Amazon for the grand total of £17 and set to work.

One of the inspirations for this has been a recent episode of one of the pod-casts that I listen to – “The Linux Action Show” – they have recently migrated an Apple user from her MacBook pro to Linux – originally on the Mac hardware, but ultimately to a Lenovo Yoga. For them, after several attempts at getting Linux installed – over various flavours – they opted for Antergos – this is actually the distro that I currently have running on my desktop, and thus I (a) have a DVD already burnt and (b) it seems fine to me (!).

It was at this point that I found out that the optical drive didn’t exist … Slightly troublesome and I slid the disk into what turned out to be an empty slot and then spent 5 minutes trying to extract it from the case without an eject mechanism. Fine, stage 2 – put it onto a USB thumb drive – piece of cake – had the image, had the drive done. Hold down the option key to select boot media – nothing to chose from except the hard disk… Methinks – ok, being daft here, no trouble to try a different OS, I have the images so won’t take long – let’s have Fedora 22 on the USB … Still nada …

Right – Google time. All became clear, and I kicked myself for my forgetfulness. The Mac needs to have the EFI adjusted to allow it to boot other operating systems. The tool to do this is called rEFIt – this hasn’t been supported since it forked in 2013, but as the A1181 Mac model range is from 2006 – 2009 I figured that there would be a pretty good chance that the last published version ( thankfully still available for download ) would work. It’s a quick an easy install, although I did carry out the following command at the command line to be sure:

sudo /efi/rEFIt/enable.sh

( This is courtesy of this page – don’t know if it was necessary, but didn’t see the harm in being safe ! )

Ah hah ! Brilliant – I can now see the USB boot device and even select it to boot from ! Will it now install, will it heck …

Urm … Back to Google – unfortunately I can’t seem to find the vague comment that caught my attention about the fact that this model doesn’t have a 64bit EFI – it’s a 64bit chip, but only a 32bit EFI. The common factor until this point was that all of the versions that I had tried were 64bit. So, after a quick scout, the first that came up with a 32bit version was …

Right, now we are cooking with gas … sort of … Installed the downloaded 32bit to USB key, tried and … nope.

Ok, so, back to the drawing board – going over the process from the beginning, back to the EFI. Lo and behold:

rEFIt Troubleshooting USB Disks
Note: The following applies not just to USB hard disks, but to any
storage device that is not considered "internal". That includes USB
flash drives, SD cards and other memory cards, as well as hard drives
attached through Firewire or other connections.
Booting Windows or Linux from an external disk is not well-supported
by Apple’s firmware. It may work for you, but if it does not work,
there is nothing rEFIt can do about it.

So, don’t have an “internal” CD drive so might have hit a problem here … But anyhoo, lets give it a shot. Off I go to find my USB optical drive, plug that in, burn a 32bit Elementary ISO to a DVD-R and give it a go …

Success ! Woo hoo. Boots like swimming through treacle – so I must admit that I was a little concerned about the performance of the OS once it was installed, but actually it turned out fine once it was running off the hard disk rather than over the lousy USB interface the Mac.

I have to say that it looks great. The requirements list has been met:

Word processing – the indomitable LibreOffice is providing this function.

Music – Spotify – through the web-player rather than the client – I’d have liked the client to install, but as a 32bit OS there doesn’t seem to be a package available – I might revisit and see if the source is there and re-compilable, but as the web-player works so well, I don’t really see the point.

Movies – Netflix / iPlayer / 4oD etc. through Google Chrome – ( this was also required for Spotify for the built in Flash ) and also VLC for viewing things that are held on the media NAS – I have to say that I was pleasantly surprised by the performance of VLC streaming over the WiFi to the laptop, wasn’t something that I was expecting to be so smooth.

E-mail – Thunderbird and, also, the built in Elementary OS Calendar – both of these were synchronised with the Gmail account and this was seamless ( on the client – had to adjust the Google Security Settings to allow for this to actually work )

Skype – well, I installed Skype – fortunately this is mostly used in chat mode, as hardware support for the camera and microphone seems to be non-existent at the moment. Sound out is fine, sound in – doesn’t even seem to have a microphone – one on the bug list …

Social Media – thank goodness for Google Chrome – I don’t have to worry about any clients as the web interface to everything else is only a click away !

So far the laptop seems to have hit the mark – when I went to say “Goodnight” last night it was in active use – and so far there have been no complaints. There are still some things that need to be sorted out – the microphone hardware is one, an other – if this is going to be used for A-level coursework – will be backups, perhaps that will be tied in with synchronisation with the desktop machine – currently a Windows machine, but maybe going to migrate to Linux off the back of this so that the environment can be common on both. At least that will be 64bit !

The overall title of this series has been based around the “Digital Forensic Workflow” – I mean this in it’s broadest sense. This isn’t just about the imaging / examination side of things – but the full life cycle. From first client contact to the final report ( and billing ! )

Possibly you read a pair of articles that I wrote on Forensic Focus – Part 1 and Part 2 here – in which I mentioned that I’ve “put out to pasture” my old Mac Book Pro and obtained a shiny Lenovo X1 Carbon – just before the Superfish scandal hit. That particular issue didn’t bother me overly as, I never even booted the machine into Windows – as soon as it came out of the box it had Fedora Core 21 installed on it from a USB stick ( yep, no optical media drive on the Lenovo ). Everything on the laptop seems to work without any tinkering – even the fingerprint reader flashes away when authentication is required, although I have not made use of it … yet …

I’m a fan of Fedora – this is news to no-one by now I suspect – and it has been my choice of Linux flavour since it came out. My youngest daughter’s laptop runs Ubuntu ( 14.04 I think – I can’t remember what I installed ) and my workhorse desktop is currently running Antegros ( an Arch based distro – to be fair, as a mix between an experiment and the fact that I couldn’t actually get Ubuntu running satisfactorily on it with a dual-Nvidia dual-monitor setup ). There are many more machines kicking around – not least a rather substantial1 HP rack-mount server that is currently running VMWare ESXi – that are awaiting conversion.

For now though we are going to focus on my mobile computing platform (!) – the others will get their own write up in due course.

So, what exactly does one use a laptop for then ? Well, in my case the list pans out pretty much as follows:

E-mail – a lot !

Writing – which breaks down to:

Blogging ( like this )

Word-Processing

Social Media type stuff – Twitter, Facebook etc.

System Administration of other things

Research ( web, but also other more “hand-on” things )

Coding

Skype (?) / Instant Messaging (?)

I don’t really play games on my laptop – not even solitaire, so I’ve left that off the list – although, again, may come back to that one later2 !

Staring at the top, but not particularly promising to continue in any given order …

I used to use Outlook on the Mac and on Windows – clearly this isn’t one that I can transfer across, being both unavailable for Linux3 so I need to find something else. That something else is Thunderbird ( at least it is at the moment, and so far, so good ). Thunderbird is the sister application of the Firefox web browser from Mozilla. Mozilla, for those of you who are interested in this sort of thing4, has a heck of a pedigree – created in 1998 when the source to the Netscape browser was released (in the 90’s Netscape Communicator was the dominant browser by a long shot) – it has since grown again to a powerhouse with Firefox, Thunderbird and Firefox OS even on phones.

Thunderbird is great for me – not dissimilar in it’s tabbed approach to e-mails to that which I’ve seen in Lotus Notes 9+ lately, I’d like tabbed composition of e-mails as well, rather than pop-outs, but I think I’ll either have to wait or write it ! To be honest, an e-mail client is an e-mail client pretty much – the basic concepts have to be there, otherwise it isn’t an e-mail client ! So effectively it is the enhancements – in the case of Thunderbird, called “Extensions” – that make the difference. I’m working with four that I like that are managing to make things easier for me.

Enigmail is the extension that manages the PGP encryption and signing of e-mails. Provided that you have PGP installed on your system, when you kick it off it talks you through the creation of a public / private key pair, and even uploads the public part for you to a key server. Then it is a matter of selecting the sign or encrypt icons in the compose window and you are done. Emails that come in, either encrypted or signed, are managed automagically and the decryption and / or authentication of signature is seamless.

Thunderbird Conversations

This is more of a pretty-fication rather than an actual tool – it sorts e-mails into conversations – so that you can see the back and forth of an e-mail chain all in the same place, rather than scattered over time. I like it – others may not. It also adds the feature of a “quick reply” in the same tab so that e-mails can quickly be responded to. The problem with that is that Conversations and Enigmail don’t want to talk to each other, so there are no signing / encryption options on the replies.

This is an interesting one – I’m experimenting with this at the moment. Tagging allows you to assign a given message to a specific category ( for both visual separation – you specify the colour of the tag ) and for tag filtering. E.g. ( As is in mine ) Personal = green, Work = orange, Social Media = Purple etc. It makes it easy to carry out a quick scan. TaQuilla is a bayesian tag adder … This means that rather than you tagging the message, or having set rules to tag a message ( all e-mail from the better 1/2 becomes “Personal” ) it learns from the e-mails that you have already tagged. So, because I have tagged all messages from better 1/2, children, sibling and parents as green – it recognises the common features of _that type of e-mail_ so that when an e-mail comes that doesn’t necessarily match a specific rule ( e.g. a child sends from school e-mail rather than personal ) that it can recognise the e-mail with a degree of certainty and tag it as personal … Still training this one a little, but it is getting there.

This is the calendar extension for Thunderbird – required in order to replace Outlook in my opinion. I could make use of another, separate calendaring application to track time, but I happen to like it as I tend to add things to my calendar most often when I’m with my e-mail so it makes sense for me ! I happen to have bound my calendar in Thunderbird to the online one that I have with my Google Account – it wasn’t my first choice, I actually wanted to continue to use my hosted Exchange calendar as my primary source – but it turns out that Microsoft doesn’t want to play particularly nicely with open standards for calendars, Apple won’t let me use iCal unless I make it public to everyone. So fine, I’m using Google. This means that I can sync the calendar across all my devices – laptop, iPhone & Android, which really is great as I’m bound to have one of the above around at any given time !

Thunderbird itself is configured to pick up and send my e-mail to and from Exchange over SSL/TLS IMAP and SMTP. So far, I have to say that it is proving to be a most viable option.

1. For home use anyhoo …

2. With the streaming features of Steam, even though the laptop itself doesn’t have the graphical oomph to pull of gaming. I may, once I get the main desktop machine running to my liking, give this a go …

3. Yes, I know about Wine and CrossOver … but why would I want to in this case ?

Not too long ago there was a reasonable amount of press ( in the IT world anyhoo, meatspace pretty much ignored it ) regarding attacks against the SSh protocol. The “SShPsychos” group has been responsible for a large number of coordinated brute force attempts against well known usernames with a variety of common passwords. This isn’t long term targeted attempts against a particular target – rather a scatter-gun approach at anything that’s running an SSh daemon on Port 22 using a short-ish list of dumb passwords.

To be honest, I’d known about this sort of background level for a long time – and it came as no great surprise to me. It’s been going on as long as I’ve had an SSh server running on a public IP, and to be fair the volume _has_ increased. It has been a great example to students though when I’ve been teaching Linux security – pointing out the reasons for carrying out the basics of securing SSh:

No remote root login

Complex passwords

Specific IP firewall rules if/where possible

And also some of the more complicated ones:

Fail2Ban

Chroot Jails

Multi-Factor Authentication

Even now, logging into my webserver ( “www.thinking-security.co.uk” ) via SSh on Port 22 there are approximately 2000 illegitimate login attempts over the last 20 hours. Quite often when I re-connect after a weekend or more than a few days, this number is in the 10s to even 100s of thousands. I’ll be honest, it doesn’t particularly bother me – it is much rattling of windows and testing of locks – there are much easier fish to fry on the interwebs than that particular machine.

It did cause me to ask two particular questions though:

1) Where are all the attacks coming from ?

2) What usernames and passwords are they trying ?

Turns out that question 1 is easy, and question 2 is half easy …

On any Linux server, the connections made against SSh are logged. These go into /var/log/secure and here is a prime example:

This is one connection attempt – the source ( from ) is 37.59.230.138, and it has presented the invalid user “bankid” – this has actually failed at this point – but SSh won’t let the attacker know that, it will still allow them to enter three password attempts before terminating the connection. This inability to tell if it is the username or the password that has failed is actually quite important – realise that if you can tell if an account is valid, then you can easily stop wasting time and effort on ones which are not. This non-specific failure method “Either the username or the password is wrong” leaves the whole possible space open requiring a far greater number of attempts to find a valid username _and_ password combination.

37.59.230.138 – great IP address – I’m sure that there are some savants out there who can look at that and tell me where it is from – but I assure you, I am _not_ one of them. I have to look it up – and even then the sources occasionally disagree ( not that it actually really matters as I’m not sending a drone over to wreak revenge ). For the purposes of the remainder of this process I’ll be using the MaxMind database, and, for the sake of legal compliance:

As a hosting company, they aren’t directly responsible for the attack, rather it is just being launched from a machine that has been allocated an IP address within their scope. However, having said that, a quick glance at a Google search about them suggests that this is far from the first time that they have been used as a stepping stone onto other things …

The IP address lookup also gave us the Lat / Long ( estimated – by a long shot ! ) of the address. We can plug these into Google Maps to have a look-see at the rough area of operation _of the IP address_. This isn’t, most likely, where our perpetrator is sitting – more likely the recorded head office of the company …

That’s great one IP address down – 2000 to go …

In the next articles in this series, I’m going to extract all of the IP addresses & usernames from the logs ( across multiple servers ! ), and then plot these against a map to show both historic and real-time data … And then we’re going to move on to finding out what passwords are being attempted using a “honeypot” !

Right, you need to read part one in order for this (a) to make sense or (b) to work – so if you haven’t done that already – Go !

Looking at a domain …

So, right now you have a working domain name server, just the one mind you, and it is configured to allow you replicate it over to another server … We are going to set up that second server now, so that we have some resiliency in case something horrible happens to the first one. Just as a side note – my two servers are both hosted at Digital Ocean – a great, cheap, provider of virtual hosts. They have a number of data centres world wide, so it would make no sense at all to have my resilient server located in the same place as my main server. To that end – one is in London, the other in Frankfurt. I figure that as a majority of my business is European, sticking them anywhere else is a bit pointless ! There is nothing, however, in future to stop me from either migrating or adding additional servers …

Anyhoo, onto the configuration. The first part is identical:

yum update -y
yum install bind bind-utils bind-chroot -y

Back into /etc/named.conf we go … ( vi 😉 ) and this time we make the following changes:

You can see the difference from the earlier one, this is slave entry and it points at the master IP address ( fill in your own domain and IPs here … )

This being done we can kick off the BIND process and get it added into the boot sequence as we did on the first server, troubleshooting as required. ( Which, go figure, I needed to do again, because I missed a darn “;” ! )

service named restart
chkconfig named on

Again, now we should be able to query that nameserver directly about our domain, so, using nslookup as we did before double check that it works.

Assuming it did, congratulations, you now have redundant name servers managing your DNS for your domain.

There is one last thing to cover, and that is making changes.

To make a change to an existing zone:

1) Edit the zone file on the master changing:

a) the serial ( incrementing )

b) the entry that you need to alter

2) Reload the zone files using the command

rndc reload

This will reload the zone file and the changes will propagate over to the slave ( if and only if you increment the serial though ! )

To make a change to add / remove a whole zone basically follow the same steps as for adding the first zone from Part 1. You’ll need to create the new zone file and populate it with the information that you require, and add the zone into the named.conf. You’ll also need to let the slave know about the zone in it’s named.conf. In this case use:

service named restart

To restart the service so that it reloads the named.conf file and is aware of the new zones.

I hope that your own DNS server gives you back some of the control that you would like of your digital estate. I will write more on this as I progress through the migration of the other 81 domains and also cover off things like round-robin DNS load balancing and MX entries for e-mail … Along with the trials and tribulations of getting everything migrated !

Somewhere over time I seem to have acquired 82 ( yes, eighty-two ) domain names, a small number have been bought on behalf of other people ( relatives, friends & children ) – some have been bought sensibly ( business related ) and some have been bought on a whim as I thought that I might get around to doing something with them at some point. I’ve made use of the rather good ( and cheap ) service at 123reg – which in terms of registration is great, and, I’m sure if you are managing the DNS of one or two domains is probably a pretty good admin interface for that too – however, for the full 82 it is excessively painful.

Cover of the O’Reilly Book on the subject

The recent – “I’m going to move everything to Linux” – decision has left me thinking that I should get on and tidy up everything else. The company website runs on Linux already – and I’m planning to point all of the pertinent domains at it. At some point I’ll be migrating the e-mail from the hosted MS Exchange server as well – although I have to admit that’s one thing that I don’t fancy doing – partially because other people rely on that one beyond me.

So, as part of this “phase” I’m going to take back control of my domains and host my own DNS servers ( yes, plural, for redundancy purposes ) on Digital Ocean droplets across two data centres – one in Frankfurt the other in London. ( I figure that I’ll remain in Europe for these, rather than the US or Middle/Far East ).

As is my wont, I’m going to be using Fedora Core 21 x64 as the base OS – this isn’t to bad-mouth any other distros ( except Ubuntu – I don’t like Ubuntu, or Debian … ) – I just like Fedora ! This could / would work equally as well with CentOS, which is the other sane option on Digital Ocean – FreeBSD is pretty cool, but it isn’t Linux, so doesn’t count … [ please address hate mail to /dev/null ]

I used to work in an ISP, it was my first job when I was still at University, and managing DNS on BIND was one of my roles back then. I have now, forgotten absolutely everything that I ever know about it. So this is going to be a little bit of a learning curve !

We begin0:

First off – are we up-to-date ? Just installed, so unlikely – a quick:

yum update -y

To bring it all up to speed and make sure that all the packages are at their latest and greatest.

Then, you need to install BIND, BIND tools and, for the sake of security BIND chroot1:

yum install bind bind-utils bind-chroot -y

The main configuration file for bind is in /etc/named.conf, so using your editor of choice ( which will, of course be vi ) edit th options section to look like the following:

[ replacing aaa.bbb.ccc.ddd with the ip of your secondary if you have one, and removing it if you don’t ]

One of the important things here is that – if you are running an authoratitive DNS server for your domain(s) – you _turn off recursion_ – this prevents your server being made part of a Distributed Denial of Service (DDoS) attack.2

N.B. Watch your “;”s BIND is painfully picky about syntax !

Once you’ve got this part configured, it is time to start adding domains !

Further down in the named.conf file are a list of all the “zones” that your nameserver will know about.

The first zone is the default for the server, the second has been added by me for the domain “security-intelligence.uk”- the syntax above makes it the master for the zone ( the definitive record ), the file is where the actual information about the zone is held, and the allow-update relates to which machines are allowed to make dynamic updates to the DNS entries for this domain – for use in DHCP scenarios. You can repeat this as many times as you like ( in my case it will be 82 by the time I’m finished ! Although I suspect that a script may well come into play to take the downloadable CSV file from 123reg and do the majority of this for me !3 )

At this point we move on to create the associated zone file … Ok, so again, using vi your editor of choice, create the file that you gave your zone above in /var/named/ e.g.

I’m not going to go through this in detail at the moment – will come back to that later, but there are a few things here that you need to consider:

(1) The Serial : this needs to be changed each time you make an update to the record, incrementing with each modification, sadly this is a point where the use of the American date format makes sense as yyyy-mm-dd will always increment. If you are making changes more than once a day then append an additional couple of digits so that you can run through 99 changes before requiring a new date …

(2) Change the bits that refer to my domain to refer to yours …

(3) Change the IP address “www.xxx.yyy.zzz” to your main DNS server, “aaa.bbb.ccc.ddd” to your secondary ( if you have one – remove the second name server from the list as well if you are only doing one ) and “qqq.rrr.sss.ttt” with whatever you want your domain records to point at. In this case they both point at my webserver, so a URL of “security-intelligence.info” or “www.security-intelligence.info” will both go to the website.

Once that’s all done for your domain, you are actually good to go. Kick off BIND by entering the following command:

service named restart

If you get an error at this point ( like I did ) then:

systemctl status named.service

May well point you in the direction of your missing “;” !

Assuming that, unlike me, you can get it right – you should now have your primary/master nameserver up and running.

Give it a quick test:

nslookup - www.xxx.yyy.zzz

This will put you into nslookup’s interactive mode, querying your server at the ip “www.xxx.yyy.zzz” ( what your server’s actual IP is here … ). Enter one of the domain names that you are serving, in my case “www.security-intelligence.uk” and you should see back the response with the correct IP address as specified in your zone file.

The other thing that you’ll want to be doing is setting up BIND to start on each reboot, so a quick:

For the whole time that I’ve been doing Digital Forensics, I’ve been using Windows for it. This seriously irks me ! I’ve been in love with open source / free software since I installed my first Linux box at University. The original reason for the install was to avoid having to walk to the CS/AI labs in winter, in Edinburgh. I like being warm and dry as much as the next person – something that doesn’t happen often outside in Scotland in Winter. Linux emulated the SunOS / IRIX environment well enough that I could carry out my C / Prolog work without hypothermia.

Since then, I’ve always had _at least_ one Linux machine running at any one point in time – but since I stopped being a UNIX SysAdmin and started being a Security / Forensics Consultant, usually not as my main machine. I tried for a while to assuage my guilt by using Macs – well documented below – ‘cos at least they are “UNIX” machines when running OS X. Windows though has been an ever present thorn in my side, firstly for the running of proprietary forensic tools ( Oxygen, XWays Forensics & other odds and ends ), secondly for the running of games ( I don’t play many, but enough … ) and finally for the suite that is Office – something that has been required day-in-day-out for far, far too long …

Until now, I haven’t actually _tried_ to get rid of it though – having enough bits of hardware around to run Windows, MacOS and Linux both physically and in virtualised environments has meant that I don’t need to do it. The gnawing feeling that this is wrong has been exacerbated by tuning into a number of Linux podcasts ( I recommend Jupiter Broadcasting, Linux Action Show and Linux Unplugged ) and this had drawn to my attention that perhaps Linux is now “desktop ready”. And now Steam ( at least in theory ) works on Linux for some games in my library ( Bioshock Infinite ), there really is no excuse any longer.

This is it, I’m biting the bullet and removing Microsoft and Apple from my day-to-day workflow – for _everything_ forensics, security, documents, e-mails, IM/VoIP, games, calendar, phone synchronisation (but not phones themselves – I am aware of the Ubuntu phone and may make the switch at some point, but for now my iPhone remains) etc. etc. etc.

I think that there are some things that won’t be straightforward, I’ll admit that up front – but I sincerely hope that the Open Source Eco-System has solutions to all problems, and I’m not unwilling to dust of the few coding skills that I have in order to get to the end goal.

Split, on Mac OS X, doesn’t have the -d option to number files. This is a right royal pain when you are splitting up a dd image as I couldn’t figure out how to get either XWays Forensics or EnCase to accept the split image when suffixed with aaa, aab, aac etc. First time out of the gate I just paid a child to sit and re-number the lot for me ( which cost me £5 – but saved my sanity ), but for future reference and to save my financial status here is a (albeit long) one liner for the command line that will take any three letter suffixed filename & change it to the corresponding numerical value. (There are probably cleaner ways of doing this – feel free to let me know and I’ll be happy to update them here).

So this takes all the files test.dd.aaa, test.dd.aab, test.dd.aac etc. and converts them to test.dd.001, test.dd.002, test.dd.003. So, this will work for any number of files up-to and including zzz which is 17,576 – but extending it further wouldn’t be a particularly challenging task…

As I’m fairly sure that you may have gathered, Windows XP is going to go end of life very, very soon. 8th April this year (2014) in fact. This is proving to be a little bit of an issue for more than one organisation, many people have come to love Windows XP – and the old mantra of “if it ain’t broke, don’t fix it” has, until now – meant that there was little reason to move to Windows 7 or, even less to the poorly received Windows 8(.1). This has meant that, right now, across the world there are IT departments who are having a little bit of an issue – how to upgrade _all_ the machines in the organisation to Windows 7 as soon as is humanly possible. I have to say though that of the multiple organisations that I know of, not a single one is going to have finished their upgrade by the 8th …

So, realistically, where does this leave them in terms of risk ? I was actually asked this by a customer this morning: “please quantify our risk”. Well, I’m a big fan of statistics – not a great mathematician, but the concept definitely amuses me. So, thought I, what is the probability of there being a certain type of vulnerability this month ? A quick Google didn’t throw up many sites with statistical data on XP patches, and I didn’t want to go through all of the Microsoft stuff myself, so I’ve borrowed the data from the excellent guys over at Secunia1. The following graph shows the vulnerability severities ( 356 in total over the last 10 years )2.

This isn’t entirely helpful, as these don’t map directly to the Microsoft classifications ( Critical, Important, Moderate, Low ). Let’s make one or two assumptions, then: we’ll assign the top four Secunia categories to the equivalent Microsoft ones, and we’ll assume that the vulnerabilities were evenly distributed over the 10 year period ( 120 months ). So, 1% of the vulnerabilities is equivalent to 3.5 vulnerabilities ( roughly – it’s 3.56, and I know that I should round up, but this is all assumption anyhoo ! ). Each of those segments above therefore equates as follows:

Critical ( Extremely ) – 4% or 14 vulnerabilities

Important ( Highly ) – 38% or 133 vulnerabilities

Moderate ( Moderately ) – 24% or 84 vulnerabilities

Low ( Less ) – 28% or 98 vulnerabilities

If we continue with our assumption that these have been evenly distributed over the lifetime of XP ( 10 years / 120 months ), we can see that the percentage probability of a given criticality occurring in a given month is equivalent to ( total number of vulnerabilities / 120 ) * 100, which gives us the following:

Critical – Approx 10%

Important – Approx 110% ( More than certain !)

Moderate – Approx 70%

Low – Approx 80%

Well, that’s not very good news – it suggests that each month moving forward will add new vulnerabilities at these rates, so at the end of one year you’d expect to see, on average:

Critical – 1.4 vulnerabilities

Important – 13.3 vulnerabilities

Moderate – 8.4 vulnerabilities

Low – 9.8 vulnerabilities
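The arithmetic above is simple enough to check mechanically – a quick sketch with awk, using the counts from the list ( my numbers, derived from the percentages, not Secunia’s raw data ):

```shell
# For each severity: probability of one turning up in a given month
# ( count / 120 months, as a percentage ) and the expected number
# over a year ( count / 120 * 12 ).
for entry in Critical:14 Important:133 Moderate:84 Low:98; do
  name=${entry%%:*}
  count=${entry##*:}
  awk -v n="$count" -v label="$name" 'BEGIN {
    printf "%-10s %6.1f%% per month, %5.1f expected per year\n",
           label, n / 120 * 100, n / 120 * 12
  }'
done
```

Which reproduces the figures above, give or take the rounding.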

Ok, that’s fine – but what we are actually interested in is the residual risk, isn’t it ? What are we left with after we have considered our controls and countermeasures – what mitigation is in place ? Well, the following information gives us a bit of a better idea:

Using the base assumption of even distribution, we get the following probabilities of each occurring in a single month:

You can keep going if you want to, the following graph shows the actual impacts:

I’m not going to do that here – I don’t really think that it has much value. The point is that there is a distinct bias towards the first vulnerability announced being an “Important, Remotely Exploitable” one.

So what ? Well, that’s interesting actually. Microsoft has given up patching XP, but that’s not the same thing as being left defenceless. Both anti-virus and firewall technology for XP are going to continue to be supported for some time – and, if these countermeasures have been implemented, there is a good chance that any given vulnerability will be completely mitigated by them. The trouble is that, until the vulnerabilities are actually announced, you aren’t going to be able to tell how effective your controls will actually be – and you may need to do some fairly rapid reconfiguration of your firewalls and/or AV signatures to ensure that you are detecting and preventing those attacks.

Please don’t take that as permission to slack off in your upgrades or, even worse, to decide that you can accept the risk – the best course of action is to upgrade to a patched and supported OS. However, the above at least has a stab3 at quantifying the scale of the problem !

Just for the record, by the way: I have confirmed with Microsoft that there will be patches for XP released on Tuesday the 8th April. These will be the last ever XP patches but, for those of you who have a monthly patching policy, you won’t actually breach your policy until the following month …

1. Guys, if you read this link, you should definitely bring back the free Vulnerability Alert mailings – but if you don’t, a free subscription in exchange for the plug would be welcome 😉

2. Ok, I realise that this is not 100% legitimate – there wouldn’t be an even distribution over the years, so this really is a generalisation. The distribution over the years actually looks like this …

If I had paid attention in University Mathematics lectures, I would remember how to do this more accurately, but I didn’t and I don’t…

3. You should look carefully at what your organisational risk appetite is, and also the full business impact of a vulnerability being exploited. Also, please remember that you may have obligations under other things (PCI/DSS for example) that you need to meet…

I’m planning on using the Fedora 17 Remix – at least to start with – it shouldn’t be a problem to obtain / compile pretty much anything to run on it ( famous last words, those ! ). So it seems like a reasonable way forward.

I’ve been a long time RedHat / Fedora fan – it was my first Linux back in the day ( when RedHat was still free … I don’t recall exactly, but probably RedHat 2.0 ) – I had it installed on my Pentium at University and used it, with a 14400 modem, to avoid the Edinburgh weather, instead of having to go to the AI and CS labs for assignments … Sigh … The good old days …

Getting it onto the card is pretty straightforward – once you have your uncompressed image, use Win32ImageWriter to write it to the card.2 This worked just fine, and it booted up beautifully.
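For anyone doing the same from Linux or OS X rather than Windows, dd does the equivalent job. A sketch only – the image filename and /dev/sdX below are placeholders for your own, and the command is echoed rather than run, because pointing dd at the wrong device is catastrophic:

```shell
# Placeholders - substitute your own image and card device.
# Find the device with 'lsblk' ( Linux ) or 'diskutil list' ( OS X ),
# and make sure the card is unmounted first.
IMG="fedora-remix.img"   # hypothetical image filename
DEV="/dev/sdX"           # placeholder device

# bs=4m suits the OS X dd; GNU dd wants bs=4M instead.
CMD="sudo dd if=$IMG of=$DEV bs=4m"
echo "$CMD"   # prints the command - paste and run it yourself once
              # you have triple-checked the device name
```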

For screenshots & video of the Pi, I’m using the Elgato Game Capture HD ( see above ) – this works brilliantly, it has a USB connection to my laptop, an HDMI from the Pi and an HDMI to the monitor. It introduces no lag on the monitor side, but quite neatly captures – in full HD – the image on the way through. It’s a very neat way of getting screenshots off the Pi, which otherwise would prove a little troublesome. I’ve attached the video of the first boot ( and setup configuration ) below – more information and details will follow in due course !

*. The astute and keen eyed amongst you may have noticed that in this picture the two USB WiFi devices aren’t showing – that’s because they are currently in my Ubuntu PenTest laptop running aircrack-ng as a proof of concept for this project …

1. We’ve had some laptop issues at home – my other half’s MacBook Pro croaked and, seeing as I have an issued laptop from my current client and she doesn’t, she’s taken my MacBook Pro with her SSD. I’ve spent the last few weeks turning an old Lenovo T61 into a usable computer again. First off – out with the old spinny platters and in with an SSD for the primary HD. I doubled the RAM again ( past the quoted manufacturer maximum ) to 8GB, got rid of the CD-RW drive ( never used it anyway ) and replaced it with a 750GB hybrid disk to hold my VM images. Oh, and, missing the screen real estate from my 17″ MBP, I also acquired a portable Lenovo second screen – I really don’t know why I’ve not seen these around more – they are brilliant ! I’m not sure that I couldn’t have bought another laptop for the cost of all the upgrades, but it was fun to do, and there is something quite stylish about the older Lenovos – that IBM feel, still, I think !3

2. Be prepared – this is a definite “cup of tea” part of the process. In my case: unload and load the dishwasher, make and drink a cup of tea, have a chat with my Brother-in-Law on the phone, get a high score on Temple Run 2 and, finally, just to be sure, go and get the kids from school. But hey, it finished ! ( In all seriousness, I was getting about 1MB per second for 3GB – that’s about 50 minutes. )

Roman depiction of the Tiber as a river-god (Tiberinus) with cornucopia at the Campidoglio, Rome. (Photo credit: Wikipedia)

I must admit a certain love for the Raspberry Pi – we have two in the house just now. One was doing service as an XBMC box onto the TV ( something it was OK at, but not great – now replaced by a PS3, which just works better, and I can play BioShock1 on it too ) and a second was left by Santa in order to take up a role as a Python training device for the smaller members of the household ( although, having discovered yesterday Raspberry Pi Assembly Language Beginners: Hands On Guide: 1 and RISC OS for Pi2, they may well find themselves learning Assembly instead ). With the retirement of the first Pi from media player duties, though, I’ve started to contemplate what it might become – it doesn’t pack a huge amount of punch but, for all of that, it’s small, light and exceedingly power efficient – so much so that it is feasible to run it from batteries.

A few years ago I went through a similar Mini-ITX phase, building a small footprint machine which ran very serviceably ( and the components still do I believe – they were carved up for an Arcade project which is still uncompleted [ although the controller with two good arcade joysticks and some good buttons to thump was running very nicely over USB with MAME and Gauntlet ! Anyhoo, I digress more than usual ] ) at the time I was frequenting the rather good Mini-ITX.com and enjoying their project pages ( sadly no longer updated much – they used to be fun … ) – they had a link to “The Janus Project” – a self-contained wireless security test rig in a Pelican case.

Now, I always liked this idea – I didn’t have the money or the time, but I thought it was cool. Well, time and technology wait for no man, and since then we have had much in the way of efficiency and miniaturisation, not to mention some much more refined ways of cracking WiFi. To this end, I intend to build a mini-Janus, a son of Janus – “The Tiberinus3 Project” if you will.

Given that time has moved on so much, I find that I have an opportunity to work on a smaller scale, and to be portable … So, to that end, I have started to assemble the parts – to wit :

1 x Raspberry Pi, OS & SD Card

1 x Power Source ( 12000mAH battery pack )

1 x GPS dohicky

2 x WiFi dohickys

1 x 3G Modem

1 x Waterproof Case

1 x USB Hub

The idea is to contain all of the above in a box which will be self-contained for a period ( 12000mAh – not sure, but I reckon in excess of 8 hours runtime, although that will depend on the peripherals … ) and fairly autonomous in the collection of data – e.g. while it is on, it will constantly seek out WiFi sources. This device can then be left comfortably on a client site for a period, to perform an unobtrusive wireless audit as part of a PenTest. There are currently two WiFi dongles on the list – quite simply, one to scan and one to manage – although, depending on the power consumption, it may be possible to run more than two through a powered USB hub, or to run two in scanning mode and leave management out of the issue, or possibly even to use the 3G modem over USB to provide management, and use two to scan … All an experimental theory at the moment !
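For what it’s worth, the 8 hour guess can be sanity-checked with some back-of-an-envelope awk – the draw figures here are my assumptions ( a Pi plus a few USB peripherals, somewhere between 700mA and 1200mA at 5V ), not measurements:

```shell
# Crude runtime estimate: battery capacity in mAh divided by draw in mA.
# Ignores conversion and regulator losses, so treat the answers as optimistic.
awk 'BEGIN {
  capacity = 12000                        # the battery pack, in mAh
  for ( draw = 700; draw <= 1200; draw += 250 )
    printf "at %4d mA: %4.1f hours\n", draw, capacity / draw
}'
```

Even at the pessimistic end of my assumed range, that comfortably clears 8 hours before allowing for losses.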

Obviously, you should try this at home – what’s the point in writing it up otherwise ? – but remember the various legal requirements, such as ( in the UK4 at least ) the Computer Misuse Act – you shouldn’t make use of anyone’s computer systems without their prior authorisation.

Parts are on order, and I’ll update as things assemble ! ( For the record though, I’ve been looking at doing some of the development work on the QEMU Pi Emulator … Not sure how that’s going to pan out either … )

1. A game I _really_ enjoy, although, like most games – I suck. I’ve also been infuriated by the constant delays surrounding BioShock Infinite which has switched from a birthday present to a Christmas present and back again since it was supposed to be released …

2. My Junior School had just switched to Archimedes computers when I left; the Senior was RM IBM clones. I actually never really got to play with them properly, although they always held a certain fascination – I’ve eyed up various 2nd-hand bits of kit in the Vintage section of eBay, and have even bid, but never to a winning outcome – the port to Pi has got me all of a flutter !

3. “One tradition states that he came from Thessaly and that he was welcomed by Camese in Latium, where they shared a kingdom. They married and had several children, among which the river god Tiberinus (after whom the river Tiber is named).” – Encyclopaedia Mythica – I would so love to claim I knew that, but it was Google.

4. Other countries are available, and I could even recommend one or two as being nice places to go. However, make sure that what you are doing is acceptable under your local jurisdiction – fines, prison or worse await those who overstep the mark.