A few days ago I had cause to register a new Internet domain name for a project I’m working on.

On this occasion it was a “.net” domain.

Unfortunately, for some reason I wasn’t offered the “Identity protection” option at the time of purchase. I’ve not previously had too many issues from going without it, so I didn’t think much of it.

This time, however, proved to be different.

The WHOIS records for the domain concerned contained my full name, address, and telephone number.

Within hours of registration I was starting to receive spam e-mails like this one:

Anyone with a degree of knowledge will know that search engine registration is free. Threats like this one, implying that if you don’t purchase their service your website will not be indexed by search engines, are misleading at best and outright deception at worst.

What was slightly more sinister this time was that so far this morning I have received no fewer than five unsolicited phone calls from foreign-sounding ladies and gentlemen, and in one case a silent call directly to my mobile. The calls I actually answered all wanted to talk to me about website development or search engine registration.

This takes publicly available personal contact details and exploits them in an entirely new way. They MUST get enough takers to make it worth doing, but it alarms me to think that people would fall for this.

I’ve subsequently enabled ID protection on the domains I can, and have requests outstanding to complete on others.

In the last two weeks, two separate things have occurred which in isolation would be annoying enough, but when taken together become something quite unpleasant.

First off, I ordered a couple of T-shirts from a website in the US. Nothing unusual or unreasonable about that. The T-shirts cost the equivalent of about £22 once converted from dollars to sterling. They were dispatched, and the next thing I knew I had a card through the door from the Royal Mail stating that they were unable to deliver the item because there was a charge to pay.

Fair enough: import duty, customs duty, call it what you will, I’m not trying to avoid paying what I should, which was something like £1.50 on the package. Again, not unreasonable.

What wound me up was the £8 “handling fee” for the Royal Mail to collect and pass on that £1.50 to HMRC. How can it possibly cost anything like £8 for them to handle that? OK, so there’s the cost of a card being posted through my door, which at over-inflated retail rates would arguably cost a stamp, plus the cost of the card itself; five minutes of someone’s time to calculate the charge, write it on the card, and put it in the post; maybe some costs for back-and-forth transfers between Royal Mail and HMRC. A couple of pounds would not be unreasonable in my mind, but £8…?

The second event happened this morning. We’ve noticed this before and commented on it in conversation, but today the postman made a special trip to our door to put a leaflet through the letterbox advertising something or other. This was just a generic leaflet: no address on it, not personalised to us in any way. It’s the sort of thing I might have delivered once in a blue moon as a paperboy all those years ago. There was no other mail this morning, just the damn leaflet. As usual it went straight into the paper recycling. Thank goodness for small mercies – it was at least one leaflet today and not the 10-20 identical copies of the same leaflet that we usually get!

I bet the company paying Royal Mail for the leaflet drop aren’t paying the price of a stamp for each leaflet to be delivered!

I am forced to conclude that the Royal MFail is using ridiculous charges like the “handling fee” to subsidise unwanted spam leaflets like that. This is unacceptable behaviour in my mind. While I don’t object to companies diversifying their services to increase profitability, each service should stand on its own two feet, and not bury its true costs in other, unrelated charges.

PS – Where’s my option to “opt out” of unsolicited Spam from you please Royal Mail?

A few months ago, I was set a challenge by my boss. Fundamentally I needed to come up with a way of measuring the productivity of a Network Architect beyond the usually accepted chargeable utilisation.

Our business had historically been driven by cost recovery carried out through timesheet booking, and project accounting based thereon. What this didn’t do was show how efficient and effective the Architect was being. Was he or she really delivering that 10-day piece of work in 10 days? Or was it taking 20 or 30 instead? On one level we didn’t especially care: if it took 30 days, we’d be paid for 30 days, and the projects paying for their time would end up carrying the cost overrun. How accurate were our resource estimates anyway? But on another level, we had a backlog of work for our Network Architects, so if they were taking longer than expected, we needed to know!

It would be nice to think that Project Accounting would pick up any overrun and flag it, but the brutal truth was that it wasn’t happening, at least not in time to be useful, and we’d be criticised after the fact for allowing the overrun to take place.

Whatever metric we identified needed to be easily measurable and quantifiable, so as to form part of a weekly scorecard to submit to the Senior Management Team (SMT), and ideally not require a significant effort on the part of the TDA to enable.

After much thought, and consultation with colleagues and associates, I came across a concept called “First Pass Yield”, or “Throughput Yield” in Six Sigma lexicon.

In essence, this is the product of the effectiveness of multiple process elements: multiply together the first-pass success rate of each step.

In our case I picked two easily measurable items to begin with.

1) Design Approvals

Our design governance process requires that a completed design be reviewed at a Design Review Board and approved before being issued to stakeholders. This applies to an HLD, an LLD, or a “Design Brief”, a simpler 3-4 page document intended for in-life changes to existing environments. Any future changes to the design beyond this point would require re-approval. In all cases, controlled approval numbers were issued to documents that were approved, and these were specific to the version of the document reviewed.

Clearly, if we were working optimally, the Architect would be getting their designs right first time, with sufficient detail and quality to be approved on the first attempt.

If documents required repeated attempts at getting approval, we were not working efficiently. This was measurable from our design approval register, where we could compare the number of designs approved versus those not approved in any given period.

2) Repeat Approvals

In a similar vein, designs that required a repeat approval for a v1.1, v2.0, etc. meant costly re-work of a solution design. This may not be our fault as such, since project or customer goalposts regularly move, but equally it could be an indication that we weren’t asking the right questions first time around. So any v1.0 approval was counted as a success, and anything greater was counted as a failure, giving us our second factor.

This second measure feels like a criticism, and it isn’t intended that way. In actual fact, what it has served to highlight is that we weren’t very effective at managing project change. Customers would often ask us to accommodate changes mid-way through a project, and rather than managing that as a project change, and using it as a lever to flag the additional effort and/or cost required, we would try to accommodate it within the in-flight design work, which in turn contributed to the very cost/effort overruns we were trying to capture.
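As a rough sketch of how the two measures above combine: throughput yield is the product of each step’s first-pass rate, not the sum. The weekly counts below are entirely hypothetical, just to illustrate the arithmetic.

```python
# Throughput (first pass) yield: the PRODUCT of each step's pass rate.
# The counts below are hypothetical weekly figures from an approval register.
def first_pass_yield(step_results):
    """step_results: list of (passed, attempted) tuples, one per process step."""
    tpy = 1.0
    for passed, attempted in step_results:
        tpy *= passed / attempted
    return tpy

# Step 1: designs approved first time vs. total submitted this week.
# Step 2: v1.0 approvals vs. all approvals (v1.1, v2.0 etc. count as failures).
weekly = [(8, 10), (9, 10)]
print(round(first_pass_yield(weekly), 2))  # 0.8 * 0.9 = 0.72
```

Note how two steps that each look healthy in isolation (80% and 90%) still yield an overall first-pass rate of only 72%, which is exactly why the combined figure made a useful scorecard number.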

And the obvious!

Of course, chargeable utilisation is still a relevant factor. In our business, we would be 100% cost-covered and make our planned margin if Architect utilisation hit 85%. The additional 15% was intended for team meetings, admin, appraisals, one-to-one meetings, and such like, plus some training time if we could ever fit it in! If we exceeded the 85% threshold (and we frequently did!), we’d make more profit than expected, but at the cost of those other items.

As a result, we introduced some additional weekly reporting for the TDAs. I created a simple spreadsheet which a TDA should be able to update in a few minutes each week. It identified scope creep and resource overrun, both of which were eventually integrated into the weekly performance dashboard, and it had the added benefit of helping to flag RAG issues around resourcing before they actually became issues.
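A minimal sketch of the kind of resource-overrun RAG flag the spreadsheet produced. The thresholds here are invented for illustration, not the ones we actually used:

```python
# Hypothetical RAG flag for resource overrun: compare days booked so far
# against the days estimated for the piece of work. Thresholds are illustrative.
def rag_status(days_booked, days_estimated):
    ratio = days_booked / days_estimated
    if ratio <= 1.0:
        return "GREEN"
    if ratio <= 1.2:
        return "AMBER"  # up to 20% over: flag it before it becomes an issue
    return "RED"

print(rag_status(9, 10), rag_status(11, 10), rag_status(15, 10))
# GREEN AMBER RED
```

The point of the AMBER band is the early warning the Project Accounting never gave us: the overrun gets flagged while there is still time to do something about it.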

I’d be interested to hear whether anyone else has metrics they use for IT/Network Architects or TDAs, and how the data is captured to generate the information.

Just over a year ago they doubled the price of the basic “Lotto” offering, and by increasing the number of balls in the draw reduced the odds of winning the jackpot from roughly 14 million to one to 45 million to one. At the same time, parallel changes were made to increase the chances of winning a prize. Many of my friends and family agreed that we’d be doing FEWER lines on the National Lottery as a result, or at the very least spending the same sum on lottery tickets but doing 50% of the lines. The evidence of this failure is clear to see. Gone are the £6-8m jackpots at a weekend with no rollover. Instead we’re lucky if the jackpot reaches £4m without a rollover, and the number of sequential rollovers is clearly increasing.
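The jackpot odds quoted above follow directly from the change in ball count: six main balls chosen from 49 versus six chosen from 59.

```python
import math

# Jackpot odds are 1 in C(n, 6): the number of ways to choose six main balls.
old_odds = math.comb(49, 6)  # 49-ball draw
new_odds = math.comb(59, 6)  # 59-ball draw
print(old_odds)  # 13983816  (~14 million to one)
print(new_odds)  # 45057474  (~45 million to one)
```

So the jump from 49 to 59 balls more than trebled the odds against the jackpot, which squares with the rollover pattern described above.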

The latest Camelot ploy has annoyed me further still, and to me points at a company that is losing public support. The paying public are voting with their feet, and not backing the National Lottery as they once did. Not only is Camelot losing out, but the Lottery funds channelled to good causes and charities are similarly shrinking.

At this time of year, there are normally multiple different seasonal scratch cards on sale. These are usually a mix of £1, £2, £5, and £10 scratch cards with a Christmas theme. The £1 scratch cards are often in the form of tags, so they can be used to label a present “from” and “to” an individual.

However, it would seem that there are no £1 Christmas scratch cards on sale anywhere this year! The list of scratch cards in circulation most definitely doesn’t include a Christmas-specific one for only £1.

Many people (myself included) use these as part of our Christmas gift-giving, as stocking fillers or to boost presents, and it seems likely that Camelot expect everyone to just buy £2 cards instead.

I wonder if Camelot will retain its licence at the next review? Based on current performance, I sincerely hope not!

So I’ve been having some problems over the last couple of days. And by the looks of the Google search results, I’m not alone.

Our Chinese Spammer friends have found a new and really bloody annoying way of getting in our faces.

For anyone with an iCloud account (which includes just about anyone with an iPhone, iMac, iPad, etc.), they are sending bogus event invitations to your iCloud e-mail address. These are mass-sent events, and all you can do is respond with “accept”, “maybe”, or “decline”.

These events pop up in your calendar and, depending on your reminder settings, will disruptively pop up on your phone or iPad. Unfortunately, ALL of these responses send a trigger back to the sender, so they know the e-mail address is active, which anyone with a grain of common sense will know is the last thing you want, because they will then send a load more junk to an address they know is live.

The only temporary bodge “fix” seems to be to create a new calendar called “Spam”, move the spam events into it, and then delete the calendar. I’ve not managed to get that to work yet on any of my iDevices.

This is a poor show by Apple; there seems to be no way to restrict the receipt of event invitations to people in your contacts list, or to report an e-mail address for sending spam.

So thanks for this Apple. Sort it out, please!!

Here’s hoping for an “update” soon that will give some better options!

So, on 20th May this year, I received my latest shiny new car, a Mercedes C220 Sport Estate in Palladium Silver.

Unfortunately, just under two weeks and just over 500 miles later I had the misfortune to be #3 in a 4 car sandwich while driving home from visiting a site in Surrey.

I made light of it at the time, with jests like “I wanted to see what it looked like with a Vauxhall Corsa hood ornament” and “seeing if I could fit a Mercedes CLK in the nice big boot”. The car wasn’t written off, although the repairs cost in the region of £12k for parts alone, as advised by the repairing garage. Fortunately I got it back as good as new after four weeks of hell driving a Peugeot 208 courtesy car. Now, just over six months later, I can say without reservation that I’m really glad I chose the C-class over the other options available to me.

After years of driving Ford and Vauxhall vehicles, I had never really appreciated the difference a premium car brand like Mercedes might make. Gone are the little rattles and creaks from the dashboard and console, replaced with a silent, refined atmosphere which disguises the speed and power of the engine. I get better fuel economy out of this 2015 2.1-litre BlueTEC diesel engine than I did from the 2.0 diesel in the 2011 Insignia that preceded it!

There are also many small refinements which drive home just how carefully this car has been thought through. For example, after about five weeks of driving the car I realised that the little black strip visible on the ceiling behind the driver’s central mirror also contains a set of LEDs showing proximity to other objects while reversing, in addition to the usual beeps and the reversing camera that came as standard with this model. My failure to notice this earlier came from a combination of not needing it, thanks to the camera and the beeping, and the angle at which I had the mirror set. I initially thought the concealed illuminations “down” under the doors and mirrors were a silly affectation, but having been getting into the car in dark car parks since the clocks changed, my mind has been changed here too.

I wonder if I’ll still love the car as much in another 3 years time? 🙂

Having decided to have a play with Cisco’s VIRL solution, I’d intended to set it up on an old laptop I had lying around, but given the requirement for a minimum of four CPU cores, VT-x support (for hardware-assisted virtualisation), and no fewer than FIVE network interfaces, I decided I’d be better off using VMware Workstation instead. VIRL doesn’t run under VirtualBox, so VMware it is. One licence purchase later, and I was merrily installing it on my Core i7 3820 machine, with plenty of RAM and disk space, under 64-bit Windows 8.1. No problem! Or so I thought!

I vaguely remember having issues with VirtualBox hosting 64-bit guest OSes before, but since my need at the time wasn’t too specific, I didn’t really spend the time trying to resolve it. I was surprised, though, to find that once installed, VMware Workstation 11 was nagging me that a 64-bit guest OS was not supported on my machine. I’d checked the pre-requisites quite carefully.

Of course I Googled, quite extensively, and didn’t find much that described my exact issue. And yes, I checked in my BIOS that the hardware virtualisation settings were enabled (they were!); I was definitely running a 64-bit OS, so why would it not work? My CPU, an i7 3820, definitely supported the virtualisation extensions, but the Intel Processor Identification Utility stubbornly disagreed! My rig is a 2-3 year old custom build from Mesh Computers, based on an MSI X79A-GD45 main board. There should have been no issue with virtualisation, and there was certainly no issue with running 64-bit Windows on it!

After much research, I stumbled on an article from VMware dated December 2008 suggesting that VT-x is often unavailable to normal software if “trusted execution” is enabled. That sent me off into the ClickBIOS once more, as I recalled seeing a setting for an “Execute Disabled Bit” which was ON, so I decided to try turning it off and see what happened. I’ve attached the in-Windows screenshot of the BIOS simply because it’s easiest to capture, but the setting is found in the same place at boot time.

It was tucked away under Overclocking Settings and then CPU Features, just before the Intel Virtualisation Tech and VT-D Tech options. Sure enough, disabling it left VMware quite happy with a 64-bit guest OS, and the Intel CPUID tool now agreed that I was capable of virtualisation.
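For anyone wanting to check the same thing from software rather than by rebooting into the BIOS, here’s a rough sketch for a Linux host (on Windows, the Intel Processor Identification Utility mentioned above does the equivalent job):

```python
# Rough check (Linux host only) for hardware virtualisation support, by
# scanning the CPU flags the kernel exposes. "vmx" = Intel VT-x, "svm" = AMD-V.
# Note: the flag shows what the CPU offers; firmware settings (as in my case)
# can still block it from actually being usable.
def has_hw_virt(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return "vmx" in line or "svm" in line
    return False
```

A caveat worth repeating: even when this reports True, a BIOS setting can still leave VT-x unavailable to software, which is exactly the situation I was in.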

PROBLEM #1 Solved!

Next step will be downloading and installing VIRL itself within a Virtual Machine. That’s a story for another day!

I’ve been doing more musing than usual recently on where I think technology evolution in the network arena is heading over the next few years, and the concept of a virtualised CE router keeps popping into my head. This entire post is a bit of blue-sky thinking, but it’s not that far away from where we are today.

I think of the idea as a logical next step in the Hybridisation of Virtualisation and Network Function Virtualisation with that of Software Defined Networking.

Virtualisation has already taken over the data centre, with VMware and others able to provide logically discrete virtual switching, routing, and firewall instances within the cloud infrastructure, so why not take the next step and consider virtualisation for some of the additional services we might want to use? Indeed, the IETF has a draft considering exactly this for MPLS VPNs.

Current WAN networks follow a fairly traditional delivery model: the edge of the carrier network terminates onto a local piece of Customer Premises Equipment (CPE), which in turn is connected to a “Customer Edge” (CE) device usually provided by the Network Operator. Domestic DSL services follow a similar model.

My vision of a Virtual CE device fits both the conventional WAN solution, and in particular MPLS type deliveries, and a consumer grade DSL service.

Ethernet is increasingly becoming the bearer of choice for MPLS and Enterprise WAN services, using either copper or fibre and terminating on an RJ-45 Ethernet port on the CPE. Since this is literally an Ethernet service delivery, why not shift the “intelligence” back to the other end of the circuit? This would enable the service provider to virtualise the physical device and deliver a logical instance from a shared hardware platform. It reduces the equipment that could “go wrong” on a customer site, reducing (but not totally eliminating) the potential need for engineer visits and break/fix maintenance, and ultimately saves costs. The carrier can also standardise the services the customer takes, and capitalise on investment in centralised CE equipment. It would still be possible to use tagged Ethernet to deliver traffic to different networks/VLANs for more sophisticated requirements, and it doesn’t really change the scope for screw-ups in which traffic is delivered into the wrong logical networks due to mis-patching (although I do know of a solution that might help there too! 🙂 )
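As a toy illustration of the tagged-Ethernet handoff just described, the shared platform only needs a mapping from 802.1Q VLAN tags to per-customer virtual CE instances. All the names and VLAN IDs here are invented:

```python
# Toy sketch of the shared handoff: each 802.1Q VLAN tag arriving on the
# physical Ethernet port maps to one customer's virtual CE instance.
# All VLAN IDs and instance names below are invented for illustration.
VLAN_TO_VCE = {
    101: "customer-a-vce",
    102: "customer-b-vce",
}

def deliver(vlan_id):
    # Unknown tags are dropped rather than risk delivery into the wrong
    # logical network (the mis-patching worry mentioned above).
    return VLAN_TO_VCE.get(vlan_id, "drop")

print(deliver(101))  # customer-a-vce
print(deliver(999))  # drop
```

The deliberate default of “drop” for unknown tags is the interesting design choice: a mis-patch then fails safe rather than leaking traffic between customers.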

Extending this line of thought into the consumer market, I think it has massive potential there too. It may still be necessary to have an intelligent black box of a sort as CPE, providing a Layer 2 connection back to the intelligence in the virtualised CE environment (using something like L2TP over DSL to the virtual CE router?). Of course, local WiFi breakout will still be required (Cisco already have the Meraki cloud-managed access point range), but similar benefits around centralisation, management, and economy of scale could apply. Consumers could still manage their own CE device via a browser, but the carrier could have a far greater degree of influence/control over the make/model of CE device the customer uses, enabling standardisation as well as opening the door to many more value-added services the carrier could provide. Some possibilities include:

Network Attached Storage: How many high-tech families (read: geeks) have sophisticated home networks with Network Attached Storage capabilities, used to back up photos/music/documents or other locally stored data? This type of virtualisation could allow the carrier to provide (sell!) exchange or data-centre based NAS/SAN capacity.

Media Centre: What about those people running media server(s) on a NAS or dedicated server hardware? iTunes or AirPlay servers to stream music to a SONOS or similar? Centralised access to subscription TV services such as Netflix or Amazon Prime Video, or even inbound access to your Sky Plus or Virgin TiVo? Local storage (maybe on NAS?) of your own movies using Plex or XBMC?

Remote Access/VPN: I can only predict this area will grow and grow. I currently have the capability to establish a private VPN connection to my home network in order to access data stored on my NAS etc. As the trend towards the “Internet of Things” accelerates, I predict this will only increase over time as we access additional home-based solutions, including lighting, home security, central heating, electricity/gas meters, even cookers and freezers.

Firewall & Security: We all hear about the latest and greatest zero-day exploit; wouldn’t it be great if we could sit back, secure in the knowledge that our service provider was protecting us against these threats centrally? Integrating this measure of control behind an easy-to-use UI would facilitate:

Shared Access: Already we find the younger generations gaming together within the same house on their respective games consoles with LAN-enabled gaming, and of course MMORPGs are extremely popular too! Why not have the neighbourhood kids playing Minecraft together on a private server that only they can reach? This is about the ability to selectively extend parts of the network between entities (on a selective and controlled basis, of course). Want to access that particular music track from home while you’re visiting a friend? No problem!

Content Filtering: How about being able to deliver different levels of filtering, maybe to different WiFi SSIDs or LAN ports on the local black box? How about separate SSIDs for “Adults”, “Teenagers”, and “Children”, each with differing levels of content filtering, maybe even logging, applied?
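The per-SSID filtering idea boils down to a policy table held in the virtual CE rather than in the box on the shelf at home. A minimal sketch, with every SSID name and filter level invented for illustration:

```python
# Hypothetical per-SSID policy table, held centrally in the virtual CE.
# SSID names, filter levels, and logging choices are all invented.
SSID_POLICY = {
    "Home-Adults":    {"filter": "none",     "logging": False},
    "Home-Teenagers": {"filter": "moderate", "logging": True},
    "Home-Children":  {"filter": "strict",   "logging": True},
}

def policy_for(ssid):
    # Unknown SSIDs get the strictest treatment by default: fail safe.
    return SSID_POLICY.get(ssid, {"filter": "strict", "logging": True})
```

Because the table lives centrally, the carrier (or the parent, via the browser UI mentioned above) could change a household’s filtering without touching the hardware in the home at all.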

And of course that’s before we start entertaining ideas of Desktop-as-a-Service, or the shift of compute workloads to the cloud. I’m pretty sure it’s only a matter of time before we shift the work behind our games consoles away from black boxes in the home, and just use a virtual-screen display solution for it all! (nVidia SHIELD?)

I know that much of this can be done today, but it requires a particularly persistent technical person to make it all work, and even then it’s not as seamless as we’d all like! I think the idea of virtualising the CE takes us a step towards my vision, and is a potentially lucrative area for the carriers to investigate.