
I remember the first time I worked in IT at a bank. The language in the workplace was all about control and process, and more process. My manager was a great people person, which ameliorated this mechanistic tendency.

Of course, having worked there for a while, I realised that talk and action were somewhat different. Undocumented changes occurred and leadership turned a blind eye. Prima donna technologists roamed like cowboys across the systems. GNU tools showed up in the oddest of directory locations.

Command and control was the edict, but it was like herding cats. IT was managed as one big machine (back in the 70s, when I was playing schoolground tiggy, it probably was one machine) that could be managed down to the smallest element. Very particular and focused.

The claw of the beast (John Christian Fjellestad via Flickr)

Of course IT supported only simpler applications back then like ledger accounting (urgh). Now IT underpins every part of a business. Business and IT have become one big melange.

With firewall boundaries being torn down to support XaaS, mobile and 3rd party integration the world and organisations are becoming one mega-melange.
Organisations that see IT as something they can command and control are setting themselves up for disappointment, or consigning themselves to the past.

Risk management entails making decisions moving forward while looking backward. Assessing the likelihood of something bad happening, say an AWS zone failing or a VC-backed vendor going bankrupt, can be problematic. Assessing the MTTF of a hard drive is a little more scientific. We can apply standards, certifications and the like to technology and providers, but hey… Sarbanes-Oxley didn't stop the GFC.

Another method could be to manage the risk through insurance, and possibly even state governments. (Socialist, eh? I saw that "comrade" earlier.)

Technology will only improve standards (security, service levels, manageability etc.) with transparency and audited action that drives improvements. Open source development is a good example of heading in the right direction. We also need to report this stuff in annual reports, marketing briefs and government audits.

In the same way that airlines and airplane-makers disclose details of accidents and implement improvements to avoid recurrences, thereby improving air travel safety every year, IT must organise itself so that it is biased to improving without the need for “top-down” intervention. Some things will slip through the cracks but the “rules” will adjust to stop the same event occurring again.

Technology users and providers that can’t adapt will die off. Those that can will thrive.

What are some open and transparent practices IT should have in place to bias it to improving over time?

I caught up with an ex-colleague for lunch recently. We’d both been working on Integration projects and were wondering why good integration capability is difficult and rare.

Integration often manifests itself in a shared asset like a bus or a broker. One issue is that IT is run by projects (with their own selfish interests) so often integration that is scalable, re-usable, loosely coupled etc. gets jettisoned just to deliver the project. This results in point-to-point integrations, managed file transfers and shared DB connections; whatever is easiest for the project to understand and implement. There’s nothing necessarily wrong with using any of these integration options in isolation, but over time the environment becomes unmanageable and ‘orrible.

If a mature integration capability has already been established you stand half a chance, but even then integration teams can be seen as slow, fussy and expensive pedants, to be worked around. (Not by me of course. I love you integration guys)

As a business grows and becomes more complex, understanding how applications communicate becomes very difficult. At some stage an organisation invests in a discrete integration capability (or DIC? Sorry for that moment of immature hilarity). The rise of mobile, cloud and outsourcing (the last being the least sexy of the three) has made integration even harder. It's a cliché that information needs to be accessible anywhere, any time, on any device, and not just between systems housed in a single data centre. How do you meet these demands?

Integration – already complex – has become more complex. Security and integration teams must be proactive, forward-thinking and nimble to respond. The diagram shows common integration methods in today's organisations.

Sharing a DB between applications (the red arrows) is great for speed of exchange, but your applications need to always agree on the data format (which never happens with commercial software) and change/release management needs to be in lock step. I’ve never heard of an internal and external application sharing a direct database connection before.

Point-to-point integrations (purple) are fine in isolated sub systems where the likelihood of adding a third application to the mix is low. You could integrate an internal and external application this way, possibly with some bespoke format translation, but it’ll end up being some crude hole in the firewall.

File transfers (light blue) are quick to implement once you’ve agreed on the file format. Typically file transfers are done in batch and therefore not real time. You can make your batch transfers more regular but after a while the files are moving so often and are so small, you might as well look at messaging or web services. Externally file transfers can occur using a file transfer gateway device.

Messaging and queuing systems (light green) are great for moving data between systems where exchange is not time critical, delivery is guaranteed, and where systems have different data formats and standards. If using this method externally there are different messaging technologies and governance standards to manage. For example an organisation might use MQSeries internally, but their mobile app partner only has experience with Amazon SQS. You could start with a handcrafted adaptor if likelihood of re-use is low but it’s not going to scale.
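The handcrafted adaptor mentioned above can be sketched in a few lines. This is a toy illustration, not MQSeries or Amazon SQS code: the queues are stand-ins from Python's standard library, and the field names in `translate` are invented for the example.

```python
import json
import queue

def translate(internal_msg: dict) -> str:
    """Map internal field names onto the partner's schema (names invented)."""
    return json.dumps({
        "orderId": internal_msg["order_id"],
        "amountCents": round(internal_msg["amount"] * 100),
    })

def pump(source: queue.Queue, sink: queue.Queue) -> int:
    """Drain the internal queue, forwarding each translated message.
    Returns how many messages were forwarded."""
    forwarded = 0
    while True:
        try:
            msg = source.get_nowait()
        except queue.Empty:
            break
        sink.put(translate(msg))
        forwarded += 1
    return forwarded

internal, partner = queue.Queue(), queue.Queue()
internal.put({"order_id": "A-100", "amount": 19.99})
print(pump(internal, partner))  # → 1
```

This is also exactly where the scaling problem bites: every new partner format means another `translate`, which is why a broker or canonical message format ends up paying for itself.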

Real-time integration between your inner and outer worlds is the future. Managing things like data confidentiality, service level guarantees, access management and transaction traceability across multiple environments and organisations will become increasingly difficult. Contract management will have to play a part.

The tools required will be provided – at significant cost – by the big established integration vendors. They'll provide you with the tools, but you'll have to build and run it. Same as it ever was.


Back when I worked for a large bank my manager – a shrewd thinker – was asked what he would do to the IT infrastructure if he had infinite time and money. His answer was that he’d tear it all down and start again. When 9/11 destroyed many buildings in lower Manhattan, some organisations had to do just this.

It’s an interesting thought-exercise because you arrive at a different target state when you think this way, than when you start with an existing set-up and incrementally change your environment.

From what I’ve seen of cloud transformations across different organisations, there are five key areas you need to consider – difficult, oft-neglected areas. How you approach these five depends very much on whether you are starting from scratch or not.

I’ll deep-dive into them in future posts but for now a brief summary (with no particular priority):

1. Identity

Now that platforms, systems, devices etc. are outside your network, how do you identify and provision users? How do you make sure it’s only Jim who is accessing his iPad and using an approved SaaS provider that uses data from a core internal system? How do you deprovision him and his access when he leaves one Friday to go work at a competitor? The management of identity in the new era requires new platforms and skills. When this area is ignored you start to lose control pretty quickly.
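The Friday-afternoon deprovisioning problem boils down to keeping one authoritative record of who can reach what, so everything can be revoked in a single pass. A toy sketch – the registry and service names are hypothetical, not any real identity provider's API:

```python
class AccessRegistry:
    """Toy registry tracking which external services each user can reach.
    Provider names and fields are illustrative, not a real IdP API."""
    def __init__(self):
        self._grants = {}  # user -> set of service names

    def provision(self, user: str, service: str) -> None:
        self._grants.setdefault(user, set()).add(service)

    def deprovision(self, user: str) -> set:
        """Revoke everything in one pass -- the 'leaves on Friday' case."""
        return self._grants.pop(user, set())

registry = AccessRegistry()
registry.provision("jim", "saas-crm")
registry.provision("jim", "ipad-mdm")
print(sorted(registry.deprovision("jim")))  # → ['ipad-mdm', 'saas-crm']
print(registry.deprovision("jim"))          # → set()
```

The point of the sketch: without a single registry like this, deprovisioning means chasing each SaaS provider separately, and that's exactly where control is lost.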

2. Network

Networks used to be like medieval castles. There was a big wall with guards and a few entrances. Legacy networks were built on this paradigm. But today your device could physically be on a public network whilst logically on a company’s network. You could be logically managing your network on someone else’s infrastructure (think AWS VPCs). Some applications will be hosted externally and require access to internal systems.

The medieval city has lost its walls and people are roaming freely. Your data assets need to be locked in suitable safes in different towers, with access by appointment only.

FIVE – Photo Friday (Andrew Morrell via Flickr)

3. Service Management

Service Management is not sexy. Remember all the guys who won awards and got ‘5’s on their balanced, normalised, bell-curve yearly performance review/scorecard? Never happened.

Problem, Incident, Change management etc., that is ITIL stuff, is still important but now the configuration items you manage could be somewhere else. There are externally hosted partners responsible for parts of your service.

You will need to agree with them how to manage and measure the service levels of their components. And they will have their own service management platform and processes (hopefully). Where are the demarcation lines and how do these service management platforms share data? When a major incident occurs how do you know that everyone has the same information and is working in a coherent fashion?

4. Integration

I heard this best described at a vendor demonstration. Systems of record are being separated from systems of engagement. In the past you had a monolithic system that was both a system of record and engagement.

Today, a system of engagement could be a SaaS provider or a mobile App. How do you get data from one to the other? Also externally hosted systems may need access to core company data. Previously you may have had to integrate platforms across an internal network. Now you need to integrate platforms across many providers and geographies.

5. Vendor Management

Vendor management in the old world was somewhat different to the new. The new world is fluid with expectations of quick on-boarding and off-boarding. The market is bigger with many diverse providers rather than the usual chosen few. There are considerations about data sovereignty, off-boarding etc. that you have never had to consider before. There has to be more collaboration between technology teams and procurement teams to understand how these external solutions work and protect data.

That’s my big five. I haven’t included anything about the server platform, orchestration, storage etc., because I think there’s less impact if you stuff them up. You can always adapt. If you get my five wrong, the consequences are significant. And across all of these big five you need to consider security as well.


Until you’re told you have to use a locally-hosted provider! And then perhaps you’re told to find a locally-hosted and locally-owned provider?

I need you to share the names and URLs of locally available cloud IaaS providers in your country. I’m compiling a list for Canada, Mainland Europe, New Zealand, UK, India, China, South Africa and of course US (although they own all the big providers). Feel free to share in the comments.

Here’s what I found for the Australian market in no particular order (Feel free to add to this):

Finding examples of successful private clouds is difficult because many organisations claim to have a private cloud when in fact all they have done is install VMware. If a project manager has to organise a resource to install something it’s not a cloud. Cloud is not virtualisation.

If you were starting a private cloud environment build 3 years ago, it would look different to one you would begin building now. But I like to live in the here and now so let’s look at what we would do now.

For starters, reverse these unspoken principles:

IT departments do not need to communicate with their users.

IT departments are needed and irreplaceable.

Users don’t know their needs and IT departments do.

Then approach as shown below. Users care about the items closer to the top.

Then get the right people on board to build it out. The guy who built your virtualisation stack back in the noughties probably doesn’t have the social intelligence (or hygiene) to approach users and “communicate”. Users value the speed with which resources can be allocated to them, the simplicity of getting their work done and the lack of friction involved, so put someone in charge who understands public clouds, understands service design and delivery, and is willing to start with an open mind. Then create a list of services you wish to offer.

Eventually after much work and discussion you’ll get to the point where you understand the services you want and what is required of the technology to support your cloud. This is where you may get derailed. Whatever happens make sure your choices minimise “tinkering”, promote scalability through modularity, and can work with your current network.

Your network may suck. It may make providing access to external and internal users difficult. It may be impossible to converge data and storage networks. If so, you could wait for SDN to mature and your CIO to give you budget for a Network Refresh or…

You could build the cloud off-premises. Private cloud is used only by internal users but can be built anywhere. So build it in Rackspace or AWS, or a local provider if data sovereignty is an issue.

You could build the cloud on converged infrastructure which includes its own networking. Something like vBlock or FlexPod. Converged Infrastructure is modular, making it easy to add and manage computing power, storage or networking throughput. The seamless coordination between hardware manufacturer (compute, storage and networking) and virtualisation is key to private cloud computing. It resists “tinkering”.

You’ll need more tools over time to manage this, because of new standards and platforms, capacity-managing a pool of resources aggregated at data-centre level, and planning for the hybrid cloud (which may exist for real one day). But all in good time.

The cloud is essentially a vending machine. It’s an automation-oriented, self-service approach to IT. Anything else is folly.
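The vending-machine idea can be made concrete: a fixed catalogue, and a request that is either fulfilled immediately or refused, with no human in the loop. A minimal sketch with an invented two-item catalogue:

```python
CATALOGUE = {
    "small-vm": {"cpus": 1, "ram_gb": 2},
    "medium-vm": {"cpus": 2, "ram_gb": 8},
}

def vend(item: str) -> dict:
    """Self-service: a valid catalogue request is fulfilled immediately;
    anything off-menu is refused, not escalated to a project manager."""
    if item not in CATALOGUE:
        raise KeyError(f"{item!r} is not in the service catalogue")
    spec = dict(CATALOGUE[item])  # copy, so the catalogue stays pristine
    spec["status"] = "provisioned"
    return spec

print(vend("small-vm"))  # → {'cpus': 1, 'ram_gb': 2, 'status': 'provisioned'}
```

The design point is the refusal path: the moment an off-menu request gets negotiated by a person, you're back to running a help desk, not a vending machine.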

We caught up a few days later for a coffee near his office in Prahran. The Annex office had a cool set-up with ping-pong table, arcade machine, 3D printer, quadcopter and lots of prototypes. His team were working behind 27-inch iMacs running SolidWorks for CAD, and for everything else it was just the usual Dropbox, Pages, Numbers, Adobe Creative Suite and Final Cut Pro.

I intended to ask him about the different platforms they used to manage and run their business, but we spent quite a bit of time talking about sales and marketing. He sees Annex as a brand and product design business. Great design is important but “isn’t enough to stand out from the crowd”. You need good sales and marketing.

Rob learnt sales and marketing the hard way (or the best way he’d say). In the mid 90s he was out walking the streets selling OptusVision cable TV to Australians. I remember thinking that was a horrible job at the time but Rob reckons it gave him the best possible sales education. Maybe I missed out!?

The best sales and marketing tools Annex use are Facebook Ads, Email lists – “the best customers are return customers” – and AdRoll, a “retargeting” platform. Retargeting works by keeping track of people who visit your site and displaying your retargeting ads to them as they visit other sites online. That’s important because many people don’t buy the first time they visit your site. They just need a little reminding!
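The retargeting loop described above reduces to simple set logic. A toy model – no real AdRoll API involved:

```python
class Retargeter:
    """Toy model of the retargeting loop: remember visitors who left
    without buying, and remind them as they browse other sites."""
    def __init__(self):
        self.visited, self.purchased = set(), set()

    def visit(self, user: str) -> None:
        self.visited.add(user)

    def purchase(self, user: str) -> None:
        self.purchased.add(user)

    def should_show_ad(self, user: str) -> bool:
        # Only nudge people who have seen the shop but not yet bought.
        return user in self.visited and user not in self.purchased

r = Retargeter()
r.visit("alice")
r.visit("bob")
r.purchase("bob")
print(r.should_show_ad("alice"), r.should_show_ad("bob"))  # → True False
```

Real platforms do this with tracking cookies rather than usernames, but the decision rule is the same: visited minus purchased is your retargeting audience.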

Some of the other key systems/platforms they use are:

Crowdsourcing:

Kickstarter: He had to perform some magic in setting up a prerequisite US Bank account to get started, but this step got the ball rolling and “proved” there was a market for their product. (The US bank account issue does not exist anymore, at least for Australians.)

eCommerce customer interface:

Shopify is the business’s storefront. Rob mentioned how easy it was to set up and use – “Why would you build your own?”. They use Canadian developers to tailor the site (responsive design etc.). Why Canada? Because Canada has great Liquid developers. (Liquid is a templating language for Shopify.) Previously they’d sold Opena cases on a “cheaper solution” (not Shopify), and then one day Ashton Kutcher tweeted about them and knocked their system over. The ecommerce platform provider at the time gave them as much infrastructure grunt as he could and they still fell over. Shopify has never had this problem.

Payments:

PayPal and Stripe (which only just kicked off in Australia, making it easier to accept payments from anywhere in the world).

Fulfilment:

Shipwire: You previously needed a US account to get this started just like Kickstarter. It’s easier now. Shipwire integrates with Shopify, and a heap of other commerce systems. They store your products in their warehouses, provide shipping rates and inventory back to your storefront, arrange delivery etc. These guys are crucial if your business ships a physical product.

Customer Support:

Zendesk: Keeps track of customer issues and details. As the Annex business hit scale, there was no other way to keep track of this stuff.

CRM:

Capsule CRM: Annex primarily use this to manage relationships with their B2B customers.

All of these platforms are software-as-a-service. The ecosystem they play in has forced the major players to integrate well with each other. This truly is a business without on-premise IT (apart from their iMacs).

Rob takes the approach of “do and change” with his business so he’s not too concerned about change. If a SaaS provider they use started to alienate their base or went out of business I imagine he’d take it in his stride and move to someone else. Possibly a few late nights and stressful moments, but Annex would get through it.

One thing he kept reiterating was how easy it was to set up these businesses.

So check them out. It’s a great example of a contemporary global niche business, where the barriers to entry get lower and lower.

There are four areas where private cloud deployments haven’t matured yet that you must contemplate:

Software licensing – When migrating legacy software to private clouds it gets very difficult to track software licensing. Where previously you had a pretty static license pool, in a private cloud your usage may expand and shrink. How do you manage that? For example, how do you deploy Oracle RDBMS in a private cloud if you don’t have an Enterprise agreement? A nightmare!

Lack of standards – There are many cloud platforms, some proprietary, some open-source. All these are different to their public cloud companions. It’d be good if a public service provider would let you install your own cloud platform on their tin. It’d make hybrid cloud a lot easier. One day I suppose. Until then you have to manage your public and private APIs differently.

Compliance leakage – Your private cloud is a single environment. Anything that has regulatory or compliance requirements (like PCI-DSS) can’t go on there without bringing the whole cloud into scope, because of…

The immaturity of cloud networks – I know SDN will save the world but for now the underlying networking concepts of different clouds are of a differing standard and sophistication. Many enterprises therefore deploy cloud networks in a basic, pre-provisioned fashion. (Read about the 6 challenges of private cloud networks in this F5 document.)
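Back on the software-licensing point: when usage expands and shrinks, what matters under a per-instance agreement is the peak concurrent count, not the average. A small sketch of tracking that peak from start/stop events – the event format here is invented for illustration:

```python
def peak_concurrent(events):
    """events: (timestamp, +1/-1) deltas as licensed instances start or stop.
    Returns the peak concurrent count -- what a per-instance licence
    agreement would hold you to, even if the burst lasted an hour."""
    current = peak = 0
    for _, delta in sorted(events):
        current += delta
        peak = max(peak, current)
    return peak

# Three DB instances spun up during a burst, two later retired:
events = [(1, +1), (2, +1), (3, +1), (4, -1), (5, -1)]
print(peak_concurrent(events))  # → 3
```

Without an enterprise agreement, that brief peak of three is the number you're licensed (and audited) against – which is exactly why elastic usage and static licence pools mix so badly.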

Mike DiStaula, little boy (Mike Burns via Flickr)

I suspect one day cloud platforms, standards, licenses etc. will converge enough so that you are running the same technology in private and public cloud stacks. Possibly AWS will provide an on-premise cloud stack, or a provider will start letting you install your own cloud platform as already mentioned. At least with this last one, the enterprise is finally free of the need to actually deal with hardware. What will we call infrastructure specialists then!?

The default answer is that IT’s vision is to provide hybrid or “multi-cloud” environments but not many people can actually articulate what that means, how it can be achieved, and why business users should care.


In the past two blogs we’ve looked at the inertia of Enterprise IT and the issues with use-case development for private cloud. Today it’s the business case.

Business cases are typically written for, and approved by, senior management who don’t fully understand technology – its possibilities and limitations. They use IT services more than most demographics and always want the latest tech – for themselves at least!

So what is the business case for private cloud?

Is it to increase revenue? Possibly, in some indirect not-so-obvious way.

Is it to decrease costs? Not at all, unless you were really inefficient to begin with. It does reduce the transaction cost of establishing a new platform, but that is but one smallish component of a project’s cost. Building a private cloud requires a big upfront investment with no immediate return, a real “field of dreams” situation. Cloud is about speed, automation, self-service and a little experimentation.

A business case must be able to measure and demonstrate success. You need a metric and it’s difficult to measure cost reduction or revenue improvements with a private cloud. You could use project delivery and time-to-market metrics. You could measure how many applications are running on your private cloud. You could measure developer satisfaction and/or usage. None of these metrics are easy to measure or necessarily translate to a successful outcome though. If project delivery times go down is it related to cloud or other improvements? If you measure how many applications are running on the cloud is it just shuffling your existing portfolio around?
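If you do go with a time-to-market metric, it at least reduces to something computable. A sketch, with made-up numbers, comparing median lead time from platform request to delivery before and after the cloud:

```python
from statistics import median

def lead_time_days(records):
    """records: (requested_day, delivered_day) pairs for platform requests.
    A simple time-to-market metric: median days from request to delivery."""
    return median(delivered - requested for requested, delivered in records)

# Illustrative numbers only -- not from any real programme:
before_cloud = [(0, 42), (0, 35), (0, 60)]  # weeks of procurement and queues
on_cloud     = [(0, 1), (0, 2), (0, 1)]     # self-service, near-immediate
print(lead_time_days(before_cloud), lead_time_days(on_cloud))  # → 42 1
```

The caveat from the text still applies: a number like this shows the lead time fell, not that the cloud caused it, so it needs to be read alongside what else changed in the delivery pipeline.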

The truth as I see it is this: IT has to transform to meet modern requirements, and cloud computing is one necessary component. Disruptions like big data, mobility, wearables, gamification, social networking, high-speed networking, the Internet of Things, and global logistics and resourcing can’t be delivered effectively using traditional IT. Practices like Agile and DevOps – among other things – complement cloud computing for these same ends. IT now plays in a borderless environment with abundant resources, nimble competitors and changing technology. The game has changed.

To demonstrate the value of a private cloud you need to demonstrate success in the realm of all this disruption and it has to be something that is difficult to do in the public cloud for security, legal, integration or performance reasons. Choose a significant project/platform in this realm, and bolt it to your private cloud project. A champion project for your cloud.

Your business case then includes a complete end-to-end platform and the success of the private cloud is tied to success of the champion platform, by how well the platform is “harnessing change for competitive advantage”. For example, how do the release cycles compare to your existing platforms? What are the scale limitations? How easy is it to add features and fix bugs compared to your legacy platforms? How well does it perform in the digital world? Do developers use the private cloud? Can users get at the data anywhere and any time? What is the next project that could benefit from this approach?

The business case for a private cloud stacks up only if you have a significant partner at the beginning, or at least very early on. This could be a significant project, application or business area. Sure, run some proof of concepts to familiarise yourself with cloud technology but then get this key user who can provide feedback on the platform, someone who can do regular releases on the platform, someone who depends on your cloud to run their business. This partnership could then be the seed for a new digital enterprise.

In the next post we’ll look at all those things private clouds can’t do!


My last post about how to fix your private cloud focused on IT’s organisational flaws, ill-directed focus and lack of customer-responsiveness. If I’m going to fling excrement at my industry peers, I’d at least better have a crack at identifying some good use-cases for the private cloud myself, and highlight where the focus could/should have been.

“Wait here until you are useful”, Matt Brown (via Flickr)

So this post is primarily a bullet point list! Here we go… Private clouds can enable:

Single OS instances for sand-pitting applications. If a developer needs a server environment, they spin one up in the private cloud, rather than hack their desktop to pieces. The benefit over using public cloud here is that the sandpit will have access to the internal systems a developer needs.

PaaS-like environments, and by this I mean pre-configured development environments with all the preferred tools and integration technologies already installed. This is a natural extension on the last point. This could also include database-as-a-service (DBaaS).

Internet-facing applications that need to be spun up, and then down, quickly. The classic example is a marketing campaign. These types of loads may have been handled by a marketing agency in the past. This could be done in public or private cloud, integration depending.

Web portals and e-commerce systems that necessarily combine many internal and external system components. Imagine an e-commerce system that integrates social-login and internal product information systems.

Web services where utilisation is unpredictable and the service consumer is as likely to be external as internal. Web services typically need access to a company’s data so will likely be “close” to internal data sources making private, rather than public, cloud more likely, especially for large or sensitive data sets.

Disaster Recovery/Business Continuity. The public cloud is promoted for this purpose of course, but moving the function in-house could be cost effective for a very large enterprise.

Platforms that are developed using Agile, Continuous Deployment and DevOps practices. In these instances your infrastructure is part of your deployment process and fully orchestrated. There is typically no “operations handover” in this environment and it evolves over time.

Systems where you are occasionally grunting through a very large amount of data. For example, 3d rendering, unstructured or big data analysis, and business modelling.

These are some use-cases I’ve researched and observed myself. If you can think of any more, add them in the comments.

One of the interesting things to consider is what is not on the list: legacy systems, Exchange servers, storage systems, Intranet portals… You could put these on your private cloud, but there’s no great benefit. When you factor in the orchestration effort it can actually take longer to get these working on the cloud. They probably don’t need autoscale and other cloud features. So run them on your ol’ VM farm!

IT would have done better to work through the likely use-cases for the cloud and focus on these, rather than looking at the private cloud as the latest platform for… everything. Even better, it could have done this before beginning, which leads me to the topic of the next blog in the series: the business case.


Private clouds are difficult to build. In this blog series I’ve surveyed all the common flaws in private cloud design and implementation so you don’t have to chase them yourself. Hopefully you can relate to some of the issues and contribute your thoughts in the comments. In the final blog in the series I’m going to attempt to point a direction forward, to fix the private cloud, and share my reading list.

Today’s blog is about the organisational flaws exposed by the private cloud trend.

*********

IT built the private cloud IT wanted to build and not the cloud anyone wanted to use.

Private clouds were commonly seen as an extension of virtualisation. This encouraged IT to have an inside-out cloud mindset. Now virtualisation and cloud typically go hand-in-hand, but they are not the same thing. Even though the IT infrastructure deployed in both instances is usually quite similar, from a user perspective they are completely different. Clouds are quick and self-serviced whereas virtualisation by itself is not. The truth is you can have a cloud without virtualisation. For example, if you build self-service etc. on top of an Oracle RAC Cluster you could have DBaaS. (Be careful to not get screwed too much on the licensing costs.)

IT built the cloud it wanted to build

The virtualisation trend also exposed some organisational issues, which may or may not have been dealt with. It’s part of the phenomenon of software eating infrastructure. Virtualisation allowed cloud engineers to make network changes to “soft” switches and firewalls. It allowed them to deploy storage. Storage I/O problems occurred because of poor workload balancing and live migrations. There was an opportunity to improve IT efficiency by delegating storage, compute and network activities to a semi-orchestrated cloud team. In some organisations this change worked; in others it was resisted.

The arrival of private cloud progressed these organisational issues further with many organisations merging specialists into a cloud team and hoping they would abandon their specialist mindset.

I’ve drawn up a very generic cloud stack below:

You should have put most resources into the top block but these skill-sets (establishing services, APIs and managing a relationship directly with cloud users) are not something specialist IT staff are traditionally good at, or have ever had to do. IT shops started reinventing themselves to make this shift.

The bottom two blocks play to IT’s traditional strengths. Depending on how (in)efficient you were you may have spent most of your time and energy stuck in the bottom block, building networks, hypervisors and customised operating systems. If you were still heavily silo-ed there was no way you could match someone like Amazon’s scale and efficiency. Even if you were uber-efficient you probably can’t match AWS. The thing is, if you’ve spent most of your energies in the bottom blocks, and not in the top block, you probably don’t have many users. You spent a lot of time building the “undifferentiated heavy lifting”, as Adrian Cockcroft said back in the day, because it’s what IT knew how to do.

In the next blog post I’ll discuss the business case for private cloud and some use-cases.