How did your web site stand up on Black Friday and Cyber Monday (Nov 28th and Dec 1st 2014)? These were expected to be the most frenetic online shopping days of the year. Whether you are an online retailer or processing the payments generated, if you were able to maintain a good customer experience and complete transactions on these busiest of days, hopefully the rest of the year was a cake walk!

Meeting the challenge requires a mature approach to managing your online presence as recent Quocirca research shows. The new report (see link at the end of this post) shows consumer-facing organisations to be more advanced in this regard than organisations that deal only with other businesses. They have to be; on average, consumer-facing organisations deal with three times as many registered users online as their non-consumer-facing counterparts. They also know that consumers are more impatient and capricious.

The report identifies seven things that consumer-facing organisations are more likely to be doing to rise to the online maturity challenge. Any organisation that underperformed on Black Friday, Cyber Monday or at any other time should follow their lead.

1: Monitor performance

Most organisations have some sort of capability to monitor the performance of their web sites and online applications. However, consumer-facing organisations are much more likely to be focussed on metrics to do with the user experience whilst their non-consumer-facing counterparts fret about bandwidth and system information. Consumer-facing organisations are able to do this because the platform basics are often outsourced.

2: Outsource infrastructure

Consumer-facing organisations free themselves to focus on delivering the applications and websites that are core to their business and avoid getting bogged down with infrastructure issues that are not. This includes the infrastructure on which their online resources are deployed as well as supporting services such as DNS management, content distribution and security. Indeed, a key finding of the new survey is that better security is now seen as one of the top benefits of cloud-based services.

3: Outsource security

Nearly all aspects of security were more likely to be outsourced by consumer-facing organisations. This includes emergency DDoS protection, malware detection and blocking, advanced threat detection, security information and event management (SIEM) and fraud detection. The motivators for this are that applications and users are in the cloud, so the security needs to be too; and, as with the base infrastructure, leaving security to experts further frees staff to focus directly on the user experience.

4: Deploy advanced security

It is not just that consumer-facing organisations are using cloud-based security, the protection they have in place is also more advanced. Non-consumer-facing organisations are more likely to rely on older technologies such as host-based malware protection and intrusion detection systems (IDS). Consumer-facing organisations have these capabilities too, but are much more likely to supplement them with state-of-the-art advanced security systems, be they outsourced or deployed in-house.

5: Take a granular approach

No two consumers are exactly the same; they will be using different devices, different browsers and have varying access speeds based on their network connection and geographic location. Consumer-facing organisations are more likely to monitor such things and adjust the way they respond to individual users accordingly.

6: Link the user experience metrics with business success

Having all sorts of capabilities to monitor the user experience is all well and good, but it is even more useful if it can be shown how variable delivery affects the business. Consumer-facing organisations are more likely to have a strong capability to do this, linking metrics to revenue and customer loyalty.

7: Find the budget to do all this

Of course putting all these capabilities in place has a cost. However, that is no barrier for the most forward-thinking consumer-facing organisations; they are almost twice as likely to be increasing the budget for supporting online resources as their non-consumer-facing counterparts. Just throwing money at a problem is never an answer in its own right, but if the spending is well-focussed it can make a real difference, as those that coped best over the last few days will surely know.

Organisations that only deal with other businesses may ask: "What has all this got to do with us?" Well, as more and more digital natives enter the workplace they will bring their consumer expectations and habits with them. All businesses need a razor-sharp focus on the online experience. For those that fail to do so, it will not just be on Black Friday and Cyber Monday that they lose business; it will be every day of the year.

A visit to the UK base of Rackspace’s ‘Fanatical Support’ group in West London is a truly cosmopolitan experience. Above every Racker’s (as it calls its staff members) desk is a national flag representing their country of origin—think UN General Assembly meets the IT Crowd. Some of these Rackers are indeed supporting a growing number of customers based in continental Europe and further afield, especially as the take up of Rackspace’s self-service products increases. However, their mixed heritage says as much about London as it does about Rackspace itself; despite its size and growth, the cloud services company’s comfort zone is still mainly in the English-speaking world.

This is one of the reasons for Rackspace’s decision to focus a huge new round of infrastructure investment on the West Sussex town of Crawley, vastly increasing its UK footprint, rather than expanding into other European markets. The scale of the investment will be welcomed by many in Crawley, which lies just south of Gatwick Airport. The project will bring new jobs to the town, making productive new use of a 15-acre brownfield site that has lain derelict since being abandoned by a pharmaceuticals manufacturer in 2011 (with the loss of around 500 jobs).

Rackspace did consider locations to the chillier north, perhaps even in the Nordic region. However, the cost of dedicated fibre connections outweighed the potential power savings from spending less on cooling and, in any case, Rackspace says it likes to be where its customers are, especially with the growing volumes of data being handled. Having decided on South East England, Rackspace found that Crawley offered good transport and data connections and an ample power supply, the town not (yet) being a noted data centre hub.

The plan is to develop three data centres on the site, each with a capacity of 10 megawatts (for comparison, Rackspace’s Slough data centre, which opened in 2010, is 6 megawatts; the two sites will be connected by a dedicated metro fibre link). It is not just the scale of the investment that is of interest, but also the revolutionary design of the data centres. Each will be based 100% on specifications and designs from the Open Compute Project. Rackspace says that this underpins its philosophy that anything that can be open should be open; the company has been a major contributor to the project.

Open Compute includes everything from data centre design and cooling, to servers and operating systems, such as the Rackspace-backed OpenStack that Quocirca covered in another recent blog post. Whilst any organisation can source designs from, and contribute to, Open Compute, none to date has committed its whole ongoing data centre strategy to it in the way Rackspace is planning in Crawley. Open Compute’s other major backers include Facebook, Intel, Goldman Sachs and Arista Networks. Microsoft is joining and has donated some power management tools to the project (but not its operating system!).

For a company such as Rackspace, whose core value proposition is selling top quality cloud-based services out of state-of-the-art data centres, it may seem strange to give away so much of its intellectual property. Rackspace says, no problem, we differentiate on top of the stack. Part of that is the ‘Fanatical Support’ that increasingly includes architecting and coding for its customers. It also includes the flexibility it provides to mix and match private and public cloud resources within and beyond its data centres.

As well as the use of Open Compute there are other innovations aimed at flexibility and reduced energy consumption. The floors are solid concrete rather than raised; all wiring, including the power supply, will run in structured overhead space. This allows easy changes to the power mix across racks as more and more of them are given over to handling the growing demand for storage, compared with the smaller footprint required by increasingly efficient servers. The data centres will operate at the relatively warm temperature of 29.5 degrees centigrade using non-mechanical 'indirect outside air' cooling. This is an adiabatic system (i.e. with no overall gain or loss of heat), whereby the inside air is sealed from the outside and evaporation is used to cool down the outside air, which is then used to cool the inside via heat exchangers. It is equivalent to a standard air conditioning unit, but with no compressor, relying instead on low-speed, low-power fans. Rackspace and its data centre builder partner Digital Realty Trust (DRT) say this will reduce overall energy consumption by 80%.
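To give a feel for where a saving of that order might come from, here is a minimal back-of-the-envelope sketch in Python; every figure in it (IT load, chiller versus fan overhead) is an illustrative assumption, not a Rackspace number.

```python
# Rough, illustrative comparison of cooling energy for a 10 MW data centre hall.
# All figures below are assumptions made for the arithmetic, not Rackspace's own.

IT_LOAD_MW = 10.0                    # assumed IT load of one hall

# Assumed cooling overhead as a fraction of IT load:
# a conventional compressor-based chiller plant versus
# fan-only adiabatic (indirect evaporative) cooling.
MECHANICAL_COOLING_FRACTION = 0.40   # assumption: 40% of IT load spent on chillers
ADIABATIC_COOLING_FRACTION = 0.08    # assumption: 8% of IT load spent on fans/pumps

mechanical_mw = IT_LOAD_MW * MECHANICAL_COOLING_FRACTION
adiabatic_mw = IT_LOAD_MW * ADIABATIC_COOLING_FRACTION
saving = 1 - adiabatic_mw / mechanical_mw

print(f"Mechanical cooling draw: {mechanical_mw:.1f} MW")
print(f"Adiabatic cooling draw:  {adiabatic_mw:.1f} MW")
print(f"Cooling energy saving:   {saving:.0%}")   # ~80% with these assumed figures
```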

This major investment will likely see the UK grow as a percentage of Rackspace’s overall business, which is still dominated by the USA, where it has data centres in Virginia, Texas and Illinois. Beyond this are facilities in Hong Kong (an English-friendly portal to greater China) and Sydney, Australia. To ensure it can remain competitive with the likes of Google and Amazon, Rackspace knows it needs to up the ante when it comes to global connectivity. To this end it is looking to partner with its Open Compute friends at Facebook and piggyback on the social network’s intercontinental fibre links.

Rackspace may be focussed primarily on the English-speaking world, but its infrastructure and connectivity give it the global reach to serve certain multinationals as well as those in its core markets. The physical location of its data centres may preclude it from providing local services in other markets, but as a thought leader and contributor to data centre design its influence will be truly global.

As various pundits have reeled off their security advice for 2014, many have listed the growing threat of denial of service (DoS) attacks as something to look out for. They are probably right to do so; two recent publications, the Arbor Worldwide Infrastructure Security Report (WISR) and the Prolexic Global DDoS Attack Report, both show that the number, size, sophistication and impact of DoS attacks continue to increase. Another report, from the Ponemon Institute in December 2013, suggests that distributed DoS (DDoS) attacks are now the third most common cause of data centre outage after power failure and human error, causing 18% of all outages; three years ago it was just 2%.

There are a number of different ways of denying service. The various methods of attack are well documented elsewhere; the Wikipedia entry 'Denial-of-service attack' is a good start. However, it is worth pointing out that whilst most will be familiar with the idea of volumetric DDoS attacks (the flooding of network, server and/or application resources), there are other types of attack that are more insidious. These include state exhaustion of load balancers and firewalls (blocking all possible connections to a given resource), attacks on domain name system (DNS) infrastructure and low-rate/slow attacks that will not be detected by looking out for high volumes of traffic and/or resource requests.
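To illustrate why a purely volumetric view misses the more insidious attacks, here is a hedged Python sketch contrasting the two patterns; the thresholds and traffic figures are invented for the example and do not come from the reports cited above.

```python
# Illustrative only: why a bytes-per-second threshold catches a volumetric flood
# but misses a "low and slow" attack that exhausts connection state instead.

VOLUME_ALERT_BPS = 500_000_000     # assumed alert threshold: 500 Mbit/s
SERVER_CONN_LIMIT = 10_000         # assumed concurrent-connection capacity

# A volumetric flood: comparatively few sources, enormous aggregate traffic.
flood = {"connections": 2_000, "bits_per_second": 4_000_000_000}

# A slow attack: many connections each trickling a few bytes to stay open.
slow = {"connections": 15_000, "bits_per_second": 1_200_000}

for name, attack in (("volumetric flood", flood), ("low-and-slow", slow)):
    volume_alarm = attack["bits_per_second"] > VOLUME_ALERT_BPS
    state_exhausted = attack["connections"] > SERVER_CONN_LIMIT
    print(f"{name}: volume alarm fires={volume_alarm}, "
          f"connection table exhausted={state_exhausted}")

# With these made-up numbers the flood trips the volume alarm, while the slow
# attack stays far below it yet still exhausts the connection table.
```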

To decide how seriously to take the threat and what level of investment should be made in the necessary countermeasures, those responsible for IT security should first consider why their organisation might be a target for such an attack: 'Why would they DoS us?' After all, launching any sort of DoS takes some effort and it needs to be targeted. Furthermore, it is not immediately obvious how DoS attacks can be monetised. Indeed, for cyber-criminals it is a relatively risky way to make money, principally through extortion: 'we are going to render your service ineffective until you pay a ransom'. Obvious candidate targets are those businesses that rely heavily on their e-services, such as online casinos and retailers; being knocked off the web means no money coming in.

Data in Arbor’s WISR report sheds some light on the actual motives reported by victims of attacks. Criminal extortion comes near the bottom of the list (16%). The most common motive (40%) is down to political and/or ideological disputes, so not cyber-crime at all but hacktivism. Many may say, 'well, that is alright then, they would take no interest in our dull everyday business'; that is, until you realise you are a supplier to someone that is of interest, and an easier attack target due to your complacency.

Other interesting motives include criminals demonstrating their DoS capability to prospects (26%); pre-sales activity, if you like—who cares what the target is, as long as it can be shown to be rendered non-functional! Competitive rivalry (18%) is organisations with similar interests attacking each other, mainly seen in emerging markets (imagine the scandal if a major EU-based brand were exposed as behaving in this way!). Flash crowds (19%), for example a rush to watch a video or secure coupons, are not DDoS attacks per se, but an unexpected legitimate rush on resources. Diversionary attacks (16%) are where DDoS in particular is used to send an IT department into disarray whilst a more targeted attack is launched elsewhere on its infrastructure.

One area not listed by Arbor is collateral damage. This is where your organisation is not the target but you share resources with an organisation that is. This is increasingly likely to be the case as the use of cloud-based services continues to increase. As with all DoS, this danger can be mitigated. It should also be pointed out that cloud-based services are part of the solution to DoS. First, if you are hit by volume due to a flash crowd, a cloud service provider should be able to add additional resource for as long as it is needed. Second, DoS defence is increasingly offered as an on-demand data-scrubbing service by vendors such as Akamai (via its Prolexic acquisition), Neustar, Black Lotus and DOSarrest, which divert suspect traffic streams to their servers and clean them up once an attack is detected.

However, this is often after the event. Many organisations defend their resources from DoS attacks using on-premise protection from vendors such as Arbor, with its Pravail APS product aimed at enterprises and Peakflow SP aimed at service providers (many of whom tout their own DoS mitigation services), and Corero with its DDoS Defense System. Corero is now going after the SP market too with a new offering called the SmartWall Threat Defense System, the premise being that cloud service providers should offer to protect their customers, for a premium, and mitigate both direct attacks and collateral damage; its message is 'always on' protection, rather than just during an emergency. Arbor also offers cloud-based protection with Arbor Cloud, which supplements on-premise protection. Radware is another vendor with such hybrid capability.

So, the DoS threat is real. Your organisation does not need to be an obvious target to be a victim; it may be the easy route to disrupting a better-protected partner or customer, suffer collateral damage in the cloud, be the hapless target of a pre-sales demo or even buckle under unexpected popularity. Whatever the cause, all organisations need the ability to see attacks coming and respond accordingly. The cost of putting in place some level of protection will likely be a lot less than the cost incurred during an all-out attack.

Introduction

In this age of digital by default it is important that all digital content is accessible. This will include web sites and web pages but also video, audio and documents. This article will investigate the needs, challenges and issues around the creation and consumption of accessible documents.

For this article a document is a collection of words and images that can be printed as a whole. The article does not cover interactive books that require the reader to be able to access them electronically.

These documents will include: letters, memos, minutes, reports, user guides, brochures, pamphlets, transcripts of speeches, magazines, novels, etc. They will be held in one or more digital formats.

There is a potential tension between the requirements of the creator of a document, the distributor and the user:

The creator of the document will wish to use tools and technologies that they are familiar with.

The distributor of the document will wish to minimise the number of document formats used for distribution. Multiple versions cost money, cause management issues and increase the risk of different users seeing different content.

The different users will wish to consume the document in different ways (the word 'consume' is used here rather than 'read' because 'read' implies reading words on paper or a screen, whereas the user may have the document read to them, or turned into braille or sign language, or other formats).

The end user must be considered the most important of these roles; if they cannot consume the document then there is no point in creating or distributing it.

This article looks at the requirements of these different players and reviews the alternative technologies available.

It summarises the pros and cons of various solutions and makes tentative suggestions for an optimum solution. It is hoped that this will help organisations that are going digital by default to decide how to distribute accessible documents; it is also hoped that it will show the weaknesses in current technology so that vendors can improve their products.

The document looks first at the end user, then the distribution process, then the creation process. It then looks at the various technologies for creating, distributing and consuming the document and concludes with some tentative best practice.

The end user experience

To understand how these documents must be created, stored and distributed we must first understand how different end-users will consume them.

However the user consumes the document, they need to be able to access more than just the text and the images (or descriptions of the images): they also need to be able to navigate the document's structure, search it, annotate it, and copy or extract content from it.

They will not, however, expect to be able to modify the original document without the express authorisation of the owner.

Types of consumer

People with different disabilities will wish to consume the documents in different ways. The following section outlines the different disabilities and methods of consumption:

Non-disabled: a person with no relevant disabilities will want to be able to read the document electronically on some type of screen. The document should be laid out so that its structure is visually apparent by the use of different types and sizes of fonts, use of bullets, indentation, and tables. The reader software should enable the user to navigate the document by table of contents, indexes, bookmarks and searches.

This electronic version of the document should be considered the base version: any other version should contain the same information.

Besides the electronic version, non-disabled people may wish to have a printed version of the document. It must be possible to print all or parts of the document so that the printed version is an accurate reflection of the electronic version.

People who are partially sighted should be able to modify how the document is displayed: size of text, type of font, background-foreground colours, line separation, justification, etc. to enable them to see the content as clearly as possible. The electronic document should interface well with screen-magnifiers.

People who are blind should be able to access the document using a screen-reader. The screen-reader should convey the structure of the document by announcing headings, lists, tables and other structural elements, and assist the user's navigation by providing functions such as jump to the next header, to the end of a list, or to the next chapter.

People who are vision impaired and use braille should be able to access the document and have it presented on a braille display including the structure and the ability to navigate.

It should also be possible to create printed braille from the base document.

People with dyslexia can improve the reading experience by using suitable background-foreground colour and brightness combinations, and also by using left-justified text. Having text read out aloud and highlighted at the same time can also improve the experience.

People with hearing impairments have different capabilities of reading written text. If their reading level is good then the base document should be accessible. There is a great deal of pressure from the deaf community for films and TV to be captioned but there is much less pressure for them to be signed; the main area of signing is for news and current affairs where live captioning is inadequate. Signed versions of a document should probably be limited to general introductions to an organisation and documents specifically aimed at the deaf community.

People who do not understand written English may need some introductory document which explains what the organisation does and how to get help in understanding the other documents.

People with learning disabilities may not be able to fully understand the base document. Firstly, the base document should be reviewed to see if it can be altered so that it is understandable by a wider range of cognitive abilities, without it becoming patronising for the majority of users. If this cannot be done then a version may need to be created that is easier to understand without losing any of the meaning. This format is often known as 'Easy Read'; it concentrates on simple language and the use of images and videos to match the words.

People who cannot use keyboards and/or pointing devices should be able to access and navigate the base document using assistive technologies such as switches or voice commands.

People with severe dementia and similar problems cannot understand or make decisions independently. In these cases the document only has to be accessible to their carer. An extreme case is a person in a coma.

Formats required by the consumer

To support all these different user types ideally requires the following end user formats (requirements for readers for these formats are discussed below):

Base document, which includes text and images. The format should support:

Screen-readers

Screen-magnifiers

Changes to fonts, colours, justification etc

Easy Read documents are needed where the base document is difficult to understand by some users.

Sign language: the base document can be converted into sign language by videoing a signer reading the document (there is research into automatic generation of sign language but it is not considered advanced enough to be used instead of human signers). At present there is no easy way to support navigation of signed videos.

Audio can be produced either by using text-to-speech software or by recording a human reading the text. At present there is no easy way to support navigation of audio versions of documents; however, if the voice is synchronised with a text version then navigating the text version will provide navigation of the audio.

Possible Distribution Formats

The question is which format(s) should the content be distributed in? The following are some options with pros and cons.

Word processor format

The documents will often be created using a word processor (Microsoft Office (.docx), Open Office (.odt), Apple iWork (.pages)). If a document is going to be distributed in this way it needs to be in a format that can be read by all systems: this means .doc or possibly .docx. There are two problems with distributing in this format:

The formatting of these documents by different word processors is not always identical and in a few situations does not work at all. This can be a particular problem with mobile devices that have limited support for these formats.

The content is not intended to be edited or changed by the recipient but the program used to access it is designed to do just that. The recipient should be able to annotate and comment but not to change the original.

For these reasons it is not really a suitable format for distributing the base document. However it is a very common format for creating base documents and therefore there should be methods for converting them into formats suitable for distribution.

PDF

PDF is designed to be a final document format. The common tools used to access it, such as Adobe Reader and Apple Preview, do not support change but do provide annotation functions.

PDF used not to work well with screen readers because the format did not include any document structure information; with the publishing of the PDF/UA standard this is no longer the case.

PDF readers are available on all relevant platforms and are installed on most PCs. PDF is therefore a popular format for distribution of finalised documents.

PDF/UA has not been designed to facilitate conversion to other formats; it is possible but not easy.

PDF documents are designed to ensure the page layout is preserved. This is important if the page layout is critical to the design of the document, or if the layout has a legal significance.

ePub

The ePub format is growing in importance and is especially popular on mobile devices.

The format does not define the page layout but just the document structure. This means that the document can be rendered differently to suit the display device and user preferences. It is also suitable for converting into other formats including Braille.

It has the functionality to support screen readers as the document structure is defined as part of the format. The common reader tools that are used to access the content enable users to annotate but not to change the original.

The latest version of the standard (EPUB 3) includes functions for synchronising audio with the text.
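For illustration, EPUB 3 does this with 'media overlay' documents, which use SMIL markup to pair text fragments with audio clips. The short Python sketch below simply writes out a minimal overlay; the file names and clip times are invented for the example.

```python
# Writes a minimal EPUB 3 media-overlay (SMIL) fragment pairing two text
# fragments with clips from an audio file. File names and times are invented.
overlay = """<smil xmlns="http://www.w3.org/ns/SMIL" version="3.0">
  <body>
    <par id="p1">
      <text src="chapter1.xhtml#para1"/>
      <audio src="audio/chapter1.mp3" clipBegin="0:00:00" clipEnd="0:00:12"/>
    </par>
    <par id="p2">
      <text src="chapter1.xhtml#para2"/>
      <audio src="audio/chapter1.mp3" clipBegin="0:00:12" clipEnd="0:00:27"/>
    </par>
  </body>
</smil>
"""

with open("chapter1_overlay.smil", "w", encoding="utf-8") as f:
    f.write(overlay)
```

A reading system that supports media overlays can then highlight each text fragment as the corresponding audio clip plays, which is what makes synchronised text-and-audio navigation possible.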

The present issue is that not everyone has ePub readers installed on their device. Also not everyone has an ePub creator tool.

DAISY

DAISY is a format that has been developed to support people with vision impairments. It requires a special reader and development tools. It would appear that the benefits of DAISY are being built into EPUB 3 technology. Therefore it is unlikely that DAISY will become a general document distribution format.

Audio

MP3 is the common format for audio. The problem is that it does not include any facility for defining structure, for navigation or for annotation.

MP3 versions of the base document may work for short documents or for documents that are designed to be read linearly such as novels. On its own it is not a suitable format for documents such as reports, manuals or magazines.

Video

MP4 (or MOV) is the standard file format for video. It is the format that will be used for sign language. The problem, as with MP3, is that it does not include any facility for defining structure, for navigation or for annotation.

A suggestion is that a video file is created which includes the signed version of the text, an audio track with the spoken words, and a closed-captions track with the written text. This way there is one file that can support users with different disabilities.
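As a sketch of how such a combined file might be assembled, the Python snippet below shells out to the widely available ffmpeg tool; it assumes ffmpeg is installed and all the file names are placeholders.

```python
# Illustrative sketch: mux a signed video, a narration audio track and a
# subtitle (closed caption) file into a single MP4. Assumes ffmpeg is installed;
# all file names are placeholders.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "signed_version.mp4",    # video of the signer
    "-i", "narration.mp3",         # spoken-word audio track
    "-i", "captions.srt",          # written text as captions
    "-map", "0:v", "-map", "1:a", "-map", "2:s",
    "-c:v", "copy",                # keep the video stream as-is
    "-c:a", "aac",                 # re-encode the audio for MP4
    "-c:s", "mov_text",            # MP4-compatible subtitle codec
    "accessible_document.mp4",
], check=True)
```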

Recommended Distribution Formats

Based on the discussion above it would seem that all users can be accommodated by providing two formats: EPUB 3 and video. EPUB has been recommended over PDF/UA because it is designed to support conversion and because of its widespread support on mobile devices.

Base document

The base document should be distributed using EPUB 3 format. Given a suitable reader (see discussion below) this format can be used by people with most of the disabilities described above; the one major exception is people who are dependent on sign language for communications.

The format can be converted relatively easily into other formats. This means that users who require another format for technological, preferential or legal reasons can convert the document or have it converted for them.

Video

Sign language cannot be adequately created from an EPUB 3 document. The only solution for this requirement is to create a video of a signer reading the document. If this includes the sound track of the document being read then the video provides a single source that supports multiple users.

It is not recommended that a video is made of every document; rather, a decision should be made for each new document as to whether it is beneficial to make the video up-front or whether it should only be created on request.

Readers

The three formats (EPUB, PDF and Video) have different reader technologies.

EPUB readers

There are many different readers on the market. They all support the EPUB format but vary in details such as which platforms they run on, the design of the user interface, and the options available for the user to change the look and feel. This means that it is not possible for the distributor of the document to recommend and link to a single reader (this compares to PDF readers where, although there are multiple readers on the market, Adobe Reader can be recommended for all users).

This means that the user has to decide which reader is most suitable for them. Some questions that the user will need to consider are:

Does the document have to be loaded into the reader library before being consumed, or can it be opened from a standard file directory?

Does the reader interface effectively with the assistive technology they use?

Can the user set up themes so that they can use different sets of options for different types of documents?

Is the customisation interface easy enough to use? Some options should be very easy to change, but ideally there should also be a more sophisticated way of changing the options, e.g. a standard set of three background-foreground colours to choose from, but with the ability to use CSS to define any combination.

Is there an in-built text-to-speech facility?

PDF Readers

There are several readers on the market; not all of them take advantage of the PDF/UA tagging.

Adobe Reader is the leading reader and is available for all major platforms. Not all of the assistive technologies available understand or take advantage of PDF/UA, especially in the mobile environment.

Video Players

Video players are available on all major platforms. The problems with video players are that they do not provide functions for: defining structure, navigation, searching, annotation, copying or extraction.

Creation and Conversion tools

EPUB tools

There are various EPUB creation tools: there are desktop publishing systems that can be used to generate EPUB documents and there are tools that convert from word processors (Microsoft Office or Open Office) to EPUB.

Assuming many of the documents will be written using a word processor this section concentrates on products that convert the source to EPUB.

Calibre is one tool that will convert from .docx and .odt to .epub and the latest version supports more styles and formats than before. The problem is that there is a lack of documentation as to what can be converted and how it is converted. This information is needed as the ideal is to create the document in the word processor and then automatically generate the .epub without any manual intervention.

Calibre and other tools can read EPUB and convert it into other formats.
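As an example of the kind of automation envisaged, Calibre ships with a command-line converter, ebook-convert, which can be scripted. The Python sketch below assumes Calibre is installed and on the system path; the folder and file names are placeholders.

```python
# Sketch: batch-convert word processor files to EPUB using Calibre's
# ebook-convert command-line tool (assumes Calibre is installed and on PATH).
import subprocess
from pathlib import Path

source_dir = Path("documents")       # placeholder folder of .docx/.odt files
output_dir = Path("epub_output")
output_dir.mkdir(exist_ok=True)

for source in list(source_dir.glob("*.docx")) + list(source_dir.glob("*.odt")):
    target = output_dir / (source.stem + ".epub")
    subprocess.run(["ebook-convert", str(source), str(target)], check=True)
    print(f"Converted {source.name} -> {target.name}")
```

How faithfully headings, lists and tables survive the conversion still needs to be checked manually, which is exactly the documentation gap noted above.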

PDF/UA tools

There are several tools for converting .doc, .docx and .odt files into PDF/UA. These include Adobe Acrobat, Microsoft Office and Open Office so the process is well supported by the leading players.

There are products that attempt to convert from PDF to other formats but they tend not to use the PDF/UA tagging so the output often loses much of the structure of the original.

Conclusion

To provide accessible documents to the widest possible set of users an organisation should distribute the documents in accessible EPUB format with some also available as videos with the text read out and signed.

To ensure this is practical there needs to be more research so that recommendations can be made about:

The best readers for different users.

The creation of word processor documents that can be automatically converted into EPUB documents.

This recommendation is intended to provide the best long-term solution to accessible documents. It should be the solution promoted by the accessibility community. However, the creation and reader technologies for EPUB are at present (January 2014) somewhat immature and lacking a complete set of easily implemented functions. There is a need to persuade the providers of EPUB technology to improve the quality and function of their products.

Therefore, for a distributor of accessible documents who requires an immediately available, low risk solution PDF/UA could be the preferred choice.

How will everybody cope after the demise of their favourite social media destination? Farewell Facebook, Ta-ra Twitter, Toodle pip Tumblr etc.

Not going to happen? Sure it will—always does, one way or another. The next big thing, a swing away from sharing and openness, a left-field competitor, a marketing or operational blunder, or simply going out of fashion—remember Bebo, MySpace, Second Life?

Although some may be bereft, tired and emotional, most users will simply shrug their shoulders and move on. Many will have already had that experience with another platform or seen other changes in their lives. Most will probably also have several social tools at their disposal and just substitute one with another.

For businesses it is a different story. Some take so long to catch up with and adopt any new technology idea that by the time they do, they have invested a significant amount of resources and time so are almost wedded (and welded) into a particular approach. Just look at how long so many businesses took to get into using the internet, and sadly how many simply just manage to use it as an extension for something they've already been doing for years—the corporate brochure website being the classic example.

And don't think that this is massively different in the gung-ho technology savvy US of A. Most American companies, large and small, are slow to adopt, have static, out of date or out of style websites and have no more sophisticated e-approaches than their European and Asian cousins.

This is what makes certain players stand out so much; some in the tech industries, colossi like Amazon and the odd player in other traditional sectors. However, the majority of large companies are too slow or stifled by internal silos, and the small crazy-named startups try to be so hip that their pants fall down. When they eventually get to grips with a specific element of 'the internet', 'm-commerce' or 'social media', the audience has moved on to another element.

The problem? Too much focus on the technology specifics, and not enough on how technology can map into the business and support its processes.

The way many corporates manifest themselves in the realm of social media is as clear a misstep as the bad old ways adopted in the late 1990s by companies trying to 'go online'. Companies create Facebook pages, LinkedIn groups and Twitter handles, but not strategies for increased customer interaction, for building partner communities or for using them to benefit the organisation, i.e. ultimately to sell more at lower cost.

For organisations that want to be remembered by their shareholders as more than just "nice, with lots of friends", social media needs to deliver real value just like all other media and channels in marketing and sales. Building awareness is good, but building solid reputations is better and for all that great social connections can recommend, they can also destroy and sometimes they can be deceptive and distrusted—different friends' views carry different weight.

The problem is that all too often, control of the social investment purse strings resides in the wrong place: with CFOs who are prone to pre-judge most things through too narrow an ROI lens, and CIOs who, whilst becoming much more positive towards social media in general, are too often focused on specific technologies and, according to recent surveys such as that by Harmon.ie, apparently less likely to be as 'social' as other members of the board.

While marketing should be much closer to the heart of the business it is also risky letting the CMO run away with the social budgets. Specific social platforms may appear to have immediate short term marketing appeal but, as outlined earlier, an organisation's social strategy should not be led by a particular technology or social media site but by the business outcomes required.

These may be marketing oriented, such as awareness generation, building partner ecosystems and communities of interest, or they may be directed towards support and long-term customer relationships. However, just like the huge IPO valuations of social media providers themselves, eventually the investment will have to be justified by increased value to the business, i.e. significant cost reductions or sales growth.

Instead of fighting on a department by department front, just as social technology implies, a collaboration between different stakeholder groups within the business is more likely to achieve the best results. It might be marketing or IT led, but crucially it must have contributors from right across the organisation, working outside of their silos and comfort zones.

Social media exploitation by the business seemed like such a simple idea when so few were using it, but the arms race has moved on. Organisations need social media strategies, plans and multi-faceted teams that acknowledge that social media is a permanent aspect of business that impacts many areas, whilst specific social media sites are just isolated ships that may pass in the night. Without an all-embracing strategy, all they might make is a few more friends, rather than adding more to the bottom line.

The role of IT in many organisations is often hotly debated—is it an enabler or an obstacle to getting things done? With so much information in the public psyche from so many consumer technologies, more and more employees will not only have an opinion, they will also have direct and personal experience of good and bad IT products.

So surely, those with access to all the new hot technologies and cool gadgets—the IT department—will be revered as the well-informed guides to this golden age of connectivity? Er, no. More often than not, they will be bypassed in the stampede.

While it is not too surprising that this might happen for the consumer mobile technologies that have risen to prominence in recent years—smartphones, tablets etc.—what is more worrying is the lack of regard for the role of the IT department in the systems that support industry-specific needs and significant business processes. Bring your own device (BYOD) is no longer enough; now it becomes a matter of DYOT—do your own thing.

This was noted in research conducted recently by Quocirca in the insurance sector. Here, flexible systems well aligned to business processes were seen as vital to the business, but there were doubts about the role that IT might play in the provision of this. These reservations were not about the software used or product suppliers, but the internal IT department itself.

This highly conservative industry sector is changing in the same way that many others are doing or have done already. There is a drive for increasing sales, doing more online and automating or streamlining what were once manual business processes.

However, for a business to be 'agile' it needs to not only have the right IT underpinnings, but also has to ensure that these align appropriately with the business.

This is where an IT department needs to be well informed regarding available technology, but also have a good understanding of the key business processes and solid communication with line of business people, so that it can guide the business as to what is available, and how it might be best exploited.

Some examples emerged from the research.

For example, insurance companies are keen to drive up sales volumes and see brokers as their most important channel. Although they might extend their systems to this channel, relatively few have systems that can be tailored by these brokers. Why not? Doing so would not only build a closer relationship with what is admitted to be the most important route to market, but also allow brokers to streamline the IT tools to the way they operate, improving their sales efficiency.

The potential for increasing the online route to market was also being overlooked. However, unlike other industry sectors, where online tools are being used as a great leveller, in this case it seemed that smaller insurance companies were more reluctant to invest in online than the larger players.

Some of this may betray the more conservative nature of the insurance sector, where smaller businesses are often long standing partnerships, but also indicates a lack of confidence in the approach to IT.

Those tasked with IT management, especially in smaller companies, might find themselves bogged down in the day to day challenges of just keeping everything running, but a little more investigation and awareness of what the market has to offer would be worthwhile.

This would not only be good for the individuals concerned (i.e. become more valuable to the organisation, do more interesting work and, crucially, keep employed), but also the organisation itself.

The problem is that without decent IT guidance, those in the line of business might adopt a DYOT approach and just go for popular IT products that appear to fit the needs of the moment. Then the organisation might not only miss out on longer-term benefits, but also find that the adoption process makes it significantly less flexible in the meantime if it locks in and reinforces existing bad habits and poor processes.

Allowing IT to go out on its own and make expensive choices independent of the business is no solution either, so even in smaller organisations it becomes important to pair IT and business expertise together. That significantly lowers the risks of either missing out on new ideas or making bad investment decisions—and surely insurance companies are always interested in risks?

Resellers looking to capitalise on the growing use of cloud services need to look at both the direct and indirect opportunities. The direct ones are the selling of cloud services themselves, perhaps implemented by the reseller or sourced from a cloud service provider or an aggregator. The indirect opportunity comes from selling the technologies that support the use of cloud services, especially those relating to security.

A recent Quocirca research report—Digital identities and the open business—shows that organisations that are using lots of cloud services recognise the importance of security for enabling this (Figure 1) and are spending a greater proportion of their IT budgets on security than those who hold back. One area of security stands out—identity and access management (IAM); 97% of cloud “enthusiasts” have an IAM system compared to just 26% of cloud “avoiders” (these terms are defined in the report).

One reason for this is that the single sign-on (SSO) capability of many IAM systems has made it easier to provision and de-provision access to multiple cloud services. This ensures a given user has access to the resources they need via a single identity with strong authentication (the user only has to go through the login process once). Perhaps more importantly, when the relationship with a given user ends, IT managers can be certain all access rights are removed quickly and completely through a single update to the IAM system. SSO also makes it easier to create granular access policies for different types of users and to keep accurate audit trails.
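The de-provisioning point can be illustrated with a toy model in Python; this is not any vendor's API, just a sketch of the principle that when every application checks entitlements against one central store, a single update removes all access.

```python
# Toy model (not a real IAM product's API) of single-point de-provisioning:
# applications trust the central directory, so removing one identity
# removes access everywhere at once.

class Directory:
    """Minimal stand-in for a central identity store / SSO provider."""
    def __init__(self):
        self.users = {}                       # user id -> set of entitlements

    def grant(self, user, *apps):
        self.users.setdefault(user, set()).update(apps)

    def deprovision(self, user):
        self.users.pop(user, None)            # one update revokes everything

    def is_allowed(self, user, app):
        return app in self.users.get(user, set())


iam = Directory()
iam.grant("alice@example.com", "crm", "billing", "support-portal")

print(iam.is_allowed("alice@example.com", "billing"))   # True while provisioned
iam.deprovision("alice@example.com")                      # relationship ends
print(iam.is_allowed("alice@example.com", "billing"))   # False for every app at once
```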

This article has been careful to use the term “user” rather than “employee”. This is another benefit of many IAM systems: the ease with which applications, cloud based or otherwise, can be made available to external users. This is the number one motivator for putting IAM in place; 58% of the businesses interviewed for the latest research had opened up applications to consumers, users from business customers and/or users from partners. Another recent Quocirca research report—The mid-market conundrum—shows that the average UK mid-market business has 40 times as many external users as internal ones.

SSO can also be used to allow access to multiple applications for external users. Think of a travel agent providing flights, hotel bookings and car hire or indeed a reseller selling aggregated access to several cloud based applications to a range of customers. A further benefit that many IAM systems provide here is federated identity management.

Whilst the majority of businesses still rely on Microsoft Active Directory as a source of identity for their employees, they are often relying on other sources for external users. For users from business customers and partners, this is most likely to be a given organisation’s own directory system, but it could be a government database or a membership directory of a professional body. However, when it comes to consumers, one source of identity is coming to dominate—social media (Figure 2).

Social identities are those used to access online consumer services such as Facebook, Google and PayPal. Using social login avoids having to create and manage millions of identities. A number of specialist providers have emerged, such as Gigya, Janrain and LoginRadius. They check the veracity of social logins, act as brokers between multiple social media sites and those providing services that want to use social login, and enable a single view of consumer customers regardless of how they log in. Using such services it is possible to establish a high level of confidence that a real person is being dealt with.

The social login vendors limit themselves to social identities and maintain a consumer focus. Incorporating users from other businesses alongside employees and consumers requires the broader federated identity management capability described earlier. The big identity vendors such as CA, IBM, Oracle and Intel/McAfee are adapting their systems to address this requirement and new vendors such as Ping Identity, Okta and Symplified have emerged.

To come full circle, many of these are now provided as on-demand services—IAM as a service (IAMaaS). Indeed, Quocirca’s research shows that 43% of IAM deployments are either pure on-demand services or a hybrid deployment of a legacy on-premise system with a cloud service (Figures 3 and 4). Needless to say, those making extensive use of cloud services are the most likely to turn to IAMaaS, with around two thirds using it in some form. This makes sense: if the users and applications can be anywhere, why not the IAM system; cloud feeds cloud! Resellers preparing for the future need to make sure they have the capabilities in place to capitalise on both the direct and indirect cloud opportunities.

Increasingly, collaboration is seen as part of enabling business success. Partly, it seems, because of a need to reduce what people at Salesforce.com see as the disconnect between structured data and "unstructured [i.e., differently structured] content"; and partly because it can bring together teams separated by geography and time zone (and I'm sure there are other benefits). Which is why everybody is trying to sell you collaboration software - but collaboration is very definitely a people and culture thing; any software is only an enabler.

So, I was interested to hear Kuoni, the travel experts, talking about its implementation of Salesforce.com Chatter. I have just enough space here to run through what I took away as the key points:

First, you need a real issue to address - and some executive sponsorship for addressing it. Your biggest problem isn't going to be a software feature your collaboration tool doesn't have or the quality of its UI; it's going to be people who are frightened of collaboration, frightened of saying things in public their management (or peers) might disagree with. That's a cultural issue - probably a cultural change issue - and you need to put resources into addressing it.

Second, start with issues that deliver quick returns for not much effort. Sounds obvious, but collaboration evangelists have been known to pursue something that would make a great PhD dissertation and might change the way business operates, but is hard (expensive) for the business to implement and harder still for ordinary users to get their heads around today and see any immediate benefits arriving. Kuoni, which is a B2B travel specialist, chose sharing of market knowledge: market updates, travel experiences - and the induction of new employees.

Then you move onto low hanging fruit (target obvious benefits, that need, but also justify, a bit more effort) - for Kuoni, this included timely knowledge-sharing, across a global business and across time-zones.

Avoid like the plague what Kuoni calls "fool's gold": interesting collaboration opportunities that will take a lot of effort to implement and have no obvious benefit in the short term - and no executive sponsor.

Third, choosing tools. The vendors probably won't like me saying this, but if you have a vision and executive sponsorship for collaboration, you can probably make any tool work for you; if you haven't, you'll probably fail with the best tool in the world. Which isn't to say that choosing the right tool for your situation won't make things a lot easier (although the tools selection process probably needs an article to itself). Kuoni chose Salesforce.com Chatter, because its early adopters already used the Salesforce.com CRM app and Chatter fitted into their work patterns (and screen GUI) with minimal disruption.

Fourthly, start small and grow with demonstrable successes (and, oh yes, and I'll say it again, start with informed and enlightened executive sponsorship - not just an Exec who's read in the Times that "Facebook for the business" is the coming thing and doesn't know any more than that). Kuoni started small with the CRM users in just one area, it let the group discover what it could achieve easily and quickly with Chatter (see point 2) and then expanded participation, using its early successes to help people "get the message". It now has some 1600 users, of which some 600 aren't also CRM users. There will be people who can't see the point of Facebook and Twitter (I must confess to some sympathy there - I simply don't need to know that some friend has just bought a Big Mac on the way home), and will need to be persuaded by their workmates that one can actually get something useful out of more business-oriented collaboration.

And that's just about enough for now. I'll just mention that Salesforce.com Communities don't replace Salesforce.com Chatter, they complement Chatter use cases and extend them outside of the company to customers and partners.

Bloor Research is proud to announce it has become a partner of a major new national campaign to raise awareness about the barriers faced by people with disabilities in accessing the internet and other new digital technologies, and to help overcome them. This is a natural follow-on to the research into accessibility that Bloor has conducted over the last 7 years.

Bloor believes that our readers should follow suit and show their support for ICT accessibility and gain the benefits available from a community of interest.

Go ON Gold aims to encourage businesses, organisations and policy makers to become more aware of the needs of disabled people - including their own staff and customers - and of the benefits to the economy of enabling everyone to be online.

New technology, from the internet to smartphones and digital TV, can be liberating for disabled people but can also turn into another way of excluding them from work, entertainment, shopping and other everyday activities. But shockingly, some four million disabled people in the UK have still never used the internet, either because of design barriers or because they may be unaware of advances in technology that can make access easier.

As part of its awareness-raising work, Go ON Gold has filmed a series of videos of campaigners and technology users.

One of the video subjects is Paralympian peer and disability rights campaigner Tanni Grey-Thompson. The sixteen-times medal winner is a firm believer in the enabling power of IT: "For people whose mobility is compromised or who lack the resources to be able to get out and about as much as they would like, full internet access can be hugely liberating. In front of the screen, we can all be equal and Go ON Gold is set to make this a reality."

Go ON Gold, funded by the Nominet Trust, is a partner campaign of Go ON UK, the new national digital inclusion charity chaired by UK digital champion Martha Lane Fox and backed by the BBC, Age UK, the Post Office, TalkTalk, Lloyds Banking Group, the Big Lottery Fund and Eon.

The Go ON Gold website will act as a central focus for links to key resources and expertise, ranging from charities providing free or subsidised equipment, to centres offering one-to-one advice, and guidance for website developers to ensure the accessibility of the digital content they produce.

There is a principle that internet service providers (ISPs) and governments should treat all data crossing the internet equally. It should not matter what type of device is being used, who the user is, or what site or application the data is coming from or going to—net neutrality should mean no difference in charging models and no discriminating between the different use cases.

The arguments go back and forth as to whether this should be enshrined in legislation as a right, or allowed to drift in a competitive open market.

Despite the arguments, and despite the capacity of technology to advance, the laws of physics impose restrictions and certain resources are therefore limited. This might not be too much of an issue with the massed bundles of fibre optics at the heart of fixed-line networks, but wireless networks have to balance range, capacity, power and the frequency spectrum in what is an increasingly ‘noisy’ environment. Ideally without ‘frying’ anything en route.

While the resources are constrained, the boundless enthusiasm and appetite to access mobile data and applications is not. Nor, given the numbers of subscribers and devices, is the number of endpoints diminishing. In fact, with a re-awakened interest in machine-to-machine communications (M2M), or an ‘internet of things’, this is likely to accelerate further.

So what about unwired net neutrality?

There are already differential services that break the spirit, if not the letter, of the principle. To observe this, consider the way hotels have been offering Wi-Fi. Initially it appeared to be a new revenue stream, but then establishments realised it was costly to get right. As more venues started to offer it, the differentiation was lost and free Wi-Fi became a ‘table-stakes’ offering, once hoteliers realised that they actually make their money from renting out rooms and selling food and drink.

Not all have reached this point yet, but the more progressive organisations have already gone a step further. They offer ‘basic’ Wi-Fi for free, but have a premium service that offers greater bandwidth, improved latency etc—what might be described as ‘professional’ Wi-Fi, compared to currently simple hotspots. Basic allows a bit of email and gentle browsing, but the premium service would be good enough for consumers’ IP telephony, gaming and video streaming or virtual desktops and unified communications for the enterprise user.

Then there are cellular networks. Some carriers are premium-pricing their higher speed 4G offerings compared to the tariffs on their 3G networks. Of course with differential caps on usage it also gets a little confusing as to which is the best service for an individual user. In countries where only one or a few of the mobile networks are offering 4G today, there will be rapid pricing changes as operators switch between land grab, maximising revenue and maintaining network quality modes.

Given that users have different needs—from M2M applications that might only require a few guaranteed kilobytes to video streaming gamers who need high bandwidth and low latency—there will have to be different types of services offered. Setting caps on how many minutes of communication or megabytes of capacity will be bundled and then charged for will no longer be sufficient.

Different qualities of service will need to be differentially priced. This might mean application bundling (e.g. all the social media you can eat, but video charged by the megabyte) or guaranteed service levels (e.g. all gaming traffic delivered at sub-XYZ latency, with email transmitted as ‘best efforts’).
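To make the idea concrete, the sketch below shows how such a rating rule might look in code. It is purely illustrative: the application classes, prices and latency guarantee are invented for the example and are not drawn from any real operator’s tariff.

```python
# Illustrative sketch only: a toy rating rule that charges traffic
# differently per application class. Class names, prices and the
# per-megabyte model are invented, not taken from a real tariff.

from dataclasses import dataclass

@dataclass
class UsageRecord:
    app_class: str        # e.g. "social", "video", "gaming", "email"
    megabytes: float
    avg_latency_ms: float

def rate(record: UsageRecord) -> float:
    """Return the charge (in pence) for one usage record."""
    if record.app_class == "social":
        return 0.0                       # bundled: "all you can eat"
    if record.app_class == "video":
        return record.megabytes * 1.5    # metered per megabyte
    if record.app_class == "gaming":
        # premium flat fee, waived if the latency guarantee was missed
        return 50.0 if record.avg_latency_ms <= 30 else 0.0
    return 0.0                           # email and the rest: best efforts

if __name__ == "__main__":
    day = [
        UsageRecord("social", 300, 80),
        UsageRecord("video", 1200, 120),
        UsageRecord("gaming", 150, 25),
    ]
    print(f"Total charge: {sum(rate(r) for r in day):.1f}p")
```

Even this toy example hints at why rating and billing become harder: the charge now depends on what the traffic is and how it was delivered, not just how much of it there was.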

It will be a real challenge for rating, billing and marketing, but there is no dark fibre in the sky and all the innovative use of spectrum has its eventual limit, which, with ever more users and usage, is close by.

The superfast mobile net is unlikely to be very neutral, but that might work out to be beneficial in the long run.

I've noticed that some people seem to assume that if you talk about technology as being important in organisations today (and I believe it is), then you are also saying that everything should be looked after by the IT group.

Not true; technology should be managed by the business as part of the business; and although you do need technologists that understand technology (the laws of physics really can't be repealed by the marketing director), I think that, ideally, they should probably work in the business and share business goals and successes. Perhaps the 'IT Group' of the future is a purely cyberspace organisation, maintained with the aid of robust collaboration tools. OK, I do realise that's some way off.

Nevertheless, this all applies in spades to Social Collaboration. This is not primarily a software thing and it doesn't 'belong' to the IT group - collaboration tools are only an enabler (although they can be an important enabler, an aspect Bloor focuses on; and don't forget that some technicians will have to install and maintain them). Collaboration belongs to the whole organisation and is more about people and process than about tools (just buying trendy collaboration tools certainly won't deliver a collaborative organisation by itself). In fact, if you see Enterprise Architecture (EA) as being about transforming the enterprise (in accordance with business strategy and vision), perhaps Social Collaboration is part of EA - and that shouldn't be owned by the IT group either.

Which brings me to an interesting newsletter I've found: Tom Wolff's Newsletter from Tom Wolff & Associates. Tom Wolff, Ph.D. is apparently a recognised consultant on coalition building and community development, with over 30 years' experience training and consulting with individuals, organisations and communities across North America. He claims that his clients include federal, state and local government agencies, foundations, hospitals, non-profit organisations, professional associations, and grassroots groups.

He has published "The Power of Collaborative Solutions: Six Principles and Effective Tools for Building Healthy Communities". His Newsletter is mostly about collaborative communities, and more about people and process than technology and software.

His thesis is that collaborative processes are the key to addressing the critical challenges that confront our communities, our states and our nation in the new millennium. Perhaps, on a smaller scale, much the same applies to the challenges facing businesses today. He isn't at all focussed on implementing collaboration software technologies but on higher collaborative issues.

I think his Newsletter and his book might provide an interesting read, even for people well outside their intended audience, such as anyone trying to implement social collaboration in a business enterprise. They are about as far as you can get from 'guides to installing SharePoint'. For example, the current edition of the Newsletter looks at seven keys to success when implementing multi-site coalitions, including things like "building a learning community"; the book looks at (for instance) how collaboration differs from (in decreasing order of capacity) cooperation, co-ordination and networking.

Who knows, success with social collaboration may even have a spiritual component; to quote Wolff, "we must create collaborative social processes that parallel and reflect what we hope the outcomes will look like" - in other words, if we want to build an altruistic and ethical collaborative organisation, then we can't build it in a 'blame culture' by diktat, enforced use of software and divisive individual monetary incentives.

This sort of thinking probably underlies successful implementations of social collaboration software, which is only a means to an end.

Quocirca has recently published a free checklist to help those looking at investing in self-service solutions. So, why might it be useful?

Well, there has been a rush in UK retail in recent years towards customer self-service and automation: pay-at-pump petrol stations, self-checkout tills and so on.

The reasons for this are presented as ‘customer convenience’, but it is pretty clear that it is all too often about cutting costs, and too little thought is given to how it might affect the overall customer experience.

Specialist retailers will argue they have to do this in order to compete with online or other, higher footfall, locations such as supermarkets, hypermarkets and shopping malls.

There may be some truth in this but, by simply commoditising the shopping experience, those making knee-jerk decisions to automate customer service run the risk of further business decline.

Clearly something is amiss, as so many major and well-established specialist companies have disappeared and continue to do so, mainly with a wail of “habits have changed” or “it’s all gone online” after they have narrowed stock ranges, made the stores feel like warehouses and trained the staff to be as friendly as a bent nail.

The best (and surviving) retailers—whether online, mobile or physical stores—provide service excellence irrespective of the technology or channel. Automation and self-service have a very important part to play in all these routes to market, but they have to be delivered with the customer in mind, not simply as a cost-cutting exercise.

The first thing to realise is that self-service is not a standalone tool or alternative to existing processes, but has to be integrated into the wider business in order to be successful. It should be viewed as a strategic and well-researched investment, not a simple tactical option. For this reason, the decision-making process of how to implement self-service and what solutions or tools should be implemented has to be well thought out and comprehensive.

To start with, an organisation must identify why the move to self-service is being made in the first place and what the main requirements are. There may be a cost reduction element, but how important are other matters such as increasing cross-channel co-ordination or improving customer service levels and internal communications?

For example, are customers automatically invited to chat if their website interaction indicates they might need help or can support agents see what customers have done, requested or replied in order to avoid duplication of effort on the part of the customer?

However, this process may reveal that there are underlying issues with poor business systems, such as lack of a formal handover at shift changes or problem departments—e.g. a technical group refusing to get involved in customer contact. These will need to be addressed separately to the implementation process as simply deploying self-service alone will not fix these internal problems.

Next consider which suppliers will need to be approached and investigated. As well as taking the partisan views of the vendors themselves and some of their ‘tame’ customers, dig deeper and find out the broader market perspectives from a wider mix of customers, perhaps through trade shows and conferences. Industry analyst perceptions may also be valuable, but be aware that some analyst houses may overlook specialist or niche vendors and it is best to take a broad view.

The bulk of any product or service suitability assessment will come down to comparing features and functions, and a checklist will be useful. However, as this is an important investment, it is always worth checking an intended supplier's people, company background and current client base to get the full picture.

It is never easy going through the process by oneself, and even self-service benefits from some sort of external guidance. So for an idea of how to approach the self-service product and vendor selection process, download a free checklist.

1 Cloud-Based Phone CallsPhone calls will increasingly become cloud-based and less reliant on a network provider. We will also see more cities roll out widespread free Wi-Fi. Most recently, we’ve seen Google, in an expansion of its role as an Internet Service Provider, introducing New York City's biggest contiguous free public Wi-Fi network in the Chelsea neighbourhood of Manhattan. Chelsea is home to Google's New York headquarters, which conveniently means employees out at lunch breaks or area meetings will be able to remain productive even while out of the office!

Wider availability of Wi-Fi access points outside the home and office will facilitate more people switching to freely available data-based communication methods such as Google Voice, Google+ Hangouts and Skype. Mobile phone network providers will have to adapt quickly as their market share for voice calls and text messages begins to diminish in favour of data equivalents—ultimately, if everything runs via Wi-Fi, who needs a network provider?

2 Further Enhancements To The Shopping ExperienceThe online shopping experience will mirror the in-store one, with 360-degree visuals via Google Maps, interactive shopping, reviews and ratings pulled in via microdata, and checkout via Google Shopping. We may even see the introduction of banking with Google too! The in-store experience will generally continue to blur with the online world, pulling in reviews and price checking through apps and, later, gadgets which can be worn while shopping. ‘Scan while you shop’ will include the ability to also see what your friends and family like.

Google is already introducing an online game—Ingress—which transforms your local area into a real-time strategy game, allowing you to do battle with other gamers as you walk down the street or explore new towns, fighting for control over monuments or areas of the city. Imagine the potential of interfacing this with the much-coveted Google Glass; the potential for developing the augmented reality aspect for businesses is quite mind-boggling.

The question is, will you ever get to work on time?!

3 SEO Gets PersonalSearch engine optimisation will increasingly become more about content optimisation and author reputation. Personal profiles will become more important as author rank develops its impact on search results. Businesses will need to refocus on the people behind the business and not just the brand itself. Those who embrace this approach will take leaps forward in their search engine results page position; those who choose to ignore it will stay where they are or lose ranking.

People don’t just buy from people; they buy from people they know, like and respect.

4 Social Signals Take Priority Social signals will influence where search engine listings appear, with those which your network have recommended or shared being ranked higher than those with no interaction. For example, Google is building a trust-based network (Google+), whereby your social habits and connections inform your search results. If you search for something in Google when you're logged in, results which have been recommended (by '+1' or sharing) by your network (people in your Circles) will begin to be served up above those which haven't—the relevance algorithm won't be ignored completely, but precedence is beginning to be given to resources which people in your network think are useful.
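As a purely illustrative sketch (and emphatically not Google's actual algorithm), blending social endorsements into a ranking might look something like the following; the weighting, result scores and endorsement data are all invented for the example.

```python
# Toy illustration of mixing a base relevance score with social signals
# from the searcher's own network. Not any search engine's real formula.

def rerank(results, endorsements, my_circle, social_weight=0.3):
    """results: list of (url, relevance 0..1); endorsements: {url: set of endorsers}."""
    def score(item):
        url, relevance = item
        endorsed = bool(endorsements.get(url, set()) & my_circle)
        return (1 - social_weight) * relevance + (social_weight if endorsed else 0.0)
    return sorted(results, key=score, reverse=True)

results = [("a.example", 0.9), ("b.example", 0.8), ("c.example", 0.7)]
endorsements = {"c.example": {"alice"}, "b.example": {"zoe"}}
# "c.example" is lifted above "b.example" because someone in my circle +1'd it
print(rerank(results, endorsements, my_circle={"alice", "bob"}))
```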

5 Open Source is King Businesses will increasingly turn to open source technologies as closed-source systems become less able to keep pace with the rapid rate of change; the crowd-sourced development model simply cannot be replicated in a closed-source environment. Small businesses in particular will drive this shift in mentality, requiring their larger business suppliers to adapt. Joomla!, for example, is fast becoming one of the most popular open source content management systems in the world, powering almost 3% of the web and exceeding 30 million downloads; the chances are that during your day you will browse at least one website using Joomla!.

6 Responsive Website Design Responsive website design will become the norm, allowing websites to be displayed on a range of devices with ease. Frameworks such as the Twitter Bootstrap and Foundation 3 will be adopted as best practice for website design.

7 Rising Apprenticeships More young people will turn to vocational learning and apprenticeships in preference to university as tuition fees rise and leading drops. Technology-related apprenticeships will become more popular as the benefits delivered by a tech-savvy young workforce make them increasingly attractive candidates for employment and training. At Virya Technologies, we have taken on two apprentices in recent months, having employed our first apprentice as a full time website designer following successful completion of his programme in October 2012. The level of support available means that many companies can now actively engage with a younger, more tech-savvy workforce, who have the passion, skillset and potential to thrive in a technology-based environment.

The 'Marketing Cloud' was launched with great fanfare at Salesforce's annual Dreamforce conference in September 2012. The Marketing Cloud must have struck fear into the hearts of some marketing apps vendors. The likes of Marketo, Silverpop, Eloqua and ExactTarget have placed big bets on integrating their SaaS Cloud software with Salesforce.com's SaaS Cloud platforms. The starting point for B2B marketing automation projects is nearly always to access and integrate the client's valuable CRM data, which is most often held within the Salesforce.com Cloud.

All the marketing apps vendors have been growing nicely in double digits annually, largely on the back of Salesforce's open APIs and willingness to partner. However, a collective sigh of relief went up when the marketing applications vendors realised that Salesforce's Marketing Cloud was an extension of Salesforce's 'social enterprise' strategic play, rather than the launch of competitive marketing automation products.

Salesforce Marketing Cloud actually represents the integration of its recent social media acquisitions Radian6 and BuddyMedia. Radian6 provides social listening, workflow, automation and measurement. BuddyMedia provides social content, engagement, and display advertisements within the social media environment (targeting the 1 billion Facebook users primarily). BuddyMedia, in particular, provides Salesforce with some great enterprise technology, as it enables segmentation and targeting of social media users by sentiment, age, location, emotion and intention. In addition, there are also more than 1,700 third-party social enterprise apps on AppExchange.

The Marketing Cloud is therefore part of Salesforce's strategy to fill out its own social platform product portfolio. Their enterprise social media tool Chatter Communities already has an installed base of 150,000 active Chatter networks and the new beta version extends Chatter externally to partners and the supply chain, for example. Chatterbox (like Dropbox) syncs files for secure viewing across all devices and utilises the Salesforce Touch platform for mobile and multi-device access.

Salesforce is a cloud technology company that happens to do CRM, rather than a CRM apps vendor that operates in the cloud. It is not really an apps company but a platform/infrastructure company and a 'trusted custodian' of customer data. "No Software" is still Salesforce's tag line. What's paid for is function, "clicks, not code". In the old days an IT Director's worth was judged by the size of his data centre. Salesforce plans to put an end to all that, and sees the future of IT Departments as being focused on regulation, control, governance and compliance rather than operational IT management.

However, Salesforce's focus on Social and Mobile mirrors the developments of the marketing applications vendors, which makes for an uneasy truce. The choice of the term 'Marketing Cloud' rather than something 'Social', and Salesforce's claim that it has 'the only unified social marketing suite', will send shivers up their spines. Salesforce is also now competing in call centre and customer service automation, and therefore offers a triumvirate Sales/Marketing/Service suite of products.

All the vendors are moving in the same direction with not too dissimilar messaging. Market consolidation beckons (note ExactTarget's recent acquisition of Pardot). Salesforce is ramping up its efforts to penetrate enterprise accounts with its 'social enterprise' products and services and has a big drive on to recruit enterprise sales people. Social functionality is now an integral component in all the digital marketing apps vendors' armoury. 'Co-opetition' with the digital marketing apps vendors, at the very least, is clearly inevitable.

This should be good news for customers however. Salesforce's active participation will bring down the price point and provide credibility to Cloud/SaaS offerings in the digital marketing space. This alone should accelerate customer adoption and market growth, which will be no bad thing for all the marketing apps vendors.

Pitney Bowes is a brand name often associated with the 1980s when Xerox, Kodak, and Gestetner ruled the printing world. Those halcyon days of the printing industry are gone. Kodak went bust, Gestetner was absorbed into Ricoh, and Xerox struggled (coming perilously close to bankruptcy) to re-invent itself as The Document Company. Not that life has been easy for Pitney Bowes. During the downturn between 2008 and 2011, revenues slipped by $1bn to $5.3bn as its key markets, the US and Europe, faltered.

Pitney Bowes is repositioning itself and forging a new identity in the digital world as its core printing market declines at around 5% per annum. Rather than adopt the radical approach of Xerox, Pitney Bowes has chosen evolution over revolution. Pitney Bowes still continues to innovate in the print industry. For example, its 'white paper factory' enables large enterprises to reduce expensive stocks of pre-printed stationery to virtually zero. Personalised direct mail letters, brochures, even envelopes can be created and printed on demand by these amazing machines from a single roll of white paper. A partnership with HP provides the colour printing technology.

Pitney Bowes' strategy is to move more substantially into online marketing communications (primarily email, social and mobile marketing) which are the digital substitutes for direct hard copy mailings. Online communications such as email marketing provide businesses with customized and targeted efforts at a much lower cost than direct mailings. To this end Pitney Bowes acquired Portrait Software for customer analytics and coordinating multi-channel marketing campaign management. MapInfo was acquired for location-based intelligence and Group 1 Software for customer communications across multiple channels (such as print, email, web, SMS and social). Half a dozen other small 'tuck-in' acquisitions were also enacted. The product integration is now largely complete and forms an integral part of the Pitney Bowes modular 'customer communications management' suite.

The key elements of the suite are (i) customer segmentation and profiling, (ii) communications delivery and (iii) predictive analytics functions, i.e. the ability to target the individual customers with the highest propensity to convert to purchase, and/or those likely to be attracted to up-sell and cross-sell offers. Sounding rather like a Mad Men TV series follow-up, these are 'the persuadables', who can then be targeted using Pitney Bowes' multi-channel marketing campaign management solutions.
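For illustration only (and not Pitney Bowes' actual analytics), a propensity-to-convert model of this kind can be sketched very simply: fit a classifier on past campaign outcomes and rank customers by their predicted probability of purchasing. The feature names and data below are invented.

```python
# Illustrative propensity scoring: train on historical campaign outcomes,
# then rank current customers by predicted probability of conversion.
# Features and figures are invented for the sketch.

import numpy as np
from sklearn.linear_model import LogisticRegression

# features per customer: [recency_days, purchase_frequency, monetary_value]
X_train = np.array([[10, 5, 200], [300, 1, 20], [30, 3, 120], [400, 1, 10]])
y_train = np.array([1, 0, 1, 0])          # 1 = converted in a past campaign

model = LogisticRegression().fit(X_train, y_train)

customers = np.array([[15, 4, 180], [250, 2, 40]])
propensity = model.predict_proba(customers)[:, 1]   # probability of converting
persuadables = np.argsort(propensity)[::-1]         # target highest scores first
print(propensity, persuadables)
```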

Currently only 8% ($425m) of Pitney Bowes' revenues are associated with software; 71% of revenues are associated with mail. This ratio needs to change fast to keep pace with the worldwide shift of print to digital communications. Some commentators have indeed forecast the end of the print industry, but this is premature. In the advertising industry, the seemingly inexorable move towards digital adverts has slowed, as consumers show a renewed appetite for consuming traditional TV and press advertising.

Consumers like variety, and a balanced portfolio of online display and offline print will be preferred to a staple diet of digital. This trend plays to Pitney Bowes' strengths as they seek to harmonise, integrate and orchestrate both physical and digital communication channels for their clients, and complement their traditional strengths in print with digital marketing capability.

The key digital marketing software markets for Pitney Bowes are the B2C enterprise companies in Banking and Finance, and Telcos. Hence Lloyds Bank, Nationwide, T-Mobile and Verizon are customers. The primary focus for Pitney Bowes' digital marketing offerings will be their many enterprise customers for print services within these industry segments. With over 2m customers, the key issue will be account focus and their ability to leverage their customer contacts.

Arthur Pitney and Walter Bowes, who formed Pitney Bowes 92 years ago, must be looking down with trepidation. Business moved slowly in those days. Nowadays we all need to run like Usain Bolt to keep up. Pitney Bowes needs to get its spikes on and up its game to achieve the velocity and operational execution needed to succeed in the highly competitive digital marketing business. Its ability to attract Facebook as a new customer for its location-based intelligence software bodes well however. Game on.

Many of the online forum comments following the surprise departure from Apple of former Dixons chief John Browett indicate that while ‘clicks’ might be replacing ‘bricks’, customer service is still in significant demand. The comments might read as a little damning to anyone from Dixons, but they clearly show that having too few, disinterested shop assistants lacking product knowledge, but still wanting to pump up add-on sales, is not the way to sell technology.

Some of Apple’s followers may be a little too fervent to care whether it’s good or bad, but Apple’s in-store support and customer service seem to be appreciated by those with no fanatical allegiances. In Apple stores no one seems to be ushered off ‘having a play’ with the stock, staff are plentiful and knowledgeable, and after-sales support is generally pretty good. No wonder there were concerns that someone from Dixons might change things to a more traditional ‘pile-’em-high’, cost-cutting, accountant-driven high street model.

A penny-pinching supplier can also be a real turn-off in B2B transactions, especially for smaller customers of larger suppliers. The mantra for many suppliers, when times were good, revolved around ‘not leaving money on the table’, i.e. making sure that they took a larger share of the customer’s wallet by up-selling, cross-selling and leaving less room for third parties. Dress this up as benefits for the customer—fewer suppliers to deal with, fully integrated, complete solution—and it all looks fine.

In these tighter economic times however, many companies are looking for better value. This means that sometimes customers will split up their buying, shop around to try to get better deals while suppliers will try harder to increase revenues and preserve margins.

The temptation for larger companies in the supply chain is to exert even more financial pressure. This is especially the case when senior management has taken cost-cutting measures too far (usually led by the financial director and their accountants). There are several symptoms of this:

taking much longer to pay suppliers, especially smaller ones with less legal clout or ability to chase.

incrementally charging customers for things that might have once been free or at least delivered as part of a bigger project, but now being broken out and separately charged for.

viewing customer service simply as an unnecessary cost that can be cut or trimmed to the bone.

None of these elevate the perception of a company in the eyes of its suppliers or its customers. Large companies may appear to get away with such behaviour for a while, but it will increase pent-up resentment in both customers and suppliers, who will at some point switch. Smaller companies will take action faster, as word of mouth is an even more important influencer of new business for them.

The increasing use of online and social media for sharing experiences only exacerbates the risk of any switching point being sudden and unexpected, especially for changes that adversely affect customer service. Good reputations take a long time and a lot of effort to build up, but news of bad experiences travels fast and brands and reputations can be destroyed while marketing folk sleep.

It might have been that the former Dixons man’s face didn’t fit, rather than a reaction to any changes he was trying to usher in, but the latest news of the other UK consumer retail giant, Comet, entering administration underlines that this is a very difficult market. It might also indicate that cut price doesn’t cut the mustard if service standards are cut too.

Last week the Marketo Rock Star tour was in London. The lead singer was Phil Fernandez, Marketo's CEO, who sits somewhere between Freddie Mercury and Mick Jagger as software CEOs go. He did an outstanding stage presentation to 200+ marketers, and was equally impressive when I met him face-to-face backstage, once I had fought past the Marketo groupies. He was very engaging and open, clued up and creative. So moving on from the charm offensive, what did he have to say?

Marketo, which claims to be the fastest growing marketing automation vendor, powers on fuelled by big-ticket VC funding and a groundswell of popular support. Marketo now boasts 2,200 global customers and is rapidly expanding its market reach in Europe and Asia (it has recently started direct operations in London, and has opened an office in Australia). It is also now spreading its wings from its small and medium-sized business (SMB) roots into global enterprise accounts. For example, GE now has over 800 Marketo seats.

Traditionally Marketo has sold into B2B high technology companies, but it is now selling more broadly into all sectors, especially Healthcare, Financial Services and Publishing. The rate of product development is prodigious. Marketo claims to introduce a new product category every 9 months. Next off the blocks is likely to be back-office marketing resource management (MRM) for managing marketing budgets, resources and brand assets.

Marketo is also looking to grow by extending its channel partner model and by recruiting established marketing services providers (MSPs) as referral partners. For example, creative agencies whose advertising clients are looking to adopt marketing automation are ideal.

Marketo is a software company in a hurry, with a big hairy audacious vision, and an infectious 'can do' attitude. There is a palpable 'buzz' around Marketo which is energetic and fun, and hence perfectly suited to the gregarious, outgoing personality types that inhabit Marketing Departments.

But what does Marketo actually do? Traditionally, marketing automation has been associated with vendors like Unica (acquired by IBM) and SAS Institute who sell big ticket ($500,000+) systems to large B2C companies (think Vodafone or BSkyB for example). Such companies want all-encompassing integrated enterprise systems to manage their colossal number of promotional campaigns.

Marketo, and their arch rival Eloqua, have re-defined this market space to focus on SaaS-based B2B sales lead management - helping to improve the flow of quality sales leads from Marketing to Sales at a fraction of the cost of the larger vendors' solutions. The principal tools for this are demand generation, lead nurturing and lead scoring. Technology companies were an obvious easy first target market for both Marketo and Eloqua, as they have complex and difficult sales cycles and expensive direct sales forces. Hence a small increase in sales pipeline productivity has a big effect on technology company profitability.
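As a rough illustration of what lead scoring involves, a minimal scoring rule might look like the sketch below. The attributes, point values and sales-ready threshold are invented for the example; they are not Marketo's or Eloqua's actual models.

```python
# Toy lead-scoring rule: explicit "fit" attributes plus behavioural
# activity points, with a threshold for handing the lead to sales.
# All values are invented for illustration.

FIT_POINTS = {"job_title_matches": 20, "target_industry": 15, "company_size_ok": 10}
BEHAVIOUR_POINTS = {"visited_pricing_page": 25, "downloaded_whitepaper": 15,
                    "attended_webinar": 20, "opened_email": 5}
SALES_READY_THRESHOLD = 60

def score_lead(fit_attrs, activities):
    fit = sum(FIT_POINTS.get(a, 0) for a in fit_attrs)
    behaviour = sum(BEHAVIOUR_POINTS.get(a, 0) for a in activities)
    return fit + behaviour

lead_score = score_lead(
    fit_attrs=["job_title_matches", "target_industry"],
    activities=["downloaded_whitepaper", "visited_pricing_page"],
)
print(lead_score,
      "-> pass to sales" if lead_score >= SALES_READY_THRESHOLD else "-> keep nurturing")
```

The value, of course, lies less in the arithmetic than in agreeing with the sales team what the attributes, weights and threshold should be, and then tuning them against real conversion data.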

To automate demand creation processes, Marketo offers email marketing, event marketing, web site landing page creation, and social media capabilities. Marketo's foray into social media marketing has been accelerated by Marketo's recent acquisition of Crowd Factory, and the installation of its CEO (Sanjay Dholakia) as the Marketo CMO. Now Marketo offers social apps such as Facebook page management, social analytics, and integrated social marketing.

The whole value proposition is packaged up within the concept of Revenue Performance Management - how to organise and align Sales and Marketing, and measure and optimise their combined performance. Just to make sure you 'get' this idea, Phil has just written a book called 'Revenue Disruption' (published by Wiley) to explain the concept. When does this guy find time to sleep?!

Marketo is doing an amazing job of maintaining its momentum while keeping many plates spinning. Its product range and quality, and its excellent website education content, are setting a new standard for 'next generation' marketing. Currently it is recruiting like mad (there are 50 open positions), primarily for technical roles at its HQ in San Mateo, California. Let's hope its support resources don't get too stretched in the rush for growth. For the moment, however, Marketo is exciting and fun, and long may it continue to dazzle marketers.

Interestingly, the 23% of consumers who want relationships with brands mainly want these as they share the same values as the brand, rather than necessarily wanting to be intimate with the brand.

Here are some of the reader comments that pertain to digital marketing: "The last thing I want is frequent emails . . . (send me) one email so I know you are out there and know what you offer is tolerable. More than that and you're working against yourself. When you push email at me, you're pushing me away". "Frequency of messaging in an attempt to reach that elusive new goal of 'engagement' turns me off".

"No, I don't want a 'relationship' with a rental car, banana, gallon of gas, trash bag, PC antivirus software, television, automobile or the providers thereof. When marketers add to, rather than reduce, 'cognitive overload', I unsubscribe". "Don't stalk me, if I want something I'll find you". "I don't want to talk with you after the transaction, it's over. Done. Kaput".

"Marketing people too often understand interactions with customers as an opportunity to scream their messages at them. Unfortunately too few are genuinely interested to listen what is important to the customers in context of their experience with the product or service. It is not the way to build trust in relationship."

Some marketers have hardly covered themselves in glory in the way they have used digital technology. Many promotional emails are technically 'spam' - untargeted and lacking relevance to customer needs. In addition, the content offered is often over-hyped and lacks substance and granularity.

Sales follow-ups can be equally unfocused. For example, I often download vendor white papers and case studies. Sales call follow-ups may happen months later (when I have forgotten the content) or the next day (when I have not read the content). Either way, the sales question is often a scripted and inappropriate "do you want to buy something?" rather than evaluating my contact details and profile and routing me to a relevant Analyst Relations or Investor Relations representative for stakeholder nurturing and development.

Some vendors, such as Virgin Media and BT, alienate their own loyal customer bases by offering cheap price deals that are only available to non-customers. Others make it difficult for customers to unsubscribe, cancel a contract, or understand their pricing programmes. This is reflected in the 2012 Edelman Trust barometer research that shows consumers trust CEOs and their marketers less. Consumers trust 'a technical expert in the company', 'a person like myself' and 'a regular employee' much more highly.

We trust 'someone like us', even when we don't know them personally - hence the importance of social media. Digitally savvy customers sense digital tricks and techniques online, and warn off friends and followers when fair play is not being followed.

Marketers need to have the discipline to use customer data in a respectful and measured way that adds value from a customer perspective. Too much digital marketing today is sales / price promotions. Early text promotions on mobile phones are going the same way - with no unsubscribe link or reply mechanism, so there's no way out.

Marketers can use digital marketing technologies to deliver relevant, personalised and exciting digital experiences for their customers and potential customers. But this is not easy. It requires investment in people, process and the correct technology. Short-cutting this process using indiscriminate spamming and message blasting actually damages brands and results in diminishing longer-term returns from marketing investments. The customer trust is gone.

In summary, marketing needs to act in a responsible manner, using the digital tools at its disposal to add value to customer experiences. The time for a 'land grab' for customer attention is over, and it is jeopardising marketing's own image and credibility with customers. A new, enlightened approach to responsible digital marketing is required.

Anyone who has had reason to contact a large organisation – bank, telecoms company, utility, government department – will have noticed some changes in the last decade or so in how these organisations communicate. There has been a shift from having a physical presence to communicating via telephone and/or online; visiting a branch office has been replaced by interactive voice response (IVR) phone systems and ‘check on our website’; letters of complaint by emails and webchats.

The reasons are simple – cost and scale.

The demands of customers have grown beyond the ability of points of contact to adequately respond: insufficient branches, too-short opening hours, insufficient contact centre staff. Customers expect to be able to communicate at any time with people who know who they are and the history of the relationship, and who can direct them to experts who can solve the problem instantly. It should be no surprise that organisations find it a challenge to meet this level of customer expectation – especially as they try to maintain or boost margins, shareholder returns and, ultimately perhaps, executive bonuses.

Hence the move to get people to phone or look it up online. Use automated systems where possible, and even when real people are required, lump them together into call centres and consider outsourcing them to countries where the rates of pay are lower. In the meantime, the call centre has changed to embrace new technologies and more flexible ways of working, evolving into the contact centre. No longer necessarily a physical place, but a virtual web of people and locations using diverse modes of communication to deal with customer requests.

All well and good, but frequently all the customer hears or sees is the equivalent of ‘computer says no’, and the website or IVR does not have the option they are really trying to find. All too often they might feel the message “your call is important to us…” tells them just how important they really are (not very, otherwise there would be more staff to answer the call).

The problem is not necessarily in the technologies used, but the way they have been adopted. Automated systems have allowed companies to become lazy with their processes. They create streamlined processes that fit the perceived needs of the majority of customers but rely on automation and associated tools like a crutch. This then allows the business to have lowest cost staffing and to try and forget about exceptions.

Processes streamlined, costs cut, large volumes of customers handled – job done? Well not really as most of these same organisations notice their customer loyalty falling and churn rising, especially when regulators get involved to make the process of switching easier as they have in many industry sectors.

The problem is that while these processes address issues that impact the organisation, they do not always tackle those issues that have greatest effect on customers. This is especially the case with so called ‘exceptions’ i.e. the things that companies have not included in their processes, but tend to happen to their customers with more regularity than admitted. The problem is further exacerbated by reliance on the process as the final arbiter, and not the discretion of well-motivated and well-trained customer service employees who have been granted authority to act on their own initiative.

However there are examples of solutions to this problem through the use of social media for customer support. Here, connection is simple, contact is rapid and the ramifications can ‘go viral’. Perhaps for these reasons it becomes important to select good and well-motivated staff, give them responsibility for their actions and reward them appropriately.

Consumers will often hit the social web with unresolved issues and complaints, news of which breeds in the oxygen of widely shared experiences – “yes, something similar happened to me with…” – and spreads rapidly through the Twitterverse, blogosphere and Facebook pages. Customer service teams that interact with customers through social media need tact and diplomacy coupled with the ability to deliver empathy in simple text, sometimes fewer than 140 characters. They also need the same direct access to customer records and powerful support tools for data mining and root cause analysis as all other customer care channels.

Companies in the telecoms industry appear to have recognised this, and there are highly visible and responsive teams operating on behalf of BT, Virgin Media and Vodafone, among others. Anecdotal evidence suggests these people are already delivering excellent service, and this seems to come from them being small, highly focused teams, close to the core of the organisation, able to call on resources as they need them and make decisions on their own initiative.

This model changes customer service in a far more positive way than the over-simplistic and ill-thought-out deployments of automated telephony and web technology. By taking the best of IT and communications tools, but putting people at the core of processes, organisations can improve their customer service and, perhaps more importantly from their perspective, retain customers.

Organisations would do well to step up their investments in their social media customer support teams. The computer may say no, but the people say yes.

The continued growth in smartphone and tablet usage is impressive. More devices keep appearing to nip at the heels of market leaders and both interesting applications and tempting content continue to swell into app stores.

All great stuff, unless perhaps you are a mobile operator trying to keep services up and costs down, or a mobile user that has discovered what ‘capacity crunch’ means. But surely the new fourth generation technology, LTE, will fix all this? After all it will provide tens of megabits of bandwidth in each direction, so the solution is there in the near future, yes?

Well, maybe it will help but, even if it does, it will only be for a time. Despite more efficient use of the spectrum, LTE is still limited by the laws of physics, and in congested areas with large numbers of people all trying to use their favourite shiny gadgets at the same time, services break down. Later this year in the UK, London’s Olympic venues, and perhaps even the entire city, may provide some further insights as to what we might all experience in the future.

The question is, what can operators do now? In reality, there are a number of aspects that can be addressed and quite a number of vendors offering products aimed at tackling the issue. The problem is, there is no single simple solution, and operators need a portfolio of tools, each working to extract the best service from each of the different pinch points.

Since this should really be about ensuring the end customer experience is the best it can be, operators should look at both symptoms and root causes, and along every part of the network from subscriber to the core.

Fundamentally there are too many people, using too many network-hungry applications in places where the wireless edge of the network does not have capacity or there are bottlenecks in the base stations, the backhaul or core networks.

One way to address this root cause of too many people is to dissuade usage. On the face of it this might not seem palatable, but it could be accomplished in many ways without using a blunt instrument such as tariff price rises or usage caps. Operators could do more to create or promote alternative services that lessen the total demand, or introduce incentives for time and place shifting, along similar lines to the UK’s off-peak Economy 7 electricity tariffs.

The next thing to look at is the efficiency of each of the existing network elements. First the wireless edge, the radio link. Is the currently available spectrum being used as efficiently as possible? The radio footprint of each cell tower is generally well planned individually, but could other radio elements such as micro and femto cells be added to usage hotspots? Could more be done to look at how usage patterns change over the day, with cells and antennas more dynamically aligned to cope better?

If cellular usage is the issue, what about offloading to Wi-Fi? A good idea in theory, but in some highly congested areas, such as airport lounges, Wi-Fi networks are more overloaded than cellular ones. The lower cost, or at least more predictable all-you-can-eat type pricing models, win over a lot of Wi-Fi users, and the sudden influx of popular tablet devices only pushes this further, straining networks that have often been planned and deployed in a very ad hoc way. While Wi-Fi has been becoming a more professionalised service, it needs to be properly managed as a carrier style solution in order to offer carrier grade service. This is often not the case.

Offloading to other radio connections does little to ease the burden on the backhaul network, however. Here the simple solution might be to buy or rent more fibre, but as demand on backhaul from a mobile data capacity crunch is likely to be ‘peaky’, getting hold of more total capacity might not be economical. Traffic management then comes to the fore. Does every bit have to be delivered with the same expediency? It may seem slightly taboo to suggest it, but does the network need to treat all data equally? No. This is not about undermining ‘net neutrality’ or democracy of access, but about efficiently shaping and throttling the flow of bits to ensure that ‘live’ services flow when they need to, and other less time-critical ones can be buffered and delayed to ensure better overall flow.
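As a purely illustrative sketch of that shaping idea, the toy token-bucket shaper below forwards latency-sensitive traffic first and lets bulk traffic queue until capacity allows. The class names and numbers are invented; real traffic management systems operate on flows and DSCP/QCI classes, not a simple two-queue model.

```python
# Toy shaper: 'live' traffic (voice, video calls) is drained ahead of bulk
# traffic (updates, email sync); a token bucket models limited backhaul
# capacity. All names and numbers are invented for the sketch.

import time
from collections import deque

class Shaper:
    def __init__(self, rate_kb_per_s: float, burst_kb: float):
        self.rate = rate_kb_per_s        # capacity (KB) replenished per second
        self.capacity = burst_kb         # maximum burst the link will absorb
        self.tokens = burst_kb
        self.last = time.monotonic()
        self.live = deque()              # latency-sensitive packet sizes (KB)
        self.bulk = deque()              # delay-tolerant packet sizes (KB)

    def enqueue(self, size_kb: float, live: bool = False) -> None:
        (self.live if live else self.bulk).append(size_kb)

    def _refill(self) -> None:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def transmit(self) -> list:
        """Send whatever the current token budget allows, live traffic first."""
        self._refill()
        sent = []
        for queue in (self.live, self.bulk):   # live queue always drained first
            while queue and queue[0] <= self.tokens:
                size = queue.popleft()
                self.tokens -= size
                sent.append(size)
        return sent
```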

Taking a view of network content at the application level allows traffic to be better managed and allows other tools to be brought to bear, such as caching, compression and content adaptation. Here the smart application of context – user, location, device, content – means that data can be intelligently filtered before it even hits the backhaul network. This does not all have to take place in the core either, as smarter mobile devices have the power and capacity to pre-process much more information and reduce their own impact on the network. At least they could if only mobile application developers were more aware of, or parsimonious with, the precious network and its limitations.
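A minimal sketch of that kind of context-aware adaptation, with invented breakpoints, might simply select a content variant based on the requesting device and measured link quality before the object is ever sent over the backhaul:

```python
# Illustrative content adaptation: pick an image variant using device and
# link context. The breakpoints and file names are invented for the sketch.

def choose_variant(screen_width_px: int, link_kbps: int) -> str:
    if link_kbps < 256 or screen_width_px <= 480:
        return "image_low.jpg"       # small screen or congested link
    if link_kbps < 2000 or screen_width_px <= 1024:
        return "image_medium.jpg"
    return "image_full.jpg"          # only send the full asset when it is worth it

print(choose_variant(screen_width_px=480, link_kbps=150))    # image_low.jpg
print(choose_variant(screen_width_px=1920, link_kbps=8000))  # image_full.jpg
```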

Just like dealing with congested roads and highways, there is no single solution to congested mobile networks and simply ‘building more’ is not always better. But there are plenty of tools available; they just need to be used in a concerted and coordinated way to minimise perceived impact on subscribers and costs to carriers.

Five short stories from the cloud show how platforms are maturing and illustrate that, despite all the talk about virtualisation and mobility, there is still good old fashioned hard-wired physical infrastructure behind it all.

GoGrid – old hand from the valley now in EuropeGoGrid has recently announced its first European cloud infrastructure services, provided from an Equinix colocation facility in Amsterdam. Its name might not be that well known in Europe, but GoGrid comes with pedigree. It was founded in 2000 in San Francisco and claims to have become the “number one dedicated hosted service provider” to the Silicon Valley elite. Through the experience learned over 12 years as a managed hosting provider, it has built its own “Cloud Infrastructure Stack", an infrastructure as a service (IaaS) platform which enables the delivery of hybrid hosting services; clouds that consist of discrete physical infrastructure and public cloud resources all managed through a “single pane of glass” interface.

But, why bother with your own infrastructure?

CloudBees – taking platform-as-a-service (PaaS) to a new levelGetting someone to build your organisation a private cloud from scratch is one approach. But how about turning existing data centre infrastructure into a private cloud and making it easy to extend by adding on-demand resources from established IaaS providers? CloudBees AnyCloud aims to do just that. CloudBees was founded a little under two years ago by a team of IT industry veterans, including CEO Sacha Labourey (ex Red Hat/JBoss). AnyCloud is a Java-based PaaS offering that is layered over existing infrastructure; either that already owned by a given organisation or other IaaS offerings such as Amazon EC2, Rackspace Cloud Servers or any other local IaaS provider. Once set up, CloudBees undertakes to manage it all for you.

And if you are UK-based that local IaaS provider could be Attenda….

Attenda – local UK provider ups the anteAttenda’s IaaS offering, known as Attenda RTI, is sold alongside dedicated managed hosting services, all based in the UK. In that respect Attenda looks like any other respected cloud services provider, but it has added a business-focussed professional services overlay. Attenda has observed (as has Quocirca) that line-of-business managers are increasingly involved in the decision to purchase cloud-based services. This is particularly true in the mid-market, where Attenda is focussed. Mid-market managers know they need applications, but are not so sure they need to worry about the infrastructure to run them on. So, Attenda has launched a new initiative it is calling "Business Critical IT” that combines a structured business engagement methodology with recommendations for supporting infrastructure and services. Attenda says this addresses the need to focus on business outcomes rather than technology ones; Quocirca would not argue with that as an objective.

But ultimately someone has to run infrastructure…..

City Lifeline – baked into the heart of LondonThe big co-location and managed hosting providers are always keen to show off their state-of-the-art, usually purpose-built data centres on sprawling out-of-town trading estates, for example in Slough and the London Docklands. But just as impressive is to see how, in order to deliver the low latency and physical proximity required by financial services organisations in the City of London, City Lifeline has squeezed 28 thousand square feet of data centre space into Hackney, just a stone’s throw away from its key customers. This is no purpose-built facility but an older building that has been adapted; finding and paying for appropriate space in such a central location would be prohibitive. Proximity allows City Lifeline to charge about a 30% premium over out-of-town providers. However, despite these seeming limitations it is still expanding on the current site by building over its small back yard. It is not just the difficulty of finding suitable locations in the City that keeps it where it is. City Lifeline is hard-wired into the heart of London; the data centre sits right on the backbones of 22 internet and voice carriers, for all of which City Lifeline hosts points of presence.

Actual cables may never go away, but you can reduce the number…

Xsigo Systems eliminates miles of cablesObserve the rows of cabinets in most data centres and you will see thousands of Ethernet cables linking the individual rack units to each other and to top-of-rack switches, and each cabinet to end-of-row switches. All these cables are linking servers with the infrastructure they rely on: storage, load balancers, network routers, security appliances and so on. Now it is possible to eliminate many of these cables with the Xsigo Server Fabric. It uses InfiniBand cables of up to 40Gbit/s to connect each server and each peripheral device to a Xsigo fabric appliance, which takes up two or four RUs and acts as a broker between all the various bits of hardware. Furthermore, this means that once implemented, the Xsigo appliances see and collect all data centre traffic and can act as a feed to performance monitoring tools. This has led to the vendor’s latest announcement of the “Xsigo Performance Manager”. Eliminating so many cables saves money and space, but also increases performance, as its customers seem happy to testify.

Why would a blind person go into a library? Maybe to borrow a book in Braille, or more likely to borrow a talking book, CD or DVD. In Lambeth the new answer is to learn to use a computer.

Computers have the potential to improve the quality of life and job prospects of anyone who is blind or has a vision impairment (a Vision Impaired Person, or VIP for short). A very large percentage of the world's knowledge has only been available in printed form on paper. This meant that it was inaccessible to anyone with limited or no vision. Over the years various solutions have been used to close this gap a bit.

Braille was one very clever and successful solution, but it is limited by the cost of setting and printing any document and by the sheer size of a Braille document; probably the biggest issue, though, is that it is difficult to learn, especially for anyone who has lost their vision later in life.

Audio books are a wonderful medium for fiction, where a book is read, and listened to, from beginning to end. They are not such a useful solution for reference or text books, where navigation becomes a major issue. The cost of production also limits the titles available.

All of these solutions have major limitations, as the number of titles is limited and great swathes of printed material - newspapers, magazines, journals - are generally not available and certainly not in a timely fashion.

More recently the rise of the computer, the Internet and various forms of electronic publishing have enabled a whole new set of sources of textual information; emails, blogs, wikis, online news channels etc. All of this is displayed on a screen and is again not accessible.

However, the fact that this information is electronic and therefore can be manipulated means that it is possible to turn the electronic words into spoken words that are accessible to people with vision impairments. All of human knowledge is being rapidly turned into electronic format and thus the knowledge available to a VIP is growing exponentially.
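To give a feel for how simple the basic text-to-speech step has become, the sketch below uses pyttsx3, one freely available Python speech library chosen purely for illustration. A real screen reader such as Thunder, VoiceOver or NVDA does far more than this, handling navigation, focus tracking and Braille output.

```python
# Minimal sketch of turning electronic text into spoken words.
# pyttsx3 is just one freely available text-to-speech library, used here
# for illustration only; it is not the screen reader discussed below.

import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 180)   # words per minute; experienced users often prefer much faster speech
engine.say("All of this text can be read aloud rather than only displayed on screen.")
engine.runAndWait()
```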

Unfortunately there are two barriers that need to be removed before all this information is available. Firstly the user needs access to suitable hardware and software that will read the information on the screen and enable them to navigate easily. Secondly they need to learn how to use the hardware and software. A VIP who has not learnt to use the system will not be able to assess the benefit to them and therefore will not be able to justify the initial outlay. The cost of a suitably configured machine is a considerable barrier to adoption.

The libraries in Lambeth have recently been the venue for an experiment to fix both these problems. The initiative is being driven forward by a local resident, Christina Burnett of Wide Eye Pictures, who is passionate about the benefits of computing to VIPs.

Like every modern library Lambeth has several computers in each library. The only extra hardware required was headphones; these are obviously essential if there are going to be several screen reader users in the library at one time. It is probable that headphones would have become necessary anyway for the general public as more and more audio information is available on the Internet. The other addition was to install screen reader software on all the library machines; it was decided to install it on all machines so that a VIP could use the system whichever library they wanted to visit. Some screen reader solutions are expensive and it would have been prohibitive to equip all machines; this was resolved by installing a free screen reader called Thunder which is available from www.screenreader.net. So for a minimal expenditure the libraries removed the first of the barriers.

To assist the VIP to learn to use the system a series of seven weekly training sessions was run, called DTvip (Digital Tuesday for Vision Impaired People). The initial set of sessions trained some VIPs and some volunteers so the scheme can be repeated and extended in the future. Screenreader.net, which developed Thunder, has obtained funding from Awards for All to work with DTVIP on this pilot training scheme at the Tate South Lambeth Library.

The first set of sessions was a great success and proved that the model works. Naturally, lessons have been learnt, in particular to have a structure that can support different users, ranging from a VIP who has never used a keyboard, through to a VIP who is an expert PC user but, through failing sight, needs to learn how to use a screen reader.

The second set of sessions is under way. The question now is how to quickly extend this model throughout Lambeth and the rest of the UK.

Apple has announced the next version of OS X, the operating system for Macs, called Lion. It has 250+ new features, including 11 specific accessibility features and several more that could have accessibility benefits.

OS X ships with a built-in screen-reader, VoiceOver, which has been extended to:

support more languages,
provide higher quality voices that can be downloaded from the web,
support different preferences for different activities: fast for scanning websites, slower for reading online books,
provide single-letter navigation in web pages.

In previous versions you have been able to increase the size of the cursor arrow, but when you did this the arrow became pixelated and the edges were rough; a small improvement in Lion is that the larger cursors remain crisp and sharp. I have my cursor at a medium size; it makes it easier to find on a large iMac screen, and I look forward to this small improvement.

Another feature I use quite frequently is screen zoom. If there is something on the screen that is small, some text or often an image, I zoom the whole of the screen so I can see the relevant section blown up. The problem is that I lose the rest of the screen. Lion will offer a function to have a section of the screen in a separate window and to zoom on that. This is the best of both worlds with magnification of the bit of the screen of interest whilst still being able to see the context of the rest of the screen.

Lion improves Braille support with support for more languages and more control of the verbosity.

A significant usability feature is that, for existing OS X users, Lion will be downloadable from the Mac App Store. The advantage is that there will be no distribution of, and installation from, CDs. For people with disabilities this should be a welcome improvement: just a couple of clicks to download (see my article Usability and Accessibility of Apple Mac App Store), then a few more to install.

FaceTime, the video calling facility built into Lion, provides high-definition video, which should make it possible for deaf people to use sign language when communicating remotely. Lion also improves and extends the support for full-screen apps. Full-screen applications are beneficial to people with vision impairments because the content can be bigger and there are no distractions. Full-screen should also help people with dyslexia and some cognitive limitations. With Lion you can have multiple applications open in full-screen mode and navigate from one to another using a gesture.

Preview is the built-in tool for looking at images and PDF documents; in Lion it gains a magnify feature to enlarge specific text or images.

Safari, the built-in browser, has some new features that will benefit people with disabilities.

Double tap to zoom in on a column or an image.
Pinch in and out to zoom more precisely.
Swipe to navigate: use the swipe gesture to move smoothly to the next page.
Private autofill enables standard fields in forms, such as surname or address, to be filled in automatically on demand. This is a major benefit to people who find typing difficult or slow.

The Screen Sharing feature enables one Mac to observe or take over control of another Mac. This provides an excellent remote support facility. Many users with disabilities will find it useful, as it means that small issues can be diagnosed and resolved quickly and effectively by a remote friend.

And finally you can resize a window from any side or corner.

Lion will ship in July and is great value at £20.99 in the UK ($29.99 in the US). I plan to upgrade as soon as it ships: the accessibility benefits are significant, and many of the other 250 new features should improve my usability and general user experience.

What will the 2011 Census tell us? Not much without geographic information technology! Demographic information guides planning in every sector. Whether for the provision of public services, the supply of power and water, or the marketing and selling of products and services, the 'where' factor will be critical!

On March 27th everyone in England and Wales will be expected to complete the 2011 Census form. We lose an hour overnight as the clocks go forward, and hopefully that will not cause late submissions. This is the first census that can be completed online. One hopes that this will not adversely affect accurate information collection; the last census allegedly suffered 25% undercounting.

Reading an article in the local Compass Wessex magazine started a train of thought about what this new information can mean to public and private sector organisations.

The focus on socio-economic trends opens up countless opportunities to use this information. Is there anything that is not affected in some way? It is vital to understand the 'where', the location, to which the statistics relate.

Comparison with the first census in 1801 reveals great change, and we all know that the speed of change is increasing. In 1801 the 2 million households averaged 5.6 people, compared with 2001, when an average of 2.4 people were recorded in 26 million households. It would be fascinating to plot the escalation curve and see how the rate of change has increased. The geographic illustration of where these changes take place will provide invaluable guidance to so many facets of planning and provision.
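As a back-of-the-envelope illustration of the scale (my own arithmetic, simply multiplying out the figures quoted above):

```python
# Population implied by the household figures quoted above:
# households in millions, average people per household.
census = {1801: (2, 5.6), 2001: (26, 2.4)}

for year, (households_m, avg_size) in sorted(census.items()):
    people_m = households_m * avg_size
    print(f"{year}: {households_m}m households x {avg_size} people "
          f"= roughly {people_m:.1f}m people")

# 1801: 2m households x 5.6 people = roughly 11.2m people
# 2001: 26m households x 2.4 people = roughly 62.4m people
```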

New questions about residents include passports held, nationality, year of entry to UK and intended length of stay for recent arrivals, main language and second residence. These statistics would reveal interesting trends, showing where employment is impacted, where transitional populations reside, where different languages should be accommodated, and where homes are not permanently occupied.

Frighteningly, 1 in 6 homes in the UK apparently falls within a flood plain, and the Environment Agency's flood testing centre at HR Wallingford in Oxfordshire is investing in experiments on withstanding these wet onslaughts. The 2011 Census will reveal how many people are affected, and that could paint a very pessimistic picture. Insurance risk cannot be managed without evaluating where, and to what extent, the risk exists, and that is impossible without geographic technology.

Data on increasing population densities is essential for network planning organisations (water, gas and electricity, telecommunications). Without geographic visualisation they will not know where the change in demand is taking place.

The first summary results are expected in September 2012, with more detail emerging in 2013 and 2014. That does seem like a while to wait, but maybe we should already be putting on our thinking caps and planning how to use this information.

If geographic technology is not part of your solution, think again. Without knowing where change is taking place, the statistics are meaningless.

As the software market wakes up to the fact that the new generation of workers expects user interfaces that fit the lifestyle of Facebook, Twitter and app-driven mobile phones, we are starting to see how various software companies are responding. In the BPMS market, Appian has always been one of the companies leading the way, and with Appian 6.5, which includes a new interface called Appian Tempo, it has produced a release geared towards the end user of BPMS-driven solutions: a mobile and social interface with cloud capabilities.

Malcolm Ross, Appian’s Director of Product Management, told me “The release delivers a revolutionary way to extend process visibility and participation through native mobile device access, real-time collaboration, filtered and personalised views of key business events, integration to external systems, and the ability to take direct action in a familiar and intuitive social media interface.” So what does the new Appian interface deliver?

Mobility

Appian Tempo provides native client applications for the Apple iPad, iPod Touch and iPhone as well as RIM BlackBerry devices. Ross explained that mobile BPM allows employees to stay connected, allowing them to monitor, collaborate and take action on important business decisions regardless of where they are. It also extends BPM participation beyond pre-defined process participants to include all levels of the organisation. The iPhone and iPad applications are available for immediate download from the Apple App Store. The BlackBerry application is available now from the Appian Forum community site, and will be available shortly on the BlackBerry App World site. A native application for Google Android devices will be available shortly.

Social

There always seems to be a contradiction in incorporating social media into the business world. Social technologies are powerful communication and collaboration platforms, but they must be harnessed in a business context to have business value. Ross explained, “Appian utilises familiar social tools and interfaces to drive business collaboration across the enterprise through personalised, filtered views that allow easy collaboration with the ability to take action when needed.” Users can filter views by relevant application or process areas and subscribe to customised feeds to monitor the key events and information that are meaningful to them. Users can also comment, pose questions and collaborate on business events through real-time message posts and ad hoc updates to targeted groups, within and outside of pre-planned business processes. The final user capability is to “take action”: a user can generate actions and complete tasks from inside the event feed or from a mobile device, using optimised web and mobile forms to capture data and route tasks.

Customer-Driven

Samir Gulati, Appian’s Vice President of Marketing, described how Appian 6.5, and in particular Appian Tempo, had been driven by their customers’ business needs. One example is Archstone, a leading apartment management company, headquartered in the USA. Archstone have a highly mobile and dispersed workforce which is supported by a system built on Appian. David Carpenter, Director of BPM, Archstone, stated that “Appian Tempo delivers a new level of value to our customer service associates through instant mobile access to our key enterprise processes and forms.”

Comment

I was very impressed with the demonstration of Appian 6.5 and the Appian Tempo interface. From an end user viewpoint it opens up the ability to make real-time decisions where and when they are needed by using collaborative technology. The product is definitely easy-to-use and intuitive. While all events and collaborations can be secured at a granular level, organisations that make use of the new Appian release will need to think about the security implications of the information that can be shared.

In addition to on-premise deployment, Appian has emerged as the BPM-in-the-cloud market leader. When you add the capabilities of Appian Tempo to those already in the Appian BPMS and Appian Anywhere, as well as Appian’s specific knowledge about industries such as government and financial services, you have a very compelling proposition.

Data virtualisation is the latest technology to enjoy its moment in the hypelight and there has been some considerable debate within the blogosphere about what it actually is and what its relationship is to data federation, data integration and EII (enterprise information integration).

Rather than start from scratch I thought I would go back through my files and see what I had written about this in the past (if anything). I found the following definition of an EII platform (that is, what you need to support EII, which is, after all, about information rather than mere data). What I wrote, some three years ago, was that an EII platform needs to do four things:

“It virtualises your data – it makes all relevant data sources, including databases, application environments and other places where data may be sourced, appear as if they were in one place so that you can access that data as such.

“It abstracts your data – that is to say, it conforms your data so that it is in a consistent format regardless of any native structure and syntax that may be in use in the underlying data sources.

“It federates the data – it provides the connectivity that allows you to pull data together, from diverse, heterogeneous sources (which may contain either operational or historical data or both) so that it can be virtualised. It should also enable things like push-down optimisation so that query joins can be mastered in the optimal place.

“It presents the data in a consistent format to the front-end application (typically, but not always, a BI tool) either through relational views (via SQL) or by means of web/data services, or both.”

Actually, I didn’t quite write that: I have updated it somewhat but the gist is the same.

Clearly, data federation is not the same as data virtualisation. Moreover, federation is not necessary for virtualisation, depending on why you are doing the virtualisation. If you want to link a number of data marts together so that you can query across them then clearly the query optimisation capabilities of a federation engine will be necessary. On the other hand, if you want to create Mashups or other applications that have relatively lightweight access requirements, or you want to use virtualisation to support MDM-like capabilities, then such functions may not be necessary. Instead you can use data services. Data services may also be more appropriate in environments where less of the data is relational and more of it comes from a variety of unstructured sources or from the web. Indeed, there is a whole new discussion to be had about the distinctions between data virtualisation for unstructured data and structured data (or a combination of the two) but that’s a subject for another day.

The other question that arises is whether parts 1, 2 and 4 are all actually parts of the same thing. I think 2 and 4 probably are or, at least, the differences are so slight that there is no point in making a distinction.

Parts 1 and 2 are another issue. If data virtualisation is about having a virtual data source, that does not necessarily mean the data is easy to work with. It is certainly easy to imagine a huge hybrid database that contains relational and non-relational data, PDF documents and a whole bunch of other things, but that would not necessarily mean that the data was all in a common format and, therefore, easy to work with. So I think both 1 and 2 are required and are different. It is certainly true that it does not make much sense to implement data virtualisation without an abstraction layer, but that does not mean they are the same thing.
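To make that distinction concrete, here is a minimal sketch in Python, using two SQLite files purely as stand-ins for heterogeneous sources. Attaching both sources to one connection gives you a virtual data source (part 1); the conformed view supplies the abstraction (part 2); and the front end queries the result through plain SQL (part 4). The table names and columns are invented for illustration, and a real federation engine would add the distributed query optimisation this toy example does not attempt.

```python
import sqlite3

# Two independent "source systems", stand-ins for heterogeneous data sources.
crm = sqlite3.connect("crm.db")
crm.executescript("""
    DROP TABLE IF EXISTS clients;
    CREATE TABLE clients (client_id INTEGER, full_name TEXT);
    INSERT INTO clients VALUES (1, 'Acme Ltd'), (2, 'Globex Plc');
""")
crm.commit()
crm.close()

erp = sqlite3.connect("erp.db")
erp.executescript("""
    DROP TABLE IF EXISTS cust;
    CREATE TABLE cust (id INTEGER, name TEXT);
    INSERT INTO cust VALUES (3, 'Initech Inc');
""")
erp.commit()
erp.close()

# Part 1, virtualisation: one connection makes both sources appear as one place.
hub = sqlite3.connect(":memory:")
hub.execute("ATTACH DATABASE 'crm.db' AS crm")
hub.execute("ATTACH DATABASE 'erp.db' AS erp")

# Part 2, abstraction: a conformed view hides the differing native schemas.
hub.execute("""
    CREATE TEMP VIEW customer AS
    SELECT client_id AS customer_id, full_name AS customer_name FROM crm.clients
    UNION ALL
    SELECT id AS customer_id, name AS customer_name FROM erp.cust
""")

# Part 4, presentation: the front-end application sees one source via plain SQL.
for row in hub.execute("SELECT * FROM customer ORDER BY customer_id"):
    print(row)
```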

Finally, I haven’t talked about data integration at all. Well, the fact is that leading data integration products support data services so you should certainly be able to virtualise data sources even if you can’t federate them (they won’t typically have the sort of distributed query optimiser you would want from a data federation product). The question will be how easy it is to build the abstraction layer with a data integration tool. Of course, you can create all the transformations and mappings necessary for this purpose but what you would really like is something that automates a lot of this abstraction rather than requiring you to build it for yourself. It is in these two areas—federation and automated abstraction—that the pure players in the market, especially Composite Software and Denodo, have a significant advantage over the data integration vendors.

I am not referring to Cloud Computing but rather to the cloud of confusion prevailing over geographic information amongst the general public: confusion over this type of information; confusion over the many terms used for information that can be linked to the earth's surface; and confusion over maps.

A TV programme the other evening called ‘The Beauty of Maps' highlighted the subjectivity of maps. The map maker has cartographic licence to create a display that projects his interpretation of the subject, whether that is to visualise the topography correctly and make the labels easy to read, or to project an image that might not be true. The programme described William Morgan's 1682 Map of London, created after the city was destroyed by the Great Fire. His map illustrated the city he envisaged London would become: St Paul's Cathedral was prominently shown even though it had been totally destroyed and had yet to be rebuilt. Maps project what their creator intends.

There is a book by Allan and Barbara Pease called ‘Why men don't listen and women can't read maps'. The theory goes that "due to their different roles in evolution, men had to hunt and stalk their prey, so became skilled at navigation, while women foraged for food and so became good at spotting fruits and nuts close by" [The Telegraph website]. I am not sure that explains it and, if one can generalise quite so simply, women should then be the bigger enthusiasts for SatNavs. Maybe the ‘don't listen' bit prevents men from asking for, or listening to, directions :)

Returning to the subject: there is a great lack of understanding amongst laymen about location and geographic information systems (GIS), as my previous article on the need to increase aWhereness described. Location information, or whatever we want to call it, is simply the position on the earth's surface, to the accuracy that is possible and/or the accuracy that is required.

Initially Google Maps and Google Earth provided much needed publicity for geographic information. Google Maps, or similar, is used by most people I know to find their destination and obtain directions to reach it. Google Earth stirred an interest in places we might not visit but can view. So much good has emanated from those two applications to raise the profile of location.

The downside is that there is still not enough understanding or appreciation of the implications of geographic information and the systems. The associated costs are now even harder to sell as ‘Google is free'.

The Google application, Latitude, enables a mobile phone user to allow certain people to view their current location. I assume that these locations include both the longitude and latitude measurement; just the distance from the equator would not really help anyone.

Another term to increase the confusion, or is Google taking latitude with Latitude?

In addition, according to the latest Apollo survey table measuring media coverage per technology company, Google came 1st in Europe and in the USA, and 3rd in the UK! With that much media exposure, we should not underestimate the influence of Google!

We will have to tell a convincing story about the investment needed to add location to business systems. We will have to ensure that the longitude accompanies the latitude and that together they make good sense.
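To labour the point that a single coordinate is not a location: a latitude on its own only tells you how far north or south of the equator you are, anywhere along a circle around the globe. Here is a small illustration using the standard haversine formula in Python; the coordinates are my own rough figures for London and Paris, used purely for the sake of the example.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two latitude/longitude points."""
    r = 6371.0  # mean Earth radius in km
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlmb / 2) ** 2
    return 2 * r * asin(sqrt(a))

# Two places on the same latitude are still a long way apart without the longitude:
print(haversine_km(51.5, -0.13, 51.5, 2.35))   # roughly 170 km
# With both coordinates we get the familiar London-to-Paris distance, around 340 km:
print(haversine_km(51.5, -0.13, 48.85, 2.35))
```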

That means we geographic professionals will have to work that much harder to tell, and sell, our story.

In December 2010 the British Standards Institution (BSI) published "Web accessibility - Code of practice (BS 8878:2010)" (http://shop.bsigroup.com/en/ProductDetail/?pid=000000000030180388); this document is based on, and replaces, "PAS 78: Guide to good practices in commissioning accessible websites". It extends, updates and improves on its predecessor and is therefore essential reading for anyone intending to create or update a web product.

This new document, like its predecessor, concentrates on the processes, procedures and practices required to create an accessible web product; it does not discuss coding or technical issues but does provide references to relevant standards, guidelines and practices; so there is no conflict between this standard and the guidelines produced by the W3C Web Accessibility Initiative (WAI).

Jonathan Hassell, from the BBC, who led the development of the standard, says: "Most web product managers know accessibility is important, but need a guide to the decisions they make during product development which can impact disabled and elderly users of the types of multi-platform, interaction-rich products they are creating. BS8878 is that guide, and encompasses the best advice and experience from many experts from all round the world on how to make products that include these people."

Firstly it describes the policies and structures that an organisation needs to have in place to support accessibility.

Secondly it describes a series of steps required to create an accessible web product. The steps are summarised in the document as follows:

Research and understand the requirements for the web product;

Make strategic choices based on that research;

Decide whether to create or procure the web product in-house or contract out externally;

Produce the web product;

Evaluate the web product;

Launch the new product;

Post-launch maintenance.

The document describes the specific accessibility issues that should be considered at each step. At first sight this may look like a lot of new work but in reality nearly all of the steps are considered good practice for any web product development.

This is followed by an introduction to the existing guidelines for developing accessible web products, as well as a discussion of the accessibility of non-browser interfaces and special considerations when developing for older users.

Finally there is a detailed section on "Assuring Accessibility throughout the web product's lifecycle", which identifies and discusses the various methods of accessibility validation.

Graeme Whippy, of Lloyds Banking Group, one of the authors of the standard, said "Lloyds Banking Group is committed to best practice in accessibility and sees significant business benefits in making our websites as accessible as possible".

The standard is about 90 pages long and the second half is made up of fifteen extremely useful annexes. These cover areas such as definitions, laws, standards, responsibilities, challenges, examples of web accessibility policies and statements, guides to testing and a comprehensive bibliography.

I have read the standard and found the information in it clear, concise, insightful and pragmatic. It is laid out in such a way that it can be read in small chunks as required by different audiences and at different steps of a project. It provides all the parties involved in the creation of web products with the information they need to understand the issues, decide how to proceed towards an accessible product and, importantly, deal with real-world conflicts between ultimate accessibility and other market forces.

It provides a single source for accessibility best practice and information on the law and standards regarding accessibility.

The only criticism I have is that it does not discuss in sufficient detail the importance of ensuring that new content added to the web product after launch is accessible. It hints and implies that this is essential but does not highlight the issue.

Having seen the document, Gail Bradbrook of Fix the Web, an organisation set up to help people with disabilities report web accessibility issues and get them fixed, said "if every web product used the standard then we would not be needed and could close down; unfortunately that is not the case yet and we are very busy and need more volunteers (see http://www.fixtheweb.net )."

To ensure the maximum benefit is obtained, a community needs to be built up around the standard that can add to and refine it in the light of new experiences, technologies and opportunities; I expect some organisation will step up to provide the platform for this community.

The standard is an essential purchase for anyone creating web products, as it provides:

Pre-digested research into accessibility and best practice;

A roadmap showing how to ensure accessibility is built into web products;

A template for recording the decisions made about accessibility which will help to show good intentions if complaints are made.

Its cost should be recouped within a few days of starting any significant web product development and it will continue paying dividends throughout the whole life-cycle. It should be used by all commissioners and developers of web products.

For a variety of reasons, geographic and geospatial considerations have been on my mind recently. To begin with, Natalie Newman will, provided all goes smoothly, shortly be joining the Information Management group here at Bloor Research, specialising in exactly these areas. She has 25 years of experience in this space, especially in government (local and national), defence and the telecommunications sector, both here and in her native South Africa. Most recently she was working with BT Global Services. Natalie is a welcome addition to our team.

Then, earlier this week, I received an email from Capscan announcing its support for CACI's ACORN. ACORN (a classification of residential neighbourhoods) enriches UK address data with a wealth of demographic data. If you go to the CACI site you can try it for yourself. Put simply, you put in a postcode and the software classifies it as belonging to one of a number of categories, groups and types. For example, my postcode comes into category 1: “wealthy achievers”, group A: “wealthy executives” and type 3: “villages with wealthy commuters”. Not that I'm a commuter. Or very wealthy for that matter. You can then analyse this type by a variety of lifestyle and demographic attributes to see how type 3 communities compare. For example, the average household in a type 3 community is 1.76 times more likely to have 2 or more cars than the country as a whole. It's really quite cool. Capscan is suggesting using ACORN in conjunction with name and address cleansing, and you can see how this would make sense, or even, for that matter, using it independently of data quality.
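As a purely hypothetical sketch of the lookup-and-enrich pattern this enables (the postcodes, labels and structure below are invented; CACI's real ACORN data and any Capscan integration will look different), the idea is simply a classification keyed on the postcode:

```python
# Hypothetical postcode-to-classification lookup; every mapping here is invented.
CLASSIFICATION = {
    "AB12": {"category": "1: wealthy achievers",
             "group": "A: wealthy executives",
             "type": "3: villages with wealthy commuters"},
    "CD34": {"category": "X: example category",
             "group": "Y: example group",
             "type": "Z: example type"},
}

def classify(postcode):
    """Return the demographic classification for a postcode's outward code, if known."""
    outward = postcode.strip().upper().split()[0]
    return CLASSIFICATION.get(outward)

profile = classify("AB12 3CD")
if profile:
    # Enriched addresses can then drive comparisons of the kind quoted above,
    # e.g. "type 3 households are 1.76 times more likely to have 2 or more cars".
    print(profile["category"], "|", profile["group"], "|", profile["type"])
```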

Now, this demographic data is based on locations, and we've all heard a lot about location-based services and the like, which brings me to something I've been thinking about for a while: why is GIS (geographic information systems), in particular, not as popular as it might be?

What I have been wondering is whether it’s because of the name. It seems to me that GIS systems are not really about geography at all: they are really about locations. And, for that matter, spatial analytics is not really about spaces but about topography. Perhaps if they actually said on the tin what they are about then people might use them more.

Here are some examples:

GIS systems are often used to help decide where to put new store or depot locations. Yes, locations.

GIS systems can be used to identify hotspots for benefit fraud. That is, where (locations) this is happening.

I remember a particularly neat example from Information Builders: one of its clients had done a location-based analysis of its suppliers and found that 90% of them were on the other side of a major river, meaning that if the bridge was out for some reason, then their whole JIT (just in time) manufacturing would go out the window.

The most common application of spatial analytics is in the insurance sector, for determining things like flood risk. This essentially works from how close you are to a flood plain or the coast and how high your property sits relative to the water source. Which sounds to me like topography.
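Purely to illustrate that topographic logic (the thresholds and weights below are invented and bear no relation to how any insurer actually prices risk), the calculation boils down to something like this:

```python
# Hypothetical flood-risk score: closer to water and lower above it means riskier.
# The 2 km and 10 m thresholds and the scoring itself are invented for illustration.
def flood_risk_score(distance_to_water_m, height_above_water_m):
    proximity = max(0.0, 1.0 - distance_to_water_m / 2000.0)  # 0 beyond 2 km away
    exposure = max(0.0, 1.0 - height_above_water_m / 10.0)    # 0 above 10 m elevation
    return round(proximity * exposure, 2)                     # 0 (low) to 1 (high)

print(flood_risk_score(distance_to_water_m=150, height_above_water_m=1.5))   # 0.79, high
print(flood_risk_score(distance_to_water_m=3000, height_above_water_m=25))   # 0.0, low
```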

Long-time readers know that I like to call a spade a spade and this is no exception. Normally, I would say that we are stuck with these terms but that may not be the case with GIS. With the huge growth in location-based services and location analytics there is the possibility that GIS could re-brand itself and finally prove as successful and as widespread as it ought to be. Hopefully, Natalie will help to make that happen.

1. Technology Designed for Everyone

The technology world enlarged in 2010. Consumers fell in love with the intuitive user interfaces and versatile technologies of the likes of Apple, Facebook and Google. “I love it” is how most users describe their iPad or iPhone. Now consumers want their enterprise applications to offer a similar user-oriented experience.

Consumers want to use technology to connect and collaborate with others. No wonder social networking and mobility are such a compelling combination for businesses and end users alike. Facebook's mobile users spend twice as much time on Facebook as non-mobile users do. This trend is set to accelerate. Hence SAP acquired Sybase for its mobile apps platform rather than its database technology.

Traditional consumer brands such as Sony (Vaio), Samsung (Galaxy) and Amazon (Kindle and EC2) sense there is more money to be made in Tech, as does a vibrant new group of entrepreneurs who have developed well over a million consumer apps on various platforms. There are no barriers or caveats to entering the software market anymore.

2. Making Technology Easy to Consume

How do you turn 5 keystrokes into 3? How do you make software that is immediately intuitive and makes obvious sense to users? Can you eradicate training courses and user manuals? Some enterprise software user interfaces look like a flight pilot's cockpit instrument panel.

Steve Jobs, the Tech industry’s top CEO, loves a clean design and simplicity for Apple’s users. The iPod has 5 keys; the more modern iPad has 3. Jobs launched the iPhone 3G using only 11 presentation slides, only one of which contained any words. BBC Radio 4 recently praised Apple’s use of clear, plain English in its product descriptions, in contrast to Microsoft’s “techno-babble” that can alienate potential customers.

Facebook starts product development from the premise ‘how does this product enable users to communicate and collaborate?’ Features and functions become outputs rather than inputs when viewed in this manner.

3. Getting the Price Point Down

High price is the last great bastion of the technology industry. But now that many vendors offer similar ranges of products to address similar markets, the key decision-making criteria have become availability, brand and, most importantly, price, especially as vendor pricing is increasingly transparent and available on the Internet. There are now many options open to vendors who want to offer more customer value and encourage product trial.

BI vendors such as QlikTech, Tableau, TIBCO Spotfire, and MicroStrategy offer generous free trial product downloads. Open Source vendors such as Jaspersoft, Pentaho and SugarCRM offer free entry-level products. Spiceworks’ network management software is free if you are prepared to accept the advertisements that come with it. Many excellent applications, such as Google Analytics for example, are totally free of charge. Virtually every kind of software platform, application and service is available for rent as a SaaS service in the Cloud.

4. Be Different

Competition from now on will be intense and hostile. Recent aggressive moves from industry titans such as HP, IBM, Oracle and Microsoft set the tone. Product innovations are easy to copy and vendors are now stepping on each other's toes. To insulate themselves against this trend the top Tech companies have transformed themselves into brands. They hope to encourage a sense of community and belonging, customer loyalty and advocacy, and a feeling that customers cannot do without them.

Brand Finance now rates Apple, Microsoft and IBM as 3 of the 5 most valuable brands on earth, ahead of Coke, Mars, Persil and all the other household names. Six of the Top 20 most valuable brands are from the Tech industry. The thought-leadership, business model innovations and brand distinctiveness that characterise these vendors are now becoming essential pre-requisites for success in Tech.

Those that are truly market-oriented and customer-centric will thrive. Those that remain product-led will find it increasingly hard to attract new customers. Business agility will be key to vendor survival. ‘Be fast and be bold’ as Facebook says. Vendors, customers and users should endeavour to embrace this dictum.

If there are vendors or others who want advice in any of the above, drop me a line and I will be glad to help. It is Xmas after all ;-) And a happy New Year to all our readers!