All businesses now depend on their online presence to a greater or lesser extent. Websites are no longer simply a source of information, but a place to transact with customers. This requires investment to ensure consistent, high-speed performance and appropriate levels of security. Those that fail to invest will see customers and prospects go elsewhere, especially if those customers are consumers.

A free research report, “Online Domain Maturity”, sponsored by Neustar (a supplier of online monitoring and security services), shows how the consumer-facing majority (77 percent) and non-consumer-facing minority (23 percent) of the organisations surveyed differ in their approach to managing their online presence and security.

The research conducted by Quocirca shows that state-of-the-art security is a major area of investment for consumer-facing businesses. They are more likely to have in place continuous and/or emergency distributed denial of service (DDoS) protection, fraud detection, security information and event management (SIEM), and advanced threat protection. Their non-consumer-facing counterparts, on the other hand, rely more on older technologies such as host-based anti-malware and intrusion detection systems (IDS).

The report also highlights the fact that consumer-facing businesses are almost twice as likely to be increasing the budget dedicated to online resources compared to those that only deal with other businesses. This extra investment is focussed on improving the user experience, which includes ensuring individual consumer needs are understood through recognition of their devices, browsers, connection speeds and geographic locations. The metrics gathered can be used to match user behaviours to revenue and gauge customer loyalty.

The research also shows that consumer-facing organisations are less likely to rely on in-house skills to achieve all this. They outsource both infrastructure and security, leaving them free to focus on the customer experience and increasing online transaction closure rates. Content delivery network (CDN) services, domain name system (DNS) infrastructure and the management of web sites and online applications are all more likely to be trusted to third-party experts.

When it comes to security, in almost all areas consumer-facing organisations are more likely to turn to third-party cloud-based services. This not only frees up IT staff to focus on customers, but allows the resources used to closely mirror uptake. Pay-as-you-go pricing and the ability to quickly add resources to deal with peaks in demand are cited as two of the main benefits of on-demand services, along with better security than many organisations are able to achieve in-house.

A major driver for these differences is that business-to-consumer (B2C) relationships are more fragile than business-to-business (B2B) ones. For instance, consumers are more likely to abandon a potential transaction if they experience slow performance and instead seek out a faster, and ultimately better, user experience on a rival site. For business users, however, the choice of services may be more limited and a slow website affects their employer's time rather than their own.

The way payments are taken also varies. B2B transactions will often have delayed payment covered by lines of credit, whilst consumer transactions are usually taken there and then, bringing many consumer-facing organisations into the scope of the Payment Card Industry Data Security Standard (PCI DSS) and other data protection regulations.

The internet is now embedded in so many business processes that the choice is how well a given business manages its online presence rather than whether it has an online presence in the first place. Dealing with consumers raises the biggest challenges and consumer-facing organisations are rising to these through investment and successful partnering with on-demand infrastructure and security service providers.

That is not to say all consumer-facing organisations have got it right; many still have room for improvement, and the laggards need to learn from the leaders. Organisations whose primary focus is B2B certainly need to shake off their complacency. As more and more digital natives enter the workplace they will bring their consumer expectations and habits with them. They will expect to be able to find the resources they need online, with the performance and security to match. The customer is king, and whether they are transacting for business or personal reasons, a regal online experience is expected. Businesses that fail to deliver this do not have a long-term future.

In today’s world of acronyms and jargon, there are increasing references to the Internet of things (IoT), machine to machine (M2M) or a ‘steel collar’ workforce. It doesn’t really matter what you call it, as long as you recognise it’s going to be BIG. That is certainly the way the hype is looking—billions of connected devices all generating information—no wonder some call it ‘big data’, although really volume is only part of the equation.

Little wonder that everyone wants to be involved in this latest digital gold rush, but let’s look a little closer at what ‘big’ really means.

Commercially it means low margins. The first wave of mobile connectivity—mobile email—delivered to a device like a BlackBerry, typically carried by a ‘Pink collar’ executive (because they bought their stripy shirts in Thomas Pink’s in London or New York) was high margin and simple. Mobilising white-collar knowledge workers with their Office tools was the next surge, followed by mobilising the mass processes and tasks that support blue-collar workers.

With each wave volumes rise, but so too do the challenges of scale—integration, security and reliability—whilst the technology commoditises and the margins fall. Steel collar will only push this concept further.

Ok, but the opportunity is BIG, so what is the problem?

The problem is right there in the word ‘big’. IoT applications need to scale—sometimes preposterously—so much so that many of the application architectures that are currently in place or being developed are not adequately taking this into account.

Does this mean the current crop of IoT/M2M platforms are inadequate?

Not really, as the design fault is not there, but generally further up in the application architectures. IoT/M2M platforms are designed to support the management and deployment of huge numbers of devices, with cloud, billing and other services that support mass rollouts, especially for service providers.

Reliably scaling the data capture and its usage is the real challenge, and if or when it goes wrong, “Garbage in, Garbage out” (GiGo) will be the least of all concerns.

Several ‘V’s are mentioned when referring to big data; volume of course is top of mind (some think that’s why it’s called ‘big’ data), generally followed by velocity for the real-timeliness and trends, then variety for the different forms or media that will be mashed together. Sneaking along in last but one place is the one often forgotten, but without which the whole of the final ‘V’—value—is lost; and that is veracity. It has to be accurate, correct and complete.

When scaling to massive numbers of chattering devices, poor architectural design will mean that messages are lost, packets dropped and the resulting data may be not quite right.

Ok, so my fitness band lost a few bytes of data, big deal, even if a day is lost, right? Or my car tracking system skipped a few miles of road—what’s the problem?

It really depends on the application, how it was architected and how it deals with exceptions and loss. This is not even a new problem in the world of connected things: supervisory control and data acquisition (SCADA) systems have been dealing with it since well before the internet and its things existed.

The recent example of problem data from mis-aligned electro-mechanical electricity meters in the UK shows just how easily this can happen, and how quickly the numbers can get out of hand. Tens of thousands of precision instruments had inaccurate clocks, but consumers and suppliers alike thought they were fine, until a retired engineer discovered a fault in his own home, which led to the discovery that thousands of people had been overcharged for their electricity.

And here is the problem: it’s digital now and therefore perceived to be better; companies think the data is ok, so they extrapolate from it and base decisions on it, and, in the massively connected world of IoT, so perhaps does everyone else. The perception of reality overpowers the actual reality.

How long ago did your data become unreliable? Do you know? Did you check? Who else has made decisions based on it? The challenge of car manufacturers recalling vehicles will seem tiny compared to the need for terabyte recalls.

Most are rightly concerned about the vulnerability of data on the internet of people and how that will become an even bigger problem with the internet of things. However, that aside, there is a pressing need to get application developers thinking about resilient, scalable and error-correcting architectures, otherwise the IoT revolution could have collars of lead, not steel, and its big data could turn out to be really big GiGo.
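What an error-aware ingestion path might look like can be sketched very simply. The fragment below is a minimal illustration in Python, not a reference design: it assumes each device stamps its messages with a sequence number so the receiving application can detect gaps and flag the data as incomplete, rather than silently extrapolating from whatever happens to arrive. The message format and the simulated loss rate are invented for the example.

    import random

    def device_readings(n):
        # Simulate a sensor emitting sequence-numbered readings; roughly 10%
        # of messages are silently dropped in transit.
        for seq in range(n):
            if random.random() < 0.1:
                continue
            yield {"seq": seq, "value": 20.0 + random.random()}

    def ingest(messages, expected_next=0):
        # Receiver that uses the sequence numbers to detect gaps instead of
        # treating whatever arrives as the complete record.
        readings, gaps = [], []
        for msg in messages:
            if msg["seq"] != expected_next:
                gaps.append((expected_next, msg["seq"] - 1))  # flag the missing range
            readings.append(msg)
            expected_next = msg["seq"] + 1
        return readings, gaps

    readings, gaps = ingest(device_readings(1000))
    print(f"received {len(readings)} readings; missing ranges: {gaps[:5]} ...")

Flagged gaps can then be re-requested, interpolated explicitly or excluded from downstream analysis; the point is that the loss is visible rather than hidden.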

It was a scene you wouldn’t quite expect at a typical company meeting.

There we were: the ReQtest team; cards in hands, clutching them close to our chests and eyeing each other carefully before making our move.

No, we weren’t playing out our fantasies of winning millions at Vegas. What we were doing in fact was using a technique called planning poker to assess how long it would take to complete the task we were discussing at that moment.

Planning poker, also known as Scrum poker, is an easy and effective technique which we often use to estimate the development time needed to create any features we implement. The technique is ideally suited to Agile work; however, it works just as well in conjunction with any other method.

In this article I’ll elaborate on the principles behind planning poker and the reasons why it works.

A helping (poker) hand

The objective of planning poker is basically to force each team member to independently form their own estimate of the effort required to complete a task and then compare it with the perception of others. This encourages a collaborative approach in order to reach a consensus among multiple participants on the duration of effort required to finish a task.

The participants each get a set of cards with numbers representing possible durations of the task, and each chooses the card which reflects their estimation of the effort required.

The cards are initially kept private until everybody has picked one, and only then are the cards turned over for everyone to see. The highest and lowest estimators now have to defend their estimates. If applicable, the rest of the team can contribute their opinions.

Then a second round of estimation takes place, and the process continues until the values of the cards chosen by the members are close enough.

Estimating the duration of a task individually often yields numbers that are wildly off the charts; however when approached as a group, the consensus that emerges is uncannily accurate. This is popularly known as the ‘wisdom of crowds’ and the same process works in many situations.
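To make the mechanics concrete, here is a minimal sketch in Python of a single estimation round. The deck values, the names and the "close enough" rule are illustrative assumptions for the example rather than a fixed part of the technique.

    DECK = [1, 2, 3, 5, 8, 13, 20, 40, 100]   # a commonly used planning poker deck

    def close_enough(estimates, max_steps_apart=1):
        # Example consensus rule: all chosen cards lie within one step of each
        # other on the deck.
        positions = [DECK.index(e) for e in estimates]
        return max(positions) - min(positions) <= max_steps_apart

    def play_round(picks):
        # Cards are revealed simultaneously; the outliers are identified so they
        # can explain their reasoning before the next round.
        lowest = min(picks, key=picks.get)
        highest = max(picks, key=picks.get)
        return lowest, highest

    round_1 = {"Anna": 3, "Ben": 8, "Carl": 5, "Dana": 20}   # private picks, revealed together
    low, high = play_round(round_1)
    if not close_enough(round_1.values()):
        print(f"{low} ({round_1[low]}) and {high} ({round_1[high]}) explain, then everyone re-estimates")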

A word of caution is in order at this point. It is not possible to estimate time for tasks that are too big. In these situations a rough timeframe would be possible, but nothing more accurate than that. For this reason it is of paramount importance that activities are broken down into the smallest possible sub-tasks before being presented to the team for estimation.

A common question is "how much time do you spend on planning poker?" At ReQtest we use sprints that are two weeks long. Planning of one sprint typically takes three hours, starting with the product owner presenting requirements, and ending with writing stickies and estimating. This is why the poker deck often contains a coffee-cup card. You need to take breaks, and keeping the meeting going for three hours without a break is not advisable. If somebody puts the coffee card on the table it is a sign that you have not planned the meeting perfectly.

Laying the cards on the table

The principles behind planning poker are backed up by various papers that present empirical evidence about the effectiveness of ‘crowdsourcing’ estimations.

The importance of working only with small tasks is clearly shown in an experiment conducted by Simula Research Laboratory in 2006. A group of people were told to estimate the time needed to implement a requirements document that was presented in two different formats: one copy was much shorter than the other, although both contained the same text.

The people who were shown the bigger document instinctively responded that it would take much longer for its requirements to be implemented than those in the shorter document. This shows that it is important to estimate small tasks one at a time and that larger tasks ought to be split into smaller sub-tasks.

As a follow-up to the same experiment, a group of people had to estimate the time required for implementation of the same document presented either as a brief concise document or in a more complicated format. Again, the document that was more difficult to process was judged to take longer to implement. This highlights the fact that concise and clear writing should be used when formulating requirements. Likewise, when using planning poker, user stories are used to present the relevant information in a simple and easy-to-understand manner.

A card up your sleeve

In many ways planning poker uses the same psychological principles behind ‘groupthink’ but adapts them for a more positive outcome. This is achieved by asking the members to flip their poker cards over at the same time. In this way, any influence by other members is reduced and the estimation shared is a true reflection of how long that person believes the task will take.

Diversity isn’t a problem in planning poker. Often, participants who give longer estimations may be aware of some obstacle that the other members did not consider; for example, testing might involve browser testing on many different browsers and versions, which may take significant time. Likewise, a participant who gives a very short estimation may be considering a shortcut that the others didn’t think of. This diversity is the basis of the discussion that follows every round of estimation and encourages members to voice their unique perspectives.

Play it right

If you would like to start using planning poker with your team you don’t need to buy any fancy cards. Just head over to http://planningpoker.com/ and sign up for a free account that will let you use an excellent online version of the planning poker tool.

700 million, that’s a sizeable number; 2 billion is bigger still. The first is an estimate of the number of items of machine data generated by the commercial transactions undertaken in the average European enterprise during a year. The second figure is the equivalent for a telco. IT-generated machine data is all the background information that gets generated by the systems that drive such transactions: database logs, network flow data, web click-stream data and so on.

Such data, enriched with data from other sources, is a potential gold mine. How to use all this data effectively, turning it into operational intelligence, is the subject of a new Quocirca research report, Master of Machines. The report shows the extent to which European organisations are turning machine data into operational intelligence.

The numbers involved are big, so processing machine data certainly involves volume. In fact it fits the five Vs definition of big data well. Volume is as described above; variety covers the range of sources, with their wide variety of formats. If machine data can be used in near real time, that gives velocity, and it can add plenty of value to operational decision making. All of this gets an organisation closer to the truth about what is happening behind the scenes on its IT systems—veracity; machine data is what it is, and you cannot hide from the facts that mining it can expose.

Typically, operational intelligence has been used by IT departments to search and investigate what is going on within their IT systems; over 80% already use it in this way. More advanced organisations use the data for proactive monitoring of their IT systems, some providing levels of operational visibility that were not possible before. The most advanced are providing real-time business insights derived from machine data.
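As a toy illustration of that search-and-investigate use, the Python fragment below counts server errors by requested page in a web server access log. The log file name and line format are assumptions for the example; purpose-built operational intelligence tools do the same kind of thing across billions of events and many data sources.

    import re
    from collections import Counter

    # Assumed combined-log-style line, e.g.:
    # 10.0.0.7 - - [01/Apr/2014:10:02:11 +0000] "GET /checkout HTTP/1.1" 500 1042
    LINE = re.compile(r'^(?P<host>\S+) \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3})')

    def error_hotspots(log_lines):
        # Count server errors (5xx) per requested path.
        errors = Counter()
        for line in log_lines:
            m = LINE.match(line)
            if m and m.group("status").startswith("5"):
                errors[m.group("path")] += 1
        return errors.most_common(5)

    with open("access.log") as f:
        print(error_hotspots(f))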

To provide commercial insight, the most advanced users of operational intelligence are making it available beyond IT management. 85% of businesses provide a view to IT managers, whereas only 62% currently get a view through to board level execs. In both cases, many recognise a need to improve the view provided. 91% of the most advanced users of operational intelligence are providing a board level view compared to just 3% of the least advanced.

Although there is broad agreement around the value of operational intelligence and the need to open it up to a wide range of management, most are relying on tools that are not designed for the job. These include traditional business intelligence tools and spreadsheets; the latter were certainly not designed to process billions of items of data from a multitude of sources. 27% say they are using purpose-built tools for processing machine data to provide operational intelligence. The organisations using such tools gather more data in the first place and will find it easier to share it in a meaningful manner across their organisation.

Quocirca’s research was commissioned by Splunk, which provides a software platform for real-time operational intelligence. Quocirca and Splunk will be discussing the report and its findings at a webinar on April 3rd 2014. Find out more and register here.

Introduction

In this age of digital by default it is important that all digital content is accessible. This will include web sites and web pages but also video, audio and documents. This article will investigate the needs, challenges and issues around the creation and consumption of accessible documents.

For this article a document is a collection of words and images that can be printed as a whole. The article does not cover interactive books that require the reader to be able to access them electronically.

These documents will include: letters, memos, minutes, reports, user guides, brochures, pamphlets, transcripts of speeches, magazines, novels, etc. They will be held in one or more digital formats.

There is a potential tension between the requirements of the creator of a document, the distributor and the user:

The creator of the document will wish to use tools and technologies that they are familiar with.

The distributor of the document will wish to minimise the number of document formats used for distribution. Multiple versions cost money, cause management issues and increase the risk of different users seeing different content.

The different users will wish to consume the document in different ways (the word 'consume' is used here rather than 'read' because 'read' implies reading words on paper or a screen, whereas the user may have the document read to them, or turned into braille or sign language, or other formats).

The end user must be considered the most important of these roles; if they cannot consume the document then there is no point in creating or distributing it.

This article looks at the requirements of these different players and reviews the alternative technologies available.

It summarises the pros and cons of various solutions and makes tentative suggestions for an optimum solution. It is hoped that this will help organisations that are going digital by default to decide how to distribute accessible documents; it is also hoped that it will show the weaknesses in current technology so that vendors can improve their products.

The article looks first at the end user, then the distribution process, then the creation process; it then looks at the various technologies for creating, distributing and consuming the document and concludes with some tentative best practice.

The end user experience

To understand how these documents must be created, stored and distributed we must first understand how different end-users will consume them.

However the user consumes the document, they need to be able to access more than just the text and the images (or descriptions of the images); they also need to be able to navigate, search, annotate and copy or extract content.

They will not expect to be able to modify the original document without the express authorisation of the owner.

Types of consumer

People with different disabilities will wish to consume the documents in different ways. The following section outlines the different disabilities and methods of consumption:

Non-disabled: a person with no relevant disabilities will want to be able to read the document electronically on some type of screen. The document should be laid out so that its structure is visually apparent by the use of different types and sizes of fonts, use of bullets, indentation, and tables. The reader software should enable the user to navigate the document by table of contents, indexes, bookmarks and searches.

This electronic version of the document should be considered the base version: any other version should contain the same information.

Besides the electronic version, non-disabled people may wish to have a printed version of the document. It must be possible to print all or parts of the document so that the printed version is an accurate reflection of the electronic version.

People who are partially sighted should be able to modify how the document is displayed: size of text, type of font, background-foreground colours, line separation, justification, etc. to enable them to see the content as clearly as possible. The electronic document should interface well with screen-magnifiers.

People who are blind should be able to access the document using a screen-reader. The screen-reader should convey the structure of the document by announcing headings, lists, tables and other structural elements, and assist the user's navigation by providing functions such as jumping to the next heading, to the end of a list, or to the next chapter.

People who are vision impaired and use braille should be able to access the document and have it presented on a braille display, including the structure and the ability to navigate.

It should also be possible to create printed braille from the base document.

People with dyslexia can improve the reading experience by using suitable background-foreground colour and brightness combinations, and by using left-justified text. Having text read out aloud and highlighted at the same time can also improve the experience.

People with hearing impairments have different capabilities of reading written text. If their reading level is good then the base document should be accessible. There is a great deal of pressure from the deaf community for films and TV to be captioned but there is much less pressure for them to be signed; the main area of signing is for news and current affairs where live captioning is inadequate. Signed versions of a document should probably be limited to general introductions to an organisation and documents specifically aimed at the deaf community.

People who do not understand written English may need some introductory document which explains what the organisation does and how to get help in understanding the other documents.

People with learning disabilities may not be able to fully understand the base document. Firstly the base document should be reviewed to see if it can be altered so that it is understandable by a wider range of cognitive abilities, without it becoming patronising for the majority of users. If this cannot be done then a version may need to be created that is easier to understand without losing any of the meaning. This format is often known as 'Easy Read'; it concentrates on simple language and the use of images and videos to match the words.

People who cannot use keyboards and/or pointing devices should be able to access and navigate the base document using assistive technologies such as switches or voice commands.

People with severe dementia and similar problems cannot understand or make decisions independently. In these cases the document only has to be accessible to their carer. An extreme case is a person in a coma.

Formats required by the consumer

To support all these different user types ideally requires the following end-user formats (requirements for readers of these formats are discussed below):

Base document, which includes text and images; the format should support:

Screen-readers

Screen-magnifiers

Changes to fonts, colours, justification etc

Easy Read documents are needed where the base document is difficult for some users to understand.

Sign language: the base document can be converted into sign language by videoing a signer reading the document (there is research into automatic generation of sign language but it is not considered advanced enough to be used instead of human signers). At present there is no easy way to support navigation of signed videos.

Audio: this can be produced either by using text-to-speech software or by recording a human reading the text. At present there is no easy way to support navigation of audio versions of documents; however, if the voice is synchronised with a text version then navigating the text version will provide navigation of the audio.

Possible Distribution Formats

The question is which format(s) should the content be distributed in? The following are some options with pros and cons.

Word processor format

The documents will often be created using a word processor (Microsoft Office (.docx), Open Office (.odt), Apple iWork (.pages)). If a document is going to be distributed in this form it needs to be in a format that can be read by all systems: this means .doc or possibly .docx. There are two problems with distributing in this format:

The formatting of these documents by different word processors is not always identical and in a few situations does not work at all. This can be a particular problem with mobile devices that have limited support for these formats.

The content is not intended to be edited or changed by the recipient but the program used to access it is designed to do just that. The recipient should be able to annotate and comment but not to change the original.

For these reasons it is not really a suitable format for distributing the base document. However it is a very common format for creating base documents and therefore there should be methods for converting them into formats suitable for distribution.

PDF

PDF is designed to be a final document format. The common tools used to access it, such as Adobe Reader and Apple Preview, do not support change but do provide annotation functions.

PDF used not to work well with screen readers because the format did not include any document structure information; with the publishing of the PDF/UA standard this is no longer the case.

PDF readers are available on all relevant platforms and are installed on most PCs. PDF is therefore a popular format for distribution of finalised documents.

PDF/UA has not been designed to facilitate conversion to other formats; it is possible but not easy.

PDF documents are designed to ensure the page layout is preserved. This is important if the page layout is critical to the design of the document, or if the layout has a legal significance.

ePub

The ePub format is growing in importance and is especially popular on mobile devices.

The format does not define the page layout but just the document structure. This means that the document can be rendered differently to suit the display device and user preferences. It is also suitable for converting into other formats including Braille.

It has the functionality to support screen readers as the document structure is defined as part of the format. The common reader tools that are used to access the content enable users to annotate but not to change the original.

The latest version of the ePub standard (ePub 3) includes functions for synchronising audio with the text.

The present issue is that not everyone has ePub readers installed on their device. Also not everyone has an ePub creator tool.

DAISY

DAISY is a format that has been developed to support people with vision impairments. It requires a special reader and development tools. It would appear that the benefits of DAISY are being built into ePub 3 technology. Therefore it is unlikely that DAISY will become a general document distribution format.

Audio

MP3 is the common format for audio. The problem is that it does not include any facility for defining structure, for navigation or for annotation.

MP3 versions of the base document may work for short documents or for documents that are designed to be read linearly such as novels. On its own it is not a suitable format for documents such as reports, manuals or magazines.

Video

MP4 (or .mov) is the standard file format for video. It is the format that will be used for sign language. The problem, as with MP3, is that it does not include any facility for defining structure, for navigation or for annotation.

A suggestion is that a video file is created which includes the signed version of the text, an audio track with the spoken words and a closed-captions track with the written text. This way there is one file that can support users with different disabilities.

Recommended Distribution Formats

Based on the discussion above it would seem that all users can be accommodated by providing two formats: ePub 3 and video. ePub has been recommended over PDF/UA because it is designed to support conversion and because of its widespread support on mobile devices.

Base document

The base document should be distributed using EPUB 3 format. Given a suitable reader (see discussion below) this format can be used by people with most of the disabilities described above; the one major exception is people who are dependent on sign language for communications.

The format can be converted relatively easily into other formats. This means that users who require another format for technological, preferential or legal reasons can convert the document or have it converted for them.

Video

Sign language cannot be adequately created from an EPUB 3 document. The only solution for this requirement is to create a video of a signer reading the document. If this includes the sound track of the document being read then the video provides a single source that supports multiple users.

It is not recommended that a video is made of every document; rather, a decision should be made for each new document as to whether it is beneficial to make the video up front or whether it should only be created on request.

Readers

The three formats (EPUB, PDF and video) have different reader technologies.

EPUB readers

There are many different readers on the market. They all support the EPUB format but vary in details such as which platforms they run on, the design of the user interface, and the options available for the user to change the look and feel. This means that it is not possible for the distributor of the document to recommend and link to a single reader (this compares to PDF readers where, although there are multiple readers on the market, Adobe Reader can be recommended for all users).

This means that the user has to decide which reader is most suitable for them. Some questions that the user will need to consider are:

Does the document have to be loaded into the reader library before being consumed, or can it be opened from a standard file directory?

Does the reader interface effectively with the assistive technology they use?

Can the user set up themes so that they can use different sets of options for different types of documents?

Is the customisation interface easy enough to use? Some options should be very easy to change, but ideally there should also be a more sophisticated way of changing the options, e.g. a standard choice of three background-foreground colour combinations but with the ability to use CSS to define any other combination.

Is there an in-built text-to-speech facility?

PDF Readers

There are several readers on the market; not all of them take advantage of PDF/UA tagging.

Adobe Reader is the leading reader and is available for all major platforms. Not all of the assistive technologies available understand or take advantage of PDF/UA, especially in the mobile environment.

Video Players

Video players are available on all major platforms. The problem with video players is that they do not provide functions for defining structure, navigation, searching, annotation, copying or extraction.

Creation and Conversion tools

EPUB tools

There are various EPUB creation tools: there are desktop publishing systems that can be used to generate EPUB documents and there are tools that convert from word processors (Microsoft Office or Open Office) to EPUB.

Assuming many of the documents will be written using a word processor, this section concentrates on products that convert the source to EPUB.

Calibre is one tool that will convert from .docx and .odt to .epub, and the latest version supports more styles and formats than before. The problem is that there is a lack of documentation as to what can be converted and how it is converted. This information is needed, as the ideal is to create the document in the word processor and then automatically generate the .epub without any manual intervention.
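For organisations that do standardise on this route, the conversion step can be scripted. The sketch below assumes Calibre is installed and its command-line converter, ebook-convert, is on the path; the folder name is illustrative, and the options needed to preserve a particular house style would have to be worked out per organisation.

    import subprocess
    from pathlib import Path

    # Batch-convert word-processor files to EPUB using Calibre's command-line
    # converter. Assumes Calibre is installed and ebook-convert is on the PATH.
    source_dir = Path("documents")
    for doc in source_dir.glob("*.docx"):
        epub = doc.with_suffix(".epub")
        subprocess.run(["ebook-convert", str(doc), str(epub)], check=True)
        print(f"converted {doc.name} -> {epub.name}")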

Calibre and other tools can read EPUB and convert it into other formats.

PDF/UA tools

There are several tools for converting .doc, .docx and .odt files into PDF/UA. These include Adobe Acrobat, Microsoft Office and Open Office so the process is well supported by the leading players.

There are products that attempt to convert from PDF to other formats but they tend not to use the PDF/UA tagging so the output often loses much of the structure of the original.

Conclusion

To provide accessible documents to the widest possible set of users an organisation should distribute the documents in accessible EPUB format with some also available as videos with the text read out and signed.

To ensure this is practical there needs to be more research so that recommendations can be made about:

The best readers for different users.

The creation of word-processor documents that can be automatically converted into EPUB documents.

This recommendation is intended to provide the best long-term solution to accessible documents. It should be the solution promoted by the accessibility community. However, the creation and reader technologies for EPUB are at present (January 2014) somewhat immature and lacking a complete set of easily implemented functions. There is a need to persuade the providers of EPUB technology to improve the quality and function of their products.

Therefore, for a distributor of accessible documents who requires an immediately available, low risk solution PDF/UA could be the preferred choice.

On-demand applications are often talked about in terms of how independent software vendors (ISVs) should be adapting the way their software is provisioned to customers. However, these days the majority of on-demand applications are being provided by end user organisations to external users: consumers, users from customer or partner organisations and their own employees working remotely.

A recent Quocirca research report, “In demand: the culture of online services provision”[1], found that 58% of northern European organisations (from the UK, Ireland and the Nordic region) were providing on-demand e-commerce services to external users. Not surprisingly, financial services topped the list with 84% of organisations doing so (showing how ubiquitous the provision of online banking and the like now is). This was followed by the technology, utilities and energy sector and the retail, distribution and transport sector, with 79% and 70% respectively providing on-demand applications.

However, there was plenty of such activity in other sectors. 61% of manufacturers were providing on-demand applications, most often to other businesses (think connected supply chain systems). For professional services it was 56%, again most often to other businesses. For educational organisations it was 37%. The public sector trailed with just 17%, which is surprising given the commitment of many governments to so-called e-agendas.

At one level this is good news: more direct online interaction with consumers, partners and other businesses should speed up processes and sales cycles and extend geographic reach; those that do not offer it will be less competitive. However, there are two big caveats.

These benefits will only be gained if these applications perform well and have a high percentage of uptime (approaching 100% in many cases).

Any application exposed to the outside world is a security risk, vulnerable to attack, either as a way into an organisation’s IT infrastructure through software vulnerabilities or to stop the application itself from running effectively (application level denial of service/DoS), thus limiting a given organisation’s ability to carry on business and often damaging its reputation.

So, how does a business ensure the performance and security of its online applications?

The performance of online applications

Two things need to be achieved here. First there needs to be a way of measuring performance and second there needs to be an appreciation of, and investment in, the technology that ensures and improves performance.

Testing the performance of applications before they go live can be problematic. Development and test environments are often isolated from the real world and, whilst user workloads can be simulated to test performance on centralised infrastructure, the real world network connections users rely on, which are increasingly mobile ones, are harder to test. The availability of public cloud platforms helps as run-time environments can be simulated, even if the ultimate deployment platform is an internal one. This saves an organisation having to over-invest in its own test infrastructure.

So, upfront testing is all well and good, but, ultimately, the user experience needs to be monitored in real time after deployment. This is not just because it is not possible to test all scenarios before deployment, but because the load on an application can change unexpectedly, due to rising user demand or other issues, especially over shared networks. User experience monitoring was the subject and title of a 2010 Quocirca report[2], much of which is still relevant today; however, the biggest change since then has been the relentless rise in the number of mobile users.

Examples of tools for the end-to-end monitoring of the user experience, which covers both the application itself and the network impact on it, include CA Application Performance Management, Fluke Networks’ Visual Performance Manager, Compuware APM and ExtraHop Networks (which has just released specific support for Amazon Web Services/AWS).
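At its simplest, this kind of monitoring can be approximated with a synthetic check: periodically time a request to a key page and raise a flag when it is slow. The Python sketch below is only an illustration of the idea; the URL and the one-second threshold are invented, and real user-experience monitoring tools add browser timings, network path analysis and per-user segmentation on top.

    import time
    import urllib.request

    URL = "https://www.example.com/checkout"   # illustrative page to watch
    THRESHOLD_SECONDS = 1.0                    # illustrative acceptable response time

    def check_once(url):
        # Time a single request, including reading the response body.
        start = time.monotonic()
        with urllib.request.urlopen(url, timeout=10) as response:
            response.read()
        return time.monotonic() - start

    while True:
        elapsed = check_once(URL)
        if elapsed > THRESHOLD_SECONDS:
            print(f"SLOW: {URL} took {elapsed:.2f}s")
        time.sleep(60)   # one synthetic check per minute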

It is all well and good being able to monitor and measure performance, but how do you respond when it is not what it should be? There are two issues here: first, the ability to increase the number of application instances and supporting infrastructure to support the overall workload and, second, the ability to balance that workload between these instances.

Increasing the resources available is far easier than it used to be with the virtualisation of infrastructure in-house and the availability of external infrastructure-as-a-service (IaaS) resources. For many, deployment is now wholly on shared IaaS platforms, where increased consumption of resources by a given application is simply extended across the cloud service provider’s infrastructure. This can be achieved because with many customers sharing the same resources, each will have different demands at different times.

Global providers include AWS, Rackspace, Savvis, Dimension Data and Microsoft. There are many local IT service providers (ITSPs) with cloud platforms; for example in the UK, Attenda, Nui Solutions, Claranet and Pulsant. Some ITSPs partner with one or more global providers to make sure they too have access to a wide range of resources for their customers.

Even those organisations that choose to keep their main deployment on-premise can benefit from the use of ‘cloud-bursting’ (the movement of application workloads to the cloud to support surges in demand) to supplement their in-house resources. Indeed, in Quocirca’s “In-demand[1]” report, those organisations providing on-demand applications to external users were considerably more likely to recognise the benefits of cloud-bursting than those that did not.

Being able to measure performance and having access to virtually unlimited resources to respond to it is one thing, but how do you balance the workload across them? The key technologies for achieving this are application delivery controllers (ADCs).

ADCs are basically next generation load balancers and are proving to be fundamental building blocks for advanced application and network platforms. They enable the flexible scaling of resources as demand rises and/or falls and offload work from the servers themselves. They also provide a number of other services that are essential to the effective operation of on-demand applications; these include the following (a stripped-down sketch of the core load-balancing idea appears after the list):

Network traffic compression – to speed up transmission

Data caching – to make sure regularly requested data is readily available

Secure sockets layer (SSL) management – acting as the landing point for encrypted traffic and managing the decryption and rules for on-going transmission

Content switching – routing requests to different web services depending on a range of criteria, for example the language settings of a web browser or the type of device the request is coming from

Server health monitoring – ensuring servers are functioning as expected and serving up data and results that are fit for transmission
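Commercial ADCs are sophisticated products, but the load-balancing core they build on can be sketched in a few lines. The fragment below is an illustrative Python toy, not how any particular ADC works: the back-end addresses and the /health endpoint are assumptions, and everything an ADC adds on top (SSL offload, caching, compression, content switching) is left out.

    import itertools
    import urllib.request

    # Round-robin over a pool of back-end servers, skipping any that fail a
    # health check. Addresses and the /health endpoint are illustrative.
    SERVERS = ["http://10.0.0.11:8080", "http://10.0.0.12:8080", "http://10.0.0.13:8080"]

    def is_healthy(server):
        try:
            with urllib.request.urlopen(server + "/health", timeout=2) as r:
                return r.status == 200
        except OSError:
            return False

    def next_server(rotation=itertools.cycle(SERVERS)):
        # The cycle iterator persists between calls, giving round-robin order.
        for _ in range(len(SERVERS)):
            candidate = next(rotation)
            if is_healthy(candidate):
                return candidate
        raise RuntimeError("no healthy back-end servers available")

    print("routing request to", next_server())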

One of the best-known ADC suppliers was Cisco; however, Cisco recently announced it would discontinue further development of its Application Control Engine (ACE) and recommends another leading vendor’s product instead—Citrix’s NetScaler. Other suppliers include F5, the largest dedicated ADC specialist, Riverbed, Barracuda, A10, Array Networks and Kemp.

So, you can measure performance, you have the resources to meet demand and the means to balance the workload across them, as well as off-load some of the work with ADCs; but what about security?

The security of online applications

The first thing to say about the security of online applications is that you do not have to do it all yourself. Use of public infrastructure puts the onus on the service provider to ensure security up to a certain level. Most have a shared security model; under the AWS model, for example:

The customer is free to choose its operating environment and how it should be configured, and to set up its own security groups and access control lists.

However, regardless of where the application is deployed, it will be open to attack. A 2012 Quocirca report[3] underlined the scale of the application security challenge. The average enterprise tracks around 500 mission-critical applications—in financial services organisations it is closer to 800. The security challenge increases as more and more of these applications are opened up to external users.

Beyond ensuring the training of developers, there are three main approaches to testing and ensuring application security:

Code and application scanning: thorough scanning aims to eliminate software flaws. There are two approaches: the static scanning of code or binaries before deployment and the dynamic scanning of binaries during testing or after deployment. On-premise scanning tools have been relied on in the past—IBM and HP bought two of the main vendors. However, the use of on-demand scanning services, for example from Veracode, has become increasingly popular as the providers of such services have visibility into the tens of thousands of applications scanned on behalf of thousands of customers. Such services are often charged for per application, so unlimited scans can be carried out, even daily. The relatively low cost of on-demand scanning services makes them affordable and scalable for all applications, including non-mission-critical ones (a brief illustration of the kind of flaw such scanning looks for appears after this list of approaches).

Manual penetration testing (pen-testing): specialist third parties are engaged to test the security of applications and the effectiveness of defences. Because actual people are involved in the process, pen-testing is relatively expensive and only carried out periodically; new threats may emerge between tests. Most organisations will find pen-testing unaffordable for all deployed software, so it is generally reserved for the most sensitive and vulnerable applications.

Web application firewalls (WAF): these are placed in front of applications to protect them from application-focussed threats. They are more complex to deploy than traditional network firewalls and, whilst affording good protection, do nothing to fix the underlying flaws in software. WAFs also need to scale with traffic volumes, as more traffic means more cost. As has been pointed out, WAFs are a feature of many ADCs, and are less likely to be deployed as separate products than they were in the past. They also protect against application level DoS where scanning and pen-testing cannot.
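As a concrete (and deliberately simplified) illustration of the kind of flaw referred to above, the Python fragment below shows a classic SQL injection defect of the sort static and dynamic scanners are designed to catch, alongside the parameterised fix. The database, table and field names are invented for the example.

    import sqlite3

    conn = sqlite3.connect("shop.db")   # illustrative database

    def find_customer_unsafe(name):
        # Flawed: the user-supplied value is concatenated into the SQL text, so
        # input such as "x' OR '1'='1" changes the query itself. This is the
        # sort of defect scanning aims to find before deployment.
        return conn.execute("SELECT * FROM customers WHERE name = '" + name + "'").fetchall()

    def find_customer_safe(name):
        # Fixed: the value is passed as a bound parameter, never as SQL text.
        return conn.execute("SELECT * FROM customers WHERE name = ?", (name,)).fetchall()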

100% software security is never going to be guaranteed and many organisations use multiple approaches to maximise protection. However, interestingly, as one of the reasons for having demonstrable software security is to satisfy auditors, compliance bodies do not themselves mandate multiple approaches. For example the Payment Card Industry Security Standards Council (PCI SSC) deems code scanning to be an acceptable alternative to a WAF.

The number of on-demand applications provided by businesses in all sectors is set to increase further. Users will become even less tolerant of poor performance as they rely more on on-demand services as part or all of the way they engage with suppliers. Hackers and activists will continue to become more sophisticated in the way they attack online applications. The technology that supports performance and provides security will continue to improve over time; the businesses that make best use of this technology will be the most effective providers of online services.

As with many things, business processes evolve over time and are therefore never perfect; one can be left wondering: why is it done that way, and surely it can be improved?

Wholesale change is rarely an option; there are too many stakeholders with vested interests as well as legacy systems involved that have to be integrated. All this is especially true of any process that involves interacting with third parties as so many do. Quocirca has spent some time recently looking at a particular process that exists in all companies, the order to cash (OTC) cycle.

In a nutshell, this is what happens between a customer (a retail store, restaurant or hospital, etc.) placing an order with a supplier (a manufacturer, distributor, etc.) and an invoice being accepted by the former and the latter being paid. In general, the shorter the OTC cycle, the happier the customer and the healthier the supplier’s finances.

Because the OTC process also involves logistics services providers (men in trucks and vans) there are usually a minimum of three parties involved, sometimes many more; all have their own systems and way of doing things. This means OTC processes can only ever be adapted, not replaced.

This does not mean big improvements cannot be made—they can—and quick wins can be gained through better automation at various stages of the cycle. This is the subject of a recent Quocirca report “Customer automation management”.

The report explores the four elements that define customer automation management:

Support for multi-format processing (e.g. of orders and proof of delivery notes)

The creation of intelligent business rules

Improved master data management

Accelerated exception management

It looks at how each helps shorten the OTC cycle and explains why a single electronic data interchange (EDI) standard has never arrived and never will.

That many incremental improvements can have a major high-level impact is explained through looking at how days sales outstanding (DSO) can be reduced. This is a key metric for finance managers and, in public companies, one that can affect share price for better or worse.
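For readers unfamiliar with the metric, DSO is usually calculated along the following lines; the figures below are purely illustrative and not taken from the report.

    # Days sales outstanding (DSO): roughly, how many days of sales are tied up
    # in unpaid invoices. All figures are illustrative.
    accounts_receivable = 1_200_000   # invoices issued but not yet paid
    credit_sales = 10_000_000         # credit sales over the period
    period_days = 365

    dso = accounts_receivable / credit_sales * period_days
    print(f"DSO = {dso:.0f} days")    # ~44 days; shaving even a few days frees working capital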

Whilst many of the terms bandied around in IT circles would mystify outsiders, the term 'IT' itself is widely understood to mean, well, computers and all that. It is an acronym so widely used, especially in the broader business world, that it is rarely spelled out—but, just in case anyone is not quite sure, it stands for 'information technology'. So why state the obvious?

The very ubiquity of the term means we too often fail to break it down and consider the components. Business people may often talk about IT, but what really matters to them is the first bit, “information”. Information needs processing, storing, sharing and so on and that is where the “technology” comes in. However, it does not mean that for a given business the responsibility for providing the “technology” to handle “information” needs to reside with one person.

Indeed, in many businesses the two parts started to become separated some time ago as the role of IT Director began to give way in some organisations to that of a chief information officer (CIO). Of course, more important than a title is the way someone actually carries their job out. Nevertheless, those with the title CIO might be expected to focus on information and its value to the business rather than technology per se. In fact, with the increasing availability of cloud-based services it is quite possible for a CIO to ensure that the organisation they serve gets the value from information that they need without ever dirtying their hands with technology.

The term CIO emerged in large enterprise organisations, but in reality it is more likely that smaller mid-market organisations can shed the need to manage technology in-house and focus more and more on the processing of information and the business outcomes it can deliver. This is not just an idle thought; it is increasingly a reality as a recent freely available Quocirca research report, “The mid-market conundrum” shows (the research was sponsored by the IT service provider Attenda).

27% of the UK mid-market organisations interviewed said it was now very true that their IT operation was focussed on the best way of delivering applications to the business rather than IT platforms per se. These organisations can be said to have a CIO mind-set, whatever the job title of the individual in charge. In Quocirca’s view the other 73% could do better, although the majority of them said it was somewhat true that their organisation thought like this.

Freed from the need to focus on technology platforms and instead focussed on making sure information is managed and used optimally, the budding mid-market CIO can focus on delivering desired business outcomes, in the most practical way that best suits their organisation. To this end, the applications that actually process information are more and more likely to be run on third party platforms or be fully outsourced.

38% of mid-market "IT leaders" said they were deploying business applications on a case-by-case basis, selecting the platform that best suits the needs; and 19% said it was very true that they were less likely to run a given IT application in-house than two years ago, whilst 37% said this was somewhat true. It was even more likely to be the case for standard communications applications such as email, VoIP and web conferencing.

The evidence presented in the report is clear. As the mid-market IT manager evolves into a role that looks more and more like an enterprise CIO and the focus shifts from building technology platforms to managing information and applications to best serve the business, more and more of the technology will be outsourced to expert IT service providers.

Behind all this lies an often overlooked reality about the move to cloud-based services. It is as much about more effectively using information to better ensure business outcomes as it is about procuring cheap, scalable technology resources. Mid-market organisations that remain more focussed on technology rather than information will lose out to more agile competitors.

Bloor Research joins national campaign to help disabled people get online

Bloor Research is proud to announce it has become a partner of a major new national campaign to raise awareness about the barriers faced by people with disabilities in accessing the internet and other new digital technologies, and to help overcome them. This is a natural follow-on to the research into accessibility that Bloor has conducted over the last seven years.

Bloor believes that our readers should follow suit and show their support for ICT accessibility and gain the benefits available from a community of interest.

Go ON Gold aims to encourage businesses, organisations and policy makers to become more aware of the needs of disabled people - including their own staff and customers - and of the benefits to the economy of enabling everyone to be online.

New technology, from the internet to smartphones and digital TV, can be liberating for disabled people but can also turn into another way of excluding them from work, entertainment, shopping and other everyday activities. But shockingly, some four million disabled people in the UK have still never used the internet, either because of design barriers or because they may be unaware of advances in technology that can make access easier.

As part of its awareness-raising work, Go ON Gold has filmed a series of videos of campaigners and technology users.

One of the video subjects is Paralympian peer and disability rights campaigner Tanni Grey-Thompson. The sixteen-times medal winner is a firm believer in the enabling power of IT: "For people whose mobility is compromised or who lack the resources to be able to get out and about as much as they would like, full internet access can be hugely liberating. In front of the screen, we can all be equal and Go ON Gold is set to make this a reality."

Go ON Gold, funded by the Nominet Trust, is a partner campaign of Go ON UK, the new national digital inclusion charity chaired by UK digital champion Martha Lane Fox and backed by the BBC, Age UK, the Post Office, TalkTalk, Lloyds Banking Group, the Big Lottery Fund and Eon.

The Go ON Gold website will act as a central focus for links to key resources and expertise, ranging from charities providing free or subsidised equipment, to centres offering one-to-one advice, and guidance for website developers to ensure the accessibility of the digital content they produce.

There is a principle that internet service providers (ISPs) and governments should treat all data crossing the internet equally. It should not matter what type of device is being used, who the user is, or what site or application the data is coming from or going to—net neutrality should mean no difference in charging models and no discrimination between different use cases.

The arguments go back and forth as to whether this should be enshrined in legislation as a right, or allowed to drift in a competitive open market.

Despite the arguments, and the capacity of technology to advance, there are restrictions imposed by the laws of physics, and certain resources are therefore limited. This might not be too much of an issue with the massed bundles of fibre optics at the heart of fixed-line networks, but wireless networks have to balance range, capacity, power and the frequency spectrum in what is an increasingly ‘noisy’ environment—ideally without ‘frying’ anything en route.

While the resources are constrained, the boundless enthusiasm and appetite to access mobile data and applications is not. Nor, given the numbers of subscribers and devices, is the number of endpoints diminishing. In fact, with a re-awakened interest in machine-to-machine communications (M2M), or an ‘internet of things’, this is likely to accelerate further.

So what about unwired net neutrality?

There are already differential services that break the spirit, if not the letter, of the principle. To observe this, consider the way hotels have been offering Wi-Fi. Initially it appeared to be a new revenue stream, but then establishments realised it was costly to get right. As more venues started to offer it, the differentiation was lost and free Wi-Fi became a ‘table-stakes’ offering, once hoteliers realised that their real money came from renting out rooms and selling food and drink.

Not all have reached this point yet, but the more progressive organisations have already gone a step further. They offer ‘basic’ Wi-Fi for free, but have a premium service that offers greater bandwidth, improved latency and so on—what might be described as ‘professional’ Wi-Fi, compared with today’s simple hotspots. Basic allows a bit of email and gentle browsing, but the premium service would be good enough for consumers’ IP telephony, gaming and video streaming, or for virtual desktops and unified communications for the enterprise user.

Then there are cellular networks. Some carriers are premium-pricing their higher-speed 4G offerings compared with the tariffs on their 3G networks. Of course, with differential caps on usage, it also gets a little confusing as to which is the best service for an individual user. In countries where only one or a few of the mobile networks are offering 4G today, there will be rapid pricing changes as operators switch between land-grab, revenue-maximising and network-quality modes.

Given that users have different needs—from M2M applications that might only require a few guaranteed kilobytes to video streamers and gamers who need high bandwidth and low latency—there will have to be different types of service on offer. Simply setting caps on how many minutes of communication or megabytes of capacity are bundled and then charged for will no longer be sufficient.

Different qualities of service will need to be differentially priced. This might mean application bundling (e.g. all the social media you can eat, but video charged by the megabyte) or guaranteed service levels (e.g. all gaming traffic delivered within sub-XYZ latency, but email transmitted as ‘best efforts’).
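
As a purely hypothetical illustration of what such application-aware tariffs could look like as data, the sketch below defines per-class policies and a simple charge calculation. The class names, prices and limits are invented for the example and do not describe any operator's actual plans.

```python
# Hypothetical sketch of an application-aware tariff; all classes, prices and
# limits are invented for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrafficPolicy:
    charging: str                          # "unlimited", "per_megabyte" or "best_effort"
    price_per_mb: float = 0.0
    max_latency_ms: Optional[int] = None   # None means no latency guarantee

TARIFF = {
    "social_media": TrafficPolicy(charging="unlimited"),                     # all you can eat
    "video":        TrafficPolicy(charging="per_megabyte", price_per_mb=0.02),
    "gaming":       TrafficPolicy(charging="unlimited", max_latency_ms=50),  # latency-guaranteed
    "email":        TrafficPolicy(charging="best_effort"),
}

def charge_for(app_class: str, megabytes: float) -> float:
    """Return the usage charge for a given application class and volume."""
    policy = TARIFF.get(app_class, TrafficPolicy(charging="best_effort"))
    return megabytes * policy.price_per_mb if policy.charging == "per_megabyte" else 0.0

print(charge_for("video", 350))   # 7.0 - video billed by the megabyte
print(charge_for("gaming", 350))  # 0.0 - bundled, but carried with a latency guarantee
```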

It will be a real challenge for rating, billing and marketing, but there is no dark fibre in the sky and all the innovative use of spectrum has its eventual limit, which, with ever more users and usage, is close by.

The superfast mobile net is unlikely to be very neutral, but that might work out to be beneficial in the long run.

I recently blogged about the Qualitest Group and its third party testing services. I like the idea of using an external testing organisation because I believe that testers need a different mindset to developers - a delight in breaking things and finding defects perhaps - and, in most organisations, such a mindset is career limiting. It's a question for an IT group to ask itself - do we like having people who regularly find our mistakes and publicise them to those around us? If not, perhaps it should be considering a third party testing organisation that fully understands testing in all its aspects and employs people with the "testing mindset".

That choice raises further questions, however, which really concern the governance of the development process and its quality assurance. Does your external testing partner allow developers to unit-test their own code, for example? With old-style development practices, people probably shouldn't test their own code (the developers often have the wrong mindset and their test cases can embody the same misconception of the requirement as the code does); but developers unit-testing their own work is pretty fundamental to eXtreme Programming and Agile development. Agile is also generally becoming accepted as the way to go, both for productivity and quality. So it's a question to ask your testing partners: "do you support Agile development effectively without 'spoiling' the Agile culture we're trying to promote?".

Another thing I like about Qualitest is its results-based testing approach. However, this rather assumes that you have something to compare your results against and that the results you want are feasible. You can never claim 100% confidence that there are no bugs in a system, even a safety-critical system; and Qualitest would say that you can never say that "testing is finished".

Nevertheless, I would suggest that that's actually a matter of semantics, to a large extent. Should you discuss the semantics of a results-based testing SLA saying something like "Find at least 95% of the bugs" with your testing partner? There are ways of estimating the total bugs in a piece of code (here, for example) but does the SLA refer to these estimates or merely to finding 95% of the bugs actually reported by users? Does a design flaw count as a bug? What about the possibility of a systematic testing bias that puts the most business-critical bugs in the 5% that aren't found? And, what about latent bugs which haven't been found and perhaps can't ever be reached - with current workloads and data patterns? Are they worth wasting time on? Perhaps not; but latent bugs can represent a potential production disaster waiting to happen when workloads change or new data enters the system (perhaps you gain a significant Far East customer for the first time and its data looks different to what you've been processing before). So perhaps latent bugs are important (which is partly why static code analysis can be important).
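
One well-known way of putting a number on the "total bugs" question is capture-recapture estimation, where two testers or teams work independently and the overlap in what they find suggests the size of the defect population. The sketch below is purely illustrative, with made-up defect IDs, and is not a description of Qualitest's own method.

```python
# Illustrative capture-recapture (Lincoln-Petersen) estimate of the total
# defect population; the defect IDs below are invented for the example.

def estimate_total_defects(found_by_a: set, found_by_b: set) -> float:
    """Two independent testers; the overlap hints at how many defects exist in total."""
    overlap = len(found_by_a & found_by_b)
    if overlap == 0:
        raise ValueError("No overlap: the estimate is undefined (or the total is very large)")
    return len(found_by_a) * len(found_by_b) / overlap

team_a = {"BUG-1", "BUG-2", "BUG-3", "BUG-5", "BUG-8"}
team_b = {"BUG-2", "BUG-3", "BUG-4", "BUG-8", "BUG-9", "BUG-13"}

estimated_total = estimate_total_defects(team_a, team_b)   # 5 * 6 / 3 = 10
found_so_far = len(team_a | team_b)                        # 8 distinct defects found
print(f"Estimated total: {estimated_total:.0f}, found so far: {found_so_far}")
print(f"Estimated proportion found: {found_so_far / estimated_total:.0%}")  # against a '95% of bugs' SLA
```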

I like results-based testing because it promises to give you a fair and equitable contract with your external testing partner. It can also, perhaps, give you a handle on the "is testing finished" issue - something else to question your testing partner about, I think.

"Is testing finished" is really another question of semantics. If you can place confidence limits on the number of bugs found relative to the number of bugs expected; if you can put numbers on the risk associated with "going live"; and if you can estimate, with confidence, the cost associated with the risk going live against the cost to the business of withholding the new automated service; then you have, in a real and practical (although limited) sense, "finished testing". Even if running some more tests (perhaps tests which you haven't thought of and which aren't in your test pack) might find some more defects.

Part of the value of employing an organisation like Qualitest is that it is a testing specialist and understands the testing process and its semantics, probably better than most developers do. However, although management can outsource responsibility for the execution of testing and quality assurance, it can't outsource responsibility for Quality. If, for example, one of the 5% of defects Qualitest hasn't found (while satisfying its testing by results SLA) results in confidential customer credit card details held by a company being splashed over the Internet, it'll be (potentially) the company's directors in the dock facing gaol, not Qualitest's directors.

So, the semantics of testing is probably important to the managers employing a firm like Qualitest. For example:

Is a "bug" in an automated system a coding error; an error in automating business logic; an error in the business logic being automated; or a fundamental misunderstanding of the business operation and its commercial context by business management? Even if you replace "bug" with "defect", depending on where I am in the organisation and how technical I am, I might reasonably expect one, some or any of these to be addressed by a quality assurance or testing team that promises to help me control the "quality" of my automated business systems. And I've been, sloppily, mixing up "bug" and "defect" throughout this piece; this is common (although I do hope that many readers noticed), but is it acceptable?

Is "defect free" software possible? Altran Praxis (formally Praxis High Integrity Systems) promises to deliver "zero defect" software and this claim has been validated by the NSA. This isn't trivial and Praxis achieves zero defects by using mathematical proof where it is cost-effective (but only where it is cost-effective, not everywhere) and by developing in a restrictive subset of Ada that doesn't support constructs which facilitate coding errors; but what it means by "zero defect" is that the code complies 100% with the spec. Is this the same as what you mean by "defect free"? Mind you, just 100% compliance with spec would be a useful step forward for many systems.

If I design my system to store credit card details in a database that is accessible via SQL queries embedded in orders sent over the web, is this a "bug", a "system defect" or a "design fault", and would you expect your testing team, or Qualitest, to find it? Or does this, perhaps, depend on what you ask (and pay) your testers, or Qualitest, to do? Perhaps you think this is something your security team should be testing; but perhaps they think it's a development issue and, in practice, nobody takes ownership of such issues.
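
To make that design question concrete, the flaw being described is the classic SQL injection pattern, where query text is assembled from untrusted input. The sketch below is hypothetical (the table, column and values are invented) and simply contrasts the vulnerable construction with the parameterised form that a tester, a security team or a developer might each assume someone else is checking for.

```python
# Hypothetical illustration of the design flaw described above: building SQL
# from untrusted input versus using a parameterised query. Table and data are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, card_number TEXT)")
conn.execute("INSERT INTO orders VALUES (1, '4111111111111111')")

def lookup_order_vulnerable(order_id: str):
    # Untrusted input is concatenated straight into the query, so an input of
    # "1 OR 1=1" returns every row, card numbers included.
    return conn.execute(f"SELECT * FROM orders WHERE id = {order_id}").fetchall()

def lookup_order_parameterised(order_id: str):
    # The driver treats the input as data, not SQL, so the injection fails.
    return conn.execute("SELECT * FROM orders WHERE id = ?", (order_id,)).fetchall()

print(lookup_order_vulnerable("1 OR 1=1"))     # leaks all rows
print(lookup_order_parameterised("1 OR 1=1"))  # [] - no integer id matches that text
```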

If I say that you have bugs in your systems because you tell your developers that you want bugs in your systems, are you shocked and in immediate denial? But if you, perchance, reward developers for delivering ahead of schedule and reward them again for coming in out-of-hours to fix production bugs, aren't you, in effect, telling your developers that you are happy to live with bugs in the interests of immediate delivery and conspicuous "company loyalty"? Especially if you try to reduce what you spend on quality assurance as much as possible. I would see this as a "cultural defect" or "organisational defect" - a failure of "good governance", perhaps - which will impact the business; but, in semantic terms, does this count as a "system defect" which you could expect your quality assurance partners to help you eliminate?

I just raise the questions and they don't invalidate my view that external testing by an organisation such as Qualitest may bring significant improvements to system quality. But the outsourcing of testing has consequences and raises governance issues which are often cultural and semantic as much as technical - but no less important for all that.

Parkinson’s Law states that work expands so as to fill the time available. Something similar could be said about network bandwidth; left unchecked, the volume of data will always increase to consume what is available. In other words, continually increasing network bandwidth should never be the only approach to network capacity provision; however much is available it still needs to be used intelligently.

There are three basic ways of addressing overall traffic volume:

Cut out unwanted data

Minimise volumes of the kind of data you do want

Make use of bandwidth at all times (akin to peak and off-peak power supply)

There are two types of unwanted data. First, there are the legitimate users who are doing stuff they really should not be doing. From the network perspective, this only really becomes a problem when that stuff consumes large amounts of bandwidth, such as watching video or downloading games, films and music. A mix of policy and technology can be deployed to keep users focussed on their day jobs and thus make productive use of bandwidth.

The technology available includes web content and URL filtering systems from vendors such as Blue Coat, Websense and Cisco, and the filtering or blocking of network application traffic with technology from certain firewall vendors, including Palo Alto Networks and Check Point. In both cases, care must be taken to avoid false positives that end up blocking legitimate use.

The second source of unwanted data is external and insidious: cybercrime and hacktivism. At one level this means pre-filtering of network traffic to keep spam email and the like at bay, especially as spammers have started exploiting increased bandwidth to send rich media messages. Most organisations now have such filtering in place using services such as Symantec’s MessageLabs or Mimecast’s email security.

Perhaps more serious is the need to avoid becoming the target of a denial of service (DoS) attack. Generally speaking, these are aimed at taking servers out, but one type, the distributed DoS (DDoS) attack, does so by flooding servers with network requests, and so also has the effect of slowing or blocking the network. Technology is available to identify and block such attacks from vendors such as Arbor, Corero and Prolexic.

So now (hopefully) only the wanted traffic is left, but this will still expand to fill the pipe if left unchecked. One way to keep it under control is to keep as much 'heavy lifting' as possible in the data centre. This means deploying applications that minimise the chat between the server and end-user access devices. To achieve this, data processing should be done at the application server, with just the results being sent to users.

For the data that does have to be sent, techniques such as compression, de-duplication and caching can minimise the volume further. Two types of vendor step up to the plate here: those that optimise WAN traffic, for example Silver Peak, Riverbed and Blue Coat, whose products also help with the local caching of regularly used content; and service providers that specialise in caching and content delivery, notably Akamai.
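
The sketch below is a deliberately simplified illustration of those three ideas, compression, de-duplication and caching, applied to a payload before it crosses the WAN. Real WAN optimisers from the vendors mentioned work at the packet and byte-stream level, so this only shows the principle.

```python
# Simplified illustration of compression, de-duplication and caching before
# data crosses the WAN; real optimisers work far below this level.
import hashlib
import zlib

local_cache = {}            # cached content keyed by its hash
chunks_already_sent = set() # de-duplication index

def bytes_to_send(payload: bytes) -> int:
    """Return how many bytes would actually have to cross the WAN."""
    digest = hashlib.sha256(payload).hexdigest()
    if digest in chunks_already_sent:
        return len(digest)                 # de-duplication: send only a short reference
    chunks_already_sent.add(digest)
    local_cache[digest] = payload          # caching: keep a copy for later local requests
    return len(zlib.compress(payload))     # compression: shrink what does go over the wire

report = b"quarterly figures " * 500
print(len(report))               # 9000 bytes raw
print(bytes_to_send(report))     # far smaller once compressed
print(bytes_to_send(report))     # 64 bytes: only a hash reference the second time
```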

All of the above will free up bandwidth for applications that must have the capacity they need at the time the user wants it: telephony, web and video conferencing and so on. Other applications, such as data backup or uploading data to warehouses for number crunching, must be given the bandwidth they need, but this can be restricted to times when other applications are not in use, which in most cases will be overnight.

Of course, for global companies there is no single night time; the same is true in certain industries, such as healthcare, which may have urgent network needs at all times of day. When this is the case, both urgent and non-urgent network requirements must run side by side, and this requires certain network traffic to be prioritised to ensure quality of service (QoS), an issue that it only makes sense to address once the data flowing through the pipe is clean and wanted.

The next BriefingsDirect discussion examines how two companies are extending their use of cloud computing by taking on IT service desk and incident management functions "as a service." We'll see how a common data architecture and fast delivery benefits combine to improve the efficiency, cost, and results of IT support for end users.

Our examples are intelligent energy-management solutions provider Comverge and how it’s extended its use of Salesforce.com into a self-service enabled service desk capability using BMC’s Remedyforce.

We'll also hear the story of how modern furniture and accessories purveyor, Design Within Reach, has made its IT support more responsive—even at a global scale—via cloud-based incident-management capabilities.

Learn from them more about improving the business of delivering IT services, and in moving IT support and change management from a cost center to a proactive IT knowledge asset.

Here to share their story on creating the services that empower end users to increasingly solve their own IT issues is Danielle Bailey, IT Manager at Comverge in Norcross, Georgia, and Alec Davis, the Senior System Analyst at Design Within Reach, based in Stamford, Connecticut. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: BMC Software is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:

Gardner: When you began looking at improving your helpdesk solutions and IT support, what were the problems that you really wanted to solve?

Bailey: We had three pretty big pain points that we wanted to address. The first was cost. As our company was growing quickly, we were having some growing pains with our financials as far as being able to justify some of the IT expense that we had.

The solution that we had at the time charged per person, because there was a micro-agent involved, and so as we grew as a company, that expense continued to grow, even though it wasn’t providing the same return on investment (ROI) per person to justify it.

So we had a little over $55,000 a year expense with our prior software-as-a-service (SaaS) solution, and so we wanted to be able to reduce that, bring it back more in line with the actual size of our IT group, so that it fit a little bit better into our budget.

One of the reasons we went with BMC Remedyforce is that rather than charging us by the end user, the license fees were by the helpdesk agent, which would allow us to stay within the scope of our IT team.

The second big issue that we had was that a lot of our end users were remote. We have field technicians who go out each day and install meters on homes, and they don’t carry laptops, and the micro-agent required laptops for them to be able to log tickets.

We wanted to be able to use something that would allow us to give our field techs the ability to log tickets on a mobile application, like their iPhones, and Remedyforce had that.

The third issue was that we were Sarbanes-Oxley (SOX) compliant and we needed to make sure that whatever solution we chose would allow us to track change management, to go through approval workflows, and to allow our management to have insight into what changes were being made as they went forward, and to be able to interact and collaborate on those changes.

So that was the third reason we chose Remedyforce. It has the change management in there, but it also has the Salesforce.com Chatter interface, which we use to make sure that managers can follow some of the incidents. As we go through, if we have any changes, we can quickly work with them to explain what we may need, and they can contribute to that conversation.

Davis: We have a different story. A couple of years ago we made a huge corporate move from San Francisco to Stamford, Connecticut. With that move, we saw an opportunity to look at our network infrastructure, examine what hardware we needed, and consider whether we could move to the cloud.

So BMC Remedyforce was part of a bigger project. We were moving toward Salesforce and we also moved toward Google Apps for corporate email. We wanted to reduce a lot of the hardware we had, so that we didn’t have to move it across the country.

We were also looking for something that could be up and running before that move, so we wouldn't have any downtime.

We quickly signed up with Google, and that went well. And then we moved into Salesforce.com. At Dreamforce 2010, Remedyforce was announced, and I was there and I was really excited about the product. I was familiar with BMC’s previous tools, as were some of the other IT staff, so we quickly jumped on it.

But as part of that move, something else kind of changed about our IT group. We did grow a bit smaller, but we were also more spread out. We used to all be in one location. Now, we're in San Francisco, Stamford, and also Texas. So we needed something that was easily accessible to us all. We didn’t necessarily want to have to use a virtual private network (VPN) to get onto a system, to interact with our incidents.

And we also liked the idea of a portal for our customers. Our customers are really just internal customers, our employees. We liked the idea of them being able to log in and see the status of an incident that they have reported.

We're also really big on change management. We manage our own homegrown enterprise resource planning (ERP) system. So we make lots of changes to that system and fix bugs as well. And when we add something new, we need approval from the heads of different departments, depending on what that feature is changing.

So we are big on change management, and prior to that we were just using really fancy Microsoft Word documents to get approvals that were either signed via email or printed out and specifically signed. We like the idea of change management in Remedyforce and having the improved approval process.

Gardner: Tell us about Comverge.

Bailey: Comverge is a green energy company. We try to help reduce peak load for utility companies. For example, when folks are coming home and starting to wash clothes, turn on the air-conditioning and things like that, the energy use for those utilities spikes.

We provide software and hardware that allows us to cycle air-conditioning compressors on and off, so that we reduce that peak. And by reducing that peak we are able to help utility companies to meet their own energy needs, rather than buying power from other utilities or building new power plants.

We have been in business for about 25 years. We originally started out as part of Scientific Atlanta, but they have taken on new companies across the country to integrate new technology into what we offer.

We are now nationwide. We provide services to utilities in the Northeast, from Pennsylvania, and then all the way down to Florida, and then all the way west to California, and then to Texas, New Mexico, and different areas in between. And we’ve recently opened new offices in South Africa, providing the same energy services to them.

Comverge tries to make sure that the energy that we're able to help provide by reducing that load is green. It’s renewable. It’s something we can continue to do. It just helps to reduce cost as well as to save the environment from some of the pollution that may happen from new energy production.

In a nutshell, Comverge is a leading provider of intelligent energy management solutions for residential, commercial and industrial customers. We deliver the insight and controls that enable energy providers and consumers to optimize their power usage through the industry’s only proven comprehensive set of technology services and information management solutions.

In January, Comverge delivered two new products: the Intel P910 PCU, which includes capabilities to support dynamic pricing programs, and Intel Open Source Applications for the iPhone. The iPhone is very important to us. Our field technicians are using it at residential and commercial installations, and we just want to make sure that we continue with that innovation.

Gardner: And how many IT end users are you supporting at this point?

Bailey: About 600, and those are in South Africa, as well as all around the U.S. We transitioned in April to Remedyforce from our old SaaS system, but the users say that Remedyforce is a lot easier for them to use, as far as putting in tickets and for them to see updates whenever our technicians write notes or anything on the tickets. It's a lot easier for them to share with others whenever they have to change what we are working on.

We are still building our knowledge base. We didn’t have that capability previously. So we are able to use some of the tickets that come in, as we process, update and close them, to build articles that our technicians can use going forward.

I have recently switched my ERP analyst, but because I was able to pull out of Remedyforce some of the information captured while I had my prior ERP analyst, it actually helped me to train this new person on some of the things they can do to troubleshoot and resolve problems.

We are also able to use the automated reporting out of Remedyforce so that I can schedule reports on our tickets, see how many we have open, and for what categories and things like that, and take that to our executive management. They're able to see our resource needs, see where we may have bottlenecks, and help us make decisions that help our IT group move faster and more efficiently.

Gardner: Tell us about Design Within Reach.

Davis: Design Within Reach is a modern furniture retailer. We've been around for 12 years, starting in San Francisco. We have a website that has the majority of our sales. We also have “studios” that are better described as showrooms. We have usually about five reps in those studios, and we have about 50 studios around the U.S. and Canada.

So those [reps] are our users that we support. We've become a very mobile company in the last couple of years. A lot of our sales reps are using iPads. One of the requirements we've had is to be able to interact with corporate in a mobile fashion. Our sales reps walk around the showroom and work with our customers and they don’t necessarily want to be tied to a desk or tied to a desktop. So that is definitely a requirement for us.

Our IT staff is small. We have an IT group, information technologies, and we also have our information systems group, which is our development side. In IT we have about six people, and in our IS department we also have about six people. We have kind of a tiered system. Tickets come in from our employees, and our helpdesk will triage those incidents and then escalate them up the tiers to our development side, if needed, or to our network team.

We also have some contractors and developers. As I mentioned before, we have our own ERP system. We do a lot of the development in house, so we don't have to outsource it. It's important for those contractors to be able to get into Remedyforce and work through the change management requirements we have, and also in some cases look at incidents to see how bugs are happening in our ERP environment.

Gardner: How have you been able to empower those end users to find the resources they need, to keep you fairly lean when it comes to IT?

Davis: We have put most of the onus on our IT department to know how to resolve an issue, and we did have a lot of transition with new employees during our move. So building a knowledge base with on-boarding new IT people is also very important. Again, we're a small team and we support a larger internal customer base, so we need them to start and have the answers pretty quickly.

Time is money, and we have our sales reps out there that are selling to our large customer base. If there's an issue with the reporting, we need to be able to respond to it quickly.

Gardner: And the conventional wisdom is that helpdesks are still costly, and the view has been that it’s a cost center. Is there anything about how you have done things that you think is changing that perception?

Davis: The reporting has helped us to isolate larger issues, and to also identify employees that put a lot of incidents in. With the reporting, which is very flexible, and with reporting for management, requirements can change. With the Remedyforce reporting, I can change those existing reports, create new ones, or add new value to those reports.

Mainly you see how many tickets are coming in. We can show management how many incidents we are handling on a daily basis, weekly, monthly, and so forth. But I use it mainly to identify where the larger issues are. Managing an ERP system is a large task, and I like to see what issues are happening and where we can work to fix those bugs. I work directly with the developers, so I like to be as proactive as I can to fix those bugs.

And we are very spread out and very mobile, so we like the flexibility to be able to get into Remedyforce without VPN or traditional methods.

Collaboration is becoming very important to us. We did roll out Salesforce.com Chatter to most of our company, and we are seeing the benefits in our sales team especially. We are trying to use Chatter and Remedyforce together to collaborate on issues. As I said, we are spread out, and our IT group has different skill sets.

Depending on what the issue is, we talk back and forth about how to resolve it, and that's so important, because you do build up knowledge, but the core of our knowledge is in every one of our employees. It's very important that we can connect quickly and collaborate in a more efficient way than we used to have.

Bailey: We have been able to show where IT is actually starting to save money for the rest of the company by increasing efficiency and productivity for some of our groups. There are some of the development works that we are able to do by being able to track and change processes for folks, making them more efficient.

For example, one of the issues that we had was that we were tasked with trying to reduce our telecom expense. We were able to go through and log all of the different telecom lines and accounts. We had to trace them down and see where they were being used and where they may not be used anymore. We worked with some folks within the team to reduce a lot of the lines that we didn’t need anymore. We have been moving over to digital, but we still had a lot of analog lines.

Before, we didn’t have a way to really track those particular assets to figure out who they belonged to and what their use was. Just by having that asset tracking and working through each of those as a group, we were able to achieve a lot.

In the first quarter of the year we reduced our telecom expense by over $50,000 a year, and we are continuing with that effort.

With the knowledge base that we're building, we're able to let a lot of users begin to self-help. We have a pretty small IT team. We have only two people on what we call helpdesk support. Then we have two network team members, and we have about 10 people on our information services team, where we do development for the software and data services.

The knowledge base has been a lot of help for us to just start building that knowledge repository. Whereas before, if someone left the company, you would lose years and years of knowledge because there was no place that it was documented.

Because Remedyforce also ties into Salesforce.com, we'd [like to soon] be able to track some of our residential and utility customers in the Salesforce side as well, so that if the salesperson is aware that there is an issue going on with their utility, they can follow the information as it applies to that contact. Then, they're able to also reach out directly to the utility and make sure that things get handled the way they need to be handled according to contracts or relationships. So it's certainly something we are hoping to expand on.

We are also planning to use, and have already started using, Remedyforce for our HR group. When we have new hires or terminations, they're able to put in IT support tickets for that. We're able to build templates for each individual, so that as we receive notification that someone has been terminated, we can immediately remove them from the system too. HR has that access to put in those tickets and build those requests, and that helps maintain our SOX compliance.

Gardner: What else have you been doing with Remedyforce?

Davis: Information is very important to us, very important to myself. I like to see what is happening in organizations from a support standpoint. We haven’t really pushed out Remedyforce to a lot of other departments outside of HR, who of course is helping us with on-boarding the new employees and off-boarding as well.

All of our internal support teams, our operations team that supports our sales teams, some people in finance, and of course HR, are using Salesforce cases.

So we have all of our customer information. We have all of our vendor information. That would be the IT vendors, but we're also a retail company, so our product retailers are in there too.

We've also moved it out to our distribution center. They have the support team there. We've also started bringing in all of our shipping carriers and all the vendors that they work with. So we have all of our data in one place.

We can see where a lot of issues are arising, and we can be more proactive with those vendors with those issues that we are seeing.

It's great to have all of our data, all of our customer information, all of our vendor information, in one location. I don’t like to have all these disparate systems where you have your data spread out. I love having them in one location. It's very helpful. We can run lots of reports to help us identify what’s happening in our company.

When considering two or more items, there is the concept of “comparing apples with apples” – i.e. making sure that what is under consideration is being compared objectively. Therefore, comparing a car journey against an air flight for getting between London and Edinburgh is reasonable, but the same is not true between London and New York.

The same problems come up in the world of virtualised hosting. Here, the concept of a standard unit of compute power has been wrestled with for some time, and the results have led to confusion. Amazon Web Services (AWS) works against an EC2 Compute Unit (ECU), Lunacloud against a virtual CPU (vCPU). Others have their own units, such as Hybrid Compute Units (HCUs) or Universal Compute Units (UCUs) – while others do not make a statement of a nominal unit at all.

Behind the confusion lies a real problem; the underlying physical hardware is not a constant. As new servers and CPU chips emerge, hosting companies will procure the best price/performance option for their general workhorse servers. Therefore, over time there could be a range of older and newer generation Xeon CPUs with different chipsets and different memory types on the motherboard. Abstracting these systems into a pool of virtual resource should allow for a method of providing comparable units of compute power – but each provider seems to have decided that their own choice of unit is the one to stick with – and so true comparisons are difficult to work with. Even if a single comparative unit could be agreed on, it would remain pretty meaningless.

Let’s take two of the examples listed earlier – AWS and Lunacloud. 1 AWS ECU is stated as being the “equivalent of a 1.0-1.2 GHz 2007 (AMD) Opteron or 2007 (Intel) Xeon processor”. AWS then goes on to say that this is also the “equivalent of an early-2006 1.7GHz Xeon processor referenced in our original documentation”. No reference to memory or any other resource, so just a pure CPU measure here. Further, Amazon’s documentation states that AWS reserves the right to add, change or delete any definitions as time progresses.

Lunacloud presents its vCPU as the equivalent of a 2010 1.5GHz Xeon processor – again, a pure CPU measure.

Note the problem here – the CPUs being compared are 3 years apart, with a 50% spread on clock speed. Here’s where the granularity also gets dirty – a 2007 Xeon chip could have been manufactured to the Allendale, Kentsfield, Wolfdale or Harpertown Intel architectures. The first two of these were 65nm architectures, the second two 45nm. The differences in possible performance were up to 30% across these architectures – depending on workload. A 2010 Xeon processor would have been built to the Beckton 45nm architecture.

Now, here’s a bit of a challenge: Intel’s comprehensive list of Xeon processors (see this link http://www.intel.com/pressroom/kits/quickreffam.htm) does not list a 2007 (or any other date) 1.0-1.2 GHz Xeon processor, other than a Pentium III Xeon from 2000. Where has this mysterious 1.0 or 1.2GHz Xeon processor come from? What we see is the creation of a nominal convenient unit of compute power that the hosting company can use as a commercial unit. The value to the purchaser is in being able to order more of the same from the one hosting company – not to be able to compare any actual capabilities between providers.

Furthermore, the CPU (or a virtual equivalent) is not the end of the problem. Any compute environment has dependencies between the CPU, its supporting chipsets, the memory and storage systems and the network knitting everything together. Surely, though, a gigabyte of memory is a gigabyte of memory, and 10GB of storage is 10GB of storage? Unfortunately not – there are many different types of memory that can be used – and the acronyms get more technical and confusing here. As a base physical memory technology, is the hosting company using DDR RDIMMS or DDR2 FBDIMMS or even DDR3? Is the base storage just a RAIDed JBOD, DAS, NAS, a high-speed SAN or an SSD-based PCI-X attached array? How are such resources virtualised, and how are the virtual resource pools then allocated and managed?

How is the physical network addressed? Many hosting companies do not use a virtualised network, so network performance is purely down to how the physical network is managed. Others have implemented full fabric networking with automated virtual routing and failover, providing different levels of priority and quality of service capabilities.

To come up with a single definition of a “compute unit” that allows off-the-page comparisons between the capabilities of one environment and another to deal with a specific workload is unlikely to happen. Even if it could be done, it still wouldn’t help to define the complete end user experience, as the wide area network connectivity then comes in to play.

Can anything be done? Yes – back in the dim, dark depths of the physical world, a data centre manager would take servers from different vendors when looking to carry out a comparison and run some benchmarks or standard workloads against them. As the servers were being tested in a standardised manner under the control of the organisation, the results were comparable – so apples were being compared to apples.

The same approach has to be taken when it comes to hosting providers. Any prospective buyer should set themselves a financial ceiling and then try to create an environment for testing that fits within that ceiling. This ceiling is not necessarily aimed at creating a full run-time environment, and may be as low as a few tens of pounds. Once an environment has been created, load up a standardised workload that is similar to what the run-time workload is likely to be and measure key performance metrics. Comparing these key metrics will then provide the real-world comparison that is needed – and arguments around ECU, vCPU, HCU, UCU or any other nominal unit become a moot point.
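
As a rough sketch of that approach, the snippet below times the same small, standardised workload on whichever instance it is run, so that the best-of-several result can be compared across candidate providers. The hashing loop is only a stand-in; a real test should resemble the actual run-time workload as closely as possible.

```python
# Rough sketch of running an identical, standardised workload on each candidate
# environment and comparing wall-clock results; the workload here is a stand-in.
import hashlib
import time

def standard_workload() -> None:
    block = b"x" * (1024 * 1024)       # CPU-bound stand-in: hash 50 MB in 1 MB blocks
    for _ in range(50):
        hashlib.sha256(block).hexdigest()

def benchmark(runs: int = 5) -> float:
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        standard_workload()
        timings.append(time.perf_counter() - start)
    return min(timings)                # best run is the least noisy indicator

if __name__ == "__main__":
    print(f"Best of 5 runs on this instance: {benchmark():.2f}s")
```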

Only through such real-world measurement will an apple be seen to be an apple – as sure as eggs are eggs.

Interestingly, the 23% of consumers who want relationships with brands mainly want these as they share the same values as the brand, rather than necessarily wanting to be intimate with the brand.

Here are some of the reader comments that pertain to digital marketing: "The last thing I want is frequent emails . . . (send me) one email so I know you are out there and know what you offer is tolerable. More than that and you're working against yourself. When you push email at me, you're pushing me away". "Frequency of messaging in an attempt to reach that elusive new goal of 'engagement' turns me off".

"No, I don't want a 'relationship' with a rental car, banana, gallon of gas, trash bag, PC antivirus software, television, automobile or the providers thereof. When marketers add to, rather than reduce, 'cognitive overload', I unsubscribe". "Don't stalk me, if I want something I'll find you". "I don't want to talk with you after the transaction, it's over. Done. Kaput".

"Marketing people too often understand interactions with customers as an opportunity to scream their messages at them. Unfortunately too few are genuinely interested to listen what is important to the customers in context of their experience with the product or service. It is not the way to build trust in relationship."

Some marketers have hardly covered themselves in glory in the way they have used digital technology. Many promotional emails are technically 'spam' - untargeted and lacking relevance to customer needs. In addition, the content offered is often over-hyped and lacks substance and granularity.

Sales follow-ups can be equally unfocused. For example, I often download vendor white papers and case studies. Sales call follow-ups may happen months later (when I have forgotten the content) or the next day (when I have not read the content). Either way, the sales question is often a scripted and inappropriate "do you want to buy something?" rather than evaluating my contact details and profile and routing me to a relevant Analyst Relations or Investor Relations representative for stakeholder nurturing and development.

Some vendors, such as Virgin Media and BT, alienate their own loyal customer bases by offering cheap price deals that are only available to non-customers. Others make it difficult for customers to unsubscribe, cancel a contract, or understand their pricing programmes. This is reflected in the 2012 Edelman Trust barometer research that shows consumers trust CEOs and their marketers less. Consumers trust 'a technical expert in the company', 'a person like myself' and 'a regular employee' much more highly.

We trust 'someone like us', even when we don't know them personally - hence the importance of social media. Digitally savvy customers sense digital tricks and techniques online, and warn off friends and followers when fair play is not being followed.

Marketers need to have the discipline to use customer data in a respectful and measured way that adds value from a customer perspective. Too much digital marketing today is sales / price promotions. Early text promotions on mobile phones are going the same way - with no unsubscribe link or reply mechanism, so there's no way out.

Marketers can use digital marketing technologies to deliver relevant, personalised and exciting digital experiences for their customers and potential customers. But this is not easy. It requires investment in people, process and the correct technology. Short-cutting this process using indiscriminate spamming and message blasting actually damages brands and results in diminishing longer-term returns from marketing investments. The customer trust is gone.

In summary, marketing needs to act in a responsible manner, using the digital tools at its disposal to add value to customer experiences. The time for a 'land grab' for customer attention is over, and it is jeopardising marketing's own image and credibility with customers. A new, enlightened approach to responsible digital marketing is required.

Late in 2011 Quocirca conducted a research project across the USA and Europe to investigate what would be top of mind for CIOs and their management teams in 2012. We asked respondents to select their top 5 priorities from a list of 15 hot IT issues. These ranged from desktop upgrades, through various types of cloud deployment, network issues to improving the way applications are delivered.

Top of the list by a long chalk, selected by more than half the 500 respondents as a top 5 issue, was application performance management (APM). Perhaps this is not surprising; the other issues with high scores included private cloud deployment, data centre virtualisation, optimising the application lifecycle, deploying new customer applications and business transaction management. All of these involve delivering more effective applications to the business, but APM is about ensuring that this goal is actually achieved.

As with any investment that a business makes, ensuring that it delivers as promised requires measurement. Ultimately IT is about delivery of the applications that enable the business, be they utilities such as email and document management systems or core applications that drive the business processes that differentiate one business from another. APM is about measuring the effectiveness of applications and therefore IT.

APM tools enable the proactive monitoring of the various factors that affect the overall performance of an application and ultimately the experience of its users. This includes the various application software layers (database, application server etc.), the network and user access environment. APM tools also provide the ability to see how performance changes through time. The output is actionable advice on how to maintain and improve application performance levels.
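
A commercial APM suite instruments each of those tiers, but the basic measurement loop can be illustrated very simply: probe the application, record the response time and keep a history so that trends (and degradation) become visible. The sketch below is only a toy, and the URL is a placeholder.

```python
# Toy illustration of the measurement loop an APM tool automates: probe an
# application, record the response time and keep a history. URL is a placeholder.
import time
import urllib.request
from collections import deque

history = deque(maxlen=1000)   # (timestamp, response time in seconds)

def probe(url: str = "http://example.com/") -> float:
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()
    elapsed = time.perf_counter() - start
    history.append((time.time(), elapsed))
    return elapsed

def average_response_time() -> float:
    return sum(seconds for _, seconds in history) / len(history)

if __name__ == "__main__":
    for _ in range(3):
        print(f"Response time: {probe():.3f}s")
        time.sleep(5)
    print(f"Average over recorded probes: {average_response_time():.3f}s")
```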

Consistently through the research it was CIOs (20% of the sample) who recognised the importance of these issues and expressed greatest concern about their organisation’s ability to address them. That is not to say other IT managers were complacent; they were not far behind their bosses in most cases.

There was widespread recognition of the pressure to deliver better application performance, with 70% overall saying user demand will increase. CIOs were particularly worried, with 80% saying that they did not have application performance metrics well mapped to business goals and that monitoring needed to be more proactive. This latter point was consistent across the industries covered by the survey, which were ecommerce, financial services, technology and a range of other commercial organisations.

The value of being able to better measure application performance and deliver measurable improvements efficiently goes beyond business and user satisfaction. One of the key aims, especially for CIOs, was to free up their staff to focus on more strategic goals rather than just fighting to keep the lights on. There was a clear willingness to invest in APM tools that deliver on their promise rather than just seeking out those that cost the least. IT managers recognise that being able to measure the performance of their applications is the only sure-fire way of ensuring all IT investments are delivering as promised.

Quocirca’s report “2012 – The year of Application Performance Management (APM)” is free for download here.

The report includes a self-evaluation tool to enable readers to measure where their organisation sits in terms of APM maturity.

Quocirca will be presenting the report findings at two webinars on June 28th and at a UK seminar on July 5th; links below:

Too many creators of content worry only about how fast they can create the content, rather than about how easy the user will find it to absorb.

I came to this conclusion after being asked to complete an online survey. The survey was being run by a major public opinion company and was about my views on the facilities available to me to play my favourite sport, squash.

The questionnaire started OK with some general questions and then went into a series of specific questions relating to facilities, access, friendliness and so on. After five minutes I began to wonder how much longer it would go on, so I looked at the per cent complete and it said 50%. I had hoped I was more than 50% complete, but having got that far I decided to persevere. I filled in two more rounds and looked again, and the per cent complete was still 50%, at which point I stopped.

I write about usability and accessibility, so I felt I could not just forget this; I found a 'contact us' button on the survey and complained. The good news was that I got an answer the same day, so plus points for the organisation. The bad news was that the reason the per cent complete did not move is that I was answering the questions relevant to my sport, which were part of a bigger survey covering many sports. This meant that the per cent complete was meaningless; I was probably more than 90% complete, but I did not have the time or the inclination to try to finish the questionnaire.

The problem was that they had thought creator and not user, and the result was that I, and probably many other people, aborted the process, and the survey results were less useful than they could have been.
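
The fix is trivial once you think user rather than creator: compute progress against the questions this respondent will actually see, not against the whole multi-sport questionnaire. The question counts below are invented purely for illustration.

```python
# Progress computed over the respondent's own branch rather than the whole
# survey; the question counts are invented for illustration.
def percent_complete(answered: int, applicable_questions: int) -> int:
    return round(100 * answered / applicable_questions)

questions_in_whole_survey = 400   # every sport, every branch
questions_for_my_sport = 40       # the branch this respondent actually sees
answered_so_far = 36

print(percent_complete(answered_so_far, questions_in_whole_survey))  # 9  - misleading
print(percent_complete(answered_so_far, questions_for_my_sport))     # 90 - what the user should see
```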

Many years ago, when I first started writing business reports, I went on a course. I do not remember much about it except the adage 'think reader not writer'. Later I learnt that Shaw ended a letter to a friend with 'Sorry this letter is so long, but I did not have time to write a shorter one'.

I have updated the sentiment to 'Think user not creator', and I hope the reasons are obvious; they include:

The user will be pleased by a process that is quick, easy and requires the minimum of thought to understand.

There will normally be multiple users and therefore a little extra effort by the creator will be multiplied to a large reduction in effort for the user community.

Clarity will reduce, hopefully eliminate, the number of users who abort the process midway, whether that is completing a survey, buying a product, agreeing an action or just being better informed.

So whenever you create a digital artifact, web site, mobile app, on-line document or even a blog like this one, please 'Think user not yourself'. Conversely, if you are a user of a digital artifact and it is clear that the creator thought of themselves rather than you, tell them.

In recent Quocirca research, businesses report that, on average, their system administrators (sys-admins) make errors carrying out about 6% of tasks. This might not sound like much, but it actually adds up to quite a big number.

If system administrators carry out an average of 10 tasks per day, or 50 per working week, a 6% error rate is 3 errors per week or, assuming around 50 working weeks, roughly 150 per year. And remember, these are errors made under privilege. “Normal” users may accidentally delete a file or send an email to the wrong recipient. Privileged users may be reformatting a disk drive or writing new rules for a firewall. Here errors may lead to lost data, major security vulnerabilities or inconvenienced users who can no longer access the systems they need to do their job.

The degree to which errors are made varies from one organisation to the next; the research shows industrial organisations to have the highest error rate and retail ones the lowest. This may be because industrial organisations deal with less regulated data, but they are still vulnerable to system outages caused by errors.

Making the task of identifying target devices requiring maintenance easier and getting system administrators to confirm the identity of devices and their intended actions before carrying them out can mitigate the problem and reduce overall error rates.

I took a lot away from the event. What struck me were the oft-mentioned challenges organisations face in their bid to attain a single version of the truth—Data Quality and Data Governance. These challenges were not restricted to organisations intent on implementing an MDM solution, but were in general something faced by many.

What is impressive about Gartner events, and this was no exception, is their ability to collate existing and, for want of a better word, should-be practices into a structured, workable framework. This was evident in Gartner’s ‘Seven building blocks for MDM’, which touch on Vision, Strategy, Metrics, Information Governance, Organisation and Roles, Information Life Cycle and Enabling Infrastructure. Gartner's assertion could not have been more accurate. Organisations are aware of these practices, and some have already adopted them in one shape or form, but most don't realise their importance in the successful delivery of an MDM or data integration project.

These practices, which practitioners have been preaching for a long time, amount to a business-driven, holistic approach to MDM.

During the course of the event, I spoke to many delegates and they all had one common question—how do we deal with data governance and data quality?

Governance as a concept is not new. In an MDM context, Gartner has defined MDM Governance as ‘the specification of a framework for decision rights and accountability to encourage desirable behaviour in the valuation, creation, storage, use, archiving and deletion of master data.’ Decision making and accountability become a thorny issue, considering master data is shared across functions and lines of business. Addressing this challenge plays a big role in putting in place an effective Data Governance program.

In a survey by Gartner, 38% of respondents indicated post-implementation that they should have more forcefully managed the analysis and processes pertaining to the initial data quality of the source system master data.

This challenge is illustrated by one delegate's question to me: "What is the difference between Data Quality and MDM?". The delegate went on to say they were considering carrying out a data quality initiative, apparently having already implemented an ‘MDM’ solution. The question and statement laid bare the lack of understanding of the importance of Data Quality in an MDM implementation, and in business processes in general.

It was a well-attended summit and we left the arena with one thing in mind—ensure organisations understand the pivotal roles Data Governance and Data Quality play in MDM and Data Integration, and continue to help them achieve their goal.

The human race is spending more and more time inputting information into electronic devices of all types. So it is important that we find easier, faster and more accurate ways of transferring the information from our heads to our electronic beasts.

Using a keyboard has been the way to do this since the beginning of the computer age. More recently, voice recognition has taken off but still accounts for a small percentage of the information entered. Video cameras and audio recorders now account for most of the new content but do not displace much of the text content being produced. Gestures are the latest input method but are really only used for controlling the device, not for input, although we might see some simple gestures for hello, goodbye, yes, no and so on. Thought transference is in the labs, but it will be some considerable time before I can think this sentence and then see it on the screen.

All of this suggests that typing is going to remain a major method of input to electronic devices for years to come. To make matters worse, devices are getting smaller so that a full size QWERTY keyboard becomes impractical. Tablets and smartphones with touch screens do not even give any tactile feedback, although this may change in the next few years.

So, as typing is going to remain and the physical interface is not going to improve, how can we make it easier, faster and more accurate? Predictive text has been around for some years, especially for 12-key telephone input, but has been of limited use because the predictions were often not right and just got in the way.

KeyPoint Technologies (KPT) has extended the concept of predictive text technology with new methods and greater intelligence, to such a degree that typing the 1,700-odd characters above should require fewer than 500 key presses. With that increase in speed we should all become more productive, and on-screen keyboards would become acceptable input devices for more than just a quick note. This helps to narrow and bridge the gap, what KPT describes as 'the chasm of inutility', between the desires of users and the capabilities of input devices. To promote the technology, KPT has announced the Open Adaptxt engine; this is an open source version of the engine, freely available for a variety of mobile platforms.

What does the engine do that makes it so much more productive than standard predictive text? It uses a collection of techniques which include:

Intelligent prediction. As you type, it will predict the word you are typing, not just from the letters you have typed but also from the context of the sentence and your personal word usage. This greatly increases the chance that the word you are trying to type will be in the prediction list, and means fewer characters need to be typed. Further, it will predict the next word before you even start typing; it can also predict whole phrases when that would be helpful.

Intelligent error processing. If you type a word that is not recognised, it will provide a list of alternatives. If a QWERTY keyboard is being used, these alternatives will include those that would occur because of typical typing errors, for example letters typed in the wrong order or adjacent letters ('a' instead of 's'). It can also automatically correct the word when you press space, and will deal with capitalisation of proper names and acronyms. A toy sketch of both of these ideas appears after this list.

There are further methods for specific issues that complete the engine.
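
The toy sketch referenced above illustrates the two ideas in miniature: next-word prediction learned from the user's own previous text, and correction of an adjacent-key slip on a QWERTY layout. It is only an illustration of the general techniques; Adaptxt itself is far more sophisticated and this is not its implementation.

```python
# Toy illustration of next-word prediction from a user's own text and
# adjacent-key error correction; not Adaptxt's implementation.
from collections import Counter, defaultdict

ADJACENT_KEYS = {"w": "qase", "e": "wsdr", "r": "edft", "s": "awedxz"}  # tiny extract of a QWERTY map

class ToyPredictor:
    def __init__(self) -> None:
        self.bigrams = defaultdict(Counter)   # previous word -> likely next words
        self.vocabulary = set()

    def learn(self, text: str) -> None:
        words = text.lower().split()
        self.vocabulary.update(words)
        for prev, nxt in zip(words, words[1:]):
            self.bigrams[prev][nxt] += 1

    def predict_next(self, prev_word: str) -> list:
        return [word for word, _ in self.bigrams[prev_word.lower()].most_common(3)]

    def correct(self, typed: str) -> str:
        if typed in self.vocabulary:
            return typed
        for i, ch in enumerate(typed):                    # try a single adjacent-key slip
            for alternative in ADJACENT_KEYS.get(ch, ""):
                candidate = typed[:i] + alternative + typed[i + 1:]
                if candidate in self.vocabulary:
                    return candidate
        return typed

p = ToyPredictor()
p.learn("please send the report today and send the invoice tomorrow")
print(p.predict_next("send"))   # ['the'] - learned from this user's own phrasing
print(p.correct("thw"))         # 'the'   - 'w' sits next to 'e' on a QWERTY keyboard
```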

Adaptxt is being marketed as a general-purpose solution that should benefit all users by speeding up text entry from a keyboard. However, it should be of particular interest to users with limited dexterity, who type slowly and are more likely to hit the wrong key. In fact, it was originally developed to help a relative who had lost an arm to type more easily.

I am keen to see examples of Adaptxt being built-in to applications and will write about them and hopefully with them soon.

At its recent Analyst Summit in San Francisco, HP delivered a strong vision of how it aims to grow its printing revenues across consumer, SMB, enterprise and commercial markets. Whether it's consumer web-aware printers, retail publishing such as SnapFish, managed print services (MPS) or digitising commercial print processes, HP demonstrated a range of products and services and an integrated go-to-market strategy that will enable it to extend the reach of its vast portfolio.

HP certainly has a strong vision to integrate its cloud, mobile and security offerings, and the one area where it is clearly able to exploit the convergence of these trends is printing. HP has the technology expertise in each of these areas to give it a competitive advantage over its traditional print and copier competitors, who are all looking to capture more revenue from products and services in a mature market that HP currently dominates.

HP’s Imaging and Printing Group (IPG) grew its revenues by 7% in 2010, and overall IPG accounted for 20% of HP’s revenue. Supplies represent 67% of overall IPG revenue, with commercial printer hardware and consumer printer hardware accounting for 22% and 11% respectively. The consumer market for printers is highly commoditised, so HP is increasing its focus on grabbing a larger share of the commercial market. Commercial printer hardware shipment growth is important, not only for revenue but also for the ongoing supplies revenue these devices can deliver.

HP’s vision for its IPG business includes having an “ecosystem for on- and off-ramps and a comprehensive cloud-based platform”. In simple terms, this means enabling users to connect to any HP networked printer, multifunction peripheral (MFP), print shop and retail storefront from any device, securely and seamlessly, wherever the user is at any one time. Behind this objective is the goal of driving higher value pages, such as colour pages, which generate much more revenue than black-and-white ones and in turn drive supplies revenue.

The mobile opportunity

HP also described its innovation around its web-enabled printers, which use the webOS platform. Its ePrint service enables printing from any internet-connected device by sending the output as an email attachment directly to the printer. HP has high hopes for adoption of this among home and business users alike. It shipped 3 million units of its web-enabled printers in Q1 2011 and expects to ship 20 million by the end of this year.
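The mechanism is worth spelling out: each enrolled printer is given its own email address, so printing from any device reduces to emailing the document to that address. The snippet below is a minimal sketch of that idea in Python; the printer address, sender and SMTP relay are placeholders, and this is not HP's published ePrint API.

# Illustrative only: emailing a document to a (hypothetical) printer address.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "user@example.com"                 # placeholder sender
msg["To"] = "my-printer@hpeprint.example"        # hypothetical printer address
msg["Subject"] = "Print job"
msg.set_content("Please print the attached document.")

with open("report.pdf", "rb") as f:              # the document to print
    msg.add_attachment(f.read(), maintype="application",
                       subtype="pdf", filename="report.pdf")

with smtplib.SMTP("smtp.example.com") as relay:  # placeholder SMTP relay
    relay.send_message(msg)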

Indeed, the advent of smartphones and tablet devices such as the iPad has generated a new wave in development of printing solutions for platforms such as the BlackBerry, Android and iOS. As well as ePrint, HP has also worked closely with Apple to develop direct printing support for HP printers and MFPs in the latest release of AirPrint available on devices running iOS 4.2 or later. HP also announced that it would provide support for Google’s Cloud Print later this year.

The launch of its webOS TouchPad tablet, also due this year, will undoubtedly bring native driver support for HP devices into webOS and, as such, HP is well positioned to integrate the mobile and printing experience for these devices, although it remains to be seen how popular they will be. While HP has brought mobility to the forefront of its print strategy, other vendors such as Xerox and Ricoh have also released products for printing to their printers and MFPs from smartphones.

Growing service and solutions revenue

HP is also looking to drive high-value recurring business through managed print services (MPS), where it currently has 3,000 customers. MPS is a burgeoning market, offering printer vendors an opportunity to capture more pages through managing office, commercial and production print environments. HP is already seeing the fruits of its joint go-to-market MPS activities between IPG and its Enterprise Business (EB) unit. These have resulted in a 200% rise in joint IPG/ES total contract value, with 74% of the HP enterprise funnel including joint pursuits. HP also indicated that its average deal size is seven times higher through joint activities.

HP is certainly well positioned to capitalise on these joint opportunities and the two groups seem to be well aligned in their go-to-market approach. HP intends to further drive the value of MPS contracts by increasing the sales of attached document workflow solutions. In 2010, these accounted for 75% of its MPS contracts, compared to 25% in 2008.

Having developed a strong service portfolio for enterprise clients, HP is now building an infrastructure for its channel partners to deliver MPS to SMBs, encouraging them to move from traditional transactional sales to a contractual model. HP has developed QuickPage, a turnkey service offering that provides billing, account management and financing for channel partners. This hosted infrastructure minimises the resources and investment necessary for channel partners to participate in the lucrative MPS market.

An expanding print service provider ecosystem

Accelerating the analogue-to-digital transformation in graphics is another opportunity for HP to drive supplies and page growth in the commercial printing market. HP estimates that 1.46 billion pages were printed on its high speed inkjet presses in 2010. The fact that over 95% of graphics pages such as labels and packaging, signage, publishing and collateral are still analogue clearly represents a huge opportunity for HP.

As a technology giant, HP has the breadth and scale to operate in all areas of the print industry, covering consumer, SMB, enterprise and commercial print. Its vast integrated go-to-market infrastructure sets it apart from some of its competitors, and the joint approach with its Enterprise Services business will certainly boost MPS revenues. But in the enterprise and commercial print arena it faces stiff competition from rivals such as Xerox and Ricoh, who are both adapting their portfolios to capture wider enterprise print opportunities. HP has got its fingers in many print pies, but it is the ability to execute on increasing page growth through its products and services that will ultimately drive its revenues in the future.

Nearly two months ago, we announced the formation of The Open Group Trusted Technology Forum (OTTF), a global standards initiative among technology companies, customers, government and supplier organizations to create and promote guidelines for manufacturing, sourcing, and integrating trusted, secure technologies.

The framework outlines industry best practices that contribute to the secure and trusted development, manufacture, delivery and ongoing operation of commercial software and hardware products. Even though the OTTF has only recently been announced to the public, the framework and the work that led to this whitepaper have been in development for more than a year: first as a project of the Acquisition Cybersecurity Initiative, a collaborative effort facilitated by The Open Group between government and industry verticals under the sponsorship of the U.S. Department of Defense (OUSD (AT&L)/DDR&E).

The framework is intended to benefit technology buyers and providers across all industries and across the globe concerned with secure development practices and supply chain management. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

More than 15 member organizations joined efforts to form the OTTF as a proactive response to the changing cyber security threat landscape, which has forced governments and larger enterprises to take a more comprehensive view of risk management and product assurance. Current members of the OTTF include Atsec, Boeing, Carnegie Mellon SEI, CA Technologies, Cisco Systems, EMC, Hewlett-Packard, IBM, IDA, Kingdee, Microsoft, MITRE, NASA, Oracle, and the U.S. Department of Defense (OUSD(AT&L)/DDR&E), with the forum operating under the stewardship and guidance of The Open Group.

Over the past year, OTTF member organizations have been hard at work collaborating, sharing and identifying secure engineering and supply chain integrity best practices that currently exist. These best practices have been compiled from a number of sources throughout the industry including cues taken from industry associations, coalitions, traditional standards bodies and through existing vendor practices. OTTF member representatives have also shared best practices from within their own organizations.

From there, the OTTF created a common set of best practices, distilled them into categories and eventually compiled them into the O-TTPF whitepaper. All this was done with the goal of ensuring that the practices are practical, outcome-based, not unnecessarily prescriptive and do not favor any particular vendor.

The framework

Best practices were grouped by category because the types of technology development, manufacturing or integration activities conducted by a supplier are usually tailored to suit the type of product being produced, whether it is hardware, firmware, or software-based. Categories may also be aligned by manufacturing or development phase so that, for example, a supplier can implement a secure engineering/development method if necessary.

Provider categories outlined in the framework include:

Product engineering/development method

Secure engineering/development method

Supply chain integrity method

Product evaluation method

Establishing conformance and determining accreditation

In order for the best practices set forth in the O-TTPF to have a long-lasting effect on securing product development and the supply chain, the OTTF will define an accreditation process. Without an accreditation process, there can be no assurance that a practitioner has implemented practices according to the approved framework.

After the framework is formally adopted as a specification, The Open Group will establish conformance criteria and design an accreditation program for the O-TTPF. The Open Group currently manages multiple industry certification and accreditation programs, operating some independently and some in conjunction with third party validation labs. The Open Group is uniquely positioned to provide the foundation for creating standards and accreditation programs. Since trusted technology providers could be either software or hardware vendors, conformance will be applicable to each technology supplier based on the appropriate product architecture.

At this point, the OTTF envisions a multi-tiered accreditation scheme, which would allow for many levels of accreditation, from enterprise-wide accreditation to accreditation of a specific division. An accreditation program of this nature could provide alternative routes to claiming conformity to the O-TTPF.

Over the long-term, the OTTF is expected to evolve the framework to make sure its industry best practices continue to ensure the integrity of the global supply chain. Since the O-TTPF is a framework, the authors fully expect that it will evolve to help augment existing manufacturing processes rather than replace existing organizational practices or policies.

There is much left to do, but we’re already well on the way to ensuring the technology supply chain stays safe and secure. If you’re interested in shaping the Trusted Technology Provider Framework best practices and accreditation program, please join us in the OTTF.

Over the last two decades, technology innovation has brought the world closer together and has given people more ways to communicate with each other. While these changes have brought new heights in productivity and created a more mobile, global, and “always-on” world of work, this rapid transformation also created new challenges in today’s business environment.

Information workers and IT professionals are each struggling to manage multiple systems for communications—desktop and mobile phones, email and voicemail, Voice over Internet Protocol (VoIP), instant messaging, and web- and videoconferencing. While many of these individual communication tools are considered indispensable, they do not necessarily work well together to help people collaborate and increase their productivity. To foster efficient communication and collaboration within the workforce, organisations need a way to streamline both one-to-one and one-to-many communications, giving employees access to the information they need, when they need it.

Companies face high costs when using traditional communication methods. Long-distance charges, maintenance costs for fax and voicemail systems, and travel costs for employees all cut into company margins. Increasingly aware of the bottom line, organisations frequently look for more cost-effective means of communication and collaboration across all boundaries. But the new methods must be more than just cost-effective; they have to be fully accessible and user-friendly, and they should not trigger extra costs such as additional IT support or staff requirements. These issues lead to large IT departments and an inflated cost of ownership.

Working anytime, anywhere

Business communications are increasingly complex and require workers to manage multiple devices, applications, and face-to-face interactions in an attempt to stay productively connected with one another. As the information worker population shifts from working in headquarters locations to working anywhere, anytime, and across corporate boundaries, the challenge of reaching key decision makers in a timely manner increases. The inability to reach others at critical times results in numerous delays and lost productivity. Star has found that sometimes businesses slow down or even halt mission-critical projects due to employees’ inability to reach key decision-makers.

As soon as the challenges of this sort of person-to-person latency have been addressed, the challenge becomes one of boosting the effectiveness of teams by improving collaboration. Unified Communications supports such efforts by shifting communications, as appropriate, from asynchronous channels (email, voicemail) to synchronous modes such as instant messaging, PC-to-PC audio and video, electronic whiteboarding, web conferencing, application sharing, and mobile access.

Building blocks of Unified Comms

Presence Information: Knowing The Availability Of Colleagues: Presence information lets people know whether others are available (e.g., online, away, busy, in a meeting, out to lunch). People can publish their availability so others know how best to reach them. The system provides some automation; for example, if a user has not touched the keyboard or mouse for a set number of minutes, that user’s presence information turns to “away.” Additional state information can also be automatically published using information from Microsoft Outlook, Communicator, SharePoint, calendaring and the PBX or IP telephone system—for example “in a meeting,” “on the phone,” “out of the office,” or “free in x hours.” In a Forrester survey commissioned by Microsoft in 2009, 59% of workers stated they would save more than 15 minutes per day with this feature.

Instant Messaging: More Immediate Communication: Instant messaging (IM) is the capability to send and receive text messages in real time over the Internet or a corporate network. The recipient typically sees an alert on the desktop indicating an incoming message and from whom. Enterprise IM maintains this capability within, and increasingly beyond, the corporate network, adding security that does not exist with public IM systems like AOL, Yahoo!, MSN, and Google Talk.

Web And Videoconferencing: Cost And Time Savings: Ad hoc Web and video conferencing improves efficiency in real-time decision-making by providing easy setup, links to presence management, and point-and-click conference launches. Value increases when the time to set up a videoconference drops to near zero. 60% of workers surveyed for a Forrester report indicated that they could save from 1 to 5 hours per week using real-time conferencing.

Hosted IP Telephony: Hosted IP telephony makes it possible to communicate via telephone over an IP network instead of over traditional PBX telephony infrastructure. Voice communications can be integrated with email, calendaring, voicemail/unified messaging, IM, and conferencing to provide a streamlined experience rather than the disconnected experience provided by legacy systems today. Further, IP telephony can significantly reduce the cost of telephone communications. Companies interviewed for this study were engaged in pilot testing of software-powered VoIP, including PC-to-PC calling using various devices and integration of voice with email, IM, and conferencing.

One-Click Communication: We are approaching a time when all you need to find someone is their name, and all the means of contacting them are available immediately (a simple illustrative sketch follows at the end of this list). Several of the organisations interviewed are looking toward a single identity for each employee that aggregates all their contact information (even their areas of expertise) stored in Active Directory with the ways staff in the organisation communicate (phone, mobile device, conferencing, IM, email, calendaring). Finding the right person becomes faster, and determining their availability and communicating via their preferred, context-dependent medium is smoother because presence is integrated into Microsoft Office applications.

Mobility: A minority of users in the interviewed companies carry mobile devices that have been integrated into the UC platform. For some organisations, mobility is an important part of their UC solution, while for others it is an adjunct set of capabilities for select users. Certain mobile devices can run email and IM clients, integrating the mobile phone with the individual’s presence, IM, and email. Further, with a mobile device, users can open and modify email attachments, attachments received via IM, and other Word, Excel, or PowerPoint documents.
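To make the presence and one-click building blocks above more concrete, the sketch below (in Python) derives an automated presence state from idle time, calendar entries and telephony status, then uses it to pick a contact channel from a directory record. The thresholds, state names, directory fields and preference rules are illustrative assumptions rather than the behaviour of any particular UC product.

from datetime import datetime, timedelta

# Illustrative thresholds and states; real UC platforms apply their own rules
# using data from the client, the calendar and the PBX/IP telephony system.
AWAY_AFTER = timedelta(minutes=5)

def presence(last_input, now, meetings, on_phone=False):
    """Derive a presence state from idle time, calendar and telephony."""
    if on_phone:
        return "on the phone"
    if any(start <= now < end for start, end in meetings):
        return "in a meeting"
    return "away" if now - last_input > AWAY_AFTER else "available"

# Assumed channel preference for each presence state.
PREFERRED = {"available": ["phone", "im", "email"],
             "in a meeting": ["im", "email"],
             "on the phone": ["im", "email"],
             "away": ["email"]}

def best_channel(contact, state):
    """Pick the first preferred channel the contact actually publishes."""
    for kind in PREFERRED.get(state, ["email"]):
        if kind in contact["channels"]:
            return kind, contact["channels"][kind]
    return "email", contact["channels"].get("email")

# Hypothetical directory record for one employee.
jane = {"expertise": ["payroll"],
        "channels": {"im": "jane@corp", "phone": "+44 20 7946 0000",
                     "email": "jane@example.com"}}

now = datetime(2011, 3, 1, 14, 0)
state = presence(now - timedelta(minutes=2), now,
                 meetings=[(now - timedelta(minutes=30),
                            now + timedelta(minutes=30))])
print(state, best_channel(jane, state))   # in a meeting ('im', 'jane@corp')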

Unified Comms streamline communications

Unified Communications technologies streamline communications for end users, increase operational efficiency for IT professionals, and provide built-in protection for an organisation, while serving as a future-ready foundation to enable business process innovation.

For many end-users, communications take place in disparate, disconnected silos. For voice communications, you turn to the desktop or mobile phone. For email and instant messaging, you turn to your PC. With the multitude of applications and tools from which to communicate, end-users face a chaotic environment. WorkLife, Star’s managed communications platform, breaks down traditional silos and allows end-users to collaborate within the context of the desktop and mobile applications they use every day, with the ability to switch seamlessly between modes.

An organisation’s internal communications systems often consist of a set of diverse applications and capabilities, making it difficult for employees to use the various systems and equally challenging for the IT departments to deploy, manage, and maintain the systems—all of which leads to user frustration and high total cost of ownership for IT. Unified Communications simplifies the deployment and management of this infrastructure to make IT operations more efficient and reduce the frustration associated with disparate systems.

Increased productivity, fostering collaboration

Unified Communications offers significant benefits to organisations, including increased individual and team productivity, fostering of collaboration, improved relationships, enhanced security, and enterprise-class scalability. By granting instant access to team members, partners, suppliers, and customers across geographies, time zones, and organisational boundaries, timely information can flow rapidly and efficiently. Organisations can improve team results by using Unified Communications to share ideas and information faster and more effectively.

For many businesses, the traditional role of the CIO is to help drive the company’s business strategy forward through the appropriate application of technology to automate processes, reduce costs and open up access to new markets and opportunities. There are many challenges facing IT leaders, ranging from mobile working to security and data protection. Unfortunately, most of the people working in the IT department today are primarily occupied with maintaining and updating existing systems, or working hard just to ‘keep the lights on’, so to speak. If they are not doing routine work of this nature, they are typically fire-fighting as entropy sets in and existing systems and processes fail because they have become outdated.

This means that most people working in IT are working reactively, and it’s no surprise they find it difficult to do more with an ever-decreasing IT budget. The result for most IT departments is that they are now being challenged by business leaders who do not believe that IT is serving them sufficiently to help meet corporate goals. Having recently conducted a survey of 360 senior IT managers across every sector of UK enterprise, we discovered that 60% of managers cite administration and troubleshooting as the main time consumers within their jobs. Now is the time to challenge this poor application of important resources and ensure that the role the IT department plays is to secure business success by accelerating the execution of business objectives. So the big question for CIOs and their IT people is: how do you move from being seen as the maintenance team to being a key strategic enabler?

Why IT matters

Despite the fact that IT can be harnessed to provide an important driving force for any organisation, 44% of IT managers feel that they are not consulted on business issues because senior managers see them as the maintenance engineers. This is because they are often locked into the hardware and software upgrade and maintenance cycle, an area proving to be increasingly challenging with dwindling budgets. This cycle is holding them and their business leaders back from realising their potential.

This is not helped by the fact that many managers still feel that IT vendors do not really understand small and medium sized companies in the UK, nor have a workable business model to match their needs. Historically, the mid-market has been neglected by the larger vendors, mainly because it was seen as more desirable to focus on large enterprises. There has been a recent shift in attention but it’s not nearly enough. 11% of respondents in the survey said they are already using managed services hosted by a third party; this is giving them the platform they need to get more out of the IT resources they already have and freeing them from undesirable day-to-day tasks to focus on activity that adds value to the business. This is the strategic and innovative focus that 53% of IT managers believe their role should be about.

Blending IT with cloud computing services

For some businesses, managed services delivered via a cloud computing platform are the only way they can afford to deliver new services to their staff. However, many businesses are unsure how to link hosted services and integrate them with existing systems and 38% of IT managers in UK SMEs are challenged by the ‘perceived’ loss of control.

Business leaders want their IT to be better, faster and cheaper, and technology needs to provide the platform that delivers business agility, helping organisations to focus their existing people and resources where they are needed most. To do this they must align IT resources to the business strategy, not just to keeping the lights on so existing systems don’t fail. This is an opportunity for everyone concerned, although it is often perceived as exactly the opposite. As time and money become more stretched, the warped view that cloud computing is a threat to the IT department is now beginning to be seen for what it is.

In smaller businesses, IT departments do not always have the expert and specialist skills, or the budget, to take on new solutions and support them. Cutting costs is still the big issue for many UK SMEs, and to do this many are now turning to cloud computing services that provide easy access to enterprise-grade solutions with no hardware or software to buy. The services are easy to use and pay for, at a low and predictable monthly per-user fee. It’s a great way to cut out the drain of capital from the business. One of the key benefits of cloud computing is the on-demand aspect, meaning that businesses only pay for the services they consume. The expenditure is accounted for as an operating expense, which is usually much more desirable.

These services are appealing because they can be delivered securely to any employee, wherever they are and at any time. Deploying the right technologies to the business without having to recruit more IT people is a great advantage.

Seeking operational excellence

Every CEO and CFO wants and expects excellence from the IT investments that they sign off. At the very least they want to ensure that any operational and financial risks are mitigated. What is often taken for granted is how difficult it is to run IT systems with the required power and cooling, not to mention the right level of security to keep the environment safe and enough resiliency and backup systems to ensure business continuity. What many of them are now realising is that their data and applications are much safer and better provisioned when they are hosted in a professionally run third-party data centre and wrapped in a solid Service Level Agreement. This is in stark contrast to business-critical systems hastily cobbled together in their own facilities, which simply can’t compete with the level of investment and sophistication on offer from a managed service provider.

As more business leaders push their IT departments down this route the role of the CIO is now becoming one of managing relationships rather than managing technology and getting lost in the detail. This is an exciting proposition as cloud computing is freeing up IT professionals to think more strategically and offload the donkey work to someone who can do it better, faster and cheaper, allowing them to focus on the key aspects that differentiate the business from its competitors. This is the real role of the Chief Information (or ‘Innovation’) Officer.

1. Technology Designed for Everyone

The technology world enlarged in 2010. Consumers fell in love with the intuitive user interfaces and versatile technologies of the likes of Apple, Facebook and Google. “I love it” is how most users describe their iPad or iPhone. Now consumers want their enterprise applications to offer a similar user-oriented experience.

Consumers want to use technology to connect and collaborate with others. No wonder social networking and mobility are such a compelling combination for businesses and end users alike. Facebook’s mobile users spend twice as much time on Facebook as non-mobile users do. This trend is set to accelerate. Hence SAP acquired Sybase for its mobile apps platform, rather than its database technology.

Traditional consumer brands such as Sony (Vaio), Samsung (Galaxy) and Amazon (Kindle and EC2) sense there is more money to be made in Tech, as does a vibrant new group of entrepreneurs who have developed well over a million consumer apps on various platforms. There are no barriers or caveats to entering the software market anymore.

2. Making Technology Easy to Consume

How do you turn 5 keystrokes into 3? How do you make software that is immediately intuitive and makes obvious sense to users? Can you eradicate training courses and user manuals? Some enterprise software user interfaces look like a flight pilot’s cockpit instrument panel.

Steve Jobs, the Tech industry’s top CEO, loves a clean design and simplicity for Apple’s users. The iPod has 5 keys; the more modern iPad has 3. Jobs launched the iPhone 3G using only 11 presentation slides, only one of which contained any words. BBC Radio 4 recently praised Apple’s use of clear, plain English in its product descriptions, in contrast to Microsoft’s “techno-babble” that can alienate potential customers.

Facebook starts product development from the premise ‘how does this product enable users to communicate and collaborate?’ Features and functions become outputs rather than inputs when viewed in this manner.

3. Getting the Price Point Down

High price is the last great bastion of the technology industry. But now many vendors offer similar ranges of products to address similar markets; the key decision-making criteria have become availability, brand and, most importantly, price, especially as vendor pricing is increasingly transparent and available on the Internet. There are now many options open to vendors who want to offer more customer value and encourage product trial.

BI vendors such as QlikTech, Tableau, TIBCO Spotfire, and MicroStrategy offer generous free trial product downloads. Open Source vendors such as Jaspersoft, Pentaho and SugarCRM offer free entry-level products. Spiceworks’ network management software is free if you are prepared to accept the advertisements that come with it. Many excellent applications, such as Google Analytics for example, are totally free of charge. Virtually every kind of software platform, application and service is available for rent as a SaaS service in the Cloud.

4. Be different

Competition from now on will be intense and hostile. Recent aggressive moves from industry titans such as HP, IBM, Oracle and Microsoft set the tone. Product innovations are easy to copy and vendors are now stepping on each others’ toes. To insulate themselves against this trend, the top Tech companies have transformed themselves into brands. They hope to encourage a sense of community and belonging, customer loyalty and advocacy, and a feeling that customers cannot do without them.

Brand Finance now rates Apple, Microsoft and IBM as three of the five most valuable brands on earth, ahead of Coke, Mars, Persil and all the other household names. Six of the top 20 most valuable brands are from the Tech industry. The thought-leadership, business model innovations and brand distinctiveness that characterise these vendors are now becoming essential prerequisites for success in Tech.

Those that are truly market-oriented and customer-centric will thrive. Those that remain product-led will find it increasingly hard to attract new customers. Business agility will be key to vendor survival. ‘Be fast and be bold’ as Facebook says. Vendors, customers and users should endeavour to embrace this dictum.

If there are vendors or others who want advice in any of the above, drop me a line and I will be glad to help. It is Xmas after all ;-) And a happy New Year to all our readers!

We're here in the week of November 29, 2010 to explore some major enterprise software and solutions, trends and innovations making news across HP’s ecosystem of customers, partners, and developers.

This customer case study from the conference focuses on AIG-Chartis insurance and how its business has benefited from ongoing application transformation and modernization projects.

To learn more about AIG-Chartis insurance’s innovative use of IT consolidation and application lifecycle management (ALM) best practices, I interviewed Abe Naguib, Director of Global Performance Architecture and Infrastructure Engineering at AIG-Chartis in Jersey City, NJ.

Here are some excerpts:

Naguib: AIG is a global insurance firm, supporting worldwide international insurance of different varieties.

We're structured with 1,500 companies and roughly about eight lines of business that manage those companies. Each group has their own CIO, CTO, COO structure, and I report to the global CTO.

What we look at is supporting their global architecture and performance behavioristics, if you will. One of the key things is how to federate the enterprise in terms of architecture and performance, so that we can standardize the swing over into the Java world, as well as middleware and economy of scale.

When I came on board to standardize architecture, I saw there was a proliferation of various middleware technologies. As we started going along, we thought about how to standardize that architecture.

As we faced more and more applications coming into the Java middleware world, we found that there’s a lot of footprint waste and there’s a lot of delivery cycles that are also slipped and wasted. So, we saw a need to control it.

After we started the architectural world, we also started the production support world and a facility for testing these environments. We started realizing, again, there were things that impacted business service level agreements (SLAs), economy of scale, even branding. So, we asked, how do we put it together?

One of the key things is, as we started the organizational performance, we were part of QA, but then we realized that we had to change our business strategy, and we thought about how to do that. One key thing is we changed our mindset from a performance testing practice to a performance engineering practice, and we've evolved now to performance architecture.

The engineering practice was focused on testing and analyzing, providing some kind of metrics. But, the performance architectural world now has influence into strategies, design practices, and resolution issues. We're currently a one-man or one-army team, kind of a paratrooper level. We're multi-skilled, from architecture, to performance, to support, and we drive resolution in the organization.

We also saw that resolution had to happen quickly and effectively. Carnegie Mellon did a study about five years ago and it said that post-live application resolution of performance issues was seven times the cost of pre-live [performance application resolution].

In other words, we realized that the faster we resolved issues, the faster to market, the faster we can address things, the less disruption to the delivery practices.

Too many people involved

In normal firefighting mode, architecture is involved, development is involved, and infrastructure is involved. What ends up happening is there are too many people involved. We're all scrambling, pointing fingers, looking at logs. So, we figured that the faster we get to resolution, the better for everyone to continue the train on the track.

... I have experience with Quality Center and the improvements that have gone on over the years. Because of our focus, we built our paradigm out of QA and into the performance world, and we started focusing on improving that process.

The latest TruClient product, which is a LoadRunner product, has been a massive groundbreaking point solution. In the last two years, frankly, with HP and Mercury getting adjusted, there’s been kind of a lag, but I have to give kudos to the team.

One of the key things is that they have opened up their doors in terms of the delivery, in terms of their roadmap. I've worked extensively for the last, roughly, year with their product development team, and they have done quite a bit of improvement in their solution.

Good partnership role

They have also improved their service support model; the help desk actually resolves questions a lot faster. We also have a good partnership role, and we feed back the things that we see, which influences their roadmap as well.

This TruClient product has been phenomenal. One of the key things we're seeing now is that BPM solutions are more Ajax-based, and there are more varieties of Ajax frameworks out there than we know how to deal with. One of the key things with the partnership is that we're able to target what we need, they are able to deliver, and we are able to execute.

LoadRunner and TruClient allow us to get in front of the console, work with the business team, capture their typical use cases in a day-in-the-life scenario, and automate that. That gets buy-in and partnership with the business.

We're also able to execute a test case now and bring that in front of the IT side and show them the actual footprint from a business perspective and the impact and the benefits. What ends up happening is that now we're bringing the two teams together. So, we're bridging the gap basically from execution.

... We also started working with the CIOs to figure out a strategy to develop a service-level target, if you will. As we went along, we began working with the development teams to build a relationship with the architectural teams and the infrastructure teams.

We became more of a team model, building more of a peace-maker model. We regrouped the organization, so that rather than resolve and point fingers at each other, we resolved issues a lot faster.

Now, we're able to address the issue. We call it "isolate, identify, and resolve." At that point, if it’s a database issue, we work directly with the DBA. If it’s an infrastructure or architecture issue, we work directly with that group. We basically cut the cycle down in the last two or three years by about 70 percent.

Because there is a change in our philosophy, in our strategy to focus more on business value, a lot more CIOs have started bringing in more applications. We see a trend growth internally of roughly about 20–30 percent every year.

I have a staff of nine. So, it’s a very agile, focused team, and they're very delivery-conscious. They're very business value-conscious, and we translate our data, the metrics that we capture, into business KPIs and infrastructure KPIs.

Because of that metric, the CIOs love what we do, because we make them look good with the business, which helps foster the relationship with the business, which helps them justify transformation in the future.

There is a new paradigm now, they call it the "Escalator Message." In 60 seconds or less, we can talk to a CIO, CTO, COO, or CFO about our strategy and how we can help them shift from the firefighting mode to more of an architecture mode.

If that’s the case, the more they can salvage their delivery, the more they can salvage their effective costs, and the more they can now shift to more of an IT-sensitive solutions shop. That helps build a business relationship and helps improve their economy of scale.

I would definitely send the message out to think in business value. Frankly, nobody really cares as much about the footprint cost, until they start realizing the dollars that are spent.

Also, now, business wants to see us more involved from the IT side, in terms of solutions, top-line improvements, and bottom-line improvements. As the performance teams expand and mature and we have the right toolsets, innovative toolsets like TruClient, we're able to now shift the cost of waste into a cost of improvements, and that’s been a huge factor in the last couple of years.

Last, I would say that in 8,000+ engagements—we're actually closing in on 10,000 events this year—we've seen roughly $127 million in infrastructure savings that we have recouped. Again, that helps to benefit the firm. Instead of being wasted, that money can now be channeled into improvements.

The latest BriefingsDirect podcast discussion examines a new book on application lifecycle management (ALM) best practices, one that offers new methods and insights for dramatic business services delivery improvement.

The topic of ALM will be a big one at this week's HP Software Universe conference in Barcelona. In anticipation, join us as we explore the current state of applications in large organizations. Complexity, silos of technology and culture, and a shifting landscape of application delivery options have all conspired to reduce the effectiveness of traditional applications approaches.

In the forthcoming book, called The Applications Handbook: A Guide to Mastering the Modern Application Lifecycle, the authors evaluate the role and impact of automation and management over an application's lifecycle, as well as delve into the need to gain better control over applications through a holistic governance perspective.

This is the first in a series of three podcasts with the authors of the ALM book, to learn why they wrote it and to explore their major findings. They are: Mark Sarbiewski, Vice President of Marketing for HP Applications, and Brad Hipps, Senior Manager of Solution Marketing for HP Applications. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Sarbiewski: In most large enterprises, applications have been built up over many, many years. You throw acquisitions into that and you end up with layers of applications, in a lot of which there is redundancy. You have this wide mix of technology, huge amounts of legacy, all built different ways, and the business just wants response faster, faster, and faster.

So, we have old technologies hampering us. We have an old approach that we've built that technology on, and the modern world is dramatically different in a whole host of ways. We're changing our process. We're changing the way our teams are structured to be much more global teams, outsourced, nearshore, far-shore, all of that stuff, and the technology is fundamentally shifting as well.

That's the context for why you see all these horror stories and these stats about the businesses' level of satisfaction with the responsiveness of IT, particularly in applications. If you think about it, that's what the business experience is... IT organizations are looking to change the game.

Hipps: A lot of these trends that we talk about—outsourcing, service-based architectures, more flexible methodologies, whether it's iterative or agile—you wouldn't necessarily call any one of those brand new. Those things have been around for a few years now. Many enterprises we speak with and deal with have been leveraging them for a few years in some form or fashion.

If you're an owner of application teams or of a series of applications within an enterprise, these things tend to sneak in. ...you wake up one morning and realize all of a sudden that fundamentally the way your teams have long operated has been changed.

In some ways, it's death by a thousand cuts. No single one of these initiatives is going to force you to take a step back and say, hold the phone, let's figure out if the way we deliver applications now requires us to, in some significant way, rethink the mechanisms by which we conduct delivery.

From my own experience, it's difficult to get the time or the brain space to do that. Usually, you're neck deep in getting the next application out the door. You've got deadlines. You've got other applications or enhancements coming down the pike.

You may not have the time to take a step back and say, "Wow, we're using these different methods" or "We're relying more on outsource teams, so we are not all colocated."

One of the objectives of this book was to do just that. Mark and I had the luxury to take a step back and think about what these trends mean soup-to-nuts for the way applications get stood up and delivered, and how—from an enterprise perspective—we have responded or not responded to those new complexities.

The nature of an application today is that it's not a monolith. It's not owned by a single project team or a program consisting of several teams.

More often than not, it's something that has been assembled using a series of subcomponents, reusable services, or borrowed function points from other applications, etc. It's this thing that is, in the best sense, cobbled together. Rather than writing it all from scratch, we're leveraging what we can.

We can all agree that this makes sense, it’s the right way to do it, it's much more assembly line production versus handcrafting everything, which is certainly the direction we want to be headed in, from a software perspective.

But, that also presents us a lot of new challenges. How do I have visibility or discover the components that are out there, that are available for me to use? How do I trust that those components are reliable, that they are going to behave and perform in the way I want them to? Given the fact that I, as a given developer, didn't actually create it myself, how can I have faith in it? And, how are we going to authenticate all these different pieces?

So you've got these questions. How do we collaborate? How do we communicate? How do we notify each other of defects? How am I aware when something is ready to retest?

Relying on email is, let's just say, less than ideal. And, of course, we may be using different methods. Multiple teams could be using different methods. Those over there are working in agile fashion, we are working in waterfall fashion.

So the catchphrase we have, which may or may not make sense, it's not complexity plus complexity, it's more like complexity times complexity, when you consider modern delivery and its particulars.

Sarbiewski: The idea now is that you need both management and automation to achieve your end-goals.

People have long thought of those things in very narrow ways. They've thought of management of a narrow domain space, like managing requirements and automating GUI functional tests. Those were all good steps forward, important things, but there was little connection between management across the lifecycle and automation across the lifecycle.

Part of what we're trying to get at here is this interplay. You've got to think about both—not only across the lifecycle, but how they interlock—to create the situation where I see what's happening. I see across these very complex endeavors that I'm undertaking; many people, many teams, many stakeholders, lots of projects, lots of interdependencies, so I have that visibility. When we need to step on the gas and go in a particular direction, and speed everything up without blowing everything up, that's when I can rely on nicely integrated automation.

Just about every square inch of the enterprise is automated in some way by software. What it has meant for IT teams is that you now have to understand every square inch of the business, and the businesses are incredibly dynamic. So any part that changes almost drags along, or in some cases, is led by, and has to be led by, innovation in the software to make that happen.

...You need to make software a core competency if you are going to differentiate your business going forward. So it's hugely important.

Hipps: Business can't twitch without requiring some change in a set of applications somewhere ...we've got applications everywhere. They're going to be under constant review, modification, enhancement, addition, etc., and that's going to be an endless stream.

We've got an expectation, given the web world we live in, that these applications, many of them anyway, are going to be always on, always available, always morphing to meet whatever the latest, greatest idea is, and we have got to run them accordingly.

We have got to make sure that once they are out there and available, they are responsive. We have got to make sure that the teams that own them in the data centers are aware of their behaviors, and aware of which of those behaviors are configurable, without even coming back to the application teams.

The legacy view said, "Wow, the software development lifecycle (SDLC) is the end-all, be-all. If I get the SDLC right, if I get requirements and deployment done right, I win." We realize that this is still critical. What we would describe as the core lifecycle is still where it all begins.

If I'm going to really be successful against what it is the business is after, I have to account for the complete lifecycle. All the stuff that's happening before requirements, the portfolio investigation that's occurring, the architectural decisions I am making, has got to hold true across the enterprise, as well, of course, as everything that happens once that thing goes live.

How well connected am I with my operation peers? Have I shared the right information? Have I shared test scripts where possible? Am I linked into service desk? Am I aware of issues, as they are arising, ideally before the business is hearing about it?

Those things are what we mean by getting your arms around the complete lifecycle. It is what's necessary, when you think about the modern delivery of applications.

Sarbiewski: Even in the requirements, there is an aspect that can be a level of automation and a level of management.

Automation can come in when I am building a visualization, a quick prototype, and there are some great solutions that have emerged into the market to help a non-technical user create a representation of an application that has almost the perfect look and feel. We're not talking about generating code. We're talking about using HTML and tools to create the flow, the screen views, and the data input of what an application is going to look like.

... Once we get to that look and feel of an app, at the push of a button, I can interpret all those business rules, all those rules about where was data, what was on the screen, was this data hidden, what was inputted, when did it flow to the next one, under what condition. All of that will get translated into a series of text-based requirements, test assets to test for that logic, and even the results and the rules and the data that needs to be input.

So, I have a process. I have had discussion and used some technology to visualize these requirements. At the push of a button, I automated the complete articulation, with perfect fidelity, including the positive test cases I want to run. I can manage those now, as I always have, and as my systems and teams expect.

Those requirements trigger test and defects and go against code, all of which can be linked. Whenever progress is made in any dimension against those requirements, I have created a test for one, I have run a test for one. I have run ten tests and eight paths. I have checked new coding against the bugs. All of that can be tied together and automated with workflow.

So, you start to see how I've got a creative series of information. I use automation to advance it to the next stage. I now can push that information to each of the key stakeholders and automate the workflow behind that.

This is what we mean when we talk about changing the game in how you deliver software: thinking about what the things are that I have to manage, and how automation speeds things up and creates outputs with greater fidelity and greater speed.

Hipps: The endgame should be that what I've got is a unified way of getting these various operations connected, so that my management picture has a straight flow through from the automated things that it's kicked off.

As those automated events occur, I'm getting a single, unified view of the results in my management view, which is, nine times out of 10, not the world we have when we look at big, big enterprise delivery. It is still a series of point tools, with maybe Excel laid over the top to try to unify it all.

... If you want to understand the future of IT, you just need to look at how far manufacturing has come. We've plagiarized the lion’s share of what we do in IT, and much of the way we work, from what we have seen in manufacturing and mechanical engineering. That extends to lean methods. It starts probably all the way back at waterfall.

Maybe it's no surprise that when you ask us to talk about what you mean by integrated management and automation, we are borrowing an analogy from the world of mechanical engineering. We're talking about what planes can do, what ships can do, and what cars can do. So, I hope this is very much a natural advancement.

Sarbiewski: I talk about the industrialization of IT. Sometimes, there's a little pushback on that, because it feels heavy. Then, I say, "Wait a minute. Think about how flexible Toyota or Boeing is." These companies have these very complex undertakings and yet can manage parts and supplies for providers and partners from every corner of the world, and every other car can be different coming off that assembly line. Look at how quickly they have shrunk their product lifecycles from design to a finished model.

Part of what's done that is exactly what Brad was talking about, an enormous investment in understanding the process and optimizing that, in supporting the various stakeholders, whether it's through design software, or automation on the factory line, all of that investment. We didn't do it in IT. We built it ourselves. We used Excel and post-it notes and other things, and we created from scratch everything that we have done, because we can, because we made it easy to do that. We have made it easy to design and build it a thousand different ways.

There is this counterintuitive perception that because there is an infinite number of ways, we hold ourselves to be different than that. People are realizing that's not really the case. In fact, the more I can industrialize and keep it lean and agile, how I do this, the tools I use, if I give the people incredible tools to do it, and not just point tools but integrated, the results really speak for themselves.

When we talk to customers that have done this, they achieve incredible results in three critical dimensions. There's a very longstanding joke that you can't go faster, raise quality and take cost down all at once. It's supposedly not possible; it's the impenetrable triangle, or it's like squeezing a balloon. We see with our customers that you absolutely can.

They have essentially industrialized their approach, they have integrated their approach, they support their stakeholders with great technology, and they have adapted their process to change. Guess what: they go faster, they take cost down, and they drive quality up.


Some of you who have been following my articles may remember one I wrote about Convergent Software (A demonstrator for the new standards for RFID in Libraries) in February 2009. Well, this week Paul Chartier, their Managing Director, let me know that the company was launching a range of software products for the library community. These meet the new conformance requirements recently announced on the RFID for Libraries Support website (http://biblstandard.dk/rfid/).

There are two products in the initial offering designed to help stakeholders to future-proof their investment in ISO 28560-2:

ISO 28560-2 Planning and Modelling software: This software allows libraries and other stakeholders to experiment with the encoding options of ISO 28560-2 by selecting and arranging data elements and encoding these on a simulated tag. The main advantage of this software is that it can be used as part of a pre-investment process without requiring any RFID hardware or tags. This product incorporates their Template Builder and Data Builder tools that I reviewed in February 2009.

ISO 28560-2 Quality Control software: This software combines the functionality of a fully compliant decoder with the additional powerful function of diagnostic software that identifies encoding errors and points to possible causes of those errors. This product incorporates their Template Builder, Data Decoder and Data Doctor tools.
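To give a flavour of what "encoding data elements on a simulated tag" involves, here is a deliberately simplified sketch in Python that packs a few ISO 28560-2 style elements into a byte buffer as (identifier, length, value) triples and reads them back. The element identifiers, values and packing scheme are illustrative assumptions only; the real standard relies on the ISO/IEC 15962 encoding rules, which Convergent's products implement and which are considerably more involved.

# Toy simulation of placing data elements on a virtual tag and decoding them.
# Illustrative only; not the ISO/IEC 15962 encoding used by ISO 28560-2.

ELEMENTS = {"primary_item_id": 1, "owner_library": 3, "set_info": 4}

def encode(values, tag_size=64):
    """Pack selected elements as (id, length, value) triples, then pad."""
    buf = bytearray()
    for name, value in values.items():
        data = value.encode("utf-8")
        buf += bytes([ELEMENTS[name], len(data)]) + data
    if len(buf) > tag_size:
        raise ValueError("data does not fit on the simulated tag")
    return bytes(buf) + bytes(tag_size - len(buf))

def decode(tag):
    """Recover the elements, labelling any identifier it does not know."""
    names = {v: k for k, v in ELEMENTS.items()}
    out, i = {}, 0
    while i < len(tag) and tag[i] != 0:
        elem_id, length = tag[i], tag[i + 1]
        value = tag[i + 2:i + 2 + length].decode("utf-8")
        out[names.get(elem_id, f"unknown_{elem_id}")] = value
        i += 2 + length
    return out

tag = encode({"primary_item_id": "1234567890", "owner_library": "GB-ExLib"})
print(decode(tag))   # {'primary_item_id': '1234567890', 'owner_library': 'GB-ExLib'}

The appeal of a software-only simulation, as the planning and modelling product illustrates, is that this kind of experiment with element selection and layout can be carried out before any RFID hardware or tags are purchased.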

The announcement also contained details of two other products that are to follow shortly, namely:

ISO 28560-2 Comprehensive software: This software combines the functionality of the planning software and the quality control software products with their Data Editor tool. Chartier stated that this will provide the most comprehensive support for ISO 28560-2.

An interface module that enables the various software-only products to be linked to specific RFID encoding/decoding devices. This version of the software will take the simulation one stage further and allow prototype tags to be produced for testing purposes. It can also read tags claiming compliance with ISO 28560-2 and report any errors in a comprehensive diagnostic report.

All the products meet the requirements of the recently published Guidelines for ISO 28560-2 Conformant Devices and Processes. A Compliance Statement is available on their website which explains in detail how the software products achieve this.

Convergent Software Limited is still offering its two software development schemes to help RFID vendors in the library sector to rapidly develop their support for the new ISO standard:

The Benchmark scheme provides software and support to those companies developing their own bespoke system to support ISO 28560-2.

The Integration scheme enables software developed by Convergent Software Limited to be embedded in vendors' software as an OEM component.

How do IT architectures at software-as-a-service (SaaS) providers provide significant advantages over traditional enterprise IT architectures?

We answer that "Architecture is Destiny" question by looking at how one human resources management (HRM), financial management and payroll SaaS provider, Workday, has from the very beginning moved beyond relational databases and distributed architectures that date to the mid-1990s.

Instead, Workday has designed its architecture to provide secure transactions, wider integrations, and deep analysis off the same optimized data source, all to better serve business needs. The advantages of this modern services-based architecture can be passed on to end users, and across the ecosystem of business process partners, at significantly lower cost than conventional IT.

Joining us here is a technology executive from Workday, Petros Dermetzis,
Vice President of Development there, to explore how architecting
properly provides the means to adapt and extend how businesses need to operate, and not be limited by how IT has to operate. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Dermetzis: We have a unique opportunity to stand back and see what history and evolution provided over the past 20 years
and say, "Okay, how can we provide one technology stack that starts
addressing all those individual problems that started appearing over
time?"

If you think of the majority of the systems out there,
the way we describe them is that they were built from the ground up as
islands. It was really very data-centric. The whole idea was that the
enterprise resource planning (ERP) system gave all the solutions, which in reality isn't true.

What we tried to do at Workday was start from a completely white sheet of paper. The reality around ERP systems is actually making all this work together. You want your transactions, you want your validations, you want to secure your data, and at the same time you want access to that data and to be able to analyze it. So, that's the problem we set out to solve.

What drove our technology architecture was first, we
have a very simple mentality. You have a central system that stores
transactions, and you make sure that it's safe, secure, encrypted, and
all these great words. At the same time, we appreciate that systems,
as well as humans, interact with this central transactional system. So
we treat them not as an afterthought, but as equal citizens.

If you go back in time to when mainframes started appearing, it was about transactions, capturing transactions, and safeguarding those transactions. IT was the center of the universe and called the shots. As things evolved over time, IT began to realize that departments wanted their own solutions. Those departments would extract the data and take it into tools such as spreadsheets, and what have you, for further analysis.

ERP
solutions evolved over time and started adding technology solutions as
problems occurred. They started with a need to report data and very
quickly realized it was like climbing a ladder of hierarchic needs.
When you get your basic reporting right, you need to start analyzing
data.

The technologies at the time, built around relational models, don't actually address that very well. Then other industries appeared, such as business intelligence (BI) vendors, that tried to solve those problems.

The way things evolved, you started with an application, and integrations were an afterthought; they got bolted on. ... They kept on adding more and more layers of vendors, and the more the poor enterprise IT customers try to peel those layers back, the more they start crying, crying in terms of maintenance and maintenance dollars.

Old approach won't scale
Right now, the state of the art is hard-wiring most of these central solutions to these third-party solutions, and that basically doesn't scale. That's where technology kicks in and you have to adopt new open standards and web services.

What we try to do at Workday is understand holistically what the current problems are today and say, "This is a golden opportunity," as opposed to taking all the existing technologies, cobbling them together, and trying to solve the problems in exactly the same way.

If you're managing any HRM system, you need to communicate with other systems, be it for background checks, for providing information to benefit providers, for connecting to third-party payrolls, or what have you.

Obviously, [traditional ERP vendors] were solving the problem incrementally as they went along. What we tried to do was address it all in the same place. Where we are right now, in what I define as legacy applications, is very business transaction-centric. We want to take it more towards business interactions, and interactions can come from humans or machines.

We're creating a revolution in the ERP industry. As always, you have early adopters. At the other end of the bell-shaped curve,
you've got the laggards. When you're talking to forward thinking,
modern thinking, profit-oriented, innovative companies, they very
quickly appreciate that the way to go is SaaS.

Now, they've got a bunch of questions, and most of the questions are around security: "Is my data safe?" We have a huge variety of ways of assuring our customers that their data is probably safer in our environment than on-premises.

Some customers wait, and some will just jump in
the pool with everyone else. We are in our fifth year of existence,
and it’s very interesting to see how our customers are scaling from the
small, lower end, to huge companies and corporations that are running
on Workday.

A blast from the past
Applications
are built on top of relational databases today, and then they are
being designed thinking about the end-user, sitting in front of a
browser, interacting with the system. But, really they were designed
around capturing the transaction and being able to report straight-off
that transaction.

The idea of integrating with third parties was an afterthought. Because it was an afterthought, you found a new industry emerging around extract, transform and load (ETL) tools and integration tools. It was a realization that we have to coexist with many systems.

What
happened was that they bolted on these integration third-party
systems straight onto the database. That sounds very good. However,
all the business logic, all the security, and the whole data structure
that hangs together is known by the application—and not by the
database. When you bolt an integration technology on the side, you
lose all that. You have to recreate it in the third-party technology.

Similarly, when it comes to reporting, relational technology does a phenomenal job with the use of SQL
and producing reports, which I will define as two-dimensional
reports, for producing lists, matrix reports, and summary reports.
But, eventually, as business evolves, you need to analyze data and you
have to create this idea of dimensionality. Well, yet another
industry was created—and it was bolted back onto the database
level, which is the [BI] analytics, and this created cubes.
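
To make the distinction concrete, here is a small Python sketch (with made-up figures) that contrasts the flat, list-style report relational SQL handles well with the dimensional totals that drove the rise of separate BI and cube tools.

```python
# Illustrative sketch (hypothetical data): the same transactions rendered as a
# flat, two-dimensional list report and then totalled along two dimensions,
# the kind of analysis that pushed buyers toward separate BI/cube tools.
from collections import defaultdict

transactions = [
    {"region": "EMEA", "quarter": "Q1", "amount": 120},
    {"region": "EMEA", "quarter": "Q2", "amount": 90},
    {"region": "APAC", "quarter": "Q1", "amount": 75},
    {"region": "APAC", "quarter": "Q2", "amount": 140},
]

# Flat list report: what SQL-style reporting handles well.
for t in transactions:
    print(f'{t["region"]:5} {t["quarter"]} {t["amount"]:>5}')

# Dimensional view: totals by (region, quarter), i.e. a tiny two-dimensional cube.
cube = defaultdict(int)
for t in transactions:
    cube[(t["region"], t["quarter"])] += t["amount"]

for (region, quarter), total in sorted(cube.items()):
    print(f"{region} x {quarter}: {total}")
```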

In
fact, what they used were object-oriented technologies and in-memory
solutions for reasons of performance to be able to analyze data. This
is currently the state of the art.

The same treatment
Conversely, any request that comes into our system, be it from a UI
or from a third-party system by integrations, we treat exactly the
same way. They go through exactly the same functional application
security. It knows exactly what the structure of your object model is.
It gets evaluated exactly the same way and then it serves back the
answer. So that fundamental principle solves most of our integration
problems.
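
As a rough illustration of that principle, the following Python sketch routes a UI request and an integration request through one entry point, so both hit the same security check and the same business logic. All names and the permission model here are hypothetical, not Workday's actual implementation.

```python
# A minimal sketch (names are hypothetical) of the pattern described above:
# every request, whether it arrives from a UI or from an integration, goes
# through the same security check and the same business logic, so nothing is
# bolted on beside the application.

ALLOWED = {("alice", "read_worker"), ("payroll_feed", "read_worker")}

def authorize(principal, action):
    if (principal, action) not in ALLOWED:
        raise PermissionError(f"{principal} may not {action}")

def read_worker(principal, worker_id):
    """Single entry point used by every channel."""
    authorize(principal, "read_worker")           # same security for all callers
    return {"id": worker_id, "name": "J. Smith"}  # same business logic for all callers

# UI request and integration request take exactly the same path:
print(read_worker("alice", 42))         # human user via the browser
print(read_worker("payroll_feed", 42))  # third-party system via web services
```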

On the integration side, we just work off open standards. The only way a third-party system can talk with Workday is through web services, and those services are contracts that we spec to the outside world. We may change things internally, but that's our problem.

That's the point where we have a technology layer around our enterprise services, plus our integration server, that actually talks the same language we do: standards-based web services. At the same time, it's able to transform any bit of that information into whatever the receiving component wants, whether it's banking, the various formats, or whatever is out there.

We put the technology into the hands of our customers to be able to ratchet the latest technology down to whatever other file structures they currently have. We provide that to our customers, so they can connect to the card-scanning systems, security systems, badging systems, or even their own financial systems that they may have in-house.

We're a SaaS vendor, and we do
modify things and we add things, but those external contracts, which
are the Web services talking to third-party systems, we respect and we
don’t change. So, in effect, we do not break the integrations.

Best way to access data
The
next architectural benefit is about analyzing data. As I said, there
are a lot of technologies out there that do a very good job at lists
and matrix reporting. Eventually, most of these things end up in
spreadsheets, where people do further analysis.

But the dream that we are aiming for continuously is this: when you are looking at a screen, you see a number. That number could be an accumulation of counts that you'd be really interested in clicking on to find out what those counts are: names of applicants, names of positions, the number of assets that you have. Or it's an accumulation. You look at the balance sheet. You look at the big number. You want to click and figure out what comprises that number.

To do that, you have to have
that analytical component and your transactional component all in the
same place. You can't afford what I call I/Os. It's a huge penalty to
go back and forth through a relational database on a disk. So, that
forces you to bring everything into memory, because people expect to
click something and within earth time get a response.

The technology solution that we opted for was this totally in-memory object model that allows us to do the basic embedded analytics, taking action on everything you see on the screen. When you are traversing, you come to a number in a balance sheet, and as you're drilling around, what you are really doing in effect is traversing an object model underneath, and you should be able to get that for nothing.
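
A toy Python sketch of that idea follows: the headline figure is computed from an in-memory object graph, so drilling into it is simply a traversal of child objects rather than further round trips to a database. The classes and data are invented for illustration.

```python
# Toy sketch: an aggregate figure on screen is just a node in an in-memory
# object graph, so "drilling in" is a pointer traversal, not extra disk I/O.

class LineItem:
    def __init__(self, description, amount):
        self.description = description
        self.amount = amount

class Account:
    def __init__(self, name, items):
        self.name = name
        self.items = items                       # children held in memory

    @property
    def balance(self):                           # the number shown on screen
        return sum(i.amount for i in self.items)

cash = Account("Cash", [LineItem("Office sale", 500), LineItem("Refund", -120)])
print(cash.balance)                              # the headline figure: 380
for item in cash.items:                          # drill-down = walking the graph
    print(item.description, item.amount)
```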

So the persistence
layer is really forced by the analytical components. When you're
analyzing information, it has to perform extremely fast. You only have
one option, and that is memory. So, you have to bring everything up in-memory.

We do use a relational component, but not as a relational model. We use a relational database, which is really good at securing your data, encrypting your data, backing up your data, restoring it, replicating it, and all these great utilities the database gives you, but we don't use the relational model. We use an object model, which is all in-memory.

But you need to store things somewhere. In fact, we have a belief at Workday that the disk, which is more the relational component, is the future tape. Legacy systems used to put things on tape for safety and archiving reasons. We use disk, and we actually believe, if you look at the future, that nearly everything will be done exclusively in-memory.
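
The storage split Dermetzis describes might be sketched roughly as follows: the object graph is held and queried in memory, while a relational database is used only as a durable store of serialized objects. The schema and data here are assumptions for illustration, not Workday's actual design.

```python
# Rough sketch (schema and names are invented): the relational database acts
# as a durable store of serialized objects, while all querying happens against
# an in-memory structure rebuilt from it.
import json
import sqlite3

db = sqlite3.connect(":memory:")                 # stand-in for the durable store
db.execute("CREATE TABLE objects (id INTEGER PRIMARY KEY, body TEXT)")

# Persist: serialize the in-memory object; no relational modelling of its fields.
worker = {"id": 7, "name": "J. Smith", "department": "Finance"}
db.execute("INSERT INTO objects (id, body) VALUES (?, ?)",
           (worker["id"], json.dumps(worker)))
db.commit()

# Restore on startup: load everything back into memory and work there.
in_memory = {row[0]: json.loads(row[1])
             for row in db.execute("SELECT id, body FROM objects")}
print(in_memory[7]["department"])                # all reads served from memory
```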

Make way for metadata
And, there is another bit of technology that you add to that. We're a totally metadata-driven
technology stack. Right now, we put out what we describe as updates
three times a year. You put new applications, new features, and new
innovations into the hands of your customers, and being in only one
central place, we get immediate feedback on the usage, which we can
enhance. And, we just keep on going on and keep on adding and adding
more and more and more.
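
As a simplified illustration of a metadata-driven approach, the sketch below reads field definitions as data and interprets them at runtime, so adding a field or rule becomes a metadata change rather than a code release. The metadata format here is invented for this example.

```python
# Simplified sketch of metadata-driven behaviour (metadata format is invented):
# the application interprets definitions held as data instead of hard-coding them.

TYPES = {"str": str, "int": int}

METADATA = {
    "worker": {
        "fields": {
            "name":   {"type": "str", "required": True},
            "salary": {"type": "int", "required": False},
        }
    }
}

def validate(entity, record):
    """Validate a record against whatever the metadata currently says."""
    spec = METADATA[entity]["fields"]
    for field, rules in spec.items():
        if rules["required"] and field not in record:
            raise ValueError(f"missing required field: {field}")
        if field in record and not isinstance(record[field], TYPES[rules["type"]]):
            raise ValueError(f"{field} must be {rules['type']}")
    return record

print(validate("worker", {"name": "J. Smith", "salary": 50000}))
```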

This is something that was an absolute luxury in a legacy stack, where taking a complete release meant living through all the breakages that we mentioned before around integrations and the analytical component.

As soon as you have the luxury of maintaining one system, let's call it one code line, and you're hanging your customers, your tenants, off that one single code line, you can do very, very frequent upgrades, updates, or new releases, if you wish, to that central code line, because you only have to maintain one thing.

Multi-tenancy is
also one of the core ingredients, if you want to become a SaaS vendor.
Now, I'm not an advocate of saying multi-tenancy A is better than
multi-tenancy B. There are different ways you can solve the
multi-tenancy problems. You can do it at the database level, the
application level, or the hardware level. There’s no right or wrong
one. The main difference is, what does it cost?

All we're looking at is one single code line that we have to maintain and secure continuously. We
believe in one single code line, and multiple tenants are sharing
that single code line. That reduces all our efforts around revving it
and updating it. That does result in cost savings for the vendor, in
other words, ourselves.
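
A minimal sketch of application-level multi-tenancy along these lines is shown below: one code path serves every tenant, and each query is scoped by a tenant identifier. The data and names are hypothetical.

```python
# Minimal sketch of application-level multi-tenancy (data is hypothetical):
# one code line serves every tenant, and every query is scoped by a tenant
# identifier so tenants share the code without sharing each other's data.

RECORDS = [
    {"tenant": "acme",   "worker": "A. Jones"},
    {"tenant": "acme",   "worker": "B. Patel"},
    {"tenant": "globex", "worker": "C. Lee"},
]

def list_workers(tenant_id):
    """Same function, same code line, for every tenant."""
    return [r["worker"] for r in RECORDS if r["tenant"] == tenant_id]

print(list_workers("acme"))    # ['A. Jones', 'B. Patel']
print(list_workers("globex"))  # ['C. Lee']
```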

And as far back as I can remember, when
humans realized that you take time and material, package that for a
profit, and send it to your end-market, as soon as you can reduce your
cost of the time or the material, you can either pocket the
difference, or move that cost saving onto your customers.

We believe that multi-tenancy is one of the key ingredients in reducing the cost of maintenance that we have internally. At the same time, it allows us to rev new innovative applications out to the market very quickly, get feedback on them, and pass that cost saving on to our customers, which they can then take and invest in whatever they do, whether that's making carpets, yogurt, or electric motors.

Test and debug WSO2 Carbon-based applications directly within the IDE.

Export Carbon Applications in the new Carbon Archive format.

“We have found that many of our customers are developing sophisticated applications that span the
WSO2 Carbon product family, and they are taking advantage of the
unique strengths of our platform when used as a whole,” said Dr. Sanjiva Weerawarana,
founder and CEO of WSO2. “We’re now revving up our tooling support
with WSO2 Carbon Studio—helping developers to organize, develop, test,
and deploy these composite applications with greater ease than ever
before.”

Middleware platform
The WSO2 Carbon Studio IDE is designed to take advantage of the open source WSO2 Carbon middleware platform. The Eclipse-based offering includes graphical editors for XML configuration files, an enhanced Eclipse BPEL
editor, and easy integration of Carbon-based applications with the
WSO2 Governance Registry. Additionally, Carbon Studio offers a rich set
of third-party Eclipse plug-ins, including Maven and the OpenSocial
Gadget Editor.

Tools to support SOA development include Apache Axis2 and JAX-WS, Data Service, BPEL, ESB, and ESB Tooling, as well as a gadget editor.

WSO2
Carbon Studio, available now as a set of Eclipse plug-ins, is a fully
open-source solution released under Eclipse and Apache Licenses and
does not carry any licensing fees. WSO2 offers a range of service and
support options for Carbon Studio, including development support and
production support.