There has been plenty of talk about the threat of cyber-attacks on critical national infrastructure (CNI). So what’s the risk, what’s involved in protecting CNI and why, to date, do attacks seem to have been limited?

CNI is the utility infrastructure that we all rely on day-to-day; national networks such as electricity grids, water supply systems and rail tracks. Others have an international aspect too; for example gas pipelines are often fed by cross-border suppliers. In the past such infrastructure has often been owned by governments, but much has now been privatised.

Some CNI has never been in government hands; mobile phone and broadband networks largely emerged after the telco monopolies were scrapped in the 1980s. The supply chains of major supermarkets have always been a private matter, but they are heavily reliant on road networks, an area of CNI still largely in government hands.

The working fabric of CNIs is always a network of some sort: pipes, copper wires, supply chains, rails, roads. Keeping it all running requires network communications. Before the widespread use of the internet this was achieved through proprietary, dedicated and largely isolated networks. Many of these are still in place. However, the problem is that they have increasingly become linked to and/or enriched by internet communications. This makes CNIs part of the nebulous thing we call cyberspace, which is predicted to grow further and faster with the rise of the internet-of-things (IoT).

Who would want to attack CNI? Terrorists perhaps. However, some point out that it is not really their modus operandi: regional power cuts are less spectacular than flying planes into buildings. CNI could also become a target in nation state conflicts, perhaps through a surreptitious attack where there is no kinetic engagement (a euphemism for direct military conflict). Some say this is already happening; for example, the Stuxnet malware that targeted Iranian nuclear facilities.

Then there is cybercrime. Poorly protected CNI devices may be used to gain entry to computer networks with more value to criminals. In some cases devices could be recruited to botnets; again, this is already thought to have happened with IoT devices. Others may be direct targets, for example tampering with electricity meters or stealing data from point-of-sale (PoS) devices that are the ultimate front end of many retail supply chains.

Who is ultimately responsible for CNI security? Should it be governments? After all, many of us own the homes we live in, but we expect the government to run defence forces to protect our property from foreign invaders. Government also passes down security legislation, for example at airports, and other mandates are emerging with regards to CNI. However, at the end of the day it is in the interests of CNI providers to protect their own networks, for commercial reasons as well as in the interests of security. So, what can be done?

Securing CNI
One answer is, of course, CNI network isolation. However, this simply is not practical; laying private communications networks is expensive and innovations like smart metering are only practical because existing communications technology standards and networks can be used. Of course, better security can be built into CNIs in the first place, but this will take time as many have essential components that were installed decades ago.

A starting point would be better visibility of the overall network and the ability to collect inputs from devices and record events occurring across CNI networks. If this sounds like a kind of SIEM (security information and event management) system, along the lines of those provided for IT networks by LogRhythm, HP, McAfee, IBM and others, that is because it is: a mega-SIEM for the huge scale of CNI networks. This is the vision behind ViaSat’s Critical Infrastructure Protection. ViaSat is now extending sales of the service from the USA to Europe.

The service involves installing monitors and sensors across CNI networks, setting baselines for known normal operations and looking for the absence of the usual and the presence of the unusual. ViaSat can manage the service for its customers out of its own security operations centre (SOC) or provide customers with their own management tools. Sensors are interconnected across an encrypted IP fabric, which allows for secure transmission of results and commands to and from the SOC. Where possible, the CNI’s own fabric is used for communications but, if necessary, this can be supplemented with internet communications; in other words, the internet can be recruited to help protect CNI as well as attack it.
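As a rough illustration of the baseline-and-deviation idea (this is not ViaSat's actual implementation; the sensor names, readings and three-sigma band below are invented), a minimal monitoring check might look something like this:

```python
from statistics import mean, stdev

# Hypothetical historical readings per sensor (e.g., voltage, flow rate).
history = {
    "substation-7/voltage": [229.8, 230.1, 230.4, 229.9, 230.2],
    "pump-3/flow_rate": [41.2, 40.8, 41.5, 41.0, 40.9],
}

def baseline(values, k=3.0):
    """Return an (expected_low, expected_high) band of k standard deviations."""
    m, s = mean(values), stdev(values)
    return m - k * s, m + k * s

def check(current):
    """Flag the absence of the usual and the presence of the unusual."""
    alerts = []
    for sensor, band in ((s, baseline(v)) for s, v in history.items()):
        if sensor not in current:
            alerts.append(f"{sensor}: expected reading is missing")
        elif not band[0] <= current[sensor] <= band[1]:
            alerts.append(f"{sensor}: {current[sensor]} outside baseline {band}")
    for sensor in current.keys() - history.keys():
        alerts.append(f"{sensor}: unknown sensor reporting")
    return alerts

print(check({"substation-7/voltage": 247.0, "rogue-device/flow_rate": 12.0}))
```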

Having better visibility of any network not only helps improve security, but enables other improvements to be made through better operational intelligence. ViaSat says it is already doing this for its customers. The story sounds similar to one told in a recent Quocirca research report, Masters of Machines, which was sponsored by Splunk. Splunk’s background is SIEM and IT operational intelligence, which, as the report shows, is increasingly being used to provide better commercial insight into IT-driven business processes.

As it happens, ViaSat already uses Splunk as a component of its SOC architecture. However, Splunk has ambitions in the CNI space too; some of its customers are already using its products to monitor and report on industrial systems. Some co-opetition will surely be a good thing as the owners of CNIs seek to run and secure them better for the benefit of their customers and in the interests of national security.

The protection of personal data has been back in the news in the UK over the last month due to the government’s bungled plans to make anonymised NHS patient data available for research. The scheme gives NHS patients the option to opt out of sharing their data: why? NHS care in the UK is mostly provided free at the point of delivery, funded by general taxation (and/or government borrowing), so why should we not all give something back for the greater good, if the government can provide the necessary reassurances?

Anyway, who would be interested in our health records, other than those researching better healthcare? Providers of healthcare insurance and life assurance maybe; but we have to disclose even quite mild problems to them to make sure policies are valid, and imagine the damage to the reputation of an insurance provider exposed as having misused healthcare records; is it worth their risk? Celebrities and politicians may have a case; in some instances their health history may make interesting headlines. Perhaps they should consider paying the private sector to deal with embarrassing issues?

It is worth asking why any given data set is of interest to people who would put it to unauthorised and nefarious use. Payment card details hacked by cyber-thieves are a pretty obvious example: they can be readily monetised. Identity data and account access credentials are worth having; in some cases they can be used to gain direct access to our financial assets, or to dupe us, or others, into giving enough extra information to gain that access. When it comes to personal data (not to be muddled with intellectual property), unless a cyber-criminal can see a way to monetise it, it is of little interest, so ultimately the main target will be payment information.

Hacktivists may see opportunities for blackmail in health records, but this is a tricky and highly illegal business for the potential perpetrator, and most of us would be of little interest to them anyway. Journalists may seek out headlines, but, again, this does not apply to most of us. The phone hacking scandal that brought down the News of the World and is currently making its way through the UK courts is a case in point. The targets were nearly all celebrities who had failed to take the simple step of password protecting their voicemail. That is not to condone anything illegal, but to point out how easy it would have been to prevent (for example, by automating the setting of voicemail passkeys during initial device set up). One feels most sympathy for the victims of crime whose phones were hacked, who had become of interest to the press more or less overnight.

In the IT industry there is much talk about big data and all the benefits it can provide. Big data processing needs access to big data sets, and for pharmaceutical and healthcare research that means patient data. The NHS has one of the largest such data sets in the world, and it has tremendous potential value if handled in the right way. The government needs to do a better job of getting its case across, but its motives are good. Those protesting about the use of anonymised NHS data need to better explain why this valuable resource should go to waste when it could be used for the benefit of all.

Just as intelligent management of a data centre’s hardware assets is critical to ensure that IT delivers business value, managing software assets is vital for maintaining a highly flexible and up-to-date IT platform.

But all too often, software is purchased by the IT department out of necessity and then forgotten due to its intangible form. Software asset management (SAM) tools have evolved into complex systems; using them correctly can ensure that an organisation maintains its software assets to better meet its needs and keeps itself within the terms of all of its software contracts.

As with hardware, the first step that needs to be carried out is a full asset discovery. The majority of systems management tools include something along these lines, but the capabilities of the systems may not provide exactly what is needed.

For example, some systems will only look within the data centre. While this may be useful in identifying how many licences are in use and for which software, it touches only a small proportion of the applications and licences an organisation has. Trawling across an organisation’s total estate of servers, desktops, laptops, tablets and other mobile devices has become a difficult task, particularly with bring your own device (BYOD), where licences may have been bought directly by the end user.

However, building up a full picture of what is being used brings in many different advantages. Firstly, patterns of usage can be built up. For example, is a specific group of employees using a specific application? Are employees carrying out the same tasks using different applications?

Secondly, the issue of licensing can be addressed. Once a full picture of the application landscape has been established, how these have been licensed can be looked at. In many cases, organisations will find that they have a corporate agreement for licences, yet departments and individuals have gone out and sourced their own software—with licences costing much more than if they had gone through a central purchasing capability. Bringing these licences into the central system could save a lot of money.

However, it may well be that 'golden images' have been used with corporate licences without any checks in place as to whether the number of seats implemented is within the agreed contract. Although this can lead to extra costs for the organisation in bringing the contract into line with the number of licences being used, it will be cheaper than being fined should a software audit be carried out by the likes of FAST.

The usage patterns identified may help here as well. Many SAM tools will be able to report on when an application was last used by the employee. In some cases, this may have been weeks or even months ago; in many cases, it will be apparent that the employee installed the software to try it out and has never used it since. Harvesting these unused licences can help to offset the need to change existing contracts.
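A minimal sketch of that harvesting step, assuming the SAM tool can export a last-used date per installed copy (the field names and 90-day idle threshold below are illustrative, not taken from any particular product):

```python
from datetime import date, timedelta

# Hypothetical export from a SAM tool: one row per installed copy.
installs = [
    {"user": "asmith", "app": "VisioPro", "last_used": date(2014, 1, 3)},
    {"user": "bjones", "app": "VisioPro", "last_used": date(2013, 6, 18)},
    {"user": "cpatel", "app": "VisioPro", "last_used": None},  # installed, never run
]

def harvest_candidates(rows, today, idle_days=90):
    """Return installs unused for longer than idle_days (or never used at all)."""
    cutoff = today - timedelta(days=idle_days)
    return [r for r in rows if r["last_used"] is None or r["last_used"] < cutoff]

for r in harvest_candidates(installs, today=date(2014, 4, 1)):
    print(f"Reclaim {r['app']} from {r['user']}")
```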

Thirdly, as long as the asset discovery tool is granular enough, it will be able to ascertain the status of the application: its version level, what patches have been applied and so on. This allows IT to bring applications fully up to the latest version and to ensure that patches have been applied where necessary. When combined with a good hardware asset management system, the underlying hardware can be interrogated to ensure that it is capable of taking the software upgrades. Where this is not possible, the machine can be upgraded or replaced as necessary, or marked as a special case with the software remaining as is when further software updates are scheduled to run.

Good SAM tools should also be able to map the dependencies between software, tracking how a business process will use different applications as it progresses. Again, through the use of suitable rules engines, these dependencies can be managed, such that the updating of one application does not cause a whole business process to fail. Potentially problematic areas can also be identified, for example where software has a dependency on IE6 and so introduces security loopholes that could be exploited by hackers.
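One simple way to picture this dependency mapping is a table of business processes against the applications they rely on, which can be queried before any update is rolled out; the process names, applications and "deprecated" set below are purely hypothetical:

```python
# Hypothetical dependency map: business process -> applications it relies on.
process_deps = {
    "order-to-cash": ["ERP", "InvoicePortal", "IE6"],
    "expense-approval": ["HRPortal", "Workflow"],
}

deprecated = {"IE6"}  # components known to introduce security loopholes

def impacted_processes(app, deps):
    """Which business processes would be affected by updating or retiring an app?"""
    return [p for p, apps in deps.items() if app in apps]

def risky_processes(deps, bad):
    """Processes that depend on a deprecated or vulnerable component."""
    return {p: sorted(set(apps) & bad) for p, apps in deps.items() if set(apps) & bad}

print(impacted_processes("ERP", process_deps))    # ['order-to-cash']
print(risky_processes(process_deps, deprecated))  # {'order-to-cash': ['IE6']}
```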

For most organisations, the main strength of SAM tools will, however, reside in the capability to manage software licences against agreed contracts. In many cases, this is not just a case of counting licences and comparing them against how many are allowed to be used. The domain expertise built up by vendors such as Flexera and Snow Software means that the nuances of contracts can be used to the best advantage.

For example, through identifying all licences that are currently in place and the usage patterns around them, it may be possible to use concurrent licensing rather than per-seat licensing. Here, licences are allocated based on the number of people using an application at the same time, rather than on named seats, so bringing down the number of licences required considerably. If this is not an option, then it may be that by bringing all licences under one agreement, rather than several, a point is reached where a new discount level is hit. For an organisation with, say, 1,000 seats, 600 licences under one agreement and four other contracts for 100 seats each, bringing all 1,000 seats under one contract may optimise costs considerably, not just on licences but also on maintenance. For international organisations, this can be particularly valuable: bringing licence agreements from several countries together under a single international contract could save large amounts of money, as well as the time taken in managing the various contracts.
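The consolidation arithmetic can be sketched with a toy tiered price list; the per-seat prices and discount thresholds below are invented purely to show the shape of the calculation:

```python
# Illustrative price bands: per-seat cost falls once a contract crosses a volume tier.
def per_seat_price(seats):
    if seats >= 1000:
        return 80.0
    if seats >= 500:
        return 95.0
    return 110.0

def contract_cost(seats):
    return seats * per_seat_price(seats)

# Today: 600 seats on one agreement plus four separate 100-seat contracts.
fragmented = contract_cost(600) + 4 * contract_cost(100)

# Consolidated: the same 1,000 seats under a single agreement hits the next tier.
consolidated = contract_cost(1000)

print(f"Fragmented:   {fragmented:,.0f}")
print(f"Consolidated: {consolidated:,.0f}")
print(f"Saving:       {fragmented - consolidated:,.0f}")
```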

When looking at SAM tools, there is one area where Quocirca recommends caution. A strong move has started away from perpetual or yearly licences plus maintenance towards subscriptions, as cloud computing pushes all software vendors to review how they market their products and attempt to maintain revenue streams. In many cases, a perpetual licence will allow a user to continue using the software even if no maintenance is paid from there on.

Increasingly, subscription models will include some automated governance of the software—if the subscription is not paid, then access to the software will be automatically blocked. SAM systems will need to be able to look more to the future and advise when subscription renewals are coming up and also to provide end-user self-service capabilities to gain access to external subscription-based services through agreed corporate policies.

Quocirca recommends strongly that an organisation ensures that its SAM partner of choice is already prepared for this and is working continuously to maintain its domain expertise in a manner that allows the organisation to move to a subscription model as and when this makes sense. Indeed, Quocirca expects to see more SAM systems come through in the market which will be able to help an organisation in identifying when this sweet spot is reached, providing helpful information on what options an organisation should be considering to optimise its software asset base.

Overall, SAM reaches well beyond the data centre. However, by rationalising client licences, a better picture of server requirements in the data centre can be built up. Only through a full SAM approach can the data centre be fully optimised for the business.

Quocirca research shows that the two biggest concerns organisations have when considering the use of cloud based services are the safety of personal data and complying with the data protection laws (see free Quocirca report, The adoption of cloud based services, downloadable here). The report shows that these are issues that those recognising the benefits of such services overcome by investing in security technology.

The truth is that these concerns are high on the list of IT managers in all areas of IT deployment. The need to meet governance, risk and compliance (GRC) objectives is something that cannot be avoided. Another area where concerns have been increasing is the growing number of unmanaged devices that are attaching to networks.

There are good reasons for providing network access to such devices. Most businesses now accept the reality of employees using their own devices for work purposes (bring-your-own-device/BYOD); even if they do not like the concept, they know it must be managed somehow. Furthermore, there is an increasing need in many organisations to provide network access to guests (such as contractors and consultants) on an ad hoc basis. These two requirements have seen a resurgence of interest in network access control (NAC) systems from established vendors such as ForeScout, Bradford Networks and Portnox.

At the Infosec Europe event in April this year, Quocirca chaired a panel session where three users of NAC from very different business sectors explained why they had invested in the technology, how it helps them overcome GRC challenges and better enables both BYOD and guest network access. The session was sponsored by ForeScout and the panellists were all using the vendor’s CounterACT NAC product. The findings of the session have been written up in a freely available report that can be downloaded here.

In brief, the benefits outlined by each user were as follows:

UK-based finance sector organisation: in financial services, regulations are imposed by regulatory bodies. This organisation could be held back from trading if it was unable to demonstrate that its employees’ end points were secure. Implementing NAC meant the status of the systems and security software on all end points could be checked and, when necessary, updated every time they accessed the network. As the NAC system used was agentless, this could be achieved regardless of whether the device was previously known or not. An audit trail to prove compliance could be made available to auditors.

UK-based healthcare trust: healthcare is also a tightly regulated sector; here it is not just money that is at stake, but lives. The end points on the organisation’s networks included a wide range of medical devices as well as end user ones. NAC was used to replace an aging intrusion prevention system (IPS), the former being much more dynamic, enabling all sorts of devices to safely share the same network whilst ensuring, and being able to prove, necessary levels of security and compliance.

Creative media company: for some organisations GRC controls are necessary to inspire confidence in customers and suppliers rather than to satisfy regulators. This was certainly the case with the media services organisation Quocirca interviewed. It needed to make sure that its customers felt their data was safe when those customers’ employees were working as guests on its premises. It also needed to ensure, and prove, that its use of certain software was in line with vendor licence agreements. NAC met both of these requirements.

As organisations struggle to meet GRC requirements in the face of the changing way IT systems are deployed and accessed, all areas of IT security are coming under review and advanced technologies are supplementing or replacing conventional ones. There is no silver bullet for achieving the often-related goals of better security and compliance, but NAC is proving for many to be a key building block in their overall IT security architecture.

Organizations outsource part or all of their IT services to third-party service providers for various reasons, such as cost savings, access to outside expertise, the need to meet business demands quickly and other business considerations. Usually, tasks such as software development, network management, customer support, and data center management are outsourced.

Engineers and technicians working for service providers require remote privileged access to servers, databases, network devices, and other IT applications to discharge their contractual duties. Typically, in outsourced IT environments, the service provider’s technicians are located far away and access your organization’s IT resources remotely through a VPN.

Uncontrolled administrative access: a potential security threat
With remote privileged access granting virtually unlimited privileges and full control over physical and virtual resources, the outsiders virtually become insiders and, in some cases, become much more powerful than the organization’s real insiders. Uncontrolled administrative access is a potential security threat that can jeopardize your business.

A disgruntled technician could plant a logic bomb on your network, sabotage systems or steal customer information, causing irreparable damage to your business and reputation. In fact, analysis of many cyber incidents reported in the past has revealed that misuse of privileged access was the root cause.

So, in outsourced IT environments, controlling privileged access and keeping an eye on the actions taken on critical IT resources are absolutely essential, as both preventive and detective security controls against cyber attacks.

Essential security measures for outsourced environments

An inventory of resources/IT assets accessed by the third-party technicians should be kept up to date.

Third-party technicians should get access only to the resources that are necessary to perform their work. The access should be time-limited.

Access should be granted without revealing the underlying passwords. That means the third-party technicians should be able to access the resources without seeing the passwords in plain text.

The remote access enabling mechanism should be highly secure.

All activities done by them should be video-recorded and monitored. Any suspicious activity should be terminated.

Comprehensive, tamper-proof audit records should be maintained on ‘who’, ‘what’ and ‘when’ of access (a minimal illustration of such records follows this list).

Password management best practices, like usage of strong passwords, frequent rotation, etc. should be strictly enforced.
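The tamper-proof 'who/what/when' audit trail mentioned above can be sketched as a hash-chained log, where altering or deleting any earlier record breaks verification; this is only an illustration of the principle, not any specific product's mechanism:

```python
import hashlib
import json
from datetime import datetime, timezone

def append_record(log, who, what):
    """Append a 'who/what/when' record, chained to the previous entry's hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    record = {
        "who": who,
        "what": what,
        "when": datetime.now(timezone.utc).isoformat(),
        "prev": prev,
    }
    record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    log.append(record)

def verify(log):
    """Recompute the chain; any edited or deleted record breaks verification."""
    prev = "0" * 64
    for r in log:
        body = {k: v for k, v in r.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if r["prev"] != prev or digest != r["hash"]:
            return False
        prev = r["hash"]
    return True

audit = []
append_record(audit, who="contractor-17", what="checked out root password for db-02")
append_record(audit, who="contractor-17", what="opened RDP session to db-02")
print(verify(audit))  # True only while no record has been altered or removed
```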

Normally, cyber incidents do not take place suddenly; they are the result of meticulous planning for several months. Logs from critical systems carry vital information that could prove effective in preventing such ‘planned’ attacks by malicious technicians. For instance, monitoring activities like user logons, failed logins, password access, password changes, attempts to delete records, and other suspicious activities could help identify hacking attempts, malicious attacks, DoS attacks, policy violations, and other incidents. Monitoring network activity and establishing real-time situational awareness is essential to enterprise security.
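One of those monitoring rules, repeated failed logins within a short window, can be sketched as follows; the log format, five-attempt threshold and ten-minute window are arbitrary choices for illustration:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical, simplified log entries: (timestamp, user, event)
events = [
    (datetime(2014, 3, 10, 2, 14), "svc-backup", "login_failed"),
    (datetime(2014, 3, 10, 2, 15), "svc-backup", "login_failed"),
    (datetime(2014, 3, 10, 2, 16), "svc-backup", "login_failed"),
    (datetime(2014, 3, 10, 2, 17), "svc-backup", "login_failed"),
    (datetime(2014, 3, 10, 2, 18), "svc-backup", "login_failed"),
    (datetime(2014, 3, 10, 9, 1), "jsmith", "login_ok"),
]

def brute_force_suspects(log, threshold=5, window=timedelta(minutes=10)):
    """Flag accounts with `threshold` or more failed logins inside `window`."""
    failures = defaultdict(list)
    for ts, user, event in log:
        if event == "login_failed":
            failures[user].append(ts)
    suspects = []
    for user, times in failures.items():
        times.sort()
        for i in range(len(times) - threshold + 1):
            if times[i + threshold - 1] - times[i] <= window:
                suspects.append(user)
                break
    return suspects

print(brute_force_suspects(events))  # ['svc-backup']
```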

These simple security measures would be difficult to implement without the aid of a proper software solution. A manual approach to consolidating, securing, controlling, managing, and monitoring privileged accounts is not only cumbersome and time-consuming, but also highly insecure.

Preventive & detective security controls through an automated approach
Essentially, you need an automated approach to both privileged access management and privileged session management. You need to consolidate and control all the privileged accounts centrally in a fully automated fashion, ending convoluted manual password management practices. The automated approach should be capable of delivering the essentials as outlined above.

Of course, not all security incidents can be prevented or avoided. However, by taking proper preventive and detective security controls as explained above, you can ensure information security while outsourcing IT.

You know why you should listen to your elders? Because they've lived through it. They have fallen, and they have risen; they have had big ideas, taken risks, failed and succeeded. Whether it's a parent, boss, co-worker, spouse or idol, taking in all the advice and information you can not only shapes your business sense, but helps you develop the passion and drive you need to succeed.

Here are some of the best nuggets of advice from the world's top entrepreneurs:

Sir Richard Branson
The founder of Virgin Group was encouraged by his mum to have no regrets. It's a waste of time to look back on failure; Richard Branson advises us to use that time to develop new ideas. In a piece he wrote for LinkedIn, he explained how his mother is constantly starting new projects (like breeding parakeets and growing Christmas trees) and considers setbacks part of life's learning curve.

Imagine climbing a steep mountain. It's much easier to climb if you're facing forward and looking up. If you take time to look behind you, you instantly slow yourself down and have to reset for the upward climb.

Bob Parsons
CEO and founder of GoDaddy, Bob Parsons has 16 rules for success. Some important ones include:

Never give up.

Focus on what you want to happen.

Don't let others push you around.

Fair is what you pay to get on a bus (i.e., fare). Life isn't fair. You make your own breaks.

Don't take yourself too seriously.

But the best might be what his father told him when he was struggling to get Parsons Technology up and running early on in his career. "Well, Robert, if it doesn't work, they can't eat you." What's the worst that could happen?

Martha Stewart
You can do anything you put your mind to. That is what Martha Stewart's father taught her at a very young age. And sometimes your desires change; sometimes, what you think you will get out of a job is much different than you had imagined. After a brief modeling career and a stint working as a stockbroker in New York City, Stewart knew she was interested in houses, landscape, design and cooking. After shifting directions and finding success with a small gourmet food market she opened, she found her niche and began a catering business, which, in turn, launched her empire. When trying to discover your entrepreneurial passion, Stewart encourages people to consider their strengths, weaknesses, interests and desires. Ask yourself how hard you want to work. And believe in yourself.

Even a brief encounter with a stranger can change your life. Watch. Listen. And remember, as GoDaddy's Bob Parsons reminds us on his 16 rules list, "We're not here for a long time, we're here for a good time!"

Join the conversation: What is some of the best business advice you have gotten? Your response may change someone's life.

Things change, but recent advances in technology, coupled with social changes, are altering the work/life balance, and not in the way that was once expected. Shorter days and more leisure time were a twentieth-century dream for the twenty-first-century world of work, but the reality is somewhat different.

At one time, information and communications technology (ICT) for the working environment was only accessible to a select few, controlled by central diktat and superior to anything you were likely to see at home. Now the complete opposite is true and consumerised IT not only extends the working day into individuals’ personal lives, but also allows them to make choices and to bring their personal devices (BYOD) and activities, especially social communications, into the main hours of the working day.

While this blurring may not be an issue, provided employees do not push personal activity so far that it becomes a detriment to their work, it does create other challenges.

One in particular is related to another change, this time instigated by the organisation. There is an increasing need to open up business applications to communicate and share information with users outside the organisation. This means sharing beyond the physical boundaries, with employees on the move or working from home, but also beyond the corporate boundaries, with contractors, third-party suppliers, business customers and even consumers. The reasons for this are to improve relationships with customers, to transact directly with them and to integrate the supply chain more tightly.

Organisations are themselves also increasingly using social media to do this as they feel that it will make it easier to identify, communicate with and retain customers.

The problem then is how and what to share, and will it be safe?

Until recently, the main methods of sharing information remotely with anyone external were physical media (CD, memory stick, etc.), especially for large volumes of data, or, more often for smaller volumes, email. Most organisations are relatively confident they can secure email sharing, and there are certainly many tools to support this and minimise data leakage.

Physical media is trickier and, as mobile devices have become increasingly prevalent, the physical device risk has increased further. Data might move by direct connection through USB devices such as memory sticks (although 'podslurping' was a term coined for downloading gigabytes to a connected iPod) or over the air through a cellular or Wi-Fi connection.

The risks this brings through the potential loss or theft of a device are well known and understood, with mobile device management (MDM) protections often put in place to lock or wipe devices and sometimes, though not frequently enough, on-device encryption. There are also those who avoid data residing on the device at all by using virtual connections that leave no permanent data footprint.

However, a greater risk comes from user behaviours related to the increasing use of social media—posting or sharing something 'out there' on the internet. This might be as an update to 'friends' via a social media site or a dedicated cloud storage provider.

Either way it is potentially out of sight from an enterprise perspective, as employees will be using their own preferred tools to create a Bring Your Own Cloud or Collaboration (BYOC) experience. If this casual and informal usage translates into how official or formal information is shared with third party businesses and consumers, the organisation is not in control, making the demonstration of compliance virtually impossible and increasing security risks.

It might be that enterprise IT has its own set of endorsed tools for information sharing via cloud based services, but the blurring of boundaries in employee behaviour may make the use of these difficult to enforce, especially if employees have been allowed or even encouraged to BYOD in an uncontrolled manner. One way or another, lax behaviour may need to be reined in, monitored or checked.

If you have the habit of using the same password for all your online accounts, you might end up becoming a cyber-attack victim!

2012 was fabulous on many counts, but when it comes to information security, it was indeed a year of high-profile security breaches and identity thefts across the globe. Individual users and mighty enterprises alike fell prey to hackers. High-profile cyber-attack victims included Zappos.com, one of the largest online retailers for shoes and apparel; LinkedIn; Dropbox; and numerous others.

And in early December 2012, a shocking report revealed that a disgruntled IT administrator at a Swiss spy agency had allegedly downloaded terabytes of counter-terrorism information shared among intelligence agencies in the US and UK and was looking to sell it to foreign and commercial buyers.

If you dig into most of the cyber-security incidents reported this year, you will realize that password reuse and insider threats emerged as the most dangerous IT security issues of 2012. Incidentally, the solution to combating both issues lies in deploying a Password Manager!

Password Reuse Affects All – Individuals & Enterprises Alike
With even tech-savvy users tending to reuse the same password across many IT applications and websites, identity theft in one place leads to a compromise in numerous other places. Nowadays, it is quite common for users to use the same login credentials for multiple sites: social media, banking, brokerage and other business accounts. If the password gets exposed on any one of those sites, in all probability hackers will be able to gain access to all your other accounts too.

If you have the habit of using a single password for all your accounts, be prepared for security surprises and shocks!

It is always prudent to have a unique password for every website and application and to supply it ONLY on that site/app. When there is news of a password exposure or hack, you can then change the password for that site/app alone. Changing passwords frequently is another great habit to have.

But here comes the problem: you will have to remember multiple passwords, sometimes tens or even hundreds of them. It is quite likely that you will forget passwords and, on the occasion you most need access, struggle to log in.

The way out: Use a Password Manager
Just as you have an email account, consider using a password management application too. In order to combat cyber-threats, proper password management should ideally become a way of life. Password Managers securely store all your logins and passwords in a centralized repository. In addition, you get the option to launch a direct connection to websites and applications from the password vault’s GUI itself; saving you even the ‘copy & paste’ task, logging in is just a click away. Once you deploy a Password Manager, you can say goodbye to password fatigue and security lapses.
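The underlying principle, one strong randomly generated password per site, can be sketched in a few lines; a real Password Manager would of course keep the vault encrypted behind a master password rather than in a plain dictionary, and the site names here are placeholders:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def new_password(length=20):
    """Generate a random password; one per site, never reused anywhere else."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One distinct credential per site, so one breach cannot cascade to the rest.
# In a real Password Manager this vault would be stored encrypted.
vault = {site: new_password() for site in ("bank.example", "mail.example", "shop.example")}
for site, pwd in vault.items():
    print(site, pwd)
```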

Insider Threat – The Emerging Issue for Businesses
Password Managers could safeguard business enterprises from yet another emerging threat. As things stand today, the biggest threat to the information security of your enterprise might be germinating inside, right at your organization. The business and reputation of some of the world’s mightiest organizations have been shattered in the past by a handful of malicious insiders, including disgruntled staff, greedy techies and sacked employees.

In most of the reported cyber sabotages, misuse of Privileged Access to critical IT infrastructure has served as the ‘hacking channel’ for the malicious insiders to wreak havoc on the confidentiality, integrity and availability of the organization’s information systems, resulting in huge financial losses. In government agencies, insider threats might even result in jeopardizing the security of the Nation.

It is common to see organizations storing privileged passwords that grant virtually unlimited access privileges in a haphazard manner, in insecure, scattered sources like paper, text files and Excel sheets. A lack of internal controls, access restrictions, centralized management, accountability and strong policies, capped by this haphazard style of privileged password storage and management, makes the organization a paradise for malicious insiders.

Tightening Internal Controls – Need of the Hour
One of the most effective ways to combat insider threats is to tighten internal controls. Access to IT resources should be strictly based on job roles and responsibilities. Access restrictions alone are not enough: there should be clear-cut trails of ‘who' accessed 'what' and 'when’.

Internal controls can be bolstered by automating the entire life cycle of privileged password management and enforcing best practices. Enterprise password management solutions help achieve precisely this.

Deploying a password management solution would indeed be the best start towards information security this year!

I really hesitate to introduce a term like 'meta-governance', but that's what we need: governance of governance itself. Governance can be a barrier to business agility and business effectiveness if done wrong or with a heavy hand. Governance itself needs to be governed to ensure that we deploy 'just enough' governance to manage real risks and promote real trust in automated systems, while remembering that 'just enough' probably includes adhering to the letter of all applicable regulations.

Governance frameworks such as COBIT are important, not because they give us a bible that can be imposed on employees (with the implication that employees can't be trusted) but because they provide a reference against which business automation practice can be assessed: are there governance issues that we don't cover and, if so, should we; are there issues that weren't important but now are (so we should now instantiate more of the framework); are there things that we do that go beyond the framework and, if so, is this necessary or just 'gold plating'?

This is becoming an issue today particularly because of the rise of DevOps, which started as a movement when Agile developers found Operations delivery was becoming a bottleneck; and Operations realised that their future was limited if they became seen as The People Who Say NO!

However, if greater business effectiveness is the objective instead of simply more efficient software delivery (and, let's face it, delivering more and more software is only a good career move if that software is actually used by the business to make money or grow the business) then we do need to include 'just enough' governance in the DevOps process.

Despite the views of many developers, 'new' is not necessarily 'good', and software delivery can damage business service levels as well as improve them. Even assuming the software actually works (that is, that it "meets spec and doesn't fall over often"), perhaps the spec is wrong (even if developed with agile techniques and with real users on the team, perhaps you got the wrong users) or out of date (perhaps the environment has changed and your company hasn't noticed yet); perhaps the new system is too clumsy, or too slow, to be used effectively; perhaps it falls foul of some knee-jerk regulation just introduced.

Sometimes saying "NO" before a turkey hits production is the best for all concerned. Of course, perhaps the adoption of real Agile principles makes producing a turkey 'impossible' - well, rather less likely - but is Agile as you practice it 'real Agile' with all the discipline that implies; and 'less likely' really isn't the same as 'impossible' anyway.

So, in an environment with increasing regulation and where web-based commerce means that the scope of impact of a real turkey could include destroying the business before anyone could react, governance is an important part of DevOps, something which IBM's DevOps story (just one example) appears to recognise.

So what sort of governance do we need? Well, I have a "Sim City" vision for governance, in which you explore the behaviour of a developing system in a (controlled) computer-gaming-style simulation environment; this is just one possible option. As you build a new system using a model-based systems engineering approach, you execute the developing system models as a production-oriented simulation of the real business process. There are systems today that help you simulate the behaviour of any external systems or processes you'll need to integrate with, so all of the stakeholders in the new system can play with it and bring up any issues they have well before any code hits production. Participation in a simulation of a developing, evolving business outcome could even help to create an effective feedback loop between customers, deployed applications and developers.
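Purely as a sketch of the idea (the stages, failure probability and governance rule below are invented), a process model can be executed many times with a rule checked on every run, so stakeholders see potential violations long before any code reaches production:

```python
import random

# A toy, production-oriented simulation of a business process model:
# orders flow through stages, and a governance rule is checked on every run.
STAGES = ["received", "credit_checked", "compliance_approved", "shipped"]

def run_order(skip_compliance_prob=0.1):
    """Walk one order through the process; occasionally the model 'forgets' a stage."""
    return [s for s in STAGES
            if not (s == "compliance_approved" and random.random() < skip_compliance_prob)]

def governance_check(path):
    """The 'just enough' rule: nothing ships without compliance approval."""
    return "compliance_approved" in path or "shipped" not in path

random.seed(7)  # deterministic demo run
violations = sum(1 for _ in range(1000) if not governance_check(run_order()))
print(f"{violations} of 1000 simulated orders would have shipped without approval")
```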

With a suitably controlled development environment, you could even start collecting evidence for regulatory and safety compliance - even if this was just a framework that needed confirmation after implementation in production, this confirmation should then be quick and efficient, with no surprises.

'Sim city governance' would be lightweight 'just enough' governance and it might even be fun. But it might deliver some comparatively strong governance, in practice; strong in comparison to what IT often achieves at the moment, anyway. For instance:

If IT governance overall is about delivering automation that is cost-effective and supportive of business strategy and process, without waste, it will rapidly become obvious (as long as all stakeholders are encouraged to play the simulation) if what is being simulated is being gold-plated and/or isn't anything the business really wants. It is much easier to get the business practitioners that can tell you this interested in a computer-game simulation than in a requirements spec - or even a business process model.

Regulatory requirements are sometimes so obvious to business practitioners that they go unmentioned; and they often make little sense to developers, so they have to be expensively bolted on at the last minute, sometimes impacting any or all of performance, usability and security. This disconnect might be overcome if the appropriate stakeholders could see a realistic simulation during development.

There's often a similar disconnect between security practitioners and developers, which could again be identified while 'playing' with a simulation.

Performance testing - end user experience validation - is really only feasible in production, with conventional development. However, with a controlled simulation, the likelihood of performance surprises in production (and, in particular, meeting the sort of performance problems that are inherent in bad design) could be much reduced. You might consider predicting real production performance, with confidence limits, from a good simulation.

Risk management and risk mitigation should be built into the design of a well-governed system - but, once again, is often a bolted-on afterthought. And, once again, a lack of appropriate risk management is more easily identified in a life-like simulation than in a system spec or formal model.

So, does anyone else think that the availability of life-like simulations, with underlying links to formal systems engineering models used to build automation, would help promote just-enough governance? Governance that could help to ensure that DevOps rapidly delivers into production safe (or adequately well-governed) and effective automation?

Earlier in the year Quocirca was asked a surprising question, which was along these lines; “if we use a cloud-based storage service and there is a leak of personal data, who is responsible, us or them?” Make no mistake, the answer is that, regardless of how and where data is stored, the responsibility for the security of any data lies with the organisation that owns it, not its service providers.

In general terms, regulators are mainly concerned about personal identifiable data (PID). In the UK, the Data Protection Act (DPA) requires any company that processes PID to appoint a data controller to ensure the safe processing and storage of such data. The controller should indeed be wary of cloud-based storage services when it comes to compliance with the DPA and EU Data Protection Directive, which is being updated this year.

As was pointed out in a previous Quocirca blog post, “The highly secure cloud”, this is not because cloud storage services are inherently less secure; indeed, in many cases such services are likely to be more secure than internally provisioned storage infrastructure. The danger comes from how such services are used. There are four main use cases that data controllers should be wary of:

1 – Storage provided as part of an infrastructure-as-a-service (IaaS) offering. Here the provider is simply providing a managed storage facility. As long as the provider is well selected then the base infrastructure should be more than secure enough; it will be how it is used that matters and that is down to the buyer of the service. There are two caveats:

The EU Data Protection Directive requires that personal data is processed within the physical boundaries of the EU (unless covered by a safe-harbour agreement).

Some countries have far reaching laws when it comes to the ability to request access to data, most notoriously the US Patriot Act. Safe-harbour does not protect against this.

So the physical location of the storage facility used must be defined and guaranteed in the contract with the service provider.

2 – Backup-as-a-service. Here the provider takes a copy of your data and promises to restore it should the original be lost. This may be a short term backup service or a long term archiving service. The main difference here is the provider is now responsible for selecting where the data is stored, so the service level agreement must again cover physical locations and state that the provider will not use primary or secondary locations that fall outside the compliance boundaries.

3 – Software-as-a-service (SaaS). Here a subscription is made to an on-demand application that will process and store data. Again, it must be understood where data will be stored and processed. Many of the big US-based providers (for example salesforce.com) have safe-harbour agreements with the EU, so it is OK for personal data to be processed and stored in their data centres outside the EU as part of a specific SaaS agreement.

4 – Consumer cloud storage services. These are the most insidious threat and open up a wild frontier as they are often provided on a freemium basis. They are attractive to users who want to back up their own personal data and access data from multiple devices. However, if business data gets caught up in the mix, the data controller has now lost control. This requires a mix of end-point security, mobile device management, data loss prevention and web access control to be in place that is beyond the scope of this article.

Well provisioned cloud storage services are an inherently safe place to store data. However, data controllers need to understand how they are being used and have clear SLAs in place. If a provider fails to meet an SLA, the buyer can seek compensation, but by then it is too late; it is the data controller’s door that the enforcers of the DPA will come knocking on.

As a general rule I do not outright recommend any particular product from any particular vendor but I am going to make an exception in the case of Excel 2013 and the accompanying release of SharePoint.

Other analysts have described the enhanced business intelligence capabilities of Excel 2013 and these may be terrific but they represent an evolution of the existing capabilities rather than something dramatically new. What is brand new is the governance and compliance capabilities that will be in Excel 2013.

Last year Microsoft acquired Prodiance, a leading provider of spreadsheet management tools. While by no means the largest (in terms of users) of the vendors in this space, I have always liked the Prodiance user interface, and it had all the major features one would expect from a spreadsheet management product: the ability to discover spreadsheets, facilities to assess the risk associated with each spreadsheet (by business value, complexity, likelihood of breakage and so on), error detection and remediation, and the ability to take spreadsheets under central control.

In Excel 2013 all of these features will be built in, either to Excel itself or SharePoint. To my mind this makes Excel 2013 a more or less mandatory acquisition. It means that, for the first time, you can get proper governance and error checking and correction facilities built directly into Excel.

This release is going to have some serious consequences in the market. There are, broadly speaking, two categories of product in the spreadsheet management space: error detection and correction tools such as Spreadsheet Detective that you can download for a few hundred dollars, and full-blown spreadsheet management tools (like Prodiance) from companies such as Cimcon, Cluster Seven and Finsbury Solutions as well as Boardwalktech (which is more about collaboration) and Lyquidity (more focused on the mid-market). Where do any of these vendors go? Why bother with the likes of Spreadsheet Detective and its ilk when you can get comparable facilities for free? And why spend hundreds of thousands or millions of dollars on a solution from Cimcon or Cluster Seven for the same reason? That's not to say that you might not prefer one of these solutions to Microsoft's but that will have to be an awfully big preference to justify the cost involved. Of course, for a while there will remain a market for these companies in pre-2013 versions of Excel but that will gradually disappear. In short, I cannot see these companies surviving long-term unless they can diversify.

However, this does not necessarily mean the end for all of these companies. With Excel 2013 having built-in governance I would not recommend anyone using Google Spreadsheets or Open Office in preference to Excel 2013 for anything except trivial applications. So, if Google (say) wants to compete with Microsoft at the enterprise level then it will have to develop or acquire comparable governance capabilities to Microsoft: which means that there is the possibility of an acquisition for any of Prodiance's erstwhile competitors. But other than that I don't hold out much hope.

The position with BI companies that target what used to be known as spreadmarts is slightly different. Actuate, for example, markets BIRT Spreadsheet (what used to be Actuate e.Spreadsheet). Theoretically this directly competes with Excel 2013 in that it offers both spreadsheet and governance capabilities but what the new features of Excel 2013 mean is that Actuate has now to persuade users to give up Excel for its environment, for no immediately obvious gain in functionality. That looks like a hard task. Of course, Actuate has other strings to its bow so it's not going to be in the same position as the pure-play vendors discussed above but, again, I don't really see this product surviving either.

So the bottom line is that Excel 2013 represents a killer blow by Microsoft. It will destroy other vendors in the spreadsheet governance space and remove any technical reason for moving away from Excel to other environments.

Cloud management and services orchestration platform provider ServiceMesh recently delivered Agility Platform 8.0, a major upgrade with features to help better govern and manage private, public, and hybrid cloud usage.

The platform provides Global 2000 enterprises with a consolidated platform for the consistent management, governance, orchestration and delivery of cloud applications, platforms and services. The need to control application services, without squelching the innovation that self-provisioning brings, has become acute for many organizations. Managing services separately for each cloud, SaaS provider or on-premises platform is complex, expensive and unwieldy.

And so ServiceMesh has identified the governance and policy-enabled orchestration of ecosystem-wide services as a crucial, burgeoning requirement for agile businesses, said Chairman and CEO Eric Pulier. "This is a policy-centric approach ... You need to gain a holistic view of applications," he said.

Agility Platform 8.0, which is delivered as an on-premises virtual appliance, allows companies to leverage services in an on-demand, self-service IT service management (ITSM) operating model. The platform remains independent of the cloud or enterprise applications and services. APIs are available for developers so that new services can leverage Agility right away, even as it supports legacy and existing hybrid-delivered services, said Pulier.

The result is to compress IT service delivery times, lower IT operating costs, and increase investments in IT innovation, said ServiceMesh, a venture-backed start-up in Santa Monica, CA. Commonwealth Bank of Australia is using ServiceMesh to improve its services management.

ServiceMesh has a bold vision of enterprise agility via holistic services orchestration capabilities that manage both on-premises and cloud-based services, with automation of service lifecycles through policy-based definitions and enforcement.

Enterprise customers today are clearly seeking solutions to the dual challenges of making their current IT organizations more responsive to business change, while also ensuring that business users will not get around internal IT resource constraints and delays by selecting an unauthorized external cloud provider’s self-service, pay-as-you-go IT resources. So-called shadow IT deployment of services muddies the water, especially around control and security. BYOD is another complicating factor for more and more organizations.

What's more, governance, risk and compliance (GRC) requirements also demand the kind of centrally managed solution that Agility Platform 8.0 provides, said Pulier. Services management policies can vary from department to department and region to region, even as an enterprise wants to standardize on cloud or SaaS applications. Automated orchestration and event-processing logic allows for such complexity of services delivery, while banking on the efficiency and cost savings of consolidated service origins.

Accelerate adoption
The ServiceMesh platform allows organizations to accelerate the adoption of cloud services across the enterprise and move business applications into the cloud with complete governance and control, said Pulier. The Agility Platform automates the deployment and management of cloud applications and platforms and ensures the portability of these services throughout their lifecycle, independent of the underlying private, public or hybrid cloud environment.

I have certainly seen many ways emerge in the market to try and solve the services management complexity equation, and they vary from VDI, to app stores, to SOA registries, to SOA ESBs, to PPM and extended configuration management databases (CMDBs).

Pulier says the ServiceMesh architected platform provides "a better source of truth" than these other approaches about services across their full lifecycle, and across vast IT infrastructure heterogeneity. "It's more than a catalog, and federates back to the CMDB and other management capabilities," he said.

"You need a holistic view of the problem, and to provide a platform for the business, not just the IT department," he said. This approach "creates infrastructure- and cloud-independent applications management," said Pulier.

ServiceMesh is targeting its platform at both enterprises and cloud services providers. Expect more news on the channel at VMworld later this month. While the ServiceMesh platform is on-premises now, it may also be deployed at the cloud provider layer, and many of its capabilities can also be delivered as a service.

More specifically, Agility Platform 8.0 leverages an extensible policy engine that enables the creation and enforcement of an unlimited range of custom policies. Among the features ServiceMesh offers are:

Wizard-based capabilities to discover and automatically import existing virtual machines (VMs) deployed from other third-party provisioning tools in either private or public cloud environments. Upon VM import, the platform enforces user specified policies on those VMs to ensure the desired governance, security and control. VMs can then be published through a service catalog.

Capabilities to monitor cloud-provider performance and adherence to SLAs, and to compare different cloud services, measuring a range of different cloud-provider operational parameters, such as average VM provisioning time, number of failed or degraded instances, maximum number of concurrent provisioning requests executed and others.

Support for hybrid cloud strategies by enabling workload portability across a broad range of heterogeneous private and public cloud technologies. The latest release extends these capabilities with support for Microsoft System Center Virtual Machine Manager 2012 and Microsoft Hyper-V.

Improved extensible policy-based governance controls with new policy types to govern the sharing of pay-as-you-go IT resources within large corporate settings, including new options to control IT resource scheduling, sharing, leasing and chargeback.

A cloud-native architecture that dynamically scales to meet system demand, using only the amount of resources needed to rapidly execute provisioning requests, orchestrate auto-scaling operations, and perform other management functions.

A recent Quocirca report underlines the scale of the application security challenge faced by businesses. The average enterprise tracks around 500 mission-critical applications; in financial services organisations the figure is closer to 800 (Figure 1). The security challenge arises because more and more of these applications are web-enabled. Furthermore, businesses are increasingly relying on software provided as a service (SaaS) and apps that run on mobile devices, both of which are, by definition, exposed to the internet (Figure 2).

Businesses worry about application security for three reasons. First, security failures leave them vulnerable to hackers and malware; second, auditors expect application security to be demonstrable; and third, customers, with whom they share business processes via applications, are increasingly likely to seek security guarantees. Fixing security flaws up-front wherever possible also makes sense because of the cost of doing so after software is deployed. There are both product and service opportunities for resellers to help their customers achieve these goals.

There are a number of approaches that can be taken to improve application security. For in-house developed software, better practice can be ensured through the training of developers, though many businesses will need assistance to achieve this. For commercially acquired software, due diligence during procurement is necessary, seeking assurances from independent software vendors (ISVs); resellers that sell application software could do this for their customers as part of their value add. However, these measures can never ensure that software is 100% secure.

For this reason there are three other approaches that should be considered:

Application scanning: scanning identifies flaws in software so they can be eliminated in the first place. There are two approaches: the static scanning of code or binaries before deployment and the dynamic scanning of binaries during testing or after deployment. Static scanning is exhaustive, looking at every line of code. Scans can be conducted as regularly as is deemed necessary. Whilst on-premise scanning tools have been relied on in the past, the use of on-demand scanning services has become increasingly popular, as the providers of such services have visibility into the tens of thousands of applications scanned on behalf of thousands of customers. Such services are often charged for on a per-application basis, so unlimited scans can be carried out, even daily. The relatively low cost of on-demand scanning services makes them affordable and scalable for all applications, including non-mission-critical ones. Resellers could sell the tools, or better still use scanning services to verify code before recommending applications to their customers.

Manual penetration testing (pen-testing): where specialist third parties are engaged to test the security of applications and effectiveness of defences. These are white-hat hackers, deliberately trying to break into applications, but with no bad intent (as opposed to black hats). Because actual people are involved in the process, pen-testing is relatively expensive and only carried out periodically; new threats may emerge between tests. Most organisations will find pen-testing unaffordable for all deployed software and it is generally reserved for the most sensitive and vulnerable applications. Resellers with the right skills could offer pen-testing services or seek referral fees from specialists in this area.

Web application firewalls (WAF): these are placed in front of applications to protect them from application-focussed threats. They are more complex to deploy than traditional network firewalls and, whilst affording good protection, do nothing to fix the underlying flaws in software. WAFs also need to scale with traffic volumes - more traffic means more cost. They represent a product resale opportunity.

100% software security is never going to be guaranteed and many organisations use multiple approaches to maximise protection (Figure 3). Interestingly, though, even where one of the reasons for having demonstrable software security is to satisfy auditors, compliance bodies do not themselves mandate multiple approaches. For example, the Payment Card Industry Security Standards Council (PCI-SSC) deems code scanning to be an acceptable alternative to a WAF.

For today’s businesses the use of software applications is not a choice; however, there is a choice when it comes to the methods chosen to improve software security and, in turn, the costs involved and the benefits achieved. Using the right mix of approaches at all stages of the software development, procurement and deployment life cycle will improve the efficiency, reliability, security, compliance and competitiveness of business processes; these are all goals that resellers should be aiming to help their customers achieve.

System administrators will often need wide ranging access to systems and devices to do their jobs, but systems are not the same as data. Many individuals working in IT departments will in fact be in relatively junior roles. Indeed, they may often be contractors from third parties. Access to confidential data should be just as limited for them as it is for “normal” users.

However, this is often not the case. Many acting under privilege have access to far more data than they need to do their job. The vast majority of organisations admit this happens at least occasionally; for around 20% it is a regular practice.

Not surprisingly, the case is worse where there has been no pro-active attempt to limit the data that those acting under privilege have access to. However, even those that do take such measures admit that system administrators do have access to more data than they need to do their jobs. This is not that surprising; most tools that enable such controls are neither powerful enough nor sufficiently easy to use.

In one area, such controls are absolutely paramount. With the move to cloud computing and the shared IT infrastructure that this involves, cloud service providers must guarantee that their system administrators will be able to access only the systems they need to and not confidential customer data.

On-demand software offers a number of benefits over applications installed and managed on a company’s own premises. These benefits include infrastructure costs being shared among multiple customers, and the availability of experts dedicated to running the app, which frees up in-house resources for other tasks.

Scanning applications should be an essential part of any business’s overall approach to software security. This process applies to end-user organisations that develop and procure software for use inhouse, as well as to independent software vendors who write and sell software.

Software security scanning is an alternative, accepted by organisations such as the Payment Card Industry Security Standards Council (PCI SSC) to web application firewalls (WAFs), which are a way of protecting deployed software against application-specific attacks.

Scanning ensures problems are identified and fixed early in the software development and deployment cycle rather than left to run-time, as WAFs do.

New research published by Quocirca shows that code scanning in general is the most widely used approach to software security, and that the use of on-demand scanning services is now almost as widespread as the use of on-premise tools, especially for packaged applications bought from independent software vendors.

Some may be surprised that third-party code can be scanned in this way. To understand this approach requires an understanding of the two basic ways of addressing the issue: static and dynamic software scanning.

Static scanning is where software code or binaries are taken and run through a scanner. Every line is examined and analysed within the context of the development language, and potential flaws are identified along with advice on how to fix them.

Static scanning is thorough. It looks at all areas of the code regardless of how likely it is to actually be executed at run-time. When using an on-demand service for static scanning the application is submitted to the service provider over a secure link for a report.

Static scanning has traditionally been more suited to inhouse-developed code than commercially-acquired applications, because independent software vendors do not readily give up their source code for scrutiny. However, the advent of binary static analysis means any application can now be subjected to a static scan.

All that’s needed are the final executable files. This approach has the additional benefit of including analysis of embedded third-party components, which source-code scanning would not provide. It may be advisable to seek the co-operation and permission of independent software vendors when scanning their applications. Indeed, they may well provide details of scans they themselves have commissioned.

Dynamic scanning can also be carried out independently of the supplier. Here the focus is on web-enabled applications that are scanned in a test or run-time environment. It is not as thorough as static scanning, because only the discovered executable routes through the code are followed. But these routes are the ones most prone to attack.

Since no sources or details of binaries are required, dynamic testing can be used to test any web-enabled application, including those provided as on-demand services as well as inhouse-developed and deployed ones.

The process is straightforward. Simply point the scanner at the URL for the application and let it get on with it. There seems little point in buying and installing tools to carry out such scans on-premise when you consider how easy it is to point an on-demand service at a web-enabled application.
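
To make the "point and scan" idea concrete, here is a minimal sketch of the black-box principle behind dynamic scanning, assuming only Python's requests library and a placeholder URL; it merely checks for a few security-related response headers, whereas commercial dynamic scanning services probe for whole classes of vulnerabilities.

```python
# Minimal sketch of a black-box, "from the outside" check on a running web app.
# Real dynamic scanning services test far more than this; the target URL below
# is purely a placeholder.
import requests

TARGET = "https://app.example.com/login"  # hypothetical web-enabled application

EXPECTED_HEADERS = [
    "Strict-Transport-Security",  # enforces HTTPS on future visits
    "Content-Security-Policy",    # limits where scripts may be loaded from
    "X-Frame-Options",            # basic clickjacking protection
]

response = requests.get(TARGET, timeout=10)
print(f"HTTP {response.status_code} from {TARGET}")
for header in EXPECTED_HEADERS:
    status = "present" if header in response.headers else "MISSING"
    print(f"  {header}: {status}")
```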

This advantage is especially true when the benefits of using an on-demand service specific to code scanning are taken into account. Top among these benefits is the wisdom of crowds.

Because code-scanning service providers are dealing with hundreds of customers, and scanning many thousands of applications on their behalf, they soon build up a picture of common problems.

When it comes to commercial code, they will often have seen it before and know what to look for and have an understanding of common flaws introduced through customisation.

This familiarity allows service providers to benchmark the results of a given scan against the results they have had from other scans and indicate to a customer if its code is below or above average.

This facility makes it easy to set thresholds and offer advice about the dangers of proceeding with the deployment of a given application without making modifications to the code or putting other security measures in place.

Understanding software security is the core competence of the providers of on-demand scanning services. The developers of software code, whether they’re coders working for end-user organisations or ISVs, do not necessarily have this skill.

Their focus should be on building the core functionality of their applications and ensuring they deliver the expected business value; the task of security testing can be outsourced.

For those interested in finding out more about the benefits of dynamic and static code scanning and the results of Quocirca’s latest research, the report is freely available here.

Back at the beginning of February this year, I had a briefing with a company local to me, Celaton, whose headquarters are in Milton Keynes. What had intrigued me to know more was the service that they are providing to their customers - the automation of a company's inbound document flow.

Celaton was founded in 1993 as Red Rock Technologies and is a registered UK company. The company was floated in 2001 and sold in 2002, only to be bought back in 2004 and reformed in 2005 as Celaton.

Their product, inSTREAM, automates the inbound information streams (post, fax, email) that flow into and through organisations every day, guaranteeing delivery of this information into the relevant customer systems. The company has an impressive and wide-ranging set of UK businesses as its clients, including Carphone Warehouse, Asos.com, Gullivers Travel Associates, Talk Talk and Davies Group. The price of the service is based on the volume of transactions and is offered in pricing bands.

Figure 1: Celaton inSTREAM (Source: Celaton)

One of the banes of businesses today is the volume of documents, faxes and emails received every day. An enormous amount of time is spent by employees reading this incoming information and deciding on its importance and relevance. inSTREAM is a universal 'hub' that receives this plethora of inbound data and transforms the information contained in the various documents into a unified format for onward processing. Any paper-based documents can be scanned and captured into inSTREAM from any location using the inSTREAM client software. Emails, attachments, faxes and other electronic data streams can be received directly into inSTREAM. One of the real differentiators of inSTREAM is its ability to self-learn: it is able to identify, locate and extract key data and make decisions based on the data and documents received.

inSTREAM works in the following way:

Incoming documents in whatever format are captured

The key data essential to the process is identified and extracted

The retrieved information is validated - this sometimes involves connecting to a customer's systems or contacting a person in the organisation to resolve any issues

The validated information is delivered into the line of business systems of the customer

All the collected information is securely stored and managed, so that customers can access historic information and also run trace searches on what has been collected by inSTREAM

Celaton have also produced some customised versions of inSTREAM to handle correspondence, procurement, accounts payable, travel and expenses, claims and recruitment.

For organisations whose business involves large amounts of correspondence that needs to be processed daily, Celaton inSTREAM provides a very sophisticated service at a reasonable cost.

A few decades ago, digital communications promised to sound the death knell for printing and the paperless office was predicted to be just a matter of time. Yet the paperless office has failed to materialise, with email and the internet actually leading to more printed documents. The popularity of smartphones and tablets in the workplace is now leading to similar warnings of less printing, with iPads and other tablets in particular expected to displace the printed page. However, Quocirca believes that this supposed threat to printing actually opens a new landscape of opportunity to printer vendors – but only if they can provide simple, reliable and secure ways to print from mobile devices.

Undeniably, the consumerisation of IT is having a profound impact on the use of smartphones and tablets in the workplace. Today’s dynamic and mobile workforce relies on personal devices in their professional lives and expects anytime, anywhere access to corporate systems – including printing. Even in this era of smartphones and tablets, businesses continue to rely on printing – 75% of 125 enterprise respondents in a recent Quocirca survey indicated printing as playing an important role in supporting business activities. There is certainly an appetite for printing from mobile devices, with 55% of respondents indicating that employees would like to be able to print from their mobile devices. Around 25% are already investigating mobile print solutions.

Given the diversity of mobile platforms and printer hardware, it is unsurprising that the mobile printing market is fragmented, characterised by an array of hardware, software and cloud-based services. Not only is the demand for mobile printing an opportunity for more hardware sales – HP, for instance, shipped over 15 million web-enabled ePrint printers in 2011 – but it also enables vendors to capture pages as they shift from the desktop to the mobile device. In many cases these are ‘high value’ colour pages that generate additional revenue opportunities.

Mobile printing usage scenarios can be broadly categorised as either public printing/guest printing services or printing across a corporate network. Public/guest printing covers 'hot-spots' such as hotels, business centres or airports that offer Wi-Fi connectivity, web access and print and copy services. Mobile workers can discover printers and use universal print drivers, web-based means of submitting print jobs or send them as an email attachment from their mobile devices. Public print locations should require an authentication code before users can release a print job from a designated printer to ensure that print jobs are not mislaid or stolen by passing employees or members of the public. Examples include EFI’s PrintMe service which is available at more than 3,000 public locations; HP ePrint public print locations such as FedEx and Hilton; and Ricoh’s HotSpot printing which uses PrinterOn’s public printing network. PrinterOn’s Mobile Printing Solution currently supports over 7000 PrinterOn print locations worldwide.

Printing from any device to any printer or MFP across a corporate network promotes user mobility across company locations. Printing may be direct from a mobile device or application, via an email attachment to a registered printer or through a web browser, using a public or private cloud. When deployed in the enterprise, it is critical that mobile print solutions are vendor-agnostic, use a private cloud approach and employ encryption and authentication methods to ensure document security and privacy.

The mobile printing ecosystem is broadly populated by printer/copier manufacturers and independent software vendors (ISVs). Hardware manufacturers may typically offer a mobile printing portfolio that comprises hardware, software and services. Printers may be cloud or web-enabled, as in the case of HP’s ePrint or Ricoh’s HotSpot range of printers. This allows devices to be registered for these vendors’ respective cloud printing services.

Most of the hardware-centric mobile print solutions are brand-specific, although some do offer multivendor support. Hardware manufacturers such as Canon, HP, Lexmark, Ricoh and Xerox also offer mobile printing services as part of their managed print services (MPS) portfolio, enabling organisations to manage and track printing across both desktop and mobile environments. However, Canon’s uniFLOW platform, in particular, is currently the only integrated print management platform that tracks and reports on both desktop and mobile printing.

ISVs such as EFI, Cortado and PrinterOn all offer vendor-agnostic mobile print solutions. Solutions such as EFI’s PrintMe Mobile are particularly suitable for organisations operating a mixed fleet, avoiding the need to implement multiple solutions for each mobile platform and printer or MFP. In many cases, hardware vendors will partner with ISVs to deliver multivendor support where appropriate.

Currently the only mobile OS platform to offer direct printing support is Apple’s AirPrint. This offers wireless printing from iPad, iPhone (3GS or later) or iPod touch (3rd generation or later) devices to AirPrint-enabled devices. These include selected printers from Brother, Canon, Epson, HP and Lexmark. Google Cloud Print, currently in beta, offers printing from smartphones or tablets with Gmail for mobile, Google Docs for mobile and other supported apps to cloud-enabled printers.

Given the lack of standardisation around mobile printing, organisations are faced with a challenging task in navigating the range of solutions on offer. Whilst smartphones and tablets may diminish the need for a certain amount of printing, they are not going to eradicate it. Therefore, organisations should offer employees mobile printing capabilities that enable them to remain productive, whilst also ensuring mobile printing is tracked and secured in the same way as desktop printing. Quocirca believes that mobile printing will become a crucial part of an overall enterprise print strategy as pages gradually shift from the desktop to the mobile device.

Not all systems administration (sys-admin) is done by people. Some applications need administrator access to communicate and make changes.

Furthermore, remote management tasks are often carried out using pre-set procedures in sys-admin tools, for example the backup of branch office devices.

For this to work, privileged login details are often embedded in the applications or tools that require them. Should the wrong individual get access to these credentials, they may be able to use them for malicious purposes.

To make things worse, when such details are embedded they rarely get changed, because it is burdensome to do so; consequently, the credentials may remain valid long after they have been compromised.

This risk is exacerbated by the fact that such privileged login details are often not just stored but also transmitted in clear text.

In recent Quocirca research, around 50 per cent of organisations admitted that sys-admin login details were regularly transmitted in clear text, although the figure varied widely by industry.

This need not be the case.

First, applications and tools needing privileged access rights should be administered and monitored in the same way as "human" privileged users (for example, they should not use group access privileges).

Furthermore, the assigned login details need not be transmitted in clear text. Passwords can easily be masked, or better still the whole transmission required to carry out a remote admin task can be encrypted.
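
As a rough illustration of that second option, the sketch below runs a remote admin command over SSH using the open-source paramiko library, so both the credentials and the task itself are encrypted in transit, and it reads the password from an environment variable rather than embedding it in the script. The host name, service account and variable name are assumptions for illustration only.

```python
# Sketch: run a remote admin task over SSH instead of clear-text Telnet.
# The host, account and environment variable names are illustrative only.
import os
import paramiko

host = "branch-router-01.example.com"
username = "backup-svc"
password = os.environ["BACKUP_SVC_PASSWORD"]  # not hard-coded in the script

client = paramiko.SSHClient()
client.load_system_host_keys()   # verify we are talking to the expected device
client.connect(host, username=username, password=password)

# The command and its output travel over the encrypted SSH channel.
stdin, stdout, stderr = client.exec_command("show running-config")
config = stdout.read().decode()
client.close()

print(f"Retrieved {len(config)} bytes of configuration from {host}")
```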

Despite a continued reliance on printing, many businesses overlook print security in their overall approach to data protection. This may be set to change with the recent announcement that Xerox will be incorporating McAfee whitelisting technology into its multi-function printers (MFPs). This will enhance the hardware and software security capabilities that Xerox already offers to provide more secure printing, scanning, faxing and copying.

Quocirca welcomes the move; print security certainly needs to move higher up the IT security agenda. Although MFPs are an intrinsic part of the IT infrastructure, many organisations remain oblivious of the security risks they pose. These devices have the capability to scan, print, copy and email, operating as sophisticated document processing hubs with network connectivity, hard disk drives and embedded software. As such, printers and MFPs are more than peripheral in today’s IT environment.

Without the appropriate controls, it is easy for confidential data to fall into the wrong hands – whether unintentionally or maliciously. Yet, a recent Quocirca study, amongst 125 European and US enterprises, revealed that only 15% were concerned with data loss via a printer or MFP. Given the legal and financial ramifications of a data leak, as well as potential brand damage, businesses need to wake up to the print security threat.

There are a variety of measures that can be taken to mitigate the risks. A layered approach is required depending on the security posture of a given organisation. Devices can be protected through enabling features such as hard-disk encryption or overwrite, unused network ports can be disabled and user security can be applied through PIN only printing. Yet, Quocirca research shows low levels of adoption of these features. Whether this is complacency, a genuine lack of awareness or the complexity of implementation, it indicates that businesses are failing to protect themselves against an obvious threat.

The McAfee and Xerox partnership is a step in the right direction. By embedding McAfee software into its MFPs, Xerox customers will gain the benefits of whitelisting, a method that allows only approved files to run, which is more secure than traditional blacklisting, where the user has to be aware of the threat and continually update the list of malware (viruses, spyware etc.) in order to block it. Additionally, the solution provides an audit trail to track and investigate the time and origin of security events, and take action on them. The McAfee technology will be included in selected product releases over the next year. It will be available “out of the box,” meaning no special software uploads or Xerox service-driven upgrades are required post-installation. Xerox plans to roll-out the technology in new products as they are introduced.

Print security is a minefield for many businesses, particularly as standard security features vary widely between manufacturers, and even within a manufacturer’s own product range. The security threat landscape continues to encompass a wider set of threats – whether these are insider or external – and printers are far from immune. Quocirca believes that Xerox and McAfee’s proactive approach will raise print security higher up the overall IT security agenda.

IT systems don't run themselves – at least not all the time. At some point the intervention of system administrators – sys-admins – is required.

The very nature of a sys-admin's job requires that he or she is granted a higher, privileged level of access to IT infrastructure than that granted to normal users.

When the actions taken by sys-admins are other than those expected of them, there can be far-reaching consequences. In the worst case, a sys-admin may abuse their privilege for malicious reasons, for example to steal data or to set up backdoor access to IT systems for themselves or others.

Sys-admins are also good targets for identity theft through techniques such as spear phishing, a privileged ID being more useful to hackers than a normal one. However, the most common problem is simply that sys-admins are human. They make mistakes.

Privileged user management tools help address a number of issues that a recent Quocirca report showed were rife among UK businesses. So here are Quocirca's top 10 tips for better and safer systems administration.

Tip 1: Know your privileged users

Certain regulations and standards make strong statements about the use of privilege. One of the controls in the information security management standard ISO 27001 states that "the allocation and use of privileges shall be restricted and controlled". The Payment Card Industry Data Security Standard (PCI DSS) recommends "auditing all privileged user activity".

In other words, the use of group admin accounts is a strict no-no. Such accounts should be blocked and all privileged user access should be via identities that are clearly associated with individuals.

Tip 2: Make sure legacy privileged accounts are closed

This measure includes the default accounts provided with systems and application software, which with the right tools can be searched for and closed, and the accounts of sys-admins who have now left your organisation. The best way to deal with the second point is to provide only short-term access for specific tasks in the first place.

Tip 3: Minimise sys-admin errors

Quocirca's research suggests that the average error rate of sys-admins runs at about 6%. Errors can waste time - for example, applying patches to the wrong device - be a security risk in cases such as changing the rules of the wrong firewall, or cause disaster - say, wiping the wrong disk volume.

Sys-admin tools that guide users to the right device in the first place and double-check their actions can help avoid errors, as can the automation of certain mundane tasks.

Tip 4: Limit sys-admins' access to devices

Another way to avoid errors is to grant sys-admins privileged access to devices that need maintenance only for limited periods of time. Rather than providing wide-ranging and ongoing access, grant it only to a single device or small subset of devices and only for the period of time deemed reasonable to get the job done.

Tip 5: Encrypt sys-admin login details

Many sys-admin tasks involve maintaining remote devices, which requires the sys-admin login details and the instructions for the given task to be transmitted, sometimes embedded in scripts. It has been common for this to be done in clear text, especially when using services like Telnet. This approach provides easy pickings for hackers, so all such transmissions should be encrypted.

Tip 6: Back up all IT devices

The failure of IT devices is inevitable. What is important is that they can be recovered and up and running again as soon as possible. Most organisations are diligent about the backup of servers. They are less rigorous about the backup of network and security devices, the failure of which can be just as damaging to IT access.

Such devices should be backed up regularly and at least every time their configuration is changed. The backups should be stored securely, to prevent them being stolen and used to clone the original device. Automating such backups is the best approach.
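
The sketch below shows what such automation might look like, using the open-source netmiko library and Cisco IOS devices purely as an example; the device list, service account and file layout are assumptions, and in practice the resulting backup files would themselves need to be encrypted and access-controlled so they cannot be used to clone a device.

```python
# Sketch: scheduled backup of network-device configurations over SSH.
# Device list, credentials handling and file layout are illustrative only.
import os
from datetime import datetime
from pathlib import Path

from netmiko import ConnectHandler  # open-source multi-vendor SSH library

DEVICES = [
    {"device_type": "cisco_ios", "host": "edge-fw-01.example.com"},
    {"device_type": "cisco_ios", "host": "core-sw-01.example.com"},
]

backup_dir = Path("config-backups") / datetime.now().strftime("%Y-%m-%d")
backup_dir.mkdir(parents=True, exist_ok=True)

for device in DEVICES:
    conn = ConnectHandler(
        **device,
        username="backup-svc",
        password=os.environ["BACKUP_SVC_PASSWORD"],  # kept out of the script
    )
    running_config = conn.send_command("show running-config")
    conn.disconnect()
    # Store securely: these files could be used to clone the original device.
    (backup_dir / f"{device['host']}.cfg").write_text(running_config)
    print(f"Backed up {device['host']}")
```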

Tip 7: Limit sys-admin access to data

To carry out their jobs, sys-admins need access to systems data, not business data. All too often, their wide-ranging privileges have given them access to both. This approach is unnecessary. To protect the data and sys-admins from the accusation of abusing their position of trust, the scope of their access should be limited.

It can be done with the right tools. Cloud service providers have to observe this distinction, managing their own infrastructure while respecting the confidentiality of their clients' data.

Tip 8: Safe disposal of old devices

All IT devices carry data that is potentially useful to hackers. Firewalls, load-balancers and content filters all contain various network-access settings and user details, along with system log files.

All devices have an end of life, so before disposal it should be ensured that all such data is safely deleted or the hard disks involved destroyed.
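
For file-level data on devices that are about to be disposed of, a minimal sketch of overwrite-before-delete is shown below; note that on SSDs and copy-on-write or journaling filesystems this does not guarantee the data is unrecoverable, which is why audited disk shredding or physical destruction remains the safer option.

```python
# Sketch: overwrite a file with random data before deleting it.
# NOTE: on SSDs and copy-on-write/journaling filesystems this is NOT a
# guarantee of unrecoverability; audited shredding or physical destruction
# remains the safer option for end-of-life devices.
import os
import secrets

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)

# Example (hypothetical path): wipe an exported log before disposal.
# overwrite_and_delete("/tmp/exported-syslog.txt")
```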

Tip 9: Be ready for the auditors

Auditors take a particular interest in the actions of privileged users for many of the reasons already outlined. As well as being able to associate a given sys-admin with his or her actions, a full audit trail for the admin history of a given device should be kept.

Maintaining this trail is only possible if access to the device is controlled and the tools that provide access keep a record with the necessary level of detail.

Tip 10: Free sys-admins from drudgery

Part of the reason why sys-admins make mistakes is that many of the tasks they have to carry out are mundane and repetitive. Automating as many of their tasks as possible and having the tools and procedures in place to allow safe delegation to junior and temporary staff can relieve some of the drudgery.

It leaves sys-admins free to focus on more productive tasks that increase the value IT provides to their organisation rather than just fighting to keep the lights on.

Want to see the full research? Quocirca's report “Conquering the sys-admin challenge” is freely available here.

For IT users, the most important things are the applications that enable them to do their jobs and the devices they access those applications from. However, system administrators (sys-admins), responsible for ensuring end-user devices can link to the applications, know it takes a lot more in between. Resellers know this too; selling both the high and low profile equipment is their bread and butter. What resellers may not realise is the extent to which their customers fail to manage much of their equipment securely and effectively and the additional opportunity this represents.

A new Quocirca research report—Conquering the sys-admin challenge—underlines the extent of the problem. It looked at three broad areas: the management of privilege, the ability to automate sys-admins' tasks and ensuring compliance.

The over-granting of privilege is a common problem; sys-admins are often granted access to more equipment than is necessary and they often have access to data they have no need to see (Figure 1). This is a problem, not because sys-admins are innately malicious people (although a few have turned out to be) but because, just like anyone else, they can make mistakes.

Errors made when acting under privilege can have a serious impact on the availability of IT systems. For example, the failure to back up a server properly (or at all) may mean data is lost and a project is put back by days or weeks; wrongly reconfiguring a network firewall may lead to remote users being locked out of systems they need to access; or spinning down the wrong disk volume for maintenance purposes may leave an email server out of action.

The new research shows that the average sys-admin's error rate is about 7%. One way to reduce error rates is better management of privilege. To achieve this it is necessary to have tools in place to manage the scope of privilege access, limiting the range of data and devices a sys-admin has access to and the time they have access for.

There is another way to reduce error rates—more automation of sys-admin tasks. Many tasks are mundane and repetitive. A good example is data protection: most organisations regularly back up file servers and many have automated this. However, other devices need protecting too, and it is less likely that the settings of firewalls, routers and load balancers are backed up (Figure 2). This is important for ensuring a quick recovery in the case of failure, and the task is an easy one to automate with the right tools. Other tasks can also be automated, including the gathering of data for audits.

This brings us full circle, because one area that auditors are keen to see IT departments have control of is the use of privilege. Some standards are specific about the management of privileged users. One of the controls in the information security management standard ISO 27001 states, “the allocation and use of privileges shall be restricted and controlled”. The Payment Card Industry Data Security Standard (PCI DSS) recommends, “auditing all privileged user activity”.

Many organisations do not have the controls in place to make sure this required data is gathered. Indeed some admit to appalling practices, in particular the uncontrolled changes to sys-admin procedures immediately prior to audits, which then lapse following the audit. Over two thirds of respondents admitted this happened at least occasionally; for some it was a regular practice (Figure 3).

When it comes to helping customers with the management of privilege, the automation of sys-admin tasks and ensuring compliance, resellers can take one of two approaches. They can either ensure the tools to do the job are available as part of their portfolio or they can use such tools themselves to provide managed services. Vendors that focus on the management of privilege and the automation of IT include Osirium (the sponsor of Quocirca's latest report), CA, Cyber-Ark, Quest Software and Lieberman Software.

Network and security devices age just like any other IT equipment. As the IT industry moves toward 100 gigabit/second Ethernet and 100 megabit/second broadband connections, many existing devices will no longer cope with traffic volumes. The need to replace routers, firewalls, load-balancers, content filtering devices etc. is an on-going process.

Some devices may be reusable by smaller organisations and have a second-hand value; others may just be fit for the dump. When the latter is the case they must be disposed of in line with environmental regulations such as the waste electrical and electronic equipment (WEEE) directive, enforced in the UK by the Environment Agency.

Either way, such devices will end up in the hands of third parties, and their eventual destination cannot be guaranteed. These devices have all sorts of confidential data and settings stored on them, such as user details and network access settings. In the wrong hands these could be used to gain access to private networks and, in any case, the leaking of such data may constitute a data privacy breach. It is therefore necessary to ensure all such data is securely deleted before devices are disposed of.

It varies by industry, but a recent Quocirca research report shows that around 40% of organisations said they were not confident all such data was safely removed prior to device disposal. Quocirca suspects that even those who claim to have done so have not actually shredded data but just “deleted” it, and a determined hacker may still be able to retrieve it. Only audited disk shredding or secure reformatting tools, operated by screened staff, can ensure such devices are completely safe to dispose of.

All the recent compliance headlines in the financial services sector, at least in the UK and Europe, have been around Solvency II, Basel III and MiFID II. A regulation that has been largely overlooked by the IT industry (except by Trillium, which has just announced the Trillium FATCA Compliance Data Assessment service) is FATCA.

FATCA (the Foreign Account Tax Compliance Act) is a US law that comes into effect on 1st January 2013. It is designed to ensure that US citizens who hold assets abroad pay relevant taxes. So, suppose I lived in Boston (Massachusetts, not Lincolnshire) and had an account with a UK-based bank, through which I held various investments. Today, I might be able to get away with not paying US tax on any profit I made from these investments. FATCA has been designed to ensure that this will not be possible in future.

FATCA applies both to US financial institutions that have any dealings overseas and to so-called foreign financial institutions: USFIs and FFIs respectively. These include banks, insurance companies, alternative investment companies, private equity companies, hedge funds and so on and (subject to there being some level of non-US interaction) any financial company that either has US citizens as customers or which holds US assets.

FFIs can either register as participating or as non-participating. Non-participation means that you are effectively opting out. However, if you do this, or if you are a participating company and fail to comply with the regulations, then the US tax authorities will apply a 30% withholding tax against any sales of US assets. Moreover, this is not against profits but against revenue so you could sell a stock at a loss and then have the 30% deducted. It is difficult to imagine any company that has any significant US business not wanting to both participate and comply.

If you decide to participate then you must be able to recognise which of your clients are US citizens and you will be required to provide relevant information about those clients. You must also have relevant processes in place to recognise whether new clients are American or not. The same is also true if you formally decide not to participate: you will need to demonstrate that you have procedures in place to recognise if new clients are American and, therefore, reject them as clients.

Unfortunately, the requirement for participating FFIs to provide relevant information about their US clients will fly in the face of the data protection laws of a number of countries. Where this is the case then the FFI will need to obtain a waiver from each of its clients to confirm that that information can be passed to the IRS or it will need to close that account.

Needless to say there are significant data governance implications in order to support FATCA, whether you are a USFI or are an FFI. You will need to know which clients are US citizens, ensure that they have signed a waiver, if relevant, have procedures for identifying whether new clients are US citizens or not, and have processes that ensure that only information about US citizens is provided upon request and that you do not break data protection laws by inadvertently sending information about non-US citizens. You will also need to be very clear about your data quality processes and careful about de-duplication and merging of records.
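
A toy sketch of the reporting filter implied above is shown below; the field names and records are invented for illustration and real FATCA programmes involve far more, but it captures the basic rule: report US clients who have signed a waiver, chase US clients who have not, and keep non-US citizens out of the submission entirely.

```python
# Toy illustration of the data governance filter described above.
# Field names and records are purely illustrative.
clients = [
    {"id": 1, "name": "A. Smith", "us_citizen": True,  "waiver_signed": True},
    {"id": 2, "name": "B. Jones", "us_citizen": True,  "waiver_signed": False},
    {"id": 3, "name": "C. Brown", "us_citizen": False, "waiver_signed": False},
]

# Only US clients with a signed waiver may be reported to the IRS.
reportable = [c for c in clients if c["us_citizen"] and c["waiver_signed"]]

# US clients without a waiver need follow-up: obtain a waiver or close the account.
needs_action = [c for c in clients if c["us_citizen"] and not c["waiver_signed"]]

# Non-US citizens are deliberately excluded to avoid breaching data protection law.
print("Report to IRS:", [c["name"] for c in reportable])
print("Obtain waiver or close account:", [c["name"] for c in needs_action])
```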

I have to say that this makes me feel a little sorry for financial services companies. In the UK they have only recently had to comply with FSCS regulations, and the insurance sector and banks (those that provide asset management) have to comply with Solvency II, which has the same official start date as FATCA (though it may be delayed). That's a lot to do in a short space of time (not to mention MiFID II and Basel III waiting in the wings). The one consolation is that you need good data governance for all three of these. Those that thought they could get away without seriously addressing data governance for FSCS may now be wishing that they had done it properly the first time.

It was interesting to see that TIBCO's acquisition of Nimbus generated so many negative comments in the analyst and blog community. Some suggested it was a strange acquisition, while others suggested it was a "Fire Sale". Perhaps I stand alone in thinking that it was a clever move by both parties.

Nimbus has acquired something of a reputation among its competitors for closing sales where others did not even know that there was a requirement! In part this is down to the distinctive Nimbus sales model. The management team at Nimbus almost all came from major consulting firms and, as such, have great connections at CXO level. Over the years Nimbus very cleverly worked that network and focussed on real business engagement with business leaders, resulting in them being able to open doors, make a pitch and then close the door before others knew anything about it.

Many other vendors talk about selling to the business, but invariably still end up talking to the IT side. Nimbus has always talked about and executed a strategy that focussed purely on senior business leaders.

By extension, this means that TIBCO has acquired a team of senior staff who can now start to take TIBCO and the TIBCO offerings in at the very top of organisations, something that others will struggle to do.

Then we come to the issue of a "Fire Sale", with a reputed price paid in excess of $42m against probable revenues of around $15m. A three-times-revenue price seems pretty high for a burning platform and instead looks like a pretty good deal for the Nimbus shareholders. I can think of a number of vendors who can only dream of exiting at this ratio. Indeed, I understand that TIBCO was considering a number of players in the space before settling on Nimbus.

The Nimbus approach is very different from that of others in the BPA space with which the company is often associated. Nimbus is not a modelling vendor in the true sense, but does fill a gap that other BPA vendors have done a poor job with over the years - that of operationalizing the maps and models. Nimbus has always focussed on the last piece of the puzzle, making required information readily available to those who need it, in ways that they can use and act on. (As a footnote, Nimbus was the first vendor in the BPA/BPM space to deliver a native app solution for iOS devices.)

This focus on the consumer of the information is something that other vendors need to be more active with. It is not simply about making maps and models available, but about providing help, guidance and intelligent information at the point of need.

The fact that TIBCO will be maintaining Nimbus as a separate group means that existing Nimbus customers can continue to enjoy the relationships they have built up, while knowing that the company has the security of strong financial backing behind them. Beyond that, the team at Nimbus have already started to integrate other TIBCO technology into the Control product. Detailed plans have not yet been announced, but it seems as though with products like Spotfire and tibbr available to them that the analytic and social networking capabilities will be far in excess of what others in the BPA sector can offer.

As with any acquisition it will take time to fully play out, but the impression is that this is a clever move for both parties, with significant upside for customers of both companies. I do, however, wonder whether TIBCO might still consider acquiring another vendor in the BPA space, one with a much stronger modelling component. Neither Nimbus nor TIBCO is especially strong there, but adding the Nimbus offering to a fully fledged BPA tool would provide a far more valuable offering to users. Indeed, I would suspect that with an integrated offering there would be significant opportunity to sell Control into the existing modelling user base, replacing what has historically been poor back-end publishing capability.

One area that will be interesting to watch is how Nimbus makes use of the TIBCO process execution engines. These could be used in three ways:

Not at all - leaving Nimbus in the publishing/operational information space.

As an integral part of Control - enabling smarter use of process within the Nimbus application, particularly for areas such as change management.

Or it could also be used as an external application, taking Nimbus more into the BPMS type space and allowing people to create process-based applications from within Control.

Of these the most likely is the second scenario, where Nimbus could add greatest value by acting as the container for process applications, e.g. expense handling, vacation requests or change management. This would allow the company to stay focussed on the consumers of technology and the business managers around them, rather than mixing it with the usual BPMS-type players.

In conclusion, Nimbus customers should feel comfortable that there is greater financial certainty to support their purchase decision, along with faster access to technology that will enable even greater leverage from their investment to date. Meanwhile, TIBCO customers may wish to take a look at how adding Nimbus Control could help them ensure that the right information is available in an easy-to-use format for their business users.

A recent Quocirca blog post pointed out there were good business reasons for disclosing data breaches as well as an increasing number of regulatory ones. For those organisations not convinced by these arguments and still intent on attempting to brush leaks under the carpet, there is new evidence that consumers think they should come clean too.

New research commissioned by LogRhythm, a vendor of SIEM (security information and event management) tools, surveyed 2,000 UK consumers and concludes that they are “losing patience with organisations that endanger their customers’ data”. 80% were “concerned” about trusting organisations to keep their data safe from hackers, up 17% from a similar survey in 2010. 26% assert they would “definitely” not transact with the affected organisation again, with a further 61% saying they would try to avoid future interactions.

Of course, for many, their bark will be louder than their bite; it is often said that a man is more likely to change his wife than his bank. However, what the research does show is that all the recent press coverage of data leaks has not gone unnoticed. There is widespread awareness amongst consumers of the issues, of the responsibilities of the organisations to whom they entrust their data, and of the importance of disclosure.

SIEM tools help in two ways. First, they can monitor network traffic and help spot unusual activity, providing a feed to intrusion prevention systems (IPS) and data loss prevention (DLP) tools to block attempted data thefts. Second, they help clear up afterwards, enabling affected organisations to rapidly gather information about what data has been lost and who has been affected. It is not good enough for an affected organisation to lazily issue a blanket warning to all customers; instead, it should be in a position to inform those (and only those) whose data has definitely been compromised.
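
As a toy illustration of the first kind of help, the fragment below does the sort of simple correlation a SIEM automates at scale: counting failed logins per source address and flagging bursts that might indicate a brute-force attempt. The log path, format and threshold are assumptions.

```python
# Toy illustration of SIEM-style correlation: flag sources generating many
# failed logins. Log path, line format and threshold are assumptions.
import re
from collections import Counter

FAILED_LOGIN = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 20  # failures from one source before an alert is raised

failures = Counter()
with open("/var/log/auth.log") as log:          # example log location
    for line in log:
        match = FAILED_LOGIN.search(line)
        if match:
            failures[match.group(1)] += 1

for ip, count in failures.items():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed logins from {ip}")
```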

LogRhythm claims to be the biggest independent vendor of SIEM tools. This follows a recent round of acquisitions of its rivals by larger vendors. In 2010, HP acquired ArcSight, and this month two more intended acquisitions were announced; IBM targeting Q1 Labs while Nitro Security was approached by McAfee. There is no shortage of other vendors; for example, Symantec has its Security Information Manager and EMC/RSA has tools based around the acquisitions of Network Intelligence and enVision. However, this has not put off new entrants, such as Red Lambda, a high-end data processing vendor attempting to re-position itself in the network security market by treating it as a 'big-data' problem.

Businesses rightly expect consumers to be careful with their confidential information, account details, login credentials and so on. In return, consumers should expect businesses to take good care of the same data and come clean when it is stolen or when they have screwed up and leaked it into the public domain.

Quocirca saw an estimate recently that IT security managers can spend as much as 30% of their time preparing for and delivering audits. This is mundane and uninteresting work and if it can be automated – all the better. However, recent Quocirca research, sponsored by sys-admin tools vendor Osirium, shows that less than 20% of organisations fully automate the gathering of data for audits and less than 10% automate the remediation of audit gaps.

What’s more, over 70% admitted that in some cases system administrators (sys-admins) made informal, uncontrolled changes to sys-admin procedures immediately prior to audits in order to meet the audit requirements, changes which then lapsed following the audit; 8% said this was a regular practice. Obviously, this is extremely bad practice: if auditors uncovered the fact that procedures had been temporarily changed to satisfy them, the audit would surely be failed anyway?

Osirium has published the research and some suggestions for achieving better practices as the first of its Alpha Files, a series of short reports on sys-admin, privileged user management and auditing practices. Quocirca will be publishing a new free report later in 2011 that will present and analyse all the new research in detail.

There has been plenty written, not least by Quocirca, on the danger of data loss and how to prevent it. Less has been said about how to clear up afterwards, when the measures taken to protect a business from such losses have failed or were not present in the first place - in particular, the responsibilities an organisation has when it comes to disclosing that such an incident has occurred.

One of the reasons for this is that the legal situation is a bit vague, so there is a temptation to think that the problem can be brushed under the carpet. Organisations that do this may find themselves in hot water if details emerge at a later date, or at least hotter water than they would have been in had the leak been reported in the first place.

For any UK-based business, the first stop is the Data Protection Act (DPA), enforced by the Information Commissioner's Office (ICO). The specific advice on the ICO website with regard to disclosure is as follows:

“Although there is no legal obligation in the DPA for data controllers to report breaches of security which result in loss, release or corruption of personal data, the information Commissioner believes serious breaches should be brought to the attention of his Office. The nature of the breach or loss can then be considered together with whether the data controller is properly meeting his responsibilities under the DPA”

So that’s alright then, keeping hush-hush is OK? Not really; just because the “data controller” (that is, the person in any given business charged with the security of personal data) is not required to report a leak, it does not mean that the leak has not occurred. If the problem comes to light at a later date, and this is when the ICO finds out, then the Commissioner is likely to take a dimmer view than if the leak had been reported up front. And remember, if personal data is involved, “data subjects” (that is, you and me, in our roles as private citizens) may be the first to find out, and their privacy is enshrined in the Human Rights Act (Article 8).

Furthermore, the pressure to disclose was increased on May 26th 2011, at least for certain organisations. The “Privacy and Electronic Communications (EC Directive) (Amendment) Regulations 2011” (PECR), specifically requires service providers to notify the ICO, and in some cases individuals themselves, of personal data security breaches. PECR was introduced mainly to target the use of cookies that internet service providers can use to gather personal data to personalise web services.

Beyond the DPA and ICO there are other pressures to disclose. For example, the Financial Services Authority (FSA) arguably obliges the firms it regulates to notify data breaches as part of their general reporting duties. Another standard that requires disclosure, and already affects many businesses, is the Payment Card Industry Data Security Standard (PCI-DSS).

PCI-DSS compliance is required for any business that accepts payment cards – even if the quantity of transactions is just one. It is enforced via the major card brands (VISA, MasterCard, AMEX, Discover and JCB) and the obligation to disclose is in their contracts. For example VISA advises the following steps be taken:

Contact law enforcement

Contact bank

Contact VISA fraud control

Preserve logs

Make notes of all these actions

VISA also advises:

“Make sure you have a written policy with an incident response plan and make sure all employees are aware of it”.

VISA's advice is pretty good for handling any data loss: getting control of the situation at an early stage and informing affected parties makes sense for any data leak.

Beyond payment card data, there is plenty of other advice available. Field, Fisher and Waterhouse, a law firm specialising in data protection law, has a 10-point plan for handling the theft of a laptop. One point it makes is to have a media strategy, not just to get the media on side ASAP, but because it may also be the most effective way of informing data subjects. This will depend on the nature of the data loss and whether a criminal investigation is likely to ensue.

The trend towards an obligation to disclose data leaks is clearly happening on a number of fronts. However, even if you think that in a given circumstance you can get away without disclosing a leak, you would almost certainly be wrong to do so. A leak is a leak whether you disclose it or not; it needs pro-active management from the moment it has occurred, and your organisation needs to be prepared for the seemingly inevitable.

Quocirca will be presenting at the UK Infosecurity Virtual Conference on Sept 27th 2011 on the topic of “Responsible Data Breach Disclosure”; for more information go here.

Too many websites are not accessible and one of the reasons for this is that website owners do not know how to begin. The new initiative 'The First Seven Steps to Accessible Websites' is a response to the question posed by many website owners "My website was not designed with accessibility as a consideration, I would like to improve the situation, how should I start?"

It is being delivered as an on-line book, which I edited, and describes seven initial steps that can be implemented relatively easily and will provide real accessibility benefits and help to map out the subsequent steps on the journey.

Although it is primarily intended for newcomers to accessibility the steps should be of interest to people who are on the accessibility journey and may have missed some useful steps. Please have a look and leave comments here. OneVoice is looking for assistance in validating, improving and extending the content of the document.

At the conference an extra step was added: 'Take a basic education course about accessibility'. The course suggested was also announced at the conference and is the 'Digital Accessibility eLearning' course commissioned by the Equality and Human Rights Commission, AbilityNet and the BCS. This is a level 1 accredited qualification (I will write about this further when it is available).

At the same conference Sandi Wassmer, who is a member of the UK Government e-accessibility forum, talked about the "Ten Principles of Inclusive Web Design", that she developed for the forum. These principles provide an excellent guide to the continuation of the journey after the initial steps.

E-access 11 was an excellent conference and much of the day's proceedings are now available on the website. I hope to see many more people at e-access 12 planning their continuing accessibility journey.

There are many occasions when I want to be able to do a quick evaluation of a website or a group of sites. To enable me to do this quickly and consistently I have developed a set of tests that I can complete in a quarter of an hour. The tests will indicate the level of accessibility of a website. They will not show every wrinkle in the website, but they give a good view of the owner's level of intent to make the site accessible, ranging from not aware/do not care, through trying to improve, to ensuring accessibility is integral to the design and content.

I am publishing them for three reasons:

I hope other people will find them useful.

I am interested in feedback suggesting other tests that could be incorporated, bearing in mind the limited time for the test.

I hope website owners will check out their sites to see how well they would score. In some cases, small changes to the website could produce significant improvements.

Since this article was originally published, the 15 Minute Test has been enhanced, updated and published as part of 'The First Seven Steps to Accessible Websites'; the test is the first of the seven steps.

The rapid increase in the availability of on-demand IT infrastructure (infrastructure as a service/IaaS) gives IT departments the flexibility to cope with the ever-changing demands of the businesses they serve. In the future, the majority of larger businesses will be running hybrid IT platforms that rely on a mix of privately owned infrastructure plus that of service providers, while some small businesses will rely exclusively on on-demand IT services.

Even when it comes to the privately owned stuff, the increasing use of virtualisation means it should be easier to make more efficient use of resources through sharing than has been the case in the past. Quocirca has seen server utilisation rise from around 10% to 70% in some cases where systems have been virtualised. There will of course always be some applications that are allocated dedicated physical resources for reasons of performance and/or security.
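To put those utilisation figures into rough perspective, the back-of-the-envelope arithmetic below uses the 10% and 70% averages cited above; the size of the server estate is invented purely for illustration.

# Illustrative consolidation arithmetic only, not a sizing method.
physical_servers = 20            # hypothetical estate, each averaging 10% utilisation
avg_utilisation_before = 0.10
target_utilisation = 0.70        # achievable in some cases after virtualisation

total_demand = physical_servers * avg_utilisation_before   # 2.0 "servers' worth" of work
hosts_needed = total_demand / target_utilisation            # roughly 2.9 virtualised hosts
print(round(hosts_needed, 1))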

Any given IT workload must be run in one of three fundamental computing environments: dedicated physical, private virtualised or shared virtualised (the latter being part of the so-called “public cloud”).

However, the benefits of this flexibility to deploy computing workloads will only be fully realised if the right tools are in place to manage it. In fact, without such tools, costs could start to be driven back up. For example, if the resources of an IaaS provider are used to cope with peak demand and workloads are not de-provisioned as soon as the peak has passed, unnecessary resources will be consumed and paid for.
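As a minimal sketch of that principle, the snippet below assumes a hypothetical IaaS client with list_instances() and terminate() calls; the method names and thresholds are invented, not any real provider's API. The point is simply that anything tagged as burst capacity is released once its utilisation has stayed low for a while.

# Minimal sketch: release burst capacity once the peak has passed.
# 'iaas' is a hypothetical provider client; all names here are assumptions.
IDLE_CPU_THRESHOLD = 10.0    # percent
IDLE_PERIOD_MINUTES = 30

def deprovision_idle_burst_instances(iaas):
    for instance in iaas.list_instances(tag="burst-capacity"):
        # Average CPU over the idle period is assumed to come from the
        # provider's monitoring interface.
        if instance.avg_cpu_percent(minutes=IDLE_PERIOD_MINUTES) < IDLE_CPU_THRESHOLD:
            print("Peak has passed, terminating", instance.id)
            instance.terminate()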

A workload can be defined as a discrete computing task to which four basic resources can be allocated: processing power, storage, disk input/output (i/o) and network bandwidth. There are five workload types:

Desktop workloads provide users with their interface to IT

Application workloads run business applications, web servers etc.

Database workloads handle the storage and retrieval of data

Appliance workloads deal with certain network and security requirements and are delivered either as self-contained items of hardware or as virtual machines

Commodity workloads are utility tasks provided by third parties, usually called up as web services

A series of linked workloads interact to drive business processes. Each workload type requires a different mix of resources and this can change with varying demand. For example, a retail web site may see peak demand in the run-up to festivities and require many times the compute power and network bandwidth it needs the rest of the time; a database that relies heavily on fast i/o may need to be run in a dedicated physical environment; virtualised desktop workloads may need plenty of storage allocated to ensure users can always save their work (thin provisioning allows such storage to be allocated, but not dedicated).
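To make the idea concrete, the sketch below models a workload as a bundle of the four basic resources, typed by one of the five categories; the workload name and all of the figures are invented placeholders rather than measured requirements.

# Sketch: a workload as the four basic resources, typed by category.
# All names and figures are invented placeholders.
from dataclasses import dataclass
from enum import Enum

class WorkloadType(Enum):
    DESKTOP = "desktop"
    APPLICATION = "application"
    DATABASE = "database"
    APPLIANCE = "appliance"
    COMMODITY = "commodity"

@dataclass
class Workload:
    name: str
    type: WorkloadType
    cpu_cores: float        # processing power
    storage_gb: float       # storage
    disk_iops: int          # disk input/output
    bandwidth_mbps: float   # network bandwidth

# A retail web front end might need several times its normal resources at peak.
normal = Workload("web-shop", WorkloadType.APPLICATION,
                  cpu_cores=4, storage_gb=50, disk_iops=500, bandwidth_mbps=100)
peak = Workload("web-shop", WorkloadType.APPLICATION,
                cpu_cores=16, storage_gb=50, disk_iops=500, bandwidth_mbps=600)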

Ensuring the right resources are allocated requires an understanding of likely future requirements when the workload is provisioned; this is also the time to ensure appropriate security is in place and that the software used by the workload is fully licensed. Once workloads are deployed, it is necessary to measure their activity and monitor the environment they are running in, sometimes allocating more resources or perhaps moving the workload from one environment to another, ensuring, of course, that security is maintained and that the workload always remains compliant (for example, making sure personal data is only processed and stored in permitted locations).
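As an illustration of the kind of policy gate that might run before a workload is moved between environments, the function below refuses a move if the target location is not on the workload's list of permitted data locations. The workload, environment and location names are made up for the example; a real implementation would draw on whichever management tool holds the compliance policy.

# Sketch: a pre-move compliance check on data location.
# Workload, environment and location names are illustrative only.
PERMITTED_LOCATIONS = {"web-shop": {"UK", "EU"}}   # where personal data may reside

def can_move(workload_name: str, target_environment: str, target_location: str) -> bool:
    allowed = PERMITTED_LOCATIONS.get(workload_name, set())
    if target_location not in allowed:
        print(f"Refusing move of {workload_name} to {target_environment}: "
              f"{target_location} is not a permitted data location")
        return False
    return True

can_move("web-shop", "shared virtualised (public cloud)", "US")   # -> False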

The intelligent management of workloads is fundamental to achieving best practice in the use of the hybrid public/private infrastructure that is here to stay. Managing workloads in such an environment requires either generic tools from vendors such as Novell, CA or BMC, or virtualisation-platform-specific tools from VMware or Microsoft. Such products of course have a cost, but this is offset by more efficient use of resources, the avoidance of security and compliance problems, and the flexibility for IT departments to better serve the ongoing IT requirements of the businesses they support.