KuppingerCole Blog

KuppingerCole just concluded our first Consumer Identity Summit in Paris. In fact, this was the first consumer-focused digital identity event of its kind. The event was very well attended and featured excellent expert speakers from across the globe. The popularity of the event and the enthusiasm for dialogue among attendees demonstrate the need to treat Consumer Identity differently from traditional Enterprise Identity. The technology has been evolving significantly to meet rapidly changing business requirements and to encompass newly developed technologies.

Businesses and public sector organizations are finding that they need to “Know Your Customer” (KYC) better for a number of reasons. Consumer Identity and Access Management (CIAM) services can help meet these objectives. For example, retail and media outlets can provide better experiences to registered users. These companies can offer incentives, special sales, and other features to increase loyalty to their brands. Banks and financial institutions can better comply with Anti-Money Laundering (AML) regulations by establishing digital relationships via CIAM, gaining competitive advantages in the process.

Consumer identity is becoming more than just a competitive advantage though. Katryna Dow, CEO of Meeco, said “Consumer Identity is the new channel”. What this means is that digital service providers are in many cases beginning to bypass traditional distribution channels to directly engage and sell to consumers. This will have increasingly profound effects on business models. Consider, for example, the changes in entertainment media and its prior distribution channels. Where consumers once bought movies and programs on VHS or DVD at stores such as Blockbuster and Hollywood Video, consumers are now streaming content straight from Amazon, Hulu, Netflix, Sony, and more. The same can be said for online retailers: those utilizing consumer identity solutions have ways to alert interested buyers, solicit feedback, and create revenue streams that others can’t.

Allan Foster, VP of Community at ForgeRock and President of Kantara Initiative, described the difference between enterprise IAM and CIAM: “with enterprise IAM, IT provides the identities; in CIAM, IT provides the means for consumers to build their own identities.” This saves administrative effort, and puts control over which attributes to share back into the consumers’ hands, making them a participant in the process.

Ian Glazer, Senior Director at Salesforce Identity, highlighted the need for improved user experiences, showing how effective consumer identity management promotes a much better user journey. He stated that businesses must reduce friction for consumers by using social logins, progressive profiling, and progressive proofing. Logins should not require “Yet Another Username & Password”, or YAUP. Consumer identity should work across multiple channels, including tying users to their IoT devices.

Several speakers touched on the importance of preparing for the EU General Data Protection Regulation (GDPR), which will take effect on 25 May 2018. GDPR contains language governing the treatment and handling of information gathered and used by CIAM systems. It defines personal data broadly: name, email address, photos, posts on social networks, medical information, and financial information are all examples. Some of the most important provisions include explicit consent for the use of personal data, restrictions on processing outside the EU (data on EU residents may only be transferred abroad under specific safeguards), data portability (EU citizens must be able to export their data from systems), and the right to be forgotten (data deletion). CIAM solutions must be able to meet all these requirements to be viable within the EU under the post-GDPR regulatory regime. For more information on GDPR, follow KuppingerCole’s updates, and to see the full text, go to http://ec.europa.eu/justice/data-protection/reform/files/regulation_oj_en.pdf.
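
To make those provisions concrete, here is a minimal sketch of how a CIAM user store might model the duties named above: auditable explicit consent, data portability (export), and the right to be forgotten (erasure). All class and method names are invented for illustration; real CIAM products expose far richer APIs.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsumerRecord:
    user_id: str
    attributes: dict = field(default_factory=dict)
    consents: dict = field(default_factory=dict)  # purpose -> timestamp

class CiamStore:
    def __init__(self):
        self._records = {}

    def register(self, user_id, **attributes):
        self._records[user_id] = ConsumerRecord(user_id, attributes)

    def record_consent(self, user_id, purpose):
        # GDPR requires consent to be explicit and demonstrable, so we
        # keep a timestamp per purpose rather than a single yes/no flag.
        ts = datetime.now(timezone.utc).isoformat()
        self._records[user_id].consents[purpose] = ts

    def may_process(self, user_id, purpose):
        return purpose in self._records[user_id].consents

    def export(self, user_id):
        # Data portability: hand the consumer everything we hold on them.
        rec = self._records[user_id]
        return {"user_id": rec.user_id,
                "attributes": dict(rec.attributes),
                "consents": dict(rec.consents)}

    def erase(self, user_id):
        # Right to be forgotten: delete the record entirely.
        del self._records[user_id]
```

The point of the sketch is that each regulatory requirement maps to a distinct, testable operation; a solution that cannot export or erase a record on demand cannot comply.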

To meet the privacy objectives of GDPR, Dr. Maciej Machulak discussed Kantara Initiative’s User Managed Access (UMA) specification. UMA provides a framework for web applications to obtain users’ consent for the use of their data. KuppingerCole believes that UMA will be a major enabler for GDPR compliance. For more information on UMA, see https://kantarainitiative.org/confluence/display/uma/Home.

We also presented the results of our CIAM Leadership Compass at the Summit. For this paper, ForgeRock, Gigya, IBM, iWelcome, Janrain, LoginRadius, Microsoft, Okta, PingIdentity, Salesforce, SAP, and SecureAuth participated. Each company has products that serve the CIAM needs of their own customers, with different strengths, challenges, and target markets. For the full report, see https://www.kuppingercole.com/report/lc71171.

Lastly, our own Martin Kuppinger weighed in on the ownership aspect of CIAM deployments. There are a variety of ways that CIAM can be implemented and maintained. In some companies, marketing takes the lead. In others, IT is completely responsible. The hybrid ownership approach works best: IT owns the deployment, but operates it as a service for the business as a whole. This promotes tight integration with enterprise IAM, without being encumbered by enterprise IAM limitations. It also allows businesses to efficiently promote regulatory compliance and security, while offering consistent and feature-rich solutions for sales and marketing.

KuppingerCole will continue to track with CIAM solution developers and customers to provide the most up-to-date information on CIAM, KYC, and the regulatory drivers in this space.

GlobalPlatform recently held their annual conference in Santa Clara, California. GlobalPlatform is an international standards organization that defines specifications for the Trusted Execution Environment (TEE), the secure virtual operating system within the OSes of mobile devices. It also specifies requirements for Secure Elements (SE), the protected storage components within mobile devices. When the two are used together, Trusted Apps run inside the TEE, protected from rogue apps and malware, and control access to data stored in the SE. The use of a TEE also protects confidentiality and integrity between the user input and the display. The specifications are becoming widely used in the protection of premium content (digital media), financial apps, telecommunications, automotive components, healthcare devices, and transit systems.

TEE is being used to secure processing and messaging in many IoT scenarios already today, such as parking meters, food monitoring, and "smart cities" street light monitoring. On the consumer side, TEE is implemented in watches, home automation, and even cars. Remote monitoring of manufacturing, logistics, agriculture, and environments is increasingly being performed by IoT sensors. The number of Internet connected devices is rapidly rising.

Given the recent spate of record-breaking DDoS attacks launched from compromised IoT devices, expect to see greater emphasis on security and privacy from IoT manufacturers, driven by growing consumer demand. Most consumers do not want their webcams and refrigerators involved in illegal activities, such as knocking websites off the air or sabotaging food production.

Beyond providing specifications for execution and storage, GlobalPlatform can help with IoT security by adopting device identity standards. The IoT devices that have unwittingly participated in attacks have done so because bad actors took control using default usernames and passwords. In most cases, users aren't directly involved, so having a username/password identity scheme does not even make sense for IoT sensors.

The lifecycle for IoT device identities is quite different from that of human users. Some devices are designed to last a few hours, such as passive WiFi concrete hardening sensors. Some agricultural sensors are designed to last a growing season. However, in other cases, Internet-enabled durable goods and medical devices are expected to last from several years to perhaps decades. Thus, the identity lifecycle and the difficulty associated with modifying attributes pose new security risks. Time will tell, but ultimately a PKI-lite certificate-based device identity paradigm may emerge, if revocation issues can be sufficiently addressed. IoT device vendors and third party service providers will likely find that device identity and access management could generate long-term subscription and fee revenues.
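
To illustrate the lifecycle point (not a real PKI), the sketch below gives each device an identity with a validity window matched to its expected service life, and makes validation check both expiry and a revocation list. The class names and the revocation mechanism are invented for this example; a production scheme would use real certificates and a scalable revocation method.

```python
from datetime import datetime, timedelta, timezone

class DeviceIdentity:
    """A device credential with a lifetime sized to the device."""
    def __init__(self, device_id, lifetime):
        self.device_id = device_id
        self.not_before = datetime.now(timezone.utc)
        self.not_after = self.not_before + lifetime

class DeviceRegistry:
    """Validates identities against expiry and a revocation list."""
    def __init__(self):
        self._revoked = set()

    def revoke(self, device_id):
        self._revoked.add(device_id)

    def is_valid(self, identity, now=None):
        now = now or datetime.now(timezone.utc)
        if identity.device_id in self._revoked:
            return False
        return identity.not_before <= now <= identity.not_after

# A short-lived curing sensor versus a long-lived durable appliance:
sensor = DeviceIdentity("sensor-1", timedelta(hours=72))
fridge = DeviceIdentity("fridge-9", timedelta(days=365 * 10))
```

The design question the sketch exposes is exactly the one raised above: a 72-hour credential barely needs revocation at all, while a ten-year credential makes revocation infrastructure unavoidable.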

With regard to user authentication, GlobalPlatform and the FIDO Alliance will be cooperating with cross-certification and joint testing programs. The FIDO Alliance is an international Standards Development Organization (SDO) focused on multi-factor and mobile authentication technologies. GlobalPlatform currently provides test tools and certified test labs to perform independent security testing of TEEs, which yields qualified products. Many vendors are already building and certifying TEEs. FIDO authenticators and clients are being deployed in the TEE. Apps that run in the open, or “Rich OS”, can request authentication from the FIDO client/authenticator, running securely in the TEE.
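
The split described above can be sketched as follows. This is a deliberate simplification: the app in the Rich OS only ever sees a challenge and an assertion, while the key material stays inside the authenticator (a class standing in for the TEE here). Real FIDO uses per-relying-party asymmetric key pairs and attestation; an HMAC is used below only to keep the sketch self-contained.

```python
import hashlib
import hmac
import os

class TeeAuthenticator:
    """Stand-in for a FIDO authenticator running inside a TEE."""
    def __init__(self):
        self._key = os.urandom(32)  # never exported from the "TEE"

    def sign(self, challenge: bytes) -> bytes:
        # The Rich OS app receives only this assertion, not the key.
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

    def verify(self, challenge: bytes, assertion: bytes) -> bool:
        return hmac.compare_digest(self.sign(challenge), assertion)

def rich_os_login(authenticator: TeeAuthenticator) -> bool:
    # The relying party issues a fresh random challenge per attempt,
    # so a captured assertion cannot be replayed later.
    challenge = os.urandom(16)
    assertion = authenticator.sign(challenge)
    return authenticator.verify(challenge, assertion)
```

Even in this toy form, the security benefit is visible: malware in the Rich OS can observe challenges and assertions, but without the key held in the TEE it cannot forge an assertion for a new challenge.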

FIDO is also adding security certification. The first priority is protecting the authenticator’s keys. Security certification for FIDO will re-use other organizations’ standards, such as FIPS 140-2 and GlobalPlatform TEE. FIDO security certification will look for and confirm the use of a TEE by FIDO components. FIDO certification will then include functional certification by FIDO (as it is today), and GlobalPlatform TEE certification as a component of security.

GlobalPlatform and FIDO are planning to synchronize certification processes. Independent certified test labs will provide the testing services for both organizations. OEMs will be able to get both needed certifications much more quickly. Such an approach is a "win-win" for the mobile IAM community, as there are many common members between FIDO and GlobalPlatform, and this should reduce the cost and time needed to obtain security certifications from both organizations.

As we know, IoT devices are proliferating but security is severely lacking. The number of FIDO certified products is also beginning to grow. GlobalPlatform’s TEE will add needed runtime and I/O security to both IoT and FIDO applications. KuppingerCole recommends that both IoT device manufacturers and FIDO implementers utilize TEE to secure their products.

At the AWS Enterprise Security Summit in London on November 8th, Stephen Schmidt, CISO at AWS, gave a keynote entitled “Democratized Security”. What is Democratized Security, and does it really exist?

Well, to quote Humpty Dumpty from Lewis Carroll’s Through the Looking-Glass: “When I use a word, it means just what I choose it to mean—neither more nor less.” So, what Mr. Schmidt meant by this phrase may or may not be what other people would understand it to mean. This is my interpretation.

The word democracy originates in ancient Greece, where it meant the rule of the common people, the opposite of rule by an elite. More recently, the “democratization of technology” has come to mean the process whereby sophisticated technology becomes accessible to more and more people. In the 1990s, Andrew Feenberg described a theory for democratizing technological design. He argued for what he calls “democratic rationalization”, where participants intervene in the technological design process to shape it toward their own ends.

How does this relate to cloud services? Cloud services are easily accessible to a wide range of customers, from individual consumers to large organizations. These services survive and prosper by providing the functionality that their customers value at a price that is driven down by their scale. Intense competition means that they need to be very responsive to their customers’ demands. Cloud computing has made extremely powerful IT services available at an incredibly low cost in comparison with the traditional model, where the user had to invest in the infrastructure, the software and the knowledge before they could even start.

What about security? There have been many reports of cyber-attacks, data breaches and legal government data intercepts impacting some consumer cloud services (not AWS). The fact that many of these services still survive seems to indicate that individual consumers are not overly concerned. Organizations, however, have a different perspective – they do care about security and compliance. They are subject to a wide range of laws and regulations that define how and where data can be processed, with significant penalties for failure. Providers of cloud services that are aimed at organizations have a very strong incentive to provide the security and compliance that this market demands.

Has the security elite been eliminated? The global nature of the internet and cyber-crime has made it extremely difficult for the normal guardians – the government and the law – to provide protection. Even worse, the attempts by governments to use data interception to meet the challenges of global crime and terrorism have made them suspects. The complexity of the technical challenges around cyber-threats make it impractical for all but the largest organizations to build and operate their own cyber-defences. However, the cloud service provider has the necessary scale to afford this. So, the cloud service providers can be thought of as representing a new security elite – albeit one that is subject to the market demands for the security of their services.

With democracy comes responsibility. In relation to security this means that the cloud customer must take care of the aspects under their control. Many, but not all, of the previously mentioned consumer data breaches involved factors under the customers’ control, like weak passwords. For organizations using cloud services the customer must understand the sensitivity of their data and ensure that it is appropriately processed and protected. This means taking a good governance approach to assure that the cloud services used meet these requirements.

Cloud services now provide a wide range of individuals and organizations with access to IT technology and services that were previously beyond their reach. While the main driving force behind cloud services has been their functionality; security and compliance are now top of the agenda for organizational customers. The cloud can be said to be democratizing security because organizations will only choose those services that meet their requirements in this area. In this world, the cloud service providers have become the security elite through their scale, knowledge and control. The cloud customer can choose which provider to use based on their trust in this provider to deliver what they need.

Since the notion of a corporate security perimeter has all but disappeared in recent years thanks to the growing adoption of cloud and mobile services, information security has experienced a profound paradigm shift from traditional perimeter protection tools towards monitoring and detecting malicious activities within corporate networks. Increasingly sophisticated attack methods used by cyber criminals and, even more so, the growing role of malicious insiders in recent large-scale security breaches clearly indicate that traditional approaches to information security can no longer keep up.

As the security industry’s response to these challenges, a new generation of security analytics solutions has emerged in recent years, able to collect, store and analyze huge amounts of security data across the whole enterprise in real time. These Real-Time Security Intelligence solutions combine Big Data and advanced analytics to correlate security events across multiple data sources, providing early detection of suspicious activities, rich forensic analysis tools, and highly automated remediation workflows.
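
At its simplest, the anomaly-detection core of such a solution reduces to baselining an event stream and flagging strong deviations. The toy sketch below does this with a z-score over hourly event counts; real RTSI products correlate many sources in real time with far more sophisticated models, so treat this only as an illustration of the principle.

```python
from statistics import mean, stdev

def find_anomalies(hourly_counts, threshold=2.0):
    """Return indices of hours whose event count lies more than
    `threshold` standard deviations above the mean of the series.

    E.g. a sudden spike in failed logins for one hour stands out
    against an otherwise flat baseline.
    """
    if len(hourly_counts) < 2:
        return []  # not enough data to establish a baseline
    mu, sigma = mean(hourly_counts), stdev(hourly_counts)
    if sigma == 0:
        return []  # perfectly flat series: nothing anomalous
    return [i for i, count in enumerate(hourly_counts)
            if (count - mu) / sigma > threshold]
```

The weakness of this naive approach, and a key reason vendors moved beyond plain thresholds, is that a single large spike inflates the standard deviation it is measured against, which is why production systems baseline on historical windows rather than on the window being scored.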

Industry analysts, ourselves included, have been covering this fundamental focus shift in information security for several years already. However, getting that message across to the general public is not an easy task. To find out how many organizations around the world truly understand the critical role of security analytics technology in their corporate security strategies, earlier this year KuppingerCole teamed up with BARC – a leading enterprise software industry analyst and consulting firm specializing in areas including Data Management and Business Intelligence – to conduct a global survey on Big Data and Information Security. Our survey was focused on security-related aspects of Big Data analytics in cybersecurity and fraud detection and is based on contributions from over 330 participants in 50 countries, representing enterprises of all sizes across various industries such as IT, Services, Manufacturing, Finance, Retail and the Public Sector.

The study delivers insights into the level of awareness and current approaches in information security and fraud detection in organizations around the world. It measures importance, status quo and future plans of Big Data security analytics initiatives, presents an overview of various opportunities, benefits and challenges relating to those initiatives, as well as outlines the range of technologies currently available to address those challenges.

Here are a few highlights of the study results:

Information Security and Big Data are recognized as the two most important IT trends

Over half of the survey respondents consider Big Data technology one of the cornerstones of the Digital Transformation and regard protecting their digital assets from security risks and compliance violations as extremely important. The public awareness of the potential of security analytics solutions is very impressive as well: almost 90% of the participants believe that these solutions will play a critical role in their corporate security infrastructures.

Current implementations are still lagging behind

Unfortunately, only a quarter of the respondents have already implemented big data security analytics measures. Even fewer, just 13%, consider themselves best-in-class in this field, believing they have a better understanding of the technology than their competitors.

Benefits from big data security analytics are high

The overwhelming majority of the best-in-class participants believe that security analytics can bring substantial profits for their companies. In fact, over 70% of all respondents, even those who do not yet have a budget or a strategy for security analytics, already consider the potential benefits of implementing such a solution to be high or at least moderate.

Best-in-class companies use a wide range of technologies

The companies with a deep understanding of current information security trends and technologies clearly realize that only multi-layered and well-integrated security architectures are capable of resisting modern sophisticated cyber-attacks. They are deploying multiple security tools not just for threat protection, but for identity and access governance, strong authentication, SIEM and user behavior analytics as well. Unfortunately, many of the “laggards” are not even aware that some of these technologies exist.

Automated security controls are a key differentiator

Identifying a security incident is just the first step of a complex remediation process, which is still largely manual and requires a skilled security expert to carry it out properly using a large number of security tools. New generation security analytics solutions therefore place a strong emphasis on automation, which helps to reduce the skill gap and ideally let even a non-technical person initiate an automated incident response process. 98% of the best-in-class respondents are already aware of these developments and consider automation a key aspect of security solutions.
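
The automation idea described above can be sketched as a simple mapping from detected incident types to pre-approved response playbooks, so that a first responder can trigger remediation without deep tooling expertise. The incident names and actions below are invented for illustration; real products drive actual security tools rather than returning strings.

```python
# Pre-approved playbooks: each incident type maps to an ordered
# list of remediation steps vetted in advance by security experts.
PLAYBOOKS = {
    "compromised_account": ["disable_account", "revoke_sessions",
                            "notify_owner"],
    "malware_on_host":     ["isolate_host", "snapshot_disk",
                            "open_ticket"],
}

def respond(incident_type, executed=None):
    """Run the playbook for an incident type. Unknown incident
    types are escalated to a human analyst rather than acted on
    blindly - automation should narrow the skill gap, not remove
    judgment where none has been encoded."""
    executed = executed if executed is not None else []
    steps = PLAYBOOKS.get(incident_type)
    if steps is None:
        executed.append("escalate_to_analyst")
        return executed
    executed.extend(steps)
    return executed
```

The essential design choice is the fallback: automation only covers incidents whose handling has been codified in advance, and everything else still lands with a skilled analyst.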

You’ll find a short summary of our findings in the handy infographic above. The complete study can be downloaded from our website in English or German. Thanks to the generosity of MicroStrategy, Inc., we are able to make it available free of charge.

If you’ve ever been involved in discussions between IT security people and OT (Operational Technology, everything that runs in manufacturing environments) people – the latter not only security specialists – you have probably observed that such discussions tend not to be fruitful, because they start with a fundamental misunderstanding between the two parties.

IT security people think about security first, which is essentially about protecting against cyber-attacks and internal attackers, and about the “CIA” triad: confidentiality, integrity, and availability. OT people do not think about security first, even if they are OT security people. They think first about safety, which is about the physical safety of humans and machines, and about reliability and availability.

Understanding this dichotomy is essential, because there are different requirements, but also a different history in both areas. OT has always focused on safety, reliability, and availability of production environments. Physical harm to humans, but also damage to machines, due to software issues (such as a non-working patch) is unacceptable. Mistakes in production are unacceptable, because they can lead to massive liability issues and cost. And availability is key for manufacturing. A production line not working can cause very high cost in a very short period of time. In fact, that is where high availability is really critical, far more than for the vast majority of IT systems, even the ones that are considered critical.

Unfortunately, the world is changing rapidly. Buzzwords such as “Industry 4.0” or “Smart Manufacturing” stand for that change – the change from an isolated to a massively connected world of manufacturing. The quintessence of these changes is that manufacturing environments become connected; and they increasingly become connected bi-directionally, unless regulations prohibit this. The golden rule to keep in mind here is simple: “Once something is connected, it is under attack.” Computer search engines that scan for everything including IoT devices (and including Industrial IoT or IIoT devices), automated attacks, advanced attacks against manufacturing environments: The risk for these connected environments is massive.

Thus, it is time to overcome the dichotomy between security and safety. We need to figure out new ways of both connecting and protecting manufacturing environments against attacks, while keeping them safe, reliable, and available. The answer to this challenge can’t be leaving everything as is. This will not work. Outdated operating systems, a lack of regular patches, a lack of fine-grained security models in OT equipment – all this will not work anymore. On the other hand, it will take years, probably even decades, to modernize all these environments.

Thus, we need to find a mix of newer, more modern approaches that combine security by design with the specific requirements of OT environments, while protecting all the legacy equipment: with unidirectional firewalls, with privilege management technologies to protect shared administrative accounts, and with advanced analytical tools to identify potential attacks.

However, we will only succeed when both groups, the IT and the OT people, end their culture of mutual misunderstanding and start working on joint initiatives – and that must start with defining a common vocabulary, and with understanding that the requirements of both groups are not only valid but mandatory. Let’s start working together.

The last time we devoted an issue of our monthly newsletter to the Internet of Things was almost a year ago. Looking back now, we can already spot a number of significant changes that happened in this field during 2016. Perhaps the most profound one is that the industry has finally gone past the “peak of inflated expectations” and started thinking less about making everything smart and connected and more about such down-to-earth matters as return on investment, industry standards and security concerns.

An obvious consequence of this is the growing divide between the “consumer” and “industrial” segments of the IoT. Consumers are becoming increasingly disillusioned about the very concept of “smart home”, because the technology that has promised to make their lives easier simply does not live up to the expectations. Remember the guy who spent 11 hours fixing his Wi-Fi kettle? User experiences like that, combined with inconvenient mobile apps and a complete lack of security or privacy in those smart devices make more and more people want to go back to the good old “analog” teapots and light bulbs.

The industrial IoT segment, however, continues to grow steadily. With all the new companies rushing to the market, it’s quickly becoming crowded, which inevitably leads to mergers and acquisitions, the forming of partnerships and growing ecosystems – in other words, the IIoT market is finally showing signs of maturity. By the way, don’t let the term “industrial IoT” confuse you: IIoT is not limited to industrial applications; it is going to expand into various market sectors. In fact, we cannot even define a clear border between the “consumer” and “industrial” IoT just by looking at their applications: although your car is definitely a consumer device, many aspects of the technology that make it connected are undoubtedly industrial.

So nowadays, the divide between the consumer and industrial IoT is not between market segments, and definitely not in hardware or protocols, but rather in the way those systems handle the information they collect. IoT is no longer just about connecting things over the Internet, but about collecting, storing, analyzing and (last but not least) securing the data those things produce. Because the nature of the information collected by smart consumer devices and industrial sensors is completely different, they require different technologies to manage them, to protect them from risks and to ensure their compliance.

Consumer IoT products like thermostats or fitness trackers tend to collect relatively small amounts of data, but this information is very personal and sensitive by nature. So, as soon as we sort out the basic security requirements and prevent hackers from building botnets from webcams, the biggest priority is compliance with data protection regulations. Industrial devices like sensors or controllers, on the other hand, usually produce massive streams of data, which must be collected, stored, processed and analyzed in real-time to provide better visibility into a manufacturing process, to make your car self-steering or to save a patient from hypoglycemic shock. These use cases, of course, demand completely different technologies, like cloud computing and Big Data analytics to efficiently handle such huge amounts of information quickly and reliably. And, of course, they face a completely different set of security risks.

As we once discussed in a webinar on industrial control system security, Operational Technology security experts have traditionally had completely different priorities with regard to cyber-security versus safety and process continuity, relying more on physical network isolation and proprietary protocols to protect their control and data acquisition systems. With IIoT, however, the situation changes completely – new smart industrial sensors utilize the same protocols or even the same hardware as consumer products. They also communicate over the public Internet, wide open to potential hacking attacks. And although leaking sensor data probably does not constitute a serious security problem, manipulating the data or even the sensors themselves definitely does. By disrupting manufacturing process control, a hacking attack can lead not only to a loss of very real products, but also to equipment damage and even human casualties.

This is why, before embracing the new IIoT technologies for all the great business benefits they bring, OT specialists have to radically rethink their approaches towards cyber-security. The problem is further complicated by the fact that most industrial sensors do not have enough computing power to have any security functionality built into them – so existing OT security solutions developed for Windows-based SCADA environments won’t help much.

A popular approach nowadays is to use special IoT gateways to manage large numbers of devices centrally and to perform initial processing and protocol conversion before sending the collected data to the cloud. These gateways are the most obvious points to integrate security functions as well, providing services like identity and authentication, data integrity and threat protection. Many vendors are already taking the development of such secure gateways even further by offering complete platforms integrating device management and security with the possibility to run authorized third-party software and to integrate legacy devices into the IIoT.
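
A stripped-down sketch of that gateway pattern: the gateway batches raw sensor readings, normalizes them into one format, and attaches an integrity tag before forwarding to the cloud, so the cloud side can detect tampering in transit. The message layout and the keyed-hash scheme are invented for this example; real gateways use standard protocols such as TLS and proper device authentication.

```python
import hashlib
import json

def gateway_batch(readings, shared_key: bytes):
    """Normalize (sensor_id, value) pairs and tag them for integrity."""
    normalized = [{"sensor": sensor_id, "value": float(value)}
                  for sensor_id, value in readings]
    payload = json.dumps(normalized, sort_keys=True).encode()
    # Keyed digest so the cloud side can detect modified payloads.
    digest = hashlib.sha256(shared_key + payload).hexdigest()
    return {"payload": payload.decode(), "sha256": digest}

def cloud_verify(message, shared_key: bytes) -> bool:
    """Recompute the digest on the receiving side and compare."""
    expected = hashlib.sha256(
        shared_key + message["payload"].encode()).hexdigest()
    return expected == message["sha256"]
```

The sketch also shows why the gateway is the natural place for security functions: the constrained sensors only ever produce raw pairs, while batching, normalization and integrity protection all happen on the one device with enough computing power to do them.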

However, traditional approaches like air gapping industrial networks by means of unidirectional gateways, deployment of endpoint protection solutions and, of course, real-time security analytics all have their place in a well-designed layered security infrastructure. After all, if done right, security is not a liability, but a valuable business opportunity.

The proverbial Computing Troika that KuppingerCole has been writing about for years does not show any signs of slowing down. The technological trio of Cloud, Mobile and Social computing, as well as their younger cousin, the Internet of Things, have profoundly changed the way our society works. Modern enterprises were quick to adopt these technologies, which create great new business models, open up numerous communication paths to their partners and customers, and, last but not least, provide substantial cost savings. We are moving full speed ahead towards the Digital Era, and the future is full of promise. Or is it?

Unfortunately, the Digital Transformation does not only enable a whole range of business prospects, it also exposes the company’s most valuable assets to new security risks. Since those digital assets are nowadays often located somewhere in the cloud, with an increasing number of people and devices accessing them anywhere at any time, the traditional notion of security perimeter ceases to exist, and traditional security tools cannot keep up with the new sophisticated cyberattack methods.

In recent years, the industry has come up with a new generation of security solutions, which KuppingerCole has dubbed “Real-Time Security Intelligence”. Thanks to a technological breakthrough that finally commoditized Big Data analytics technologies previously affordable only to large corporations, it became possible to collect, store, and analyze huge amounts of security data across multiple sources in real time. Various correlation algorithms have been implemented to find patterns in the data, as well as to detect anomalies, which in most cases indicate some kind of malicious activity.

Such security analytics solutions have been hailed (quite justifiably) by the media as the ultimate solution to most modern cybersecurity problems. Some even go as far as referring to these technologies as “machine learning” or even “artificial intelligence”. It should be noted, however, that detecting patterns and anomalies in data sets has very little to do with true intelligence – in fact, if the “IQ level” of a traditional signature-based antivirus can be compared to that of an insect, then the correlation engine of a modern security analytics solution is about as “smart” as a frog catching flies.

Unfortunately, strong artificial intelligence, comparable in skill and flexibility to a human, is still purely a subject of theoretical academic research. Its practical applications, however, are no longer a science fiction topic. On the contrary, these applied cognitive technologies have been actively developed for quite some time already, and the exponential growth of cloud computing has been a major boost for their further development in recent years. Technologies such as computer vision, speech recognition, natural language processing and machine learning have found practical use in many industries, and cybersecurity is the most recent field where they promise to achieve a major breakthrough.

You see, the biggest problem information security is now facing has nothing to do with computers. In fact, the vast majority (over 80%) of security-related information in the world remains completely inaccessible to computers: it exists only in an unstructured form spread across tens of thousands of publications, conference presentations, forensic reports and other sources – spoken, written or visual.

Only a human can read and interpret those data sources, but we do not have nearly enough humans trained as security analysts to cope with the amount of new security information produced daily.

This is where Cognitive Security, a new practical application of existing cognitive technologies, comes into play. A cognitive security solution would be able to utilize natural language processing and machine learning methods to analyze both structured and unstructured security information the way humans do. It would be able to read texts (or even see pictures and listen to speech) and not just recognize patterns within them, but interpret and organize the information, explain its meaning, postulate hypotheses and provide reasoning based on evidence.
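A tiny, deliberately simplified sketch of the first step – turning unstructured report text into structured data – can be written with nothing more than regular expressions. A real cognitive system would use trained NLP models; this only illustrates the transformation, and the sample report text is invented:

```python
import re

# Extract common indicators of compromise from free-text reports.
CVE_RE = re.compile(r"CVE-\d{4}-\d{4,7}")
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def extract_indicators(report_text):
    """Return structured indicators found in an unstructured report."""
    return {
        "cves": sorted(set(CVE_RE.findall(report_text))),
        "ips": sorted(set(IP_RE.findall(report_text))),
    }

report = ("The campaign exploits CVE-2016-4117 and beacons to "
          "203.0.113.7; see also CVE-2016-4117 in earlier advisories.")
print(extract_indicators(report))
# {'cves': ['CVE-2016-4117'], 'ips': ['203.0.113.7']}
```

The gap between this pattern matching and genuine interpretation – understanding what the report *means* – is exactly what cognitive technologies aim to close.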

This may feel like science fiction to some, but the first practical cognitive security solutions are already appearing on the market. A major player and one of the pioneers in this field is undoubtedly IBM with their Watson platform. Originally created back in 2005 to compete with human players in the game of Jeopardy, over the years Watson has expanded significantly and found many practical applications in business analytics, government, legal and even healthcare services.

In May 2016, IBM announced Watson for Cyber Security, a completely new field for its natural language processing and machine learning platform. However, IBM is definitely not a newcomer to cyber security. In fact, its own X-Force research library is being used as the primary source of security information fed into the specialized instance of the platform running in the cloud. Although the learning process is still in progress, the ultimate goal is to process those 80% of security intelligence data and make them available in structured form.

Of course, Watson for Cyber Security will never replace a human security analyst, but that is not its goal. First, making this “dark security data” accessible for automated processing by current security analytics solutions can greatly improve their efficiency as well as provide additional external threat intelligence. Second, cognitive security would provide analysts with powerful decision support tools, simplifying and speeding up their work and thus reducing the skills gap haunting the security industry today. In the future, the same cognitive technologies may also be applied to a company’s own digital assets to provide better analytics and information protection. Potentially, they may even make developing malware capable of evading detection too costly, thus turning the tide of the ongoing battle against cybercrime.

There are good reasons for the move towards “Cognitive Security”. The skills gap in Information Security is amongst the most compelling ones. We simply don’t have enough skilled people. If we can make computers step in here, we might close that gap.

On the other hand, a lot of what we see being labeled “Cognitive Security” is still far away from truly advanced, “cognitive” technologies. Marketing tends to exaggerate. Still, there is a growing number of examples of advanced approaches, such as IBM Watson – the latter focusing on filtering unstructured information and delivering exactly what an Information Security professional needs.

A challenge we must not ignore is that these technologies are based on what is called “machine learning”. The machines must learn before they can do their job. That is no different from humans: an experienced security expert first needs experience. With machines, however, this leads to two challenges.

One is that machines, if used in Information Security, first must learn about incidents and attacks. In other words: they can only identify attacks after learning about them. Potentially, that means some attacks must occur before the machine can identify and protect against them. There are ways to address this. Machines can share their “knowledge” far better than humans, so the time until they can react to attacks can be shortened massively. Furthermore, the more “cognitively” the machines behave, the better they might detect new attacks by identifying analogies and similarities in patterns, without knowing the specific attack.

On the other hand, training the machines bears the risk that they learn the wrong things. Attackers might even systematically train cognitive security systems in wrong behavior. Botnets might be used for such sophisticated “training” before the actual attacks occur.

While there is strong potential in Cognitive Security, we are still in the very early stages of its evolution. However, I see these technologies not as replacing humans but as complementing them. Systems can run advanced analysis on masses of data and help find the few needles in the haystack, the signs of severe attacks. They can help Information Security professionals make better use of their time by focusing on the most likely traces of attacks.

Traditional SIEM (Security Information and Event Management) will be replaced by such technologies – an evolution that is already under way, applying Big Data and advanced analytical capabilities to the field of Information Security. We at KuppingerCole call this Real-Time Security Intelligence (RTSI). RTSI is a first step on the journey towards Cognitive Security. Given that security is amongst the most complex challenges to solve, and that attacks cause massive damage, this is one of the fields where the evolution of cognitive technologies will take place. It is not as popular as playing Go or chess, but it is a huge market with massive demand. Today, we can observe the first examples of “Cognitive Security”. By 2025, such solutions will be mainstream.

Intel Security recently released an in-depth survey of the cybersecurity industry, examining the causes of the shortage of people with training and professional accreditation in computer security. The global report, titled “Hacking the Skills Shortage”, concludes: “The cybersecurity workforce shortfall remains a critical vulnerability for companies and nations”.

Most respondents to the survey considered the ‘cybersecurity skills gap’ to have a negative effect on their company, three quarters felt that governments were not investing appropriately to develop cybersecurity talent, and a whopping 82% reported that they cannot get the cybersecurity skills they need.

Only one in five believed current cybersecurity legislation in their country is sufficient. Over half thought current legislation could be improved and a quarter felt current regulation could be significantly enhanced.

From an education viewpoint, the study concluded that colleges are not preparing their students well for a career in cybersecurity. It suggested that the requirement for a graduate degree for cybersecurity positions should be relaxed and that greater stock should be placed on professional certifications. Cybersecurity appreciation should start at an earlier age, and we need to move with the times, targeting a more diverse, multicultural, and mixed-gender audience.

But what does this mean for companies needing assistance with their cybersecurity requirements now? How should we respond to a known deficiency in available expertise? Given that companies are increasingly relying on consultants and analysts, we need to prepare our staff and suppliers to step into the gap and assist us in identifying requirements, analysing potential solutions and developing a roadmap to follow, so that we can maintain our computer security, minimise data loss and protect our intellectual property.

There’s potentially another option to fill the cybersecurity expertise gap in the future – Cognitive Security.

The term Cognitive Security refers to an increasingly important technology that combines self-learning systems with artificial intelligence to look for patterns and identify situations that meet predefined conditions – which can indicate network compromise – and to provide expert advice for diagnostic activity.

While artificial intelligence has had a chequered past, it is likely to significantly impact society over the next decade. It is already being deployed in big data analysis to identify trends and understand consumer behaviour; it provides the ability to automate promotional activity, and it enables us to better meet customer expectations even as marketing budgets are constrained. In the cybersecurity space it can be used to identify potentially nefarious activity and to make decisions on how to respond to events. Increasingly, automated data analysis allows us to detect network compromise, and artificial intelligence provides assistance in taking remedial action.

A number of large research organisations are at the forefront of Cognitive Computing. IBM is very active via its Watson initiative, which is pioneering data mining, pattern recognition and natural language processing. IBM Watson for Cyber Security, the one solution already announced, focuses on collecting unstructured information and providing Information Security professionals with the background information they need, without their having to search for it. Google DeepMind demonstrated the rapid progress of the field when AlphaGo beat a Go grandmaster earlier this year. Microsoft is also heavily involved in the sector with the release of its first set of Cognitive Services – in effect, APIs for facial recognition, facial tracking, speech recognition, spell checking and smile prediction software.

So what does this have to do with security on our company’s network?

We’re seeing the beginning of cognitive security in the threat analytics that are now rapidly developing with innovative solutions that monitor corporate networks and then ‘learn’ what normal network traffic looks like. Nefarious activity can be detected via anomalies in network traffic. If an account that normally accesses a departmental subnet for access to work applications suddenly attempts to access another server, not part of their normal activity, threat analytics will identify such events and will act in accordance with the established policy, either issuing a notification for follow-up or disabling the account pending investigation. Many suppliers also maintain, or subscribe to, community threat signature services that identify known attack vectors and can automatically alert on their occurrence on the network being monitored. These systems can also provide triage services to assist in determining remedial action and forensic analysis to aid in developing preventative maintenance processes.
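The account-access scenario above can be sketched in a few lines. Everything here – the class name, the accounts, the servers – is hypothetical; real threat analytics model traffic far more richly, but the learn-then-flag loop is the core idea:

```python
from collections import defaultdict

class AccessBaseline:
    """Learn which servers each account normally touches,
    then flag accesses that fall outside that baseline."""

    def __init__(self):
        self.seen = defaultdict(set)

    def learn(self, account, server):
        # Record an observation during the learning phase.
        self.seen[account].add(server)

    def check(self, account, server):
        # True means the access deviates from the learned baseline.
        return server not in self.seen[account]

baseline = AccessBaseline()
for account, server in [("jdoe", "hr-app"), ("jdoe", "mail"), ("ops1", "db-prod")]:
    baseline.learn(account, server)

print(baseline.check("jdoe", "mail"))     # False: normal activity
print(baseline.check("jdoe", "db-prod"))  # True: anomalous, raise an alert
```

In a production system the “learn” phase never really ends, and a flagged access would feed the policy engine described above – notify for follow-up, or disable the account pending investigation.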

While there is still a long way to go, the technology holds significant promise. If we extrapolate the findings of the Intel survey to our own situations, it is unlikely that we will be able to fill the need for human cybersecurity experts; it is therefore prudent to track developments in cognitive security as they apply to our network monitoring and incident response requirements.

While it must not be our only approach to network protection and data loss prevention, cognitive security holds significant potential to become a major component of our corporate security arsenal in the future.

‘Know your customer’ started as an anti-money laundering (AML) initiative in the financial industry. Regulators insisted that banks establish customer ‘due-diligence’ processes to ensure that all bank accounts could be traced back to the entities that owned them. The intent was to make it difficult to funnel money from illegal activity through a legitimate commercial operation. But while they focus on AML regulation, banks often miss the opportunity to know, and serve, their customers.

Increasingly, businesses are realizing that the demographics of their customers are changing: the base is shifting away from the ‘baby-boomers’, who are focused on value, towards ‘millennials’, who are focused on experience.

Baby-boomers have grown up in a relatively stable environment, with a stable family life and long-term employment. They value ‘best practices’ and loyalty. Millennials, those coming of age at the turn of the century, have experienced a much more fluid upbringing. Their family life has been fractured and inconsistent, and they have no expectation of, nor desire for, long-term employment. They are more interested in flex-time, job-sharing arrangements and sabbaticals.

More importantly, millennials want experience over value. They are less concerned with what they pay for something than with their experience in purchasing it. They will not tolerate a bad experience, whether in-store or on-line. And they have the technology to let others know about their experience.

There are two approaches to this situation: become despondent and despair of ever attracting this market sector, or consider the vast opportunity of hundreds of millennials posting and tweeting about the fantastic service they experienced when they did business with you.

From Knowing to Serving

So – how do we ‘serve’ our customers? Firstly, we need to know them and then we need to align our marketing practices to them.

Knowing them requires us to build a picture of our customer base and segment it into groups according to their propensity to purchase our products and services. This will likely require an analysis of CRM data and potentially some big-data analysis of customer transaction records. Engaging a cloud service provider and using their Hadoop services and map-reduce functionality may assist. The intent is to build a customer identity management service that can be used for product and service development and automated marketing. Customer analytics and user-managed access – i.e. giving users control of their data and management of their transactions with your organisation – are enabled by a good customer identity and access management (CIAM) facility.
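The segmentation step can be illustrated with a toy map-reduce-style aggregation. The transaction records, segment labels and spend threshold are all invented for the example; at scale the same map/shuffle/reduce pattern would run on a Hadoop cluster rather than in memory:

```python
from collections import defaultdict

# Hypothetical transaction records: (customer_id, amount).
transactions = [
    ("c1", 120.0), ("c2", 15.0), ("c1", 30.0),
    ("c3", 300.0), ("c2", 10.0), ("c3", 250.0),
]

def segment_customers(records, high=200.0):
    """Reduce per-customer spend, then bucket into simple segments."""
    totals = defaultdict(float)
    for customer, amount in records:   # map + shuffle: group by customer
        totals[customer] += amount
    return {c: ("high-value" if t >= high else "standard")  # reduce
            for c, t in totals.items()}

print(segment_customers(transactions))
# {'c1': 'standard', 'c2': 'standard', 'c3': 'high-value'}
```

Real segmentation would combine many signals – recency, frequency, channel, propensity scores – but even this crude bucketing shows how raw transaction logs become marketing-ready customer groups.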

Once we know our customers, we can tailor our marketing program to ‘serve’ them. This means that we need to modify our product or service to suit their requirements. There is no point in offering something they don’t want, and you can’t rely on history: as the baby-boomer segment inevitably declines, its purchasing patterns become irrelevant. Millennials will gladly tell you what they want if they are asked; putting some effort into understanding them will not go unrewarded.

Pricing must also be commensurate with the product or service being offered. As noted earlier, millennials are far less price-conscious than baby-boomers, so a ‘differentiation’ strategy is advised. Make your product or service special, and charge for it.

Promotion should be targeted too. Hardcopy media is of little use; focus on social networks and on-line advertising. Google AdWords does work, and it can be money well spent. Make sure your website is responsive – millennials are lost on anything bigger than a 12cm screen.

There is no doubt that doing business is becoming much more interesting. The potential for attracting new customers has never been greater and the opportunities are vast. The only question is “are we agile enough to exploit it?”
