The commoditization of technology has reached its pinnacle with the advent of the recent paradigm of Cloud Computing. Infosys Cloud Computing blog is a platform to exchange thoughts, ideas and opinions with Infosys experts on Cloud Computing

September 30, 2017

The world is becoming more innovative and intelligent, with a mesh of digitized people, things, and disruptive technologies.

At one end, human brain power is being infused into machines, making them artificially intelligent so they can solve human problems for good. At the other end, unethical hackers are instilling their intelligence in malicious worms that attack IT systems, posing security threats to one and all.

In short, human brain power is mimicked in machines for both good and evil purposes. This has given rise to a long debate: is AI (Artificial Intelligence) a force for good or evil, a threat or an opportunity for IT security? There is no single answer. Good and evil are like two sides of a coin, inseparable. Every invention carries both potentials; consider fire, the knife, the engine, fuel, our beloved Internet, and on and on. Good wins over evil when we as humans strive to maximize the positive potential of an invention, thereby automatically weakening its negative potential.

With this worthy intent, let's move forward to see how AI can be leveraged for positive use cases. In this blog, I want to take up one such use case: the "Adaptive Security Model".

- Retrospective: Deep analysis of issues that were not detected at the detective layer. Preventive and detective measures are then enhanced to accommodate these learnings.

- Predictive: Continuously learns and observes patterns in network traffic, and keeps the security team alerted to potential anomalies and attacks.

Machine Learning (ML) algorithms and techniques are the core of the predictive competency of the adaptive security model. The ML field, in the security arena or elsewhere, is vast and continuously evolving through numerous research efforts. The intention in this blog is just to scratch the surface of ML in the adaptive security context.

Among the many types of predictive models in the security context, the most popular are Network Intrusion Detection models. These models focus on anomaly detection and thus differentiate between normal and malicious data.

The two broad types of machine learning techniques for anomaly detection are supervised and unsupervised.

- In supervised machine learning, the model is trained with a dataset that contains both normal and anomalous samples, explicitly labelled. These methods use classification techniques to classify observations based on their attributes. Key algorithms for the adaptive security model include decision trees, the naïve Bayes classifier, neural networks, genetic algorithms, and support vector machines.

- Unsupervised machine learning does not rely on labelled training data. It uses clustering techniques to group data with similar characteristics, and differentiates normal from malicious data based on a) the assumption that most network traffic is normal and only a small percentage is abnormal, and b) variations in statistical parameters between the two clusters.
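As a toy illustration of assumption (a) above, the sketch below clusters a made-up one-dimensional traffic feature into two groups with a minimal 2-means loop and flags the minority cluster as anomalous. The numbers and the hand-rolled clustering are purely illustrative; a real system would use a proper library and many features.

```python
# Toy sketch of the unsupervised assumption: most traffic is normal,
# so after clustering, the smaller cluster is flagged as anomalous.
# Feature values are hypothetical "connections per minute" per host.

def two_means(values, iters=20):
    """Minimal 1-D 2-means clustering (illustrative, not production)."""
    c = [min(values), max(values)]        # initialise centroids at the extremes
    for _ in range(iters):
        groups = [[], []]
        for v in values:
            # assign each value to its nearest centroid
            groups[abs(v - c[0]) > abs(v - c[1])].append(v)
        # move each centroid to the mean of its group
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return groups

traffic = [12, 15, 11, 14, 13, 12, 16, 540, 610]   # two hosts flooding
clusters = two_means(traffic)
normal, anomalous = sorted(clusters, key=len, reverse=True)
print(anomalous)    # the minority cluster -> flagged as suspicious
```

The minority cluster (the two flooding hosts) is what a real model would surface to the security team for review.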

Training and Testing Dataset Creation

Create a rich dataset to be used for training and testing the model. Data sources may range from retrospective network traffic, past malicious attack patterns, audit logs, normal activity profile patterns, attack signatures, and so on.

Predictive Attributes Selection

This is popularly known as 'feature engineering'. A dataset will have numerous attributes, and the success of a predictive model depends on an impactful combination of attributes, or features as they are called in ML terminology. Irrelevant and redundant attributes have to be eliminated from the feature set. There are many theorems and techniques for this, PCA (Principal Component Analysis) being one of the most popular. PCA is a common statistical method used in multivariate optimization problems to reduce the dimensionality of data while retaining a large fraction of its characteristic variance.
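As a rough illustration of the idea behind PCA, the sketch below computes the first principal component (the direction of maximum variance) of two made-up, perfectly correlated traffic attributes via power iteration. In practice one would use a library implementation over many attributes; everything here is illustrative.

```python
# Hedged sketch: finding the direction of maximum variance for 2-D
# feature vectors. The (packet size, duration) pairs are made up.

def first_principal_component(rows, iters=100):
    n = len(rows)
    # centre the data
    means = [sum(r[i] for r in rows) / n for i in range(2)]
    x = [[r[0] - means[0], r[1] - means[1]] for r in rows]
    # 2x2 sample covariance matrix
    cov = [[sum(a[i] * a[j] for a in x) / (n - 1) for j in range(2)]
           for i in range(2)]
    # power iteration -> dominant eigenvector (first principal component)
    v = [1.0, 1.0]
    for _ in range(iters):
        w = [cov[0][0] * v[0] + cov[0][1] * v[1],
             cov[1][0] * v[0] + cov[1][1] * v[1]]
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = [w[0] / norm, w[1] / norm]
    return v

rows = [[2, 1], [4, 2], [6, 3], [8, 4]]   # perfectly correlated features
v = first_principal_component(rows)
print(v)   # direction proportional to (2, 1): one component captures both
```

Because the two features are redundant, a single component retains all of the variance; this is exactly the redundancy elimination the paragraph above describes.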

Classifier Model Construction

Build and train the model based on one or more algorithms, then test it with test data. The model should classify each observation as either the Normal class or the Anomaly (malicious) class.
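A minimal sketch of this build-train-test loop, using a one-feature decision stump as a stand-in for a real classifier; the (feature, label) samples are invented for illustration only.

```python
# Minimal sketch of "build, train, test": a one-feature decision stump.
# Samples are hypothetical (failed_logins_per_min, label) pairs;
# label 1 = Anomaly (malicious) class, label 0 = Normal class.

def train_stump(samples):
    """Pick the threshold that best separates normal from anomalous."""
    best = None
    for t, _ in samples:                         # candidate thresholds
        errors = sum((x > t) != bool(y) for x, y in samples)
        if best is None or errors < best[1]:
            best = (t, errors)
    return best[0]

train = [(1, 0), (2, 0), (3, 0), (2, 0), (40, 1), (55, 1)]
threshold = train_stump(train)

test_data = [(2, 0), (60, 1)]                    # held-out test samples
predictions = [int(x > threshold) for x, _ in test_data]
print(predictions)   # 0 = Normal, 1 = Anomaly
```

A production model would of course train on many features with one of the algorithms named earlier (decision trees, SVMs, neural networks), but the train-then-test shape is the same.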

Test and Optimize the Model

The performance of the model depends on two parameters: the malicious activity detection rate (DR) and the false positive rate (FP).

DR is defined as the number of intrusion instances detected by the system divided by the total number of intrusion instances present in the test dataset.

FP counts false alarms raised for something that is not really an attack. Model optimization should aim to maximize DR and minimize FP.
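The two parameters can be made concrete with a small worked example; the counts below are hypothetical test-run numbers.

```python
# Worked example of the two evaluation parameters, computed from a
# hypothetical test run (all counts are made up for illustration).

intrusions_in_test_set = 200     # actual attacks in the test data
intrusions_detected = 184        # of those, flagged by the model
normal_in_test_set = 1800        # benign observations
false_alarms = 36                # benign observations flagged as attacks

detection_rate = intrusions_detected / intrusions_in_test_set
false_positive_rate = false_alarms / normal_in_test_set

print(f"DR = {detection_rate:.0%}, FP = {false_positive_rate:.0%}")
# Optimization should push DR toward 100% while driving FP toward 0%.
```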

Employ the Model for real-time network traffic

Model performance in production will depend on the accuracy and maturity of the trained model. The model should be kept up to date with repeated re-training that accommodates changing attack patterns and activities.

Whatever the technology revolution, there is no silver bullet to future-proof security. The security fence always has to be one level above some of the most devious minds. Though innovative AI-based predictive-adaptive models are gaining momentum, security hackers and predators too are advancing in maliciously attacking these models. We have to wait and watch which intelligence reigns... the Threat or the Protection. :)

September 25, 2017

Microservices is now becoming the most preferred method for creating distributed, component-based applications on the cloud. This architectural style allows developers to develop, deploy, test, and integrate modular components with much ease. When an application is built using the microservices model, smaller modular services are created instead of one autonomous monolithic unit. These modular services are then tied together with the help of HTTP or REST interfaces. But this distributed model results in a proliferation of interfaces, and the communication between them generates several secrets management challenges. Some application secrets that need to be secured in a microservices deployment model are:

Environment variables - If not secured, they can pose a security risk and affect the smooth running of processes.

Database credentials - Usernames and strong passwords used to connect to a resource.

API keys - Keys that grant restricted access to applications.

With automated deployment in microservices, there are additional credentials for creation of resources (mostly in the cloud), access to code and artifact repositories, machine credentials to install components, and so on.
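One minimal way to illustrate handling such secrets is loading them from environment variables at service start-up and failing fast when any are missing. The variable names below are hypothetical, not from any specific product.

```python
# Hedged sketch: loading the kinds of secrets listed above from
# environment variables at start-up, failing fast if any are missing.
# Variable names (ORDERS_DB_USER, ...) are hypothetical.

import os

REQUIRED = ["ORDERS_DB_USER", "ORDERS_DB_PASSWORD", "PAYMENTS_API_KEY"]

def load_secrets(environ=os.environ):
    missing = [name for name in REQUIRED if name not in environ]
    if missing:
        raise RuntimeError(f"missing secrets: {', '.join(missing)}")
    return {name: environ[name] for name in REQUIRED}

# In production these values would be injected by a secrets manager,
# never committed to code or container images. Demo values below:
secrets = load_secrets({"ORDERS_DB_USER": "svc_orders",
                        "ORDERS_DB_PASSWORD": "s3cr3t",
                        "PAYMENTS_API_KEY": "key-123"})
print(sorted(secrets))
```

Failing fast keeps a misconfigured service from starting half-working, which is one of the "smooth running of processes" risks called out above.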

There is a need for a centralized secrets management system so that enterprises adopting a microservices model can effectively manage secrets and handle security breaches. How do you keep your microservices secrets safe without compromising on security and automation? By adhering to these must-dos:

A secrets hierarchy design should account for secrets isolation per application and environment, and for fail-proof revocation of secrets when required.

To further strengthen the secrets structure, access policies and role-based mappings need to be version controlled and automated, so that they can support emergencies.
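One way to sketch such a hierarchy is a path scheme that isolates secrets per application and environment, so a secret can be revoked for a single app/env without touching the rest. The scheme and environment names below are assumptions for illustration, not any specific product's layout.

```python
# Hedged sketch of a secrets hierarchy: one path per application and
# environment. Path scheme and environment names are assumptions.

def secret_path(app: str, env: str, name: str) -> str:
    """Build a per-app, per-environment secret path,
    e.g. secret/prod/billing/db-password."""
    allowed = {"dev", "test", "prod"}
    if env not in allowed:
        raise ValueError(f"unknown environment: {env}")
    return f"secret/{env}/{app}/{name}"

print(secret_path("billing", "prod", "db-password"))
```

Because every secret lives under a single app/env prefix, revoking or rotating everything for one application in one environment is a single subtree operation, which is what fail-proof revocation needs.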

Let's take a look at some secrets management scenarios
and examples:

Servers on which microservices need to be deployed with certificates - On the cloud, as servers come and go, a centralized certificate management system helps generate certificates on the fly, allowing immediate deployment to servers. The certificate keyStore and trustStore need to be secured with passwords, which can be kept safe and retrieved from a secrets management solution. A PKI secrets backend and generic secrets storage come in handy to automate all of this with minimum risk to security.

Microservices and applications need access to their own databases or data stores - It makes sense to isolate the database/data access credentials in a generic secrets storage so that renewal, rotation, and revocation are easy to maintain as required.

When automated environment provisioning needs access to a software installation repository - For example, Apache server provisioning can be automated with the Apache installable accessed from a software repository. The repository can be accessed using generic credentials or an API key. A centralized secrets management solution is the right place to store these credentials and achieve automation with no compromise on security.

In conclusion: to simplify and automate secrets management, solutions are available from cloud providers, such as AWS KMS and Azure Key Vault, and from specialized security solutions like HashiCorp Vault. Enterprises adopting microservices need to understand this paradigm shift in secrets management, to ensure that their transformation journey delivers the required agility in the most secure manner possible.