We must stay relevant – a new world of research and development is coming, everything from the basics to security tool programming

Threat modelling

If AI improves scanning, hackers will have much better ways of finding application flaws

Closing thought;

Feynman’s first proposed QC was a universal quantum simulator

Seth Lloyd showed a QC can perfectly simulate any quantum system in the universe

It turns out the universe is a giant, 13.7-billion-year-old quantum computer

What will we be hacking one day?

This was a very thought-provoking and fast-paced talk. The above notes are very high level, but cover the main points of the talk and can be used to aid searches for more in-depth reading. This presentation really highlighted to me that I need to read up more on this stuff.

We are not there yet, but Quantum Computers are coming and they will have huge ramifications for pretty much all areas of computing. From a security standpoint, we will likely need a full overhaul of cryptography and threat modelling, along with application and system vulnerability scanning. Of course not forgetting a whole new class of computers and networks to understand and secure!

Interesting times ahead, and I highly recommend further reading on this topic.

Communicating information security value to the business using words and pictures.

Presentation by Steve Jump from Telkom SA SOC Ltd.

I have high hopes for the usefulness of this talk, as we all seem great at explaining and discussing security issues with other security and technical people, but fairly terrible at getting the board and other business people to understand the issues and the importance of remediating them!

Highlighted at the start that this is a work in progress, but already proving useful.

If you are trying to obtain budget for upcoming initiatives you need to get the board on board and ensure they understand the risks from a business standpoint.

Why business gets turned off by security

Too much shouting about risks, creating policies and standards, more talking about risks – who is looking at your data (criminals, governments, hacktivists), where is your data, more standards and policies

What the business actually wants (and needs) to talk about

What do these threats mean to my business?

Why should I worry?

How does this affect the bottom line?

What happens if I ignore you? (e.g. is the cost of doing nothing lower than the cost of fixing the issue?)
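The "cost of doing nothing" question can be framed with a standard annualised loss expectancy (ALE) comparison. A minimal sketch follows; the figures are hypothetical examples, not numbers from the talk.

```python
# Illustrative ALE comparison: expected annual loss if an issue is ignored
# vs. the one-off cost of fixing it. All figures are made up for illustration.

def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """ALE = SLE x ARO: expected loss per year if the issue is left unfixed."""
    return single_loss_expectancy * annual_rate_of_occurrence

cost_of_fix = 50_000                      # hypothetical one-off remediation cost
expected_annual_loss = ale(200_000, 0.4)  # e.g. a 200k incident every ~2.5 years

print(f"Expected annual loss if ignored: {expected_annual_loss:,.0f}")
print(f"Cost of fixing now:              {cost_of_fix:,.0f}")
print("Fix it" if cost_of_fix < expected_annual_loss
      else "Doing nothing is cheaper this year")
```

Framed this way, the board question "what happens if I ignore you?" gets a bottom-line answer rather than a technical one.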

Prevention of business growth and reduced opportunity for profit, due to reduced agility of systems and the increased need to deliver custom protection for solutions.

Reputation

Loss of business reputation resulting from information loss or device interruption resulting in loss of credibility with customers and investors.

So that’s all the jargon sorted out?

Think of creating threat cubes – they have a LOT more words than this and are technical.

So how do we bridge the gap between the jargon and output from threat analysis etc. and a simple taxonomy the business can understand, relate to and use in budget and planning discussions?

Add pictures!

One for each of the six words in the simple taxonomy;

Warning triangle – Regulatory

Credit card – Fraud (may need to be different for you if you work in a PCI environment as this may get confused with the regulatory one)

Money Bag – Theft

Road block sign – Service availability (things with this could impact our ability to do business)

Rocket ship – Business agility – faster, innovative

Happy / sad masks – Reputation

So the taxonomy now has words and images for each item.

So when you create a threat cube or other form of threat analysis you can then relate each item on the list back to one or more of the taxonomy words and images – images can be added to aid understanding. For reporting, each should be mapped to the main area it impacts.
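The mapping step above can be sketched in a few lines of code. The six taxonomy words come from the talk; the findings themselves and their category assignments are invented examples.

```python
# Sketch of mapping technical threat findings onto the six-word business
# taxonomy from the talk. The findings below are hypothetical examples.
TAXONOMY = {"Regulatory", "Fraud", "Theft", "Service availability",
            "Business agility", "Reputation"}

# Each finding maps to one or more taxonomy words; the first listed is
# treated (naively, for this sketch) as the main area it impacts.
findings = {
    "SQL injection in payment API": ["Fraud", "Theft", "Reputation"],
    "Unpatched internet-facing VPN": ["Service availability", "Regulatory"],
}

for finding, areas in findings.items():
    assert set(areas) <= TAXONOMY, f"unknown category in {finding!r}"
    print(f"{finding} -> primary impact: {areas[0]}")
```

For board reporting, only the primary impact (plus its image) would be shown; the full mapping stays in the technical threat analysis.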

How this works in practice;

Formal Information Security Risk assessment process

Assess each solution, change, product or service against technical and business threat models

It will be interesting to test this method out at work to see if it helps get engagement from the board and wider business. This definitely seems like a good idea, and anything that helps engage and leads to greater understanding of security issues has to be worth a try!

It would be great to hear from anyone who is trying this method, or a similar one, in their business.

Well I am at the ISF (Information Security Forum) annual congress for the next couple of days. As usual I’ll blog notes and some comments from the talks I listen to, and where possible share them ‘live’ and as is.

Presentation by Stefan Frei and Francisco Artes from NSS Labs.

The risk is much larger than people thought. It is more like an 800-pound ‘cyber gorilla’ than a chimpanzee. And to make things worse, it is a whole field of these ‘cyber gorillas’.

How do we report on this with metrics that are meaningful to the board?

Threat modelling can be a useful tool here.

Live modelling solutions (such as those offered by NSS Labs) can be used to model different tools from different vendors in an environment broadly similar to yours (NSS example);

Pick your applications and operating systems

Pick your broad network design

Pick the security solutions and where they are placed.

Each device is tested with >2000 exploits, so when you choose different devices you can see where the exploits would be caught or missed. For example, you could layer brand X NGFW with brand Y IPS and brand Z AV; the ‘live’ threat model would then map the exploits that each device missed, so you can see if any would pass all the layers in your security.
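The layered-coverage idea can be sketched with simple set arithmetic: each device blocks some set of exploit IDs, and an exploit reaches the target only if every layer misses it. Device names echo the example above; the exploit IDs and block rates are invented for illustration.

```python
# Set-based sketch of layered exploit coverage. An exploit passes only if
# it is missed by every device in the chain. All numbers are hypothetical.
exploits = set(range(2000))                      # the >2000 tested exploits

blocked_by = {
    "Brand X NGFW": set(range(0, 1800)),                          # misses 200
    "Brand Y IPS":  set(range(100, 1950)),                        # misses 150
    "Brand Z AV":   set(range(0, 1000)) | set(range(1950, 1970)),
}

missed_by_all = set(exploits)
for device, blocked in blocked_by.items():
    missed_by_all -= blocked      # an exploit survives this layer only if missed

print(f"Exploits passing every layer: {len(missed_by_all)}")
```

Even with three layers each blocking most exploits, a residue can slip through all of them, which is exactly what the live model visualises.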

All tests were done with the devices tuned as per the manufacturers’ recommendations.

For IPS, the vendors had experts tune them; this led to a 60-85% increase in IPS performance. This point is very interesting even outside this talk – IPS devices MUST be tuned and maintained for them to deliver value and protection. Do you regularly tune and maintain the IDS / IPS devices in your environment?

The report / live threat modelling also differentiates between automated attacks and hand-crafted ones. This highlights how many attacks could relatively easily be launched by anyone with basic skills using free tools such as Metasploit. This raises the question of why security tool vendors can’t at least download exploit tool kits and their updates to ensure their tools can prevent the available pre-packaged attacks!

This is definitely a useful tool, and whether NSS or similar I can recommend you undertake some detailed threat modelling of your environment. This type of service allows you to perform much more ‘real’ technical threat modelling rather than just doing theoretical attack scenarios which is as far as most threat modelling exercises seem to go.

What is the threat environment?

Many experts writing tools and exploits.

A huge number of people with limited skills utilising free and paid-for tools created by the experts – this increases the threat exponentially – anyone can try the free tools, and anyone with even limited funds can purchase the paid-for tools (often around $250).

The maturing threat landscape;

There is now a thriving underground market for hacking / attack tools. This has matured and now offers regularly patched software with release cycles, new exploits regularly added, and even full support with email and sometimes phone-based support desks.

The vendors of these hacking tools even offer guarantees around how long exploits will work for and evade security tools.

These are often referred to as Crimeware Kits.

In the tests by NSS Labs, no device detected all of the exploits available in these tools, or in the free tools.

This is the continuing problem for businesses and the security industry – they are always playing catch up and creating tools / solutions to deal with known threats, rarely the unknown threats.

Another interesting finding came from a recent test of NGFWs where combinations of two vendors were used in serial: no one pair prevented all exploits tested. However, careful and planned pairing does improve security. This needs to be tested and planned – choosing two vendors at random is the wrong way to do this. How many businesses currently have separate FW or NGFW vendors at different layers of the network? And how many of these actually researched the exploits that get through them and chose the solutions for maximum protection, rather than simply choosing two different vendors without doing this research?
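Picking a pair deliberately rather than at random amounts to maximising the union of the two vendors' tested coverage. A toy sketch, with made-up coverage sets standing in for real test data:

```python
from itertools import combinations

# Choose the best *pair* of vendors by combined tested coverage, rather than
# picking two different vendors at random. Coverage sets are hypothetical
# stand-ins for real test results (e.g. from a lab like NSS).
coverage = {
    "Vendor A": {1, 2, 3, 5},
    "Vendor B": {2, 4, 6, 7},
    "Vendor C": {1, 4, 5, 8},
}
all_exploits = set(range(1, 9))

best_pair = max(combinations(coverage, 2),
                key=lambda pair: len(coverage[pair[0]] | coverage[pair[1]]))
missed = all_exploits - (coverage[best_pair[0]] | coverage[best_pair[1]])
print(f"Best pair: {best_pair}, still missed: {sorted(missed)}")
```

Note that even the best pair can still miss exploits, matching the finding that no pair in the test prevented everything.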

Security vendors will always be playing catch up, however threat modelling can help ensure you choose the best ones for your environment.

Threat modelling will also help choose the best investments to improve security.

As an example, a business that worked with NSS was about to invest >$300M on NGFWs across their environment. The threat modelling highlighted that this wouldn’t add a huge amount of security due to a Java issue on all their sites and machines. Instead they could invest (and did) more like £3M on migrating the app to HTML5 and removing Java from their environment. This created a much more secure environment for a much smaller investment.

Threat modelling can also include geo-location, and which vendors work best in which locations, as well as just looking at the technologies.

The final point was a reminder that, as no tools will prevent everything, we must assume we have been ‘owned’ (breached) and act accordingly. This must not be an exception process – we must search for and respond to breaches as part of our security business-as-usual process.

If you are not performing live threat modelling, I’d highly recommend you start, as this is a great way of assessing your current security posture, and also very useful for planning your next security investments to ensure they provide the greatest value and measurably improve your security posture.

Overall, this was a very informative talk that, while demonstrating their product / service, managed to stay fairly clear of too much vendor speak and promotion while still highlighting the clear benefits of ‘live threat modelling’.

This was another Gartner talk covering the threat intelligence landscape, what you can expect, and things to consider.

Where did that come from?!

Important concept: “Threat”;

A threat exploits a vulnerability resulting in an incident

Threat – you can’t control this, you can only be well informed and plan for its arrival

Vulnerability – you can control and understand these – secure coding, defence in depth, vulnerability databases etc.

Incident – you want to avoid this!!

The problem is getting the Visibility…

The bad guys follow the same lifecycle that we do;

They talk and research – planning – perhaps up to a year or more

They customise attacks – build

They attack – run

Without threat intelligence your view looks like;

Ignorance (they are researching)

Ignorance (they are planning)

Hacked (they are running their attack)

Understanding upcoming threats allows you to match defences and mitigations required to your strategic planning cycle. To do this we need good information on what is coming up, and what the bad guys are discussing for the future.

Important concept: “Intelligence”

Goes beyond the obvious, trivial, or self-evident:

developed by correlating and analysing multiple data sources / points

Includes a range of information, for example:

Goals of the threat actor

Characteristics of the threat, and potential organisational outcomes if it is successfully executed

Indicators and defences

Life expectancy of the threat

Reliability of the information

Use it to:

Avoid the threat

Diagnose an incident

Support decisions on how to invest in security (strategic planning)

Reliability and planning horizon are key considerations;

Network traffic feeds – automated information feeds – very reliable, but not real intelligence – good for immediate issues, not for planning. Inexpensive

Strategic intelligence – can be very tailored to your organisation: a great deal of human interaction, custom-made research, some human judgement. Reasonably reliable, but reliability obviously decreases the further out the planning goes, as criminals can change their plans. Expensive, but great for strategic planning, especially if you are in a high-risk industry or organisation.

Snake oil – no one can predict 3-5 years out with certainty, so don’t believe anyone who says they can.
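At the cheap, reliable end of that spectrum, an automated network-traffic feed is essentially a list of indicators matched against your own logs. A minimal sketch, using invented feed entries and log lines (the IPs are from the reserved TEST-NET documentation ranges, not real indicators):

```python
# Minimal sketch of consuming an automated indicator feed: match known-bad
# IPs from the feed against outbound connection logs. All entries invented.
feed = {"203.0.113.7", "198.51.100.22"}          # hypothetical known-bad IPs

connection_log = [
    ("10.0.0.5", "203.0.113.7"),    # internal host contacting a feed entry
    ("10.0.0.8", "192.0.2.44"),     # benign as far as this feed knows
]

hits = [(src, dst) for src, dst in connection_log if dst in feed]
for src, dst in hits:
    print(f"ALERT: {src} contacted known-bad host {dst}")
```

This is exactly why such feeds are "very reliable, but not real intelligence": a set lookup tells you what is happening now, not what is being planned.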

Recommendations;

Use dedicated services to plan for long term strategies, and ensure you are concerned about the right threats.

It can take up to two years to be ready for an emerging threat.

Plan – How will you use the service? How will it be consumed? Who will consume it?

Consider whether you need just the threat intelligence, or adjacent services as well.

Before using, engage heavily with the vendor;

How flexible are they to your needs?

Will they go outside of the contract in an emergency or to assist you?

How well can you work with them – need a good, trusted and close working relationship with them.

If you are considering a threat intelligence service, this talk raises some great points to consider. For me, the key point is how well you can work with them. For these services to be successful you need to work very collaboratively together, and they need to have a deep understanding of your specific business and concerns, not just your industry sector. Another recommended talk.

This was an introductory talk around Software Defined Networking (SDN) and some of its security implications.

What is it?

Decoupling the control plane from the data plane and centralising logical controls

Communication between network devices and SDN controllers currently uses both open and proprietary protocols – there is no single standard.

SDN controller supports an open interface to allow external programmability of the environment

– Controller tells each node how to route, vs. the current model where each node makes its own routing decisions.

How do I enforce network security in an SDN environment?

Switch as the Policy enforcement point

The switch tells the controller it has seen traffic with certain flow characteristics; the flow controller tells it what to do with the flow, and this information is cached in the local flow table for a specified time. Another flow arrives and this one is not permitted, so the controller tells the switch to just drop the packets – the switch effectively becomes a stateful firewall.
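The switch/controller interaction just described can be sketched as a flow-table cache with a controller callback. This is a toy model, not OpenFlow: the policy, addresses and timeout are all invented for illustration.

```python
import time

# Toy policy: (src, dst, port) -> action. Anything unlisted is denied.
POLICY = {("10.0.0.5", "10.0.1.9", 443): "forward",   # permitted flow
          ("10.0.0.5", "10.0.1.9", 23):  "drop"}      # telnet forbidden

def controller_decide(flow):
    """Controller's verdict for a flow the switch has not seen (default-deny)."""
    return POLICY.get(flow, "drop")

class Switch:
    def __init__(self, cache_ttl=30.0):
        self.flow_table = {}          # flow -> (action, expiry time)
        self.cache_ttl = cache_ttl

    def handle_packet(self, flow):
        action, expiry = self.flow_table.get(flow, (None, 0.0))
        if action is None or time.monotonic() > expiry:
            # Flow-table miss (or expired entry): ask the controller and cache.
            action = controller_decide(flow)
            self.flow_table[flow] = (action, time.monotonic() + self.cache_ttl)
        return action

sw = Switch()
print(sw.handle_packet(("10.0.0.5", "10.0.1.9", 443)))  # forward
print(sw.handle_packet(("10.0.0.5", "10.0.1.9", 23)))   # drop
```

The cache expiry mirrors the "specified time" the notes mention, and the default-deny branch is what makes the switch behave like a stateful firewall.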

Existing controls such as DLP, Firewalls, Proxy servers etc. can all be used with SDN –

e.g. sending email – no matter where it’s going flow says first point is DLP, then firewall, then onto destination

This means devices no longer need to be inline – they can be anywhere on the network. Flow controller just needs to know where to send certain traffic types!

Incoming flows can be treated in the same way

Something changes – such that it looks like DDoS – traffic can be routed to the DDoS protection device(s)

What risks does SDN introduce?

Risk is aggregated in the controller

Malicious or accidental changes could remove some or all of the security protections

Integrity of the Flow Tables must be maintained

Switches etc. must be managed from the controller, not locally

Input from applications must be managed and prioritised

Application APIs are non standard

Who gets precedence?

Load balancer vs. security tools when defining traffic flows?

SDN products do exist now.

Standards do exist

OpenFlow – maintained by Open Networking Foundation

Network devices (early days)

Open vSwitch

Some products from Brocade, Cisco, HP, IBM

Controllers (limited maturity)

Floodlight (open source)

Products from Big Switch Networks, Cisco, HP, NEC, NTT Data, VMware

Applications (often tied to specific controllers)

Radware and HP produce some security applications

Recommendations;

Do not overreact to SDN hype

Combine IT disciplines when implementing SDN

Don’t forget security!!

Determine how existing control requirements can be met with SDN

Examine how SDN impacts separation of duties

Some similar issues to virtualisation

Discuss SDN with your existing security vendors

Deploy SDN in a lab or test environment

PoC and understand fully before deploying

Overall this was an informative and fast-paced talk. As per the speaker’s recommendations, SDN is a very interesting technology, although it is still in the emerging phase with the majority of deployments currently being in testing or academia. I wouldn’t yet recommend it for production datacentre deployments, but I would recommend you become familiar with it, especially if you work in the networking or security fields.