25 posts categorized "Cyber Attacks & Defenses"

With 7.0, ExtraHop introduces live activity maps for complete 3D interaction with the hybrid IT environment; enhanced threat anomalies and machine learning-initiated workflows for performance and security; and perfect forward secrecy (PFS) decryption at scale to support next-generation security architectures.

I have a fundamental question for you. Are you managing your security and monitoring tools or are they managing you? We all want to say that WE are in control, correct? Unfortunately, data from two EMA investigations shows that this might not be the case. It is summarized in this infographic – How to Combat Monitoring and Security Tool Overload.

The number of security and monitoring tools that IT personnel use is increasing. According to the EMA Network Management Megatrends 2016 Report, the number of security and monitoring tools used by an "average" enterprise (1,000 to 4,999 employees) ranges anywhere from 4 to 15 different tools. In 2014, the average enterprise used 3 to 10 different tools (according to EMA). So in two years, the number of tools in use grew by roughly 33 to 50%.

This causes several problems for IT:

Getting proper access to good-quality monitoring data

The sheer volume of tools, which makes them hard to manage

A mixture of virtual and physical tools, which makes the situation even more confusing

Network visibility is fast becoming a key component of network and security planning. This is because network visibility is more than just network monitoring. It is about understanding the network: how is it actually performing, are there current problems, where do future pain points lie, and how can resources be optimized? IT's fundamental challenge is to ensure that the infrastructure beneath its applications is reliable, fast, and secure.

Encrypted data further exacerbates the situation. According to a Blue Coat infographic, half of all network security attacks in 2017 will use encrypted traffic to bypass controls. In addition, internal and external SLAs and customer quality of experience have become increasingly important for IT. These requirements are forcing IT to gain even better insight into, and understanding of, the network to maximize performance. What no IT team wants to discover is that all of its assumptions and architecture designs are based on incorrect or missing data. When this happens, the result is higher solution costs, confusion, rework, customer dissatisfaction, performance problems, and unplanned outages.

Every network has blind spots. In fact, blind spots have become a serious security issue for enterprises and service providers. According to the 2016 Verizon Data Breach Investigations Report, most victimized companies don't discover a security breach themselves. Approximately 75% have to be informed by law enforcement or third parties (customers, suppliers, business partners, etc.) that they have been breached. To make matters worse, the average time to detect a breach was 168 days, according to the 2016 Trustwave Global Security Report.

Whether or not you think security breaches are inevitable, you still need to be able to mitigate any damage done by quickly detecting and remediating all breaches. One fast way to do this is to capture application-level traffic running on your network and analyze it from a macroscopic point of view, using indicators of compromise (IOCs). Security breaches almost always leave behind some indication of the intrusion, whether it is malware, suspicious activity, signs of some other exploit, or the IP addresses of the malware controller.
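The matching step itself is simple once the traffic is captured. The sketch below shows the idea in miniature: sweep a set of observed flow records against a feed of known-bad IP addresses. All names, IPs, and record fields here are illustrative, not from any particular product.

```python
# Minimal IOC sweep: match observed network flows against a feed of
# known-bad indicators (destination IPs here). Everything is illustrative.

KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.42"}  # hypothetical threat feed

flows = [  # captured application-level flow records (toy data)
    {"src": "10.0.0.5", "dst": "203.0.113.7", "app": "HTTP", "bytes": 48213},
    {"src": "10.0.0.9", "dst": "93.184.216.34", "app": "TLS", "bytes": 1200},
]

def find_ioc_hits(flows, bad_ips):
    """Return flows whose destination matches a known indicator of compromise."""
    return [f for f in flows if f["dst"] in bad_ips]

hits = find_ioc_hits(flows, KNOWN_BAD_IPS)
for f in hits:
    print(f"IOC hit: {f['src']} -> {f['dst']} ({f['app']}, {f['bytes']} bytes)")
```

A production system would match on more than IP addresses (domains, file hashes, TLS fingerprints) and would stream records rather than hold them in a list, but the core operation is this set lookup.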

A visibility architecture that uses application intelligence can capture the IOCs needed. The breadcrumbs are there; they just need to be illuminated. What if you could reduce that 168-day average to 168 seconds?

If you are not familiar with application intelligence, it is basically the real-time visualization of application-level data. This includes the dynamic identification of known and unknown applications on the network, application traffic and bandwidth utilization, detailed breakdowns of applications in use by application type, and the geo-location of users and devices while accessing applications.
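One of the capabilities listed above, per-application bandwidth breakdowns, can be sketched in a few lines. This toy example identifies applications by destination port, which is a stand-in for real application intelligence (production systems use deep packet inspection to classify traffic regardless of port); the port map and flow records are assumptions for illustration.

```python
from collections import defaultdict

# Toy port-to-application map; real application intelligence classifies
# traffic by payload inspection, not port numbers.
PORT_TO_APP = {443: "HTTPS", 80: "HTTP", 53: "DNS"}

flows = [  # illustrative flow records
    {"dst_port": 443, "bytes": 90000},
    {"dst_port": 80, "bytes": 15000},
    {"dst_port": 443, "bytes": 30000},
    {"dst_port": 6881, "bytes": 5000},  # unidentified application
]

def bandwidth_by_app(flows):
    """Aggregate bytes per identified application ('UNKNOWN' for the rest)."""
    totals = defaultdict(int)
    for f in flows:
        app = PORT_TO_APP.get(f["dst_port"], "UNKNOWN")
        totals[app] += f["bytes"]
    return dict(totals)

print(bandwidth_by_app(flows))
# → {'HTTPS': 120000, 'HTTP': 15000, 'UNKNOWN': 5000}
```

The "UNKNOWN" bucket is the interesting one for security: unexpected bandwidth on unidentified applications is exactly the kind of blind spot discussed earlier.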

What would happen if a hacker took over control of a nuclear power plant and used it for blackmail or destruction?

What devices control refineries and power plants, even our drinking water purification facilities?

Well, these and many other life necessities are run and controlled by SCADA (Supervisory Control and Data Acquisition) systems or ICS (Industrial Control Systems). SCADA and similar systems have been monitoring and controlling our industrial, power, and refinery world since the 1960s.

I actually worked for a SCADA research and monitoring company in that era, designing and testing production monitoring tools for the oil industry, from acquisition through refinement, and all of it had to be built to industrial grade.

What is industrial grade? Mainly, it is an operating temperature range of -40°C to +85°C (military grade is -55°C to +125°C), along with other requirements for downhole (drilling) operations, mine operations, and even space operations. These can include high pressure, shock, mechanical stress, certain types of vibration, and non-condensing humidity approaching 100% — many different factors for many different arenas.

David Thomason, CEO of Thomason Technologies, will be announcing a new product for the passive monitoring of industrial networks. SpyGOOSE is their proprietary software which in the past has been distributed with their industrial IPS, the TTL1000. Today, Dave will discuss the general availability of the stand-alone version of this software.