
Did you know that any app installed on your Android phone can monitor network activity, even without asking for any sensitive permission, to detect when other apps on your phone are connecting to the Internet?

Obviously, they can't see the content of the network traffic, but they can easily find out which servers you are connecting to, all without your knowledge. A "shady" or malicious app can abuse this knowledge of which apps you often use, say a competitor's app or a financial app, to breach your privacy in various ways.

But it seems Google plans to address this serious privacy issue with the release of its next flagship mobile operating system.

With Android P, apps will no longer be able to detect when other apps on your Android device are connecting to the Internet, according to new code changes in the Android Open Source Project (AOSP) first noticed by XDA Developers.

"A new commit has appeared in the Android Open Source Project to 'start the process of locking down proc/net,' [which] contains a bunch of output from the kernel related to network activity," XDA Developers writes.

"There's currently no restriction on apps accessing /proc/net, which means they can read from here (especially the TCP and UDP files) to parse your device's network activity. You can install a terminal app on your phone and enter cat /proc/net/udp to see for yourself."

However, the new changes applied to the SELinux rules of Android P will restrict apps from accessing some network information.

The SELinux changes will allow only designated VPN apps to access some of the network information, while other Android apps seeking access to this information will be audited by the operating system.

It should be noted, however, that the new SELinux changes apply to apps built for API level 28 running on Android P, which means that apps targeting API levels below 28 will continue to have access to the device's network activity until 2019.

A few custom ROMs for Android, such as CopperheadOS, implemented these changes years ago, offering better privacy to their users.

As XDA Developers pointed out, this change to the Android operating system appears so small that users will hardly notice it, "but the implications for user privacy will be massive."

Security researchers at Embedi have disclosed a critical vulnerability in Cisco IOS Software and Cisco IOS XE Software that could allow an unauthenticated, remote attacker to execute arbitrary code, take full control over the vulnerable network equipment and intercept traffic.

Embedi has published technical details and Proof-of-Concept (PoC) code after Cisco today released patch updates to address this remote code execution vulnerability, which has been given a base Common Vulnerability Scoring System (CVSS) score of 9.8 (critical).

Researchers found a total of 8.5 million devices with the vulnerable port open on the Internet, leaving approximately 250,000 unpatched devices open to hackers.

To exploit this vulnerability, an attacker needs to send a crafted Smart Install message to an affected device on TCP port 4786, which is open by default.

"To be more precise, the buffer overflow takes place in the function smi_ibc_handle_ibd_init_discovery_msg" and "because the size of the data copied to a fixed-size buffer is not checked, the size and data are taken directly from the network packet and are controlled by an attacker," Cisco explain in its advisory.

The vulnerability can also result in a denial-of-service condition (watchdog crash) by triggering an indefinite loop on affected devices.

Researchers demonstrated the vulnerability at a conference in Hong Kong after reporting it to Cisco in May 2017.

Video Demonstrations of the Attack:

In their first demonstration, as shown in the video below, the researchers targeted a Cisco Catalyst 2960 switch to reset/change the password and enter privileged EXEC mode:

In their second demo, researchers exploited the flaw to successfully intercept the traffic between other devices connected to the vulnerable switch and the Internet.

Affected Hardware and Software:

The vulnerability was tested on Catalyst 4500 Supervisor Engines, Cisco Catalyst 3850 Series switches, and Cisco Catalyst 2960 Series switches; in addition, all devices that fall into the Smart Install client type are potentially vulnerable, including:

Catalyst 4500 Supervisor Engines

Catalyst 3850 Series

Catalyst 3750 Series

Catalyst 3650 Series

Catalyst 3560 Series

Catalyst 2960 Series

Catalyst 2975 Series

IE 2000

IE 3000

IE 3010

IE 4000

IE 4010

IE 5000

SM-ES2 SKUs

SM-ES3 SKUs

NME-16ES-1G-P

SM-X-ES3 SKUs

Cisco fixed the vulnerability in all of its affected products on 28th March 2018, and Embedi published a blog post detailing the vulnerability on 29th March. Administrators are therefore strongly advised to install the free software updates to address the issue as soon as possible.

Nobody likes doing router and firewall management. It often requires a lot of hard labor just to keep the infrastructure up and running.

If you have ever had to set up IPsec tunnels between different firewall brands, change a firewall rule and hope nothing breaks, upgrade to the latest software, or urgently patch a vulnerability, you know what I am talking about.

All of these issues have been with us basically forever. Recently, the list of complex tasks has grown to include connecting cloud infrastructure to the rest of the network and securing access for mobile users.

There seems to be a change coming to this key part of IT, a silver lining if you will. We decided to take a look at one solution to this problem – the Cato Cloud from Cato Networks.

Founded in 2015, Cato Networks provides a software-defined and cloud-based secure enterprise network that connects all locations, people and data to the Cato Cloud – a single, global, and secure network.

Cato promises to simplify networking and security by delivering an enterprise-grade network with built-in network security, replacing all the appliances and point solutions currently used for that purpose.

We were delighted to find a fresh approach to the age-old way of managing networking and security that is really compelling, especially for short-handed IT teams.

What We Tested

We set out to transform a legacy network architecture using the Cato Cloud ("Cato"), and looked into four areas:

Provisioning: connecting sites and users to the WAN. Typically, this is a time-consuming and error-prone process, especially when creating a multi-vendor firewall full mesh.

Administration: defining and changing access and security policies. Adding new policies and extending them to each location is a key task that requires careful planning to avoid conflicts and ensure all sites maintain compliance with the corporate security policy.

Access: connecting to company resources in both on-premises and cloud data centers. Multiple data centers, and especially cloud ones, contribute to increased access fragmentation. Typically, users have to connect to each resource directly, so eliminating this requirement improves the user experience.

Security: finally, testing security effectiveness against Internet threats such as malicious websites and files. This is expected functionality from a secure web gateway, but with the added benefits of zero maintenance and elastic capacity.

Testing Environment

We wanted to simulate a typical customer environment for our testing, so we built a hybrid environment that includes a headquarters, a remote branch, a mobile user, and a cloud data center.

All sites and users require access to the Internet and the data centers located in the HQ and the cloud.

For the setup, we used both physical and virtual machines. Our main office simulates the headquarters (HQ), and my home office simulates the remote branch (Branch).

The HQ connects to the Internet using a symmetrical 50/50 Mbps line and already has a perimeter firewall. The Branch connects to the Internet over an asymmetric 100 Mbps link behind a small office firewall.

We also built two cloud data centers, in Amazon AWS and Microsoft Azure. In both data centers, we ran Windows servers with a simple web application. The sites and data centers establish WAN connectivity over VPN.

Figure 1: Testing environment before Cato

Provisioning:

We tested Cato’s ability to provision new sites and users by using the Cato Management Application (CMA).

Our first task was to connect the HQ to Cato. Cato supports connectivity over a standard IPsec tunnel, so we leveraged our existing firewall to connect to Cato.

The firewall initiates the connection and is configured to route all traffic to Cato. The firewall enforces no security; it simply moves the traffic to Cato, where WAN connectivity and traffic inspection are handled.

Next, we connected the Branch to Cato. In this case, we used a Cato-provided networking device called the Cato Socket, which simply forwards traffic to the Cato Cloud.

Per Cato, the Cato Socket can handle up to 1 Gbps of traffic of any kind, WAN and Internet, and does not require any manual updates or upgrades since it is managed from the cloud. The Cato Socket provisioning process is plug-and-play, and the only action required of on-site staff is to plug it into power and an Internet connection.

Once connected, the Cato Socket automatically "calls home" and waits for the administrator to name it (in our case we chose "London") and confirm its connection into the network.

The advantage of using a Cato Socket instead of the firewall is that it eliminates the complexity of appliances (installation, updates, and upgrades) and has no capacity limitation, because no security enforcement is done on the device itself.

Figure 4: Cato Socket automatic provisioning

Finally, we connected a mobile user to Cato. To enroll the user with the Cato service, the admin sends an email invitation via the CMA (user information can be loaded through Active Directory integration or, for testing purposes, added manually).

The user then receives an email with a link to a Cato self-service portal that installs the Cato Client and automatically configures the user's credentials and the Cato Cloud settings.

Figure 6: The Cato Client installation and provisioning process

Once done, the user can connect the device to the Cato Cloud and gain access to the network. Resource access is granted according to the access and security policy, and Internet browsing from the device is protected by Cato's built-in network security services.

Figure 7: HQ, Branch, Cloud DC and Mobile Users connect to Cato

Administration:

Network and security administrators have to change network configurations and investigate security incidents on a daily basis. In this part of the review, we examined the granularity, simplicity, and efficiency of day-to-day operations.

Access Policy Configuration:

Once all sites, cloud data centers, and mobile users were connected to Cato, we defined a policy that sets access permissions. In Cato, the access policy is divided into two parts: access to WAN resources and access to the Internet.

1. The WAN firewall controls access to business resources on physical and cloud data centers.

The approach Cato took in its access policy is really interesting. Access rules consolidate the resources to be protected, and a direction arrow defines the allowed flow of traffic.

This way, a single rule can be used instead of multiple rules. In addition, the order of the rules isn't critical (unlike with traditional firewalls), which makes it simpler to add a new rule to the policy, as the conceptual sketch below illustrates.
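To make the idea concrete, here is a hypothetical sketch (our own model for illustration, not Cato's actual API or rule syntax) of order-independent, direction-aware access rules:

```python
# Hypothetical model of Cato-style access rules: each rule names source
# and destination resource sets plus an allowed traffic direction.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    sources: frozenset       # e.g. {"Branch", "Mobile"}
    destinations: frozenset  # e.g. {"HQ-DC", "AWS-DC"}
    bidirectional: bool = False

def allowed(rules, src, dst):
    """Order-independent evaluation: any matching rule permits the flow."""
    for r in rules:
        if src in r.sources and dst in r.destinations:
            return True
        if r.bidirectional and dst in r.sources and src in r.destinations:
            return True
    return False

rules = [Rule(frozenset({"Branch", "Mobile"}), frozenset({"HQ-DC", "AWS-DC"}))]
print(allowed(rules, "Branch", "AWS-DC"))  # True: covered by one rule
print(allowed(rules, "AWS-DC", "Mobile"))  # False: rule is one-directional
```

Because evaluation is any-match rather than first-match, adding a rule cannot silently shadow an existing one, which is the property highlighted above.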

Security Policy Configuration:

Cato offers a full, built-in network security stack in the cloud. The security stack includes URL filtering and anti-malware with TLS support. All WAN and Internet traffic that routes via Cato is inspected.

The Cato URL Filtering comes with a recommended out-of-the-box policy. URLs are organized into categories, and each category can be set to allow, block, monitor, or prompt.

For example, the admin can set all suspected phishing websites to block (see the sketch after the figure below).

Figure 10: URL Filtering policy
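A hypothetical sketch of such a category-to-action policy (our own illustration, not Cato's configuration format) looks like this:

```python
# Each URL category maps to one of the four actions described above.
ACTIONS = {"allow", "block", "monitor", "prompt"}

url_policy = {
    "phishing": "block",       # per the example above
    "gambling": "block",
    "news": "allow",
    "file-sharing": "prompt",
}

def action_for(category, default="monitor"):
    action = url_policy.get(category, default)
    assert action in ACTIONS
    return action

print(action_for("phishing"))  # block
print(action_for("weather"))   # monitor (the assumed default)
```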

The built-in Anti-malware scans both the internet and WAN traffic and can be set to block or monitor for incidents.

Figure 11: The Anti-malware scans both the internet and WAN traffic

What's unique about the Cato solution is that capacity and sizing are not a consideration for the customer. Unlike appliance-based security, there is no need to upgrade appliances when traffic volume, traffic mix, or required security functions change.

With Cato, all traffic inspection is done in the cloud and scales seamlessly to meet customer needs. For example, because TLS inspection has a big impact on appliance performance, admins tend to be very careful about using it. With Cato, we just enabled it, and it worked.

Connectivity:

Before using Cato, our HQ, Branch, and cloud resources connected over VPN, with a dedicated tunnel created for each resource. A mobile user also needed a VPN to reach the data centers, and was required to connect and disconnect from each platform's dedicated VPN gateway every time they wanted to reach a different data center.

With Cato, the sites, data centers, and mobile users are connected to one cloud network, so all resources are accessible with a single VPN connection. The Branch tunnels into the Cato Cloud using the Cato Socket, and mobile devices tunnel in using the Cato Client.

Figure 12: Cato client for iOS connects the user to all resources with a single VPN connection

We wanted to test the network traffic analytics tools Cato provides with the system. Good visibility into network activities, performance, and usage is an important piece of any networking platform.

The CMA provides full visibility into connected networks and hosts. The administrator can view the usage of each network resource and can drill into specific network events. Throughput, packet loss, latency, and usage by application are clearly shown to the administrator.

Figure 13: Network traffic analytics

Security:

Since the Cato Cloud replaces the firewall functionality we had been using and moves it to the cloud, we wanted to check its effectiveness and the visibility it offers into security incidents.

We decided to download a malicious file from the internet over SSL for our testing.

We browsed to malwr.com and searched for a real ransomware sample:

Figure 14: Ransomware sample from malwr.com

We then clicked the "download" button on one of the files to download it to a computer located at Branch, behind the Cato Cloud. Cato indeed detected the attempt and blocked the download.

VirusTotal recognized this file as BitcoinBlackmailer.exe, a known ransomware. The Cato security stack works in the cloud and inspects both Internet and WAN traffic, so even a malware file downloaded from one of our data centers would have been blocked.

Let's now look at Cato's application-level policies and URL filtering effectiveness. In the CMA, we set up a rule to block the use of BitTorrent and Tor from the Branch.

We installed the Tor browser and tried to connect to the Tor network. Cato’s firewall blocked the connection.

Figure 18: Cato blocks Tor

For the URL filtering test, we defined a rule to block Gambling websites.

Figure 19: URL Filtering Policy to block Gambling websites

When we tried to browse to a gambling site (from Chrome we browsed to www.888.com), Cato blocked it and redirected us to an error page.

Figure 20: Cato blocks browsing to gambling website

Conclusion:

Cato Networks promised to simplify networking and security management by moving it to the cloud. We were really impressed by the simplicity and speed of migrating an on-premise network and security infrastructure to the Cato Cloud.

The administration is easy and intuitive, and we found the end user experience to be simple for both setup and ongoing management of connectivity and security. But probably the most compelling feature is the relief Cato provides by eliminating the need to run distributed security appliances.

Cato takes care of the infrastructure for you. That is a huge benefit for busy and understaffed IT professionals.

How often does a vendor take away work, rather than layer extra work on top?

Two months ago, THN reported on a similar announcement by the American Registry for Internet Numbers (ARIN), which said the agency was no longer able to supply large blocks of IPv4 addresses in North America.

Now, within a time frame of just a few months, ARIN, which handles Internet addresses in North America, has announced that its free pool of IPv4 addresses has reached final exhaustion...

...i.e. no unallocated IPv4 (Internet Protocol version 4) addresses remain.

Meanwhile, ARIN will keep accepting requests for IPv4 addresses, which can be approved in two ways: through the waiting list for unmet requests or through transfers:

They say, "The source entity (-ies within the ARIN Region (8.4)) will be ineligible to receive any further IPv4 address allocations or assignments from ARIN for a period of 12 months after a transfer approval, or until the exhaustion of ARIN's IPv4 space, whichever occurs first."

These changes will affect organizations making Transfers between Specified Recipients within the ARIN Region (Section 8.3) and Inter-RIR Transfers to Specified Recipients (Section 8.4).

Also, if ARIN manages to allot IPv4 addresses to the entities on the waiting list and still has addresses left over, it will reopen the free pool of IPv4 addresses and add them there for future use.

We see this as just the start of a new era: IPv6.

IPv6 was introduced nearly two decades ago, in 1998, and it features much longer addresses, such as FE80:0000:0000:0000:0202:B3FF:FE1E:8329. This means IPv6 offers a total pool of roughly 340 trillion trillion trillion addresses, providing capacity for the very long term.
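The arithmetic behind that figure is simple: IPv6 addresses are 128 bits long, so the total pool is 2^128 addresses, roughly 3.4 × 10^38, as this quick check shows.

```python
# 128-bit addresses give 2**128 possibilities.
total = 2 ** 128
print(total)           # 340282366920938463463374607431768211456
print(f"{total:.2e}")  # ~3.40e+38, i.e. 340 trillion trillion trillion
```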

Microsoft has built its own Linux-based operating system called Azure Cloud Switch (ACS) and believe me, under Satya Nadella, Microsoft has become more open than ever.

According to an announcement made through an official blog post on Microsoft's website, Azure Cloud Switch (ACS) is described as a "cross-platform modular operating system for data center networking built on Linux" or, simply, a "commodity switch software stack for data center networks."

Microsoft's purpose in developing the Linux-based Azure Cloud Switch (ACS) operating system is to make it simpler to control the multi-vendor hardware (such as switches) that powers its cloud-based services.

And here's the Kicker:

"Running on Linux, ACS [Azure Cloud Switch] is able to make use of its vibrant ecosystem. ACS allows to use and extend Open Source, Microsoft, and Third Party applications."

You can see the main functional blocks of the ACS stack, from top to bottom, in the image below.

However, Microsoft's Linux distribution is not going to appear on Desktops or Servers anytime soon, because this isn't a typical consumer-grade Operating System.

For now, the Azure Cloud Switch (ACS) Linux OS is just an internal tool that Microsoft uses to "debug, fix as well as test software bugs much faster", scale down software, and develop features for enterprise and cloud computing services.

Microsoft Azure Cloud Switch (ACS) was demonstrated at the SIGCOMM conference in August 2015 at Imperial College London.

This move by Satya Nadella's Microsoft is really significant.

If you’re interested in the technical deep dive into Azure Cloud Switch (ACS), you will find it on the Microsoft Azure blog.

Microsoft... To Win, Make Love, Not War.

This is not the first time Microsoft has partnered with rival technologies.

Earlier this year, Microsoft announced its partnership with Cyanogen, maker of the most popular third-party ROM for Android phones and tablets.

And Cyanogen is reportedly working on deeper integration of Microsoft's digital personal assistant, Cortana, into the latest version of its operating system.

The Internet is running out of IPv4 (Internet Protocol version 4) addresses, a computer's unique address on the Internet. It has just become harder to get IPv4 addresses.

IPv4 Exhaustion Gets Real. Is this the end of IPv4 addresses?

North America has now officially exhausted its supply of IPv4 addresses, joining Asia, Europe, and Latin America.

The American Registry for Internet Numbers (ARIN), which is responsible for handing out Internet addresses in the region, has warned that it is unable to fulfill requests for large blocks of IPv4 addresses because its available address pool is exhausted.

On Wednesday, ARIN activated its "IPv4 Unmet Requests Policy" for the first time and created a waiting list for companies that request blocks of IP addresses for their services.

According to ARIN, ISPs are left with only three choices:

They can accept a smaller block (limited to 512 or 256 addresses)

They can join the waiting list for unmet requests, in the hope that a block of the desired size will become available in the future

They can also purchase addresses from another organization that has more than it needs

Is it time for organizations to begin thinking about migrating to IPv6?

Businesses have been pushed toward the updated IPv6 addressing ever since 2011, when the central pool of IPv4's 4.3 billion addresses was fully allocated; the North American registry began running out of unallocated addresses in 2014.

IPv4 exhaustion will hamper the growth of the Internet, and the next 20 billion Internet-of-Things devices will struggle to become a reality unless IPv6 addresses are widely adopted soon.

IPv6 was introduced nearly two decades ago, in 1998, and it features much longer addresses, such as FE80:0000:0000:0000:0202:B3FF:FE1E:8329. This simply means that IPv6 offers a total pool of roughly 340 trillion trillion trillion addresses, providing capacity for the very long term.

According to recent stats from Google, around 7 percent of the Internet is using IPv6, up from about 2 percent two years ago. Belgium and Switzerland are the leading countries with the highest IPv6 adoption.

The year is about to end, but serious threats like Shellshock are "far from over." Cybercriminals are actively exploiting this critical GNU Bash vulnerability to target network-attached storage devices that are still unpatched and ripe for exploitation.

Security researchers have unearthed a malicious worm that is designed to plant backdoors on network-attached storage (NAS) systems made by Taiwan-based QNAP and gain full access to the contents of those devices.

The worm spreads among QNAP devices, which run an embedded Linux operating system, by exploiting the GNU Bash vulnerability known as Shellshock, according to security researchers at the SANS Institute.

QNAP released a patch in early October to address the flaw in its Turbo NAS products, but because the patches are neither automatic nor easy for many users to apply, a statistically significant portion of systems remains vulnerable and exposed to the Bash bug.

The Shellshock vulnerability was among the most critical and serious Internet vulnerabilities uncovered this year. The bug in Bash, aka the GNU Bourne Again Shell, affects Linux and UNIX distributions to a large extent, and in some cases Windows as well. The flaw gives attackers the ability to remotely execute shell commands of their choice on vulnerable systems using specially crafted environment variables.
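The classic Shellshock check illustrates the mechanism. The sketch below (run it only on systems you own) exports a crafted function definition through the environment and watches whether bash executes the trailing command:

```python
# Shellshock probe: an unpatched bash executes the code appended to the
# environment-variable function definition before running the command.
import subprocess

env = {"x": "() { :;}; echo VULNERABLE"}
result = subprocess.run(
    ["/bin/bash", "-c", "echo test"],
    env=env, capture_output=True, text=True,
)
# Unpatched bash prints "VULNERABLE" then "test"; patched bash prints
# only "test" (often with a function-import warning on stderr).
print(result.stdout)
```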

"The attack targets a QNAP CGI script, /cgi-bin/authLogin.cgi, a well known vector for Shellshock on QNAP devices," Johannes B. Ullrich, head of the Internet Storm Center at the SANS Institute, wrote in the blog post published Sunday. "This script is called during login, and reachable without authentication. The exploit is then used to launch a simple shell script that will download and execute a number of additional pieces of malware."

Once a device is infected by the worm, its malicious components also execute a script that makes the device carry out a click-fraud scam against the online advertising network JuiceADV. A number of other scripts are installed on the infected system as well. The worm is dangerous because "infected devices have been observed scanning for other vulnerable devices," Ullrich said.

According to the researcher, infected systems are equipped with a secure shell (SSH) server on port 26 and a new administrative user, which together give the attackers a persistent backdoor into the device that can be used at any time in the future.

"The DNS change is likely made to avoid logging and potentially blacklisting of any affected domains," Ullrich said. "The SSH server is a second SSH server that is being launched, in addition to the normal SSH server on port 22. This second SSH server, and the additional user added to the system, provides the attacker with persistent access to the system."

More interestingly, the worm also patches the notorious Shellshock vulnerability on infected devices, by downloading and applying the security updates from QNAP and rebooting the device, in order to prevent other attackers from taking over the compromised device.

Tor has been a tough target for law enforcement for years, and the FBI has spent millions of dollars trying to de-anonymize Tor users. But the latest research suggests that more than 81% of Tor clients can be "de-anonymized" by exploiting NetFlow, the traffic-analysis technology that Cisco has built into its routers.

NetFlow is a network protocol designed to collect and monitor network traffic. It aggregates data into network flows, which can correspond to TCP connections or to other IP packets sharing common characteristics, such as UDP packets sharing source and destination IP addresses and port numbers.
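A minimal sketch of that flow abstraction (our own illustration, not the NetFlow wire format): packets sharing the classic 5-tuple are grouped into one flow whose counters a router would export.

```python
# Group packets into flows keyed by the 5-tuple and accumulate counters.
from collections import defaultdict

def flow_key(pkt):
    return (pkt["src_ip"], pkt["dst_ip"],
            pkt["src_port"], pkt["dst_port"], pkt["proto"])

flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
packets = [
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
     "src_port": 51000, "dst_port": 443, "proto": "tcp", "len": 1400},
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
     "src_port": 51000, "dst_port": 443, "proto": "tcp", "len": 600},
]
for pkt in packets:
    f = flows[flow_key(pkt)]
    f["packets"] += 1
    f["bytes"] += pkt["len"]

print(dict(flows))  # one flow: 2 packets, 2000 bytes
```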

The research was conducted for six years by professor Sambuddho Chakravarty, a former researcher at Columbia University’s Network Security Lab and now researching Network Anonymity and Privacy at the Indraprastha Institute of Information Technology in Delhi.

To determine the Tor relays in a circuit, Chakravarty used a technique involving a modified public Tor server, running on Linux and accessed by the victim client, together with a modified Tor node able to form one-hop circuits with arbitrary legitimate nodes.

"The server modulates the data being sent back to the client, while the corrupt Tor node is used to measure delay between itself and Tor nodes," researchers wrote in a paper PDF. "The correlation between the perturbations in the traffic exchanged with a Tor node, and the server stream helped identify the relays involved in a particular circuit."

According to the research paper, carrying out large-scale traffic analysis attacks in the Tor environment does not necessarily require the resources of a nation state: even a single Autonomous System (AS) may observe a large fraction of entry and exit node traffic, and could monitor more than 39% of randomly generated Tor circuits.

"It is not even essential to be a global adversary to launch such traffic analysis attacks," Chakravarty wrote. "A powerful, yet non- global adversary could use traffic analysis methods […] to determine the various relays participating in a Tor circuit and directly monitor the traffic entering the entry node of the victim connection."

The technique depends on injecting a repeating traffic pattern into the TCP connection observed originating from the target exit node, and then correlating that pattern with the server's exit traffic, as derived from the router's flow records, to identify the Tor client.
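A toy illustration of the correlation step (our own sketch under simplifying assumptions, with synthetic numbers standing in for real flow records): compare the injected on/off pattern against per-interval byte counts and keep the flow that tracks it.

```python
# Correlate an injected traffic pattern with per-interval byte counts
# recovered from flow records; the best-matching flow points to the client.
import numpy as np

rng = np.random.default_rng(0)
injected = np.array([0, 1, 0, 1, 1, 0, 1, 0, 0, 1], dtype=float)

candidates = {
    "A": injected * 1500 + rng.normal(0, 50, injected.size),  # the victim
    "B": rng.normal(800, 300, injected.size),                 # unrelated
}
for name, series in candidates.items():
    r = np.corrcoef(injected, series)[0, 1]
    print(f"candidate {name}: r = {r:.2f}")
# Flow A tracks the pattern (r near 1); real attacks must also keep the
# false-positive rate down, which is exactly the critique raised in the
# update below.
```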

Tor is vulnerable to this kind of traffic analysis because it is designed as a low-latency anonymous communication network.

Chakravarty's traffic-analysis approach requires neither hundreds of millions of dollars nor the infrastructural effort that the NSA put into its FoxAcid Tor redirects; it does, however, benefit from running one or more high-bandwidth, high-performance, high-uptime Tor relays.

Just a few days ago, US and European authorities announced the seizure of 27 websites as part of a much larger operation called Operation Onymous, which led to the takedown of more than 410 hidden domains selling illegal goods and services, from drugs to murder-for-hire, with operators masking their identities using the Tor network.

UPDATE
The Tor Project has responded via a blog post, in which Tor Project member 'Arma' confirmed that the team is aware of such network-analysis attacks and already has security measures in place.

"It's great to see more research on traffic correlation attacks, especially on attacks that don't need to see the whole flow on each side. But it's also important to realise that traffic correlation attacks are not a new area." reads the blog post.

"The discussion of false positives is key to this new paper too: Sambuddho's paper mentions a false positive rate of 6%. ... It's easy to see how at scale, this 'base rate fallacy' problem could make the attack effectively useless," he said.

The growing threat of cyber attacks and network hacking has reached the satellite sector, posing an increasing challenge to satellite operators. Because satellite systems are critical components for everything from national infrastructure to the modern military, they have become an attractive target for cyber attacks.

A security firm has uncovered a number of critical vulnerabilities, including hardcoded credentials, undocumented and insecure protocols, and backdoors, in widely used satellite communications (SATCOM) terminals of the kind deployed by the military, government, and industrial sectors.

By exploiting these vulnerabilities, an attacker could intercept, manipulate, or block communications, and in some circumstances could remotely take control of the physical devices used in mission-critical satellite communications (SATCOM).

Once an attacker gains access to the physical devices used to communicate with satellites orbiting in space, he can completely disrupt military operations and flight-safety communications, researchers warned in a 25-page white paper titled "A Wake-up Call for SATCOM Security," published Thursday by security consultancy IOActive.

Thousands of SATCOM devices were found to be vulnerable, and if even one affected device is compromised, the entire SATCOM infrastructure could be at risk, including ships, aircraft, military personnel, emergency services, media services, and industrial facilities (oil rigs, gas pipelines, water treatment plants, wind turbines, substations, etc.).

According to the Guardian, British manufacturers Cobham and Inmarsat, as well as Harris Corporation, Hughes, and Iridium in the US, made satellite systems that were easily hackable, meaning any foreign government or agency could track and target the location of units and soldiers.

According to the researchers, Harris RF-7800B terminals, which offer a high-performance satellite solution for military voice and data connectivity, are also vulnerable to cyber attacks; successful exploitation could allow an attacker to install malicious firmware or execute arbitrary code.

Reported vulnerabilities also affect US military aircraft equipped with the Cobham AVIATOR, which is designed to meet the satellite communications needs of aircraft; a malicious attacker could disrupt flight communications.

IOActive is currently working with the government CERT Coordination Center to alert each manufacturer to the security holes it discovered. "Until patches are available, vendors should provide official workarounds in addition to recommended configurations in order to minimize the risk these vulnerabilities pose," IOActive advised.

So, how do you currently monitor your logs and events across your network: servers, databases, applications, routers, firewalls, and Windows servers? If you have thousands of machines on your network, it obviously becomes far more complicated.

Due to the massive boom in cyber attacks and security breaches, which cause financial losses and damage corporate reputations, demand for SIEM tools is continuously increasing among IT security professionals and system administrators.

Security Information & Event Management (SIEM) has evolved over the years to become one of the most trusted and reliable solutions for log management, security, and compliance.

SIEM systems provide a holistic view of an organization's information technology (IT) security by collecting logs and other security-related documentation for analysis. But SIEM systems are typically expensive to deploy and complex to operate and manage.

This is where SolarWinds Log & Event Manager (LEM) meets your expectations, providing all of the essential features required of a SIEM. LEM is deployed as a virtual appliance and also supports Hyper-V, which makes it easy to get up and running quickly.

It helps organizations of any size improve their overall security posture, detect and remediate security threats, and achieve compliance objectives.

It not only centralizes and collects logs, but also helps correlate important events, provides advanced search features, and even takes automatic action against threats, all in real time. All logs and events can be collected in one central location from multiple sites via virtual LEM appliances, even across geographically remote data centers and branch offices.
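To give a feel for what "correlating important events" means in practice, here is a generic sketch (our own illustration, not LEM's rule engine) of a classic correlation rule: alert when one host produces five failed logins within sixty seconds.

```python
# Sliding-window correlation rule: N failed logins from a host in T seconds.
from collections import defaultdict, deque

WINDOW_SECONDS = 60
THRESHOLD = 5
recent = defaultdict(deque)  # host -> timestamps of recent failed logins

def on_event(host, timestamp, event_type):
    if event_type != "failed_login":
        return
    q = recent[host]
    q.append(timestamp)
    while q and timestamp - q[0] > WINDOW_SECONDS:
        q.popleft()  # drop events that fell out of the window
    if len(q) >= THRESHOLD:
        print(f"ALERT: {len(q)} failed logins from {host} in {WINDOW_SECONDS}s")

for t in range(0, 50, 10):                   # five failures, 10s apart
    on_event("10.1.2.3", t, "failed_login")  # alerts on the fifth event
```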

An administrator could collect malware information from installed antivirus products, and once a potential intrusion is detected, SolarWinds LEM could automatically shut off Internet access for the infected machine until a technician has addressed the issue. It supports hundreds of critical security devices and applications out of the box, including IDS/IPS and antivirus software.

Active Response mechanisms allow organizations to immediately and automatically remediate all events that are out of line with policy or expected behavior, such as unauthorized access, unwanted configuration changes or abnormal traffic patterns that could indicate a compromise.

Protection and monitoring down to the endpoint: if your organization is concerned about large-scale data loss via USB devices, SolarWinds LEM extends security protections beyond network devices to the USB storage systems that users connect to the network. LEM's built-in USB Defender technology monitors the usage of USB devices (even when they are disconnected from the corporate network).

It can identify unauthorized access and the copying or theft of sensitive files, and it enables automatic ejection of USB devices to ensure that your company's secrets are never stolen via simple external devices.

Organizations' IT infrastructure is growing ever more distributed, complex, and difficult to manage. To manage such networks, a log management solution alone is not enough.

In this article, we introduce you to a security monitoring solution that provides real-time threat detection and speeds incident response: the AlienVault Unified Security Management™ (USM) platform, which helps manage the flood of information and analyze it in real time to find evidence of security incidents.

The AlienVault Unified Security Management™ (USM) platform provides all of the essential security controls required for complete security visibility, and is designed to enable any IT or security practitioner to benefit from results on day one. Powered by the latest AlienVault Labs Threat Intelligence and the Open Threat Exchange™ (OTX)—the world’s largest crowd-sourced threat intelligence exchange—AlienVault USM delivers a unified, simple and affordable solution for threat detection and compliance management. Understanding the sensitive nature of IT environments, USM includes active, passive and host-based technologies so that you can match the requirements of your particular environment.

What can you do with USM?

All of AlienVault’s built-in security controls are pre-integrated and optimized to work together out of the box. Within minutes of installing the USM product, AlienVault’s asset discovery features – active network scanning, passive network monitoring, asset inventory, host-based software inventory – will provide visibility into the assets on your network, what software is installed on them, how they’re configured, any potential vulnerabilities and active threats being executed on them. By building in the essential security capabilities, AlienVault USM significantly reduces complexity and reduces deployment time.

Complete Security Visibility in One Day

With all of the essential security controls built-in, AlienVault USM puts complete security visibility within fast and easy reach of security teams who need to do more with less. With USM you can spend more time investigating the alarms and people attacking your systems and less time setting up and integrating all the other security tools needed for true operational security. USM gives you the security visibility you need to understand who is attacking you, what they are targeting and what your true vulnerabilities are. Within the first day of installation, you’ll be able to:

Respond to emerging threats with detailed, customized “how to” guidance for each alert

Demonstrate to auditors and management that your incident response program is robust and reliable

Simplify Regulatory Compliance Requirements

With a single platform, AlienVault USM automatically identifies important audit events in real-time, reports them and alerts on events that warrant immediate action. From file integrity monitoring to IDS to log management - USM makes compliance easier. Not only do we provide the tools you need to be compliant, USM gathers the information you need and generates the reports to give to auditors.

How does USM work?

Our most popular option, the AlienVault USM All-in-One appliance—ideal for single sites and more centralized networks—combines the following capabilities for simpler security management:

AlienVault OTX is an open information sharing and analysis network that provides access to real-time, detailed information about incidents that may impact you, allowing you to learn from, and work with, others who have already experienced them. OTX was developed for IT practitioners responsible for security who don’t want to continually deal with the same security problems as their peers without the benefit of lessons learned. Unlike closed, invitation-only information sharing and analysis networks (e.g., FS-ISAC, Infragard, ISAC), OTX provides real-time, actionable information that is open to anyone who chooses to participate. This allows IT practitioners to achieve preventative response by learning about how others are targeted, and then employ the right defenses to avoid becoming victims themselves.

THN Deals Store this week brings you the Cybersecurity Certification Mega Bundle, which will walk you through the skills and concepts you need to master three elite cybersecurity certification exams: CISA, CISM, and CISSP [...]

Good news: this month we bring our readers an amazing deal where you can pay as little as you want for hacking courses, and if you beat the average price, you will receive the fully upgraded hacking bundle!