Today, Rapid7 released our second Industry Cyber-Exposure Report, examining the overall exposure of the ASX 200 family of companies. The ASX 200 is a market-capitalisation-weighted and float-adjusted stock market index of stocks listed on the Australian Securities Exchange. The index is maintained by Standard & Poor's and is considered the benchmark for Australian equity performance. It is based on the 200 largest ASX-listed stocks, which together account for about 82% (as at March 2017) of Australia’s market capitalisation.

The report reveals that even among very large, mature, and well-resourced organisations, we see evidence of cybersecurity basics being missed or deployed insufficiently. This hints at the breadth and complexity of a comprehensive security program: a never-ending challenge in which there is always more to be done, constrained by limited time and resources, regardless of the size of the organisation. If even these very large, high-revenue organisations cannot meet the challenge comprehensively, imagine how much harder it is for smaller organisations with far fewer resources to devote to security. You might think smaller organisations are less likely to be targeted by attackers, but that is largely not the case. For one thing, everyone is a target for so-called untargeted “drive-by” attacks and internet-wide malware infections, such as NotPetya, now officially deemed the most costly cyberattack of all time.

In addition, many small- to medium-size businesses represent a very tasty target for attackers due to their intellectual property (for example, startups with cool new technology or techniques), relationship with their customers (for example, the HVAC vendor that had access to Target’s corporate network), or involvement in processing sensitive or financial data (for example, the many law firms that handle complex mergers and acquisitions between much larger companies).

The report highlights how hard it is for all organisations to adequately address cybersecurity, and the need for greater awareness of challenges and support from business leaders.

The key findings of the research include the following:

ASX 200 organisations, on average, expose a public attack surface of 29 servers/devices, with many companies exposing 200–300+ systems/devices.

Severely vulnerable services such as Telnet and Windows file-sharing were not prevalent for the most part, which is positive. However, most organisations in every sector had serious issues with patch/version management of business-critical internet-facing systems.

Of the appraised ASX 200 organisations, 134 (67%) have weak or nonexistent anti-phishing defenses (i.e., DMARC) in the public email configuration of their primary email domains.

Every industry sector in the ASX 200 signals how many and which cloud service providers they use in their public domain name system (DNS) metadata, with 144 organisations using between two and five cloud service providers and some using 10 or more. This information can be used to craft highly effective, targeted attacks, among other actions.

All industry sectors had at least one organisation with malware compromises, with the Consumer Discretionary and Information Technology sectors showing daily signs of ongoing compromise. These compromises ranged from company resources being co-opted into denial-of-service (DoS) amplification attacks to signs of EternalBlue-based campaigns similar to WannaCry and NotPetya.

Painting an international picture of cyber-hygiene

This is the second Rapid7 Industry Cyber-Exposure Report. Much of the methodology used in the ASX 200 edition parallels what was employed in the Fortune 500 edition, both of which build upon the foundational techniques used to produce the annual National Exposure Index (NEI) reports. The ASX 200 edition also includes a new section that illuminates the state of web server configuration and vulnerability management across industries.

Data from Project Sonar is used to evaluate exposure based on attack surface using raw numbers of internet-based services combined with a tally of insecure and obsolete protocols. Sonar’s FDNS study provides all the raw data necessary for our researchers to dig through DNS TXT records to see which domains have DMARC configured, a control highlighted in the United Kingdom’s Active Cyber Defence program as one of the most effective ways to combat email spoofing used by phishers. Finally, our Project Heisenberg global network of honeypots lets us round out the exposure analysis by enumerating the extent of malicious and misconfigured connections we see coming from ASX 200 hosts and networks.
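As a rough sketch of what that DMARC check involves: a domain publishes its policy in a TXT record at _dmarc.&lt;domain&gt;, and a policy of p=none only monitors spoofing rather than blocking it. The classifier below is a simplified illustration of that logic, not the report’s actual methodology; fetching the record itself (with a DNS library) is left out.

```python
# Sketch: classify the strength of a DMARC TXT record, as published at
# _dmarc.<domain>. This is an illustrative simplification, not Rapid7's
# actual scoring; fetching the record via DNS is out of scope here.

def dmarc_strength(txt_record):
    """Return 'none', 'weak', or 'strong' for a DMARC TXT record string."""
    if txt_record is None or not txt_record.lower().startswith("v=dmarc1"):
        return "none"          # no DMARC record published at all
    tags = dict(
        part.strip().split("=", 1)
        for part in txt_record.split(";")
        if "=" in part
    )
    policy = tags.get("p", "none").strip().lower()
    # p=none only monitors; quarantine/reject actually act on spoofed mail.
    return "weak" if policy == "none" else "strong"

print(dmarc_strength("v=DMARC1; p=none; rua=mailto:reports@example.com"))  # weak
print(dmarc_strength("v=DMARC1; p=reject"))                                # strong
print(dmarc_strength(None))                                                # none
```

A domain counts as having weak or nonexistent anti-phishing defenses under this sketch when the result is anything other than "strong".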

Expanding exposure measurement capabilities with new methodologies

There are fewer ASX 200 organisations with named IP blocks, both in total and percentage-wise, than in the Fortune 500 list. When possible, the Rapid7 Labs team used similar name-based record linkage models to identify owned internet blocks, then used the results of the extensive FDNS queries (validated and expanded upon with the help of SecurityTrails) to directly identify owned assets and infer IP blocks likely tied to an ASX-member organisation. Once the record linkage, asset identification, and IP block inference were complete, it was merely a matter of picking apart the data collected from Sonar, Heisenberg, and our DNS crawlers to see what was happening in the largest, best-resourced companies on the Australian Securities Exchange.

The results of this new methodology are promising: the report catalogued many diverse assets per organisation and industry and provided a comprehensive view into configuration and patch management practices for key internet-facing assets. They are promising enough that we will use the new methodology as a starting point for future industry-focused exposure reports as we continue to examine the state of exposure across the globe.

Dive in!

We’re excited to present another industry-centric view of exposure and are setting our sights on other major indices of companies around the world to paint a more complete global, industry-centric picture of exposure. If you have a professional or personal interest in how Australian companies handle their internet exposure, take a moment to grab the free report. Reading through it, you will learn:

The average cyber-exposure of the ASX 200, and how this statistic relates to baseline attack surface

How far along Australian companies are when it comes to DMARC-based anti-spoofing

If you are more of a visual learner, you can join the authors of the report by registering for our webcast here. We’ll discuss the findings, take on some audience questions, and share our recommendations on what IT security professionals can do to reduce their attack surface and make life on the internet safer and more stable for everyone.

Wading through the chaos and confusion of cybersecurity attacks can sometimes feel reminiscent of old-school detective crime shows. Often, you need more than one viewpoint to successfully crack a case. Just look at Starsky and Hutch—this duo’s problem-solving skills became unmatched when they successfully combined Starsky’s streetwise, brash manner with Hutch’s quiet intellect. For internet-related cases in particular, we can call on the unique strengths of Rapid7 Labs’ Project Sonar and Project Heisenberg.

Heisenberg is a collection of low- to medium-interaction honeypots distributed both geographically and across cloud-oriented IPv4 space. The honeypots allow us to passively collect data that helps us understand attackers’ methods and patterns. Project Sonar, on the other hand, allows us to conduct internet-wide scans to investigate the global exposure of vulnerabilities.

In this post, we will explore how data from both projects can be combined to offer a clearer picture of attackers’ activities. Specifically, we will look into TCP Port 2004, which was chosen after observing scan and probe attempts using Rapid7’s Early Warning System. We discovered the patterns on 2004 are related to Webuzo, which is a software application used for the deployment of web services.

Hutch and Heisenberg

Heisenberg passively records metadata related to all requests, including source IP address, destination port, and protocol, among other data. With this honeypot data, we can look at the set of ports a single IP address tried to access during an arbitrary timespan.

This investigation looks at data grouped together by day. If we treat the set of ports as a fingerprint, we can then get statistics on groups of different fingerprints. One analysis we can perform is looking at the number of IP addresses per fingerprint for a given day.
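The grouping described above can be sketched in a few lines: collect the set of destination ports each source IP touched in a day, treat the sorted port set as that IP’s fingerprint, and count how many IPs share each fingerprint. The event tuples below are illustrative, not real honeypot data.

```python
# Sketch of the fingerprint analysis: for each source IP, collect the set
# of destination ports it touched in a day, then count how many IPs share
# each identical port set. The sample events are invented.
from collections import Counter, defaultdict

def fingerprint_counts(events):
    """events: iterable of (src_ip, dst_port) pairs for a single day."""
    ports_by_ip = defaultdict(set)
    for src_ip, dst_port in events:
        ports_by_ip[src_ip].add(dst_port)
    # The sorted port tuple is the "fingerprint" for that IP.
    return Counter(tuple(sorted(ports)) for ports in ports_by_ip.values())

events = [
    ("198.51.100.7", 445),
    ("198.51.100.8", 80), ("198.51.100.8", 81),
    ("198.51.100.8", 2004), ("198.51.100.8", 8080), ("198.51.100.8", 8888),
    ("203.0.113.5", 445),
]
for fp, n in fingerprint_counts(events).most_common():
    print(n, list(fp))
```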

The top 25 fingerprints on July 17 are shown below in Figure 1. Unsurprisingly, SMB, Telnet, and HTTP ports are at the top of the list. Something interesting that pops out, though, is the fingerprint [80, 81, 2004, 8080, 8888]. Ports 80, 81, 8080, and 8888 are most likely HTTP-related, but what is Port 2004?

num_src  fingerprint
 33387   [445]
 33208   [80]
 15564   [23]
 11347   [5555]
  4930   [22]
  3881   [8080]
  1915   [23, 2323]
  1469   [1433]
  1466   [18183]
  1143   [0]
  1056   [81]
   763   [3389]
   598   [2323]
   495   [53]
   381   [9000]
   331   [1, 445]
   321   [137, 1433]
   311   [14801]
   292   [443]
   271   [80, 81, 2004, 8080, 8888]
   207   [8088]
   191   [80, 445]
   164   [21]
   164   [80, 8080]
   150   [25]

Figure 1: The number of IPs probing for a specific fingerprint on July 17, 2018.

Looking a little deeper at all fingerprints with 2004 in them, we see that 2004 is queried either with sequential port scanners or along with HTTP ports.

num_src  fingerprint
   271   [80, 81, 2004, 8080, 8888]
     3   [80, 81, 2004, 8888]
     1   [80, 2004]
     1   [1000, 1001, 1002, 1003, 1004, 1005, 1006, ...]
     1   [80, 81, 2004, 8080]
     1   [81, 2004, 8080, 8888]
     1   [81, 2004]
     1   [1000, 1001, 1002, 1003, 1004, 1005, 1006, ...]

Figure 2: Number of IPs for fingerprints with Port 2004 in them.

The honeypots save the data that was sent with initial requests. Looking at this data, we see these probes seem to be searching for install.php.

This is the picture that emerges from the Heisenberg data. Let’s see what we can now discover with Project Sonar, which conveniently lets us run studies against specific ports. We ran an HTTP GET study against Port 2004 to see whether that produced any leads.

Project Sonar produces json.gz files for its GET studies. For example, you can look at the latest Sonar HTTPS GET public datasets, available here. With this data, we can use a tool such as Apache Drill to produce the table below: a count of the different “server” header fields returned in the responses.

count  server
 6292  Webuzo
 4854  (blank)
  904  Apache/2.2.15 (CentOS)
  787  nginx
  668  Apache
  517  A2B Webserver
  415  Microsoft-IIS/7.5
  382  Apache-Coyote/1.1
  270  Microsoft-HTTPAPI/2.0
  269  lighttpd/1.4.39
  259  lighttpd/1.4.31
  240  WebSphere Application Server/7.0
  237  DNVRS-Webs
  229  App-webs/
  218  GoAhead-Webs
  204  Boa/0.94.14rc21
  195  Microsoft-IIS/8.5
  189  Apache/2.2.22 (Debian)
  161  Linux/2.x UPnP/1.0 Avtech/1.0
  153  Boa/0.94.13
  149  lighttpd/1.4.35
  149  mini_httpd/1.19 19dec2003
  135  nginx/1.2.6
  127  lighttpd
  119  uc-httpd 1.0.0

Figure 5: Count of IPs that responded on Port 2004 by the "server" header field.
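The same tally can be computed without Drill. The sketch below assumes a newline-delimited JSON layout with a base64-encoded `data` field holding the raw HTTP response, which matches the published Sonar study format but should be verified against the dataset’s actual schema before use.

```python
# Sketch: tally "Server" response headers from a Sonar-style HTTP GET study.
# Assumption: each line of the .json.gz file is a JSON object whose "data"
# field is the base64-encoded raw response. Verify against the real schema.
import base64
import gzip
import json
from collections import Counter

def server_header_counts(path):
    counts = Counter()
    with gzip.open(path, "rt", encoding="utf-8", errors="replace") as fh:
        for line in fh:
            record = json.loads(line)
            raw = base64.b64decode(record.get("data", "")).decode(
                "utf-8", errors="replace")
            for header in raw.split("\r\n"):
                if header.lower().startswith("server:"):
                    counts[header.split(":", 1)[1].strip()] += 1
                    break
            else:
                counts[""] += 1   # responded, but sent no Server header
    return counts
```

The blank row in the table above corresponds to hosts that answered on the port without sending a Server header at all.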

Here, we see that Webuzo has the highest count. It turns out that Webuzo is software from Softaculous that lets you deploy web apps such as WordPress and Drupal, as well as web app stacks (e.g., LAMP), on cloud or virtual machines. Its documentation indicates that its admin panel runs on Port 2004. Digging deeper, the admin panel exposes a launch API that allows users to install and configure these machines remotely. Here is a snippet of the commands the API uses:

Notice that the URL is similar to the data seen in Heisenberg honeypot data! It looks like the scanners are trying to figure out which IPs on the internet are running Webuzo’s admin panel. Attackers could use this information to then try to access the API with a list of default usernames and passwords in an attempt at remote execution.

It is interesting to note that Webuzo has been named in remote execution CVEs in the past. It has also been one of the ports the Muhstik botnet scanner targets, according to BleepingComputer.

Figure 6: A world tile map based on the IP’s geographical information shows that the majority of the Webuzo apps exist in the United States and Europe.

In conclusion, we’ve shown how Project Heisenberg and Project Sonar data can be used together. By observing attackers’ access patterns on the Heisenberg honeypots, we noticed unusual behavior on an obscure port. From that data, we recognized the behavior was related to other HTTP ports and examined the payloads being sent. We then used Project Sonar to further inform our investigation by looking at global HTTP exposure on that specific port. With this, we were able to work out what the attackers were up to in the first place.

When my colleagues and I are out on penetration tests, we have a fixed amount of time to complete the test, so efficiency is important. Analyzing password data like we’re doing here helps pen testers better understand the likelihood of password patterns and choices, and we use that knowledge to our advantage when we perform penetration testing service engagements at Rapid7.

In my experience, most password complexity policies require at least three of the following:

Lowercase letter

Uppercase letter

Number

Special character

When employees are faced with this requirement, they tend to:

Choose a dictionary word or a name

Make the first character uppercase

Add a number at the end, and/or an exclamation point

If we know that is a common pattern, then we know where to start: by figuring out the dictionary word employees choose. Let’s take a look at an example.

I recently went on a penetration test where I was able to get access to the company’s full database of accounts and password hashes because I successfully guessed one user’s password: Winter2018. Once I have a user’s password, I have the same access to servers and workstations as that user. From there, I test whether that user’s credentials will let me log in to other workstations and servers. If I can log in, I have tools where I can check if an administrator’s password has been stored in the computer’s memory. Then with an administrator’s password and elevated privileges, I can often access things like company financial data, payroll information, customer data, and anything else stored in servers. Takeaway: A weak password doesn’t just affect the user who created it; it can also impact the security of the entire company network.

Why did I try Winter2018? Pen testers have tools that can assist with password data analysis. One of these tools is Pipal. Pipal is able to read through a file with thousands of passwords and spot patterns and count similar words. Running a Pipal analysis on my 100K+-strong password dataset showed that many other people have used the season and year as a password. In fact, when I looked in the dataset of passwords that the Rapid7 pen testing team has cracked over the last few weeks, winter is the third most-common dictionary word used, behind two company names. summer is the fifth most-popular and spring is the tenth most-popular word that someone uses in a password. (I’m not sure what happened to autumn and fall!) Also note that Winter2018 meets the password complexity requirements described above. When we look at it that way, it doesn’t seem terribly secure, does it?

Pipal is also able to analyze the characters used in passwords. It can tell us whether people are using just lowercase characters, mixed case, or special characters. 55% of the passwords we cracked didn’t use a special character, but they still adhered to the password policy mentioned above (i.e., they have an upper, a lower, and a number, much like Winter2018 or Password1). Pipal also tells us that digits are most frequently appended to a password; the dataset shows the top four-digit combinations added to passwords are 2018, 2017, and 1234. The top three-digit combos are 123, 018, and 017, and the top two-digit combos are 23, 18, and 17. Pattern unlocked!
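The suffix analysis Pipal performs can be sketched in a few lines: strip the trailing digit run from each password and count the runs of a given length. The sample passwords below are invented, not drawn from the real dataset.

```python
# Sketch of the suffix analysis described above: count the most common
# trailing digit runs in a cracked-password list. Sample data is invented.
import re
from collections import Counter

def digit_suffixes(passwords, length):
    """Count trailing runs of exactly `length` digits."""
    counts = Counter()
    for pw in passwords:
        match = re.search(r"(\d+)$", pw)
        if match and len(match.group(1)) == length:
            counts[match.group(1)] += 1
    return counts

sample = ["Winter2018", "Summer2018", "Autumn2017", "Password1", "abc123"]
print(digit_suffixes(sample, 4).most_common())  # [('2018', 2), ('2017', 1)]
```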

It’s analysis like this that helps us to work efficiently. When time is short, we can refer back to what the data tells us and rely on pattern analysis to predict user choices. I bet next month I’ll be able to access a system with Summer2018, and in a year, Winter2019 will get me in. If that doesn’t work, I’ll add an exclamation point at the end.

If you’re someone who likes data and numbers, here are a few interesting points from the 104,000 passwords Rapid7 pen testers garnered over the last few weeks:

46% of passwords were exactly eight characters

15% were nine characters (the next most common length)

40% matched the format letters-then-digits

43% had their first character as uppercase and the last character as a number or symbol

The longest password cracked was 67 characters (I definitely did not guess this one!). The top two words in the password sample dataset were a company’s name. And finally, 74 passwords were literally just password.
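Statistics like these can be reproduced over any cracked-password list with a few lines of code. The sketch below uses an invented five-password sample rather than the real 104,000-password dataset.

```python
# Sketch: the kinds of dataset statistics quoted above, computed over an
# invented sample rather than the real cracked-password trove.
import re

def password_stats(passwords):
    total = len(passwords)
    return {
        # share of passwords that are exactly eight characters long
        "exactly_8": sum(len(p) == 8 for p in passwords) / total,
        # share matching the letters-then-digits pattern (e.g. Winter2018)
        "letters_then_digits": sum(
            bool(re.fullmatch(r"[A-Za-z]+\d+", p)) for p in passwords) / total,
        # share with an uppercase first character and a non-letter last one
        "upper_first_nonletter_last": sum(
            p[0].isupper() and not p[-1].isalpha() for p in passwords) / total,
    }

sample = ["Winter2018", "Password1", "hunter22", "P@ssw0rd", "summer18"]
print(password_stats(sample))
```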

What types of questions do you have about password usage? What analysis are you curious about? What kind of information would be helpful to you in hardening your systems and networks? Please post your comments and questions below.

Interested in more password research from Rapid7? Check out The Attacker’s Dictionary, research based on nearly a year’s worth of opportunistic credential scanning data collected from Heisenberg, Rapid7’s public-facing network of low-interaction honeypots.

Have you gotten a chance to read Rapid7’s Quarterly Threat Report for 2018 Q1? If not (or if you’re more of an auditory learner), we’ve put together a six-minute recap video of the major findings. In our Quarterly Threat Reports, our security researchers provide a wide-angle view of the threat landscape by leveraging intelligence from the Rapid7 Insight platform, Managed Services, Incident Response engagements, Project Sonar, Heisenberg Cloud, and the Metasploit community.

In this Whiteboard Wednesday discussion, Kwan Lin, Senior Data Scientist, takes us through the major trends and patterns of the threat landscape in 2018 Q1. Our researchers saw three main areas of concern for the modern IT defender: user identity, DDoS, and SMB & SMI, all of which are covered in the video below.

The key takeaways from the report? The research team suggests:

Staying extra vigilant if you work in the healthcare industry—a growing target for malicious actors

Double-checking for exposed systems, given the ubiquity of threat movement and remote entry attempts

Re-training your team around the dangers of phishing to prevent credential leaks

Regular readers of Rapid7 blogger ramblings will likely remember (albeit not too fondly) our WannaCry coverage this time last year. If you’re new to the blog or have repressed all memory of this harrowing event, the TL;DR is that in April of 2017 the Shadow Brokers released some exploits that were later used by malfeasants to create a fairly devastating ransomworm (which, incidentally, coined the term “ransomworm”). Most of the rest of us were spared significant harm thanks to fast thinking on the part of one security researcher, who stood up a kill-switch domain that the WannaCry malware checked for before bringing chaos to individual systems.

It’s hard to say definitively that the success of WannaCry spurred the creation and launch of the NotPetya attacks, but their proximity suggests the attackers learned from both the successes and the rookie mistakes of the WannaCry perpetrators and leveled up a bit before wreaking havoc of their own.

It’s Getting Better, Right?

In a word: no.

In our coverage last year we noted that the Shadow Brokers dump combined with WannaCry had a net-positive impact on the cadre of open SMB servers on the internet. What does the view look like one year later?

The United States still leads the pack when it comes to exposure:

...but the internet is still holding steady at about 500K exposed Microsoft SMB servers just ready to help cause damage.

But, Shadow Brokers Exploits Are Old News, Right?

Also: no.

SMB is a pretty choice target and we’ve tuned Project Heisenberg to watch for exploits that contain traces of EternalBlue. As you can see, EternalBlue is living up to the “Eternal” part pretty well so far:

Is There Any Hope?

We’d like to close with a message of hope but 2018 has seen both corporations and municipalities hit with Wannacry — yes, WannaCry. Despite all the warnings and costly infections of WannaCry and NotPetya in 2017, other municipalities were hit with equally powerful ransomware attacks.

The best we can leave you with is this call to action on the first WannaCryversary: take some focused time and effort to honestly assess your IT and application development/deployment practices with an eye toward threat modeling a ransomware/ransomworm attack. Identify the areas that need improvement and start working on project plans to fix issues, even if they are systemic, longstanding ones. This includes ensuring you have a solid backup and business continuity/disaster recovery (BCDR) plan that is honestly validated (i.e., no more rigging the tests to pass audits!).

The threat of ransomware will be with us for quite a while as it’s a lucrative and relatively easy path for attackers. The good news is that with some preparation and attention to detail, you need not suffer too greatly and can use your operational excellence to thwart these criminal intentions.

Every week, Rapid7 conducts penetration testing services for organizations that cracks hundreds—and sometimes thousands—of passwords. Our current password trove has more than 500,000 unique passwords that have been collected over the past two years. Where do these come from? Some of them come from Windows domain controllers and databases such as MySQL or Oracle; some of them are caught on the wire using Responder, and some are pulled out of memory with Mimikatz. In just the first two weeks of collecting passwords, the team gathered a new dataset of more than 100,000 passwords.

This blog is the first of our new password tips series, which has two goals: to educate and to entertain. First, we’ll look at what patterns emerge. Do people still typically put a capital letter in the first position? Do they often end with a 1 or an exclamation point? How long do they make their passwords? Do a lot of people use “l33t sp3@k” to make passwords harder to guess? (Spoiler: it doesn’t help!) If penetration testers can make out these patterns and use them to our advantage, so can the malicious actors attacking your systems. I hope to help readers understand common usage patterns so they can use this information to create stronger policies and educate employees and peers, through security awareness training, on what makes a strong password.

Second, we’ll look at password choices for the fun of it. People create passwords expecting that no one else will ever see them; and because we need to remember passwords, we use things that are meaningful to us. About a year ago, there was a television commercial that showed a military general having to say his password out loud and it turned out to be ihatemyjob1. What topics, words, or phrases do people use as their secret keys, and what does this say about their interests (or dislikes)? Does that data show that people mention sports teams or celebrities? For example, how many people use their password to make a statement on Tom Brady or Lebron James? Do people use swears or obscene terms in their passwords? Do people include loved ones or favorite public personas? We’ll look at all of these things and offer analysis on what we find.

When I’m on a penetration test and I compromise a network or a domain due to a weak password, I often get the question, “How do I get my employees to use secure passwords without making it too much of a pain?” That’s a great question. We know that if we force people to use long and complex passwords like 8!NbOF6$MEaURrr8*A(s&5H06VAd8Y, there’s no way they are going to remember them. Instead, they will write it down somewhere where someone (like me) can find it. The honest answer is that eliminating the low-hanging fruit is the best option. That means get rid of the easily guessed passwords. I tell people that if they can eliminate three passwords from their systems, they will make life a lot harder for me—the pen tester—and also for malicious actors looking to compromise a system.

What are the three passwords to eliminate?

Any version of “password”. Yes, this still happens a lot. We do sometimes literally see “password”, but more often we see variations of it: Password, Password1, P@ssw0rd, password2018, and so on are among the first things we pen testers will try when brute-forcing access into a network.

Any variant of your company name. It can be hard to come up with unique passwords, and your company’s name is understandably easy to remember. If I were pen testing Rapid7, I might try RapidSeven, rapid7, Rapid7!, R@p!d7, and so on. If there is a minimum password length requirement, people will often put the year at the end, so I’d bet on Rapid72018. You get the idea: Your company name might be easy to remember, but it’s also easy for pen testers to guess.

The last item on my list is actually the one we see the most. Many password policies specify that passwords must be changed every 90 days, or roughly every three months. What else changes every three months? The season! Yep, one of the most common passwords we see is simply the season and the year. If I were to start a brute force attack against a network, I would start with Winter2018 for the password. Is that your password? If so, change it. And don’t just add ! to the end of it, as that’s what I’ll try next.
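Taken together, the three patterns above can be sketched as a small guess-list generator: “password” variants, company-name variants, and season/year combinations, each with the usual suffixes. The substitution map and suffixes below are illustrative guesses, not an exhaustive wordlist.

```python
# Sketch: a first-pass guess list built from the three patterns above.
# The leetspeak map and suffix set are illustrative, not exhaustive.

LEET = str.maketrans({"a": "@", "e": "3", "i": "!", "o": "0", "s": "$"})

def guess_list(company, years=(2017, 2018)):
    words = {"password", company.lower()}
    seasons = ("Winter", "Spring", "Summer", "Fall", "Autumn")
    bases = set()
    for word in words:
        bases |= {word, word.capitalize(), word.translate(LEET)}
    for year in years:
        bases |= {f"{season}{year}" for season in seasons}
    candidates = set()
    for base in bases:
        for suffix in ("", "!", "1", "2018"):
            candidates.add(base + suffix)
    return sorted(candidates)

guesses = guess_list("rapid7")
print(len(guesses))
```

A list this small is exactly why eliminating these patterns matters: a handful of candidates covers a surprising share of real-world passwords.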

Eliminating these three passwords and their variations will make my job a lot harder. In future posts here, I’ll make sure to follow up with statistics and data on these passwords from systems along with other trends that we see. We’ll also talk about how you can do password audits on your own systems and what to look for.

What types of questions do you have about password usage? What analysis are you curious about? What kind of information would be helpful to you in hardening your systems and networks? Please post your comments and questions below!

Interested in more password research from Rapid7? Check out The Attacker’s Dictionary, research based on nearly a year’s worth of opportunistic credential scanning data collected from Heisenberg, Rapid7’s public-facing network of low-interaction honeypots.

Waves of new companies, products, and applications have appeared, often just wedging a blockchain into an existing application or strategically adding “blockchain” or “coin” to an existing name.

In this paper, we combine intelligence from Project Heisenberg, our global honeypot network, and Project Sonar, our internet scanning project, with data from the Bitnodes Project, which aims to study the membership of the Bitcoin peer-to-peer network, and offer a variety of observations.

Since we began monitoring the Bitcoin network in August 2017, we have observed 11,000 to 15,000 unique nodes participating in the network on any given day, and over 144,000 unique nodes since the observations began. Germany, China, and the United States dominate the network.

Our honeypots are not advertised or published, so any interaction with them is suspect. In this timeframe, Project Heisenberg observed interactions on our honeypots from over 900 unique nodes known to be participating in the Bitcoin network.
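The cross-referencing step boils down to a set intersection: honeypot visitor IPs against a Bitnodes-style list of known Bitcoin peers. All IPs below are documentation-range placeholders, not real observations.

```python
# Sketch: intersect honeypot source IPs with a list of known Bitcoin
# peers. The IPs are RFC 5737 documentation-range placeholders.

def overlapping_nodes(honeypot_sources, bitcoin_nodes):
    """Return honeypot visitors that are also known Bitcoin nodes."""
    return set(honeypot_sources) & set(bitcoin_nodes)

honeypot_sources = {"192.0.2.10", "198.51.100.4", "203.0.113.77"}
bitcoin_nodes = {"198.51.100.4", "203.0.113.77", "203.0.113.200"}
print(sorted(overlapping_nodes(honeypot_sources, bitcoin_nodes)))
# ['198.51.100.4', '203.0.113.77']
```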

Investigations into these interactions showed familiar patterns. Port scans and active reconnaissance with tools like Nmap were rampant, as was repeated attempted exploitation of MS17-010, largely from China.

Who are the perpetrators of these attacks against our honeypots? Are the legitimate owners of these Bitcoin nodes actively attacking other nodes on the public Internet? Are these systems that have been compromised and are now being used to sling exploits and mine bitcoin? We may never know, but we offer several possible explanations along with our research.

Metasploit Notes:

The initial patch was nonspecific about the actual vulnerable path. It added a request sanitizer that applied broadly to the application, making it hard to understand which code path needed to be exploited. This made it quite difficult for the community of exploit developers to find the code path for exploitation. In Drupal’s design, #<name> properties are used by the Forms API, which is how it generates forms, dynamically modifies forms, and so on. These properties can take different inputs, and several accept function callbacks, which was key for exploitation. Exploit developers had to comb through the entire API reference to identify which properties were actually exploitable; several were found. But working out how any of them applied to the codebase was the hard part, especially for a non-expert in Drupal’s vast developer API. It took about two weeks for internet researchers to find the right code path, aided by developers who were already experienced with Drupal. In effect, the patch obfuscated what it actually protected, so exploit developers largely had to rediscover the vulnerable functions from scratch. (Hmm, if you simply tell someone “this software is known to be vulnerable,” would they always find a vulnerability, even if you were bluffing?)

People started writing PoCs once the vulnerable code paths were identified. Drupal 7 and Drupal 8 differed in how the bug was triggered, due to different APIs. An effective exploit wants to target unauthenticated forms, since those can be attacked on any reachable installation; an authenticated vulnerability is much less useful. So, identifying unauthenticated code paths was the next step after identifying the exploitable form properties. These were the login form, the password reset form, and the registration form. Other forms may also be exploitable, but those paths have not been identified yet. Note that if you are simply blocking particular routes via an application-level firewall based on known exploited routes, this may be insufficient to protect all vulnerable paths.

With the Drupalgeddon Metasploit module, the password reset form is used for Drupal 7 (which needs two requests to stage code) and the registration form for Drupal 8 (which needs only one request). The Forms API’s support for dynamically generated forms was the game changer in Drupal’s CMS design, but its complexity also gives it a larger attack surface. Anything dynamically generated from user input is always susceptible to data sanitization issues.

Once the code injection method was available via an unauthenticated path, there were a dozen different ways to get code execution within PHP. We used the passthru function, which is itself not a great design for secure coding (https://secure.php.net/manual/en/function.passthru.php) and seems tailor-made for getting a remote shell on a target. PHP setups can be locked down to restrict functionality, but a CMS like Drupal needs a lot of it. Metasploit’s targets are able to adapt to different degrees of lockdown in a PHP environment. For instance, the ‘eval’ function is sometimes blacklisted, but ‘assert’, which achieves the same end, is not. Code is executed in memory as the process running PHP, often directly inside Apache. This worked fine for Drupal 7, but on Drupal 8 we have to run PHP as a subprocess, which is easier to notice in process listings.

Why is the Metasploit exploit nice and accurate? First, it checks for the right application fingerprints (header and HTML tag). Then it checks the CHANGELOG.txt file for the patch level, which almost nobody removes (folks probably should, but it seems the bad guys spray and pray anyway; see https://www.drupal.org/node/766404). We don't send shell commands; instead we send a printf function call to verify code execution, which is very safe (no callbacks or file drops).
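As a sketch of that fingerprint-then-verify approach, here is a minimal parser for the patch level advertised in a reachable CHANGELOG.txt. The function name and sample text are illustrative, not Metasploit's actual code:

```python
import re

def drupal_version_from_changelog(changelog_text):
    """Extract the newest Drupal version from CHANGELOG.txt content.

    Drupal's CHANGELOG.txt lists releases as lines like
    'Drupal 7.56, 2017-06-21'; the first match is the installed patch level.
    """
    m = re.search(r"Drupal (\d+\.\d+(?:\.\d+)?)", changelog_text)
    return m.group(1) if m else None

# A CHANGELOG.txt fragment as served by an unpatched Drupal 7 site:
sample = (
    "Drupal 7.56, 2017-06-21\n"
    "-----------------------\n"
    "- Fixed security issues.\n\n"
    "Drupal 7.55, 2017-06-07\n"
)
print(drupal_version_from_changelog(sample))  # -> 7.56
```

A scanner would fetch /CHANGELOG.txt and feed the body to this function; no shell command ever touches the target.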

Drupalgeddon 3 in Comparison

The code path is potentially post-authentication only, and only one has been identified so far (where you can delete a node). This is essentially the result of an edge case in the Drupalgeddon 2 patch, where one parameter was not being filtered. The risk from mass exploitation is correspondingly lower.

What If I’m Running Drupal?

First and foremost, adopt a proactive stance and regularly scan for Drupal instances in your perimeter, your cloud environments, and even internally. The public attacks make the headlines, but your internal instances may be soft targets for attackers who manage to get in using a tried-and-true phishing attack.
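A sweep for Drupal can key off the X-Generator response header and the meta generator tag that default installs emit. The sketch below is an illustrative heuristic, assuming those default fingerprints are intact; the function name is ours:

```python
import re

def looks_like_drupal(headers, body):
    """Heuristic Drupal fingerprint: default installs advertise themselves
    via the X-Generator response header and/or the HTML meta generator tag."""
    if "Drupal" in headers.get("X-Generator", ""):
        return True
    return bool(re.search(r'<meta name="Generator" content="Drupal', body, re.I))

hdrs = {"X-Generator": "Drupal 8 (https://www.drupal.org)"}
print(looks_like_drupal(hdrs, ""))                           # True
print(looks_like_drupal({}, "<html><head></head></html>"))   # False
```

Feeding this the headers and body from each host on your perimeter (or internal ranges) gives a quick inventory to patch against the advisory.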

You should also continue to monitor the Drupal security advisories and have an immediate response plan ready to go in the event more critical advisories are released in the coming weeks/months.

Organizations running Drupal instances can watch for the following indicators of compromise:

New PHP processes created by the webserver user, particularly php -r <encoded command>
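As a rough illustration of checking for that indicator, the following sketch flags `php -r` commands owned by common web-server accounts in `ps`-style output. The account list and function name are assumptions for demonstration:

```python
def suspicious_php_processes(ps_lines):
    """Flag processes matching the 'php -r <command>' indicator when owned
    by a typical web-server account.

    ps_lines are 'user command' strings, e.g. from `ps -eo user,args`.
    """
    web_users = {"www-data", "apache", "nginx", "httpd"}
    hits = []
    for line in ps_lines:
        user, _, cmd = line.partition(" ")
        if user in web_users and cmd.startswith("php -r"):
            hits.append(line)
    return hits

sample = [
    "root /usr/sbin/sshd",
    "www-data /usr/sbin/apache2 -k start",
    "www-data php -r eval(base64_decode($argv[1]));",
]
print(suspicious_php_processes(sample))  # flags only the php -r line
```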

As noted, this is not the first serious Drupal issue of 2018 and there’s a pretty good chance it won’t be the last. Keep an eye out on the Drupal security team releases and be ready to patch if/when Drupalgeddon 4 comes around.

Step 1: Are You Impacted by the Drupalgeddon Vulnerability? Scan Your Environment to Find Out

We started performing weekly monitoring of open/amplification-vulnerable memcached servers after the recent memcrashed amplification distributed denial-of-service (DDoS) attack and today we have some truly awesome news to report, along with some evidence that the recent spate of DDoS attacks may not be the last.

The Good News: Substantial Decline in Exposed memcached Instances

Project Sonar has recorded a drop from nearly 140,000 exposed unique TCP memcached endpoints through 2017 down to just under 58,000 in our March 1, 2018 scan, and an additional drop to just under 54,000 on March 6, 2018. While we don’t have Sonar-specific stats on UDP prior to March of 2018, even with just two days of monitoring we have seen a significant drop there, too: from almost 18,000 unique UDP memcached endpoints on March 1, 2018, to under 12,000 on March 5, 2018.

Apart from a >50% remediation rate after the 2017 Barix exposure, we’ve never really seen a drop this large in a publicly exposed service. Even the WannaCry disaster did not prompt a serious decrease in SMB exposure, though it did dip slightly before leveling out in 2017 (SMB exposure grew a bit later in the year).

Several things could have contributed to this massive drop in exposure, including:

CVE-2018-1000115 was assigned to this issue. Assigning a CVE improves visibility of the vulnerability and enables organizations and technology/security vendors to develop tools and processes to identify vulnerable systems and begin remediation procedures faster.

Some Linux distributions, including Red Hat, have now disabled the UDP listener and configured memcached to listen only on the loopback interface. Several other distributions will likely follow, or already have; Ubuntu, for example, has had the default package configured to expose memcached only on the loopback interface since 2007 and as such is unaffected out of the box (it just recently disabled UDP as well, to further lock things down).

A >1 Tb/s DDoS is nothing to sneeze at, particularly when it hits a service as central and critical as GitHub.

What does memcached exposure look like now?

Even though the exposure has been significantly reduced, it’s not all good news.

While the memcached service should never be exposed directly to the internet without source IP restrictions, for either TCP or UDP, one would hope that folks stay current with versions even if they’re making exposure mistakes (yes, we are ever the eternal optimists). Sadly, this is absolutely not the case; and, unlike a fine wine, vintage 2012 and 2013 memcached services do not age well:

That mix of versions is riddled with remote code execution, buffer overflow, service bypass, and denial-of-service vulnerabilities. Even with the reduced number of exposed nodes, 10,000 recruits joining an attack with a worst-case amplification factor of 10,000 to 50,000 can still do a great deal of damage. The outlook is even less pleasant when you consider the aforementioned RCEs, which add the potential for these nodes to be turned into command-and-control nodes.
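For a back-of-the-envelope feel for those numbers, reflected bandwidth is just nodes × request size × amplification × request rate. The inputs below are hypothetical round numbers, not measurements:

```python
def reflected_bandwidth_gbps(nodes, request_bytes, amplification, rps):
    """Worst-case reflected traffic: each node turns request_bytes * rps of
    spoofed traffic into `amplification` times as many bytes at the victim."""
    bytes_per_sec = nodes * request_bytes * amplification * rps
    return bytes_per_sec * 8 / 1e9  # bits per second, in Gb/s

# 10,000 reflectors, 15-byte 'stats' probes, 10,000x amplification,
# a modest 100 requests/second per reflector:
print(round(reflected_bandwidth_gbps(10_000, 15, 10_000, 100), 1))  # -> 1200.0
```

Even these conservative assumptions land above the 1 Tb/s mark seen in the GitHub attack, which is why the residual exposed population still matters.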

If you are interested in any of the raw data for the above mentioned Sonar findings, see the below studies:

The Bad News: More DDoS in the Works?

There’s also a bit more disconcerting news to share.

Rapid7’s Heisenberg Cloud passive sensor network has detected a series of probes on other ports used for amplification attacks (we’re calling these “ampli-ports”). Below is a breakdown of the unusual activity for six of them: the DNS probes are likely related to another recent DDoS, and the most recent spike was on the Simple Service Discovery Protocol (SSDP).

Attackers may be taking inventory of available amplification DDoS targets for their bot armies in preparation for future attacks this year.

Rapid7 is monitoring these probes and will continue to report on any unusual (good or bad) activity related to this very curious sequential probing of DDoS services. As noted above, you can find our Project Sonar data at scans.io, monitor live memcached stats at Shadowserver, and continue to reach out to research@rapid7.com with questions about these studies, our Heisenberg Cloud findings, or other studies.

Rapid7 Labs keeps a keen eye on research and findings from other savvy security and technology organizations and noticed Cloudflare’s report on new distributed denial of service (DDoS) amplification attacks using memcached. If you haven’t read Cloudflare’s (excellent) analysis yet, the TLDR is, memcached over UDP makes for an ideal amplifier — the spoofed source requests from an attacker are tiny, and the resulting replies to the spoofed source can be enormous.
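The probe itself really is tiny. As a sketch of the kind of request involved (an illustration, not a capture): memcached's UDP framing is an 8-byte header followed by the ASCII command, so a "stats" probe fits in 15 bytes while the reply can run to hundreds of kilobytes:

```python
import struct

def memcached_udp_stats_probe(request_id=0):
    """Build the tiny UDP 'stats' probe used in memcached amplification:
    an 8-byte frame header (request id, sequence number 0, datagram count 1,
    reserved 0) followed by the ASCII 'stats' command."""
    header = struct.pack("!HHHH", request_id, 0, 1, 0)
    return header + b"stats\r\n"

probe = memcached_udp_stats_probe()
print(len(probe))  # -> 15 bytes on the wire, spoofed to the victim's address
```

Because the attacker spoofs the victim's address as the UDP source, the server obligingly sends its (potentially enormous) stats dump to the victim instead.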

Rapid7’s Project Sonar sees well over 100,000 exposed memcached servers at any given time.

That’s quite a spread of potential DDoS soldiers just sitting and waiting to be brought into the amplification army.

Since we perform both active and passive internet information and intelligence gathering, we also took a look at the data from our Heisenberg Cloud honeypot agent network thinking we’d see somewhat similar activity to that of Cloudflare. What we found was far more interesting (and inspired this post).

On February 20th (about four days before Cloudflare’s reported attack), we saw a spike in memcached probes:

When we correlated the source IPv4s with our Sonar data we noticed that none of the IPv4s talking to Heisenberg were in the memcached data set.

Our source lists are also very different:

Country          Number of nodes
---------------  ---------------
United States                257
China                        108
Russia                         8
Romania                        7
Seychelles                     6
United Kingdom                 6
France                         4
Germany                        3
Iran                           3
Netherlands                    3
Other                         10

ASO                            AS #       Unique IPs
-----------------------------  ---------  ----------
Hurricane Electric, Inc.       AS6939            189
No.31,Jin-rong Street          AS4134             51
CNCGROUP China169 Backbone     AS4837             39
LeaseWeb Netherlands B.V.      AS60781            36
Quasi Networks LTD.            AS29073             8
Flokinet Ltd                   AS200651            7
China Unicom Shanghai network  AS17621             5
Digital Ocean, Inc.            AS14061             5
B2 Net Solutions Inc.          AS55286             4
Steadfast                      AS32748             4
Other                          Other              54

Rapid7’s early warning system caught the protocol probes for active/exposed memcached servers just a few days before the amplification attacks started. Since we just track payloads and connections to port 11211 and do not try to emulate a full memcached server, the bot herders mostly left us alone, though we are still tracking more elevated probe counts than we were seeing before the DDoS campaign began.

We now have a better picture of what infrastructure is going into this novel DDoS campaign, and we must echo Cloudflare’s advice: double-check your use of memcached and secure your configurations.

We have a Metasploit module in the works that will scan for and identify memcached instances that are vulnerable to amplification attacks, so keep an eye out!

This research was produced with immense help from Vasudha Shivamoggi, Kwan Lin, Bob Rudis, and Jon Hart. Their work to gather, analyze, and interpret the data presented here is deeply appreciated.

In the spirit of HaXmas, we at Rapid7 Labs are digging up some of the research nuggets that got buried during the rest of the year. Thus we give you a haunting story to warm you by the fire this winter. Join us as we tell the tale of the disappearing port 81 botnet this HaXmas—otherwise known as “When GoAhead Was Left Behind.”

For a tumultuous 11 days in April (2017-04-16 through 2017-04-26), Rapid7 Labs observed a botnet with roughly 18,000 distinct IP addresses marauding across the public internet in search of prized quarry lying in wait on TCP port 81. The emergence of this new cadre of corrupted computing devices was first documented by the fine sleuths over at the 360 Network Security Research Lab. They noted that the primary weapon of choice for these attackers was HTTP GET requests targeting login.cgi, and the botnet appeared to have its eyes fixed on finding vulnerable GoAhead servers, which are embedded web servers present in over 1,000 different types of Internet of Things (IoT) devices.

These fiends disappeared almost as quickly as they arrived: botnet traffic slowed to a crawl after 2017-04-26. While the danger seems to have passed, we at Rapid7 Labs urge vigilance lest ye be DDoS'd away in a resurgence of activity.

Come with us as we don our data science deerstalkers and explore the case of the disappearing port 81 botnet.

I don't know what of our tools you, our readers, use on a regular basis, but one of the things, I like to look at first when I login to isc.sans.edu is the Top 10 Ports by Unique Sources chart. This suggests coordinated (think botnets) scanning. So, I was really shocked to see port 81 had jumped up to 2nd position just behind all the Mirai-ish port 23 scanning. Take a look at the port 81 chart. If any of our readers have any insight into what is going on here since 16 Apr, plase [sic] let us know.

On 2017-04-24, the 360 Network Security Research Lab posted a new threat report about the botnet on their blog which provides a very detailed analysis of the traffic to port 81 that they had seen up to that point.

On 2017-04-26, the heightened scanning activity dropped as the botnet appeared to go quiet.

Digging In

On 2017-04-26, having seen several write-ups about a botnet spreading through port 81, we decided to use the data we have collected with Project Heisenberg and Project Sonar to analyze the botnet’s activity. Project Heisenberg detected the attempted connections on port 81 and queries for login.cgi. Project Sonar subsequently performed an HTTP study of TCP port 81 on the public IPv4 Internet. We examined data from before the dramatic increase in activity, as well as data from after the increase in activity.

Storming A Very Particular Port

There's no particular standard around port 81 usage, though it does include a mixed range of web servers, connected cameras, and various IoT devices.
Figure 1 below shows that when aggregating unique IP connection attempts per hour on port 81 to our Heisenberg honeypots, we can see a distinct surge in activity between 2017-04-16 and 2017-04-26.

Dissecting The GoAhead Botnet

The 360 Network Security Research Lab report stated that members of the botnet attempted to spread by initiating HTTP connections to IP addresses on port 81, and then requesting login.cgi. For the month of April 2017, and almost exclusively between 2017-04-16 and 2017-04-26, Project Heisenberg recorded 18,000 distinct IP addresses making requests for login.cgi.
Breaking those connections down by hour, we see some periodicity in the number of unique IPs:

Figure 2 shows that the population of IP addresses that scans for login.cgi is almost entirely inactive during the rest of the month of April.

Figure 3 below shows their scanning behavior for the entire month of April, which highlights the increase in activity between 2017-04-16 and 2017-04-26:

On either side of those dates, we saw at most six out of those 18,000 IPs performing any sort of scanning activity in any given hour in the month.

We noted that the botnet is fairly indiscriminate in its scanning. Project Heisenberg's honeypots are scattered across the internet in a variety of places, and when we separated our analysis by those places, Figure 4 shows that we observed a fairly consistent pattern.

HTTP Requests with a Relative Path

The botnet requests took advantage of poor design in GoAhead servers that additionally complicated efforts to study the botnet within legal constraints.
Astute readers may have wondered why we continually mention HTTP GET requests for the relative path login.cgi when a properly formed request would be for the absolute path /login.cgi, as required by the HTTP RFC. It is indeed curious why anything would request login.cgi, since almost every reasonably well-behaved HTTP server is going to return an error, likely an HTTP 400 indicating a bad request.

As it happens, GoAhead isn't exactly well-behaved. In addition to responding positively to requests for resources with relative paths, GoAhead's configuration is (was?) also such that requests for resources with relative paths bypassed any required authentication AND disclosed the source code for the requested resource, conveniently including the username and password (https://blogs.securiteam.com/index.php/archives/3043).

Many HTTP configurations that require authentication do so for large swaths of resources. In the case of many of these IoT devices, they are likely requiring authentication for anything from / "down". Furthermore, the majority of these configurations should prompt for authentication before serving any resource, valid or otherwise.

A vulnerability like this presents an interesting challenge from a security research perspective.

If you were to attempt to assess the potential exposure of assets on the public Internet to this particular rash of vulnerabilities, one reasonably accurate way is to perform the very request that the botnet is performing—that is, making an HTTP GET request to login.cgi. If the source code is returned, the endpoint is vulnerable; however, performing a test like this is prohibited according to various regulations. On the other hand, an assessment that made HTTP GET requests to /login.cgi would suffer from the fact that a proper HTTP configuration should require authentication and return an HTTP 401 error regardless of whether or not the resource exists. Contrast this with what would happen when valid credentials are provided for a resource that doesn't exist: it would (hopefully) return a 404.
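To make that triage logic concrete, here is an illustrative classifier for /login.cgi responses. The source-disclosure marker is an assumption for demonstration (GoAhead pages use ASP-style script tags), not the exact signature of any real assessment:

```python
def classify_login_cgi_response(status, body):
    """Rough triage of a GET /login.cgi response per the logic above:
    401 means auth is enforced (good), 404 means the resource is absent,
    and a 200 whose body leaks script markers suggests the GoAhead
    auth-bypass/source-disclosure bug."""
    if status == 401:
        return "auth required"
    if status == 404:
        return "not present"
    if status == 200 and "<%" in body:
        return "possible source disclosure"
    return "inconclusive"

print(classify_login_cgi_response(401, ""))  # -> auth required
```

A survey built this way never needs to replay the botnet's relative-path request: the absolute-path probe plus status-code logic keeps the assessment on the right side of the constraints described above.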

Scanning Locations

By backtracing the IP addresses of the machines pinging our honeypots, we can explore where the requests distributing the payload are coming from, along with information about the networks they live on. The set of IP addresses that we examined was not reduced by any exclusion lists, such as for known research devices.

The first component—geographic data—shows that the majority of requests are sourced from China, followed by the United States, with a smaller number coming from France and the United Kingdom.

Digging into those IP addresses' metadata, we can look at which networks (broadly construed) the requests came from. Doing so revealed that the IP addresses largely belonged to generic consumer internet providers. In combination with information on where GoAhead cameras are sold and prominent, this strongly suggested that the payloads were being distributed by already-infected devices.

Looking For Clues Across The Globe

Using Project Sonar data, we were able to identify a non-trivial population of devices that looked like potential targets for this botnet. Several days after the botnet’s scanning activity stopped, there were tens of thousands of devices from which we did not see scanning activity, but that fit the profile for targets (for which the botnet was scanning).

Of the 18,000 IPs that Project Heisenberg saw scanning for login.cgi, 3,900 of them were picked up in a subsequent Project Sonar scan; of those, 98.95% provided a 401 Unauthorized HTTP status code when they received a request for /login.cgi, suggesting that most devices were properly configured.

So What Did Happen to the Port 81 Botnet?

As mentioned in the beginning, we logged a drop-off in scanning activity on port 81 after 2017-04-26. However, the decline in activity did not persist; our monitoring activities have revealed periodic increases in activity on port 81.

While few of the spurts of activity have matched the initial onslaught in terms of sustained duration or scale, there have been exceptions. On 2017-05-12, there was a dramatic but momentary spike in activity that was more than twice as large as the highest point in the 2017-04-16 through 2017-04-26 range; further investigation revealed the spike originated from a single source.

A similar but smaller event occurred on 2017-10-27, due mainly to an uptick in connections from a single source. By contrast, a spike observed on 2017-10-23 was associated with many different sources, each making many connections per hour. This broad source base suggests possible ramping-up of the botnet—however, like before, the activity wasn’t sustained. Further observation indicated that there were still occasional but non-cyclical spikes in activity on port 81 that exceeded the original uptick in scale.

Still, the devices we had seen participating in botnet scanning activity went silent after 2017-04-26, even though we saw a population of devices that could still have been infected. So why did the botnet stop scanning?

A few theories in question form:

Did the attackers believe they had identified a sufficient number of devices on fairly-fixed addresses and leave those devices dormant only to be used at a more precise or opportune time?

Could the botnet controllers — possibly — have put their initial plans in motion too soon? Were they unprepared to take advantage of their newly acquired drone arsenal?

Perhaps the botnet operator(s) read the reports about researchers tracking the scanning activity and decided to shut down their activities?

Could a rival botnet have taken over?

Did ISPs actively work to mitigate the impacts of the botnet?

Whatever the reason, the curious case of the disappearing port 81 botnet fell into the bucket of mysteries hidden away until HaXmas time, when we at Rapid7 Labs gladly dusted off this gift from our telemetry trove to accompany your figgy pudding. Botnet research is a gift that keeps on giving, so keep an eye on this space (particularly our Heisenberg and Sonar blog tags) to see what treasures we mine in 2018!

Don’t worry—I’m not going to regale you with tales of the vendetta between a sibling of mine and me that stemmed from a $20 cooking griddle gift at a Yankee Swap long ago, or of the holiday riots that have occurred over the years when mixing gift giving with eggnog. Nor will I gloat about the sweet blanket I scored/stole from a White Elephant with friends this year.

Instead, I’m going to swap you some research we’ve been doing that relates to a simple, cheap and potentially entertaining protocol: MQTT.

Primer

MQTT is the Message Queuing Telemetry Transport, or MQ Telemetry Transport, and is a simple publish-subscribe messaging protocol built on TCP/IP.

MQTT messages are sent to and read from topics, which is a simple method of organizing messages that mimics a directory structure. As an example, if MQTT were used as part of a system responsible for collecting temperature readings from remote sensors, a possible topic structure might be temperatures/<location>/<sensor_num>, where <location> could be any number of sub-topics broken down by physical location and <sensor_num> would represent the topic of the temperature from a given sensor.

An MQTT client requires an MQTT broker to be of any use. The MQTT broker is responsible for handling subscription requests from MQTT clients who wish to receive topical messages as well as handling topical publication requests from MQTT clients and the subsequent publication of messages to subscribed MQTT clients. MQTT supports the concept of topic wildcards with + and #, but only from a subscription perspective—the protocol does not support publishing to wildcard topics.
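Subscription-side wildcard matching is easy to sketch. The function below implements the "+" and "#" semantics described above; it is a simplified illustration (real brokers also handle edge cases such as topics beginning with "$"):

```python
def topic_matches(filter_str, topic):
    """Subscription-side wildcard matching per the MQTT spec:
    '+' matches exactly one topic level; '#' matches any remaining
    levels and must be the last level of the filter."""
    f_levels = filter_str.split("/")
    t_levels = topic.split("/")
    for i, f in enumerate(f_levels):
        if f == "#":
            return True
        if i >= len(t_levels):
            return False
        if f != "+" and f != t_levels[i]:
            return False
    return len(f_levels) == len(t_levels)

print(topic_matches("temperatures/+/1", "temperatures/boston/1"))  # True
print(topic_matches("temperatures/#", "temperatures/boston/2"))    # True
print(topic_matches("temperatures/+", "temperatures/boston/2"))    # False
```

Note that the wildcards apply only when subscribing; a publisher must always name a concrete topic.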

From a security perspective, the only capabilities the MQTT standard really provides are optional username and password fields that can be specified in the initial connection; however, the standard says that the handling of these fields is implementation-specific. All remaining aspects of MQTT security are, again, implementation-specific.

In our testing, the username and password fields are plaintext, null-terminated fields that are compared against hashed versions stored on the broker, if enabled. Some implementations also support disabling anonymous authentication. Additionally, in practice, most implementations provide for user, topic, action, and client identifier-based restrictions such that the actions of individual clients can be controlled. TLS can be used to encrypt the whole shebang as well as provide additional authentication mechanisms when client certificates are used.

Again, these are all implementation specific and not provided for by the protocol itself. For more depth on this, the folks over at HiveMQ put together a multi-part series on MQTT Security Fundamentals that is worth a read.

Several of MQTT’s characteristics make it ideal for use in IoT applications, which is not surprising considering that what we now know as MQTT was once referred to as the “SCADA protocol” and the “MQ Integrator SCADA Device Protocol” (MQIsdp). It is unclear which real IoT products out there utilize MQTT, but both Amazon and Azure have MQTT support, Hackster.io has a dozen or so MQTT-based IoT projects, and Home Assistant has support for MQTT-enabled lights, vacuums, switches, locks, cameras, and more. MQTT places few restrictions on the messages it handles, and as such, developers integrating MQTT into their solutions are using it for everything from sensor reading and event notification to device configuration and updates, and more, for better or worse.

Even though the protocol has been around in various forms since 1999, there are only a handful of known vulnerabilities in MQTT-enabled products, all from this year, including:

Exposure

In order to understand the current and future exposure of MQTT on the public internet, a few months back we put together a Sonar study for MQTT. We’ve been running it monthly against the plaintext (1883/TCP) and TLS-wrapped (8883/TCP) MQTT endpoints.

Sonar’s MQTT study, like the protocol, is very simple. It first locates all public IPv4 nodes with the respective port open with zmap.

Analysis with just this data is deceptive: for almost any given TCP port, you can guarantee there are millions of public IPv4 addresses listening on that port offering all manner of oddball services. Simply looking at 1883/TCP vs. 8883/TCP exposure, we see 3.6M and 3.3M supposedly open endpoints, respectively. For the sake of completeness, we did a country-based analysis of 1883/TCP and 8883/TCP combined and saw what we usually see: the United States is high on the list, along with an assortment of other technologically adept countries. The table below shows the number of unique IP addresses exposing one or more of the MQTT ports, by country:

For every IPv4 address claiming to have the given MQTT port open, the study then establishes a connection and sends an MQTT Connect message. While the MQTT protocol does support authentication at this step, our study utilizes anonymous authentication, which is to say that no credentials are specified. Responses are inspected and those other than valid connection acknowledgments are discarded. In this way the study is able to identify endpoints that are truly speaking MQTT and provide additional insight into how the MQTT broker might be configured.
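The anonymous connect step can be sketched as building a minimal MQTT 3.1.1 CONNECT packet. This is an illustrative reconstruction, not the study's actual probe, and the client ID is made up:

```python
import struct

def mqtt_connect_packet(client_id=b"sonar", keepalive=60):
    """Minimal anonymous MQTT 3.1.1 CONNECT packet: no username/password,
    clean-session flag set. A broker answers with a 4-byte CONNACK whose
    last byte is the return code (0 = connection accepted)."""
    var_header = struct.pack("!H", 4) + b"MQTT"   # protocol name
    var_header += bytes([4])                       # protocol level 4 (3.1.1)
    var_header += bytes([0x02])                    # connect flags: clean session only
    var_header += struct.pack("!H", keepalive)     # keepalive in seconds
    payload = struct.pack("!H", len(client_id)) + client_id
    remaining = var_header + payload
    # Fixed header: packet type 1 (CONNECT) plus remaining length (a single
    # byte suffices for packets under 128 bytes).
    return bytes([0x10, len(remaining)]) + remaining

pkt = mqtt_connect_packet()
print(pkt.hex())
```

Sending this over a TCP (or TLS) socket to port 1883 (or 8883) and reading the CONNACK return code is all the study needs to separate true MQTT brokers, and their authentication posture, from everything else squatting on those ports.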

Repeating the same simple counting and grouping exercise from before, but on this new dataset, we observed nearly 36,000 MQTT-speaking endpoints on 1883/TCP and just over 10,000 on 8883/TCP, a far cry from the 3 million+ found on each port previously. This indicates that the majority of the things listening on MQTT ports on the public IPv4 internet are not actually speaking MQTT at all. By country, we see a similar smattering, but with significantly smaller numbers:

In the connection acknowledgment responses, one field is used to indicate if the connection was successful or not, and if it wasn’t, what might have been the cause. Ignoring, for a moment, the two different endpoints, if we analyze the result codes of the connection acknowledgements across endpoints, we can make the following observations:

count result
------- ----------
25893 Connection accepted
11081 The Client is not authorized to connect
3778 The data in the user name or password is malformed
443 The Server does not support the level of the MQTT protocol requested by the Client
280 The Network Connection has been made but the MQTT service is unavailable
209 The Client identifier is correct UTF-8 but not allowed by the Server
22 Unknown or proprietary response

Repeating this process against just the plain-text 1883/TCP port:

count result
------- ----------
24665 Connection accepted
7408 The Client is not authorized to connect
3120 The data in the user name or password is malformed
202 The Network Connection has been made but the MQTT service is unavailable
158 The Server does not support the level of the MQTT protocol requested by the Client
41 The Client identifier is correct UTF-8 but not allowed by the Server
16 Unknown or proprietary response

And again against just the TLS 8883/TCP port:

count result
------- ----------
4932 The Client is not authorized to connect
2775 Connection accepted
1979 The data in the user name or password is malformed
299 The Server does not support the level of the MQTT protocol requested by the Client
169 The Client identifier is correct UTF-8 but not allowed by the Server
103 The Network Connection has been made but the MQTT service is unavailable
18 Unknown or proprietary response

Some takeaways from this:

Roughly 70% of the plaintext MQTT endpoints require no authentication and have no immediately apparent security restrictions in place.

More than half of the TLS-wrapped MQTT endpoints require authentication or are using some manner of security restrictions such as IP or client ID based limitations. This is a good thing.

Exploitation

Examining our globally deployed fleet of honeypots for any activity on 1883/TCP or 8883/TCP, it was no surprise to see the evidence of Sonar and other internet scanning projects in there. However, beyond these activities, there is only minimal background noise on these ports on the order of a few hundred unique connections per month, the vast majority of which don’t ultimately attempt to speak MQTT.

Exploration

Even though the protocol has been around for a considerable amount of time, as mentioned earlier there has only been minimal coverage of this protocol from a security perspective.

Given MQTT’s use in IoT and the rise of IoT, I figured it prudent to jumpstart additional work in this area with Metasploit support for MQTT. As such, I’ve added an MQTT connection brute-force module in metasploit-framework PR #9330 that will identify MQTT endpoints and attempt to brute-force authentication if it is discovered to be in use. Also in that PR, I’ve provided documentation on how to configure a mosquitto MQTT broker for purposes of testing the module and exploring MQTT. Additionally, I’ve provided a mixin to ease development of future MQTT modules in metasploit-framework PR #9329, which is now available for use as of becc05b.

Some ideas for future MQTT modules or research include:

Client ID brute forcing

Fingerprinting and information leakage through $SYS and wildcard topics

Evaluating the security capabilities of MQTT broker implementations

Are you interested in MQTT? Have ideas for future research? Comments? We welcome feedback and collaboration, so please feel free to reach out to us via the comments below or via email.

It’s been a busy 2017 at Rapid7 Labs. Internet calamity struck swift and often, keeping us all on our toes and giving us a chance to fully test out the capabilities of our internet-scale research platform. Let’s take a look at how two key components of Rapid7 Labs’ research platform—Project Sonar and Heisenberg Cloud—came together to enumerate and reduce exposure the past two quarters. (If reading isn't your thing, we'll cover this in person at today's UNITED talk.)

Project Sonar Refresher

Decades ago, the ARPANET was the extent of what would eventually become the internet, and it literally had a printed directory that held all the info about all the hosts and users:

Fast-forward to Q1 2017 where Project Sonar helped identify a few hundred million hosts exposing one or more of 30 common TCP & UDP ports:

Project Sonar is an internet reconnaissance platform. We scan the entire public IPv4 address range (except for those in our opt-out list) looking for targets, then do protocol-level decomposition scans to try to get an overall idea of “exposure” of many different protocols, including:

In 2016, we began a re-evaluation and re-engineering of Project Sonar that greatly increased the speed and capabilities of our core research gathering engine. In fact, we now perform nearly 200 “studies” per month, collecting detailed information about the current state of IPv4 hosts on the internet. (Our efforts are not random, and there’s more to a study than a quick port hit; new scans often require quite a bit of post-processing engineering, so we don’t just call them “scans.”)

Sonar has been featured in over 20 academic papers (see for yourself!) and is a core part of the foundation for many popular talks at security conferences (including 3 at BH/DC in 2017).

We share all our scan data through a research partnership with the University of Michigan — https://scans.io. Keep reading to see how you can use this data on your own to help improve the security posture in your organization.

Cloudy With A Chance Of Honeypots

Project Sonar enables us to actively probe the internet for data, but this provides only half the data needed to understand what’s going on. Heisenberg Cloud is a sensor network of honeypots developed by Rapid7 that are hosted in every region of every major cloud provider (the following figure is an example of Heisenberg global coverage from three of the providers).

Heisenberg agents can run multiple types and flavors of honeypots. From simple tripwires that enable us to enumerate activity:

to more stealthy ones that are designed to blend in by mimicking real protocols and servers:

All of these honeypot agents are managed through traditional, open source cloud management tools.

We collect all agent-level log data using Rapid7's InsightOps tool and collect all honeypot data—including raw PCAPs—centrally on Amazon S3. We have Heisenberg nodes appearing to be everything from internet cameras to MongoDB servers and everything in between.

But, we’re not just looking for malicious activity. Heisenberg also enables us to see cloud and internet service “misconfigurations”—i.e., legit, benign traffic that is being sent to a node that is no longer under the control of the sending organization but likely was at some point. We see database queries, API calls, authenticated sessions, and more, which provides insight into how well organizations are (or aren’t) configuring and maintaining their internet presence.

Putting It All Together

We convert all our data into a column-storage format called “parquet” that enables us to use a wide array of large-scale data analysis platforms to mine the traffic. With it, we can cross-reference Sonar and Heisenberg data—along with data from feeds of malicious activity or even, say, current lists of digital coin mining bots—to get a pretty decent picture of what’s going on.

This past year (to date), we’ve publicly used our platform to do everything from monitoring Mirai (et al) botnet activity to identifying and quantifying (many) vulnerable services to tracking general protocol activity and exposure before and after the Shadow Brokers releases. Privately, we’ve used the platform to develop custom feeds for our Insight platform that helps users identify, quantify and reduce exposure. Let’s look into a few especially fun and helpful cases we’ve studied:

What we didn’t tell you is that Rapid7’s Rebekah Brown worked with the National Association of Broadcasters to get the word out to vulnerable stations. Within 24 hours the scope of the issue was reduced by 50% and now only a handful (~15%) remain open and unprotected. This is an incredible “win” for the internet as exposure reduction like this is rarely seen.

We used our Sonar HTTP study to look for candidate systems and then performed a targeted scan to see if each device was — in fact — vulnerable. Thanks to the aforementioned re-engineering efforts, these subsequent scans take between 30 minutes to three hours (depending on the number of targets and complexity of the protocol decomposition). That means, when we are made aware of a potential internet-wide issue, we can get active, current telemetry to help quantify the exposure and begin working with CERTs and other organizations to help reduce risk.

Internet of Exposure

It’d be too easy to talk about the Mirai botnet or stunt-hacking images from open cameras. Let’s revisit the exposure of a core component of our nation’s commercial backbone: petroleum. Specifically, the gas we all use to get around.

We’ve talked about it before and it’s hard to believe (or perhaps not, in this day and age) such a clunky device...

...can be so exposed. We’ve shown you we can count these IoThings but we’ve taken the ATG monitoring a step further to show how careless configurations could possibly lead to exposure of important commercial information.

Want to know the median number of gas tanks at any given petrol station? We’ve got an app for that:

Most stations have 3-4 tanks, but some have many more. This can be sliced-and-diced by street, town, county and even country since the vast majority of devices provide this information with the tank counts.
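For instance, once per-station tank counts are parsed out of the ATG responses, the median (and the rest of the distribution) falls out of a few lines of Python. The numbers below are made up for illustration, not real station data:

```python
from collections import Counter
from statistics import median

# Hypothetical per-station tank counts, as would be parsed from ATG
# inventory responses (real data comes from the exposed devices).
tank_counts = [3, 4, 3, 4, 4, 3, 6, 3, 4, 8, 3, 4]

typical = median(tank_counts)                 # the "typical" station
distribution = Counter(tank_counts)           # full tank-count spread
```

Grouping the same records by street, town, county, or country is just a matter of keying the counts on the location fields the devices report.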

How about how much inventory currently exists across the stations?

We won’t go into the economic or malicious uses of this particular data, but you can likely ponder that on your own. Despite previous attempts by researchers to identify this exposure—with the hopeful intent of raising enough awareness to get it resolved—we continue to poke at this and engage when we can to help reduce this type of exposure. Think back on this whenever your organization decides to deploy an IoT sensor network and doesn’t properly risk-assess the exposure depending on the deployment model and what information is being presented through the interface.

But, these aren’t the only exposed things. We did an analysis of our Port 80 HTTP GET scans to try to identify IoT-ish devices sitting on that port and it’s a mess:

You can explore all the items we found here but one worth calling out is:

These are 251 buildings—yes, buildings—with their entire building management interface directly exposed to the internet, many without authentication and not even trying to be “sneaky” by using a different port than port 80. It’s vital that you scan your own perimeter for this type of exposure (not just building management systems, of course), since it’s far easier for something to slip onto the internet than one would expect.

Wiping Away The Tears

Rapid7 was quick to bring hype-free information and help for the WannaCry “digital hurricane” this past year. We’ve migrated our WannaCry efforts over to focused reconnaissance of related internet activity post-Shadow Brokers releases.

Since WannaCry, we’ve seen a major uptick in researchers and malicious users looking for SMB hosts (we’ve seen more than that but you can read our 2017 Q2 Threat Report for more details). As we work to understand what attackers are doing, we are developing different types of honeypots to enable us to analyze—and, perhaps even predict—their intentions.

We’ve done even more than this, but hopefully you get an idea of the depth and breadth of analyses that our research platform enables.

Take Our Data...Please!

We provide some great views of our data via our blog and in many reports:

But, YOU can make use of our data to help your organization today. Sure, Sonar data is available via Metasploit (Pro) via the Sonar C, but you can do something as simple as:
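For instance, something like the following shell sketch. The study filename and address prefix here are placeholders; in real use, download a current study file from scans.io and substitute your own organization's address space:

```shell
# Placeholder filename and prefix: substitute a real Sonar study file
# downloaded from scans.io and your own organization's address space.
STUDY="2017-09-01-sonar-study.csv.gz"
MY_PREFIX='^198\.51\.100\.'

# Simulate a downloaded study file so this sketch is self-contained;
# in real use, skip this step and use the scans.io download instead.
printf '198.51.100.7,445\n203.0.113.9,3389\n' | gzip > "$STUDY"

# The actual check: do any of our addresses appear in the study?
gzip -dc "$STUDY" | grep -c "$MY_PREFIX"
```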

to see if you’re in one of the study results. Some studies you really don’t want to show up in include SMB, RDP, Docker, MySQL, MS SQL, and MongoDB. If you’re there, it’s time to triage your perimeter and work on improving deployment practices.

You can also use other Rapid7 open source tools (like dap) and tools we contribute to (such as the ZMap ecosystem) to enrich the data and get a better picture of exposure, focusing specifically on your organization and threats to you.

Fin

We’ve got more in store for the rest of the year, so keep an eye (or RSS feed slurper) on the Rapid7 blog as we provide more information on exposure.

After reading the findings, and noting that some of the characteristics seemed similar to trends we’ve seen in the past, we were eager to gauge the exposure of these vulnerabilities on the public internet. Vulnerabilities such as default passwords or command injection, which are usually trivial to exploit, in combination with a sizable target pool of well-connected, generally unmonitored internet-connected devices, such as DSL/cable routers, can have a significant impact on the general health of the internet, particularly in the age of DDoS and malware for hire, among other things. For example, starting around this time last year and continuing until today, the internet has been dealing with the Mirai malware that exploits default passwords as part of its effort to replicate itself. The SharknAT&To vulnerabilities seemed so similar, we had to get a better idea of what we might be facing.

What we found surprised us: the issues are actually not as universal as initially surmised. Indeed, we found that clusters of each of the vulnerabilities sit almost entirely in their own distinct regional pockets (namely, Texas, California, and Connecticut). We also observed that these issues may not be limited to just one ISP deploying a particular model of internet router, but perhaps a variety of different devices, a situation complicated by a history of companies, products, and services being bought, sold, OEM’d, and customized.

For more information about these SharknAT&To vulnerabilities and Rapid7’s efforts to understand the exposure these vulnerabilities represent, please read on.

Five Vulnerabilities Disclosed

NoMotion identified five vulnerabilities that, at the time, seemed limited to Arris modems being deployed as part of AT&T U-Verse installations:

Successful exploitation of even just one of these vulnerabilities would result in a near complete compromise of the device in question and would pose a grave risk to the computers, mobile devices, and IoT gadgets on the other side. If exploited in combination, the victim’s device would be practically doomed to persistent, near-undetectable compromise.

Scanning to Gauge Risk

NoMotion did an excellent job of using existing Censys and Shodan sources to gauge exposure; however, they also pointed out that some of the at-risk services on these devices are not regularly audited by scanning projects like this. In an effort to assist, Rapid7 Labs fired off several Sonar studies shortly after learning of the findings in order to get current information for all affected services, within reason.

As such, we queued fresh analysis of:

SSH on port 22/TCP to cover vulnerability 1

HTTPS on 49955/TCP and 61001/TCP, covering vulnerabilities 2-4

A custom protocol study on port 49152/TCP for vulnerability 5

Findings

Vulnerability 1: SSH Exposure

Not having a known vulnerable Arris device at our disposal, we had to take a bit of an educated guess as to how to identify affected devices. In NoMotion’s blog post, they cite Censys as showing 14,894 vulnerable endpoints. A search through Sonar’s SSH data from early August showed just over 7,000 hosts exposing SSH on 22/TCP with “ARRIS” in the SSH banner, suggesting that these may be made by Arris, one of the vendors involved in this issue. There are several caveats that could explain the difference in number, including the fact that Arris makes several other devices, which are unaffected by these issues, and that there is no guarantee that affected and/or vulnerable devices will necessarily mention Arris in their SSH protocol. A follow-up study today showed similar results with just over 8,000. It is assumed that the difference in Rapid7’s numbers as compared to NoMotion’s is caused by the fact that Sonar maintains a blacklist of IP addresses that we’ve been asked to not study, as well as normal changes to the landscape of the public Internet.
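The banner-matching step itself is simple in principle. A minimal sketch (with fabricated records, not actual Sonar rows, which carry many more fields):

```python
# Minimal sketch of filtering SSH study records by banner substring.
# These records are fabricated for illustration; real Sonar SSH rows
# include the IP, port, and raw banner among other fields.
records = [
    {"ip": "192.0.2.10", "banner": "SSH-2.0-dropbear_2012.55 ARRIS"},
    {"ip": "192.0.2.11", "banner": "SSH-2.0-OpenSSH_7.4"},
    {"ip": "192.0.2.12", "banner": "SSH-2.0-dropbear_2012.55 ARRIS"},
]

suspect = [r["ip"] for r in records if "ARRIS" in r["banner"]]
```

As the caveats above note, substring matching like this over- and under-counts: not every Arris device is affected, and not every affected device advertises the vendor in its banner.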

A preliminary check of our Project Heisenberg honeypots showed no noticeable change in the patterns we observe related to the volume and variety of SSH brute force and default account attempts prior to this research. However, the day after NoMotion's research was published, our honeypots started to see exploitation attempts using the default credentials published by NoMotion.

September 13, 2017 UPDATE on SSH exposure findings

The researchers from NoMotion reached out to Rapid7 Labs after the initial publication of this blog and shared how they estimated the number of exposed, vulnerable SSH endpoints. They did so by searching for SSH endpoints owned by AT&T U-Verse that were running a particular version of dropbear. Repeating some of our original research with this new information, we found nearly 22,000 seemingly vulnerable endpoints in that same study from early August, concentrated in Texas and California.

Armed with this new knowledge, we re-analyzed SSH studies from late August and early September and discovered that seemingly none of the endpoints that appeared vulnerable in early August were still advertising the vulnerable banner, indicating that something changed with regards to SSH on AT&T U-Verse modems that caused this version to disappear entirely. Sure enough, a higher level search for just AT&T U-Verse endpoints shows that there were nearly 40,000 AT&T U-Verse SSH endpoints in early August and just over 10,000 in late August and early September, with the previously seen California and Texas concentrations drying up. What changed here is unknown.

Vulnerabilities 2 and 3: Port 49955/TCP Service Exposure

US law understandably prohibits research that performs any exploitation or intrusive activity, which rules out specifically testing the validity of the default credentials, or attempting to exploit the command injection vulnerability. Combined with no affected hardware being readily available to us at the time of this writing, we had to get creative to estimate the number of exposed and potentially affected Arris devices.

As mentioned in NoMotion’s blog, they observed several situations in which the HTTP(S) server listening on 49955/TCP would return various messages implying a lack of authorization, depending on how the request was made. Our first slice through the Sonar data from August 31, 2017 showed ~3.4 million 49955/TCP endpoints open, though only approximately 284,000 of those appear to be HTTPS. Further summarization showed that more than 99% of these responses were identical HTTP 403 Forbidden messages, giving us high confidence that these were all the same types of devices and were all likely affected. In some HTTP research situations, we are able to examine the HTTP headers in the response for clues that might indicate a particular vendor, product, or version that would help narrow our search; however, the HTTP server in question here returns no headers at all.
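That summarization step (clustering endpoints by identical response bodies) can be sketched like so, with toy responses standing in for the captured HTTPS payloads:

```python
from collections import Counter
from hashlib import sha256

# Toy stand-ins for captured HTTP response bodies; in practice these
# are the raw payloads recorded by the Sonar HTTPS study.
responses = (
    [b"HTTP/1.1 403 Forbidden\r\n\r\n"] * 997
    + [b"HTTP/1.1 200 OK\r\n\r\nhello"] * 3
)

# Hash each body and count how many endpoints returned each one.
clusters = Counter(sha256(body).hexdigest() for body in responses)
top_hash, top_count = clusters.most_common(1)[0]
share = top_count / len(responses)
```

A single cluster covering nearly all responses is what suggests a single device family rather than a mix of unrelated servers.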

Furthermore, by examining the organization and locality information associated with the IPs in question, we start to see a pattern that this is isolated almost entirely to AT&T-related infrastructure in the Southern United States, with Texas cities dominating the top results:

The ~53k likely affected devices that we failed to identify a city and state for all report the same latitude and longitude, smack in the middle of Cheney Reservoir in Kansas. This is an anomaly introduced by MaxMind, our source of Geo-IP information, and is the default location used when an IP cannot be located any more precisely than being in the United States.

As further proof that we were on the right track, NoMotion has two locations, both in Texas. It’s likely that these Arris devices were first encountered in day-to-day work and life by NoMotion employees, and not scrounged off of eBay for research purposes. We’ve certainly happened upon interesting security research this way at Rapid7—it’s our nature as security researchers to poke at the devices around us.

Because this HTTP service is wrapped in SSL, Sonar also records information about the SSL session. A quick look at the same devices identified above shows another clear pattern -- that most have the same, default, self-signed SSL certificate:

This presents another vulnerability. Because the vast majority of these devices have the same certificate, they will also have the same private key. This means that anyone with access to the private key from one of these vulnerable devices is poised to be able to decrypt or manipulate traffic for other affected devices, should a sufficiently skilled attacker position themselves in an advantageous manner, network-wise. Because some of the SharknAT&To vulnerabilities disclosed by NoMotion allow filesystem access, it is assumed that access to the private key, even if password protected, is fairly simple. To add insult to injury, because these same vulnerable services are the very services an ISP would use to manage, update, or patch affected systems against vulnerabilities like these, should an attacker compromise them in advance, all bets are off for patching these devices short of physical replacement.

It is also very curious that outside of the top SSL certificate subject and fingerprint, there is still a clear pattern in the certificates: there is a common name with a long integer after it, which looks plausibly like a serial number. Perhaps at some point in their history, these devices used a different scheme for SSL certificate generation, and inadvertently included the serial number. Some simple testing with a supposedly unaffected device showed that this number didn’t necessarily match the serial number.

Examining Project Heisenberg’s logs for any traffic appearing on 49955/TCP shows only a minimal amount of background noise, and no obvious widespread exploitation yet in 2017.

Vulnerability 4: Port 61001/TCP Exposure

Much like with vulnerabilities 2 and 3 on port 49955/TCP, Sonar is a bit hamstrung when it comes to its ability to test for the presence of this vulnerability on the public internet.

Following the same steps as we did with 49955/TCP, we observed ~5.8 million IPs on the public IPv4 internet with port 61001/TCP open. A second pass of filtering showed that nearly half of these were HTTPS. Using the same payload analysis technique as before didn’t pay dividends this time, because while the responses are all very similar -- large clusters of HTTP 404, 403, and other default-looking HTTP responses -- there is no clear outlier. The top response, from ~874,000 endpoints, looks similar to what we observed on 49955/TCP -- lots of Texas with some California slipping in:

The vast majority of the remainder appear to be 2Wire DSL routers that are also used by AT&T U-Verse. The twist here is that Arris acquired 2Wire several years ago. Whether or not these 2Wire devices are affected by any of these issues is currently unknown:

As shown above, there is still a significant presence in the Southern United States, but there is also a sizeable Western presence now, which really highlights the supply chain problem that NoMotion mentioned in their research. While the 49955/TCP vulnerability appears to be isolated to just one region of the United States, the 61001/TCP issue has a broader reach, further implying that this extends beyond just the Arris models named by NoMotion, but not necessarily beyond AT&T.

Repeating the same investigation into the SSL certificates on port 61001/TCP shows that there are likely some patterns here, including the same exact Arris certificate showing up again, this time with over 45,000 endpoints, and Motorola making an appearance with 3/4 of a million:

Examining Project Heisenberg’s logs for any traffic appearing on 61001/TCP shows there is only a minimal amount of background noise and no obvious widespread exploitation yet in 2017.

Vulnerability 5: Port 49152/TCP Exposure

The service listening on 49152/TCP appears to be used as a kind of source-routing, application-layer-to-MAC-layer TCP proxy. By specifying a magic string, the correct opcode, a valid MAC, and a port, the “wproxy” service will forward any remaining data received during a connection to port 49152/TCP from (generally) the WAN to the host on the LAN with the specified MAC, on the specified port. Why exactly this needs to be exposed to the outside world with no restrictions whatsoever is unknown, but perhaps the organizations in question deploy this for debugging and maintenance purposes and failed to properly secure it.

In order to gauge exposure of this issue, we developed a Sonar study that sends to the wproxy service a syntactically valid payload that elicits an error response. More specifically, the study sends a request with a valid magic string, an invalid opcode, an invalid MAC and an invalid port, which in turn generally causes the remote endpoint to return an error that allows us to positively identify the wproxy service. Because this vulnerability is inherent in the service itself due to a lack of any authentication or authorization, any endpoint exposing this service is at risk.
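To make the shape of such a probe concrete, here is a hypothetical sketch in Python. The magic string, opcode value, and exact field layout below are placeholders, not the actual wproxy wire format:

```python
import struct

# Placeholder constants -- NOT the real wproxy protocol values.
MAGIC = b"\x12\x34"      # hypothetical magic string
BAD_OPCODE = 0xFF        # deliberately invalid opcode
BAD_MAC = bytes(6)       # invalid (all-zero) MAC address
BAD_PORT = 0             # invalid destination port

def build_probe():
    # magic + 1-byte opcode + 6-byte MAC + 2-byte port, network order.
    # A syntactically valid request with invalid field values should
    # elicit an identifiable error from a listening wproxy service
    # without forwarding any traffic.
    return MAGIC + struct.pack("!B6sH", BAD_OPCODE, BAD_MAC, BAD_PORT)

probe = build_probe()
```

The key design point is the same as in the real study: the probe is valid enough to get an error back, but invalid enough that nothing is actually proxied.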

As with the other at-risk services described so far, our first step was to determine how many public IPv4 endpoints seemed to have the affected port, 49152/TCP, open. A quick zmap scan showed nearly 8 million hosts with this port open. With our limited knowledge of the protocol, we looked for any wproxy-like responses, which quickly whittled down the list to approximately 42,000 IPv4 hosts exposing the wproxy service.

We had hoped that a quick application of geo-IP and we’d be done, but it wasn’t quite that simple. Using the same techniques as with other services, we grouped by common locations until something caught our eye, and immediately we knew something was up. Up until this point, all of this had landed squarely in AT&T land, clustering around Texas and California, but several different lenses into the 49152/TCP data pointed to one region—Connecticut:

Sure, there are a few AT&T mentions and even 5 oddly belonging to Arris in Georgia, but otherwise this particular service seemed off. Why all Texas/California AT&T previously, but now Frontier in Connecticut? Guesses of bad geo-IP data wouldn’t be too far off, but in reality, Frontier acquired all of AT&T’s broadband business in Connecticut 3 years ago.

This means that AT&T broadband customers who have been at risk for at least three years of having their internal networks swiss-cheesed by determined attackers with a penchant for packets are now actually Frontier customers using AT&T hardware. This almost certainly further complicates the supply chain problem, and it definitely puts customers at risk because of a service that should never have seen the public internet in the first place.

Examining Project Heisenberg’s logs for any traffic appearing on 49152/TCP shows largely just suspected background noise in 2017, albeit a little higher than on ports 49955/TCP and 61001/TCP. There are a few slight spikes back in February 2017, perhaps indicating some early scouting, but these are just as likely to have been background noise or probes for entirely different issues. Some high-level investigation shows a deluge of blindly lobbed HTTP exploits at this port.

Conclusions

The issues disclosed by NoMotion are certainly attention-grabbing, since the initial analysis implies that AT&T U-Verse, a national internet service provider with millions of customers, is powered by dangerously vulnerable home routers. However, our analysis of what’s actually matching the described SharknAT&To indicators seems to point to a more localized phenomenon: Texas and other Southern areas are primarily indicated, with flare-ups in California, Chicago, and Connecticut, and significantly lower populations in other regions of the U.S.

These results seem to imply which vendor is in the best position to fix these bugs, but the supply chain problems detailed above add a level of complication that will inevitably leave some customers at risk unnecessarily.

Armed with these Sonar results, we can say with confidence that these vulnerabilities are almost wholly contained in the AT&T U-Verse and associated networks, and not part of the wider Arris ecosystem of hardware. This, in turn, implies that the software was produced or implemented by the ISP, and not natively shipped by the hardware manufacturer. This knowledge will hopefully speed up remediation.

Interested in further collaboration on this? Have additional information? Questions? Comments? Leave them here or reach out to research@rapid7.com!


WannaCry Overview

Last week the WannaCry ransomware worm, also known as Wanna Decryptor, Wanna Decryptor 2.0, WNCRY, and WannaCrypt started spreading around the world, holding computers for ransom at hospitals, government offices, and businesses. To recap: WannaCry exploits a vulnerability in the Windows Server Message Block (SMB) file sharing protocol. It spreads to unpatched devices directly connected to the internet and, once inside an organization, those machines and devices behind the firewall as well. For full details, check out the blog post: Wanna Decryptor (WannaCry) Ransomware Explained.

Since last Friday morning (May 12), there have been several other interesting posts about WannaCry from around the security community. Microsoft provided specific guidance to customers on protecting themselves from WannaCry. MalwareTech wrote about how registering a specific domain name triggered a kill switch in the malware, stopping it from spreading. Recorded Future provided a very detailed analysis of the malware's code.

However, the majority of reporting about WannaCry in the general news has been that while MalwareTech's domain registration has helped slow the spread of WannaCry, a new version that avoids that kill switch will be released soon (or is already here) and that this massive cyberattack will continue unabated as people return to work this week.

In order to understand these claims and monitor what has been happening with WannaCry, we have used data collected by Project Sonar and Project Heisenberg to measure the population of SMB hosts directly connected to the internet, and to learn about how devices are scanning for SMB hosts.

We find that there are over 1 million internet-connected devices that expose SMB on port 445. Of those, over 800,000 run Windows, and — given that these are nodes running on the internet exposing SMB — it is likely that a large percentage of these are vulnerable versions of Windows with SMBv1 still enabled (other researchers estimate up to 30% of these systems are confirmed vulnerable, but that number could be higher).

We can look at the geographic distribution of these hosts using the following treemap (ISO3C labels provided where legible):

The United States, Asia, and Europe have large pockets of Windows systems directly exposed to the internet while others have managed to be less exposed (even when compared to their overall IPv4 blocks allocation).

We can also look at the various versions of Windows on these hosts:

The vast majority of these are server-based Windows operating systems, but there is also an unhealthy assortment of Windows desktop operating systems in the mix, some quite old. The operating system version levels also run the gamut of the Windows release history timeline:

Using Sonar, we can get a sense for what is out there on the internet offering SMB services. Some of these devices are researchers running honeypots (like us), and some of these devices are other research tools, but a vast majority represent actual devices configured to run SMB on the public internet. We can see them with our light-touch Sonar scanning, and other researchers with more invasive scanning techniques have been able to positively identify that infection rates are hovering around 2%.

Part 2: In which Rapid7 uses Heisenberg to listen to the internet

While Project Sonar scans the internet to learn about what is out there, Project Heisenberg is almost the inverse: it listens to the internet to learn about scanning activity. Since SMB typically runs on port 445, and the WannaCry malware scans port 445 for potential targets, if we look at incoming connection attempts on port 445 to Heisenberg nodes as shown in Figure 4, we can see that scanning activity spiked briefly on 2017-05-10 and 2017-05-11, then increased quite a bit on 2017-05-12, and has stayed at elevated levels since.

Not all traffic to Heisenberg on port 445 is an attempt to exploit the SMB vulnerability that WannaCry targets (MS17-010). There is always scanning traffic on port 445 (just look at the activity from 2017-05-01 through 2017-05-09), but a majority of the traffic captured between 2017-05-12 and 2017-05-14 was attempting to exploit MS17-010 and likely came from devices infected with the WannaCry malware. To determine this we matched the raw packets captured by Heisenberg on port 445 against sample packets known to exploit MS17-010.
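Classifying captured traffic this way amounts to signature matching against the raw payloads. A simplified sketch follows; `\xffSMB` is the generic SMB message marker, while `EXPLOIT_SIG` is an illustrative placeholder rather than the actual MS17-010 sample bytes used in the real analysis:

```python
# Simplified sketch of classifying captured port-445 payloads by
# byte-signature matching. SMB_MARKER is the generic SMB message
# marker; EXPLOIT_SIG is a placeholder, not the real MS17-010
# exploit bytes matched in the actual analysis.
SMB_MARKER = b"\xffSMB"
EXPLOIT_SIG = b"\x00\x00\x00\x00\xff"  # placeholder exploit bytes

def looks_like_ms17_010(payload: bytes) -> bool:
    return SMB_MARKER in payload and EXPLOIT_SIG in payload

captured = [
    b"\x00\x00\x00\x54\xffSMB\x72" + b"\x00\x00\x00\x00\xff",  # exploit-like
    b"\x00\x00\x00\x54\xffSMB\x72",                            # plain SMB probe
    b"GET / HTTP/1.1\r\n\r\n",                                 # unrelated noise
]
hits = [p for p in captured if looks_like_ms17_010(p)]
```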

Figure 5 shows the number of unique IP addresses scanning for port 445, grouped by hour, between 2017-05-10 and 2017-05-16. The black line shows that at the same time the number of incoming connections increased (2017-05-12 through 2017-05-14), the number of unique IP addresses scanning for port 445 also increased. Furthermore, the orange line shows the number of new, never-before-seen IPs scanning for port 445. From this we can see that a majority of the IPs scanning for port 445 between 2017-05-12 and 2017-05-14 were new scanners.
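Counting "never-before-seen" scanners is a matter of carrying a cumulative set across the hourly buckets. A minimal sketch with made-up data:

```python
# Minimal sketch: per-hour unique scanner IPs vs. never-before-seen
# IPs, carried across hourly buckets. The IPs here are made up.
hourly_scanners = {
    "2017-05-11T23": {"192.0.2.1", "192.0.2.2"},
    "2017-05-12T00": {"192.0.2.1", "203.0.113.7", "203.0.113.8"},
    "2017-05-12T01": {"203.0.113.8", "198.51.100.4"},
}

seen = set()
stats = {}
for hour in sorted(hourly_scanners):
    ips = hourly_scanners[hour]
    new = ips - seen                    # not seen in any prior hour
    stats[hour] = (len(ips), len(new))  # (unique IPs, new IPs)
    seen |= ips
```

The first tuple element corresponds to the black line in Figure 5 and the second to the orange line.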

Finally, we see scanning activity from 157 different countries in the month of May, and scanning activity from 133 countries between 2017-05-12 and 2017-05-14. Figure 6 shows the top 20 countries from which we have seen scanning activity, ordered by the number of unique IPs from those countries.

While we have seen the volume of scans on port 445 increase compared to historical levels, it appears that the surge in scanning activity seen between 2017-05-12 and 2017-05-14 has started to tail off.

So what?

Using data collected by Project Sonar we have been able to measure the deployment of vulnerable devices across the internet, and we can see that there are many of them out there. Using data collected by project Heisenberg, we have seen that while scanning for devices that expose port 445 has been observed for quite some time, the volume of scans on port 445 has increased since 2017-05-12, and a majority of those scans are specifically looking to exploit MS17-010, the SMB vulnerability that the WannaCry malware looks to exploit.

Coming Soon

If this sort of internet-wide measurement and analysis is interesting to you, stay tuned for the National Exposure Index 2017. Last year, we used Sonar scans to evaluate the security exposure of all the countries of the world based on the services they exposed on the internet. This year, we have run our studies again, improved our methodology and infrastructure, and have new findings to share.