Average amount of bandwidth used in DDoS attacks spiked eight-fold last quarter.

Coordinated attacks used to knock websites offline grew meaner and more powerful in the past three months, with an eight-fold increase in the average amount of junk traffic used to take sites down, according to a company that helps customers weather the so-called distributed denial-of-service campaigns.

The average amount of bandwidth used in DDoS attacks mushroomed to an astounding 48.25 gigabits per second in the first quarter, with peaks as high as 130 Gbps, according to Hollywood, Florida-based Prolexic. During the same period last year, bandwidth in the average attack was 6.1 Gbps and in the fourth quarter of last year it was 5.9 Gbps. The average duration of attacks also grew to 34.5 hours, compared with 28.5 hours last year and 32.2 hours during the fourth quarter of 2012. Earlier this month, Prolexic engineers saw an attack that exceeded 160 Gbps, and officials said they wouldn't be surprised if peaks break the 200 Gbps threshold by the end of June.

The spikes are brought on by new attack techniques that Ars first chronicled in October. Rather than using compromised PCs in homes and small offices to flood websites with torrents of traffic, attackers are relying on Web servers, which often have orders of magnitude more bandwidth at their disposal. As Ars reported last week, an ongoing attack on servers running the WordPress blogging application is actively seeking new recruits that can also be harnessed to form never-before-seen botnets to bring still more firepower.

Also fueling the large-scale assaults are well-financed attackers who are increasingly able to coordinate with fellow crime organizations, Prolexic officials wrote in a quarterly global DDoS report published Wednesday.

"These types of attack campaigns appear to be here to stay as a staple on the global threatscape," they wrote. "Orchestration of such large attack campaigns can only be achieved by having access to significant resources. These resources include manpower, technical skills and an organized chain of command."

The most prominent targets of DDoS attacks over the past six months have been the nation's largest banks, which at times have become completely unreachable following above-average floods of traffic. Most of the assaults were preceded by online posts that showed the writer had foreknowledge of what was about to happen. The posts were penned by self-proclaimed members of Izz ad-Din al-Qassam Brigades, the military wing of the Hamas organization in the Palestinian Territories, and said the attacks were in retaliation for videos posted to YouTube that were insulting to Muslims. The Prolexic report cast doubt on some of that narrative.

Prolexic "believes these attacks go beyond common script kiddies as indicated by the harvesting of hosts, coordination, schedules and specifics of the selected attack targets," the report stated. "These indicators point to motives beyond ideological causes, and the military precision of the attacks hints at the use of global veteran criminals that consist of for-hire digital mercenary groups."

Not the only one

Prolexic is by no means the only DDoS mitigation service that's seeing more powerful attacks. For 45 minutes on Tuesday, San Francisco-based CloudFlare's network was bombarded by data sent by more than 80,000 servers across the Internet that all appeared to be running WordPress. Over the past half-year, CloudFlare has seen a dramatic uptick in attacks that target website applications, such as those that provide encrypted HTTPS sessions. In many cases, those types of attacks are much harder to block.

"Sometimes the nastiest attacks aren't the biggest ones," CloudFlare CEO Matt Prince told Ars. "The nasty attacks that we're seeing right now are the ones that go after the underlying application by doing something like sending a ton of traffic to a log-in page."

Attackers in such cases will unleash scripts that enter a legitimate user name along with passwords that are known to be invalid. When repeated millions of times, the technique overwhelms targeted systems as servers perform database lookups, report the authentication failure, and then record it in internal logs.
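The asymmetry that makes this technique effective can be sketched in a few lines. The following is a hypothetical illustration (the function names and the use of a slow password hash as a stand-in for a database lookup are assumptions, not details from any targeted system): each guaranteed-to-fail request still forces the server to do real cryptographic work and write a log entry.

```python
import hashlib
import time

FAILURE_LOG = []  # stand-in for the server's internal authentication log

def check_login(username, password, stored_hash):
    # The server must hash the candidate password before it can reject it;
    # the deliberately slow PBKDF2 call stands in for a database lookup
    # plus credential verification.
    attempt_hash = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                       b"per-user-salt", 50_000)
    ok = attempt_hash == stored_hash
    if not ok:
        FAILURE_LOG.append((time.time(), username))  # record the failure
    return ok

stored = hashlib.pbkdf2_hmac("sha256", b"real-password", b"per-user-salt", 50_000)

# A script hammering a known-valid username with junk passwords:
for i in range(50):
    check_login("admin", f"wrong-{i}", stored)

print(len(FAILURE_LOG))  # every bogus attempt cost a full hash plus a log write
```

Multiplied across millions of requests, that per-attempt cost is what overwhelms the target.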

In addition to increasingly well-funded and organized attackers and new techniques, the growing firepower of DDoS attacks is also getting a boost from the proliferation of do-it-yourself Web applications such as WordPress and Joomla, Prince said. In that respect these applications, which are designed to help people with only moderate levels of technical expertise deploy websites, could become to this decade what early versions of Microsoft's Windows XP were to the previous decade.

"It is clear that if the story of the 2000s was how easy it was to compromise desktop PCs and turn them into spam-sending engines or botnets to do other nefarious things, the story of the 2010s is going to be how easy it is to compromise server software, which has gotten very consumerized and doesn't necessarily have the best security in place," Prince said. "If a server is 10 times as powerful as a desktop computer then you only need one-tenth to do the same level of damage."

Promoted Comments

I'm really OK with pulling their internet plug (any country harboring these thugs). Turn the whole area off from the outside world. Let them attack each other. Why should they be allowed to corrupt our systems in their never-ending war of hate against everything modern?

Just received word your neighbor is operating a botnet. In order to prevent corruption of our systems, we are pulling the internet plug for your neighborhood. Sorry, no Netflix for you tonight.

So, remember that article about millions of open telnet ports (port 23) found around the interwebs? And most of those were accessible with default passwords (admin/admin, etc.). Every time I see an article talking about spikes in DDoS attacks, I wonder how many of those have been taken over as bots. My understanding is, most of those aren't user errors, but manufacturer faults for leaving open ports on things like printers (thanks, HP!) and the like. Yeah, good luck fixing all of those.

Oh, I know a lot of ppl poo-poo his credentials, but I find him informative so... Steve Gibson went into more detail on the above findings on his podcast. Transcript here.

I operate a WordPress site. For a while I was seeing better than 150 illicit login attempts per hour. Being sort of a stubborn cuss, I opened wp-login and rewrote parts of it. Now an IP address has two attempts to log in. If those two attempts are unsuccessful, the IP receives a temp ban for a period of time. If there are three more unsuccessful attempts to log in, that IP address is written to .htaccess as deny from aaa.bbb.ccc.ddd as a permanent ban. That slowed the attacks. My host is also recording and banning IP addresses that make repeated failed attempts to log in. This solution will not work for those that allow readers to log in.
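The escalation scheme described above can be sketched roughly as follows. This is a hypothetical reconstruction, not the commenter's actual wp-login modifications; the thresholds, names, and the in-memory tracking are all illustrative (a real version would persist state and attach expiry times to temp bans).

```python
from collections import defaultdict

TEMP_THRESHOLD = 2   # failures before a temporary ban
PERM_THRESHOLD = 5   # total failures (2 + 3 more) before a permanent ban

failures = defaultdict(int)
temp_banned = set()
htaccess_denies = []  # lines that would be appended to .htaccess

def record_failed_login(ip):
    failures[ip] += 1
    if failures[ip] >= PERM_THRESHOLD:
        # Escalate: drop the temp ban and emit a permanent .htaccess deny.
        temp_banned.discard(ip)
        htaccess_denies.append(f"deny from {ip}")
    elif failures[ip] >= TEMP_THRESHOLD:
        # A real implementation would store an expiry timestamp here.
        temp_banned.add(ip)

for _ in range(5):
    record_failed_login("203.0.113.7")

print(htaccess_denies)  # ['deny from 203.0.113.7']
```

Once the deny line lands in .htaccess, Apache rejects the IP before WordPress ever runs, which is what makes the permanent ban cheaper than the in-application check.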

Fortunately there is another solution that works well. Install the plugins "Mute Screamer" and "Better WP-Security," set as much of the security as is possible for your arrangement in the WP-Security dashboard and the "Tweaks" panel, and set login limits to three to five failed logins for a temp ban of 15 minutes. Mute Screamer is an IDS that will notify you of attempted intrusions. Better WP-Security can be toggled to record changed files and make recovery after any alteration easier.

Quick note: since these specific attacks are coming from a huge pool of IPs, this approach isn't going to be very effective against them. Plugins that slow or throttle login attempts are more useful than those that simply block IPs (look at Login Security Solution, or, better, the Google Authenticator two-factor login).

In any case, security for this is best handled as far down the web stack as possible, where it'll consume the least resources. Allowing 90,000 IPs to attempt to access the login page is going to knock most sites offline, even if no login attempt is made. This is why, for this attack, the most effective place to block it is before it reaches the server, via a security proxy/web application firewall service like CloudFlare, Incapsula, Cloud Proxy, etc...
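The kind of per-IP throttling such a front-end proxy applies is commonly a token bucket: requests to the login URL from any single IP above a small sustained rate are dropped before they reach the application. This is a minimal sketch of that idea, with illustrative rates and a fixed clock passed in for clarity; it is not the configuration of any particular WAF service.

```python
from collections import defaultdict

RATE = 1.0    # sustained requests per second allowed per IP
BURST = 5.0   # short bursts permitted before throttling kicks in

buckets = defaultdict(lambda: {"tokens": BURST, "last": 0.0})

def allow(ip, now):
    b = buckets[ip]
    # Refill tokens in proportion to elapsed time, capped at the burst size.
    b["tokens"] = min(BURST, b["tokens"] + (now - b["last"]) * RATE)
    b["last"] = now
    if b["tokens"] >= 1.0:
        b["tokens"] -= 1.0
        return True
    return False  # dropped at the edge; no application code ever runs

# One IP firing 100 requests in the same instant gets only its burst through:
passed = sum(allow("198.51.100.9", now=10.0) for _ in range(100))
print(passed)  # 5
```

Because the other 95 requests are refused before any database or application work happens, the cost of the flood stays at the proxy instead of the origin server.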

It's unlikely any solution to this will make it into WordPress because, as mentioned above, it's mostly the wrong place for it. The firewall should be on Apache/nginx (mod_security/naxsi), before a PHP thread even gets spun up. WP could bake required password strength, login throttling, or two-factor auth into core, but why should WP dictate one solution when there are many good solutions available for each of the many possible attack vectors?

If site owners don't know, or don't want to be bothered with, decisions about server configuration or what WP security precautions to implement, they have the option to hire a professional. That might be a consultant, or it might mean paying for managed hosting from WP Engine, ZippyKid, etc...

I feel the root problem with all of this is the consumerization of web hosting by companies like GoDaddy/Host Gator/etc., which offer lay people one-click installs of web applications without really explaining that they (the hosting company) aren't providing support for the software, or even basic security measures at the server level.

That's not entirely fair, as hosts do most of what security hardening they can, but it is limited. Shared hosting is hard to impossible to secure, because what you do for WP might break phpBB, and what you do for Drupal might break osCommerce. Further, since it is shared hosting, users often don't have access to lock things down better even if they had the technical skill... Which all leads to users trying to block attacks like this where they do have control (WordPress/PHP), even when that's a poor place to do it.