Just like in real life, I get these hunches, or more like a bad feeling. Same with cyberspace: when you watch it and surf it, you kind of pick up on how things are going and get an idea of the direction it's heading.

I'm not sure, but I get the feeling it's the calm before the storm.

"The only way to stop such attacks is to fix the vulnerabilities on the machines that ultimately get taken over and used to launch them," Paller said. "There's no defense once the machines are under the attacker's control."


This is exactly why I try my best to preach security wherever I go. One of these days, one of these DDoS attacks is going to take out the Net if people like us, here at Wilders, don't get enough people educated before some nasty little group musters up enough machines to pull it off.
The potential for financial disaster is frightening.
When people like those that tried this latest attack fail, they learn, and they will come back even stronger the next time. I fear it is going to be easier to educate the masses than to capture those who would do such damage.
We certainly have our work cut out for us, don't we?

Has it been found out yet what DDoSed it this time? Was it the overdose of UDP 137 traffic? I hope there will soon be filters or other solutions for that.
You know, I posted that I open the TDS TCP Port listen function on that port, with it blocked in the firewall, and don't get a single scan. Today I forgot it for half an hour, and there were UDP 137 scans every minute of that time, so I was happy to put that listen function back up, and it's been all quiet again since.
You can filter them away with routers, firewalls and whatever, but it seems all those scans still eat resources and can cause DDoSing, as we've seen.
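For what it's worth, the kind of scan counting being described here (so many UDP 137 hits per minute) can be sketched in a few lines. Everything below is invented for illustration: the log format, addresses and timestamps are hypothetical, not taken from any real firewall.

```python
from collections import Counter

# Hypothetical firewall log lines ("date time proto src_ip dst_port");
# addresses and times are invented for the sketch.
log_lines = [
    "2002-10-22 14:01:12 UDP 203.0.113.7 137",
    "2002-10-22 14:02:45 UDP 198.51.100.9 137",
    "2002-10-22 14:02:59 UDP 203.0.113.7 137",
    "2002-10-22 14:05:30 TCP 192.0.2.44 80",   # unrelated hit, ignored below
]

def probes_per_minute(lines, port="137", proto="UDP"):
    """Count how many scans hit one port in each minute of the log."""
    hits = Counter()
    for line in lines:
        _date, clock, p, _src, dport = line.split()
        if p == proto and dport == port:
            hits[clock.rsplit(":", 1)[0]] += 1   # bucket by HH:MM
    return hits

print(probes_per_minute(log_lines))
```

Running the same idea over a real day's log would show exactly the "every minute" pattern described above.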

Well, Lawrence is quite worried regarding, for example, Opasoft. Quoted:

"I have been tracking the Opaserv worm for the last few
weeks...I'm very concerned. So far we've seen Opaserv-like scanning behavior from > 650,000 distinct IPs...and the number of distinct IPs is growing at a rate of 2,000 - 2,500/hr.

The *known* backchannels for the original Opaserv and the Brazil variant are reported as shut down...however, if there is some other backchannel mechanism, that would be one heck of a zombie net."

Well, the Port 137/139 probes are apparently still being filtered out by my ISP here -- I don't see any of them.

However, I do see a large number of TCP Port 6492 probes in recent days. I've got 251 of these from 20 Oct through 22 Oct and they come from all sorts of IP addresses, including a healthy number of *.edu domains. They stopped when I changed my local IP address, but I couldn't find anything that should be banging that port by default. The attached JPG shows the top 25 out of probably around 70.
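A "top 25 sources" tally like the one in that graph can be produced from any probe log with a simple counter. The probe records, port counts and addresses below are all made up for the sketch:

```python
from collections import Counter

# Hypothetical (source_ip, dst_port) probe records pulled from a log;
# every address here is invented example space, not a real scanner.
probes = [
    ("192.0.2.7", 6492), ("198.51.100.9", 6492), ("192.0.2.7", 6492),
    ("203.0.113.12", 6492), ("198.51.100.9", 6492), ("192.0.2.7", 6492),
    ("203.0.113.4", 80),  # unrelated hit on another port, ignored below
]

def top_sources(events, port, n=25):
    """Rank the remote IPs probing a given port, busiest first."""
    counts = Counter(src for src, dport in events if dport == port)
    return counts.most_common(n)

for ip, hits in top_sources(probes, 6492, n=3):
    print(ip, hits)
```

With `n=25` this yields exactly the "top 25 out of around 70" ranking shown in the attachment.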


Wow, J.V.M! That's quite a graph you've got there! Impressive. I read the article linked by Paul. Why is it important to use hierarchical structures for Internet connections? Would it make sense to use lateral connections too? I would not suggest restructuring the Internet, but perhaps reorganizing it a little so it is not so dependent on old-fashioned structural dynamics. These are just some thoughts rolling around in this head-thing of mine!

It seems to me that Monday's happening with the root directories was just a test. That's my hunch. Have there been any other "tests" lately of other key Internet connections sites? Little ones? If so, then I expect we will see more of this problem. Will we be ready?

Quoting Prince_Serendip: ". . . Wow, J.V.M! That's quite a graph you got there! Impressive. I read the article linked by Paul. Why is it important to use hierarchical structures for Internet connections?"


Oh, I was just screwing around and trying to post a graphic here for the first time.

The only interesting thing I've found to date is that the first six IP addresses in the graphic all have something listening on TCP Port 80 (HTTP), but whatever it is is apparently not a standard web server. (Visual Route would identify a web server in this instance.) That says RAT to me, one listening on Port 80, but I still can't identify the purpose of the TCP Port 6492 probes, nor can I find a vulnerability or exploit associated with TCP Port 6492, nor a RAT known to use this as a default port. And, quite frankly, I have been probed by more remote IP addresses in the past month or so than are indicated as being present at the Internet Storm Center. That suggests to me that the probes may well be targeted.

The hierarchical structure for IP addresses has its roots in antiquity (relatively speaking) -- and it's already breaking down. Still, there are some good reasons for this structure and, indeed, a lateral structure would simply compound the problems that need to be addressed.

It seems to me that Monday's happening with the root directories was just a test. That's my hunch. Have there been any other "tests" lately of other key Internet connections sites? Little ones? If so, then I expect we will see more of this problem. Will we be ready?


On this subject, I would simply point out that there were quite a few individuals who considered the various Code Red variants a carefully controlled test scenario. Still, nothing came of that (excluding Nimda), but there remains a nagging suspicion that many of those vulnerable servers were subverted by Code Red II and exist, even today, as 'sleeper' sites. However, if Lawrence Baldwin's statistics are correct, Opaserv has now subverted an even wider population.

I am still being hit on UDP 137 at a rate of about 30 an hour, give or take a little. This is far worse than anything I have ever seen, even at the peak of Code Red, Nimda, etc.
I am also seeing a lot of Port 80 hits right now. I don't normally try things like keeping figures, but my spidey senses tell me this one is really bad news.
That article was sure a grabber too. 99% of Internet users don't have a clue that such stuff is happening.
Scary as hell to me.

This is one of those cases where you simply cannot, unfortunately, separate computer security and politics.

The biggest threat to our security may not be from physical bombs, but from data bombs that have the potential of wrecking the world economy. There have been rumors for quite some time that there is a group of South Koreans, sympathetic to the government of Kim Jong Il in North Korea, who are actively preparing for a massive "cyber" attack. Many in India are very upset with the United States cuddling up to a military dictator in Pakistan in the "war on terrorism." Does one need to look further than that country alone for minds willing to do whatever is possible to cause a cyber-calamity?

The possibility of a cyber attack combined with simultaneous physical attacks on one or more of the 13 root nameservers is very frightening. The robust root server operated by NSI is officially said to be in Herndon, Virginia -- but in actuality its location is a closely guarded secret, known only to be somewhere in the Virginia hills. Potential terrorists would no doubt love to discover that location. One seminar at Sector5 dealt strictly with the physical protection of these locations and called for air defense to protect the root servers. The thought of an Internet calamity is scary considering that (against all advice from IT security specialists) parts of the USA power grid now communicate solely through the Internet. Hopefully, other countries haven't been as reckless.

A portion of the above is worth posting here -"...several studies have shown that critical infrastructures are potentially vulnerable to cyberterrorist attack. Eligible Receiver, a "no notice" exercise conducted by the Department of Defense in 1997 with support from NSA red teams, found the power grid and emergency 911 systems had weaknesses that could be exploited by an adversary using only publicly available tools on the Internet. Although neither of these systems was actually attacked, study members concluded that service on these systems could be disrupted. Also in 1997, the President's Commission on Critical Infrastructure Protection issued its report warning that through mutual dependencies and interconnectedness, critical infrastructures could be vulnerable in new ways, and that vulnerabilities were steadily increasing, while the costs of attack were decreasing."

A widespread but unsophisticated attack on the computers that act as the address books for the Internet failed to cause any major problems, but experts warn that more security is necessary.
Beginning Tuesday, a flood of data barraged the Internet's 13 domain-name service (DNS) root servers in what's known as a denial-of-service attack. But the simple nature of the attack, and the system's resiliency, allowed administrators to quickly block the data stream.

According to security experts, a more sophisticated attack could have disrupted the root servers long enough to impair Net access. Had the attack prevented access to the servers for eight to 10 hours, the average computer user may have noticed slower response times, said Craig Labovitz, director of network architecture for denial-of-service prevention firm Arbor Networks.

"If someone can really take over the infrastructure, it becomes a very different ball game," he said.

Although the attack failed to hobble the Net, there were indications Wednesday that it wasn't over yet, continuing at a lower intensity. In addition, locating the perpetrators will be difficult because the type of attack they used--known as a distributed denial-of-service--typically masks the origins of the assault.

In the wake of the attack, some of the companies and organisations that maintain the 13 key servers have pledged to reassess the security of the computers for which they are responsible.

VeriSign, which maintains two root servers as well as just over a dozen .com top-level domain servers, is evaluating whether it needs to revamp security, said company spokesman Brian O'Shaughnessy.

"VeriSign always looks for ways to improve its security," he said. "We are in a fluid environment--the bad guys always try to do bad things."

O'Shaughnessy refuted claims that the company's two charges--the "A" and "J" root servers--went down during the onslaught. "That's wrong," he said. "Two of the four that stayed up were ours."

Monday's assault took down seven of the 13 servers for as long as three hours, according to Internet performance measuring service Matrix NetSystems. The attack took the form of a data flood, sending a deluge of Internet control message protocol (ICMP) packets to the 13 root servers, which maintain the addresses for the hundreds of top-level domain servers. Top-level domains are recognised by familiar suffixes such as .com, .org and .uk.

ICMP packets carry network data used for reporting errors or checking network connectivity, as in the case of the common "ping" packet. A flood of such data can block access to servers by clogging bottlenecks in the network infrastructure, thus preventing legitimate data from reaching its destination.

However, ICMP data is not essential to network administration, and many servers and the routers that direct data to its destination tend to block the protocol. That's precisely what administrators did Monday afternoon to stop the flood of data from reaching the DNS root servers.
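The "just block the protocol" response described above can be illustrated with a toy packet filter. The packet tuples, protocol names and addresses are invented for the sketch; real filtering happens in routers and firewalls, not Python, but the logic is the same:

```python
# Toy border filter illustrating the fix: drop the flood protocol
# outright while legitimate traffic passes. Packets are invented
# (proto, src_ip, payload) tuples, not real network data.

def border_filter(packets, blocked_protocols=("ICMP",)):
    """Split traffic into what passes and what gets dropped."""
    passed, dropped = [], []
    for pkt in packets:
        (dropped if pkt[0] in blocked_protocols else passed).append(pkt)
    return passed, dropped

flood = [("ICMP", "198.51.100.%d" % (i % 250), "ping") for i in range(1000)]
legit = [("UDP", "192.0.2.53", "DNS query for example.com")]

passed, dropped = border_filter(flood + legit)
print(len(passed), len(dropped))  # 1 1000: flood is shed, DNS query survives
```

Because DNS itself doesn't ride on ICMP, operators could afford this blunt cutoff without breaking name resolution.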

Continuing and future attacks

Still, experts are concerned about a better executed attack.

"(This attack) didn't impact the Internet much, because the Internet is resilient and operators were quick to respond," said Tiffany Olsen, spokeswoman for the President's Critical Infrastructure Board, the group responsible for creating the United States' National Strategy to Secure Cyberspace. However, there "will be larger attacks than this one was."

The FBI has opened an investigation into the attacks, but the agency will have a hard time finding the responsible person or group because the distributed attack randomised the source information on each piece of data, experts said.

"There are tens and dozens of scripts and tools that could have generated an attack of this kind," said Arbor's Labovitz. "It wouldn't even require a computer scientist, or even a wily hacker, to do this."

Meanwhile, Matrix NetSystems said Wednesday that the attack may be ongoing. "There are five servers right now that are showing issues," said company CEO Bill Palumbo. He acknowledged that the five may be down for maintenance or other reasons, but said that there are still delays in requests for domain name information.

Like a telephone book, domain name servers link a name, such as "zdnet.com.au," with its numerical Internet Protocol address.

The system also works in a layered manner, so that someone who wants to go to a specific address is first directed to a local server. If the domain is not found, the request gets bumped up to a domain name server for the top-level domain, such as ".com."

Requests only rarely consult the root servers, usually when a new name server is added locally. In addition, each entry in a DNS server has an expiration date, known as the time to live (TTL). When that time arrives, the entry is supposed to be deleted and the local DNS server has to ask the top-level domain server for the latest address information.
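The TTL mechanism described above can be sketched as a minimal cache (all names, addresses and clock values below are invented for illustration):

```python
# Minimal sketch of a caching resolver's TTL bookkeeping; names,
# addresses and times are invented. Once an entry expires, the
# resolver must re-query upstream, which is why a long enough root
# outage would eventually be felt everywhere.

class TTLCache:
    def __init__(self):
        self._entries = {}  # name -> (address, expires_at)

    def put(self, name, address, ttl, now):
        self._entries[name] = (address, now + ttl)

    def get(self, name, now):
        entry = self._entries.get(name)
        if entry is None or now >= entry[1]:  # missing or expired
            return None                       # caller must ask upstream
        return entry[0]

cache = TTLCache()
cache.put("zdnet.com.au", "192.0.2.1", ttl=3600, now=0)
print(cache.get("zdnet.com.au", now=1800))  # within TTL: 192.0.2.1
print(cache.get("zdnet.com.au", now=7200))  # expired: None
```

The second lookup failing is the whole point of the article's argument: as long as the root servers come back before most cached entries hit their expiry time, ordinary users never notice.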

"You have to realise that there are several tens of thousands of new routes advertised every day," said Matrix NetSystem's Palumbo. "Because of that, the authoritative nature of a cache deteriorates rather rapidly."

Thus, even a complete outage of all 13 DNS root servers wouldn't bring the Internet to a halt, unless it went on for hours or days--time enough for the local DNS caches to expire.

Paul Mockapetris, the inventor of DNS and chief scientist for domain-name software company Nominum, said that, compared with the 300 or so records each root server contains, the 3 million or so records held by the .com DNS servers are the future target administrators should worry about.

"The root servers will be harder in a month than they are today," he said. "This was really sort of--to borrow from Afghanistan--'dumb bombs,' and you have to worry about more sophisticated attacks in the future."

That's what worries so many. Look what did happen with an "unsophisticated" attack. "Unsophisticated" is relative.

Are you a football fan? (American football).......

An offense, if unsure of the defensive game plan, will many times run a series of "unsophisticated" plays to see where the weaknesses are, to see where the holes are, to see the defensive scheme. What results is a series or two of one-yard gains, three-yard gains, maybe a loss of yardage -- but it all serves a purpose: to give false confidence to the defense, to learn the scheme, to see the way the plays were defended, and to soften up the defense. In reality, though, the real game plan is not the "unsophisticated" offense being shown to the other side. About the time they have the defense confident and secure that they can stop these guys, the offense pulls out the real game plan, which is anything but unsophisticated -- it is all of a sudden power football, finesse passing, speed on the outside, the deep ball -- all with success because of the clever softening up of the defense and learning how they are going to react to certain offensive strategies.

If you don't understand American football, that made no sense. If you do, then you see why many who know and study this day in and day out think the very fact that it was so large yet "unsophisticated" is NOT a good or positive sign. The amazing thing is that at the Sector5 conference just two months ago, Craig Koerner, a professor in the war gaming department at the Naval Academy, and Dr. Denning (Professor of Computer Science at Georgetown, who wrote a paper linked in my last post) predicted this very thing! They said that when we begin to see a series of minor assaults, relatively amateur in scope, we need to get serious, and quick. I think Blaze called it "the calm before the storm" - or maybe the softening of the defense as they test at 1/8th strength to gauge reaction. The fact that this "unsophisticated" attack took down three to seven root servers for varying amounts of time should serve as a warning.

All 13 root servers going down - and several being physically attacked - would prevent daily life from proceeding as we know it. The wise see the wispy clouds and know the thunderheads aren't far behind.

I personally play chess...like Australian Rules...American Football is great for tailgate parties.

There is a storm brewing..no doubt about it....and it is time to put more $ into the system..no matter what country.

__________________________

Bugbear runs rampant in Australian Parliament

By Jeanne-Vida Douglas, ZDNet Australia
23 October 2002

excerpts:

"The Bugbear virus is causing havoc for the second time in a month at Australia's Parliament House in Canberra, interrupting the government's operations and highlighting dangerous security flaws."..........

"Lundy quoted from a report prepared by Leif Gamertsfelder, the head of the e-security group at the national law firm, Deacons, which indicated the federal government spends just 32 cents per person on securing the nation's IT infrastructure, compared to US$28 spent by the US government. ".....

Along the same lines, I thought other members might be interested in this information.
________________
A denial of service attack on the Internet's root DNS servers that began last night continues to vex users today.

The DNS servers resolve name queries to numbers, and the slowdown should only be apparent the first time a user hits a site. After that, your ISP's cache ought to bypass the issue.

The attack highlights the importance of DNS and its consequent vulnerability.

Over at IcannWatch, Michael Froomkin revives Karl Auerbach's proposal of a CD-based "DNS in a box" for such emergencies.

"The proposed CD would have contained the configuration files for BIND plus zone files for a root and selected contents of the big TLDs, plus some sort of wildcard for in-addr.arpa.... but it would have dented ICANN's claim to being uniquely necessary, and besides the idea came from the wrong source," observes Froomkin.

Last year ICANN vowed to take security seriously, and after the latest attack it ought to explain why this is such a bad idea.®

Source: http://www.theregister.co.uk/content/6/27731.html

More at: ABC Sci-Tech News and Boston Globe Online.

http://security-forums.com/forum/viewtopic.php?t=1488

________________

Denial of Service Attack on Root Servers
Posted by michael on Monday, October 21 @ 19:33:54 MDT
Contributed by michael

NANOG's mailing list carries reports of a denial of service attack on several DNS root servers. There are graphs, and links to the root server operators, some of whom also have graphs. According to NANOG, the trouble started at c. 20:00 UTC and spread within about half an hour. Currently, according to posts to the NANOG list, the attack is merely slowing down DNS response time rather than actually blocking service.

Paul Vixie reports that the attack is ICMP requests, which I believe is also known as "ping flooding". (Someone please confirm/correct this?)
At present there's no information from NANOG or elsewhere about the source of this attack; it could be anything from an accident (but I doubt it), to a new virus, to Iraqis (no bet), to an alternate-root fancier trying to demonstrate why it's not a good idea to put all our eggs in one basket -- or why ICANN was wrong to rebuff Karl Auerbach's suggestion that it encourage the distribution of a CD-ROM "DNS in a box" kit that would contain all the pieces one might need to build emergency DNS service. The proposed CD would have contained the configuration files for BIND plus zone files for a root and selected contents of the big TLDs, plus some sort of wildcard for in-addr.arpa.... but it would have dented ICANN's claim to being uniquely necessary, and besides, the idea came from the wrong source.

Let's hope this problem goes away, harmlessly, and we learn something from it.

Thanks for the good wishes. Informing our registered members seemed the appropriate thing to do; we very rarely email them, but on occasions like these an explanation is in order, since many wondered why they could hardly, or not at all, reach our board (and it saves the time of answering countless emails on this issue as well).