Ethical Hacking, Penetration Testing & Computer Security

We see it all around us, recently. Web applications get niftier by the day by utilising the various new techniques recently introduced in a few web-browsers, like I.E. and Firefox. One of those new techniques involves using Javascript. More specifically, the XmlHttpRequest-class, or object.

Webmail applications use it to quickly update the list of messages in your Inbox, while other applications use the technology to suggest search queries in real-time. All this without reloading the main, sometimes image- and banner-ridden, page. (That said, it will most probably be used by some of those ads as well.)

Before we go into possible weaknesses and things to keep in mind when implementing an AJAX-enabled application, here is first a brief description of how this technology works.

The Basics

Asynchronous JavaScript and XML, dubbed AJAX, basically works like this. Let me illustrate with an example: an email application. You are looking at your Inbox and want to delete a message. Normally, in plain HTML applications, the POST or GET request would perform the action and then relocate to the Inbox, effectively reloading it.

With the XmlHttpRequest-object, however, this request can be done while the main page is still being shown.

In the background a call is made which performs the actual action on the server, and optionally responds with new data. (Note that this request can only be made to the web-site that the script is hosted on: it would leave massive DoS possibilities if I could create an HTML page that, using JavaScript, requests thousands of concurrent web-pages from some web-site. You can guess what would happen if a lot of people visited that page.)

The Question

Some web-enabled applications, such as for email, do have pretty destructive functionality that could possibly be abused. The question is — will the average AJAX-enabled web-application be able to tell the difference between a real and a faked XmlHttpRequest?

Do you know if your recently developed AJAX-enabled or enhanced application is able to do this? And if so — does it do this adequately?

Do you even check referrers or some trivial token such as the user-agent? Chances are you do not even know. Chances are that other people, by now, do.

To be sure that the system you have implemented — or one you are interested in using — is properly secured, and thus trustworthy, one has to ‘sniff around’.

Incidentally, the first time I discovered such a thing was in a lame preview function on a lame ringtone-site. Basically, the XmlHttpRequest URI’s ‘len’ parameter specified the length of the preview to generate, and it seemed to load from the original file. By entering this URI in a browser (well, actually, ‘curl’) and specifying a very large value, one could easily grab all the files.

This is a fatal mistake: implementing an AJAX interface that accepts GET requests. GET requests are the easiest to fake. More on this later.

The question is — can we perform an action while somebody is logged in somewhere else? It is basically XSS/CSS (Cross-Site Scripting), but then again, it isn’t.

My Prediction

Some popular applications I checked are hardened in such a way that they use some form of random sequence numbering: the server tells the client, in encoded form, what the application should use as a sequence number when sending the next command. This is mostly obscured by JavaScript and a pain in the ass to dissect — but not impossible.

And as you may have already noted: if there is improper authentication on the location called by the XmlHttpRequest-object, this leaves an opening for malicious use. This is exactly where we can expect weaknesses and holes to arise. There should be proper authentication in place. At all times.

As all these systems are built by men, chances are this isn’t done properly.

HTTP traffic analysis
Analysing HTTP traffic with tools like Ethereal (yeah, I like GUIs, so sue me) surely comes in handy to figure out whether applications you use are actually safe from exploitation. This application allows one to easily filter and follow TCP streams, so one can properly analyse what is happening there.

If you want to investigate your own application, the use of a sniffer isn’t even necessary, but I would suggest you let a colleague who hasn’t implemented it play around with your app and a sniffer in an attempt to ‘break’ through it.

Cookies
Cookies are our friend when it comes to exploiting — I mean, researching — any vulnerabilities in AJAX implementations.

If the XmlHttp interface is merely protected by cookies, exploiting it is all the easier: the moment you get the browser to make a request to that website, the browser happily sends any cookies along with it.

Back to my earlier remark about GET requests being a pretty lame implementation: from a developer’s point of view, I can imagine one temporarily accepts GET requests to be able to easily debug stuff without having to constantly enter irritating HTTP data using telnet. But when you are done with it, you really should disable it immediately!

I could shove a GET request hidden in an image link. Sure, the browser doesn’t understand the returned data, which might not even be an image. But my browser does happily send any authenticating cookies, and the web-application on the other end will have performed some operation.
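A cheap server-side guard against exactly this trick is to refuse to perform state-changing actions over GET at all. A minimal sketch (the `isAllowed` helper and the action names are hypothetical, not from any particular framework):

```javascript
// Hypothetical dispatcher check: read-only actions may arrive over GET,
// but destructive ones must come in as a POST (with further checks on top).
function isAllowed(method, action) {
  const readOnly = new Set(['list', 'preview', 'search']);
  if (readOnly.has(action)) return true; // safe to serve over GET
  return method === 'POST';              // 'delete', 'send', etc.
}
```

With a check like this, a forged image link can still trigger a request, but the server simply refuses to act on it, since `isAllowed('GET', 'delete')` comes back false.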

Using GET is a major mistake to make. POST is a lot better, as it is harder to fake. The XmlHttpRequest can easily do a POST. But I cannot get a script, for instance I could have embedded one in this article, to do a POST request to another website because of the earlier noted restriction: you can only request to the same web-site the web-application is on.

One could modify one’s own browser to make requests to other websites, but it would be hard to get the browser on somebody else’s machine to do this.

Or would it?

If proper authentication — or rather, credential verification — still sucks, I can still set up a web-site that does the exact POST request that the AJAX interface expects. That will be accepted and the operation will be performed. Incidentally, I have found a popular site that, so far, does not seem to have proper checks in place. More on that one in another article.

Merely using cookies is again a bad idea.

One should also check the User-Agent and possibly the Referer (the XmlHttpRequest nicely allows one to send additional headers, so you could just put some other token in the Referer field). Sure, these can still be faked — but it may fend off some investigating skiddiots.

Sequence Numbering, kinda…

A possible way of securing one’s application is using some form of ‘sequence-numbering’-like scheme.

Roughly, this boils down to the following.

One should let the page, or some included JavaScript generated on the server side, contain a token; some operation performed on that token gives a result which is used in every consecutive request to the webserver. The webserver should not accept any request with a different ‘sequence number’, so to speak.

The server’s ‘challenge-string’ should be as random as possible in order to make it unpredictable: if one could guess what the next sequence number will be, the interface is again wide open to abuse.

There are probably other ways of hardening interfaces like this, but they all basically come down to keeping some fixed information from the webserver as far out of the end-user’s reach as possible.

You can implement this as complexly as you want, but it can be implemented very basically as well.

For instance: when I, as a logged-in user of a web-enabled email application, get assigned a Session-ID and the like, the page that my browser receives includes a variable iSeq which contains a non-predictable number. When I click “Delete This Message”, this number is transmitted with the rest of the parameters. The server can then respond with new data and, hidden in the cookies or another HTTP header field, pass the next sequence number — the only one that the web-server will accept as a valid request.

As far as I know, this seems the only way of securing it. It can still be abused if spyware sniffs HTTP communications — which spyware has recently started doing.

Javascript Insertion

On a side note, I wanted to throw in a remark on JavaScript insertion. This is an old security violation, not really restricted to AJAX, and not an attack on AJAX. Rather, it is an attack utilising the XmlHttpRequest object for malice.

If I were able to insert JavaScript into the web-application I am currently looking at in my other browser window, I would be able to easily delete any post the site allows me to delete. Now, that doesn’t seem all that destructive, as it only affects that one user? Wrong: any visiting user will have their own posts deleted. Ouch.

Javascript insertion has been a nasty one for years and it still is when people throw their home-brew stuff into production.

On a weakly implemented forum or web-journal, one could even post new messages — including the JavaScript — so that any visitor with the proper permission would re-post the message, keeping the flood of spam coming.

These technologies keep developing — and lazy website developers do not update their websites to keep up with these changes.

The ‘AJAX enhancements’ that some sites recently got might have been improperly implemented. This year might be a good time to check all those old web-applications for any possible JavaScript insertion tricks.

If the cookies getting caught didn’t bother you, the sudden deletion of random items and/or public embarrassment might be enough to entice you to verify your code.

90 Responses to “AJAX: Is your application secure enough?”

I must say that your analysis is a bit hyped, and most of the issues you list are not specific to XMLHttpRequest, and apply to any secure web site.

From your post, I see two key arguments (a) servers can’t tell whether a request is a normal full-page request, or a request for some fragment or data, and (b) don’t use GET.

On the first point, you don’t justify why the server should care. You can always sniff headers/cookies etc, and you can change query params at will. That does not necessarily make a site insecure. There are ways to design a site against such changes. I don’t see how this is related to XMLHttpRequest at all.

On the second point, GET and POST have their own semantics and applicability. You may be able to easily recreate a GET request, hey – someone who understands the difference between GET and POST can easily recreate POST requests as well. That does not justify using POST for everything. In fact I would argue that GET should be used for all idempotent requests.

You raise fair points, however these are concerns with EVERY web based application, even ones driven by the traditional GET/POST form submission methods. Using a GET request through XMLHttpRequest isn’t by any means flawed or “less secure,” as POST can be “faked” just as easily. It all comes down to doing what is required of EVERY web programmer: assume that all incoming data is unclean. Input validation must NOT be left as an exercise for the client side of a web app. If your ajax script is working with sensitive data, you’d better have user/session validation in place. If you’re concerned about session hijacking through a malicious program sniffing HTTP traffic, you’d better resort to using an HTTPS connection, and insist that any session cookies you create are set to only be transmitted across a secure connection. And, most importantly, you’d better assume that any request to the back-end component of an AJAX script contains data that is malformed.

While I think this article serves a purpose in informing new web app designers who have been drawn in by AJAX et al., these security issues are truly a concern for all web applications, and have been for years.

“But I cannot get a script, for instance I could have embedded one in this article, to do a POST request to another website because of the earlier noted restriction: you can only request to the same web-site the web-application is on.”

This is completely untrue. A script can easily create a <form> node using standard DOM APIs, create child <input type=”hidden”> nodes, then call formNode.submit(). Such a dynamically-created <form> node can submit to any URL; it’s not limited by the same-source restriction.

So your theory that using POST buys you some security doesn’t really hold water.

First of all, yeah the same implications arise as with usual web-applications, no doubt about that.

But never underestimate the stupidity of people in large groups. ;) People think that, because this stuff seems to go on ‘in the background’, not that much care needs to be taken.

Subbu: On GET/POST: GET is definitely easier to abuse than a POST request. As I wrote, such a GET request can be accomplished by simply supplying it in an image SRC attribute. For example, if you were on an online email system that uses only GET for its operations, I could send you an email containing image links to all types of destructive operations within that email system, for instance to send email to others, etcetera. I cannot get an image link to perform a POST request. Filtering out scripting will take care of fixing any dangers with POST — but to guard against a GET attack the email system should disallow images altogether (which, from a security point of view, isn’t that bad a practice at all, though).

BTW, you failed to notice c) sequence numbering — which I really ought to have called session validation tokens — as a suggestion to further harden AJAX communications. ;)

That way it hardens things a bit more, but sure, as all the responders noted, it’s the same for web-applications in general, that being the nature of the protocol, so to speak.

But it sure as hell shouldn’t be very, very easy to abuse. ;)

Problem is: you know it, I know it, the people that responded here knew it. But these issues have been plaguing web-applications for years, and will continue to, because every few years, with a fresh crew of web-developers, history starts to repeat itself because somebody fails to do something or does it wrongly. (JavaScript insertion or XSS, for example, still keeps appearing now and then, which with the coming of this object has other interesting complications.)

(BTW, slightly off-topic, but there is a more effective way to properly secure web-applications against almost any type of attack: the webserver should never accept links that it hasn’t itself exported, in the generated HTML, to that specific IP address. How about, if a user does request such a link, simply refusing to comply, or redirecting the user to the main page?)

dburkes: What you suggest is entirely true, but in that situation the created form will redirect the end-user and prevent the attacker from ever reading the page itself; it would be most successful if done in a separately created window.

But it will not let the user’s browser, unbeknownst to that user, do any wicked stuff somewhere else, as is possible with a GET request that could be hidden anywhere — see the subtle difference there?

If, and only if, properly hardened, sure, GET can be an option as well, but in the end I think using GET or POST boils down to one’s own well-thought-over choice.

I think any experienced person knows that web-based applications, especially very important ones, are evil. But if everybody keeps using them, it’s good to keep conversations like these alive. ;)

There are popular sites that are pretty easily vulnerable.

Michael: Yeah, using the Referer and User-Agent fields is lame, but as I suggested, one can perfectly well use these fields as session validation tokens, as one has control over the headers with an XmlHttpRequest. This is impossible to fake with a plain GET or POST. (Note that the danger here is not per se about myself faking communications to perform some action (as would be possible with curl), but more about tricking other people’s browsers into performing such an operation.)

Wow, you are a complete moron. None of this has anything to do with AJAX, its all just basic form security. And not only that, but you don’t suggest security, you suggest obscurity. Well done, ride that web2.0 fagwagon.

Using GET is a major mistake to make. POST is a lot better, as it is harder to fake. The XmlHttpRequest can easily do a POST. But I cannot get a script, for instance I could have embedded one in this article, to do a POST request to another website because of the earlier noted restriction: you can only request to the same web-site the web-application is on.
As many other people have pointed out, there’s nothing wrong with using GET for many applications; anywhere where you want the user to be able to bookmark your URL or email your URL is a good rule of thumb.

It appears that you’re saying that you shouldn’t use GET for credential transfer (password login); that’s a bit different than a blanket statement saying don’t use GET. In that case, yes, you’re absolutely right. In fact, the RFC makes that point clear already.

And your other point is not true. You could POST to any website using something like CURL. I think you’re confusing embedded Javascript/XSS attacks with the POST/GET HTTP commands.

Most of your article seems to be implying that GET can be more easily used in cross-site scripting attacks; this is a good point, but really this entire article was about XSS. Sending an email or embedding bad content in a web page is XSS. Using POST will prevent XSS, as you point out, but that’s not a good enough reason.

For example, entries in a search form are best left as GET. That way, the entries will show up in the URL and be bookmarkable and emailable. What would Amazon work like without GET?

This is a very silly article. The only thing “harder” about faking a POST as opposed to a GET is that you might use a tool to speak HTTP instead of typing sh*t into your location bar. None of this is real security.

Someone tell me what, from a security standpoint, the difference is between an AJAX GET or POST and a traditional web form GET or POST. Answer: zero.

Next time learn a little about the field before you claim to be an expert, please.

Joe: Yeah it is basic http protocol security, I never meant to say otherwise.

However, if you think I suggested obscurity you either seem to have misinterpreted or I have failed to properly get my point across.

(And web 2.0? If only the end-user was that lucky — rather web 0.2 if you ask me.)

Another Joe: Interesting link, that’s indeed a more sophisticated implementation of basically the same idea. The links on the bottom there are quite interesting as well.

The Session Hijacking PDF for instance is a more elaborate article on sessions in general — it basically tells of the same dangers and if you develop web-apps it is definitely worth a read.

Anonymous (1st): I see I totally failed to get one important point across: I was not talking about my, myself, using curl to fake a POST — I have been doing that for ages. ;)

I mean that it should be hard for me to trick your browser into doing a POST request somewhere else. It is hard to fend against, on the server side, in a normal HTTP world — nobody that responded has failed to understand that.

AJAX seemingly has these same faults, but properly implemented it can be done relatively safely (like those page and request tokens mentioned in the link by Another Joe).

Sure, I can curl myself senseless instead of using a real browser to read my own email somewhere.

But I should not be able to trick your browser into doing something you don’t want, by creating a web-form on-the-fly and automatically submitting it when one could’ve guarded against this quite easily.

Anonymous (2nd): I have never claimed to be an expert, nor am I.

And also, you have failed to notice that, on the server side, there definitely can be a distinct difference between an ‘AJAX’ GET/POST request and a traditional one that, from a security standpoint, can reduce abuse to an acceptable minimum.

“We see it all around us, recently. Web applications get niftier by the day by utilising the various new techniques recently introduced in a few web-browsers, like I.E. and Firefox. One of those new techniques involves using Javascript. More specifically, the XmlHttpRequest-class, or object.”

I knew this would be bunk. There is absolutely nothing “recent” about this technology. It has been built into IE for years. Research before blogging, or else look like an idiot.

(BTW, slightly off-topic, but there is a more effective way to properly secure web-applications against almost any type of attack: the webserver should never accept links that it hasn’t itself exported, in the generated HTML, to that specific IP address. How about, if a user does request such a link, simply refusing to comply, or redirecting the user to the main page?)

Uh, I seriously hope you see the flaw in this. The first time you publish a website you’ll never be able to access it…

There is a claim above that he can’t create a page using AJAX to communicate with a server other than the webserver which created the page. This may be true, but you don’t have to use AJAX. We’ve been doing AJAX-like stuff for years by simply using an iframe (you can give it 0 size so you don’t see part of the page reloading over and over) and then, using simple JavaScript, you can hit whatever URL you want — including the URLs that return AJAX content.

“As these technologies keep developing — and lazy website developers do not update their websites to keep up with these changes.”

Sometimes the “lazy website developers” try to get people to listen to them about security risks like this, and it falls on deaf ears. Before you roll out your jump-to-conclusions mat and leap to whose fault this is, start with blaming management and work down.

In reference to “lazy website developers”: we actually would like to develop secure web applications, but we do not have time to reinvent the wheel. We would love a book or two that not only shows the problems but also how to design/code around them.

I don’t understand this entire article. I have never had to worry about any of this, because I have done security the right way: on the server side, on every request. This makes it impossible for a user to affect other users in anything I have written.

Each and every workaround in this article makes it more difficult, but it’s like an alarm system: it will never deter the most serious threats.

Why is it that everyone wants to add complexity to a function that I was using 7 years ago on the web and have never had a single security problem with?

Also, the growing trend is to open up your web systems and allow users to manipulate them in new ways. This type of client-side clamp-down of AJAX is unnecessary and runs counter to current trends.

Why do we need to wrap security around HTTP requests in any other way than we do for a regular web page? If someone can explain to me why building security into the right layer of an application would still leave my applications open to security risks via AJAX, I might change my position, but this article certainly didn’t.

Nice article, but there’s one problem with it: you fail to point out that most of it is not about security against malicious users directly accessing the web page, but about malicious scripts (CSS and stuff) that exploit web applications directly from the user’s browser.

That’s why everyone is bashing you for postulating POST is “better” than GET, which it is not when it comes to basic webapp security, but which it is when you try to do intrusion protection (like your scenario with the <img src=”thisWillExploitUrl.php?param1=deleteAllMessages”>).

I needed a while, and all your replies to the comments below the article, to understand what you are trying to say here, and yes, you are right: CSS stuff like the img examples is much more complicated to fake when using POST.

Anyway, nothing of this is about Ajax security, because the img-get-link scenario would just as well apply to an application that does <a href=”root.php?deleteAllMessages”> without using Ajax.

So, it just boils down to “beware, Ajax is just as (in)secure as the whole rest of your application, so make sure it’s secure”.

BTW, I think that the only thing that fixes CSS is validating input data correctly. When you let input through unfiltered, you are lost anyway.

1) Anyone not authenticating every request to the server is doomed. (Attacker can hit the server directly.)

2) Anyone allowing the end user to post any html content to a site that can be viewed by another user without parsing the html and reformatting it is doomed. (XSS)

3) Anyone not HTML-escaping their parameters (including file names, if files are fetched by name) is doomed. (Emailing an image or setting an image filename to something like “delete?all=true&.jpg”)
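For point 3, the minimal escaping step looks something like this (a sketch; the helper name is illustrative):

```javascript
// Escape the five HTML-significant characters in untrusted input before
// echoing it into a page; '&' must be replaced first so it does not
// re-escape the entities produced by the later replacements.
function escapeHtml(s) {
  return s.replace(/&/g, '&amp;')
          .replace(/</g, '&lt;')
          .replace(/>/g, '&gt;')
          .replace(/"/g, '&quot;')
          .replace(/'/g, '&#39;');
}
```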

All of these attacks were available before AJAX, it’s just that since people expose functionality with a finer grain, and have more functions exposed with AJAX, there will be more problems, and attackers may be able to perform some specific actions that they may not have been able to do before.

There is nothing wrong with trusting random cookies.

The best solution might be to create and map random URLs… This is a good argument for continuation-based frameworks, as they really need to create random URLs to identify which continuation to load (i.e. where to pick back up in the application).

I’m surprised that this article is getting the attention it has. I’ve seen it on Digg, and I think /. too.

If people are misinterpreting this article, it’s only because it’s poorly written. I could sum up this article in a couple of sentences, and it would have far more value to the reader:

People can submit whatever data they want to your xmlhttp responder. Servers can’t tell the difference between data sent by “your” script and data sent by an arbitrary source. Use the same security on submitted data that you do with all other web apps.

Before you go and call this guy an idiot or a moron, how about you check out this post on my blog and prove how much smarter you are than the rest of us.

At the end there’s a challenge. An insecure application is made secure JUST by changing it from allowing GETs to only allowing POSTs. If you can break the one that has the POST, then you’re justified in flaming this guy. If not, then think before you post. The point is that it’s possible to get the contents of a file across a domain if it’s available through a GET, but you can’t get the contents of that file if it’s only available through a POST. I can trick you into submitting a POST or a GET without your knowledge, but I can only get the results of a GET, not the results of a POST, because forcing the client to do a POST forces the browser to go to a new page. No POSTing behind the scenes, but GETs behind the scenes are A-OK. So there is a difference.

I’d just like to suggest that many people forget to secure the AJAX portion of their apps, because they think that it is less visible (or don’t even think at all).

I think that all web developers should make use of intercepting proxies such as WebScarab, Fiddler, Paros, Achilles, etc, throughout their development process, just to see exactly how easy it is to observe and modify the traffic “in flight”.

I’ve personally found few challenges with AJAX authentication. For me it’s not been too much different from authenticating a logged-in user on my pages.

Simply, when you request a page directly versus with AJAX, your browser is going to be sending the exact… same… headers (not counting GET and POST, of course). I simply authenticate a user the same way I do on other pages, and make the same security checks, which is easy to do if you have a simple include system set up for your CMS.

I don’t usually need to send important data via AJAX, and mostly that data is user-dependent, so why do I care if they see it? They can’t do a whole lot with the data. Even with guest users, most AJAX-run apps that my guests are allowed to use I don’t even need to auth, because it’s pretty straightforward data.

The only problem I’ve run into is creating an AJAX system for Admin Centers for which I do not already have a permission system in place. In that case I use an MD5 hash of the user/pass (with a salt!) stored in a MySQL ‘auth’ table as a passing var for the AJAX application (which could then be tied to a permissions setup). So they can’t guess that auth string unless they have the user/pass, in which case… they could just log in.

As for the “ease of spoofing” of GET versus POST: POST isn’t much harder, especially when you cheat and use the TamperData Firefox plugin, which is by far the most useful plugin for AJAX developers. Great for debugging.

to jordan, and anyone else saying “POST behind the scenes is impossible/difficult”

You’re wrong. I’ve written many, many worms and exploits that are able to POST data and get the response if necessary. It is not difficult at all with the XMLHttpRequest object. Here’s an example:
function postSomething() {
  var q = 'arbitrary=1&params=2&man=lol';
  var r = gX(); // gX() retrieves an XMLHttpRequest/MS XMLHTTP object
  r.open('POST', 'form.php', true);
  r.setRequestHeader('Content-type', 'application/x-www-form-urlencoded');
  r.setRequestHeader('Content-length', q.length);
  r.setRequestHeader('Connection', 'close');
  r.onreadystatechange = function() {
    if (r.readyState == 4 && r.status == 200) {
      alert(r.responseText); // Here's the response
    }
  };
  r.send(q);
}

By the way, I have a 2.2Ghz/1GB machine and this comment preview thing runs like I’m typing in Word XP on a 486. And your “<code>” tags are useless since the comment tool appears to strip out tabs and spaces at the beginning of the lines. I used &nbsp; to make the spaces :)

Navaho Gunleg,
Unlike a few other commentators, I find the issues you raise important and contextual. Sure, these issues have been around for some time now and do not apply only to Ajax communication, but that shouldn’t stop the topic and kill any discussion regarding this matter.

Your suggestion about a token has been on my mind as well. Furthermore, when the “token”/uniqueId is set as a session on the server, the session becomes explicitly bound to the browser window requesting the page. Which means that a faked request with the same token will be trivial to catch — but only if the faking window is not a descendant of the original window. (Which has led me to cancel CTRL+N and similar actions that might open a descendant window and thereby hijack the credentials/token.)

This enhances the security a nudge, but it is not a definitive solution.

Greasemonkey is also a problem; an attacker can build custom tools and inject code and attack the server easily.

I am hoping for a continued discussion on this matter and look forward for additional articles from you.

some people here get it. Of course it’s possible to
cryptographically sign every single GET request
(and using this very efficient technique, a GET is
much more easy to protect than POST) and it’s also
possible to apply a partial cryptographic
hash/checksum to POST request.

This has been done on some site since years…

The whole point is not to let the webserver accepting
requests for links that he hasn’t generated, has said
previously in this thread (re-read this sentence a
thousand time if necessary :)

I implemented that years ago for a client nervous
about security. Every URL link generated had
properties: bookmarkable, tied to user, tied to session,
etc.

If the server notice that the cryptographic checksum
isn’t correct it redirects to an error page. If it’s correct,
he then checks if it’s tied to a session, to a particular
user, etc.

Here’s the post I made back in the days to Usenet’s
comp.security.misc newsgroup when I had finished
the first implementation of that idea (post from 2003):

I found and implemented the technique myself (with some refinements suggested by others), and I've since found prior art in at least one book dating from before 2000.

Security has always been one of my main concerns; applying a cryptographic hash/checksum to all GET/POST requests seemed like security 101 to me (but go explain that to people who don't even have a clue what a "salt value" is used for in cryptography).

Now this technique has been in use on some serious websites developed by security-conscious people for years… And maybe, maybe, maybe, if someday some "web framework" developer gets a clue, it will start to become commonplace.

Done correctly, it effectively stops all the "request forger" kiddies dead in their tracks.

And this applies to XmlHttpRequests too: I've been using it for years on various webapps as well.

I don't say it's the be-all and end-all of web security, but it is a good beginning…

Now, of course, to do a correct implementation of such a technique (which my first try wasn't), you would have to look more into the crypto stuff than into the script-kiddies-I-forge-the-GET-request stuff.

This article needed much more research; still, it is good enough to get people thinking about the subject.

AJAX is no more or less secure than any other HTTP transaction.
POST will at least stop the average person from being malicious, though it is no protection against anyone with real knowledge of the area.

And again, AJAX technology is not new; only the nice acronym is.
IIRC, MS put the XmlHttpRequest object into IE in 1999. I made a website utilising it in 2001 and had plans for a CMS using it.

Lonedroid: Yeah, I implemented exactly such a system some years ago, as a DSO for Apache.

When somebody made an 'evil' request, for instance for 'default.ida' and the like, it also blocked that user for 24 hours from any website running on that machine, telling the user it's either infected or should stop making such requests.

Such a module doesn't just add an extra level of security; it also prevents people from leeching the site.
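The blocking behaviour described above might look something like this in outline. This is a minimal sketch under my own assumptions (in-memory table, made-up names); the commenter's real module was an Apache DSO in C:

```python
# Sketch of a 24-hour block: once a client requests a known-bad path
# such as "default.ida", every further request from that client is
# refused until the block expires.
import time

BAD_PATHS = {"/default.ida"}
BLOCK_SECONDS = 24 * 60 * 60
blocked = {}  # client address -> time of offence

def allow(client, path, now=None):
    """Return True if the request may proceed, False if it is blocked."""
    now = time.time() if now is None else now
    if client in blocked and now - blocked[client] < BLOCK_SECONDS:
        return False  # still serving out the 24-hour block
    if path in BAD_PATHS:
        blocked[client] = now  # record the offence and refuse
        return False
    return True

assert allow("10.0.0.1", "/inbox", now=0.0)
assert not allow("10.0.0.1", "/default.ida", now=1.0)  # offence recorded
assert not allow("10.0.0.1", "/inbox", now=100.0)      # still blocked
assert allow("10.0.0.1", "/inbox", now=1.0 + 86401.0)  # block expired
```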

In fact, I have always wanted to take that one step further and encrypt the request itself (well, in fact the URLs would be random, with the real location stored on the server).
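That "random URL" idea could be sketched as follows; this is an illustrative assumption of mine (the `/r/` prefix and table layout are made up), not an existing implementation:

```python
# Opaque random URLs: the server hands out random identifiers and keeps
# the real locations in a server-side table, so an attacker cannot
# construct a valid URL at all.
import secrets

url_table = {}  # opaque id -> real internal location

def publish(real_location):
    """Mint a random public URL for a real internal location."""
    opaque = secrets.token_urlsafe(12)
    url_table[opaque] = real_location
    return "/r/" + opaque

def resolve(request_path):
    """Map a public URL back to its real location, or None if unknown."""
    if not request_path.startswith("/r/"):
        return None
    return url_table.get(request_path[len("/r/"):])

link = publish("/mail/delete/42")
assert resolve(link) == "/mail/delete/42"
assert resolve("/r/not-a-real-id") is None
```

Unlike the HMAC scheme, this requires server-side state per link, which is the main trade-off.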

Our friend also again fails to see the point. Sure, one could use SSL. So somebody implements a script for the XmlHttp object to interface with on an HTTPS site, for the sake of argument as a GET request, identifying users merely by cookies.

Yay: I can simply call an 'https://' location in an image I embed on a website, and the action is performed regardless of whether the traffic could be sniffed. (For the sake of argument we'll ignore that some browsers warn when secured and non-secured content is mixed.)

But I hope you see the difference.

xero: Yeah, it's been around, but its application has really taken off only in the last couple of years, as support got implemented in non-IE browsers as well. And because everybody sees it on other websites, some developers rush to implement it hastily.

The solution is actually to granularise and control the browser's ability to execute Javascript. This can be done with tighter ACLs around such potentially abusable functionality and the signing of all legitimate Javascript; that is to say, by controlling the ability of malicious code to be executed.

Even if the contents aren't exactly ground-breaking, many people forget to secure the AJAX portion of their apps because they think it is less visible. It never hurts to be reminded of important security concerns in web development.

If the XmlHttp interface is protected merely by cookies, exploiting it is all the easier: the moment you get the browser to make a request to that website, it happily sends any cookies along with it.
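One common mitigation for this cookie problem (my own addition, not from the comment) is to require a custom header: script on the same origin can set one via XmlHttpRequest, but a cross-site `<img>` or `<form>` request cannot. A minimal server-side check might look like:

```python
# Reject requests that lack the header only our own AJAX code sets.
# Cookies ride along automatically with any request the browser makes,
# but custom headers do not: an embedded <img> or auto-submitted <form>
# cannot add them.
def is_genuine_xhr(headers):
    return headers.get("X-Requested-With") == "XMLHttpRequest"

# Our own AJAX call sets the header; a forged cross-site request cannot.
assert is_genuine_xhr({"Cookie": "sid=abc",
                       "X-Requested-With": "XMLHttpRequest"})
assert not is_genuine_xhr({"Cookie": "sid=abc"})  # cookie present, header absent
```

This is a heuristic rather than a complete defence, but it raises the bar considerably for the image-embedding trick described earlier in the thread.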

Wow! I've just read half of the comments, and most of the people commenting are idiots! Why oh why would you write stupid things like "AJAX POST/GET is the same as FORM POST/GET"?! I don't want to belittle people, but if your framework doesn't have built-in security for POST/GET, you're not professional. Hobby programmers should shout somewhere else.
While it might seem the same, the problem is that I have only seen Microsoft ASP.NET AJAX Extensions (puh, what's up with the long names, MS?) incorporate AJAX security in a formal manner, meaning that it's very difficult to do it wrong. PHP and Ruby on Rails, meanwhile, are such simple frameworks (I've tried them both) that it's easy to incorporate the latest net fad, but none of them are really enterprise-ready (yet!).

As for the article, it's nice and all, but I have to agree with the lot: it doesn't really offer many new perspectives. But that's no reason to comment like an arse (joining the idiots, lol).

I would like to introduce you to a new concept, http://www.visualwebgui.com, which eliminates most of AJAX's soft spots by simply returning to server-based computing while still having a dynamic AJAX-based UI.

I've been reading and learning a lot about AJAX from kind people like you all for almost six months now. I have just finished my first AJAX web page (still under my own testing!) and solved most of the problems using code snippets from here and there, only to face a "killer problem" nobody has discussed until now (to my knowledge):

* The requestFile is a script to deliver data.
* Anybody can access this script and steal my data.

I have searched and read a lot trying to find a way to prevent this, but to no avail. With Firefox plus extensions, or server-to-server scripts, no one can stop this. Please correct me if I am wrong!

Applying SSL is mandatory for websites handling sensitive data, but it is by no means a solution to the AJAX security problem: the main issue is not hijacking the network chatter but rather the client-side code, which interacts with the various services and can be easily manipulated. In the old days, when all the logic and data processing was done on the server, SSL would do just fine. So as long as services are being consumed from script code, data is manipulated, and business logic runs on the client, we should take the bulletproof title away from AJAX.

Unfortunately you are completely right, Guy… I wish that Firefox would support HTTP_REFERER in its XMLHttpRequest object in the near future, so that AJAX scripts can be tied to the serving host; note that all browsers support it except Firefox. I will go to mozilla.org and post a wish with an explanation.

For now, all I can do is include an HTML copyright comment inside the results and append add-on links to them, such as "Suggest Something" and "Tell a Friend", so that when a user clicks one, s/he will go to my website in case the results are displayed somewhere else!

“Analysing HTTP traffic with tools like Ethereal (yeah, I like GUIs, so sue me) surely comes in handy to figure out whether applications you use are actually safe from exploitation. This application allows one to easily filter and follow TCP streams, so one can properly analyse what is happening there.”