To apply the site analysis scanner to this kind of testing, you need regular expressions whose matches against the incoming pages will indicate the existence of a [potential] threat or vulnerability. Typically you scan some part of a website -- we recommend no more than 1,000 pages at a time -- and harvest the matches produced by the regular expression filter, which is applied automatically to the complete HTML of each page as eValid retrieves each page of the specified website or sub-website.
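The filtering idea can be sketched outside of eValid as well. The following Python sketch shows the general technique -- a set of threat-indicator regular expressions applied to the raw HTML of each retrieved page. The pattern names and sample pages here are illustrative assumptions, not eValid filter syntax; real filters would be tuned to the specific vulnerabilities being hunted.

```python
import re

# Hypothetical threat-indicator patterns (illustrative only).
THREAT_PATTERNS = {
    "inline-eval": re.compile(r"eval\s*\(", re.IGNORECASE),
    "document-write": re.compile(r"document\.write\s*\(", re.IGNORECASE),
    "password-field": re.compile(r"<input[^>]*type=[\"']?password", re.IGNORECASE),
}

def scan_page(url, html):
    """Apply every filter to the page's complete HTML and
    return (url, pattern_name, matched_text) for each hit."""
    hits = []
    for name, pattern in THREAT_PATTERNS.items():
        for m in pattern.finditer(html):
            hits.append((url, name, m.group(0)))
    return hits

# Sample pages standing in for a spidered sub-website.
pages = {
    "http://example.com/login": '<form><input type="password" name="pw"></form>',
    "http://example.com/app": "<script>eval(userInput);</script>",
}

for url, html in pages.items():
    for hit in scan_page(url, html):
        print(hit)
```

The harvested hits are only candidates; as noted below, each one still needs human review.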

But it's not magic. You still have to look at the data and determine if there is a real threat.

An alternative to the spidering capability is the special IndexFindElementEX command, which searches the current DOM contents for regular expression matches. This kind of search may be more appropriate if you are dealing with an AJAX application, because it can search not only the HTML in the page but also the JavaScript (JScript, JS) code that currently resides in the page.
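To illustrate why a DOM-level search catches more than a source-only scan, here is a minimal Python analogy (this is not IndexFindElementEX syntax, just a sketch of the idea): the stdlib HTML parser hands over the contents of script blocks along with ordinary page text, so a regular expression applied to the collected data also matches strings that live only inside embedded JavaScript.

```python
import re
from html.parser import HTMLParser

class DOMTextCollector(HTMLParser):
    """Collect all text the page carries, including <script> bodies,
    mimicking a DOM-level search rather than an HTML-markup-only scan."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

def find_in_dom(html, pattern):
    """Search both page text and embedded JavaScript for the pattern."""
    collector = DOMTextCollector()
    collector.feed(html)
    regex = re.compile(pattern)
    return [m.group(0)
            for chunk in collector.chunks
            for m in regex.finditer(chunk)]

page = '<p>hello</p><script>var token = "SECRET-12345";</script>'
print(find_in_dom(page, r"SECRET-\d+"))  # the script body is searched too
```

In a live AJAX application the interesting content is often injected into the DOM after load, which is why searching the current DOM state, rather than the originally served HTML, is the more thorough approach.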