Hi,
I couldn't see a way to crawl an entire site and validate each of
its pages, instead of validating each individual page by
hand.
This would mean the validator would need to recursively visit each link
on a site. A few things to consider would be:
o A limit to the depth to which it visits links
o Do not visit directories that are above the first document targeted
o Do not visit links that are on different servers
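To make the idea concrete, here is a minimal sketch (in Python) of a crawler
honouring those three rules. The `fetch` callable, the `example.org` URLs, and
all function names are assumptions for illustration only; in a real validator,
`fetch` would issue HTTP requests and each visited URL would be handed off to
the page validator.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag in a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, fetch, max_depth=3):
    """Recursively visit start_url and the pages it links to.

    `fetch` is a callable url -> HTML string (injected here so the
    sketch stays testable without a network).  Returns the list of
    URLs visited, each of which would be validated in turn.
    """
    start = urlparse(start_url)
    # Rule: only descend below the directory of the first document.
    root_dir = start.path.rsplit("/", 1)[0] + "/"
    visited, seen = [], set()

    def visit(url, depth):
        # Rule: limit the depth to which links are followed.
        if depth > max_depth or url in seen:
            return
        parsed = urlparse(url)
        # Rule: skip links on different servers.
        if parsed.netloc != start.netloc:
            return
        if not parsed.path.startswith(root_dir):
            return
        seen.add(url)
        visited.append(url)  # a real tool would validate the page here
        parser = LinkExtractor()
        parser.feed(fetch(url))
        for href in parser.links:
            visit(urljoin(url, href), depth + 1)

    visit(start_url, 0)
    return visited
```

For example, crawling a two-page fake site keeps only same-server links below
the starting directory:

```python
site = {
    "http://example.org/docs/index.html":
        '<a href="a.html"></a>'
        '<a href="/other.html"></a>'
        '<a href="http://elsewhere.org/x.html"></a>',
    "http://example.org/docs/a.html": '<a href="index.html"></a>',
}
pages = crawl("http://example.org/docs/index.html",
              lambda u: site.get(u, ""))
# pages: index.html and a.html only; /other.html and elsewhere.org skipped
```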
At the end of the validation, a summary would be presented for each document
visited. This would be _very_ helpful!!!
ta,
John.
--
John Papandriopoulos http://jpap.cjb.net/
5th Year DD in Communications Eng/Computer Science jpap@cs.rmit.edu.au