When run recursively, the W3C link checker re-checks links it has
already checked.
E.g.:
www.foo.com/page2.html links to www.foo.com/index.html
www.foo.com/page3.html links to www.foo.com/index.html
Checking the link for www.foo.com/index.html a second (or third, or
fourth, or one thousandth) time is time-consuming and seems
unnecessary.
Are there any plans to cache results for particular links and check
the cache before sending a duplicate page request?
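For what it's worth, the behavior I have in mind is just a per-URL result
table consulted before any request goes out. A minimal Python sketch of the
idea (fetch_status is a hypothetical stand-in for the real HTTP check, here
counting its calls to show the second lookup never touches the network):

```python
call_count = 0

def fetch_status(url):
    """Stand-in for the real HTTP request; counts how often it runs."""
    global call_count
    call_count += 1
    return 200  # pretend every link resolves OK

_cache = {}

def check_link(url):
    """Return the status for url, performing the actual check only once."""
    if url not in _cache:
        _cache[url] = fetch_status(url)
    return _cache[url]

# page2.html and page3.html both link to index.html;
# only the first check should trigger a request.
check_link("http://www.foo.com/index.html")
check_link("http://www.foo.com/index.html")
```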
Thanks,
Chris