Sorry for being really far OT this time, folks, but I think it's an interesting problem that cannot be solved with Perl as the only tool, but where, as always, Perl can be helpful.

A document fetched from a remote server may carry misleading date information: it may have "old" content but claim to be freshly generated, because the server parsed and changed it on the fly, or because a content management system regenerated it after, say, a layout change that touched the document as well. Worse still, the document may not exist at all and instead be generated on request from some data source.

No, I have no Perl solution to this: searching for "Document last modified" or "Last Updated" gives no guarantee of finding such information, especially if you also include documents in other languages in your search.
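To illustrate why such a search is unreliable, here is a minimal sketch of the kind of scraping one might try. The phrase list and date pattern are assumptions; real pages phrase (or omit) this in countless ways and languages, which is exactly the problem:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Naive sketch: look for common English "last modified" phrases in
# fetched HTML. Unreliable by design -- phrasing varies, and pages
# in other languages are not covered at all.
sub find_claimed_date {
    my ($html) = @_;
    if ($html =~ /(?:last\s+(?:modified|updated)|document\s+last\s+modified)
                  \s*:?\s*([\w\s,\/\-\.]+)/ix) {
        my $date = $1;
        $date =~ s/\s+$//;   # trim trailing whitespace
        return $date;
    }
    return undef;            # nothing recognizable found
}

my $page = '<p>Last updated: 2003-05-12</p>';
print find_claimed_date($page) // 'no date found', "\n";
```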

The only thing I could think of would be "something like a webservice" via XML-RPC or the like, where a server answers queries for a document URL with an appropriate last-modified date.

But that's just an idea for avoiding conflicts between the subjective and objective modification dates of a document.

The much simpler approach would be to add a 'modified' or 'updated' attribute to, let's say, a 'div' tag. Then, for example, any node I visit on perlmonks.org would carry its date with it (which it actually does, so it is easy to parse), but it is not always wanted to have that date displayed. If, for example, you wanted to search through merlyn's WebTechniques columns, it would be helpful if he had added such an attribute to each article; this, by the way, also gives the ability to mix "old" content and "newer" content, so it is not necessary to re-search the old content again. A small example to get to the end:
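A sketch of what this could look like. Note that a 'modified' attribute on a div is an invented convention here, not part of HTML, and the regex-based extraction is a deliberate simplification (a real spider would use HTML::Parser or similar):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical markup convention: each article div carries its own
# date, e.g. <div modified="2003-05-12">...</div>, without that date
# necessarily being displayed to the reader.
sub extract_modified {
    my ($html) = @_;
    my @dates;
    while ($html =~ /<div[^>]*\bmodified="([^"]+)"/ig) {
        push @dates, $1;
    }
    return @dates;
}

# Old and newer content mixed in one page, each self-dated:
my $doc = <<'HTML';
<div modified="1999-08-01">old column about CGI</div>
<div modified="2003-05-12">new note appended later</div>
HTML

print join(", ", extract_modified($doc)), "\n";  # 1999-08-01, 2003-05-12
```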

A document fetched from a remote server may carry misleading date information: it may have "old" content but claim to be freshly generated, because the server parsed and changed it on the fly, or because a content management system regenerated it after, say, a layout change that touched the document as well. Worse still, the document may not exist at all and instead be generated on request from some data source.

I'm not 100% clear on what you think the problem is. If you're trying to detect whether a remote server is presenting new content for a page, and are being foiled by automatically generated timestamps in headers or footers (or elsewhere on the page), and you really, really need to know whether the content has changed, then I see two options for you.

First, write page-specific processing code that strips out the dynamic parts. Then compute an MD5 hash of what's left. If that hash hasn't changed since the last time you looked, you don't have new content.
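A minimal sketch of that idea, assuming the only dynamic part is a "Generated on ..." footer (the stripping rule is page-specific and invented for this example):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

# Strip the parts known to change on every request, then hash what
# remains. An unchanged hash means no real new content.
sub content_fingerprint {
    my ($html) = @_;
    # page-specific rule: drop the dynamic timestamp footer
    $html =~ s/Generated on [^<]+//g;
    return md5_hex($html);
}

my $monday  = "<p>Article text.</p><p>Generated on Mon May 12</p>";
my $tuesday = "<p>Article text.</p><p>Generated on Tue May 13</p>";

print content_fingerprint($monday) eq content_fingerprint($tuesday)
    ? "no new content\n"
    : "content changed\n";
```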

The other approach is to use Algorithm::Diff to do a diff, then try to get smart (perhaps on a page-by-page basis) about which differences you really care about. For example, if the text fragments that differ look like dates or times, ignore them.
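That could be sketched along these lines, using Algorithm::Diff from CPAN. The date/time pattern is a rough, assumed heuristic and would need tuning per page:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Algorithm::Diff qw(diff);   # CPAN module

# Diff the two page versions line by line; a hunk counts as
# significant only if it contains a changed line that does not
# merely look like a date or a time.
sub significant_change {
    my ($old, $new) = @_;
    my @hunks = diff([split /\n/, $old], [split /\n/, $new]);
    for my $hunk (@hunks) {
        for my $change (@$hunk) {
            my (undef, undef, $line) = @$change;
            return 1
                unless $line =~ /\b\d{1,2}:\d{2}\b|\b\d{4}-\d{2}-\d{2}\b/;
        }
    }
    return 0;   # every difference looked like a timestamp
}

my $old = "Welcome\nLast build: 2003-05-12\nArticle text\n";
my $new = "Welcome\nLast build: 2003-05-13\nArticle text\n";

print significant_change($old, $new)
    ? "real change\n"
    : "only dates changed\n";
```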