If you are only asking about links, why not parse out only the <a> tags? An anchor tag has to exist with the correct URL for it to link to another page, so that would quickly and easily filter out everything else. The only problem left to figure out is when someone posts a URL in plain text without a link.

The logical expression for the filter would be: find <a, then find href=, then return the URL. Match this against the mask and place it in the xxxx file.
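A minimal sketch of that filter in JavaScript, assuming the "mask" is a regular expression and that a regex over the raw HTML is acceptable (a real HTML parser would be more robust):

```javascript
// Sketch of the "find <a, find href=, return the URL" filter described
// above. The mask argument is assumed to be a RegExp the URL must match.
function extractLinks(html, mask) {
  const hrefRe = /<a\b[^>]*\bhref\s*=\s*["']([^"']+)["']/gi;
  const links = [];
  let m;
  while ((m = hrefRe.exec(html)) !== null) {
    if (mask.test(m[1])) links.push(m[1]); // keep only URLs matching the mask
  }
  return links;
}

// Example: keep only links pointing at example.com
const html = '<p><a href="http://example.com/a">one</a> ' +
             '<a href="http://other.org/b">two</a></p>';
console.log(extractLinks(html, /example\.com/));
// → [ 'http://example.com/a' ]
```

The regex will miss unquoted or JavaScript-generated links, which is why collecting links from inside the browser (as discussed below in the thread) is more reliable.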

Unfortunately I don't know any coding for that beyond the HTML for the links themselves, but I hope that helps.

steeladept: I think that kalos is referring to collecting the links as he browses the web; thus, the links would have to be collected directly from the web browser or something like that. At least, that's what I understand.

URL Snooper can nicely grab and list all links on the pages you visit. Just click on Sniff, go to your browser and start browsing. You can even tell URL Snooper to filter the listed links with any text you want. It is a great piece of software, but unfortunately I wanted it to go through all the pages and save the links without me surfing those pages. Pity it can't do this.

URL Snooper is a great program, but not exactly the way to go for this situation, for three reasons:

1) it doesn't integrate within the browser
2) it doesn't autosave files, text or links
3) it sniffs the network, trying to get "hidden" files, while I only need to see what is seen within the browser, nothing more

I was looking for a Firefox extension or an Opera user JavaScript that will see the text and links of each webpage I visit and save specific links/text/files

there is no need to sniff the network in order to catch "hidden" URLs etc.; it's overkill

the problem is that most programs that attempt to do what I need (offline browsers, etc.) require you to enter a starting web address, then specify retrieval options, and then let the program do the job

but what I want is a solution integrated into the browser that does this "as I browse"

an auto-bookmarker, an auto-file-saver, an auto-text-saver that will save info automatically as I browse the net

first, I need something that will "monitor" every webpage I visit as I browse the net. This monitoring has to be very accurate, of course, which means it must not miss any webpage, even webpages that are only partially loaded

by "monitoring webpages" I mean grabbing the text, links and files of every webpage I visit:
- by "the text of the webpage", I mean the text that is highlighted/selected when we press Ctrl+A on a webpage (including any other "hidden" text, etc.)
- by "the links of the webpage", I mean the links that are grabbed when we hit Ctrl+Alt+L in Opera, or by any other method that shows all the links of the webpage (including any "hidden" links, JavaScript links, etc.)
- by "the files of the webpage", I mean the files included in the folder that is created when we save a webpage, which produces an HTML file and a folder (and any other hidden files, embedded files, etc.)

as far as I know (and if you know something else, please inform me), the available methods that can monitor web browser traffic are these:
- JavaScript can monitor webpages as I browse the net (Opera, for example, supports user JavaScript like document.addEventListener('DOMContentLoaded', function() { ... }), which does things when webpages are loaded)
- an internet connection sniffer can monitor webpages as I browse the net and sniff URLs
- a web proxy can monitor webpages as I browse the net, as it works as a caching proxy
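A rough sketch of the Opera user JavaScript approach mentioned above: on DOMContentLoaded, collect every link on the page. The saving step is a placeholder, since page scripts cannot write to disk on their own; a real setup would need an extension API or a local proxy to receive the data.

```javascript
// Pure helper: pull href values from any array-like of anchor elements.
function collectLinks(anchors) {
  return Array.from(anchors, function (a) { return a.href; });
}

// Browser-only part: register the listener when a DOM is available.
if (typeof document !== 'undefined') {
  document.addEventListener('DOMContentLoaded', function () {
    const links = collectLinks(document.getElementsByTagName('a'));
    // Placeholder for saving -- a page script can't write files itself.
    console.log('collected ' + links.length + ' links');
  });
}
```

Because the listener fires per page load, a page that never finishes loading (or that adds links later via scripts) could still be missed, which matches the accuracy concern raised above.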

then I need to apply filters to specify which of the text, links and files are useful, and then save the filtered information

If you're using IE6 or 7, the browser's cache is just a collection of files that can be accessed via the file system. I imagine any file-search utility that supports regular expressions (FileLocator Pro?) could suss out the patterns you've described. I know for a fact that UltraEdit's file-search feature will do this.

This is not real-time scanning, but you could kick off such a search after your browsing session is complete.

VisitURL is not a fully-fledged bookmark manager. It is not a replacement for Netscape's bookmark file or Internet Explorer's Favorites. It does not organize bookmarks into categories.

VisitURL is designed to help maintain a handy (as in: at hand) list of URLs that you intend to visit. For instance, if a friend sends you an email recommending a particular URL, Visit is a good place to store the URL until you are ready to launch your browser and go surfing. If you copy the URL to the clipboard, Visit will automatically intercept it and save it to its database. If you copy several URLs at once, Visit will get all of them. (You may also add URLs manually or directly from an open browser window.) If you copy a URL with some text around it, Visit will optionally treat that text as a description for the URL you copied.

To access a bookmarked site, you can either view the HTML page that Visit creates in your browser, or click a toolbar button to launch the browser directly from Visit. There is no limit to how many URLs you may store, though the program is primarily designed to hold, view and edit a short, temporary list. Netscape and Explorer tend to consume so many system resources that it's not practical to keep them loaded at all times - this is where Visit comes in.