The plot thickens. The fellow who approached me with this in the first place maintains that he was using IE8 on both a Win 7 box and an XP box. The XP system downloaded good, usable results files; the Win 7 box produced bloated, unmanageable ones.

He bought a Win 7 box to progress with his project, but instead it's stopped the project dead in its tracks. He's still got the XP box, but the future is with Win 7, so that's what he'd prefer to use. Understandable, I believe.

True. wget is a bit more powerful, and I'd use it instead when you need to grab a bunch of different files from a web server and filter out everything that isn't a .htm or .html file. Here's an example I used recently:

I have a server that holds install and configuration files for the Linux desktops I deploy and manage. One of my automated installs needs to grab the latest NVIDIA driver from that install server. The file name is not constant, but it always ends in a .run extension, so I use the following command to grab it:

wget -r -nH -np -nd -A run http://yum1:8080/nvidia/

The options basically say "look at all the files in the nvidia directory on that web server but only grab the ones that have a '.run' file extension." This works since I only ever keep one in there.
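For anyone new to those switches, here's the same command with each flag annotated, plus a sketch of the .htm/.html filtering mentioned above (the second command and its example.com URL are just placeholders of mine, not part of his setup):

# -r        recurse through the directory listing
# -nH       don't create a host-name directory (yum1:8080/) locally
# -np       never ascend to the parent directory on the server
# -nd       don't recreate the remote directory tree; save everything in the current directory
# -A run    accept only files whose names end in "run"; the index.html wget fetches to follow links gets deleted afterward
wget -r -nH -np -nd -A run http://yum1:8080/nvidia/

# same idea, but keeping only .htm and .html files (placeholder URL)
wget -r -nH -np -nd -A htm,html http://example.com/docs/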


That'll create a file named "index.html" in your current directory, so it's easier than curl for grabbing a single document. With curl you have to at least tell it what name to save the file under, or it'll just write the page to the screen.

Of course, you can tell wget to save under a different name by just giving it the "-O filename.html" option too (that's a capital O; lowercase -o sets wget's log file instead).
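To put the two side by side, here's roughly how they behave against that same nvidia directory (drivers.html and somefile.run are made-up names, just for illustration):

wget http://yum1:8080/nvidia/                    # saves the directory listing as index.html
wget -O drivers.html http://yum1:8080/nvidia/    # same download, saved as drivers.html instead
curl http://yum1:8080/nvidia/                    # curl just prints the page to the screen
curl -o drivers.html http://yum1:8080/nvidia/    # lowercase -o tells curl where to save it
curl -O http://yum1:8080/nvidia/somefile.run     # capital -O keeps the remote file's own name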