
I'm trying to download a 1 GB file from an FTP server with FileZilla, and the connection seems to hit some kind of poll delay or idle timeout: it gets reset every now and then.

I think having this functionality would help a lot in this type of scenario. aria2c has it (e.g. --lowest-speed-limit=1K), but it seems to have some portability issues on Win32, while wget seems to work better there.

Yes, it does represent a fix to the client for a server issue. Generally, when using wget, I don't control the server, and can't fix the server. I often use wget for its ability to retry and resume partial downloads, to cope with unreliable servers. This feature represents another way to help cope with unreliable servers.

What I dislike about it is that it's a fix in the client for what amounts to a network (or, more likely, server) issue... and that it's a specific fix to a general problem. Fixing it in the client means fixing every client (not all by one person, obviously; but it means that being a proper client includes reinventing this wheel in every client). I'd much rather spend effort determining whether there are possible system-level fixes than ask every network client to deal with the issue.

That said, I'd be willing to consider it if there's popular support. Try the mailing list (bug-wget@gnu.org); curl's maintainer also hangs out there, and could perhaps expound on why he chose to include it there.

I'm unlikely to implement this myself, but if there's a demand for it, and a patch is supplied, then I'd include it.

It seems much harder for a script to implement this functionality: the script would need to parse wget's output, determine the transfer speed, track the last few speed samples over time, and restart wget when they all fall below a threshold. wget, on the other hand, already has this information readily available.
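To illustrate how hackish the script route gets, here is a rough sketch of such a wrapper. It sidesteps parsing wget's output by watching the output file's growth instead; the function name, thresholds, and the commented usage are all illustrative, not anything wget provides:

```shell
#!/bin/sh
# Sketch of a stall watchdog: run a download command in the background and
# kill it if the output file grows by less than min_bytes per interval.
# An outer retry loop (with wget -c) then resumes the partial download.
watch_stall() {
    out=$1; min_bytes=$2; interval=$3; shift 3
    "$@" &                            # launch the downloader
    pid=$!
    prev=0
    while kill -0 "$pid" 2>/dev/null; do
        sleep "$interval"
        cur=$(wc -c < "$out" 2>/dev/null || echo 0)
        if [ $((cur - prev)) -lt "$min_bytes" ]; then
            kill "$pid" 2>/dev/null   # stalled: abort so the caller can retry
            wait "$pid" 2>/dev/null
            return 1
        fi
        prev=$cur
    done
    wait "$pid"                       # propagate the downloader's exit status
}

# Illustrative usage: retry until the transfer finishes at a usable speed.
# until watch_stall file.iso 10240 5 wget -c -O file.iso "$URL"; do
#     echo "stalled or failed, retrying..." >&2
# done
```

Even this simplified version has to babysit a background process and guess at a sensible threshold, which is exactly the information wget already tracks internally.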

wget wouldn't need to have any extra logic to restart the transfer; the existing logic for retrying would suffice. wget would just need to abort a transfer based on the transfer speed.

Also, I proposed this feature because curl already has it: the --speed-time and --speed-limit options. curl will abort a transfer if the speed stays below --speed-limit for longer than --speed-time. This is one of the few reasons I still occasionally use curl instead of wget.
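For reference, a curl invocation using those options looks something like the following (the URL and filename are illustrative; --speed-limit takes bytes per second, --speed-time seconds):

```shell
# Abort if the transfer averages under 10 KB/s for 30 seconds; -C - resumes
# from the partial file, and --retry re-attempts after transient failures
# (a speed-limit abort counts as a timeout, which curl treats as transient).
curl -C - --retry 5 --speed-limit 10240 --speed-time 30 \
     -o file.iso https://example.com/file.iso
```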

Why do you not want this functionality in wget?
Would you consider adding this functionality if you had a patch for it?

--8<--
Some sites or networks fail in ways where a connection drops to a
trickle (a few hundred or thousand bytes per second) but does not
actually die; this can happen, for instance, if few or no network
packets get through but no TCP disconnect occurs. Killing wget and
restarting it (always using -c) fixes the problem, but requires
manually babysitting the download or writing a hackish script to do
so. It would help to have a wget option which monitors the download rate and treats the connection as failed if the rate drops below a given threshold for a given time (for instance, under 10Kbps for more than 5 seconds).
--8<--