10 Wget (Linux File Downloader) Command Examples in Linux

In this post we review the wget utility, which retrieves files from the World Wide Web (WWW) using widely used protocols such as HTTP, HTTPS and FTP. Wget is a freely available package licensed under the GNU GPL. It can be installed on any Unix-like operating system, as well as on Windows and Mac OS. It is a non-interactive command-line tool. Wget's main feature is its robustness: it is designed to work over slow or unstable network connections, automatically resuming a download where it left off after a network problem and retrying until the file has been retrieved completely. It can also download files recursively.

10 Linux Wget Command Examples

First, check whether the wget utility is already installed on your Linux box using the following command.

# rpm -qa wget
wget-1.12-1.4.el6.i686

If wget is not installed already, install it using the YUM command, or download the source package from http://ftp.gnu.org/gnu/wget/.
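For example, on an RPM-based distribution such as the CentOS/RHEL box shown above, installation would typically look like this (the exact package manager depends on your distribution):

# yum install wget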

5. Resume uncompleted download

When downloading a big file, the transfer may sometimes stop partway through. In that case we can resume the download of the same file from where it left off with the -c option. If you restart the download without specifying -c, wget treats it as a fresh download and, since the partial file already exists, saves the new copy with a .1 extension appended to the file name. So it's good practice to add the -c switch when you download big files, as in the example below.
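A minimal sketch of resuming an interrupted download; the URL here is only a placeholder for whatever large file you are fetching:

# wget -c http://example.com/path/to/bigfile.iso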

10. Find wget version and help

With the options --version and --help you can view the version and help output as needed.

# wget --version
# wget --help

In this article we have covered the Linux wget command with options useful for daily administrative tasks. Run man wget if you want to know more about it. Kindly share your thoughts through the comment box, or let us know if we've missed anything.

8 Responses

We are facing slowness when we are sending traffic using wget on Linux machines through TCL.
The transfer gets stuck in the middle when the file size is more than 100MB.
Hence we are unable to measure the bandwidth during the download.
Please suggest any way to address this issue.

That means it will go through all the links on the website. So, for example, if you have a website with links to more websites, it will download each of those and any other links found on those sites. You can set the number of levels, etc. (reference http://www.gnu.org/software/wget/manual/html_node/Recursive-Retrieval-Options.html ). This is actually how Google works, but for the whole internet: it follows every link on every website to every other one. Also, with a few more options you can download a whole site and make it suitable for local browsing, so if you have a multi-page site that you use often, you could retrieve it recursively and then open it up even without an internet connection. I hope that makes sense (the tl;dr version is that it follows every link on that website to more links and more files, in a 'tree').
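As a rough sketch of the kind of recursive fetch described above (example.com is just a placeholder, and the depth of 2 is an arbitrary choice):

# wget -r -l 2 -k -p http://example.com/

Here -r enables recursive retrieval, -l 2 limits it to two levels of links, -k converts the links in the downloaded pages so they work locally, and -p also grabs page requisites such as images and CSS, which is what makes the saved copy usable for offline browsing.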