
Hi, we have a huge number of duplicate files in a folder and I would like some pointers on writing a bash script to create a list of the duplicates. I've seen examples that check the md5 sum of files, but I don't need that; the file name is enough. Can someone please help me?

LOL! True, Mr. Telengard. I meant in subfolders. So I have the directory /storage, which holds about 10 subfolders, each of which holds around 3 more subfolders with around 300+ files in each. Messy, I know. So the duplicates are between subfolders.
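Something along these lines might do it. An untested sketch; it assumes GNU find and coreutils, and that filenames contain no newlines:

#!/bin/bash
# List basenames that occur more than once anywhere under /storage.
find /storage -type f -printf '%f\n' | sort | uniq -d > duplicate_names.txt

# Then show the full paths of each duplicated name. Note that -name treats
# the name as a glob pattern, so names containing *, ? or [ could misfire.
while IFS= read -r name; do
    find /storage -type f -name "$name"
done < duplicate_names.txt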

I bet 3 Internets that someone else will have a much more elegant solution for you by tomorrow.

Hi Telengard, your script is good, but there's a little problem: every duplicated filename is displayed twice (obviously: if a basename appears twice in the list, it matches twice when you grep the same list).
Starting from your script (thanks!), I made a small change so that each duplicated couple is displayed only once. In my variant, the file sizes are also shown.
Hoping to help somebody else, I paste the code hereafter:
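Something along these lines; it's a sketch that assumes GNU find and awk, and filenames without tabs or newlines:

#!/bin/bash
# Group files by basename; print each group of duplicates only once,
# together with each file's size in bytes.
find /storage -type f -printf '%f\t%s\t%p\n' | awk -F'\t' '
    { count[$1]++; lines[$1] = lines[$1] "  " $2 " bytes  " $3 "\n" }
    END {
        for (n in count)
            if (count[n] > 1)
                printf "%s:\n%s\n", n, lines[n]
    }
'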

Right, I have a better result now.
I cut and pasted the original code into Windows Notepad, got into a state with the line breaks, and learned about dos2unix; but anyway, I've now got it precisely into Linux, and it does run.
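For reference, fixing the Windows line endings looks something like this (script.sh is a placeholder name; the sed variant assumes GNU sed):

# Convert Windows (CRLF) line endings to Unix (LF):
dos2unix script.sh
# Or, if dos2unix isn't installed:
sed -i 's/\r$//' script.sh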

Sorry to mess you about.

However, while I scan the terminal window I see messages saying it was unable to stat such-and-such a file, that no such folder exists, and that a file name is too long. Maybe that's expected when there is no duplicate?

So, since I have a series of subfolders with thousands of files, it would be even better if the lines for files with no duplicates were not written to the output file at all.
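One way to get both effects, as a sketch assuming GNU find (the 2>/dev/null only hides find's error messages rather than fixing their cause):

# uniq -d prints only names that occur more than once, so files without
# duplicates never reach the output file; stderr is discarded separately.
find /storage -type f -printf '%f\n' 2>/dev/null | sort | uniq -d > duplicates.txt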

# This section searches the hash for any filename.ext and size that is shared
# by 2 or more files. After that, it compares the contents of all the files
# with those attributes. It prints the list of files with the same filename
# and size, and reports which ones share the same contents.
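A sketch of that section might look like the following. It assumes bash 4+ (for associative arrays), GNU find and coreutils, and filenames without newlines; /storage is a placeholder path:

#!/bin/bash
# Group files by "basename|size"; for each group of two or more, compare
# contents via md5sum so identical files end up with identical hashes.
declare -A groups    # key: "name|size", value: newline-separated paths

while IFS=$'\t' read -r name size path; do
    groups["$name|$size"]+="$path"$'\n'
done < <(find /storage -type f -printf '%f\t%s\t%p\n')

for key in "${!groups[@]}"; do
    paths="${groups[$key]}"
    count=$(printf '%s' "$paths" | grep -c .)
    if (( count >= 2 )); then
        echo "Same name and size: ${key%|*} (${key##*|} bytes)"
        # Sorting by hash lines up the files that share the same contents.
        printf '%s' "$paths" | xargs -d '\n' md5sum | sort
        echo
    fi
done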