How-To Geek

If you’re a Linux user, you’ve probably heard that you don’t need to defragment your Linux file systems. You’ll also notice that Linux distributions don’t come with disk-defragmenting utilities. But why is that?

To understand why Linux file systems don’t need defragmenting in normal use – and Windows ones do – you’ll need to understand why fragmentation occurs and how Linux and Windows file systems work differently from each other.

What Fragmentation Is

Many Windows users, even inexperienced ones, believe that regularly defragmenting their file systems will speed up their computer. What fewer people know is why that is.

In short, a hard disk drive has a number of sectors on it, each of which can contain a small piece of data. Files, particularly large ones, must be stored across a number of different sectors. Let’s say you save a number of different files to your file system. Each of these files will be stored in a contiguous cluster of sectors. Later, you update one of the files you originally saved, increasing the file’s size. The file system will attempt to store the new parts of the file right next to the original parts. Unfortunately, if there’s not enough uninterrupted room, the file must be split into multiple pieces – this all happens transparently to you. When your hard disk reads the file, its heads must skip around between different physical locations on the hard drive to read each chunk of sectors — this slows things down.
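
The process above can be sketched with a toy model (illustrative only; real file systems are far more elaborate, and the names here are made up for the example). A "disk" is a list of sector owners; files are packed back to back, so when the first file grows, its new sectors must land past its neighbour, splitting the file in two:

```python
# Toy model of fragmentation. None means a free sector.
def allocate(disk, name, size):
    """First-fit allocation: claim the first free sectors found."""
    placed = 0
    for i, owner in enumerate(disk):
        if owner is None:
            disk[i] = name
            placed += 1
            if placed == size:
                return

def extents(disk, name):
    """Count the contiguous runs (extents) a file occupies."""
    runs, prev = 0, None
    for owner in disk:
        if owner == name and prev != name:
            runs += 1
        prev = owner
    return runs

disk = [None] * 16
allocate(disk, "A", 4)     # file A fills sectors 0-3
allocate(disk, "B", 4)     # file B is packed right behind it, sectors 4-7
allocate(disk, "A", 3)     # A grows: its new sectors must go after B
print(extents(disk, "A"))  # -> 2: file A is now fragmented
print(extents(disk, "B"))  # -> 1
```

Reading file A now requires the drive heads to visit two separate regions; defragmenting would move A's pieces back into one run.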

Defragmenting is an intensive process that moves the bits of files around to reduce fragmentation, ensuring each file is contiguous on the drive.

Of course, this is different for solid-state drives, which don’t have moving parts and shouldn’t be defragmented – defragmenting an SSD will actually reduce its life. And, on the latest versions of Windows, you don’t really need to worry about defragmenting your file systems – Windows does this for you automatically.

How Windows File Systems Work

Microsoft’s old FAT file system – last seen by default on Windows 98 and ME, although it’s still in use on USB flash drives today – doesn’t attempt to arrange files intelligently. When you save a file to a FAT file system, it is placed as close to the start of the disk as possible; the next file is placed right after it, and so on. When the original files grow, they almost inevitably become fragmented – there’s no nearby free room for them to grow into.

Microsoft’s newer NTFS file system, which made its way onto consumer PCs with Windows 2000 and XP, tries to be a bit smarter. It allocates more “buffer” free space around files on the drive although, as any Windows user can tell you, NTFS file systems still become fragmented over time.

Because of the way these file systems work, they need to be defragmented to stay at peak performance. Microsoft has alleviated this problem by running the defragmentation process in the background on the latest versions of Windows.

How Linux File Systems Work

Linux’s ext2, ext3, and ext4 file systems – ext4 being the file system used by Ubuntu and most other current Linux distributions – allocate files in a more intelligent way. Instead of placing multiple files near each other on the hard disk, Linux file systems scatter different files all over the disk, leaving a large amount of free space between them. When a file is edited and needs to grow, there’s usually plenty of free space for it to grow into. If fragmentation does occur, the file system will attempt to move files around to reduce it in normal use, without the need for a defragmentation utility.
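
The difference between the two placement strategies can be sketched in a deliberately simplified model (file names, sizes, and the 16-sector gap are invented for the example; real allocators are far more sophisticated). Packing files back to back forces every growing file to fragment, while leaving gaps lets each file grow in place:

```python
# Toy comparison: FAT-style packing vs. ext-style spacing on a 64-sector "disk".
def count_extents(disk, name):
    """Count the contiguous runs (extents) a file occupies."""
    runs, prev = 0, None
    for owner in disk:
        if owner == name and prev != name:
            runs += 1
        prev = owner
    return runs

def first_fit(disk, name, size):
    """FAT-style: grab the first free sectors, packing files together."""
    placed = 0
    for i, owner in enumerate(disk):
        if owner is None and placed < size:
            disk[i] = name
            placed += 1

packed = [None] * 64
spread = [None] * 64

# Save three 4-sector files.
for n, f in enumerate("ABC"):
    first_fit(packed, f, 4)                 # packed: A B C back to back
    for i in range(n * 16, n * 16 + 4):     # spread: each file gets a 12-sector gap
        spread[i] = f

# Now grow every file by 4 sectors.
for f in "ABC":
    first_fit(packed, f, 4)   # growth lands wherever space is left -> fragments
    end = max(i for i, o in enumerate(spread) if o == f) + 1
    for i in range(end, end + 4):
        spread[i] = f          # growth extends into the file's own gap

print([count_extents(packed, f) for f in "ABC"])  # -> [2, 2, 2]
print([count_extents(spread, f) for f in "ABC"])  # -> [1, 1, 1]
```

Every file on the packed layout ends up in two pieces; every file on the spaced layout stays in one.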

Because of the way this approach works, you will start to see fragmentation if your file system fills up. If it’s 95% (or even 80%) full, you’ll start to see some fragmentation. However, the file system is designed to avoid fragmentation in normal use.

If you do have problems with fragmentation on Linux, you probably need a larger hard disk. If you actually need to defragment a file system, the simplest way is probably the most reliable: Copy all the files off the partition, erase the files from the partition, then copy the files back onto the partition. The file system will intelligently allocate the files as you copy them back onto the disk.
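
The copy-off, erase, copy-back procedure can be sketched with nothing but coreutils. The demo below uses throwaway temp directories so it is safe to run; on a real system you would use your partition’s mount point and a backup drive with enough space, and double-check every `rm` before pressing Enter:

```shell
set -eu
data=$(mktemp -d)     # stands in for the fragmented partition
backup=$(mktemp -d)   # stands in for a drive with enough free space

printf 'hello\n' > "$data/file.txt"

cp -a "$data/." "$backup/"    # 1. copy everything off the partition
rm -rf "${data:?}"/*          # 2. erase the files from the partition
cp -a "$backup/." "$data/"    # 3. copy them back; the file system
                              #    re-allocates each file contiguously
cat "$data/file.txt"          # the data survives the round trip
```

The `-a` flag preserves permissions, timestamps, and symlinks, and `${data:?}` guards the `rm` against an unset variable wiping the wrong path.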

You can measure the fragmentation of a Linux file system with the fsck command — look for “non-contiguous inodes” in the output.
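
For example, a read-only check with `e2fsck -fn` (the `-n` flag makes no changes; the device name and figures below are invented for illustration) ends with a summary line you can grep for:

```shell
# On a real system you would run, preferably against an unmounted partition:
#   sudo e2fsck -fn /dev/sda1
# and it finishes with a summary line like this sample:
line='/dev/sda1: 245735/6111232 files (0.9% non-contiguous), 3100734/24414464 blocks'

# Pull out just the fragmentation figure:
echo "$line" | grep -o '[0-9.]*% non-contiguous'
```

A figure in the low single digits is normal for a healthy ext file system; a sharply higher number usually means the partition is running out of free space.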

Comments (78)

Now I know why certain Linux distros sometimes want to create different partitions – and sometimes even use different file systems. It would appear that one of the reasons would be for frequent vs. non-frequently accessed files. (I seem to recall someone telling me this or reading it somewhere but I forgot it until I read the article.) So, for example, a user’s /home files which might be accessed very frequently and thus change in size quite often probably should be kept on a separate partition than the more “static” files that don’t change or aren’t likely to change nearly as often.

Linux is the leader on pocket computers – Android now, Tizen in the future – and together with iOS, *nix systems hold more than 90% of that market.

Phones + tablets

On servers Linux also has more than 90%; even MS uses it.

On desktops Ubuntu is growing a lot and, now that it comes preinstalled on some models, it is expected to reach 5% market share; Chrome OS, another preinstalled Linux, is growing too and will soon arrive on 64-bit computers.

Also, some businesses such as FON and many public administrations are migrating to Linux, saving a lot of money; in Spain almost all of the education system uses Linux, as does almost 25% of the public administration – and growing.

The defragmentation scheme on Windows only works if you leave your computer on overnight on the night the defrag is scheduled. If you’re in the habit of turning your computer off at night (like I am), it will not be automatically defragged. So, in that case, it’s good practice to defrag it manually now and then (or remember to leave your computer on the night it’s scheduled to defrag).

Microsoft does not use Linux for its systems; it uses Microsoft Server. Last I read, Linux was at 53%, Microsoft at 46%, and the rest was divided between other, lesser systems. Considering the fact that Microsoft has to be paid for and Linux is free, you have to wonder why Microsoft keeps climbing: ease of use, support from the manufacturer, and the number of professionals who support it. At one time Linux was at a peak of 83%. It will never go away, and it will never take over. It is just an option for users.

I don’t use UNIX/Linux these days (haven’t for over a decade, got nothing at all against them though), since virtually all of my clients are Windows-only users.

First point:
You stated: “If fragmentation does occur, the file system will attempt to move the files around to reduce fragmentation in normal use, without the need for a defragmentation utility” which surely is another way of saying that in effect there’s a defragmentation utility built in to the depths of the Linux OS that you don’t have to be concerned with [and over which you have no control?].

Secondly:
I reckon that it’s a bit over-casual to advise: “Copy all the files off the partition, erase the partition, then copy the files back onto the partition” when these days many users have 1 TB or 2 TB drives, even 3 TB, and soon enough bigger still. You’ve got to take into account that copying partitions of such size will take quite a few hours – in each direction – a very tedious and fraught operation, not to be undertaken lightly! This means that defragmentation utilities in Windows, assuming they’re “set and forget”, can be of quite some value.

And a warning: most Windows defragmentation utilities at this time don’t seem to detect the presence of SSDs and will perform the traditional moving around of files on them, thereby adding to their wear. Be sure to check that the utility you use is not such an offender.

Note the operative reason Linux doesn’t need defragging: intelligent. Win 7 *still* needs defragging. I know: I use both. Prefer Ubuntu because it’s a lot less upkeep. And as Citrus Rain said: it is so much faster.

Both of the camps… I like your never this… never that… The problem is not to choose an OS as if it were some religion… You are like some evangelists… If you have to buy a car and it is crap… you will never buy the same car a second time… or you are an idiot, or you were forced to buy the car (like it is on some PCs)… If you compare the same functionality on both Windows and (for example) Ubuntu… I repeat the same… then believe me, you will not go back to Windows. Ubuntu boots in around 1 minute and shuts down in less than a minute. It is fun to use… very fast… free… I was yesterday on my wife’s PC… Windows Vista… ha ha ha… You can use that? 10 minutes to get some stability (I mean until the drive stops flashing)… Windows 7? Same problems… Am I a liar? If Windows is so good, why on earth do all my friends, my wife… want me to fix their problems? Give Ubuntu a try… You will not come back to Windows… TRY IT!

When I was growing up with computers and disks, the need to move the read-write head was far slower than to read even nonadjacent sectors that required no head movement. To spread data around a disk will require this slow arm movement (not for SSDs of course). I still hear disks clicking, so that still should be the case. Will someone please explain then why spreading data so that arm movement is required is a good thing? Thanks, perhaps I’m missing something.

But Linux is still essentially defragging as it rearranges files when they outgrow their free space. Trust me, all systems get fragmented. What happens when the disk is nearly full and there is not enough free space to fit the file?

With the way Linux does it, the drive is never contiguous, therefore making it worse than Windows.

The title of your article is a perfect example of how even a question can be false. Linux DOES need defragmenting. I’ve already written here that every dynamic space-management scheme suffers from fragmentation. Because no system knows the future, some files will eventually be fragmented. Linux just tries to optimize the process by scattering the files across the disk, which inevitably results in more disk-head movement and slower processing!!! You gain something because your files are contiguous, but lose some time because of the extra head movement. So Linux has no silver bullet either, and let’s not pretend that it has.

Only one system would be free of fragmentation, a 3D storing system, a cube, but it would have enormous space waste.

So the only real solution will be a cheap SSD that lasts 8-10 years under 24/7 heavy use. That is still far in the future.

And one last note: even the RAM would be fragmented, but each OS remaps and moves the blocks to keep contiguous what needs to be contiguous.

@abhijitrucks: Yes, it will cause trouble. I encountered the same problem with Oracle. If You drop every table but the last one you created, You cannot shrink the database, because the last table is near the end. But if You drop every table but the first one, You can shrink the database (this is a very simplistic explanation; the real situation is much more complicated).
But in Windows You also have to defragment a partition before shrinking it.

There are no miracles in computer science, a dynamic system is perpetually changing and if You want it to be optimal, You should adjust it sometimes. Or You just simply admit that entropy is growing.

Anyway, the antivirus program is the most significant part of a system in making it really slow, not the disk fragmentation.

Rudy: because it’s easier to get you to fix it than learn how to do it. Linux has its problems too, but the users are generally those who want to know how to fix them, and do. So for some reason they think that means they have no problems.

The last time I used Ubuntu, I liked it very much. I would likely use it except for the usual compelling reason – the apps I need are not available. That’s not likely to change, either. I’m a freelance writer, and there simply is no substitute for MS Word when you need to fit into the industry. Oh, I can run it under Wine? Even if so, that’s not easier, it’s a lot harder and more complicated, and I’d have to learn to do it – and I’d still have to buy Word. Which installs with a few clicks on Windows.

And there is no speech recognition on Linux, while there’s a choice of providers on Windows. And no Evernote. And no Photoshop. And so on.

Yes, for many users, mostly geeks and hobbyists, it’s a good choice. But it’s no substitute for working in the real world.

I’ve had lots of OSes over the years. To be frank, when the system is fresh they’re all as fast as each other. It’s only when you start using, installing, and implementing things that they slow down. You like Linux, you like Windows – let’s just get on with life; this “mine is faster than yours” is getting oh so boring. And before you say it: mine is bigger and faster. :p

You’re either purposely understating Microsoft’s use of Linux, or you’re not aware of the 10,000 Linux servers they use to run Skype, or that every time you go to a MS site you’re actually retrieving the pages via Akamai and Linux.

When they grow big enough, Linux moves the files. In other words, Linux does defragment – you just don’t have to push a button to do it. Only those who don’t want to find happiness fool themselves. But yeah, I get the point. Not a bad design idea.

Disk striping is the way an HDD works. An SSD uses no moving parts, therefore it needs no defragmentation. All OSes use the same basic concept, except Mac and Linux use linear striping. Windows, which is most prevalent in the computing world, does have defrag issues on any HDD. If you are not running a server on Windows, just defrag it once a month and you should be OK, IMHO. Many large companies use UNIX/Linux for server capabilities, for ease of use with Windows systems. Also, Linux is the way to get your Windows system back up and running if it fails. Check out hirensbootcd.org.

Wow, superb article… hmm, impressed… the whole fragmentation process is clear now. Thanks! But recently I had a problem: I have Windows 7 Ultimate and I installed Windows 8, but 2-3 of my partitions aren’t working – I’m not able to open them. In 7 they work fine. Any solutions? I want to use Windows 8 but I’m stuck because of that.

1) Linux DOES need defragmentation – it does it in the background so you don’t know.
2) Windows does it in the background too, since Vista you don’t have to do it yourself.

If you just simply think about it: Windows’ solution is much faster as it always tries to put files closer to the quickest parts of the hard drive – while Linux does not take this into consideration, and puts files all around almost randomly. This is slow.

@Sigman: I think I know what numbers you are talking about, Linux/MS servers at 53/46%, and that was only counting sales. Considering that there are more Linux distros that are free and have no sales to count, this tells us nothing.

@3vi1
If you read carefully, I didn’t state any numbers. I was pointing out that Microsoft did use Linux, not understating anything. Yes I am aware of everything you pointed out; that’s all obvious stuff. But, do you think the only thing Microsoft does is run Skype and web hosting?
Happy medium here. They use what gets the job at hand done best. If it’s theirs, so be it; if it’s open source, so be it.

The Linux method of scattering means that the head goes to one spot and reads an entire file – most of the time. The Windows method means EACH time a file grows it will require a new space, and this requires the head to move more often – much slower!
* file1
– file2
) empty space
day 1 Linux )))))****)))))))))))))))—-
day 1 windows ****—-)))))))))))))))))))))
day 2 Linux ))*******))))))))))))——-
day 2 Windows ****—-***—)))))))))))))))
day 3 Linux ))**********)))))———-
day 3 windows ****—-***—***—))))))))
tada!

@michel
There are tens of thousands of programs for Linux, and there are Word equivalents as well.
But even so, I usually use Windows instead, as it has my favourite programs – although I think it’s just what you’re used to; there is nothing wrong with Linux proggies.
@Szűcs János: I have KDE and it boots up in 18 seconds – I wouldn’t call that slow, compared to 23 in Windows, and that’s with programs loading up. You obviously have something wrong with your KDE installation.

Michel: what you’re saying is completely unfounded. Ubuntu is absolutely perfect for the working world! All the apps you just named have substitutes which are on par or even better than the MS versions. And best of all: they’re all free.

LibreOffice is a very capable substitute of the MS Office suite. Yes, it does have a very small learning curve but in the end the freedom you get with LibreOffice will make you regret ever saying MS office is better :) Outlook is the only reason I would go back to MS Office, since Thunderbird just isn’t quite there yet.

Evernote is working perfectly fine on Ubuntu, but you probably didn’t even research it? There’s an app called Nixnote, which is a front-end app to Evernote, it syncs with my phone and online Evernote account just fine.

As for Photoshop: use Inkscape. I can’t say it’s quite as intelligent as Photoshop, but for SVG manipulation it’s very simple and very powerful.

Yes, it’s true Linux does require you to be absolutely willing to learn to work with a PC in a different way, but what’s the difference with macOS? For freelancers like myself, small startups and big companies alike, Linux could be a perfect fit because of its speed, reliability and pricing.

But doesn’t this imply that for a newly installed OS, Linux would be slower (at least initially)? If the files are scattered across the entire disk, doesn’t the read/write head have to move further, more often, in order to load a whole bunch of files? Clearly, the strategy of keeping the disk unfragmented (either automatically in Linux’s case, or periodically in Windows’ case) is a good thing.

Incidentally, just to get involved in the Linux vs Windows argument, why is it that Windows disk access is sooo much slower than in Linux? I’m a Windows user (more for the reasons of compatibility for certain software I use than out of choice), but for a brief period that I had a large collection of music stored on a Linux machine, I found the disk access to be so much faster than in Windows – and that was with a lower-spec’d machine for the Linux box.

Alex, Michael,
For Photoshop, use GIMP! Extremely powerful for being free; does some things better than Photoshop. Inkscape is great too, but it’s more geared as an alternative to Adobe Illustrator (vector art).

Anonymous remembers the ‘old days’ of using multiple partitions or drives for different file systems.

In those days we did have small (by today’s standards) drives, and drives of different speeds. For a system to run fast, you put swap and temp files on ‘fast’ drives; root didn’t have to be fast or big, it was basically static mount points. /etc didn’t change much, but /var (due to /var/log etc.) had lots of I/O.
/home or /h were usually big as that is where the home directories were, but didn’t have to be real fast.

Systems like SunOS or Solaris put /tmp into swap, so it was effectively erased at each reboot, and /var/tmp was where you kept ‘persistent work’ files.

Since most hobby machines have not had lots of drives (typically 1 or 2 internal), control of lots of drives has gone away for most of us. These days we tend to use SAN or LAN/network-attached drives for many things (even cloud or USB), and our drives are large enough that it doesn’t matter.

When I was a Sun admin, we backed up each mount point separately because we had to restore a drive every few months, and it was easier to restore a smaller drive than a large one.

Keeping databases backed up was always a problem, as were large user data areas (5-10 GB) before large backup systems became available. But it was done.

Today, I do backups over our network to hard drives on other machines. Those drives I tend to format as ext2, not 3 or 4, because for these mostly static file systems (backups only) I would rather have the disk space used for data than for cache write buffers or other overhead, especially since ext3 and ext4 have ext2 as their base. … And for data I really don’t want to lose, I make occasional backups of those files to DVD. Since I have a slow network connection, I make less use of cloud services than I would if I had a good connection.

Excellent article. I’ve written about this as well in my blog, and have even gone on to demonstrate how horribly unstable Windows’ NTFS filesystem can be with its infamous “disappearing files” problem that has plagued it for years. To me this is a demonstration of how open source software exceeds closed source software. Microsoft keeps the NTFS secrets behind closed doors, and nobody can improve it other than Microsoft, which it hasn’t bothered to do. And Windows 7’s secret to keeping NTFS defragmented? That’s just a scheduled task that runs the defragmenter more often, automatically for the user. In other words, just a band-aid for the inefficient filesystem that it is.

Any and all SSDs are flash-based microchip storage. Defragging any SSD will shorten its life.
Regular hard drives have trillions of read/write cycles before failing.
Solid-state drives do not have that capability; being flash-based memory, right now they have a few million write cycles before failing.
Since defragging any SSD is bad, forget about erasing or overwriting data – doing so will irreversibly trash the SSD, making data recovery nearly impossible.

One more bit of information on the FAT/NTFS vs EXT file system fragmentation issue.

If I go to write a big file in Linux, for example copying a video from DVD for transcoding, each file from the DVD may get written to a different place, but the files will be put into places on the HD where they can fit whole.

On Windows, the first file will get written into the first open space on the disk, and if the file is too big for that first open space, it will be fragmented into the second open space, and third, and so on, as it’s being written the very first time.

So no, Linux does not “automatically run a defragment program”. As files get moved normally, they are moved into places where they fit. So a file that is never re-written will not get “defragmented”. But those files tend to be pretty darned static anyway so it’s not as important.

That’s why Linux systems don’t slow down from use over time unless the disk gets really close to full.

Yes, everything is a trade-off. For example, a SSD is great for files that need to be re-written all the time, for speed, but that wears the SSD out faster, so it’s better to use the SSD for files that don’t often change. Some day I may get a SSD, but keep /home, /var, /usr, /tmp (maybe in memory), on a spinning magnetic HD. Boot times would be wonderful if /boot, /bin, /lib, /etc, etc., were on a SSD.

@michel: I’ve been using Linux professionally since 1993, and since 2004 I have been using it almost exclusively. I have also used DOS, Solaris, MacOS and some other Unices – never Windows, though. But I guess I am living in a virtual world?

Linux does need defragmenting, it just lacks tools to defragment and to estimate filesystem fragmentation.

Scattering thousands of files over the partition actually maximizes hard-drive seek times. This should be obvious. Also, large files are *guaranteed* to be fragmented, because the scattered files leave no space large enough to contain a large file in a single chunk.

It’s not a «more intelligent way» — it’s a naïve file-placement strategy which is sufficient for most tasks Linux has to deal with — but in no way is it superior to other filesystems.

@michel: as a would-be “published” writer as well, I agree with you that there is no app for writing on Linux yet – though MS Word isn’t such an app either. I agree again that it is far better than LibreOffice, yet it is not designed for serious writing the way Scrivener and so many apps available mostly for macOS (which in turn is a *nix OS) are.

Now, in saying Linux is no substitute for real work, I have to disagree: there are a lot of fields where Linux and other *nix systems are better suited than Windows. Take programming and network management, for instance – the tools available are far more powerful than the ones Windows has.

@szucs I agree, KDE has always been a nightmare for me; Gnome and Unity, on the other hand, are pretty fine.

And about the title, my understanding is that in Windows, you need to defrag it yourself, while in Linux the OS does the job.

Personally, I haven’t used defrag tools for years now; disk drive speeds have grown a lot, so no worries if the disk is fragmented or scattered. The problem comes when you are over 90% full, but then you NEED a bigger drive, or to cut the rubbish in half – not just defrag.

@abhijitrucks about shrinking: well, I have done it on well-used partitions, let’s say over 50% full, and didn’t get unlucky. I have also shrunk a Windows partition from Linux.

This article’s title is incredibly misleading; I expected content about why Linux forks do not need to be unified (potentially in response to an argument for merging, for example, Android back into Linux), but got an article about why the filesystems used with [many] Linux operating systems do not need defragmenting.

As “Anonymous” points out, it’s downright _naïve_ to think you can avoid fragmenting by scattering files all over the disk drive. Seek time gets longer as a result of the read head having to move back and forth to go from one file to another (which is, I suspect, one of the main reasons Apple is so eager to push SSDs for their MacBook/Mini lines), and you end up with _fewer_ contiguous, unoccupied segments on the disk as a result of the scattered files cutting the free space into small chunks. Of course, having paid good dollars for average hardware, there is invariably an _emotional_ need to justify the cost, but “HFS+ doesn’t need defragmenting”, like “MacOS doesn’t get malware”, is simply way too preposterous to be a realistic reason.

“I’ve written about this as well in my blog, and have even gone on to demonstrate how horribly unstable Windows’ NTFS filesystem can be with its infamous “disappearing files” problem that has plagued it for years. To me this is a demonstration of how open source software exceeds closed source software.”

People write all sorts of things on their blogs. Don’t have anything more than mere anecdotes to back up your statements? Then don’t bother peddling your theory to others.

Remember the ordered vs. writeback feud over ext4 that got even Linus Torvalds’ panties in a bunch?

“Linux DOES need defragmentation – it does it in the background so you don’t know.”

And I am William Wallace in a kilt. At the moment, there is no utility or any slab of code that is deemed “production ready” and can defragment an ext3 or ext4 partition, which, mind you, is what most Linux distros use _exclusively_ for the “root” file system. An experimental, online defrag utility is available for ext4, but, as it stands, it is nothing beyond “experimental”.

>If you actually need to defragment a file system, the simplest way is probably the most reliable: Copy all the files off the partition, erase the files from the partition, then copy the files back onto the partition.

Sorry, but this is kind of stupid. There is e4defrag to defragment ext4 partitions or individual files.

Been running Linux in over 300 autoparts stores in five retail chains since 1999. Heavy database use, web and email, most as servers. No matter what else, it’s 2.5 to 3 times faster on the exact same hardware than running under Windows, and I have several machines that haven’t been rebooted in over 3 years…. (And never had to defrag yet…)

“Sorry, but this is kind of stupid. There is e4defrag to defragment ext4 partitions or individual files.”

As pointed out in my link above, e4defrag is still in its experimental stages. You would have to be _insane_ to try and use it on anything important.

“No matter what else, it’s 2.5 to 3 times faster on the exact same hardware than running under Windows”

Sure. Without any proper method of measuring, this is simply the equivalent of saying that I can run 2.5 to 3 times faster backwards. It’s how I _feel_, so it has to be true, right?

This is not to mention, of course, that one of the key ingredients that sets proof apart from anecdotes is a _controlled environment_, which, to a Linux advocate, would probably translate to comparing a clean install of Red Hat to a copy of Windows 98 with a dozen applications competing for CPU time in the background.

“If you like ubuntu you find good applications that replace the ones you use on Windows – or try to run them under CX-Office.”

Definitely! Why run an application the way it is intended when I can incur overhead in my _pocket_ just so I can bet my important work on a compatibility layer developed by a 3rd party? Heck, even the idea of replacing perfectly working applications just so one can migrate to a different OS for no obvious, practical benefits is absurd enough by its own rights.

I use both and have done for years. There are some apps that are written only for Windows – especially in my industry – and most people have Windows installed on their PCs; few have even heard of Linux.

So, I must use both.

On my home computers I mostly use PCLinuxOS, and I quite like Mint. There are about 400 Linux “distros” out there – usually those who use Ubuntu are the more recent “converts”, as Ubuntu has become well known. I personally don’t like it so much, and there is more to Linux than Ubuntu. I have the opposite experience to Michel: I find LibreOffice much better than MS Word, and it can do much more.

Photo management: Linux wins hands down, as it does with audio. Photoshop is Adobe – again, you can run Adobe on Windows, but Linux and other non-Microsoft OSes have similar tools. It depends on your level of expertise with the program rather than on whether the program is good or not.

Both Windows and Linux have their downsides, IMHO, but I don’t have virus and malware issues on my Linux boxes, and Linux is stable and reliable – so less time spent finding and fixing “issues” is good for me.
Neither OS will “go away”, so the two camps will someday need to calm down and learn to play together nicely. Although, having said that – human nature…

“Both Windows and Linux have their downsides, IMHO, but I don’t have virus and malware issues on my Linux boxes, and Linux is stable and reliable – so less time spent finding and fixing ‘issues’ is good for me.”

Well, do you wash your hands before a meal? That is not supposed to be much of a burden for any rational human being, and it prevents you from getting sick.

Let’s say there was a technology out there that could transfer your consciousness to a robotic body. A robot could never get sick from food poisoning, right? Sure, dealing with food poisoning sucks, but the alternative would not simply be food poisoning being taken off the equation, would it? It would basically mean trading one problem for a set of new ones pertaining to a robot: instead of food poisoning, you would need to worry about different types of fuel, different types of lubricants, battery power, maintenance, societal views on robotic bodies, etc. Totally worth the trouble, wouldn’t it be?

Being a Windows user for 20 years and having used everything from Zone Alarm and Comodo to Sophos and Norton AV, I have not yet seen a single problem with malware on any of the machines that I _personally_ maintain. Don’t know what that piece of software is? Don’t install it. Don’t know the sender of that email? Don’t open it. Keep your AV suite on auto-update. Keep your firewall up and running. All that is as simple as washing your hands before every meal. The “let’s-stick-this-turkey-into-my-computer-and-see-what-happens-derp” mentality common among most users simply doesn’t fly with any operating system – not Linux, not MacOS, and certainly not Windows.

Secondly: Copying files on and off the file system may seem a bit over-casual, but it’s the safest way to perform a defrag given that no mature tools are available.

@Szűcs János & Others

Well, this explains why Linux distros don’t include a defrag tool and why most users don’t need to think about it. As I said in the post, when your file system starts to fill up, you’re going to see some fragmentation — I never said Linux could never become fragmented.

@marad

A quick Google reveals that e4defrag is a very recent tool that isn’t necessarily considered stable yet. I wouldn’t advise anyone to use it on production systems yet. Flying Toaster responded to this well.

DESCRIPTION
e4defrag reduces fragmentation of extent based file. The file targeted by e4defrag is created on ext4 filesystem made with “-O extent” option (see mke2fs(8)).
The targeted file gets more contiguous blocks and improves the file access speed.

target is a regular file, a directory, or a device that is mounted as ext4 filesystem. If target is a directory, e4defrag reduces fragmentation of all files in it. If target is a device, e4defrag gets the mount point of it and reduces fragmentation of all files in this mount point.

You mean “.doc” files? That’s the old Word format native to Office 2003 and prior, and supported by Office 2007 and later via compatibility mode. I have never used a Kindle, but that simply doesn’t undermine the fact that you can download a compatibility pack to handle the newer, OOXML-based format (.docx) in Office 2003 and save whatever there is back to the legacy Word document format, or that you can simply convert a “.docx” file to “.doc”, PDF or whatever else using the “Save As…” dialog in Office 2007/10.

I really don’t know much about the Linux file system. On my Windows system, I have found that when I defrag the hard drive, I can gain hundreds of megabytes and sometimes gigabytes of additional storage space. This happens because fragmentation causes wasted, unused space on the drive. It does not necessarily improve system speed, but I gain space on my drive. I have done this many times to delay buying a higher-capacity drive. Now, according to this article, Linux by definition automatically creates wasted, unused space each time it writes a file. As I said before, I do not know the file system. When it formats the drive, does it use sectors and clusters like Windows, or just sectors? It has to use at least sectors, because all hard drives are low-level formatted into sectors by the manufacturer.

For example: in Windows, the number of sectors in a cluster is determined by the size of the drive when it is formatted and by the operating system that formats it. A cluster is the smallest amount of space the system can allocate. So if 1 cluster consists of four 512-byte sectors, then the smallest amount of space used is 4×512 = 2,048 bytes = 2 KB. The larger the drive, the larger the cluster size. Using my example, if you write a 2,248-byte file (2,048+200), it fills one 2,048-byte cluster completely and uses only 200 bytes of the next one. You waste 1,848 bytes. Repeat this many times and you waste a LOT of space. Defragging recaptures much of the wasted space.
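The cluster-slack arithmetic in the comment above can be sketched in a few lines. The 2,048-byte cluster size is just an illustrative assumption carried over from the example; real NTFS volumes commonly default to 4 KB clusters.

```python
# Illustrative slack-space calculation: storage is allocated in whole
# clusters, so a file "wastes" whatever part of its last cluster it
# does not fill. The cluster size here is an assumption for the example.
import math

def slack_bytes(file_size: int, cluster_size: int = 2048) -> int:
    """Bytes allocated beyond the file's actual size."""
    clusters = math.ceil(file_size / cluster_size)
    return clusters * cluster_size - file_size

# A 2,248-byte file on 2,048-byte clusters occupies two clusters
# (4,096 bytes), wasting 1,848 bytes of slack.
print(slack_bytes(2248))        # 1848
print(slack_bytes(5000, 4096))  # 3192 on 4 KB clusters
```

Note that this slack is per-file and fixed by the cluster size, which is why larger clusters waste more space on drives full of small files.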

Now you are telling me that Linux automatically pads its writes with additional space. I take that to mean that Linux automatically creates wasted space (in other words, fragmented free space). Note I did not say fragmented files. Eventually you will use up the space on your hard drive sooner than on a Windows drive, because you have no means to recapture this space. I bet you have a lot more files on your hard drive that you never change. Look at all the emails you have, and the PDF files, letters, term papers, and essays, or an HTG tip copied into a Word doc that, once created, you store for reference and only read when necessary. If you never go back to change the file, that is wasted space never to be regained.
I’ll stick with windows. Thanks.

“For example: in Windows, the number of sectors in a cluster is determined by the size of the drive when it is formatted and by the operating system that formats it.”

There is also the metadata required to describe the file stream, and that adds extra pressure on a file system that is _designed_ to fragment free space.

This is not to mention that ext2/3/4 do employ the concept of allocation units, although they are called “blocks” instead of “clusters”. It’s a necessary compromise between usable space and maximum partition size.
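For the curious, the allocation-unit (“block”) size the comment above mentions can be read at runtime on any POSIX system; a minimal sketch using Python’s `os.statvfs` (the path `"/"` is just an assumption here, any path on a mounted filesystem works):

```python
# Query the filesystem's allocation-unit ("block") size for a path.
# os.statvfs is POSIX-only; f_frsize is the fragment/block size used
# for space-accounting on the filesystem containing the path.
import os

def block_size(path: str = "/") -> int:
    return os.statvfs(path).f_frsize

print(f"allocation unit: {block_size('/')} bytes")  # commonly 4096 on ext4
```

The same slack-space trade-off the earlier comment describes for Windows clusters applies to these blocks, which is why smaller partitions were historically formatted with smaller block sizes.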

Everyone saying one is better than the other is living in a black and white world … and trying to compare the two in terms of the other … kind of pointless and stupid, don’t you think? I use both (and Mac) nearly every day … have likes and dislikes using all of them. Almost anything you can do on Windows you can do on Linux … all that crap about can’t do this and can’t do that just doesn’t work anymore … but that doesn’t make Linux the answer for everyone.

Get over trying to prove that your decision is best for everyone in every situation it isn’t … everyone has different uses and different goals … learn to live with a little variety … it’s kind of fun.

@michel
Dude, there is Libre Office and OpenOffice. Both at least as good as Word 2003 and as good as 2007/2010 if you don’t need all the fluff.
I use a dual-boot Linux Mint and Windows 7 system and only use Win 7 now when absolutely necessary.
The advantage of never having had to defrag the Linux system has saved me countless hours of inconvenience. Conversely, I have lost count of how often I have had to defrag Windows, which requires me to leave it on overnight, something that is difficult to do when I am on the road for work.
For the minor reduction in immediate HDD speed, the advantages far outweigh the disadvantages.
Zero maintenance, zero viruses, fast, clean and no BS.
As ‘Rudy’ has said… “Try it”.