Voted for ext2/ext3 use ext4 the tested FS(Score: 1, Insightful) by Anonymous Coward on Saturday February 04 2017, @05:19AM

I voted for ext2/ext3 but use ext4. It is probably one of the most well-tested and well-understood file systems (second to the FAT family). For the longest time, people have tried to get me to use different ones, but I always end up back with the ext2/ext3/ext4 family. Most other file systems I've used (which include HFS, FAT, NTFS, BTRFS, ReiserFS, and BeFS) have all lost data on me due to corruption of some kind or another. The winner is BTRFS: I installed the OS, rebooted, used the system for a bit, shut it down, started it up the next day, and when I rebooted it to install updates, it balked due to disk errors; from formatting to corruption in less than 6 hours.

Re:Voted for ext2/ext3 use ext4 the tested FS(Score: 0) by Anonymous Coward on Saturday February 04 2017, @07:03AM

Ditto. ext4 here, and I have used it on my desktop systems since before it entered the mainline kernel tree. For production systems I manage, they got ext4 root filesystems as soon as it was rolled into Debian/Ubuntu as a selectable filesystem option, and ext4 for data partitions earlier than that -- pretty much as soon as the FS was no longer marked "experimental" in mainline. For DB loads especially, ext4 is noticeably better than ext3.

Re:Voted for ext2/ext3 use ext4 the tested FS(Score: 3, Insightful) by Runaway1956 on Saturday February 04 2017, @06:10PM

Something similar. Tested and understood - I keep reading about the advantages of alternative file systems. But, when it comes time to create a file system, I just rely on what I know. Ehhh - call it lazy, I guess.

Ditto the experience with ReiserFS. I got half excited about it, all those years ago. I worked with it for a while, had some problems, then the author went apeshit, then to prison. So much for reliable support. I just fell back to ext4, because it just works.

Can I vote more than once? I simply don't have a single "everyday" computer, I have 3: a Mac tower where I code some every day, a Windows 8 box where I game some every day, and a MacBook that I lie on the couch or in bed with and watch stupid YouTube videos.

The computer that I sit in front of most days uses HFS+. The computer that does the most work for me (build server with 24 cores, 768GB of RAM at work) and the computer that has the most storage (home NAS / media centre) both use ZFS. It's a shame Rehash didn't add the option for multiple responses per poll.

"Your computer" in the question suggests that it is assumed you have and use only one computer. I use several over the course of the day, so I could justifiably answer NTFS, HFS+, and ext4, plus FAT32 if we're to count ubiquitous portable storage devices, but we're not allowed multiple answers.

And what about the people who spend all day using their Apple pocket computers? What file system do iPhones use and why isn't that an option?

Which system?(Score: 4, Interesting) by LoRdTAW on Saturday February 04 2017, @06:04PM

The main system I'm on is a Win 7 PC, used mainly for gaming, PLC software which is Windows-only, and everyday nonsense. NTFS. My two Linux dev PCs are both running ext4 on their main disks, though one has a 4TB mdadm RAID 1 formatted with XFS.

My Linux laptop, a Lenovo T410i, is ext4 on an SSD. Funny story about that laptop: before I put Linux on it, I bought an active DisplayPort-to-HDMI converter to hook it to my TV. No audio. Searched the net and found that Lenovo never included DP-to-HDMI audio passthrough in their driver and there was NO FIX. Booted Linux Mint from a thumb drive and opened a video, only to have audio play through the HDMI port. Windows 7: 0, Linux: 1.

Another thing that drove me crazy was the way stupid Lenovo designed their fucked-up boot process. The 120GB disk has three partitions, in this order: 100MB boot, ~115GB system (Windows), 4GB restore. So the restore partition is at the end of the disk. You'd think those pricks would put it after boot like Dell does, right? Nope. When I tried moving the whole mess to a 256GB SSD and expanding the system partition, the boot partition for some reason looks for the restore partition at a specific sector. If it doesn't see the restore partition, you get an error and it won't boot. WTF! I dug around but found nothing to help me fix the problem. So the only way to use the entire 256GB SSD was to copy the partition layout verbatim and create another partition after the fucking restore partition. That was when the Mint boot drive was plugged back in and Windows was obliterated.

I had a friend who had a similar problem. I believe the way he got around it was to boot into Windows, create the partition for the additional data after the restore partition, and then use the Disk Manager to create a spanned volume of the two partitions. To Windows, it appeared as just one large partition.

an absence of the irrelevant, or dumb=good(Score: 3, Interesting) by shortscreen on Saturday February 04 2017, @11:38PM

I actually like FAT32. I don't like the 4GB file limit, but aside from that it has everything a file system should have. It stores files, and it retrieves files. Sure, it has approximately zero attempt at error tolerance/recoverability, but this is made up for by the ease of creating backups (and the fact that FAT32 can be accessed by nearly anything). I can duplicate a whole partition with one XCOPY. How do you duplicate a file system which is encumbered by metadata, forks, permissions, multiple versions of the same file, symbolic links, etc.? With Windows on NTFS (IIRC it was Win7) I couldn't even use DIR /S anymore because one of their stupid USER\LOCALS~1\APPLIC~1 or some such directory linked back on itself and got the system stuck in an infinite loop.

Over-achieving file systems have been a PITA since the old days. On 68K Macs you could download files with a terminal program but couldn't do anything with them without somehow getting the file type/creator codes sorted out. One time I used LHA to make an archive of my entire Amiga harddisk, and then later when I tried to restore everything it was somewhat screwed up because of metadata that hadn't been preserved.

Re:an absence of the irrelevant, or dumb=good(Score: 3, Informative) by Subsentient on Sunday February 05 2017, @01:23AM

If both file systems respect all the same security metadata - then probably yes. Obviously, you couldn't dump to a FAT file system, then restore all the security. But, you do specify *nix-like file systems, so you could probably get the job done.

Re:an absence of the irrelevant, or dumb=good(Score: 2) by TheRaven on Saturday February 11 2017, @04:07PM

Really? I can think of a lot of things that FAT32 lacks. Off the top of my head:

An efficient way of storing small files. A 100-byte file will use 64KB on disk on a large FAT32 filesystem. Not much for a single file, but that overhead adds up really quickly when you have a lot of small files.

Support for even UNIX file permissions, let alone ACLs.

Any metadata support for storing things like security labels for use with a MAC framework.

An efficient allocation policy for large files. FAT uses a linked list of blocks to manage the free list on disk, so to get even vaguely efficient performance you need to cache the whole thing in RAM. To efficiently allocate large files, you must also construct another data structure to sort contiguous chunks by size. This burns RAM and it's also very easy for these structures to become inconsistent with respect to the real filesystem, which brings me to:

Any kind of resilience at all. I'm not even talking about things like ZFS's block-level checksums. There's absolutely no redundancy and no checksum storage with any of the on-disk metadata structures, so it's really easy for a single-bit error to lead to massive filesystem corruption.

Support for hard or symbolic links.

That's just the list of basic features that I'd expect from a filesystem. Things like constant-time snapshots, support for large numbers of small files or for very large files, journalling, dynamic filesystem sizes, and so on are so far beyond its capability that it's not even worth comparing them.
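The slack-space point in the first item is easy to sanity-check with arithmetic; a quick Python sketch, using the 64KB cluster size and 100-byte file from the comment above (numbers are just the ones quoted there):

```python
def on_disk_size(file_bytes: int, cluster_bytes: int = 64 * 1024) -> int:
    """Bytes actually consumed on a FAT volume: files occupy whole clusters."""
    if file_bytes == 0:
        return 0
    clusters = -(-file_bytes // cluster_bytes)  # ceiling division
    return clusters * cluster_bytes

# A 100-byte file on a 64 KB-cluster FAT32 volume:
print(on_disk_size(100))  # 65536

# 10,000 such files waste well over half a gigabyte of slack:
waste = sum(on_disk_size(100) - 100 for _ in range(10_000))
print(waste // (1024 * 1024))  # ~624 MiB
```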

Re:an absence of the irrelevant, or dumb=good(Score: 0) by Anonymous Coward on Saturday February 11 2017, @06:17PM

There are two FAT tables on the disk. The problem, as with any mirror system, is that when you find an inconsistency the software has to guess which copy is correct, or compare outputs. Windows and most disk checkers assume that most people can't tell which copy is right without a side-by-side comparison of the output, or don't want to go through the trouble, so they just report the error and guess.

Re:an absence of the irrelevant, or dumb=good(Score: 2) by shortscreen on Saturday February 11 2017, @09:16PM

An efficient way of storing small files. A 100-byte file will use 64KB on disk on a large FAT32 filesystem. Not much for a single file, but that overhead adds up really quickly when you have a lot of small files.

Why rely on the file system to do your job for you? Combine the small files into one large file yourself. For example, DOOM had its game data stored in a .WAD file instead of cluttering up your disk with a thousand bitmaps. (Lazy developers might do like Torchlight and just stuff all 20,000 files of their game assets into one giant .ZIP file. And then wonder why their game spends so much time loading.)
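The WAD idea is basically "one file plus an index." A minimal Python sketch of the concept (the pack format here is invented for illustration; it is not DOOM's actual WAD layout):

```python
import json
import struct

def pack(entries):
    """Pack named blobs into one image: [index length][JSON index][raw data]."""
    index, chunks, offset = {}, [], 0
    for name, data in entries.items():
        index[name] = [offset, len(data)]
        chunks.append(data)
        offset += len(data)
    header = json.dumps(index).encode()
    return struct.pack("<I", len(header)) + header + b"".join(chunks)

def unpack_one(blob, name):
    """Pull a single named blob back out without touching the others."""
    (hlen,) = struct.unpack_from("<I", blob)
    index = json.loads(blob[4:4 + hlen])
    offset, size = index[name]
    start = 4 + hlen + offset
    return blob[start:start + size]
```

One file on disk means one directory entry and one cluster's worth of slack total, instead of one per asset, and a lookup is a seek into an already-open file rather than a fresh path resolution.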

Support for even UNIX file permissions, let alone ACLs. Any metadata support for storing things like security labels for use with a MAC framework.

This is a good example of something I don't even want to deal with on my personal, single-user system.

An efficient allocation policy for large files. FAT uses a linked list of blocks to manage the free list on disk, so to get even vaguely efficient performance you need to cache the whole thing in RAM. To efficiently allocate large files, you must also construct another data structure to sort contiguous chunks by size. This burns RAM and it's also very easy for these structures to become inconsistent with respect to the real filesystem

It's true that this is not ideal. If I were designing my own filesystem I would not implement it this way. But still, if you have 15,000,000 clusters, with 32 bits per cluster making up the FAT, that's 60MB of RAM. Not a huge deal when you have GBs of RAM. AMD's Radeon drivers waste more RAM than that.
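The 60MB figure checks out; the arithmetic, spelled out (cluster count is the one from the post above):

```python
# Back-of-the-envelope: RAM needed to cache an entire FAT in memory.
clusters = 15_000_000   # ~15M clusters, as in the post
entry_bytes = 4         # FAT32 entries are 32 bits wide (28 bits usable)
fat_bytes = clusters * entry_bytes
print(fat_bytes)                    # 60 million bytes
print(round(fat_bytes / 2**20, 1))  # ~57 MiB, i.e. "60MB" in round numbers
```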

Any kind of resilience at all. I'm not even talking about things like ZFS's block-level checksums. There's absolutely no redundancy and no checksum storage

The storage device itself does data integrity checking. So basically the error would have to occur on the motherboard somewhere. This is possible, but experience suggests that it is pretty rare. Back in the old days I would test system stability by creating a .ZIP file and then extracting it again to watch for CRC errors. I found a lot of flaky RAM, motherboards, ribbon cables, etc. by doing this. Although I think the worst instances of file system corruption were caused by programs running wild, writing garbage to the disk because of Win9x's half-assed memory protection. But ever since L2 cache got integrated with the CPU, and Intel and AMD started supplying most of the chipsets, flaky hardware has largely disappeared (except for laptops with chronic overheating problems). The only times I've had file system corruption on hardware of this millennium is when a driver BSODs, and then I might get some lost clusters or cross-linked files, which I usually just delete and go about my business.

Re:an absence of the irrelevant, or dumb=good(Score: 2) by KritonK on Thursday February 23 2017, @02:56PM

One time I used LHA to make an archive of my entire Amiga harddisk, and then later when I tried to restore everything it was somewhat screwed up because of metadata that hadn't been preserved.

From what I remember, Amiga files did not have metadata. If you wanted to associate information with a file (basically icon and program parameters), you needed to put it in an associated .info file (e.g., foo.info for file foo), which was an ordinary file with a specific data structure.

Bad news for the btrfs folks that it's not even a *choice* here--and no one seems to be complaining about it.

I use ext4 almost everywhere on nonremovable devices, with the exception of a 400-ish GB btrfs partition mounted with lzo compression, where I store kernel source code & kernel compiles.

I tried btrfs with the default gzip first (I have a nice fast processor on that machine) but found that I could pretty reliably generate a kernel panic just by (a) using cp to put a bunch of files on it, or (b) compiling a kernel whose sources were hosted there. Changing to lzo made the problem go away. Thought about filing a bug but figured "your file system is crashy" was probably already reported.

I use a raid 1 of two 1TB drives with BTRFS for file backup. I would have preferred to use ZFS, but BTRFS has better linux support and the main feature I wanted from ZFS: checksums on data and metadata. It's reassuring to start a BTRFS scrub and see the results come back OK without having to run hashdeep or some other SHA/MD5 utility on my files.
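For filesystems without built-in checksums, the hashdeep-style fallback mentioned above boils down to walking the tree and recording digests, then comparing manifests later. A rough Python sketch of the idea (paths and chunk size are arbitrary choices):

```python
import hashlib
import os

def hash_tree(root):
    """Return {relative_path: sha256_hex} for every regular file under root."""
    sums = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                # Read in 1 MiB chunks so large files don't blow up memory.
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    digest.update(chunk)
            sums[os.path.relpath(path, root)] = digest.hexdigest()
    return sums

# Comparing a saved manifest against a fresh walk flags silent corruption:
# changed = {p for p in old if p in new and old[p] != new[p]}
```

A scrub does essentially this, except the checksums live in the filesystem metadata and get verified on every read, not just when you remember to run the script.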

My impression was that it was supposed to be a GPL-compatible imitation of ZFS. It seems concerning that something that is supposed to enhance file integrity actively makes it worse.

I wonder how many of the reported problems are due to hardware issues like low or even bad memory. I have been putting off trying ZFS for years due to a lack of machines with the recommended 4GB of (ECC preferred) memory. Essentially, the "server grade" filesystem requires (what used to be) "server grade" hardware to be used effectively.

Of course, now I hear that both ZFS and BTRFS are maintained by Oracle. If Oracle wanted, they could simply release ZFS under a GPL compatible license.

You may have seen some old reports. I have been using BTRFS for a couple years with no issues at all. It's set up for mirroring. I do recommend staying away from other raid modes though since the last time I tested that it crashed and burned under a simulated disk failure. "RAID1" worked fine under the same test.

Good question. I don't know, and am too lazy to look it up at the moment.

Even if there are outside contributors, if they are limited in number, they can be asked to sign off on a GPL-compatible license as well. Code contributions from hold-outs can be re-written in the worst-case.

No. ZFS forked after the source code was released in 2005, but it was released under the CDDL, and later that year the FSF concluded that the CDDL was not legally compatible with the GPL--which is why it is not in the Linux kernel.

So the actual development of ZFS is very federated. You have the BSDs and the Linux projects each porting from the Sun code, while Oracle still controls the closed-source code used in Solaris. OpenZFS is the umbrella project for ZFS that gives a common way to test compatibility between different ZFS ports. It was determined that versioning ZFS was impractical because the distributed development did not support the use of common release numbers (and then you'd likely have to work with Oracle). So a flag system was implemented by which ZFS file systems can be shared between different ZFS ports if the receiving system supports the flags used by the sending system.
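The flag scheme boils down to a set-compatibility check: a receiving pool can import a stream if and only if it supports every feature flag the sender used. A toy Python sketch of that rule (the flag names below are examples, not an exhaustive or authoritative list of OpenZFS features):

```python
def can_receive(sender_flags, receiver_supported):
    """Importable iff the receiver supports every flag the sender used.

    Returns (ok, missing): ok is True when nothing is missing;
    missing is the set of unsupported flags, for a useful error message.
    """
    missing = set(sender_flags) - set(receiver_supported)
    return (not missing, missing)

# A port that lacks one of the sender's features must refuse the stream:
ok, missing = can_receive({"lz4_compress", "large_blocks"},
                          {"lz4_compress"})
print(ok, missing)
```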

I guess Oracle could GPL their code, but then it would become a real license mess.

I was going to complain, but then considered that I just migrated my hard drive from btrfs to ext4 after it ate a couple of disk images, and migrating my SSD from btrfs to ext4 on top of dm-crypt is on the to-do list.

ext4 for me.(Score: 2) by Subsentient on Sunday February 05 2017, @03:13AM

I prefer it, since I've had such a good experience with ext4. It's the most stable filesystem I have ever seen, with remarkable error recovery. It's very hard to kill. All my systems use it as the default. I have btrfs on a thumbdrive because I wanted the compression, but I've had btrfs die on me before. I wouldn't use it for mission-critical stuff. ext4 is the best choice for stability on Linux, I'd say.

--"Sometimes the only thing more dangerous than a question is an answer." -Ferengi Rule of Acquisition #208

An elder nerd advised me to use XFS on my current array, and I have since had exactly zero issues with it. Even when a drive started to fail, it recovered gracefully. The documentation is easy to read and the tools are easy to use (as far as filesystems go). It is important to note that so far I'm only trusting it with ephemeral data and nothing too important, but out of this 8-year-long experiment I am wholly convinced to use XFS on the next array, unless I come into some sweet hardware that would make switching to ZFS or BTRFS worthwhile.

I have XFS on some large drives because, after formatting, it offered more available space than ext (ext3, I guess). Whether that is a fake advantage because it is simply a matter of preallocating stuff for metadata, I dunno. Anyway, with the only downside being that the fsck + badblocks options don't work, I have never had problems with it.

XFS is a good filesystem. But it's not perfect. At work, we lost an entire openstack cluster just before Christmas, due to loss of the XFS storage. Likely a transient disk or memory hardware error, but it proved to be completely unrecoverable, even with the XFS tools.

RedHat seem to be falling back to XFS + LVM in the absence of Btrfs being anywhere remotely near production readiness. But XFS doesn't go much beyond metadata journalling; it's still very much a filesystem of the 90s, albeit a good one. It doesn't do data journalling, and it doesn't do block level hashing/checksumming, and it can't self heal or scrub itself. There is zero protection from data errors.

This is an area where there's a good bit of cognitive dissonance going on at the moment. The harsh truth of the matter is that Linux doesn't have a top-notch native filesystem *at all* right now. You can use ZFS if you are able to use third-party modules. And at work we use expensive IBM GPFS stuff. But while Linux has a huge number of filesystems provided natively, they are all, for one reason or another, crap in different ways.

XFS got a bad reputation for a couple of reasons. The first was that XFS relied heavily on delayed allocation to avoid fragmentation and so liked to keep things in the buffer cache for a really long time. If you didn't do a graceful shutdown you'd end up with a lot of files containing nothing but zeroes. I think this is largely fixed (but doing so cost some performance, which made XFS less attractive). The second was that the initial Linux port of XFS was really buggy and you were pretty much guaranteed data loss with it. It took a few years to stabilise, and by that time other filesystems had journalling, which was the main reason for using it.

I'd like to thank systemd for pushing me into the arms of freebsd and ZFS

Sure at the time I was all WTF is this why are they destroying Linux? Why ruin something that works? Like a gang of rabid deletionists on wikipedia they just can't tolerate not destroying stuff that works...

But the grass really is greener on the other side in *BSD-land. It really is better engineered and better designed and "just works" more often.

I think the *BSD foundations and corporations should donate money toward systemd development; best recruitment tool EVER. What *BSD needs now is for Linux to grow another couple of competing audio systems, and maybe integrate GRUB into systemd (if it's not already). I know: make it impossible to run emacs on a systemd machine. Hell, make it impossible to run anything but nano as an editor. And make systemd incompatible with gcc too. The future of *BSD is going to be awesome thanks to the systemd developers!

I think the *BSD foundations and corporations should donate money toward systemd development

I nominated Lennart Poettering for a lifetime contribution award at one of the BSD conferences a couple of years back (sadly, he didn't get the award). Both PulseAudio and SystemD have caused spikes in FreeBSD adoption, and both have led to some very competent developers deciding to start contributing to FreeBSD. I'm looking forward to his next project. My guess is that it's going to be a shell with natural language processing integrated (and no POSIX sh compatibility) that he will persuade distributions to install as /bin/sh.

ReiserFS(Score: 2) by DECbot on Monday February 06 2017, @07:59PM

I voted by most representation. So, between work and home, there's ext3, reiserfs, zfs, reiserfs, ext4, and NTFS. The two reiser boxes were built before he was found guilty and have had no problems FS-wise--they are also the two home machines that get the most use. One is a file server with LVM2 and fakeraid, and the other is my desktop, again LVM2 and fakeraid. There might be more ZFS in the near future, because the ext3/reiserfs machines are on Ubuntu 12.04 and I don't like the performance of 14.04 on my laptop (fresh install w/ ext4). So, when I get around to nuking and paving my laptop with FreeBSD, and perhaps if I get bold enough to attempt ZFS on 32-bit machines, ZFS will be the filesystem of my choice.

I'm still not sure what to do with the 12.04 LTS desktop when support stops. It runs great now, with a good number of games working under wine. Reinstalling everything under another OS will be a pain. Because of systemd, it won't pass the Hairyfeet challenge to get to 16.04, and I'm not emotionally ready for systemd or bsd on it. Worse yet, I don't have time to be a system admin to facilitate and tune a change, nor the time to backport updates myself.

--cats~$ sudo chown -R us /home/base

hasn't ReiserFS died of bitrot yet?(Score: 2) by bzipitidoo on Saturday February 11 2017, @10:15AM

I used ReiserFS v3 for a long time. Tried v4 for a while. But that was all over 10 years ago. I thought btrfs was to be all the things originally planned for Reiser4, plus more, so that there's no longer any reason to use Reiser at all.

If it's an old laptop, just leave it alone. I still have my first nettop computer that came with Windows 98. It still works, but it is so slow and limited on memory it is useless for newer software. Chuck the old one into storage, buy a new, more powerful laptop and pave that one over instead.

Doing ZFS on 32-bit is bad news. If your CPU is 32-bit only, you're better off with UFS on FreeBSD (also because it's likely single-core only). RAM isn't quite the issue many make it out to be with ZFS. I've run 3GB fine, and 2GB is pretty doable too with tuning, assuming you're not doing anything intense on the drive.

Re:ReiserFS(Score: 2) by SDRefugee on Sunday February 19 2017, @05:39PM

I'd do a "do_release_upgrade" to 14.04LTS and leave it there for now.. You've got two more years on 14.04 to figure out where you're going, Linux-wise.. I'm staying on 14.04 till close to EOL to see what my options are for a non-systemd distro, as I've tried 16.04 and I aint going there...Probably back to my "Linux_roots", that being Slackware...

--America should be proud of Edward Snowden, the hero, whether they know it or not..

I've never had an issue with it since I started using it 5 years ago. Performance is very good, and xfs_freeze is something everyone can appreciate, now that freezing has been made a standard feature of the Linux kernel for any filesystem.

I have seven computers I use (well, for one of them I need to find a Linux distro that will work on it before it's used): the future Linux box, whose file system will depend on the distro; one large laptop running FAT32 (Windows 7); a small dual-boot laptop with Kubuntu and Win 7, FAT32; two Android tablets, and I have no idea what file systems they use; one Android phone; and one Android TV that can read media files from a thumb drive, so I guess it's FAT32 too, although I suppose it could be FAT16 (not likely).

Linux on the desktop? Linux already won now that desktops are a minority of computers. These days, most computers (including phones and "smart" TVs) are running Android, meaning Linux has taken over the computing world.

What interests me is those of you who choose a file system rather than using whatever comes with the OS. What are your criteria for making your choice?

The title of the poll is "what is your computer's file system?". The computer may be a Mac (HFS+) or, for the really unfortunate, Windows (NTFS). It is reported that there are people who run those, believe it or not.

I hear that there's some weird niche operating system from a little company in Redmond that uses NTFS as its default filesystem. I can't remember the name off the top of my head, but maybe that's what people are using?

ZFS on FreeBSD since a couple of years back (FreeBSD 10.0). ZFS on Linux as the rootfs (works since Ubuntu 16.04, though you have to install by hand since the installer doesn't support it yet).

I've used Btrfs a good bit; the only file system to result in repeated data loss, loss of the ability to write, and performance problems I've ever used. Maybe it will eventually become production ready, but after it being terrible for so long, I'm not holding my breath.

Using ZFS over encrypted partitions, and the bottleneck is the encryption, not ZFS per se. And that because I'm running them on old CPUs without aesni support. Even on FreeBSD 11-STABLE amd64 boxes with only 4 GB RAM, which run just fine as desktop machines. So ZFS is actually a viable alternative. I was running UFS for a very long time, and wary of switching to ZFS for day-to-day uses, but I went to an all-ZFS-setup with the switch to FreeBSD 11 even on oldish hardware, and I'm not looking back. It's all too convenient to give up the upsides.

I, like most people, have limited experience with NILFS2. The only bad things I can think of off hand is that NILFS has absolutely horrendous performance in certain circumstances (but mostly only shows in benchmarks, not actual use) and that if the system cleaner isn't working properly, you can run out of disk space in a hurry (which murders performance as every operation requires a cleaning first).

Both ext4 and NTFS(Score: 2) by KritonK on Monday February 13 2017, @02:22PM

MS's shenanigans convinced me to switch my computers to Linux more than a year ago. My system disks or partitions are thus formatted with ext4, but all my data are still in the NTFS disks/partitions from the Windows days.

JFS -- it's worked very well for me(Score: 0) by Anonymous Coward on Friday February 17 2017, @10:11PM

I've been running a many-TB fileserver with JFS as its filesystem for many years with great success. I've also used JFS as the filesystem for the operating system on multiple Linux installs, also with great success. It's my go-to filesystem on Linux, and will remain so for a while. I initially picked it because it was mature code and was oriented towards using a minimum of resources.
Before that I had used ReiserFS, but due to the main programmer of that filesystem being incarcerated, I chose to migrate away from it.