Long answer: Nobody is going to magically stick a label on the btrfs code and say "yes, this is now stable and bug-free". Different people have different concepts of stability: a home user who wants to keep their ripped CDs on it will have a different requirement for stability than a large financial institution running their trading system on it. If you are concerned about stability in commercial production use, you should test btrfs on a testbed system under production workloads to see if it will do what you want of it. In any case, you should join the mailing list (and hang out in IRC) and read through problem reports and follow them to their conclusion to give yourself a good idea of the types of issues that come up, and the degree to which they can be dealt with. Whatever you do, we recommend keeping good, tested, off-system (and off-site) backups.

Pragmatic answer: (2012-12-19) Many of the developers and testers run btrfs as their primary filesystem for day-to-day usage, or with various forms of "real" data. With reliable hardware and up-to-date kernels, we see very few unrecoverable problems showing up. As always, keep backups, test them, and be prepared to use them.

Depends who you ask... ask me? It's been stable and ready to use since around the Fedora 17 release; that's about when I installed Arch on my home server with two hard drives and told Btrfs to automatically use both as a single volume. Btrfs on a home server, Btrfs on this laptop. Compression is awesome, the automatic SSD optimizations are great, performance is great, and I've had no bugs, crashes, or corruption. Nothing.
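For anyone wanting to try that kind of setup, here's a hedged sketch (device names and mount point are placeholders, not the poster's actual config) of creating one btrfs volume spanning two disks and mounting it with compression:

```shell
# Placeholder devices -- adjust to your hardware.
# One filesystem across two disks: data in the "single" profile,
# metadata mirrored across the devices (the multi-device default).
mkfs.btrfs -d single /dev/sdb /dev/sdc

# Mount with transparent compression (lzo was the fast option in this
# era); the "ssd" optimizations are normally auto-detected, but can be
# forced with the ssd mount option.
mount -o compress=lzo,ssd /dev/sdb /mnt/data
```

Either device name can be passed to mount; btrfs scans for the other members of the volume itself.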

It's the same issue that KDE had... everyone got told "DON'T USE IT, IT'LL EAT YOUR DATA" because of early bugs. People only speak up when things are bad; they don't speak up when things are good. You just have to jump in the pool and see for yourself like I did, guys. By the way... the water's great :P

It's been running on my home systems (8TB+12TB) for well over a year. I had one power failure that left one in a state where it couldn't be mounted and clearing something (I forgot what TBH, it's been a while) with a tool from the btrfs-progs repo fixed it.
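For context (and not necessarily the tool the poster used, which isn't specified): btrfs-progs ships a few recovery commands, and an unmountable filesystem after a power cut is typically approached roughly like this. A hedged sketch, with /dev/sdX as a placeholder:

```shell
# Try the mount and read the exact error from the kernel log first.
mount /dev/sdX /mnt
dmesg | tail

# Clear the fsync log tree -- the classic fix when replaying the log
# prevents mounting. Loses only the last few seconds of fsync'd data.
btrfs rescue zero-log /dev/sdX

# Read-only consistency check; --repair exists but is a last resort.
btrfs check /dev/sdX
```

Which step applies depends entirely on the actual failure, so read the mount error before running anything that writes.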

That's the main reason why I still don't trust btrfs fully: resilience to power failures.

When the hell will BTRFS be declared "ready"? Is there any slower developing project at all?

I think the main problem is that Chris Mason is just not very good at overseeing such a large project as btrfs. The development has always lacked focus. The overseer of a project like this should always have a priority list of two or three issues that need to be urgently worked on in order to bring the project to wide usability. And then he needs to direct all the contributors to spend most of their time working on them.

Now that Mason no longer works at Oracle it seems that things are going even more slowly. Although there are several contributors who are fixing a lot of bugs, the development still appears unfocused. I think the btrfs project really needs someone new to take over and focus on dealing with the remaining issues that are preventing the major distros from using btrfs as their default filesystem.

That's the main reason why I still don't trust btrfs fully: resilience to power failures.

I'm not sure I follow. If you cut the power, there is little a filesystem can do: whatever was in flight at that moment is lost, and that's that. It's the same with every other filesystem. If you want to avoid that, get an uninterruptible power supply.
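One thing worth adding: regardless of filesystem, an application can bound how much of its own data a power cut can lose by forcing writes to stable storage before reporting success. A minimal Python sketch (the function name is mine; it assumes the disk honors cache flushes, which is exactly the caveat discussed further down):

```python
import os

def durable_write(path: str, data: bytes) -> None:
    """Write data so that, once this returns, a power cut cannot
    lose it (write to a temp file, fsync, then atomically rename)."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())      # force file contents to the disk
    os.replace(tmp, path)         # atomic rename into place
    dirfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dirfd)           # persist the directory entry too
    finally:
        os.close(dirfd)
```

Without the fsync calls, a crash can leave a zero-length or partially written file even on a journaling filesystem, because the data may still be sitting in the page cache.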

I think the main problem is that Chris Mason is just not very good at overseeing such a large project as btrfs. The development has always lacked focus. The overseer of a project like this should always have a priority list of two or three issues that need to be urgently worked on in order to bring the project to wide usability. And then he needs to direct all the contributors to spend most of their time working on them.

Now that Mason no longer works at Oracle it seems that things are going even more slowly. Although there are several contributors who are fixing a lot of bugs, the development still appears unfocused. I think the btrfs project really needs someone new to take over and focus on dealing with the remaining issues that are preventing the major distros from using btrfs as their default filesystem.

The issues being..?

It's been running on my home systems (8TB+12TB) for well over a year. I had one power failure that left one in a state where it couldn't be mounted and clearing something (I forgot what TBH, it's been a while) with a tool from the btrfs-progs repo fixed it.

Impressive. What type of configuration are you using? Hardware RAID? btrfs RAID?

I'm using RAID10 on a 3ware 9570 controller (four 3TB disks, so I have 6TB of usable space). While my root filesystem is btrfs, my /home is ext4. Same with my desktop (same RAID controller, but currently only RAID1 with two 3TB disks). So far, not a single problem; I'm thinking next time I reinstall the desktop, I may go full btrfs.

I also have the family computer fully on btrfs (using btrfs RAID1). No important data is stored there so I figured why not. I've been running that machine with btrfs across several installs for 1-2 years now, not a single issue.

I even tried btrfs on 2 different SPARC machines I have... but btrfs really isn't there yet on non x86 architectures (or at least not on some).

As far as power-failure issues go, I believe the only real danger of data loss there depends more on your disks. ZFS and btrfs both have an issue here: some disks (but apparently not most?) will claim to have flushed all cache to the platters when they really haven't, to get better numbers in benchmarks. Because of how these two CoW filesystems work (I don't know the specific details), this really breaks things in certain situations: after a power loss or crash, the filesystem can end up in an inconsistent state even though the filesystem code had only committed changes that should have resulted in a clean fs.
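If you suspect a drive is in that category, one blunt mitigation is to inspect and disable its volatile write cache, at a throughput cost. A sketch with a placeholder device:

```shell
# Show whether the drive's volatile write cache is enabled.
hdparm -W /dev/sdX

# Disable it, so a write acknowledged by the drive really is on the
# platters. Slower, but removes the "lying about flushes" failure mode.
hdparm -W 0 /dev/sdX
```

The setting usually doesn't survive a power cycle, so it generally needs to be reapplied at boot (e.g. from a udev rule or init script).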

I run a file server with 6x 2TB dm-RAID6 and a workstation with 6x 3TB RAID6 on an LSI 9271. Root is ext4 (separate LV on the file server, SSD on the workstation) and /home is btrfs.

The one time I needed surgery to get the filesystem back up was when I backed up from the workstation to the file server and pulled the power cord on the server by accident. Now it's held in with a zip tie.

I was about to ask you why you'd use dm raid instead of btrfs raid... and then I remembered that RAID5/6 are fairly recent to btrfs.
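For reference, btrfs RAID profiles aren't a mkfs-or-never decision: an existing mounted volume can be converted with a balance. A hedged sketch (devices and mount point are placeholders), with the caveat that the btrfs RAID5/6 code was brand new at the time and carried the classic parity-RAID write-hole risk for a long while afterwards:

```shell
# Add two more devices to an existing, mounted btrfs volume.
btrfs device add /dev/sdc /dev/sdd /mnt

# Rewrite data chunks as RAID6 while keeping metadata mirrored (raid1),
# a common hedge while the parity code was young.
btrfs balance start -dconvert=raid6 -mconvert=raid1 /mnt
```

The balance rewrites every chunk, so on multi-terabyte volumes the conversion runs for hours; it can be paused and resumed with `btrfs balance pause` / `btrfs balance resume`.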