Posted by Soulskill on Friday October 05, 2012 @03:39PM
from the file-systems-are-for-files dept.

sfcrazy writes "Samsung has created a new Linux file system called F2FS. Jaegeuk Kim of Samsung writes on the Linux Kernel Mailing List: F2FS is a new file system carefully designed for the NAND flash memory-based storage devices. We chose a log structure file system approach, but we tried to adapt it to the new form of storage. Also we remedy some known issues of the very old log structured file system, such as snowball effect of wandering tree and high cleaning overhead."

No. SSDs present themselves to the OS as contiguous block devices. Filesystems intended for bare NAND flash like jffs(2), yaffs, and this new F2Fs would be totally useless for SSDs. They're intended for bare NAND, which SSDs are not.

Bare NAND is presented as a block device. NAND SSDs are also presented as block devices. That does not imply that they are equal. SSDs have a controller that does remapping on the fly, in many cases on-the-fly compression, bad block handling and much more. Bare NAND does not have that layer. That is why the AC's comment should be moderated informative, and you should be moderated "plain wrong".

No. SSDs present themselves to the OS as contiguous block devices. Filesystems intended for bare NAND flash like jffs(2), yaffs, and this new F2Fs would be totally useless for SSDs. They're intended for bare NAND, which SSDs are not.

You're wrong

f2fs works on top of block devices. f2fs sends TRIM (an ATA command) down to the device. Bare NAND flash doesn't grok ATA commands.

The problem is that even with a translation layer for block access, flash-based devices have limitations, which means that different usage patterns can dramatically change the performance of the device.

For a (simplified) example, to write a file in ext3, you need to store the new data for the file, but you also need to update other metadata: the data block locations and file size in the inode, the directory entry, and the journal. This means that you have four 'internal block descriptors' open for writing at the same time.

But block descriptors are a limited resource in SSDs, and even more so in low-cost eMMC devices. This means that with only two or three open files being written regularly, you could quite easily end up in some kind of thrashing state, with the device quickly opening and closing descriptors. Since flash memory writing is strongly constrained, a whole block (a 2 MiB block size is common) containing a descriptor will need to be erased before its next use. As a result, each block ends up holding only a little useful data, and writing a small amount of data leads to a lot of flash write and erase traffic. This problem is called write amplification, and it reduces both the disk's performance and its durability.
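To put a rough number on that worst case, here's a toy Python sketch (this is not real SSD internals; the 2 MiB block size and the four-write pattern are just the illustrative figures from above):

```python
# Illustrative sketch (not real SSD internals): estimate write amplification
# when every small write dirties a separate erase block.
ERASE_BLOCK = 2 * 1024 * 1024  # 2 MiB erase block, as in the comment above

def write_amplification(logical_writes):
    """logical_writes: list of write sizes in bytes, each assumed to land
    in a different erase block (worst case), so the device must rewrite a
    whole erase block for each one."""
    logical = sum(logical_writes)
    physical = len(logical_writes) * ERASE_BLOCK
    return physical / logical

# Appending 4 KiB of file data plus three 4 KiB metadata updates
# (inode, directory entry, journal), each in its own erase block:
print(write_amplification([4096] * 4))  # 512.0
```

Even in this crude model, every byte the OS writes costs 512 bytes of flash traffic, which is exactly the performance and durability problem described above.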

The F2FS design is log-based: all files on the disk share six common writing areas, one for each kind of stored data, where information is stored as it arrives. This should have a very positive effect on the write amplification problem, and is an example of how an adapted file system can have a positive impact even on block-based devices.
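As a hedged illustration only, the separate-writing-areas idea can be sketched like this (the six log names mirror F2FS's hot/warm/cold node/data split, but the allocator itself is made up for illustration):

```python
# Toy sketch of multi-head logging: each kind of data gets its own
# append-only region, so updates of one type don't fragment the others.
# Log names follow F2FS's hot/warm/cold node/data split; the rest is
# purely illustrative, not F2FS code.
LOGS = ["hot_node", "warm_node", "cold_node",
        "hot_data", "warm_data", "cold_data"]

class MultiHeadLog:
    def __init__(self):
        self.heads = {name: [] for name in LOGS}

    def append(self, kind, block):
        # All writes of one kind go to the same sequential region, so
        # each region stays contiguous from the device's point of view.
        self.heads[kind].append(block)

log = MultiHeadLog()
log.append("hot_data", "dir-entry")
log.append("cold_data", "mp3-chunk")
print(len(log.heads["hot_data"]))  # 1
```

The point is that frequently-updated ("hot") blocks never get interleaved with rarely-touched ("cold") ones inside the same erase block, which is what keeps the cleaning overhead down.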

If this is the case, then I don't see the point. Filesystems already in use support TRIM.

Just because you send TRIM down doesn't mean the device can erase the block. The erase block size in NAND is usually 256KB or larger. Using 256KB as the IO block size is just crazy; drivers use something like 16KB or 32KB. The filesystem has to be aware of the erase block size so it can send down a TRIM command for an aligned, contiguous 256KB block; then the device can go on and erase it.
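To make the alignment point concrete, here's a small sketch (purely illustrative, assuming the 256 KiB erase block mentioned above) of the only part of a freed range a device could actually erase:

```python
# Sketch: given a freed byte range, find the largest sub-range that is
# aligned to (and a whole multiple of) the erase block size -- the only
# part the device can actually erase in response to a TRIM.
ERASE_BLOCK = 256 * 1024  # 256 KiB, as in the comment above

def trimmable_range(start, length):
    end = start + length
    # Round start up and end down to erase-block boundaries.
    aligned_start = (start + ERASE_BLOCK - 1) // ERASE_BLOCK * ERASE_BLOCK
    aligned_end = end // ERASE_BLOCK * ERASE_BLOCK
    if aligned_end <= aligned_start:
        return None  # range doesn't cover even one full erase block
    return aligned_start, aligned_end - aligned_start

# Freeing 600 KiB starting at 100 KiB only lets the device erase a
# single 256 KiB block (the one starting at offset 256 KiB):
print(trimmable_range(100 * 1024, 600 * 1024))  # (262144, 262144)
```

Hence the point above: unless the filesystem lays data out aligned to the erase block size, most of what it TRIMs is unerasable slivers.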

You appear to be correct, judging by the information in the patch (e.g. see the LWN link posted by someone else, later in these comments). (Commenting mostly because I accidentally modded you as 'troll' when I'd meant to click 'informative' and this is the only way to undo that.)

While the primary benefit will initially be for Android devices, this will be great news for solid state drives as well. Great job Samsung!

Before you go congratulating them on a great job, remember this is the second time they did this. The original attempt was called Robust File System. It was an abortion based on FAT16/32 with a duplicated file allocation table and some sort of journalling hacked on top.

It was claimed to be optimised for NAND devices and all that other good stuff, but the community quickly came to rename it Really Fucking Slow.

This file system was so slow that on the original Galaxy S the kernel would think software had locked up while writing to the disk and prompt the user to force close it. Search for "lagfix" if you're interested in what a disaster this was. There were users worldwide trying to find fixes for the slow system performance, and the fix was often in the form of a kernel which supported ext4 or yaffs and a utility which converted the entire /system and /data partitions in the phone to the more common file systems.

First thing I thought as well. We'll see how it goes - this one doesn't sound as stupid as the previous one, but anybody who knows Samsung knows that they are very weak in the software performance department...

Then let me just say thank you for your hard work. It's quite amazing when communities and hackers can create a better product than the companies who actually have access to the full specifications and sources.

Before you go congratulating them on a great job, remember this is the second time they did this. The original attempt was called Robust File System. It was an abortion based on FAT16/32 with a duplicated file allocation table and some sort of journalling hacked on top.

*twitches*

Damn you, thegarbz! Damn you to heck! I had nearly managed to forget about RFS today before you reminded me. Now I'm going to have to attempt to wipe my memory with methylated spirits and Friends reruns. Again.

Sure, and they might not have released it to the public if it weren't for the GPL. On the other hand, they've developed something that looks like it may be very useful, and have released it without batting an eye. They're one of only seven Platinum members of the Linux Foundation. I think it's clear they understand how the ecosystem works, and they're happy to participate. Hard to fault them for that.

And actually, as I understand it, they use Linux for a lot more than just Android devices. They also have embedded Linux in other systems, like TVs.

Well considering the vast size of Samsung, they probably do far more work with Linux than Google does as well.

People forget we're talking about a company that not only builds products in pretty much every home electronics category but also ships, CCTV, aircraft (for a while), artillery and automated turrets. None of this counting the bits and pieces they research and build that go into each of those products.

They donate at least $100,000 to the Linux Foundation a year, if nothing else. Pocket money to Google, maybe, but no small chunk of change to the Linux Foundation. Creating and releasing an extremely popular and novel Linux-based OS has got to count for something, too.

Samsung donates at least $500,000 a year, so they do still win in "we love Linux" top trumps.

Creating and releasing an extremely popular and novel Linux-based OS has got to count for something, too.

Forking the Linux kernel to practically derail driver development for mobile devices counts for something as well. Here Samsung and Intel also serve as counterweights by running their own Tizen effort, which even sees some automotive use with Genivi. But the story of "Linux on mobile", between Google and Nokia, is a sad and confused one.

I know you're kidding, but I should point out that Linux is not a requirement for building bad interfaces (though one might claim that it helps). TV engineers in general seem to have some impressive skills at building bad interfaces. My last three TVs all had terrible interfaces, and none of them were Linux-based. :)

If you think TV engineers write bad UIs, you should see the ones rocket scientists cobble together.
Seriously, a PhD in astrophysics doesn't mean there are any programming skills whatsoever, let alone any appreciation of ergonomics.

They always look great spec-wise for the money, but the actual product just doesn't feel right (their premium Android phones being an exception, I hear, though I've avoided them after my other experiences with their stuff).

I also have a Blu-ray player; when I turn it on, it presses play automatically. If a disc is in at the start, I have to watch all the unskippable ads before I can hit stop, then menu, to browse to Netflix.

The "Smart" crap in my otherwise awesome Samsung TV is inconsistent and very hit and miss. I have a full qwerty remote that some of the built in apps don't even take input from (forcing me to use arrows and an on screen keyboard.)

I bought an Apple TV which provides similar but far superior functionality to the apps built into the TV set and I couldn't be happier with it.

If I had known how bad the smart junk in the TV would turn out to be, I would have bought a less expensive Samsung TV (one with similar vis

My Samsung TV menu keeps freezing randomly if I switch the interface language to Russian. Say what you will about Linux, but gettext isn't that incompetently programmed. This must be the work of the legions of Samsung's coders.

That's the beauty of the open source model. People and businesses contribute things that benefit them directly, but they benefit everyone indirectly. Large companies don't contribute to the Linux kernel to be nice guys, they generally contribute code and patches to benefit their own products and systems. Their contributions benefit everyone, however.

Well, no, because it works in open source even outside of the copyleft world, and it's only required in the copyleft world.

Copyleft probably was critical in establishing the benefits of big interests participating in the open source world rather than locking everything up, to be sure, but once it was established there's been quite a lot of stuff that has come down to the public in open source form even when no legal mandate existed.

Glad to see proof that Samsung does innovate and not steal everything from AAPL like all Apple Fanboys think.

What are you talking about? This is clearly a copy of Apple's original filesystem concept THAT THEY INVENTED when they created HFS! Why doesn't Samdung ACTUALLY innovate and find a new way to store data on a collection of sectors instead of just copying Apple all the time.

The worst part is that Samdung didn't also copy the MARVELOUS AND CLEARLY CORRECT INVENTION of hiding the filesystem (which Apple invented) from the users. They're so far behind Apple that they can only BLATANTLY STEAL the easy parts!

Apple created the LFS, Litigation File System. The unique innovation looks ahead for a user copying a file from one directory to another, blocks the request, and transfers the operation to a county in Texas to be tied up in I/O for years.

Commercial hardware companies contributing to open-source and the kernel, I mean.

It's nice to see that Linux and the open-source philosophy are more and more generally accepted.

Let's hope it's because they have seen the advantages of humans working together, helping each other out... and not just for nefarious dog-eat-dog (aka capitalist aka "free market" aka law of the jungle*) purposes.

* Don't worry. I know they're not supposed to be the same. The point I want to make is that nowadays they all get used to describe the same thing.

No, I don't. I remember when it was rare, but not when it was unthinkable. Even if you mean copyleft as opposed to merely open-source (there was and is a lot more reluctance about copyleft), commercial hardware companies were contributing to the GNU project even before the Linux kernel sprang into existence. GCC has always had the backing of hardware companies. The GCC Ada backend was fully funded by commercial companies several years before Linus went public with his experimental kernel.

Heck, some companies even recognized that the GPL protected their own code, even before Linux appeared. The GPL'd versions of Ghostscript existed because Aladdin recognized that the GPL prevented others from taking unfair advantage of their code, while still allowing the community to contribute.

This is a good thing, but corporations contributing to Free Software projects has been business as usual for over a decade now. Generally, they do so because they correctly perceive that cooperation is more beneficial to their respective bottom lines than keeping everything secret. Even Oracle, a corporation with a clear history of hostility to Free and Open Source software has supported development of the Btrfs Linux file system for many years. Cooperation between competing corporations is nothing new. Obs

Apple is rich enough to skip eMMC-based memory for its iDevices, so it does not necessarily need this kind of file system. The NAND-or-eMMC trade-off is 'spend (a lot of) money once to write your own FTL, and adapt it for each new chip' or 'buy a chip with a hardware FTL and a standard interface for a higher price'.

You can check the tear-downs for all Apple devices: all of them directly use NAND, which makes sense. Apple buys large enough numbers of Flash to have reliable sources, and can invest the money

I would argue that anything is better than FAT for NAND storage.
Under FAT, for example:
Taking a picture with a FAT-formatted camera can corrupt unrelated files for some reason if the camera is low on power.
Entire music collections can get lost when the battery in a FAT-formatted phone goes out during the write process.
Anything is better than FAT.

If we did not have patents and Windows as the dominant desktop, we would not have to deal with FAT in 2012. Either your camera uses it, or the vendor is stuck with paying Microsoft more, or with trying to get users to install a driver for a proper filesystem. All those options are pretty bad.

My first reaction was "Is this to replace FAT?" Then I read about "log files" and I wonder if this is essentially a file system for more efficient logging. The article and the mailing list message seem to be somewhat empty in that regard. "Hey, there's a new file system..." That's about the size of it. So what's it about?

It would be interesting if there were an improvement to FAT and it somehow ended up as an alternative in consumer devices... but then again, how to get it onto Windows machines? "driv

This cannot replace FAT, since the whole point of FAT is to be interoperable with all those Windows machines out there. For as long as Windows only understands FAT and NTFS on removable devices, any consumer device will use those (and specifically FAT, for certain other reasons) in any of its memory that is directly exposed to be mounted as a block device.

On the other hand, for internal device memory, Android has already moved to a high-level protocol (MTP) to expose that to PCs, so they don't care what file system backs it internally. I haven't checked, but I'd expect that any 4.x device has its internal memory fully in ext4 or other Linux native FS already.

The move to MTP is something I've been speculating has to do with moving away from the FAT patent licensing issues that Microsoft is using to bilk Android manufacturers. I find it super annoying to use: since it isn't treated like a block-level device, Python won't interact with it, and I can't read it in anything like Baobab [wikipedia.org], so I tend to lose track of what's occupying what space.

The move to MTP is something I've been speculating has to do with moving away from the FAT patent licensing issues that Microsoft is using to bilk Android manufacturers

There may be that angle to it, but I think the main reason is because it removes the requirement for the phone to unmount the device to allow the PC to mount it - this can wreak havoc on any app that has files on the unmounted partitions open, requiring them to be aware of this scenario (and for many apps that pretty much means that they have to shut down - e.g. mapping apps that store their map cache there), and it also means that manufacturers have to carve out two partitions - one for the OS and other no

Windows has some built-in support for MTP (at least since Vista), though it's not on FS level - instead, it's hooked up into Shell/Explorer, much like Libraries in 7. So if you're content with Explorer, or some file manager that's using Shell interfaces, then it should just work.

For automation, when you actually want to see it as FS, yeah, it's a mess. Ideally a driver-source-compatible port of FUSE would solve this, and people have made several attempts at FUSE-Win32, but apparently writing Win32 FS drivers is not for the faint of heart, so I don't know of anything stable.

I'm actually one of those people you mention in the second paragraph. The real problem isn't Win32, it's that the filesystem stack is below Win32. Win32 is a subsystem layer that sits on top of NT, preserving compatibility with Win9x apps and presenting a similar user experience. Direct NT programming is a different beast in many ways, even when you stay in user-mode. Kernel-mode NT is just another level of fun on top of that. It's certainly possible, of course, but the overlap of people who are interested

Ironically, it was Microsoft who developed MTP and pushed it through the standards committee. And now they extended it for Windows Phone, so regular MTP support is not enough to work with WP devices. This is perhaps the stupidest move in the whole Windows Phone story.

Mine was "they're trying to prevent exFAT from getting a foothold". With flash memory in various mobile storage media (tablets, smartphones, flash drives, memory cards) getting as prevalent as it is, interoperability between different OSes becomes very important. To my knowledge, FAT is currently the only file system that works out of the box on all the major systems (Windows, MacOS, Android, Linux, probably Windows Phone). And it's outlived its usefulness. I'm guessing Microsoft has been pushing its own propr