
A driver is probably needed to handle the hybrid part - to know what to do with the special features that are new to consumer drives. I think the OS has to decide what to put on the flash cache, I don't think that the drive can realistically be expected to do that on its own. With a current generic driver, I don't expect that there would be any benefit to using this type of drive.

Not at all. Hardware to do all that you are talking about is probably on the Drive side of the SATA port. It would be transparent to any host system because of the SATA interface. All that it would care is that it sees a SATA drive, and it appears to be really fast!

Many drives already have a read cache in RAM. The drive electronics figure out what to cache and what not to. SATA, EIDE, and SCSI drives all have some kind of microcontroller that may handle the cache. Since the Samsung site only seems to work with IE, and I only have Firefox and Opera on my Linux box, I have to guess.

To run the drive, perhaps, but what about to use the caching? Is the "write to buffer till full, then dump to disk" behavior handled completely within the drive firmware itself, or does it depend on OS-side drivers? TFAs are kinda sparse on that info.

Almost certainly completely hidden by the driver interface. Firstly because it would be easy to do, secondly because I doubt that (e.g.) the SATA interface has commands to handle this new technology invented about eight years after it was designed, and thirdly because keeping it built-in vastly simplifies the power fail handling.

If you read the blurb, or the article, this is not the same thing as ReadyBoost(TM), which MS touts as a new feature in Vista. ReadyBoost(TM) allows you to plug in a flash drive and use it as a sort of replacement for part of the disk. ReadyBoost(TM) allows you to use up to 2GB of flash. This technology puts the flash right on the drive, uses a much smaller amount of flash (128-256MB is optimal), and is more about power management than speed increases. Think of it as a relatively large, non-volatile cache.

If you read the blurb, or the article, this is not the same thing as ReadyBoost(TM), which MS touts as a new feature in Vista. ReadyBoost(TM) allows you to plug in a flash drive and use it as a sort of replacement for part of the disk. ReadyBoost(TM) allows you to use up to 2GB of flash.

Close, but no ceegar, I'm afraid!

ReadyBoost doesn't replace anything, really. What it does do is provide a very fast cache for files that are also saved to disk but can be accessed much more quickly from the flash drive (which can be a memory stick, SD card, CF card, or whatever, provided it is fast enough overall for Vista to use; it does a complete R/W check to make sure that the device is fast enough across the whole device).

Ideally, for those with smaller amounts of memory, a ratio of 1:1, flash :

The limit that MS puts on FAT32 is 32GB for a drive. The limit is actually much higher, and if you format a drive with Linux or many other disk utilities you can get really big drives. Up to 2 TiB, according to Wikipedia. The max file size is 4GB, but I don't see how this would really affect the maximum of this application, as they could just use multiple files to store the data.
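The "just use multiple files" idea from the comment above can be sketched in a few lines of Python. This is purely illustrative; the function name and chunk size are made up for the example:

```python
def split_into_chunks(data: bytes, chunk_size: int) -> list[bytes]:
    """Split a blob into pieces no larger than chunk_size, so each
    piece stays under a filesystem's per-file limit (e.g. FAT32's ~4GB).
    Joining the pieces back together recovers the original blob."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
```

In practice you would write each chunk out as `cache.000`, `cache.001`, and so on, and concatenate on read-back.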

2GB is the usual max volume size for FAT16 (4GB with 64K clusters under NT), and the max file size on FAT32 is 4GB minus one byte. (Trust me, I know from practical experience in trying to move large Oracle dumps via a FAT32-formatted USB drive, and I was there for the FAT16 limits.)

The SATA interface most certainly has commands to handle this new technology -- you can send arbitrary commands over SATA, just like you can over SCSI. It's a generic data interface, not a block-layer device controller. You'd just assign the controller another LUN and document the commands it accepts. You could then make the flash disk part of the address space of the primary disk, or you could assign each their own LUN for use as two separate disks, with the third "control" LUN accepting commands to copy between them.

Isn't flash only good for ~30,000 writes? If the flash breaks, can you still use the drive? And most importantly, how much does it cost?
I think the spinning magnetic disc is still king for a while to come, unfortunately.

A new kind of flash was developed last year that had much faster read/write (closer to RAM) and didn't deteriorate. I suspect that kind is what these will use. (Unfortunately I don't remember the name...)

*tap tap* ntfs-3g [ntfs-3g.org] -- I'm using it now, and it's performing nicely even under pretty heavy BitTorrent load. ntfs.fsck still needs to be written, but the situation is now vastly better than it was less than a year ago.

ntfs.fsck still needs to be written, but the situation is now vastly better than it was less than a year ago.

Amen! I have ntfs-3g on my Ubuntu (Edgy) partition. So long as I do a safe shutdown, and the filesystem is marked clean, everything works wonderfully and very quickly (not that I had serious speed problems with captive-ntfs, but I seldom deal with very large files.)

It's quite amusing that Linux is the only OS that can natively (as in, as a filesystem, not just in some ftp-like application) handle basically every major filesystem in existence today, what with the addition of NTFS support.

Linux is the only convenient way for me to transfer files from an HFS+ volume to an NTFS volume or vice versa. You can do it on Windows by using MacDrive, but that is like using WinZip or something. And it's damned slow. You can't do it on Mac OS AFAIK; at least I haven't seen working NTFS R/W on Mac OS yet.

And of course linux also supports a shitload of BSD formats, XFS, JFS, ZFS...

MacFUSE [google.com] claims to support ntfs-3g under OS X. Looks like you'll have to compile ntfs-3g from source, though, and mount from the command line -- they've only got binaries and a GUI mounter for SSHFS at the moment.

A couple months ago when I set up ntfs-3g on Linux my googling around indicated that ntfs-3g on macfuse was pretty sketchtacular. I did forget that it existed at all, though. It may be dazzlingly wonderful by now.

A new kind of flash was developed last year that had much faster read/write (closer to RAM) and didn't deteriorate. I suspect that kind is what these will use. (Unfortunately I don't remember the name...)

If you burrow down to the original blog Q&A, the MS guy says that they have been very careful about how they use reads & writes, and they expect memory sticks to last 10 years being used like this, so it appears that dead sticks aren't a problem.

That's surprisingly accurate:

10,000 hours = roughly 1 year, 50 days
1,000,000 hours = roughly 114 years

Most people do die in that timespan, even if it is a little broad.

Anyway, back to flash: those numbers aren't from the same variety of flash. They might be using one that averages, say, 800,000 erase/write cycles, with 99.999% of devices being within 50,000 of the average. I certainly wouldn't mind knowing how long I was going to live that precisely, and I definitely wouldn't mind living 800,000 hours (I'd b

Wikipedia says that NOR flash is good for "10,000 to 1,000,000 erase cycles" and NAND flash has "ten times the endurance". Let's hope they've used the good stuff.
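A quick back-of-envelope check of what those cycle ratings buy you, assuming perfect wear leveling (the numbers below are hypothetical, chosen to resemble the 256MB cache and 100,000-cycle rating discussed in this thread):

```python
def flash_lifetime_years(capacity_bytes, erase_cycles, write_rate_bps):
    """Best-case lifetime under perfect wear leveling: the total number
    of bytes you can write is capacity * rated erase cycles; divide by
    the sustained write rate to get seconds, then convert to years."""
    total_writable = capacity_bytes * erase_cycles
    seconds = total_writable / write_rate_bps
    return seconds / (365 * 24 * 3600)

# e.g. a 256 MiB cache rated for 100,000 cycles, averaging 100 KiB/s of writes:
years = flash_lifetime_years(256 * 2**20, 100_000, 100 * 2**10)
```

With those assumed numbers the estimate lands around 8 years, which is roughly in line with the "10 years" claim from the MS blog quoted elsewhere in this thread.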

NAND and NOR flash are completely different types of flash chips.

NOR flash is good for holding code - it's basically nonvolatile RAM. You can execute code straight out of NOR flash easily by hooking it up to a memory bus.

NAND flash is good for holding bulk data. Its interface is strictly I/O based (like a hard drive) - you cannot directly execute code from NAND flash without copying it to RAM first. Some NAND-based devices have fancy tricks (like Samsung's OneNAND and M-Systems' DiskOnChip) where they put in some SRAM so you can execute, but they basically have to copy it from the array into the SRAM. (NAND flash also has stuff like "bit flips" where read data does not exactly match written data - and reading data can change it - but this is compensated for by using ECC codes in the "spare area".)

All NAND-flash handling code has to handle bad blocks as a typical chip can have up to 2% bad from the factory.

The reason we use NAND flash is because it's extremely dense. While flash gets increasingly expensive as you go larger (32-64MiB is the "sweet spot" in price/storage for NOR flash), NAND flash achieves really dense storage. For the price of a 32MiB NOR flash, you'd get a 1GiB NAND flash chip easily. So for things like memory cards and stuff which use I/O interfaces, the flash is exclusively NAND. NOR is used for stuff like BIOS code which doesn't change very often anyhow, and often just enough of it to have code where we can pull out data from cheaper storage devices (NAND flash and hard disk, for example).
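The bad-block handling mentioned above (up to ~2% of a NAND chip can be bad from the factory) boils down to building a logical-to-physical map over the good blocks. A toy sketch, with made-up names, purely illustrative:

```python
def usable_blocks(total_blocks, factory_bad_blocks):
    """NAND-handling code must skip factory-marked bad blocks.
    Return the logical->physical mapping: logical block i lives in
    the i-th good physical block."""
    bad = set(factory_bad_blocks)
    return [b for b in range(total_blocks) if b not in bad]
```

Real drivers also have to handle blocks that go bad *after* shipping, which is why the spare area carries per-block status markers.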

Does anyone know why USB and IDE flash drives don't max out their bus bandwidths? I realize that a flash chip can only go so fast, but why don't they just parallel as many as needed to get the desired bandwidth?
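The "parallel as many as needed" idea in that question is basically striping, the same trick RAID 0 uses: interleave pages across chips so several can be written at once. A toy sketch (hypothetical names, nothing vendor-specific):

```python
def stripe_pages(data, n_chips, page_size):
    """Split data into pages and deal them round-robin across n chips,
    so an n-chip controller could write n pages concurrently and get
    roughly n times one chip's bandwidth (bus permitting)."""
    pages = [data[i:i + page_size] for i in range(0, len(data), page_size)]
    chips = [[] for _ in range(n_chips)]
    for i, page in enumerate(pages):
        chips[i % n_chips].append(page)
    return chips
```

In practice the limits tend to be the controller and the flash bus, not the idea itself, which is presumably why cheap sticks don't bother.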

It's more like 1,000,000 writes, but your point is taken. Perhaps the driver takes this into account--store many small and frequent temporary files such as browser cache files into RAM rather than flash, then dump them all to flash or disk rarely, but this implies a lot of intelligence on the part of the driver.

According to the PC Mag link from the article, only Vista has the correct driver to use this drive.

It sounds like a nice innovation. Now to get from hybrid drives to biofuel laptops that run 8 hours on a thimble of ethanol;)

It's more like 1,000,000 writes, but your point is taken. Perhaps the driver takes this into account--store many small and frequent temporary files such as browser cache files into RAM rather than flash, then dump them all to flash or disk rarely, but this implies a lot of intelligence on the part of the driver.

Anyone interested in how this is handled, go look up MS ReadyBoost/ReadyDrive/Superfetch technologies.

This was an early issue with the ReadyBoost technology in using USB Flash drives and MS designed

As someone else has already stated, yes, flash is still limited, but not as much as it used to be. These hard drives are aimed at laptops, and I believe Vista requires them for a machine to be considered "Designed for Vista" rather than just Vista Ready.

The point of the flash is to provide a nonvolatile write cache; the drive spins up to write the queued data after the cache is filled. This is supposed to have a significant effect on the battery life of laptops.

They have limited cycles per sector, but the drives automagically allocate writes over the least-used sectors. In practice, a modern flash drive should have at least the same lifespan as a spinning disk, if not longer.
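The "automagic" spreading of writes is wear leveling: always pick the block with the fewest erases. A toy allocator illustrating the idea (class and method names are made up for this sketch):

```python
class WearLeveler:
    """Toy wear-leveling allocator: every write goes to the
    least-erased block, so wear spreads evenly across the device."""

    def __init__(self, n_blocks):
        self.erase_counts = [0] * n_blocks

    def pick_block(self):
        """Choose the block with the fewest erases and charge it one cycle."""
        block = min(range(len(self.erase_counts)),
                    key=self.erase_counts.__getitem__)
        self.erase_counts[block] += 1
        return block
```

After any number of writes, no block is more than one erase ahead of any other, which is what turns a per-sector cycle limit into a whole-device lifetime.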

Perhaps the difference between NAND flash parts rated in erase cycles and NAND flash parts rated in MTBF or MTTF has something to do with abstractions inside the controller. Flash "chips" commonly use a bare-bones interface like that of SmartMedia, while flash "drives" have an ATA, USB, or SD controller in front of the flash that performs error correction and wear leveling. I'm pretty sure that's where the 5% difference between a 512 mebibyte underlying capacity and a 512 megabyte actual capacity comes from.

In practice, a modern flash drive should have at least the same lifespan as a spinning disk if not longer.

Only for lightly used flash disks. In practice, you cannot "automagically" allocate writes over the least-used sectors because modifications to data are not distributed evenly across all of the flash chips. Your filesystem metadata will tend to be clustered in the first flash chip, which will result in much faster wear, as it is the most frequently modified data on the disk. As a result, the life e

modifications to data are not distributed evenly across all of the flash chips. Your filesystem metadata will tend to be clustered in the first flash chip, which will result in much faster wear, as it is the most frequently modified data on the disk.

Why aren't modifications to data evenly distributed over the flash? That's much of the advancement in modern flash controllers, distributing those writes. Just because your metadata is initially written in the first chip doesn't mean the updates have to be wri

Hmph. Apparently current controllers do multi-chip wear leveling. I did not know that. Regardless, the failure figures I was giving were based on a theoretical perfect 100% wear-leveling scheme across the entire set of flash parts. Any less-than-perfect organization that results in hot spots on the flash would fail even sooner.

That would fail with continuous use of less than 100k per second. That's remarkably close to the amount of paging that the machine in question does, not counting any other I/O.

If your server is paging that much, why haven't you added more RAM?

All in all, I wouldn't touch a drive with this tech unless I could permanently disable it with a jumper if the flash parts started acting up. Even then, I wouldn't pay a penny more than I'd pay for a drive without this tech. It's a lot of hype for no real purpose. J

I suspect that the intelligence built into the drive has the capability of detecting flash sectors that have gone bad, much like an ordinary hard drive can detect bad magnetic sectors. So, I think that over time one will see that the flash's capacity decreases, but is mostly still available during the life of the drive.

So, I think that over time one will see that the flash's capacity decreases, but is mostly still available during the life of the drive.

It's also possible that they put some extra flash on there with some backup blocks, just as hard drive capacity is actually greater than what is reported, but some of that space is saved over for bad block relocation (in addition to simply being able to lock out bad blocks, which is what happens when you run out of relocation blocks.)
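That relocation scheme (spare blocks first, then plain lock-out once the spares run dry) can be sketched as a small remapping table. Everything here is hypothetical and illustrative, not any particular drive's firmware:

```python
class BlockRemapper:
    """When a block goes bad, transparently remap it to a reserved
    spare; once the spares are exhausted, the block is simply locked
    out and capacity shrinks."""

    def __init__(self, spares):
        self.spares = list(spares)  # physical block numbers held in reserve
        self.remap = {}             # bad logical block -> spare physical block

    def resolve(self, block):
        """Translate a logical block to the physical block actually used."""
        return self.remap.get(block, block)

    def mark_bad(self, block):
        """Retire a block. Returns False when no spares remain."""
        if not self.spares:
            return False
        self.remap[block] = self.spares.pop(0)
        return True
```

This is exactly analogous to the reallocated-sector mechanism in magnetic drives, which is why the parent's guess is plausible.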

I worked at Trimble Navigation, radio group, in Sunnyvale, CA in the summer of 2000. One of my projects was stressing flash EEPROM in the embedded systems we were developing, using rapid thermal cycling, and finding ways to exceed and recover flash beyond the manufacturer's rated duty-cycle spec. Yes, we all know this is similar to MTBF calcs and not the same as real-world failure modes (*cough* Google's hard drive paper). The funny thing was, flash rated at 10^5

My god, you fool. It's just like gasoline-electric cars, there are no real pure-electric cars out at the moment because they have too many flaws, but hybrids are on the rise because it has a little of the electric advantage with the proven performance of gasoline.

It depends on the chip. Your average chip (like the 16F88) has a 100,000 write cycle rating for its internal Flash. The SPI Flash chip M25P*0 has a 1,000,000 write lifetime. (By memory - I could be off by 10x on the `88.)

Now, since this has come up before, that doesn't mean that your drive will work perfectly until it hits 1,000,000 writes and then mysteriously stop working with a blinking red LED on the top. What that means is that statistically speaking, there's a good chance that most of your chip will s

I'm certain that hard drives will slowly go away to be replaced with Flash ram devices. As the price drops it will happen.

Reasons?

1. Hard drive reliability - See the Security Now podcast or read Google's paper about hard drive reliability. The manufacturers are lying BIG time about how bad it's gotten. And SMART is a steaming pile of nothingness that can be, and often is, wildly inaccurate.

2. Latency (not speed) is so much better than hard drives.

3. Power and heat - Flash memory does not generate nearly as much heat or draw as much power. Plus we can expect densities to get higher, so the footprint will probably be smaller than hard drives'.

We've already seen it in handhelds. It's moving to laptops (Toshiba and Fujitsu are already selling flash-based laptops).

Well yes, IF flash RAM can overcome its shortcomings AND its cost, which is extreme.

You can get 750 gigs of HD for $350, probably less now; how much would that cost in flash?

And unfortunately flash is about as reliable as HDs right now for long-term use. Even though it is not mechanical, it still wears out and is subject to out-of-box failures. (Memory manufacturing is about as poor as HD manufacturing is these days, based on the number of bad flash modules I've run into.)

And... it is so very very slow.

So yes, it would be GREAT to get rid of the bulky, loud, power-hungry, slow-access, mechanical HD of the last century, but... there is really nothing even close on the horizon right now :( Sadly, flash just isn't practical at all in its current form for anything OTHER than small devices that only need a small number of gigs in a tiny form factor.

"So yes, it would be GREAT to get rid of the bulky, loud, power-hungry, slow-access, mechanical HD of the last century, but... there is really nothing even close on the horizon right now :( Sadly, flash just isn't practical at all in its current form for anything OTHER than small devices that only need a small number of gigs in a tiny form factor." In a couple of gigs you can easily store an operating system, many applications, and many documents. For company PCs it would make sense to just load the OS and app

You can get 750 gigs of HD for $350, probably less now; how much would that cost in flash?

For desktop-replacement applications that need more than half a terabyte, such as video editing, hard drives are probably the best option. But with fully-packaged flash retailing near $10 per GB, a laptop with a flash drive (imagine an enclosure the size of a 2.5" hard drive containing 20 miniSD cards in a RAID 5) can do a lot of things surprisingly well.

Sadly, flash just isn't practical at all in its current form for anything OTHER than small devices that only need a small number of gigs in a tiny form factor.

Define "small number of gig" in terms of applications that laptop owners would want to run and which wouldn't work with a "small number of gig".

Sadly, flash just isn't practical at all in its current form for anything OTHER than small devices that only need a small number of gigs in a tiny form factor.

In other words, such as on (thin-and-light or ultraportable) laptops. Although I agree it has no chance of replacing big discs on fileservers, desktops, or PVRs, I think it does have a decent chance of replacing them in portable machines.

Hard drives aren't going anywhere. The more likely scenario is that both flash and hard drives will coexist to exploit the benefits of the strengths of each medium. With a hard drive you can get high transfer rate and high storage capacity. With flash you get low latency and low power consumption.

In fact it's already happening. Windows has that ReadyBoost stuff and Samsung is developing these drives. All that's really happened is we've added another type of memory to the hierarchy: registers, cache, RAM,

The idea is that the OS handles this and automatically caches frequently-used files. But it's also used as a delayed write cache to keep you from having to spin up your hard drive due to infrequent writes (like log entries.)

I think you're conflating two different (but related) technologies: the former function is designed to be used on separate flash disks that are about the same size as the system RAM; the latter uses 128-256 MB and is what these "hybrid hard drives" are for.

Anyway, here's how it is: Vista has three technologies, SuperFetch, ReadyBoost, and ReadyDrive.

SuperFetch preloads frequently-used stuff into RAM. It does not require any extra hardware, apparently. ReadyBoost basically acts as a cache between RAM and the paging file on the hard disk, so stuff that gets swapped out can get swapped in a bit quicker. It uses external (i.e., not part of a hybrid hard drive) flash, such as on a

The flash in the hybrid drives won't be used as that kind of cache (you're thinking of Vista's ReadyBoost).

This flash will be a write cache for the hard drive so that the hard drive doesn't need to spin up as often (this will potentially enhance your battery life). As you make changes to your data, it will be written to the cache and then flushed to the drive (a) when the cache is full or (b) when the drive is spun up for some other reason (a read, for example). Presumably, if the drive is already spun up, the flash won't be used at all and data will go straight to the disk.
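The two flush triggers described above, (a) cache full and (b) the disk spins up anyway, can be modeled in a few lines. This is a toy model with made-up names, not any real firmware:

```python
class SpinAwareCache:
    """Toy model of a hybrid drive's flash write cache: absorb writes
    while the disk is parked; flush when the cache fills or when a read
    forces a spin-up anyway; write through while already spinning."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pending = []      # (lba, data) writes held in flash
        self.disk = {}         # stand-in for the platters
        self.spinning = False

    def write(self, lba, data):
        if self.spinning:
            self.disk[lba] = data          # disk already up: write through
            return
        self.pending.append((lba, data))
        if len(self.pending) >= self.capacity:
            self._spin_up_and_flush()      # trigger (a): cache full

    def read(self, lba):
        self._spin_up_and_flush()          # trigger (b): piggyback the flush
        return self.disk.get(lba)

    def _spin_up_and_flush(self):
        self.spinning = True
        for lba, data in self.pending:
            self.disk[lba] = data
        self.pending = []
```

The battery win comes from how long the disk stays parked between flushes, which is why infrequent small writes (log entries, etc.) are the headline case.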

Who invented the idea of an integrated disk read cache? Nobody knows, because it's such a trivial idea that claiming credit for it just makes you look silly -- and desperate. If Microsoft is so stretched for innovation that they have to go around demanding props for "hey, let's write these 100 bytes to flash instead of spinning up the drive," then they are in really bad shape. My advice is to jump ship now before the MS Titanic hits a penguin.

If Microsoft is so stretched for innovation that they have to go around demanding props for "hey, let's write these 100 bytes to flash instead of spinning up the drive," then they are in really bad shape. My advice is to jump ship now before the MS Titanic hits a penguin.

Go read about ReadyDrive and ReadyBoost and SuperFetch before you make such a crazy assumption.

Do you really think HD manufacturers would be working with MS on such simplistic concepts if it were merely just a generic cache concept? I would

The press release [samsung.com] from Samsung is dated April 2005. You can read more technical details there without all the annoying popups on ExtremeTech. Looks like the drivers which give the power savings were written by Microsoft. Planned ship date was late 2006, so they didn't fall too far behind.

I would like to see a battery-backed RAM drive with FLASH as well. I think that for journaling filesystems it would be great for performance since the journal could be written into RAM and then later written to disk. The drawback of the RAM based drives I saw was that the battery is only good for a limited amount of time. The way to fix it is to provide less battery time but use that time to write the RAM out to FLASH when the power is cut. The advantage of combining RAM and FLASH is that RAM is very fast to write to and has an unlimited number of write cycles. Of course, I'd really like to see one of these new memory technologies come out that combines the best of DRAM and FLASH.

Another problem with battery-backed RAM is that, without the overhead of a quite complex refresh circuit, it only works with SRAM, which is far more expensive and generally only available in lower densities.

There are RAM drives available [newegg.com] that use DRAM, but due to the refresh circuitry and whatnot it takes a bit of power, so the battery will only supply power to the RAM for a limited amount of time.
Also, if the flash were removable (e.g., an SD card or CompactFlash), then it could be possible to move it to another machine.

Another problem with battery-backed RAM is that, without the overhead of a quite complex refresh circuit, it only works with SRAM, which is far more expensive and generally only available in lower densities.

There is PSRAM [wikipedia.org], a form of DRAM that has the refresh circuitry on the die. It can achieve densities comparable to DRAM because it is DRAM.

No. Since it is a journaling filesystem, as long as the journal is intact, all pending writes will be completed the next time the system is powered up. Journaling filesystems typically write data out to the journal first, before writing it again to the proper location on the hard drive and updating all the inodes. The idea is that if power is lost that fsck will play back the journal and complete all pending operations. The problem with journals is that two write operations are required. Some filesyste
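The write-twice behavior described above, journal first, final location second, with replay on recovery, can be sketched as a minimal write-ahead journal. All names here are invented for the illustration:

```python
class JournaledStore:
    """Minimal write-ahead journal: every write hits the journal before
    its proper location; after a crash, replaying the journal completes
    any pending operations."""

    def __init__(self):
        self.journal = []   # ordered (key, value) records, written first
        self.store = {}     # the "proper location" on disk

    def write(self, key, value):
        self.journal.append((key, value))   # write #1: journal entry
        self.store[key] = value             # write #2: final location
        self.journal.clear()                # committed: entry no longer needed

    def crash_write(self, key, value):
        """Simulate power loss after the journal write, before write #2."""
        self.journal.append((key, value))

    def recover(self):
        """What fsck-style replay does at next power-up."""
        for key, value in self.journal:
            self.store[key] = value
        self.journal.clear()
```

The double write is exactly the cost the parent mentions, and it is why putting the journal on fast nonvolatile RAM or flash is attractive.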

Rather than ship hybrid drives now with flash chips good for a few thousand cycles, why not wait until the end of this year and ship them with Intel PRAM or equivalent? PRAM is expected to be faster, non-volatile, and handle many times more R/W cycles. Or is the lifetime of the rest of the drive no longer than for the flash itself? This seems to me to be just a bit ahead of its time, and has the potential for either problems, or performance degradation, over a relatively short timespan.

Some company waited for 802.11a chips instead of releasing 802.11b cards. That company is not in wireless business anymore. Sometimes it pays off to be first with an inferior product and then offer incremental compatible updates, rather than wait for the perfect solution and have your product perceived as offering little difference at the price of incompatibility with competing products already on the market.

Also, why do the linked article and Slashdot dismiss these drives as having nothing to do with Vista, when in fact they were DESIGNED specifically to be used with Vista and employ MS Vista technology in the hardware?

"Optimized to work in Windows Vista-capable notebook PCs, Samsung's MH80 is a 2.5-inch hybrid hard drive with 128 or 256MB of flash memory. It combines a hard disk drive with a OneNAND Flash cache and Microsoft's ReadyDrive software, offering faster boot and resume times, increased battery life and greater reliability compared to traditional magnetic media technology, the spokesperson claimed. "

Sorry, Slashdot, but these drives are designed for Vista. Sure, they may offer performance improvements in other OSes, but they will see the majority of performance gains in Vista. Also, even when used with other OSes, the way the drives internally manage the flash caching is from MS, so thank them the next time you boot your Linux laptop with one of these drives.

As for the other questions people have about the limited write times of flash RAM, etc., go look up MS SuperFetch technology, which specifically addresses these issues by writing to various locations in the flash space; this is also how these drives work to ensure the same bits don't always get used, giving the flash cache an equivalent or greater lifetime than the HD platters.

I know this is Slashdot, but someone could get the facts right for once, right?

Totally agree with you! I can't believe that the editors didn't catch this amazing amount of misinformation before posting it. This points out the root problem with internet media: lack of editorial input means that fact checking just doesn't happen. Media without facts is a sign of the apocalypse!

So what? Most PC hardware is designed specifically for some Microsoft software (DOS, Windows 95, Windows XP) and has been for the last 20 years. Doesn't stop you accessing that technology with a Linux kernel.

and has been for the last 20 years. Doesn't stop you accessing that technology with a Linux kernel.

Very true, but for Linux to take full advantage of caching in a hybrid drive, it needs to also alter the memory management, caching, and paging techniques Linux uses.

The drive is going to transparently provide a boost in performance for any OS, but when used with Vista, the direct management of when and for what to use the hybrid cache for is something the OS is already designed to do. For example Vista knows

I personally wouldn't get one of these. I use my laptop for coding and DVD watching. For me a full solid state flash drive [memorydepot.com] makes better sense because it uses less battery life and is probably a little quicker and quieter. Of course, if you need more than 8GB of storage, the price is a little prohibitive. That's why I use SVN and store all my code at home.