
Lucas123 writes "Intel is planning to launch its native flash memory module, code-named Braidwood, in the first or second quarter of 2010. The inexpensive NAND flash will reside directly on a computer's motherboard as a cache for all I/O, offering performance increases and other benefits similar to those of adding a solid-state drive to the system. A new report states that by achieving SSD performance without the high cost, Braidwood will essentially erode the SSD market, which, ironically, includes Intel's two popular SSD models. 'Intel has got a very good [SSD] product. But, they view additional layers of NAND technology in PCs as inevitable. They don't think SSDs are likely to take over 100% of the PC market, but they do think Braidwood could find itself in 100% of PCs,' the report's author said."

Given similar performance but a slightly higher price, I would prefer the SSD. I can't take the flash with me to the next PC the way I can with an SSD. Hard disks have a higher life expectancy than mainboards (I usually find some good use for old HDs; I never did for old mainboards). Unless the SSD costs 2-3 times as much as the flash on the mainboard, I believe SSDs will still be used. But maybe this will lead to lower SSD prices.

"(...)if it becomes commonplace, most PCs will eventually have it (...)"

Which opens an interesting hole. That flash on the motherboard will hold data to speed up system startup - that is, the first N files opened. If the flash is big enough, it will also hold quite a lot of user documents. Unless documents can be marked "not to be cached", it will add an extra headache when getting rid of old systems. We already have this problem with 419ers buying old PCs and smartphones, gangs dumpster diving, etc.

Also, try explaining to customers that they need to erase flash they cannot see in the system (and will most probably not even know about!) or destroy the chip before throwing the old system away. It's hard enough with HDDs, and those are big, visible and have been around for ages.

Why not offer a simple tool? I'd name it "Last Shutdown", and it would be kind of like saying goodbye to your old computer (in style).

It would first ask if you have saved all your personal data outside the computer and/or removed that storage from the system. Then it would go and:

- safely delete all the hard drives
- safely delete all the flash storage/caches
- equalize all other residues
- safely delete all RAM content
- empty all caches
- etc.

...all while showing a nice animation fitting the theme.
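The hard-drive step of such a tool is genuinely simple; here's a rough sketch for a Unix-like system (the device path is hypothetical, and actually running this destroys data):

#include <stdio.h>
#include <string.h>

int main(void) {
    /* Hypothetical target disk; a real tool would enumerate devices. */
    FILE *disk = fopen("/dev/sdX", "wb");
    if (!disk) { perror("fopen"); return 1; }

    char zeros[4096];
    memset(zeros, 0, sizeof zeros);

    /* Keep writing zeros until the end of the device is hit. */
    while (fwrite(zeros, 1, sizeof zeros, disk) == sizeof zeros)
        ;

    fclose(disk);
    return 0;
}

The hard part is exactly what the parent describes: flash the OS never exposes as a device can't be reached by any software wipe, no matter how nice the animation.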

Seems to me that this article is a thinly-veiled marketing trick. Somebody publishes a paper, "Will Intel product A beat Intel product B?", and presto, we've got buzz about product A which doesn't even come close to competing with product B (which is a market leader, dontchaknow), and increased buzz about product B. Then, people chime in with their arguments and counterarguments about which product is better... and Intel wins no matter what. Both product lines are probably going to succeed independently of one another.

That said, Braidwood sounds awesome to me, especially because my servers talk to a storage box over NFS, and fast onboard cache sounds great. But I want fast local storage too, and 16GB is nothing, so I want large-capacity SSDs. I really don't see these as competing products. This is just a slashvertisement. Move along, folks.

My concern about this product is that flash degrades with each write cycle, so the smaller the disk, the faster you wear through it. Since this sounds like just a small buffer, I'd have concerns about it having a short lifespan.

That said - I'm more worried about exhausted flash on the motherboard. Have all avenues actually been considered here, or is this a built-in best-before date for new motherboards?

MLC still hasn't improved its lower durability bound: 100K erase cycles. (Flash companies often advertise only the upper bound: 1-2.5 million erase cycles; SLC is at about 1 million cycles.) And 100K erases for the flash - especially if one puts a filesystem's journal there - is really not that much, since the journal has to be updated often while the operation itself might not even reach the disk. E.g., on many systems this sequence will touch only the FS journal: create a temp file, work with it for a spell, then delete it.

Actually, no. It is going to store, fairly permanently, only some files used to speed up system processing. There won't be any journals on it, and the filesystem will be highly optimised for this kind of usage - that's from a press release I read somewhere. So even MLC will last a long time, as writes will be very limited. The only issue is that, to drive costs down, the controller is also going to be scaled down, so no great magic as with SSDs. If somebody hacks that flash to use it as an HDD, it will wear quite badly and quickly.

If a 16GB Braidwood used a revolving cache, where any data not already in flash was read from disk and written over the oldest data in flash, then you would see very few erase cycles per day per block. You would need to do more than 16GB of disk I/O to use up even one of the 100K erase cycles.

With intelligent cache techniques you should be able to get the erase-cycle count for each block very low.
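A quick back-of-the-envelope version of that math (every number here is an assumption for illustration, not a Braidwood spec):

#include <stdio.h>

int main(void) {
    double cache_gb    = 16.0;     /* assumed cache capacity            */
    double io_gb_day   = 50.0;     /* assumed buffered disk I/O per day */
    double erase_limit = 100000.0; /* erase cycles per block, SLC-class */

    /* A revolving cache spreads writes evenly: each block is erased
     * once per full pass through the cache. */
    double erases_per_day = io_gb_day / cache_gb;
    double years = erase_limit / erases_per_day / 365.0;

    printf("%.2f erases per block per day -> roughly %.0f years\n",
           erases_per_day, years);
    return 0;
}

Even at an assumed 50GB of buffered I/O a day, the flash would outlive any plausible motherboard.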

It's *cache*. It's not meant to be moved, and it doesn't prevent you from moving the hard drive. Nor does it prevent you from using an SSD, it just means the performance reasons for using an SSD may get significantly reduced.

Battery backed up (BBU) RAID controllers with volatile RAM cache are very common in the server market because of the huge performance increase of small random writes.

The RAM cache lets the controller cache writes and then send them to the disk in batches, performing write combining so multiple small writes get turned into larger ones, reducing the number of disk seeks required to store the data. Also think of the case where your controller has a 512MB cache and you write 200MB to disk. The controller can say OK as soon as the data is written to RAM (a fraction of a second), whereas your typical fast disk these days will take 2 seconds.

Without a battery to back up the volatile RAM cache, you could lose a lot of data if the server lost power, but with it, you can go at least a couple of days without losing data.

So now, let's replace that 512MB BBU RAM cache with a 16GB SLC SSD. You won't quite get the burst speed of the BBU RAM controller, but in sustained server loads performance should be a lot better. The SSD will also be able to store a lot more data for reads. If the controller is smart and only uses the SSD for caching random read patterns, you could get close to SSD performance for a lot of workloads but still have 1TB of disk storage.
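A toy illustration of the write-combining idea described above (hypothetical structures, nothing like real controller firmware; it assumes the request queue is already sorted by address):

#include <stdio.h>

struct wreq { long lba; int sectors; };   /* a pending write request */

/* Merge adjacent requests so contiguous small writes become one
 * larger write, saving disk seeks. */
static int combine(const struct wreq *in, int n, struct wreq *out) {
    int m = 0;
    for (int i = 0; i < n; i++) {
        if (m > 0 && out[m - 1].lba + out[m - 1].sectors == in[i].lba)
            out[m - 1].sectors += in[i].sectors;  /* extend previous  */
        else
            out[m++] = in[i];                     /* start a new one  */
    }
    return m;
}

int main(void) {
    struct wreq q[] = { {100, 8}, {108, 8}, {116, 8}, {500, 8} };
    struct wreq merged[4];
    int m = combine(q, 4, merged);
    printf("4 small writes became %d disk operations\n", m); /* prints 2 */
    return 0;
}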

Something just occurred to me - this is probably why WD and Seagate aren't worried about SSDs. They know they can just slam a crapload of cache onto their HDDs to vastly improve performance, and they already have the capacity advantage.

What if you take it as "cache" that survives reboots, but back it up to a more transportable device when you really want data persistence? It will probably be pretty fast (maybe faster than normal SSDs, at least in terms of bus connection), and holding, e.g., the most requested files, database slaves for fast queries, swap/temp partitions or even the OS could improve typical PC performance a lot.

The big thing I see here isn't surviving intentional reboots for efficiency - i.e. stuff cached pre-boot still being available without a spinning-disk read post-boot. For that matter, I'd be wary of such a feature (it would need to be well implemented and very well tested to deal with odd circumstances like disk connections being physically rearranged between shutdown and restart).

The two big advantages here are standard cache/buffer behaviour during active system use, and written data surviving an unexpected power loss.

Didn't they already try this with their Turbo Memory stuff? I seem to recall the general consensus [anandtech.com] being that it doesn't really offer any remarkable benefits. Regardless of how fast the cache is, you eventually run apps or open files that can't live on it 24x7, and you revert to magnetic HD performance limits. This might improve battery life and performance for some apps, but it's not going to give you the across-the-board speed and battery life boosts that SSDs do. While this would certainly result in a better experience for the average computer user, I feel like it's going to be relegated to a middle ground between HDDs and SSDs, augmenting the low end but by no means obsoleting the high end.

I would expect they'd use some sort of slot, something like this [scan.co.uk]. Motherboard manufacturers aren't exactly going to be thrilled at the idea of putting yet more expensive components on there, but they might be happy to hook up a small ZIF socket like some of them do with CF.

Intel actually had some weird ZIF-connected SSDs up for preorder a while ago, but they appear to have disappeared.

Capacity is still an issue, though. Although SSDs offer a lower cost per transaction and provide a real benefit in enterprise storage, enterprises still have massive amounts of HDDs on the lower storage tier. Outside of work, where I would be classed as a standard consumer, it would cost me far, far too much to buy enough SSDs to transfer my 4TB of data from my HDDs.

I think people are going to have a lot more than that when recording HDTV with a Tivo-alike device. 1TB works out to about 100-ish hours (yes, I'm rounding heavily) of HD video. Tivo certainly has users who record & keep that much video.

A friend of mine studies architecture. He stores several TB in DSLR photos and renderings on his desktop machine. Another friend of mine stores all his audio CDs, DVDs and BluRays and lots of TV recordings on a little server for his HTPC, he recently reached 3TB.

They are not "standard consumers", but they are not hard-core nerds, either. Storage is so cheap to acquire (and so easy to use) that people can afford to never delete anything again. Whether that is sensible is a whole other point. But the result is that storage demands just keep growing.

Everyone I know has at least a few TB of data on burnt CDs and DVDs. It would be nice to be able to consolidate your multimedia stuff into one storage device.
I'm running 8 terabytes of data on ten 1TB hard drives in an ATA-over-Ethernet [sourceforge.net] setup [freshmeat.net] in RAID 6. So yeah, I'm probably not your average consumer, but being able to access thousands of hours of movies, TV and home video without having to pay Netflix or watch the ads on Hulu is pretty nice.

Capacity is still an issue though. [..] it would cost me far, far too much to buy enough SSDs to transfer my 4TB of data from my HDDs.

Go back and read what he said. It's clear that he was talking about the near-to-middle-term future, not the current situation:

Sooner or later, no moving parts beats moving parts. The magnetic disk makers have done an amazing job so far, but eventually they're going to lose out to solid-state.

Flash memory is at present growing in capacity much faster than magnetic drives. (Actually, it's growing at the rate the latter managed during the 1990s and early 2000s.) Of course, it's still got a long way to go to catch up, and - like hard drives - it's not guaranteed to keep that rate of growth forever. Still, the current shape of solid state's curve has it intersecting the hard-drive curve eventually.

I don't think SSDs have that long a way to go to catch up. SSDs are at 512GB in a 2.5" box; hard disks are at 640GB in a 2.5" box. That's only a small difference, and until last week the SSD was in the lead. The only reason SSDs trail in the 3.5" drive stakes is the same reason they trail on price: not enough factories have been built yet. Fix that (it probably won't take long), and SSDs will very rapidly be competing with HDDs capacity-wise.

Flash memory is at present growing in capacity much faster than magnetic drives.

If magnetic drives really push capacity growth, that might not hold; magnetic drives have also shrunk in size and increased rotational speeds to decrease latency over that time. If they simply gave up the performance race and went for vast capacity, they could move back to 5.25" full-height disks. Can you imagine the amount of data you could stick on that surface area with modern technology? I wouldn't be surprised if capacities jumped by an order of magnitude.

The owners of the last few systems I worked on for 'standard consumers' were all quite upset at being forced into purchasing a 'way too big' 300GB hard drive, simply because any drive under 100GB is both very hard to find and likely expensive by comparison. 500GB was a waste to them, when they only sync their camera once a month and have Office and a couple of games installed.

Outside of work where I would be classed as a standard consumer, it would cost me far, far too much to buy enough SSDs to transfer my 4TB of data from my HDDs.

You are not allowed to use "standard consumer" and "4TB of data" in the same sentence :P Careful, they might swoop in and hole punch a warning into your geek card!

Anything >= 2TB is far, far above the standard consumer. Even 1TB is far above the average consumer, although 1TB still falls well within the power user and average gamer groups.

For customers who only want/need 100GB of storage, SSDs are the way to go. They do currently cost a lot more than rotating storage, but an SSD makes a HUGE difference in the apparent performance of many day-to-day tasks.

A good 120GB SSD like the OCZ Agility costs about $300, compared to $40 for a 160GB SATA drive, so the price premium is huge.

BTW - I'm not sure why you say drives smaller than 300GB are hard to find - or why your customers complain about it. NewEgg has a ton of drives smaller than that.

Perhaps. But they still have a long way to go on $/GB. Just checking my local price guide, it's $0.09/GB for 1.5TB HDDs and $3.75/GB for the cheapest SSD. But yeah, booting off an SSD and keeping a HDD for media - sure.

Now if only they could start following the server-side folks and place an internal USB connector inside. Then MS and others could give us the OS on its own USB drive (read-only), we could use the hard drive for updates and programs, and we could enhance security as well...

This used to be a huge PITA before GRUB supported UUIDs as groot values. It was an even bigger one before Linux would do it. On Windows you need special tricks, because Windows doesn't like to be installed there; it works on some netbooks with "special" BIOS. I think the specialness is at least partly from their EFIness but I'm just kind of firing in the dark here. I have a 4G Surf and an Aspire One, both will allegedly play this trick. Actually, I have a DT Research DT366 which seems to have some sort of USB disk emulation mode also.

On Windows you need special tricks, because Windows doesn't like to be installed there; it works on some netbooks with "special" BIOS. I think the specialness is at least partly from their EFIness but I'm just kind of firing in the dark here. I have a 4G Surf and an Aspire One, both will allegedly play this trick. Actually, I have a DT Research DT366 which seems to have some sort of USB disk emulation mode also.

Basically, it just needs to appear to the OS as a "fixed disk" rather than a "removable disk".

This is why I keep hoping someone will produce a complete computer architecture designed to be virtualized, so that I can genuinely run Windows and Linux (for example) and have both have access to the hardware. I'm tired of deciding who can access the video card. On a system with unified memory it seems especially silly that I can't do this gracefully.

Why a USB connector? That causes the same problem as making SSD cards use the SATA interface - the serial interface becomes slower than the things it is connected to.

What I would like to see is a set of sockets on the motherboard, mapped into the main memory address space (not PCI), a physical switch on the board to make them read only and software in the BIOS to make them look like a bootable disk.

Four sockets with 16 or 32G in each would give you enough space to store the entire OS. I don't know how Windows would handle it, but in a Unix or Linux based system it would be fairly easy to mount the devices as read only partitions and map them into the filesystem. This would be ideal for a server system, mapping the entire OS into the main memory address space and making it read only.

In fact, all the BIOS would need to do is make the first 100MB visible as a boot partition and leave the OS to handle the rest.
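On the Unix side, mapping such a device into the filesystem read-only would be nearly a one-liner; a minimal sketch, assuming the BIOS exposed each socket as a block device (the device name and filesystem type here are made up for illustration):

#include <stdio.h>
#include <sys/mount.h>

int main(void) {
    /* Mount the hypothetical flash socket read-only. MS_RDONLY rejects
     * writes at the VFS level, matching the physical read-only switch. */
    if (mount("/dev/flashsock0", "/usr", "ext2", MS_RDONLY, NULL) != 0) {
        perror("mount");
        return 1;
    }
    return 0;
}

(Needs root, of course. Windows is the harder sell, as noted above.)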

What I would like to see is a set of sockets on the motherboard, mapped into the main memory address space (not PCI), a physical switch on the board to make them read only and software in the BIOS to make them look like a bootable disk.

The one issue here is address space. Unless people do a wholesale migration to 64-bit, it won't be possible to simply map the address space of such a device into memory.

Sounds like a good plan. Throw cheap battery-backed memory, 4-16GB, onboard to act as a transparent buffer between the hard drive(s) and the system.
Fast I/O is ensured, as most operations happen in memory, and data loss isn't an issue because the memory is battery-backed.
RAID cards have done this for ages, but it's becoming a real option for desktops as memory prices keep declining.
16GB might be overkill for most purposes; you could get away with 2GB if the system is used only for low-power tasks like surfing and email.

I agree, but why would Intel want to use flash memory for this? RAM is faster, can handle a LOT more read/write cycles, and could be backed up by a small battery in the case of short power outages (or maybe a battery big enough to run the hard drive long enough to flush the write buffer, as others have said).

This is essentially a cache, which means it's going to get a lot of reads and writes. Under those circumstances, the flash memory's going to wear out relatively quickly and unless it's easily replaceable it means everyone's going to need to buy new motherboards every year. How could forcing people to replace motherboards annually possibly benefit Intel? Oh, wait...

It's called planned obsolescence. Given the amount of read/write traffic from all that I/O, and the fact that all flash memory is limited to a certain number of cycles, integrating this into the motherboard means the motherboard has an expiration date they can predict and design around. In such a situation the flash will mostly last about a year or two.

Speed has nothing to do with it, because you're /still/ bound by the data flush to the disk drive, which will be much slower. Data security between crashes seems to be only a marginal benefit.

This is essentially a cache, which means it's going to get a lot of reads and writes. Under those circumstances, the flash memory's going to wear out relatively quickly and unless it's easily replaceable it means everyone's going to need to buy new motherboards every year.

The SLC flash they're talking about will almost certainly last longer than the hard drives it is caching.

This is essentially a cache, which means it's going to get a lot of reads and writes.

No, it doesn't mean that. It's a disk cache, not a memory cache. Meaning, only file operations will hit it. The number of writes will be just the same as on the SSDs that millions of people already have.

It won't replace SSDs anyway. A bigger cache doesn't help much at all after a point, and RAM is getting so cheap that most systems already have plenty of file caching. What an SSD gives you is near-instant access to everything on the drive, not just whatever happens to be cached.

Sounds like a good plan. Throw cheap battery-backed memory, 4-16GB, onboard to act as a transparent buffer between the hard drive(s) and the system.

Do you mean gigabits or gigabytes? Also, 16 gigabytes of RAM right now isn't very cheap at all. The cheapest DDR2 memory I've seen is about $12.50 per gigabyte, so that's an additional $200 for 16 gigabytes. Is that a good price to pay for some potential increase in speed? IMO, that's what I'd call "extremely hard to justify" for a consumer.

RAID cards have done this for ages, but it's becoming a real option for desktops as memory prices keep declining.

Meh, even the most expensive RAID cards loaded up with tons of RAM aren't as fast as a couple of Intel SSDs right now, so why bother with the expense?

I read another report (maybe at Anandtech) of the same thing earlier this week. It was a sidenote in a motherboard preview claiming that Intel removed it after it showed no meaningful performance advantage in real use, unlike an SSD.

SLC flash memory, which the article claims Braidwood will use, is an order of magnitude or two more durable (in terms of write cycles) than MLC flash memory, which is what is used in most consumer-level devices like Intel's X25-M SSDs.

Wear-leveling and overprovisioning should ensure a long life for the memory used in a scheme like Braidwood. Intel, generally speaking, knows what they're doing in this area. Now if only I could afford one of their drives...

If the onboard flash is a cache, that means it will be used frequently, so it will wear faster.
Won't that make you more likely to corrupt your data, even if your HD is still good?

NAND flash chips are generally guaranteed for at least 100,000 erases per block. As I understand this Braidwood chip, it's a non-volatile ring buffer [wikipedia.org] for data writes. Ring buffers are the easiest thing to wear-level, meaning you can just multiply the cache capacity devoted to writes (let's say 2GB) by the longevity guarantee to get 200TB of buffered writes before any failure occurs. And not all blocks on a flash chip fail after the same number of writes; you'll just start to lose ring buffer capacity over time.

Funny - this very thing was being discussed around 1985 (I think), but using battery-backed RAM as a way to reduce boot time. The thinking was people wouldn't put up with a computer that took 30 seconds to start, and if we didn't have a 2-5 second boot time (equal to a TV), the personal computer would never fly. But since it took from 1985 (80386 chip) to 1995 (Windows 95) for a 32-bit OS to become popular, maybe 25 years is reasonable.

Or not. Man, this industry moves at a snail's pace in a lot of areas. Why do we still live with the x86 instruction set? Is "the year of UNIX" here yet?

Anyway, three competitors will emerge:

- Someone will put NAND directly on the drive, and get an instant speed improvement. All the tech sites will rave about it and it will be an instant must-have item.

- Their competitor will figure out a way to put the OS files in NAND, for fast booting, via a utility or firmware. The marketing war begins.

- The third competitor will work with Microsoft or Apple to get OS support for fast boot. Apple will get there first and you'll see a commercial on TV with the Mac guy wondering why the PC guy takes the entire commercial to wake up.

In a single drive system, the cost will be about the same. Doing it on the drive will create an instant performance boost on any machine, and well worth the estimated $10 added cost.

- Someone will put NAND directly on the drive, and get an instant speed improvement. All the tech sites will rave about it and it will be an instant must-have item.

Several manufacturers did this, but it didn't offer much benefit over the existing DRAM caches that are on the drives. Further evidence of this is that Microsoft's ReadyBoost [wikipedia.org] does this, and provides no major benefit. Bottom line: Just get more RAM in your machine, or buy a drive with a bigger cache.

- Their competitor will figure out a way to put the OS files in NAND, for fast booting, via a utility or firmware. The marketing war begins.

Already covered. Windows XP and above create a Prefetch folder that lays out the files needed during bootup in a nice contiguous block. Once you do that, putting it into NAND doesn't matter, since seek time becomes moot.

The x86 and x86-64 instruction encodings are fairly dense, which lets more instructions fit in level 1 instruction cache. Code density is the same thing that inspired ARM to invent the Thumb encoding.

Path dependence [wikipedia.org]: The x86 architecture is a known quantity with economies of scale from Intel and AMD. Case in point: Mac computers switched from IBM and Freescale CPUs to Intel CPUs in 2006 because they were cheaper for the same performance.

Braidwood, which is expected to offer anywhere from 4GB to 16GB capacity, will only raise the cost of a PC by about $10 to $20 per system, according to Jim Handy, the Objective Analysis analyst who authored the report.

When comparing that cost increase with the overall cost of a brand-new PC, it doesn't raise any red flags. Nonetheless, it must be said that, since this Braidwood technology "resides directly on the motherboard" (i.e., it's yet another component embedded in the motherboard), when it wears out you get to replace the whole board.

The buffer should obviously be on the hard disk. That way the data on the disk will always be in sync, even if there are writes buffered in the flash cache when the computer loses power. I can't see a good reason to put it on the motherboard instead. Especially as most consumer systems have exactly one HDD.

The article says that the flash buffer could work for "all system I/O". I can only think of optical disks and flash drives as possibilities other than hard disks. But optical disks are interchangeable, so they have to be reread on each use anyway, and could just as well be cached in RAM. And it makes no sense to cache flash drives in a flash cache...

I do have to question the effectiveness in multiple drive scenarios. And they talk about 4 GB of space - how do you avoid getting your page file stored on it? And how quickly will the 4 GB be worn out and read only? From the latest AnandTech article on SSDs [anandtech.com]:

Intel estimates that even if you wrote 20GB of data to your drive per day, its X25-M would be able to last you at least 5 years. Real

Is this the latest FUD? That if a company brings out a successful product that's priced cheaply it'll "erode the market"?

How did the "market" become so sacred that it must be preserved at all costs by keeping prices high? It's really funny, the crap that'll come out of an MBA's mouth. He'll be all for "free markets" until someone comes along with a better product, and then he'll start to squeal that the "market" is under siege.

It might be cheaper, but consider that flash has an upper-end write limit. No one's tested that adequately yet. It could brick your motherboard should the number of writes exceed that limit - flash has such limits and DRAM does not.

Well, hopefully there will be a BIOS option to disable this hardware in case a failure shows up. Better yet, make it removable, much like the old COAST (Cache On A Stick) modules of the first-gen Pentium days.

No matter how many layers you add or remove, there will always be a chance of data loss when the OS crashes (Win, Mac, Linux, anything), because there is a finite time before changed data is permanently stored, even on this new SSD memory. Furthermore, that time can be quite large depending on the OS and filesystem design.

Anyway, adding one more layer adds one more point of failure, so these new machines could be faster but also, for sure, a little less reliable than what we have now. What happens when the Braidwood module itself fails?

there is a finite time before changed data is permanently stored, even on this new SSD memory. Furthermore, that time can be quite large depending on the OS and filesystem design.

If you flush and sync [opengroup.org] the file in the thread that writes the file, you can be sure that "[t]he fsync() function does not return until the system has completed [writing data] or until an error is detected." By "that time", do you refer to the time that the program blocks on fsync()?

I'd answer yes, but one doesn't control the fsync behavior of every application running on his/her system, and the OS/filesystem can take a lot of time (even tens of seconds or more) before deciding to commit changes to the hard disk. Furthermore, an fsync may take seconds to complete, and disaster can strike at any time.

There was quite a commotion about those matters when somebody filed a data loss bug [launchpad.net] against the new Linux ext4 file system in January 2009. It turned out that ext3 commits changes at least every five seconds by default, while ext4's delayed allocation can postpone writes considerably longer.

one doesn't control the fsync behavior of every application running on his/her system

Users can choose applications that properly sync, or in the case of free software, they can put fflush(fp); fsync(fileno(fp)); in strategic places and recompile. Braidwood, which appears to implement a nonvolatile ring buffer for writes, should make it less painful for application developers to sync when appropriate.
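For the non-coders, here's the pattern being suggested, spelled out (a minimal sketch; the filename is made up):

#include <stdio.h>
#include <unistd.h>

int main(void) {
    FILE *fp = fopen("settings.txt", "w");
    if (!fp) { perror("fopen"); return 1; }

    fprintf(fp, "important user data\n");

    fflush(fp);                 /* stdio buffer -> kernel page cache   */
    if (fsync(fileno(fp)) != 0) /* kernel page cache -> stable storage */
        perror("fsync");
    fclose(fp);
    return 0;
}

fflush() alone only hands the data to the kernel; it's the fsync() that blocks until the drive (or, presumably, a non-volatile Braidwood buffer) reports the data safe.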

By the way, that was a sloppy application coding problem.

That, and the fact that fsync() doesn't provide any sort of reasonable performance guarantee. Some of the slowdowns of Firefox 3.x on netbooks are due to SQLite COMMITs (which call fsync()).

there is a finite time before changed data is permanently stored, even on this new SSD memory. Furthermore, that time can be quite large depending on the OS and filesystem design.

If you flush and sync [opengroup.org] the file in the thread that writes the file, you can be sure that "[t]he fsync() function does not return until the system has completed [writing data] or until an error is detected." By "that time", do you refer to the time that the program blocks on fsync()?

You can -- but that loses you all the advantages that a cache gives you. If you're going to flush/sync after every write, you might as well not have a write cache at all.

> Your OS doesn't always have time to shut down properly. Don't think anyone's fond of the idea of having their last couple of saves go poof because Windows crashed.

So, what happens if my PC crashes because of some hardware failure and I have to plug in a different HDD for some reason? Or plug the HDD into a different mainboard? All the things I thought I wrote to the disk will be gone. In fact, the file system might be inconsistent if this thing doesn't honor flush requests. But if it does honor flush requests, much of the performance benefit evaporates.

First of all, DDR RAM is not cheap (at least, not compared to NAND flash). It costs significantly more per gigabyte than even the most expensive of Intel's SSD offerings. While it should provide more theoretical throughput than any SSD, benchmarks at various places (http://techreport.com/articles.x/16255/1) haven't shown that to be significant yet, at least from the end-user perspective (some synthetic benchmarks show that RAM-based disks can be faster than SSDs, but translating that to real-world consumer usage scenarios doesn't show any tangible benefits).

DDR RAM uses a very large amount of power per stick compared to SSDs. I remember seeing the power consumption of one of the DDR2-based "volatile hard drives", and it was higher than spinning drives (at least at idle), and it wasn't particularly faster than the best of Intel's SSDs.

So it sounds like DDR RAM on board is expensive, power hungry, and doesn't provide much of a tangible benefit to consumers. Tell me again why it's a good idea?

Not really. I don't doubt that Braidwood would increase performance, since I/O - be it memory or disk - is not entirely random: there are parts of memory and disk that are more frequently accessed and therefore cacheable. Oh, BTW, by "cacheable" I mean a non-marginal performance benefit from using the cache. Even with random I/O, having a cache can increase performance a bit, but only very slightly.

Memory is usually a lot more cacheable than information on the disk, however. Caching the operating-system files and frequently used applications is where the benefit would come from.

I'm sure that would come as a great surprise to anyone who ever implemented a virtual memory system.

You should assume the word "Random" is in bold-type, then the claim makes more sense.

Virtual memory systems only work effectively to the extent that data access has 'locality of reference' - which is often, but not always, found in practice.

To my mind, the real promise of solid state is the random access. Since the earliest DP, software has had to take into account the sequential nature of access to durable storage - disk-based storage never did have a uniform access time for blocks - and this has influenced software design ever since.

1. SSDs handle random I/O extremely well compared to traditional hard disks.
2. Braidwood is essentially a small, cheap, 8-16GB flash-based cache.
3. If Braidwood is transparent to the OS, it will have a hard time guessing what to put in the cache, because a lot of the I/O on a desktop/laptop is random; and the trouble with caching the non-random part is that most OSes already cache frequently accessed parts of the disk themselves. This means the flash cache would largely duplicate work the OS is already doing.

I'm sure that would come as a great surprise to anyone who ever implemented a virtual memory system.

-jcr

You're both right.

The problem here is that "random I/O" can have at least two subtly different meanings. In the very old days they talked about random I/O as opposed to sequential (i.e., tape) I/O. In that sense, yes, random I/O is often extremely cacheable, as you say. That's why virtual memory works: system files, drivers, commonly used applications, and so forth are accessed much more often than other data.

"Random I/O" can also refer to I/O that does not follow any real pattern - ie, a 50GB database in which all records are accessed about equally as often. This kind of I/O is not really cacheable, practically speaking. Unless you can cache the entire thing.

What's the correct terminology for the second kind of random I/O? Random I/O with very low locality?

Random I/O is hard to read-cache, but it is very write-cache friendly. Modern systems already have a huge read cache, called "all unused memory". A huge nonvolatile write cache can do wonders for random write I/O. Databases and the like often write the same block multiple times in succession over a long period; only the most recent version needs to be written to disk. I/O can also be reordered to be more sequential, helping seek times (yes, drives already do this, but with 32 megs of cache versus gigs of flash). Take this further and the disk ends up seeing only a fraction of the writes the system issues.
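A sketch of that absorb-repeated-writes behaviour (a toy direct-mapped table for illustration - a real controller's placement and flushing policies would be far more involved):

#include <stdio.h>
#include <string.h>

#define SLOTS 1024
#define BLOCK 512

struct entry { long lba; char data[BLOCK]; int dirty; };
static struct entry cache[SLOTS];

/* If a block is rewritten before being flushed, the older pending
 * version is simply overwritten in the cache: the disk never sees it. */
static void cache_write(long lba, const char *block) {
    struct entry *e = &cache[lba % SLOTS];  /* toy placement policy */
    e->lba = lba;
    memcpy(e->data, block, BLOCK);
    e->dirty = 1;
}

int main(void) {
    char block[BLOCK] = "version 1";
    cache_write(42, block);
    strcpy(block, "version 2");
    cache_write(42, block);  /* absorbs the pending "version 1" */
    printf("disk will only ever see: %s\n", cache[42 % SLOTS].data);
    return 0;
}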

From TFA: "Braidwood, which is expected to offer anywhere from 4GB to 16GB capacity,..." - In what way would it even compete with the SSD market? I'll stick to my separate 250 gig SSD drives for a while longer methinks.

You lack imagination. Consider a case where terabyte SSDs are more than $500, but spinning terabyte media is less than $100. If there were a 16 gig cache sitting on the main board that provided SSD write speeds at all times, and SSD read speeds for most things (in a consumer access pattern), why would most buyers pay for a separate SSD?