
133 Comments

I must have 50 sticks of unused PC100 and PC133 SDRAM.
Something like this for old RAM would be of value (for me).
Does anyone know of an adapter that would take, let's say, 10 sticks of SDRAM and give me an IDE or USB connector?

It was on Gigabyte's site as I looked today and over the past month while making the decision to get it.

There's been a lag while retailers get rid of the v1.2 cards and Gigabyte sends out the v1.3 cards.

I just got one of the new ones and will use it to run my FTP server application. I have 14 or 16 drives (6TB) connected to the server, and previous reviews by others have pointed to the performance increase from the FTP app searching and retaining the disk locations.

Since I just got it I am not 100% sure of the reality, and the real benefit will be realized by the client seeking a file from the server. Using it for MS SQL Server is also a great idea. Other than that I haven't heard of any real-world uses; I mean, users might be able to load Doom faster, but this device seems to be a bit expensive for most.

Also, this card is bigger in area than most video cards, so watch out if your box is crammed with wires or liquid pumps and reservoirs. The logistics of getting, say, 2 video cards and the RamDisk into a mid-sized case are pretty absurd. Plus you need a fan or 2 in there to swirl around the heat generated by 3 heat-monger cards... There goes more money on a bigger case.

For the general user, I would go with the new Raptor (the clear one) if you want a rational compromise of speed, size and cost.

The simplest and best use of the i-RAM is to store the "tempdb" for SQL Server. MS SQL Server constantly writes to this database in most larger installations. It is temporary and by definition does not need to exist after a reboot. (Alas, SQL Server does not/CANNOT keep this database in RAM.) So on reboot a script will need to verify that it is still formatted and that the appropriate file system/files exist -- copied from the hard disk. SQL Server is fussy about hardware, so a RAM drive masquerading as a disk is perfect.
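A rough sketch of the kind of boot-time script described above (Python for illustration; the paths and the idea of copying pristine template files back from the hard disk are assumptions, not Gigabyte's or Microsoft's procedure):

```python
import os
import shutil

def ensure_tempdb_files(ramdisk_dir, template_dir):
    """Recreate database files on the (volatile) RAM drive after a reboot.

    If the RAM drive lost its contents, copy pristine template files
    back from the hard disk before the database service starts.
    Returns the list of files that had to be restored.
    """
    os.makedirs(ramdisk_dir, exist_ok=True)
    restored = []
    for name in os.listdir(template_dir):
        dest = os.path.join(ramdisk_dir, name)
        if not os.path.exists(dest):
            shutil.copy2(os.path.join(template_dir, name), dest)
            restored.append(name)
    return restored
```

Run before the SQL Server service starts; on a normal reboot with an empty i-RAM it restores everything, and on a warm restart it does nothing.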

In an hour on Google I can't find anyone to sell it to my boss to try it out. Sigh.

My prediction is a 5-10% boost to overall throughput on a SQL Server installation with lots of "tempdb" activity -- well worth the cost of the RAM chips.

I've been keeping an eye on RAM disks for a little while now, but other than software ones they are just too expensive. The earlier post that had links to them (both flash- and DRAM-based disks) was the same stuff that I found. More recently I had been relieved by the availability of 64-bit systems and OSes with more slots/address space for RAM and thus bigger RAM disks. But it still really burned me that someone couldn't make something really cheap that didn't rely on a big fat motherboard (which still has only so many slots, but is admittedly faster).
This qualifies. The second I heard about this while reading Computex coverage I said to myself: Self, this thing only takes power from the PCI bus, therefore it would be a trivial thing to buy some PCI slots (like 8) and wire them for power, then RAID or JBOD these together and get one heck of a database drive at a fraction of the cost of other solutions -- and scalable at that (I can start out with 2 or 3).
I also think it would be a nice (and easy) thing for them to put it in a 3.5" form factor with a Molex and/or 3.3V standby loop-through (through a PCI dummy card or something). And yes, 8 slots would be much more saleable, understanding that the memory controller may not support that (though some sort of bank switching would work, since you have time to wait for the SATA or SATA II bus; a 3.5" form factor would get difficult with 8 slots, though).
The situation that got me looking at this stuff is I have a MySQL database (I tested others as well) that has to do a table scan each time I run a query, since it is a '%something%' query (loading web logs and running user-demanded reports on them). The database is at around 4 gigs already (about 6 months' worth, including 0.5GB of packed indexes) and the report takes about two minutes (2 15k drives in RAID 1, not bad). But I still have to run it at night and make a summary table. (Maybe a database with multithreaded partitions or grid would do it, but how much does that cost?) Anyway, my 2 cents (sorry for the long post). I'd really, really like to know what benchmarks say the latency for this thing is.
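The table-scan problem described here is easy to demonstrate: a leading-wildcard LIKE can't use a B-tree index, because the index is ordered by prefix. A small sqlite3 illustration (sqlite stands in for MySQL; the table and column names are made up):

```python
import sqlite3

# A '%something%' predicate forces a scan of every row, while an exact
# (or prefix) lookup can walk the index -- which is why the poster's
# report gets slower as the log table grows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (url TEXT)")
conn.execute("CREATE INDEX idx_url ON logs(url)")

def plan(query):
    # EXPLAIN QUERY PLAN reports how SQLite will execute the query;
    # the last column of each row is the human-readable detail.
    rows = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
    return " ".join(row[-1] for row in rows)

scan_plan  = plan("SELECT * FROM logs WHERE url LIKE '%something%'")
index_plan = plan("SELECT * FROM logs WHERE url = '/index.html'")
print(scan_plan)   # a SCAN of the whole table
print(index_plan)  # a SEARCH using the index
```

The same distinction shows up in MySQL's EXPLAIN output (type ALL vs. ref/range), which is why solid-state storage helps the scan case so much: every row is touched on every report.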

They should have gone with a SATA II-capable interface instead of regular SATA, since it has much more bandwidth sitting there waiting to be used. Also, the 4 gig memory cap hurts it a tad too.

Now, just as a thought on scary uses for the i-RAM: law enforcement will hate these things. Paedophiles will have instant access to wiping their files without a trace; terrorists won't have to worry about the good guys being able to track their files.

Nope, pedos have a compulsive urge to collect stuff; 4GB wouldn't even come close. Besides, if the pedo was thinking that far in advance, there are plenty of already-existing technologies far more secure. When the cops come busting down someone's door, do you think they'll say something like "freeze, don't move, unless you prefer to go over to your computer and wipe data!"? Then again, general ignorance about the need to keep the evidence battery charged could be an issue.

Anand quotes $90 per GB of RAM here, but I'm wondering if the I-Ram works with the much cheaper high-density junk you see out there all the time. Like 128Mx4 modules. On motherboards, usually only SiS chipsets can handle that type of RAM, but there's no reason the Xilinx FPGA couldn't.

Since the Athlon 64's north bridge no longer needs a memory controller, why shouldn't the original memory controller be used for i-RAM purposes? By supporting both SDRAM and DDR RAM, people could make use of their old RAM (no longer useful nowadays) as a physical RAM drive.

Spare some space on the motherboard for an additional DDR module slot exclusively for i-RAM use, and an additional daughter card could be added for even more slots.

What's more, power could be drawn directly from the motherboard. By implementing an approach similar to i-RAM's, an extra battery could power the RAM for a certain number of hours.

Making the north bridge DDR/SDRAM-capable is not new technology; every chipset company has such tech. They could just stick the original memory controller, at lower performance (DDR200, so more modules can be supported at lower cost), into the north bridge; the cost overhead is relatively small.

What I think the extra cost comes from is the extra motherboard layout, north bridge die size, and chipset packaging cost (more pins). I suppose it could cost as little as $20?

What's more, the original SATA physical link could be omitted, as the controller in the north bridge could communicate directly with the SATA controller internally (south bridge through HT?). In this case, would the performance increase considerably and the overall layout be tidier? (No need for external cables and cards.)

NO, these are all problems. The purpose is to have universal platform support that is gentle on power consumption. That means a tailored controller, and even then we're seeing that the main limit is the battery. "Tidy" is an unimportant human desire, particularly inside a closed PC case. All they have to do is route the bus traces well on the card and be done.

HP sells an add-on for their DL380 server for $200 (at discount) that gets you 128MB of disk write cache... it makes a good system fast for disk writes as well.

This card could be used by Linux vendors to enable file-system data and control logging -- for similar money you get GB(s) of write cache... Cheap, reliable, fast general-purpose file servers that have fast disk write speed without risking data loss. Fast meaning no disk-head latency, no rotational latency -- just transfer time.

It would sell better with ECC memory, or the ability to use two cards in a mirror -- at least to careful server buyers.

I guess it would add to the base board cost, but a SATA controller on the PCI card would make it a little nicer, as then you are not taking up one of your SATA channels. I only have 2, and they are currently both used for a RAID 0.

Also, if they made the PCI card present a SATA interface and then short-circuited the back end to connect directly to the memory, wouldn't they then be able to get much higher transfer speeds than SATA? And yet all the existing SATA drivers could be used with it, given that it emulates an existing SATA interface.

I agree with the people who mention server uses for this product. There are already quite a few products like this around in the server space, but they are all VERY expensive. There's a comprehensive list here:

The one thing to note, most of these are flash based drives, which means they retain their data, but are actually quite slow transfer speed wise. When it comes to pure performance solutions (which are usually DRAM with battery and/or HD backup), there's only a couple of companies:

We've been long-time users of Micro Memory products, and in general they've been great. We place database journals, filesystem journals, and general server "hot" files on the device and get great performance out of it.

The biggest issue with most of these is price and support. Rocket Drive is Windows-only (we have Linux servers). HyperDrive doesn't appear to be shipping yet (we ordered one and haven't heard anything). Jetspeed I've never even been able to get a sensible reply from. Curtis seem to be focussing on Fibre Channel (their SCSI interface drive is now quite old, only 80MB/s), which means you need to spend almost an extra $1000 on just a controller. RamSan are incredibly expensive and FC-only, but apparently have amazing performance as well. Umem does have a Linux driver, but Umem are no longer selling retail; they are only selling wholesale to big storage vendors that use them in their products.

So that basically left us really interested in iRAM as a potential long-term replacement for Umem in new servers we buy. It's a pity that the apparent performance is a bit lacking. On the other hand, the biggest advantage of RAM-based drives is the latency reduction. Basically you can write, have your data committed to "permanent" storage, and move along to the next task straight away. This is the whole point of database/filesystem journals. It would be great to test the iRAM with real server scenarios that rely on this low-latency ability. Rerunning the database tests with a combination of journal and full database on the drive would be really interesting.

Basically it seems that this is a really hard product to sell. There's definitely a market for it in the server space, but most of the people who realise that are big DB/filesystem users, and are usually willing to spend more to get an "enterprise"-like product. It would be really nice if all those "middle" users with database/filesystem/email issues could be shown how to use one of these to significantly extend the life/performance of one of their servers...

I see this as a much easier way to run your OS in RAM (hell, I don't think there is a way to run XP on a RAM partition).

If you have 4GB of RAM, you can partition 3.5GB and run Win9x in it. That leaves the max 512MB of conventional RAM for 9x to work with. It takes a lot of work, but I think it is faster than this because you don't have the PCI bus constraint, and the RAM controller on a motherboard is probably flat-out superior.

It is a 300MB folder containing several files that could be located in different positions, which means more random access. The other is a single file; it is larger, but the data is read from adjacent positions on the disk. In the first case you have to add the overhead of the processing time of the OS when dealing with several files.

Actually, you need to make it a bit more clear: it's the Firefox source code, which is likely thousands of small files. It's not just a few or many, but *TONS* of little files. Even though the access times of the i-RAM are much lower than that of a standard HDD, there is still latency associated with the SATA bus and other portions of the system, so it's not "instantaneous". Three times as fast is still good, and that's relative to the Raptor - something like a 7200 RPM drive would be even slower relative to the i-RAM. Still, best case scenario for heavy IO seems to suggest the current i-RAM is only about 3X faster than a good HDD setup. Good but not great.

There's only one comment so far in this entire thread that really touches on where the i-Ram is truly going to succeed, and a few posters flirt with the notion in an offhanded manner.

The benefits of an i-Ram would really come out during I/O intensive operations, as in high volumes of reads and writes, without really being high data transfer volumes, which is the case for a lot of database operations. A lot of the tests performed in the article really had a focus of large volume data retrieval, and that's like using the haft of a katana to hammer in a nail.

Think about web bulletin boards like PHP-Nuke, Slashcode, phpBB -- any active dynamic website that is constantly accessing a database to load user preferences, banner ads, static images: forum posting, article retrieval, content searching, etc. An applicable consumer example would be putting your web browser's cache on the i-RAM, or your mail or news reader's data files, or dumping a copy of your entire documents folder to it, then using Windows' search function to dig through them all for all occurrences of "the". Throw a Squid cache on it. Put your InnoDB transaction log on it. Hell, for that matter, slot a handful of these and use them as InnoDB raw partitions for your data.

The kinds of tests you need to perform to make an i-RAM shine would be high volumes of simultaneous searches across the entire volume, the kind of act that would make a regular disk drive grind to a screaming halt in a fit of schizophrenic head twitching. This isn't video editing, OS booting (with exceptions), game loading, or most of the scenarios commented on above. It's still a SATA drive. Your bulk data isn't going to transfer any faster, but you *can* find it quicker and open, update, and close your files faster. Leverage *those* strengths and stop thinking it's a RocketDrive.

All my concerns on this product were pretty much addressed
-SATA2
-5.25" Bay drive instead of PCI slot
-Using a 4pin Molex connector or SATA power connector instead
-PCI-E instead of SATA (drivers are made everyday)

A few comments I have on this product that weren't mentioned. Everyone talked about putting these into a RAID 0 array to improve size, but no one mentioned that it could very well double performance. I don't know what's causing the current bottlenecks with these cards besides the SATA interface, but something just doesn't seem right. Anand needs to run benchmarks like SiSoft's File System Benchmark or HD Tach to narrow it down. Read, write, sequential and random should all be almost instantaneous, limited only by the bandwidth of SATA and the bridge it is attached to. This card could very well be limited by the chipset they tried it on (southbridge/northbridge interconnect). It might be even faster on a chipset that lacks a southbridge and only has a northbridge, such as the nForce4.

Given the nature of this product, I don't know why motherboard manufacturers don't just add this right onto a board or make a special adapter for it you can buy (with a better interface). I could see a lot more use in something like this if the DIMMs were attached right to my board and straight to my northbridge.

What Gigabyte should've done (all companies with a bright idea should do this) is just give this to review sites such as AnandTech and others to see what feedback emerges before they try to market something like this. I guess Gigabyte is sort of doing this by only producing 1,000, but that's still 1,000 more than they need to. If my guess is correct, the second revision of this product should follow quite shortly after this one hits the market.

As was mentioned, the price is a killer (I would rather get a SCSI-320 controller and a 15,000 RPM Cheetah).

The bandwidth, which could have really blown SATA drives out of the water in certain tasks, is obviously crippled by its attachment to SATA. Yet if i-RAM was running at full PCI Express speed, then I should think opening the specs for the memory controller would quickly lead to open source drivers. The storage is, after all, cheap DDR sticks.

Sure, these drivers might be written for Linux or BSD instead of Windows, but surely porting GPL'd drivers to Windows would be easy for a company which can open the specs? nVidia and ATI have proprietary drivers because they claim it would be suicide for them to open up their proprietary chip interfaces. But i-RAM?

I thought that compilation would make a good application for this. Source code, intermediate, and output files take up less than 4GB. The large number of small text files involved should allow the i-RAM's random access performance advantage to really shine. Add to that the fact that long compiles can take several hours -- or days if you are building Gentoo, for example -- and the difference should be quite noticeable. Yet there don't seem to be any compiler tests in this article. Maybe they simply aren't I/O-limited?

This is a 4GB PCI drive @ $3000 (yes, three thousand), but it is a native drive with direct access to the PCI bus and thus can sustain 133MB/s.

What I'd like to see is a version that fits in a 5.25" drive slot with 12+ slots for RAM, using a standard connector for power and SATA II or SCSI (SCA?).

I can see several advantages for this product IF you think about it:
Webcache server (hold the cache)
Temporary files (great for those programs that write temp files like crazy)
Swap space on a database server (look up PAE, SQL Server and 36-bit addressing -- 32-bit Windows can address up to 8GB of RAM IF the O/S and the app are written for it (been there :( )
Swap space for a badly behaved app -- there are apps ported from *nix to Windows that tell the OS "I have pageable RAM", which the server then dumps to disk (4 million page faults in 2 hours!) only for the app to ask for it back
Log files -- DB servers write out transaction logs once per transaction; this needs a drive that is FAST

Having more than one of these in a system means that you can separate out the I/O onto separate physical drives, or even better separate controllers, or best of all separate PCI buses (servers -- really big servers -- can have three of them). For a server, each unit (a logical disk made from RAID arrays) should be separated out as much as possible, by controller and by PCI bus.

I'm wondering about World of Warcraft. After the first article where the info debuted there was a lot of talk in the comments section, and one of the subjects was WoW. It wouldn't have been possible to install WoW on the i-RAM because it's too big (~4.6GB on my machine). However, once AnandTech receives another i-RAM to test with, either in JBOD or RAID 0, I would like to hear at least a subjective opinion on how WoW runs in large battles and such. I know my brother's machine gets stuttery when there's a big PvP battle, and through my troubleshooting I've gathered that it's a hard drive speed issue. If any of the AnandTech team has a high-level character on their account and likes PvP, please post something on performance in WoW.

I can't see having the i-RAM as being more beneficial to any game than simply adding more RAM to the system. If you're going to have 4x1GB DIMMs installed on the i-RAM, why not just put them into the system itself instead? As for WoW, even if the installed size is 4.6 GB, I doubt the game actually goes much above 1GB of memory use - very few applications do. If you have 2GB or more of RAM, do you still get stuttering issues in WoW? If so, there's a reasonable chance that it's simply GPU power that's lacking rather than RAM - or perhaps GPU RAM would help?

(Note: I'm not a WoW player, so I'm just shooting from the hip.)

There are at least 3 separate data files in the WoW installation that are 1GB in size each, plus a bunch of smaller but still over-100MB files. All told, as he said, it's about 4.6GB, and it's more than 4GB in that one folder alone. So yeah, the game would go over 1GB in memory use if it were written well enough.

I play WoW a lot, and loading into highly populated areas sucks. Your hard drive thrashes and you have no control of your character until everything is loaded. I'm assuming it's busy loading the textures of the equipment that all the player characters around you are wearing.

This i-RAM thing might help out a lot, seeing as consumer motherboards don't support over 4GB of memory and the data files alone for WoW total over 4GB. The problem again is that you'd need to RAID two of the i-RAM devices together to get that much storage, and we don't even know if it would result in a tangible benefit.

As others have mentioned, for all fast-action games it isn't the load times that Anand should be focusing on... it's the in-game stutters when something suddenly has to get loaded from disk. Those are killer, and even if the initial game load times only decrease by 5%, if the stutters are eliminated, this might just be worth the cash -- more than a new $600 video card, certainly.

My point wasn't that WoW doesn't ever exceed 1GB, but that it doesn't exceed 2GB of RAM use. Actually, we should have probably mentioned that point as well: no single application under 32-bit Windows (not counting PAE/NUMA setups) can use more than 2GB of RAM. The 32-bit memory space is partitioned into 2GB for applications and 2GB for the OS, if I have my information right. Basically, you need to try out WoW with a 2GB setup before you can say that i-RAM would or wouldn't be able to help.

Going back to the earlier statements, though, i-RAM is still nowhere near as fast as system RAM. The delay of PC3200 is around 140ns worst case, and bandwidth is still 3.2 GBps or 6.4 GBps dual-channel. i-RAM seems to be somewhere in the microseconds range for access times, and it's limited to 150 MBps bandwidth. If you can add RAM to your PC, that would be the first step to improving performance.
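To put those figures in perspective, here's a back-of-the-envelope model (Python; the bandwidth and latency numbers are roughly the ones quoted above, while the request count and hard-drive figures are illustrative assumptions):

```python
def transfer_time(total_mb, bandwidth_mb_s, latency_s, n_requests):
    """Rough time to service n_requests totalling total_mb of data:
    per-request latency plus pure transfer time."""
    return n_requests * latency_s + total_mb / bandwidth_mb_s

# 300MB spread over 10,000 small requests (assumed workload):
ram    = transfer_time(300, 3200, 140e-9, 10_000)  # PC3200 system RAM
iram   = transfer_time(300, 150,  5e-6,   10_000)  # i-RAM over SATA (~microseconds, assumed 5us)
raptor = transfer_time(300, 72,   8e-3,   10_000)  # 10K RPM HDD (assumed ~8ms seek, ~72MB/s)
```

The point the model makes: for many small accesses the hard drive's total time is dominated by seek latency, the i-RAM's by SATA bandwidth, and system RAM beats both by orders of magnitude.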

If you have Windows XP Pro, you should be able to make a spanned volume that includes the i-RAM and a regular disk. Then you can make hard links on the i-RAM that point to the additional 600 megabytes or so on the regular disk that won't fit on the i-RAM. I've never done anything like this myself, but I think it should work. Any comments?
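One caveat worth adding: hard links only exist within a single filesystem, which is why the spanned-volume step matters -- a plain hard link can't point from one volume to another. A small sketch of hard-link behavior (POSIX-style Python; the paths are illustrative):

```python
import os
import tempfile

# A hard link is a second directory entry for the same file data, so it
# only works within one volume/filesystem. Spanning the i-RAM and a
# hard disk into one volume is what would make this trick possible.
workdir = tempfile.mkdtemp()
original = os.path.join(workdir, "big_data_file")
alias = os.path.join(workdir, "link_on_same_volume")

with open(original, "w") as f:
    f.write("payload")

os.link(original, alias)             # second name, same underlying file
print(os.stat(original).st_nlink)    # link count is now 2
print(open(alias).read())            # reads the identical data
```

On NTFS the cross-volume equivalent would be a junction point or mounted folder rather than a hard link.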

Someone's probably said all this, but I don't feel like reading all 80-odd comments:

First, this strikes me more as a proof-of-concept effort. Sure, they'll sell you the engineering samples, for $150. Rev 2 will be the real product.

Second, I did see several people suggest that interfacing the board to the SATA interface rather than directly to the PCI bus makes it slower. Why? Standard 32-bit 33MHz PCI only has 133MB/s of bandwidth, and that's often shared with other devices as well. SATA has 150MB/s of bandwidth, and in most cases is connected to the system by at least a 66MHz PCI link, or more often some other high-speed chipset link.

Interfacing to SATA also means that Gigabyte doesn't have to write drivers for 32- and 64-bit flavors of Windows, various Linux distributions, Mac, and more obscure but definitely present OSes like BSD, NetWare and Solaris (/me wonders about putting the boot partition and SYS volume of a NetWare server on an iRam... probably no real benefit, but you never know).

It would have been nice to see some info on what it performed like as the temp folder for Windows -- all that internet browser cache and other stuff that Windows sticks off in temp while it does stuff.

This is data that you usually don't mind if it just disappears every once in a while :)

I remember five or six years ago there were products that would plug into a PCI slot and use PC133 RAM to do this same job. They would show up as a harddrive controller and windows would use default drivers unless you needed something different. This was when programs didn't expect you to have enough RAM to keep a scratch file in RAM, so they'd write out files after every action. A PCI card with a gig of RAM for accepting these scratch files made a huge difference. There's just less need now.

Then there's the other problem. SATA may be 150MB/s, but the PCI bus it's attached to is only 133MB/s. This certainly explains why everything runs at DDR200. If they'd made a PCI-X card there might be a bigger improvement. The bright side is that they used an FPGA. If next week they decide to implement SATA2, they can issue an update and everyone can upgrade their cards. Companies like Cisco do this several times a year in telecom products.

I'd hope and pray this thing is a lot faster than the iRam for all the extra cost. But the fact that it sits in a PCI card slot (I'm talking about the QikDrive linked above, not the iRam) makes me question that.

I was really surprised at how little it helped as a page file. Myself I sometimes encounter periods of slowdown due to paging that can last for several minutes where nothing can be done. I don't know if there's a common name for this but I'll call it the "page file wall". I don't know exactly how you would recreate such a tragedy in the lab. Too many apps open with too little memory obviously. But less obviously, it seems that during a period of overnight inactivity (with apps left open) windows will page a lot of stuff out to disk and you can experience the page file wall the next morning. It'd be interesting if Anand could devise a consistent "page file wall" benchmark.

As the article and many posts above suggest doubling my RAM would probably end my problems.

I still think this product (or revision 2 or 3) could bridge an obvious gap with PC's: SLOW harddisks and EXPENSIVE ram. When you run out of ram it can be like hitting a wall. It can be like crossing the country, but you go half by jet and the other half on foot. The gap should be filled with something cheaper than modern DDR and faster than harddisks. (This product is barely either.) I'd like to see a PC with 1 GB normal ram and 2GB of cheap-o 1/8 speed auxiliary ram. The OS could use this slower ram for paging with priority over paging to the harddisk. Not just for enthusiasts, but for regular beige PC's. Owners would then have another upgrade option with a better cost/benefit ratio depending on their needs.

I was waiting for a performance review of this thing and I'm so glad trusty Anandtech provided.

I was in my local computer shop and the guy working there pointed at a stack of hardware and said some guy had just dropped $8000 on an Intel 955X (or whatever) system that included around 16 gigs of RAM disks. I asked if it was based on DDR400 and he said no, it was in fact DDR2-533, I think. A quick search on the internet found nothing about DDR2 RAM drives, and it defies logic to me anyway, since I would think that DDR400 would be faster due to latency issues, etc. Has anyone heard anything like this? Also, the guy at the store told me that it boots into Windows XP in 4 seconds. It sounds like a tall tale, but I don't see any reason why he would be making it up, as they are pretty reputable.

Any time you need to write something before you can continue, the latency becomes critical. Database writes (and logging) are a perfect example of this.

Under *nix the journal of a journaling filesystem is performance-critical (although it's usually a sequential write, so it is about as good as you can get).

For database engines that have good crash recovery (MySQL is not that good at this, but Postgres and Oracle are), they need to make sure that their log gets to a safe storage medium before they can consider the write completed and tell the caller that it's done.

Even for an Apache web server with normal logging, Apache will not return the page until the log has been written.
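The pattern these comments describe -- don't acknowledge a write until it is on stable storage -- looks roughly like this (a minimal Python sketch; the function name is made up):

```python
import os

def append_log_durably(path, record):
    """Write-ahead logging pattern: the write is only acknowledged
    after fsync() reports the record is on stable storage -- the step
    whose latency a RAM-backed drive removes."""
    with open(path, "ab") as f:
        f.write(record + b"\n")
        f.flush()              # push Python's buffer down to the OS
        os.fsync(f.fileno())   # block until the device confirms the write
    return True                # only now may the caller be told "done"
```

On a rotating disk that fsync() costs a seek plus rotational latency per transaction; on a battery-backed RAM drive it costs only the transfer, which is why putting just the log on the i-RAM can speed up a whole database.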

As a lot of ppl have posted here, it would make sense to use this as a cache for our hard drives, by making it possible to plug the hard drive into the i-RAM and the i-RAM into the motherboard. This would overcome the 4GB limitation, and we probably wouldn't need the full 4GB for cache; we could use 1GB or 2GB. But to see more of a performance increase they will need to move it to SATA II and have programmers write code to precache data to take full advantage of the i-RAM.

Well, it seems that modern hard drives are getting a lot faster, and solid state doesn't seem to help as much as it would have, say, 2 or 4 years ago when we were running crusty low-density HDs...

However, I am also slightly disappointed in the design...

Why put main system memory in a drive and then limit it to SATA I (not SATA II)?
I thought the whole point of a RAM drive was to provide maximum I/O performance...?

Second, not allowing 2GB sticks doesn't make sense to me... I mean, 4GB is really small.
Maybe they should have thought,
"Gee, let's try to offer more capacity -- like, golly bunny, currently available 2GB RAM modules..."

Even so, if this can do 591% higher I/O performance than a Raptor in iPEAK Business Winstone, then I'm sure there are ways to utilize this in computing tasks...
Also, if you put the OS on it you won't ever need to defrag...

Nice, but expensive for now... expensive doesn't mean it's crap, just weakly specced, to my mind, for now...
Why do something like this and then water it down?

I think the disappointing benchmarks say something about current OSes' suitability for the iRAM, and not about the iRAM's capabilities. I really think this is an idea ahead of its time. Windows XP isn't tuned for solid-state storage, the FPGA chip on the iRAM isn't the best solution, and the SATA interface itself is a bottleneck. If Windows Vista and future BIOSes had support for PCIe storage, imagine a version of iRAM that had a straight PCIe interface supporting the full 1.6GB/s or more, depending on the type of memory you put on, and 8GB or more of memory thanks to 64-bit addressing.

Windows Vista will already have support for hybrid drives (NAND + platter), so the caching and paging routines will be optimized for solid-state storage. I actually think iRAM might be better than hybrid drives because 1) you can use existing drives with it, 2) iRAM is expandable (up to a limit), and 3) DDR is faster than NAND.

I could see SATA II removing the bottleneck, but still, 4GB of data? Gigabyte is smarter than this... it's just not going to fly. Though it is a pretty good start.

The next logical step is probably finding a way to get a standard hard drive to use something like this as a memory buffer (7200 RPM with 1GB of DDR200 cache), and then maybe it would actually be worth it.

I was disappointed that nothing was mentioned of the practicalities of moving Windows or a game onto this thing. Is there any software that would transfer whatever data is on this thing (including functioning operating systems) to a normal drive at regular intervals? And keep it functioning? If not, what's the point?! Each time you have to install Windows/a game on this thing (after power failures, or just for the sake of having something different on it), you have to install all the updates/personal tweaks/mods/saved games/configurations etc., which would take SO MUCH MORE TIME than the extra few seconds you save from faster boot/game load times... Why does AnandTech not take these things into consideration?! To paraphrase another poster: WHOOPEE-F*CKING-DO

The $150 thing is a killer. But if they can only pump out 1,000 of them, it makes business sense to have the price high. This is just like AMD having high X2 prices because they couldn't possibly make enough quantity to fill orders if the price were lower... same exact thing.

$90 per 1GB stick of ram is high, I'm sure people can shop around and find it cheaper.

As for RAIDing two of these, Anand said he only actually had one of them, but was trying to get a second. So maybe more on that later. I think that even if Raid 0 doesn't work for some reason, JBOD would work.

I'm curious what the bottleneck in computers nowadays really is. I think Anand should get an NForce Pro with 8GB of RAM running 64-bit XP, set up the largest RAM disk (a real software-type RAM disk) you can, and see how that affects performance. If performance shows the same mediocre gains that this device showed, then that means a new SATA2 version wouldn't improve things either. If that test showed there were large gains out there to be had, then yeah, there's a future here. I would do it myself but I don't have access to that hardware hehe.
Reply
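For what it's worth, the RAM-disk experiment proposed above is easy to script. A rough sketch (modern Python, not period tooling; point `path` first at a RAM disk mount and then at a hard disk directory and compare the numbers):

```python
import os
import tempfile
import time

def write_throughput(path, size_mb=64, block_kb=256):
    """Sequentially write size_mb of data under `path`, fsync it,
    and return the observed throughput in MB/s."""
    block = b"\0" * (block_kb * 1024)
    blocks = (size_mb * 1024) // block_kb
    fname = os.path.join(path, "bench.tmp")
    start = time.perf_counter()
    with open(fname, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force the data out of the OS cache
    elapsed = time.perf_counter() - start
    os.remove(fname)
    return size_mb / elapsed

# e.g. compare a tmpfs/RAM disk mount against a spinning disk:
print(write_throughput(tempfile.gettempdir()), "MB/s")
```

Keep in mind the OS write cache will flatter small test sizes, which is exactly the kind of effect that muddies RAM-disk benchmarks.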

I'd like to see how this would change the overall latency of a system. I have a pretty nice home studio, and I can see using this as a boot drive, and then recording off to a RAID array. With all the random accesses coming from the solid-state drive, and only sequential ones going to the RAID, I'd think the latencies would drop significantly. Could be pretty handy, even extending the life of older systems. Reply

Anand, first of all great review, it's nice to see some numbers on this.
Would it be possible to bench a few tests again with 2GB of system memory? I can vouch that 2GB makes a noticeable difference when loading any game. I realize that you were going for an "enthusiast" level machine, but games like HL2, Doom 3, and Battlefield 2 have started a push at the high end to upgrade to either 2x1GB or 4x512MB. Reply

Could they perhaps have gone with a full-size card and then oriented the DIMM slots perpendicular to the mobo? I had something like that ages ago in an Amiga that worked well from a size perspective. It might get them to 8GB :) Reply

The cost of this unit was increased 3x.
Then it went from SATA2 to SATA.
Real-life performance is not as good as I expected. When I first heard about it I was excited to see them working on removing the bottleneck, but going from a 13 second load time to 10 seconds doesn't warrant the cost of the $150 card and 4GB of RAM. Reply

I think the more useful implementation is to have the RAM pre-installed onto the drive. And I'm not talking RAM sticks. I'm talking about these guys at Gigabyte contacting Samsung, Micron, or Crucial to directly supply the chips and solder them directly onto 5.25" plates. I think in the space of a 5.25" bay, you could fit 2 such plates. It's not hard to imagine that they'd be able to fit 15GB of RAM in a 5.25" drive's space.

Then with the remaining space, mount a MUCH larger battery. Have the battery be able to last DAYS, not hours. This will set people a little more at ease. It will sure make me feel better. (and no, this 5.25" ramdrive will not be using a molex connector. Simply put in a dummy PCI card to allow the 5.25" to draw power from it)

The fatal flaw in their product design is that most people simply won't have that many RAM sticks laying around to make this thing useful. Why not supply the RAM, and in the process increase the possible size from 4GB, to something much more useful. If we already know that only 'power users' with little budget restraints will buy this, then just supply it the way we know they want it: Big.
Reply

If they got real serious and tuned it up with on-PCB DDR3, made it something like a ZIF socket thing, gave it a direct bus to the chip, changed the memory controller to let it throttle wide open, and wrote drivers and OSes to just use it, it might be like a really fast BIOS setup for the OS. At first it could be an extra, but as costs came down maybe it would be integrated into the motherboard. Hmm, nearly instant boot-up... it's a dream, even if it's only mine! Reply

I think another possible use (besides certain kinds of servers, like mail servers), is for video capture. The size is a bit small, but if you were capturing segments of footage, it might work. And the price could be reasonable. Reply

"but 32-bit Windows can't use more than 4GB of RAM, including the swap file size."

First of all... "Swap file" is a misnomer. We talked about "swap file" back in the Windows 3.1 days when the OS would swap a process' entire memory space to the *swap* file.

These days the OS will read/write selected pages of a process' memory from/to the cache manager (who may or may not elect to use the disk to get to the physical pagefile). *Paging*, not "swapping". Executables and libraries are memory mapped and thus start their lives with all pages firmly on disk (so a big executable won't necessarily load slow, but many small DLLs OTOH just might).

I don't have Windows XP in front of me, but my 32-bit Windows 2003 Standard ed. with 4GB memory and 1GB pagefile certainly doesn't seem affected by the limitation you mention. Enterprise edition can address even more physical memory... Each process is still limited to a 2GB virtual address space though. (32-bit processes marked capable of such will gain a 4GB virtual address space under 64-bit Windows)

Without PAE (or something similar), 32-bit OSes are indeed limited to 4GB of RAM. This is what is being referred to, as PAE is limited to Intel and I don't believe it's available on non-Server versions of Windows. (Correct me if I'm wrong, but PAE is pretty much only on Xeons, right?)

You're right that it's paging instead of swapping now, but there's really not much difference between the two. Basically, you put data onto the HDD in order to free up physical RAM, on the assumption that the least recently used data that was moved to the HDD won't be accessed again for a while. Reply
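The 4GB figure being debated above is just pointer arithmetic. A quick sanity check (the 36-bit width is the standard x86 PAE extension mentioned earlier):

```python
# A 32-bit pointer can distinguish 2**32 byte addresses:
flat_32bit = 2 ** 32
print(flat_32bit)                      # 4294967296 bytes
print(flat_32bit // 2 ** 30, "GiB")    # 4 GiB, shared by RAM and devices

# PAE widens the *physical* address to 36 bits, but each process
# still sees only a 32-bit (4 GiB) virtual address space:
print(2 ** 36 // 2 ** 30, "GiB")       # 64 GiB physical ceiling
```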

Wow, my friend and I talked about the possibilities for these things several times. But at 3x the initial price and not the performance increase I would have expected, the techie in me is disappointed. My wallet is happy though. Reply

Huh, if this was at the $50 price point it would be a bit more interesting.

I didn't like the pagefile test; it made no sense at all. Of course going from, say, 4GB RAM to 2GB + 2GB i-RAM isn't going to improve the system... You needed to test what JUST changing the pagefile from HD to i-RAM does. What about a typical 1GB RAM setup that most of us use? I still hit the pagefile on occasion and I do have ~1GB of old DDR I could use. Load times? No, I'd like to know if it smooths out gameplay. I know Doom 3 hiccups on my machine due to disk accesses.

Otherwise this doesn't look like it makes a lot of sense in its current incarnation. Reply

I know the article says it doesn't support ECC memory, but will it still take it and run it in non-ECC mode? Most mobos, I believe, can at least do this. What about registered memory? Got a couple sticks of 1GB DDR266 RECC memory I'd like to use! Reply

If more benches are to be done, I would put in a suggestion to test some compile times. Then I guess you should compare it to boosting your system memory and installing a RAM drive, but this could be more convenient if you have those old 256/512MB memory sticks lying around.

A while. You would have to find how much power is dissipated by the i-ram, then use the capacity of your UPS to get an exact number. I would go as far as to say maybe up to a month if you have a good ups. Reply

I thought they said that they were only going to make 1000. Enough for the crazies who have money to burn...
P.S. If any of you crazies are reading this, I could burn some of that money for you... just let me know. Reply

Thanks for running through the multiple roles for which the iRam might be useful. I'm rather surprised it wasn't MORE useful in the benches. I'd be interested in learning (i.e. slacking back and reading the results of someone else's research) why the i-Ram is still as large a bottleneck as it is. Yes it's faster than the HD, but why isn't it much, much faster? Are we seeing OS inefficiency or something else altogether?

In the end, though, it doesn't fit my needs particularly well, so I'll pass this round. Maybe a future version will be more appealing in terms of cost, speed, size.

I think that this would be very helpful as a page file for workstations. Older workstations may be maxed out with 4GB and windows 2000 (which the company does not want to move over to xp-64) and still need additional ram for CAD/CFD/etc. This would be an easy upgrade with a reasonable amount of performance increase. Reply

Was hoping it would offer more, especially as a pagefile. Any plans to make a PCIe version? (IIRC PCIe has a ton more bandwidth than SATA.) That would likely make this a must-have. As it stands now I'd only use it for the silence in a HT setup. Reply

I too am disappointed that the article lacked any mention of SATA2, which is twice as fast as SATA (300MB/s vs 150MB/s). Considering many motherboards already on the market support SATA2, and the 300MB/s transfer rate that goes with it, it is a bit of an oversight that the article doesn't even MENTION if the card supports SATA2 or not. Nor do they mention what they think would happen with SATA2, or if Gigabyte is likely to produce a SATA2 version. It's a weak spot in this article, I think, considering how central the bandwidth of SATA is to the performance of the i-RAM. Reply

quote:I too am disappointed that the article lacked any mention of SATA2, which is twice as fast as SATA (300MB/s vs 150MB/s)

33MHz PCI only gets you 133 MB/sec theoretical, and more like 110 MB/sec in the real world. The i-RAM with SATA 1 can completely saturate a PCI bus. SATA2 would cost more to implement, and give you no speed increase at all on a 33MHz bus. If you build the card for higher-end PCI specs (e.g. 66MHz, 64 bit, 66MHz/64bit, PCI-X) then you automatically exclude most PC enthusiasts (unless they like buying server boards for their game boxes).
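The 133MB/sec figure above is just clock times bus width. A sketch of the arithmetic (theoretical peaks only; real-world PCI loses another chunk to protocol overhead):

```python
def bus_peak_mb_s(clock_mhz, width_bits):
    """Theoretical peak of a parallel bus: one transfer of
    width_bits per clock cycle, expressed in MB/s."""
    return clock_mhz * (width_bits // 8)

print(bus_peak_mb_s(33, 32))   # 132 -> plain 32-bit/33MHz PCI
print(bus_peak_mb_s(66, 64))   # 528 -> 66MHz/64-bit PCI
```

Since SATA 1's 150MB/s payload rate already exceeds the 132MB/s peak of desktop PCI, a faster SATA link changes nothing on this card.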

If they end up doing a PCI Express version, then there would be some reason to support SATA2.

This board is not a replacement for a hard drive. It would be incredibly useful as a transaction log though. Reliable (i.e. won't get lost if the machine crashes) write-behind caching for RAID 5 drives will give you a huge boost to write speeds. And the controller cards that support battery-backed write behind caching cost a lot more money than an i-RAM.

Actually, scratch my comment - I had not had enough coffee when I wrote it. I forgot that the PCI connector is doing essentially squat except providing power to this device. Of course you could have a SATA2 controller on a faster bus talking to this thing. But an SATA2 version would probably cost more. (because it would need a faster FPGA, newer SATA transceivers)

You did miss that reference; on page 2 it says "The i-RAM currently implements the SATA150 spec, giving it a maximum transfer rate of 150MB/s".

Given the 1.6GB/s of the RAM, it seems completely silly not to provide a 300MB/s SATA interface instead, especially considering that the whole contraption including RAM will cost as much as 2 or more decent hard drives.

It could be useful for the pagefile if you have a couple of old 128-256MB DDR333 or older sticks lying around, especially if your RAM slots are filled with 4x 512MB. This can definitely improve performance over hard drive paging, which is horrible. I wish Gigabyte would have done 8 sticks instead of 4. The benefit of 8 sticks is that it would allow users to truly use their old sticks of RAM (128MB, 256MB, etc.) instead of just 1GB sticks. Right now, the price is too high for the actual i-RAM module, and the price of DDR RAM is also too much. If Gigabyte does this right, they could have a hit, but it does not look like they are moving in the right direction. IMO, 2x or 3x i-RAMs with cheap 512MB and 256MB sticks of old RAM running in a RAID configuration would be a good solution to the hard drive bottleneck, especially if people these days are willing to pay a premium for the Raptors.

Here is one thing that is not mentioned on AnandTech in most of the storage reviews, and that is responsiveness (as I like to call it). Back early in the day when people were starting to use RAID 0, most benchmarks showed little improvement in overall system performance; even now the difference between a WD Raptor and a 7200rpm drive is small in terms of overall system performance. However most benchmarks don't reflect how responsive your computer is; it's very hard to put a number on that. When I set up RAID 0 back in the day, I noticed a huge improvement while using my computer, but I am sure that the actual boot time didn't decrease much. Same thing with the i-RAM card: using it probably feels a lot snappier than using any hard drive, which is very important.
Reply

Not at all, Steve. The access time goes down 0.5ms at most (you don't have to take my word for it; I've tested it with many benchmarks), but RAID 0 shines where you need to get small amounts of data fast. If you are looking for a MB of data you get it twice as fast as from a regular hard drive (assuming around 128k RAID blocks). And due to the way regular applications are written, and due to locality of reference, that's where the responsiveness feel comes from. Reply
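To make the 128k-block point concrete, here's a toy model of how a RAID 0 layout maps offsets to drives (the stripe size and drive count are just the examples from the comment above):

```python
def stripe_location(offset_kb, stripe_kb=128, drives=2):
    """Return (drive, stripe_on_that_drive) for an offset in a
    simple RAID 0 layout: stripes alternate round-robin."""
    stripe = offset_kb // stripe_kb
    return stripe % drives, stripe // drives

# A 1MB sequential read touches both drives evenly, which is why
# large transfers scale; a single small read still lands on one drive:
print([stripe_location(kb) for kb in range(0, 1024, 128)])
print(stripe_location(4))   # (0, 0): one drive, one seek
```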

RAID 0 would not improve access times. What you generally end up with is two HDDs with the same base access time that now have to both seek to the same area - i.e. you're looking for blocks 15230-15560, which are striped across both drives. Where RAID 0 really offers better performance is when you need access to a large amount of data quickly, i.e. reading a 200MB file from the array. If the array isn't fragmented, then RAID 0 would be nearly twice as fast, since you get both drives putting out their sequential transfer rate.

RAID 1 can improve access times in theory (if the controller supports it) because only one of the drives needs to get to the requested data. If the controller has enough knowledge, it can tell the drive with the closer head position to get the data. Unfortunately, that level of knowledge rarely exists. You could then just have both drives try to get each piece of data, and whichever gets it first wins. Then your average rotational latency should be reduced from 1/2 a rotation to 1/3 of a rotation, the expected minimum of two independent random latencies (assuming the heads start at the same distance from the desired track). The reality is that RAID really doesn't help much other than for redundancy and/or heavy server loads with a high-end controller. Reply
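A quick Monte Carlo check of that two-drives-race idea (rotational latency in fractions of a revolution; seek time ignored, head positions assumed independent and uniform):

```python
import random

def avg_winning_latency(drives=2, trials=200_000, seed=1):
    """Average rotational latency when `drives` mirrored disks race
    independently and the first head to reach the sector wins."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += min(rng.random() for _ in range(drives))
    return total / trials

print(round(avg_winning_latency(1), 2))  # ~0.5 of a revolution
print(round(avg_winning_latency(2), 2))  # ~0.33, i.e. 1/3
```

The expected minimum of two independent uniform latencies is 1/3 of a revolution, a gain of only a sixth of a rotation over a single drive.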

Um yes. This is what I meant: mirroring (RAID 1, not RAID 0) would improve access times, as both disks could access different data independently (if the controller was smart). Sorry about the confusion. Reply

I was referring to RAID 0 in my post, if you didn't notice. There is no way RAID 0 would lower access times. It's impossible, seeing as the data is spanned across both drives, meaning the seek would be no faster than a single drive, and likely a tiny bit slower because of overhead. Reply

quote:One of the biggest advantages of the i-RAM is its random access performance, which comes into play particularly in multitasking scenarios where there are a lot of disk accesses.

Anand, how about an update with some server / database benchies?

Gigabyte might have something on its hands if it makes the card SATA-II to use the speed of the RAM. 1.6GB/s through a 150MB/s straw is not good. Anyhow, here's looking forward to REV 2.0 of the i-RAM GigaByte! Reply

This thing would still be useful as a pagefile in some circumstances: if all your memory slots were full and/or you had extra memory lying around. This is what I had been planning to do with it (currently have 4x512MB, plus a couple of other smaller-capacity DDR sticks which would be nice to use for Photoshop stuff). But the price is too high. I'll wait till it drops. Reply

- File copy performance is mostly a moot point, because copying files from disk to disk will go as fast as the slower of the two can, and other applications that typically require disk performance (unarchiving et al) will only see a minimal performance increase due to bottlenecks in other parts of the system (which becomes even less valuable when you consider that you won't be doing a whole lot of unarchiving to a disk that small).

- Gaming benefit would be okay if you could fit more than about one modern game on it.

- Using it as a pagefile is, as Anand noted, pointless.

- It does improve boot times, but it's not a huge difference; how many of us shut down often enough to actually be bothered by a few seconds at boot?

- It does improve app loading times slightly, but if you're opening and closing apps that take a lot to open and close, it's probably because you don't have enough system memory, so buy more memory instead.

I'm just gonna pick at a single point ... you could install one game to the i-ram at a time and then archive them on another drive.

You get fast zip times on i-ram, and a single file transfer to a magnetic disk is faster than multiple small files (moving the archive won't take long). Just unzip the game you want to play to i-ram ...

but then ... that kinda defeats the purpose doesn't it ...

I could see this being fun to play with, but I have to agree with Anand -- it needs higher capacity before it is really useful.

I don't really see anyone using this; it costs way too much for too little storage and too little performance benefit, not to mention the risk of data loss. I'll give it a look again when they get some higher-bandwidth flash or something like that. I can pass on this for now. Reply

Unfortunately if they did that, it would mean that your computer could never be turned off. As noted in the review, the card is currently still powered even if the machine is "off", due to the fact that when a modern ATX computer is off, it's actually more of a super-standby mode that leaves a few choice items powered on for wake-on events (LAN/modems, and the power button of course). All Gigabyte is doing here is tapping into the 3.3V line on the PCI slot that wake-on power is provided through, which is enough to keep the device powered up even when the system is in its diminished state.

Molex plugs on the other hand are completely powered down when the system is "off", so it would be running off of battery power in this case. A lot of us leave our systems on 24/7 anyhow, but I still think they'd have a hard time selling a device that would require your computer to be off for no more than 16 hours at a time. Reply

What I would be very interested in seeing is the performance of this thing as the source for encoding a DVD/MPEG... Most encoders are heavily disk-based, and if it could reduce the time significantly it might be worthwhile, assuming that eventually they come out with one big enough to hold the source. There's now reasonable CPU encode performance; you just have to get the data to/from it... maybe the i-RAM would help. Reply

Hmmm, the WD Raptor has a sustained transfer rate of 72MB/sec. So on a freshly formatted drive, with no fragmentation, it should still be half the speed of the iRam. But at $200 for a 74GB drive, then you could get a pair of these running in RAID0, which would run at around 140MB/sec anyway, and still have spent less than the cost of the iRam and 4GB of DDR DIMMs. It definitely seems like this product falls short.
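Putting rough numbers on that comparison (prices and transfer rates pulled from the comments above; the 0.97 striping efficiency factor is just a guess):

```python
# Hypothetical 2005-era figures quoted in this thread
raptor = {"price": 200, "gb": 74, "mb_s": 72}
iram   = {"card": 150, "dimm_per_gb": 90, "gb": 4, "mb_s": 150}

raid0_mb_s  = 2 * raptor["mb_s"] * 0.97      # striping rarely scales perfectly
raid0_price = 2 * raptor["price"]
iram_price  = iram["card"] + iram["gb"] * iram["dimm_per_gb"]

print(round(raid0_mb_s), "MB/s,", 2 * raptor["gb"], "GB, $", raid0_price)
print(iram["mb_s"], "MB/s,", iram["gb"], "GB, $", iram_price)
```

By this back-of-the-envelope math, two Raptors deliver nearly the same bandwidth and 37 times the capacity for about $110 less than a fully populated i-RAM.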

The use of PCI 3.3V standby power is clever. Perhaps a future version should just use a dummy PCI card to provide the power, connected to a 5" drive-size case with many more DIMM slots. If you can't cram at least 16 DIMMs in there, then the ability to use old memory is kind of wasted, since the old modules will have such small capacities.

Ultimately I think this type of product will always be a failure.

What they should do instead is make it a pass-through cache for a real SATA drive. So you plug the SATA controller into it, and plug it into a real SATA drive, and it caches all I/O operations to the real drive. That's the only way that you can get meaningful benefit out of only 4GB of memory. A card like this would turn any SATA drive into a speed demon; 4GB is definitely a decent size for most caching purposes. Reply

Of course the next logical step is to put the DIMM slots on the SATA controller card, so that access to cached data occurs at real memory speeds, not just SATA bus speed. This would only be a useful product for folks stuck on 32-bit systems, because otherwise it would be best to just increase the system memory instead. But there are plenty of 32-bit systems out there that would benefit from the approach. Reply

That and/or having the possibility to install very large amounts of RAM (like 32GB) on your motherboard and BIOS settings to decide how much of that is non-volatile.

I have a feeling this is a transitional product that while being a very nice add on to your current system, will become obsolete in 4 to 5 years. If I had to capture loads of high sampled audio (96/24), I'd want one now, though. Reply

I was expecting something closer to the $50 price mentioned at Computex... It would have been a nice device to tinker around with, but at that price (plus the price of RAM) I don't think most of us will get it. Reply

Why they have to waste PCI bus speeds and run it through a SATA chip is beyond me. It should connect directly to the PCI bus, have its own BIOS, and run as full-fledged RAM, or as normal RAM with a redirect to act as a HDD; heck, you already have RAM disk software for that idea. The drive is pretty useless as permanent storage; why no one could see this I do not know. Reply