Posted
by
by timothy on Thursday November 18, 2010 @02:34PM
from the ram-disk-returns dept.

Vigile points out a new take on SSD from Viking Modular Solutions. The SATADIMM puts an SSD in the form factor of a memory module. "The unit itself actually uses a SandForce SSD controller and draws its power from the DIMM socket directly but still connects to the computer through a SATA connection — nothing fancy like using the memory bus, etc. Performance is actually identical to other SandForce-based SSDs though the benefits for 1U servers and motherboards with dozens of DIMM slots is interesting to say the least. Likely priced outside the realm for average consumers, the SATADIMM will likely stay put in the enterprise market but represents an indicator that companies are realizing SSDs don't need to be in traditional HDD form factors."

I work w/ 1U servers all the time, and there certainly is space. In the long ones, there's room behind the drive bays, and in a short one, tuck it into the space between the PCI(e) card (if any) and the MB with a bit of double-sticky tape. On older 1Us, put it where the floppy drive used to go.

I use Asus RS120-E5 series 1U boxes. Since they're all headless web servers and everything I need is already onboard, there is a considerable amount of unused space above the PCI slots, and risers are already provided for power. These boxes only have 4 memory slots, though, so in my application a device built on a PCI card form factor would be a lot more useful.

I have, and this would have a hard time fitting in a 1U case. The data cable comes out the top, but many 1U cases have the ram sticks at a 45 degree angle because they would be too tall. It would be OK in a 2U or larger and used as the boot disk.

That was my thought as well. In the article, they seem to have a 90 degree adapter on the SATA cable to plug into the DIMM. My immediate reaction (besides "that's kinda neat") was that RAM is stacked, so if you put 4 of these in a bank of RAM, the SATA cables of the 2nd-4th would hit the cable from the 1st. You'd need cables that connect at 90 degrees in one way and 45 in another.

If you have empty RAM slots and you want to add one or two, it's not that bad. The idea of using banks of it to put terabytes in a 1U case...

They should have model variants with connectors staggered relative to the DIMM length. Have one with the connector in the first quarter, another with the connector in the second quarter, etc. So you could have a bank of four with no cable/connector overlap.

And also custom 1U racks filled with powered memory slots for such drives...

But I seriously think the point of this whole exercise is that with SSD drives we don't have to be tied to any single layout or size... they could be made to go anywhere. They could make them into stackable cubes like Legos (with sufficient cooling, of course).

3 letters for you: SAN
If you can't fit your storage into the case you ordered the wrong server or need dedicated storage.
Use the right tool for the right job. Memory slots are for memory. Servers have extra memory slots because they often need more. *gasp* what a concept.

I guess it would be a quick way to add storage to a server that has a bunch of unused memory sockets. And the design uses off-the-shelf components which is always nice.

But there was getting to be a need for a proper SSD package, as sticking them inside HDD housings was both limiting and an inefficient use of space. Viking's solution probably won't take off, though, since Apple/PhotoFast/Toshiba just stole their thunder. [arstechnica.com]

I've read that putting the tempdb in MS SQL Server (and whatever the Oracle and DB2 equivalents are) on SSD is a huge performance boost for queries that rely on it: things like sorts and joins.

You can easily have multi-terabyte databases on 1U/2U servers these days, and with 16GB DIMMs, enough memory in a few slots for them. But if you have idiots running select queries for hundreds of millions of rows at once, then this will be a big help. I've seen queries like this run for days.

16GB DIMMs run me about $900 each, whereas I can get 64GB X25-E's for $700.
And tit for tat, the performance won't be THAT bad by comparison.
At ~$55/GB for RAM, or ~$10/GB for flash, at 1000GB quantities... that's a pretty easy call to make personally. :P
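For what it's worth, the arithmetic behind that call is easy to check (these are the poster's late-2010 street prices, not anything current):

```python
# Back-of-envelope $/GB comparison using the figures quoted above.
def cost_per_gb(price_usd, capacity_gb):
    """Unit cost in dollars per gigabyte."""
    return price_usd / capacity_gb

ram_per_gb = cost_per_gb(900, 16)    # 16GB DIMM at ~$900 -> ~$56/GB
flash_per_gb = cost_per_gb(700, 64)  # 64GB Intel X25-E at ~$700 -> ~$11/GB

# Cost of reaching 1000GB with each technology
ram_total = 1000 * ram_per_gb        # ~$56,250
flash_total = 1000 * flash_per_gb    # ~$10,938
print(f"1TB as DIMMs: ${ram_total:,.0f}  vs  1TB as flash: ${flash_total:,.0f}")
```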

OK, let's assume 1RUs come in two basic flavors. There are the fully integrated products from Dell, HP, IBM and the like; these rarely have any standard power connectors, let alone internal SATA ports. Then there are the custom builds; these normally have free standard power connectors and free SATA ports. In the first case there is nothing to plug it into unless you add a SATA RAID card, at which point why not just get the power from the PCI-E slot? Custom servers don't need to draw power from a DIMM slot. In either case

The point is that instead of purchasing RAM at ~$25/GB, you can buy flash at ~$10/GB and still stay dense.

I'm sure where you are there's room for things, but in much of the world this is not the case. Try suggesting 4U storage cases to a customer wanting to host a 20TB database in Egypt. You may only get 4-6U in each building to work with (with little cooling capacity) and $25K/building in hardware budget.

There are cases for everything. I can think of a pile of customers of mine that only filled their VMware hosts with 64GB (of the 512GB max) of RAM, leaving twenty-eight sockets free in each of the three hosts for something! That's 33.6TB of space right there! (Though personally I'd PREFER to stick RAM in there; that would only be another 1.344TB of RAM.)

HP's 1U servers tend to have a couple SATA slots left over, especially if you forego the optical drive (and with PXE or iLO, you don't need it). The actual hard drives tend to run from a SAS RAID controller, which often takes up a valuable PCI-E slot.

The power still comes from the power supply... where else would it come from? I guess it'd be useful if you have memory slots you're not using, but no extra drive bays.

The distinction the GP was making was the power -- yes, from the power supply -- delivered through the pins of the DIMM slot rather than the cable connected directly to the PSU. And I'd have to agree with both of you in asking what the point of this is.

The price of flash has nothing to do with the price of RAM. They are completely different constructions, for different tasks. Flash is faster than magnetic storage but still dog slow compared to RAM. For flash, you talk access times in 2-3 digits of microseconds; for RAM, access times in single-digit nanoseconds. For flash, transfer rates are in the 100s of MB/sec, with anything over 200 being rather exceptional; for RAM, transfer rates are 10+ GB/sec.

Same sort of transition again when going from DRAM (what you put in your system) to SRAM (what processor cache is made out of). Again the price goes up massively, so instead of 8GB you are talking maybe 12MB. However, again the speed goes way up and the access time way down.
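The hierarchy being described can be summed up with rough numbers. Only the DRAM and flash figures come from the post above; the SRAM and disk entries are illustrative fill-ins:

```python
# Rough storage-hierarchy figures (orders of magnitude only; exact
# numbers vary wildly by part and year).
tiers = {
    "SRAM":  {"access_ns": 1,         "mbps": 50_000},  # cache, illustrative
    "DRAM":  {"access_ns": 5,         "mbps": 10_000},  # "10+ GB/sec"
    "flash": {"access_ns": 100_000,   "mbps": 200},     # ~100us access
    "disk":  {"access_ns": 8_000_000, "mbps": 100},     # magnetic, illustrative
}

# Each step down the hierarchy is one or more orders of magnitude slower.
for fast, slow in zip(list(tiers), list(tiers)[1:]):
    ratio = tiers[slow]["access_ns"] / tiers[fast]["access_ns"]
    print(f"{slow} is ~{ratio:.0f}x slower to access than {fast}")
```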

This is kind of like comparing apples to oranges. Think of the SSD as another intermediate, fully random-accessible cache layer that is slower than RAM but much faster than a hard drive. Consider the cost of placing, say, 40G of RAM in a server: that's a lot of DIMM slots, a more expensive mobo, and lots of expensive high-density DRAM, versus the cost of a 40G SSD ($115 from Intel). So even though the SSD is more expensive per gigabyte than a normal HD, it is considerably less expensive per gigabyte relative to RAM.

Typically 2x2G sticks are cheaper than 1x4G stick, particularly when it has to be ECC memory and DDR3. If you are talking about non-ECC memory then you aren't talking seriously. non-ECC memory is just fine for a consumer desktop (though even that is arguable when one is talking about storage in excess of 4GB), but in a server environment ECC is pretty much required. As of about a year ago I've started buying only ECC memory for desktops too.

When I saw the headline, I was hoping that this would be a device that allowed an SSD to be connected to a RAM slot and used as RAM, rather than an SSD that takes up a RAM slot.

Well, I don't know why you would want to use slow flash as your primary storage rather than fast DRAM or SRAM. Now, this is not what I think you had in mind, but having primary storage that does not need refresh would permit a machine that could be powered on and off and remain in a consistent state. There were a few more things you'd need to do, like preserve the contents of CPU registers, but there are ways to solve those problems. Such a machine also could have only primary storage be

Well, I don't know why you would want to use slow flash as your primary storage rather than fast DRAM or SRAM.

Maybe rather than just taking up the slot, use it for communication too? Appear as RAM to the computer and then create a RAM drive to mount the SSD? Though it does seem like a rather roundabout way just to avoid using a SATA cable.

While the write speed would be painful compared to real DRAM, the read speed would be comparable.

For large static arrays, and for custom data applications, it could have uses in the form the GP suggests, though it WOULD be a nasty throwback to the days of user ROMs...

However, I could definitely see the potential in having such a thing mapped directly to system memory, then loading a special block device driver to allocate all that "memory", so that memory IO could be used for data storage. It would eliminate the SATA controller's IO bottleneck, but would impose a slight CPU penalty. For systems with multiple CPUs, that wouldn't be much of a problem. You would need to allocate that memory fast, though, to prevent the OS from trying to use it like RAM.
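As a toy illustration of that idea (a userspace model, not a real kernel block driver; all names here are made up), exposing a memory-mapped region as an array of fixed-size blocks addressed by block number might look like:

```python
# Toy "block device over mapped memory" model. An anonymous mmap stands
# in for the memory-mapped flash region a driver would reserve.
import mmap

BLOCK_SIZE = 4096
NUM_BLOCKS = 256

class MemBlockDevice:
    def __init__(self):
        # Anonymous mapping: zero-filled, BLOCK_SIZE * NUM_BLOCKS bytes.
        self._mem = mmap.mmap(-1, BLOCK_SIZE * NUM_BLOCKS)

    def write_block(self, lba, data):
        assert len(data) == BLOCK_SIZE
        self._mem[lba * BLOCK_SIZE:(lba + 1) * BLOCK_SIZE] = data

    def read_block(self, lba):
        return self._mem[lba * BLOCK_SIZE:(lba + 1) * BLOCK_SIZE]

dev = MemBlockDevice()
dev.write_block(7, b"\xab" * BLOCK_SIZE)
assert dev.read_block(7) == b"\xab" * BLOCK_SIZE
```

Reads and writes here are plain memory copies, which is the appeal: no SATA controller in the path.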

Sure, burst speed may be comparable, but RAM will maintain that burst speed at almost any number of IOPS. Try writing 250K IOPS @ 1K to a flash drive and watch it slow to a crawl.

Access times, though much better in flash over the last few years, are still an order of magnitude slower. And just imagine writing to it as RAM only to find out that the process must wait while the old blocks are reallocated due to bad-sector remapping (potentially causing micro- or even millisecond access times!).

Also, while the problem of flash wearing out has been vastly exaggerated, imagine how quickly a contended lock would wear out the 100,000 write cycles. You could easily do that many in a second, and no wear levelling can cope with that.

It's actually more around 10,000 cycles for consumer-grade MLC flash (which is what you find in most SSDs). SLC flash runs around 100,000 cycles. There's been a lot of misinformation on this topic, but the easiest way to think about it is to consider the actual write durability that people have been experiencing with SSDs. Take an Intel 40G SSD for example: the vendor-specified write durability is 35TB (I always say 40TB just to make the numbers easy), or 1000x, which assumes a very high write amplification

Write amplification basically arises because an MLC flash chip uses a 128KB write/erase block. Smaller writes either have to be write-combined or else eat far more durability than larger writes would, due to having to rewrite the whole block.

I'm fairly certain that the write block size in every SSD on the market right now is not the same as the erase block size...

In other words, that 128K block is segmented into 4K blocks (32 of them,) and each 4K block can be written once per erase cycle.

...so it's not fair to count small uncombined writes as equivalent to a future erase on a 1:1 basis. It's actually 32:1; each small write only costs about 3% of an erase cycle.
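The 32:1 arithmetic from this subthread checks out:

```python
# Figures from the discussion above: 128KB erase block, 4KB write pages.
ERASE_BLOCK = 128 * 1024
PAGE = 4 * 1024
pages_per_block = ERASE_BLOCK // PAGE        # 32 pages per erase block

# If each 4KB page can be written once per erase cycle, a single small
# write costs 1/32 of an erase, not a whole one.
erase_cost_per_small_write = 1 / pages_per_block
print(f"{pages_per_block} pages/block; "
      f"{erase_cost_per_small_write:.1%} of an erase per 4KB write")
```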

I remember back before computers had onboard drive controllers and there was no such thing as a standard drive interface, they sold ISA hard drive cards: a drive and controller all in one. I don't see too much advantage in running a drive on a RAM slot; you can just dedicate a drive (or drives) to your work, swap, or temp files. I typically do that when editing video: one drive holds the raw videos, one drive is a temp drive, and one is where the final video files are output when they are rendered. Much faster than using one drive, or even a single RAID, to read/write large amounts of data at the same time.

Certainly putting things like swap space and database journal files on SSD would speed things up wonderfully, but how about an OS hack where an SSD drive is a sort of L3 cache between core and traditional disk for dirty disk buffers? Also, I'm wondering about the power requirements between SSD and DIMM RAM.

Not sure I would call it an OS hack, but DragonFly has precisely that, called swapcache. Swapcache Manual Page [dragonflybsd.org]. It isn't so much making standard paging work better (systems rarely have to 'page' these days) but its ability to cache clean data and metadata from the much larger terabyte+ hard drive that makes the difference. Anyone who has more than a few hundred thousand files to contend with will know what I mean.
-Matt
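For the curious, the gist of an SSD-as-read-cache tier can be modeled in a few lines. This is an illustrative in-memory LRU sketch, not DragonFly's actual swapcache code:

```python
# Minimal model of a fast tier caching clean blocks from a slow backing
# store. The OrderedDict stands in for the SSD; a plain dict for the HDD.
from collections import OrderedDict

class ReadCache:
    def __init__(self, capacity_blocks, backing):
        self.capacity = capacity_blocks
        self.backing = backing          # slow tier (the big hard drive)
        self.cache = OrderedDict()      # fast tier (the SSD), LRU-ordered
        self.hits = self.misses = 0

    def read(self, block_no):
        if block_no in self.cache:
            self.hits += 1
            self.cache.move_to_end(block_no)   # refresh LRU position
            return self.cache[block_no]
        self.misses += 1
        data = self.backing[block_no]          # slow path
        self.cache[block_no] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict coldest clean block
        return data

disk = {n: f"block-{n}" for n in range(100)}
c = ReadCache(capacity_blocks=10, backing=disk)
for n in [1, 2, 3, 1, 2, 3]:     # second pass hits the fast tier
    c.read(n)
print(c.hits, c.misses)           # prints: 3 3
```

Only clean data is cached here, matching the swapcache idea that dirty data still has to reach the backing store.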

An SSD usually uses considerably less power even during writing than RAM. Consider that a stick of RAM is going to have to continually refresh each of its 16 or 32 chips, while flash is only going to power up those that it is currently accessing at that time.

Adaptec's line of MaxIQ controllers are the cheapest I know of; Intel also has it on their high-end rebadged LSI controllers, though you have to pay extra to add the feature. The controllers use an SSD as an additional layer of cache (they also have a RAM cache) for the array to speed things up. Works quite well apparently, if a bit costly.

Final Thoughts: Taking power (and space) from free DIMM slots is certainly a novel idea, and is beneficial to overly cramped installations. I can easily see these being used for embedded and other custom systems where high storage performance is needed without the wasted space.

So the entire purpose of this hyper-expensive convoluted creation is to save a power cable...? The whole article reads more like an advertisement + some benchmarks. I see no benefit to this thing whatsoever. Unless I am missing something, it sounds more like Viking was trying to make a non-volatile memory chip (that would be kinda cool) and realized it wasn't going to work, so they had the engineers rip out everything novel about it and just use the DIMM slot to save a power cord.

I wouldn't be surprised if the purpose were to confuse the buyer. Imagine, you see an SSD that plugs in to a DIMM slot. "Woah, that's got to be faster than a normal SSD! Or it's got to be doing something that makes it better than this other one that only connects to a little ribbon cable."

But this makes even LESS sense for large-form-factor mobos. If you want to maximize density you buy a high-density SSD (they come in terabyte sizes now, after all). You don't buy a hundred custom-fit DIMMs with discrete SATA connectors. It doesn't even make any sense for the 1U form factor, since 2.5" drive bays trivially fit in it.

Actually I find this potentially quite cool. Not as much for the power source, but the size. Since most mATX boards don't come with mini PCIe slots, if you want to use an SSD drive you need a 2.5" drive or a PCIe card with a mini-slot on it. Both are much larger than a DIMM option.

And with 50GB, this would be very useful in a media box streaming from a server. Now if only the price could come down.

But not more useful than actually putting ram in that slot, since most small form-factor motherboards are also going to have a minimal number of DIMM slots anyway. One rarely sees more than 4 slots and I don't know about everyone else but I always populate all my DIMM slots so I don't have to purchase ultra-high density sticks (which cost a premium).

There is a very good reason why Apple had to use this sort of thing... a custom-fit SSD in a custom, basically non-upgradeable item (people who buy Apple stuf

I'll just speak for myself, but to me it seems like a strange concept to use a memory slot for power... apparently memory slots are commonly abundant in servers, but not Molex cables? I guess a niche concept and 20 acronyms lead to mistake-making territory in my case... whatevs...

And don't forget that oftentimes it is people such as myself modding the conversation. I try my best to moderate well, but on a subject like this I am outside my knowledge base. I think that is a reason so few comments on this story are modded much at all; besides, some of the posts are just saying "wtf" without bothering to read the article.

Slashdot does seem a bit desperate for moderation, I'm getting 30 mod points on some weeks, 15 at a time.

This device seems backwards with today's trends. With virtualization gaining ground fast, the ideal setup is to have as much RAM as possible with a SAN back end for storage: iSCSI, FC, whatever. Most local disks on servers today are RAID1 mirrors for the small hypervisor.

So, yes, this device wastes a valuable DIMM slot to give you a less-valuable SATA drive?

I can't think of any scenario where this would be useful unless you're talking about handheld devices - a MacBook Air or tablet of some sort.

DB servers in a leased rack. Doing DB IO over FC or iSCSI adds latency that local disks are not going to have. This gets you fast local storage without having to pay more each month for leased rack space.

Rack? Who cares about racks? It's not like there isn't enough room in 1U servers. What this is awesome for, though, is small form factor PCs. With video on the mobo or CPU, the only thing left that stuck out was the hard drive or SSD. Not anymore. Awesome! :-) Now I can go get myself a proper 17x17x5cm quad-core PC. :-)

I can see this in an environment when you need to stick a lot of 1U rack systems all over the place, and can't spread out over a larger footprint in any one location. But when else am I going to use this? Didn't we decide a long time ago that large amounts of internal storage wasn't really a good way to handle increasing storage needs?

I'd much rather see a big ol' SAN full of SSDs than put together something like this, unless someone else is seeing an advantage that I don't.

We need a new standard form factor or two. Clearly, making an SSD the size of a platter-based hard drive makes no sense, but this product makes no sense either; it's just a way to steal power from another sort of slot. In addition to the form factor, I'm not sure SATA even makes sense anymore, so it may be time for a higher-level rethink.

I'm not sure of the best way to go, but there are some semi-obvious starting points. What about MiniPCI for SSDs? One or two on the motherboard could work well. Maybe a mo

The new form factor introduced in the MacBook Air sounds interesting, and it's already announced to be available from a couple of different manufacturers. It's basically the same size as a DIMM, but with the pins at the end instead of along one edge.

It's not a very exciting use of non-volatile memory. It makes sense, though, to package non-volatile devices for vertical slots like DRAM, and have motherboards that have slots for them. But not DIMM slots - something that actually carries the drive data. The thing announced in the article still needs a drive cable; all it gets from the DIMM slot is power. This looks like an interim product until server motherboards go to that form factor and eliminate drive bays.
The near future for server farms prob

Let me get this straight: you want us to sacrifice valuable RAM slots, and more so, valuable RAM, to run an SSD device? What would make more sense would be a completely separate 1U unit hooked up to the server with nothing but SSD devices (or hard drives). Wait, don't they already have those?

Likely priced outside the realm for average consumers,

I also doubt the average consumer will want these. With most consumer motherboards only supporting two or four slots of RAM, I REALLY don't see sacrificing RAM slots for SSD. Especially when they top out at, what,

Everything old is new again. Sun did this in the SPARCstation 10 & 20 line, by enabling an optional NVRAM SIMM in the primary memory slot. A whopping 4 megabytes max, I think :) so it was used for caching the "metadata" of things like NFS, rather than for direct storage. But putting something directly in memory, and accessing it through the memory bus 'normally' (like a basic RAM disk) sounds a whole lot more efficient than just sucking power from the slot but looping back around through the SATA bus, so you c

Ok, now if that's the case, this doesn't seem as neat... I guess it helps if you happen to have an empty DIMM slot, but that seems like a small niche. Why not just sell a tiny SSD that you could hook up to a Molex power socket and a SATA cable? I'm not a server admin or anything, though, so maybe I'm missing something.

There are 32 DIMM slots on a modern 4-socket server board. Memory comes in densities up to 16GB/DIMM, and few companies (well, few of the ones I sell to globally) have any need to max out the boards. 8 slots gives you 128GB of RAM, leaving 24 slots available.

At 400GB usable density per slot with this product, the machine can then host 9.6TB of SSD storage BEFORE connecting to a drive array. Being that most cases will only allow 8 SSDs to be mounted, th
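The density math in the parent works out as claimed:

```python
# Figures from the post: 4-socket board with 32 DIMM slots, 8 slots of
# 16GB DIMMs for RAM, the remaining slots filled with 400GB SATADIMMs.
TOTAL_SLOTS = 32
RAM_SLOTS = 8
RAM_PER_DIMM_GB = 16
SSD_PER_SLOT_GB = 400

ram_gb = RAM_SLOTS * RAM_PER_DIMM_GB          # 128 GB of RAM
free_slots = TOTAL_SLOTS - RAM_SLOTS          # 24 slots left over
ssd_tb = free_slots * SSD_PER_SLOT_GB / 1000  # 9.6 TB of SSD
print(f"{ram_gb}GB RAM, {free_slots} free slots -> {ssd_tb}TB of SSD")
```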

His OP was (to use your words) "so blatantly stupid" that if he can't figure out why he got a bad mod, well, too bad. He should realize the moderation system is imperfect, as explained here [slashdot.org]. Specifically:

I found a comment that was unfairly moderated!

Lemme know and I'll look at it. Sometimes I might agree and revoke access to a moderator. Usually I disagree and let it go. Its difficult to be the judge on this stuff since it is so subjective.

His OP was (to use your words) "so blatantly stupid" that my response was too. Tough.

You are either a clueless idiot with no social comprehension, or you are a pedantic idiot looking for something to be a pedant about and feel superior. Neither is very likable. I am going to go out on a limb and assume that you don't have many friends (if any) and most of your co-workers find you difficult to talk to and unapproachable.

1. Are you assuming that there is a direct correlation between my on-line, anonymous posts on an open forum and my face-to-face, interpersonal relationships? If so, citation needed. (Regardless, it isn't true in my case.)

2. (Directed to the OP) It's the Internet (in general; Slashdot in particular); if you don't have a sense of humor or are easily offended you shouldn't be on open forums.

Think Mac Minis or Nano-ITX boards. You could make a damn small box which for many (most?) people is more desirable than expansion room. The case could also be dead simple with the most complicated thing being the holes to attach the board.

I think it's a bit silly too. For one thing, there is ALREADY a suitable form factor: mini-PCIe. And for another, DIMM slots change every year. Anyone buying this DIMM-based SSD is basically buying a custom part with no resale value (because its form factor will become obsolete very quickly) and wasting a memory slot they might actually want to use in the future. Bad news all around.

I think you are misapprehending the density argument. These DIMM slot SSDs are NOT ultra-high-density items. A standard form factor 2.5" or 3.5" SSD is much higher density when you are talking about more than a few gigabytes worth of SSD. They can fit 1TB+ into a 3.5" form factor already. Also, 1U rack mount boxes have no trouble fitting a whole crapload of 2.5" or 3.5" front-loaded slots into the box. It is after all a fairly deep form factor.

You basically completely failed to look at the difference in size, or to consider the type of case it is mounted in.
Sure, it's unnecessary in a traditional tower case, where there are often 8 hard drive slots available.
But in a small server rack where space is at a premium, this becomes very viable, especially as there are often few drive slots available but probably some unused RAM slots.

Not sure why you think it would be viable. Small server racks still have very deep footprints and already have plenty of front-loaded 2.5" and/or 3.5" hot-swappable slots. You are advocating that this DIMM thingy would somehow be an improvement? It isn't hot-swappable, it still needs a separate mobo SATA connector (and cable), and you actually have to PULL the freaking server out of the rack to change it out. It essentially can't be upgraded. It isn't commodity hardware. AND it is low density compared to