Posted
by
CmdrTaco on Monday April 10, 2000 @11:43AM
from the now-thats-what-I'm-talking-about dept.

NWprobe writes "Some scientists at the Naval Research Laboratory have developed a new super disk. Nando Times has an article about it. I want a storage device like this, but will we ever see them come into production?"

"We anticipate we can put 400 gigabits in a square inch," said solid state physicist Gary Prinz of the Naval Research Laboratory, which has just contracted with a pioneering Minnesota firm to move the technology from the lab to the production line.

This is starting to feel like a bad visit back to the 60s. Whatever happened to holographic storage? I thought that was looking promising. At least that seemed like a somewhat fresh & undeveloped idea. Mag core memory went out about when they figured out they could use something smaller than ferrite beads to store data.

According to NVE's MRAM white paper, MRAM has been demoed with 50ns read latency and 10ns write latency (20MHz and 100MHz, respectively). From the information in the white paper, it appears to have much the same organization as DRAM, so you could have a fairly wide bus that reads/writes 64 bits or 128 bits at a time. So 20MHz @ 128 bits = 320MB/s throughput reading, and 100MHz @ 128 bits = 1.6GB/s throughput writing.
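Those peak figures are easy to sanity-check. A quick sketch, assuming the white paper's best case of one full-width bus access per latency period:

```python
def throughput_gb_s(latency_ns, bus_width_bits):
    """Peak throughput in GB/s, assuming one bus-wide access per latency period."""
    accesses_per_sec = 1e9 / latency_ns      # 50 ns -> 20 MHz, 10 ns -> 100 MHz
    return accesses_per_sec * bus_width_bits / 8 / 1e9

read_gb_s = throughput_gb_s(50, 128)    # reads at 50 ns latency
write_gb_s = throughput_gb_s(10, 128)   # writes at 10 ns latency
print(read_gb_s, write_gb_s)            # 0.32 GB/s reading, 1.6 GB/s writing
```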

Of course they have thought it through. Hard drives are holding back the whole speed of the computer; it's a big bottleneck. When you boot up the computer, it's waiting for the hard drive to read all the programs on the computer. CPU time isn't really being used all that much - it's just sitting there waiting to get some data to process.

But if you had one of these new toys, you wouldn't need to read any data and put it in memory - it's already there! And at 50GB per square inch, that's pretty good. You could (like they said in the article) have it built into the CPU and have your OS loaded on it. Turn on the computer - Boom! It's perfect.

As soon as you request the map in Quake3, the time to load the program would be *nothing* compared to the hard drives being used today.

I hate waiting for programs to load, and I hate waiting for my computer to boot up. And I *HATE* having to delete more MP3s to get more hard drive space.

Imagine your Palm Pilot (or other PDA of choice) with 50GB of memory on it. Or how about laptops, or portable MP3 players, or a digital camera being able to store 700 pictures at a time at high resolution. The list goes on.

Disaster in the marketplace? Only if they sell for $1000s instead of $100s.

I think people objected to your post because it was stupid, not controversial. There are only so many ways to moderate down, however. You'll note you got moderated back up by someone who thought you were funny.

If I wasn't replying in this area, I'd have moderated you down too. A way to vastly increase storage density with less power and faster access times? And all you can think of is to increase HD capacity? It indicates you didn't read the article which listed many other uses.

What a strange coincidence. Those three happen to all be large binaries. One 3MB MP3 would then equal 1024 3k (and insightful) web pages. Not that it matters, since we're questioning your limited imagination in this product's usefulness.

BTW, how did you calculate your figure of 195 Terabytes of data on a palmtop using this technology? Unless this is some obscure marketing math, that would equal about 3900 square inches using the researcher's projected 400 gigabits (i.e. 50 GB) per square inch. That's over 27 square feet!
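For reference, the back-of-the-envelope arithmetic, using the article's 400 gigabits (50 GB) per square inch:

```python
DENSITY_GBIT_PER_SQIN = 400               # the article's projected density
density_gb = DENSITY_GBIT_PER_SQIN / 8    # 50 GB per square inch

claimed_tb = 195
area_sqin = claimed_tb * 1000 / density_gb   # square inches needed for 195 TB
area_sqft = area_sqin / 144
print(area_sqin, area_sqft)   # 3900.0 square inches, ~27 square feet
```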

"The device is so small, and requires so little power, that it should be possible to combine it with a computer's central processing unit, according to Max Yoder, director of electronics operations at the Office of Naval Research."

Yeah, if he hadn't been moderated up I probably wouldn't have bothered. But I'd feel a duty to at least knock him down to 1 again in order to maintain whatever small amount of respect slashdot moderating still has.

This technology is even higher density than current hard drives, which are _much_ denser than hard drives several years ago. Things like this keep making good old fashioned hacking [userfriendly.org] harder. I mean, how is anyone supposed to get started on one of these with just an ordinary magnet! Sheesh...

I can't wait to have a CPU with mass storage inside the case. They didn't say whether this stuff is as fast as RAM, but I'd assume it isn't, since IIRC magnetizing something is usually a lot slower than flipping the charge on a capacitor. Read performance should be impressive, though, and in any case reads and writes should be orders of magnitude better than hard disk seek times. (HDs have decent sequential access performance, but are miserable for random access.)

10,000Gb is "only" 1.25 terabytes. You can buy RAID arrays with more than that on them. Maybe it should be enough, but for various reasons it isn't. Some of the reasons are actually logical, like experimental data or satellite images (as opposed to wasteful users on a big file server).

Well, you can wait 10 times longer for them to download! Available network bandwidth is a very real limit on many things. Of course, you can save CPU time by decompressing once and saving the PCM audio data for future playback, like a cat page vs. a man page.

"The N&O has been a HIGHLY conservative newspaper for a long time and I never read it."Yeah, that's just what Jesse Helms always says:-)They should have left me at "insightful" and given you the point for "funny".

haha. There is no such thing. There /used/ to be - but in the last 12 months slashdot has turned into nothing more than a forum for celery-overclocking linux kiddies who roll new kernels as if they were 0-day warez.

However, it's good to have multiple different approaches to the "my next drive" problem. One of them will be best, and several may not pan out. By having several parallel development paths, we increase the likelihood that at least one of them will actually work.

Yes, a (not-so-)longstanding record has been broken today: Dumbest post ever. I can only assume that this is a troll. However, for those about to be sucked in:

The reason I wouldn't buy a 900lb hamburger for $.10 is not "because I'd never eat it all". It's because that's a waste. There's too much cost (in environmental terms) for the value (a couple of meals at most before it goes bad). Similarly for the movie: The cost ($7.95 + 3 years of my life) is too high for the value (a movie). As an example, what if the 900lb hamburger was guaranteed to never go bad and it was easy to store at your house? The economics start to look a little more attractive, don't they?

But big hard drives don't have a cost the same way. Given a choice between a Palm with 8MB of memory for $150 and a Palm with 400GB of memory for the same price, I'd choose the latter in an instant, and so would every sane person. What reason would there be to NOT do it?

Yes, that's right, I'm still labelling you a troll. I'm even more sure of it now. You make claims with no proof (or even examples) to back yourself up. Your participation in the flamewars you start serves only to lengthen the argument--not shorten it.

In particular: "would you buy a 900lb burger ? No of course you wouldn't, because it makes no sense. "

Specifically HOW doesn't it make sense? Remember that I've posited the existence of an easy way to cart the thing home AND that it won't spoil. So you've got a big slab of meat (plus some extras) for $.20. How is that a bad thing?

Speak for yourself. Maybe YOU don't want 400 GB of storage, but I'm sure there are plenty of people who do. Myself, I would buy one in an instant. You could say the same thing about cable TV: oh, nobody wants 150 channels, they would never watch them all. Yet there's still plenty of buyers out there.

Although Nando's website [nando.com] looks different than the News and Observer's website [news-observer.com], the content is basically the same, except for local news (Raleigh). The N&O has been a HIGHLY conservative newspaper for a long time and I never read it.

The military is more concerned with TEMPEST than they are with "anti-magnetic field" protection. The electromagnetic shielding on military comms and computing gear is a) protection against EMP, or electromagnetic pulse; b) protection against remote eavesdropping.

Unfortunately, with that logic we should all be driving '71 Cadillacs and city buses...

It just pisses me off that more and more people are buying huge vehicles to mask their terrible driving skills, not to mention road manners, consequently making the roads more dangerous for those of us who drive responsible cars.

Given your career, perhaps you can also tell us why people seem obsessed about buying 5000lb SUVs that get 12mpg.

I have a hard time believing that very many of the full-size SUV buyers "use it all", and yet there are a lot of buyers out there.

The interesting point of this article (did you read it?) isn't that the price of storage is going down -- I agree, BFD -- but that we might have access to "instant-on" memory. *That* would be a very big deal indeed.

Considering that various groups (such as a newly announced alliance of US tech companies) are pushing for networking over power lines, accessible via power outlets, might a creative engineer eventually be able to build, say, a network-capable bug inside a power cable? Or one of those ubiquitous power adapters?

You'd have the capability to have a decent amount of code with miniature electronics; it could archive a/v recordings for a while and then xmit in bursts at hours when the owner is less likely to be awake and using the network... hrmmm.

Ya, you probably think 640K was enough memory for everyone too... keep in mind, the size of software keeps growing too. Also, maybe Joe Average won't want it, but I know a company that already has 200GB of data... they might be interested.

...; they gave no numbers on the read/write speeds here, and I'm not convinced that this is going to be fast.

I doubt that speed is going to be an issue. What you're doing is changing the large-scale organization of magnetic domains in the material (at this scale, it is a single domain, I'm sure). You're swapping the spins of a few million electrons; how much time can that take? Even the old-style magnetic cores weren't all that slow for their size, and these things should be about as much faster as they are smaller.

I seem to remember micrograph photos of the garnet-based bubble memory showing units shaped like the letter "L", which sequestered small magnetic domains that could be moved. Donuts were the old iron ferrite core memories, which you access by wires.

The new stuff sounds like the magnetization is neither moved nor accessed by wires, though the donut shape seems to be very good at holding a magnetic state. Or maybe they just shoot the signal through the donut holes to reach the bottom one... it's a little vague in the story.

That's not really a lot of memory since it still uses terms we can pronounce to describe it.

Figure instead that this is a crystal ball and find out what year it is to be available with serious transfer rates.

At that time,

- All AOL's forums are digital video, only viewable from Netscape 16 (forked away from Mozilla).
- RIAA distributes DVD-quality music videos of the top 100 singles weekly in throwaway chips like those AOL CDs people get in the mail these days ("a great loss leader"), with 2048-bit SDMI encoded in the hardware controller.
- Sony's Playstation V, having broken another sales record, is issued a recall since someone found a secret password for the test screen which renders SDMI useless! Oh Shit!
- Tons of encrypted software comes with tons more of free interactive ads, and spam is a fond memory.
- Transmeta slates, which allow you to plug these memory chips into thin little bays inset all around the edges, will directly connect to fiber in public spaces (spread spectrum that can handle the bandwidth will fry your zygotes in a flash).
- Micro$oft still sux but who gives a shit. Bill sez "the consumer is happier now than ever before" etc.
- You set the deSDMI chip emulator in your slate to the public key signature of the viral FiL3Z band which infects the omnipresent fiber net, dial up your favorite virtual file system (not FreeNet, since they all got arrested in early 2001) and download today's news plus the latest RIAA password. Andover has been bought out by CNet and the original Slashdot crew (fired, though some were co-opted by the new owners) run (virtually) for the data havens where they can broadcast Geeks in Space in peace.
- As an afterthought you turn over extra computing power to the daily RIAA decryption effort. Having seen it was finished hours ago (someone in the quantum chip labs in Switzerland has been playing around over coffee break) you turn it to the distributed SETI project, having the last year of raw observatory data in an auxiliary bay. There are a couple of inexplicable Wow events on the charts but more processing is needed...
- Hackers and mainstream couch potatoes incredibly unite against AOL and steal the keys to the data stream. Artists get rich from direct payments from advertisers, and with TimeWarner going down the tubes AOL is looking for an exit strategy.
- Programmers are working feverishly within the GPL license to support the latest quantum computing hardware (for the first time Linux may support this hardware before any other OS). The only problem is when the chip delves into other universes during a computation the subjectivity breaks; it keeps finding references to OSes by Linus' parallel twin sister Eunice (who makes her own wild Mozilla skins) and, strangely enough... hot grits?
- World domination at last. Who won? Doh.

In economic terms, you are talking about a product's "utility". Comparing the utility of hamburgers to direct-access disk drives is not quite thought out either... if the products were substitutes for each other, maybe.

The work was done by a grad student, Benjamin W. Chui, at Stanford with a grant from IBM. His thesis was published last month (which is rare for a thesis). Essentially he used a micromachined silicon cantilever with piezoresistive elements. The tip was highly resistive as well. To make a pit, the device would be heated by resistance as it was dragged along the surface. To read the pit, the device had a piezoelectric or piezoresistive element that would sense the tip bending into the pit.

If you are very interested in this sort of thing (Fatbrain doesn't have this book yet), go here: http://www.amazon.com/exec/obidos/ASIN/0792383583/102-1637548-3888018

I think people will have no problem filling a storage device of that size with movies and music videos and stuff. Most videos I download today are around 50 megs for like 4 minutes, and it looks like crap. If people start storing DVDs on their HD like they do with audio CDs today, it won't take long to fill a few TB.
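A rough sketch of how fast that space goes, treating the 50MB/4min clip as a bitrate and assuming a DVD-ish 5 Mbit/s stream for comparison:

```python
def hours_to_fill(capacity_gb, bitrate_mbit_s):
    """Hours of video needed to fill a drive at a given stream bitrate."""
    total_bits = capacity_gb * 8e9
    return total_bits / (bitrate_mbit_s * 1e6) / 3600

clip_mbit_s = (50 * 8) / (4 * 60)        # a 50MB, 4-minute clip: ~1.67 Mbit/s
print(round(clip_mbit_s, 2))             # 1.67
print(round(hours_to_fill(2000, 5)))     # 2 TB at DVD-ish 5 Mbit/s: ~889 hours
```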

Back in the days of 80 ns memory chips, the best bubble memory chips were something like 400 ns (0.4 µs) as I recall. This would make them about 10x faster than the best discs today. The problem was with the magnetic domain and RLC resonance frequencies - with the "on insulator" type of tricks that are going on today, a real speed-up might be possible. Unfortunately, all this seems to have been tied up in small companies with only one customer (the US military), and there probably isn't the money to push their performance with heavy-duty development efforts. There may have been some minimum size problems with the technology, so you couldn't get the densities you really would like without a lot of bucks on that too.

This is not a question of increasing the storage capacity of regular-size devices (mostly). It is a case of being able to store the same amount of information in an incredibly smaller area using much less power. Applications:

- digital cameras
- wearables
- smart appliances
- CPU cache! (mentioned in article)
- laptops

Smaller is always a selling story for computers. My current hard disks are big, loud, and hot. I don't like them. Stacking 1 TB of 3.5" rotating disks into the colocation center costs a fortune in space, power, and cooling.

Among all the Bazillion-petabytes-on-the-head-of-a-pin stories, I rather like the 10GB roll of scotch tape, myself.

But, what I find interesting about this particular miracle is the possibility of putting a few gigs of storage on the same chip as the CPU. Probably not very practical for general-purpose computers (wanna reload your data just so you can have a faster CPU?), but there are other uses...

Hook a wee bit of this to the equivalent of a 386 or 486, put it in an affordable package, and you could have:

hyper-intelligent watch/notetaker

key-fob sized MP3 player

standard-sized 5x7 picture frame that changes pictures every hour to show a different image of the wife and kids

The lab has contracted with Nonvolatile Electronics Inc. to develop the technology to produce the devices on a commercial scale. The company was founded in 1989 by James Daughton, who pioneered the field with Honeywell, and it has already carved out a sizable market for magnetic sensors and other devices based on similar technology.

Thankfully, defragging happens in O(n) time, meaning that if you triple the hard drive space, you triple the time it takes to defrag. It's not like sorting, O(n log n), which would get monstrous.

Factoring in access and write speed, it's actually O(n/s) where s is the read/write/access speed.

Basically, if you have a 2Tb drive that's 100 times as fast as your 20Gb drive, it'll take exactly the same amount of time to defrag. If you wanted to sort it at the same time it would take somewhat longer, since the log factor also grows (from log n to log(100n)) - though for realistic block counts that's well under a 2x penalty.
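The scaling argument can be sanity-checked with a toy model (the block size and the linear vs. n-log-n split are assumptions for illustration only):

```python
import math

def defrag_time_ratio(n_ratio, s_ratio, n_base, model="linear"):
    """Relative running time when capacity scales by n_ratio and drive
    speed by s_ratio. 'linear' models an O(n) pass, 'nlogn' a sort."""
    n_new = n_base * n_ratio
    if model == "linear":
        return n_ratio / s_ratio
    return (n_new * math.log(n_new)) / (n_base * math.log(n_base)) / s_ratio

blocks = 20e9 / 512   # hypothetical 512-byte blocks on a 20Gb drive
print(defrag_time_ratio(100, 100, blocks))                  # 1.0 -- same time
print(defrag_time_ratio(100, 100, blocks, model="nlogn"))   # ~1.26 -- only a bit longer
```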

Cool. If speed isn't an issue, then price is. If this is as fast as or faster than SDRAM, it will replace SDRAM. However, I don't think these will ever compete with traditional magnetic hard drives. These will be VERY expensive in comparison, and only people who are willing to spend lots of money for (physical) stability (it's solid state, i.e. no moving parts) and possibly faster access times will buy them. So maybe big servers could use this, but only if it's less expensive than RAID (but RAID has redundancy...) or if the server is located inside a paint mixing machine... ;-)

So, sure, this will replace/compete with NVRAM. And depending (a lot!) on price, it may replace SDRAM. But I doubt it will replace physical (platter) hard drives (which, I should add, they are not claiming in the article...). I will say, though, that if they can get the price down to anywhere near traditional drives (now about $10 per GB or less) then this will replace hard drives!

Well, imagine if this stuff was as fast as (or even faster than) memory is now. This stuff could be the "memory" and the "hard drive" and the "CPU cache"... or maybe there wouldn't be a difference anymore. Having a CPU attached to a square inch of that stuff means you have a CPU with memory and nonvolatile storage that equals 50GB... that would pave the way for more lower-cost PCs.

Besides... saying that this technology wouldn't be useful and no one would care is just like saying that no one will ever need more than 640k of RAM.

I am curious what the state change rate is (how fast it can be changed from a 1 to a 0) and whether I can change any of the bit storage locations on the device simultaneously, or only one at a time...

The rate is important. For instance, memory can change about 6.4*10^9 bits per second (theoretically) for PC100 RAM (a 64-bit bus at 100 MHz). Hard disks can transfer about 0.5*10^9 bits per second (ATA66 is 66 MB/s), but only for the bits the hard disk head is over at that time.

This is very oversimplified (all throughputs are absolute theoretical maximums), and I probably messed up the exponents, and someone more knowledgeable can refine my question... but hopefully I got enough of a point across for someone to understand it - is this tech really feasible as a memory replacement, and where is there more hard data about it?
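Those peaks can be worked out from bus width and clock. A quick check (the single-data-rate simplification is mine):

```python
def peak_bits_per_sec(bus_width_bits, clock_hz):
    """Theoretical peak for a simple single-data-rate bus."""
    return bus_width_bits * clock_hz

pc100 = peak_bits_per_sec(64, 100e6)   # PC100 SDRAM: 64-bit bus at 100 MHz
ata66 = 66e6 * 8                       # ATA/66: 66 MB/s burst, expressed in bits
print(pc100, ata66)                    # 6.4e9 vs 5.28e8 bits per second
```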

Actually, if the platters are about 3.5" in diameter (1.75" in radius), and there's about 1" in diameter lost in the middle (0.5" in radius), the area on this disk would be π*(1.75^2) - π*(0.5^2) = 8.836 in^2. At 50 GB/in^2 that's 441.8 GB per surface, * (2 platters * 2 sides) = 1767 GB, or about 1.77 TB.
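That back-of-the-envelope calc, as a sketch (the platter and hub dimensions are the parent's guesses):

```python
import math

def stack_capacity_gb(outer_r_in, inner_r_in, density_gb_sqin, surfaces):
    """Capacity of a platter stack at a given areal density (rough model)."""
    annulus_sqin = math.pi * (outer_r_in**2 - inner_r_in**2)  # usable area per surface
    return annulus_sqin * density_gb_sqin * surfaces

# 1.75" platter radius, 0.5" hub radius, 50 GB/in^2, 2 platters x 2 sides
total_gb = stack_capacity_gb(1.75, 0.5, 50, 4)
print(round(total_gb))   # ~1767 GB, about 1.77 TB
```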

Interestingly enough, I recall working on electronics repair training equipment in the Navy in the late 80s that used this kind of memory. This memory was part of a computer (based on the Z80, I think; it's been a long time). If you looked at it under a powerful magnifier of some sort you could actually see little round ferrite cores wrapped in extraordinarily fine copper wire. I always wondered how anybody managed to wrap that in a production environment.

Yes, Seagate has a big campus here, like 6 buildings or so, but it's actually headquartered in California. 3M, on the other hand, is headquartered over in St. Paul, and their Imation division has broken off from the parent company and has been doing vast amounts of research on storage devices. My guess would be 3M and Imation over Seagate, but I could totally be wrong.

I couldn't say I wasn't impressed by this, especially because I remember the hype 20 years ago when the first hard drives appeared. They held about 5Mb, and everyone was wondering who could ever fill this huge space.

Yet let's not be too enthusiastic, because for the moment this technology has some possible problems (at least where the military is concerned, and they seem to be the first interested in it). Being a magnetic storage device, a magnetic field can destabilize the data on it, making the computer unusable (or unreliable). Hypothetically speaking, it would be quite easy to attack a ship with some sort of radiation that will make it vulnerable to another attack that may destroy it. On the other hand, civilian users are quite protected against this, since there aren't many important secrets to be destroyed (I'm not speaking about corporate users).

Then again, who will ever use 400Gb (or more) at home? (OK, I may sound silly, repeating the sentence I mentioned in the beginning.) There are limits on one's ability to gather information, and one of them is time. Although, if I were to speak frankly, I imagine Windows 2010 taking 70% of this space (with a 25Gb Solitaire). Hopefully there will be no Windows any more in 2010.

Well, if I remember back to my digital classes, there are going to have to be A LOT of address lines to access each individual bit, which means more metal, which means more heat! Especially if they say they are going to try and stack them (like poker chips). Also, I can see why the manufacturing process would be cheaper. As the article says, there are NO transistors. If anyone has taken a VLSI class they know how many steps are involved in the fabrication process of transistors... It's a wonder how Intel and AMD can sell their chips for so little!
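The address-line worry may be smaller than it looks, since address bits grow only logarithmically with capacity. A quick sketch (the word widths are assumptions, and real parts would multiplex the address anyway):

```python
import math

BITS_TOTAL = 400e9   # one square inch at the projected 400 gigabits

# address lines needed for different access widths (word sizes)
for word_bits in (1, 8, 64, 128):
    words = BITS_TOTAL / word_bits
    lines = math.ceil(math.log2(words))
    print(word_bits, lines)   # e.g. even bit-addressable needs only 39 lines
```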

Let me guess, you're a home user. No, you won't have use for this large a storage device in the near future, but many network admins work with much larger storage devices already. And while I can't speak for others, this certainly has me interested. I've had to manage as much as 240 GB of data when 10 GB hard drives were the biggest we could get our hands on. It would have made my life a lot easier to have all that data on one drive rather than spread out over 24 (more when you count the 'lost' drives in the RAIDs).

Additionally, you have to remember this is a SOLID STATE drive. No moving parts; the transfer times will be almost as fast as your RAM or cache. While this may not seem like that big a deal to a home user, this type of speed will allow developers (Web and other) to do more processing on their own systems and then transfer just the results to your computer. Which means better content can be delivered to the customer. Do I see an immersive 3D Slashdot environment in our future?...... maybe....... lol

My memory is that bubble memory involved the creation of bubbles of magnetization within the substrate by the application of a magnetic field. These bubbles could be polarised in some way to indicate 0 or 1. The size of these bubbles depended upon the strength of the magnetic field, with very small bubbles requiring very strong fields (just what you want in a computer).

Also I think that it was a linear storage medium where the bubbles were moved in a loop around the chip by placing voltages on T shaped elements. This would probably mean that the speed of retrieval is not very great, as the bubbles are led past a read-out area.

It probably was just a technology that did not offer any advantage over easier or already existing devices. One advantage was meant to be non-volatility (with a battery backup). Now we have Flash memory.

Doesn't this sound a little old? I think I've heard the "500 billion trillion gigabytes per square millimeter cube" thing one too many times. Has one of these technologies ever seen the light of day? What makes everyone so certain this will, either?

Anyway, back in reality, I'll stick to getting excited about actual product shipments.

It seems as though this could be built right into the CPU or into the package holding the CPU die, which would lead to getting your hard drive, RAM, and CPU all in one unit.

So what happens when that package, with a certain company's operating system permanently installed and hardwired in, is available cheaper than the same hardware without any OS, or with a certain "free" OS pre-installed?

If this thing is "instant-on", then either OS will be right there, ready to go as soon as you hit the switch, but one certain company will be able to subsidise the purchase, whereas the other OS, even though free, won't have a financial behemoth behind it (unless it's AOL, and you have to be a subscriber of theirs to get the discount, or maybe even for the hardware to work at all).

Nando Times is a spin-off of the Raleigh, North Carolina newspaper "The News and Observer". The N&O (N and O, Nando, get it?) got into the ISP business several years ago with, IIRC, Nando.net, which I think still exists, but the actual subscriber base got sold off to Mindspring a few years back.

If linking like that becomes illegal, won't that pretty much be the end of the internet as we know it?

As to whether or not Slashdot can or should provide content, I don't think that's what it's really here for. We, the Slashdot audience, provide the content. Unfortunately the content is sometimes of the quality of posts like yours.

Heh.. I recall reading on New Scientist about using really small pins dragged around a wax disk or cylinder or something. You could heat it up to make a pit, and if it hit a pit it would heat up slightly due to friction while getting out of it and you could detect that. Those pits could be really damn small.

I tried a search on their website, but couldn't come up with a link; could be it's only in their print version (and I'm not searching THAT!) or I just wasn't lucky.

I'm wondering how this technology differs from the so-called "bubble" memory that was being researched in the mid '80s, other than the higher density.

IIRC those chips were made out of a thin layer of garnet, and the description of the individual memory cells being shaped like "tiny doughnuts" rings a bell. At the time I think the biggest one they had was about 500K, but considering how much smaller the current electronic paths are in state-of-the-art semiconductors compared to what was available in 1986, I find myself wondering how this "new" technology is different from the older one. Is anyone out there in /. land familiar enough with both to fill in the details?

The device is so small, and requires so little power, that it should be possible to combine it with a computer's central processing unit, according to Max Yoder, director of electronics operations at the Office of Naval Research. That would eliminate the long wires needed to connect the memory to the control unit, "so the whole computer operation itself will be significantly speeded up," Yoder said.

I think the key to this technology really is the size. The problem with PC's now is that even though the processor is getting faster, the rest of the system is lagging behind. With faster and smaller memory, and the ability to put the memory right on the CPU, we could really see the speed boost we are looking for.

A major advantage of the new technology is that the memory system is nonvolatile

Kiss your slow hard drives goodbye, and now almost all of the computer can fit on one card. All the case is needed for now is the expansion slots. Too bad it is too early in development, though...

That would eliminate the long wires needed to connect the memory to the control unit, "so the whole computer operation itself will be significantly speeded up," Yoder said.

Long wires are not the only thing that makes data storage slow; they gave no numbers on the read/write speeds here, and I'm not convinced that this is going to be fast. There is already an abundance of NVRAM (Non-Volatile RAM) available (eg. SanDisk [sandisk.com]), but it's VERY expensive (eg. 16MB = $75.00). The read is fast for Flash RAM, but writes are real slow. This, however, isn't Flash...

The real issue I see there is they're trying to say that this will be the normal RAM for a system, and that it won't be erased on reboot - think about this - if you crash WinDoze, you want the memory to be rebooted! You don't want the pooter to be in the same state upon reboot!!! If they're trying to use this as some kind of ROM that gets copied into the RAM on startup, well, that's already around. The only thing I see this doing is replacing NVRAM (eg, Flash, CMOS, etc), which will replace hard drives when it gets cheap (and fast) enough.

The thing that matters is the balance between size, speed, and price. The article doesn't seem to touch upon this point very much.

There are many very promising technologies out there. The real problem with memory right now does not seem to be the size at all. For example, hard disk density (size) doubles every year. However, the access time (speed) only decreases about 30% every 10 years.

First of all, the technology promises 400 gigabits/in^2, which translates to 50 GB. Don't even bother to translate how much it would be if you made a hard drive out of it, because this technology will push us far beyond hard drives. Imagine something like Sony's memory sticks with 50 GB on them. You could throw about 10 DVDs on one. Business cards with 100 GB of storage. Little MP3 players with years of listening time. Hell, we could probably just have portable WAV players. This will be particularly interesting when compact computer parts like monitors become viable. If I could have a pair of glasses with a display, and a tiny DVD player the size of a pager, with piles of DVDs stored on tiny memory sticks, that would be ideal, and not too far off. You then unplug your display glasses from the DVD player and plug them into your wearable computer, and have access to a huge amount of power and data in a very small size. This brings us much closer to wearable and micro-portable electronics.

It really did have to happen, but it still kicks ass. They talk in the article about how the entire system is solid state - i.e. you don't have a spinning disk or any mechanical pieces. You take out the mechanical pieces, and you've eliminated most of the reason drives fail. Not only that, but the whole reason hard drives are so slow compared to, say, processor registers or RAM is that they're rotational - they're mechanical, not electronic.

Of course, ideally, you could have a mass storage unit that was several gigabytes of the same stuff that processor registers are made of, but I wouldn't even care to know how much one of those drives would cost.:) They say that the two characteristics of storage, speed and price, are roughly proportional - the less you pay the slower it is.

I'm just wondering how long it will take to get one of these to market. I wish Moore's law also applied to time-to-market.

Because, dimwit, from the engineering side it's ALWAYS bits. Memory chips are measured in BITS. We put a bunch of them together (a couple of 4x128-Mbit chips, say) to make megaBYTES for you, the consumer who wouldn't understand bits.

Bits are more accurate. More quantifiable.

Traditionally, Kilobyte refers to 1000 bytes when dealing with data transmission, and 1024 bytes when dealing with memory. Now, when dealing with software written by those who don't know this, it could be either.

But a kilobit is always a kilobit. 1000 bits.

Ethernet is 100 megaBIT because its channel usage is measured in bits: things go on and off the wire a bit at a time, and NOT always in even increments of 8. Same for gigabit.

The SAME goes for the way hard drives encode data! What is actually stored on the drive has little direct relation to what you think is stored there. All kinds of encoding schemes are used - a single data bit may be represented by several bits on the platter....
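One classic example of such encoding is MFM, used on older floppy and hard drives, where each data bit becomes a (clock, data) pair of channel bits. A toy Python sketch of the rule, not tied to any particular controller:

```python
def mfm_encode(bits, prev=0):
    """MFM: each data bit becomes a (clock, data) pair on the platter.
    The clock bit is 1 only when both neighbouring data bits are 0."""
    out = []
    for b in bits:
        clock = 1 if (prev == 0 and b == 0) else 0
        out += [clock, b]
        prev = b
    return out

print(mfm_encode([1, 0, 0, 1]))  # -> [0, 1, 0, 0, 1, 0, 0, 1]
```

So four data bits occupy eight channel bits on the medium - the "stored" capacity and the raw magnetic capacity are different things.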

Let's say you have a 3.5" drive chassis with an integrated RAID controller and N nano-core chips on it, with data striped between the chips the way RAID-5 specifies. It's still going to be RAID-5 regardless of whether the individual chips are socketed or soldered -- in this usage, RAID-5 refers to the striping algorithm. Regardless of whether the individual drives are (hot-)swappable, it's still a Redundant Array of Inexpensive Disks, by definition. Using the RAID-5 technique of striped, distributed parity instead of simple parity (like in ECC RAM) gives you performance advantages as well as data redundancy, hence the need to distinguish between the different methods.
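To make the parity idea concrete, here's a minimal RAID-5-style XOR sketch in Python (a toy with made-up two-byte blocks, not real controller code; real RAID-5 also rotates which device holds the parity block):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together -- the parity operation RAID-5 uses."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three data blocks striped across three chips, parity stored on a fourth.
data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
parity = xor_blocks(data)

# Chip 1 dies: rebuild its block from the surviving chips plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

Any single failed chip can be reconstructed this way, which is why one parity block per stripe is enough.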

If the per-chip cost is low enough, it would probably be more cost effective to surface-mount the components and chuck the whole unit when one went bad than it would be to have a socketed board and replace an individual chip. For a consumer product, this approach would probably work well -- when one chip died, the whole unit would keep working, giving the user time to buy a new unit and back up his data onto the new one. Considering that the MTBF on solid-state electronics is pretty high, by the time one chip died the whole unit would be effectively obsolete anyway. For a server, you'd want it to be hot-swappable; but for a consumer product a sealed unit would be fine.

Microsoft announced that it was now able to ship beta copies of Windows 2002 on a single, 8" hard drive unit. Under further questioning, the spokesperson admitted that they had attempted to use the latest AI compression algorithms to remove redundancy, but that the program had simply wiped the disks clean. The spokesperson later said that, as yet, it had not been determined if this had been a bug in the AI.

It's been 6-10 months since the adoption of the 'binary' prefixes: kibi, mebi, gibi, tebi.

I really thought the whining would stop, but instead both users and the industry have chosen to ignore the new prefixes (summarized below, to emphasize the triviality of the quibbling). I did not expect everyone to start doing instant conversions, but I did expect them to start using the units as a ballpark indicator of which sort of 'mega' they meant.

True, the difference between terabit and tebibit is only 10%, but if you're going to whine about that 10% (or the 5% megabit gap), presumably you should be using the new standards.
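Those percentages are easy to verify (a quick Python check; figures are rounded):

```python
# How much larger is each binary prefix than its decimal cousin?
for name, power in [("kibi/kilo", 1), ("mebi/mega", 2),
                    ("gibi/giga", 3), ("tebi/tera", 4)]:
    gap = 100 * (1024**power / 1000**power - 1)
    print(f"{name}: {gap:.1f}% larger")
# tebi/tera comes out ~10.0% larger, mebi/mega ~4.9%
```

The gap grows with each step up, which is why the quibbling gets louder as drives get bigger.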

According to the article, this is very fast, nano-sized core memory. While the bit density is very impressive, that is not the most important characteristic. What is really important is the fact that it's non-volatile and has no moving parts. With storage technology like this, you could easily have things like:

a full RAID-5 array inside a standard 3.5" (or 2.5"!) drive housing

a Palmtop with 20 GB of storage

a portable MP3 player that could hold your entire CD collection

a digital camera that can hold thousands of images

The possibilities for mobile & embedded applications are staggering!

More importantly, because there are no moving parts, you'd have incredible levels of reliability and very low latency. This is sorely needed -- while hard drive capacity has been advancing rapidly, hard drive speed has made only modest improvements. In many applications (databases, for example) the biggest performance bottleneck is physical I/O. Even with the fastest hard drives, you still have latency measured in milliseconds (10^-3), because you have to wait for the platter to spin around to the sector you need (on a 10k RPM drive, that's an average of 3 ms of rotational delay alone, before seek time). Conventional RAM, with nanosecond (10^-9) level latency, is 6 orders of magnitude faster -- that's roughly 1 million times faster, for the math-challenged. Getting rid of this disparity has enormous significance for I/O-intensive computing.
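The rotational-latency arithmetic is easy to check (a quick Python sketch; the 50 ns figure is the MRAM read latency quoted from the white paper earlier, and seek time is ignored):

```python
# Average rotational latency of a 10,000 RPM drive: half a revolution.
rpm = 10_000
ms_per_revolution = 60_000 / rpm            # 6 ms per full turn
avg_rotational_ms = ms_per_revolution / 2   # 3 ms on average (seek time extra)
print(avg_rotational_ms)                    # -> 3.0

# Against the 50 ns MRAM read latency quoted above:
ratio = (avg_rotational_ms * 1_000_000) / 50
print(ratio)                                # -> 60000.0
```

Even before counting seek time, the disk is tens of thousands of times slower per access.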

The real implication here isn't having a multi-terabyte hard drive on your desktop, but having hard drives that actually keep up with the rest of the computer.