Posted by Soulskill on Friday September 03, 2010 @09:05AM
from the can't-wait-for-the-next-round-of-patent-battles dept.

neo12 writes with news that Hewlett-Packard is teaming with Hynix Semiconductor, the world's second-largest producer of memory chips, to mass produce memristors for the first time. Quoting the BBC:
"HP says the first memristors should be widely available in about three years. The devices started as a theoretical prediction in 1971 but HP's demonstration and publication of a real working device has put them on a possible roadmap to replace memory chips or even hard drives. ... Steve Furber, professor of computer engineering at the University of Manchester, explained that the potential benefits lie in the fact that memristors are 'much simpler in principle than transistors. Because they are formed as a film between two wires, they don't have to be implanted into the silicon surface — as do transistors, which form the storage locations in Flash — so they could be built in layers in 3D,' he told BBC News. 'Of course, the devil is in the detail, and I don't think the manufacturing challenges have been fully exposed yet.'"

Up until recently the memristor was a theoretical fourth fundamental circuit element (like the resistor, capacitor, and inductor, except that those are easy to create in the real world). A few years ago they were actually created, and there are a lot of interesting things you can do with the technology.

Nature has had millions of years to come up with a 3D storage solution and it came up with neurons. Neurons have variable activation thresholds; the dendrites can adjust to require a variable amount of chemical stimulation from axons before they fire. Our memory system is built on resistance-based circuits.

Nature tries to find the most efficient path. That means achieving the most storage/computation in the least space/energy. I think memristors are big, because we're following in the path of Nature.

Perhaps for particularly weird values of "efficient". If you look carefully at 'Nature' you see kludges upon kludges upon kludges. That's why the molecular biology of organisms has been so hard to decipher. Natural selection has had millions of years to twiddle with things; if something works just a bit better (in terms of organism survival or reproduction) then it can be selected for. It may be a complex, error-filled process that slows ten other pro

Think flash drives with the access times of DRAM. Think of instant-hibernate computers: instead of having to write out to the HD, the memory could just remember where it was. Another application they talk about is crossbar switches. Currently each node in a crossbar has some sort of 'memory' associated with it, so that the crossbar can 'learn' which routes are good and which are bad. This would allow crossbar switches to be made much smaller and use less power. Crossbar switches are often used in NxM-sized computers and in large communication networks.

They have known about them for ages (since 1971). However, they have only recently figured out how to actually make them at micron sizes.

Now, granted, we have not seen what they can do, what speeds we are talking about, etc. If they can make them, however (at current gate sizes, and in volume), the NAND flash drives we use today will probably quickly become a niche product.

Typical application would be somewhere where you want to retain some sort of 'memory' of what is going on but do not want a processor involved. It has also been theorized you could use them for storage of n bits per resistor. So instead of 1 bit per location you could have 4 or even 16 bits. They are also nice in that supposedly you do not need to refresh them as often as we do now, so they could also save power.
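The multi-bit claim above is easy to sanity-check: a cell with 2^n distinguishable resistance levels stores n bits, so 4 bits per cell needs 16 levels and 16 bits would need 65,536 distinguishable levels. A quick sketch (purely illustrative):

```python
import math

def bits_per_cell(levels: int) -> int:
    """Bits one multi-level cell can encode, given the number of
    distinguishable resistance levels it supports."""
    return int(math.log2(levels))

# 2 levels is the conventional binary cell; 65,536 levels would be
# needed for 16 bits per cell, which seems wildly optimistic.
for levels in (2, 4, 16, 65536):
    print(f"{levels:6d} levels -> {bits_per_cell(levels):2d} bits per cell")
```

Note that the level count grows exponentially with the bit count, which is why "16 bits per location" is a much taller order than "4 bits".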

They have known about them for ages (since 1971). However, they have only recently figured out how to actually make them at micron sizes.

Well, no, not quite. The effect was postulated decades ago, but it was purely theoretical at the time (well, okay, it has been emulated using complicated circuitry). Furthermore, it's not that scientists "figured out how to actually make them at micron sizes"... it's that the effect only comes to the fore at micron sizes, which is why it hadn't been discovered sooner.

No, the discovery is the ability to build a very simple implementation of this theorized circuit element, and it's a mighty cool discovery indeed (someone linked to an IEEE article on memristors... check it out, it's a great read, and does a very good job of explaining the theory and mechanism behind the operation of this particular implementation).

Memristor based crossbar switches will be extremely useful for two uses:

Shuffling data between VMs in a secure manner on a host such as an IBM 795 or a zSeries that has a large number of VMs in use for different tasks. This way a bunch of VMs that talk amongst themselves frequently (a DB server to an app server) will end up being able to do high I/O without that slamming the CPU.

Another use is tiered memory, where one has a machine with fast RAM and slow RAM, with the slow RAM still orders of magnitude faster than going to SSD or magnetic platters. If memristors can be printed on a large scale, perhaps we will see machines with 16-32 GB of DRAM, then 256-512 GB of memristive RAM that is used both as swap space and as a persistent cache for the OS to boot from an image, never touching the storage media until the OS is fully loaded and the user wants to load documents, or the OS is doing a backup.

Since this stuff can be layered in three dimensions, HP is talking about petabyte/cm^3 storage densities at roughly a tenth the speed of current DRAM. If this takes off, we won't have other storage media.

RAM access is in nanoseconds (10^-9 seconds). HDD seek times are in milliseconds (10^-3 seconds). Registers are about 10 times as fast as RAM.
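The gap those numbers describe spans several orders of magnitude; a quick back-of-the-envelope comparison (illustrative round figures, not benchmarks of any particular hardware):

```python
# Rough order-of-magnitude latencies, chosen to match the parent's
# figures: register ~10x faster than DRAM, HDD seeks in milliseconds.
LATENCY_S = {
    "register": 10e-9,    # ~10 ns
    "dram":     100e-9,   # ~100 ns
    "hdd_seek": 10e-3,    # ~10 ms
}

ratio = LATENCY_S["hdd_seek"] / LATENCY_S["dram"]
print(f"One HDD seek costs roughly {ratio:,.0f} DRAM accesses")
```

With these assumed figures a single disk seek costs on the order of a hundred thousand DRAM accesses, which is why keeping the disk out of the core computing loop matters so much.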

So, computers will have a massive speedup the less we use hard disks as part of the core computing cycles. Primary and "secondary" RAM for temporary and permanent storage, then have the HDD that runs with non-blocking DMA I/O in the background as a backup. This way, the hard disk's glacial speeds (relative to RAM access) are not slowing the machine d

IIRC, Freescale has had (small) MRAM chips on the market for a while; memristors are just more of the same thing. I was quite surprised not to see hard drive manufacturers jump on them, at least for enterprise drives and/or RAID controllers - imagine having your HDD cache as fast as DRAM, but not needing a battery backup to retain the data in the event of sudden power loss; write-back caches could become a whole lot more widespread, for all types of storage.

I'm aware of that, but don't they both fulfil the same sorts of roles? Fast, non-volatile storage...? I'm just surprised no-one seems to have used it for things like storage caches yet, whatever the technology behind it.

"It has also been theorized you could use them for storage of n bits per resistor. So instead of 1 bit per location you could have 4 or even 16 bits."

At anything like our current level of technology, this is unrealistic. While it sounds like a good idea on its face, there are several problems with this.

First of all, it's a resistor, not a binary gate. If it's just being used as a two-state (binary) device, then interface to other digital circuits is trivial. But in order to get several bits of data out of it, you have to actually measure its resistance (by means of voltage or current). I.e., let's say that it can have 4 resistance values, representing
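The readout problem described above can be sketched as follows. All values here are hypothetical: a cell designed with four nominal resistance levels, read by applying a known voltage and measuring the current, then snapping the computed resistance to the nearest level:

```python
def decode_level(v_applied: float, i_measured: float, nominal_res: list) -> int:
    """Map a measured resistance to the nearest nominal level index.
    nominal_res is the sorted list of the cell's designed resistances."""
    r = v_applied / i_measured
    # choose the level whose nominal resistance is closest to the measurement
    return min(range(len(nominal_res)), key=lambda k: abs(nominal_res[k] - r))

# Hypothetical 4-level (2-bit) cell: 1k, 3k, 9k, 27k ohms.
levels = [1e3, 3e3, 9e3, 27e3]

# 0.5 V applied, 160 uA measured -> R = 3125 ohms -> nearest level is 1
print(decode_level(0.5, 160e-6, levels))  # -> 1
```

The catch the parent alludes to: as the level count grows, the gaps between levels shrink, and measurement noise plus device drift eat into the margin, so a simple nearest-level decode stops being reliable.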

When I first read the news a couple years ago about the first produced memristors, I was thrilled... and a little frightened. I did not expect actual AI to happen in my lifetime. The description of memristors is very well aligned with the functional description of a neuron. Given how we pack zillions of transistors into cheap, commodity hardware, this discovery will lead to a whole different level and type of computing... and even life,... itself! I sound like a hyperbole-prone Dr. Frankenfurter on cra

The description of memristors is very well aligned with the functional description of a neuron. Given how we pack zillions of transistors into cheap, commodity hardware, this discovery will lead to a whole different level and type of computing... and even life,... itself!

It's a step closer, sure. But you're not going to see a HAL-9000 anytime soon.

A pond full of algae is a gigantic pile of single celled life, but it won't quote Shakespeare. It's not the individual neurons that are the problem. It's

I mean, I know it can store data by means of variable resistance. But how do you read and write? Specific voltages, currents, frequencies? If I understand correctly, it has only two terminals like a resistor. You just apply some variable voltage and measure the current. So how can one differentiate between a write and a read?

The resistance of the memristor can be viewed as a function of the sum of the current that has passed through it (including the effects of polarity). A write would be performed by sending a larger current through the memristor in the right direction to increase or decrease the resistance; a read would be performed at lower currents that would not change the overall state of the memristor.
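A toy model of that charge-controlled behavior, loosely inspired by the linear dopant-drift description in HP's published work, might look like the following. All parameter values here are made up for illustration, not taken from any real device:

```python
class Memristor:
    """Toy memristor: resistance varies linearly with the net charge
    that has passed through it. Parameter values are illustrative."""
    def __init__(self, r_on=100.0, r_off=16_000.0, q_max=1e-4):
        self.r_on, self.r_off, self.q_max = r_on, r_off, q_max
        self.x = 0.0  # internal state in [0, 1]; 1 = fully "on"

    def resistance(self) -> float:
        return self.r_off + (self.r_on - self.r_off) * self.x

    def apply_current(self, amps: float, seconds: float) -> None:
        """Pass a current pulse; the net charge (amps * seconds,
        signed by polarity) shifts the internal state."""
        self.x = min(1.0, max(0.0, self.x + amps * seconds / self.q_max))

m = Memristor()
m.apply_current(1e-3, 0.05)    # large write pulse: +50 uC moves x to 0.5
print(m.resistance())          # resistance has dropped roughly halfway
m.apply_current(1e-6, 0.001)   # tiny read pulse: +1 nC, state barely moves
print(m.resistance())          # essentially unchanged: reads are cheap
```

This captures the distinction the parent draws: a write is a pulse large enough to shift the accumulated charge appreciably, while a read uses a current small enough that the state change is negligible.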

As I understand from watching the "Memristor and Memristive Systems Symposium" on YouTube, low voltage/current (a read) won't change the memory. Only applying a large voltage changes the state of the memristor. No need to refresh.

Presumably they'd have some capacitors across the power supply for the memristor to stabilize the power a bit. And I'm sure they don't use raw AC - there's probably a low-voltage AC-to-DC converter in there.

This is HP we're talking about; of course you won't be using raw AC power. These will obviously only work when used in conjunction with genuine HP power supplies; HP can't guarantee that electrical current from other sources won't damage your memristor, so a special chip in the memristor package will ensure that only genuine HP electrical current is used.

You do not need to use multi-period high-frequency readout, just a single period (or read it twice with current in both directions) so that the total charge (the integral of current over time) is zero. Once you know a value, you can "condition" it from time to time so that it does not degrade over many readouts. Another possibility is to condition it instantly during readout: you read the value in one direction and, according to the value, you either apply a slightly higher opposite charge or you do not apply it at all.

I actually happened to read this article [ieee.org] on IEEE Spectrum about new RAM technologies, and it covers both Phase-Change RAM (PC-RAM), which may have hit a roadblock in its development, and Resistance RAM (RRAM), of which the memristor is a particular kind.

I finally understand the Wikipedia page on a memristor. Normal resistors don't have any state (memory), whereas memristors do. How their resistance is affected by things done to them isn't specified; that depends on the particular device.

If you put a high voltage through it, some conductive atoms move a little bit away from each other. That makes it into an insulator instead. You can then reverse the voltage and move the atoms back into place. It's effectively a mechanical on/off switch, but at such small scales that it goes blazingly fast.

You choose which software to run. If you run software that you think is bloated, and everyone else thinks is swell, then you have only violated your own preferences. There is nothing wrong with the software, it's all in your head. I like my 2-gigabyte operating system much better than the 2-megabyte operating system I used in 1993, and I imagine I would like a 2-terabyte operating system very much. I bet it would do all sorts of awesome stuff, and have the kinds of visual effects that I think enhance the graphical experience. If you disagree, that's fine. Version 1.1 of Linux will still run as well as the day it was released, and you can use it.

"I like my 2-gigabyte operating system much better than the 2-megabyte operating system I used in 1993"

But do you like it 1048x as well? If the difference in size were multimedia I might be inclined to let it go but it isn't. Also your numbers are off, the current version of most popular OS is about triple that.

Considering that a 1.35GB hard drive cost $1800 in 1993 [jcmit.com], and a 1.5TB (over 1048x the storage) is only $90 right now, and as 2MB is ~.15% of the 1.35GB disk, and 2GB is ~.13% of the 1.5TB disk, I'd say he could like it less and it'd still be an improvement.

Why would I need to like it a thousand times more? (Also, it's 1024, not 1048.) If I like it better, that's what counts. If you want to amortize by something, then that something would be dollars, not bytes. Since computers cost about the same today as they did a generation ago, yet they have so much more power in them, I need only enjoy the power, not enjoy it in proportion to which the power has grown.

(And I just want to say that I meant memory footprint, not hard drive footprint; and I meant OS plus a ty

Good catch. I kept wanting to put 2048 and mentally correcting to 1024, and ended up with 1048. Doh!

"If you want to amortize by something, then that something would be dollars"

It would be both bytes and dollars. I want all the hardware AND price improvement. I'm not willing to share it with software firms that will charge me the same price for bloated software developed using more rapid (read: lazy) techniques.

I'm all for bloat that translates directly into actual benefits or the

Hmmm, well I can see your point, but programmers being far far more expensive than computers, I understand why we would throw more hardware at a problem rather than more programmers. I say that as a (lazy) programmer. Another thing is that all those rapid-dev tools have brought us a world with far, far fewer bugs, especially crash bugs like segfaults. Remember the 90s when software crashed all the time? That sucked! As a programmer, I know it's easy to screw up pointer arithmetic, and I really love sandboxe

The version 1.1 of Linux is fine, but it doesn't have the latest security patches, and a malformed ping packet will cause it to give up the ghost.

A 2 TB operating system won't be a nice thing to upgrade, but it will be something forced in order to keep up with the latest security issues. Security never stands still, so one has to either keep on the upgrade treadmill to stay protected, add third party software to make up for the outdated OS protections, or put the machines behind an air gap.

Me, I like pretty icons with fancy animated windows and menus. I like browsers that show flash videos and all kinds of bells and whistles. I love bloat, if that's the kind of thing you mean.

Luckily we both have options. Neither of us is stuck in a position where we want to run software that doesn't exist. The only problem is when someone runs a system that they hate, then complain about it. If you are specifically compla

I'm not against bloat that has some use, which includes the user experience. But I should be able to get that functionality without all the bloat, which some linux versions allow. I also shouldn't have to put up with crappy bloat and cruft because someone decided to link DRM (which shouldn't even be there) with network speed in such a way that viewing any media causes my effective network connection to drop to 1% of theoretical.

Storage space has outpaced software development for some time. The only thing that has a hope of keeping up with it in the consumer space is high definition 3D video. Video games consume a lot of media space too, but producing the content takes a lot more effort.

The mere storage of software? People already just deploy an individual copy of the libraries for each application.

Once we start seeing expanded RAM sizes, I'm sure we will see OO-based development tools more than ready to fill up that RAM with object frameworks vastly larger than now. End result: still needing to buy and upgrade RAM, applications performing essentially the same, except taking up a lot more space.

Where I'd like to see memristor technology used is in dumbphones and embedded devices such as cash registers and POS (point of sale, not the other POS) units. Places where having the ability to boot near in

Aren't processors already layered, and don't we use multiple platters in HDDs? As interesting as a "memory cube" sounds, I would expect heat dissipation and magnetic fields from the current could be a major roadblock in its production.

However, even if they could only do three layers with insulation layers inserted in between them, it'd probably still be more cost effective than current implementations as long as the memory density is similar.

None of the in-production processors are 3-D layered. Of course all physical devices are three-dimensional themselves, but the height is basically moot until designers can use 3-D layering and routing (of the "wires", or interconnections).

I believe research prototypes in universities and R&D labs have been created or are being worked on, so it is not science fiction.

From what I've read about memristors, they don't wear out like Flash does. They are also massively faster than Flash memory. Think of it as a hybrid of RAM and Flash.

In fact, from what I read I think these devices (if they live up to what people are saying about them) will be able to replace both RAM and Disks/SSD's. Instead we'd just have one set of primary memory where everything happens.

Now we just have to see if they can do what is postulated, and how much it will cost to manufacture. If cheap enough to be worth buying due to their benefits, then they will have a huge effect on computing.

I see a rather big barrier to adoption precisely for the reason you mentioned - these will replace both RAM and storage devices - which means that these, if really pushed by manufacturers, would require a complete redesign of the modern computer system. Chipsets, motherboards, etc. would all need to be redesigned with this memory in mind. Which isn't a big deal on its own, but people are still using Pentium 4s with Windows XP on them, and I suspect will be even more loath to change over to a completely new

They're probably going to be crazy expensive at first though, just as SSDs are still relatively expensive. By the sounds of other comments, we will first see these things as replacements for disk drives, then later usable as RAM, and I expect that once they're more common we could end up with whole new architectures in which they are both the RAM and storage as the GP mentions.

If they're expensive, why not replace conventional RAM first? The board architecture will change, but the CPU architecture and overall system would remain largely the same (read from block, write to block, do computation on block). It would be a similar transition as we had from SD-RAM to DDR-RAM. You'd immediately get the benefits of a system that can go into standby mode without having to keep feeding power, and would get these chips into a place where there is still high value per byte (as compared to

I have the perfect design--not that I could make it, as I'm just some shlub, but it's something whose theory I've been working on for a while. In sum, "modular computing", or basically, a whole bunch of components that share a standardized, high-speed protocol for message passing across module boundaries rather than being on the same board.

With that system, you have something like a ultra-thin hypervisor that the physical computer runs on, and guest OSes that converse with virtual devices via the backbon

The basic architecture should be cheap to fabricate in bulk. It's lines of wires: a layer running in one direction, a thin film of the memristive material, then a layer of wires on top running at right angles. Every intersection point is a bit.

DRAMs involve all sorts of careful operations to create a trench or stack, fill it with a capacitor, run the lines in and out, etc. Much more complicated on a per-bit basis. Many more things can go wrong. Memristors are pretty much the simplest to implement circu
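The grid-of-wires idea above can be sketched as a toy addressing model. This is purely illustrative (a real crossbar selects a junction electrically by energizing one wire from each layer, not by array indexing), but it shows why the structure is so dense: every row/column intersection is a storage cell with no per-cell transistor:

```python
class Crossbar:
    """Toy crossbar memory: one bit lives at each intersection of a
    horizontal (word) line and a vertical (bit) line."""
    def __init__(self, rows: int, cols: int):
        self.rows, self.cols = rows, cols
        self.cells = [[0] * cols for _ in range(rows)]

    def write(self, row: int, col: int, bit: int) -> None:
        # selecting one word line and one bit line addresses one junction
        self.cells[row][col] = bit & 1

    def read(self, row: int, col: int) -> int:
        return self.cells[row][col]

xb = Crossbar(4, 4)
xb.write(2, 3, 1)
print(xb.read(2, 3))  # -> 1
```

An R-row by C-column grid gives R*C bits from only R+C wires, and since the film is deposited between wire layers, the same pattern can in principle be stacked vertically.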

"From what I've read about memristors, they don't wear out like Flash does."

Doesn't it depend on the technology used to implement the memristor? The name merely refers to how it behaves electrically, not how it's actually built. As an example, a normal resistor is a device where current flow and voltage are proportional. This can be implemented by many materials, for example carbon composition, metal film, wire wound. Each has different lifetime/stability characteristics, even though they are all resistors.

We could stack chips today except for the fact that it's impossible to cool the middle layers and the thing would almost instantly melt itself down. I wouldn't be surprised if this new technology ran into exactly the same problem.

Seriously, where will the work be done? Will HP set up the fab shop here, or in SK? Or set up multiple shops? I would love to see the DOD suggest to HP that they need to set up a shop here in the USA. We need to make certain that we have our electronics under control here.
In addition, the DOD, NSA, etc. need to offer up contracts to American companies that produce equipment here. Why? Because we are increasingly seeing embedded viruses, etc. coming in from Asia.

Any military that just "relies" on the fact that what the company sends you is actually the device you bought, rather than the one you designed, shouldn't be procuring ANYTHING from ANYONE. If you're buying stuff that was made in some foreign country, and it's for a military application, you should damn well be inspecting it before using it anyway. You should damn well be inspecting it no matter where it came from - even an ally.

And there are a million and one of these "bugs coming from Chinese-manufactur

Totally agree. I know my company has a fab, in the US, that does all of and only our military contracts, and I'm told that most other large semiconductor companies in the US do the same thing. Likewise, totally agreed about the bugs. I wrote about this more extensively elsewhere in this thread but despite having this argument with other people on slashdot -- who know what they're talking about, I admit -- we're all postulating the possibility of trojan silicon with no strong indication that it's ever hap

It used to be that we would only buy American. Then American came to mean assembled in America and it became moot. I would LOVE to see companies that do business with the federal government, and especially the defense and civil protection branches, be forced to produce the products on American soil from bottom to top. It may cost more but it would be justified by the improved security and availability. Additionally it would encourage them to produce more here to reduce costs, etc...

With you 100%. In fact, the price benefits of outsourcing overseas are so extreme that until we get a solid economy back I think it would be worthwhile to make all profits earned on goods built entirely in the US (and the income earned by workers producing them) tax free.

This makes a hell of a lot more sense than the tax breaks we give foreign oil powers and bailing out anti-consumer massive banks. Especially since it will strengthen the economy and pay for itself indirectly by increasing other tax revenues

Seriously, where will the work be done? Will HP set up the fab shop here, or in SK? Or set up multiple shops? I would love to see the DOD suggest to HP that they need to set up a shop here in the USA. We need to make certain that we have our electronics under control here.
In addition, the DOD, NSA, etc. need to offer up contracts to American companies that produce equipment here. Why? Because we are increasingly seeing embedded viruses, etc. coming in from Asia.

Last I heard, Hewlett Packard still has domestic fabs: Corvallis, Oregon, I believe, and I think they still have fab capability in Palo Alto. A lot of large semiconductor companies retain small cutting-edge fabs at their headquarters for doing small runs of experimental stuff.

My company has two of its fabs in the US and one in the UK. We do all our packaging and testing in southeast Asia, but the company apparently made a decision about three years ago that we're not going to put fabs there, because we closed down one we'd built less than two years before.

And, as I've said many times before, with the possible exception of processors, it's really difficult to sabotage chip design. Your profit margin is directly related to the surface area your chip layout occupies, so it is aggressively minimized in design, and there simply isn't room on the silicon to splice in new stuff. Added to that, chip companies that I've worked with usually do a pilot run of many prototype silicon designs through one fab, often their domestic/in-house fab, do their initial testing on that, and only after that do they put it into production with a full-size mask on dedicated silicon in the production fab, so if you wanted to sneak stuff in you'd have to infiltrate both fabs, or you'd end up with silicon that's visually different -- and we spend a lot of time with high-power microscopes and microprobes poking around at new silicon, sometimes chipping bits out with a laser if we need to do a metal layer change, so it's not like someone wouldn't notice changes on smaller chips. And even if all of THAT didn't catch changes, test and product engineers spend months writing automated test programs that check each pin on each chip and characterize its leakage current, its current draw when functioning, its ESD resistance, all sorts of things, and added circuitry will change those values.

If a company doesn't have a fab, and they just send all their completed masks (or, even worse, just the designs) off to one company, then I think it's possible, albeit difficult, for stuff to sneak into the silicon. But a company that has a fab, or runs their designs through multiple fabs, is pretty unlikely to get compromised silicon without noticing it. It would be significantly easier for a malevolent group to just design their own silicon from the ground up, and package it to look like the target chip and get it into distribution channels by selling it as authentic stuff, than to try to compromise a company's silicon.

I can buy big bags of transistors direct from Hong Kong on eBay for $5 for my hobby electronics use. Will that ever be the case with the memristor, or will these never be made in component size and instead be restricted to larger-capacity chips that cost tens of dollars or more?

I don't see why you couldn't put it into a larger package. But I am unsure if there would be any demand for such a thing, since it sounds like you can already make a component that behaves like a memristor. The advantage of these things is the size.

Pretty much everything ends up in single-function packages eventually. This type of memory especially, as it would be nonvolatile. Nonvolatile memory that won't wear out is a dream come true for any developer of embedded systems.

While it would be possible to build a large device with thousands or millions of memristor junctions ganged to scale the effect up to macro scale, it would largely be a waste to do so. This innovation is more useful when millions of memristors are used in a more elaborate way as storage, and/or logic.

"you're no doubt using them for switching or amplification, which is what they're good for."

Right, in other words using them as transistors.

"Unless you want to store a single bit (or a handful of bits), discrete memristors are useless anyway."

You say that as if nobody would ever want to do such a thing. There are times when lots and lots of tiny localized memory with parallel access would be useful. In fact there are components for this purpose now.

..I'd be a lot more excited about this. But as it's HP, they'll probably kill the adoption of this tech with their subpar quality control. Thanks a lot, HP, but the best thing you can do is get your hands off this and hand it to someone who takes pride in the quality of their products.
"The views expressed here are mine and do not reflect the official opinion of my employer or the organization through which the Internet was accessed."

Depends which HP we are talking about. I'm hoping the HP of today is like the HP of the 80s and 90s, with kick-butt research, calculators that could be used as bludgeons against the zombie hordes while still being able to calculate the critical numbers in building a bridge, PA-RISC workstations which were great performers, and the maker of unkillable printers (I know people who buy up LJ 3s and 4s that are 10+ years old, put them in service, and they are still going strong.)

I completely agree. I work in computational neuroscience, and the memristor was basically the last thing left that brains can do that can't be implemented in silicon. Neuromorphic analog VLSI circuits are going to benefit from this a lot. However, there are still a number of issues that might not be trivial to implement, such as competition between different synapses in the same neuron, which is mathematically necessary to prevent instabilities from occurring. I think the main point is that solving t