Slide aside, silicon!

Posted on 13 January 2000

Say you have a nettlesome computational problem: designing a protein to gum up a disease, breaking a code, or forecasting the weather a month ahead. These modern-day necessities could force you to wade through gobs of numbers, countless variables and an unending range of possibilities.

Photo: Jeff Miller, Office of News and Public Affairs, University of Wisconsin-Madison.

DNA attached to this gold-coated slide can work like a primitive computer. Maybe not today, maybe not tomorrow, but sometime, and soon, it’ll do the thinking for both of us!

You could always crank up a computer. The silicon-based machines have done pretty well for themselves, judging by the ever-faster, ever-cheaper models oozing out of Silicon Valley. Credit the progress to miniaturization: smaller wires and transistors pack a huge wallop in the computing biz.

Hey, we’re no experts: When the silicon is all shrunk, you may be able to solve your problem. But if tomorrow’s machines come up short, wetware may be your answer.

As we talk about wetware (biological computing), don't assume the next generation of thinking machines will rely on neurons, those brain cells, especially not the underpowered neurons found in the Why Files Skunk Works…

The double helix fix

Instead, tomorrow’s supercomputers could rely on DNA — the chemical that genes use to remember the intricate patterns of life. In living things, DNA stores info. But in primitive DNA computers, it performs massively parallel processing, the same strategy that powers some of the world’s baddest supercomputers.

Still, if faster, wetter computers pan out, they could be used to solve unimaginably difficult problems. Imagine: A truly superhuman supercomputer might even find your car keys or explain why children cause parents to mimic their parents.

When computer scientist Leonard Adleman jump-started the field of DNA computing in 1994, he offered some astonishing numbers to explain its potential. The processor would operate in massively parallel fashion: doing trillions of calculations at once, as bits of DNA linked up to find any possible solution. And while we’re talking processing, not storage, he did note that one gram of DNA can store as much information as one trillion CDs!

The first DNA computers have used DNA in solution. That’s not surprising: DNA itself was first synthesized in a test tube. But DNA synthesis and computers are moving toward dry land — toward glass plates. The advantage, says Lloyd Smith, a professor of chemistry at the University of Wisconsin-Madison, is greater control and easier processing: “It allows you to purify molecules between stages.” Smith, a DNA expert, leads a group of researchers that just reported a new surface-based technology in the journal Nature.

DNA computing relies on subtraction. Here’s a write-up of the process, simplified so even The Why Files can understand it:

Encode all parts of the problem in strands of DNA

Allow the strands to link up to make possible answers (the parallel processing stage)

Remove strands representing bogus answers

Multiply remaining (correct) strands so you can figure out which answer they represent.
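In software terms, that generate-then-subtract recipe is a brute-force filter. Here’s a minimal Python sketch using a made-up three-variable problem of our own (not the one Smith’s team actually encoded): every candidate answer is created at once, then the bogus ones are “washed” away, one constraint at a time.

```python
from itertools import product

# Toy problem of our own invention: find truth values (a, b, c)
# satisfying (a or b) and (not a or c). In a DNA computer, each
# tuple below would be one strand of DNA.

def dna_style_solve():
    # Steps 1-2: "synthesize" every possible answer at once
    # (the massively parallel stage).
    candidates = set(product([False, True], repeat=3))

    # Step 3: successive "washes" remove strands that violate a constraint.
    candidates = {s for s in candidates if s[0] or s[1]}        # wash 1: a or b
    candidates = {s for s in candidates if (not s[0]) or s[2]}  # wash 2: not a or c

    # Step 4: "amplify" and read out whatever survived.
    return sorted(candidates)

print(dna_style_solve())
```

The trick, of course, is that the DNA version does the “generate every candidate” step chemically, in one pot, rather than looping through candidates one by one.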

All that removal involves repeated washings with various chemicals, and that’s best done on the kind of modified glass surface that Smith’s team developed.

Gold-plated science

After coating a glass plate with a thin layer of gold, Smith’s group linked short strands of DNA to sulfur atoms. Sulfur bonds to gold, so dunking the plate in the DNA solution resulted in a “self-assembling monolayer.” Smith says this thin layer of buzz-cut DNA on gold and glass “changes an inorganic surface to an organic surface.”

The group then coded the parts of the problem into new strands of DNA, and allowed the DNA to join together into strands representing possible solutions. Then they protected the correct answers from the chemical attacks they later used to wash away the bogus answers.

In the recent test, the biological computer solved a “satisfiability” problem. The word was Greek to us, too, but here’s how Smith explains this type of problem: Say you were selecting among five actors to play Romeo and Juliet in a radio play. Three could play Romeo, and three could play Juliet (if you’re counting, you’ve noticed that one actor has a flexible voice).

If the actors appeared at random, each of the five either showing up or not (that’s 2⁵, or 32, possible turnouts), what are the chances you’d be able to fill both roles? What, in other words, would be the odds of satisfying the problem criteria? We might have been able to figure out the 25-out-of-32 odds without scrambling DNA like a bunch of eggs on a winter morning.
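If you’d rather not count on your fingers, a few lines of Python can check the 25-out-of-32 figure by brute force. The actor names are our own labels, and we’re assuming that in a radio play the one flexible voice could cover both parts single-handedly:

```python
from itertools import product

# Five actors: two Romeo-only voices, two Juliet-only voices,
# and one flexible voice ("flex"). Our assumption: in a radio
# play, flex alone could voice both parts.
romeos  = {"r1", "r2", "flex"}
juliets = {"j1", "j2", "flex"}
actors  = sorted(romeos | juliets)   # five names in all

def can_cast(present):
    """True if the actors who showed up can fill both roles."""
    for r in present & romeos:
        for j in present & juliets:
            if r != j or r == "flex":   # distinct actors, or flex doubling up
                return True
    return False

# Each actor either appears or doesn't: 2**5 = 32 possible turnouts.
turnouts = [set(a for a, here in zip(actors, flags) if here)
            for flags in product([False, True], repeat=5)]
hits = sum(can_cast(t) for t in turnouts)
print(f"{hits} of {len(turnouts)} turnouts can be cast")
```

Sure enough, 25 of the 32 turnouts satisfy the casting criteria.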

But what if we needed to cast, say, the hundreds of characters in “War and Peace”? Eventually, Smith says, DNA computers might be able to tackle this thorny problem faster than you can say Leo Tolstoy.

Why? For one thing, trillions of strands of DNA can process in parallel. For another, the number of “queries” (processing cycles needed to weed out incorrect answers) goes up in a linear manner, while the number of possibilities you can evaluate rises geometrically. In other words, with 2 queries, you could assess 4 (2²) possibilities, while with 5 queries, you could assess 32 (2⁵) possibilities.
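That linear-queries-versus-geometric-possibilities trade-off can be sketched in a few lines of Python. The halving model below is our own simplification: treat each query as a yes/no question that can cut the surviving candidates in half, so k queries cover 2ᵏ possibilities.

```python
# Our simplified model: each query (one wash step, say) can split the
# surviving strands on a yes/no question, so k queries are enough to
# sort through 2**k possibilities.

def queries_needed(num_possibilities):
    """Number of halving queries needed to isolate one answer."""
    queries = 0
    while 2 ** queries < num_possibilities:
        queries += 1
    return queries

print(queries_needed(4))        # 2 queries cover 4 possibilities
print(queries_needed(32))       # 5 queries cover 32 possibilities
print(queries_needed(2 ** 48))  # 48 queries for a hoped-for 48-bit machine
```

The query count crawls up by ones while the search space doubles at every step, which is exactly why massive parallelism pays off.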

The fine print

Before you trash your silicon-based computer and start trying to process words with DNA, remember that it’ll be a while before the wet computers show up in showrooms. The technology is decades behind silicon, and wrong answers aren’t always eliminated completely. Furthermore, DNA computers might need some strange accessories, like devices to create strands of DNA, and polymerase chain reaction machines to read the results.

Finally, by the time these gizmos are perfected, silicon-based computers will be faster, better and cheaper than they are today.

But for solving really difficult problems, like breaking codes, analyzing proteins and selecting a cast for a radio play, DNA may eventually compute.

Indeed, Smith’s group is trying to scale things up a bit, aiming to replace the existing four-bit computer (which can handle 16 data points at once) with a 48-bit processor, which should be about one trillion times more capable. “That’s not enough computer power to turn the world upside down,” Smith admits, “but it would be a big step forward for non-conventional computers.”