Anyone who has seen my home knows that I have a bit of a… hmm… obsession with orchids. To control my orchid intake, I meter my purchases, buying a new plant only after I have either mastered or killed my latest one. This has resulted in nearly 20 pots of these gorgeous plants, with orchid “spa treatment days” (i.e., deep watering days) lasting hours at a time. So you can probably imagine the enthusiasm with which I embraced a recent article in the journal Science, in which Harvard researcher Wim L. Noorduin and colleagues proclaim that they have discovered how to create predictable, complex nano- and microstructures via biomineralization techniques, and then demonstrate this by creating micro-flowers!

Self-assembly of complex forms occurs regularly in nature as a result of dynamic interactions with the surrounding environment. Many of these structures have a stunning beauty to them, snowflakes being one of the best-known examples of this phenomenon. The claim that no two snowflakes are alike rests on the understanding that the basic shape of a snowflake is set by a combination of atmospheric temperature and humidity. Its individuality, however, is “crafted” as it falls through the atmosphere, tracing a unique path and being exposed to different air patterns, its shape shifting with each tumble, spin, pitch, glide, and twirl.

“Diatom Circle” by Graham P. Matthews (gpmatthews.nildram.co.uk)

Another, perhaps lesser-known example is the shells of diatoms, a type of algae encased in a shell of silica. These silica shells often exhibit a remarkable level of complexity and are extremely porous, permitting gas and nutrient exchange with the surrounding environment.

What makes them of particular interest to physicists and materials scientists is their ability to produce these intricate shells through self-assembly, repeatedly and with such accuracy that the identification and classification of these creatures can be guided by the patterns and placement of their pores.

Until recently, the study of how these patterns emerge has focused on the chemical compositions and material properties of complex microsystems, or on defining the initial conditions required to generate these forms. While it is well accepted that complex structures are usually created through dynamic interactions with the surrounding environment, our understanding of how such structures form through self-assembly had never reached the point where scientists could predict them.

So Noorduin and co-authors set off, undaunted, to discover how dynamic environmental interactions can be used to generate patterned precipitation in synthetic systems, creating self-assembled complex structures… repeatedly and predictably.

Micro-flowers grown in a beaker! (Image by W.L. Noorduin)

It turns out that the secret to growing complex structures is to subject the solution to a dynamically changing environment, one that responds to the growing shape itself. The images to the left are scanning electron micrographs that have been artificially colored, but aren’t they beautiful?

It’s important to realize that while Noorduin and colleagues did produce these flowers as a beautiful demonstration of their technique, all of this isn’t purely for fun and games. Rather, if we can harness the basic principles guiding self-assembly, especially biomineralization, it will change the face of how nano- and micromaterials are made.

As of now, our go-to method for creating nanomaterials is lithography, through which 3D structures are laboriously etched. Not only is this slow, it is also very, very expensive. Using the methods this group is developing, it may soon become possible to mass-manufacture complex materials for drug delivery, catalysts for chemical production, micro-circuitry, and more, all by mechanisms of self-assembly.

As crazy and sci-fi as it might seem to think of building a computer out of cellular components, it was even more shocking to me to learn that the term “synthetic biology” has a history going all the way back to 1910. In fact, in 1974, Polish geneticist Wacław Szybalski used the term to describe the idea of applying molecular biology synthetically to “devise new control elements” in a modular fashion for creating new genomes, new organisms, etc… BRILLIANT!!! It took some time for technology — and science — to catch up with such revolutionary thinkers, and synthetic biology finally began to take off in 2000.

Really, the idea of synthetic biology (horribly over-simplified) is based on the recognition that biological organisms — no matter how simple — process their environment remarkably quickly, with a resolution and speed beyond anything we, as engineers, have been capable of achieving so far. Another way to think about this: in this modern age of BIG DATA, in which our greatest challenges are how to store, manage, and analyze a continuous onslaught of exabytes (hunh??) of data, biological organisms had all this figured out… well… millions of years ago. YouTube’s streaming video? Got it! Bose’s QuietComfort noise-canceling earphones? Remember how we all ignored nagging parents or droning lecturers… oh yes, got that. The iPhone’s accelerometers? Yup. And what’s even more inspirational, or frustrating (depending on which side of the technological line you fall on), is that biology doesn’t just do it faster, it does it better.

Here’s an example. In 2004, in response to a synthetic biology competition call by the International Genetically Engineered Machine (iGEM) Foundation, a group of students at UT Austin invented a method for making photos out of… bacteria! The basic idea is that they genetically engineered E. coli bacteria to respond to light, giving them a new biological circuit that causes them to turn black when growing in dark areas and stay clear in the light. A reasonable analogy is to think of each bacterium as a pixel on your computer or TV screen. By spreading and growing them evenly on a Petri dish as a homogeneous lawn, and then projecting lighted images onto them, the students could reproduce images using these genetically engineered bacteria.
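To make the pixel analogy concrete, here’s a toy simulation of the idea in a few lines of Python. This is just a sketch of the thresholding logic; the light level at which pigment production switches off is an invented parameter, and of course the real system is a genetic circuit, not code:

```python
import numpy as np

def bacterial_photo(light_image, threshold=0.5):
    """Toy model of the bacterial photograph: each entry of light_image
    is the projected light intensity (0 to 1) hitting one bacterium.
    Bacteria in the dark produce pigment (1.0); lit ones stay clear (0.0)."""
    return np.where(light_image < threshold, 1.0, 0.0)

# Project a simple image: a bright diagonal stripe on an otherwise dark lawn
lawn_light = np.zeros((8, 8))
np.fill_diagonal(lawn_light, 1.0)

print(bacterial_photo(lawn_light))  # the stripe prints as 0s on a field of 1s
```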

Why is this exciting? Well, for all of you owners of the latest iPads, iPhones, and MacBook Pros, your stunningly gorgeous Retina displays pack approximately 326 PPI (that stands for “pixels per inch”). In contrast, with the bacterial images developed in 2004 (yes, nine years ago!), we’re talking gigapixel-per-square-inch resolution. To make that comparable to the PPI unit, a gigapixel per square inch works out to roughly 30,000 pixels per inch, about two orders of magnitude greater than the Retina displays… I mean… WOW!
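If you’d like to check that conversion yourself, here’s the arithmetic, taking the gigapixel-per-square-inch figure at face value. Going from pixels per square inch to pixels per linear inch just means taking a square root:

```python
import math

retina_ppi = 326                    # Apple's Retina display density
bacterial_px_per_sq_in = 1e9        # the gigapixel-per-square-inch figure

# Linear pixel density is the square root of the areal density
linear_ppi = math.sqrt(bacterial_px_per_sq_in)

print(f"{linear_ppi:,.0f} PPI")                   # ~31,623 pixels per inch
print(f"{linear_ppi / retina_ppi:.0f}x Retina")   # ~97x: two orders of magnitude
```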

But, OK. So for all of you super-skeptics out there, why should we care? This is a monochromatic display that can (literally) die, when we’re into vibrant, saturated, archival colors seamlessly (sort of) integrated into our electronics… not to mention that these students had to build a seven-foot-tall projector to create these relatively simple-looking images… which don’t even move! (Yes, we can all hear the collective gasp from the MTV generation onwards.) [A side note to all you curmudgeons: I still think this is wonderfully cool.]

Analog synthetic and systems biology. (From: R. Sarpeshkar, MIT)

Let’s get back to the inspiration for this post. Last week, the journal Nature published an article by Daniel and colleagues from MIT, titled “Synthetic analog computation in living cells”. While our obsession with the digital processing universe makes this sound like a step back into the Dark Ages, an advantage of analog devices is that they can be simpler while maintaining greater bandwidth and frequency range (yes, think about this from the standpoint of music!). When a signal is converted from analog to digital, it can lose some of its range (or fidelity), depending on the sensors involved, and the conversion process takes time.

Some of you may have heard the analogy of a biological cell being somewhat like a digital computer, processing everything as a series of ones and zeros. This, however, is a gross oversimplification. While cells do respond to certain stimuli with a binary (on/off) response, in reality they use a mix of these digital-like responses and graded responses (think grays, and sorry, I’m not referring to Fifty Shades of Grey) when responding to various inputs — something the digital world of ones and zeros, black and white, cannot do. Still, those who have tried to take the cell-computer analogy all the way have been able to use DNA as components for producing digital calculators, sensors, and the like. The drawback is that these circuits require an enormous number of components to perform even the simplest computations. Each component could be a DNA strand or a protein, which makes the process slower and more difficult to reproduce.
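To see the difference between graded and digital responses numerically, here’s a minimal sketch. I’m using a Hill function, the classic textbook model of dose-dependent gene expression, as the graded response; the parameters are purely illustrative and not taken from the paper:

```python
import numpy as np

# Inducer concentration swept over four decades (arbitrary units)
x = np.logspace(-2, 2, 9)

# Graded (analog) response: a Hill function with half-max k and steepness n
k, n = 1.0, 2.0
graded = x**n / (k**n + x**n)

# Idealized digital response: all-or-nothing at a threshold
digital = (x > 1.0).astype(float)

for xi, g, d in zip(x, graded, digital):
    print(f"input {xi:8.2f}   graded {g:.2f}   digital {d:.0f}")
```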

The system that Daniel and colleagues have assembled transforms bacterial cells into living calculators that can compute logarithms, multiplication, and division, and can even perform more sophisticated functions such as acting as an in vivo pH meter… all with three or fewer genetic parts. Furthermore, because their system operates in the analog signal-processing domain, it can process graded information, characteristic of the natural environment in which we live and with which we interact. Such analog computation could permit the design of cellular sensors for pathogens or toxins. It may now also be possible to combine their analog technology with digital systems to construct a synthetic digital/analog hybrid that swaps between the two signal-processing approaches according to which can produce faster or more accurate results. And in the broader biological context, it may now be possible to observe the behavior of such a synthetic analog system and begin to gain a better understanding of how biological systems receive and process complex information, permitting the rapid responses and fine control necessary to make life a brilliant, rapid, high-resolution reality.
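For a flavor of how analog parts can do arithmetic, here’s a toy numeric sketch of division in the log domain: produce a log-scaled signal for each of two inputs, then subtract. This captures only the arithmetic of the idea; the actual paper realizes its log-linear transfer functions with genetic circuits, not code, and my transfer function below is an idealization:

```python
import math

def log_signal(conc):
    """Idealized wide-dynamic-range response: output tracks log10 of the input."""
    return math.log10(conc)

# Division becomes subtraction in the log domain
a, b = 500.0, 20.0
ratio_signal = log_signal(a) - log_signal(b)

print(f"{10 ** ratio_signal:.1f}")  # 25.0, i.e., a / b
```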

OK, so I just stole the title of this post from NSF’s first episode of a series they produced in collaboration with the U.S. Patent and Trademark Office and NBC Learn to explore innovations, and their innovators, around the US. In a fantastic collection of eleven videos, they cover everything from prosthetic exoskeletons (bionic limbs) and 3-D printing to smart materials, security, and automation. Ready to learn how these innovators came up with their inventions? Ready to be inspired? If so, check out the videos here.

Oh boy, oh boy… game on. It’s no news that Apple and Samsung are going head to head in the smartphone and tablet computer industry. Since 2011, they have been involved in at least 50 lawsuits around the world, with Apple winning some and Samsung winning others. So why, exactly, am I blogging about this now, when it’s all so… like… 2011? Better yet, why is this rivalry suddenly of such interest as to warrant a full story in the Technology section of the New York Times just this past Sunday? Apparently, Samsung now has a real chance to compete with, no, perhaps even to dominate, Apple in the not-too-distant future.

This is a particularly interesting prediction, seeing that Apple still clearly has the market cornered, raking in 72% of the profits in the mobile phone industry, with Samsung taking the remaining drippings (surprisingly, Apple and Samsung are currently the only two companies turning a profit selling mobile phones!). What makes Samsung a potentially lethal competitor is their approach to design.

In 1989, Steve Jobs was named Entrepreneur of the Decade by Inc. Magazine. Characterized as brash, boyish, and a perfectionist, Jobs proclaimed in his interview with Inc. editors George Gendron and Bo Burlingham that he designs not for what consumers want (they don’t know what they want), but for what they might think is impossible. In other words, if you ask consumers what new functionality they want in a gadget, by the time you can deliver it, they are ready for the next step and won’t be satisfied. In Jobs’ mind, the key to innovative design was to tell consumers what they want before they know it themselves. Clearly, this worked quite well for Apple.

In contrast, Samsung takes a more… traditional approach, using market research to guide trends and innovation. According to Kim Hyun-suk, an executive vice president at Samsung, the company’s modus operandi is to let the market drive product design, not to drive the market in a certain direction — interesting, considering their extraordinary success with the Galaxy product line and the reviews praising its innovative features. Another example: while Apple only recently released the iPhone 5 with a larger screen, Samsung was already selling the Galaxy Note with its 5.3″ screen. This difference in design philosophy is also reflected in the budget allocations at the two companies, with Samsung outspending Apple on R&D by nearly 3:1 ($10.5 billion to $3.4 billion).

What I found particularly notable is Samsung’s approach to design inspiration. They employ 60,000 staff members distributed across 34 research centers in multiple countries, all tasked with studying trends in each country and mining them for ideas. They look far outside electronics for inspiration, including fashion, automobiles, and interior design, and employ designers from a multitude of backgrounds, including psychology, sociology, economics and management, and engineering. Ah, curious… Much as biodesign and the application of biomimetics to idea generation benefit from multidisciplinary diversity, here again we have a melting pot of creative people from very different backgrounds, each bringing a perspective unique to their training and their lives, creating products with broad appeal and forward-thinking functionality… and all when the goal is not to drive the market… just a little food for thought.

My best friend in college gave me Tuesdays with Morrie while we were in our third year to keep me entertained as I was traveling about visiting graduate schools. I remember nearing the end of the book while sitting in the back of a 737, sobbing so uncontrollably that a flight attendant came to check on me.

This book launched me into the stark realization of the existence of ALS (amyotrophic lateral sclerosis), also known as Lou Gehrig’s disease after the famous Yankees baseball player who was diagnosed with it in 1939. The disease attacks the nervous system and ultimately leads to complete paralysis of the body while the brain remains lucid and otherwise unaffected. This condition of complete physical paralysis combined with intact cognitive ability is known as locked-in syndrome, and it affects approximately 50,000 Americans as a result of injury or of a disease such as ALS. Frank Guenther of Boston University and his team have developed a system that uses an EEG cap to read brainwaves through the scalp and move a cursor on a computer screen (see here). When the cursor moves into one of three circles, the computer produces an “uw”, “aa”, or “iy” sound. While this is still far from actual speech, it is a start toward giving locked-in patients the ability to interact with society again. A similar system can also be used to drive a robot.
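For a rough sense of how the cursor-to-sound mapping could work, here’s a minimal sketch: the decoded EEG signal drives a 2D cursor, and landing inside one of three target circles triggers the corresponding vowel. The target positions and radius below are invented for illustration and are not the actual layout of Guenther’s system:

```python
import math

# Hypothetical target layout: three circles, one per vowel sound
VOWEL_TARGETS = {"uw": (-0.5, 0.0), "aa": (0.5, 0.0), "iy": (0.0, 0.6)}
RADIUS = 0.2

def vowel_at(cursor):
    """Return the vowel whose target circle contains the cursor, if any."""
    x, y = cursor
    for vowel, (cx, cy) in VOWEL_TARGETS.items():
        if math.hypot(x - cx, y - cy) < RADIUS:
            return vowel
    return None  # between targets: stay silent

print(vowel_at((0.45, 0.05)))  # -> 'aa'
print(vowel_at((0.0, 0.0)))    # -> None
```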

Interestingly, back in 2009, the team also tested a more invasive device, implanting an electrode in the brain and using a computer system to interpret the patient’s thoughts. They performed this procedure on a 26-year-old who had been paralyzed by a brain stem stroke. A few months after implantation, nerve cells grew into the electrode and began producing detectable signals. Once the team developed software that could pick the elements of speech out of all the “neural noise”, they were able to help this patient reproduce speech with a delay of approximately 50 milliseconds — comparable to the delay in natural speech! See more on this study here.


A screenshot of the online Foldit game that allows players to compete against each other to figure out molecular folding patterns of proteins. (Image from the University of Washington)

Whoever said playing games is a waste of time? University of Washington researchers have harnessed the power, skill, and determination of the wide world of gamers to solve the conformation of a retroviral enzyme that had foiled scientists for over a decade. It took gamers playing Foldit, an online protein-folding game, just three weeks to produce a structure close enough to the real thing for the University of Washington scientists to solve the problem. Thank goodness human intuition can still beat computer automation!

Most students in my Biomimetics/Bioinspiration course are probably already getting tired of hearing me say, “Integration is the future!! We must learn to communicate across disciplines so that we can collaborate across disciplines. How else do we expect to be innovative if we stay in our own, isolated bubbles?!!?!!?” Considering we are only in our third week of class, this is all a little scary.

I am finally feeling a little vindicated, as someone else out there just might be thinking a little like me. Or, at the very least, he sees a broken educational system and is trying to address its problems. Paul Backett, Ziba’s Industrial Design Director, has started a six-part series on one of my fave websites, Core77, discussing design education and how it can and should be revamped to reflect the tools and challenges of modern technology. He sees design students being taught concepts rather than skillsets, and using technology (consciously or not) to be lazy in their craft. Sound familiar, fellow scientists?

Paul’s MO appears to be teaching design students through full-immersion activities, so that they learn through experience rather than by reading textbooks and practicing visualization skills by copycatting an object (see Part 1). In the sciences, there are a few examples of this (e.g., the CiBER program at UC Berkeley and Stanford’s Design Program, just for starters), but not nearly enough, considering the urgency of the situation.

In the meantime, I am looking forward to following Paul’s posts and gleaning some more wisdom from his thoughts…