Wednesday, December 21, 2011

Summary: Still in the early stages, telepresence robots offer big advantages apart from their cool factor that could generate interest as their price and performance improve.

I’ve been on a robot kick lately, but in this post I’m going to discuss a trend that hits a little closer to home and could have implications for office workers in just a few years: the rise of telepresence robots (aka telebots).

Still in the early stages, telebots are mobile machines outfitted with cameras, screens, speakers and microphones that let remote workers interact with on-site colleagues. From a computer, a remote worker steers an avatar robot around an office to attend meetings, or around a facility to make inspections, often bewildering unsuspecting onlookers along the way.

Is this the future of the office workforce? (Credit: Anybots, Inc.)

The game is on to create less clumsy and more socially acceptable robots for the office. New models from companies like Anybots and VGo Communications have hit the marketplace, while newcomer Suitable Technologies and iRobot–maker of the Roomba vacuum cleaner–are looking to push the envelope with larger screens that can convey facial expressions and gestures for better two-way communication.

Like high-end corporate telepresence and videoconferencing systems, telebots are expensive, with some costing $15,000 or even $40,000. Consequently, they’ve found only a few corporate applications (e.g., executive speeches), along with some wins in the medical and education fields.

Today, videoconferencing is by far the more practical option for businesses looking to reduce or eliminate the cost of business travel. Telepresence robots pursue the same objective, but offer three big advantages apart from their cool factor that could generate interest as their price and performance improve:

Move beyond the meeting room - This one is pretty obvious, but being part of the action and having the mobility to join a group as it leaves a conference room for a break, or to wander the hallways for chance encounters with coworkers, means a telepresence robot is the next best thing to actually being there–at least until teleportation arrives.

They do a better job of being you - Even life-sized and in HD, you are still represented in 2D on a screen. As a human-controlled robot, you take on a 3D presence among the people you interact with, just as if you were there in person. In fact, telebots are taking on a more realistic appearance and becoming anatomically correct. Someday, you may be sending your android twin to close a deal.

Quality and compliance work - Moving away from the office environment, Dr. Brian Glassman of Purdue University recently commented on Technology Review about a case for augmented telebots: “Many factories need consistent inspection, some of these factories are either hostile (require hard hats) or are remote and time consuming and expensive to travel to. Having telepresence robots which can visually inspect things, (using IR, telescoping video, or using microscopes) will give companies the ability to conduct inspection from HQ and make more frequent inspections (travel time takes away from work time).”

Further reading:

DesignNews: The Dawning of the Office Robot

Technology Review: Telepresence Robots Seek Office Work

e-Discovery Team: On Vacation and Can’t Attend an Important Meeting? Use a Robot Stand-in!

Summary: Hard disk drive makers plan to forge ahead with heat-assisted magnetic recording (HAMR) technology, putting bit-patterned media on the back burner. Credit: Seagate

For years, hard disk makers argued over which road map would give the industry a next-gen standard, because any further increase in data density would require huge investment.

One camp, led by Seagate, lobbied for heat-assisted magnetic recording (HAMR); others, led by Hitachi GST, called for a move to bit-patterned media.

EE Times reports that the two sides converged on HAMR as their next step.

“There’s a general consensus the huge shift beyond perpendicular is at least three years out, so mainstream [HAMR] products won’t ship until 2014 or 2015,” said Mark Geenen, president of IDEMA, the disk drive trade group and host of Diskcon, an annual industry event that will be held next week in Santa Clara, CA.

Today’s magnetic disk recording techniques (perpendicular) will hit a brick wall in a generation or two when areal density reaches 1-1.5 terabits per square inch. At this point, stored bits get too small to remain stable; a small amount of heat is all it takes to make nano-sized bits flip their magnetization direction.
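The scale of that limit is easier to picture with a quick back-of-the-envelope calculation. This sketch converts the article's areal-density figures into the footprint of a single stored bit (the square-cell assumption is mine, for illustration):

```python
# Back-of-the-envelope bit-cell size at the perpendicular-recording limit.
# The 1-1.5 Tb/in^2 figures are from the article; assuming square bit
# cells is an illustrative simplification.

IN2_TO_NM2 = (25.4e6) ** 2  # one square inch expressed in square nanometers

for tbpsi in (1.0, 1.5):
    bits = tbpsi * 1e12                 # bits per square inch
    cell_area_nm2 = IN2_TO_NM2 / bits   # area available for each bit
    side_nm = cell_area_nm2 ** 0.5      # side length of a square bit cell
    print(f"{tbpsi} Tb/in^2 -> {cell_area_nm2:.0f} nm^2 per bit "
          f"(~{side_nm:.1f} nm square cell)")
```

At these densities a bit cell is only a few hundred square nanometers, small enough that thermal energy alone can flip its magnetization, which is the instability the article describes.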

HAMR technology uses a magnetic recording medium that is more stable at normal temperatures but needs to be heated before data can be written. The challenge is to heat a very small area quickly enough using the right recording materials that can also integrate laser diodes and recording heads. While difficult, sources say it’s easier than the leading alternative–patterning multiple terabits of data uniformly on a platter.

Proponents of bit-patterning have not yet demonstrated how it can be used to mass produce disks and add no more than two dollars to the cost of each disk.

Meanwhile, Japanese disk drive head supplier TDK has already built HAMR heads. According to reports, TDK could potentially manufacture a 2TB 2.5-inch disk drive with 1TB platters using this technology.

But until all the pieces are in place for HAMR, drive makers are expected to use shingled magnetic recording, a variant of perpendicular, to push areal density to or slightly beyond a terabit.

As for bit-patterning, the approach isn’t expected to take hold until HAMR reaches a limit, which could be 2020 or beyond when areal density is measured in multiple terabits, notes EE Times.

The Advanced Storage Technology Consortium (ASTC) pools resources from 13 members, including Hitachi GST, Marvell, and Seagate, for R&D efforts that will help make the generational leap beyond perpendicular recording.

At Diskcon, leading researchers will share progress toward HAMR technology.

Sources: EE Times, Channel Register, IEEE Spectrum

Related:

A ’stone-like’ optical disc that lasts for millennia

Drive giants plan next-gen tech

Tuesday, December 20, 2011

Summary: Researchers at Berkeley Lab have developed a new kind of anode that can absorb eight times the lithium of current designs.

Lithium-ion batteries are the most common type of rechargeable battery. They are found in laptops, smartphones, and increasingly, in electric cars and smart grids.

Although there are many advantages to lithium-ion batteries–they maintain full capacity even after a partial recharge and are considered more environmentally safe than other battery technologies–their storage capacity can still be improved.

A team of scientists at Berkeley Lab has designed a new kind of anode that can absorb eight times the lithium of current designs and has maintained that greatly increased energy capacity through more than a year of testing and many hundreds of charge-discharge cycles.

“Most of today’s lithium-ion batteries have anodes made of graphite, which is electrically conducting and expands only modestly when housing the ions between its graphene layers. Silicon can store 10 times more – it has by far the highest capacity among lithium-ion storage materials – but it swells to more than three times its volume when fully charged,” said Gao Liu of Berkeley Lab’s Environmental Energy Technologies Division (EETD).

The swelling quickly breaks the electrical contacts in the anode, so the researchers concentrated on finding other ways to use silicon while maintaining anode conductivity. Through a combination of synthesis, spectroscopy and simulation, the team tailored a polymer that conducts electricity and binds closely to lithium-storing silicon particles, even as they expand to more than three times their volume during charging and then shrink again during discharge.

The new anodes are made from low-cost materials, compatible with standard lithium-battery manufacturing technologies.

The research team reports its findings in Advanced Materials, now available online.

Summary: Keeping smartphones and laptops charged when there’s no electrical outlet in sight is a perennial challenge. A novel LCD screen developed by UCLA engineers could potentially help solve the problem.

UCLA engineers have developed an LCD screen with built-in photovoltaic polarizers that harvest and recycle energy from ambient light, sunlight, and its own backlight.

The energy-harvesting polarizer, known technically as a polarizing organic photovoltaic, can potentially boost the function of an LCD by working simultaneously as a polarizer, a photovoltaic device, and an ambient-light or sunlight photovoltaic panel.

“I believe this is a game-changer invention to improve the efficiency of LCD displays,” said Yang Yang, a professor of materials science at UCLA Engineering and principal investigator on the research. “In addition, these polarizers can also be used as regular solar cells to harvest indoor or outdoor light. So next time you are on the beach, you could charge your iPhone via sunlight.”

LCDs, or liquid crystal displays, shine light through a combination of liquid crystals and polarized glass to produce a visible image, albeit inefficiently. According to the UCLA researchers, a device’s backlight can consume 80 to 90 percent of the device’s power, but as much as 75 percent of the light generated is lost through the polarizers. A polarizing organic photovoltaic LCD could recover much of that lost energy.
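The article's percentages imply a sizable recoverable energy budget. A quick sketch of the arithmetic (the polarizer's conversion efficiency below is an assumed figure for illustration, not a number from the researchers):

```python
# Rough arithmetic behind the UCLA claim, using the article's figures.
# The photovoltaic conversion efficiency is an assumption; the article
# only says the polarizer could recover "much of that lost energy".

device_power = 1.0       # normalize total device power to 1
backlight_share = 0.85   # article: backlight consumes 80-90% of device power
polarizer_loss = 0.75    # article: up to 75% of backlight lost in polarizers

wasted = device_power * backlight_share * polarizer_loss
print(f"Light lost in the polarizers: {wasted:.0%} of total device power")

pv_efficiency = 0.10     # hypothetical conversion efficiency
recovered = wasted * pv_efficiency
print(f"Recovered electrical power: {recovered:.1%} of total device power")
```

Even at a modest assumed conversion efficiency, recapturing light that would otherwise be absorbed in the polarizers yields a meaningful fraction of the device's power budget.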

Youssry Boutros, program director at Intel Labs, said: “The polarizing organic photovoltaic cell demonstrated by Professor Yang’s research group can potentially harvest 75 percent of the wasted photons from LCD backlight and turn them back into electricity.” Intel supported the research through its Intel Labs Academic Research Office (ARO).

“In the near future, we would like to increase the efficiency of the polarizing organic photovoltaics, and eventually we hope to work with electronics manufacturers to integrate our technology into real products,” Yang said. “We hope this energy-saving LCD will become a mainstream technology in displays.”

Below is a short clip of the UCLA team making the polarizing film using P3HT, an organic polymer widely used in solar cells.

The research is published in the online version of the journal Advanced Materials.

Monday, December 19, 2011

Summary: German researchers have demonstrated how regular LEDs can be turned into an optical WLAN with only a “few additional components.”

Lights are no longer just for lighting up.

Scientists from the Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute (HHI) in Berlin, Germany, have developed a new kind of optical WLAN with enough throughput to allow four people in a room to watch a film from the Internet on their laptops, in HD quality.

The technology can potentially be used on both laptops and mobile telephones.

Credit: Fraunhofer HHI

The researchers say they’ve achieved a data transfer rate of 100 megabits per second (Mbit/s) without any losses, using LEDs in the ceiling that light up an area of more than ten square meters (about 108 square feet). That lit area also marks the zone within which the receiver — a simple photo diode on the laptop — must remain to stay in range.

In lab tests, the team pushed speeds even further using red-blue-green-white light LEDs. Those transmitted data at a blistering 800 Mbit/s, setting a record for VLC or visible light communication.

Klaus-Dieter Langer, the project leader said: “For VLC the sources of light – in this case, white-light LEDs – provide lighting for the room at the same time they transfer information. With the aid of a special component, the modulator, we turn the LEDs off and on in very rapid succession and transfer the information as ones and zeros.”

The system works because the modulation of the light is imperceptible to the human eye. Langer explains: “The diode catches the light, electronics decode the information and translate it into electrical impulses, meaning the language of the computer.”
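The on-off scheme Langer describes is essentially on-off keying: brightness maps to bits, and the photodiode side recovers them with a threshold. A minimal sketch of the idea — function names and the threshold are illustrative, not details of the Fraunhofer system:

```python
# Minimal on-off keying sketch: the LED toggles faster than the eye can
# follow, and a photodiode recovers the bit stream by thresholding.

def modulate(bits, high=1.0, low=0.0):
    """Map bits to LED intensity levels (1 -> on, 0 -> off)."""
    return [high if b else low for b in bits]

def demodulate(samples, threshold=0.5):
    """Photodiode side: compare received intensity against a threshold."""
    return [1 if s > threshold else 0 for s in samples]

message = [1, 0, 1, 1, 0, 0, 1]
received = modulate(message)  # in reality: light, channel loss, and noise
assert demodulate(received) == message
print("decoded:", demodulate(received))
```

A real VLC link adds noise, synchronization, and far more efficient modulation schemes, but the principle of imperceptibly fast intensity switching is the same.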

While rigging a system to turn LEDs into a transfer medium may not require many components, sending data over light waves is not without challenges. The key one is that whenever an object (like a hand) comes between the light and the photo diode, the transfer is impaired.

The HHI scientists stress that the optical WLAN is not intended to replace other networks, but rather to serve as an additional, low-invasive option in environments where radio transmission is not desired or not possible, such as hospital operating rooms.

“Combinations are also possible, such as optical WLAN in one direction and PowerLAN for the return channel. Films can be transferred to the PC like this and also played there, or they can be sent on to another computer,” notes a release.

The scientists will demonstrate how videos are transmitted by light at the International Telecommunications Fair IFA (Internationale Funkausstellung IFA) in Berlin from September 2-7, 2011.

Summary: At Dreamforce 2011, the director of California Institute for Quantitative Biosciences (QB3) shares a vision for the future of medicine based on precision diagnosis, empirical pharmacology, and information technology.

Here’s a sobering thought: Half of those who reach the age of 85 will have Alzheimer’s disease. Currently, there’s no cure, no treatment, and no drug or therapy in the pipeline.

The answer to this problem and other healthcare challenges could lie in a new approach that links the physical sciences – mathematics, physics, chemistry and engineering – with the bio-sciences while adopting the latest trends in the IT industry.

That was the key message from Dr. Regis B. Kelly, Director, California Institute for Quantitative Biosciences (QB3), who spoke yesterday at “Unusual Thinkers: The UCSF Track” at Dreamforce 2011 held at San Francisco’s Moscone Center.

QB3 is an academic consortium consisting of three University of California campuses (UCB, UCSC & UCSF) working together on “converting science into public benefit” so that the promise of personalized medicine, rational drug design, early diagnosis, and reduced healthcare costs may one day be realized.

Healthcare costs are spiraling out of control due to problems in pharmacology, according to Kelly. He presented a slide illustrating how R&D expenditures for the pharmaceutical industry have increased from $50 billion in ‘05 to nearly $70 billion in ‘09, while the number of new drugs to gain FDA approval have steadily declined.

“Using the best science we have, we fail 9 out of 10 times, so our basic understanding is lacking,” Kelly said.

Furthermore, all drugs have side effects. This is because inhibiting one protein with a drug affects 50 to 1,000 others, according to Kelly.

“The drug industry is in a perfect storm. The number of drugs have dropped by half and the cost is too high for development.”

(Coincidentally, Andy Grove, co-founder and former CEO of Intel spoke about this topic at a QB3 event this week held at Genentech. Grove said that in terms of time and investment, the closest equivalent process in history to the creation of a single drug is the construction of a single pyramid in ancient Egypt.)

So how will medicine evolve over the next 10 years to improve healthcare and reduce costs? Kelly explained that it will do so by combining precision diagnosis with empirical pharmacology.

Precision diagnosis doesn’t mean your doctor will no longer check your blood pressure and give you a traditional examination. It does, however, consider how individual variations in your genome can have a major impact on how your body specifically responds to disease, drugs, and other therapies.

The idea is to predict exactly how a protein’s function will change if its composition is changed. How a patient will respond to a new therapy should be looked at from a systems perspective, just as engineers do when building models to determine if a circuit will work or how well an airplane will fly.

This will become more practical as genome sequencing gets faster and cheaper, according to Kelly.

As old as humankind, empirical pharmacology is simply experimenting with potential cures until a solution is discovered by accident. With robotics and molecular diagnostics, we’ll be able to take a genetics approach to pharmacology and, through trial and error, develop the tools to predict biological processes and then engineer cells and microorganisms that provide unique resources such as drugs.

One of the issues of taking a quantitative approach to bio-sciences is that you generate vast amounts of data and that’s where Salesforce.com comes into the picture.

Kelly pointed out that the new Salesforce.com headquarters will be located across the street from QB3 in the Mission Bay neighborhood of San Francisco. He said that a meaningful relationship can be forged because the cloud and heterogeneous computing solutions will be essential for the emerging big data problems in biology.

“This is not about getting new robotic systems or algorithms. It’s about figuring out a way to prevent you from asking who you are when you are in your 80s,” Kelly said.

In addition to Dr. Regis Kelly, the Unusual Thinkers (#df11ucsf) track at Dreamforce featured 7 other leading researchers and practitioners at UCSF.

Sunday, December 18, 2011

Summary: The world’s largest urban centers will be responsible for providing food, shelter, and jobs to roughly 8 billion people by 2100. The biggest obstacle may not be adequate resources or technology, but rather management, say experts. Will megacities be surrounded by parched, lifeless lands? (Credit: ilker canikligil)

Soon after the United Nations declared that the world population has topped 7 billion people, doomsday advocates sounded off about pending shortages of energy, water, and food as optimists turned a blind eye, saying that we’ve heard it all before.

One thing is for certain: more and more people are living in cities, and increasingly in megacities (cities with over 10 million inhabitants). In 1975, there were just three cities that fit the bill: New York, Tokyo and Mexico City. Today, there are at least 20 more. Many, including Shanghai, Jakarta, and São Paulo have reached supercity status (greater than 20 million).

Financial Times editor David Pilling writes about how the character of cities is being rapidly redefined, noting that by 2050 three-quarters of the world’s population will be urban. By 2100, the figure will nudge up to 80%–that’s 8 billion urbanites among the UN’s projected 10 billion souls on earth at the turn of the century.
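The arithmetic behind that 8 billion figure is simple enough to check:

```python
# Quick check of the projection Pilling cites: urban share of the UN's
# projected world population. All figures are from the article.

world_2100 = 10e9        # UN projection for world population in 2100
urban_share_2050 = 0.75  # three-quarters urban by 2050
urban_share_2100 = 0.80  # 80% urban by 2100

urbanites_2100 = world_2100 * urban_share_2100
print(f"Urbanites in 2100: {urbanites_2100 / 1e9:.0f} billion")  # 8 billion
```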

This raises many questions about the future of urbanization. The first that comes to mind: Will human ingenuity march in lockstep with this growth, advancing agriculture, energy and technology to sustain the urban centers of tomorrow? If past performance is any measure of future success, then the answer is a cautious “yes”.

Blame limits to growth on management, not resources

The biggest challenge, say experts, is actually management. Poor management is the leading cause of inadequate housing, failing transportation systems, weak pollution control and a lack of disaster preparedness.

Researchers at the McKinsey Global Institute have been studying urbanization and found that there is, in theory, no limit set by technology or infrastructure to how big or how fast cities can grow, and problems stemming from rapid city growth are not directly the result of insufficient resources, but rather from poor management:

…the growth of most urban centers is bound by an inability to manage their size in a way that maximizes scale opportunities and minimizes costs. Large urban centers are highly complex, demanding environments that require a long planning horizon and extraordinary managerial skills. Many city governments are simply not prepared to cope with the speed at which their populations are expanding.

McKinsey suggests that there are four principles of effective city management: (1) funding for infrastructure; (2) modern, accountable governance; (3) proper planning horizons that span 1-40 years; and, (4) dedicated policies in critical areas such as affordable housing. At least in the technology department, progress is well underway.

Mass urbanization will bring with it mass digitization

As cities grow larger and more rapidly, new “smart city” technologies will unleash massive streams of data about cities and their residents. City-scale operating systems are already in development and promise to intelligently monitor and automate traffic lights, air conditioning, water pumps, and other systems that influence the quality of urban life while driving down the costs of operating a city.

New sources of information could also provide the opportunity for cities to improve government services, alleviate poverty and inequality, and empower the poor, according to a report from the Institute of the Future.

To learn more:

An operating system for smart cities

Interview: MIT’s SENSEable City Laboratory

Urban ecosystems will work in tandem with their natural environments

Paris reinterpreted: First place winners of the Living City Design Competition--Daniel Zielinski and Maximilian Zielinski--illustrate how people can thrive in partnership with nature.

Large-scale sustainability projects like Masdar City in the United Arab Emirates and Germany’s “Morgenstadt” serve as models for green urban development. They showcase how cities can obtain power from renewable resources, run quieter with fleets of electric vehicles, and promote low-energy living using smart meters.

It could take decades, but today’s metropolises will gradually restructure using these technologies to reduce carbon emissions and achieve greater harmony with the natural environment.

Leading the charge for an ecologically restorative future is the International Living Future Institute, a non-profit organization that is “raising the bar for true sustainability”. Through its Living Building Challenge, the institute has defined a set of rigorous development standards that exceed every LEED (Leadership in Energy and Environmental Design) certification level, including platinum. To date, there are active programs in the U.S., Canada, and Ireland, and the organization is looking to expand in other countries.

The world in 2100

While speculative, a convincing McKinsey analysis suggests that by 2100, urban-to-urban cross-border migration will be more prevalent than it is today, resulting in an “immense intermingling of ethnicities”. The world will go from a 7,000-language planet to a couple of hundred languages at most, and the gap between rich and poor should narrow everywhere.

Not to be a Debbie Downer, but there’s also the potential for food shortages stemming from extreme droughts, and unemployment could become a significant issue in the face of inexorable growth in automation. On the plus side, the average global lifespan will increase to 81 years, and hydro and other renewable energy sources could supply most of our power by 2100 and beyond.

In 1798, Thomas Robert Malthus famously argued that poverty and famine were natural outcomes of population growth and that controls should be put in place to evade disaster. More than two hundred years later, the population is seven times greater, yet the specter hasn’t come true. Hopefully, that remains the case for the next 100 years and beyond.

Related:

Massive timeline of future history

The silver lining of a world run amuck by machines

A roadmap for growing prosperity while saving the planet

Summary: Manufacturers are raising the performance bar for electric motorcycles, rapidly catching up to their gas-guzzling counterparts. Here are five battery-powered machines guaranteed to turn heads.

As an enthusiast of motorcycles (I own two) and a resident of the Bay Area, I’ve noticed a surge of buzz around electric two-wheelers, and I’m not alone. Reporting today on the recent unveiling of Red Shift, an all-electric “supermoto” from San Francisco start-up BRD Motorcycles, Jeanne Carstensen of the New York Times writes: “With Mission Motors, also in San Francisco, and Zero Motorcycles in Santa Cruz, as well as others, the region is becoming a hub for electric motorcycle companies.”

Speaking of Mission Motors, the company made history a few weeks ago at the Red Bull U.S. Grand Prix at Laguna Seca. The company’s race bike, Mission R, posted a qualifying time of 1:31.3, the fifth fastest for the weekend’s AMA Supersport race, and a track record for an electric vehicle of any kind. Motorcycle traditionalists were left scratching their heads.

As the performance of electric motorcycles closes in on that of their gas-guzzling counterparts, they’re also becoming increasingly practical and cost-effective. The market for electric scooters and motorcycles is taking off worldwide, with about half a billion expected to be in use across the globe by 2016, estimates Pike Research.

For whatever shortfalls exist today with electric motorcycles, such as maximum ranges that top out between 60 and 100 miles and the lack of a gas engine’s growl, manufacturers are wasting no time compensating with designs and technology that could permanently impact both the motorcycling industry and its culture.

Below is a sample of the latest electric motorcycles at various stages of development plus a $35K hybrid bicycle that must be seen to be believed: (Make | Model | Energy Storage | Horsepower | Top Speed | MSRP)

Saturday, December 17, 2011

Summary: Rice University physicists have created a tiny “electron superhighway” that could one day be useful for building a quantum computer.

The promise of quantum computing is largely predicated on whether or not physicists can keep quantum bits, or “qubits,” from slipping out of their two-state existence due to quantum fluctuations. This fundamental limitation has spawned research into different approaches to creating qubits.

Credit: Jeff Fitlow/Rice University

The latest comes from Rice University, where physicists have created a device called a “quantum spin Hall topological insulator” which acts as a tiny electron superhighway designed for increased fault-tolerance.

The researchers claim that the device is one of the building blocks needed to create quantum particles that store and manipulate data.

A quantum computer uses quantum particles in place of the digital transistors found in today’s microchips. These particles — atoms, electrons, or qubits — can be both ones and zeros at the same time, thanks to the quirks of quantum mechanics. This gives quantum computers a huge edge in performing intense computing tasks like code-breaking, climate modeling and biomedical simulation.

“In principle, we don’t need many qubits to create a powerful computer. In terms of information density, a silicon microprocessor with 1 billion transistors would be roughly equal to a quantum processor with 30 qubits,” said Rui-Rui Du, a Rice physicist behind the research.
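Du's comparison rests on the exponential state space of qubits: n qubits span 2**n basis states, and 2**30 is just over a billion, matching the transistor count he cites. A one-line sanity check:

```python
# Sanity check on Du's comparison: n qubits can encode 2**n basis states,
# so 30 qubits match a billion-transistor chip in raw information density.

n_qubits = 30
states = 2 ** n_qubits
print(f"{n_qubits} qubits -> {states:,} basis states")  # 1,073,741,824
```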

Du and colleague Ivan Knez describe their approach to topological quantum computing in a recent paper published in Physical Review Letters.

According to Rice, “topological designs are expected to be more fault-tolerant than other types of quantum computers because each qubit in a topological quantum computer will be made from a pair of quantum particles that have a virtually immutable shared identity.”

But there is a catch to the topological approach. Physicists have yet to create or observe one of these stable pairs of particles, which are called “Majorana fermions” (pronounced MAH-yor-ah-na FUR-mee-ons).

Majorana fermions were first proposed in 1937 and the search for the elusive particles is becoming an obsession in the condensed-matter community. Physicists believe the particles can be made by marrying a two-dimensional topological insulator — like the one created by Du and Knez — to a superconductor.

According to Knez, if a small square of a topological insulator is attached to a superconductor then the elusive Majorana fermions are expected to appear precisely where the materials meet. If this proves true, the devices could potentially be used to generate qubits for quantum computing.

Knez spent more than a year refining the techniques to create Rice’s topological insulator. The device is made from a commercial-grade semiconductor that’s commonly used in making night-vision goggles.

Du said it is the first 2-D topological insulator made from a material that physicists already know how to attach to a superconductor.

“We are well-positioned for the next step,” Du said. “Meanwhile, only experiments can tell whether we can find Majorana fermions and whether they are good candidates for creating stable qubits.”

Summary: Researchers at UC Berkeley have turned a benign virus called M13 into an engineering tool for assembling materials that mimic collagen, one of nature’s building blocks. The process they developed could eventually be used to create bone, skin, and corneas.

Researchers at the University of California at Berkeley have developed a technique to coax benign viruses called M13 phages to serve as structural building blocks for complex biological materials.

The materials created with the help of viruses could eventually be used to create complex biological tissues, such as cornea, skin and bones. Credit: Woo-Jae Chung, UC Berkeley

“We took our inspiration from nature,” said Seung-Wuk Lee, an associate professor of bioengineering at UC Berkeley, who describes his team’s self-templating, bio-material assembly process in the journal Nature. “Nature has a unique ability to create functional materials from very basic building blocks. We found a way to mimic the formation of diverse, complex structures from helical macromolecules, such as collagen, chitin and cellulose, which are the primary building blocks for a wide array of functional materials in animals and plants.”

Lee points to the blue-faced mandrill as a source of inspiration. It derives its coloring not from pigment, but from the specific scattering of light that occurs when thin fibers of collagen are twisted and layered in its skin.

The researchers began studying collagen, particularly the factors influencing the formation of the protein’s hierarchical structures. But they hit a wall. “Unfortunately, collagen is a difficult material to study because it is hard to tune its physical and chemical structures. We needed a convenient model system to solve this problem,” said Lee.

The team turned to a common bacteria-attacking virus, the M13 bacteriophage, which is harmless to humans and routinely engineered for applications ranging from nanomaterials to green energy. They found that its long, “chopstick-like” shape, with a helical groove on the surface, closely resembles collagen fibers.

The scientists added varying concentrations of the virus to a bath of saline solution. Next, they dipped a sheet of glass into the bath of M13 and pulled it out at slow, precise speeds to control the liquid’s viscosity, surface tension, and rate of evaporation–all factors that determined the type of pattern formed by the viruses. As each sheet emerged, a fresh film of viruses attached to it. This technique altered the physical environment for the viral filaments and ultimately produced three distinct film patterns.

The next step was to engineer the virus to express specific peptides that influence the growth of soft and hard tissue for use in biomedical applications. They used the resulting viral films as tissue-guiding templates to form a composite similar to tooth enamel that could potentially be used as regenerative tissue.

According to Lee, their technique’s simplicity is key; by setting very specific parameters, they just let self-assembly slowly take place: “We let this run overnight, and by the next morning there were trillions of viral filaments arranged in patterns on our substrate.”

One of their key findings, Lee said, is that “we have started to understand nature’s approach to creating such complex structures, and we have developed an easy way to mimic and even extend it.”

Friday, December 16, 2011

Summary: 80% of all web communication is in ten languages, yet 95% of humanity speaks roughly 300 languages. As digital services and devices move to voice control, the commercial opportunity could help close the digital linguistic divide, says the Long Now Foundation.

The majority of the world’s languages have only a few thousand speakers each and therefore provide no commercial incentive to preserve them or to enable them on the web.

If you look to the left of the long tail, however, said Dr. Laura Welcher, Director of Operations for the Rosetta Project at the Long Now Foundation, there are about 300 widely spoken languages that do provide motivation for providers of digital services and devices, because this group accounts for 95% of all people on earth. (See the yellow-colored band in the image.)

Credit: Dr. Laura Welcher, Long Now Foundation
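The head-versus-tail split Welcher describes is what you would expect from a power-law distribution of speakers across languages. The sketch below is a toy model, not real census data: the Zipf exponent of 1.5 is an illustrative assumption chosen so the head of the curve behaves roughly like the 95% figure in the talk.

```python
# Toy illustration (not real census data): speaker counts that follow a
# Zipf-like power law concentrate most of humanity in the head of the curve.
def zipf_coverage(top, total, s=1.5):
    """Fraction of all 'speakers' covered by the `top` most-spoken of
    `total` languages, if language rank k has weight 1/k**s."""
    weights = [1.0 / k**s for k in range(1, total + 1)]
    return sum(weights[:top]) / sum(weights)

# With an illustrative exponent of 1.5, the top 300 of ~7,000 languages
# cover over 95% of the weight, echoing the distribution Welcher describes.
print(f"{zipf_coverage(300, 7000):.1%}")
```

The flip side of the same arithmetic is Welcher's point: the remaining ~6,700 languages together account for only a few percent of speakers, which is why they attract so little commercial investment.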

In a recent talk given at UC Berkeley’s Language Center, Welcher described her organization’s goal of creating an open public digital collection of all human language, as well as an analog backup, the Rosetta Disk: a solid nickel disk with 13,000 microetched pages of language documentation that can last for thousands of years.

Experts say that we lose a language every two weeks and up to 90% of roughly 7,000 languages will go extinct in 100 years. To counter the trend, the Long Now Foundation is leading a herculean effort to preserve thousands of endangered languages around the world.

In her talk, Welcher applauded Google’s plan to sample 300 languages from around the world to help improve its Voice Search product, saying that ideally the data collected would find its way into the public domain such as Language Commons or Rosetta Language Base on Freebase (an open platform owned by Google).

Welcher said that the long tail of roughly 6,500 languages could benefit from development of the 300 (and vice versa) if we build better algorithms that can work with less data. Long tail languages can also be helped through philanthropic efforts.

“As companies make corpora, if it is open then linguists can access it and help build a platform to help endangered languages of the world,” she asserted.

Welcher did not cover Apple’s Siri voice-controlled personal assistant technology. But it currently supports three languages (English, French and German) and in 2012 will add several of the top ten languages used on the web, including Chinese, Japanese and Spanish. As Siri grows in both linguistic diversity and capability, second-tier languages may take fewer resources to support, giving Apple the green light to contribute to open resources on human languages.

If there is anything that the Rosetta Project needs to fulfill its objective, it’s help. The current collection contains 100,000 pages of scanned material documenting over 2,500 languages, as well as a growing library of crowd-sourced audio and video recordings. But that barely scratches the surface: substantial machine-readable corpora exist for only about 20 to 30 of the world’s languages, and Welcher expects to add only 500 more into the digital domain over the next 10 years unless she can substantially scale the effort.

Programs like the 300 Languages Project and “Record-a-thon” are helping to close the gap, but it will take more to reach her goal of documenting at least 5,000 languages before they disappear. Welcher asked: “How do we get the ISO code for all human languages and develop a universal corpus with reliable machine translation?”

Welcher ended her talk with a vision of a free and open encyclopedia of human languages modeled on Wikipedia and the Encyclopedia of Life.

Further reading:

Internet Archive: The Rosetta Project
The DVD-Sized Rosetta Disk Will Preserve Human Language For Eternity
Found in Translation: The blog of the Berkeley Language Center

Related:

A ’stone-like’ optical disc that lasts for millennia
The Long Now Foundation’s 10,000 year clock

Summary: With a belief that there’s a future in optical drives, start-up Millenniata and LG have partnered to commercialize a disc that lasts ‘forever.’

Start-up Millenniata and Hitachi-LG have teamed up to create a new optical disc along with a read/write player that will store any data — movies, photos, and music — forever. The disc is compatible with any current DVD or Blu-ray player.

Millenniata calls the product the M-Disc, and claims that it “cannot be overwritten, erased, or corrupted by natural processes.” In fact, you could dip it in liquid nitrogen and then boiling water without harming it (see video).

The M-Disc platters resemble typical DVDs and Blu-ray discs in that they are made up of multiple layers of material, but without a reflective or dye layer. During the recording process, a laser “etches” permanent pits into a proprietary rock-like data layer using higher temperatures and as much as five times more energy than ordinary optical discs.

Credit: Millenniata, Inc.

A U.S. Department of Defense study found the product to be more resilient than other leading optical discs.

The platters can be read on any machine that can read a DVD; however, Millenniata’s machine is required to write them. Currently, the discs store about the same amount of data as a DVD (4.7GB) but write only at 4x, roughly half the speed of today’s DVD writers. Plans to ramp up recording speed are underway.
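A quick back-of-envelope check puts that write speed in perspective. This assumes the standard DVD 1x transfer rate of 1.385 MB/s; the actual M-Disc burner's sustained rate is not given in the article.

```python
# Back-of-envelope: how long does a full 4.7 GB M-Disc take to burn at 4x?
# Assumes the standard DVD 1x rate of 1.385 MB/s (11.08 Mbit/s).
DVD_1X_MB_S = 1.385
capacity_mb = 4.7 * 1000   # 4.7 GB, decimal, as DVD capacities are quoted
speed = 4 * DVD_1X_MB_S    # 4x write speed

seconds = capacity_mb / speed
print(f"~{seconds / 60:.0f} minutes for a full disc")
```

Roughly a quarter of an hour per disc, which is why recording speed is the obvious next target for a product aimed at bulk archiving.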

Millenniata will target consumers first when it launches the M-Disc read-write player in early October. After that, the company plans to gain a foothold in the long-term data archive market as an alternative to cloud and other storage and backup technologies.

Thursday, December 15, 2011

Summary: A team of laser experts is studying different techniques for corralling particles and transporting them via laser light to instruments on rovers and orbiting spacecraft.

Tractor beams trap and move objects using laser light. If you’ve seen one in action you were probably watching Star Trek or a science fiction movie.

Tractor beam on rover concept (Credit: Dr. Paul Stysley)

However, laser-based trapping of particles isn’t fanciful or beyond technological know-how, says Paul Stysley, one of three NASA scientists who recently won funding to study methods for corralling particles and transporting them via laser light to a robotic rover or orbiting spacecraft for analysis.

“The original thought was that we could use tractor beams for cleaning up orbital debris,” Stysley said. “But to pull something that huge would be almost impossible — at least now. That’s when it bubbled up that perhaps we could use the same approach for sample collection.”

Current sample-collection techniques work but are expensive and have a limited range and sample rate. Tractor beams, reason the scientists, could grab desired molecules from the upper atmosphere on an orbiting spacecraft or trap them from the ground or lower atmosphere from a lander.

“They could continuously and remotely capture particles over a longer period of time, which would enhance science goals and reduce mission risk,” Stysley said.

The scientists have identified three different approaches for transporting particles, as well as single molecules, viruses, ribonucleic acid, and fully functioning cells, using the power of light. They’ll pursue the technique which they determine is most technologically feasible:

The optical vortex or “optical tweezers” method — This method involves two counter-propagating beams of light that form a ring-like geometry which confines particles to the dark core of the overlapping beams. By alternately strengthening or weakening the intensity of one of the light beams — in effect heating the air around the trapped particle — researchers at Australian National University have shown in laboratory testing that they can move the particle along the ring’s center. This technique, however, requires the presence of an atmosphere.

Optical solenoid beams — These light beams’ intensity peaks spiral around the axis of propagation. Testing has shown that the approach can trap and exert a force that drives particles in the opposite direction of the light-beam source. In other words, the particulate matter is pulled back along the entire beam of light. Unlike the optical vortex method, this technique relies solely on electromagnetic effects and could operate in a space vacuum, making it ideal for studying the composition of materials on one of the airless planetary moons, for example.

Bessel beams – This technique exists only on paper and has never been demonstrated in the laboratory. Normal laser beams when shined against a wall appear as a small point, but with Bessel beams, rings of light surround the central dot. In other words, when seen straight on, the Bessel beam looks like the ripples surrounding a pebble dropped in a pond. According to theory, the laser beam could induce electric and magnetic fields in the path of an object. The spray of light scattered forward by these fields could pull the object backward, against the movement of the beam itself.

“We want to make sure we thoroughly understand these methods. We have hope that one of these will work for our purposes,” said team member Barry Coyle at NASA’s Goddard Space Flight Center. “Once we select a technique, we will be in position to then formulate a possible system and compete for additional NASA Innovative Advanced Concepts (NIAC) funding to advance the technology to the next level of development.”

“We’re at the starting gate on this,” Coyle added. “This is a new application that no one has claimed yet.”

Wednesday, December 14, 2011

Summary: MIT architects have produced the first prototype “Pinwheel House” in an effort to see if low-cost homes can be constructed for $1,000, total.

The brainpower at MIT has been harnessed to help improve housing conditions for the billions of people living in poor rural conditions on less than $1 per day.

The first prototype from the Institute’s “1K House” project–an effort launched in 2009 to see if low-cost homes can be constructed for $1,000–has been constructed in Mianyang, in Sichuan Province, China, an area ravaged by the 2008 earthquake.

Pinwheel House interior. Credit: Ying Chee Chui

Pinwheel House is a modular dwelling made of two natural materials, earth block and bamboo, and is easily assembled from interlocking rectangular room units that surround a central courtyard space.

It was designed by Ying Chee Chui, a graduate of MIT’s Department of Architecture and currently an architectural practitioner in New York City.

“The module can be duplicated and rotated, and then it becomes a house,” Chui says. “The construction is easy enough, because if you know how to build a single module, you can build the whole house.”

The idea of a $1,000 home, inspired by One Laptop Per Child, was first conceived as a design challenge by Tony Ciochetti, who chairs MIT’s Center for Real Estate.

Chui’s house is one of 13 plans that emerged from the first 1K House design studio. It features hollow brick walls with steel bars for reinforcement, wooden box beams, and is intended to withstand a magnitude 8.0 earthquake. The first prototype measured 800 square feet and turned out to be more costly, at $5,925, but still very inexpensive in relative terms.

A 500-square-foot version of the house could be built for about $4,000, according to Chui, and for still less if a large number of the homes were built at once. Nonetheless, Yung Ho Chang, a professor of architectural design at MIT who helped oversee the 2009 1K House design studio, thinks the prototype has fulfilled the promise of Chui’s design. “The house Chee built has good ventilation and good light,” Chang says.
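The figures quoted above are worth a quick sanity check, because they reveal something the article only implies: the smaller design actually costs slightly more per square foot, so economies of scale, not shrinking the floor plan, are the path toward the $1,000 target.

```python
# Cost-per-square-foot check using the figures quoted in the article.
prototype = 5925 / 800   # 800 sq ft prototype built for $5,925
smaller   = 4000 / 500   # projected 500 sq ft version at $4,000

print(f"prototype: ${prototype:.2f}/sq ft, smaller version: ${smaller:.2f}/sq ft")
# The smaller design costs a bit more per square foot, which is why
# building many homes at once matters for reaching the $1,000 goal.
```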

But plenty of hurdles remain before any home can be manufactured for $1,000 or less. “If it were easy, somebody would have done it,” Ciochetti says.

The house is made of 13.8-square-meter modules. The assembly method is the same for each unit; if you know how to build one module, you know how to build them all. Credit: MIT

At any rate, the project’s success has spawned a related effort aimed at Japan: a new MIT design studio is working on a home that would cost $10,000 to build. It would provide housing for victims of natural disasters, such as the earthquake and tsunami that struck northern Japan last March.

Ultimately, convening further studios in the vein of the 1K House project will allow more designs to move from the drawing board and onto solid ground, according to Chang. “The inexpensive laptop got to be more than an idea, it became available for children. I hope one day we’ll be in the same position.”

Summary: Computer scientists at Saarland University have developed a wireless bicycle brake that is 99.999999999997 percent safe.

Computer scientist Holger Hermanns with the wireless bicycle brake. Credit: Saarland University

A day in the life with wireless technologies is sprinkled with connectivity hiccups. Bluetooth keyboards momentarily disconnect, mobile calls drop and WiFi networks unexpectedly go dark.

Given this reality, consider the idea of accelerating down a steep hill on a bicycle with a wireless braking system. Would you trust it?

Now what if the system was designed by German computer scientists, tested with equipment used in control systems for aircraft and chemical factories, and worked with 99.999999999997 percent reliability?

That’s exactly what a group at Saarland University demonstrated with a wireless brake installed on a cruiser bicycle.

The bike does away with the brake lever on the handlebars and the cable snaking down the frame; instead it has a rubber handle that only needs to be squeezed, plus some electronics mounted on the handlebar and fork (the part that attaches the wheel to the frame). The tighter a rider squeezes the handle, the harder the disk brake presses on the wheel to slow the bike.

According to Professor Holger Hermanns, who holds the chair of Dependable Systems and Software at Saarland, the system is not perfect but “acceptable,” registering three failures out of a trillion braking attempts.

“Wireless networks are never a fail-safe method. That’s a fact that’s based on a technological background. Nonetheless, the trend is to set up wireless systems that, like a simple bicycle brake, have to function all the time,” he said.

The wireless connection between sender and receiver uses TDMA, MyriaNed wireless nodes, and the 2.4 GHz ISM band. It takes roughly 250 milliseconds for the cruiser bike to brake once a rider squeezes the rubber grip (150 ms of which is wireless communication between the components).

The brake is engaged when the pressure sensor crosses a specified threshold and activates a sender. The sender, contained within a blue plastic box attached to the handlebar, transmits radio signals to a receiver attached at the end of the bicycle’s fork. The receiver forwards the signal to an actuator, which transforms the radio signal into the mechanical power that activates the disk brake.

To give the system a reliability boost, additional senders attached to the bicycle repeatedly send the same signal. In this way, Hermanns and his team of scientists hope to ensure that the signal arrives at the receiver in time, even if the connection causes a delay or fails. They note that simply increasing the number of senders does not result in increased reliability. “If it is not configured correctly, it is possible that three out of five braking attempts fail,” Hermanns said.
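An idealized model shows why redundant senders help so dramatically, and why the independence assumption matters. If each transmission were lost independently with probability q, the chance that all n copies fail drops exponentially with n. The 10% per-copy loss rate below is an illustrative assumption, not a figure from the Saarland study, and the team's warning about misconfiguration is exactly the point where this model breaks: correlated losses make real systems far worse than q to the power n.

```python
# Idealized reliability of redundant senders: if each transmission is lost
# independently with probability q, all n copies fail with probability q**n.
# (The Saarland team's caveat applies: real radio losses are often
# correlated, so a badly configured system can be far worse than this.)
def all_copies_lost(q: float, n: int) -> float:
    return q ** n

for n in (1, 3, 6, 12):
    print(f"{n:2d} redundant sends -> failure probability {all_copies_lost(0.1, n):.0e}")
# Under the independence assumption, a dozen sends at a 10% per-copy loss
# rate already reach the one-failure-in-a-trillion regime the article cites.
```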

The functionality can be further improved with an integrated anti-lock braking system and traction control, and that would only take a few adjustments, according to Hermanns.

The next step is for the scientists to bring their wireless bicycle brake concept to bicycle brake manufacturers and find engineers who will help realize it.

If wireless bicycle brakes take off, similar technology could be applied to derailleur systems on geared bicycles. But before most bicyclists give it serious consideration, the electronics and power supply would have to deliver performance comparable to or better than the status quo, and their weight and size would have to match or beat those of cable-controlled components.

Tuesday, December 13, 2011

Summary: Rice University engineers have developed technology that allows wireless devices like cell phones and tablet PCs to both “talk” and “listen” to wireless cell towers on the same frequency, paving the way for 5G networks.

Rice University engineering researchers have demonstrated a new device that could allow wireless phone companies to double throughput on their networks without adding a single cell tower, and they’ve shown that it could work on a real network.

Current wireless technologies rely on two frequencies to send and to listen. Full-duplex allows communication in both directions simultaneously, such as in land-line telephone networks. Long thought impossible for wireless networks, Rice’s team overcame the full-duplex hurdle by employing an extra antenna and some computing tricks.
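The article doesn't disclose Rice's actual signal-processing method, but the core obstacle in full-duplex can be sketched generically: the radio must hear a faint remote signal underneath its own vastly louder transmission. Since it knows exactly what it transmitted, it can estimate the leakage and subtract it. Everything below is an illustrative toy, with arbitrary gain and signal levels, not Rice's algorithm.

```python
import numpy as np

# Toy self-interference cancellation (a generic illustration, not Rice's
# method). A full-duplex radio knows its own transmitted signal x, so it
# can estimate how strongly x leaks into its receive chain and subtract
# that copy, leaving the much weaker remote signal r.
rng = np.random.default_rng(0)
n = 100_000
x = rng.standard_normal(n)          # our own transmission (known)
r = 0.01 * rng.standard_normal(n)   # remote signal, ~40 dB weaker
y = 10.0 * x + r                    # received: strong self-interference + r

h_hat = np.dot(x, y) / np.dot(x, x) # least-squares estimate of leakage gain
recovered = y - h_hat * x           # cancel our own signal

print(f"correlation with remote signal: {np.corrcoef(recovered, r)[0, 1]:.3f}")
```

In hardware the problem is far harder than this one-tap model: the leakage is frequency-selective and can saturate the receiver before any digital subtraction, which is why Rice's extra antenna and analog tricks matter.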

“Our solution requires minimal new hardware, both for mobile devices and for networks, which is why we’ve attracted the attention of just about every wireless company in the world,” said Ashutosh Sabharwal, professor of electrical and computer engineering at Rice. “The bigger change will be developing new wireless standards for full-duplex. I expect people may start seeing this when carriers upgrade to 4.5G or 5G networks in just a few years.”

As I reported last February, Stanford researchers have also developed a system that allows wireless signals to be sent and received simultaneously on a single channel, but Rice has taken it a step further with a demo (see paper) that produced a signal quality about 10 times better than any previously published result.

Jade Boyd, associate director and science editor at Rice, told me over email: “We’re also the first to demo asynchronous full-duplex. Our people have published the first experimental work on full-duplex with directional antennas, and they’ve offered a theoretical analysis to explain their experimental results.”

While Rice and Stanford teams are attacking the same problem and using the same research platform, WARP (Wireless Open Access Research Platform–an open-source development platform developed by Dr. Sabharwal’s group a few years ago), they’re using different technologies. For instance, Rice’s technology would allow wireless device makers to add full duplex as an additional mode on existing hardware by repurposing most of the components that are already used in current systems. “I believe that’s also a first — and a key one for device makers,” said Boyd.

“Device makers love this because real estate inside mobile devices is at a premium, and it means they don’t have to add new hardware that only supports full-duplex,” said Sabharwal.

In the video below, you can learn more about the full-duplex test device and the technology behind the breakthrough:

Summary: A Belgian visualization company has unveiled a fully immersive 360-degree flight simulator designed to reproduce reality exactly as a pilot sees it.

The line between flight simulation and the real thing has been further blurred.

Barco’s RP-360 dome is said to be the first rear-projection flight simulator to fully immerse pilots in training with an unobstructed 360-degree view of the world as they conduct virtual missions.

The dome is powered by an array of 13 high-definition projectors which cast images onto the outside of an acrylic sphere which measures 3.4 meters in diameter. The trainee pilot sits on the inside looking at the inner surface freely in all directions, just as in a cockpit.

The 10-megapixel projectors help keep costs down with liquid-crystal-on-silicon (LCoS) technology, which typically provides higher resolution and contrast than LCD and plasma displays.

Depending on configuration, the system’s resolution (up to 2.9 arcmin/OLP) comes close to the limits of 20/20 human visual acuity.

“It’s not an improvement, it’s a new generation of simulators,” Geert Matthys, research and development manager at the company, told Reuters.

“If a pilot has a cockpit where he can see 360 degrees, he also needs to be trained in a system which supplies 360 degrees, all deviation from real life can be dangerous,” said Matthys.

Lasers are used to line up the 10-megapixel projectors so that the different projected images are perfectly aligned, leaving no visible seams in the field of view.

Barco engineers heightened the realism by replicating the exact contrast a pilot sees, limiting brightness so the image does not throw too much light onto the darker areas.

Pilots can even wear night vision goggles in the simulator and see true-to-life halo and blooming effects that occur in night operations.

Barco’s goal is to help reduce training costs by bringing more training tasks to ground-based training systems.

Elbit Systems bought the first dome, which will be used by the Israeli Air Force once fully operational in 2012.

A two and a half minute video describing the RP-360 dome is available here.

Related:

Top three Star Trek-style holodeck experiences
‘Pixel’ covered tank blends into its surroundings

Monday, December 12, 2011

Summary: The first nuclear power plant being considered for production of electricity for manned or unmanned bases on the Moon, Mars and other planets “may really look like it came from outer space.”

On earth, nuclear reactors are under attack because of concerns over damage caused by natural disasters. In space, however, nuclear technology may get a new lease on life.

Plans for the first nuclear power plant for the production of electricity for manned or unmanned bases on the Moon, Mars and other planets were unveiled today at the 242nd National Meeting & Exposition of the American Chemical Society (ACS).

James E. Werner, the project leader at the U.S. Department of Energy (DOE), said that innovative fission technology for surface power applications is far different from the familiar terrestrial nuclear power stations, which sprawl over huge tracts of land and have cooling towers and other large structures.

An artist’s concept of a fission surface power system on the surface of the Moon. Credit: Galaxy Wire

A fission reactor itself is about 1.5 feet wide by 2.5 feet high, roughly the size of a carry-on suitcase, according to Werner. And there are no cooling towers.

“A fission power system is a compact, reliable, safe system that may be critical to the establishment of outposts or habitats on other planets. Fission power technology can be applied on Earth’s Moon, on Mars, or wherever NASA sees the need for continuous power,” said Werner.

Nuclear fission power in space is actually old news. In 1965, the U.S. launched SNAP-10A, a 45 kWt nuclear fission reactor that produced 650 watts of electricity using a thermoelectric converter. (It operated for 43 days before it was shut down due to a satellite malfunction, but it remains in orbit today.)
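The SNAP-10A figures quoted above imply a strikingly low conversion efficiency, which is typical of thermoelectric converters and helps explain the interest in more modern power-conversion hardware for a fission surface power system.

```python
# Conversion efficiency implied by the SNAP-10A figures quoted above:
# 650 W of electricity from a 45 kWt (45,000 W thermal) reactor.
thermal_w = 45_000
electric_w = 650

efficiency = electric_w / thermal_w
print(f"thermoelectric conversion efficiency: {efficiency:.1%}")  # about 1.4%
```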

Nuclear fission works by splitting uranium atoms to generate heat that is then converted into electric power. A fission power system contains components that are similar to those found in the commercial reactors currently in use: a heat source, power conversion, heat rejection and power conditioning and distribution. For space applications, however, nuclear fission features a number of differences compared with commercial reactors.

“While the physics are the same, the low power levels, control of the reactor and the material used for neutron reflection back into the core are completely different,” Werner said. “Weight is also a significant factor that must be minimized in a space reactor that is not considered in a commercial reactor.”

Sunlight and fuel cells were traditionally the mainstays for generating electricity for space missions, but engineers realized that solar energy has limitations. Solar cells do a great job supplying electricity in near-Earth orbits and for satellite-borne equipment, but nuclear power offers some unique capabilities that could support manned outposts on other planets or moons.

Werner explains:

The biggest difference between solar and nuclear reactors is that nuclear reactors can produce power in any environment. Fission power technology doesn’t rely on sunlight, making it able to produce large, steady amounts of power at night or in harsh environments like those found on the Moon or Mars. A fission power system on the Moon could generate 40 kilowatts or more of electric power, approximately the same amount of energy needed to power eight houses on Earth. Nuclear power has the ability to provide a power-rich environment to the astronauts or science packages anywhere in our solar system and that this technology is mature, affordable and safe to use.

Werner contends that once the technology is developed and validated, it may prove to be one of the most affordable and versatile options for providing long-term base power for the space exploration programs.

The team is scheduled to build a technology demonstration unit in 2012.

The project is a collaboration between NASA and DOE.

Source: American Chemical Society

Related:

LCD screen harvests light to power devices
Wireless power from space: energy salvation?
Acts of space warfare likely by 2025

Summary: The skin-like wearable electronic interface developed at the University of Illinois opens up possibilities in the field of brain-computer interfaces well beyond biomedical applications.

Gartner’s newly released Hype Cycle for emerging technologies suggests it will take more than 10 years before brain-computer interfaces (BCI) go mainstream, but recent developments could shift the adoption curve.

As reported on sister site CNET (see gallery on SmartPlanet), engineers at the University of Illinois demonstrated a tattoo-like “device platform” with electronic components for medical diagnostics, communications and human-machine interfacing. It’s essentially a patch so thin and durable that it can be mounted to skin much like a temporary tattoo.

The device represents a paradigm shift in brain-computer interfacing because until now, most systems that used EEG (electroencephalography) required clunky headsets and plenty of accompanying electronic components and devices. This holds true for systems in R&D labs and those commercially available, such as from Emotiv and S.M.A.R.T. BrainGames.

BCIs in academia hold a lot of promise for those with impaired or limited brain or motor control. For instance, researchers at the University of Maryland recently created a “brain cap” that taps into a user’s neural network to control computers, robotic prosthetic limbs and motorized wheelchairs.

The BCI developed at the University of Illinois is no different in this respect, but it also opens up a slew of previously unimaginable possibilities in the field of brain-machine interfaces well beyond biomedical applications, said UC San Diego professor Todd Coleman. He explained in a news release:

The brain-machine interface paradigm is very exciting, and I think it need not be limited to thinking about prosthetics or people with some type of motor deficit. I think taking the lens of the human and computer interacting, and if you could evolve a very nice coupling that is remarkably natural and almost ubiquitous, I think there are applications that we haven’t even imagined. That is what really fascinates me — really the coupling between the biological system and the computer system.

Coleman and his researchers helped to create the wearable device’s circuit design and signal processing, and used it to enable a person to control a computer game with muscles in his throat by speaking the commands. They’re now exploring what other capabilities could be achieved, such as enhancing a group’s ability to work as a team by simultaneously acquiring all of the members’ neural signals and coupling them with a computer.

One of the advantages of using EEG-powered brain-computer interfaces is that they allow for non-invasive interfacing, meaning you do not need to make an implant directly into the brain. But there are drawbacks to a system primarily built on pattern recognition. ExtremeTech’s Sebastian Anthony lays it out well:

The problem with EEGs (and with any “mind reading” interface) is that we can’t actually understand the brain. We can see various neurons firing and measure the electrical signals and waves that they produce, but we don’t know what they mean. For these systems to work, then, the computer controllers must be trained to recognize the electrical state of the brain when it’s trying to brake or shift gears. In other words, basic cryptanalysis/linguistic analysis is used: if a tribal pygmy always greets you with a smile, a wave, and an unknown grunt, you can assume that the grunt will mean “hello.” It’s a brute force way of understanding a system, but if you only know the most basic rules of how the system works, that’s all you can do.
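The "brute force" approach Anthony describes can be sketched in a few lines: never decode what the neurons mean, just learn which measured electrical state accompanies each intended command. The features, commands, and cluster parameters below are synthetic stand-ins for real EEG band-power measurements, and the nearest-centroid classifier is one of the simplest possible pattern recognizers, not what any particular BCI product uses.

```python
import numpy as np

# Sketch of EEG pattern recognition: memorize the average electrical
# state per command during training, then label new measurements by
# whichever memorized state they most resemble. All data is synthetic.
rng = np.random.default_rng(42)

def make_trials(center, n=50):
    """Simulated feature vectors (e.g., per-channel band power) for one command."""
    return center + 0.3 * rng.standard_normal((n, 2))

centers = {"brake": np.array([1.0, 0.0]), "shift": np.array([0.0, 1.0])}
train = {cmd: make_trials(c) for cmd, c in centers.items()}

# "Training" = memorize the mean electrical state for each command.
centroids = {cmd: trials.mean(axis=0) for cmd, trials in train.items()}

def classify(sample):
    return min(centroids, key=lambda cmd: np.linalg.norm(sample - centroids[cmd]))

# Classify fresh simulated trials and measure accuracy.
correct = sum(classify(s) == cmd
              for cmd, c in centers.items()
              for s in make_trials(c, n=20))
print(f"accuracy: {correct / 40:.0%}")
```

The grunt-means-hello analogy maps directly onto this code: the centroid is the remembered grunt, and classification is just asking which remembered grunt the new one sounds most like.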

Brute it may be, but BCI technology is continually improving in both reliability and functionality. One day, controlling most aspects of your environment with thought alone may not be as far-fetched as it seems right now. Coleman said:

If you think about the advances that are being made in artificial hips and rehabilitation and the fact that people are living longer, it is no longer the case that your body is giving up before your mind. It’s going to be increasingly the case that we need to think about fixing minds along with fixing bodies.

Sunday, December 11, 2011

Summary: Japanese researchers have developed a robot that can lift a patient up to 80kg (176 lbs) off the floor and onto a wheelchair, charting a path for high-quality care for its growing elderly population.

Researchers in Japan have unveiled a robot that can lift a patient up to 80kg (176 lbs) off the floor and onto a wheelchair.

Developed by a joint team at RIKEN and Tokai Rubber Industries (TRI), the robot, nicknamed RIBA (Robot for Interactive Body Assistance) 2, uses high-precision tactile sensors and flexible motor controls to gently lift and transport patients from the floor or bed onto a wheelchair and vice versa. A human care-giver is still required to monitor and aid with embarking and disembarking the robot.

RIBA (Credit: RIKEN- TRI)

The robot frees care facility personnel of one of the most difficult and energy-consuming tasks that they are faced with roughly 40 times per day.

Shaped like a teddy bear, RIBA 2 is soft to the touch and responds to voice commands. It is more capable than its predecessor, RIBA (2009), which was the first robot able to lift a 61kg (134 lbs) patient from a bed to a wheelchair and back, but not from the floor.

RIBA 2 has new joints in its base and lower back to enable it to crouch down and lift a patient off a futon at floor level, the most physically strenuous task for care-givers. To accomplish this, it uses the first capacitance-type tactile sensors made entirely of rubber, say the researchers. These “Smart Rubber” sensors are printed in sheets and fitted onto the robot’s arms and chest to provide high-precision tactile guidance and allow the robot to quickly detect a person’s weight from touch alone.

Next for the team is to partner with nursing care facilities to test RIBA 2 and further tailor it to the needs of care-givers and their patients. They will also develop new applications in areas such as rehabilitation and take steps toward commercialization.

With an elderly population in need of nursing care projected to reach a staggering 5.69 million by 2015, Japan faces an urgent need for new approaches to assist care-giving personnel, giving RIBA 2 and robots like it a promising future.

Summary: In this interview, author Ramez Naam provides a sneak peek into his upcoming book: The Infinite Resource - Growing Prosperity While Reducing Impact on the Earth.

Ramez Naam is not your everyday computer geek. The ex-Microsoft employee holds a seat on the advisory board of the Acceleration Studies Foundation, is a Senior Associate of the Foresight Institute, and a fellow of the Institute for Ethics and Emerging Technologies.

Naam is also the author of More Than Human: Embracing the promise of biological enhancement, an analysis of how scientific development over the next decade will make humans stronger, smarter, and longer-lived.

His next book, due out in 2012, looks at the current intersection of human civilization, prosperity, and natural resources and how it is possible for us to prosper like never before but with less environmental impact in the future. I caught up with Ramez over an email interview today:

In your upcoming book, The Infinite Resource – Growing Prosperity While Reducing Impact on the Earth, you point to knowledge as the path to a prosperous future. What inspired you to pick this theme?

RN: The book is really the intersection of two lines of inquiry. The first is the state of the environment and our natural resources. We’re simultaneously facing climate change and peak oil, ocean overfishing and fresh water shortages. As someone who cares about the future, I wanted to understand those challenges for myself.

The second is about innovation and its relationship to resource use and prosperity. I come from a tech background, so I’m used to the incredible onward march of Moore’s Law. But I was surprised to discover that something like Moore’s Law operates in solar energy. In the last 30 years, the price of electricity from solar photovoltaic cells has dropped by more than a factor of 10. This decade, it’ll drop below the price of electricity from coal-fired plants – the current cheapest. In 20 years, if the trend continues, it’ll be half the price of electricity from coal-fired plants.

The driving force behind the reduction in solar energy prices is innovation. Scientists and engineers in the area keep coming up with new ways to make solar cells cheaper, thinner, lighter, and more efficient. That’s an accumulation of knowledge that has the promise to help us offset the depletion of a physical resource – oil.
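A quick back-of-the-envelope check of the trend Naam describes (illustrative arithmetic only): a factor-of-ten price drop over 30 years corresponds to a compound annual decline of about 7.4 percent, which, if it held, would cut prices by a further factor of roughly 4.6 over the next 20 years.

```python
# Back-of-the-envelope check of the solar price trend described above:
# a 10x price drop over 30 years implies a steady annual decline rate r
# satisfying (1 - r)**30 = 1/10.

annual_decline = 1 - (1 / 10) ** (1 / 30)   # about 7.4% per year

# If the same rate held for 20 more years, prices would fall to this
# fraction of today's level (roughly 0.22, i.e. a further ~4.6x drop):
further_drop = (1 - annual_decline) ** 20

print(f"Implied annual decline: {annual_decline:.1%}")
print(f"Relative price after 20 more years: {further_drop:.2f}")
```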

That intersection led me to view our knowledge base itself as a resource. And as a resource, knowledge plays by different rules that make it incredibly powerful. Unlike physical resources like oil, our stockpile of useful ideas and engineering designs and insights into the laws of nature keeps growing. Ideas don’t get destroyed or consumed in usage. If I have a piece of knowledge and I share it with you, I don’t have to give it up myself – its impact gets multiplied by the number of holders. And best of all, the right knowledge can substitute for or multiply just about any other resource – energy, labor, materials, land, even time.

So once I understood that, I started seeing these patterns everywhere. Farming technology has reduced the amount of land it takes to feed a person from 10 million square meters to just over 1,000 square meters. Making steel from iron takes a tenth the energy it once did. Even in computing – not only are we getting more computing power per chip, we’re getting more computing power per amount of energy we use. If we went back to the efficiency of the first computers in the 1940s, then an iPhone would draw as much power as the state of California. Innovation – our accumulation of useful bits of knowledge – has substituted for natural resources in all those cases.

If we have any hope of coming out of the next few decades better off than we started, then it rests in our rate of innovation.

You argue that the cost of market externalities should be factored into the economy (e.g., carbon tax). What are some example technologies that have the greatest potential to manage or reduce them?

RN: That’s right. The market – which is this incredibly smart algorithm – gets garbage values as inputs when it comes to things like the value of a stable climate, the negative value of CO2 in the atmosphere, the value of acidification of the oceans. The market can only optimize variables that have prices attached, and none of those do.

Plenty of technologies exist that can mitigate our negative impact on climate, and along the way buy us energy security, cut off funding for middle eastern dictatorships, and cushion us against peak oil. Efficiency is a huge untapped opportunity. We know how to cut American energy consumption by half without any significant impact on our lifestyle. Solar energy is another huge potential. The sun hits the Earth with 6,000 times as much energy as all of human civilization uses, from all sources combined. In just 88 minutes, the Earth gets as much energy from the sun as humanity uses in a year. So there is a huge untapped energy potential there, larger than anything we know of except the dream of fusion. And, if we have to, there are ways to capture CO2 from the atmosphere and sequester it so it’s no longer affecting climate.

But because we don’t ask consumers and businesses to pay the full cost of burning carbon, they’re not properly incentivized to pursue those opportunities. If we had a price for carbon emissions that captured their full cost to the world, you’d see a huge rush of consumers towards greater efficiency, and of entrepreneurs and investors to the areas I just cited. Perversely, world governments actually subsidize the release of CO2 and other greenhouse gases to the tune of more than $500 billion a year. Our policies are actually headed in the wrong direction, accelerating both peak oil and climate change. We need to reverse course there.
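The two solar figures Naam cites are mutually consistent: delivering a year's worth of human energy use every 88 minutes implies a solar-to-civilization power ratio of about 6,000. A quick check:

```python
# Cross-checking the two solar figures: if the Earth receives a year's
# worth of humanity's energy use from the sun every 88 minutes, the
# sun's input exceeds civilization's power draw by this factor:

minutes_per_year = 365.25 * 24 * 60    # about 525,960 minutes
implied_ratio = minutes_per_year / 88  # about 5,977 -- i.e. ~6,000x

print(f"Implied solar-to-civilization ratio: {implied_ratio:,.0f}")
```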

Current models show global population eventually plateauing, so wouldn’t current levels of growth and prosperity be sufficient?

RN: There’s a perception that population growth is going to apply tremendous environmental pressure, but it’s actually a smaller factor than affluence. The world’s population is only going to grow another 35% or so before it peaks, compared to the 200% growth that it went through in the 20th century. But most of the people alive today, in China, in India, in Africa, are incredibly poor. Growing their affluence means using more energy, more water, more rare earth elements. That is really the driver in consuming more resources.

Fortunately, it looks like we have plenty of headroom for that, if we innovate quickly enough.

Climate contradictions fueled by ideology vs. fact have left many of us confused. How can we know what to actually believe?

RN: I’d encourage people to look into it for themselves. Go look at the pictures of the glaciers retreating on Kilimanjaro. Go look at the extent of Arctic sea ice – the lowest it’s ever been at the North Pole. The planet is undeniably warming. Are humans to blame? Everything tells us yes. We know CO2 and methane trap heat in the atmosphere. We know that we’ve released a tremendous amount of them into the atmosphere by burning fossil fuels. And we know that every time in ancient history that CO2 levels were this high, the planet was incredibly warm, much warmer than it is today.

There are always uncertainties in future predictions. There are always year-to-year and place-to-place variations in weather. But, averaged across the planet, things are warming, that warming is accelerating, and it’s clear that the fossil fuels we’ve emitted and which we continue to emit in record amounts are the primary culprit.

As a transhumanist, you support the idea of human super-intelligence and post-biological evolution. How will this trend impact our relationship with the natural environment and our dependency on it?

RN: I do think we’ll see increasingly good ways to improve on human intelligence. In a sense, we see that already. The next generation chips from Intel or the next generation solar cells from First Solar are designed and built by teams of people networked together and assisted by software. Their brainpower is effectively enhanced.

The smarter we get as a civilization, the more power we have to out-innovate our problems. If we look at the basic physical limits of the resources around us – the amount of energy, land, food we could grow, water, and so on – there’s no reason that we couldn’t have 10 billion people on this Earth living in incredible prosperity – far beyond what people in America enjoy today – with dramatically less impact on the planet. So that’s my hope, that as we get smarter, we use that intelligence to progressively improve on the efficiency of our technologies, as we have done throughout history, and reduce the negative impact we have on this planet and the millions of other species we share it with.

Saturday, December 10, 2011

Summary: Researchers in Cleveland, Ohio have built an artificial lung that reaches functional parity with a human lung. The device uses oxygen sourced from the air rather than pure oxygen as current man-made lungs require.

Large mechanical ventilators in which blood from the patient is circulated through a machine to oxygenate it could one day be a thing of the past.

Researchers in Cleveland, Ohio, have built an artificial lung that reaches functional parity with a human lung. The device uses oxygen sourced from the air rather than pure oxygen as current man-made lungs require.

The artificial lung is a major step toward creating an easily portable and implantable artificial lung, said Joe Potkay, a research assistant professor in electrical engineering and computer science at Case Western Reserve University.

Potkay and his team fashioned microfluidic channels from an organic silicon polymer (PDMS) and created ever smaller channels until the rubber tubes measured less than one-fourth the diameter of a human hair, similar to the arteries and capillaries in a real lung. They added blood and air flow inlets and outlets and coated all the channels in a gas exchange membrane.

By following the design of the natural lung and making the parts on the same scale, the researchers were able to create a very large surface-area-to-volume ratio and shrink the distances for gas diffusion compared to the current state of the art.
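The geometric payoff of shrinking the channels is straightforward: for a cylindrical channel, surface area per unit volume works out to 4/d, so it grows inversely with diameter. A small sketch (the diameters here are illustrative, not the paper's exact dimensions):

```python
def sa_to_volume_ratio(diameter_m):
    """Lateral surface area divided by volume for a cylindrical channel:
    (pi * d * L) / (pi * (d/2)**2 * L) = 4 / d. Halving the diameter
    doubles the ratio, so capillary-scale channels expose far more
    membrane area per unit of blood."""
    return 4 / diameter_m

# Illustrative comparison: a 1 mm tube vs. a 20-micron channel, roughly
# a quarter of a human hair's diameter, as described above.
for d in (1e-3, 20e-6):
    print(f"d = {d * 1e6:7.1f} um -> SA/V = {sa_to_volume_ratio(d):,.0f} per meter")
```

The 50x jump in the ratio between those two diameters is the kind of gain that lets the device exchange gas efficiently enough to use plain air.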

Tests using pig blood show the device’s oxygen exchange efficiency is three to five times better than that of current artificial lungs, which enables it to use plain air instead of pure oxygen as the ventilating gas. Current artificial lung systems require heavy tanks and can only be used on patients at rest due to their inefficient oxygen exchange.

“Based on current device performance, we estimate that a unit that could be used in humans would be about 6 inches by 6 inches by 4 inches tall, or about the volume of the human lung. In addition, the device could be driven by the heart and would not require a mechanical pump,” Potkay said.

Not envisioned as a permanent transplant, the device would buy patients time while their own diseased lungs healed, or could act as a bridge while awaiting a lung transplant – a wait that lasts, on average, more than a year.

Next for the team is to develop a coating to prevent clogging in the narrow artificial capillaries, and eventually, create a durable artificial lung large enough to test in rodent models of lung disease.

Within a decade, Potkay and his co-workers expect to have human-scale artificial lungs in use in clinical trials.

Potkay is the lead author of the paper describing the device and research, in the journal “Lab on a Chip.”

Summary: Microsoft’s Office Division has posted a new concept video that illustrates what productivity may look like in 5 to 10 years.

In the future, devices like smartphones and tablets will reach the pinnacle of the “less, but better” ethos popularized by the design icon Dieter Rams, and touch displays will be embedded into everyday objects so that all you’re left with is information that is both tangible and contextually aware.

That’s the essence of Microsoft’s new video depicting three people (at home, at work, and on-the-go) as they go about their lives immersed in a productivity utopia.

Kurt DelBene, President, Microsoft Office Division, says the reason for creating the video is to show people how technology that is available today or in research will transform from a passive tool to a more active assistant to help us “manage our time better, focus our attention on the most important things, and foster meaningful connections with the people we care about.”

The big question: Which vendors will we actually find under the hood for the envisioned capabilities if they do materialize this way?