Tag Archives: Defense Advanced Research Projects Agency

I received a March 17, 2017 Woodrow Wilson International Center for Scholars notice (via email) about this upcoming event,

The Imagineers of War: The Untold Story of DARPA [Defense Advanced Research Projects Agency], the Pentagon Agency That Changed the World

There will be a webcast of this event.

In The Imagineers of War, Weinberger gives us a definitive history of the agency that has quietly shaped war and technology for nearly 60 years. Founded in 1958 in response to the launch of Sputnik, DARPA’s original mission was to create “the unimagined weapons of the future.” Over the decades, DARPA has been responsible for countless inventions and technologies that extend well beyond military technology.

Weinberger has interviewed more than one hundred former Pentagon officials and scientists involved in DARPA’s projects—many of whom have never spoken publicly about their work with the agency—and pored over countless declassified records from archives around the country, documents obtained under the Freedom of Information Act, and exclusive materials provided by sources. The Imagineers of War is a compelling and groundbreaking history in which science, technology, and politics collide.

As a point of interest, the Wilson Center (also known as the Woodrow Wilson International Center for Scholars) is one of the independent agencies slated to be defunded in the 2017 US budget as proposed by President Donald Trump according to a March 16, 2017 article by Elaine Godfrey for The Atlantic.

A living, breathing supercomputer is a bit mind-boggling, but scientists at McGill University (Canada) and their international colleagues have created a working model according to a Feb. 26, 2016 McGill University news release on EurekAlert (and received via email), Note: A link has been removed,

The substance that provides energy to all the cells in our bodies, adenosine triphosphate (ATP), may also be able to power the next generation of supercomputers. That is what an international team of researchers led by Prof. Nicolau, Chair of the Department of Bioengineering at McGill, believes. Earlier this week they published an article on the subject in the Proceedings of the National Academy of Sciences (PNAS), in which they describe a model of a biological computer they have created that is able to process information quickly and accurately using parallel networks, in the same way that massive electronic supercomputers do.

Except that the model bio supercomputer they have created is a whole lot smaller than current supercomputers, uses much less energy, and uses proteins present in all living cells to function.

Doodling on the back of an envelope

“We’ve managed to create a very complex network in a very small area,” says Dan Nicolau, Sr., with a laugh. He began working on the idea with his son, Dan Jr., more than a decade ago and was then joined by colleagues from Germany, Sweden and The Netherlands, some 7 years ago [there is also one collaborator from the US according to the journal’s [PNAS] list of author affiliations, read on for the link to the paper]. “This started as a back of an envelope idea, after too much rum I think, with drawings of what looked like small worms exploring mazes.”

The model bio-supercomputer that the Nicolaus (father and son) and their colleagues have created came about thanks to a combination of geometrical modelling and engineering knowhow (on the nano scale). It is a first step in showing that this kind of biological supercomputer can actually work.

The circuit the researchers have created looks a bit like a road map of a busy and very organized city as seen from a plane. Just as in a city, cars and trucks of different sizes, powered by motors of different kinds, navigate through channels that have been created for them, consuming the fuel they need to keep moving.

More sustainable computing

But in the case of the biocomputer, the city is a chip measuring about 1.5 cm square in which channels have been etched. Instead of the electrons that are propelled by an electrical charge and move around within a traditional microchip, short strings of proteins (which the researchers call biological agents) travel around the circuit in a controlled way, their movements powered by ATP, the chemical that is, in some ways, the juice of life for everything from plants to politicians.

Because it is run by biological agents, and as a result hardly heats up at all, the model bio-supercomputer that the researchers have developed uses far less energy than standard electronic supercomputers do, making it more sustainable. Traditional supercomputers use so much electricity that they heat up a lot and then need to be cooled down, often requiring their own power plant to function.

Moving from model to reality

Although the model bio supercomputer was able to very efficiently tackle a complex classical mathematical problem by using parallel computing of the kind used by supercomputers, the researchers recognize that there is still a lot of work ahead to move from the model they have created to a full-scale functional computer.

“Now that this model exists as a way of successfully dealing with a single problem, there are going to be many others who will follow up and try to push it further, using different biological agents, for example,” says Nicolau. “It’s hard to say how soon it will be before we see a full-scale bio-supercomputer. One option for dealing with larger and more complex problems may be to combine our device with a conventional computer to form a hybrid device. Right now we’re working on a variety of ways to push the research further.”

What was once the stuff of science fiction is now just science.

The funding for this project is interesting,

This research was funded by: The European Union Seventh Framework Programme; [US] Defense Advanced Research Projects Agency [DARPA]; NanoLund; The Miller Foundation; The Swedish Research Council; The Carl Trygger Foundation; the German Research Foundation; and by Linnaeus University.

The pioneering achievement was developed by researchers from the Technische Universität Dresden and the Max Planck Institute of Molecular Cell Biology and Genetics, Dresden in collaboration with international partners from Canada, England, Sweden, the US, and the Netherlands.

Conventional electronic computers have led to remarkable technological advances in the past decades, but their sequential nature (they process only one computational task at a time) prevents them from solving problems of a combinatorial nature, such as protein design and folding, and optimal network routing. This is because the number of calculations required to solve such problems grows exponentially with the size of the problem, rendering them intractable with sequential computing. Parallel-computing approaches can in principle tackle such problems, but the approaches developed so far have suffered from drawbacks that have made up-scaling and practical implementation very difficult. The recently reported parallel-computing approach aims to address these issues by combining well-established nanofabrication technology with molecular motors, which are highly energy efficient and inherently work in parallel.

In this approach, which the researchers demonstrate on a benchmark combinatorial problem that is notoriously hard to solve with sequential computers, the problem to be solved is ‘encoded’ into a network of nanoscale channels (Fig. 1a). This is done, on the one hand, by mathematically designing a geometrical network that is capable of representing the problem, and, on the other, by fabricating a physical network based on this design using lithography, a standard chip-manufacturing technique.

The network is then explored in parallel by many protein filaments (here actin filaments or microtubules) that are self-propelled by a molecular layer of motor proteins (here myosin or kinesin) covering the bottom of the channels (Fig. 3a). The design of the network automatically guides the filaments to the correct solutions to the problem (Fig. 1b). This is realized by two different types of junctions, which cause the filaments to behave in two different ways. As the filaments are rather rigid structures, turning to the left or right is only possible at certain angles of the crossing channels. By defining these options (‘split junctions’, Fig. 2a + 3b, and ‘pass junctions’, Fig. 2b + 3c), the scientists achieved an ‘intelligent’ network that gives each filament the option either to pass straight through or to choose between two possible channels with 50/50 probability.

The time to solve combinatorial problems of size N using this parallel-computing approach scales approximately as N², which is a dramatic improvement over the exponential (2^N) time scales required by conventional, sequential computers. Importantly, the approach is fully scalable with existing technologies and uses orders of magnitude less energy than conventional computers, thus circumventing the heating issues that are currently limiting the performance of conventional computing.
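The junction idea described above can be sketched in software. The benchmark in the paper was reportedly a small subset-sum-style instance; the sketch below (the specific element set is my own illustrative assumption, not taken from the paper) enumerates every path a filament could take through a column of junctions, where a split junction means "add this element to the running total" and a pass junction means "skip it." The exit positions are then exactly the achievable subset sums:

```python
from itertools import product

def exit_positions(elements):
    """Enumerate every filament path: at each split junction the filament
    either takes the element's diagonal (adding it to the running sum) or
    passes straight through. The exit column equals the chosen subset's sum."""
    exits = set()
    for choices in product((False, True), repeat=len(elements)):
        exits.add(sum(e for e, take in zip(elements, choices) if take))
    return exits

# Illustrative instance (the element set {2, 5, 9} is an assumption):
print(sorted(exit_positions([2, 5, 9])))  # -> [0, 2, 5, 7, 9, 11, 14, 16]
```

The point of the physical device, of course, is that it does not enumerate paths one at a time the way this loop does: many filaments explore the network simultaneously, which is where the N² versus 2^N advantage comes from.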

This research from Singapore could make neuroprosthetics and exoskeletons a little easier to manage as long as you don’t mind having a neural implant. From a Feb. 11, 2016 news item on ScienceDaily,

A versatile chip that offers multiple applications in various electronic devices, researchers report, suggests there is now hope that a low-powered, wireless neural implant may soon be a reality. Neural implants embedded in the brain can alleviate the debilitating symptoms of Parkinson’s disease or give paraplegic people the ability to move their prosthetic limbs.

Scientists at Nanyang Technological University, Singapore (NTU Singapore) have developed a small smart chip that can be paired with neural implants for efficient wireless transmission of brain signals.

However, they need to be connected by wires to an external device outside the body. For a prosthetic patient, the neural implant is connected to a computer that decodes the brain signals so the artificial limb can move.

These external wires are not only cumbersome, but the permanent openings that allow the wires into the brain increase the risk of infection.

The new chip by NTU scientists can allow the transmission of brain data wirelessly and with high accuracy.

Assistant Professor Arindam Basu from NTU’s School of Electrical and Electronic Engineering said the research team has tested the chip on data recorded from animal models, which showed that it could decode the brain’s signal to the hand and fingers with 95 per cent accuracy.

“What we have developed is a very versatile smart chip that can process data, analyse patterns and spot the difference,” explained Prof Basu.

“It is about a hundred times more efficient than current processing chips on the market. It will lead to more compact medical wearable devices, such as portable ECG monitoring devices and neural implants, since we no longer need large batteries to power them.”

Different from other wireless implants

To achieve high accuracy in decoding brain signals, implants require thousands of channels of raw data. To wirelessly transmit this large amount of data, more power is also needed which means either bigger batteries or more frequent recharging.

This is not feasible as there is limited space in the brain for implants while frequent recharging means the implants cannot be used for long-term recording of signals.

Current wireless implant prototypes thus suffer from a lack of accuracy as they lack the bandwidth to send out thousands of channels of raw data.

Instead of enlarging the power source to support the transmission of raw data, Asst Prof Basu tried to reduce the amount of data that needs to be transmitted.

Designed to be extremely power-efficient, NTU’s patented smart chip will analyse and decode the thousands of signals from the neural implants in the brain, before compressing the results and sending it wirelessly to a small external receiver.
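The press release doesn’t describe the chip’s actual algorithm, but the general data-reduction idea (analyse on the implant, transmit only compact results) can be illustrated with a toy threshold-crossing event detector. Everything in this sketch, names, threshold, and data alike, is hypothetical and not NTU’s method:

```python
def detect_events(samples, threshold):
    """Return the sample indices where the signal first crosses above the
    threshold -- a short event list instead of the full raw sample stream."""
    events = []
    above = False  # track whether we are currently above threshold
    for i, value in enumerate(samples):
        if value >= threshold and not above:
            events.append(i)   # record only the first crossing of each burst
            above = True
        elif value < threshold:
            above = False
    return events

raw = [0.1, 0.2, 1.5, 1.7, 0.3, 0.2, 2.1, 0.1]  # one channel of mock data
print(detect_events(raw, threshold=1.0))  # -> [2, 6]: two events, not eight samples
```

Scaled across thousands of channels, transmitting a handful of event indices instead of every raw sample is the kind of on-chip reduction that makes a small battery and a wireless link feasible.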

This invention and its findings were published last month [December 2015] in the prestigious journal IEEE Transactions on Biomedical Circuits and Systems, by the Institute of Electrical and Electronics Engineers, the world’s largest professional association for the advancement of technology.

Its underlying science was also featured in three international engineering conferences (two in Atlanta, USA and one in China) over the last three months.

Versatile smart chip with multiple uses

This new smart chip is designed to analyse data patterns and spot any abnormal or unusual patterns.

For example, in a remote video camera, the chip can be programmed to send a video back to the servers only when a specific type of car or something out of the ordinary is detected, such as an intruder.

This would be extremely beneficial for the Internet of Things (IoT), where every electrical and electronic device is connected to the Internet through a smart chip.

With a report by market research firm Gartner Inc. predicting that 6.4 billion smart devices and appliances will be connected to the Internet in 2016, rising to 20.8 billion devices by 2020, reducing network traffic will be a priority for most companies.

Using NTU’s new chip, the devices can process and analyse the data on site, before sending back important details in a compressed package, instead of sending the whole data stream. This will reduce data usage by over a thousand times.

Asst Prof Basu is now in talks with Singapore Technologies Electronics Limited to adapt his smart chip that can significantly reduce power consumption and the amount of data transmitted by battery-operated remote sensors, such as video cameras.

The team is also looking to expand the applications of the chip into commercial products, such as to customise it for smart home sensor networks, in collaboration with a local electronics company.

The chip, measuring 5mm by 5mm, can now be licensed by companies from NTU’s commercialisation arm, NTUitive.

Earlier this month there was a Feb. 9, 2016 announcement about a planned human clinical trial in Australia for a new brain-machine interface (neural implant). Before proceeding with the news, here’s what this implant looks like,

Caption: This tiny device, the size of a small paperclip, is implanted in to a blood vessel next to the brain and can read electrical signals from the motor cortex, the brain’s control centre. These signals can then be transmitted to an exoskeleton or wheelchair to give paraplegic patients greater mobility. Users will need to learn how to communicate with their machinery, but over time, it is thought it will become second nature, like driving or playing the piano. The first human trials are slated for 2017 in Melbourne, Australia. Credit: The University of Melbourne.

Melbourne medical researchers have created a new minimally invasive brain-machine interface, giving people with spinal cord injuries new hope to walk again with the power of thought.

The brain-machine interface consists of a stent-based electrode (stentrode), which is implanted within a blood vessel next to the brain, and records the type of neural activity that has been shown in pre-clinical trials to move limbs through an exoskeleton or to control bionic limbs.

The new device is the size of a small paperclip and will be implanted in the first in-human trial at The Royal Melbourne Hospital in 2017.

The results published today in Nature Biotechnology show the device is capable of recording high-quality signals emitted from the brain’s motor cortex, without the need for open brain surgery.

Principal author and Neurologist at The Royal Melbourne Hospital and Research Fellow at The Florey Institute of Neurosciences and the University of Melbourne, Dr Thomas Oxley, said the stentrode was revolutionary.

“The development of the stentrode has brought together leaders in medical research from The Royal Melbourne Hospital, The University of Melbourne and the Florey Institute of Neuroscience and Mental Health. In total 39 academic scientists from 16 departments were involved in its development,” Dr Oxley said.

“We have been able to create the world’s only minimally invasive device that is implanted into a blood vessel in the brain via a simple day procedure, avoiding the need for high risk open brain surgery.

“Our vision, through this device, is to return function and mobility to patients with complete paralysis by recording brain activity and converting the acquired signals into electrical commands, which in turn would lead to movement of the limbs through a mobility assist device like an exoskeleton. In essence, this is a bionic spinal cord.”

Stroke and spinal cord injuries are leading causes of disability, affecting 1 in 50 people. There are 20,000 Australians with spinal cord injuries, with the typical patient a 19-year-old male, and about 150,000 Australians left severely disabled after stroke.

Co-principal investigator and biomedical engineer at the University of Melbourne, Dr Nicholas Opie, said the concept was similar to an implantable cardiac pacemaker – electrical interaction with tissue using sensors inserted into a vein, but inside the brain.

“Utilising stent technology, our electrode array self-expands to stick to the inside wall of a vein, enabling us to record local brain activity. By extracting the recorded neural signals, we can use these as commands to control wheelchairs, exoskeletons, prosthetic limbs or computers,” Dr Opie said.

“In our first-in-human trial, that we anticipate will begin within two years, we are hoping to achieve direct brain control of an exoskeleton for three people with paralysis.”

“Currently, exoskeletons are controlled by manual manipulation of a joystick to switch between the various elements of walking – stand, start, stop, turn. The stentrode will be the first device that enables direct thought control of these devices.”

Neurophysiologist at The Florey, Professor Clive May, said the data from the pre-clinical study highlighted that the implantation of the device was safe for long-term use.

“Through our pre-clinical study we were able to successfully record brain activity over many months. The quality of recording improved as the device was incorporated into tissue,” Professor May said.

“Our study also showed that it was safe and effective to implant the device via angiography, which is minimally invasive compared with the high risks associated with open brain surgery.

“The brain-computer interface is a revolutionary device that holds the potential to overcome paralysis, by returning mobility and independence to patients affected by various conditions.”

Professor Terry O’Brien, Head of Medicine at Departments of Medicine and Neurology, The Royal Melbourne Hospital and University of Melbourne said the development of the stentrode has been the “holy grail” for research in bionics.

“To be able to create a device that can record brainwave activity over long periods of time, without damaging the brain is an amazing development in modern medicine,” Professor O’Brien said.

“It can also potentially be used in people with a range of diseases aside from spinal cord injury, including epilepsy, Parkinson’s and other neurological disorders.”

The development of the minimally invasive stentrode and the subsequent pre-clinical trials to prove its effectiveness could not have been possible without the support from the major funding partners – US Defense Department DARPA [Defense Advanced Research Projects Agency] and Australia’s National Health and Medical Research Council.

So, DARPA is helping fund this, eh? Interesting but not a surprise given the agency’s previous investments in brain research and neuroprosthetics.

A new analysis by the Synthetic Biology Project at the Wilson Center finds the Defense Department and its Defense Advanced Research Projects Agency (DARPA) fund much of the U.S. government’s research in synthetic biology, with less than 1 percent of total federal funding going to risk research.

The report, U.S. Trends in Synthetic Biology Research, finds that between 2008 and 2014, the United States invested approximately $820 million dollars in synthetic biology research. In that time period, the Defense Department became a key funder of synthetic biology research. DARPA’s investments, for example, increased from near zero in 2010 to more than $100 million in 2014 – more than three times the amount spent by the National Science Foundation (NSF).

The Wilson Center news release can also be found here on the Center’s report publication page where it goes on to provide more detail and where you can download the report,

“The increase in DARPA research spending comes as NSF is winding down its initial investment in the Synthetic Biology Engineering Research Center, or SynBERC,” says Dr. Todd Kuiken, senior program associate with the project. “After the SynBERC funding ends next year, it is unclear if there will be a dedicated synthetic biology research program outside of the Pentagon. There is also little investment addressing potential risks and ethical issues, which can affect public acceptance and market growth as the field advances.”

The new study found that less than one percent of the total U.S. funding is focused on synthetic biology risk research and approximately one percent addresses ethical, legal, and social issues.

Internationally, research funding is increasing. Last year, research investments by the European Commission and research agencies in the United Kingdom exceeded non-defense spending in the United States, the report finds.

The research spending comes at a time of growing interest in synthetic biology, particularly surrounding the potential presented by new gene-editing techniques. Recent research by the industry group SynBioBeta indicated that, so far in 2015, synthetic biology companies raised half a billion dollars – more than the total investments in 2013 and 2014 combined.

In a separate Woodrow Wilson International Center for Scholars Sept. 16, 2015 announcement about the report, an upcoming event notice was included,

Save the date: On Oct. 7, 2015, the Synthetic Biology Project will be releasing a new report on synthetic biology and federal regulations. More details will be forthcoming, but the report release will include a noon event [EST] at the Wilson Center in Washington, DC.

I haven’t been able to find any more information about this proposed report launch but you may want to check the Synthetic Biology Project website for details as they become available. ETA Oct. 1, 2015: The new report titled: Leveraging Synthetic Biology’s Promise and Managing Potential Risk: Are We Getting It Right? will be launched on Oct. 15, 2015 according to an Oct. 1, 2015 notice,

As more applications based on synthetic biology come to market, are the existing federal regulations adequate to address the risks posed by this emerging technology?

Please join us for the release of our new report, Leveraging Synthetic Biology’s Promise and Managing Potential Risk: Are We Getting It Right? Panelists will discuss how synthetic biology applications would be regulated by the U.S. Coordinated Framework for Regulation of Biotechnology, how this would affect the market pathway of these applications and whether the existing framework will protect human health and the environment.

In 2001, Andrew Parker and Chris Lawrence published an article in Nature about work that inspired a US startup company, formed in 2012, to develop a water bottle that fills itself by drawing moisture from the air. Parker and Lawrence’s article was titled Water capture by a desert beetle. Here’s the abstract (over 10 years later, the article is still behind a paywall),

Some beetles in the Namib Desert collect drinking water from fog-laden wind on their backs. We show here that these large droplets form by virtue of the insect’s bumpy surface, which consists of alternating hydrophobic, wax-coated and hydrophilic, non-waxy regions. The design of this fog-collecting structure can be reproduced cheaply on a commercial scale and may find application in water-trapping tent and building coverings, for example, or in water condensers and engines.

Some five years later, there was a June 15, 2006 news item on phys.org about the development of a new material based on the Namib desert beetle,

When that fog rolls in, the Namib Desert beetle is ready with a moisture-collection system exquisitely adapted to its desert habitat. Inspired by this dime-sized beetle, MIT [Massachusetts Institute of Technology] researchers have produced a new material that can capture and control tiny amounts of water.

The material combines a superhydrophobic (water-repelling) surface with superhydrophilic (water-attracting) bumps that trap water droplets and control water flow. The work was published in the online version of Nano Letters on Tuesday, May 2 [2006] [behind a paywall].

Potential applications for the new material include harvesting water, making a lab on a chip (for diagnostics and DNA screening) and creating microfluidic devices and cooling devices, according to lead researchers Robert Cohen, the St. Laurent Professor of Chemical Engineering, and Michael Rubner, the TDK Professor of Polymer Materials Science and Engineering.

The MIT June 14, 2006 news release by Anne Trafton, which originated the news item about the new material, indicates there was some military interest,

The U.S. military has also expressed interest in using the material as a self-decontaminating surface that could channel and collect harmful substances.

The researchers got their inspiration after reading a 2001 article in Nature describing the Namib Desert beetle’s moisture-collection strategy. Scientists had already learned to copy the water-repellent lotus leaf, and the desert beetle shell seemed like another good candidate for “bio-mimicry.”

…

When fog blows horizontally across the surface of the beetle’s back, tiny water droplets, 15 to 20 microns, or millionths of a meter, in diameter, start to accumulate on top of bumps on its back.

The bumps, which attract water, are surrounded by waxy water-repelling channels. “That allows small amounts of moisture in the air to start to collect on the tops of the hydrophilic bumps, and it grows into bigger and bigger droplets,” Rubner said. “When it gets large, it overcomes the pinning force that holds it and rolls down into the beetle’s mouth for a fresh drink of water.”

To create a material with the same abilities, the researchers manipulated two characteristics — roughness and nanoporosity (spongelike capability on a nanometer, or billionths of a meter, scale).

By repeatedly dipping glass or plastic substrates into solutions of charged polymer chains dissolved in water, the researchers can control the surface texture of the material. Each time the substrate is dipped into solution, another layer of charged polymer coats the surface, adding texture and making the material more porous. Silica nanoparticles are then added to create an even rougher texture that helps trap water droplets.

The material is then coated with a Teflon-like substance, making it superhydrophobic. Once that water-repellent layer is laid down, layers of charged polymers and nanoparticles can be added in certain areas, using a properly formulated water/alcohol solvent mixture, thereby creating a superhydrophilic pattern. The researchers can manipulate the technique to create any kind of pattern they want.

…

The research is funded by the Defense Advanced Research Projects Agency and the National Science Foundation.

I’m not sure what happened with the military interest or the group working out of MIT in 2006, but on Nov. 23, 2012, BBC News online featured an article about a US startup company, NBD Nano, which aims to bring a self-filling water bottle based on the Namib desert beetle to market,

NBD Nano, which consists of four recent university graduates and was formed in May, looked at the Namib Desert beetle that lives in a region that gets about half an inch of rainfall per year.

Using a similar approach, the firm wants to cover the surface of a bottle with hydrophilic (water-attracting) and hydrophobic (water-repellent) materials.

The work is still in its early stages, but it is the latest example of researchers looking at nature to find inspiration for sustainable technology.

“It was important to apply [biomimicry] to our design and we have developed a proof of concept and [are] currently creating our first fully-functional prototype,” Miguel Galvez, a co-founder, told the BBC.

“We think our initial prototype will collect anywhere from half a litre of water to three litres per hour, depending on local environments.”

According to the Nov. 25, 2012 article by Nancy Owano for phys.org, the company is at the prototype stage now,

NBD Nano plans to enter the worldwide marketplace between 2014 and 2015.

The Wyss Institute will receive up to $37M US for a project that integrates ten different organ-on-a-chip projects into one system. From the July 24, 2012 news release on EurekAlert,

With this new DARPA funding, Institute researchers and a multidisciplinary team of collaborators seek to build 10 different human organs-on-chips, to link them together to more closely mimic whole body physiology, and to engineer an automated instrument that will control fluid flow and cell viability while permitting real-time analysis of complex biochemical functions. As an accurate alternative to traditional animal testing models that often fail to predict human responses, this instrumented “human-on-a-chip” will be used to rapidly assess responses to new drug candidates, providing critical information on their safety and efficacy.

…

This unique platform could help ensure that safe and effective therapeutics are identified sooner, and ineffective or toxic ones are rejected early in the development process. As a result, the quality and quantity of new drugs moving successfully through the pipeline and into the clinic may be increased, regulatory decision-making could be better informed, and patient outcomes could be improved.

Jesse Goodman, FDA Chief Scientist and Deputy Commissioner for Science and Public Health, commented that the automated human-on-chip instrument being developed “has the potential to be a better model for determining human adverse responses. FDA looks forward to working with the Wyss Institute in its development of this model that may ultimately be used in therapeutic development.”

As for the Wyss Institute, here’s a description from the news release,

The Wyss Institute for Biologically Inspired Engineering at Harvard University (http://wyss.harvard.edu) uses Nature’s design principles to develop bioinspired materials and devices that will transform medicine and create a more sustainable world. Working as an alliance among Harvard’s Schools of Medicine, Engineering, and Arts & Sciences, and in partnership with Beth Israel Deaconess Medical Center, Boston Children’s Hospital, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Massachusetts General Hospital, the University of Massachusetts Medical School, Spaulding Rehabilitation Hospital, Tufts University, and Boston University, the Institute crosses disciplinary and institutional barriers to engage in high-risk research that leads to transformative technological breakthroughs. By emulating Nature’s principles for self-organizing and self-regulating, Wyss researchers are developing innovative new engineering solutions for healthcare, energy, architecture, robotics, and manufacturing. These technologies are translated into commercial products and therapies through collaborations with clinical investigators, corporate alliances, and new start-ups.

I hadn’t thought of an organ-on-a-chip as particularly bioinspired so I’ll have to think about that one for a while.

One of my more recent (Nov. 22, 2011) postings on DARPA (Defense Advanced Research Projects Agency) highlighted their entrepreneurial focus and the person encouraging that focus, agency director Regina Dugan. Given that she’s held the position for roughly 2.5 years, I was surprised to see that she has left to join Google. From the March 13, 2012 news item on physorg.com,

Google on Monday [March 12, 2012] confirmed that Defense Advanced Research Projects Agency chief Regina Dugan is taking a yet-to-be-revealed role at the Internet powerhouse.

Regina E. Dugan was the 19th Director of Defense Advanced Research Projects Agency (DARPA). She was appointed to that position on July 20, 2009. In March 2012, she left her position to take an executive role at Google. She was the first female director of DARPA.

Much of her working career (1996-2012) seems to have been spent at DARPA. I don’t think I’m going to draw too many conclusions from this move but I am intrigued, especially in light of an essay by a departing Google employee, James Whitaker. From Whitaker’s March 13, 2012 posting on his JW on Tech blog,

The Google I was passionate about was a technology company that empowered its employees to innovate. The Google I left was an advertising company with a single corporate-mandated focus.

Technically I suppose Google has always been an advertising company, but for the better part of the last three years, it didn’t feel like one. Google was an ad company only in the sense that a good TV show is an ad company: having great content attracts advertisers.

He lays out the situation here,

It turns out that there was one place where the Google innovation machine faltered and that one place mattered a lot: competing with Facebook. Informal efforts produced a couple of antisocial dogs in Wave and Buzz. Orkut never caught on outside Brazil. Like the proverbial hare confident enough in its lead to risk a brief nap, Google awoke from its social dreaming to find its front runner status in ads threatened.

Google could still put ads in front of more people than Facebook, but Facebook knows so much more about those people. Advertisers and publishers cherish this kind of personal information, so much so that they are willing to put the Facebook brand before their own. Exhibit A: www.facebook.com/nike, a company with the power and clout of Nike putting their own brand after Facebook’s? No company has ever done that for Google and Google took it personally.

Larry Page himself assumed command to right this wrong. Social became state-owned, a corporate mandate called Google+. It was an ominous name invoking the feeling that Google alone wasn’t enough. Search had to be social. Android had to be social. YouTube, once joyous in their independence, had to be … well, you get the point. [emphasis mine] Even worse was that innovation had to be social. Ideas that failed to put Google+ at the center of the universe were a distraction.

That point about YouTube really strikes home as I’ve become quite dismayed with the advertising on the videos. The consequence is that I’m starting to search for clips on Vimeo first as it doesn’t have intrusive advertising.

Getting back to Whitaker, he notes this about Google and advertising,

The old Google made a fortune on ads because they had good content. It was like TV used to be: make the best show and you get the most ad revenue from commercials. The new Google seems more focused on the commercials themselves.

It’s interesting to contrast Whitaker’s take on the situation, which suggests that the company has lost its entrepreneurial spirit as it focuses on advertising, with the company’s latest hire, Regina Dugan, who seems to have introduced entrepreneurship into DARPA’s activities.

As for the military connection (DARPA is a US Dept. of Defense agency), I remain mindful that the military and the intelligence communities have an interest in gathering data but would need something more substantive than a hiring decision to draw any conclusions.

For anyone who’s interested in these types of queries, I would suggest reading a 2007 posting, Facebook, the CIA, and You on the Brainsturbator blog, for a careful unpacking of the connections (extremely tenuous) between Facebook and the CIA (US Central Intelligence Agency). The blog owner and essayist, Jordan Boland, doesn’t dismiss the surveillance concern; he’s simply pointing out that it’s difficult to make an unequivocal claim while displaying a number of intriguing connections between agencies and organizations.

One wonders if Morpho butterflies are going to decide that they need to protect their intellectual property. Yet another scientific group has found a way to exploit the nanostructures on the Morpho butterfly’s wing. From the Feb. 13, 2012 news item on Nanowerk,

GE [General Electric] scientists are exploring many potential thermal imaging and sensing applications with their new detection concept such as medical diagnostics, surveillance, non-destructive inspection and others, where visual heat maps of imaged areas serve as a valuable condition indicator. Some examples include:

Thermal Imaging for advanced medical diagnosis – to better visualize inflammation in the body and understand changes in a patient’s health earlier.

Advanced thermal vision – to see things at night and during the day in much greater detail than what is possible today.

Thermal characterization of wound infections – to facilitate early diagnosis.

“The iridescence of Morpho butterflies has inspired our team for yet another technological opportunity. This time we see the potential to develop the next generation of thermal imaging sensors that deliver higher sensitivity and faster response times in a more simplified, cost-effective design,” said Dr. Radislav Potyrailo, Principal Scientist at GE Global Research who leads GE’s bio-inspired photonics programs. “This new class of thermal imaging sensors promises significant improvements over existing detectors in their image quality, speed, sensitivity, size, power requirements, and cost.”

This is a thermographic video of a Morpho butterfly structure in response to heat pulses produced by breathing onto the whole butterfly structure (video part 1) and onto its localized areas (video part 2). Nanostructures on Morpho butterfly wings coated with carbon nanotubes can sense temperature changes down to 0.02 degrees Celsius, at a response rate of 1/40 of a second. This is a demonstration of how new bio-inspired designs by GE scientists could enable more advanced applications for industrial inspection, medical diagnostics, and the military. This video was filmed by Bryan Whalen in the Electronics Cooling Lab at GE Global Research.

This newest work seems to have its origins in a DARPA-funded (US Defense Advanced Research Projects Agency) GE project. From the Aug. 12, 2010 GE news release,

Scientists at GE Global Research, GE’s technology development arm, in collaboration with Air Force Research Laboratory, State University at Albany, and University of Exeter, have received a four-year, $6.3 million award from the Defense Advanced Research Projects Agency (DARPA) to develop new bio-inspired nanostructured sensors that would enable faster, more selective detection of dangerous warfare agents and explosives.

Three years ago, GE scientists discovered that nanostructures from wing scales of butterflies exhibited acute chemical sensing properties. [emphasis mine] Since then, GE scientists have been developing a dynamic, new sensing platform that replicates these unique properties. Recognizing the potential of GE’s sensing technologies for improving homeland protection, DARPA is supporting further research. [emphasis mine]

For anyone who’s particularly interested in the technical details, Dexter Johnson offers more in his Feb. 13, 2012 posting about this research on the Nanoclast blog for the IEEE (Institute of Electrical and Electronics Engineers).

Before I get to DARPA’s (Defense Advanced Research Projects Agency) new spy satellite, here’s a brief description of the Panopticon from the Wikipedia essay,

The Panopticon is a type of institutional building designed by English philosopher and social theorist Jeremy Bentham in the late eighteenth century. The concept of the design is to allow an observer to observe (-opticon) all (pan-) inmates of an institution without them being able to tell whether or not they are being watched.

…

Although the Panopticon prison design did not come to fruition during Bentham’s time, it has been seen as an important development. It was invoked by Michel Foucault (in Discipline and Punish) as metaphor for modern “disciplinary” societies and their pervasive inclination to observe and normalise. Foucault proposes that not only prisons but all hierarchical structures like the army, schools, hospitals and factories have evolved through history to resemble Bentham’s Panopticon. The notoriety of the design today (although not its lasting influence in architectural realities) stems from Foucault’s famous analysis of it.

Building on Foucault, contemporary social critics often assert that technology has allowed for the deployment of panoptic structures invisibly throughout society. [emphasis mine] Surveillance by closed-circuit television (CCTV) cameras in public spaces is an example of a technology that brings the gaze of a superior into the daily lives of the populace. Furthermore, a number of cities in the United Kingdom, including Middlesbrough, Bristol, Brighton and London have recently added loudspeakers to a number of their existing CCTV cameras. They can transmit the voice of a camera supervisor to issue audible messages to the public. Similarly, critical analyses of internet practice have suggested that the internet allows for a panopticon form of observation. ISPs are able to track users’ activities, while user-generated content means that daily social activity may be recorded and broadcast online.

And now, DARPA’s new satellite as described by Nancy Atkinson (Universe Today) in a Dec. 21, 2011 news item on Physorg,

“It sees you when you’re sleeping and knows when you’re awake” could be the theme song for a new spy satellite being developed by DARPA. The Defense Advanced Research Projects Agency’s latest proof-of-concept project is called the Membrane Optical Imager for Real-Time Exploitation (MOIRE), and would provide real-time images and video of any place on Earth at any time — a capability that, so far, only exists in the realm of movies and science fiction. The details of this huge eye-in-the-sky look like something right out of science fiction, as well, and it would be interesting to determine if it could have applications for astronomy as well.

It’s not here yet (from the physorg.com news item),

The MOIRE program began in March 2010 [and] is now in the first phase of development, where DARPA is testing the concept’s viability. Phase 2 would entail system design, with Ball Aerospace doing the design and building to test a 16-foot (5 m) telescope, and an option for a Phase 3 …

It’s all about the adhesive tape, according to the researchers at Tribogenics. Yes, they can create x-rays by unrolling Scotch tape in a vacuum. From Neal Ungerleider’s Dec. 8, 2011 article for Fast Company,

Tribogenics’ products rely on a counterintuitive discovery: X-rays are generated when unrolling Scotch tape in a vacuum. In a Nature article, UCLA researchers Carlos Camara, Juan Escobar, Jonathan Hird, and Seth Putterman detailed how Scotch tape can generate surprisingly large amounts of X-rays thanks to visible radiation generated by static electricity between two contacting surfaces. The research encountered challenges thanks to the fact that Scotch tape and generic brand adhesive tapes generated slightly different energy signatures; the composition of Scotch tape adhesive is a closely guarded 3M trade secret. …

Fox [Dale Fox, Tribogenics’ Chief Scientist] told Fast Company that “every other X-ray source in the world uses a high-voltage transformer connected to a vacuum tube. In contrast, we’ve harnessed the power of the immense voltages in static electricity to create tiny, low-cost, battery-operated X-ray sources for the first time in history. It’s like the jump the electronics industry took when it moved from vacuum tubes to transistors.” According to Fox, Tribogenics has already developed X-ray energy sources the size of a USB memory stick. While Tribogenics representatives declined to discuss pricing for upcoming products, the firm “very comfortably” promised that the cost would be less than 10% of that of any existing X-ray technology.

Tribogenics’ patented technology enables portable, compact x-ray solutions for applications in precious metals, mining, military, medical imaging, security and other industries. By miniaturizing X-ray sources and eliminating the need for high voltage, we can create products and solutions unattainable using existing X-ray technology. Tribogenics’ revolutionary X-ray solution emerged from DARPA- and TATRC-funded initiatives at UCLA and was developed by prominent scientists.

Ungerleider notes that the company has not launched any commercial products yet, but this one sure looks interesting,

… ultra-portable X-ray machines show the greatest potential for becoming a disruptive medical technology. Tribogenics’ methods have revolutionary ramifications for catheterized radiation therapy, which currently poses significant radiation risks for patients, doctors, and nurses. According to Fox, the company’s products eliminate the need for radioactive isotopes.

If you are interested in this technology, I would suggest reading Ungerleider’s article for additional details.