They hope that in the near future computers will be able to communicate among themselves, recognize threats, and monitor their own health -- just like the cells inside our bodies.

"We want the machines to take a more active part in their own protection," said Bruce McConnell, senior counselor for cyber security at the U.S. Department of Homeland Security. "We want to use their brains to protect themselves, but always in the context of the policies of the system administrators and owners."

McConnell is co-author of a new DHS white paper, "Enabling Distributed Security in Cyberspace: Building a Healthy and Resilient Cyber Ecosystem with Automated Collective Action."

No, it's not the dawn of Skynet. But it may be a new way of looking at how computers can be protected, and at the broader questions of privacy versus security. McConnell and others point to a marked increase in cyber-threats from organized crime, terrorists, and nation-states looking for key military, financial and other classified intelligence.

The paper imagines a "healthy ecosystem" of computers that collaborate to fight threats, adapt rapidly, and identify and defeat problems. Right now, computers are not very good at catching things that they haven't seen before, McConnell said. In contrast, the human immune system has evolved to fight intruders that it doesn't recognize. "It says: 'This is not me. Maybe I need to send something down there to take a look at it, and maybe quarantine it,'" McConnell said.
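The immune-system idea McConnell describes can be sketched in a few lines of Python: a baseline of trusted items plays the role of "self," and anything that doesn't match is set aside for inspection. The function names and hash values here are illustrative only, not anything from the DHS paper:

```python
# A minimal "self vs. non-self" sketch: a set of known-good identifiers
# (e.g., hashes of trusted binaries) defines "self"; anything unseen is
# quarantined for a closer look rather than trusted by default.

def build_self_profile(known_items):
    """Record what 'self' looks like for this machine."""
    return set(known_items)

def inspect(item, self_profile):
    """Allow what matches the baseline; quarantine the unfamiliar."""
    if item in self_profile:
        return "allow"
    # "This is not me" -- set it aside, maybe send something to look at it.
    return "quarantine"

profile = build_self_profile({"hash_a", "hash_b", "hash_c"})
print(inspect("hash_a", profile))   # a known item passes
print(inspect("hash_x", profile))   # an unknown item is quarantined
```

Real systems would of course need far richer notions of "self" than a fixed set, but the decision structure is the same: the machine reacts to the unfamiliar without waiting to be told what it is.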

McConnell says a first step would be to get computers to recognize and react to threats automatically. "Right now it's manual," he said, meaning that a human manager has to contact another human manager via e-mail to warn of a virus or other threat. Ideally, that notification would be done instantly between machines at different government agencies.
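The shift from a manual e-mail warning to machine-to-machine notification amounts to agreeing on a structured alert format that a receiving machine can act on automatically. The sketch below is hypothetical: the field names and the blocklist reaction are illustrative, not a real government schema:

```python
import json
import datetime

def make_alert(indicator, severity, source):
    """One agency's sensor publishes a structured alert (illustrative fields)."""
    return json.dumps({
        "indicator": indicator,          # e.g., the hash of a new virus
        "severity": severity,
        "source": source,
        "issued": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })

def handle_alert(raw, blocklist):
    """A receiving machine reacts instantly, without a human in the loop."""
    alert = json.loads(raw)
    if alert["severity"] in ("high", "critical"):
        blocklist.add(alert["indicator"])
    return alert

blocklist = set()
msg = make_alert("hash_of_new_virus", "high", "agency-a")
handle_alert(msg, blocklist)
print(blocklist)   # the indicator is now blocked, machine to machine
```

In practice this is the role real standards for threat-intelligence exchange play; the point of the sketch is only that the notification becomes data machines parse rather than prose humans read.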

Some experts are already working on this kind of interoperability on a small scale. One of the biggest obstacles to having computers work more autonomously is figuring out a better way to authenticate their interactions, according to Ross Hartman, vice president for cyber-security services at Science Applications International Corp (SAIC).
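One common building block for authenticating machine-to-machine messages is a keyed hash: the receiver recomputes a tag with a shared secret and rejects anything that doesn't match. This is a generic sketch of that idea, not the scheme SAIC or DHS actually uses:

```python
import hmac
import hashlib

# Shared secret known to both machines (illustrative value).
SHARED_KEY = b"example-shared-secret"

def sign(message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the message."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Recompute the tag; constant-time compare defeats timing attacks."""
    return hmac.compare_digest(sign(message), tag)

msg = b'{"indicator": "hash_of_new_virus"}'
tag = sign(msg)
print(verify(msg, tag))           # an authentic alert is accepted
print(verify(b"tampered", tag))   # a forged or altered one is rejected
```

Shared-key schemes like this only scale so far; across many agencies, public-key certificates would more likely carry the trust, but the verify-before-acting pattern is the same.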

"Computers are limited by their programming," Hartman said. "If it doesn't model the known versus the unknown, they can't tell the self from the other."

Hartman says experts are looking at new models of "nature-inspired defense" as computer threats become a greater security problem for government agencies and a bigger cost to industry.

"The threat is growing," Hartman said. "There are more incidents and they are becoming more sophisticated. The latest buzzword is 'advanced persistent threats.' These are sufficiently advanced methods that are difficult to detect and take a long time to discern."

Hartman said the DHS paper is a positive response to threats that are on the rise, and is provoking discussion among cyber-security experts.

Another hurdle faced by computer experts in designing collaborative systems of either individual devices or networked computers is that of privacy. How much information should be shared in the name of security?

Angelos Stavros, a computer scientist at George Mason University, says that the more computers share information in order to deter threats, the more individual privacy is reduced.

"Although we want the cell to be curable, we want it to have our private personality that cannot be wiped or automatically checked," Stavros said. "What is an attack? It is often in the eye of the beholder."

The advance, featured this week in the early online edition of the journal Proceedings of the National Academy of Sciences, represents the first demonstration of lens-free optical tomographic imaging on a chip, a technique capable of producing high-resolution 3-D images of large volumes of microscopic objects.

"This research clearly shows the potential of lens-free computational microscopy," said Aydogan Ozcan, senior author of the research and an associate professor of electrical engineering at UCLA's Henry Samueli School of Engineering and Applied Science. "Wonderful progress has been made in recent years to miniaturize life-sciences tools with microfluidic and lab-on-a-chip technologies, but until now optical microscopy has not kept pace with the miniaturization trend."

An optical imaging system small enough to fit onto an opto-electronic chip provides a variety of benefits. Because of the automation involved in on-chip systems, scientific work could be sped up significantly, which might have a great impact in the fields of cell and developmental biology. In addition, the compact design not only enables further miniaturization of lab systems but also reduces equipment costs.

The optical microscope, invented more than 400 years ago, has tended to grow larger and more complex as it has been modified to image ever-smaller objects with better resolution. To reverse this trend, Ozcan's research group — with graduate student Serhan Isikman and postdoctoral scholar Waheb Bishara as lead researchers — developed the new tomographic microscopy platform through the next evolution of a lens-free imaging technology the group created and has been improving for years.

Ozcan, a researcher at the California NanoSystems Institute at UCLA, makes the analogy that a traditional optical microscope is like a huge set of pipes delivering content, in the form of images, to the user. Over years of development, bottlenecks occur that impede further improvement. Even if one part of the system — that is, one bottleneck — is improved, other bottlenecks keep that improvement from being fully realized. Not so with the lens-free system, according to Ozcan.

"Lens-free imaging removes the pipes altogether by utilizing an entirely new design," he said.

The system takes advantage of the fact that organic structures, such as cells, are partially transparent. When light shines on a sample of cells, the shadows it casts reveal not only the cells' outlines but also details of their sub-cellular structures.

"These details can be captured and analyzed if the shadow is directed onto a digital sensor array," Isikman said. "The end result of this process is an image taken without using a lens."
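The capture geometry Isikman describes can be illustrated with a toy model: a partially transparent "cell" modulates uniform illumination, and the sensor array records the resulting intensity pattern directly, with no lens in between. Real lens-free systems record holographic shadows and reconstruct them computationally; this sketch, with made-up transmission values, only shows why sub-cellular detail survives in the shadow:

```python
import numpy as np

size = 64
illumination = np.ones((size, size))      # uniform light hitting the sensor
yy, xx = np.mgrid[0:size, 0:size]
r = np.hypot(yy - 32, xx - 32)            # distance from the cell's center

# A partially transparent cell: the body dims the light a little,
# the denser nucleus dims it more (values are illustrative).
transmission = np.ones((size, size))
transmission[r < 20] = 0.7                # cell body
transmission[r < 6] = 0.4                 # nucleus, a sub-cellular detail

shadow = illumination * transmission      # what the digital sensor records
print(shadow[32, 32], shadow[32, 20], shadow[0, 0])
```

The darker ring-within-a-disk in `shadow` is the point: because the cell is only partially opaque, its interior structure, not just its outline, is encoded in the recorded intensities.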

Ozcan envisions this lens-free imaging system as one component in a lab-on-a-chip platform. It could potentially fit beneath a microfluidic chip, a tool for the precise control and manipulation of sub-millimeter biological samples and fluids. The two tools would operate in tandem, with the microfluidic chip depositing a sample onto the lens-free imager and subsequently removing it in an automated, high-throughput process.

The platform's 3-D images are created by rotating the light source to illuminate the samples from multiple angles. These multiple views enable tomography, a powerful imaging technique that lets the system produce 3-D images without sacrificing resolution.
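Why multiple angles recover depth can be shown with a deliberately tiny example: a single opaque feature in a 2-D sample, viewed from two orthogonal directions. Each view alone gives only a line of shadow; backprojecting both and summing localizes the feature. Real tomography uses many angles and filtered reconstruction, so this is only the bare principle:

```python
import numpy as np

sample = np.zeros((8, 8))
sample[2, 5] = 1.0                  # one opaque feature in the sample

proj_0 = sample.sum(axis=0)         # shadow seen from above (per column)
proj_90 = sample.sum(axis=1)        # shadow seen from the side (per row)

# Backprojection: smear each projection back along its viewing
# direction and add; the smears cross where the feature sits.
back = proj_0[np.newaxis, :] + proj_90[:, np.newaxis]

est = np.unravel_index(np.argmax(back), back.shape)
print(est)   # the feature's (row, column), recovered from the two views
```

With one view, every position along the shadow line is equally plausible; the second angle resolves the ambiguity, which is exactly what rotating the light source buys the platform in three dimensions.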

"The field of view of lens-based microscopes is limited because the lens focuses on a narrow area of a sample," Bishara said. "A lens-free microscope has both a much larger field of view and depth of field because the imaging is done by the digital sensor array and is not constrained by a lens."

Viewers, for instance, can use the system to focus in on the details of a booth within a panorama of a carnival midway, but also reverse time to see how the booth was constructed. Or they can watch a group of plants sprout, grow and flower, shifting perspective to watch some plants move wildly as they grow while others get eaten by caterpillars. Or, they can view a computer simulation of the early universe, watching as gravity works across 600 million light-years to condense matter into filaments and finally into stars that can be seen by zooming in for a close up.

"With GigaPan Time Machine, you can simultaneously explore space and time at extremely high resolutions," said Illah Nourbakhsh, associate professor of robotics and head of the CREATE Lab. "Science has always been about narrowing your point of view — selecting a particular experiment or observation that you think might provide insight. But this system enables what we call exhaustive science, capturing huge amounts of data that can then be explored in amazing ways."

The system is an extension of the GigaPan technology developed by the CREATE Lab and NASA, which can capture a mosaic of hundreds or thousands of digital pictures and stitch those frames into a panorama that can be interactively explored via computer. To extend GigaPan into the time dimension, image mosaics are repeatedly captured at set intervals, then stitched across both space and time to create a video in which each frame can be hundreds of millions, or even billions, of pixels.
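The stitching across space and time described above boils down to indexing: each capture interval yields one huge mosaic frame, which is cut into fixed-size tiles keyed by time and grid position so any region at any moment can be fetched on its own. The tile size and key scheme below are illustrative, not GigaPan's actual storage format:

```python
import numpy as np

TILE = 4   # tile edge, in pixels (real systems use e.g. 256)

def tile_frame(frame, t, store):
    """Cut one mosaic frame into tiles keyed by (time, row, col)."""
    rows, cols = frame.shape
    for r in range(0, rows, TILE):
        for c in range(0, cols, TILE):
            store[(t, r // TILE, c // TILE)] = frame[r:r + TILE, c:c + TILE]

store = {}
for t in range(3):                      # three capture intervals
    frame = np.full((8, 8), float(t))   # stand-in for a gigapixel mosaic
    tile_frame(frame, t, store)

print(len(store))                # 3 timesteps x 4 tiles per frame
print(store[(2, 1, 1)].mean())   # one small tile at one moment in time
```

Once every tile is addressable this way, "exploring space and time" is just choosing which keys to load, which is what makes gigapixel video navigable at all.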

An enabling technology for time-lapse GigaPans is a feature of the HTML5 language that has been incorporated into such browsers as Google's Chrome and Apple's Safari. HTML5, the latest revision of the HyperText Markup Language (HTML) standard at the core of the Web, makes browsers capable of presenting video content without plug-ins such as Adobe Flash or QuickTime.

Using HTML5, CREATE Lab computer scientists Randy Sargent, Chris Bartley and Paul Dille developed algorithms and software architecture that make it possible to shift seamlessly from one video portion to another as viewers zoom in and out of Time Machine imagery. To keep bandwidth manageable, the GigaPan site streams only those video fragments that pertain to the segment and/or time frame being viewed.
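The bandwidth-saving step is easy to make concrete: given the viewer's current viewport at some zoom level, compute which tiles it overlaps and request only those fragments. The tile size and indexing below are hypothetical, not the GigaPan site's actual API:

```python
TILE = 256   # tile edge in pixels at the chosen zoom level (illustrative)

def tiles_for_viewport(x, y, width, height):
    """Return (col, row) indices of every tile the viewport overlaps."""
    first_col, first_row = x // TILE, y // TILE
    last_col = (x + width - 1) // TILE
    last_row = (y + height - 1) // TILE
    return [(c, r)
            for r in range(first_row, last_row + 1)
            for c in range(first_col, last_col + 1)]

# A 500x300 viewport at position (600, 200) spans a 3x2 block of tiles.
needed = tiles_for_viewport(600, 200, 500, 300)
print(len(needed))   # 6 fragments to stream instead of the whole frame
```

As the viewer pans or zooms, the set of needed tiles changes and the player swaps video fragments accordingly, which is the seamless-shifting behavior Sargent's team built on top of HTML5 video.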

"We were crashing the browsers early on," Sargent recalled. "We're really pushing the browser technology to the limits."

Guidelines on how individuals can capture time-lapse images using GigaPan cameras are included on the site created for hosting the new imagery's large data files, http://timemachine.gigapan.org/wiki/Main_Page. Sargent said the CREATE Lab is eager to work with people who want to capture Time Machine imagery with GigaPan, or to use the visualization technology for other applications.

Once a Time Machine GigaPan has been created, viewers can annotate and save their explorations of it in the form of video "Time Warps."

Though the time-lapse mode is an extension of the original GigaPan concept, scientists already are applying the visualization techniques to other types of Big Data. Carnegie Mellon's Bruce and Astrid McWilliams Center for Cosmology, for instance, has used it to visualize a simulation of the early universe performed at the Pittsburgh Supercomputing Center by Tiziana Di Matteo, associate professor of physics.

"Simulations are a huge bunch of numbers, ugly numbers," Di Matteo said. "Visualizing even a portion of a simulation requires a huge amount of computing itself." Visualization of these large data sets is crucial to the science, however. "Discoveries often come from just looking at it," she explained.

Rupert Croft, associate professor of physics, said cosmological simulations are so massive that only a segment can be visualized at a time using usual techniques. Yet whatever is happening within that segment is being affected by forces elsewhere in the simulation that cannot be readily accessed. By converting the entire simulation into a time-lapse GigaPan, however, Croft and his Ph.D. student, Yu Feng, were able to create an image that provided both the big picture of what was happening in the early universe and the ability to look in detail at any region of interest.

Using a conventional GigaPan camera, Janet Steven, an assistant professor of biology at Sweet Briar College in Virginia, has created time-lapse imagery of rapid-growing brassicas, known as Wisconsin Fast Plants. "This is such an incredible tool for plant biology," she said. "It gives you the advantage of observing individual plants, groups of plants and parts of plants, all at once."

Steven, who has received GigaPan training through the Fine Outreach for Science program, said time-lapse photography has long been used in biology, but the GigaPan technology makes it possible to observe a number of plants in detail without having separate cameras for each plant. Even as one plant is studied in detail, it's possible to also see what neighboring plants are doing and how that might affect the subject plant, she added.

Steven said creating time-lapse GigaPans of entire landscapes could be a powerful tool for studying seasonal change in plants and ecosystems, an area of increasing interest for understanding climate change. Time-lapse GigaPan imagery of biological experiments also could be an educational tool, allowing students to make independent observations and develop their own hypotheses.