Disaster-resilient cloud architecture

By Stephanie Kanowitz

Aug 05, 2016

Anyone who has waited while a video buffers knows that visual data takes time to process. And in the event of a disaster, those moments could be too precious to waste. To speed the process, researchers at the computer science department at the University of Missouri’s College of Engineering are working to develop a visual cloud computing architecture.

The research focuses on those important times -- disaster scenarios -- when security cameras, camera- and sensor-equipped mobile devices and unmanned aerial systems provide video data for first responders and law enforcement officials, according to an announcement last month from the college. Such data can help officials determine where to send help, where hazardous materials pose a threat and where injured victims might be. But during disasters, the network capacity to process so much data may not exist.

To ensure that computing and networking power is available, the researchers posit a cloud architecture that links devices at the network edge of the cloud processing system, or “collection fog,” with scalable compute and big data resources at the cloud’s core. Data moves from the collection fog at the disaster site to the cloud core and then to the consumption fog, where it can be processed by the devices that first responders and other officials use.
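The announcement does not publish the routing logic, but the fog-to-cloud-to-fog flow it describes can be sketched in a few lines. The following is a hypothetical illustration, not the researchers' implementation: the clip attributes, thresholds and tier names (`collection-fog`, `cloud-core`, `defer`) are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class VisualClip:
    source: str      # hypothetical device ID, e.g. "uav-3"
    size_mb: float   # payload size in megabytes
    priority: int    # 1 = highest (e.g. injured victims, hazmat)

def route(clip: VisualClip, uplink_mbps: float, edge_capacity_mb: float) -> str:
    """Decide where a clip is processed in a fog-cloud-fog pipeline.

    Small clips stay in the collection fog at the disaster site for low
    latency; larger clips go to the cloud core only when the surviving
    uplink can carry them quickly enough, or when priority demands it.
    """
    if clip.size_mb <= edge_capacity_mb:
        return "collection-fog"              # process on-site
    transfer_s = clip.size_mb * 8 / uplink_mbps
    if clip.priority == 1 or transfer_s < 60:
        return "cloud-core"                  # worth the bandwidth
    return "defer"                           # queue until links recover

# Example: a large, high-priority UAV clip over a degraded 2 Mbps link
print(route(VisualClip("uav-3", 40.0, 1), uplink_mbps=2.0, edge_capacity_mb=8.0))
```

The point of the sketch is the triage itself: when infrastructure is damaged, where a piece of visual data is processed becomes a bandwidth-aware decision rather than a default upload to the cloud.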

“It works just like we do now with Siri,” Associate Professor Kannappan Palaniappan said in the statement. “You just say, ‘Find me a pizza place.’ What happens is the voice signal goes to Apple’s cloud, processes the information and sends it back to you. Currently we can’t do the same with rich visual data because the communication bandwidth requirements may be too high or the network infrastructure may be down [in a disaster situation].”

Workflow is one part of the problem; the amount of data in a disaster situation is often another.

“The problem really is networking,” Assistant Professor Prasad Calyam said in the statement. “How do you connect back into the cloud and make decisions because the infrastructure as we know it will not be the same. No street signs, no network, and with cell phones, everybody’s calling to say they’re OK on the same channel. There are challenging network management problems to pertinently import visual data from the incident scene and deliver visual situational awareness.”

The answer to that networking problem, according to the researchers, is algorithms that determine which information the cloud needs to process and which information local devices can handle. The team also created an algorithm that aggregates similar information to reduce redundancy.
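The article does not describe how the redundancy-reduction algorithm works, but the general idea -- keeping one representative from each group of near-identical views -- can be illustrated with a simple similarity filter. This sketch is an assumption for illustration only: the cosine-similarity measure, the feature vectors and the 0.95 threshold are not from the source.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def deduplicate(frames, threshold=0.95):
    """Keep one representative per cluster of near-identical frames.

    `frames` is a list of (frame_id, feature_vector) pairs; a frame is
    dropped when it is too similar to an already-kept representative,
    so redundant views never consume scarce uplink bandwidth.
    """
    kept = []
    for frame_id, vec in frames:
        if all(cosine(vec, kept_vec) < threshold for _, kept_vec in kept):
            kept.append((frame_id, vec))
    return [fid for fid, _ in kept]

frames = [
    ("cam1-f0", [1.0, 0.0, 0.2]),
    ("cam2-f0", [0.99, 0.01, 0.21]),  # near-duplicate view of the same scene
    ("uav-f0",  [0.1, 1.0, 0.0]),     # genuinely different content
]
print(deduplicate(frames))  # the near-duplicate is filtered out
```

In practice such filtering would run in the collection fog, so that only distinct visual content competes for whatever network capacity survives the disaster.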

The project is funded by ongoing grants from the National Science Foundation, the Air Force Research Laboratory and the U.S. National Academies Jefferson Science Fellowship. Looking ahead, the researchers hope to further develop the algorithms and conduct field testing during actual disasters.