Advances in video capture technology require a new kind of support for analytics at the storage-device level. SkyHawk AI is the first drive that meets this new need. SkyHawk AI handles the intensive computational workloads that accompany AI work streams. It provides unprecedented bandwidth and processing power to manage always-on, data-intensive workloads, while simultaneously analyzing and recording footage from multiple HD cameras.

Analytics on video surveillance hardware is growing rapidly, forecast to increase from 27.6 million shipments in 2016 to 126 million shipments in 2021 as hardware manufacturers continue to build analytics capabilities into network video recorders (NVRs). This growth will only accelerate as AI — particularly deep learning and machine learning applications such as facial recognition, traffic pattern recognition, or the ability to recognize and describe details of a landscape or location — becomes increasingly prevalent. In parallel, the need for fast video analytics will continue to rise, increasing the workload burden on NVR storage.


Deep learning — new ways video can solve problems

The benefits of surveillance technology have traditionally come from analyzing video information after it’s been recorded: understanding what’s happening over long periods of time, finding patterns, and then developing action plans to improve public safety, security, traffic patterns, parking, toll management, mass transit, retail footfall and more — or from reviewing footage of a crime scene after the fact to find evidence that can help solve the crime.

But today, video cameras and surveillance systems are evolving new onboard analytical capabilities that help us understand real-world situations immediately, and keep people safe in real-time.

The big change: artificial intelligence built into video systems to enable them to process, analyze and recognize patterns on site. Until recently, these systems depended on the processing power of Cloud data centers to manage all levels of this analysis.

More complex pattern recognition and analysis

AI neural networks are capable of ever more complex recognition and analysis, with greater accuracy and speed, enabling them to solve problems and suggest solutions very quickly. Video is a complex form of data, and interpreting it requires systems capable of running deep-learning algorithms. So bringing the power of AI closer to the source of data is important. As data analysis applications shift from being centralized in Cloud data centers to being out at the Edge — closer to where data is collected — developers are putting AI directly onto the systems that support cameras, video recorders and the video servers in the field.

Moving analytics to the Edge is good for cities, agencies and businesses that deploy, use, and maintain data infrastructure, because it helps balance the intense video processing requirements between tools at the Edge and those at the data center. With NVRs that can manage image processing and video analytics at the Edge, data managers can develop systems that don’t need to send video streams back to the Cloud data center, but send only metadata instead. This frees up Cloud processing and bandwidth, and returns actionable results faster. It also makes it much simpler to scale up and deploy more video and analytics tools to more points, as new needs arise.
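To make the metadata-instead-of-video idea concrete, here is a minimal Python sketch. All names here are hypothetical, and the detector is a stand-in stub — a real Edge NVR would run a trained deep-learning model at that step — but the data flow it illustrates is the one described above: the heavy frame stays on the device, and only a small JSON payload travels to the Cloud.

```python
import json
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class Detection:
    """Metadata describing one detected object in a frame."""
    label: str
    confidence: float
    bbox: tuple  # (x, y, width, height) in pixels

def detect_objects(frame: bytes) -> List[Detection]:
    """Stand-in for an on-device deep-learning detector.
    A real NVR would run a trained model here; this stub
    returns a fixed result purely for illustration."""
    return [Detection("vehicle", 0.92, (40, 60, 120, 80))]

def frame_to_metadata(camera_id: str, frame: bytes) -> str:
    """Analyze a frame locally and serialize only the metadata.
    The raw frame (megabytes) never leaves the Edge device;
    the JSON payload (a few hundred bytes) is what is sent on."""
    detections = detect_objects(frame)
    payload = {
        "camera_id": camera_id,
        "detections": [asdict(d) for d in detections],
    }
    return json.dumps(payload)

# A raw HD frame runs to megabytes; the metadata is tiny by comparison.
message = frame_to_metadata("cam-01", frame=b"\x00" * 100)
print(len(message))  # far smaller than the frame it describes
```

The design choice the sketch highlights is that serialization happens after analysis, so bandwidth scales with the number of detections rather than with video resolution or frame rate.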

How AI helps video serve society better

Surveillance system builders today are deploying applications that use a graphics processing unit (GPU) together with a CPU to accelerate deep learning and analytics — and the benefits are already clear. According to an article in VentureBeat, system builder Huawei has deployed such systems to combat traffic congestion. The systems used intelligent video analytics that “combined all the data necessary, including vehicle information, speed, direction, and more, to provide real-time traffic analysis and improve traffic flow. They have seen speed congestion rates drop by 15 percent,” the article said.

AI-equipped surveillance systems can keep roadways, bridges and railways safer without disrupting traffic and travel — because with powerful analytics they can recognize the signs of wear and imminent problems more quickly than human inspections can. In public places like shopping centers, airports and subway stations these systems can recognize citizens who are encountering trouble, danger or an emergency, and summon help earlier than humans could, potentially saving lives.

Video analytics at the Edge provides simple everyday benefits for average citizens as well. For example, smart parking systems that use video data to ease congestion (and speed up our ability to find a parking spot) can use AI onboard to analyze their video locally, so they only need to send the important result — where are the available parking slots? — to the Cloud data center, which transmits it to drivers or parking system managers.
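The parking example above can be sketched in a few lines of Python. The occupancy map here is a hypothetical output of the on-board video analyzer (in practice it would come from a vision model running on the camera or NVR); the point is that only the short answer — which slots are free — needs to leave the Edge.

```python
def free_slots(occupancy: dict) -> list:
    """Given per-slot occupancy decided by local video analysis
    (True = occupied), return only the IDs of free slots —
    the one small result worth sending to the Cloud."""
    return sorted(slot for slot, occupied in occupancy.items() if not occupied)

# Hypothetical result of on-board analysis for six monitored slots:
occupancy = {"A1": True, "A2": False, "A3": True,
             "B1": False, "B2": True, "B3": True}
print(free_slots(occupancy))  # ['A2', 'B1']
```

However many cameras and frames feed the analyzer, the message forwarded to drivers or parking managers stays a short list of slot IDs.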

Because of its power to more effectively solve many of the needs cities grapple with — and to improve people’s lives — within the next several years, AI will become as fundamental as electricity and internet access are today. “Every city will be leveraging AI, not just for video sensing and intelligence, from Edge to Cloud, but you will have AI in sidewalks, AI in self-driving cars, bridges, buildings, bikes, traffic signals and more,” according to Milind Naphade, CTO of AI City at NVIDIA, quoted in the VentureBeat article. “Because this will deliver value to citizens. They’ll come to expect it.”