Janus: Big Data for Security

With the Janus Project, we envision developing a more intelligent surveillance system for a variety of applications, such as security surveillance, public health surveillance, and traffic surveillance, drawing on large amounts of video, text, and sensor data.

Focus Research Areas

We focus our research on two areas corresponding to two types of surveillance systems: 1) infrastructure-based surveillance systems, which leverage existing infrastructures for data collection and information communication (e.g., home security systems), and 2) infrastructure-less surveillance systems, which rely on mobile computing for data collection and information communication. Below, we briefly elaborate on two subprojects, dubbed iWatch Core and iWatch Mobile, which focus on research in the context of these two types of surveillance systems.

In this area, our focus is on Objective 2 of the iWatch project (see above): developing techniques to scale up human-in-the-loop surveillance. Toward that end, our main design approach is to leave the tasks best done by humans (namely, directing and decision-making) to humans, and the tasks best done by machines (i.e., rapid processing of large amounts of data) to computers. In particular, we are developing novel data analytics for automated detection of isolated incidents of interest from numerous and heterogeneous data streams (including video, text, and sensor data streams), as well as efficient spatiotemporal data indexing solutions that connect these incidents through the common fabric of time and space to derive more abstract events of interest (the big picture) in real time.
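To illustrate the indexing idea, the following is a minimal sketch of a grid-based spatiotemporal index that buckets incidents by location and time so that nearby incidents can be connected cheaply. The cell and bucket sizes, the class name, and the incident representation are all illustrative assumptions, not part of the project's actual design.

```python
from collections import defaultdict

# Illustrative resolutions; a real deployment would tune these.
CELL_DEG = 0.001      # roughly 100 m grid cells (assumption)
BUCKET_SEC = 300      # 5-minute time buckets (assumption)

def st_key(lat, lon, t):
    """Map an incident to a discrete (space, time) grid key."""
    return (int(lat / CELL_DEG), int(lon / CELL_DEG), int(t / BUCKET_SEC))

class SpatioTemporalIndex:
    """Hash-grid index: insert incidents, then query same/adjacent cells."""

    def __init__(self):
        self.grid = defaultdict(list)

    def insert(self, incident_id, lat, lon, t):
        self.grid[st_key(lat, lon, t)].append(incident_id)

    def neighbors(self, lat, lon, t):
        """Incidents in the same or adjacent space/time cells."""
        cx, cy, ct = st_key(lat, lon, t)
        found = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dt in (-1, 0, 1):
                    found.extend(self.grid.get((cx + dx, cy + dy, ct + dt), []))
        return found
```

Connecting incidents then reduces to inserting each detected incident and querying its spatiotemporal neighborhood for candidates to correlate into a larger event.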

The following exemplary (visionary) scenario depicts a security surveillance use case in which a public safety surveillance officer could use such an intelligent surveillance system for regular area monitoring, for example, campus monitoring. Imagine the officer is watching a 3D campus map on a screen in the surveillance room. Real-time vehicle and human traffic (estimated on the fly from live traffic-sensor data and/or GPS data collected from travelers' mobile devices) is overlaid as a congestion heat-map on top of the campus map. Being aware of the typical traffic trends for the time of day, the traffic analytics module of the system automatically identifies and reports one specific location on the map as a potential problem area due to its "abnormal traffic." The officer uses system tools to quickly cross-reference the identified area with the relevant tweet data feed as well as the public safety event data stream (events reported as text messages by public safety officers on duty in real time), and applies the text analytics module of the system to detect whether an incident has recently been reported in the area. Finding none, the officer cross-references the area and time with CCTV cameras to view the live video feed of the area and uses the video analytics module of the system to explore the archived and live video data for suspicious activities. As a result, he observes that even though there is no accident, two unidentified individuals have deliberately positioned a few large items on the road to slow down the traffic. Meanwhile, the officer cross-references the area with a campus location database and recognizes that there is a biochemical lab in a nearby building.
Given the sensitivity of the area, while dispatching patrol officers to investigate, the officer sends a personalized alert to the campus community, giving each individual customized evacuation directions to stay away from the trouble zone until it becomes safe.
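The "abnormal traffic" flag in the scenario could be realized, in its simplest form, as a deviation test against historical counts for the same time of day. The z-score threshold below is an illustrative choice, not something specified by the project.

```python
import statistics

def is_abnormal(current_count, historical_counts, z_threshold=3.0):
    """Flag traffic as abnormal when the current reading deviates from the
    historical mean for this time of day by more than z_threshold standard
    deviations. Threshold and method are assumptions for illustration."""
    mean = statistics.mean(historical_counts)
    stdev = statistics.stdev(historical_counts)
    if stdev == 0:
        return current_count != mean
    return abs(current_count - mean) / stdev > z_threshold
```

In practice, the traffic analytics module would likely use richer models (seasonality, spatial correlation), but the same "compare against the typical trend" principle applies.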

3. Coordinated Response with Personalized Instructions: iWatch Mobile closes the loop by enabling surveillance authorities to stage a coordinated response/intervention through direct and personalized communication with the targeted community members.

The above figure shows the architecture of the iWatch Mobile subsystem, which is designed as a client-server, event-driven system. Note that the functionalities/capabilities discussed above enable implementing a variety of applications; for example, Personalized Alert (where alerts are customized per individual based on her/his location), Geofence (which defines a virtual fence and monitors all trespassing users), or simply a participatory data visualizer (for effective display of the collected participatory data).
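As a concrete illustration of the Geofence application, the sketch below tests whether a user's reported location falls inside a circular virtual fence using the haversine great-circle distance. The circular fence shape and function names are assumptions for illustration; iWatch Mobile's actual fence geometry is not specified here.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    R = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def inside_geofence(user_lat, user_lon, fence_lat, fence_lon, radius_m):
    """True if the user's location lies within the circular fence."""
    return haversine_m(user_lat, user_lon, fence_lat, fence_lon) <= radius_m
```

An event-driven server would evaluate this predicate on each incoming location update and emit a trespass event on a false-to-true transition.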

In a multi-INT/multi-source environment, integrating the readings collected from multiple data sources/sensors (possibly of different modalities) compensates for each source's inherent deficiencies by utilizing the strengths of the other sources. In particular, such multi-source integration enables more effective surveillance of activities of interest. In this project, we use novel data mining techniques to automatically detect events given multi-source, multi-modal data.
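The integration idea can be sketched, in its most basic form, as a weighted combination of per-source detection confidences, where a weak signal from one modality can be corroborated by others. This is a stand-in for the project's (unspecified) data mining techniques; the scoring scheme and source names are assumptions for illustration.

```python
def fuse_detections(source_scores, weights=None):
    """Combine per-source detection confidences (each in 0..1) into a single
    event score via a weighted average; weights default to uniform.
    Illustrative fusion rule, not the project's actual method."""
    if weights is None:
        weights = {s: 1.0 for s in source_scores}
    total_w = sum(weights[s] for s in source_scores)
    return sum(source_scores[s] * weights[s] for s in source_scores) / total_w

# Example: video strongly suggests an event, text weakly, sensors moderately.
scores = {"video": 0.9, "text": 0.4, "sensor": 0.7}
```

Weighting lets more reliable modalities dominate, e.g., giving video detections a higher weight than noisy text reports.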