Dynamics-Based Video Analytics
R4-A.1

Download Project Report (Phase 2 Year 2 Annual Report)

Project Description

Overview and Significance

This research effort aims to substantially enhance our ability to exploit surveillance camera networks to predict and isolate threats from explosive devices in heavily crowded public spaces, and to guide complementary detection modalities following a threat alert. At its core is a novel approach that stresses dynamic models as a key enabler for automatic, real-time interpretation of what is currently an overwhelming profusion of video data streams. The effort includes both theory development in an emerging field and an investigation of implementation issues. As part of the ALERT Center of Excellence, this project links synergistically with several concurrent efforts at NEU and with NEU's partners, both in academia and in industrial R&D.

Video-based methods have enormous potential for providing advance warning of terrorist activities and threats. In addition, they can assist and substantially enhance localized, complementary sensors that are more restricted in range, such as radar, infrared and chemical detectors. Moreover, since the supporting hardware is relatively inexpensive and to a very large extent already deployed (stationary and mobile networked cameras, including camera cell phones capable of broadcasting and sharing live video feeds), the additional investment required is minimal.

Arguably, the critical impediment to fully realizing this potential is the absence of reliable technology for robust, real-time interpretation of the abundant, multi-camera video data. The dynamic and stochastic nature of these data, compounded with their high dimensionality and the difficulty of characterizing distinguishing features of benign vs. dangerous behaviors, makes automatic threat detection extremely challenging. Indeed, state-of-the-art turnkey software, such as that in use by complementary projects at NEU, relies heavily on human operators, which, in turn, severely limits the scope of its use.

The proposed research is motivated by an emerging opportunity to address these challenges by exploiting advances at the confluence of robust dynamical systems, computer vision and machine learning. A fundamental feature and key advantage of the envisioned methods is the encapsulation of the information content of targeted behavior in dynamic models. Drawing on solid theoretical foundations, robust system identification and adaptation methods, along with model (in)validation tools, will yield quantifiable characterizations of threats and benign behaviors, provable uncertainty bounds, and alternative viable explanations of observed activities. The resulting systems will integrate real-time data from multiple sources over dynamic networks, cover large areas, extract meaningful behavioral information on a large number of individuals and objects, and strike a difficult compromise between the inherent conservatism demanded by threat detection and the need to avoid a high false-alarm rate, which heightens vulnerability by straining resources.
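As a rough, hedged illustration of the identification and (in)validation idea, and not a description of the project's actual algorithms, the Python sketch below fits a simple autoregressive model to a nominal (benign) feature trajectory and flags a new trajectory whenever its prediction residuals exceed the bound established on the nominal data. The model order, the alert margin and all function names are assumptions introduced here purely for exposition.

    # Illustrative sketch only: the autoregressive order, the margin and all names below
    # are assumptions for exposition, not this project's identification/(in)validation machinery.
    import numpy as np

    def fit_ar_model(trajectory, order=3):
        """Least-squares fit of y[t] = a1*y[t-1] + ... + a_p*y[t-p] to a nominal trajectory."""
        y = np.asarray(trajectory, dtype=float)
        A = np.vstack([y[t - order:t][::-1] for t in range(order, len(y))])
        b = y[order:]
        coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
        residual_bound = np.max(np.abs(A @ coeffs - b))  # worst-case residual on benign data
        return coeffs, residual_bound

    def is_invalidated(trajectory, coeffs, residual_bound, margin=1.5):
        """True if the trajectory cannot be explained by the nominal model within the bound."""
        order = len(coeffs)
        y = np.asarray(trajectory, dtype=float)
        for t in range(order, len(y)):
            prediction = coeffs @ y[t - order:t][::-1]
            if abs(y[t] - prediction) > margin * residual_bound:
                return True  # observed dynamics inconsistent with the benign model
        return False

    # Usage: identify the model on a smooth (benign) track, then test a track that turns erratic.
    rng = np.random.default_rng(0)
    t = np.arange(200)
    benign = np.sin(0.1 * t) + 0.01 * rng.standard_normal(200)
    coeffs, bound = fit_ar_model(benign)
    erratic = benign + np.where(t > 120, 0.5 * rng.standard_normal(200), 0.0)
    print(is_invalidated(benign, coeffs, bound))   # expected: False
    print(is_invalidated(erratic, coeffs, bound))  # expected: True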

Our approach is inspired by the fundamental fact that visual data come in streams: videos are temporal sequences of frames, images are ordered sequences of rows of pixels, and contours are chained sequences of edges.
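One way to make this stream structure concrete, offered here only as a hedged illustration rather than as the project's method, is to view a feature trajectory as the output of an unknown low-order linear dynamical system; the numerical rank of the trajectory's Hankel matrix then serves as a proxy for dynamic complexity, with simple, predictable motions yielding low rank. The window size, tolerance and example signals below are arbitrary choices.

    # Minimal sketch under the assumptions stated above; parameters and signals are illustrative.
    import numpy as np

    def hankel_matrix(sequence, num_rows=10):
        """Stack shifted copies of the sequence into a Hankel matrix H[i, j] = y[i + j]."""
        y = np.asarray(sequence, dtype=float)
        num_cols = len(y) - num_rows + 1
        return np.array([y[i:i + num_cols] for i in range(num_rows)])

    def dynamic_complexity(sequence, num_rows=10, tol=1e-3):
        """Numerical rank of the Hankel matrix: a proxy for the order of the underlying dynamics."""
        s = np.linalg.svd(hankel_matrix(sequence, num_rows), compute_uv=False)
        return int(np.sum(s > tol * s[0]))

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 10.0, 200)
    steady_track = np.sin(t)                                    # e.g. a pedestrian walking steadily
    erratic_track = np.sin(t) + 0.3 * rng.standard_normal(200)  # abrupt, hard-to-predict motion
    print(dynamic_complexity(steady_track))   # low rank: a sinusoid fits a 2nd-order model
    print(dynamic_complexity(erratic_track))  # much higher rank: no low-order model explains it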
Project Leaders
  • Octavia Camps
    Professor
    Northeastern University

  • Mario Sznaier
    Professor
    Northeastern University

Students Currently Involved in Project
  • Oliver Lehmann
    Northeastern University
  • Mengran Gou
    Northeastern University
  • Yongfang Cheng
    Northeastern University
  • Yin Wang
    Northeastern University
  • Sadjad Ashari-Esfeden
    Northeastern University
  • Tom Hebble
    Northeastern University
  • Rachel Shaffer
    Northeastern University
  • Burak Yilmaz
    Northeastern University