ALERT Research Highlight: The Largest and Most Systematic Re-id Benchmark to Date

ALERT researchers Professor Octavia Camps (Northeastern University, Project R4-A.1) and Professor Rich Radke (Rensselaer Polytechnic Institute, Project R4-A.3), together with their students Srikrishna Karanam, Mengran Gou, Ziyan Wu, and Angels Rates-Borras, recently published a paper in the Institute of Electrical and Electronics Engineers' (IEEE) monthly journal, Transactions on Pattern Analysis and Machine Intelligence. The paper, “A Systematic Evaluation and Benchmark for Person Re-Identification: Features, Metrics, and Datasets,” provides an extensive review and performance evaluation of existing person re-identification algorithms.

Person re-identification, or re-id, is the task of matching observations of individuals across multiple camera views in a network of surveillance cameras, and it is critical in most surveillance and security applications. For example, a police officer may want to automatically follow a person of interest tagged at a check-in counter through the branching concourses of an airport. The research team's review and evaluation of re-id algorithms characterizes what current algorithms can accomplish, what is missing, and what may be possible in the future. In the paper, the researchers discuss insights gained from their study and put forth research directions and recommendations to help re-id researchers develop better algorithms. Both Professor Camps and Professor Radke are involved with ALERT's Research Thrust 4, which focuses on video surveillance and the analysis of video data with novel algorithms. This publication is the result of years of research and collaboration between their respective labs.

The fundamental re-id problem is to compare a person of interest as seen in a “probe” camera view against a “gallery” of candidates captured by a camera that does not overlap with the probe camera. If a true match to the probe exists in the gallery, it should receive a higher matching score, and therefore a better rank, than the incorrect candidates. As the body of re-id research grows, it becomes possible to draw conclusions about the best combinations of algorithmic subcomponents. In this paper, the researchers present a careful, fair, and systematic evaluation of feature extraction, metric learning, and multi-shot ranking algorithms proposed for re-id on a wide variety of benchmark datasets. Their general evaluation framework considers hundreds of combinations of (1) feature extraction and metric learning algorithms for single-shot datasets and (2) feature extraction, metric learning, and multi-shot ranking algorithms for multi-shot datasets.
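The probe-to-gallery matching described above can be sketched in a few lines. The following is a minimal illustration only, using random feature vectors and plain Euclidean distance; real re-id systems use learned features and learned metrics, and the names and dimensions here are hypothetical:

```python
import numpy as np

def rank_gallery(probe_feat, gallery_feats):
    """Order gallery candidates by distance to the probe (closest first)."""
    dists = np.linalg.norm(gallery_feats - probe_feat, axis=1)
    return np.argsort(dists)  # gallery indices, best match first

rng = np.random.default_rng(0)
gallery = rng.normal(size=(5, 16))  # 5 hypothetical candidates, 16-D features
true_id = 2
# A "probe" observation: a slightly noisy view of candidate 2
probe = gallery[true_id] + 0.05 * rng.normal(size=16)

order = rank_gallery(probe, gallery)
rank_of_true = int(np.where(order == true_id)[0][0]) + 1
print(rank_of_true)  # with such small noise, the true match ranks first
```

Reporting, over many probes, the fraction of true matches found at rank 1, rank 5, and so on is the standard Cumulative Match Characteristic (CMC) summary used to compare re-id algorithms.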

The research team evaluated 276 algorithm combinations on 10 single-shot re-id datasets and 646 algorithm combinations on 7 multi-shot re-id datasets, making the proposed study the largest and most systematic re-id benchmark to date. Approaches were evaluated using 17 datasets that mimic real world settings, including the ALERT Airport Re-Identification Dataset. As part of the evaluation, the researchers built a public code library with an easy-to-use input/output code structure and uniform algorithm parameters that includes 11 contemporary feature extraction and 22 metric learning and ranking algorithms. Both the code library and the complete benchmark results are publicly available for community use.
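The combinatorial structure of such a benchmark, pairing every feature extractor with every metric learning algorithm, can be sketched as below. The algorithm names are illustrative stand-ins, not the contents of the authors' actual code library:

```python
from itertools import product

# Hypothetical component lists; the real library has 11 feature
# extraction and 22 metric learning / ranking algorithms.
features = ["color_hist", "lbp_texture", "hog"]
metrics = ["euclidean", "learned_metric_a", "learned_metric_b"]

# Every (feature, metric) pairing is evaluated on every dataset.
combos = list(product(features, metrics))
print(len(combos))  # 3 features x 3 metrics = 9 combinations
```

Scaling the same grid to the paper's component counts and 17 datasets is what makes the study the largest systematic re-id benchmark to date.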
