Computer Vision and Pattern Recognition

Registration of egocentric views for collaborative localization in security applications

Authors: Huiqin Chen

This work focuses on collaborative localization between a mobile camera and a static camera for video surveillance. In crowd scenes and sensitive events, surveillance involves locating both the wearer of the mobile camera (typically a security officer) and the events observed in its images (e.g., to guide emergency services). However, the very different viewpoints of the mobile camera (at ground level) and the surveillance camera (mounted high up), together with repetitive patterns and occlusions, make relative calibration and localization difficult. We first studied how low-cost positioning and orientation sensors (GPS-IMU) can help refine the estimate of the relative pose between the cameras. We then proposed to locate the mobile camera through its epipole in the image of the static camera. To make this estimate robust to outlier keypoint matches, we developed two algorithms: one based on a cumulative approach that derives an uncertainty map, and one that exploits the belief function framework. To deal with the large number of elementary sources, some of which are incompatible, we propose a solution based on belief clustering, with a view to later combination with other sources (such as pedestrian detectors and/or GPS data in our application). Finally, locating individuals in the scene led us to the problem of data association between views. We proposed to use geometric descriptors and constraints in addition to the usual appearance descriptors, and we showed the relevance of this geometric information whether it is used explicitly or learned with a neural network.
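
To illustrate the epipole-based localization step, the sketch below shows how the epipole of the mobile camera can be recovered in the static view from matched keypoints, assuming an OpenCV/NumPy pipeline; the RANSAC threshold and the helper name are illustrative choices, not the implementation used in this work.

```python
import numpy as np
import cv2

def epipole_in_static_view(pts_mobile, pts_static):
    """Estimate the epipole of the mobile camera in the static image.

    pts_mobile, pts_static: (N, 2) float arrays of matched keypoints
    in the mobile-camera and static-camera images respectively.
    """
    # Robust fundamental-matrix estimation: RANSAC discards outlier
    # matches caused by repetitive patterns and occlusions.
    F, inlier_mask = cv2.findFundamentalMat(
        pts_mobile, pts_static, cv2.FM_RANSAC, 1.0, 0.999)
    # With x_static^T F x_mobile = 0, the epipole e' in the static image
    # is the left null vector of F (F^T e' = 0), i.e. the singular
    # vector of F^T associated with its smallest singular value.
    _, _, Vt = np.linalg.svd(F.T)
    e = Vt[-1]
    return e[:2] / e[2]   # pixel coordinates in the static view
```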
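One possible reading of the cumulative approach is sketched below, under the assumption that epipole hypotheses estimated from random subsets of matches are accumulated into a grid over the static image; the sampling scheme, grid size and function name are hypothetical and only meant to convey the idea of a vote-based uncertainty map.

```python
import numpy as np
import cv2

def epipole_uncertainty_map(pts_mobile, pts_static, img_shape,
                            n_draws=500, subset_size=8, cell=8):
    """Accumulate epipole hypotheses into a 2D histogram over the
    static image; the normalized histogram plays the role of an
    uncertainty map (all parameters here are illustrative)."""
    h, w = img_shape[:2]
    acc = np.zeros((h // cell + 1, w // cell + 1))
    rng = np.random.default_rng(0)
    n = len(pts_mobile)
    for _ in range(n_draws):
        idx = rng.choice(n, size=subset_size, replace=False)
        F, _ = cv2.findFundamentalMat(pts_mobile[idx], pts_static[idx],
                                      cv2.FM_8POINT)
        if F is None:
            continue
        _, _, Vt = np.linalg.svd(F.T)   # epipole hypothesis in the static view
        e = Vt[-1]
        if abs(e[2]) < 1e-9:
            continue
        x, y = e[0] / e[2], e[1] / e[2]
        if 0 <= x < w and 0 <= y < h:
            acc[int(y) // cell, int(x) // cell] += 1
    return acc / max(acc.sum(), 1)      # normalized vote map
```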
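The belief function framework rests on combining mass functions from elementary sources; the toy sketch below applies Dempster's rule of combination to two such sources. It does not reproduce the belief clustering proposed in this work or its handling of incompatible sources, and the frame of discernment and mass values are made up for illustration.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) with
    Dempster's rule; raises if the sources are totally conflicting."""
    combined, conflict = {}, 0.0
    for (A, mA), (B, mB) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mA * mB
        else:
            conflict += mA * mB          # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("Totally conflicting sources")
    return {A: m / (1.0 - conflict) for A, m in combined.items()}

# Toy example: two sources supporting image cells {c1, c2, c3}
# as candidate locations (values chosen arbitrarily).
m1 = {frozenset({'c1'}): 0.6, frozenset({'c1', 'c2', 'c3'}): 0.4}
m2 = {frozenset({'c1', 'c2'}): 0.5, frozenset({'c1', 'c2', 'c3'}): 0.5}
print(dempster_combine(m1, m2))
```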
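For the cross-view data association step, a common way to mix appearance and geometric cues is to blend a descriptor distance with an epipolar-consistency term in a single cost matrix and solve it with the Hungarian algorithm; the sketch below follows that pattern. The cosine metric, the weight alpha and the normalization are assumptions, not the descriptors or constraints proposed in this work.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def associate(app_mobile, app_static, pts_mobile, pts_static, F, alpha=0.5):
    """Match detections between the mobile and static views.

    app_*: appearance descriptors, pts_*: image positions,
    F: fundamental matrix (x_static^T F x_mobile = 0),
    alpha: weight between appearance and geometric consistency.
    """
    # Appearance term: cosine distance between descriptors.
    app_cost = cdist(app_mobile, app_static, metric='cosine')

    # Geometric term: distance of each static-view point to the
    # epipolar line induced by the corresponding mobile-view point.
    ones_m = np.ones((len(pts_mobile), 1))
    ones_s = np.ones((len(pts_static), 1))
    lines = np.hstack([pts_mobile, ones_m]) @ F.T        # (N, 3) lines ax + by + c = 0
    norms = np.linalg.norm(lines[:, :2], axis=1, keepdims=True)
    geo_cost = np.abs(lines @ np.hstack([pts_static, ones_s]).T) / norms

    cost = alpha * app_cost + (1 - alpha) * geo_cost / (geo_cost.max() + 1e-9)
    rows, cols = linear_sum_assignment(cost)             # Hungarian assignment
    return list(zip(rows, cols))
```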