
Contributions by Vision Systems to Multi-sensor Object Localization and Tracking for Intelligent Vehicles


Authors: Sergio Alberto Rodriguez Florez

Advanced Driver Assistance Systems (ADAS) can improve road safety by warning the driver in hazardous circumstances or by triggering appropriate actions in imminent collision situations (e.g. airbags, emergency braking). In this context, the location and speed of the surrounding mobile objects constitute key information. Consequently, this work focuses on object detection, localization and tracking in dynamic scenes. Noting the increasing presence of embedded multi-camera systems on vehicles and the effectiveness of automotive lidar systems at detecting obstacles, we investigate the contributions of stereo vision systems to multi-modal perception of the environment geometry. To fuse geometrical information between the lidar and the vision system, we propose a calibration process that determines the extrinsic parameters between the exteroceptive sensors and quantifies the uncertainty of this estimation. We present a real-time visual odometry method that estimates the vehicle's ego-motion and simplifies the motion analysis of dynamic objects. The integrity of the lidar-based object detection and tracking is then increased by means of a visual confirmation method that exploits dense stereo-vision 3D reconstruction in focused areas. Finally, a complete full-scale automotive system integrating the considered perception modalities was implemented and tested experimentally in open-road situations with an experimental car.
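To illustrate how such an extrinsic calibration is used in practice, the sketch below rigidly transforms a lidar return into the camera frame and projects it onto the image plane with a pinhole model. All numbers (rotation R, translation t, intrinsics fx, fy, cx, cy) are hypothetical placeholders, not values from the thesis; the actual calibration process estimates R and t together with their uncertainty.

```python
# Minimal sketch with hypothetical values: express a lidar point in the
# camera frame using the extrinsic parameters (R, t), then project it
# with a pinhole camera model.

# Placeholder extrinsics: identity rotation, small translation (metres).
R = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
t = [0.2, 0.0, 1.1]

# Placeholder pinhole intrinsics: focal lengths and principal point (pixels).
fx, fy, cx, cy = 700.0, 700.0, 320.0, 240.0

def lidar_to_pixel(p):
    """Rigidly transform a 3D lidar point into the camera frame,
    then project it to (u, v) pixel coordinates."""
    # Rigid-body transform: p_cam = R @ p + t
    x, y, z = (sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3))
    # Perspective projection (pinhole model).
    return fx * x / z + cx, fy * y / z + cy

u, v = lidar_to_pixel([0.0, 0.0, 5.0])  # a lidar return 5 m ahead
```

Once lidar detections can be projected into the image like this, focused regions of the stereo reconstruction can be compared against them, which is the basis of the visual confirmation step described above.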