Deep Learning for Sensor Fusion
Sensor fusion techniques are widely used in real-world data science applications such as autonomous systems, remote sensing, video surveillance, and military applications. The objective of data fusion in multisensor environments is to combine the data provided by multiple sensors so as to achieve a lower detection error probability and higher reliability than any single sensor alone. The data may come from a single sensor operated with different capture parameters or from several distinct sensors. This tutorial presents recent advances in deep learning-based sensor fusion for three main computer vision tasks: classification, detection, and segmentation. We will illustrate this taxonomy through relevant examples from the literature and will highlight open challenges and research directions that may inspire attendees to embark on the fascinating and promising area of deep learning-based sensor fusion methods.
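To make the idea of combining multiple sensor streams concrete, the sketch below shows one common deep learning fusion pattern: each sensor modality is passed through its own encoder and the resulting features are concatenated before a shared classification head. This is only a minimal illustrative example; the module name, input dimensions, and the choice of PyTorch are assumptions for illustration and are not drawn from the tutorial material.

```python
import torch
import torch.nn as nn

class TwoBranchFusion(nn.Module):
    """Minimal feature-level (concatenation) fusion of two sensor modalities.

    Hypothetical example: input sizes and layer widths are arbitrary placeholders.
    """
    def __init__(self, in_a=128, in_b=32, dim_a=64, dim_b=64, num_classes=10):
        super().__init__()
        # One encoder per sensor modality (e.g. camera features and radar features).
        self.enc_a = nn.Sequential(nn.Linear(in_a, dim_a), nn.ReLU())
        self.enc_b = nn.Sequential(nn.Linear(in_b, dim_b), nn.ReLU())
        # The classifier operates on the concatenated (fused) feature vector.
        self.head = nn.Linear(dim_a + dim_b, num_classes)

    def forward(self, x_a, x_b):
        fused = torch.cat([self.enc_a(x_a), self.enc_b(x_b)], dim=-1)
        return self.head(fused)

# Usage: a batch of 4 samples, each with two sensor measurements of different sizes.
model = TwoBranchFusion()
logits = model(torch.randn(4, 128), torch.randn(4, 32))
print(logits.shape)  # torch.Size([4, 10])
```

Concatenation is only one point in the design space; other fusion strategies covered by this taxonomy (e.g. early fusion of raw inputs or late fusion of per-sensor decisions) differ mainly in where along the network the sensor streams are merged.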