Comprehensive environmental perception is crucial for autonomous driving. However, because of occlusion, current intelligent-vehicle perception algorithms recognize only the targets visible within the vehicle's sensing range and make no prediction or annotation for areas obscured by foreground objects, which limits the intelligent driving system's comprehensive perception and understanding of the driving environment. This paper proposes a semantic segmentation model that takes images captured by cameras surrounding the intelligent vehicle as input. The model uses a spatial transformer network for perspective transformation and the DeepLabv3+ architecture as the backbone of the semantic segmentation network, and it outputs bird's-eye-view semantic segmentation results of the driving environment, including the occluded areas. In addition, rather than relying on manually labeled data, this work collects datasets through the CARLA simulator and annotates them with a designed ray-localization method. Trained on the collected dataset, the proposed method achieves an MIoU of 71.49%, outperforming traditional methods based on inverse perspective transformation and fully connected network models.
Keywords: Semantic segmentation; Intelligent vehicles; Occluded area; Spatial transformation.