Research on transformer-based lane segmentation algorithm

CLC Number: TN9

Abstract:

Lane line detection must handle difficult samples such as worn road markings, shadow occlusion, and curves, in which the line information is missing to varying degrees and therefore causes missed or false detections. Detection schemes based on deep learning extract feature information through convolution, which discards the tedious steps of traditional image processing, such as hand-designed filters, and which, thanks to weight sharing and its inductive bias, greatly reduces the workload of feature extraction. However, convolution obtains long-range information by repeatedly reducing the image resolution, so region edges and other details are lost in the low-resolution feature maps, which degrades the quality of the detection results. Because a segmentation model handles more detailed information than a detection model, this paper starts from a segmentation model and introduces a transformer to improve the sampling scheme and compensate for convolution's weakness in capturing global information. After the improvement, the test accuracy on TuSimple rises by 0.4%, the pixel accuracy rises by 0.3, and the number of multiply-accumulate operations increases by 36.09 G. The results show that the transformer's sampling scheme can make up for the deficiency of convolutional sampling and reduce missed detections of difficult lane line samples in a semantic segmentation network.
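The paper's exact network is not reproduced on this page. Purely as an illustration of the idea described above (convolutional features refined by transformer self-attention to recover global context before per-pixel classification), the sketch below shows one possible hybrid block in PyTorch. The module name ConvTransformerSegHead, the channel widths, the number of attention heads and layers, and the binary lane/background output are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's architecture): a small convolutional stem whose
# output is refined by a transformer encoder so that every spatial position can
# attend to every other, before being upsampled to a per-pixel lane mask.
import torch
import torch.nn as nn


class ConvTransformerSegHead(nn.Module):
    """Hypothetical hybrid block: conv features -> global self-attention -> lane logits."""

    def __init__(self, in_channels=3, embed_dim=128, num_classes=2, num_heads=4, num_layers=2):
        super().__init__()
        # Convolutional stem: weight sharing and locality, but a limited receptive field.
        # Three stride-2 convolutions reduce the resolution by a factor of 8.
        self.stem = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.Conv2d(64, embed_dim, 3, stride=2, padding=1), nn.BatchNorm2d(embed_dim), nn.ReLU(inplace=True),
        )
        # Transformer encoder: lets distant pixels (e.g. both ends of a worn or
        # occluded lane marking) exchange information directly. A positional
        # encoding is omitted here for brevity.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, dim_feedforward=embed_dim * 4, batch_first=True,
        )
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        # Per-pixel classifier, followed by upsampling back to the input resolution.
        self.classifier = nn.Conv2d(embed_dim, num_classes, kernel_size=1)

    def forward(self, x):
        feats = self.stem(x)                       # (B, C, H/8, W/8)
        b, c, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)  # (B, H*W/64, C) sequence of patch tokens
        tokens = self.transformer(tokens)          # global attention over all positions
        feats = tokens.transpose(1, 2).reshape(b, c, h, w)
        logits = self.classifier(feats)
        return nn.functional.interpolate(
            logits, scale_factor=8, mode="bilinear", align_corners=False
        )


if __name__ == "__main__":
    model = ConvTransformerSegHead()
    out = model(torch.randn(1, 3, 256, 512))  # e.g. a TuSimple-sized crop
    print(out.shape)  # torch.Size([1, 2, 256, 512])
```

In such an arrangement, the attention step is what allows pixels on either side of a shadowed or worn stretch of lane marking to share evidence, which is the gap the abstract attributes to pure convolutional sampling; it also accounts for the extra multiply-accumulate cost reported above.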

History
  • Online: March 29, 2023