Yu Linsen, Chen Zhiguo. Lightweight traffic sign detection network with fused foreground attention [J]. Journal of Electronic Measurement and Instrumentation, 2023, 37(1): 21-31.
Lightweight traffic sign detection network with fused foreground attention
  
DOI:10.13382/j.issn.1000-7105.2023.01.003
Keywords: traffic sign detection; activation function; foreground attention; feature fusion; VariFocalLoss; GIoU
Funding: National Natural Science Foundation of China (62073155)
Author Affiliations:
Yu Linsen  1. School of Artificial Intelligence and Computer Science, Jiangnan University
Chen Zhiguo  1. School of Artificial Intelligence and Computer Science, Jiangnan University; 2. Institute of Advanced Technology, Jiangnan University; 3. Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence; 4. International Joint Laboratory of Pattern Recognition and Computational Intelligence, Jiangnan University
Abstract:
      To address the false detections and missed detections that object detection models are prone to in traffic sign detection, a lightweight traffic sign detection network with fused foreground attention, YOLOT, is proposed. First, the SiLU activation function is introduced to improve detection accuracy. Second, a lightweight backbone network based on the ghost module is designed to extract object features effectively. Third, a foreground attention perception module is introduced to suppress background noise. Fourth, the path aggregation network is improved by adding a residual structure so that low-level feature information is fully exploited during feature fusion. Finally, VariFocalLoss and GIoU are used to compute the classification loss of objects and the similarity between objects, respectively, making classification and localization more accurate. Extensive experiments on several datasets show that the proposed method outperforms current state-of-the-art methods in accuracy. In ablation experiments on the CCTSDB dataset, the final accuracy reaches 98.50%, a 1.32% improvement over the baseline model, while the model occupies only 4.7 MB and the real-time detection frame rate reaches 44 FPS.
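For reference, a minimal sketch of the SiLU activation from the first step, assuming a PyTorch implementation (this is not the paper's code): SiLU(x) = x * sigmoid(x), a smooth, non-monotonic alternative to ReLU. Recent PyTorch also ships it as torch.nn.SiLU.

import torch

def silu(x: torch.Tensor) -> torch.Tensor:
    # SiLU ("swish"): x * sigmoid(x). Smooth and non-monotonic, it keeps a
    # small gradient for negative inputs instead of ReLU's hard zero cut-off.
    return x * torch.sigmoid(x)

x = torch.linspace(-3.0, 3.0, 7)
print(silu(x))             # hand-rolled version
print(torch.nn.SiLU()(x))  # built-in equivalent, identical values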
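The ghost-module idea behind the lightweight backbone comes from GhostNet: a costly convolution produces a few "intrinsic" feature maps, and cheap depthwise convolutions generate the remaining "ghost" maps from them. The sketch below is a generic PyTorch version under that assumption; the layer sizes and the use of SiLU are illustrative, not the paper's exact block.

import torch
import torch.nn as nn

class GhostModule(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, ratio: int = 2, dw_kernel: int = 3):
        super().__init__()
        intrinsic = out_ch // ratio   # channels from the costly convolution
        ghost = out_ch - intrinsic    # channels from cheap depthwise ops
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, intrinsic, 1, bias=False),
            nn.BatchNorm2d(intrinsic),
            nn.SiLU(),
        )
        self.cheap = nn.Sequential(
            # depthwise: one cheap filter per intrinsic map
            nn.Conv2d(intrinsic, ghost, dw_kernel, padding=dw_kernel // 2,
                      groups=intrinsic, bias=False),
            nn.BatchNorm2d(ghost),
            nn.SiLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

m = GhostModule(16, 32)
print(m(torch.randn(1, 16, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])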
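The foreground attention perception module itself is not specified on this page, so the following is only a hypothetical stand-in in the same spirit: a CBAM-style spatial gate that predicts a per-pixel mask and multiplies it onto the features, down-weighting background responses. All names and shapes here are assumptions for illustration.

import torch
import torch.nn as nn

class SpatialForegroundGate(nn.Module):
    # Hypothetical illustration, not the paper's module: a 1-channel mask
    # is predicted from pooled channel statistics and applied to the
    # feature map, suppressing background activations.
    def __init__(self, kernel: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel, padding=kernel // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)     # per-pixel channel mean
        mx, _ = x.max(dim=1, keepdim=True)    # per-pixel channel max
        mask = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * mask                       # down-weight background pixels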
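VariFocalLoss (from VarifocalNet) weights positive samples by their IoU-aware target score q and focally down-weights negatives: VFL(p, q) = -q(q log p + (1-q) log(1-p)) for q > 0, and -a p^g log(1-p) for q = 0. A compact PyTorch sketch with alpha and gamma at their commonly used defaults; how the paper sets these is not stated here.

import torch
import torch.nn.functional as F

def varifocal_loss(pred_logits: torch.Tensor, target: torch.Tensor,
                   alpha: float = 0.75, gamma: float = 2.0) -> torch.Tensor:
    # target is the IoU-aware classification score: the IoU with the ground
    # truth for positives, 0 for negatives. Positives are weighted by the
    # target itself; negatives are down-weighted by alpha * p^gamma so that
    # easy background does not dominate the loss.
    p = pred_logits.sigmoid()
    weight = torch.where(target > 0,
                         target,                 # positives: weight by IoU target
                         alpha * p.pow(gamma))   # negatives: focal down-weighting
    bce = F.binary_cross_entropy_with_logits(pred_logits, target, reduction="none")
    return (weight * bce).sum()

logits = torch.randn(8)
targets = torch.tensor([0.9, 0.0, 0.7, 0.0, 0.0, 0.3, 0.0, 0.0])
print(varifocal_loss(logits, targets))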
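GIoU extends IoU with a penalty for the part of the smallest enclosing box C not covered by the union: GIoU = IoU - |C minus (A union B)| / |C|. Unlike plain IoU it stays informative (negative) even when boxes do not overlap, so 1 - GIoU is usable as a localization loss. A sketch for axis-aligned (x1, y1, x2, y2) boxes:

import torch

def giou(box1: torch.Tensor, box2: torch.Tensor) -> torch.Tensor:
    # intersection
    lt = torch.max(box1[..., :2], box2[..., :2])
    rb = torch.min(box1[..., 2:], box2[..., 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[..., 0] * wh[..., 1]
    # union
    area1 = (box1[..., 2] - box1[..., 0]) * (box1[..., 3] - box1[..., 1])
    area2 = (box2[..., 2] - box2[..., 0]) * (box2[..., 3] - box2[..., 1])
    union = area1 + area2 - inter
    iou = inter / union.clamp(min=1e-7)
    # smallest enclosing box C and the GIoU penalty term
    c_lt = torch.min(box1[..., :2], box2[..., :2])
    c_rb = torch.max(box1[..., 2:], box2[..., 2:])
    c_wh = (c_rb - c_lt).clamp(min=0)
    c_area = c_wh[..., 0] * c_wh[..., 1]
    return iou - (c_area - union) / c_area.clamp(min=1e-7)

a = torch.tensor([[0., 0., 2., 2.]])
b = torch.tensor([[1., 1., 3., 3.]])
print(giou(a, b))  # approx -0.079: IoU 1/7 minus enclosing-box penalty 2/9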