Deep reinforcement learning fault diagnosis method under noisy interference environment
Affiliation:

State Key Laboratory of Mechanical Transmission for Advanced Equipment, Chongqing University, Chongqing 400044, China

CLC Number:

TH17;TN06

Abstract:

To address the poor robustness of deep reinforcement learning for fault diagnosis under strong noise interference, a reinforcement learning fault diagnosis method that adapts to noisy environments is proposed. A deep residual shrinkage network with an efficient channel attention mechanism (ECA-DRSN) serves as the backbone of the Q-network, avoiding the gradient vanishing caused by an overly complex Q-network structure. In the ECA-DRSN, the efficient channel attention mechanism adaptively adjusts the soft threshold, and dilated convolution is introduced into the convolutional layers of the residual shrinkage unit to extract fault features at multiple scales under noise. The exponential linear unit (ELU) is adopted as the activation function to further enhance noise robustness. A quantized reward function based on the signal-to-noise ratio is designed to encourage self-directed exploratory learning by the agent. By combining the dueling Q-network learning mechanism with prioritized experience replay, the agent's optimal diagnostic policy is generated and applied to identify equipment fault states in noisy environments. Experimental results show that the proposed method achieves recognition accuracies of 98.13% on bearing faults and 93.45% on gearbox faults, with better robustness to noise of different intensities and better environmental adaptability.
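Two building blocks named in the abstract can be sketched concretely: the soft-thresholding operation at the heart of a residual shrinkage unit, and an SNR-based quantized reward. The sketch below is a minimal NumPy illustration under stated assumptions; the function names, the reward bin edges, and the reward magnitudes are hypothetical choices for illustration, not taken from the paper.

```python
import numpy as np

def soft_threshold(x, tau):
    """Soft thresholding, the denoising core of a residual shrinkage
    unit: shrink each feature toward zero by tau, zeroing the rest."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def estimate_snr_db(signal, noise):
    """Signal-to-noise ratio in decibels from signal and noise power."""
    return 10.0 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

def quantized_reward(correct, snr_db):
    """Hypothetical SNR-quantized reward: a correct diagnosis made
    under heavier noise (lower SNR) earns a larger reward, pushing
    the agent to keep exploring hard, noisy samples. The bin edges
    below are illustrative assumptions."""
    if snr_db < 0:        # severe noise
        scale = 2.0
    elif snr_db < 6:      # moderate noise
        scale = 1.5
    else:                 # mild noise
        scale = 1.0
    return scale if correct else -scale

# Usage: a clean tone plus Gaussian noise at roughly 0 dB SNR
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
sig = np.sin(2.0 * np.pi * 50.0 * t)
noise = rng.normal(0.0, sig.std(), t.size)
reward = quantized_reward(correct=True, snr_db=estimate_snr_db(sig, noise))
```

In the paper the threshold tau is produced adaptively per channel by the efficient channel attention mechanism rather than fixed, and the reward drives the dueling Q-network with prioritized experience replay; this sketch only isolates the two scalar operations.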

History
  • Online: February 18, 2025