Multi-source domain transfer learning for electromyography-inertial feature fusion and gesture recognition
Affiliation:

1. College of Electrical Engineering, Yanshan University, Qinhuangdao 066000, China; 2. Hebei Key Laboratory of Intelligent Rehabilitation and Neuroregulation, Qinhuangdao 066000, China; 3. School of Design & Arts, Beijing Institute of Technology, Beijing 100081, China; 4. College of Physical Education, Yanshan University, Qinhuangdao 066000, China

CLC Number: TP391.4

Abstract:

To address negative transfer and poor generalization in single-source domain transfer learning for cross-user gesture recognition, this study proposes a multi-source domain transfer learning method built on the fusion of EMG and inertial features. The key idea is to combine data from multiple source domains and then apply domain-specific feature alignment and domain classifier alignment, improving recognition performance across users and thereby raising the accuracy of cross-user gesture recognition systems. First, a long short-term memory (LSTM) network extracts time-series features, such as mean absolute value, variance, and peak value, from the EMG and inertial data. Next, domain-specific feature alignment and domain classifier alignment are performed, so that features in the target domain are extracted with the help of data from multiple source domains. Finally, three loss terms, namely the classification loss, the domain-specific feature discrepancy loss, and the domain classifier discrepancy loss, are combined to optimize the overall objective jointly. Experimental results show that the proposed method improves the average recognition rate over several conventional approaches, including single-source domain and combined-source-domain methods; on the NinaPro DB5 dataset, the average gesture recognition accuracy for the target users exceeds 80%.
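The sketch below illustrates one way the three-part objective described in the abstract could be assembled. It is a minimal PyTorch example, not the authors' implementation: the two-source setup, the shared LSTM encoder with per-source projections and classifiers, the mean-matching form of the feature discrepancy loss (an MMD-style loss is another common choice), the L1 discrepancy between per-source classifier outputs on target data, and the weights alpha and beta are all assumptions introduced for illustration.

```python
# Minimal multi-source domain adaptation sketch (assumed structure, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiSourceGestureNet(nn.Module):
    def __init__(self, in_dim, hidden_dim, n_classes, n_sources=2):
        super().__init__()
        # Shared LSTM encoder over fused EMG + inertial feature sequences
        self.lstm = nn.LSTM(in_dim, hidden_dim, batch_first=True)
        # One domain-specific projection and classifier per source domain
        self.proj = nn.ModuleList([nn.Linear(hidden_dim, hidden_dim) for _ in range(n_sources)])
        self.clf = nn.ModuleList([nn.Linear(hidden_dim, n_classes) for _ in range(n_sources)])

    def encode(self, x, k):
        _, (h, _) = self.lstm(x)          # x: (batch, time, in_dim)
        return self.proj[k](h[-1])        # feature space specific to source domain k

    def forward(self, x, k):
        return self.clf[k](self.encode(x, k))


def total_loss(model, src_batches, tgt_x, alpha=1.0, beta=1.0):
    """Classification loss + feature discrepancy loss + classifier discrepancy loss."""
    cls_loss, feat_loss, tgt_probs = 0.0, 0.0, []
    for k, (xs, ys) in enumerate(src_batches):
        # 1) supervised classification loss on each labelled source domain
        cls_loss = cls_loss + F.cross_entropy(model(xs, k), ys)
        # 2) align source-k and target features in the k-th feature space
        #    (simple mean matching here; an MMD-style loss could be substituted)
        fs, ft = model.encode(xs, k), model.encode(tgt_x, k)
        feat_loss = feat_loss + (fs.mean(0) - ft.mean(0)).abs().mean()
        tgt_probs.append(F.softmax(model.clf[k](ft), dim=1))
    # 3) classifier discrepancy: per-source classifiers should agree on target data
    clf_loss = 0.0
    for i in range(len(tgt_probs)):
        for j in range(i + 1, len(tgt_probs)):
            clf_loss = clf_loss + (tgt_probs[i] - tgt_probs[j]).abs().mean()
    return cls_loss + alpha * feat_loss + beta * clf_loss
```

At inference time on the target user, a natural choice under this structure is to average the per-source classifiers' softmax outputs and take the arg-max as the predicted gesture.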

History
  • Online: October 18, 2024