Abstract: In cross-user gesture recognition, single-source domain transfer learning often suffers from negative transfer and poor generalization. This study presents a multi-source domain transfer learning method built on the fusion of EMG and inertial features. The key idea is to combine data from multiple source domains and then apply domain-specific feature alignment and domain classifier alignment, improving recognition performance across users and thereby the accuracy of cross-user gesture recognition systems. First, a long short-term memory (LSTM) network extracts time-series features, including mean absolute value, variance, and peak value, from the EMG and inertial data. Next, domain-specific feature alignment and domain classifier alignment are performed, so that feature extraction in the target domain is guided by data from the multiple source domains. Finally, three loss functions (classification loss, domain-specific feature discrepancy loss, and domain classifier discrepancy loss) are fused to jointly optimize the overall objective. Experimental results show that the proposed method improves the average recognition rate over traditional approaches such as single-source-domain and combined-source-domain transfer; on the NinaPro DB5 dataset, the average gesture recognition accuracy for target users exceeds 80%.
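The two computational ingredients named in the abstract can be sketched minimally. The feature functions below follow the standard definitions of mean absolute value, variance, and peak value; the loss combination mirrors the described fusion of three loss terms, but the function names and the weights `lambda_f` and `lambda_c` are assumptions for illustration, not details given in the abstract:

```python
import statistics

def window_features(window):
    """Time-domain features of one signal window (standard definitions,
    not the paper's exact implementation): mean absolute value,
    variance, and peak (maximum absolute) value."""
    mav = sum(abs(x) for x in window) / len(window)
    var = statistics.pvariance(window)
    peak = max(abs(x) for x in window)
    return mav, var, peak

def total_loss(cls_loss, feature_disc_losses, clf_disc_losses,
               lambda_f=1.0, lambda_c=1.0):
    """Fuse the three loss terms described in the abstract:
    classification loss plus per-source-domain feature-discrepancy and
    classifier-discrepancy losses (trade-off weights are assumed)."""
    return (cls_loss
            + lambda_f * sum(feature_disc_losses)
            + lambda_c * sum(clf_disc_losses))
```

Here `feature_disc_losses` and `clf_disc_losses` would hold one discrepancy value per source domain, so the sum grows with the number of source domains being aligned.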