Abstract: Time-frequency-spatial features are widely used in motor imagery EEG classification, but exploiting them effectively to improve classification accuracy remains challenging. Traditional methods often eliminate redundant information through feature selection but tend to overlook the intergroup dependencies of time-frequency-spatial features. To address this issue, we propose an EEG classification model based on a feature recalibration network with weight fusion (FRNWF). First, we extract the time-frequency-spatial features and preserve their grouping structure, treating each feature group as a whole and regarding it as a feature map. Two branches are then established to obtain channel weights for these feature maps: one derives channel weights capturing global information through global average pooling, while the other derives channel weights capturing local information through global maximum pooling. Next, we design a weight fusion operation that merges the two sets of channel weights and rescales the feature maps, thereby modeling the intergroup dependencies of the time-frequency-spatial features. Finally, two fully connected layers perform the classification. Experiments on four publicly available motor imagery EEG datasets show that the proposed method achieves an average classification accuracy of 80.72%, outperforming 18 feature selection methods, existing feature recalibration network methods, and most recently reported approaches. These results indicate that the proposed method has strong potential for practical applications in brain-computer interface research and motor rehabilitation training.
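The dual-branch recalibration described above can be sketched in a few lines of NumPy. This is an illustrative approximation only: the function names, the convex-combination fusion with coefficient `alpha`, and the sigmoid squashing are assumptions for exposition, not the paper's exact FRNWF formulation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def recalibrate(feature_maps, alpha=0.5):
    """Dual-branch channel recalibration with weight fusion (sketch).

    feature_maps: array of shape (C, H, W), where each of the C channels
    is one group of time-frequency-spatial features treated as a feature map.
    alpha: hypothetical fusion coefficient; the paper's fusion rule may differ.
    """
    # Branch 1: global average pooling -> channel weights for global information
    w_avg = feature_maps.mean(axis=(1, 2))
    # Branch 2: global maximum pooling -> channel weights for local information
    w_max = feature_maps.max(axis=(1, 2))
    # Weight fusion: merge the two weight sets, squash to (0, 1)
    w = sigmoid(alpha * w_avg + (1.0 - alpha) * w_max)
    # Rescale each feature map by its fused channel weight
    return feature_maps * w[:, None, None]
```

The rescaled maps would then be flattened and passed to the two fully connected layers for classification; in the full model the pooled vectors would typically also pass through learned layers before fusion, which this sketch omits.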