• Volume 39, Issue 2, 2025 Table of Contents
    • Research on electromagnetic conductivity detection method with wide range and high precision

      2025, 39(2):1-10.

      Abstract:To address the limited measurement range and poor stability of traditional water quality detection systems, a wide-range, high-precision electromagnetic conductivity measuring device was designed and its range-switching control method was optimized, enabling wide-range, high-precision and high-stability detection of solution concentration. A wide-range conductivity measurement system is highly sensitive to the threshold setting used for range switching. First, the optimal threshold inflection point of the solution conductivity is selected experimentally. For signals that switch ranges frequently near the threshold, a point-by-point comparison is used to lock them into a single range interval, improving the stability and reliability of the system. Second, because signals in the locked region have low accuracy, a data fusion algorithm based on fuzzy membership functions is applied; after minimum root-mean-square-error experiments on ten conductive solutions, the optimal fuzzy interval is selected to fuse the locked signals. Finally, multiple sets of measured data were substituted into the proposed method to verify its effectiveness. The experimental results show that the parameter-optimized fusion algorithm achieves accurate measurement of conductivity at the threshold edge, with a maximum measurement error of 0.85%, significantly better than the traditional single-range measurement method, whose maximum relative error is 2.86%. In addition, the detection range of the designed conductivity sensor is 0.1 to 2 000 mS/cm, and the relative errors across the full range are all less than 1%. This indicates that the proposed method achieves high-precision, wide-range detection of solution conductivity.
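
      The fuzzy-interval fusion step can be illustrated with a small sketch: readings from the low-range and high-range channels are blended with a membership weight inside an assumed fusion interval around the switching threshold (the threshold, interval width, and values below are illustrative, not the paper's calibrated parameters).

```python
# Hedged sketch: fusing low-range and high-range conductivity readings near a
# switching threshold with a linear fuzzy membership function.  THRESHOLD and
# HALF_WIDTH are illustrative assumptions, not the paper's calibrated values.
import numpy as np

THRESHOLD = 100.0   # mS/cm, assumed switching point
HALF_WIDTH = 10.0   # assumed half-width of the fuzzy (fusion) interval

def fuzzy_weight(x):
    """Membership of the high-range channel; rises linearly across the interval."""
    return np.clip((x - (THRESHOLD - HALF_WIDTH)) / (2 * HALF_WIDTH), 0.0, 1.0)

def fuse(low_range_value, high_range_value):
    """Weighted fusion of the two range channels inside the locked interval."""
    x = 0.5 * (low_range_value + high_range_value)  # rough operating point
    w = fuzzy_weight(x)
    return (1.0 - w) * low_range_value + w * high_range_value

print(fuse(98.7, 101.9))  # a reading near the threshold uses both channels
```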

    • LSTM-WGAN-based model predictive control method of plunger-foam compound drainage system

      2025, 39(2):11-20.

      Abstract:Efficient production processes and smart management are key to the sustainable development of natural gas wells. At present, shale gas production still faces liquid loading in wellbores, which reduces gas well productivity. In this paper, a “dual-element integration” plunger-foam compound drainage device is designed to improve the productivity and drainage efficiency of gas wells, taking full advantage of both foam drainage and plunger-lift gas recovery. A novel LSTM-WGAN predictive control method based on long short-term memory networks (LSTM) and Wasserstein generative adversarial networks (WGAN) is proposed. Density-based spatial clustering of applications with noise (DBSCAN) is used to preprocess the data and prevent abnormal samples from degrading model prediction. The generator and the discriminator compete with each other and update their weights along their respective gradient directions, so that the predicted values of the oil-casing pressure difference and water-gas ratio are continuously optimized toward the true values. This enables the model to accurately predict the oil-casing pressure difference and water-gas ratio at the next time step. The predicted drainage strategy is implemented through the plunger-foam compound drainage intelligent management system. Compared with the LSTM model, the LSTM-WGAN model reduces the root mean square error (RMSE), mean square error (MSE), and mean absolute error (MAE) of the predicted oil-casing pressure difference and water-gas ratio by 2.64%, 5.13%, 11.75% and 8.81%, 8.07%, 6.60%, respectively. The experimental results demonstrate that the prediction model accurately predicts the oil-casing pressure difference and water-gas ratio, guides the compound drainage system to issue correct instructions for deploying foam and plungers, and realizes intelligent plunger-foam delivery.
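
      A minimal sketch of the preprocessing described above, assuming DBSCAN is applied to the raw one-dimensional samples and that the cleaned series is then cut into one-step-ahead training windows for the LSTM generator; eps, min_samples, and the window length are illustrative choices.

```python
# Hedged sketch: DBSCAN outlier removal before building one-step-ahead training
# windows.  Parameters are illustrative; the paper's preprocessing settings are
# not reproduced here.
import numpy as np
from sklearn.cluster import DBSCAN

def clean_and_window(series, eps=0.5, min_samples=5, window=24):
    """Drop DBSCAN noise points (label -1), then build (window -> next value) pairs."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(series.reshape(-1, 1))
    kept = series[labels != -1]
    inputs, targets = [], []
    for i in range(len(kept) - window):
        inputs.append(kept[i:i + window])
        targets.append(kept[i + window])
    return np.array(inputs), np.array(targets)

# usage: pressure-difference samples -> (input window, next value) training pairs
Xw, yw = clean_and_window(np.random.rand(500))
```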

    • Hardware acceleration implementation of a ground-based cloud image classification algorithm

      2025, 39(2):21-31.

      Abstract:The automatic observation and recognition of ground-based clouds are significant for analyzing atmospheric motion trends and for weather forecasting. To address the low accuracy of ground-based cloud image classification algorithms and the difficulty of deploying them on embedded terminals, a ground-based cloud image classification network, GBcNet, based on a residual network structure, and a hardware implementation architecture based on ZYNQ are proposed. The PS side loads the model's weight parameters and cloud data, while the PL side implements DDR3 read-write control and GBcNet hardware acceleration. Accelerated IP cores were designed for each module of the GBcNet network, including the sliding window, convolutional layer, pooling layer, batch normalization layer, and fully connected layer. Experiments were conducted on the CCSN dataset, and the results show that the GBcNet model achieves an accuracy of 96.02% on a PC platform. After hardware acceleration, the accuracy remains 94.5%. Compared with the PC model, the accuracy loss for each cloud class does not exceed 3%, and the overall accuracy loss is less than 1.5%. The maximum resource usage on the FPGA does not exceed 48%, and the inference time for a single ground-based cloud image is 0.13 seconds. Compared with existing ground-based cloud recognition methods, this approach demonstrates higher accuracy and shorter inference time. The proposed recognition model and acceleration method provide a reference solution for the development of multi-node, portable ground-based cloud observation equipment.

    • Gait recognition method integrating 3D-CBAM and cross-time scale feature analysis

      2025, 39(2):32-40.

      Abstract:Addressing the limitation of traditional gait recognition methods that neglect temporal information in gait features, we propose a gait recognition framework that integrates 3D-CBAM and cross-temporal scale feature analysis. By incorporating an attention module into the model, it adaptively focuses on critical channels and spatial locations within the input gait sequences, enhancing the model’s gait recognition performance. Furthermore, the enhanced global and local feature extractor (EGLFE) decouples temporal and spatial information to a certain extent during global feature extraction. By inserting additional LeakyReLU layers between 2D and 1D convolutions, the number of nonlinearities in the network is increased, which aids in expanding the receptive field during gait feature extraction. This, in turn, boosts the model’s ability to learn features, achieving better global feature extraction results. Local features are also integrated to compensate for feature loss due to partitioning. A multi-scale temporal enhancement module fuses frame-level features and short-to-long-term temporal features, enhancing the model’s robustness against occlusion. We conducted training and testing on the CASIA-B and OU-MVLP datasets. On the CASIA-B dataset, the average recognition accuracy reached 92.7%, with rank-1 accuracies of 98.1%, 95.1%, and 84.9% for normal (NM), bag (BG), and clothing (CL) conditions, respectively. Experimental results demonstrate that the proposed method exhibits excellent performance under both normal walking and complex conditions.
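
      A minimal sketch of a 3D CBAM-style block (channel attention followed by spatial attention over a frames×height×width gait feature volume), written in PyTorch; the reduction ratio, kernel size, and tensor shape are illustrative assumptions rather than the paper's configuration.

```python
# Hedged sketch of a 3D CBAM-style attention block: channel attention from
# global average/max statistics, then spatial attention from a 2-channel map.
# Sizes and the reduction ratio are illustrative choices.
import torch
import torch.nn as nn

class CBAM3D(nn.Module):
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        # Channel attention: pooled descriptors pass through a shared bottleneck MLP.
        self.mlp = nn.Sequential(
            nn.Conv3d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: (avg, max) over channels -> single-channel mask.
        self.spatial = nn.Conv3d(2, 1, spatial_kernel, padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3, 4), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3, 4), keepdim=True))
        x = x * torch.sigmoid(avg + mx)                        # channel attention
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))              # spatial attention

# usage: a (batch, channels, frames, height, width) gait feature tensor
out = CBAM3D(64)(torch.randn(2, 64, 30, 64, 44))
```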

    • Low temperature counting rate attenuation compensation method for radiation source density detection

      2025, 39(2):41-48.

      Abstract:The radioactive source density detection system is an instrument that uses radioactive isotopes to measure material density and is used in the petrochemical, mining, and medical industries. Its principle is that the highly penetrating radiation from the source passes through a sealed pipeline containing the material, is detected by scintillators and photomultiplier tubes, and is converted into voltage pulse signals. However, the complex environment of chemical sites affects system performance, the most obvious influence being low temperature. Compared with its performance at the calibration temperature, the output pulse count rate of the system drops markedly in low-temperature environments, by up to 30%, which reduces density measurement accuracy, can mislead on-site personnel, and makes the system output unreliable. This article introduces a temperature compensation method based on a probabilistic principal component regression (PPCR) model. It analyzes the low-temperature behavior of the key detection devices in the system, such as the scintillators and photomultiplier tubes, collects attenuation data, and constructs the PPCR model without any hardware modification. The expectation-maximization algorithm is used to estimate the attenuation parameter set of the model and to apply the compensation. The test results show that this compensation method keeps the count rate loss below 3% in low-temperature environments, improving the density detection accuracy of the system at low temperatures.

    • Transformer-based channel-by-channel point cloud analysis network

      2025, 39(2):49-59.

      Abstract:3D point clouds can fully describe the geometric information of target objects and are widely used in fields such as autonomous driving, medical imaging and robotics. However, existing methods do not differentiate between features in different channels and apply a single coding strategy to both low-level spatial coordinates and high-level semantic features, which leads to incomplete point cloud feature extraction. Therefore, this paper proposes a channel-by-channel point cloud analysis network based on the Transformer. First, to overcome the difficulty traditional graph convolution has in distinguishing effective information in mixed channels, a depthwise separable edge convolution is designed, which significantly improves inter-channel discrimination while preserving local geometric information during channel-by-channel feature extraction. Second, to address the insufficient information extraction caused by the Transformer's uniform coding of low-level spatial coordinates and high-level semantic features, two feature coding strategies are proposed: adaptive positional coding and spatial context coding, which explore the implicit geometric structures in low-level space and the complex contextual relationships in high-level space, respectively. Finally, an effective fusion strategy is proposed to obtain a more discriminative feature representation. To fully demonstrate the effectiveness of the proposed model, point cloud classification experiments are conducted on the public datasets ModelNet40 and ScanObjectNN, where the overall classification accuracies reach 93.7% and 83.2%, respectively, and the mean intersection over union for part segmentation reaches 86.0% on the public dataset ShapeNet Part. The method in this paper therefore delivers advanced performance in both classification and segmentation tasks.

    • Image denoising using dual convolutional neural network with attention mechanism

      2025, 39(2):60-71.

      Abstract:In recent years, deep convolutional neural networks have shown superior performance in image denoising. However, deep network structures often come with a large number of parameters, leading to high training costs and long inference times, which limits their practical application in denoising tasks. This paper presents a novel attention-based dual convolutional image denoising network with skip connections (MA-DFRNet) that achieves a good trade-off between denoising performance and network complexity. MA-DFRNet comprises a multi-scale feature extraction network, dual convolutional neural networks, and a dynamic feature refinement attention mechanism. The multi-scale feature extraction network employs convolutions at various scales to capture image features more flexibly. The dual convolutional neural networks use skip connections and dilated convolutions in both the upper and lower branches to expand the receptive field. Furthermore, the dynamic feature refinement attention mechanism enhances the accuracy and discriminability of the feature representation. This design not only enlarges the receptive field but also effectively extracts and integrates image features, leading to significant improvements in denoising performance. The results show that the proposed MA-DFRNet outperforms state-of-the-art models in PSNR and SSIM at all noise levels considered in the comparisons: PSNR increases by approximately 0.2 dB, and SSIM improves by around 1%. Notably, MA-DFRNet is more robust on images with higher noise levels and visually preserves image details better, effectively balancing denoising and detail retention.

    • Aerodynamic parameter identification of projectiles optimized by improved sparrow search algorithm based kernel extreme learning machine

      2025, 39(2):72-82.

      Abstract:The aerodynamic parameters of a projectile directly affect its flight trajectory, which in turn determines the projectile's design and performance evaluation. Because of the complex aerodynamic environment and the interactions between aerodynamic parameters during high-speed flight, accurately identifying these parameters is challenging. To address this, this paper proposes a combined identification model using the sparrow search algorithm (SSA) and the kernel extreme learning machine (KELM). To fully exploit the performance of the SSA and improve identification accuracy, its initialization strategy, convergence factor, and position update strategy are improved, and the effectiveness of these improvements is validated on the CEC2022 benchmark functions for the improved sparrow search algorithm (ISSA). Furthermore, the ISSA is employed to optimize the kernel parameters and regularization coefficient of the KELM, yielding the proposed ISSA-KELM identification model. The results show that the basic extreme learning machine (ELM) yields the lowest identification accuracy and fails to capture the nonlinear characteristics of the aerodynamic parameters in certain regions. By introducing a kernel function into the ELM, the KELM improves identification accuracy by 1 to 4 orders of magnitude. While the KELM and SSA-KELM models still deviate from the true values in nonlinear regions, the ISSA-KELM model provides the most accurate results, improving accuracy by approximately 4 to 5 orders of magnitude compared with the basic ELM. This research offers reliable technical support for precise flight trajectory prediction and projectile performance optimization.
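
      The KELM at the core of the identification model has a closed-form solution, sketched below with an RBF kernel; in the paper the ISSA searches for the kernel parameter and regularization coefficient, whereas here they are fixed illustrative values.

```python
# Hedged sketch of a kernel extreme learning machine (KELM) with an RBF kernel.
# gamma and C would be tuned by the ISSA in the paper; here they are placeholders.
import numpy as np

def rbf(A, B, gamma):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

class KELM:
    def __init__(self, gamma=1.0, C=100.0):
        self.gamma, self.C = gamma, C

    def fit(self, X, y):
        self.X = X
        K = rbf(X, X, self.gamma)
        # Closed-form output weights: beta = (I / C + K)^-1 y
        self.beta = np.linalg.solve(np.eye(len(X)) / self.C + K, y)
        return self

    def predict(self, Xq):
        return rbf(Xq, self.X, self.gamma) @ self.beta

# usage: fit sampled aerodynamic-coefficient data, then query new flight states
model = KELM().fit(np.random.rand(50, 4), np.random.rand(50))
pred = model.predict(np.random.rand(5, 4))
```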

    • Rainfall recognition method based on MFCC and PSO-SVM

      2025, 39(2):83-91.

      Abstract:To address the low accuracy of rainfall recognition based on rain sound signals and machine learning methods, this paper analyzes the frequency characteristics of rain sound signals, studies the static and dynamic Mel frequency cepstral coefficient features of rain sound signals, and proposes a rainfall recognition method that combines Mel frequency cepstral coefficients (MFCC) with a particle swarm optimization support vector machine (PSO-SVM). The static and dynamic MFCC features are extracted from rain sound signals, the importance evaluation mechanism built into the random forest algorithm is used for feature selection, and the PSO algorithm is introduced to fine-tune the penalty parameter c and kernel parameter g of the SVM, finding the optimal parameter combination for accurate rainfall identification. The experimental results show that MFCC features characterize raindrop acoustic signatures more effectively than other features. After random forest feature selection, the overall rainfall recognition accuracy increased by 5%. Combined with the optimized PSO-SVM, the overall rainfall recognition accuracy reached 91.1%. The recognition accuracy for heavy and light rain also exceeded 90%, while that for moderate rain was slightly lower but still reached 86.5%.
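
      A minimal sketch of the PSO step, assuming the MFCC feature matrix and rainfall labels are already prepared: particles encode the SVM penalty C and RBF parameter g, and cross-validated accuracy is the fitness; swarm size, iteration count, and coefficient values are illustrative.

```python
# Hedged sketch: particle swarm search over the SVM penalty C and RBF gamma,
# scored by cross-validated accuracy on precomputed MFCC feature vectors.
# Swarm size, bounds, and PSO coefficients are illustrative choices.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def pso_svm(X, y, n_particles=10, n_iter=20, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    lo, hi = [0.1, 1e-4], [100.0, 1.0]                  # bounds for (C, gamma)
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_score = pos.copy(), np.full(n_particles, -np.inf)
    gbest, gbest_score = pos[0].copy(), -np.inf
    for _ in range(n_iter):
        for i, (C, g) in enumerate(pos):
            score = cross_val_score(SVC(C=C, gamma=g), X, y, cv=3).mean()
            if score > pbest_score[i]:
                pbest[i], pbest_score[i] = pos[i].copy(), score
            if score > gbest_score:
                gbest, gbest_score = pos[i].copy(), score
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
    return gbest, gbest_score  # best (C, gamma) and its CV accuracy

# usage (hypothetical names): best_params, acc = pso_svm(mfcc_features, rain_labels)
```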

    • Research on highlight removal method driven by three component dichromatic reflection model for transparent PET bottle images

      2025, 39(2):92-101.

      Abstract:Highlights generated on an object's surface under illumination cause loss of its own color information, which degrades feature extraction in stereo matching and 3D reconstruction. Because a dichromatic reflection model containing only diffuse and specular reflection cannot accurately describe the distribution of reflection components in transparent PET bottles, a highlight removal method based on an L2-normalized three-component dichromatic reflection model is proposed. First, an L2-normalized three-component dichromatic reflection model is constructed for transparent PET images to describe the distribution of reflection components in transparent PET bottles. Based on this model, the global pixel information of the transparent PET image is decomposed and the L2-normalized chromaticity map is calculated; the L2 chromaticity intensity ratio of the global pixels is then computed from the L2-normalized chromaticity map. Next, clustering analysis with an exponential transformation is performed on the L2-normalized chromaticity map to detect the highlight areas of the transparent PET bottles and capture their intrinsic color information. Finally, the L2 chromaticity intensity ratio is used to recover the pixel information in the highlight areas. A transparent PET bottle dataset was established for the experiments. The results show that, compared with highlight removal driven by the traditional dichromatic reflection model, the proposed method improves the mean square error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) metrics by 12.1%, 21.1%, and 11.5%, respectively.
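
      The starting quantities of the pipeline, the per-pixel L2-normalized chromaticity and the associated intensity, can be computed as in the sketch below; the variable names and the small epsilon guard are illustrative.

```python
# Hedged sketch: per-pixel L2-normalized chromaticity and L2 intensity, the two
# quantities the highlight-removal pipeline starts from.
import numpy as np

def l2_chromaticity(img):
    """img: H x W x 3 float RGB.  Returns unit-norm chromaticity and L2 intensity."""
    norm = np.linalg.norm(img, axis=2, keepdims=True) + 1e-8
    return img / norm, norm.squeeze(2)

# usage: chrom feeds the clustering step; intensity supports the ratio computation
chrom, intensity = l2_chromaticity(np.random.rand(480, 640, 3))
```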

    • Lightning current waveform measurement based on Rogowski coil and CEEMDAN

      2025, 39(2):102-112.

      Abstract:The lightning current waveform measurement module embedded in a surge protector provides data support for its aging analysis. To address the current problems in lightning current measurement within surge protectors, such as the inability to record complete waveforms and high noise levels, this study analyzes the characteristics of lightning current waveforms and designs a measurement circuit comprising a Rogowski coil, a low-noise wideband amplifier, single-ended to differential conversion, a high-speed analog-to-digital converter, and an FPGA. The FPGA handles processing of the collected signals, data caching and transmission, and counting of lightning strikes. This design aims to reduce measurement errors in lightning current waveforms. A triggering method combining threshold and slope criteria is employed to improve the accuracy of counting lightning current impulses. In addition, a method combining a differentiating circuit with complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) is proposed to reduce the influence of noise on the lightning current waveform. Experimental tests were conducted with a lightning current combination wave generator in a lightning protection laboratory to evaluate the system and the denoising method. The results demonstrate that the system can accurately and completely record lightning current waveforms. For lightning currents with peaks in the range of 1~10 kA there are no missed triggers, the measurement error of the half-peak arrival time is ≤0.2 μs, and the measurement error of the average front slope is ≤2.04%. The system will promote the intelligent development of surge protectors and provide data support for lightning research.
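
      A rough software sketch of CEEMDAN denoising using the third-party PyEMD package (an assumed tool; the paper's processing runs on an FPGA together with a differentiating circuit). Discarding only the first intrinsic mode function as noise is an illustrative choice, not the paper's rule.

```python
# Hedged sketch: CEEMDAN-based denoising of a sampled lightning-current record
# via the PyEMD package (assumed tooling, not the paper's FPGA implementation).
import numpy as np
from PyEMD import CEEMDAN

def ceemdan_denoise(signal, drop_imfs=1):
    imfs = CEEMDAN()(signal)                 # decompose into intrinsic mode functions
    return np.sum(imfs[drop_imfs:], axis=0)  # rebuild without the noisiest mode(s)

t = np.linspace(0, 1e-3, 2000)
noisy = np.exp(-t / 2e-4) * np.sin(2 * np.pi * 5e3 * t) + 0.05 * np.random.randn(t.size)
clean = ceemdan_denoise(noisy)
```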

    • Research on a visual measurement device and method for slope displacement

      2025, 39(2):113-122.

      Abstract:To address the high cost and demanding environmental requirements of traditional slope displacement monitoring methods, a low-cost, high-precision visual measurement device and method for slope displacement are proposed. First, a high-precision camera and a dedicated marker are deployed in the monitoring scene, and an improved YOLOv8-Pose algorithm is used to obtain the initial locations of the marker's key points. A sub-pixel extraction technique is then applied to the key points to obtain their precise coordinates at the sub-pixel level. Next, the pixel displacement of the marker is calculated by comparing the keypoint coordinate offsets at different times. Finally, the actual displacement is obtained by scale conversion using the known geometric dimensions of the marker, realizing accurate monitoring of slope displacement. To verify the practical effectiveness of the method, on-site monitoring experiments were conducted on a highway slope in Guizhou Province. The experimental results show that the visual measurement method performs well in slope displacement monitoring: compared with total station measurements, the accuracy of horizontal displacement reaches 90.43% and that of vertical displacement 91.58%, both above 90%, fully verifying the feasibility and effectiveness of the method in practical engineering applications.
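
      The scale-conversion step can be sketched as below: keypoint pixel offsets are converted to millimetres using the marker's known physical size; the 100 mm marker width and the sample numbers are illustrative assumptions.

```python
# Hedged sketch of the scale-conversion step: keypoint pixel offsets are turned
# into millimetres using the marker's known physical size (values illustrative).
import numpy as np

def pixel_to_mm(offset_px, marker_width_px, marker_width_mm=100.0):
    scale = marker_width_mm / marker_width_px        # mm per pixel at the marker plane
    return np.asarray(offset_px) * scale

# usage: a keypoint moved (3.2, -1.5) px between epochs; the marker spans 410 px
dx_mm, dy_mm = pixel_to_mm((3.2, -1.5), marker_width_px=410.0)
```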

    • Graph structure motion segmentation method for geometric information learning

      2025, 39(2):123-135.

      Abstract:A graph-structured motion segmentation network (GS-Net) for geometric information learning is proposed to address the shortcomings of existing motion segmentation methods in traffic scenarios, namely limited practicality and the difficulty of balancing performance and validation time. GS-Net consists of a point embedding module, a local context fusion module, a global bilateral regularization module, and a classification module. The point embedding module maps the original key feature point data from a low-dimensional space in which classes are hard to separate linearly to a high-dimensional space in which they are easy to separate, which helps the network learn the relationships between moving objects in the image. The local context fusion module uses a dual-branch graph structure to extract local information from both the feature space and the geometric space and then fuses the two to obtain a more powerful local feature representation. The global bilateral regularization module uses point-wise and channel-wise global perception to enhance the local feature representations obtained by the local context fusion module. The classification module maps the enhanced local feature representations back to the low-dimensional classification space for segmentation. GS-Net's mean and median misclassification rates on the KT3DMoSeg dataset are 2.47% and 0.49%, respectively, which are 8.15% and 7.95% lower than those of SubspaceNet, and 7.2% and 0.57% lower than those of SUBSET. Meanwhile, GS-Net improves network inference speed by two orders of magnitude compared with both SubspaceNet and SUBSET. GS-Net's recall and F-measure on the FBMS dataset are 82.53% and 81.93%, respectively, improvements of 13.33% and 5.36% over SubspaceNet and of 9.66% and 3.71% over SUBSET. The experimental results demonstrate that GS-Net can quickly and accurately segment moving objects in real traffic scenes.

    • Lightweight improvement of YOLOv8n for garbage detection in complex background environments

      2025, 39(2):136-146.

      Abstract:Garbage detection and classification are essential for promoting the green economy and achieving a low-carbon circular economy. However, current models face challenges such as large parameters and high computational costs, limiting their deployment on resource-constrained devices. To address these issues, a lightweight GCAW-YOLOv8n model is proposed that balances model size and detection accuracy. Firstly, the C3Ghost and GhostConv modules from GhostNet are integrated into the YOLOv8n backbone to reduce parameters. Secondly, the context anchor attention is introduced to enhance feature extraction and detection accuracy. Then, the asymptotic feature pyramid network is used to improve multi-scale detection, and the WIoU v3 loss function optimizes bounding box regression. Finally, the improved model is validated using the Taco dataset and a custom dataset. Experimental results show that, compared with the original YOLOv8n model, the GCAW-YOLOv8n model reduces parameters by 14.3% and floating-point operations by 33.3%, while precision and recall increase by 4.4% and 1.9%, respectively. The mAP@0.5 improves to 81.3%, a 0.7% gain. This model achieves a better balance between lightweight design and detection accuracy, making it suitable for deployment in edge devices for garbage detection.

    • Research on comprehensive detection method of induced electric-magnetic coupling resonance for fault of mining underground cable

      2025, 39(2):147-159.

      Abstract:Power cable failures in mines with high gas concentrations force coal shearers and other equipment to stop operating, severely affecting production efficiency and economic benefits. The methods commonly used to detect cable faults on the surface, such as the high-voltage pulse flashover method and the traveling wave reflection method, are not suitable for underground detection environments. Therefore, this paper proposes a new comprehensive detection method for underground mining cable faults based on induced electric field-magnetic coupling resonance. A mathematical model of the method under low-frequency sinusoidal excitation is established. Using the multiphysics simulation software COMSOL, the electromagnetic field quantities are solved for open-circuit and short-circuit faults in the cable, yielding the two-dimensional distributions of electric and magnetic field strength and the one-dimensional curve of the detection coil voltage. Simulation and experiments determine how the induced voltage varies with lift-off height under open-circuit and short-circuit conditions. The results show that the comprehensive method based on low-frequency induced electric field-magnetic coupling resonance is feasible for underground detection of mining cable faults. Under excitation with an amplitude of 1~20 V and a frequency range of 1 kHz~20 MHz, scanning along the cable at a constant lift-off height can detect open-circuit and short-circuit faults in cables whose length is within one-tenth of the excitation wavelength. The work provides an effective method for underground detection of coal mine cable faults and for product development.

    • Identification method of illegal sand mining vessels in foggy conditions based on deep learning

      2025, 39(2):160-168.

      Abstract:To address the low monitoring efficiency and poor accuracy of identifying illegal sand mining vessels in foggy conditions, this study proposes a deep learning-based identification method. First, an improved generative adversarial network is employed to dehaze the collected images and produce clear water-area images. The generator integrates a feature attention mechanism to extract the complex texture features of vessels in foggy environments, spectral normalization is added to the discriminator to prevent gradient vanishing during training, and a cycle consistency loss ensures structural consistency between the generated and original images. The CBAM attention mechanism is integrated into the YOLOv8 algorithm to improve feature extraction, enabling precise localization and identification of illegal sand mining vessels in the dehazed images. The improved GAN achieves superior dehazing performance, with PSNR and SSIM values of 31.86 and 0.64, representing improvements of 3.6%~13.1% and 4.9%~56.1% over CycleGAN and GC-GAN, respectively. The CBAM-enhanced YOLOv8 achieves an mAP@0.5:0.95 of 89.6% and 36 FPS on dehazed images, meeting the accuracy and speed requirements of practical law enforcement. The proposed method effectively enhances the informatization and intelligence of illegal sand mining supervision and enforcement in the Yangtze River Basin.

    • Uncertainty analysis for electromagnetic parameter measurement method of absorbing materials based on waveguide device

      2025, 39(2):169-176.

      Abstract:The electromagnetic parameters of materials are characterized by complex permittivity and complex permeability. Owing to the nonlinearity of the formulas used to solve for electromagnetic parameters with the transmission/reflection method, the numerous sources of uncertainty, and their correlations, it is extremely complicated to analyze the uncertainty of material electromagnetic parameter measurements analytically. In this paper, the Monte Carlo method is introduced to simplify the analysis and to study the key factors influencing the system uncertainty. The uncertainties of the amplitude and phase of the S-parameters measured by the vector network analyzer and the uncertainties introduced by the dimensional tolerance of the waveguide fixture are derived, and the various uncertainty sources and their probability density functions are analyzed. Taking the measurement of the electromagnetic parameters of a PTFE sample in the 18 to 26.5 GHz band as an example, the systematic uncertainty of the measurement is analyzed using the Monte Carlo method, and an uncertainty budget at the 22 GHz frequency point is given. The mechanism by which the sample's own electromagnetic parameters influence the measurement uncertainty is elucidated. The results indicate that the Monte Carlo method can effectively analyze the uncertainty of material electromagnetic parameter measurements based on waveguide devices. For the PTFE sample in the example frequency range, the uncertainties of the amplitude and phase of the S-parameters are the main contributors to the uncertainty of the electromagnetic parameter measurement results. If the electromagnetic parameters of the tested sample cause the power received by the vector network analyzer receiver to fall within the region affected by noise and crosstalk, the uncertainty of the measurement results increases significantly.
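
      A minimal sketch of Monte Carlo propagation in this setting: the measured S-parameter amplitude and phase and the waveguide broad-wall dimension are perturbed with assumed distributions and pushed through a placeholder inversion; permittivity_from_s21 is a simplified stand-in for the transmission/reflection solver, and all numerical values are illustrative.

```python
# Hedged sketch of Monte Carlo uncertainty propagation.  The model function is a
# simplified stand-in for the transmission/reflection (NRW-type) inversion, and
# the input uncertainties are assumed values, not the paper's budget.
import numpy as np

def permittivity_from_s21(mag, phase_rad, a_mm):
    # Illustrative monotonic mapping standing in for the real inversion.
    return 2.0 + 0.5 * (1 - mag) + 0.01 * phase_rad + 0.001 * (a_mm - 10.668)

N = 100_000
rng = np.random.default_rng(1)
mag = rng.normal(0.85, 0.005, N)                   # VNA amplitude uncertainty (assumed)
phase = rng.normal(-1.2, np.deg2rad(0.5), N)       # VNA phase uncertainty (assumed)
a = rng.uniform(10.668 - 0.01, 10.668 + 0.01, N)   # broad-wall dimensional tolerance (assumed)

eps = permittivity_from_s21(mag, phase, a)
print(f"eps_r = {eps.mean():.3f} +/- {eps.std(ddof=1):.3f} (k=1)")
```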

    • Optimization of surface roughness and material removal rate for double belt grinding of multihead screw rotors

      2025, 39(2):177-184.

      Abstract:To ensure the grinding quality of multi-head screw rotors in belt grinding while improving grinding efficiency, the multi-head screw rotor was ground with a double abrasive belt grinding device and an orthogonal test was designed. A WOA-RBF prediction model was established from the orthogonal test database, and its accuracy was assessed by the coefficient of determination R2, the root mean square error (RMSE), and the mean absolute error (MAE); the results were better than those of the comparison models. The surface roughness and material removal rate output by the WOA-RBF prediction model were used as the objective functions of a dual-objective optimization model based on the multi-objective exponential distribution optimizer (MOEDO). Solving the model yields the Pareto optimal solution set, and the optimized process parameters, surface roughness, and material removal rate are obtained through the evaluation function. After grinding, the surface roughness and material removal rate of the screw rotor were 0.462 μm and 7.78 mm3/s, respectively, and the deviations between the test results and the dual-objective optimization results were within a reasonable range, verifying the accuracy of the model. Compared with the best set of orthogonal tests, the dual-objective optimization increased the surface roughness by 37.5%, which still meets the technical requirements of the workpiece, while the material removal rate increased by 84.23%. The results show that the proposed dual-objective optimization model can improve grinding efficiency while ensuring surface quality, and can also provide a reference for decision-making optimization of surface quality and material removal rate in other machining processes.

    • Design of a new visual and tactile sensor based on tactile perception

      2025, 39(2):185-192.

      Abstract:To address the high manufacturing cost of existing vision-tactile sensors, a novel GelSight fingertip vision-tactile sensor design is proposed that improves on the single-layer elastic reflective film in the sensor's tactile skin by using a dual-layer reflective coating process. In terms of materials, coatings with different properties are used for the inner and outer reflective layers so that the two layers bond chemically, achieving high sensitivity to subtle deformation. Based on this process, a novel vision-tactile sensor was designed and a reconstruction algorithm based on photometric stereo was developed to achieve tactile 3D reconstruction. To verify the effectiveness of the improved process, the perception resolution of GelSight sensors manufactured with the improved process was compared experimentally with that of advanced vision-tactile sensors. The results show that the new sensor has better perception resolution than sensors manufactured with the existing process. Experimental tests reconstructing the 3D shape and pose of objects with different shapes and textures show that the new sensor is more sensitive to texture details and can accurately reconstruct and estimate the 3D shape and pose of real objects with high precision. The root mean square error of the contact surface reconstruction is less than 100 μm and the positioning accuracy is at the sub-millimeter level, showing its potential for practical applications.
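
      The photometric-stereo core of the reconstruction can be sketched as a per-pixel least-squares solve for surface normals from images lit from known directions; the light directions and image sizes below are illustrative, and GelSight would integrate the normals into a depth map afterwards.

```python
# Hedged sketch of the photometric-stereo step: per-pixel surface normals are
# recovered from grayscale frames lit from known directions via least squares.
# Light directions and image size are illustrative assumptions.
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: list of H x W grayscale frames; light_dirs: K x 3 unit vectors."""
    I = np.stack([im.reshape(-1) for im in images], axis=0)        # K x (H*W)
    G = np.linalg.lstsq(np.asarray(light_dirs), I, rcond=None)[0]  # 3 x (H*W)
    n = G / (np.linalg.norm(G, axis=0, keepdims=True) + 1e-8)
    return n.T.reshape(images[0].shape + (3,))                     # H x W x 3 normals

L = np.array([[0.5, 0.0, 0.87], [-0.25, 0.43, 0.87], [-0.25, -0.43, 0.87]])
normals = photometric_stereo([np.random.rand(64, 64) for _ in range(3)], L)
```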

    • Research and implementation of nonlinear ultrasonic short pulse signal detection method

      2025, 39(2):193-204.

      Abstract:The lock-in amplification technique used in general weak-signal detection still struggles with high-frequency short pulse signals: a short pulse cannot satisfy the settling time of the lock-in amplifier's output filter, so weak high-frequency short pulses are difficult to detect. Since the nonlinear ultrasonic signal is periodic, digital averaging is added in front of the lock-in stage to increase the signal-to-noise ratio. According to the principle of best matching, the window length of the moving-average filter is set equal to the pulse duration, so that low-pass filtering acts only on the pulse signal. This avoids the problem that a conventional low-pass filter has insufficient settling time for short pulses and finally realizes lock-in detection of weak high-frequency nonlinear short pulse signals. The system uses a mixed digital-analog circuit, with the logic deployed on an FPGA. A test platform was built with a weak-signal transmitter and a computer. The test results show that the system can detect pulse signals with a pulse length of 5 μs and an amplitude of 100 nV at input signal frequencies of 0.6, 1, 2, 5 and 10 MHz, with good linearity. With a 1 MHz signal at different pulse widths, the system accurately detects pulse signals with pulse lengths of 5, 10, and 30 μs and an amplitude of 100 nV.
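
      A software sketch of the detection chain described above: coherent averaging of repeated records, lock-in mixing with a reference at the excitation frequency, and a moving-average (boxcar) filter whose length equals the pulse duration; the sample rate, carrier frequency, and pulse width are illustrative values, not the instrument's.

```python
# Hedged sketch: coherent averaging, lock-in mixing, and a boxcar filter whose
# length matches the pulse duration.  FS, F0, and PULSE are illustrative values.
import numpy as np

FS, F0, PULSE = 100e6, 1e6, 5e-6          # sample rate, carrier, pulse length (assumed)

def lock_in(records):
    avg = np.mean(records, axis=0)                     # coherent average boosts SNR
    t = np.arange(avg.size) / FS
    i = avg * np.cos(2 * np.pi * F0 * t)               # in-phase mixing
    q = avg * np.sin(2 * np.pi * F0 * t)               # quadrature mixing
    n = int(FS * PULSE)
    win = np.ones(n) / n                               # boxcar matched to pulse width
    return np.hypot(np.convolve(i, win, "same"), np.convolve(q, win, "same"))

# usage: a stack of repeated noisy pulse acquisitions -> demodulated envelope
env = lock_in(np.random.randn(64, 2048))
```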

    • The rail profile point cloud simplification method based on local averaging

      2025, 39(2):205-212.

      Abstract:In rail wear measurement using structured light, the acquired point cloud data contain a large number of noise points caused by interference from the railway operating environment, high-gloss areas on the rail surface, and equipment problems, which seriously affect the accuracy and efficiency of subsequent rail wear calculations. This paper therefore proposes a point cloud simplification method based on local averaging. The method traverses every point in the point cloud and, using an enclosing circle of specified radius, replaces it with the average position of all points within the circle, generating a simplified point cloud. The experimental results show that the proposed method is significantly superior to traditional statistical filtering and radius filtering algorithms in noise reduction and in preserving rail profile details. The average noise reduction rate reaches 0.832 0, about 4.3 times that of statistical filtering and 15 times that of radius filtering. Meanwhile, the average error of the wear calculation is only 0.025 01 mm, about 95.7% lower than statistical filtering and 85.1% lower than radius filtering. In terms of processing efficiency, the average time consumption of the method is only 0.006 5 ms, significantly better than the other methods. The method effectively reduces the amount of point cloud data, preserves the rail profile features to the greatest extent, and meets the measurement needs of rail wear.
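
      A minimal sketch of the local-averaging simplification, assuming a KD-tree is used for the neighbourhood query: each point is replaced by the centroid of all points inside an enclosing circle of fixed radius, and coincident outputs are collapsed; the radius and the duplicate-merging step are illustrative choices.

```python
# Hedged sketch: every point is replaced by the centroid of its fixed-radius
# neighbourhood; the radius and the duplicate-collapsing step are illustrative.
import numpy as np
from scipy.spatial import cKDTree

def local_average_simplify(points, radius=0.5):
    """points: N x 2 rail-profile coordinates (mm)."""
    tree = cKDTree(points)
    averaged = np.array([points[idx].mean(axis=0)
                         for idx in tree.query_ball_point(points, r=radius)])
    return np.unique(np.round(averaged, 4), axis=0)   # drop coincident outputs

simplified = local_average_simplify(np.random.rand(5000, 2) * 100)
```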

    • Dynamic obstacle avoidance path planning algorithm for AGVs based on improved HLO and dynamic windows

      2025, 39(2):213-221.

      Abstract:To address the low search efficiency of the human learning optimization (HLO) algorithm, its tendency to fall into local optima, and its inability to achieve dynamic obstacle avoidance, a path planning algorithm integrating an improved human learning optimization algorithm with the dynamic window algorithm is proposed. First, nonlinear increasing and decreasing probability parameters are used to improve the convergence rate of HLO, the particle swarm algorithm is introduced to update the individual and social knowledge databases, and the inertia weight coefficients are adjusted adaptively to avoid falling into local optima. The evaluation-function weights of the dynamic window algorithm are then used to adjust speed and heading. Finally, the improved algorithm is applied to the path planning of an automated guided vehicle. Simulation experiments show that the fused algorithm plans paths 4% shorter than the ant colony algorithm and 15% shorter than the hybrid human learning optimization and particle swarm algorithm, and that the other two algorithms contact obstacles five times as often as the improved algorithm, which reduces path length and the number of turns and improves path smoothness. The algorithm avoids obstacles in T-shaped and complex map environments, verifying its feasibility.

    • Research on focus delay method for ultrasonic phased array inspection of curved surface components

      2025, 39(2):222-230.

      Abstract:To enhance the capability of ultrasonic phased arrays to detect small defects within curved components, this paper proposes a phased array focusing delay method tailored to curved structures. The Rayleigh integral mathematical model of the linear phased array focusing field is established, and the directional distribution of the sound field is analyzed. By applying Snell's law, the focusing delay laws for inclined and curved components are derived. Compared with the traditional iterative traversal algorithm, the proposed method improves the calculation efficiency of the delay law and reduces the influence of interface structures on the wavefront curvature of the scanning sound beam, thereby fully leveraging the detection advantages of linear phased arrays. In the established curved-component simulation model, the delay is applied sequentially to each element based on the proposed method. Simulation results confirm that the sound waves are effectively focused at the preset location, and the computation time is reduced by approximately 70% compared with traditional iterative methods. Furthermore, practical detection experiments were conducted using a ring-shaped steel test block with a Φ0.3 mm blind hole. Comparison of the B-scan images and A-scan signals before and after applying the delay law shows that the proposed method significantly improves the signal-to-noise ratio and imaging quality when detecting small defects in curved components.
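
      The focal-law idea can be sketched for the single-medium case: each element is delayed so that all wavefronts arrive at the focus simultaneously; for an inclined or curved interface the straight-line travel time below would be replaced by the refracted-path (Snell/Fermat) travel time through both media. Element pitch, element count, focus position, and wave speed are illustrative.

```python
# Hedged sketch of a single-medium focusing delay law for a linear array; the
# curved-interface case would substitute refracted-path travel times.
import numpy as np

def focal_delays(pitch_mm, n_elements, focus_mm, c_mm_us=5.9):
    x = (np.arange(n_elements) - (n_elements - 1) / 2) * pitch_mm   # element centres
    d = np.hypot(x - focus_mm[0], focus_mm[1])                      # distance to focus
    t = d / c_mm_us                                                  # travel times (us)
    return t.max() - t                                               # latest-arriving element fires first

delays_us = focal_delays(pitch_mm=0.6, n_elements=32, focus_mm=(0.0, 25.0))
```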

    • Contactless differential frequency electrolyte solution conductivity measurement system

      2025, 39(2):231-239.

      Abstract:To address electrode polarization and contamination of the measured solution in traditional contact-type electrolyte solution conductivity measurement, a non-contact dual-coil electrolyte solution conductivity measurement system based on the eddy current effect is proposed. A finite element analysis model is established to examine in depth the effects of excitation frequency and coil geometry on the sensitivity of the measurement system, providing a theoretical basis for the design of the probe and for error analysis. In the experimental part, the coil probe parameters are optimized and an FPGA-based dual-coil differential-frequency measurement system is built, which converts inductance changes into frequency changes and provides a digital signal output. The dual coil consists of a detection coil and a reference coil: the detection coil measures the solution under test, while the reference coil is kept away from the solution and is affected only by the environment, and the difference between the two signals is output, eliminating external interference. Comparison experiments show that the system can effectively measure solutions of different conductivities, verifying the correctness of the simulation model. Further temperature and anti-interference experiments show that the dual-coil differential structure significantly enhances the anti-interference performance of the system while maintaining measurement accuracy, with the overall error controlled within 1.2%, demonstrating the practicality and reliability of the system.

    • Research and application of MFWVD time-frequency analysis in delay estimation

      2025, 39(2):240-250.

      Abstract:A new time delay estimation method based on the masked filtering Wigner-Ville distribution (MFWVD) is proposed to address the cross-term interference or low time-frequency concentration of common time-frequency analyses, which leads to inaccurate time delay estimation. The basic principle is to combine the amplitude-spectrum ratio of the WVD time-frequency spectrum to the SPWVD time-frequency spectrum with a Gaussian filter. Taking advantage of the SPWVD's ability to suppress cross-term interference and the WVD's high time-frequency concentration, the SPWVD spectrum of the signal is used as a mask to shield the cross-terms in the WVD spectrum, yielding a high-precision time-frequency spectrum while maintaining high time-frequency resolution. Compared with common time-frequency domain reflectometry methods, this method performs better on two key indicators, cross-term suppression and time-frequency concentration, and the reliability of the time delay estimation results is higher. The method is applied, together with the time-frequency cross-correlation function, to locate weak low-resistance faults in cables. Comparative simulation experiments show that when locating a low-resistance fault at 1.5 km in the cable, the root mean square error of the proposed method is 0.652 7 m, reducing the positioning error by 1.288 4 m and 0.683 4 m compared with the WVD and SPWVD methods, respectively. In addition, the positioning error of this method is smaller than that of other common methods at signal-to-noise ratios of -5, 0, and 5 dB, and its positioning performance is the best.

    • Detection of missing shockproof hammers based on YOLOv8-SPH

      2025, 39(2):251-261.

      Abstract:To address the challenges of detecting missing shockproof hammers (vibration dampers) on transmission lines, which are small, subtle, and set against complex image backgrounds, this study proposes a lightweight YOLOv8-SPH model for damper-absence detection. The model introduces shallow-scale feature maps of 160×160 and 320×320 in the neck of the YOLOv8n network and integrates multi-scale detection modules in the detection head, enhancing contextual information fusion across feature maps and effectively expanding the receptive field so that the model captures richer semantic features related to damper absence. An innovative multi-scale high-efficiency feature extraction module (MultFaster) is also introduced, using partial convolutions, multi-level feature extraction, and residual connections; this structure maintains detection accuracy for damper features while reducing computational complexity and parameter load. Additionally, a dynamic upsampling operator is incorporated into the neck network to improve feature map resolution and thus the model's accuracy in detecting missing dampers. For further optimization, the original decoupled detection head is replaced with a lightweight detection head, reducing computational complexity and improving detection efficiency. The enhanced network then undergoes amplitude-based layer-adaptive sparse pruning, significantly reducing model parameters and computational load. Testing on a custom damper-absence dataset shows that YOLOv8-SPH performs remarkably well, achieving an mAP@0.5 of 91.51%, a 6.3% improvement over the original YOLOv8n, while the parameter count is reduced by 80.73%, the computational load by 48.14%, and the model size by 62.41%. The model achieves improved detection accuracy while reducing computational complexity and parameter size, effectively meeting the demand for efficient and precise detection of missing dampers on transmission lines and showing significant practical value.

Editor in chief: Prof. Peng Xiyuan

Edited and published by: Journal of Electronic Measurement and Instrumentation

International standard serial number: ISSN 1000-7105

Domestic unified serial number: CN 11-2488/TN

Domestic postal distribution code: 80-403
