Vision and sound fusion-based material removal rate monitoring for abrasive belt grinding using improved LightGBM algorithm

2021 
Abstract: Accurate material removal modeling is the basis for optimizing surface quality and improving the performance of equipment components. In this study, a multi-sensor fusion method combining vision and sound is used to monitor the in-process grinding material removal rate (MRR). First, belt grinding experiments are conducted using different grinding parameters, and vision and sound signals are captured by an industrial CCD camera and an omnidirectional condenser microphone, respectively. Second, features of the captured grinding spark images are extracted in two respects, color and texture, and features of the grinding sound are investigated in the time, frequency, and time-frequency domains. Moreover, the complementarity between the vision and sound signals and their sensitivity to different grinding parameters are discussed. Finally, based on a feature-level fusion strategy, the Pearson correlation coefficient and the sequential backward selection algorithm are jointly used to select the optimal feature subsets. MRR prediction models are established using the selected feature subsets and an improved light gradient boosting machine (LightGBM) algorithm. The test results show that the error of the MRR prediction model for same-specification abrasive belts is less than 3%, and the coefficient of determination, R², is as high as 99.2%. The proposed method can be used to predict the MRR under both single and multiple grinding parameters with same-specification abrasive belts. Compared with other prediction models, the improved LightGBM model is superior in terms of computation time without any reduction in model accuracy.
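
The abstract describes a two-stage feature-selection pipeline (Pearson correlation filtering followed by sequential backward selection) feeding a LightGBM regressor. Below is a minimal, hypothetical sketch of that pipeline in Python; the function names, correlation threshold, hyperparameters, and use of scikit-learn's SequentialFeatureSelector are assumptions for illustration, not the authors' actual implementation or their "improved" LightGBM variant.

```python
import pandas as pd
from lightgbm import LGBMRegressor
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split


def select_and_train(features: pd.DataFrame, mrr: pd.Series,
                     corr_threshold: float = 0.3):
    """Filter fused vision/sound features by Pearson correlation with MRR,
    refine the subset with sequential backward selection, and fit a
    LightGBM regressor. Threshold and hyperparameters are assumed values."""
    # Stage 1: keep features whose absolute Pearson correlation with MRR
    # exceeds the (assumed) threshold.
    corr = features.corrwith(mrr)
    kept = features.loc[:, corr.abs() > corr_threshold]

    # Stage 2: sequential backward selection, wrapping a baseline LightGBM
    # model as the scoring estimator (defaults keep half the features).
    base = LGBMRegressor(n_estimators=200, learning_rate=0.05)
    sbs = SequentialFeatureSelector(base, direction="backward", cv=5)
    sbs.fit(kept, mrr)
    selected = kept.columns[sbs.get_support()]

    # Stage 3: train and evaluate the final MRR model on the selected subset.
    X_tr, X_te, y_tr, y_te = train_test_split(
        kept[selected], mrr, test_size=0.2, random_state=0)
    model = LGBMRegressor(n_estimators=200, learning_rate=0.05)
    model.fit(X_tr, y_tr)
    print("Selected features:", list(selected))
    print("Test R^2:", r2_score(y_te, model.predict(X_te)))
    return model, selected
```

In this sketch the wrapper-based backward selection uses the regressor itself to score candidate subsets, which mirrors the abstract's joint use of a filter (Pearson correlation) and a wrapper (sequential backward selection) before the final model is trained.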