A novel intelligent bearing fault diagnosis method based on signal process and multi-kernel joint distribution adaptation
Main Authors:
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2023
Subjects:
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10027665/
https://www.ncbi.nlm.nih.gov/pubmed/36941284
http://dx.doi.org/10.1038/s41598-023-31648-y
Summary: Present research on intelligent bearing fault diagnosis assumes that training and testing data are drawn from the same feature distribution. However, a domain shift (distribution discrepancy) generally arises between the two datasets because of differing operational conditions. Domain adaptation techniques are therefore well suited to handling the domain shift issue in fault diagnosis. Moreover, collecting sufficient testing data or labelled data in real industry is a challenging task. Therefore, multi-kernel joint distribution adaptation (MKJDA) with dynamic distribution alignment is proposed for bearing fault diagnosis. This method dynamically aligns both the marginal and conditional distributions and uses multiple kernels to handle non-linear problems, extracting the most effective and robust representation for cross-domain issues. Moreover, it works with an unlabelled target domain, performing the diagnosis by iteratively updating the pseudo labels. The experimental results (two public datasets and one experimental dataset) demonstrate that the proposed method (MKJDA) achieves stable and robust accuracy in bearing fault diagnosis. It effectively addresses the most crucial issue: intelligent diagnosis methods must otherwise re-train the model when the distribution differs between the source domain (where the model is learned) and the target domain (where the learned model is applied).
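To make the quantities mentioned in the summary concrete, below is a minimal Python sketch of multi-kernel MMD, a (1 − μ)-marginal / μ-conditional weighting in the spirit of dynamic distribution alignment, and pseudo-labelling of an unlabelled target domain. It is not the authors' implementation: the function names, RBF bandwidths, equal kernel weights, and the 1-NN pseudo-labelling step are illustrative assumptions, and the projection/classifier learning that would minimise the divergence is omitted.

```python
"""Illustrative sketch only: multi-kernel MMD, dynamic joint alignment, pseudo labels."""
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def multi_kernel_mmd2(Xs, Xt, gammas=(0.25, 0.5, 1.0, 2.0)):
    """Squared MMD between Xs and Xt under an equal-weight sum of RBF kernels."""
    def k(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
        return sum(np.exp(-g * d2) for g in gammas) / len(gammas)
    return k(Xs, Xs).mean() + k(Xt, Xt).mean() - 2.0 * k(Xs, Xt).mean()

def dynamic_joint_mmd2(Xs, ys, Xt, yt_pseudo, mu=0.5):
    """(1 - mu) * marginal MMD^2 + mu * mean per-class conditional MMD^2."""
    marginal = multi_kernel_mmd2(Xs, Xt)
    conds = []
    for c in np.unique(ys):
        Xs_c, Xt_c = Xs[ys == c], Xt[yt_pseudo == c]
        if len(Xs_c) and len(Xt_c):
            conds.append(multi_kernel_mmd2(Xs_c, Xt_c))
    conditional = float(np.mean(conds)) if conds else 0.0
    return (1.0 - mu) * marginal + mu * conditional

# Usage on synthetic data (purely illustrative): the conditional term needs labels
# on the target side, so the unlabelled target domain is given pseudo labels by a
# source-trained classifier; a full method would re-estimate them each iteration.
rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(100, 8)); ys = (Xs[:, 0] > 0).astype(int)
Xt = rng.normal(0.5, 1.2, size=(80, 8))  # shifted target domain
yt_pseudo = KNeighborsClassifier(n_neighbors=1).fit(Xs, ys).predict(Xt)
print(dynamic_joint_mmd2(Xs, ys, Xt, yt_pseudo, mu=0.5))
```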