arXiv:2505.01445 [cs.LG]

Explainable AI for Correct Root Cause Analysis of Product Quality in Injection Moulding

Muhammad Muaz, Sameed Sajid, Tobias Schulze, Chang Liu, Nils Klasen, Benny Drescher

Published 2025-04-29 (Version 1)

If a product deviates from its desired properties in the injection moulding process, root cause analysis can be aided by models that relate the input machine settings to the output quality characteristics. The machine learning models commonly tested for quality prediction are mostly black boxes; they therefore provide no direct explanation of their predictions, which restricts their applicability in quality control. Previously attempted explainability methods are either restricted to tree-based algorithms or do not emphasize that some explainability methods can lead to incorrect root cause identification of a product's deviation from its desired properties. This study first shows that interactions among the input machine settings exist in real experimental data collected according to a central composite design. Then, model-agnostic explainable AI methods are compared for the first time to show that different explainability methods indeed lead to different feature impact analyses in injection moulding. Moreover, it is shown that better feature attribution translates to correct cause identification and actionable insights for the injection moulding process. Being model-agnostic, the explanations are computed for both a random forest and a multilayer perceptron for the cause analysis, as both models achieve a mean absolute percentage error of less than 0.05% on the experimental dataset.
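As a rough illustration of the workflow the abstract describes (fitting two black-box regressors on designed-experiment data, checking MAPE, and then comparing model-agnostic attributions), here is a minimal sketch. The specific explainability methods shown (permutation importance and KernelSHAP), the libraries, the synthetic data with an interaction term, and the feature setup are all illustrative assumptions, not the authors' actual methods or dataset.

```python
# Hedged sketch: compare two model-agnostic explanation methods on two
# surrogate quality models. Data, feature names, and method choices are
# illustrative assumptions standing in for the injection-moulding setup.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance
from sklearn.metrics import mean_absolute_percentage_error
import shap

rng = np.random.default_rng(0)
# Toy stand-in for three machine settings (e.g. melt temperature,
# injection speed, holding pressure) with an interaction between two of them.
X = rng.uniform(-1.0, 1.0, size=(200, 3))
y = 10.0 + 2.0 * X[:, 0] + 1.5 * X[:, 1] * X[:, 2] + 0.05 * rng.normal(size=200)

models = {
    "random_forest": RandomForestRegressor(n_estimators=300, random_state=0),
    "mlp": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0),
}

for name, model in models.items():
    model.fit(X, y)
    mape = mean_absolute_percentage_error(y, model.predict(X))
    print(f"{name}: MAPE = {mape:.4f}")

    # Model-agnostic method 1: permutation importance.
    pi = permutation_importance(model, X, y, n_repeats=20, random_state=0)
    print(f"{name}: permutation importances = {pi.importances_mean}")

    # Model-agnostic method 2: KernelSHAP with a sampled background set.
    explainer = shap.KernelExplainer(model.predict, shap.sample(X, 50, random_state=0))
    shap_values = explainer.shap_values(X[:20], nsamples=200)
    print(f"{name}: mean |SHAP| = {np.abs(shap_values).mean(axis=0)}")
```

Comparing the two attribution vectors per model, as in the loop above, mirrors the paper's point that different explainability methods can rank the machine settings differently and thus point to different root causes.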
