Relation between prognostics predictor evaluation metrics and local interpretability SHAP values
Delft University of Technology (TU Delft), Mekelweg 5, 2628 CD Delft, the Netherlands. ORCID iD: 0000-0001-5168-2767
Luleå University of Technology, Department of Civil, Environmental and Natural Resources Engineering, Operation, Maintenance and Acoustics. Palo Alto Research Center (PARC), Palo Alto, CA 94304, USA. ORCID iD: 0000-0002-0240-0943
University of Lisbon - Instituto Superior Tecnico (IST), Av. Rovisco Pais nº1, 1049-001 Lisbon, Portugal.
2022 (English). In: Artificial Intelligence, ISSN 0004-3702, E-ISSN 1872-7921, Vol. 306, article id 103667. Article in journal (Refereed). Published.
Abstract [en]

Maintenance decisions in domains such as aeronautics are becoming increasingly dependent on being able to predict the failure of components and systems. When data-driven techniques are used for this prognostic task, they often face headwinds due to their perceived lack of interpretability. To address this issue, this paper examines how features used in a data-driven prognostic approach correlate with the established metrics of monotonicity, trendability, and prognosability. In particular, we use the SHAP model (SHapley Additive exPlanations) from the field of eXplainable Artificial Intelligence (XAI) to analyze the outcome of three increasingly complex algorithms: Linear Regression, Multi-Layer Perceptron, and Echo State Network. Our goal is to test the hypothesis that the prognostics metrics correlate with the SHAP model's explanations, i.e., the SHAP values. We use baseline data from a standard data set that contains several hundred run-to-failure trajectories for jet engines. The results indicate that SHAP values track closely with these metrics, with differences observed between the models that support the assertion that model complexity is a significant factor to consider when explainability matters in prognostics.
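
As a rough illustration of the kind of analysis the abstract describes, the sketch below (Python, assuming the shap, scikit-learn, and scipy packages) fits a linear regression RUL predictor on synthetic run-to-failure trajectories, computes model-agnostic SHAP values with KernelExplainer, and checks whether the per-feature mean |SHAP| importance correlates with a per-feature monotonicity score. The synthetic data, the specific monotonicity formulation, and all names are illustrative assumptions, not taken from the paper.

```python
# Hypothetical illustration, not the authors' code: check whether per-feature
# mean |SHAP| importance of a RUL predictor correlates with a per-feature
# monotonicity score, using synthetic run-to-failure trajectories.
import numpy as np
import shap
from scipy.stats import spearmanr
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def make_unit(length):
    """One synthetic run-to-failure unit: three sensors with decreasing
    degradation trend (strong, weak, none) and a remaining-useful-life target."""
    t = np.arange(length)
    rul = length - 1 - t
    s1 = 0.02 * t + rng.normal(0, 0.1, length)   # strongly monotonic sensor
    s2 = 0.01 * t + rng.normal(0, 0.5, length)   # weakly monotonic sensor
    s3 = rng.normal(0, 1.0, length)              # uninformative noise sensor
    return np.column_stack([s1, s2, s3]), rul

units = [make_unit(int(rng.integers(150, 250))) for _ in range(50)]
X = np.vstack([features for features, _ in units])
y = np.concatenate([rul for _, rul in units])

# Simplest of the three model classes considered in the paper.
model = LinearRegression().fit(X, y)

# Model-agnostic SHAP values on a subsample, with a subsampled background
# set to keep the kernel explainer cheap.
background = shap.sample(X, 100, random_state=0)
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(shap.sample(X, 200, random_state=1))
importance = np.abs(shap_values).mean(axis=0)    # mean |SHAP| per feature

def monotonicity(trajectory):
    """Fraction-of-consistent-differences monotonicity score in [0, 1]
    (one common formulation; the paper's exact definition may differ)."""
    d = np.diff(trajectory)
    return abs((d > 0).sum() - (d < 0).sum()) / (len(trajectory) - 1)

# Average the per-unit monotonicity of each feature over all trajectories.
mono = np.array([
    np.mean([monotonicity(features[:, j]) for features, _ in units])
    for j in range(X.shape[1])
])

rho, _ = spearmanr(importance, mono)
print("mean |SHAP| per feature :", np.round(importance, 3))
print("monotonicity per feature:", np.round(mono, 3))
print("Spearman correlation    :", round(float(rho), 2))
```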

Place, publisher, year, edition, pages
Elsevier, 2022. Vol. 306, article id 103667
Keywords [en]
Local interpretability, Model-agnostic interpretability, SHAP values, Monotonicity, Trendability, Prognosability
National Category
Computer Sciences
Research subject
Operation and Maintenance
Identifiers
URN: urn:nbn:se:ltu:diva-89828
DOI: 10.1016/j.artint.2022.103667
ISI: 000911795400001
Scopus ID: 2-s2.0-85125490746
OAI: oai:DiVA.org:ltu-89828
DiVA, id: diva2:1646523
Note

Validated; 2022; Level 2; 2022-03-23 (hanlid)

Available from: 2022-03-23. Created: 2022-03-23. Last updated: 2023-05-08. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Goebel, Kai

Search in DiVA

By author/editor
Baptista, Marcia L.; Goebel, Kai
By organisation
Operation, Maintenance and Acoustics
In the same journal
Artificial Intelligence
Computer Sciences
