Explaining predictions of an automated pulmonary function test interpretation algorithm
Background: Previous work demonstrated that pulmonary function test (PFT) interpretation can be automated to diagnose respiratory disease using machine learning (ML). Because ML models are black boxes, understanding the reasoning behind a prediction is critical for generating trust, and is fundamental if one plans to act on a prediction.
Objectives: We investigated a technique called local interpretable model-agnostic explanations (LIME) to explain the predictions of an ML classifier that takes PFT data (spirometry, resistance, lung volumes, diffusion capacity) as input to suggest a diagnosis.
Methods: We developed an ML classifier using 1400 historical cases. We tested our classifier on 50 randomly selected subjects with respiratory problems who completed PFTs. An expert panel produced gold-standard diagnoses from clinical, PFT and other test data. We applied the LIME technique to generate interpretative explanations for each classifier prediction.
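The core idea behind LIME, as applied here to tabular PFT features, is to perturb a single test result, query the black-box classifier on the perturbed copies, and fit a proximity-weighted linear surrogate whose coefficients rank the features driving that one prediction. Below is a minimal NumPy-only sketch of that idea; the stand-in classifier, the feature ordering, and all numeric values are illustrative assumptions, not the study's actual model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained PFT classifier: returns a
# "COPD" probability from four standardised features, assumed to be
# ordered [FEV1 Z-score, FEV1/FVC, TLCO %pred, RV %pred].
def predict_copd_prob(X):
    logits = -0.3 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 3]
    return 1.0 / (1.0 + np.exp(-logits))

def lime_explain(x, predict_fn, n_samples=5000, kernel_width=0.75):
    """Fit a kernel-weighted linear surrogate around instance x (LIME's core step)."""
    # 1. Perturb the instance with Gaussian noise.
    Z = x + rng.normal(scale=1.0, size=(n_samples, x.size))
    # 2. Query the black-box classifier on the perturbed samples.
    y = predict_fn(Z)
    # 3. Weight each sample by its proximity to x (exponential kernel).
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 4. Weighted least squares: coefficients explain the prediction locally.
    A = np.hstack([Z, np.ones((n_samples, 1))]) * np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A, y * np.sqrt(w), rcond=None)
    return coef[:-1]  # per-feature local weights (intercept dropped)

features = ["FEV1 Z-score", "FEV1/FVC", "TLCO %pred", "RV %pred"]
x = np.array([-1.2, -2.0, -0.5, 1.1])  # a hypothetical obstructive pattern
weights = lime_explain(x, predict_copd_prob)
print(features[int(np.argmax(np.abs(weights)))])  # top explanatory feature
```

In this sketch a low FEV1/FVC dominates the local explanation for the COPD-like instance, mirroring the kind of output reported in the Results. The published study used this surrogate principle on the real classifier; the lime Python package wraps the same procedure for production use.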
Results: The classifier accuracy was 76%. LIME identified a high FEV1 Z-score (0.41±0.71) and a high TLCO %pred (92±15%) as the top explanatory features for normal and asthma predictions, respectively, and a low FEV1/FVC (50±11%) and a low RV %pred (70±15%) for COPD and ILD predictions, respectively. Three predictions were incorrect when their top feature was negative (Fig 1b).
Conclusion: By providing intuitive explanations, LIME builds trust for clinical application of an ML-based PFT interpretation algorithm.
Nilakash Das, Marko Topalovic, Jo Raskin, Jean-Marie Aerts, Thierry Troosters, Wim Janssens. Explaining predictions of an automated pulmonary function test interpretation algorithm. European Respiratory Journal 2019 54: PA2227; DOI: 10.1183/13993003.congress-2019.PA2227