When machine learning models learn chemistry II: applying WISP to real-world examples
Abstract
In our previous work, we introduced WISP (Workflow for Interpretability Scoring using matched molecular Pairs), which enables users to quantitatively assess the performance of explainability methods for machine learning models. In this work, we focus on more complex tasks, such as yield prediction, pKi values for inhibition of coagulation Factor Xa, and AMES mutagenicity, where the explanations of the predicted property need to capture more intricate interaction patterns between structural motifs of either the reaction partner, the protein, or DNA. Expanding upon part I of the “When Machine Learning Learns Chemistry” series, we demonstrate additional functionalities of the WISP workflow. Alongside the model- and descriptor-agnostic atom attributor, WISP integrates a SHAP-based and an RDKit-based attribution method, enabling the comparison of multiple explainability approaches, as demonstrated on the Factor Xa dataset. This work also showcases WISP's capability to evaluate explanations for classification tasks such as AMES mutagenicity. The application of WISP to the AMES mutagenicity dataset reveals that the respective machine learning model fails to learn the underlying chemical relationships, instead relying primarily on numerical correlations. When applied to the yield dataset, WISP highlights specific cases where explainability methods that usually perform well fail to provide meaningful insights. This demonstrates WISP's ability to detect such limitations in trained models, providing valuable guidance for targeted improvements in model development and data quality.