
Interpretable models for extrapolation in scientific machine learning

Abstract

Data-driven models are central to scientific discovery. In efforts to achieve state-of-the-art model accuracy, researchers are employing increasingly complex machine learning algorithms that often outperform simple regressions in interpolative settings (e.g. random k-fold cross-validation) but suffer from poor extrapolation performance, portability, and human interpretability, which limits their potential for facilitating novel scientific insight. Here we examine the trade-off between model performance and interpretability across a broad range of science and engineering problems, with an emphasis on materials science datasets. We compare the performance of black-box random forest and neural network machine learning algorithms to that of single-feature linear regressions fitted using interpretable input features discovered by a simple random search algorithm. For interpolation problems, the average prediction errors of linear regressions were twice as high as those of black-box models. Remarkably, when prediction tasks required extrapolation, linear models yielded average errors only 5% higher than those of black-box models, and outperformed black-box models in roughly 40% of the tested prediction tasks, suggesting that they may be preferable to complex algorithms in many extrapolation problems because of their superior interpretability, lower computational overhead, and greater ease of use. The results challenge the common assumption that extrapolative models for scientific machine learning are constrained by an inherent trade-off between performance and interpretability.
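To make the evaluation protocol concrete, the minimal sketch below contrasts an interpolative split (random k-fold cross-validation) with an extrapolative split for a black-box random forest versus a single-feature linear regression whose feature is selected by random search. The synthetic dataset, the sorted-target definition of "extrapolation", and the helper name `random_search_best_feature` are assumptions introduced purely for illustration; they are not the datasets, feature-construction scheme, or split definitions used in the paper.

```python
# Illustrative sketch only: compares a black-box random forest to a
# single-feature linear regression under an interpolative split (random
# k-fold) and a simple extrapolative split (hold out the largest targets).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Synthetic stand-in for a materials dataset (assumption for illustration).
X, y = make_regression(n_samples=300, n_features=8, noise=10.0, random_state=0)

def random_search_best_feature(X_tr, y_tr, n_trials=50):
    # Randomly sample candidate single features; keep the one whose
    # univariate linear fit has the lowest training error. (The paper's
    # random search constructs interpretable features; picking among
    # existing columns is a simplification.)
    best_i, best_err = 0, np.inf
    for _ in range(n_trials):
        i = int(rng.integers(X_tr.shape[1]))
        pred = LinearRegression().fit(X_tr[:, [i]], y_tr).predict(X_tr[:, [i]])
        err = mean_absolute_error(y_tr, pred)
        if err < best_err:
            best_i, best_err = i, err
    return best_i

def evaluate(train, test):
    # Returns (random-forest MAE, single-feature linear MAE) for one split.
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[train], y[train])
    rf_err = mean_absolute_error(y[test], rf.predict(X[test]))
    i = random_search_best_feature(X[train], y[train])
    lin = LinearRegression().fit(X[train][:, [i]], y[train])
    lin_err = mean_absolute_error(y[test], lin.predict(X[test][:, [i]]))
    return rf_err, lin_err

# Interpolation: random 5-fold cross-validation.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
interp = np.mean([evaluate(tr, te) for tr, te in kf.split(X)], axis=0)

# Extrapolation (one simple proxy): train on the lower 80% of target values,
# test on the top 20%, so test targets lie outside the training range.
order = np.argsort(y)
extrap = evaluate(order[:240], order[240:])

print(f"interpolation   RF MAE={interp[0]:.1f}   linear MAE={interp[1]:.1f}")
print(f"extrapolation   RF MAE={extrap[0]:.1f}   linear MAE={extrap[1]:.1f}")
```

With a split of this kind, the random forest typically leads on the shuffled folds while the gap narrows on the held-out high-value samples, mirroring the trend reported above; the exact numbers depend entirely on the synthetic data and the chosen split.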

Graphical abstract: Interpretable models for extrapolation in scientific machine learning

Article information

Article type: Paper
Submitted: 30 Apr 2023
Accepted: 17 Aug 2023
First published: 21 Aug 2023
Open Access: Creative Commons BY license

Digital Discovery, 2023, 2, 1425-1435

Interpretable models for extrapolation in scientific machine learning

E. S. Muckley, J. E. Saal, B. Meredig, C. S. Roper and J. H. Martin, Digital Discovery, 2023, 2, 1425 DOI: 10.1039/D3DD00082F

This article is licensed under a Creative Commons Attribution 3.0 Unported Licence. You can use material from this article in other publications without requesting further permissions from the RSC, provided that the correct acknowledgement is given.
