Open Access Article. This article is licensed under a Creative Commons Attribution 3.0 Unported Licence.

Measurements with noise: Bayesian optimization for co-optimizing noise and property discovery in automated experiments

Boris N. Slautin *a, Yu Liu b, Jan Dec c, Vladimir V. Shvartsman a, Doru C. Lupascu a, Maxim A. Ziatdinov d and Sergei V. Kalinin *bd
aInstitute for Materials Science and Center for Nanointegration Duisburg-Essen (CENIDE), University of Duisburg-Essen, Essen, 45141, Germany. E-mail: boris.slautin@uni-due.de
bDepartment of Materials Science and Engineering, University of Tennessee, Knoxville, TN 37996, USA. E-mail: sergei2@utk.edu
cInstitute of Materials Science, University of Silesia, Katowice, PL 40-007, Poland
dPacific Northwest National Laboratory, Richland, WA 99354, USA

Received 12th December 2024, Accepted 14th March 2025

First published on 17th March 2025


Abstract

We have developed a Bayesian optimization (BO) workflow that integrates intra-step noise optimization into automated experimental cycles. Traditional BO approaches in automated experiments focus on optimizing experimental trajectories but often overlook the impact of measurement noise on data quality and cost. Our proposed framework simultaneously optimizes both the target property and the associated measurement noise by introducing time as an additional input parameter, thereby balancing the signal-to-noise ratio and experimental duration. Two approaches are explored: a reward-driven noise optimization and a double-optimization acquisition function, both enhancing the efficiency of automated workflows by considering noise and cost within the optimization process. We validate our method through simulations and real-world experiments using Piezoresponse Force Microscopy (PFM), demonstrating the successful optimization of measurement duration and property exploration. Our approach offers a scalable solution for optimizing multiple variables in automated experimental workflows, improving data quality, and reducing resource expenditure in materials science and beyond.


Introduction

The rapid advancement of self-driven laboratories, ranging from individual automated tools to labs integrating all stages from synthesis to characterization in a fully automated workflow, is currently transforming approaches in materials exploration and experimental science as a whole.1–5 One of the key drivers behind these groundbreaking changes is the development of advanced machine-learning (ML) algorithms that can make decisions previously fully reliant on humans.1,6–8 The capability for decision-making is the key difference between simple automation with strict algorithms and the higher-level automation used in scientific research. Simple automation, which has been successfully implemented in various industrial processes at least since Henry Ford's time, follows strict algorithms with predefined rules and procedures. In contrast, automation in exploration and optimization processes required for, e.g., scientific research often involves iterative workflows with complex decision-making at each step.5 This allows for dynamic adjustments of the experiment trajectory based on real-time data and optimization strategies. While the complete exclusion of the operator from the automated loop remains impossible, the human role is gradually shifting toward higher-level decisions and intellectual work. Meanwhile, routine experimental procedures, which rely heavily on real-time data analysis for decision-making, are increasingly being handled by automated systems.9–11

Currently, the mainstay of automated experimentation (AE) workflows across numerous scientific and industrial fields is sequential Bayesian optimization (BO).12,13 The process is conducted within a domain-specific experimental object space. BO-guided workflows have an iterative structure, with automated selection of the next candidate to be explored from the object space at each step. This selection is based on a predefined policy and reward function and is independent of direct human choice. Examples of BO-driven optimization include searching for an optimal composition within a phase diagram for material optimization,14–16 adjusting parameters such as pressure, temperature, or laser fluence for pulsed laser deposition,17 micro-drilling,18 and laser annealing,19 or selecting locations in the image plane for spectroscopic measurements in microscopy.20,21 BO-based frameworks have proven effective in guiding experiments, ranging from single-modality explorations in combinatorial synthesis, scanning probe microscopy, or electron microscopy to the co-orchestration of multiple modalities in one experimental cycle.14,15,22–24

In many cases, BO methods are built on Gaussian Processes (GPs) as surrogate models capable of interpolation and uncertainty prediction over the parameter space.13 However, standard GP methods are purely data-driven, limiting their efficiency in many situations.25 Realistic physical systems are often associated with ample prior physical knowledge, such as laws and dependencies. Incorporating this knowledge into the BO cycle can significantly enhance optimization efficacy.26,27 A common way to incorporate physical knowledge is to specify the mean function or kernel (covariance function) of the GP or to define boundary conditions.26,28 To embed more complex pre-acquired knowledge expressed in the form of high-dimensional datasets, advanced approaches such as deep kernel learning can be utilized.21,29,30 Recently, it has been shown that adding a probabilistic physical model as the mean function of a structured GP significantly increases the efficiency of exploration for materials synthesis,31 combinatorial library exploration, and physics discovery.25,32

One of the key but seldom explored aspects of automated experiments is measurement noise. For purely data-driven applications, the noise level can be treated as a prior reflecting the degree of trust in the experimentally obtained observables, or as an optimization hyperparameter. Multiple approaches have been developed to enhance BO for noisy functions by advancing acquisition functions that explicitly integrate noise into the optimization process. These methods have been applied to both single-objective33–35 and multi-objective BO,36–40 enabling more robust and accurate optimization under uncertain and noisy conditions.

In physical systems, noise is not just an abstract prior but a measurable quantity that depends on multiple factors, including experimental conditions and parameters. For instance, in imaging and spectroscopic techniques, the noise level might be influenced by exposure time, sensor sensitivity, environmental conditions, and other operational parameters. The noise level can be reduced by optimizing measurement parameters and by increasing the exposure time. However, increasing the exposure time inescapably increases the overall experiment duration and hence its cost. Thus, finding a trade-off between the cost of an experiment and the signal-to-noise ratio (SNR) is an optimization problem that needs to be solved in many experiments.

More generally, reducing the financial expense of experimentation is among the ultimate goals of experiment automation. Thus, the cost of the experiment is one of the primary criteria for estimating the efficiency of AE. In many cases, the financial expense of experimentation is directly proportional to the time it consumes. Broadly, two realistic approaches to experimentation can be distinguished: (1) budget-constrained experiments, which maximize exploration or optimization efforts within a predefined budget (time) limit for the entire experiment; and (2) unrestricted experiments, where an algorithm must determine the best possible experimental trajectory according to real-time feedback and without strict time limitations. The time expense is determined by the number of exploration steps and the duration of each iteration. Today, most BO approaches focus on optimizing the number of required exploration steps, with the duration of each step typically being predefined. More advanced multi-fidelity and orchestrated BO approaches navigate across different fidelities or modalities to balance the cost of each iteration with the potential outcomes, thereby enhancing the efficacy of exploration.41–43 However, the cost and duration of a step for each modality are typically assumed to be known constants. Optimizing not only the number of steps (the experimental trajectory) but also the duration (cost) of each iteration should advance the efficiency of AE. A budget-limited approach requires optimizing the entire experimental trajectory, allowing the cost to vary between steps during the experiment. In contrast, in unrestricted experiments, we can optimize the cost of an individual step and then use the optimized parameters for further investigation.

As a special case for automated experiments, we consider the intra-step optimization of measurement times in the presence of noise. The exposure time is typically defined by an operator before the measurements and remains constant throughout the experiment. At the same time, for many spectroscopic methods (XRD, Raman, etc.) the exposure time is a major parameter that determines the SNR and therefore the amount of information gained at each iteration. It is important to note that the significance of optimizing the measurement noise increases with longer exposure times. For example, in Raman spectroscopy of high-quality crystals, the accumulation time may not exceed 0.1 seconds, while for Raman measurements of lithiated electrode materials, where the conductivity is much higher, accumulation times can extend to minutes or even tens of minutes.44 This extended duration makes noise optimization critical, as it can significantly impact the quality, reliability, and cost of the measurements. Precise measurements of scalar properties, such as the ultra-low DC electronic conductivity of dielectric crystals, also follow this principle.

Here, we present a workflow for incorporating intra-step noise optimization into the BO GP cycle. The proposed workflow defines the optimal exposure time to balance the quality and cost of each iteration directly within the cycle of the automated investigation of the target property. We explore two alternative approaches: a reward-driven noise optimization and a double-optimization acquisition function. We validate our method through simulations and real-world experiments using Piezoresponse Force Microscopy (PFM).

In-loop noise level optimization: a concept

To introduce the noise optimization workflow, we consider the optimization of a property f(x) in a 1D space x. For example, x might represent a compositional axis within a combinatorial library. The property f can be either a scalar or a vector. Our goal is to simultaneously optimize the property f and the noise associated with its experimental determination within a single optimization loop. To achieve this, we expand the input space of the optimization model by adding a dimension – time (t). As a result, the optimization of the property f is carried out in the (x,t) space. While f(x) is independent of the measurement duration (exposure time), the noise (Noisef) in measuring f(x) is determined by the exposure time. In the general case, the noise level may also depend on the location along the x axis. However, for simplicity, we assume the noise to be independent of x, such that Noisef(x,t) = Noisef(t). As a result, we encounter an unusual situation in which the optimization of both f(x) and Noisef(t) occurs in a 2D space, yet each of the variables depends on only one of the two input dimensions. We also note that, whereas f(x) can be arbitrary, the measurement noise is expected to be a monotonically decreasing function of measurement time and, in many experimentally important cases, follows simple behavior (e.g. for Gaussian white noise or 1/f noise).

At each iteration, the optimization process can be broken down into three sequential steps: (1) GP modeling of f and its uncertainty distribution within the input space (x,t), (2) construction of an acquisition function that incorporates the cost of the experiment for noise optimization, and (3) selection of the next location to explore at the extremum of the selected acquisition function in the (x,t) space (Fig. 1). While the third step remains similar to classical BO approaches, the construction of the surrogate GP model and the acquisition function are detailed below.


Fig. 1 Scheme of the double noise–property optimization workflow.

MeasuredNoiseGP for f(x) predictions

The optimization of f is carried out in the low-dimensional (x,t) space, which makes it reasonable to implement GPs directly as surrogate models for BO. Here we utilize the MeasuredNoiseGP model, which incorporates noise estimated from the experimental observations into the model rather than inferring it as in a traditional GP.45 The experimental noise at the measured points can be calculated directly from spectrum acquisition or derived from repeated measurements of scalar values. The noise level at unexplored locations is predicted independently of the GP model used for f (the primary model). This prediction is performed using a separate model (the noise model). The noise model is trained to predict the noise component based on the noise estimated at the already measured locations. Once the noise predictions are obtained, they are added to the diagonal of the covariance matrix of the primary model. This adjustment accounts for the additional uncertainty introduced by the noise. The resulting amended covariance matrix (K′) for f(x) in the MeasuredNoiseGP model can be expressed as:
 
K′ = K + diag(Noisef),(1)
where K is the covariance matrix of the primary model, and Noisef is the noise predicted by the noise model. Hence, eqn (1) reflects the increased uncertainty due to the measured noise level.
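As an illustration of eqn (1), the following minimal numpy sketch amends a stand-in covariance matrix with noise predictions on its diagonal; the RBF kernel, training locations, and noise values are placeholders rather than the actual MeasuredNoiseGP implementation in the modified gpax library.

import numpy as np

def rbf_kernel(X1, X2, k_scale=1.0, k_length=0.3):
    """Simple RBF kernel used only as a stand-in for the primary-model kernel."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return k_scale * np.exp(-0.5 * d2 / k_length**2)

# Training locations (x only) and the noise predicted there by the noise model
x_train = np.array([0.1, 0.4, 0.8])
noise_pred = np.array([0.35, 0.22, 0.28])   # Noisef at the measured points

# Eqn (1): amend the covariance matrix with the predicted noise on its diagonal
K = rbf_kernel(x_train, x_train)
K_prime = K + np.diag(noise_pred)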

To reflect the independence of f(x) from t, the kernel function (covariance) of the primary model has been adjusted by deactivating the time dimension. In this scenario, the kernel function depends solely on the distance between the projections of the measured points onto the x axis, ignoring their distribution along the t axis. The modified Matérn 5/2 kernel is expressed as:

 
k(xi,xj) = kscale × (1 + √(5r) + (5/3)r) × exp(−√(5r)),(2)
where r = ((xi − xj)/klength)² is the squared distance normalized by klength. The kscale and klength are kernel hyperparameters defined during GP training. It is important to note that other GP kernels can be similarly adapted for this purpose.
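A hedged sketch of such a time-deactivated kernel is shown below: the Matérn 5/2 form of eqn (2) is evaluated on (x,t) inputs but uses only the x coordinate, so two points that differ only in t are treated as identical. The function name and hyperparameter values are illustrative, not the modified gpax kernel itself.

import numpy as np

def matern52_x_only(X1, X2, k_scale=1.0, k_length=0.2):
    """Matern 5/2 kernel over (x, t) inputs that uses only the x coordinate,
    so the surrogate for f is constant along the time axis (eqn (2))."""
    # keep only the first column (x); the t column is deactivated
    r = ((X1[:, 0, None] - X2[None, :, 0]) / k_length) ** 2
    sq5r = np.sqrt(5.0 * r)
    return k_scale * (1.0 + sq5r + 5.0 * r / 3.0) * np.exp(-sq5r)

# Example: two points that differ only in t are fully correlated
X = np.array([[0.5, 1.0], [0.5, 8.0]])
print(matern52_x_only(X, X))  # every entry equals k_scale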

In many cases, optimizing the noise level is facilitated by our understanding of its nature and the availability of models to describe it. The noise in the system typically consists of time-independent and time-dependent components. For many spectroscopic measurements, the noise decays as signal averaging is performed over longer acquisition times, with the noise level decreasing as 1/√t. In turn, the time-independent noise component can be represented by thermal, instrumental, and readout noise, for instance. The expression for the total noise may be written as:

 
Noisef(t) = A0 + A1/√t,(3)
where the constants A0 and A1 represent the time-independent and time-dependent contributions, respectively. The dependence of the noise on the measurement duration is illustrated by the black line in Fig. 2a. Given that the noise structure is described by expression (3), a structured GP model with a specified mean function can be utilized to estimate the hyperparameters A0 and A1. It is crucial to ensure that the independence of the noise from x is accurately reflected in the noise model.
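The snippet below sketches the noise mean function of eqn (3) and a plain least-squares estimate of A0 and A1 from noise levels estimated at a few exposure times. The data values are invented for illustration, and in the actual workflow this estimation is performed by a structured GP with the mean function of eqn (3) rather than by curve_fit.

import numpy as np
from scipy.optimize import curve_fit

def noise_mean(t, A0, A1):
    """Eqn (3): time-independent floor plus a 1/sqrt(t) averaging term."""
    return A0 + A1 / np.sqrt(t)

# Illustrative noise levels estimated from repeated measurements at a few exposure times
t_meas = np.array([0.3, 1.0, 3.0, 8.0])
noise_est = np.array([3.6, 2.2, 1.3, 0.85])

# Least-squares stand-in for the structured-GP estimate of A0 and A1
(A0_fit, A1_fit), _ = curve_fit(noise_mean, t_meas, noise_est, p0=[0.1, 2.0])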


Fig. 2 Reward function for exposure time optimization. (a) Noise dependence on exposure time is depicted by the black line, representing the noise model. The background color illustrates the distribution of the reward across the (t,Noise) space. Parameters used: A0 = 0.1, A1 = 2, α = 0.1, β = 2. (b) Normalized profile of the reward function corresponding to the noise model.

Acquisition function

The primary requirement for an acquisition function in BO is to identify the location that maximizes the expected benefit of exploration during the next iteration. This means that the acquisition function defined over the (x,t) space should exhibit a maximum or minimum at the most promising location for further exploration. In our case of in-cycle noise optimization, the constructed acquisition function should follow both policies: (1) optimization/exploration of the target property f(x) and (2) optimization of the exposure time. It should be noted that, while the objective of noise optimization remains the same regardless of the experiment, the policy for property optimization is defined by an operator and may vary depending on the global experimental objective.

Direct construction of traditional acquisition functions (such as Maximum Uncertainty, Expected Improvement, Upper Confidence Bound, etc.) based on GP predictions and uncertainties over the (x,t) space enables us to prioritize the location along the x axis for measuring f(x) at the next step. At the same time, this approach is not suited for optimizing the measurement duration.

To accomplish this goal, we proceed with the following derivation. From eqn (1), the total uncertainty includes the time-dependent noise component and the time-independent part:

 
σtotal²(x,t) = σf²(x) + Noisef(t) = σf²(x) + A0 + A1/√t,(4)
The measurements of f at each iteration introduce additional knowledge and decrease the uncertainty. Concurrently, the noise level, dictated by our measurement capabilities and the exposure time, sets a theoretical limit, determining the maximum possible information gain and the minimum achievable uncertainty. In other words, we cannot determine f(x) at some exposure time t0 with a lower uncertainty than Noisef(t0). The lack of time dependence in the optimized property f(x) means that projecting the uncertainty onto the time axis will merely reflect the noise model behavior. Guided by classical acquisition functions and unable to effectively reduce the uncertainty, the algorithm will typically select the maximum available exposure time. This occurs because, in such a direct approach, the cost function defining the optimal measurement time is not accounted for. In fact, the optimization of the exposure time should be guided by a reward function, which incorporates experimental cost information into the acquisition function model.

To optimize the exposure time (and thereby reduce the noise), it is necessary to introduce a cost model and a reward function (R) that balance the cost against the achievable noise level for a single measurement. In the simplest case, the cost of our measurements is determined solely by the exposure time and depends linearly on it: R(Cost(t),Noise) = R(t,Noise).

Given that our goal is to minimize noise while also minimizing cost (i.e., exposure time), our reward should decrease as either exposure time or noise increases. There are no strict rules to construct the reward function besides the principles mentioned above. In our experiments, we defined the reward function for noise optimization as follows:

 
R(t,Noise) = −α × t² − β × Noise,(5)
where α and β are coefficients. These coefficients are used to make the units of the noise and exposure time compatible and to balance their contributions within the reward function. We chose a quadratic dependence of the reward on the exposure time and a linear dependence on the noise level (Fig. 2a, background color). However, alternative combinations of these dependencies can also be considered. Given a defined noise model Noisef(t), the optimal exposure time can be determined from the profile of the reward function along the noise model, R(t,Noise = Noisef(t)) (Fig. 2b). To ensure that the reward function is well-scaled, we normalized it to the range [0,1].
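A small sketch of how the optimal exposure time can be read off the reward of eqn (5) evaluated along the noise model, using the parameter values quoted in Fig. 2 (α = 0.1, β = 2, A0 = 0.1, A1 = 2); the grid bounds and normalization follow the description above and are otherwise arbitrary.

import numpy as np

def reward(t, noise, alpha=0.1, beta=2.0):
    """Eqn (5): penalize both long exposures and high noise."""
    return -alpha * t**2 - beta * noise

t_grid = np.linspace(0.2, 10.0, 500)
noise_profile = 0.1 + 2.0 / np.sqrt(t_grid)       # noise model, A0 = 0.1, A1 = 2

r = reward(t_grid, noise_profile)
r_norm = (r - r.min()) / (r.max() - r.min())      # normalize to [0, 1]
t_opt = t_grid[np.argmax(r_norm)]                  # optimal exposure time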

Given the reward function, the acquisition function can be adjusted to capture the interdependency between the measurement cost (duration) and the noise level. Below, we outline two approaches for integrating noise optimization into the investigation of f(x) by incorporating the reward function into the acquisition function:

(1) Pure reward-driven noise optimization: in this approach, the reward serves as a weighting function for classical acquisition functions, prioritizing certain exposure times. This method directly incorporates noise considerations into the reward mechanism.

(2) Double-optimization acquisition: this approach uses an artificial total acquisition function that integrates two ‘independent’ components: the first is an acquisition function tailored for optimizing f(x), and the second is a time acquisition function built around the noise reward function and the uncertainty in the noise model predictions. Each component addresses a distinct objective, enabling both noise and function optimization within a Bayesian framework.

In the pure reward-driven approach, the reward function acts as a weighting factor for the classical acquisition function defined over the entire (x,t) space:

 
acqf(x,t) = [Rnorm(t)]^γ × acq(x,t,σtotal),(6)
where acqf(x,t) is the resulting acquisition function, acq(x,t,σtotal) is the classical acquisition function determined by x, t, and the full uncertainty σtotal(x,t), Rnorm is the reward function normalized to [0,1], and γ is the exponent of the weighting function. As a result, the acquisition function value at both very short (high noise) and very long (high cost) measurement durations becomes smaller. The exponential factor γ is used to control the steepness of the reward function (Fig. 3a). The uncertainty of f increases as t approaches zero, following at least 1/t (the noise limit). This growth shifts the maximum of the resulting acquisition function when it is multiplied by the peak-shaped reward function (Fig. 3b). However, increasing the steepness of the reward function reduces this effect, mitigating the shift (Fig. 3c).
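The following sketch illustrates the reward-weighted acquisition of eqn (6) on an (x,t) grid, using a toy uncertainty surface in place of the MeasuredNoiseGP posterior; the next measurement point is taken at the maximum of the weighted acquisition. All numerical values are illustrative placeholders.

import numpy as np

# Toy grids and placeholder GP uncertainty over (x, t); in the workflow this
# comes from the MeasuredNoiseGP posterior plus the noise model (eqn (4))
x_grid = np.linspace(0.0, 1.0, 101)
t_grid = np.linspace(0.2, 10.0, 99)
X, T = np.meshgrid(x_grid, t_grid, indexing="ij")
sigma_total = 0.5 * np.abs(np.sin(3 * X)) + 0.1 + 2.0 / np.sqrt(T)

# Normalized reward profile along the noise model (see the previous sketch)
r = -0.1 * T**2 - 2.0 * (0.1 + 2.0 / np.sqrt(T))
r_norm = (r - r.min()) / (r.max() - r.min())

gamma = 5.0
acq = (r_norm**gamma) * sigma_total            # eqn (6) with a UE-type acquisition
ix, it = np.unravel_index(np.argmax(acq), acq.shape)
x_next, t_next = x_grid[ix], t_grid[it]        # next (x, t) to measure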


Fig. 3 Reward-driven optimization: (a) normalized reward functions (weighting functions) used to construct (b) acquisition functions for different exponential factors (γ). (c) Optimal time predictions as a function of γ. The ground truth optimal time is represented by the black dashed line.

Optimizing the exposure time within a Bayesian framework requires constructing an acquisition function that balances the reward with its associated uncertainty. Since the reward function is derived from the noise model, the uncertainty propagation rule can be used to calculate the uncertainty of the reward. Classical BO acquisition functions (e.g., Expected Improvement, Upper Confidence Bound, etc.) defined over the (x,t) space (nominally, because R is independent of x) then allow for the optimization of the exposure time. Since we cannot reduce Noisef(ti) for a chosen point (xi,ti), we construct a surrogate acquisition function acqx(x,t) to optimize f(x) by extending the uncertainty of f predicted at a specific time across the entire (x,t) space.
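As an illustration of this time-acquisition component, the sketch below computes an Expected Improvement over the reward of eqn (5), with the reward uncertainty propagated from the noise-model uncertainty (for this linear dependence on the noise, σR = β × σNoise). The posterior mean/std profiles and the incumbent value r_best are toy placeholders, not outputs of the actual noise GP.

import numpy as np
from scipy.stats import norm

def reward_ei(t_grid, noise_mean, noise_std, r_best, alpha=0.1, beta=2.0):
    """Expected Improvement on the reward of eqn (5), with the reward uncertainty
    propagated from the noise-model uncertainty (sigma_R = beta * sigma_noise)."""
    mu_r = -alpha * t_grid**2 - beta * noise_mean
    sigma_r = beta * noise_std
    z = (mu_r - r_best) / np.maximum(sigma_r, 1e-12)
    return (mu_r - r_best) * norm.cdf(z) + sigma_r * norm.pdf(z)

# Toy usage: noise-model posterior mean and std over the exposure-time grid
t = np.linspace(0.2, 10.0, 200)
mu_n = 0.1 + 2.0 / np.sqrt(t)
sd_n = 0.05 * np.ones_like(t)
acq_t = reward_ei(t, mu_n, sd_n, r_best=-4.0)
t_next = t[np.argmax(acq_t)]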

The double-optimization acquisition function, acqf(x,t), should integrate both components described above: the exposure time optimization acquisition function and the selected acquisition function for optimizing f(x). The final double-optimization acquisition function was constructed as a sum of the normalized acquisition functions for exposure time – acqt(x,t) and for f(x) – acqx(x,t).

 
acqf(x,t) = ½[Norm(acqt(x,t)) + Norm(acqx(x,t))](7)
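A minimal sketch of eqn (7) is given below: two placeholder components, one depending only on x (uncertainty of f) and one only on t (exposure-time reward), are min-max normalized and averaged, and the next (x,t) point is selected at the maximum. The component definitions are toy stand-ins for the acquisition functions described above.

import numpy as np

def norm01(a):
    """Min-max normalization applied before combining the two acquisitions."""
    return (a - a.min()) / (a.max() - a.min() + 1e-12)

x_grid = np.linspace(0.0, 1.0, 101)
t_grid = np.linspace(0.2, 10.0, 99)
X, T = np.meshgrid(x_grid, t_grid, indexing="ij")

# Placeholder components: acq_x depends only on x (uncertainty of f),
# acq_t only on t (reward for the exposure time); both extended over (x, t)
acq_x = 0.5 * np.abs(np.sin(3 * X))
acq_t = -0.1 * T**2 - 2.0 * (0.1 + 2.0 / np.sqrt(T))

acq_total = 0.5 * (norm01(acq_t) + norm01(acq_x))      # eqn (7)
ix, it = np.unravel_index(np.argmax(acq_total), acq_total.shape)
x_next, t_next = x_grid[ix], t_grid[it]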

Experimental

The gpax Python library was used to implement the Gaussian Processes.45 To execute the workflow and principles outlined above, we modified both the MeasuredNoiseGP model and some kernel functions within the gpax library. Within the modified optimization model, noise prediction was carried out using a structured GP model with the mean function specified by expression (3).

The efficiency of the method was evaluated both through simulations and in the actual exploration of the local domain structure using the Piezoresponse Force Microscopy (PFM) technique.

In the simulations, the study of f(x) was driven by a pure exploration strategy (Uncertainty Exploration, UE). The exponential factor γ was set to 5. In the double-optimization approach, the EI acquisition function was used for the exposure time optimization. All simulation experiments started with 3 seed measurements at random locations within the input space and contained 15 subsequent BO-guided exploration steps. Before the noise optimization converged, 30 repeated measurements of f(xi) were simulated at each iteration. These measurements were used to determine the function value and assess the associated uncertainty at each measured point (xi,ti). A variance in the predicted optimal time below 5 × 10−4 over the last three steps was used as the convergence criterion for the noise model. After the model converged, f(xi) was estimated using 3 measurements, while the noise level was determined from the model predictions at ti. Uniform(0,15) priors were used for A0 and A1 in the noise mean function for both approaches.
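The convergence criterion described above can be expressed as a short helper; this is an illustrative reimplementation under the stated threshold, not the code used in the paper.

import numpy as np

def noise_model_converged(t_opt_history, tol=5e-4, window=3):
    """Convergence criterion used in the simulations: the variance of the
    predicted optimal time over the last `window` steps falls below `tol`."""
    if len(t_opt_history) < window:
        return False
    return float(np.var(t_opt_history[-window:])) < tol

# Example: predicted optimal times over successive BO steps
print(noise_model_converged([3.1, 2.2, 1.9, 1.92, 1.91]))  # True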

The real automated experiment for evaluating the model efficiency was performed by PFM using an MFP-3D (Oxford Instruments, USA) scanning probe microscope. Silicon probes (Multi75E-G) with a conductive Pt/Cr coating were used for the response acquisition. The Dual Amplitude Resonance Tracking (DART) mode of PFM was used to acquire the ground truth profile. The resonance spectra were measured at a constant sampling rate of 10 kHz. Given the constant sampling rate, varying the sweep time changed the number of acquired data points within the resonance curve, affecting the precision of the resonance fitting by eqn (9) and, thereby, of the surface displacement amplitude estimation. The microscope was operated automatically via the AESPM Python library using a control program written in a Jupyter notebook.46 The main calculations, including BO, were executed in Google Colab, which was connected to the microscope control notebook for real-time data exchange through a simple web server.

The objective of the real experiment was the exploration of the piezoelectric response dependence (UE acquisition function) with simultaneous optimization of the sweep time (EI acquisition function). The double-acquisition approach was used to guide the automated experiment. The experiment comprised 20 BO-guided steps, preceded by 3 preliminary seed measurements at random locations within the exploration space.

Results and discussion

Experiment simulations

In simulations of the real experiment, we emulated the investigation of a scalar variable dependence, with the uncertainty in the value of the variable estimated from repeated measurements. The simulations of the actual exploration were performed using the Forrester function (Fig. 4a).47 Owing to its complexity and non-linearity, the Forrester function is a widely recognized benchmark often used to assess the performance of optimization algorithms. It is defined as:
 
f(x) = (6x − 2)² sin(12x − 4), x ∈ [0,1],(8)

Fig. 4 Ground truth (a) f(x) function and (b) noise model; (c) f(x) and (d) noise model defined over the exploration space.

The ground truth noise model is defined by eqn (3) with parameters A0 = 0.1 and A1 = 2 (Fig. 4b). The reward function for optimizing the exposure time is given by expression (5) with α = 0.1 and β = 2. The coefficients α and β indirectly determine the cost function. The optimization process is constrained within the input space x ∈ [0,1] and t ∈ [0.2,10].

The experiment simulates the exploration of the Forrester function through iterative measurements of f(x). At each exploration step, the automated agent selects the next location to be explored and the "exposure" time, which determines the precision with which f(x) is defined. The noise component is modeled as a normally distributed addition to the ground truth, with its standard deviation governed by the measurement duration according to the ground truth noise model. Before the noise model converges, the algorithm estimates the noise level through repeated measurements at the exploration locations. Once the noise optimization converges, the noise estimate is derived from the optimized noise model. The main goal of the experiment is to explore f(x) automatically while simultaneously optimizing the measurement duration.
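A sketch of this simulated measurement process, assuming the ground truth of eqn (8) and the noise model of eqn (3) with A0 = 0.1 and A1 = 2, where the Gaussian noise standard deviation follows that model; the function names are illustrative rather than the simulation code released with the paper.

import numpy as np

def forrester(x):
    """Eqn (8): ground-truth target function."""
    return (6 * x - 2) ** 2 * np.sin(12 * x - 4)

def simulate_measurement(x, t, n_repeats=30, A0=0.1, A1=2.0, rng=None):
    """Simulated noisy measurement: Gaussian noise whose standard deviation
    follows the ground-truth noise model (eqn (3)) at exposure time t."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = A0 + A1 / np.sqrt(t)
    samples = forrester(x) + rng.normal(0.0, sigma, size=n_repeats)
    return samples.mean(), samples.std(ddof=1)   # estimate of f and of the noise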

The experiment simulation, driven by both the pure reward (Fig. 5a, c, e and g) and double acquisition approaches (Fig. 5b, d, f and h), demonstrated their ability to converge after only a few BO-based exploration steps. However, notable differences in the evolution of the exploration process are observed between the two methods. We repeated the experiment in simulation mode multiple times for both approaches. The discussion below focuses on the comparison of the most representative results; additional outcomes are published on GitHub (see Data availability).


Fig. 5 Automated experiment simulations using the (a, c, e and g) reward-driven approach and the (b, d, f and h) double-acquisition approach. (a and b) Experimental trajectories in the (x,t) exploration space. The background shading represents the acquisition function values. (c and d) Evolution of the optimal measurement time predictions with iteration number. (e and f) Predictions of f(x) and (g and h) Noisef(t) at the final exploration step. Data points in (a, b and e–h) are color-coded based on iteration number, with light green indicating the initial seed measurements. The UE acquisition function drives both simulations.

In most of our experiments, even with only three seeding locations, the algorithm often succeeds in prioritizing duration ranges close to the true optimal time. However, the accuracy of this noise estimation heavily depends on the number of iterative measurements taken at each location. To ensure reliable estimation, we follow the empirical lower limit for parameter estimation of a normal distribution, using 30 measurements at each location.

The pure reward-driven approach demonstrates fast convergence after a few BO iteration steps (Fig. 5c). Typically, the reward-driven model converges to a duration slightly lower than the ground truth value due to the combined effects of reward and noise, as discussed earlier (Fig. 5a and c). Increasing the exponential factor, γ, further reduces this difference. It is important to note that, in the limit, the reward function can be substituted by a delta function with its peak at the optimal reward point. While the algorithm converges rapidly to the optimal exposure time, the estimation of the A0 and A1 parameters that define the noise model may not always align perfectly with the ground truth. The algorithm typically approximates the noise model accurately near the optimal exposure time, but predictions for both very high and very low durations may remain less accurate.

Experiments based on the double-acquisition method typically require a similar number of steps before the model converges to the optimal measurement duration (Fig. 5d and S1). Incorporating the noise prediction uncertainty into the optimization process in the double-acquisition-driven approach typically improves the noise model parameter predictions. However, this does not always lead to a significant enhancement in the accuracy of the optimal duration prediction. Additionally, the double-acquisition method does not exhibit the optimum shift effect, often resulting in a slightly more accurate estimation of the ground truth duration.

Concurrently with the exposure time optimization, both algorithms explore the Forrester function using the UE acquisition function, which is the main objective of the automated experiment. It is important to note that, although we selected uncertainty exploration, any other acquisition function can be employed without restrictions for the f(x) exploration or optimization. The structure of the proposed approaches allows independent optimization of the noise and the function within the same cycle (Fig. 5e–h). The time dependencies of the acquisition function at different locations exhibit similar profiles. Similarly, the projections onto the x-axis of constant-time profiles show analogous patterns. This is clearly illustrated by the visualization of the acquisition function values across the exploration space (Fig. 5a and b).

In our experiments, both the pure reward-driven and double-acquisition-based approaches achieved convergence to the optimal duration in more than 90% of the attempts during the first 10 exploration steps, independent of the seed locations. Beyond the simple exploration of the function f(x), the algorithm was also tested for optimization using the EI acquisition function (Fig. S2). In these experiments, we observe that the noise model usually converges at a similar rate (Fig. S1). No clear correlation was observed between the rate of measurement duration optimization and the optimization of f(x) itself. This suggests that the noise optimization does not interfere with the primary optimization process. It is important to note that both approaches successfully identify the optimal measurement duration with only 5 repeated measurements at each point (Fig. S3), which can be crucial for real-world experimentation. However, as the number of measurements per point decreases, the number of optimization steps required for convergence naturally increases.

Real automated experiment

The primary objective of the real automated experiment was to reconstruct a 35 μm-long profile of the PFM electromechanical response as it traversed the ferroelectric domains in a [001]-cut high-quality PbTiO3 single crystal (Fig. 6a). The local surface displacements measured by PFM arise from the converse piezoelectric effect, reflecting the dependence of the piezoelectric coefficient on the local domain structure. The ground truth profile was measured using the DART method. DART is primarily a qualitative technique, offering limited precision for quantitative piezoresponse characterization.48 At the same time, it effectively captures the shape of the 'real' profile, reflecting variations in the piezoresponse along the scanned line. The most precise approach for estimating the local surface deformations induced by the piezoelectric effect involves fitting the contact resonance curve (eqn (9)) obtained in response to an AC voltage frequency sweep applied by the SPM probe. The measured resonances are fitted using a simple harmonic oscillator (SHO) model:
 
r(ω) = rrealω0²/√((ω0² − ω²)² + (ω0ω/Q)²),(9)
where ω is the frequency, r(ω) is the measured amplitude of the surface displacement, rreal is the actual amplitude, ω0 is the resonance frequency, and Q is the quality factor. The uncertainty in the surface displacement, derived from the SHO fitting, was determined from the approximate covariance of the fit. The x axis in this study represents the distance from the starting point along the investigated profile.
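A hedged sketch of the SHO fitting step of eqn (9) using scipy, returning the displacement amplitude and its 1σ uncertainty from the fit covariance; the initial-guess heuristics and the synthetic resonance parameters below are assumptions for illustration, not the values used in the experiment.

import numpy as np
from scipy.optimize import curve_fit

def sho_amplitude(w, r_real, w0, Q):
    """Eqn (9): simple-harmonic-oscillator amplitude of the contact resonance."""
    return r_real * w0**2 / np.sqrt((w0**2 - w**2) ** 2 + (w0 * w / Q) ** 2)

def fit_resonance(freqs, amps):
    """Fit the measured sweep and return the displacement amplitude with the
    1-sigma uncertainty taken from the approximate covariance of the fit."""
    p0 = [amps.max() / 50.0, freqs[np.argmax(amps)], 50.0]   # rough initial guess
    popt, pcov = curve_fit(sho_amplitude, freqs, amps, p0=p0)
    return popt[0], np.sqrt(pcov[0, 0])

# Synthetic example: noisy sweep around an assumed 350 kHz contact resonance
rng = np.random.default_rng(0)
freqs = np.linspace(300e3, 400e3, 256)
amps = sho_amplitude(freqs, 1e-3, 350e3, 60.0) + rng.normal(0, 1e-4, freqs.size)
print(fit_resonance(freqs, amps))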

Fig. 6 Results of the real automated experiment. (a) PFM amplitude scan with the exploration profile marked in blue. The red point corresponds to location 0. (b) Comparison of the predicted PFM amplitude profile with the actual data extracted from the PFM scan. (c) Prediction of the NoiseA(t) along with the explored points. (d) Evolution of the predicted optimal measurement times as a function of iteration number. (e) Experimental trajectory visualized in the (x,t) exploration space. The background shading represents the acquisition function values. Data points in (b, c and e) are color-coded based on iteration number, as indicated by the colorbar in (e), with light green representing the initial seed measurements.

The predicted response dependence after 20 exploration steps, along with the actual data extracted from the PFM scan, is shown in Fig. 6b. The predicted profile shape closely follows the real one measured by PFM, effectively reconstructing the actual profile. It is important to note that, for accurate domain reconstruction, implementing a structured GP as the surrogate model would be more appropriate than the GP with the modified Matérn 5/2 kernel used in this experiment. However, for our specific goals, the chosen model is more than adequate. The differences between the DART technique used for obtaining the ground truth profile and the response estimation from the full resonance curve hinder a direct comparison of absolute values. Therefore, the primary objective of our experiment was to accurately reconstruct the shape of the ground truth profile rather than to match the absolute values.

The experiment trajectory clearly shows that, at each step, the algorithm selects the spatial coordinate with the highest uncertainty, in line with the UE acquisition function (Fig. 6e). The close match between the predicted profile and the PFM scan response confirms the successful progression of the experiment toward the reconstruction objective. From the analysis of the exploration trajectory (Fig. 6d and e), it is evident that in the early stages the model actively explores along the t-axis. However, after 11 initial exploration steps, the model converges to a single-spectrum measurement duration of 0.41 s (Fig. 6c), representing the optimum for the chosen reward function (eqn (5)). This convergence toward the optimal value after initial exploration is typical for BO driven by the EI acquisition function.

Overall, the proposed approach demonstrates its ability to pursue the predefined primary objective—an exploration of the PFM response along the profile—while simultaneously optimizing the resonance curve measurement duration. For greater statistical significance, we repeated our experiment multiple times with different seed locations. In each case, the model successfully converged. The results of these experiments can be found on GitHub (see Data availability).

Conclusions

To summarize, we propose a workflow for optimizing the measurement duration in real time during automated experiments. Our algorithm employs the MeasuredNoiseGP model, modified with specialized kernels. Optimization within this workflow can be driven either by a pure reward-based approach or by a double-optimization acquisition function. The pure reward-driven approach relies solely on the prediction of the noise structure and demonstrates faster convergence to the optimal measurement duration. In contrast, the double-optimization acquisition function, which incorporates the noise model uncertainty into the process, offers a stronger exploration impulse, leading to a more accurate reconstruction of the noise model, albeit requiring more iterations to converge.

The efficiency of the proposed framework was validated through a simulation of the Forrester function exploration, a benchmark for optimization processes, and a real automated PFM experiment. In both cases, the framework successfully demonstrated its ability to simultaneously optimize the measurement duration and the target property. Including the measurement duration in the BO cycle was found to have no noticeable impact on the primary objective of optimizing the target function. When dealing with homoscedastic noise that is independent of x and a target function that is independent of the exposure time, this addition does not result in an exponential increase in the required number of observations (the curse of dimensionality). However, a moderate increase in computational complexity is expected due to the expanded dimensionality of the exploration space. It is important to highlight that, when expanding the number of 'ordinary' variables, the same limitations faced by traditional BO-based approaches – such as the curse of dimensionality, the complexity of constructing accurate surrogate models, and the challenges of optimizing the acquisition function – still apply.

The proposed workflow incorporates intra-step optimization within the automated experiment, enhancing global optimization efficiency by balancing knowledge acquisition with experimental costs. This approach is particularly valuable for automating spectroscopic measurements, such as Raman, XRD, etc., where exposure (or accumulation) time is a critical hyperparameter.

Data availability

The data that support the findings of this study are available in the ESI of this article. Code and raw experimental results have been uploaded to Zenodo (https://doi.org/10.5281/zenodo.15024308) and are available on GitHub at https://github.com/Slautin/2024_Noise_BO (release version 1.0.0).

Author contributions

Boris N. Slautin: conceptualization (equal); software (equal); data curation; writing – original draft. Yu Liu: software (equal). Jan Dec: resources. Vladimir V. Shvartsman: writing – review & editing (equal). Doru C. Lupascu: writing – review & editing (equal). Maxim A. Ziatdinov: software (equal). Sergei V. Kalinin: conceptualization (equal); supervision; writing – review & editing (lead).

Conflicts of interest

There are no conflicts to declare.

Acknowledgements

This research (workflow design, SVK) was primarily supported by the National Science Foundation Materials Research Science and Engineering Center program through the UT Knoxville Center for Advanced Materials and Manufacturing (DMR-2309083). VVS and DCL acknowledge the support by the German Research Foundation (DFG Project “Molectra” GR 4792/4-1, Project number 510095586). The development of the GPax Python package (MAZ) was supported by the Laboratory Directed Research and Development Program at Pacific Northwest National Laboratory, a multiprogram national laboratory operated by Battelle for the U.S. Department of Energy.

References

1. H. Hysmith, E. Foadian, S. P. Padhy, S. V. Kalinin, R. G. Moore, O. S. Ovchinnikova and M. Ahmadi, Digital Discovery, 2024, 3, 621.
2. Y. Xiao, P. Zheng, T. Yang, S. K. Chakravarty, J. Rodriguez-Lopez, A. Urban and Z. Li, J. Electrochem. Soc., 2023, 170, 050538.
3. J. M. Gregoire, L. Zhou and J. A. Haber, Nat. Synth., 2023, 2, 493.
4. S. Lo, S. G. Baird, J. Schrier, B. Blaiszik, N. Carson, I. Foster, A. Aguilar-Granda, S. V. Kalinin, B. Maruyama, M. Politi, H. Tran, T. D. Sparks and A. Aspuru-Guzik, Digital Discovery, 2024, 3, 842.
5. M. Abolhasani and E. Kumacheva, Nat. Synth., 2023, 2, 483.
6. K. Choudhary, B. DeCost, C. Chen, A. Jain, F. Tavazza, R. Cohn, C. W. Park, A. Choudhary, A. Agrawal, S. J. L. Billinge, E. Holm, S. P. Ong and C. Wolverton, npj Comput. Mater., 2022, 8, 59.
7. C. P. Gomes, B. Selman and J. M. Gregoire, MRS Bull., 2019, 44, 538.
8. Z. Wang, Z. Sun, H. Yin, X. Liu, J. Wang, H. Zhao, C. H. Pang, T. Wu, S. Li, Z. Yin and X. Yu, Adv. Mater., 2022, 34, 2104113.
9. A. McDannald, M. Frontzek, A. T. Savici, M. Doucet, E. E. Rodriguez, K. Meuse, J. Opsahl-Ong, D. Samarov, I. Takeuchi, W. Ratcliff and A. G. Kusne, Appl. Phys. Rev., 2022, 9, 021408.
10. Y. Liu, J. Yang, R. K. Vasudevan, K. P. Kelley, M. Ziatdinov, S. V. Kalinin and M. Ahmadi, J. Phys. Chem. Lett., 2023, 14, 3352.
11. M. Rashidi and R. A. Wolkow, ACS Nano, 2018, 12, 5185.
12. X. Wang, Y. Jin, S. Schmitt and M. Olhofer, ACM Comput. Surv., 2023, 55, 287.
13. C. E. Rasmussen, in Advanced Lectures on Machine Learning. ML 2003. Lecture Notes in Computer Science, ed. O. Bousquet, U. von Luxburg and G. Rätsch, Springer, Berlin, 2004, vol. 176, p. 63.
14. A. G. Kusne, H. Yu, C. Wu, H. Zhang, J. Hattrick-Simpers, B. DeCost, S. Sarker, C. Oses, C. Toher, S. Curtarolo, A. V. Davydov, R. Agarwal, L. A. Bendersky, M. Li, A. Mehta and I. Takeuchi, Nat. Commun., 2020, 11, 5966.
15. K. Higgins, S. M. Valleti, M. Ziatdinov, S. V. Kalinin and M. Ahmadi, ACS Energy Lett., 2020, 5, 3426.
16. J. K. Pedersen, C. M. Clausen, O. A. Krysiak, B. Xiao, T. A. A. Batchelor, T. Löffler, V. A. Mints, L. Banko, M. Arenz, A. Savan, W. Schuhmann, A. Ludwig and J. Rossmeisl, Angew. Chem., Int. Ed., 2021, 60, 24144.
17. S. B. Harris, A. Biswas, S. J. Yun, K. M. Roccapriore, C. M. Rouleau, A. A. Puretzky, R. K. Vasudevan, D. B. Geohegan and K. Xiao, Small Methods, 2024, 8, 2301763.
18. K. Bamoto, H. Sakurai, S. Tani and Y. Kobayashi, Opt. Express, 2022, 30, 243.
19. C. Y. Chang, Y. W. Feng, T. S. Rawat, S. W. Chen and A. S. Lin, J. Intell. Manuf., 2024, 1, 1–14.
20. K. M. Roccapriore, S. V. Kalinin and M. Ziatdinov, Adv. Sci., 2022, 9, 2203422.
21. Y. Liu, K. P. Kelley, R. K. Vasudevan, H. Funakubo, M. A. Ziatdinov and S. V. Kalinin, Nat. Mach. Intell., 2022, 4, 341.
22. Y. K. Wakabayashi, T. Otsuka, Y. Krockenberger, H. Sawada, Y. Taniyasu and H. Yamamoto, APL Mater., 2019, 7, 101114.
23. C. T. Nelson, R. K. Vasudevan, X. Zhang, M. Ziatdinov, E. A. Eliseev, I. Takeuchi, A. N. Morozovska and S. V. Kalinin, Nat. Commun., 2020, 11, 6361.
24. M. Ziatdinov, Y. Liu, K. Kelley, R. Vasudevan and S. V. Kalinin, ACS Nano, 2022, 16, 13492.
25. M. A. Ziatdinov, A. Ghosh and S. V. Kalinin, Mach. Learn.: Sci. Technol., 2022, 3, 015003.
26. L. P. Swiler, M. Gulian, A. L. Frankel, C. Safta and J. D. Jakeman, J. Mach. Learn. Model. Comput., 2020, 1, 119.
27. E. J. Cross and T. J. Rogers, IFAC-PapersOnLine, 2021, 54, 168.
28. E. J. Cross, T. J. Rogers, D. J. Pitchforth, S. J. Gibson, S. Zhang and M. R. Jones, Data-Centric Eng., 2024, 5, e8.
29. A. G. Wilson, Z. Hu, R. Salakhutdinov and E. P. Xing, Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, AISTATS 2016, 2016, vol. 51, p. 370.
30. M. Valleti, R. K. Vasudevan, M. A. Ziatdinov and S. V. Kalinin, Mach. Learn.: Sci. Technol., 2024, 5, 015012.
31. S. L. Sanchez, E. Foadian, M. Ziatdinov, J. Yang, S. V. Kalinin, Y. Liu and M. Ahmadi, Digital Discovery, 2024, 3, 1577.
32. M. A. Ziatdinov, Y. Liu, A. N. Morozovska, E. A. Eliseev, X. Zhang, I. Takeuchi and S. V. Kalinin, Adv. Mater., 2022, 34, 2201345.
33. A. Makarova, I. Usmanova, I. Bogunovic and A. Krause, Adv. Neural Inf. Process. Syst., 2021, vol. 34, p. 17235.
34. V. Picheny, D. Ginsbourger, Y. Richet and G. Caplin, Technometrics, 2013, 55, 2.
35. J. M. Hernández-Lobato, M. W. Hoffman and Z. Ghahramani, Adv. Neural Inf. Process. Syst., 2014, 27.
36. S. Daulton, S. Cakmak, M. Balandat, M. A. Osborne, E. Zhou and E. Bakshy, in Proceedings of Machine Learning Research, 2022, vol. 162, p. 4831.
37. S. Daulton, M. Balandat and E. Bakshy, Advances in Neural Information Processing Systems, 2021, vol. 34, p. 2187.
38. J. Zhang, D. Semochkina, N. Sugisawa, D. C. Woods and A. A. Lapkin, Comput. Chem. Eng., 2025, 194, 108983.
39. G. Luo, X. Yang, W. Su, T. Qi, Q. Xu and A. Su, Chem. Eng. Sci., 2024, 298, 120434.
40. B. Tao, L. Yan, Y. Zhao, M. Wang and L. Ouyang, Comput. Ind. Eng., 2025, 199, 110749.
41. Z. Zanjani Foumani, M. Shishehbor, A. Yousefpour and R. Bostanabad, Comput. Methods Appl. Mech. Eng., 2023, 407, 115937.
42. K. Swersky, J. Snoek and R. P. Adams, in Neural Information Processing Systems, ed. C. J. Burges, L. Bottou, M. Welling, Z. Ghahramani and K. Q. Weinberger, Curran Associates, Inc., 2013, p. 26.
43. A. Biswas, S. M. P. Valleti, R. Vasudevan, M. Ziatdinov and S. V. Kalinin, arXiv, 2024, preprint, arXiv:2402.13402, DOI: 10.48550/arXiv.2402.13402.
44. R. Baddour-Hadjean, E. Raekelboom and J. P. Pereira-Ramos, Chem. Mater., 2006, 18, 3548.
45. M. Ziatdinov, gpax, https://github.com/ziatdinovmax/gpax, accessed September 2024.
46. Y. Liu, U. Pratiush, J. Bemis, R. Proksch, R. Emery, F. D. Rack, Y.-C. Liu, J.-C. Yang, S. Udovenko, S. Trolier-McKinstry and S. V. Kalinin, Rev. Sci. Instrum., 2024, 95, 093701.
47. A. I. J. Forrester, A. Sóbester and A. J. Keane, Proc. R. Soc. A, 2007, 463, 3251.
48. A. Gannepalli, D. G. Yablon, A. H. Tsou and R. Proksch, Nanotechnology, 2011, 22, 355705.

Footnote

Electronic supplementary information (ESI) available. See DOI: https://doi.org/10.1039/d4dd00391h

This journal is © The Royal Society of Chemistry 2025