Open Access Article
This Open Access Article is licensed under a Creative Commons Attribution-Non Commercial 3.0 Unported Licence

Machine-learning-guided design of electroanalytical pulse waveforms

Cameron S. Movassaghi§ *ab, Katie A. Perrotta a, Maya E. Curry c, Audrey N. Nashner a, Katherine K. Nguyen a, Mila E. Wesely de, Miguel Alcañiz Fillol f, Chong Liu a, Aaron S. Meyer g and Anne M. Andrews *abgh
aDepartment of Chemistry & Biochemistry, University of California, Los Angeles, Los Angeles, CA 90095, USA. E-mail: aandrews@mednet.ucla.edu; csmova@g.ucla.edu
bCalifornia NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA 90095, USA
cInstitute of Society and Genetics, University of California, Los Angeles, Los Angeles, CA 90095, USA
dDepartment of Ecology and Evolutionary Biology, University of California, Los Angeles, Los Angeles, CA 90095, USA
eDepartment of Psychology, University of California, Los Angeles, Los Angeles, CA 90095, USA
fInteruniversity Research Institute for Molecular Recognition and Technological Development, Universitat Politècnica de València -Universitat de València, Camino de Vera s/n, Valencia, 46022, Spain
gDepartment of Bioengineering, University of California, Los Angeles, Los Angeles, CA 90095, USA
hDepartment of Psychiatry and Biobehavioral Sciences, Semel Institute for Neuroscience and Human Behavior, and Hatos Center for Neuropharmacology, University of California, Los Angeles, Los Angeles, CA 90095, USA

Received 6th January 2025, Accepted 4th June 2025

First published on 10th June 2025


Abstract

Voltammetry is widely used to detect and quantify oxidizable or reducible species in complex environments. The neurotransmitter serotonin epitomizes an analyte that is challenging to detect in situ due to its low concentrations and the co-existence of similarly structured analytes and interferents. We developed rapid-pulse voltammetry for brain neurotransmitter monitoring due to the high information content elicited from voltage pulses. Generally, the design of voltammetry waveforms remains challenging due to prohibitively large combinatorial search spaces and a lack of design principles. Here, we illustrate how Bayesian optimization can be used to hone searches for optimized rapid pulse waveforms. Our machine-learning-guided workflow (SeroOpt) outperformed random and human-guided waveform designs and is tunable a priori to enable selective analyte detection. We interpreted the black box optimizer and found that the logic of machine-learning-guided waveform design reflected domain knowledge. Our approach is straightforward and generalizable for all single and multi-analyte problems requiring optimized electrochemical waveform solutions. Overall, SeroOpt enables data-driven exploration of the waveform design space and a new paradigm in electroanalytical method development.


Introduction

Voltammetry is widely employed across fields, including energy storage,1 catalysis,2 organic synthesis,3 and electroanalysis (i.e., neuroscience,4–8 diagnostics,9 environmental applications,10 and food and beverage analysis11). Despite the many types of analytes suitable for voltammetry, few design principles exist to enable analyte-specific voltammetry waveforms to be identified and optimized systematically. This lack of objectively guided waveform design and optimization imposes significant limitations on the accuracy, selectivity, and robustness of voltammetry applications for single- or multi-analyte detection and monitoring.

A grand challenge in chemical neuroscience is to uncover the functional and dysfunctional interplay between neurotransmitters in the brain.12 Voltammetry is broadly used to characterize and quantify electroactive neurotransmitter release and reuptake using brain-implanted microelectrodes during biological perturbation,13–15 including in humans.6 Recent progress has focused on developing novel electrode materials, coatings, or data analysis procedures to improve the selectivity and sensitivity of real-time neurochemical monitoring in behaving subjects.13,16–23 Meanwhile, voltammetry waveform development (i.e., selecting optimal waveform parameters for detecting particular analytes) has remained essentially unchanged for decades. It relies principally on historic performers (e.g., pre-patterned waveforms), heuristics, and grid searches.24–29

For neurochemistry applications, historic performers include fast-scan cyclic voltammetry (FSCV) triangle or N-shape (i.e., sawtooth) waveforms for detecting evoked dopamine8 or serotonin,30 respectively, in vivo. The N-shape waveform improved serotonin detection over the FSCV waveform by increasing the scan rate to 1000 V s−1 and altering the holding potentials.31 Modifying these waveform parameters impacts sensitivity, selectivity, and temporal resolution.24,32–34 For example, increasing the switching potential from 1.0 V to 1.3 V renews the electrode surface and enhances serotonin detection.24 The development of fast-cyclic square-wave voltammetry has improved the sensitivity and selectivity of dopamine35 and serotonin36 detection by superimposing triangle and N-shape waveforms, respectively, on pre-patterned staircase waveforms. Other waveform modifications have led to fast-scan controlled-adsorption voltammetry and multiple cyclic square-wave voltammetry to determine basal dopamine37 or serotonin levels.38,39 These approaches required separate waveforms to measure different analytes over different timescales and were derived from the prior triangle and N-shape waveforms in a guess-and-check manner (Fig. 1, top).


Fig. 1 Approaches to voltammetry waveform design. Funnels denote likely bottlenecks.

We developed rapid pulse voltammetry (RPV) to enable multi-analyte monitoring (e.g., simultaneous serotonin and dopamine detection) across timescales (i.e., quantification of basal and stimulated neurotransmitter levels using the same waveform in the same recording session).40 Rapid pulse voltammetry utilizes background-inclusive (i.e., non-background subtracted) data, requiring novel waveform design to produce informative background currents.41 This custom design contrasts with other popular pulse voltammetry approaches (e.g., normal, differential, staircase), which apply pre-patterned waveforms over longer timescales (s to min).42 While also based on characteristic oxidation and reduction potentials derived from the triangle and N-shape waveforms, rapid pulses (i.e., 2 ms), rather than fast linear sweeps, reduce fouling and produce informative faradaic and non-faradaic currents. The resulting current–time fingerprints from our original generation (OG) RPV waveform40 yielded analyte-specific information that can be used by partial least squares regression (PLSR) or other supervised regression models (e.g., artificial neural networks, elastic net) to distinguish analytes and predict their concentrations. Because the OG waveform was inspired by heuristics from the voltammetric electronic tongue (VET) field for ‘soft’ sensing (e.g., intermediate and counter pulses),43–45 we refer to this as VET-inspired design (Fig. 1, middle).

Having shown that our VET-inspired OG waveform outperformed conventional waveforms,40 we sought a generalizable and expandable approach to designing and optimizing rapid pulse (and other types of) waveforms. Because tuning specific waveform parameters improves analyte-specific currents,13,24,46 we hypothesized that enhanced RPV waveforms for serotonin and dopamine co-detection (and many more analytes) exist but remain undiscovered due to the lack of design principles needed to explore intractably large waveform search spaces.

We focused first on detecting serotonin to address this waveform space problem (vide infra). Serotonin is involved in modulating mood, anxiety, and reward-related behavior via interconnecting brain circuits.47–51 Serotonin is an essential gut hormone. It also plays a role in spinal pain transmission and immune function.52–55 Serotonin is a challenging target to detect using voltammetry due to its relatively low physiological concentrations (high pM to low nM),48 colocalization with other neurotransmitters having similar redox profiles (e.g., dopamine, norepinephrine), and irreversible oxidation byproducts56 that can foul electrodes. We further hypothesized that a waveform development paradigm to discover optimized serotonin waveforms would generalize to other neurochemicals, other types of analytes, and their combinations.

When developing RPV or other complex waveforms, a prohibitively large number of waveform step or segment combinations prevents exhaustive empirical investigation, even for a small number of steps or segments. Step potentials, lengths, order, and hold times are all variables for investigation when exploring and improving pulse waveforms; minor modifications of each variable can have complex effects on electrochemical signals due to changes in the surface roughness, fouling propensity, and functionalization (e.g., anionic oxide groups) of carbon fiber microelectrodes.24,32 The use of various electrode materials, carbon allotropes, and polymeric coatings further complicates this landscape.57 While a ‘guess and check’ approach has yielded the handful of useful conventional and VET-inspired waveforms mentioned above, one-parameter-at-a-time or randomized58,59 optimization approaches do not take advantage of the rich information diversity encoded in complex waveforms, leaving the overall waveform search space relatively unexplored.

Recently, Bayesian optimization has been used to navigate intractable physicochemical search spaces when combined with experimental training data.60–65 This adaptive experimental approach presents an opportunity to pair machine learning with electroanalysis to create a new waveform development paradigm (Fig. 1, bottom). Here, we present a Bayesian optimization workflow (SeroOpt) that generates fit-for-purpose voltammetry waveforms for selective serotonin detection. To our knowledge, a systematic machine-learning-based approach to designing, testing, and optimizing analyte-specific waveforms has not yet been reported. We show that analyte-specific waveform information depends on specific potentials occurring in a particular order and timing, confirming the need for a parsimonious search approach across parameter dimensions. Our active learning approach outperformed randomly designed and domain expert-designed waveforms after only a handful of iterations. Our methods can be straightforwardly extended to designing any voltammetry waveform for any electroactive analyte to discover new and perhaps non-intuitive waveforms optimized for application-specific metrics. To encourage widespread adoption, we provide data, tutorial code notebooks, and videos at github.com/csmova/SeroOpt (https://github.com/csmova/SeroOpt), as well as our corresponding open-source voltammetry acquisition and analysis software66 at github.com/csmova/SeroWare (https://github.com/csmova/SeroWare) and github.com/csmova/SeroML (https://github.com/csmova/SeroML).

Results

The SeroOpt workflow casts waveform development as black-box optimization

We designed the following Bayesian optimization workflow for robust, iterative, and adaptive voltammetry waveform development (SeroOpt; Fig. 2). Representative i–t curves (i.e., voltammograms) are provided (Fig. S1). We sought to identify an input (a rapid pulse waveform) related to an optimized output objective (sensor performance metric; e.g., serotonin detection accuracy) by an unknown, ground-truth objective function (the black box). This function can only be accessed by obtaining experimental training data on various waveform–metric combinations, approximating the black-box function using a surrogate model, and then querying the model to generate an input (waveform) corresponding to a predicted objective optimum. The generated (new) waveform is then tested experimentally, and the true objective value for that waveform is provided as subsequent training data for the next round of optimization. When a probabilistic surrogate model is used, both the model predictions (mean) and associated uncertainty (variance) can be updated using Bayesian inference as new data (evidence) becomes available in each iteration. This optimize-update process is repeated sequentially, referred to herein as Bayesian optimization. Each of the workflow steps is described in detail below.
Fig. 2 Bayesian optimization workflow (SeroOpt) for machine learning-guided rapid pulse voltammetry (RPV) waveform design for serotonin (5-HT) and dopamine (DA). An example visualization of optimization landscapes is shown (bottom). GP = Gaussian process, M = metric, W = waveform, S = string, a.c. = altered cation; a circumflex (hat) accent represents estimation of the true value.
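To make the optimize–update loop concrete, the following minimal Python sketch outlines steps 5–9 of Fig. 2. It is illustrative only: the function names (e.g., evaluate_waveform, propose_next) are placeholders for the experimental calibration/metric steps and the acquisition step, and are not part of the SeroOpt codebase.

```python
# Minimal sketch of the SeroOpt optimize-update loop (placeholders, not the released code).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def evaluate_waveform(waveform):
    """Placeholder for the wet-lab step: run a calibration curve with this waveform,
    fit a PLSR model, and return the objective value (e.g., 5-HT test set error in nM)."""
    raise NotImplementedError


def bayesian_optimization_loop(X_init, y_init, propose_next, n_strings=3):
    """Refit a surrogate on all data collected so far, propose one new waveform per
    iteration, evaluate it experimentally, and aggregate it into the training data."""
    X = np.asarray(X_init, dtype=float)   # shape (n, 8): [E1, tau1, ..., E4, tau4]
    y = np.asarray(y_init, dtype=float)   # objective values for those waveforms

    for _ in range(n_strings):
        surrogate = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        surrogate.fit(X, y)                       # probabilistic surrogate of the black box
        w_next = propose_next(surrogate, X)       # acquisition-function optimum (step 6)
        y_next = evaluate_waveform(w_next)        # experimental evaluation (steps 7-8)
        X = np.vstack([X, w_next])
        y = np.append(y, y_next)
    return X, y
```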

Search space constraints & initialization by embedding domain knowledge

Each training waveform W was embedded as a vector in 8-dimensional space such that W = [E1, τ1, E2, τ2, E3, τ3, E4, τ4] (Fig. 2, step 1). Here, Ei is each potential step (V) and τi is each step hold time (ms). In this initial design, for eventual comparison with our original generation (OG) human-designed four-step waveform40 (Fig. 3a), we constrained the search space to four steps per waveform, with E1 and E2 constrained to 0–1.3 V and E3 and E4 constrained to −0.5–0 V. These constraints ensured that waveforms remained inside the solvent window32 and encoded a ‘pulse/counter-pulse’ concept (i.e., anodic steps followed by cathodic steps) from VET theory.67 We constrained τ to 0.5–2.0 ms based on our preliminary results showing that capacitive current completely decays after ∼2 ms, yet critical features are contained in as little as the first ∼0.5 ms of each pulse.40 Pulses do not result in voltage cross-talk (i.e., residual capacitive current from successive voltage steps).36,37 Each pulse was applied at 10 Hz, and the hold potential was defined as E4; to limit the number of parameters, the hold time was defined as the remainder of each 100 ms pulse period after the four steps.
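A minimal sketch of this embedding and its box constraints is shown below, along with uniform random sampling of the kind used to generate an initialization string; the variable names are ours and illustrative, not those used in the SeroOpt repository.

```python
import numpy as np

# Box constraints for W = [E1, tau1, E2, tau2, E3, tau3, E4, tau4]
# (potentials in V, hold times in ms), matching the search space described above.
BOUNDS = [
    (0.0, 1.3), (0.5, 2.0),    # E1 (anodic), tau1
    (0.0, 1.3), (0.5, 2.0),    # E2 (anodic), tau2
    (-0.5, 0.0), (0.5, 2.0),   # E3 (cathodic), tau3
    (-0.5, 0.0), (0.5, 2.0),   # E4 (cathodic; also the hold potential), tau4
]


def random_waveforms(n, seed=None):
    """Uniformly sample n candidate waveforms inside the box constraints
    (e.g., to build a random initialization string such as S1)."""
    rng = np.random.default_rng(seed)
    lower, upper = np.array(BOUNDS).T
    return rng.uniform(lower, upper, size=(n, len(BOUNDS)))


string_1 = random_waveforms(6, seed=0)   # six random initialization waveforms
```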
Fig. 3 (a) Bayesian optimization waveform (R1S4W2; bottom) outperformed the original generation (OG) human-designed RPV waveform (top) after four iterations. Error bars represent standard deviations. (b) Convergence plot of the minima of serotonin (5-HT) test set accuracy per string. The waveforms optimized specifically for 5-HT test set accuracy (W2) are shown in the inset. (c) Test and challenge set results for the OG waveform in triplicate across two electrodes. Error bars represent the minimum and maximum values predicted. (d) Test and challenge set results for the optimized serotonin waveform (R1S4W2) in triplicate across two electrodes. Error bars represent the minimum and maximum values predicted. (e) Average of (c) and (d). Error bars represent standard deviations.

To initialize a model of the relationship between waveform and objective (i.e., the optimization metric), six waveforms were randomly generated using the constraints above (Fig. 2, step 1). The choice of six waveforms was arbitrary and within the number of waveforms that could be experimentally evaluated in a single-day experiment. We refer to this collection of random initialization waveforms as string 1 (S1).

Model calibration & optimization metrics allow for relevant objective functions

We obtained experimental calibration curves (Table 1) for each S1 waveform (gray boxes, Fig. 2) to train a partial least squares regression (PLSR) model as demonstrated previously.40 The choice of the PLSR model, compared to other models, such as principal components regression (PCR), is detailed elsewhere.40 Briefly, PLSR was shown to outperform PCR,40 while more advanced models (e.g., deep learning) provide modest gains in predictive accuracy at the expense of computational complexity.68 We note that our Bayesian optimization approach can be used to optimize waveforms with output metrics regardless of the choice of calibration model (PCR, PLSR, artificial neural networks, etc.).
Table 1 Training, test and challenge set concentrations, in order of injection. All solutions were prepared in artificial cerebrospinal fluid; a.c. = altered cations
Set Sample DA (nM) 5-HT (nM) 5-HIAA (μM) DOPAC (μM) Ascorbate (μM) pH (units) KCl (mM) NaCl (mM)
Training Blank 0 0 0 0 0 7.3 3.5 147
A 300 0 6 80 200 7.3 3.5 147
B 1000 20 10 70 110 7.3 3.5 147
C 0 120 6 90 190 7.3 3.5 147
D 450 350 4 0 130 7.3 3.5 147
E 600 500 1 10 170 7.3 3.5 147
Blank 0 0 0 0 0 7.3 3.5 147
F 160 250 2 20 180 7.3 3.5 147
G 700 300 0 0 100 7.3 3.5 147
H 80 160 10 60 100 7.3 3.5 147
I 20 60 0 50 160 7.3 3.5 147
J 40 40 2 100 120 7.3 3.5 147
Blank 0 0 0 0 0 7.3 3.5 147
K 800 10 8 30 150 7.3 3.5 147
L 500 0 0 0 100 7.3 3.5 147
M 0 250 0 0 100 7.3 3.5 147
N 0 0 10 0 100 7.3 3.5 147
O 0 0 0 50 100 7.3 3.5 147
P 0 0 0 0 100 7.3 3.5 147
Blank 0 0 0 0 0 7.3 3.5 147
Test T1 750 50 1 85 200 7.3 3.5 147
T2 100 400 5 9 200 7.3 3.5 147
T3 400 200 5 85 190 7.3 3.5 147
T4 70 30 5 35 200 7.3 3.5 147
Blank 0 0 0 0 0 7.3 3.5 147
Challenge (pH) T1 pH 750 50 1 85 200 7.1 3.5 147
Blank pH 0 0 0 0 0 7.1 3.5 147
T2 pH 100 400 5 9 200 7.2 3.5 147
Blank pH 0 0 0 0 0 7.2 3.5 147
Challenge (a.c.) T3 a.c. 400 200 5 85 190 7.3 120 31
Blank a.c. 0 0 0 0 0 7.3 120 31
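For readers unfamiliar with this step, the hedged sketch below shows how a PLSR calibration model of this kind can be fit and used for prediction with scikit-learn; the array shapes, the synthetic stand-in data, and the number of latent variables are illustrative assumptions rather than the settings used in this work.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Stand-in data: 20 calibration samples of background-inclusive current-time
# fingerprints (one waveform) and two targets (5-HT, DA). In practice, X comes from
# the flow-cell calibration curves (Table 1) acquired with each candidate waveform.
X_train = rng.normal(size=(20, 5500))          # n_samples x n_current_points
Y_train = rng.uniform(0, 500, size=(20, 2))    # known concentrations (nM)

pls = PLSRegression(n_components=5, scale=True)  # latent-variable count is illustrative
pls.fit(X_train, Y_train)

# Predict test/challenge samples recorded with the same waveform.
X_test = rng.normal(size=(5, 5500))
Y_pred = pls.predict(X_test)                     # columns: predicted [5-HT, DA] (nM)
```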


The PLSR model predicted the test and challenge set sample concentrations of serotonin and dopamine (Fig. 2, steps 2 and 3; see Methods for definitions of training, test, and challenge samples). These predictions were used to calculate the eight optimization metrics listed (Fig. 2, step 4; defined in Table S1). All metrics were calculated on all waveforms in each string, unless otherwise noted (Fig. 2, steps 2–4). We focus on the results for the second waveform (W2) of each string, which is optimized across strings for the serotonin test set prediction accuracy metric. The latter is the mean absolute error in the PLSR model predictions of test samples T1–4 (including a blank; Table S1), thus creating a minimization task (maximum accuracy implies minimum error). We chose mean absolute error rather than relative error due to the presence of the blank (true null concentration).
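Concretely, the serotonin test set accuracy metric reduces to a mean absolute error over the true and predicted 5-HT concentrations of the test samples plus the blank; the sketch below uses the true 5-HT test concentrations from Table 1 and hypothetical predictions for illustration.

```python
import numpy as np


def serotonin_test_set_mae(y_true_nM, y_pred_nM):
    """Mean absolute error (nM) of 5-HT predictions for test samples T1-T4 and the blank.
    Lower is better, so maximizing accuracy becomes a minimization task."""
    y_true_nM = np.asarray(y_true_nM, dtype=float)
    y_pred_nM = np.asarray(y_pred_nM, dtype=float)
    return float(np.mean(np.abs(y_true_nM - y_pred_nM)))


true_5ht = [50, 400, 200, 30, 0]    # T1-T4 and blank (nM), per Table 1
pred_5ht = [62, 355, 215, 24, 8]    # hypothetical PLSR predictions
objective = serotonin_test_set_mae(true_5ht, pred_5ht)   # value passed to the optimizer
```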

The choice of test set accuracy as an optimization metric was motivated by several factors. First, we pursued single-objective optimization for simplicity and because, at the time of analysis, user-friendly open-source software for multi-objective human-in-the-loop optimization was lacking. Given the need to focus on a single metric, test set accuracy is an attractive choice because it is a direct measure of waveform performance, unlike alternatives such as PLSR model-specific metrics (e.g., scores clustering). Model-specific metrics are less physically meaningful and would limit the extendibility of our method. By using physically meaningful metrics, such as test set accuracy, our workflow remains model-agnostic (i.e., any model that performs supervised regression prediction can be used). For similar reasons of retaining metrics in raw form, we chose not to combine multiple metrics into a single objective (e.g., via scalarization69).

Second, we encoded selectivity in our test and challenge set design. Our calibration curve varies the concentrations of all analytes and interferents across the training, test, and challenge sets used to build and evaluate the PLSR models (Table 1). If the PLSR model for a given waveform confuses any interferent for serotonin, this will be represented in the test or challenge set accuracy metric for serotonin and will contribute to the mean absolute error. Thus, serotonin test and challenge set accuracy is a proxy for selectivity in varying dopamine, 5-hydroxyindoleacetic acid (5-HIAA), ascorbate, 3,4-dihydroxyphenylacetic acid (DOPAC), pH, and K+/Na+ concentrations (see Methods).

Lastly, other analytical figures of merit that could be used as optimization metrics (sensitivity, limit of detection (LOD), linear range, etc.) are irrelevant if model accuracy and selectivity are not first established. For example, we included LOD as an alternative optimization metric (Fig. 2). The selectivity performance of LOD-optimized waveforms (inferred via test and challenge set accuracy) was poor. Thus, we did not continue to optimize for LOD in subsequent campaigns but were still able to utilize these waveforms as training data by calculating their other metrics. For these reasons, we focused on test set accuracy. Specifically, we focused on serotonin (5-HT) because it is historically more difficult to detect by voltammetry than dopamine. Serotonin concentrations are approximately 10-fold lower than dopamine in striatum,48 and serotonin has complex redox mechanisms and fouling processes.30

Regardless, we included other optimization metrics in our workflow rather than solely serotonin test set accuracy to explore which metrics have an objective landscape that is ‘optimizable’. As this was a first attempt, we had no guarantee that the serotonin test set accuracy was a viable choice of metric. We also wanted to investigate other analytes and metrics for future use with multi-objective optimization. For example, we included dopamine-specific metrics in the scheme for comparison with our original RPV work40 because serotonin/dopamine co-detection is a long-term goal for multi-objective optimization.70

To maximize the training data produced in an experimental day, we calculated the performance of all waveforms on all metrics in each string, regardless of which metric a waveform was designed to optimize. For example, the optimal serotonin test set accuracy waveform (W2) in each string was used to calculate the serotonin test set accuracy metric. Still, the performance of this waveform on the dopamine, pH, and altered cation (a.c.) accuracy metrics was also recorded. This approach allows additional waveforms (albeit waveforms not optimized specifically for that metric) to be tested per string rather than solely the one ‘optimized’ waveform for each metric. Performing single objective optimization in this parallel manner explores ‘optimizable’ metrics while obtaining additional training data per string in a simple yet sample-efficient manner. For example, if test set accuracy failed as an optimizable metric for serotonin, we could pivot to an alternative metric exhibiting promising optimization progress (e.g., serotonin pH or a.c. accuracy, or serotonin LOD), with training data already aggregated across all waveforms for that metric.
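One simple way to organize this bookkeeping is a waveform-by-metric table that accumulates across strings; the pandas sketch below uses illustrative metric names and values that do not correspond to Table S1 or S3.

```python
import pandas as pd

# Illustrative metric names; every waveform gets every metric, regardless of which
# single metric it was generated to optimize.
METRICS = ["5HT_test_MAE", "5HT_pH_MAE", "5HT_ac_MAE", "5HT_LOD",
           "DA_test_MAE", "DA_pH_MAE", "DA_ac_MAE", "DA_LOD"]

records = []


def log_waveform(run, string, waveform_id, parameters, metric_values):
    """Append one waveform's parameters and full metric panel to the running table."""
    row = {"run": run, "string": string, "waveform": waveform_id, "parameters": parameters}
    row.update({m: metric_values.get(m) for m in METRICS})
    records.append(row)


# Hypothetical entry; after each string, the Gaussian process for metric m is trained
# on the aggregated 'parameters' column against column m.
log_waveform("R1", "S1", "W2", [0.4, 1.0, 1.1, 1.5, -0.2, 1.9, -0.4, 1.4],
             {"5HT_test_MAE": 47.3, "DA_test_MAE": 60.1})
metric_table = pd.DataFrame(records)
```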

Parallel single-objective optimization of multiple metrics

The waveform embeddings and corresponding experimentally determined metrics were used to train the surrogate models (i.e., Gaussian processes)71 of the unknown objective functions (Fig. 2, step 5). As mentioned, only single-objective optimization was performed on each metric. Separate Gaussian processes were trained (one for each metric; eight total) in parallel on the aggregated data after evaluating each string. An acquisition function (i.e., expected improvement)71 finds the optima of each surrogate function and outputs the waveform most likely to improve each respective metric (Fig. 2, step 6). The process then repeats (Fig. 2, steps 7–9). The overall workflow is illustrated in Fig. 2 and S2.
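A hedged sketch of one such single-metric step is given below: a Gaussian process is fit to the aggregated (waveform, metric) data and expected improvement is maximized over random candidates drawn inside the box constraints. The helper names and the candidate-sampling strategy are our own simplifications; dedicated packages (e.g., BoTorch/Ax or scikit-optimize) are typically used for this step in practice.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def expected_improvement(gp, candidates, y_best, xi=0.01):
    """Expected improvement for a minimization objective (e.g., 5-HT test set MAE)."""
    mu, sigma = gp.predict(candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    improvement = y_best - mu - xi
    z = improvement / sigma
    return improvement * norm.cdf(z) + sigma * norm.pdf(z)


def propose_next_waveform(X, y, bounds, n_candidates=10_000, seed=0):
    """Fit a GP surrogate on (waveform, metric) pairs and return the EI-maximizing candidate."""
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    lower, upper = np.array(bounds).T
    rng = np.random.default_rng(seed)
    candidates = rng.uniform(lower, upper, size=(n_candidates, len(bounds)))
    ei = expected_improvement(gp, candidates, y_best=np.min(y))
    return candidates[np.argmax(ei)]
```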

The eight waveforms (each corresponding to optimization for one of the eight metrics) output from the first optimization loop of this workflow are shown as string 2 (S2). Eight new waveforms were generated, with each new waveform optimized on a single metric (i.e., using the training data generated from S1 (Fig. 2, steps 4–6)). Because S1 was randomly generated to initialize the surrogate model, S2 represented the first iteration of optimized waveforms produced by the workflow.

We repeated the optimization loop by obtaining experimental calibration curve data using each new S2 waveform. We then calculated the individual optimization metrics, aggregated the data with the previous string(s) (e.g., all S3 waveforms were predicted using all S1 and S2 data, one metric at a time), and predicted the next set of optimal S3 waveforms for each metric (Fig. 2, steps 7 and 8). This process was repeated again to generate four waveform strings in total (Fig. 2, step 9). We refer to the group of strings as S1–4. Each string had eight waveforms (W1–8) corresponding to the eight separate metrics, except the initial string (S1), which had only six randomly generated waveforms (arbitrary). All four strings and their associated waveforms were collectively referred to as run 1 (R1).

Machine learning outperforms human-guided waveform design

Across R1, three new waveforms were generated, optimized for serotonin test set accuracy (S1W2 was random; the three successive waveforms (S2W2, S3W2, S4W2) were each more highly optimized than the last). The evolution of the serotonin accuracy waveform across three successive strings was compared to our initial RPV OG waveform (Fig. 3). In the first run, the final waveform generated by our Bayesian optimization scheme nearly perfectly mimicked our chemically intuitive choices for the potentials of the waveform design; the step potentials differed only by ∼100 mV or less (Fig. 3a). The more discernible differences were in the individually optimized step lengths (τ) for R1S4W2, i.e., 0.7 ms, 1.5 ms, 1.9 ms, and 1.4 ms for τ1–4, respectively. Values of τ are rarely optimized individually and instead are set to a global value decided by one-factor-at-a-time optimization under single experimental conditions (e.g., 2 ms for all steps in the OG design).35–38,40

Even though R1S4W2 was only 5.5 ms long, it outperformed the OG waveform, which was 8 ms. Given the similarity in pulse potentials, the increase in data fidelity was attributed partly to changes in the hold times of each step; that is, Bayesian optimization was able to generate better-performing choices of τ. While a 2.5 ms difference in overall pulse length may seem negligible, at a data rate of 1 MHz it equates to a reduction of 2500 data points per scan. This reduction can easily save gigabytes of data that would otherwise need to be stored and avoids computation time wasted during multi-hour experiments. Decreasing the overall length of the rapid pulse sequence also opens opportunities to increase the temporal resolution to >10 Hz or design more complex combinations of pulses with additional steps, while retaining 10 Hz sampling.

We do not attribute the success of the optimized waveform to chance, as the convergence plot (Fig. 3b) shows that for each optimization string (S2–S4), the waveform optimized for serotonin test set accuracy (W2) found a new minimum for serotonin prediction error during each iteration. This improvement across strings suggests that the surrogate model is learning a reasonable representation of the optimization landscape for serotonin accuracy. Convergence plots for all metrics and runs are provided (Fig. S3).

While sample T2 for R1S4W2 still had a mean absolute error of ∼50 nM (13% error, 2.8% coefficient of variation (CV)), predictions were improved compared to the OG waveform (22% error, 3.4% CV). Continuing the optimization campaign for additional iterations might have minimized the remaining error further. However, the T2 samples had lower DOPAC and higher 5-HIAA concentrations than other test samples. These similarly structured interferents may have had confounding effects on the serotonin concentration predictions. Moreover, these samples may have suffered from analyte instability due to degradation or adsorption to the surfaces of sample vials.

Regardless, Fig. 3a represents a single trial of the waveform on a single electrode, performed during the optimization campaign. Meanwhile, Fig. 3c–e represents a reproducibility study, performed across three total trials using two separate electrodes. These panels demonstrate a more dramatic improvement in the accuracy and precision of R1S4W2 compared to the OG waveform. For example, across these three runs, sample T2 had 0.7% error and 14% CV. Meanwhile, T2 for the OG had 34% error and 42% CV. Given that the microelectrodes were hand-made, different electrodes were used across strings, and dynamic surface changes occur at electrode surfaces, variability in concentration predictions is expected. Nonetheless, compared to the OG waveform, SeroOpt produced a more precise and accurate waveform that generalized across electrodes and replicates.

Explicit and implicit discovery of interferent-agnostic waveforms

Next, we compared the results for the test and challenge set samples from the OG waveform to R1S4W2 (which should be the best-yet waveform for test set serotonin performance). Indeed, R1S4W2 outperformed the OG waveform for serotonin detection in the test and challenge set samples (see Methods). The training and test sets contained samples with varying levels of three physiologically relevant metabolites (DOPAC, 5-HIAA, ascorbate). Meanwhile, the challenge set samples introduce physiologically relevant differences in pH and in Na+ and K+ levels, which were held constant in the training set (Fig. 3a and Table 1, samples denoted pH 7.1, pH 7.2, and altered cations or “a.c.”). The optimized serotonin waveform R1S4W2 outperformed the OG waveform with respect to interferents it was explicitly trained on (DOPAC, 5-HIAA, ascorbate) and those it was not (pH, Na+/K+) (Fig. 3a, c–e).

While the OG waveform confounded changes in pH and Na+/K+ in the challenge set, the R1S4W2 waveform did not suffer similar pitfalls (see samples T2 pH 7.2, T3 a.c., blank a.c. for each waveform in Fig. 3a). We discuss the performance of test and challenge set samples further in Fig. S4a and b. This result was not because the waveform failed to sense current changes with varying cation concentrations, i.e., it was not ‘electrochemically silent’.72 Increases in current (hundreds of nA) were evident when aCSF a.c. blanks were injected compared to normal aCSF blanks (Fig. S4c). Similar responses were noted for pH blanks.

To investigate whether the initial results for R1S4W2 outperforming the OG waveform were precise and robust, the waveforms and training/test/challenge sets were run in triplicate using two different electrodes (Fig. 3c–e). We determined that the R1S4W2 waveform increased prediction accuracy for test samples 1–4 by ∼20% compared to the OG waveform. We found that the agnostic behavior towards pH was reproducible for R1S4W2 and not the OG waveforms. However, we noticed that the T3 a.c. challenge sample accuracy was not reproducible across electrodes for either waveform. We attribute this to variations in electrode fabrication. Standardizing the fabrication of fast voltammetry electrodes, along with multi-objective optimization with reproducibility as a metric, will help to alleviate this issue. Regardless, the performance of R1S4W2 as an early optimization candidate, showing enhanced test and challenge set accuracy, demonstrates the success and future promise of the SeroOpt workflow.

The SeroOpt workflow reproducibly outperforms random search

To investigate whether Bayesian optimization was improving waveforms by random chance or gleaning chemically relevant information, the process was repeated, starting with a new set of six random waveforms and carried out for four strings as described above (Fig. 4). We refer to this as run 2 (R2). While data were aggregated across strings within each run, data were never aggregated between runs. The runs were kept separate to compare, from a new randomized initialization, whether four rounds of Bayesian optimization repeatedly produced improved waveforms. In any case, we did not expect R2 to converge on the same waveform as R1. The search space is vast, and given the small subset of waveforms tested, converging on the same optima is unlikely. Instead, if R1 and R2 found improved waveforms compared to the randomized waveforms, we could examine the black box models to see what led the optimizer to generate specific waveform design patterns for each optimization metric.
Fig. 4 Bayesian optimization outperforms random search. Mean absolute errors for run 1, run 2, and the aggregate of both runs are shown for serotonin test set accuracy (a), pH robustness (b), and ion robustness (c). Error bars represent standard deviations. Sample sizes are shown above the error bars. Red stars denote the minimum error for each group of waveforms. Random waveform types refer to string 1 waveforms. Optimized refers to waveforms optimized for 5-HT performance (i.e., W2, 4, 6, 8). (d–f) Convergence plots corresponding to (a–c), respectively, showing the minimum mean absolute errors for currents at each waveform iteration. Gray boxes represent random initialization waveform regions.

In all cases, except for the first run of pH and a.c. challenge samples, the average serotonin test/challenge set errors were lower when using the optimized serotonin waveforms (W2, 4, 6, 8 for S2, 3, 4 of R1 and R2), compared to the averages for the randomly generated S1 waveforms of R1 and R2 (Fig. 4). The error minima were lower in all cases for the optimized waveforms; random search never produced a better waveform than Bayesian optimization. Moreover, while each W2 waveform in R1 improved across strings, R2S2W2 immediately found a 5-fold lower minimum than the starting initialization. Thus, new random initialization waveforms lead to the discovery of new optimized waveforms in new local minima.

These results suggest the following. Bayesian optimization produces better waveforms than randomly generated or chemist-designed waveforms. Moreover, Bayesian optimization finds waveforms corresponding to error minima more reliably than random chance. The Bayesian optimization surrogate model (i.e., Gaussian process) effectively models the relationship between voltammetry waveforms and performance, as the minima only occurred for waveforms optimized specifically for serotonin detection metrics (e.g., W2, 4, 6; Table S3). For example, the average serotonin accuracy was ∼45 nM using the randomly generated waveforms. By optimizing for any serotonin parameter (test set accuracy, a.c. accuracy, pH accuracy, detection limit), serotonin accuracy, on average, was improved to 34 nM (24% improvement). While an ostensibly small return on investment, this is only the first iteration of this protocol, and the results consistently outperformed the few standard alternatives to waveform design.

Fine-grained waveform parameter tuning improves predictive performance

In total, 55 waveforms were tested experimentally (the OG waveform, 12 randomly generated waveforms from R1S1 and R2S1, and 42 Bayesian optimized waveforms from R1 and R2 S2–4) with their corresponding metrics given as optimization training data. These waveforms covered a large search space across the waveform parameters. In Fig. 5, clusters of points are interpreted as exploitation, while isolated points are interpreted as exploration. A key advantage of Bayesian optimization is that the acquisition function parsimoniously explores a search space with an exploration–exploitation trade-off.71 Bayesian optimization judiciously explored the search space over 55 waveforms. At the time of writing and to our knowledge, this is the largest waveform optimization campaign reported for neurochemical voltammetry.
Fig. 5 Search space of all waveforms tested experimentally from runs 1 and 2. Red stars represent optimal parameters. Histograms represent the frequency of that parameter value in the waveforms tested. (inset) Evolution of the predicted Bayesian optimization waveforms across two separate Bayesian optimization runs, 1 and 2, for serotonin accuracy metric in blue (W2). String 1 data are not shown as they were randomly generated.

Data for all waveforms and metrics are provided (Tables S2 and S3). We noticed that for serotonin accuracy (W2), the predicted waveforms between R1 and R2 looked similar, especially for S3 and S4 (Fig. 5, inset). The serotonin accuracy waveforms share characteristics with the OG waveform across R1 and R2. They step from low to high potentials across the oxidative (anodic) steps and from high to low potentials across the reductive (cathodic) steps. By S4, all waveforms prefer the ‘intermediate’ anodic pulse step concept described in the VET literature, in which a relatively low amplitude E1 step before a higher amplitude E2 step prevents signal saturation and enhances concentration discrimination.44 Further, most waveforms exhibited a large amplitude counter-pulse (e.g., a large difference between E2 and E3 to complete the redox cycle).67 The fact that the model learned these domain knowledge heuristics across the four iterations suggests it can also learn more complex, higher-order interactions.

Waveform optimization occurred via relatively small changes in E and τ, even for waveforms as simple as the four-step designs shown here. Tuning waveforms can result in dramatic differences in the predictive performance of the resulting models. The effect of varying and reorganizing pulse parameters is relatively unexplored in a systematic, multi-variate manner, as done here. For example, R1S4W6 and R1S3W8 differed by ≤0.04 V and ≤0.9 ms in E and τ (Table S2). Yet, R1S3W8 outperformed R1S4W6 for serotonin test set, pH, and ion accuracy, with up to nearly a 50% reduction in error (Table S3).

To test whether these performance increases were due to differences in electrodes across strings (separate electrodes were used across strings to encourage generalizability across electrodes), we compared two similar waveforms tested on the same electrode: R2S1W2 and R2S1W3. These waveforms differed by ≤0.21 V and ≤1.2 ms, yet R2S1W2 outperformed R2S1W3 in all serotonin metrics (Table S2 and S3). Thus, small and seemingly “insignificant” changes in step potentials and hold times can produce significant accuracy differences. These findings support the importance of a technique like Bayesian optimization to tune parameters with fine-grained adjustments.

The order of the steps in the rapid pulse also matters. For example, R1S1W1 and R1S4W3 are nearly identical, except for the order of their pulses. Yet, R1S1W1 outperformed R1S4W3 in all serotonin detection metrics up to five-fold (Tables S2 and S3).

Interpretable machine learning reveals waveform parameter interactions and learnable heuristics

In addition to the qualitative explanations above, interpretable machine learning methods73 can be applied to ‘open the black box’ and assess how Bayesian optimization decides on improved waveforms. Thus, we investigated whether the optimizer was learning the heuristics that electrochemists use to optimize waveforms, whether it was learning novel relationships from the data, or both. We used a global, model-agnostic technique known as partial dependence plots (PDPs) to visualize how varying waveform parameters affect the surrogate model predictions.73 The PDPs are useful for non-parametric models, such as Gaussian processes, that are not directly interpretable.73 Essentially, PDPs fix the parameter(s) of interest at a series of values and average the model predictions over the remaining parameters across the samples. The effect of changing only the parameter(s) of interest can then be inferred (i.e., the partial dependence of a feature).
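Partial dependence (and the ICE curves discussed below) can be computed for a fitted surrogate directly with scikit-learn; the sketch below uses a stand-in Gaussian process trained on synthetic data, so the bounds, sample size, and resulting plots are illustrative only.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from sklearn.inspection import PartialDependenceDisplay

FEATURES = ["E1", "tau1", "E2", "tau2", "E3", "tau3", "E4", "tau4"]

# Stand-in surrogate: in practice, X holds the aggregated waveform embeddings and
# y the corresponding 5-HT test set errors from both runs.
rng = np.random.default_rng(0)
lower = [0.0, 0.5, 0.0, 0.5, -0.5, 0.5, -0.5, 0.5]
upper = [1.3, 2.0, 1.3, 2.0, 0.0, 2.0, 0.0, 2.0]
X = rng.uniform(lower, upper, size=(55, 8))
y = rng.uniform(5, 60, size=55)
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)

# kind="both" overlays the averaged partial dependence with per-sample ICE curves.
PartialDependenceDisplay.from_estimator(
    gp, X, features=list(range(8)), feature_names=FEATURES, kind="both"
)
plt.tight_layout()
plt.show()
```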

The PDPs for the aggregated runs (R1 and R2 combined) and the individual runs are shown for the serotonin test set accuracy metric (Fig. 6a, S5 and S6, respectively). We focus on the aggregated models because these have more total samples and, thus, are more likely to uncover meaningful relationships. The 2D plots on the diagonal represent the average effect on the metric of varying a single parameter. Generally, the more the PDP for a particular feature varies, the more important that feature is. Conversely, flat lines indicate either unimportant or interacting features.


Fig. 6 (a) Partial dependence plots for the serotonin (5-HT) test set accuracy metric for runs 1 and 2 combined. (b) Individual conditional expectation plots. Ticks represent deciles of the feature values. (c) Shapley additive explanations summary plot.

The aggregated data PDPs (Fig. 6a) confirm a complex and interacting optimization landscape. For example, E3 oscillates, E4 is parabolic, and E1 and τ1 are monotonically decreasing or increasing, respectively. The 3D contour plots below the diagonal represent the average effects on each metric while varying two waveform parameters. Because we minimize error, the purple shading represents the optimal (minima) regions, while the yellow regions represent maxima.

Interpreting the PDPs has some weaknesses. First, PDPs represent averages, meaning heterogeneous interactions can be obfuscated (e.g., an effect on one-half of the data may be averaged out by an opposite effect on the other half). Thus, non-varying parameters in PDPs could be misinterpreted. To check for this, we examined individual conditional expectation (ICE) plots. The ICE plots show the individual contributions that make up the averages in the PDP plots.73 Thus, the 2D PDPs (blue lines, Fig. 6a) have matching structures with the average ICE plots (blue lines, Fig. 6b). The individual instances (gray lines, Fig. 6b) show that there are heterogeneous effects hidden by the PDP averages for some parameters. For example, τ1, E3, and E4 have traces that do not all follow the same general trends. Thus, the effect of varying these parameters depends on heterogeneous interactions with the other waveform parameters. Meanwhile, the remaining parameters, E1, E2, τ2, τ3, and τ4, follow the same general trends (flat lines suggesting non-interacting waveform parameters).

As an alternative to PDP and ICE plots, we used Shapley additive explanations (SHAP) plots.73 The SHAP values enable interpretations of how features contribute to individual model predictions. The SHAP plots confirmed that the essential features were E3, E4, τ1, and E1. Fig. 6c shows the spread of the SHAP value per feature. Further, the heterogeneous effects, particularly in E3 and E4, are confirmed by the different colors of the feature values that do not cluster on a single side.
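SHAP values for the surrogate can be estimated with the model-agnostic explainer in the shap package; the sketch below again uses a stand-in surrogate and synthetic data, and the choice of background data for the explainer is an assumption.

```python
import numpy as np
import shap
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

FEATURES = ["E1", "tau1", "E2", "tau2", "E3", "tau3", "E4", "tau4"]

# Stand-in surrogate and data; in practice X/y are the aggregated waveforms and 5-HT errors.
rng = np.random.default_rng(0)
X = rng.uniform(size=(55, 8))
y = rng.uniform(5, 60, size=55)
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)

# Model-agnostic explanation of the surrogate's mean prediction, using X as background data.
explainer = shap.Explainer(gp.predict, X, feature_names=FEATURES)
shap_values = explainer(X)
shap.plots.beeswarm(shap_values)   # summary (beeswarm) plot analogous to Fig. 6c
```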

Discussion

Bayesian optimization has been widely applied in diverse fields, including autonomous experimentation,75 materials discovery and synthesis,61,76–79 peptide and protein engineering,60,80,81 and chemical reaction optimization.82–85 It enables the identification of global optima in high-dimensional search spaces through data-driven experimental designs across complex interaction parameters.74 Despite its advantages and versatility, Bayesian optimization has rarely been applied to analytical chemistry,86 and specifically, electrochemistry.87–90

Other approaches can be used to design waveforms (e.g., first principles, chemometric screening, design of experiments). However, these approaches are limited by computational complexity, the exponential number of experiments required to optimize individual parameters, resource intensity (labor, time, materials, etc.), and an inability to account for confounding waveform parameter interactions.91 Our attempts to use feature selection to identify critical waveform step potentials and lengths were complicated by the magnitude of the current response and the pulse pattern (Fig. S7). The difficulty in designing electrochemical waveforms arises partly because each pulse (voltage and step length) influences the state of the interface between the solution and the working electrode. This interface evolves during and between pulses. The effect of an individual pulse depends not only on its characteristics (E and τ) but also on preceding pulses.

We introduced an experimental design framework to embed voltammetry waveforms and their corresponding electroanalytical performance into a Bayesian optimization workflow to overcome these limitations. Rather than optimizing for a particular electrochemical response (e.g., peak oxidative current of a single analyte), the accuracy of the supervised regression models was optimized directly by including model accuracy metrics as the objectives. We explored which model metrics were optimizable by simultaneously performing parallel single-objective optimization loops across eight metrics (Fig. 2). We found that serotonin test set accuracy optimization was sample-efficient, reproducible, and outperformed domain-guided and randomly designed waveforms across multiple metrics (Fig. 3).

We demonstrated that in two separately initialized optimization campaigns, consisting of four strings or ‘rounds’ of optimization, we generated waveforms selective for serotonin in the presence of interferents (Fig. 4). Previous applications of Bayesian optimization in other fields achieved improvements in as few as three or four string-like iterations (i.e., low data regimes). Thus, the behavior we observed was anticipated.76,82,92,93 Notably, our selectivity challenges were more arduous, yet more efficient, than standard waveform validation schemes that test only a single interferent or interferent concentration after a waveform is developed for an analyte of interest.

Future efforts could include lengthier optimization campaigns. In the present work, our stopping criteria were somewhat arbitrary; we empirically noticed improvements in predictive accuracy by string 4, and other studies have found improvements in <5 iterations. Thus, we stopped after four strings to analyze the results. Based on the convergence plots, we identified that waveform accuracy metrics were unlikely to improve once they reached <10 nM error, even if the waveform was found early in the campaign (e.g., within the first ten waveforms; Fig. 4e and S3). This suggests a possible signal-to-noise limit in the single-digit nanomolar range, consistent with previous voltammetry methods.24 Thus, a campaign should be stopped early if the metric reaches known or reasonable instrument detection limitations. Further, only one metric (dopamine pH robustness, run 2) failed to improve after any iterations (Fig. S3, bottom). Thus, in our hands, ∼30 waveforms (the total number of waveforms tested across four strings, per run) indicated whether a given metric would improve. The campaign may be halted if a metric fails to improve after 30 waveforms.
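These empirical stopping rules can be expressed as a simple check run after each string; the thresholds below (an ~10 nM error floor and ~30 stalled waveforms) mirror the heuristics described in the text, and the function itself is an illustrative sketch rather than part of the SeroOpt workflow.

```python
def should_stop(best_error_per_string_nM, error_floor_nM=10.0,
                patience_waveforms=30, waveforms_per_string=8):
    """Stop a campaign when the best metric value reaches an instrument-limited floor
    or stalls for roughly `patience_waveforms` consecutive waveforms."""
    if not best_error_per_string_nM:
        return False
    best_so_far = min(best_error_per_string_nM)
    if best_so_far <= error_floor_nM:
        return True   # at or below a plausible signal-to-noise floor
    last_improvement = best_error_per_string_nM.index(best_so_far)
    stalled_strings = len(best_error_per_string_nM) - 1 - last_improvement
    return stalled_strings * waveforms_per_string >= patience_waveforms


print(should_stop([45.2, 34.1, 33.9, 33.9]))   # False: only ~8 stalled waveforms so far
print(should_stop([45.2, 34.1, 8.7]))          # True: below the ~10 nM error floor
```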

Selectivity is a significant barrier to effective waveform design, especially for background-inclusive and multi-analyte waveforms. Most voltammetry approaches achieve selectivity by either training a machine-learning model, modifying a waveform, or changing the electrode material. Rather than independently adopting one of the latter approaches, our data-driven waveform design uses the predictive performance of a machine learning model as feedback to modify waveform parameters – the black box model decides what waveform would generate more accurate PLSR predictions.

In addition to 5-HIAA, DOPAC, and ascorbate, monovalent cation concentrations (i.e., Na+, K+, H+) fluctuate in the brain extracellular space with neural stimulation due to the biophysics of membrane polarization and repolarization, transporter dynamics, and elevated O2 consumption (and CO2/carbonic acid/H+ production) associated with synchronized action potentials.94 Thus, these species represent key interferents to test in the presence of analytes, as electrodes will likely encounter changes in cation concentrations under real-world (in vivo) conditions.

The literature suggests that specific voltage pulses can deconvolute monoamine neurotransmitter responses from cation changes.95–97 Thus, we hypothesized our search space would contain cation and interferent agnostic waveforms. We expected to find waveforms whose voltammograms, modeled in low-dimensional space by PLSR, are selective for features specific only to the analytes of interest (dopamine and serotonin) and not those affected by interferents. Training across such interferents is unnecessary if a waveform-model combination can ignore cation interferent effects (i.e., is cation agnostic). Thus, we implicitly built the search for agnostic waveforms into our Bayesian workflow by introducing the concept of a challenge set.

Challenge set samples illustrated that SeroOpt can identify implicitly (i.e., requiring no explicit training samples) interferent agnostic waveforms (Fig. 3a). While the literature has demonstrated cationic interferent agnostic waveforms,72,95–97 our approach required no manual or additional data processing, and instead automatically acquired agnostic waveforms. Combining the information content of an optimized waveform with a powerful machine learning model (PLSR) enabled this agnostic behavior.

Because step potential,44,67 step order,43 and hold time98 or hold potential96 can impact waveform performance, other pulse techniques that layer steps at constant potentials and times could maximize their performance by tuning these parameters similarly to the manner presented here.45 Adding more pulses could deteriorate model performance, as useless steps add noise to the data.45 Thus, careful selection of the number of steps is paramount. We confirmed this by noting performance differences across waveforms with only slight parameter differences. We attribute this behavior to the unique faradaic and non-faradaic processes occurring at sub-ms timescales.72,95,97,99

Optimization of individual pulse step lengths results in different transient redox responses from the preceding pulses becoming the starting state for the succeeding pulses, as opposed to letting the current decay to steady-state. A non-steady-state approach has been shown to discriminate compounds more efficiently using VETs. Yet, a lack of methods for optimizing individual step lengths has prevented the broad adoption of this practice. Differentiating dopamine from norepinephrine has been accomplished using pulses with differences as small as 0.1 V, though without systematic design patterns.100

Potential mechanisms underlying interferent agnostic waveforms include diffusion layer depletion of the interfering species by the onset pulse (E1/τ1),101 and other differentiating information provided by unique pulse sequences and transient responses of the rapid pulses to the model.95,97,98 More optimization campaigns, interpretability techniques, and numerical simulation of species at electrode surfaces could uncover the phenomena at play.

Regardless, the finding that interferent agnostic waveforms can be identified and optimized, especially when forgoing background subtraction, shows the utility of historically categorized “nonspecific” capacitive currents. These findings show that analyte-specific information from appropriately designed waveforms occurs in the background current. This information is captured by our model without explicit training, even in the presence of interferents that affect the double layer. Previous reports have shown that pH and Na+/K+ fluxes can cause hundreds to thousands of nM prediction errors in vitro.95,102 For the same fluxes, our waveform-model combinations show only tens of nM error or less, and do not require explicit training, specialized waveform augmentation, or data analysis.

We noticed that across runs and interpretability methods, E1 or τ1 (onset pulse/time), E2 and E3 (pulse/counter pulse67), and E4 (holding potential) were repeatedly ranked as the most critical features for the surrogate models of serotonin test set accuracy. These parameters represent four known heuristics: τ1 and E1 (onset time/intermediate potential; useful for selectivity and diffusion layer depletion),101 counter pulse potential (E3, useful for analyte confirmation),67 and holding potential (E4, useful for analyte accumulation, sensitivity, and reduced serotonin fouling).32 The E3 parameter completes the redox cycle of the analytes, as it is the first cathodic step after a series of anodic steps. While the relationship of E3 with other parameters is complex and affected by their choices, in general, moderate, sequential reductive steps (e.g., E3 ∼ −0.2 V) are optimal. Previous work found that a −0.1 V cathodic limit, as opposed to −0.4 V, was optimal for serotonin detection by limiting analyte polymerization, which otherwise results in electrode fouling.24 As mentioned for E1, an intermediate voltage of E3 may also act as a more selective step for serotonin reduction amidst its possible interferents, or have beneficial effects on the diffusion layer environment relevant to the subsequent E4 step.

Based on these results, future waveform optimization studies should include training sets of interfering analytes that are as comprehensive as possible, as done here, and should not use one-factor-at-a-time optimization, which is currently the most common approach. The setting of one parameter influences the optimal settings for the remaining parameters (Fig. 6). An interesting area of future exploration would be to determine whether these effects generalize to waveforms with greater than four steps, i.e., whether the first cathodic step remains the key step to optimize for 6-, 8-, or 10-step (or larger) waveforms. Further meta-analyses of these behaviors will provide essential insights into unexpected electrochemical optimization design patterns.

Small amplitude onset pulses have been shown to improve the deconvolution and differentiation of ions such as H+,97 Na+, and K+,95 along with small amplitude onset sweeps for drift and pH.72,103 Again, carefully designed waveform tuning can result in explicit and implicit interferent-agnostic waveforms. Other waveform parameters deemed unimportant in this study may appear so because the imposed constraints prevented full exploration of the parameter space or because of our relatively small sample size. Further, the interpretability methods are themselves estimates of the surrogate model, which is in turn an estimate of the true objective. Thus, our interpretations should be treated as correlations, not causation.

The SeroOpt paradigm is immediately extendable to more than four steps (eight parameters) to create more complex waveforms. Future research into other optimization metrics, supervised regression and surrogate models/kernels, and additional analytes is underway.104,105 For example, pulses have been shown to differentiate norepinephrine from dopamine.100

We note the extendibility of our waveform embedding approach. This embedding can be used for any waveform type, such as sweeps, where the parameter values represent the slope (scan rate) of each segment, along with parameters for start and stop potentials. Pulse and sweep designs can also be combined.101 Similar approaches could also extend to embedding AC voltammetry parameters (e.g., amplitude, phase).106 Thus, rather than starting from a historic performer and exploring new waveforms one factor at a time, entirely new waveforms can be discovered de novo.
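As an illustration of this extendibility, a sweep-based waveform can be embedded with an analogous flat parameter vector; the (start potential, stop potential, scan rate) representation below is our own illustrative choice, not a format defined in the SeroOpt code.

```python
import numpy as np


def embed_sweep_waveform(segments):
    """Embed a sweep waveform as a flat vector for Bayesian optimization.

    Each segment is (E_start_V, E_stop_V, scan_rate_V_per_s); a pulse could be mixed in
    as a segment with E_start == E_stop plus an additional hold-time parameter.
    """
    return np.array([value for segment in segments for value in segment], dtype=float)


# Example: an N-shape-like sweep (0 V -> 1.0 V -> -0.1 V -> 0 V) at 1000 V/s.
n_shape = embed_sweep_waveform([(0.0, 1.0, 1000.0), (1.0, -0.1, 1000.0), (-0.1, 0.0, 1000.0)])
```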

Our approach will accelerate waveform development for new single- and multi-analyte panels in environments that hinder selectivity or other difficult-to-optimize metrics. Further exploration of waveforms with agnostic behavior and for multi-analyte co-detection is underway. Applications of Bayesian optimization or alternative machine-learning guided workflows to electrochemical reaction studies and battery technology development have delivered robotics and other automated instrumentation solutions. An area of future work could be to develop an automated flow cell/waveform optimization pipeline to fully ‘close the loop’.65,107,108 To aid other investigators in this pursuit, we provide data, tutorial code notebooks, and videos at github.com/csmova/SeroOpt (https://github.com/csmova/SeroOpt), as well as our corresponding open-source voltammetry acquisition and analysis software66 at github.com/csmova/SeroWare (https://github.com/csmova/SeroWare) and github.com/csmova/SeroML (https://github.com/csmova/SeroML).

To our knowledge, we report the first application of active learning to electrochemical waveform design. Our study represents one of the largest-scale investigations of neurochemical detection waveforms. Using a data-driven approach, we generated a waveform for serotonin detection that outperformed our expert-designed waveform and randomly generated waveforms across various metrics. We demonstrated the ability to search for interferent-agnostic waveforms using a priori design of ‘challenge’ samples. We attributed the success of SeroOpt to the efficient fine-grained tuning of voltage and temporal waveform parameters by Bayesian optimization, each having complex interaction effects. Lastly, we interpreted our model with three separate techniques to confirm our model was learning a representation of the waveform optimization landscape that aligned with heuristics and domain knowledge.

Methods

Chemicals

Serotonin (5-HT) hydrochloride (#H9523), dopamine (DA) hydrochloride (#H8502), 5-hydroxyindoleacetic acid (5-HIAA) (#H8876), 3,4-dihydroxyphenylacetic acid (DOPAC) (#850217), and ascorbic acid (#A92902) were purchased from Sigma-Aldrich (St. Louis, MO). Artificial cerebrospinal fluid (aCSF) solutions were prepared as previously described.40,109 The aCSF solution was adjusted on the day of each experiment to pH 7.1, 7.2, or 7.3 ± 0.03 using HCl (Fluka, #84415). Altered cation (a.c.) aCSF buffer contained the following ion composition: 31 mM NaCl (#73575), 120 mM KCl (#05257), 1.0 mM NaH2PO4 (#17844), and 2.5 mM NaHCO3 (#88208) purchased from Honeywell Fluka (Charlotte, NC), and 1.0 mM CaCl2 (#499609) and 1.2 mM MgCl2 (#449172) purchased from Sigma-Aldrich. All aqueous solutions were prepared using Milli-Q grade or higher water (Sigma-Aldrich).

Electrode fabrication and polymerization

Carbon fiber microelectrodes were fabricated by vacuum-aspirating 7-μm diameter carbon fibers (T650/35, Cytec Carbon Fiber) into O.D. 1.2 mm × I.D. 0.69 mm, 10-cm-long borosilicate glass capillaries (Sutter Instrument Company, Novato, CA, B120-69-10). A micropipette puller (P-1000, Sutter Instrument Company, Novato, CA) was used to pull each capillary into two electrodes, tapering and sealing the glass around the carbon fiber. Four-part epoxy (Sigma Aldrich, Spurr Low Viscosity Embedding Kit, EM0300) was backfilled into the tip of each electrode. Epoxied electrodes were dried at 70 °C for 8–12 h. Electrode tips were cut to ∼100 μm using micro-scissors under an inverted microscope. For electrical conduction, the electrodes were backfilled with Galinstan, a non-toxic gallium–indium–tin alloy (Alfa Aesar, 14634-18). Bare copper wire (0.0253-in. diameter, Archor B22) was polished using a 600-grit polishing disc and inserted into the working electrode capillaries to serve as the electrical connection to the potentiostat. Epoxy (Loctite EA 1C) was then placed around the end of each electrode to secure the Cu wire in place. The epoxy was cured for 24 h at room temperature.

Electrode tips were cleaned with HPLC-grade isopropanol (Sigma Aldrich #34863) for 10 min. Electrodes were then overoxidized by applying a static 1.4 V potential for 20 min.110 Low-density EDOT:Nafion solution was made by first preparing a 40 mM EDOT (3,4-ethylenedioxythiophene; Sigma Aldrich, St. Louis, MO; 483028) stock; 100 μL of this stock was added to 200 μL of Nafion (Ion Power, Inc., Tyrone, PA; LQ-1105) and diluted with 20 mL of acetonitrile.16 A triangle waveform (1.5 V to −0.8 V to 1.5 V) was applied 15× at 100 mV s−1 using a CH Instruments electrochemical analyzer to generate a PEDOT:Nafion coating on each electrode.

In vitro experiments

Reference electrodes were made by placing 0.025-inch silver wire (A-M Systems, 783500) into bleach (5–10% sodium hypochlorite, Clorox, Oakland CA) for 10 minutes. Each reference electrode was rinsed with distilled water before being used in experiments. A flow cell (NEC-FLOW-1, Pine Research Instrumentation Inc.) was used to make measurements with a VICI air-actuated injector (220-0302H; VICI Valco Instruments, Houston, TX). An HPLC pump by Dionex (Sunnyvale, California) pumped aCSF through the flow cell at a constant flow rate of 1.0 mL min−1 (Fig. 7).
Fig. 7 Workflow for parallel Bayesian optimization of voltammetric waveforms with intrinsic interferent selectivity.

Standard concentrations were selected using a fractional factorial box design (Table 1). This is a chemometric approach that designs a multi-dimensional ‘box’ spanning analytes, their concentrations, and experimental conditions of interest.91,111 We selected a fractional approach to bias towards low analyte concentrations and small relative changes. High accuracy and precision in the nM range are important for monitoring basal and stimulated neurotransmitter levels using a single technique.

The fractional approach avoids a full factorial design, which would require orders of magnitude more (prohibitively many) calibration samples. In contrast, traditional calibration sets are information-poor and can lead to spurious correlations when training a multiplexed method with overlapping signals from analytes and interferents.91 The training and test sets effectively spanned the concentrations and combinations of analytes of interest without correlation (Fig. S8). Ascorbate was included in all samples (except blanks) for its antioxidant properties. The concentrations of dopamine, serotonin, 5-HIAA, DOPAC, and ascorbate were varied over physiologically relevant ranges throughout so that the model could be trained and tested across all analytes.
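For illustration only (the concentration levels and the selection rule below are hypothetical and do not reproduce the formal fractional factorial design in Table 1), a concentration ‘box’ can be enumerated and a low-concentration-biased fraction retained:

```python
# Hypothetical sketch of enumerating a concentration 'box' and keeping a biased fraction;
# this simple filter is a stand-in for a formal fractional factorial design.
from itertools import product

levels_nM = {
    "serotonin": [0, 10, 50, 100],
    "dopamine":  [0, 10, 50, 100],
    "5-HIAA":    [0, 500, 1000],
    "DOPAC":     [0, 500, 1000],
}

full_factorial = list(product(*levels_nM.values()))                    # every level combination
fractional = [combo for combo in full_factorial if sum(combo) <= 600]  # bias toward low concentrations

print(f"{len(full_factorial)} full-factorial combinations -> {len(fractional)} fractional samples")
```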

Solutions of aCSF were purged with nitrogen for at least ten minutes before sample preparation. All training and test samples were prepared from stocks stored at −80 °C on the day of experiments. All solutions were adjusted to the corresponding pH each day prior to aliquoting. All solutions were kept covered from light and on ice during the experiments.

We define a training set (i.e., calibration set) as known concentration analyte mixtures, i.e., “standards”, used to train a PLSR model. A test set is defined as known concentration analyte mixtures that were not used during training but instead held out and used to measure model performance. Test set samples only include samples with conditions occurring in the training set (i.e., the same buffer conditions). We define “challenge” samples as additional test set samples prepared under conditions not included or varied in the training set, such as varied pH and cationic buffer salt concentrations (Table 1; see Data analysis). We define an injection blank or zero (0) as an injected solution containing only aCSF.

Training, test, and challenge sets were injected (∼1 mL into a 500-μL loop) into a flow cell using a six-port valve (Fig. 7). The valve was switched to the inject position for ∼20 s per injection. The time between injections was ≥200 s, depending on the waveform and time for the current to return to baseline. Samples were injected in a pseudo-randomized but consistent order. Within each string, the waveform calibration curves were completed across consecutive days. All waveforms within a string were acquired with the same electrode. A different electrode was used for each string to ensure the robustness of the waveform optimization. All waveforms were conditioned for ≥10 min in aCSF before acquiring data.

Voltammetry hardware and software

A two-electrode configuration was used, consisting of an Ag/AgCl reference electrode and a carbon-fiber microelectrode working electrode. A PC with a PCIe-6363 data acquisition card (National Instruments (NI), Austin, TX) was used to control a WaveNeuro One FSCV Potentiostat System (NEC-WN-BASIC, Pine Research Instrumentation Inc.) with a 1000 nA V−1 headstage amplifier (AC01HS2, Pine Research Instrumentation Inc.). The copper wire of the working electrode and the silver wire of the reference electrode were inserted into a microelectrode-headstage coupler (AC01HC0315-5, Pine Research Instrumentation Inc.) that connected the electrodes to the potentiostat.

In-house software was developed for RPV as described in a previous publication.40 The software has since been published and named SeroWare, and is described elsewhere.66

Bayesian optimization

Bayesian optimization was carried out using the open-source Python package scikit-optimize.112 This software uses an ‘ask and tell’ interface. First, the search space was constrained, as described in the Results. The surrogate model (Gaussian process regressor with a Matérn and white noise kernel, and uniform prior) was initialized through the ‘tell’ interface using vectorized and normalized string 1 waveform parameters and optimization metrics. A Matérn kernel was chosen because of its flexibility and the assumption that the true objective function of the waveform parameters is not infinitely differentiable (i.e., the potentials and time applied by the potentiostat/data acquisition card are discretized to some degree).

The acquisition function (expected improvement) was then optimized via the ‘ask’ interface to generate a vectorized waveform to be queried experimentally. Kernel hyperparameters (i.e., length scale, smoothness) and the acquisition function were optimized automatically using the limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm in the software package. The acquisition function returned a vectorized waveform that was then converted into SeroWare format for data acquisition. After experimental results were obtained with the predicted waveform, the metrics of all previous waveforms were aggregated with the newest metrics, and the Bayesian optimizer was updated through the ‘tell’ interface before generating new query points with the ‘ask’ interface.
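For concreteness, the following minimal sketch illustrates such an ask-and-tell loop in scikit-optimize; scikit-optimize's default Gaussian process surrogate uses a Matérn kernel with additive white (Gaussian) noise, consistent with the kernel described above, and an explicit GaussianProcessRegressor could be passed instead. The parameter bounds, string sizes, and run_waveform_experiment() placeholder are illustrative assumptions rather than the exact SeroOpt configuration.

```python
# Minimal sketch (not the exact SeroOpt configuration) of parallel 'ask and tell'
# Bayesian optimization of an 8-parameter (four-step) pulse waveform with scikit-optimize.
import numpy as np
from skopt import Optimizer
from skopt.space import Real

# Eight normalized waveform parameters (a potential and a duration for each of four steps)
space = [Real(0.0, 1.0, name=f"p{i}") for i in range(8)]

opt = Optimizer(
    dimensions=space,
    base_estimator="GP",     # Gaussian process surrogate (Matérn kernel with white noise by default)
    acq_func="EI",           # expected improvement
    acq_optimizer="lbfgs",   # L-BFGS over the acquisition surface
    n_initial_points=6,      # random waveforms in string 1
    random_state=0,
)

def run_waveform_experiment(params):
    """Placeholder for acquiring flow-cell data with a candidate waveform and computing a
    single scalarized optimization metric (lower is better); synthetic here for illustration."""
    return float(np.sum((np.asarray(params) - 0.3) ** 2))

# 'Tell' the surrogate the string 1 (random) waveforms and their measured metrics
rng = np.random.default_rng(0)
string1 = rng.uniform(0.0, 1.0, size=(6, 8)).tolist()
opt.tell(string1, [run_waveform_experiment(w) for w in string1])

# Each subsequent round: 'ask' for a string of candidate waveforms, measure them, 'tell' the results
for _ in range(3):
    candidates = opt.ask(n_points=8)
    opt.tell(candidates, [run_waveform_experiment(w) for w in candidates])

print("Best scalarized metric so far:", min(opt.yi))
```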

In this work, increments of voltage were rounded to the nearest 0.001 V, and increments of time were rounded to the nearest 0.1 ms. Partial dependence functions built into scikit-learn and scikit-optimize were used to interpret the model, along with the SHAP Python package.
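A minimal post-hoc interpretation sketch is shown below, assuming the 'opt' Optimizer from the sketch above; it is illustrative only and does not reproduce the exact analysis reported here.

```python
# Illustrative interpretation of the fitted surrogate via partial dependence and SHAP.
import numpy as np
import shap
from sklearn.inspection import partial_dependence

surrogate = opt.models[-1]           # most recently fitted Gaussian process surrogate
X = np.asarray(opt.Xi)               # waveform parameter vectors evaluated so far

# Partial dependence of the predicted metric on a single waveform parameter (index 0)
pd_result = partial_dependence(surrogate, X, features=[0])
print(pd_result["average"].shape)    # averaged surrogate response over a grid of parameter 0

# Model-agnostic SHAP values for the surrogate's predictions
explainer = shap.KernelExplainer(surrogate.predict, X)
shap_values = explainer.shap_values(X, nsamples=100)
print(np.abs(shap_values).mean(axis=0))  # mean |SHAP| per parameter as a global importance score
```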

Data analysis

Data were extracted using in-house custom acquisition software written in MATLAB 2016a. Models were built as described in previous literature using open-source Python packages (scikit-learn).40,113 Briefly, roughly 40–100 voltammograms were extracted per sample injection. All voltammograms were normalized, and the number of components was chosen using 5-fold cross-validation. Optimization metrics were then calculated using the final model (Table S1).
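As a rough sketch of this calibration step, the number of PLSR latent components can be chosen by 5-fold cross-validation in scikit-learn; the arrays below are synthetic placeholders standing in for the normalized voltammograms and known concentrations, not the experimental data.

```python
# Illustrative PLSR calibration with 5-fold cross-validated selection of the number of
# latent components; X and Y are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))           # normalized voltammograms (samples x current readings)
Y = rng.uniform(0, 100, size=(200, 5))    # concentrations: 5-HT, DA, 5-HIAA, DOPAC, ascorbate

# 5-fold cross-validation over candidate component counts
cv_scores = {
    n: cross_val_score(PLSRegression(n_components=n), X, Y,
                       cv=5, scoring="neg_mean_absolute_error").mean()
    for n in range(1, 11)
}
best_n = max(cv_scores, key=cv_scores.get)   # component count with the lowest CV error

pls = PLSRegression(n_components=best_n).fit(X, Y)
```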
Drift training. The PLSR model was trained to account for drift using voltammograms collected throughout each experiment while aCSF containing interferents flowed and injections were not occurring (∼2 h). We define these voltammograms as “background blanks”; they are portions of the data acquired when no samples were being injected. The injection blanks correct for injection artifacts, whereas the background blanks correct for drift (Fig. S9). Data in which drift was evident were extracted from these background epochs and labeled as ‘zero’ analyte concentrations to teach the model what drift voltammograms, as opposed to analyte-containing voltammograms, look like. Background blanks were in addition to data from injections of aCSF alone (i.e., injection blanks), which accounted for flow cell injection artifacts.

We found that this process increased the accuracy and precision of the PLSR predictions and generalized to test set samples. We attribute this to a low-dimensional representation of drift learned by the model (Fig. S9). All concentration predictions were constrained to be ≥0 (i.e., domain knowledge dictates that concentrations cannot be negative). Negative concentration predictions were replaced with 0.
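Continuing the PLSR sketch above, the drift-training and non-negativity steps might look like the following; the array names and sizes are placeholders.

```python
# Illustrative drift training: background-blank voltammograms are appended to the
# training set with zero-concentration labels, and negative predictions are clipped.
X_background_blanks = rng.normal(size=(50, 500))   # drift epochs between injections (placeholder)
X_test = rng.normal(size=(40, 500))                # held-out test voltammograms (placeholder)

X_aug = np.vstack([X, X_background_blanks])
Y_aug = np.vstack([Y, np.zeros((len(X_background_blanks), Y.shape[1]))])  # drift labeled as zero

drift_aware_pls = PLSRegression(n_components=best_n).fit(X_aug, Y_aug)
predictions = np.clip(drift_aware_pls.predict(X_test), 0.0, None)  # concentrations constrained to >= 0
```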

Optimization metrics. The eight optimization metrics were: dopamine accuracy (mean absolute error of the test set predictions), serotonin accuracy (mean absolute error of the test set predictions), the variance of the test set blanks (a proxy for LOD) for zero dopamine and for zero serotonin, and the mean absolute errors for dopamine and for serotonin in altered cation (a.c.) aCSF (ion robustness challenge samples) and in varying pH aCSF (pH robustness challenge samples) (Table S1). Due to experimental time constraints, the LOD metric was excluded from the optimization panel for the second run of Bayesian optimization (R2). This resulted in 30 unique waveforms for the first run (six random waveforms in string 1, plus three strings of eight waveforms from subsequent rounds of Bayesian optimization) and a total of 24 waveforms for the second run (six random waveforms in string 1, plus six waveforms in each of three rounds of optimization). In R1 and R2 combined, 55 unique waveforms were tested (with the additional OG RPV waveform also tested; Table S2).
Challenge samples. Test samples (T1–T4), prepared at pH 7.3, were used to assess dopamine and serotonin accuracy and LOD. Some test samples (T1–T3) were also prepared in aCSF at pH 7.1 or pH 7.2, and in aCSF with altered cation concentrations (Na+ and K+), to assess the accuracy of dopamine and serotonin predictions in the presence of the varying H+, Na+, and K+ concentrations expected in vivo. We refer to these specially prepared test samples as ‘challenge’ samples (Table 1 and Fig. 7). These samples enabled the training set to remain sparse; we could optimize for interferent-agnostic waveforms without explicitly training on these interferents. Otherwise, training across variations in pH or other cations would have required partial to full multiplicative increases in the number of samples injected. As an efficient alternative, we optimized for accuracy on the challenge set samples without any increase in training set size. Thus, the optimization goal of the challenge samples was to find a waveform inherently agnostic to changes in pH or cations, rather than a waveform that was ‘trainable’ across these interferents. Here, the interferents implicitly optimized against were pH and monovalent cations, an approach that extends to any interferents anticipated from a priori domain knowledge. This approach is particularly useful when the training data matrix differs from the matrix in which the model is applied (i.e., in vitro to in vivo generalizability).

Data availability

Data for this article, including acquisition and analysis code, are available at https://doi.org/10.5281/zenodo.15339008, https://github.com/csmova/SeroWare (DOI: https://doi.org/10.5281/zenodo.15580629), https://github.com/csmova/SeroML (DOI: https://doi.org/10.5281/zenodo.15580636), and https://github.com/csmova/SeroOpt (DOI: https://doi.org/10.5281/zenodo.15580638). Additional data supporting this article, including all training data for the surrogate model (waveform parameters and experimental metrics), have been included as part of the ESI.

Author contributions

AMA, ASM, CL, CSM, KAP, and MAF conceived the work and designed the experiments. ANN, CSM, KKN, KAP, MEC, and MEW performed all experiments. CSM and KAP analyzed the data. CSM wrote the code for the regression and Bayesian optimization models. KAP performed statistical analyses. AMA, ASM, MAF, and CL guided the project. All authors wrote and approved the final version of the manuscript.

Conflicts of interest

There are no conflicts to declare.

Acknowledgements

This work was supported by the National Science Foundation (CHE-2404470). CSM was supported by the National Science Foundation Graduate Research Fellowship Program (DGE-1650604 and DGE-2034835). Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. This research was also supported by the Spanish Ministry of Science, Innovation, and Universities under project number PID2021-126304OB-C44. The authors acknowledge BioRender.com for the Table of Contents graphic and Fig. 7. See: https://BioRender.com/fxn08n1 and https://BioRender.com/lre0n7d.

References

  1. Q. Pang, J. Meng, S. Gupta, X. Hong, C. Y. Kwok, J. Zhao, Y. Jin, L. Xu, O. Karahan, Z. Wang, S. Toll, L. Mai, L. F. Nazar, M. Balasubramanian, B. Narayanan and D. R. Sadoway, Fast-charging aluminium–chalcogen batteries resistant to dendritic shorting, Nature, 2022, 608(7924), 704–711,  DOI:10.1038/s41586-022-04983-9.
  2. P. Garrido-Barros, J. Derosa, M. J. Chalkley and J. C. Peters, Tandem electrocatalytic N2 fixation via proton-coupled electron transfer, Nature, 2022, 609(7925), 71–76,  DOI:10.1038/s41586-022-05011-6.
  3. W. Zhang, L. Lu, W. Zhang, Y. Wang, S. D. Ware, J. Mondragon, J. Rein, N. Strotman, D. Lehnherr, K. A. See and S. Lin, Electrochemically driven cross-electrophile coupling of alkyl halides, Nature, 2022, 604(7905), 292–297,  DOI:10.1038/s41586-022-04540-4.
  4. J. Li, Y. Liu, L. Yuan, B. Zhang, E. S. Bishop, K. Wang, J. Tang, Y.-Q. Zheng, W. Xu, S. Niu, L. Beker, T. L. Li, G. Chen, M. Diyaolu, A.-L. Thomas, V. Mottini, J. B. H. Tok, J. C. Y. Dunn, B. Cui, S. P. Paşca, Y. Cui, A. Habtezion, X. Chen and Z. Bao, A tissue-like neurotransmitter sensor for the brain and gut, Nature, 2022, 606(7912), 94–101,  DOI:10.1038/s41586-022-04615-2.
  5. L. Willmore, C. Cameron, J. Yang, I. B. Witten and A. L. Falkner, Behavioural and dopaminergic signatures of resilience, Nature, 2022, 611(7934), 124–132,  DOI:10.1038/s41586-022-05328-2.
  6. S. R. Batten, D. Bang, B. H. Kopell, A. N. Davis, M. Heflin, Q. Fu, O. Perl, K. Ziafat, A. Hashemi, I. Saez, L. S. Barbosa, T. Twomey, T. Lohrenz, J. P. White, P. Dayan, A. W. Charney, M. Figee, H. S. Mayberg, K. T. Kishida, X. Gu and P. R. Montague, Dopamine and serotonin in human substantia nigra track social context and value signals during economic exchange, Nat. Hum. Behav., 2024, 8(4), 718–728,  DOI:10.1038/s41562-024-01831-w.
  7. S. B. Flagel, J. J. Clark, T. E. Robinson, L. Mayo, A. Czuj, I. Willuhn, C. A. Akers, S. M. Clinton, P. E. M. Phillips and H. Akil, A selective role for dopamine in stimulus–reward learning, Nature, 2011, 469(7328), 53–57,  DOI:10.1038/nature09588.
  8. P. E. M. Phillips, G. D. Stuber, M. L. A. V. Heien, R. M. Wightman and R. M. Carelli, Subsecond dopamine release promotes cocaine seeking, Nature, 2003, 422(6932), 614–618,  DOI:10.1038/nature01476.
  9. M. E. Pipita, M. Santonico, G. Pennazza, A. Zompanti, S. Fazzina, D. Cavalieri, F. Bruno, S. Angeletti, C. Pedone and R. A. Incalzi, Integration of voltammetric analysis, protein electrophoresis and pH measurement for diagnosis of pleural effusions: A non-conventional diagnostic approach, Sci. Rep., 2020, 10(1), 15222,  DOI:10.1038/s41598-020-71542-5.
  10. S. Zhao, H. Li, J. Dai, Y. Jiang, G. Zhan, M. Liao, H. Sun, Y. Shi, C. Ling, Y. Yao and L. Zhang, Selective electrosynthesis of chlorine disinfectants from seawater, Nat. Sustain., 2024, 7(2), 148–157,  DOI:10.1038/s41893-023-01265-8.
  11. Y. S. Mutz, D. do Rosario, L. R. G. Silva, D. Galvan, B. C. Janegitz, Q. de R. Ferreira and C. A. Conte-Junior, A single screen-printed electrode in tandem with chemometric tools for the forensic differentiation of Brazilian beers, Sci. Rep., 2022, 12(1), 5630,  DOI:10.1038/s41598-022-09632-9.
  12. A. M. Andrews, The BRAIN Initiative: Toward a chemical connectome, ACS Chem. Neurosci., 2013, 4(5), 645,  DOI:10.1021/cn4001044.
  13. P. Puthongkham and B. J. Venton, Recent advances in fast-scan cyclic voltammetry, Analyst, 2020, 145(4), 1087–1102,  10.1039/c9an01925a.
  14. N. T. Rodeberg, S. G. Sandberg, J. A. Johnson, P. E. M. Phillips and R. M. Wightman, Hitchhiker's guide to voltammetry: Acute and chronic electrodes for in vivo fast-scan cyclic voltammetry, ACS Chem. Neurosci., 2017, 8(2), 221–234,  DOI:10.1021/acschemneuro.6b00393.
  15. L. Daws, A. Andrews and G. Gerhardt, Electrochemical techniques and advances in psychopharmacology, in Encyclopedia of psychopharmacology, ed. I. P. Stolerman, Springer Berlin Heidelberg, Berlin, Heidelberg, 2013, pp. 1–6.
  16. R. F. Vreeland, C. W. Atcherley, W. S. Russell, J. Y. Xie, D. Lu, N. D. Laude, F. Porreca and M. L. Heien, Biocompatible PEDOT:Nafion composite electrode coatings for selective detection of neurotransmitters in vivo, Anal. Chem., 2015, 87(5), 2600–2607,  DOI:10.1021/ac502165f.
  17. Z. Shao, Y. Chang and B. J. Venton, Carbon microelectrodes with customized shapes for neurotransmitter detection: A review, Anal. Chim. Acta, 2022, 1223, 340165,  DOI:10.1016/j.aca.2022.340165.
  18. E. Castagnola, E. M. Robbins, D. D. Krahe, B. Wu, M. Y. Pwint, Q. Cao and X. T. Cui, Stable in-vivo electrochemical sensing of tonic serotonin levels using PEDOT/CNT-coated glassy carbon flexible microelectrode arrays, Biosens. Bioelectron., 2023, 230, 115242,  DOI:10.1016/j.bios.2023.115242.
  19. H. Rafi and A. G. Zestos, Multiplexing neurochemical detection with carbon fiber multielectrode arrays using fast-scan cyclic voltammetry, Anal. Bioanal. Chem., 2021, 413(27), 6715–6726,  DOI:10.1007/s00216-021-03526-x.
  20. B. E. K. Swamy and B. J. Venton, Carbon nanotube-modified microelectrodes for simultaneous detection of dopamine and serotonin in vivo, Analyst, 2007, 132(9), 876–884,  10.1039/B705552H.
  21. S. Mena, M. Visentin, C. E. Witt, L. E. Honan, N. Robins and P. Hashemi, Novel, user-friendly experimental and analysis strategies for fast voltammetry: Next generation FSCAV with artificial neural networks, ACS Meas. Sci. Au, 2022, 2(3), 241–250,  DOI:10.1021/acsmeasuresciau.1c00060.
  22. T. Twomey, L. Barbosa, T. Lohrenz and P. R. Montague, Deep learning architectures for FSCV, a comparison, arXiv (Medical Physics), 2022, preprint, arXiv:2212.01960,  DOI:10.48550/arXiv.2212.01960, (accessed 12/13/2022).
  23. P. R. Montague and K. T. Kishida, Computational underpinnings of neuromodulation in humans, Cold Spring Harbor Symp. Quant. Biol., 2018, 83, 71–82,  DOI:10.1101/sqb.2018.83.038166.
  24. K. E. Dunham and B. J. Venton, Improving serotonin fast-scan cyclic voltammetry detection: New waveforms to reduce electrode fouling, Analyst, 2020, 145(22), 7437–7446,  10.1039/D0AN01406K.
  25. H. Rafi and A. G. Zestos, Recent advances in FSCV detection of neurochemicals via waveform and carbon microelectrode modification, J. Electrochem. Soc., 2021, 168(5), 057520,  DOI:10.1149/1945-7111/ac0064.
  26. J. Fedorowski and W. R. LaCourse, A review of pulsed electrochemical detection following liquid chromatography and capillary electrophoresis, Anal. Chim. Acta, 2015, 861, 1–11,  DOI:10.1016/j.aca.2014.08.035.
  27. Z. Wei, Y. Yang, J. Wang, W. Zhang and Q. Ren, The measurement principles, working parameters and configurations of voltammetric electronic tongues and its applications for foodstuff analysis, J. Food Eng., 2018, 217, 75–92,  DOI:10.1016/j.jfoodeng.2017.08.005.
  28. G. Moro, A. Silvestri, A. Ulrici, F. Conzuelo and C. Zanardi, How to optimize the analytical performance of differential pulse voltammetry: One variable at time versus design of experiments, J. Solid State Electrochem., 2024, 28(3), 1403–1415,  DOI:10.1007/s10008-023-05753-x.
  29. A. Jaworski, T. Rapecki and K. Wikiel, Consolidated designer waveform for maximizing analytical output of voltammetric measurements for complex chemical matrices, J. Electroanal. Chem., 2023, 936, 117332,  DOI:10.1016/j.jelechem.2023.117332.
  30. P. Hashemi, E. C. Dankoski, J. Petrovic, R. B. Keithley and R. M. Wightman, Voltammetric detection of 5-hydroxytryptamine release in the rat brain, Anal. Chem., 2009, 81(22), 9462–9471,  DOI:10.1021/ac9018846.
  31. B. P. Jackson, S. M. Dietz and R. M. Wightman, Fast-scan cyclic voltammetry of 5-hydroxytryptamine, Anal. Chem., 1995, 67(6), 1115–1120,  DOI:10.1021/ac00102a015.
  32. B. J. Venton and Q. Cao, Fundamentals of fast-scan cyclic voltammetry for dopamine detection, Analyst, 2020, 145(4), 1158–1168,  10.1039/C9AN01586H.
  33. M. L. A. V. Heien, P. E. M. Phillips, G. D. Stuber, A. T. Seipel and R. M. Wightman, Overoxidation of carbon-fiber microelectrodes enhances dopamine adsorption and increases sensitivity, Analyst, 2003, 128(12), 1413–1419,  10.1039/B307024G.
  34. S. Y. Kim, Y. B. Oh, H. J. Shin, D. H. Kim, I. Y. Kim, K. Bennet, K. H. Lee and D. P. Jang, 5-hydroxytryptamine measurement using paired pulse voltammetry, Biomed. Eng. Lett., 2013, 3(2), 102–108,  DOI:10.1007/s13534-013-0093-z.
  35. C. Park, Y. Oh, H. Shin, J. Kim, Y. Kang, J. Sim, H. U. Cho, H. K. Lee, S. J. Jung, C. D. Blaha, K. E. Bennet, M. L. Heien, K. H. Lee, I. Y. Kim and D. P. Jang, Fast cyclic square-wave voltammetry to enhance neurotransmitter selectivity and sensitivity, Anal. Chem., 2018, 90(22), 13348–13355,  DOI:10.1021/acs.analchem.8b02920.
  36. H. Shin, Y. Oh, C. Park, Y. Kang, H. U. Cho, C. D. Blaha, K. E. Bennet, M. L. Heien, I. Y. Kim, K. H. Lee and D. P. Jang, Sensitive and selective measurement of serotonin in vivo using fast cyclic square-wave voltammetry, Anal. Chem., 2020, 92(1), 774–781,  DOI:10.1021/acs.analchem.9b03164.
  37. Y. Oh, M. L. Heien, C. Park, Y. M. Kang, J. Kim, S. L. Boschen, H. Shin, H. U. Cho, C. D. Blaha, K. E. Bennet, H. K. Lee, S. J. Jung, I. Y. Kim, K. H. Lee and D. P. Jang, Tracking tonic dopamine levels in vivo using multiple cyclic square wave voltammetry, Biosens. Bioelectron., 2018, 121, 174–182,  DOI:10.1016/j.bios.2018.08.034.
  38. H. Shin, A. Goyal, J. H. Barnett, A. E. Rusheen, J. Yuen, R. Jha, S. M. Hwang, Y. Kang, C. Park, H.-U. Cho, C. D. Blaha, K. E. Bennet, Y. Oh, M. L. Heien, D. P. Jang and K. H. Lee, Tonic serotonin measurements in vivo using N-shaped multiple cyclic square wave voltammetry, Anal. Chem., 2021, 93(51), 16987–16994,  DOI:10.1021/acs.analchem.1c02131.
  39. A. Abdalla, C. W. Atcherley, P. Pathirathna, S. Samaranayake, B. Qiang, E. Peña, S. L. Morgan, M. L. Heien and P. Hashemi, In vivo ambient serotonin measurements at carbon-fiber microelectrodes, Anal. Chem., 2017, 89(18), 9703–9711,  DOI:10.1021/acs.analchem.7b01257.
  40. C. S. Movassaghi, K. A. Perrotta, H. Yang, R. Iyer, X. Cheng, M. Dagher, M. A. Fillol and A. M. Andrews, Simultaneous serotonin and dopamine monitoring across timescales by rapid pulse voltammetry with partial least squares regression, Anal. Bioanal. Chem., 2021, 413(27), 6747–6767,  DOI:10.1007/s00216-021-03665-1.
  41. C. S. Movassaghi, M. Alcaniz Fillol, K. T. Kishida, G. McCarty, L. A. Sombers, K. M. Wassum and A. M. Andrews, Maximizing electrochemical information: A perspective on background-inclusive fast voltammetry, Anal. Chem., 2024, 96(16), 6097–6105,  DOI:10.1021/acs.analchem.3c04938.
  42. A. J. Bard, L. R. Faulkner and H. S. White, Electrochemical methods: Fundamentals and applications, John Wiley & Sons, 2022.
  43. I. Campos, M. Alcañiz, R. Masot, J. Soto, R. Martínez-Máñez, J.-L. Vivancos and L. Gil, A method of pulse array design for voltammetric electronic tongues, Sens. Actuators, B, 2012, 161(1), 556–563,  DOI:10.1016/j.snb.2011.10.075.
  44. E. Fuentes, M. Alcañiz, L. Contat, E. O. Baldeón, J. M. Barat and R. Grau, Influence of potential pulses amplitude sequence in a voltammetric electronic tongue (VET) applied to assess antioxidant capacity in aliso, Food Chem., 2017, 224, 233–241,  DOI:10.1016/j.foodchem.2016.12.076.
  45. P. Ivarsson, S. Holmin, N.-E. Höjer, C. Krantz-Rülcker and F. Winquist, Discrimination of tea by means of a voltammetric electronic tongue and different applied waveforms, Sens. Actuators, B, 2001, 76(1), 449–454,  DOI:10.1016/S0925-4005(01)00583-4.
  46. A. E. Ross and B. J. Venton, Sawhorse waveform voltammetry for selective detection of adenosine, ATP, and hydrogen peroxide, Anal. Chem., 2014, 86(15), 7486–7493,  DOI:10.1021/ac501229c.
  47. S. C. Altieri, H. Yang, H. J. O'Brien, H. M. Redwine, D. Senturk, J. G. Hensler and A. M. Andrews, Perinatal vs. genetic programming of serotonin states associated with anxiety, Neuropsychopharmacology, 2015, 40(6), 1456–1470,  DOI:10.1038/npp.2014.331.
  48. H. Yang, A. B. Thompson, B. J. McIntosh, S. C. Altieri and A. M. Andrews, Physiologically relevant changes in serotonin resolved by fast microdialysis, ACS Chem. Neurosci., 2013, 4(5), 790–798,  DOI:10.1021/cn400072f.
  49. S. Altieri, Y. Singh, E. Sibille and A. M. Andrews, Serotonergic pathways in depression, in Neurobiology of Depression, CRC Press, 2011, vol. 20115633, pp. 143–170.
  50. C. A. Marcinkiewcz, C. M. Mazzone, G. D'Agostino, L. R. Halladay, J. A. Hardaway, J. F. DiBerto, M. Navarro, N. Burnham, C. Cristiano, C. E. Dorrier, G. J. Tipton, C. Ramakrishnan, T. Kozicz, K. Deisseroth, T. E. Thiele, Z. A. McElligott, A. Holmes, L. K. Heisler and T. L. Kash, Serotonin engages an anxiety and fear-promoting circuit in the extended amygdala, Nature, 2016, 537(7618), 97–101,  DOI:10.1038/nature19318.
  51. K. M. Tye, R. Prakash, S.-Y. Kim, L. E. Fenno, L. Grosenick, H. Zarabi, K. R. Thompson, V. Gradinaru, C. Ramakrishnan and K. Deisseroth, Amygdala circuitry mediating reversible and bidirectional control of anxiety, Nature, 2011, 471(7338), 358–362,  DOI:10.1038/nature09820.
  52. C. S. Movassaghi and A. M. Andrews, Call me serotonin, Nat. Chem., 2024, 16(4), 670,  DOI:10.1038/s41557-024-01488-y.
  53. M. D. Gershon and K. G. Margolis, The gut, its microbiome, and the brain: Connections and communications, J. Clin. Invest., 2021, 131(18) DOI:10.1172/JCI143768.
  54. D. L. Murphy, M. A. Fox, K. R. Timpano, P. R. Moya, R. Ren-Patterson, A. M. Andrews, A. Holmes, K.-P. Lesch and J. R. Wendland, How the serotonin story is being rewritten by new gene-based discoveries principally related to SLC6A4, the serotonin transporter gene, which functions to influence all cellular serotonin systems, Neuropharmacology, 2008, 55(6), 932–960,  DOI:10.1016/j.neuropharm.2008.08.034.
  55. Y. S. Singh, S. C. Altieri, T. L. Gilman, H. M. Michael, I. D. Tomlinson, S. J. Rosenthal, G. M. Swain, M. A. Murphey-Corb, R. E. Ferrell and A. M. Andrews, Differential serotonin transport is linked to the rh5-HTTLPR in peripheral blood cells, Transl. Psychiatry, 2012, 2(2), e77,  DOI:10.1038/tp.2012.2.
  56. M. Z. Wrona and G. Dryhurst, Electrochemical oxidation of 5-hydroxytryptamine in aqueous solution at physiological pH, Bioorg. Chem., 1990, 18(3), 291–317,  DOI:10.1016/0045-2068(90)90005-P.
  57. Q. Cao, P. Puthongkham and B. J. Venton, Review: New insights into optimizing chemical and 3D surface structures of carbon electrodes for neurotransmitter detection, Anal. Methods, 2019, 11(3), 247–261,  10.1039/C8AY02472C.
  58. A. Eltahir, J. White, T. Lohrenz and P. R. Montague, Low amplitude burst detection of catecholamines, bioRxiv (Neuroscience), 2021,  DOI:10.1101/2021.08.02.454747.
  59. P. R. Montague, T. Lohrenz, J. White, R. J. Moran and K. T. Kishida, Random burst sensing of neurotransmitters, bioRxiv (Neuroscience), 2019,  DOI:10.1101/607077.
  60. P. A. Romero, A. Krause and F. H. Arnold, Navigating the protein fitness landscape with Gaussian processes, Proc. Natl. Acad. Sci. U. S. A., 2013, 110(3), E193–E201,  DOI:10.1073/pnas.1215251110.
  61. J. J. Patil, C. T.-C. Wan, S. Gong, Y.-M. Chiang, F. R. Brushett and J. C. Grossman, Bayesian-optimization-assisted laser reduction of poly (acrylonitrile) for electrochemical applications, ACS Nano, 2023, 17(5), 4999–5013,  DOI:10.1021/acsnano.2c12663.
  62. B. N. Slautin, Y. Liu, H. Funakubo, R. K. Vasudevan, M. Ziatdinov and S. V. Kalinin, Bayesian conavigation: Dynamic designing of the material digital twins via active learning, ACS Nano, 2024, 18(36), 24898–24908,  DOI:10.1021/acsnano.4c05368.
  63. R. J. Hickman, M. Aldeghi, F. Häse and A. Aspuru-Guzik, Bayesian optimization with known experimental and design constraints for chemistry applications, Digital Discovery, 2022, 1(5), 732–744,  10.1039/D2DD00028H.
  64. Y. Wu, A. Walsh and A. M. Ganose, Race to the bottom: Bayesian optimisation for chemical problems, Digital Discovery, 2024, 3(6), 1086–1100,  10.1039/D3DD00234A.
  65. H. Sheng, J. Sun, O. Rodríguez, B. B. Hoar, W. Zhang, D. Xiang, T. Tang, A. Hazra, D. S. Min, A. G. Doyle, M. S. Sigman, C. Costentin, Q. Gu, J. Rodríguez-López and C. Liu, Autonomous closed-loop mechanistic investigation of molecular electrochemistry via automation, Nat. Commun., 2024, 15(1), 2781,  DOI:10.1038/s41467-024-47210-x.
  66. C. S. Movassaghi, R. Iyer, M. E. Curry, M. E. Wesely, M. A. Fillol and A. M. Andrews, SeroWare: An open-source software suite for voltammetry data acquisition and analysis, ACS Chem. Neurosci., 2025, 16(5), 856–867,  DOI:10.1021/acschemneuro.4c00799.
  67. I. Campos, R. Masot, M. Alcañiz, L. Gil, J. Soto, J. L. Vivancos, E. García-Breijo, R. H. Labrador, J. M. Barat and R. Martínez-Mañez, Accurate concentration determination of anions nitrate, nitrite and chloride in minced meat using a voltammetric electronic tongue, Sens. Actuators, B, 2010, 149(1), 71–78,  DOI:10.1016/j.snb.2010.06.028.
  68. H. Choi, H. Shin, H. U. Cho, C. D. Blaha, M. L. Heien, Y. Oh, K. H. Lee and D. P. Jang, Neurochemical concentration prediction using deep learning vs. principal component regression in fast scan cyclic voltammetry: A comparison study, ACS Chem. Neurosci., 2022, 13(15), 2288–2297,  DOI:10.1021/acschemneuro.2c00069.
  69. D. M. Roijers, P. Vamplew, S. Whiteson and R. Dazeley, A survey of multi-objective sequential decision-making, J. Artif. Intell. Res., 2013, 48, 67–113.
  70. M. Dagher, K. A. Perrotta, S. A. Erwin, A. Hachisuka, R. Ayer, S. Masmanidis, H. Yang and A. M. Andrews, Optogenetic stimulation of midbrain dopamine neurons produces striatal serotonin release, ACS Chem. Neurosci., 2022, 13(7), 946–958,  DOI:10.1021/acschemneuro.1c00715.
  71. P. I. Frazier, A tutorial on Bayesian optimization, arXiv, 2018, preprint, arXiv:1807.02811,  DOI:10.48550/arXiv.1807.02811.
  72. C. J. Meunier, E. C. Mitchell, J. G. Roberts, J. V. Toups, G. S. McCarty and L. A. Sombers, Electrochemical selectivity achieved using a double voltammetric waveform and partial least squares regression: Differentiating endogenous hydrogen peroxide fluctuations from shifts in pH, Anal. Chem., 2018, 90(3), 1767–1776,  DOI:10.1021/acs.analchem.7b03717.
  73. C. Molnar, Interpretable machine learning. Leanpub, https://christophm.github.io/interpretable-ml-book/, 2020.
  74. E. Brochu, V. M. Cora and N. d. Freitas, A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning, arXiv, 2010, preprint, arXiv:1012.2599,  DOI:10.48550/arXiv.1012.2599.
  75. U. Pratiush, H. Funakubo, R. Vasudevan, S. V. Kalinin and Y. Liu, Scientific exploration with expert knowledge (SEEK) in autonomous scanning probe microscopy with active learning, Digital Discovery, 2025, 4(1), 252–263,  10.1039/D4DD00277F.
  76. C. B. Wahl, M. Aykol, J. H. Swisher, J. H. Montoya, S. K. Suram and C. A. Mirkin, Machine learning-accelerated design and synthesis of polyelemental heterostructures, Sci. Adv., 2021, 7(52), eabj5505,  DOI:10.1126/sciadv.abj5505.
  77. Q. Liang, A. E. Gongora, Z. Ren, A. Tiihonen, Z. Liu, S. Sun, J. R. Deneault, D. Bash, F. Mekki-Berrada, S. A. Khan, K. Hippalgaonkar, B. Maruyama, K. A. Brown, J. Fisher Iii and T. Buonassisi, Benchmarking the performance of Bayesian optimization across multiple experimental materials science domains, npj Comput. Mater., 2021, 7(1), 188,  DOI:10.1038/s41524-021-00656-9.
  78. N. Gantzler, A. Deshwal, J. R. Doppa and C. M. Simon, Multi-fidelity Bayesian optimization of covalent organic frameworks for xenon/krypton separations, Digital Discovery, 2023, 2(6), 1937–1956,  10.1039/D3DD00117B.
  79. M. Valleti, R. K. Vasudevan, M. A. Ziatdinov and S. V. Kalinin, Bayesian optimization in continuous spaces via virtual process embeddings, Digital Discovery, 2022, 1(6), 910–925,  10.1039/D2DD00065B.
  80. J. C. Greenhalgh, S. A. Fahlberg, B. F. Pfleger and P. A. Romero, Machine learning-guided acyl-ACP reductase engineering for improved in vivo fatty alcohol production, Nat. Commun., 2021, 12(1), 5825,  DOI:10.1038/s41467-021-25831-w.
  81. Y. Murakami, S. Ishida, Y. Demizu and K. Terayama, Design of antimicrobial peptides containing non-proteinogenic amino acids using multi-objective Bayesian optimisation, Digital Discovery, 2023, 2(5), 1347–1353,  10.1039/D3DD00090G.
  82. B. J. Shields, J. Stevens, J. Li, M. Parasram, F. Damani, J. I. M. Alvarado, J. M. Janey, R. P. Adams and A. G. Doyle, Bayesian reaction optimization as a tool for chemical synthesis, Nature, 2021, 590(7844), 89–96,  DOI:10.1038/s41586-021-03213-y.
  83. R.-R. Griffiths and J. M. Hernández-Lobato, Constrained Bayesian optimization for automatic chemical design using variational autoencoders, Chem. Sci., 2020, 11(2), 577–586,  10.1039/C9SC04026A.
  84. B. Ranković, R.-R. Griffiths, H. B. Moss and P. Schwaller, Bayesian optimisation for additive screening and yield improvements – beyond one-hot encoding, Digital Discovery, 2024, 3(4), 654–666,  10.1039/D3DD00096F.
  85. A. A. Schoepfer, J. Weinreich, R. Laplaza, J. Waser and C. Corminboeuf, Cost-informed Bayesian reaction optimization, Digital Discovery, 2024, 3(11), 2289–2297,  10.1039/D4DD00225C.
  86. T. M. Dixon, J. Williams, M. Besenhard, R. M. Howard, J. MacGregor, P. Peach, A. D. Clayton, N. J. Warren and R. A. Bourne, Operator-free HPLC automated method development guided by Bayesian optimization, Digital Discovery, 2024, 3(8), 1591–1601,  10.1039/D4DD00062E.
  87. L. Gundry, S.-X. Guo, G. Kennedy, J. Keith, M. Robinson, D. Gavaghan, A. M. Bond and J. Zhang, Recent advances and future perspectives for automated parameterisation, Bayesian inference and machine learning in voltammetry, Chem. Commun., 2021, 57(15), 1855–1870,  10.1039/D0CC07549C.
  88. A. M. Bond, A perceived paucity of quantitative studies in the modern era of voltammetry: Prospects for parameterisation of complex reactions in Bayesian and machine learning frameworks, J. Solid State Electrochem., 2020, 24(9), 2041–2050,  DOI:10.1007/s10008-020-04639-6.
  89. P. Puthongkham, S. Wirojsaengthong and A. Suea-Ngam, Machine learning and chemometrics for electrochemical sensors: Moving forward to the future of analytical chemistry, Analyst, 2021, 146(21), 6351–6364,  10.1039/D1AN01148K.
  90. A. M. Fenton Jr and F. R. Brushett, Using voltammetry augmented with physics-based modeling and Bayesian hypothesis testing to identify analytes in electrolyte solutions, J. Electroanal. Chem., 2022, 904, 115751,  DOI:10.1016/j.jelechem.2021.115751.
  91. J. M. Díaz-Cruz, M. Esteban and C. Ariño, Chemometrics in electroanalysis, Springer, Cham, Switzerland, 1st edn, 2019, p. 202.
  92. J. A. G. Torres, S. H. Lau, P. Anchuri, J. M. Stevens, J. E. Tabora, J. Li, A. Borovika, R. P. Adams and A. G. Doyle, A multi-objective active learning platform and web app for reaction optimization, J. Am. Chem. Soc., 2022, 144(43), 19999–20007,  DOI:10.1021/jacs.2c08592.
  93. S. Pruksawan, G. Lambard, S. Samitsu, K. Sodeyama and M. Naito, Prediction and optimization of epoxy adhesive strength from a small dataset through active learning, Sci. Technol. Adv. Mater., 2019, 20(1), 1010–1021,  DOI:10.1080/14686996.2019.1673670.
  94. M. Chesler and K. Kaila, Modulation of pH by neuronal activity, Trends Neurosci., 1992, 15(10), 396–402,  DOI:10.1016/0166-2236(92)90191-A.
  95. J. A. Johnson, C. N. Hobbs and R. M. Wightman, Removal of differential capacitive interferences in fast-scan cyclic voltammetry, Anal. Chem., 2017, 89(11), 6166–6174,  DOI:10.1021/acs.analchem.7b01005.
  96. J. A. Johnson, N. T. Rodeberg and R. M. Wightman, Measurement of basal neurotransmitter levels using convolution-based nonfaradaic current removal, Anal. Chem., 2018, 90(12), 7181–7189,  DOI:10.1021/acs.analchem.7b04682.
  97. K. Yoshimi and A. Weitemier, Temporal differentiation of pH-dependent capacitive current from dopamine, Anal. Chem., 2014, 86(17), 8576–8584,  DOI:10.1021/ac500706m.
  98. S.-Y. Tian, S.-P. Deng and Z.-X. Chen, Multifrequency large amplitude pulse voltammetry: A novel electrochemical method for electronic tongue, Sens. Actuators, B, 2007, 123(2), 1049–1056,  DOI:10.1016/j.snb.2006.11.011.
  99. P. Takmakov, M. K. Zachek, R. B. Keithley, E. S. Bucher, G. S. McCarty and R. M. Wightman, Characterization of local pH changes in brain using fast-scan cyclic voltammetry with carbon microelectrodes, Anal. Chem., 2010, 82(23), 9892–9900,  DOI:10.1021/ac102399n.
  100. T. Jo, K. Yoshimi, T. Takahashi, G. Oyama and N. Hattori, Dual use of rectangular and triangular waveforms in voltammetry using a carbon fiber microelectrode to differentiate norepinephrine from dopamine, J. Electroanal. Chem., 2017, 802, 1–7,  DOI:10.1016/j.jelechem.2017.08.037.
  101. F. Zhu, J. Yan, C. Sun, X. Zhang and B. Mao, An electrochemical method for selective detection of dopamine by depleting ascorbic acid in diffusion layer, J. Electroanal. Chem., 2010, 640(1), 51–55,  DOI:10.1016/j.jelechem.2010.01.006.
  102. K. T. Kishida, I. Saez, T. Lohrenz, M. R. Witcher, A. W. Laxton, S. B. Tatter, J. P. White, T. L. Ellis, P. E. M. Phillips and P. R. Montague, Subsecond dopamine fluctuations in human striatum encode superposed error signals about actual and counterfactual reward, Proc. Natl. Acad. Sci. U. S. A., 2016, 113(1), 200–205,  DOI:10.1073/pnas.1513619112.
  103. C. J. Meunier, G. S. McCarty and L. A. Sombers, Drift subtraction for fast-scan cyclic voltammetry using double-waveform partial-least-squares regression, Anal. Chem., 2019, 91(11), 7319–7327,  DOI:10.1021/acs.analchem.9b01083.
  104. R. K. Vasudevan, M. Ziatdinov, L. Vlcek and S. V. Kalinin, Off-the-shelf deep learning is not enough, and requires parsimony, Bayesianity, and causality, npj Comput. Mater., 2021, 7(1), 16,  DOI:10.1038/s41524-020-00487-0.
  105. M. A. Ziatdinov, A. Ghosh and S. V. Kalinin, Physics makes the difference: Bayesian optimization and active learning via augmented Gaussian process, Mach. Learn.: Sci. Technol., 2022, 3(1), 015003,  DOI:10.48550/arXiv.2108.10280.
  106. A. Jaworski, H. Wikiel and K. Wikiel, Benefiting from information-rich multi-frequency AC voltammetry coupled with chemometrics on the example of on-line monitoring of leveler component of electroplating bath, Electroanalysis, 2023, 35(1), e202200478,  DOI:10.1002/elan.202200478.
  107. P. M. Attia, A. Grover, N. Jin, K. A. Severson, T. M. Markov, Y.-H. Liao, M. H. Chen, B. Cheong, N. Perkins, Z. Yang, P. K. Herring, M. Aykol, S. J. Harris, R. D. Braatz, S. Ermon and W. C. Chueh, Closed-loop optimization of fast-charging protocols for batteries with machine learning, Nature, 2020, 578(7795), 397–402,  DOI:10.1038/s41586-020-1994-5.
  108. A. Dave, J. Mitchell, S. Burke, H. Lin, J. Whitacre and V. Viswanathan, Autonomous optimization of non-aqueous li-ion battery electrolytes via robotic experimentation and machine learning coupling, Nat. Commun., 2022, 13(1), 5454,  DOI:10.1038/s41467-022-32938-1.
  109. C. Zhao, Q. Liu, K. M. Cheung, W. Liu, Q. Yang, X. Xu, T. Man, P. S. Weiss, C. Zhou and A. M. Andrews, Narrower nanoribbon biosensors fabricated by chemical lift-off lithography show higher sensitivity, ACS Nano, 2021, 15(1), 904–915,  DOI:10.1021/acsnano.0c07503.
  110. E. C. Mitchell, L. E. Dunaway, G. S. McCarty and L. A. Sombers, Spectroelectrochemical characterization of the dynamic carbon-fiber surface in response to electrochemical conditioning, Langmuir, 2017, 33(32), 7838–7846,  DOI:10.1021/acs.langmuir.7b01443.
  111. D. B. Hibbert, Experimental design in chromatography: A tutorial review, J. Chromatogr. B: Anal. Technol. Biomed. Life Sci., 2012, 910, 2–13,  DOI:10.1016/j.jchromb.2012.01.020.
  112. scikit-optimize, https://scikit-optimize.github.io/stable/index.html.
  113. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss and V. Dubourg, Scikit-learn: Machine learning in Python, J. Mach. Learn. Res., 2011, 12, 2825–2830.

Footnotes

Electronic supplementary information (ESI) available. See DOI: https://doi.org/10.1039/d5dd00005j
These authors contributed equally to this work.
§ Present address: Department of Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, CA 90048.

This journal is © The Royal Society of Chemistry 2025