Spectroscopic constants from atomic properties: a machine learning approach

We present a machine-learning approach to predicting spectroscopic constants from atomic properties. After collecting spectroscopic information on diatomic molecules and generating an extensive database, we employ Gaussian process regression to identify the most efficient characterization of molecules for predicting the equilibrium distance, vibrational harmonic frequency, and dissociation energy. As a result, we show that it is possible to predict the equilibrium distance with an absolute error of 0.04 Å and the vibrational harmonic frequency with an absolute error of 36 cm⁻¹ using only atomic properties. These results improve when prior information on molecular properties is included, leading to absolute errors of 0.02 Å and 28 cm⁻¹ for the equilibrium distance and vibrational harmonic frequency, respectively. In contrast, the dissociation energy is predicted with an absolute error ≲ 0.4 eV. Alongside these results, we show that it is possible to predict the spectroscopic constants of homonuclear molecules from the atomic and molecular properties of heteronuclear ones. Finally, based on our results, we present a new way to classify diatomic molecules beyond chemical bond properties.


Introduction
Since the beginning of molecular spectroscopy in the 1920s, the relationship between the spectroscopic constants of diatomic molecules has been an intriguing and captivating matter in chemical physics. Following early attempts by Kratzer, Birge, and Mecke 1-3, Morse proposed a relationship between the equilibrium distance, R_e, and the harmonic vibrational frequency, ω_e, as R_e² ω_e = γ, where γ is a constant, after analyzing the spectral properties of 16 diatomic molecules 4. However, as more spectroscopic data became available, further examination of the Morse relation revealed its applicability to only a tiny number of diatomic molecules 5. Next, in a series of papers, Clark et al. generalized Morse's idea via the concept of a periodic table of diatomic molecules. Eventually, Clark's efforts translated into several relations, each limited to a specific class of molecules 5-8. Simultaneously, Badger proposed a neater relationship involving atomic properties of the atoms constituting the molecule 9. Following Badger's proposal, multiple authors have found new relations, which have seen some utility even for polyatomic molecules 10-12. Nevertheless, Badger's relations are not generalizable to all diatomic molecules 13-15. In general, several empirical relationships between R_e and ω_e were proposed in the 1930s and 1940s 7,8,16-25. In summary, from 1920 until now, around 70 empirical relations have been published, as collected by Kraka et al. 10. Most of these empirical relations were tested by several authors, who found constraints on their applicability 10,13-15,26. However, all of these relationships were based on empirical evidence rather than on a given physical or chemical principle.
On the other hand, in 1939 Newing proposed a theoretical justification for the observed empirical relationships between spectroscopic constants, of the form given by Eq. (1), where c is a constant for similar molecules, f(R_e) is some function of the equilibrium distance, and μ is the reduced mass of the molecule. In particular, Newing used Slater's application of the virial theorem, concluding that the empirical laws may be related to the existence of a universal repulsive field in diatomic molecules 27,28. Newing's theoretical justification implies that several relations of the form of Eq. (1) exist, each of which holds for a set of similar diatomic molecules; however, for any practical application of these empirical laws, the sets of similar diatomic molecules must be identified first. The approach was not viable because "similarity" was never precisely defined.
In the 1960s and 1970s, a number of authors invoked the virial theorem, perturbation theory, and the Hellmann-Feynman theorem 29-32 to develop a better understanding of the nature of the relationship between R_e and ω_e via electron densities 33-40. Most notably, Anderson and Parr were able to establish a relationship, Eq. (2), between R_e, ω_e, and the atomic numbers Z_1 and Z_2, where μ is the reduced mass of the molecule, and A and ξ (the electron density decay constant) are fitting parameters. Further, assuming that R_e is given by the sum of the atomic radii of the constituent atoms and following simple arguments based on the electron density function, one arrives at Eq. (3), where the electron density is assumed to have a decay constant ξ′ and B is a fitting parameter. Combining Eqs. (2) and (3) yields Eq. (4), with C = 4πAB^(1+η) and η = (ξ′ − ξ)/ξ′. Anderson and Parr found that taking C, ξ, and ξ′ as functions of the groups and periods of the constituent atoms results in better fits 38,40. They tested their relationships against 186 molecules and found reasonable agreement with experimental values. Recently, Liu et al. tested Eqs. (2) and (3) against an extended data set of 256 molecules, finding that these relationships lead to errors ≳ 10% upon adding more data 26. Therefore, these relationships are not universal, and further study is required to elucidate proper ones. Nevertheless, the pioneering work of Anderson, Parr, and others provided, for the first time, theoretically motivated relationships between spectroscopic constants. Most significantly, their work pointed toward a possible direct connection between a diatomic molecule's spectroscopic properties and the atomic properties and periodic-table positions of its constituent atoms.
Alongside these developments, several authors attempted to connect the dissociation energy, D_0^0, with ω_e and R_e of diatomic molecules 19,41-45. However, these attempts received little attention owing to the lack of reliable experimental data 9,41,46-48. Most of the proposed relationships are of the form given by Eq. (5), where A′ and l are constants depending on the form and parameterization of the potential energy function that describes the molecule. For instance, Sutherland found that taking A′ as a function of groups and periods yields better results 19,41.
Thanks to machine learning (ML) techniques and the development of extensive spectroscopic databases 49, it has become possible to study the relationship between spectroscopic constants from a heuristic, data-driven perspective 26, to find optimal potentials based on spectroscopic data 50, and to improve ab initio potentials to match experimental observations 51. In particular, Gaussian process regression (GPR) models have been used on a large data set of 256 heteronuclear diatomic molecules. As a result, it was possible to predict R_e from the atomic properties of the constituent atoms. Similarly, ω_e and the binding energy D_0^0 were predicted using combinations of atomic and molecular properties. However, the work of Liu et al. only studied heteronuclear molecules; hence, the universality of the relationships between spectroscopic constants remains an open question. On the other hand, ML techniques can be used to extend density functional theory approaches to larger systems at low computational cost 52-55. Hence, ML techniques are used to enlarge the capabilities of quantum chemistry methods. However, if sufficient data and information are available, could ML substitute for quantum chemistry methods?
In this work, we present a novel study of the relationship between spectroscopic constants via ML models, including homonuclear molecules in a data set of 339 molecules: the largest data set of diatomics ever used. First, we show that it is possible to predict R_e and ω_e with mean absolute errors of ∼0.026 Å and ∼26 cm⁻¹, a twofold improvement in predictive power and accuracy with respect to previous ML models. Furthermore, the dissociation energy, D_0^0, is predicted with a mean absolute error of ∼0.4 eV, in accordance with previous ML models; however, our model can benefit from a more accurate and extensive database. Second, we show that it is possible to accurately predict the molecular properties of homonuclear molecules from heteronuclear ones. Finally, since we use the same ML model throughout this work, we are in a unique position to define similarity among molecules; thus, we propose a data-driven classification of molecules. The paper is organized as follows: in Section 2, we introduce the database and analyze its main properties; in Section 3, we present the ML models with particular emphasis on Gaussian process regression; in Section 4, we present our results; and in Section 5, the conclusions.

The data set
In this work, we extend the data set of Liu et al. 26,49 by adding the ground-state spectroscopic constants of 32 homonuclear and 54 additional heteronuclear diatomic molecules from Refs. The data set counts 339 entries based on experimentally determined spectroscopic constants: R_e is available for 339 molecules, ω_e for 327, and D_0^0 for 249.
To assess the variation of the spectroscopic constants in the data set, we display histograms and box plots of R_e, ω_e, and D_0^0 in Fig. 1. The figure shows that each spectroscopic constant's histogram is nearly unimodal. However, R_e and ω_e show heavy-tailed distributions. In the case of R_e, the tail is due to the presence of van der Waals molecules, whereas light molecules are responsible for the tail in the histogram of ω_e. The box plot of D_0^0, in contrast, shows almost no outliers, and only an incipient peak at binding energies smaller than 0.75 eV due to the presence of van der Waals molecules. Next, we investigate the relationship between pairs of spectroscopic constants in panels (d)-(f) of Fig. 1. Panel (d) displays R_e versus ω_e, showing an exponential trend similar to the one suggested by Eq. (2), or a power law (Morse relationship). In contrast, plotting R_e versus D_0^0 and D_0^0 versus ω_e in panels (e) and (f), respectively, reveals a large dispersion of the points, with no discernible trend in either panel.
Next, we analyze the chemical properties of the molecules under consideration employing the van Arkel-Ketelaar triangle (also known as the Jensen triangle), which qualitatively separates covalent, ionic, and van der Waals molecules. The triangle plots the absolute value of the electronegativity difference between the constituent atoms, |χ_a − χ_b|, versus their average electronegativity, (χ_a + χ_b)/2, as shown in Fig. 2, where χ_a and χ_b denote the electronegativities of the molecule's constituent atoms. The average electronegativity on the x-axis quantifies the van der Waals-covalent character of the bonding, whereas the electronegativity difference on the y-axis quantifies its ionic character. The triangle shows that the data set comprises chemically diverse diatomic molecules, with bonding characters ranging from covalent to ionic, along with many van der Waals molecules. This chemical diversity manifests itself strongly in the range of the ground-state spectroscopic constants depicted in Fig. 1.
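As an illustration of the construction above, the triangle coordinates are straightforward to compute. The following minimal Python sketch (the few Pauling electronegativity values are supplied here for illustration) places HF and Cs_2 on the triangle:

```python
# Position of a diatomic AB on the van Arkel-Ketelaar triangle:
# x = (chi_a + chi_b)/2 (average electronegativity: van der Waals <-> covalent),
# y = |chi_a - chi_b|   (electronegativity difference: ionic character).
# Pauling electronegativities for a few atoms (illustrative values).
CHI = {"H": 2.20, "F": 3.98, "Li": 0.98, "Cs": 0.79}

def triangle_coords(chi_a, chi_b):
    """Return the (x, y) coordinates of a molecule on the triangle."""
    return (chi_a + chi_b) / 2.0, abs(chi_a - chi_b)

x_hf, y_hf = triangle_coords(CHI["H"], CHI["F"])      # polar bond: large y
x_cs2, y_cs2 = triangle_coords(CHI["Cs"], CHI["Cs"])  # homonuclear: y = 0
```

Homonuclear molecules necessarily lie on the y = 0 baseline, so their bonding character is set entirely by the average electronegativity.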

The machine learning (ML) model
Machine learning (ML) is a vast discipline that utilizes data-driven algorithms to perform specific tasks (e.g., classification, regression, clustering). Among the different ML techniques, in this work we use Gaussian process regression (GPR), which is especially suitable for small data sets. This section briefly describes GPR and our methods for generating and evaluating models.

Gaussian process regression
We define our data set D = {(x_i, y_i) | i = 1, …, n}, where x_i is a feature vector of some dimension D associated with the i-th element of the data set, y_i is a scalar target label, and n is the number of observations, i.e., the number of elements in the data set. The feature vectors and corresponding labels can be grouped into the random variables X = (x_1, …, x_n) and y = (y_1, …, y_n), respectively. Here, y consists of values of the molecular property to be learned: y_i is R_e, ω_e, or D_0^0 of the i-th molecule, whereas x_i is a vector containing atomic or molecular properties of the same molecule.
We are interested in mapping features to target labels via a regression model y_i = f(x_i) + ε_i, where f(x_i) is the regression function and ε_i is additive noise. We further assume that the ε_i follow an independent, identically distributed (i.i.d.) Gaussian distribution with variance σ_n²,

ε ∼ N(0, σ_n² I),   (6)

where ε = (ε_1, …, ε_n) and I is the identity matrix.
One approach to the regression problem is to specify a functional form of f(x_i) and then set the free parameters of the regression model by fitting the data. Alternatively, one can forgo specifying a functional form of f(x_i) and instead place a prior distribution over a space of functions, inferring the posterior predictive distribution in a Bayesian non-parametric fashion. Within this group of methods, we find Gaussian process regression (GPR), which assumes a Gaussian process prior GP over the space of functions 127,128.
A Gaussian process is a collection of random variables, any finite number of which have a joint Gaussian distribution. A Gaussian process is specified by a mean function m(x) and a covariance function (kernel) k(x, x′), both of which we describe shortly. The posterior distribution of the value of f(x*) at some point of interest, x*, is determined through Bayes' theorem as

f(x*) | X, y, x* ∼ N(μ*, Σ*),

with

μ* = m(x*) + k*ᵀ (K + σ_n² I)⁻¹ (y − m(X)),   Σ* = k(x*, x*) − k*ᵀ (K + σ_n² I)⁻¹ k*,

where K is the n × n matrix with entries K_ij = k(x_i, x_j) and k* is the vector with entries k(x_i, x*). The mean of the resulting predictive posterior distribution, μ*, is used as a point estimate of the value of f(x*), and its covariance Σ* provides a confidence interval.
In GPR, the regression model is completely specified by the kernel k(x, x′). The kernel is a similarity measure that specifies the correlation between a pair of values f(x) and f(x′) using only the distance between the feature vectors x and x′ as its input. By specifying a kernel, we encode high-level structural assumptions (e.g., smoothness, periodicity) about the regression function. Here, we focus on the Matérn class of kernels, given by

k(x_p, x_q) = σ_f² [2^(1−ν)/Γ(ν)] (√(2ν) d(x_p, x_q)/l)^ν K_ν(√(2ν) d(x_p, x_q)/l) + σ_n² δ_pq,   (10)

where d(x_p, x_q) is the Euclidean distance between x_p and x_q, K_ν(z) is the modified Bessel function of the second kind of order ν and argument z, Γ(x) is the Euler gamma function of argument x, l is the characteristic length scale, σ_f² is the signal variance, and δ_pq is the Kronecker delta. The order ν controls the smoothness of the process: for ν = 1/2 the process is continuous but not differentiable, whereas in the limit ν → ∞ it is infinitely differentiable, yielding the so-called radial basis function (RBF) kernel. Values of ν suitable for regression applications are 1/2, 3/2, 5/2, and ∞ 127.
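As a concrete illustration of a Matérn-kernel GPR of the kind described above, the following sketch uses scikit-learn on toy data; the kernel settings and the synthetic (feature, target) pairs are illustrative and are not the models or data of this work:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, Matern, WhiteKernel

# Toy data standing in for (feature, target) pairs such as (R_e, omega_e).
rng = np.random.default_rng(0)
X = rng.uniform(1.0, 4.0, size=(40, 1))
y = 3000.0 * np.exp(-X[:, 0]) + rng.normal(0.0, 5.0, size=40)

# sigma_f^2 * Matern(nu=3/2), plus a white-noise term playing the role of
# the sigma_n^2 * delta_pq contribution in the kernel above.
kernel = ConstantKernel(1.0) * Matern(length_scale=1.0, nu=1.5) + WhiteKernel(1.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Posterior mean (point estimate) and standard deviation (confidence interval).
mu, sigma = gpr.predict(np.array([[2.0]]), return_std=True)
```

Fitting maximizes the log-marginal likelihood over (σ_f, l, σ_n), and `return_std=True` exposes the posterior uncertainty used for the confidence intervals discussed above.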
We can encode a physical model via the relationships between spectroscopic constants by specifying the Gaussian process prior mean function m(x). A common choice is m(x) = 0. This choice is satisfactory in most cases, especially for interpolation tasks. However, selecting an appropriate prior mean function can simplify the learning process, delivering better results with fewer data. The mean function can also guide the model toward better predictions as k(x_p, x_q) → 0, which is necessary for extrapolation and for interpolation among sparse data points. Furthermore, a model with a specified mean function is more interpretable.
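scikit-learn's GaussianProcessRegressor does not accept an arbitrary prior mean function directly; one common workaround, sketched below on synthetic data, is to fit a linear model as m(x) and run a zero-mean GP on the residuals. This is only an illustration of the idea, not the implementation used in this work:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

def fit_gpr_with_linear_prior(X, y):
    """Fit a linear prior mean m(x) by least squares, then a zero-mean GP
    on the residuals y - m(X); predictions add the two parts back together."""
    prior = LinearRegression().fit(X, y)
    gp = GaussianProcessRegressor(kernel=Matern(nu=1.5) + WhiteKernel())
    gp.fit(X, y - prior.predict(X))

    def predict(X_new):
        mu_gp, std = gp.predict(X_new, return_std=True)
        return prior.predict(X_new) + mu_gp, std

    return predict

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
y = 1.5 * X[:, 0] - 0.7 * X[:, 1] + 0.05 * rng.normal(size=50)
predict = fit_gpr_with_linear_prior(X, y)
mu, std = predict(np.array([[0.5, -0.5]]))  # true noiseless value: 1.1
```

Far from the training data, the GP correction decays to zero and the prediction falls back on the linear prior mean, which is precisely the guidance mechanism described above.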

Model development and performance evaluation
A GPR model is characterized by its parameters and hyperparameters. The parameters comprise (σ_n, l, σ_f) of Eq. (10), plus additional parameters depending on the form of the prior mean function. The hyperparameters, by contrast, involve the selection of features, the form of the prior mean function, and the order ν of the Matérn kernel. To determine the parameters and hyperparameters, we divide the data set D into two subsets: D_tv and D_test. First, D_tv is used for the training and validation stage, in which we determine the model's hyperparameters. D_test, known as the test set, is held out for final testing and evaluation and plays no part in determining either the parameters or the hyperparameters of the model.
To design a model, we choose an X suitable for learning y through a GPR. We then choose a convenient prior mean function m(X) based on physical intuition. The last hyperparameter, ν ∈ {1/2, 3/2, 5/2, ∞}, is determined by running four models, one for each possible value of ν, and choosing the one that performs best in validation. Specifically, a cross-validation (CV) scheme is used to evaluate the performance of each model iteratively: we split D_tv into a training set D_train (∼90% of D_tv) and a validation set D_valid. We use D_train to fit the model and determine its parameters by maximizing the log-marginal likelihood. The fitted model is then used to predict the target labels of D_valid. We repeat the process with a different split in each iteration, such that each element of D_tv is sampled at least once in both D_train and D_valid. After many iterations, we determine the average performance of each of the four models, compare them, and set ν to the value used by the best-performing model.
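The selection of ν by cross-validation can be sketched as follows; this is a toy example with scikit-learn, where the data, the number of splits, and the scorer are illustrative rather than those of the actual models:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from sklearn.model_selection import ShuffleSplit, cross_val_score

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 3.0, size=(60, 2))
y = np.sin(X[:, 0]) + 0.3 * X[:, 1] + 0.02 * rng.normal(size=60)

# Monte Carlo-style splits: 20 random 90%/10% train/validation divisions.
cv = ShuffleSplit(n_splits=20, test_size=0.1, random_state=0)

scores = {}
for nu in (0.5, 1.5, 2.5, np.inf):  # the four candidate Matern orders
    gpr = GaussianProcessRegressor(kernel=Matern(nu=nu), normalize_y=True)
    rmse = -cross_val_score(gpr, X, y, cv=cv,
                            scoring="neg_root_mean_squared_error")
    scores[nu] = rmse.mean()        # average validation RMSE for this nu

best_nu = min(scores, key=scores.get)
```

Each candidate ν is scored by its average validation RMSE over the random splits, and the smoothest-performing order is retained.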
We adopt a Monte Carlo (MC) splitting scheme to generate the CV splits. Using the MC splitting scheme, we expose the models to various data compositions and can thus make more confident judgments about our models' performance and generality 26. To generate a single split, we use stratified sampling 128,129. First, we stratify the training set into smaller strata based on the target label, such that the molecules in each stratum have values of the spectroscopic constant of interest within certain lower and upper bounds. Then, we sample the validation set so that each stratum is represented. Stratified sampling minimizes the change in the proportions of the data set composition upon MC splitting, ensuring that the trained model can make predictions over the full range of the target variable. Using the Monte Carlo splitting scheme with cross-validation (MC-CV) allows our models to train on D_tv in full, as well as to make predictions for each molecule in D_tv. In each iteration, D_valid simulates the test set; thus, by the end of the MC-CV process, it provides an evaluation of the model performance against ∼90% of the molecules in the data set before the final testing stage.
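A minimal sketch of one stratified MC split, assuming quantile-based strata and a NumPy implementation (the bin count and validation fraction are illustrative choices, not the ones used in this work):

```python
import numpy as np

def stratified_mc_split(y, n_bins=5, valid_frac=0.1, rng=None):
    """One Monte Carlo train/validation split stratified on the target:
    bin y into quantile strata, then draw the validation set so that
    every stratum is represented in proportion to its size."""
    rng = np.random.default_rng(rng)
    edges = np.quantile(y, np.linspace(0.0, 1.0, n_bins + 1))
    strata = np.clip(np.searchsorted(edges, y, side="right") - 1, 0, n_bins - 1)
    valid_idx = []
    for s in range(n_bins):
        members = np.flatnonzero(strata == s)
        k = max(1, int(round(valid_frac * len(members))))  # >= 1 per stratum
        valid_idx.extend(rng.choice(members, size=k, replace=False))
    valid_idx = np.array(valid_idx)
    train_idx = np.setdiff1d(np.arange(len(y)), valid_idx)
    return train_idx, valid_idx

y = np.random.default_rng(3).exponential(size=100)  # heavy-tailed toy target
train_idx, valid_idx = stratified_mc_split(y, rng=0)
```

Because every stratum contributes to D_valid, the heavy tails seen in the R_e and ω_e distributions are represented in each split rather than being sampled out by chance.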
We use 1000 MC-CV iterations to evaluate each model's performance. Two estimators quantify the models' performance at each iteration, the mean absolute error (MAE) and the root mean squared error (RMSE), given by

MAE = (1/N) Σ_{i=1}^{N} |y_i* − y_i|,   RMSE = [(1/N) Σ_{i=1}^{N} (y_i* − y_i)²]^{1/2},

where y_i* and y_i are the true and predicted values, respectively, and N is the number of observations under consideration. We report the final training/validation MAE and RMSE together with the sample standard deviation (STD) and the standard error of the mean (SEM), given by

STD(MAE) = [(1/(M − 1)) Σ_{i=1}^{M} (MAE_i − ⟨MAE⟩)²]^{1/2},   SEM(MAE) = STD(MAE)/√M,

where M is the number of MC-CV iterations, ⟨MAE⟩ is the mean of the MAE_i, and MAE_i (RMSE_i) is the MAE (RMSE) of the i-th MC-CV iteration; the corresponding expressions hold for the RMSE.
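For concreteness, the four estimators can be implemented in a few lines; the two-iteration example at the end is purely illustrative:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error over N observations."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def rmse(y_true, y_pred):
    """Root mean squared error over N observations."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def std_sem(per_iteration_errors):
    """Sample standard deviation and standard error of the mean over the
    M per-iteration errors (MAE_i or RMSE_i) collected during MC-CV."""
    e = np.asarray(per_iteration_errors, dtype=float)
    std = float(e.std(ddof=1))      # sample STD (divides by M - 1)
    return std, std / np.sqrt(len(e))

errors = [mae([1.0, 2.0], [1.1, 1.9]), mae([1.0, 2.0], [1.3, 2.3])]
std, sem = std_sem(errors)
```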
We use learning curves to evaluate the performance of the models as a function of the size of D_train, using 500 CV splits at each training set size. The validation and training RMSE ± 0.5 STD(RMSE) are plotted as a function of the size of D_train. The models with the lowest validation MAE, RMSE, and SEM are selected for the testing stage, in which we fit them to D_tv and predict the target labels of D_test. Finally, we report the validation MAE ± SEM(MAE) and RMSE ± SEM(RMSE), together with the test MAE and RMSE, as our final evaluation of the model.

Results and discussion
We have developed seven new models to predict R_e, ω_e, and D_0^0: r2, r3, and r4 predict R_e; w2, w3, and w4 predict ω_e; and a single model, d1, predicts D_0^0. In addition, we implemented two of the best-performing models of Liu et al. 26 (denoted r1 and w1) using our updated data set and compared them with our models. All models fall into three categories: (i) r1, r2, and w2 employ only atomic properties, as features in the kernel and as variables in the prior mean function; (ii) r3 and w3 employ atomic properties as features in the kernel but use spectroscopic data in the prior mean function; and (iii) r4, w4, and d1 include spectroscopic data both in the kernel and in the prior mean function.
In all seven newly developed models, we use the groups, g_1 and g_2, and periods, p_1 and p_2, of the molecule's constituent atoms, together with the square root of the reduced mass of the molecule, μ^{1/2}, as features. The set of properties {p_1, g_1, p_2, g_2, μ^{1/2}} therefore uniquely defines each molecule in the data set. For models in category (iii), additional spectroscopic properties are added to these five features. Furthermore, we choose the models' features and prior mean functions using physical intuition, based on the discussion in the introduction and on observations from the data (Fig. 1), and ν was set to 3/2 using the CV scheme discussed in the previous section. The models' characteristics are given in Table 1.

Table 1: Machine learning models summary. The target column indicates the molecular property to be predicted. The model column refers to the ML model used. The molecules column gives the number of molecules in the training plus validation set D_tv. Features are the atomic and molecular properties employed to characterize every data point in the kernel. Prior mean stands for the prior mean function used for each model, as indicated in the text, and ν is the order of the Matérn kernel determined via the MC-CV scheme described in Section 3.2. [Only the column headers (Target, Model, Molecules, Features, Prior mean, ν) were recoverable; the table body is not reproduced.]
For all nine implemented models, we permute the groups and periods in D_train during the training and validation stage, and in D_tv during the testing stage, to impose permutational invariance 26. That is, the models should not differentiate between x = (p_1, g_1, p_2, g_2, ...) and x′ = (p_2, g_2, p_1, g_1, ...) upon exchanging the numbering of the two atoms in a molecule. Eight of the models use linear prior mean functions, whose linear coefficients are determined by fitting the linear model to D_train in each CV iteration during the training and validation stage and to D_tv in the testing stage.
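The permutation step can be sketched as a simple data augmentation, assuming feature vectors ordered as (p_1, g_1, p_2, g_2, μ^{1/2}); the numerical values below are illustrative:

```python
import numpy as np

def augment_swap(X, y):
    """Append a copy of each feature vector (p1, g1, p2, g2, extras...) with
    the two atoms' (period, group) pairs exchanged, so that the regressor
    cannot distinguish the ordering of the atoms within a molecule."""
    X = np.asarray(X, dtype=float)
    swapped = X.copy()
    swapped[:, [0, 1, 2, 3]] = X[:, [2, 3, 0, 1]]
    return np.vstack([X, swapped]), np.concatenate([np.asarray(y), np.asarray(y)])

# A LiCs-like entry (p1, g1, p2, g2, sqrt(mu)); the sqrt(mu) value is approximate.
X = [[2.0, 1.0, 6.0, 1.0, 2.57]]
y = [3.67]
X_aug, y_aug = augment_swap(X, y)
```

Both orderings of the atoms now appear in the training data with the same target value, so the fitted kernel treats them symmetrically.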
For the sake of comparison with baseline ML models, we have implemented linear regression (LR) models based on Eqs. (3)-(5): models rlr1 and rlr2 predict R_e, wlr1 and wlr2 predict ln(ω_e), and dlr1 predicts ln(D_0^0). The same MC-CV scheme used to train the GPR models was used to train the LR models. A description of these models is given in Table 1, and a statistical summary of their performance in Table 2.

R_e
The first spectroscopic constant under consideration is the equilibrium distance, R_e. We have implemented and developed two models for predicting R_e using only atomic properties, r1 and r2, as detailed in Table 1. Model r1 is the same as in Liu et al. 26, using the groups and periods of the constituent atoms as features. Model r2 adds an extra feature related to the reduced mass of the molecule. For both models, we explicitly express the prior mean functions as linear functions of the groups and periods of the diatomic molecule's constituent atoms.
Here β_k^{r1-r2}, k ∈ {0, 1, 2}, are the linear coefficients of m_{r1-r2}. A comparison between models r1 and r2 is displayed in Fig. 3. The scatter plots show a larger dispersion of the predictions for model r1 than for model r2. Both models show the same outliers: homonuclear and van der Waals molecules. However, for model r2 the number of outliers is smaller than for model r1, and their deviation from the true line is significantly suppressed. As a result, model r2 performs substantially better, especially in predicting molecules with R_e ≥ 3 Å (mainly van der Waals molecules).

Motivated by previously proposed relationships between R_e and ln(ω_e), we introduce models r3 and r4. Model r3 employs the same features as model r2 but incorporates spectroscopic information in the prior mean function. In contrast, model r4 uses ln(ω_e) as a feature. Both models share the same prior mean function, which includes ln(ω_e) in addition to the groups and periods. The learning rate of model r3 is significantly higher than that of the other two models: using 50% of the available data is sufficient for model r3 to achieve a validation RMSE comparable to that of model r1 using 90% of the data set. Overall, models r3 and r4 improve the prediction of R_e by ∼40%, as shown in Table 2, leading to validation MAEs of 0.027 Å and 0.026 Å and validation RMSEs of 0.039 Å and 0.038 Å, respectively. In other words, models r3 and r4 are almost two times more precise in predicting R_e than previous ML-based or empirically derived predictions, and almost as accurate as state-of-the-art ab initio calculations for diatomics 130,131. Furthermore, the lower panels of Fig. 3 show converging learning curves characterized by relatively narrow gaps between validation and training curves. The decaying trend of the validation curves suggests that convergence toward lower errors is possible upon training on more extensive data sets. The training MAE of r2 is ∼6 × 10⁻³ Å, which means we might have the capacity to achieve an accuracy of ∼0.010 Å using only atomic properties. In other words, with more data, our ML models could be as accurate as ab initio quantum chemistry methods.
To highlight a few of the common outliers of the four models, we consider Li_2, B_2, LiCs, and LiCa. Models r1-r4 underestimate R_e for Li_2 by 6%-10%. For B_2, r1, r2, r3, and r4 underestimate R_e by 14%, 15%, 7%, and 8%, respectively, which could be connected to B_2 being the only homonuclear molecule from group 13 in the data set. For LiCs, the experimental value is R_e = 3.67 Å 86, while r2 predicts R_e = 3.49 ± 0.15 Å; that is, the experimental value is 1.2 standard deviations away from the mean of the predictive posterior distribution of model r2 for LiCs, and most of the theoretical R_e values of LiCs are within one standard deviation 85. For LiCa, the experimental value found by Krois

ω_e
We have implemented and developed four models to predict ω_e, as listed in Table 1. Model w1 is the best-performing model of Liu et al. 26. It is characterized by six features, including atomic and molecular properties: the groups and periods of the constituent atoms, the average group ḡ = (g_1^iso + g_2^iso)/2, and R_e⁻¹. Here g_i^iso encodes isotopic information, such that g_i^iso = 0 for deuterium, g_i^iso = −1 for tritium, and g_i^iso = g_i for every other element. The prior mean function is set to zero. Model w2, in contrast, includes only the groups and periods of the constituent atoms and the reduced mass of the molecule; its prior mean is a linear function of these features, with linear coefficients β_k^{w2}, k ∈ {0, 1, 2, 3}. Model w3 uses the same features as model w2 but includes R_e in the prior mean function. Model w4 is characterized by six features, as model w1, and uses R_e as a feature both in the kernel and in the prior mean function.
Motivated by the relationship between ω_e and R_e, both w3 and w4 use the same prior mean function, which additionally includes R_e, with linear coefficients β_k^{w3-w4}, k ∈ {0, 1, 2, 3, 4}. The inclusion of the reduced mass in models w2, w3, and w4 eliminates the need to impose isotopic information on the groups of the constituent atoms. Fig. 4 compares w1, w2, and w4 (the plots of w3 are similar to those of w4). Panel (a) shows that model w1 struggles with hydrides and with hydrogen and hydrogen fluoride isotopologues; indeed, the model significantly overestimates ω_e for H_2. On the other hand, panel (b) shows that w2 performs much better for hydrides and for hydrogen and hydrogen fluoride isotopologues. The predictions of w2 for H_2 and HF are accurate, even better than those of models w3 and w4 shown in panel (c). Panels (a) and (b) clearly show that model w2 outperforms model w1 for molecules with larger values of ω_e. Looking at the learning curves in panels (d) and (e), we see that model w2 is far more consistent than model w3, as indicated by the shading around the validation curves of both models. From Table 2, the validation SEM(RMSE) of models w1 and w2 shows that model w2 is 40% more consistent in its performance than model w1 when both are validated on the same 1000 MC-CV splits. Furthermore, the test RMSE of w2 is 20% lower than that of w1. Model w2 has lower dimensionality than model w1 and implements only atomic properties; nevertheless, it performs similarly to model w1.
From Table 2, we see that although model w3 has a test MAE almost equal to that of model w1, models w3 and w4 have validation MAEs 15%-21% lower than that of w1, indicating an overall better average performance of the newly developed models. Furthermore, w3 and w4 have validation and test RMSEs 28%-36% lower than those of w1, showing the robustness of the two new models. Panel (c) of Fig. 4 shows minimal scatter around the true line. A few hydrides, along with BN and C_2, still challenge the model; however, their absolute errors are significantly suppressed compared with w1 and w2. The validation curve of model w4 in panel (f) shows a much higher learning rate than those of w1 and w2, with a much shallower gap between the validation and training curves. Moreover, the shading around the validation curve is minimal at all training sizes. From Table 2, we see that w3 and w4 are far more consistent than w1, with STD(RMSE) values 60%-70% lower than that of w1.
On the other hand, the lower three panels in Fig. 4 show that the validation and training curves can converge toward lower error values; hence, all the models might benefit from training on a more extensive data set. The training MAEs of w1, w2, w3, and w4 range between 7 and 8 cm⁻¹, so it might be possible to reach near-spectroscopic accuracy (∼10 cm⁻¹) by training these models on larger data sets. In the case of w2, if the decaying trend of the validation curve persists upon further training, near-spectroscopic accuracy might be achieved through knowledge of atomic positions in the periodic table alone. Similarly, these models, trained on larger databases, could outperform state-of-the-art ab initio quantum chemistry methods 130,131.
We highlight some of the outliers common to several of the models. All the models overestimate ω_e for HgH by at least 12%, while for IrC, w1 and w2 overestimate ω_e by 30% and 25%, whereas w3 and w4 overestimate it by only 4% and 7%, respectively. The observed overestimation might be because HgH and IrC are the only molecules in the data set containing mercury or iridium.

D_0^0
Finally, we have developed model d1 to predict the dissociation energy, D_0^0, via ln(D_0^0), using (p_1, g_1, p_2, g_2, μ^{1/2}) as features in a Matérn 3/2 kernel and a prior mean function that employs both atomic and spectroscopic properties, with linear coefficients β_k^{d1}, k ∈ {0, 1, 2, 3, 4, 5}. The performance of the model is displayed in Fig. 5, where the scatter plot [panel (a)] shows some dispersion of the model predictions with respect to the true values. From Table 2, the validation and test errors suggest that the model is consistent and generalizable to new data, indicating that model d1 yields reasonable performance as far as ln(D_0^0) is concerned. However, converting ln(D_0^0) back to D_0^0, the errors are ∼0.4 eV ≡ 10 kcal/mol, which is significant compared with typical chemical accuracy (±1 kcal/mol). Still, as shown in panel (b) of Fig. 5, the model might benefit from training on more data, with a potential improvement of a factor of 3. On the other hand, it is possible to accurately predict bond energies in complex molecules by using intuitive chemical descriptors, as shown in Refs. 145,146, which we plan to implement in the future.
During the development of this work, we have realized that, historically, uncertainties in the experimental values of the dissociation energy restrained the development of empirical relations connecting it to other atomic and molecular properties and led several authors to focus their efforts on the ω e -R e relation 9,41,47 . More recently, Fu et al. used an ML model to predict dissociation energies of diatomic molecules, exploiting microscopic and macroscopic properties 147 . They tested their model against CO and highlighted that the reported experimental dissociation energy in the literature had increased by 100 kcal/mol over the course of 78 years, from 1936 to 2014 [147][148][149] (see Table 1 of Ref. 147 ). The data used to train model d1 are primarily collected from Huber and Herzberg's constants of diatomic molecules, first published in 1979 56 . Unlike the experimental values of R e and ω e , a significant number of D 0 0 values have been updated since 1980 48 . To name a few, MgD, MgBr, MgO, CaCl, CaO, SrI, SrO, TiS, NbO, AgF, AgBr, and BrF all have experimental values that differ by at least ±2.3 kcal/mol from their values in Huber and Herzberg 48,56 . Moreover, for some molecules, the uncertainties in the D 0 0 experimental values are not within chemical accuracy. For instance, MgH, CaCl, CaO, CaS, SrH, BaO, BaS, ScF, TiF, NbO, and BrF have uncertainties ranging from ±1 kcal/mol up to ±8 kcal/mol 48 .
Based on the previous discussion, we can connect the unsatisfactory performance of model d1, in comparison to our R e and ω e models, to noise modeling. Unlike R e and ω e , it is most likely that the uncertainties in the D 0 0 experimental values derive from various systematic effects. Therefore, modeling the errors in the D 0 0 experimental values as identically distributed, as in Eq. (6), might not be a proper treatment. Thus, to develop better models for predicting D 0 0 , more sophisticated error-modeling techniques might be required. Alternatively, gathering more reliable data with experimental uncertainties within ±1 kcal/mol might be sufficient. We are currently working on this, and the results will be published elsewhere.
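One such refinement of the error model is heteroscedastic noise: replacing the shared noise term σ n 2 I with a diagonal matrix of per-datum variances, so that molecules with large experimental uncertainty in D 0 0 are down-weighted. The sketch below is a minimal numpy illustration with two invented data points and hypothetical uncertainties; it is not the treatment used in this work.

```python
import numpy as np

def matern32(XA, XB, length=1.0):
    # Matérn nu = 3/2 kernel on rows of feature matrices
    d = np.linalg.norm(XA[:, None, :] - XB[None, :, :], axis=-1)
    s = np.sqrt(3.0) * d / length
    return (1.0 + s) * np.exp(-s)

def gpr_mean_hetero(X_tr, y_tr, X_te, sigma_i, length=1.0):
    """GPR predictive mean with per-datum noise variances sigma_i**2
    (a diagonal matrix) replacing the shared sigma_n**2 * I term."""
    K = matern32(X_tr, X_tr, length) + np.diag(sigma_i ** 2)
    K_s = matern32(X_te, X_tr, length)
    return K_s @ np.linalg.solve(K, y_tr)

# two illustrative data points: one precise, one very uncertain
X_tr = np.array([[0.0], [1.0]])
y_tr = np.array([1.0, 5.0])
sigma_i = np.array([1e-3, 10.0])   # hypothetical experimental uncertainties
pred = gpr_mean_hetero(X_tr, y_tr, X_tr, sigma_i)
```

The precise datum is reproduced almost exactly, while the prediction at the uncertain datum is strongly shrunk toward the prior, which is exactly the behaviour one wants when some D 0 0 values carry ±8 kcal/mol error bars.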

Testing ML models versus ab initio results
To further assess the accuracy of our ML models for R e and ω e , we have exposed them to molecules containing Fr. Indeed, our dataset does not contain any Fr-bearing molecule, making this the most challenging scenario for our ML models. The results of the r2 and w2 models, in comparison with state-of-the-art ab initio methods, are shown in Table 5, where it can be seen that our ML predictions agree well with the ab initio predictions. Furthermore, more data can quickly improve the ML predictions, as shown in Figs. 3 and 4. Therefore, trained on a larger dataset, ML predictions can be competitive with ab initio quantum chemistry methods.

Predicting homonuclear spectroscopic properties from heteronuclear data
To explore the capability of our models to predict the spectroscopic properties of homonuclear molecules from spectroscopic and atomic information of heteronuclear molecules, we train our models for predicting R e , ω e , and D 0 0 (r3, w4, and d1) using a special split. We fit the three models to the heteronuclear data in D tv and then make predictions for the left-out homonuclear molecules. The performance of our models is displayed in Fig. 6, where we notice an outstanding performance. Only a few outliers are observed, showing a minimal deviation from the true line. In particular, we obtain MAEs of 0.08 Å, 74 cm −1 , and 0.149 for models r3, w4, and d1, respectively. Hence, it is possible to accurately predict the spectroscopic properties of homonuclear molecules from heteronuclear data. Furthermore, our results indicate that expanding the dataset by including homonuclear molecules yields high-performing models able to predict spectroscopic properties for both heteronuclear and homonuclear molecules.
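The special split above amounts to partitioning the dataset by whether the two constituent atoms are identical, fitting on the heteronuclear part, and predicting the homonuclear part. The sketch below shows only the split logic, with toy records and a 1-nearest-neighbour stand-in for the trained models; all molecule entries and feature values are invented for illustration.

```python
import numpy as np

# toy records: (atom1, atom2, feature vector, target) -- illustrative only
molecules = [
    ("Na", "Rb", [1.0, 2.0], 3.6), ("Li", "H", [0.5, 0.1], 1.6),
    ("Na", "Na", [1.0, 1.0], 3.1), ("H", "H", [0.1, 0.1], 0.74),
    ("K", "Cs", [1.5, 2.5], 4.3), ("Rb", "Rb", [2.0, 2.0], 4.2),
]

# special split: fit on heteronuclear data, predict the homonuclear molecules
hetero = [m for m in molecules if m[0] != m[1]]
homo = [m for m in molecules if m[0] == m[1]]

X_tr = np.array([m[2] for m in hetero])
y_tr = np.array([m[3] for m in hetero])
X_te = np.array([m[2] for m in homo])

# 1-nearest-neighbour prediction as a cheap stand-in for models r3/w4/d1
d = np.linalg.norm(X_te[:, None, :] - X_tr[None, :, :], axis=-1)
pred = y_tr[np.argmin(d, axis=1)]
```

In the actual workflow, the three GPR models replace the nearest-neighbour step, and the MAE over `homo` gives the numbers quoted above.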

Towards a classification of diatomic molecules
For models r2, r3, w2, w3, and d1, we have achieved good results using a kernel common to all five models. That Matérn kernel, given by Eq. (10) with ν = 3/2, is a similarity measure. Therefore, it is possible to quantify the similarity between a pair of molecules, denoted by i = p, q, given their feature vectors x i = (p i 1 , g i 1 , p i 2 , g i 2 , µ i ). The models are fitted to the whole dataset to determine the parameters (σ n , l, σ f ). The kernel given by Eq. (10) with ν = 3/2 and the determined parameters can then be used to form a similarity matrix, each element of which quantifies the similarity between a pair of molecules in the dataset. Off-diagonal elements are calculated via Eq. (10) for p ̸ = q, with the diagonal representing the similarity of the molecules with themselves (p = q). A heat map representation of the similarity matrix is given in Fig. 7, where the degree of similarity from 0 to 1 is given over a grayscale, as indicated by the color bar on the right side of the figure.
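Computing such a similarity matrix is straightforward once the feature vectors and the kernel length scale are fixed. The sketch below evaluates the (normalized) Matérn 3/2 kernel on a few invented feature vectors; the feature values and the length scale are placeholders, not the fitted parameters of this work.

```python
import numpy as np

def matern32_similarity(X, length=1.0):
    """Similarity matrix S with S[p, q] = (1 + sqrt(3) d/l) exp(-sqrt(3) d/l),
    where d is the Euclidean distance between feature vectors.
    The diagonal (d = 0) equals 1, i.e. every molecule is identical to itself."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    s = np.sqrt(3.0) * d / length
    return (1.0 + s) * np.exp(-s)

# toy feature vectors (p1, g1, p2, g2, sqrt(mu)) -- values purely illustrative
X = np.array([
    [3.0, 1.0, 5.0, 1.0, 2.1],    # an alkali-alkali-like molecule
    [3.0, 2.0, 5.0, 2.0, 2.3],    # a similar molecule, nearby in feature space
    [5.0, 18.0, 5.0, 18.0, 8.1],  # a noble-gas-dimer-like molecule, far away
])
S = matern32_similarity(X, length=5.0)
```

Rendering `S` as a grayscale image reproduces the kind of heat map shown in Fig. 7: nearby feature vectors give entries close to 1, distant ones give entries close to 0.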
To further explore the quantified similarity among molecules, we consider three subsets of molecules and show their heat maps in the upper panels of Fig. 8. The lower panels of Fig. 8 show the corresponding network representation of the similarity among these subsets of molecules. Black squares in the heat map plots of Fig. 8 indicate that a pair of molecules is highly similar, whereas white squares indicate 0% similarity. In the network representation, each molecule is a node, and the similarity between two molecules is shown by a line joining their corresponding nodes. The networks show similarities above the 80% level: a line joins two nodes only if they are at least 80% similar. The length of a joining line indicates the degree of similarity above that level; a short line indicates a high degree of similarity, and a long line a lower one.
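The network construction described above reduces to thresholding the similarity matrix: nodes are molecules, and an edge is kept only where the similarity is at least 0.8. A minimal sketch, with an invented similarity matrix in place of the fitted one:

```python
import numpy as np

def similarity_network(S, names, threshold=0.8):
    """Return the edges (name_i, name_j, similarity) of the network whose
    nodes are molecules and whose edges link pairs at or above the threshold."""
    edges = []
    n = len(names)
    for i in range(n):
        for j in range(i + 1, n):
            if S[i, j] >= threshold:
                edges.append((names[i], names[j], float(S[i, j])))
    return edges

# toy symmetric similarity matrix -- values purely illustrative
names = ["Xe2", "Kr2", "NaRb", "VF"]
S = np.array([
    [1.00, 0.92, 0.30, 0.10],
    [0.92, 1.00, 0.35, 0.12],
    [0.30, 0.35, 1.00, 0.25],
    [0.10, 0.12, 0.25, 1.00],
])
edges = similarity_network(S, names)
```

In this toy example, only the noble-gas pair survives the 80% cut, which is the kind of isolated cluster visible in the network panels of Fig. 8.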
From panel (a) of Fig. 8, we see noble gas dimers clustering around Xe 2 , and alkali-metal and alkaline-earth-metal molecules clustering around NaRb. Both clusters are isolated from each other and from VF, indicating a lower degree of similarity between these clusters and VF. A similar scenario is observed in panel (b), where alkaline earth metal hydrides cluster together with tight interconnections, indicating high similarity. On the other hand, ZnH is remotely connected to the cluster, indicating a lower degree of similarity. The upper right cluster shows an interconnection among diatomics of reactive nonmetals, including halides and oxygen; notably, AgAl is connected to these molecules. Panel (c) displays a more involved clustering scheme, involving transition metal hydrides (MnH and AgH) connected to metalloid hydrides (TlH and InH) and, to a lower degree, to alkaline earth metal hydrides (LiH and BeH). The right-hand side cluster consists of various transition metal diatomics, dihalides, and others, all closely related except for MgS. Note that all the molecules in the right-hand side cluster consist of atoms from the right side of the periodic table, while MgS combines one atom from group 2 and one from group 16. Notably, homonuclear and heteronuclear molecules are firmly embedded within all the clusters, emphasizing the importance of including homonuclear data in our models. Since only atomic properties are required to compute the elements of the matrix representation of the kernel, the similarity matrix can guide our data-gathering efforts. For example, we can determine which molecules can fill the gaps and connect clusters to build more robust models. More interestingly, we can systematically classify molecules based on the similarity matrix. Such a classification would help develop potential energy surfaces (PES) for diatomic molecules. As pointed out by Newing, similar molecules will have similar potential energy surfaces, at least around R e 28 .

Summary and Conclusion
In this work, first, we have extended the previous database of Liu et al. 26 , gathering ground state spectroscopic data of 86 homonuclear and heteronuclear molecules, leading to a dataset of 339 molecules. Next, the database has been used to train 9 ML models to predict the main spectroscopic constants: R e , ω e , and D 0 0 . These models fall into three categories:
• Models in category (i) only employ information from the periodic table and thus can predict spectroscopic properties of any combination of two elements. These models can be used to systematically classify molecules made up of any two elements in the periodic table (section 4.6). While spectroscopic data availability does not limit these models' ability to predict spectroscopic constants of any molecule, it affects the models' accuracy. These models are characterized by a relatively larger gap between validation and learning curves than models in categories (ii) and (iii). Thus, we would expect a better performance of category (i) models upon training on larger datasets.
• Models in category (ii) use spectroscopic information only in their mean function, not in the kernel, and are robust against noise in the input variables. In this case, since the mean function is linear, we can apply standard errors-in-variables methods 150 . This might be advantageous if we would like to train our models on uncertain experimental data or on predictions from category (i) models or ab initio methods.
• Models in category (iii) include our most flexible, accurate, and consistent models (r4 and w4). These models benefit from a high learning rate and a narrow gap between validation and learning curves. Apart from their outstanding performance, we can train these models on ground and excited states simultaneously, since each state is labeled by its spectroscopic constant values R e or ω e along with the other properties that define the molecule, {p 1 ,g 1 ,p 2 ,g 2 ,µ 1/2 }.
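The errors-in-variables idea mentioned for category (ii) can be illustrated with total least squares, which accounts for noise in both the inputs and the outputs of a linear fit (here via the SVD of the centered data matrix). This is a minimal sketch on synthetic data, not the implementation used in this work.

```python
import numpy as np

rng = np.random.default_rng(1)

def tls_line(x, y):
    """Total-least-squares straight-line fit: minimizes perpendicular
    distances, appropriate when both x and y carry measurement error."""
    xm, ym = x.mean(), y.mean()
    A = np.column_stack([x - xm, y - ym])
    _, _, Vt = np.linalg.svd(A)
    nx, ny = Vt[-1]              # normal vector of the best-fit line
    slope = -nx / ny
    intercept = ym - slope * xm
    return slope, intercept

# synthetic line y = 2 x + 1 observed with noise in both variables
x_true = np.linspace(0.0, 10.0, 50)
y_true = 2.0 * x_true + 1.0
x_obs = x_true + 0.1 * rng.standard_normal(50)
y_obs = y_true + 0.1 * rng.standard_normal(50)
slope, intercept = tls_line(x_obs, y_obs)
```

Unlike ordinary least squares, which attributes all noise to y and biases the slope toward zero when x is noisy, the total-least-squares estimate remains close to the true coefficients here.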
In summary, the newly developed models in this work show outstanding performance in all metrics in comparison to previous ML models and other empirical and semiempirical models, with mean absolute errors ranging between 0.02 Å and 0.04 Å for R e and 26 cm −1 to 40 cm −1 for ω e . We have been able to predict homonuclear spectroscopic properties with good accuracy upon training our models on heteronuclear molecules' data. Indeed, our models are almost as accurate as the state-of-the-art ab initio methods for diatomics 130,131 .

Fig. 7 A heat map quantifying the degree of similarity among molecules in the dataset from 0 (white, not similar) to 1 (black, identical) on a grayscale. The heat map was generated by computing the elements of a similarity matrix. Each matrix element quantifies the similarity between a pair of molecules p (on the x-axis) and q (on the y-axis) via Eq. (10) with ν = 3/2 and parameters determined as described in the text.
On the other hand, since we use the same kernel for all models under consideration, we are uniquely positioned to explore a classification of diatomic molecules beyond the traditional one based on the nature of the chemical bond. We expect such a classification to enhance the performance and facilitate the development of ML models predicting spectroscopic and molecular properties of diatomic molecules. Further, the classification of diatomic molecules should help in developing potential energy surfaces (PES).
Finally, we have shown that for molecules with large ionic character and containing heavy atoms (e.g., LiCs, LiCa, AuF, and ZnBr), our predictions are comparable to DFT and even to state-of-the-art ab initio methods. Moreover, two of our models (r2 and w2) offer a promising opportunity to predict spectroscopic properties from atomic positions in the periodic table with high accuracy. This is a stepping stone toward closing the gap between atomic and molecular information; more spectroscopic data is required to do so. More extensive, open, and user-friendly data will help data-driven science to understand the chemical bonding and spectroscopy of small molecules. Indeed, that is something that we are currently working on in our group: we need more spectroscopic data in the big data era.

Fig. 1
Fig. 1 The dataset of diatomic molecules' ground state spectroscopic constants. Panels (a)-(c) display the distribution of the main spectroscopic constants in the dataset (R e , ω e , and D 0 0 ) via a histogram and a box plot (at the top) for each. Panels (d)-(f) show the relationships between different spectroscopic constants of the molecules in the database.

Fig. 2
Fig. 2 Arkel-Ketelaar triangle of the dataset. The average electronegativity of the constituent atoms is on the x-axis; the difference in electronegativity of the constituent atoms, which correlates with the ionic character, is on the y-axis. The color of each circle indicates the ionic character of the corresponding molecule following the color bar on the right of the figure. The size of the circles differentiates between covalent (smaller circles) and van der Waals (larger circles) molecules, as illustrated at the top of the figure.
Fig. 3. Panel (c) shows a minimal scatter around the true line. The error bars are suppressed compared with panels (a) and (b) of the same figure, indicating a higher confidence level of the model's predictions. The validation curve in panel (f) shows that the learning rate of model r3 is significantly higher than that of the other two models. Using 50% of the available data is sufficient for model r3 to achieve a validation RMSE comparable to model r1 using 90% of the dataset. Overall, models r3 and r4 show an improvement of ∼ 40% in the prediction of R e , as shown in Table 2, leading to validation MAEs of 0.027 Å and 0.026 Å and validation RMSEs of 0.039 Å and 0.038 Å, respectively. In other words, models r3 and r4 are almost two times more precise in predicting R e than previous ML-based or empirically derived predictions, and almost as accurate as state-of-the-art ab initio calculations for diatomics 130,131 . Furthermore, the lower panels of Fig. 3 show converging learning curves characterized by relatively narrow gaps between validation and training curves. The decaying trend of the validation curves suggests that convergence toward lower error levels is possible upon further training on more extensive datasets. The training MAE of r2 is ∼ 6 × 10 −3 Å; this means that we might have the capacity to achieve an accuracy

Fig. 3
Fig. 3 The upper row shows scatter plots of experimental values of R e on the x-axis and predicted R e on the y-axis via models (a) r1, (b) r2, and (c) r3; points and error bars represent the means and standard deviations of the predictive distributions, respectively, after averaging over 1000 MC-CV steps. The lower row shows the learning curves of models (d) r1, (e) r2, and (f) r3. Points and the shade around them represent the RMSE and ± 0.5 STD(RMSE) over 500 MC-CV splits.

Fig. 4
Fig. 4 The upper row shows scatter plots of experimental values of ω e on the x-axis and predicted ω e on the y-axis via models (a) w1, (b) w2, and (c) w4; points and error bars represent the means and standard deviations of the predictive distributions, respectively, after averaging over 1000 MC-CV steps. The lower row shows the learning curves of models (d) w1, (e) w2, and (f) w4. Points and the shade around them represent the RMSE and ± 0.5 STD(RMSE) over 500 MC-CV splits.


Fig. 5
Fig. 5 (a) Scatter plot of experimental values of − ln(D 0 0 ) on the x-axis and predicted − ln(D 0 0 ) on the y-axis via model d1; points and error bars represent the means and standard deviations of the predictive distributions, respectively, after averaging over 1000 MC-CV steps. (b) Learning curves of model d1. Points and the shade around them represent the RMSE and ± 0.5 STD(RMSE) over 500 MC-CV splits.

Fig. 6
Fig. 6 Scatter plots of models predicting R e , ω e , and − ln(D 0 0 ) for homonuclear molecules from heteronuclear molecules' data: (a) r3, (b) w4, and (c) d1. Points and error bars represent the means and standard deviations of the predictive distributions, respectively.

Fig. 8
Fig. 8 Heat maps representing similarities among subsets of molecules (upper row) and their corresponding network representations (lower row). The color bar (top right) quantifies the similarity between a pair of molecules from 0 (white, not similar) to 1 (black, identical) on a grayscale. The network representations show similarities above the 80% (0.8) level. Each node represents a molecule. Short lines joining two nodes represent a high degree of similarity, while longer lines represent a lower degree of similarity above 80%. The absence of a line indicates a similarity below 80%.

Table 2
Statistical summary of the performance of the GPR models using different features, kernels, and prior mean functions, as listed in Table 1. The performance of each model is evaluated using both the validation and test scores. MAE and RMSE values marked with * have an SEM ≲ 0.001 Å.

Table 3
Predictions and experimental values of R e and ω e for 25 molecules in the testing set. The Ref. column includes references for the experimental values.

Table 4
Predictions via model d1 and experimental values of D 0 0 for seven molecules in the testing set. The Ref. column includes references for the experimental values.

Table 5
R e and ω e ML predictions for molecules not contemplated in the database, compared with ab initio results taken from Ref. 131 .