Bence Paul†*ab, Joseph Petrus†b, Dany Savardc, Jon Woodheada, Janet Hergta, Alan Greiga, Chad Patond and Peter Raynera
aSchool of Geography, Earth and Atmospheric Sciences, The University of Melbourne, Parkville, Victoria, Australia. E-mail: bpaul@unimelb.edu.au; Tel: +61 3 8344 6531
bElemental Scientific Lasers, LLC, 685 Old Buffalo Trail, Bozeman, MT 59715, USA
cUniversité du Québec à Chicoutimi, Earth Science Department, LabMaTer, 555 Boul. Université, Chicoutimi, QC G7H 2B1, Canada
dChad Beer AB, Mossaledsvägen 61, 42934, Kullavik, Sweden
First published on 8th August 2023
There are many processes that affect the measured concentration of elements determined by laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS). Depending on the element of interest, a range of corrections are required to account for sensitivity drift and downhole effects, as well as other sources of inaccuracy. Here, we present a new method of calibrating LA-ICP-MS measurements that takes time-dependent sensitivity changes into consideration to produce a three-dimensional calibration surface through time. These 3D calibration surfaces result in improvements of up to 20% for some elements in our example dataset. To ensure that calibration surfaces created using multiple reference materials are not degraded by matrix effects, a median yield correction factor is determined relative to a primary reference material. We also introduce sensitivity modelling for elements without accepted values or where interferences may affect calibration. In addition to the correction of drift, we further demonstrate that a correction for downhole fractionation effects can improve the precision of spot analyses by up to 20% in our example dataset. A flexible solution to the sum normalisation approach is briefly introduced for the calibration of multi-phase samples using different calibration parameters, along with a residual correction based on compositional affinity to a reference material. Combined, these new methodologies can improve the accuracy and precision of concentration determinations by LA-ICP-MS in a wide variety of applications.
Much of the initial development of LA-ICP-MS techniques was conducted in the geosciences where glasses or relatively homogeneous mineral phases were studied, for which there are usually appropriate reference materials available. One of the advantages of these types of samples is that accuracy (herein referred to as the percentage difference between the accepted value and the measured value) of 10% or better was often achieved with a single reference material (RM) for most elements of interest. However, with an ever-expanding range of sample types and applications, this simple single RM approach may no longer be sufficient.
In particular, composition-related ablation (‘matrix’) effects which influence the amount of material ablated per pulse produce variability in inter-element ratios between sample and reference materials, limiting accuracy (e.g. ref. 6). Accounting for these factors remains a key challenge associated with the modern LA-ICP-MS technique. While it is preferable to match sample and reference material matrices as closely as possible, for several reasons this is not always possible. These include a lack of reference materials of suitable matrix; reference materials with significant uncertainties of their own and/or small-scale heterogeneity; and a wide range of elements of interest and concentrations within an experiment that require multiple calibrants to be used. For these reasons, the amount of post-analysis processing is variable depending on the application.
In this contribution, we outline a series of options for correcting these analytical effects. The approach we describe forms a data reduction scheme (titled ‘3D Trace Elements’) for the iolite data reduction software,7 although the same concepts could be applied in any software with similar functionality. We also present a new three-dimensional approach to the application of traditional calibration curves that accounts for time-dependent sensitivity drift. We introduce ‘yield normalisation’ to account for matrix effects between calibrating reference materials, and demonstrate that yield modelling, where the yields of one or more elements are used to calibrate another element, can provide accurate results under certain conditions. These calibration strategies can be combined with internal standard normalisation or sum normalisation to provide additional accuracy. Here we describe a criteria approach that extends sum normalisation to multi-phase samples. The incorporation of downhole fractionation and residual corrections, based on chemical similarity to a reference material, may also improve precision and accuracy.
Typically, for solution ICP-MS analysis, multiple calibrants are used to create a calibration curve for each element by plotting concentration along the x axis and mean observed count rate (intensity) along the y axis. A fit to these data, usually an ordinary least squares linear fit,8 is used to calculate a slope and intercept for the calibration. To calculate the concentration from this calibration curve, the following equation is commonly used:
c_sam,i = (R_sam,i − intercept_i)/slope_i | (1)

where c_sam,i is the concentration of element i in the sample, R_sam,i is the corresponding count rate,‡ and slope_i and intercept_i are the fitted calibration parameters for element i. Where a single reference material is used, the concentration can instead be calculated directly from the reference material's response:

c_sam,i = (R_sam,i/R_RM,i) × c_RM,i | (2)

where R_RM,i is the count rate for element i in the reference material and c_RM,i its accepted concentration.
The use of a calibration curve does not account for time-dependent sensitivity variation throughout an experiment, referred to as sensitivity drift, as it combines all the reference material analyses in an analytical session into a single slope and intercept for each element. Calibration curves may be recalculated throughout an experiment; however, this can produce step changes in the calibration and is in practice rarely done. Combining all RM measurements into a single calibration curve in effect averages the sensitivity drift over the course of the experiment. Conversely, the application of a spline (or similar interpolation)7,10 of the reference material's response (R_RM,i in eqn (2)) does allow for variations in sensitivity drift, but only takes a single reference material into account. As mentioned above, the use of multiple calibrants allows for a greater range of concentrations and less reliance on a single calibrant that may have its own sources of uncertainty (e.g., small-scale heterogeneities).
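As an illustration of the spline approach, the following minimal Python sketch (not the 3D Trace Elements implementation; all times, count rates and concentrations are hypothetical) interpolates a single RM's response through time and applies eqn (2) at a given sample time:

```python
# Minimal sketch of spline-based drift correction (eqn (2)).
# All values below are hypothetical.
import numpy as np
from scipy.interpolate import UnivariateSpline

rm_times = np.array([0.0, 600.0, 1200.0, 1800.0, 2400.0])         # s into session
rm_response = np.array([5000.0, 4900.0, 4750.0, 4820.0, 4700.0])  # background-subtracted cps
rm_conc = 342.0                                                   # accepted concentration (ppm)

# Smoothed spline of the RM response through time (R_RM,i in eqn (2))
drift = UnivariateSpline(rm_times, rm_response, k=3, s=len(rm_times))

def conc_single_rm(sample_time, sample_cps):
    """Eqn (2): c_sam,i = R_sam,i / R_RM,i(t) x c_RM,i."""
    return sample_cps / float(drift(sample_time)) * rm_conc

print(conc_single_rm(900.0, 2400.0))  # concentration at t = 900 s
```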
Many geoscientists use a single RM to create a calibration curve. Depending on the quality of the matrix match between the RM and the samples, this single-point calibration curve is usually sufficient for geological (and many other) applications, as demonstrated by the quality of the results for secondary reference materials processed as unknowns. The advantage of this approach is that it allows for time-resolved sensitivity drift correction, and allows other RMs measured concurrently to be used for quality control. However, there are several cases where a single-point calibration curve may not be ideal: for example, where there is significant uncertainty in the measurement of the RM due to pores or gaps in biological RMs, or where incomplete homogenisation of elements of interest in biological RMs produces noisy signals (e.g. ref. 11). One way to address this is to use more than one RM to calculate a calibration curve to reduce the influence of any one RM measurement, as is common in solution analyses. However, in LA-ICP-MS, time is an important variable and may be a proxy for sampling position (in x, y or z), and, as described above, a typical calibration curve lacks time resolution. Therefore, here we present a time-resolved approach to creating calibration curves, and these calibration surfaces can be used in subsequent corrections, as discussed below.
Additionally, not all elements of interest have published values for each RM. With a single-point calibration curve, only the elements in the primary RM can be calibrated. However, in this contribution, we introduce the concept of a ‘yield normalisation’ that allows measurements of elements common to more than one RM to be used to determine a yield factor, which can be used to normalise ablation yields between RMs.
The ablation yield of element i (y_i) is defined as

y_i = R_i/c_i | (3)

where R_i is the count rate of element i and c_i its concentration; i.e., the yield is the sensitivity in counts per unit concentration.
Subtle differences in ablation yield are common in different sample matrices. An element of known concentration in the sample, known as an ‘internal standard’ (IS), may be used to account for minor variations in ablation yield (e.g., Longerich et al.9). The use of an internal standard's ablation yield can be incorporated into eqn (2) as follows:9
c_sam,i = (R_sam,i/R_RM,i) × c_RM,i × (c_sam,IS/R_sam,IS) × (R_RM,IS/c_RM,IS) | (4)

where the subscript IS denotes the internal standard element. Expressed in terms of the ablation yields of eqn (3), this simplifies to

c_sam,i = (R_sam,i/y_RM,i) × (y_RM,IS/y_sam,IS) | (5)
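A minimal sketch of eqns (4) and (5), assuming the yield definitions above (in practice the argument values would come from measured count rates and accepted concentrations):

```python
# Minimal sketch of internal-standard normalisation (eqns (4)/(5)).
def conc_internal_std(r_sam_i, r_rm_i, c_rm_i,
                      r_sam_is, c_sam_is, r_rm_is, c_rm_is):
    """Return c_sam,i given count rates (r_*) and concentrations (c_*);
    'is' denotes the internal standard element."""
    y_rm_i = r_rm_i / c_rm_i        # RM yield for element i (eqn (3))
    y_rm_is = r_rm_is / c_rm_is     # RM yield for the internal standard
    y_sam_is = r_sam_is / c_sam_is  # sample yield for the internal standard
    return r_sam_i / y_rm_i * (y_rm_is / y_sam_is)  # eqn (5)
```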
One effect that may need to be addressed when using internal standards is downhole fractionation (DHF). Inter-element fractionation has various causes6 and occurs where the processes of ablation, transport, ionisation and detection change the observed concentration ratio between elements. The subset of those processes that can be expressed as a function of pit depth are usually collectively referred to as ‘downhole fractionation’. When using an internal standard, any downhole fractionation between the IS and the element of interest will decrease the accuracy of the measurement; this is corrected by determining the relationship between pit depth and the observed ratio between the element of interest and the internal standard. Below we present an example where correcting for DHF can improve precision and accuracy for certain elements.
Another potential approach to calibration is the use of ‘sum normalisation’ (e.g. ref. 12 and 13). Here the concentration of an element is expressed as a fraction of the sum of all elements in the sample. Generally it is assumed that all significant elements have been measured, such that the sum of all concentrations is equal to 100 wt%:
c_sam,1 + c_sam,2 + c_sam,3 + … + c_sam,i + … + c_sam,N = 100 wt% | (6)
The concentration of each element is then obtained by scaling its yield-corrected count rate so that the concentrations sum to this total:

c_sam,i = 100 wt% × (R_sam,i/y_i)/Σ_j(R_sam,j/y_j) | (7)
If the matrix is an oxide (e.g., a silicate mineral) all concentrations are expressed as oxides. If significant amounts of other anions are present (e.g., Cl, F, S or OH, which are generally not measured by ICP-MS, with the possible exception of S) the sum will not be 100 wt%. If the content of the non-oxygen anions is known, the sum can be adjusted to a value other than 100% (e.g., if it is known that the OH content is 3%, the sum of all measured elements will be 97 wt%). If the other anion(s) content is not accurately known, this will result in inaccuracies using this method. Due to the larger relative uncertainties on elements of low concentration, Liu et al.13 recommended weighting the yield normalisation factor based on the relative concentrations of each element.
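The following minimal sketch illustrates eqns (6) and (7) with optional oxide conversion and an adjustable normalisation total; the oxide factors shown are standard gravimetric conversions, while the channel layout and values are illustrative assumptions:

```python
# Minimal sketch of sum normalisation (eqns (6)/(7)).
OXIDE_FACTOR = {"Si": 2.1393, "Ca": 1.3992, "Fe": 1.2865}  # element-to-oxide weight factors

def sum_normalise(cps, yields, total_wt_pct=100.0, as_oxides=True):
    """cps and yields are dicts keyed by element; returns element wt%."""
    raw = {el: cps[el] / yields[el] for el in cps}  # proportional to concentration
    oxide_sum = sum(raw[el] * (OXIDE_FACTOR.get(el, 1.0) if as_oxides else 1.0)
                    for el in raw)
    scale = total_wt_pct / oxide_sum                # force the (oxide) total to the chosen sum
    return {el: raw[el] * scale for el in raw}
```

For a hydrous phase, `total_wt_pct` would be reduced accordingly (e.g. 97.0 for a known 3% OH content, as described above).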
The advantage of the sum normalisation approach is that it does not require an internal standard to account for minor matrix effects, avoiding the need to measure the sample by a different technique or to have some other prior knowledge of its composition. Not relying on an internal standard also means that an ablation can pass across different phases, with the normalisation process accounting for their different ablation yields. However, this is complicated when passing between, for example, an oxide and a sulfide matrix. In this case, there needs to be some way to change the normalisation values (e.g. ref. 14). Similarly, if oxidation states change between phases, the data reduction algorithm needs some way to account for this. Below we outline a criteria approach for applying sum normalisation to multi-phase samples.
In addition to the above corrections, we also introduce a residual correction based on the offset between the accepted values for a secondary (i.e. non-calibrating) RM and the average measured value.
The first example dataset is an imaging experiment “Scanlines_Example” where the following reference materials were ablated as approximately 33 s lines: NIST 610, NIST 612, BCR-2G, BHVO-2G and BIR-1G. The NIST glasses, BCR-2G and BHVO-2G were used as calibrants, and BIR-1G was interspersed with scanlines across a gabbroic thin-section. Baseline measurements of 10 s duration were determined at the end of every scanline.
A second dataset, “Spots_Example”, is included as an example of spot analyses. In this experiment, NIST 612, BHVO-2G and BCR-2G were measured as 60 s spot analyses, with BCR-2G treated as an unknown, along with spot analyses on the same gabbro as in the previous example dataset. Baselines were measured for 8 s before each spot.
In both example datasets, the results for the gabbro are not included here but served simply to emulate typical experimental conditions in terms of run duration, matrix variability and drift etc. The gabbroic sample is the same as that used in ref. 14. Basic analytical conditions for both example datasets are set out in Table 1. Raw data for these datasets, along with the processed iolite (.io4) files, are available in the ESI.†
Table 1 Analytical conditions for the two example datasets

| | Scanlines_Example | Spots_Example |
|---|---|---|
| **ICP-MS conditions (Agilent 7700× Quadrupole)** | | |
| ICP-MS forward power (W) | 1300 | 1600 |
| Reflected power (W) | 2 | 2 |
| Sample depth (mm) | 3 | 4 |
| Plasma gas (L min−1) | 15 | 15 |
| Aux gas (L min−1) | 0.9 | 0.9 |
| **Cell gas flows** | | |
| Carrier gas (Ar) (mL min−1) | 0.95 | 0.81 |
| Cell gas (He) (mL min−1) | 250 | 700 |
| Makeup gas (N2) (mL min−1) | 0 | 3 |
| **Laser settings** | | |
| Laser | Australian Scientific Instruments RESOlutionSE Compact with ATL Lasertechnik excimer laser | Resonetics-LR with Compex 110 excimer laser |
| Sample cell | Laurin Technic S155 | Laurin Technic S155 |
| Wavelength (nm) | 193 | 193 |
| Rep rate (Hz) | 10 | 5 |
| Spot size (μm) | 24 × 24 | 40 |
| Fluence (J cm−2) | 2 | 3 |
| **Calibrants** | NIST 610, NIST 612, BCR-2G, BHVO-2G | NIST 612, BHVO-2G |
| **Secondary RMs** | BIR-1G | BCR-2G |
The accepted values used in the accuracy calculations herein are from the GeoReM database15 preferred values. The uncertainties shown in the figures are those listed for the preferred values in the GeoReM database, expressed as two relative standard deviations.
When comparing data reduction parameter sets (i.e. spline type, fit method, etc.), a single value indicating whether the overall accuracy is increasing or decreasing is useful. Here we use the sum of the absolute percentage differences between the mean measured value and the accepted value for all elements measured. Although this value does not provide information about the relative improvement in accuracy for any one element, it does provide an overall, single-value indication of whether a parameter set produces results closer to the accepted values. We do not report a value such as the mean or median percentage difference as there is no expectation that the deviations will be normally distributed.
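For reference, this metric is a one-line calculation; a minimal sketch (the dictionary-keyed-by-element layout is an assumption of the illustration):

```python
# Minimal sketch of the session-level accuracy metric: the sum of absolute
# percentage differences between mean measured and accepted values.
def sum_abs_pct_diff(measured_means, accepted):
    return sum(abs(measured_means[el] - accepted[el]) / accepted[el] * 100.0
               for el in measured_means)
```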
The magnitude of the various analytical effects described herein will depend largely on the experiment and the type and nature of the calibrating reference materials used. In the case of the example datasets presented here, well characterised, homogeneous reference materials were used which are likely to show less of the negative effects described. They do, however, allow us to fully quantify the corrections without having to take into consideration the effects of inhomogeneity etc.
Even though results were obtained for 9Be, 74Ge, 95Mo, 115In, 133Cs and 209Bi in the “Scanlines_Example”, the accuracy for these elements was typically an order of magnitude worse than for the remaining elements and/or the measurements were below the limit of detection. Omitting these results does not change the overall conclusions of this contribution and so they have been excluded.
If three or more Pb isotopes have been measured, a ‘total Pb’ channel is then created. This channel is the sum of the raw counts for the individual Pb channels and is useful where sample Pb isotope ratios are significantly different from those of the reference materials. All channels are then baseline subtracted.
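A minimal sketch of this step (the channel names and dict-of-arrays layout are assumptions of the illustration, not the iolite API):

```python
# Minimal sketch of building a baseline-subtracted 'total Pb' channel.
def total_pb(raw, baseline):
    """raw and baseline are dicts of count-rate arrays keyed by channel."""
    chans = ("Pb206", "Pb207", "Pb208")
    return sum(raw[ch] for ch in chans) - sum(baseline[ch] for ch in chans)
```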
Once blocks have been identified, a calibration curve can be fitted for each element for each block. 3D Trace Elements comes with several fitting options, including ordinary least squares (OLS);8 weighted least squares (WLS);8 robust linear model (RLM);8 orthogonal distance regression (ODR);16 and the York et al. approach.17 The OLS, WLS and RLM algorithms are provided by the Python StatsModels package,8 whereas ODR is made available via SciPy18 and York is adapted from York et al.17 and the UPbPlot package.19 Additional methods can be employed by adding to the DRS script. For example, Funke et al.20 note that, due to heteroskedasticity§ in fitting a calibration curve to LA-ICP-MS results, a WLS approach can provide additional accuracy, especially if a custom weighting function is used. Although the latter is not currently implemented, it could be added by editing the provided script.
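As an example of one of these options, a minimal StatsModels WLS fit of a single block's calibration curve (the concentrations and count rates are hypothetical, and the simple 1/signal weighting is only a placeholder for the custom weighting discussed by Funke et al.20):

```python
# Minimal sketch of a weighted least squares calibration fit.
import numpy as np
import statsmodels.api as sm

conc = np.array([0.0, 37.8, 342.0, 396.0])       # accepted concentrations (ppm)
cps = np.array([120.0, 1.9e5, 1.72e6, 1.99e6])   # mean background-subtracted count rates

X = sm.add_constant(conc)                  # model: cps = intercept + slope * conc
weights = 1.0 / np.maximum(cps, 1.0)       # crude heteroskedasticity weighting (placeholder)
fit = sm.WLS(cps, X, weights=weights).fit()
intercept, slope = fit.params
print(slope, intercept)
```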
The most appropriate fit method to use depends on the reference materials measured and the analytical setup, but in the case of the example dataset there is little difference between the different approaches, with the York approach occasionally producing slightly different results. The sum of absolute differences for each element (nelements = 47) is 732.5, 734.2, 736.0, 736.1 and 762.4 for the ODR, WLS, OLS, RLM and York approaches, respectively.
It should be noted however that the example dataset is based on well characterised, relatively homogeneous, glass reference materials, and that the most applicable model for other reference material types and matrices may be quite different.
In addition to the different fit methods, there are options to select whether the calibration curve should be forced through the origin, and whether to apply yield normalisation relative to one RM. The latter option calculates yield normalisation factors for each calibrating RM relative to a selected ‘primary’ RM, using only elements for which there is an accepted value in both RMs. The median of these factors is then applied to all results for that RM. This can help remove matrix effects from calibration curves where the calibrating RMs are likely to have different ablation yields.
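A minimal sketch of this correction (the yields-as-dicts layout is an assumption of the illustration):

```python
# Minimal sketch of the median yield correction factor relative to a
# primary RM, using only elements with accepted values in both RMs.
import numpy as np

def median_yield_factor(yields_rm, yields_primary):
    """yields_*: dicts of measured yield (cps per ppm) keyed by element."""
    common = set(yields_rm) & set(yields_primary)
    return np.median([yields_rm[el] / yields_primary[el] for el in common])

# The RM's responses are then divided by this single factor before the
# calibration fit, bringing its effective ablation yield in line with
# the primary RM.
```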
An example of the effect of yield normalisation is shown in Fig. 2. In this example dataset, where both basaltic and sodic glass RMs are used to construct the calibration curves, normalising ablation yields to BCR-2G improves accuracy, especially when no internal standardisation is used. Combining an internal standard with yield normalisation improves accuracy further (all other factors remaining constant), presumably because the yield correction factor is based on multiple elements rather than the internal standard alone.
Fig. 2 An example showing the effect of yield normalisation and internal standardisation on the calibrating RMs, showing the accuracy of BIR-1G results (n = 24) in the example experiment “Scanlines_Example” (a), and the sum of absolute differences for each parameter set (b). All other factors were kept constant. The grey bars in (a) represent the 2RSD of the accepted values from GeoReM15 for each element. In this example, using yield normalisation provides a modest improvement in accuracy for most elements when combined with an internal standard (b), but a significant improvement when no internal standard is used.
A different combination of fit method, list of calibrants, and whether the fit is forced through zero can be set for each mass measured. A yield correction, however, applies to all masses measured.
The interpolation method to create the calibration surface can range in complexity from an overall average of the session, to linear interpolation, to step functions or cubic splines with varying degrees of smoothing. The simplest of these (a mean fit to all blocks) is effectively the same as a conventional calibration curve. An animated example of a 3D calibration surface is included in the ESI.† A comparison of the effect of using a mean fit to all blocks with using a smoothed cubic spline is shown in Fig. 3. The addition of time resolution by using a smoothed spline, in the example dataset, does not produce a significant change in the session-level accuracy of the results: the sum of absolute differences is 1155% and 1164% for the mean and smoothed spline, respectively. However, there is a significant improvement in precision, with relative standard deviations (RSDs) decreasing by up to 20% as sensitivity drift is corrected for by the calibration surface.
Fig. 3 The effect of using a mean–spline (equivalent to a conventional calibration curve; filled squares), a step forward interpolation (equivalent to recalculating calibration curves; filled triangles), and a smoothed spline interpolation (open circles) of calibration factors in the example dataset “Scanlines_Example”. Using Sr as an example (a), even though the average result for each approach is approximately the same, there is a significant improvement in precision using the spline approach, which takes into account time-dependent sensitivity drift. This effect is observed for almost all elements in the example dataset (b), with the difference in RSD for some elements (mean–spline) being up to a 20% improvement (c). All other factors apart from the interpolation type were kept constant. Ca43 was used as the internal standard for all reductions. The shaded area in (a) represents the uncertainty in the accepted value15 and error bars show the 2SE uncertainty for each BIR-1G measurement. The step forward interpolation results are omitted from (b) and (c) for clarity but are intermediate between the spline and mean approaches (see main text).
This is because with the mean approach, early results may be higher than the accepted values and later results lower (or vice versa), while the average result is the same. The time-resolved approach, in contrast, produces results with no time dependence and significantly better precision. The difference in precision can be determined by examining the difference in % RSD for each approach (Fig. 3(b)). Looking at the differences between the RSDs for the elements measured (Fig. 3(c)), where positive values indicate that the precision of the spline approach is better than that of the mean approach, the most pronounced differences are for the heaviest elements, and for this example data the spline approach can be up to 20% more precise (e.g., Th and Pb). Only two elements (Li and K) have more precise results using the mean approach, and for these the difference between the two methods is approximately 1%.
Intermediate between these approaches is to periodically recalculate calibration values. If the calibration is changing significantly between calibration blocks, however, step changes in the calibration may occur without gradational interpolation. This approach can be replicated in the software discussed by using the ‘Step Forward’ interpolation option, which keeps the current calibration coefficients until the next calibration block is reached and a new set of coefficients is calculated. Examining the precision of these approaches for the Scanlines_Example dataset, using Sr as an example element with a moderate response to these effects (see Fig. 3), the observed RSDs are 2.2%, 3.6% and 9.9% for the spline, step forward and mean interpolation methods, respectively.
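A minimal sketch contrasting the three interpolation options for one element's calibration slope (block times and slopes are hypothetical):

```python
# Minimal sketch of the mean, step forward and smoothed spline
# interpolations of per-block calibration slopes.
import numpy as np
from scipy.interpolate import UnivariateSpline, interp1d

block_times = np.array([100.0, 900.0, 1700.0, 2500.0, 3300.0])     # block midpoints (s)
block_slopes = np.array([5020.0, 4930.0, 4800.0, 4850.0, 4710.0])  # cps per ppm

mean_slope = block_slopes.mean()                                   # conventional curve
step_fwd = interp1d(block_times, block_slopes, kind="previous",
                    bounds_error=False,
                    fill_value=(block_slopes[0], block_slopes[-1]))  # periodic recalculation
spline = UnivariateSpline(block_times, block_slopes, k=3, s=len(block_times))

t = 1300.0  # time of a sample analysis
print(mean_slope, float(step_fwd(t)), float(spline(t)))
```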
Fig. 4 An example showing a yield interpolation for V51 using surrounding elements in the “Scanlines_Example” dataset. The yields for each channel, arranged according to m/z, are normalised to isotopic abundance and first ionisation energy. The elements used to model the yield are Sc45, Ti49, Cr53 and Mn55, and are highlighted in red. Blocks are represented using colours (starting with red, then through to black in time order). The straight diagonal lines are the linear fits, with the colour of each line representing the calibration block. The dashed vertical black line represents the m/z of V51, and is the value that will be used for the yield for each block. Linear fits have been used in this example, but other fit options are available. The set of elements to use in the fit is configurable (e.g. Table 2).
Although the applicability and accuracy of this approach will depend on the element of interest and the availability of surrounding masses of similar character, we demonstrate here the effectiveness of this approach using several elements: V, Y, La, Ho, Tm, Lu and Th (Fig. 5). The elements used to calculate the yield of each channel are shown in Table 2 (a sketch of this type of fit follows the table). This example shows that in most cases the yield interpolation is similar in accuracy to that using the actual measured data. In the case of La the result is significantly worse, presumably because the yield calculated from the measured data is already quite accurate. However, it is interesting to note that in some cases the modelled yield is more accurate than that calculated from the measured data (e.g. Th). It may be that for elements with low concentrations but with nearby elements of higher concentration, a modelled yield is less affected by analytical noise and thus produces more reliable results. Similarly, elements that are affected by interferences present in the RM but absent in the sample may be better calibrated using a modelled yield rather than the actual data. Ultimately, however, this will depend on the element of interest and the reference materials used.
Fig. 5 A comparison of the results of using yield interpolation, with and without normalisation to each element's first ionisation energy (“IE norm”), to using the yield calculated from the measured data. The results show that for the selected elements in the “Scanlines_Example” dataset the yield interpolation/extrapolation produces similar accuracy to that using the actual data. An exception is La, which shows significantly less accurate results using modelled yields, whereas V and Th results are more accurate using the modelled yields. In most cases, with the exception of Lu, normalising to the first ionisation energy produces a similar or better level of accuracy. Yields were modelled using a range of surrounding elements (see Table 2); all other parameters were kept constant.
Table 2 Elements used to model the yield of each channel

| Channel | Modelled using |
|---|---|
V51 | Sc45, Ti49, Cr53, Mn55 |
Y89 | Zr90 |
La139 | Ce140, Nd146, Eu151 |
Ho165 | Dy163, Er166, Yb172 |
Tm169 | Dy163, Er166, Yb172 |
Lu175 | Dy163, Er166, Yb172 |
Th232 | Pb208, U238 |
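A minimal sketch of the yield-modelling fit for V51 referred to above (the measured yields are hypothetical; the isotopic abundances are standard values, and ionisation-energy normalisation is omitted for brevity):

```python
# Minimal sketch of modelling the yield of V51 from Sc45, Ti49, Cr53 and
# Mn55 (cf. Fig. 4 and Table 2), using a linear fit against m/z.
import numpy as np

mz = np.array([45.0, 49.0, 53.0, 55.0])             # Sc45, Ti49, Cr53, Mn55
yields = np.array([4200.0, 310.0, 520.0, 5600.0])   # measured cps per ppm (hypothetical)
abundance = np.array([1.00, 0.0541, 0.0950, 1.00])  # isotopic abundances

norm_yield = yields / abundance                     # abundance-normalised yields
slope, intercept = np.polyfit(mz, norm_yield, 1)    # one fit per calibration block

modelled_v51 = (slope * 51.0 + intercept) * 0.9975  # evaluate at m/z = 51; 51V abundance
```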
Here we present an approach to normalisation that uses criteria based on background-subtracted count rates to adjust the normalisation parameters for selected phases. The sample is a nelsonite from the Sept-Îles mafic intrusion22 and was analysed by LA-ICP-TOF-MS at LabMaTer (UQAC) following the procedures described in Savard et al.23 This example does not serve to demonstrate the accuracy of the normalisation criteria approach (which has been demonstrated elsewhere13), but rather demonstrates the concept of using the sum normalisation approach in an imaging experiment by defining phases based on count rates.
This approach examines each criterion to determine the intervals where the following normalisation parameters are applied: the normalisation total (which may not be 100%, as described in the Background section); whether to convert to oxides; and the oxide forms (e.g., FeO vs. Fe2O3). This creates, in effect, a phase map of the sample (Fig. 6), with each phase assigned its own normalisation parameters. Phase identification is based on distinctive elements within each phase. In the presented example, high S34 counts were used to delimit the sulfide phase (pyrite, FeS2); high P31 counts for apatite; high Mg24 counts for serpentine; and high Ti47 counts for ilmenite. For magnetite, high Fe56 counts were used in conjunction with low S34 to distinguish magnetite from sulfide, rather than basing the comparison on Fe alone (any number of criteria may be used to define a phase). The absence of P in calcite was also useful to distinguish between calcite and apatite. The normalisation sum for calcite was set to 56 wt% to allow for the 44% CO2 present in calcite. Major elements in apatite were converted to oxide form and normalised to 95 wt% to compensate for the estimated 5% contribution of OH and Cl. Ilmenite was normalised to 100% oxides assuming iron is present as Fe2+ (FeO), while Fe in magnetite was assumed to be a stoichiometric mixture of Fe2+ (FeO) and Fe3+ (Fe2O3), equivalent to Fe3O4. The H2O content in serpentine was calculated from external EMPA analysis and the normalisation value was set to 80 wt%. Finally, Fe and S in the sulfide were not converted to oxides and the normalisation value was 100 wt%.
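A minimal sketch of this criteria logic (the thresholds, channel names and per-pixel dict layout are all assumptions of the illustration); each pixel's concentrations would then be calculated with a sum normalisation such as the `sum_normalise` sketch above, using the matched phase's total and oxide settings:

```python
# Minimal sketch of criteria-based phase assignment for sum normalisation.
PHASES = [  # (name, criterion on background-subtracted cps, norm total wt%, convert to oxides?)
    ("sulfide",   lambda c: c["S34"] > 5e4,                      100.0, False),
    ("apatite",   lambda c: c["P31"] > 1e5,                       95.0, True),
    ("magnetite", lambda c: c["Fe56"] > 1e6 and c["S34"] < 1e3,  100.0, True),
    ("calcite",   lambda c: c["Ca43"] > 1e5 and c["P31"] < 1e3,   56.0, True),
]

def assign_phase(cps):
    """Return the first phase whose criteria match this pixel, else None."""
    for name, criterion, total, as_oxides in PHASES:
        if criterion(cps):
            return name, total, as_oxides
    return None
```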
This example shows that the sum normalisation approach may be used in a multiphase sample by employing criteria to define areas for similar treatment. In the example shown in Fig. 6, when using the conventional ‘semi-quantitative’ approach, the calcium content in the calcite phase appears to be approximately 1.5 times that of the apatite phase. However, this is due to the higher yield of calcite, when actually the calcite and apatite have similar Ca contents (see ESI† for additional details). Using the criteria sum normalisation approach avoids this analytical artefact to produce more accurate concentration data for imaging experiments. This approach is discussed in more detail in Savard et al.23
As DHF is a matrix-dependent phenomenon, RMs with different compositions may exhibit different amounts of DHF (e.g., Fig. 7(b)), and when using multiple RMs to correct for DHF accurately it may be necessary to determine which RM each sample ablation most closely matches in composition. Here, we refer to this similarity in composition as “affinity”, which can be determined automatically using a subset of the measured elements in, for example, a nearest centroid classifier.26 Alternatively, affinity may be manually assigned if information about the sample composition is already known. With the affinity assigned, the RM with the most closely matching composition may be used to correct for DHF by fitting a model to the observed downhole trend and then correcting to this model. In the “Spots_Example” dataset, the BCR-2G analyses are treated as unknowns. A nearest centroid approach correctly identified BHVO-2G as having a greater affinity to these analyses than NIST 612. Using a smoothed spline model of the DHF in BHVO-2G, the correction removes the upward trend in the data and thus reduces the uncertainty on each analysis (Fig. 7(c)). In this example, the uncertainty for Na concentrations in BCR-2G improves from 310–390 (2SE) before correction to 270–300 (2SE) after correction: an average reduction of up to 20%. It should be noted that not all elements are affected by DHF, and the degree to which this correction improves precision will depend on the element of interest, the internal standard used and the composition of the sample.
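A minimal sketch of the affinity assignment and DHF correction (array contents are hypothetical; the nearest centroid classifier is reduced here to a Euclidean distance over mean major-element signals):

```python
# Minimal sketch of nearest-centroid affinity and downhole correction.
import numpy as np
from scipy.interpolate import UnivariateSpline

def nearest_centroid(sample_vec, rm_centroids):
    """rm_centroids: dict of RM name -> mean major-element vector (np arrays)."""
    return min(rm_centroids, key=lambda rm: np.linalg.norm(sample_vec - rm_centroids[rm]))

def dhf_correct(rm_depth, rm_ratio, sample_depth, sample_ratio):
    """Fit the matched RM's ratio-vs-depth trend (depths ascending) and
    remove it from the sample, normalised to the start of the ablation."""
    trend = UnivariateSpline(rm_depth, rm_ratio, k=3, s=len(rm_depth))
    return sample_ratio * float(trend(rm_depth[0])) / trend(sample_depth)
```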
Affinities may also be used in residual corrections. By residual correction, we refer here to correcting samples for the offset between the calculated result for an RM and its accepted value. For example, if the final average calculated value for a selected element in an RM is 5% above its accepted value, and a sample shares an affinity with that RM, we can correct the sample's result for this element down by 5%. While calculating an additional offset for an RM used as a calibrant may appear circular, if this offset between the calculated and accepted values is considered a residual to the model, this correction becomes a residual correction. Note that in our implementation, the correction is not applied to any element with a residual greater than 15%, as this suggests a poor model fit for that element. This threshold is adjustable depending on application.
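A minimal sketch of this residual correction, with the adjustable threshold described above (the dict-based inputs are an assumption of the illustration):

```python
# Minimal sketch of the affinity-based residual correction.
def residual_correct(sample, rm_measured_mean, rm_accepted, max_residual_pct=15.0):
    corrected = {}
    for el, value in sample.items():
        ratio = rm_measured_mean[el] / rm_accepted[el]  # e.g. 1.05 if the RM reads 5% high
        if abs(ratio - 1.0) * 100.0 <= max_residual_pct:
            corrected[el] = value / ratio               # remove the residual
        else:
            corrected[el] = value                       # poor model fit: leave uncorrected
    return corrected
```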
In our Scanlines_Example dataset, the BIR-1G analyses are treated as unknowns and a nearest centroid calculation based on major elements determines that these analyses share the greatest affinity with BHVO-2G; a residual correction to BHVO-2G has therefore been applied (Fig. 8). The sum of absolute percentage differences is 433 with no affinity correction and 368 with the correction applied. As shown in Fig. 8(c), this improvement is most marked for the lighter masses (Li to Rb) and the lanthanides. No correction is applied to the Zn, Tm or Pb results because the measured values for BHVO-2G are more than 15% from the accepted values (Fig. 8(a)). Results for Ta and Th are significantly worse after the affinity correction, going from 14% and 12%, respectively, before correction to 28% and 23% after correction. A more subtle effect is that the number of masses within 5% of the accepted value increases from 14 before the correction to 26 after it (Fig. 8(b)). This suggests that the application of this correction may provide additional accuracy, but it should be used judiciously given that some results may be worse post-correction.
We have presented an approach combining multi-RM calibration curves with per-sample yield correction, in the form of either an internal standard or sum normalisation (both having the same effect), along with downhole fractionation corrections and residual corrections based on compositional affinity.
The importance and magnitude of each of the corrections demonstrated will depend largely on the samples being measured and the nature of each experiment. However, in the example datasets shown, improvements of up to 20% from the correction of sensitivity drift, and up to 20% from the downhole fractionation correction, suggest that for some elements these corrections can significantly improve results. The use of a residual correction based on affinity may provide additional gains in accuracy, although its use should be carefully evaluated in practice.
Footnotes
† These authors contributed equally to this work.
‡ All count rates referred to herein are background-subtracted count rates.
§ Heteroskedasticity in this case refers to the fact that LA-ICP-MS measurement uncertainties increase in a non-linear fashion with decreasing concentration. This affects the model error and some assumptions about the weighting of datapoints in fitting the model. See Funke et al.20 for more details.