
Time resolved trace element calibration strategies for LA-ICP-MS

Bence Paul *ab, Joseph Petrus b, Dany Savard c, Jon Woodhead a, Janet Hergt a, Alan Greig a, Chad Paton d and Peter Rayner a
aSchool of Geography, Earth and Atmospheric Sciences, The University of Melbourne, Parkville, Victoria, Australia. E-mail: bpaul@unimelb.edu.au; Tel: +61 3 8344 6531
bElemental Scientific Lasers, LLC, 685 Old Buffalo Trail, Bozeman, MT 59715, USA
cUniversite du Quebec a Chicoutimi, Earth Science Department, LabMaTer, 555 Boul. Universite, Chicoutimi, Qc G7H 2B1, Canada
dChad Beer AB, Mossaledsvägen 61, 42934, Kullavik, Sweden

Received 2nd February 2023, Accepted 2nd June 2023

First published on 8th August 2023


Abstract

There are many processes that affect the measured concentration of elements determined by laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS). Depending on the element of interest, a range of corrections are required to account for sensitivity drift and downhole effects, as well as other sources of inaccuracy. Here, we present a new method of calibrating LA-ICP-MS measurements that takes time-dependent sensitivity changes into consideration to produce a three-dimensional calibration surface through time. These 3D calibration surfaces result in up to 20% improvements in our example dataset for some elements. To ensure that calibration surfaces created using multiple reference materials are not degraded by matrix effects, a median yield correction factor is determined relative to a primary reference material. We also introduce sensitivity modelling for elements without accepted values or where interferences may affect calibration. In addition to the correction of drift, we further demonstrate that a correction for downhole fractionation effects can improve the precision of spot analyses by up to 20% in our example dataset. A flexible solution to the sum normalisation approach is briefly introduced for calibration of multi-phase samples using different calibration parameters, along with a residual correction based on compositional affinity to a reference material. Combined, these new methodologies can improve the accuracy and precision of concentration determinations by LA-ICP-MS in a wide variety of applications.


1 Introduction

Laser ablation inductively coupled plasma-mass spectrometry (LA-ICP-MS) is a widely applied technique for the determination of trace element concentrations in biological,1 archaeological,2 forensic,3 environmental4 and geoscientific5 studies. Due to the ease of sample preparation, wide dynamic range, and relatively few interferences, the range of applications continues to grow each year.

Much of the initial development of LA-ICP-MS techniques was conducted in the geosciences where glasses or relatively homogeneous mineral phases were studied, for which there are usually appropriate reference materials available. One of the advantages of these types of samples is that accuracy (herein referred to as the percentage difference between the accepted value and the measured value) of 10% or better was often achieved with a single reference material (RM) for most elements of interest. However, with an ever-expanding range of sample types and applications, this simple single RM approach may no longer be sufficient.

In particular, composition-related ablation (‘matrix’) effects which influence the amount of material ablated per pulse produce variability in inter-element ratios between sample and reference materials, limiting accuracy (e.g. ref. 6). Accounting for these factors remains a key challenge associated with the modern LA-ICP-MS technique. While it is preferable to match sample and reference material matrices as closely as possible, for several reasons this is not always possible. These include a lack of reference materials of suitable matrix; reference materials with significant uncertainties of their own and/or small-scale heterogeneity; and a wide range of elements of interest and concentrations within an experiment that require multiple calibrants to be used. For these reasons, the amount of post-analysis processing is variable depending on the application.

In this contribution, we outline a series of options for correcting these analytical effects. The approach we describe forms a data reduction scheme (titled ‘3D Trace Elements’) for the iolite data reduction software,7 although the same concepts could be applied in any software with similar functionality. We also present a new three-dimensional approach to the application of traditional calibration curves that accounts for time-dependent sensitivity drift. We introduce ‘yield normalisation’ to account for matrix effects between calibrating reference materials and demonstrate that yield modelling, where the yields of one or more elements are used to calibrate another element, can provide accurate results in certain conditions. These calibration strategies can be combined with internal standard normalisation or sum normalisation to provide additional accuracy. Here we describe a criteria approach that expands the sum normalisation approach to multi-phase samples. The incorporation of downhole fractionation and residual corrections, based on chemical similarity to one or more reference materials, may also improve precision and accuracy.

2 Background

In this section, we review the different techniques used for trace element calibration, including the use of calibration curves, the ‘semi-quantitative’ approach and the use of ‘internal standards’, inter-element fractionation, and sum normalisation approaches.

Typically, for solution ICP-MS analysis, multiple calibrants are used to create a calibration curve for each element by plotting concentration along the x axis and mean observed count rate (intensity) along the y axis. A fit to these data, usually an ordinary least squares linear fit,8 is used to calculate a slope and intercept for the calibration. To calculate the concentration using this calibration curve, the following equation is commonly used:

 
$$ c_i^{\mathrm{SAMP}} = \frac{R_i^{\mathrm{SAMP}} - b_i}{m_i} \qquad (1) $$
where c_i^SAMP is the concentration of analyte i in the sample, R_i^SAMP is the background-subtracted count rate for analyte i during the sample analysis, and m_i and b_i are the slope and intercept of the calibration curve for analyte i. If using a single reference material (RM) and assuming that the calibration curve passes through (0,0) (e.g., Longerich et al.9) eqn (1) can be written as:
 
$$ c_i^{\mathrm{SAMP}} = R_i^{\mathrm{SAMP}} \times \frac{c_i^{\mathrm{RM}}}{R_i^{\mathrm{RM}}} \qquad (2) $$
where c_i^RM and R_i^RM are the concentration and count rate of analyte i in the reference material, respectively. Eqn (2) is often known in the LA-ICP-MS community as the ‘semi-quantitative’ approach as it does not include some of the factors described below, even though it is fully quantitative.
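As a concrete illustration, eqns (1) and (2) reduce to a few lines of code (a minimal sketch in Python; the function names are ours, not those of any particular software):

```python
def conc_from_curve(R_samp, slope, intercept):
    """Eqn (1): concentration from a fitted calibration curve, where
    R_samp is the background-subtracted count rate and slope/intercept
    come from the intensity vs. concentration fit."""
    return (R_samp - intercept) / slope


def conc_semiquant(R_samp, c_rm, R_rm):
    """Eqn (2): single-RM ('semi-quantitative') calibration, assuming
    the calibration curve passes through the origin."""
    return R_samp * c_rm / R_rm
```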

The use of a calibration curve does not account for time-dependent sensitivity variation throughout an experiment, referred to as sensitivity drift, as it combines all the reference material analyses in an analytical session into a single slope and intercept for each element. Calibration curves may be recalculated throughout an experiment; however, this can produce step changes in the calibration and is in practice rarely done. Combining all RM measurements into a single calibration curve in effect averages the sensitivity drift over the course of the experiment. Conversely, the application of a spline (or similar interpolation)7,10 of the reference material's response (R_i^RM in eqn (2)) does allow for variations in sensitivity drift, but only takes a single reference material into account. As mentioned above, the use of multiple calibrants allows for a greater range of concentrations and less reliance on a single calibrant that may have its own sources of uncertainty (e.g., small-scale heterogeneities).

Many geoscientists use a single RM to create a calibration curve. Depending on the quality of the matrix match between the RM and the samples, this single-point calibration curve is usually sufficient for geological (and many other) applications, as demonstrated by the quality of the results for secondary reference materials processed as unknowns. The advantage of this approach is that it allows for time resolved sensitivity drift correction, and allows other RMs measured concurrently to be used for quality control. However, there are several cases where a single point calibration curve may not be ideal: for example, where there is significant uncertainty in the measurement of the RM due to pores or gaps in biological RMs, or where incomplete homogenisation of elements of interest in biological RMs produces noisy signals (e.g. ref. 11). One way to address this is to use more than one RM to calculate a calibration curve, reducing the influence of any one RM measurement, as is common in solution analyses. However, in LA-ICP-MS, time is an important variable and may be a proxy for sampling position (in x, y or z), and as described above a typical calibration curve lacks time resolution. Therefore, here we present a time resolved approach to creating calibration curves, and these calibration surfaces can be used in subsequent corrections, as discussed below.

Additionally, not all elements of interest have published values for each RM. With a single point calibration curve, only the elements in the primary RM can be calibrated. However, in this contribution, we introduce the concept of a ‘yield normalisation’ that allows measurements of elements common to more than one RM to be used to determine a yield factor, which can be used to normalise ablation yields between RMs.

The ablation yield of element i (y_i) is defined as

 
$$ y_i = \frac{R_i}{c_i} \qquad (3) $$
which quantifies the count rate relative to concentration for a material. In this contribution we also discuss the concept of ‘yield modelling’ to calibrate elements with no accepted values in the RMs based on the yields of one or more other elements.

Subtle differences in ablation yield are common in different sample matrices. An element of known concentration in the sample, known as an ‘internal standard’ (IS), may be used to account for minor variations in ablation yield (e.g., Longerich et al.9). The use of an internal standard's ablation yield can be incorporated into eqn (2) as follows:9

 
$$ c_i^{\mathrm{SAMP}} = R_i^{\mathrm{SAMP}} \times \frac{c_i^{\mathrm{RM}}}{R_i^{\mathrm{RM}}} \times N \qquad (4) $$
where
 
$$ N = \left( \frac{R_{\mathrm{IS}}^{\mathrm{RM}} / c_{\mathrm{IS}}^{\mathrm{RM}}}{R_{\mathrm{IS}}^{\mathrm{SAMP}} / c_{\mathrm{IS}}^{\mathrm{SAMP}}} \right) \qquad (5) $$
and R_IS and c_IS are the count rate and concentration of the internal standard, with superscripts denoting the reference material and sample as above. The bracketed term in eqn (5) is effectively a yield normalisation factor that uses the internal standard element to account for differences in ablation efficiency between the reference material and the sample.
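In code, eqns (4) and (5) amount to scaling the semi-quantitative result by the ratio of internal standard yields (again a sketch with our own naming, not iolite's API):

```python
def is_norm_factor(R_is_rm, c_is_rm, R_is_samp, c_is_samp):
    """Eqn (5): ratio of the internal standard's ablation yield in the
    reference material to its yield in the sample."""
    return (R_is_rm / c_is_rm) / (R_is_samp / c_is_samp)


def conc_with_is(R_samp, c_rm, R_rm, n_factor):
    """Eqn (4): the semi-quantitative result of eqn (2) scaled by the
    internal standard normalisation factor N of eqn (5)."""
    return R_samp * (c_rm / R_rm) * n_factor
```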

One effect that may need to be addressed when using internal standards is downhole fractionation (DHF). Inter-element fractionation6 arises where the processes of ablation, transport, ionisation and detection change the observed concentration ratio between elements. The subset of those processes that can be expressed as a function of pit depth are usually collectively referred to as ‘downhole fractionation’. When using an internal standard, any downhole fractionation between the IS and the element of interest will decrease the accuracy of the measurement; this is corrected by determining the relationship between pit depth and the observed ratio between the element of interest and the internal standard. Below we present an example where correcting for this DHF can improve precision and accuracy for certain elements.

Another potential approach to calibration is the use of ‘sum normalisation’ (e.g. ref. 12 and 13). Here the concentration of an element is expressed as a fraction of the sum of all elements in the sample. Generally it is assumed that all significant elements have been measured, such that the sum of all concentrations is equal to 100 wt%

 
$$ c_1^{\mathrm{SAMP}} + c_2^{\mathrm{SAMP}} + c_3^{\mathrm{SAMP}} + \cdots + c_i^{\mathrm{SAMP}} + \cdots + c_N^{\mathrm{SAMP}} = 100\ \mathrm{wt\%} \qquad (6) $$
and the concentration of any element can be expressed as
 
$$ c_i^{\mathrm{SAMP}} = 100\ \mathrm{wt\%} \times \frac{c_i^{\mathrm{SQ}}}{\sum_{j=1}^{N} c_j^{\mathrm{SQ}}} \qquad (7) $$
where c_j^SQ are the semi-quantitative concentrations calculated using eqn (2).

If the matrix is an oxide (e.g., a silicate mineral) all concentrations are expressed as oxides. If significant amounts of other anions are present (e.g., Cl, F, S or OH, which are generally not measured by ICP-MS, with the possible exception of S) the sum will not be 100 wt%. If the content of the non-oxygen anions is known, the sum can be adjusted to a value other than 100% (e.g., if the OH content is known to be 3 wt%, the sum of all measured elements will be 97 wt%). If the content of the other anion(s) is not accurately known, this method will be correspondingly inaccurate. Due to the larger relative uncertainties on elements of low concentration, Liu et al.13 recommended weighting the yield normalisation factor based on the relative concentrations of each element.
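A sketch of eqns (6) and (7) with an adjustable normalisation total (e.g. 97 wt% for a phase with 3 wt% OH); the concentration weighting of Liu et al.13 is omitted for brevity:

```python
import numpy as np

def sum_normalise(c_semiquant, total_wt_pct=100.0):
    """Scale semi-quantitative concentrations (in wt%, as oxides if the
    matrix is an oxide) so that they sum to `total_wt_pct` (eqn (7))."""
    c = np.asarray(c_semiquant, dtype=float)
    return c * total_wt_pct / c.sum()
```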

The advantage of the sum normalisation approach is that it does not require an internal standard to account for minor matrix effects, avoiding the need to measure the sample by a different technique or to have some other prior knowledge of its composition. Not relying on an internal standard also means that an ablation can pass across different phases and the normalisation process will account for their different ablation yields. However, the approach is complicated when passing between, for example, an oxide and a sulfide matrix. In this case, there needs to be some way to change the normalisation values (e.g. ref. 14). Similarly, if oxidation states change between phases, the data reduction algorithm needs some way to account for this. Below we outline a criteria approach to apply sum normalisation to multi-phase samples.

In addition to the above corrections, we also introduce a residual correction based on the offset between the accepted values for a secondary (i.e. non-calibrating) RM and the average measured value.

3 Example datasets

We present two example datasets here to illustrate the corrections described. However, the magnitude of the various analytical effects described herein will depend largely on the experiment and the type and nature of the calibrating reference materials used. The well characterised, homogeneous reference materials used in the example datasets are less likely to show some of the negative effects described above. They do, however, allow us to fully quantify the corrections without having to take into consideration the effects of inhomogeneity etc.

The first example dataset is an imaging experiment “Scanlines_Example” where the following reference materials were ablated as approximately 33 s lines: NIST 610, NIST 612, BCR-2G, BHVO-2G and BIR-1G. The NIST glasses, BCR-2G and BHVO-2G were used as calibrants, and BIR-1G was interspersed with scanlines across a gabbroic thin-section. Baseline measurements of 10 s duration were determined at the end of every scanline.

A second dataset, “Spots_Example”, is included as an example of spot analyses. In this experiment, NIST 612, BHVO-2G and BCR-2G were measured for 60 s each, with BCR-2G treated as an unknown, along with spot analyses on the same gabbro as in the previous example dataset. Baselines were measured for 8 s before each spot.

In both example datasets, the results for the gabbro are not included here but served simply to emulate typical experimental conditions in terms of run duration, matrix variability and drift etc. The gabbroic sample is the same as that used in ref. 14. Basic analytical conditions for both example datasets are set out in Table 1. Raw data for these datasets, along with the processed iolite (.io4) files, are available in the ESI.

Table 1 Analytical conditions for the example datasets

|  | Scanlines_Example | Spots_Example |
| --- | --- | --- |
| ICP-MS conditions (Agilent 7700x quadrupole) |  |  |
| ICP-MS forward power (W) | 1300 | 1600 |
| Reflected power (W) | 2 | 2 |
| Sample depth (mm) | 3 | 4 |
| Plasma gas (L min−1) | 15 | 15 |
| Aux gas (L min−1) | 0.9 | 0.9 |
| Cell gas flows |  |  |
| Carrier gas (Ar) (mL min−1) | 0.95 | 0.81 |
| Cell gas (He) (mL min−1) | 250 | 700 |
| Makeup gas (N2) (mL min−1) | 0 | 3 |
| Laser settings |  |  |
| Laser | Australian Scientific Instruments RESOlutionSE Compact with ATL Lasertechnik excimer laser | Resonetics-LR with Compex 110 excimer laser |
| Sample cell | Laurin Technic S155 | Laurin Technic S155 |
| Wavelength (nm) | 193 | 193 |
| Rep rate (Hz) | 10 | 5 |
| Spot size (μm) | 24 × 24 | 40 |
| Fluence (J cm−2) | 2 | 3 |
| Calibrants | NIST 610, NIST 612, BCR-2G, BHVO-2G | NIST 612, BHVO-2G |
| Secondary RMs | BIR-1G | BCR-2G |


The accepted values used in the accuracy calculations herein are from the GeoReM database15 preferred values. The uncertainties shown in the figures are those listed for the preferred values in the GeoReM database, expressed as two relative standard deviations.

When comparing data reduction parameter sets (i.e. spline type, fit method, etc.), a single value to indicate whether the overall accuracy is increasing or decreasing is useful. Here we use the sum of the absolute percentage differences between the mean measured value and the accepted value for all elements measured. Although this value does not provide information about the relative improvement in accuracy for any one element, it does provide an overall indication expressed as a single value as to whether a parameter set provides results closer to the accepted values. We do not report a value such as the mean or median percentage difference as there is no expectation that the deviations will be normally distributed.
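For reference, this metric can be computed as follows (a minimal sketch; the arrays are per-element mean measured and accepted values):

```python
import numpy as np

def sum_abs_pct_diff(measured, accepted):
    """Sum over all elements of |measured mean - accepted| expressed
    as a percentage of the accepted value."""
    measured = np.asarray(measured, dtype=float)
    accepted = np.asarray(accepted, dtype=float)
    return float(np.sum(np.abs(measured - accepted) / accepted * 100.0))
```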

Although results were obtained for 9Be, 74Ge, 95Mo, 115In, 133Cs and 209Bi in the “Scanlines_Example”, the accuracy for these elements was typically an order of magnitude worse than for the remaining elements and/or the concentrations were below the limit of detection. Omitting these results does not change the overall conclusions of this contribution and so they have been excluded.

4 Experiment setup

Ideally, an experiment would be arranged with ‘blocks’ of RM measurements interspersed between sample measurements, as in Fig. 1. These blocks provide an estimate of the calibration curve at a particular point in time (effectively the block mid-point), between which the slope and intercept of the calibration curve can be interpolated (discussed below). Additionally, one or more RMs are reserved for quality control and are interspersed with sample measurements, independent of the calibration blocks, thus providing non-calibration secondary RMs. The duration between RM blocks depends on the stability of the system, with less stable systems requiring RM measurements more often and vice versa. In our experience, blocks separated by approximately 30 minutes provide adequate monitoring of sensitivity drift; sessions that drift at faster rates should be suspended so that the instrument and gas settings can be retuned. Once the data have been imported into iolite, baseline, RM and sample selections can be created (either automatically or manually, depending on instrument setup).
Fig. 1 A schematic of an ideal experimental setup with blocks of reference materials interspersed between sample measurements. Secondary (non-calibrant) reference materials should be intercalated with sample measurements to provide independent measures of accuracy and repeatability.

5 Initial calculations

The data reduction scheme (“DRS”) begins by assigning an ‘index channel’. The index channel in this instance is simply a channel that is measured in all files, and thus will have a complete associated time array, cf. channels that appear in only a subset of files and will have gaps in their associated time arrays (there is no requirement in iolite that all channels appear in all data files). The intermediate and output channels and splines calculated by the DRS are interpolated onto the time array of the index channel. Optionally, a ‘mask’ can also be created: depending on the chosen method, it is derived either from the ‘on’ periods recorded in the laser log file or, in the absence of a laser log file, via the ‘cutoff’ method, in which results are masked during time intervals where the index channel drops below a chosen threshold. Masking helps avoid the noisy ratios associated with baseline intervals, which otherwise overwhelm the vertical scale when plotting time-series.

If three or more Pb isotopes have been measured, a ‘total Pb’ channel is then created. This channel is the sum of the raw counts for the individual Pb channels and is useful where sample Pb isotope ratios are significantly different from those of the reference materials. All channels are then baseline subtracted.
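The masking and total Pb steps are conceptually simple; a minimal sketch of the two (not iolite's internal implementation) might look like:

```python
import numpy as np

def cutoff_mask(index_channel, threshold):
    """'Cutoff' masking: keep data only where the index channel is at
    or above the threshold, i.e. during ablation rather than baseline."""
    return np.asarray(index_channel) >= threshold

def total_pb(*pb_channels):
    """Sum the raw counts of the measured Pb isotope channels into a
    'total Pb' channel (useful when sample and RM Pb isotope ratios
    differ significantly)."""
    return np.sum([np.asarray(ch) for ch in pb_channels], axis=0)
```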

6 Block detection and calibration curve fitting

Following basic setup, the RMs to use for calibration are selected, with the remainder reserved for quality control. The location of blocks of calibration RM measurements can be determined automatically using the time gap between measurements (shorter gaps within blocks, larger gaps between blocks), or via a clustering approach using the mid-time of the RM selections; blocks can also be manually assigned or adjusted via the user interface.

Once blocks have been identified, a calibration curve can be fitted for each element for each block. 3D Trace Elements comes with several fitting options including ordinary least squares (OLS);8 weighted least squares (WLS);8 robust linear model (RLM);8 orthogonal distance regression (ODR);16 or the York et al. approach.17 The OLS, WLS and RLM algorithms are provided by the python StatsModels package,8 whereas ODR is made available via SciPy18 and York is adapted from York et al.17 and the UPbPlot package.19 Additional methods can be employed by adding to the DRS script. For example, Funke et al.20 note that, due to heteroskedasticity§ in fitting a calibration curve to LA-ICP-MS results, a WLS approach can provide additional accuracy, especially if a custom weighting function is used. Although the latter is not currently implemented, it could be added by editing the provided script.
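To illustrate two of these options, the following sketch fits a single-block calibration curve for one element with StatsModels (OLS/WLS) and SciPy (ODR); the RM values and the simple 1/y² weighting are illustrative assumptions only:

```python
import numpy as np
import statsmodels.api as sm
from scipy import odr

# One block for one element: accepted RM concentrations (x) and mean
# background-subtracted count rates (y). Values are illustrative.
conc = np.array([0.5, 35.0, 420.0])       # ug/g in three RMs
counts = np.array([1.2e3, 8.4e4, 1.0e6])  # counts per second

# OLS and WLS via StatsModels; 1/y^2 is one simple weighting choice
# (see Funke et al.20 for better-motivated weighting functions).
X = sm.add_constant(conc)
ols_fit = sm.OLS(counts, X).fit()
wls_fit = sm.WLS(counts, X, weights=1.0 / counts**2).fit()
print(ols_fit.params, wls_fit.params)  # [intercept, slope]

# ODR via SciPy allows for uncertainty on both axes.
linear = odr.Model(lambda beta, x: beta[0] * x + beta[1])
data = odr.RealData(conc, counts, sx=0.02 * conc, sy=0.05 * counts)
odr_fit = odr.ODR(data, linear, beta0=[2400.0, 0.0]).run()
print(odr_fit.beta)  # [slope, intercept]
```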

The most appropriate fit method to use depends on the reference materials measured and the analytical setup, but in the case of the example dataset there is little difference between the approaches, with the York approach occasionally producing slightly different results. The sum of absolute differences for each element (n_elements = 47) is 732.5, 734.2, 736.0, 736.1 and 762.4 for the ODR, WLS, OLS, RLM and York approaches, respectively.

It should be noted however that the example dataset is based on well characterised, relatively homogeneous, glass reference materials, and that the most applicable model for other reference material types and matrices may be quite different.

In addition to the different fit methods, there are options to select whether the calibration curve should be forced through the origin, and whether to apply yield normalisation to one RM. The latter option calculates yield normalisation factors for each calibrating RM relative to a selected ‘primary’ RM: for each RM, a median yield correction factor is calculated from the elements with accepted values in both RMs, and that factor is then applied to all of that RM's results. This can help remove matrix effects from calibration curves where the calibrating RMs are likely to have different ablation yields.
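The yield normalisation factor can be sketched as follows (assuming per-element yields as in eqn (3); the exact implementation in 3D Trace Elements may differ):

```python
import numpy as np

def median_yield_factor(yields_rm, yields_primary):
    """Median ratio of primary-RM yields to this RM's yields, taken
    over the elements with accepted values in both RMs (dicts mapping
    element -> yield). Multiplying this RM's yields by the factor
    aligns its ablation yield with the primary RM before the
    calibration curve is fitted."""
    common = sorted(set(yields_rm) & set(yields_primary))
    ratios = [yields_primary[el] / yields_rm[el] for el in common]
    return float(np.median(ratios))
```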

An example showing the effect of yield normalisation is shown in Fig. 2. In this example dataset, where both basaltic and sodic glass RMs are used to construct the calibration curves, normalising ablation yields to BCR-2G improves accuracy, especially when no internal standardisation is used. Combining an internal standard with yield normalisation provides further accuracy (when all other factors remain constant), presumably because the yield correction factor is based on multiple elements instead of just the internal standard.


Fig. 2 An example showing the effect of yield normalisation and internal standardisation on the calibrating RMs, showing the accuracy of BIR-1G results (n = 24) in the example experiment “Scanlines_Example” (a), and the sum of absolute differences for each parameter set (b). All other factors were kept constant. The grey bars in (a) represent the 2RSD of the accepted values from GeoReM15 for each element. In this example using yield normalisation provides a modest improvement in accuracy for most elements when combined with an internal standard (b), but a significant improvement when no internal standard is used.

A different combination of fit method, list of calibrants, and whether the fit is forced through the origin can be set for each mass measured. The yield correction, however, applies to all masses measured.

7 Three-dimensional calibration surfaces

Once calibration blocks have been identified and a curve fit for each block, the slope and intercept for each block can then be interpolated to create a calibration surface with concentration, intensity and time being plotted on the x, y and z axes, respectively. This surface allows concentration to be calculated for any datapoint within the session. This is in contrast to combining all RM analyses within a session to create an overall calibration curve for each element.

The interpolation method to create the calibration surface can range in complexity from an overall average of the session, to linear interpolation, to step functions or cubic splines with varying degrees of smoothing. The simplest of these (a mean fit to all blocks) is effectively the same as a conventional calibration curve. An animated example of a 3D calibration surface is included in the ESI. A comparison of the effect of using a mean fit to all blocks with using a smoothed cubic spline is shown in Fig. 3. The addition of time resolution by using a smoothed spline, in the example dataset, does not produce a significant change in the session-level accuracy of the results: the sum of absolute differences is 1155% and 1164% for the mean and smoothed spline, respectively. However, there is a significant improvement in precision, with relative standard deviations (RSDs) decreasing by up to 20% as sensitivity drift is corrected for by the calibration surface.
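A minimal sketch of the surface construction, interpolating per-block slopes and intercepts through time with cubic splines (the block times and coefficients below are illustrative; the software also offers smoothed splines and other interpolators):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Per-block calibration coefficients for one element (illustrative).
block_times = np.array([0.0, 1800.0, 3600.0, 5400.0])  # block mid-times, s
slopes = np.array([2400.0, 2350.0, 2290.0, 2260.0])    # counts per ug/g
intercepts = np.array([150.0, 140.0, 160.0, 155.0])    # counts

# Interpolating slope and intercept through time defines, together with
# the concentration axis, the 3D calibration surface.
m_of_t = CubicSpline(block_times, slopes)
b_of_t = CubicSpline(block_times, intercepts)

def conc_at(R_samp, t):
    """Eqn (1) evaluated with time-resolved calibration coefficients."""
    return (R_samp - b_of_t(t)) / m_of_t(t)

print(conc_at(8.4e4, 2500.0))  # concentration at t = 2500 s
```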

Fig. 3 The effect of using a mean–spline (equivalent to a conventional calibration curve; filled squares), a step forward interpolation (equivalent to recalculating calibration curves; filled triangles), and a smoothed spline interpolation (open circles) of calibration factors in the example dataset “Scanlines_Example”. Using Sr as an example (a), even though the average result for each approach is approximately the same, there is a significant improvement in precision using the spline approach, which takes into account time-dependent sensitivity drift. This effect is observed for almost all elements in the example dataset (b), with the difference in RSD (mean minus spline) representing up to a 20% improvement for some elements (c). All other factors apart from the interpolation type were kept constant. Ca43 was used as the internal standard for all reductions. The shaded area in (a) represents the uncertainty in the accepted value15 and error bars show the 2SE uncertainty for each BIR-1G measurement. The step forward interpolation results are omitted from (b) and (c) for clarity but are intermediate between the spline and mean approaches (see main text).

This is because, with the mean approach, early results may be higher than the accepted values and later results lower (or vice versa), while the average remains the same. The time resolved approach, in contrast, produces results with no time dependence and therefore significant improvements in precision. The difference in precision can be determined by examining the difference in % RSD for each approach (Fig. 3(b)). Looking at the differences between the RSDs for the elements measured (Fig. 3(c)), where positive values indicate that the spline approach is more precise than the mean approach, the most pronounced differences are for the heaviest elements, and for this example data the spline approach can be up to 20% more precise (e.g., Th and Pb). Only two elements (Li and K) give more precise results with the mean approach, and for these the difference between the two methods is approximately 1%.

An intermediate between these approaches is to periodically recalculate the calibration values. If the calibration is changing significantly between calibration blocks, however, step changes in the calibration may occur without gradational interpolation. This approach can be replicated in the software discussed by using the ‘Step Forward’ interpolation option, which keeps the current calibration coefficients until the next calibration block is reached and a new set of coefficients is calculated. Examining the precision of these approaches for the “Scanlines_Example” dataset, using Sr as an example element with a moderate response to these effects (see Fig. 3), the observed RSDs are 2.2%, 3.6% and 9.9% for the spline, step forward and mean interpolation methods, respectively.

8 Yield modelling

There may be instances where an element of interest does not have an accepted value for the reference materials used. In such cases, it may be possible to interpolate (or extrapolate) the yield for the element of interest based on the yields of elements of similar mass. The approach uses the yield of each element, arranged in mass order, and grouped by calibration block (e.g. Fig. 4). Yields are normalised to the isotopic abundance and (optionally) the first ionisation energy of the element. One or more elements can be chosen for interpolation. If a single element is used, the yield adopted is simply that of the model element, with a value for each block. Where two or more elements are chosen, an interpolation (or extrapolation) can be calculated using a variety of spline types to determine the selected channel's yield.
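As a sketch, a linear fit of the normalised yields of the model elements against m/z can be evaluated at the target mass (yield values below are illustrative; other fit types are available in the software):

```python
import numpy as np

def modelled_yield(mz_model, yields_model, mz_target):
    """Linear fit of abundance/IE-normalised yields vs. m/z for the
    chosen model elements within one calibration block, evaluated at
    the target mass."""
    slope, intercept = np.polyfit(mz_model, yields_model, 1)
    return slope * mz_target + intercept

# e.g. modelling V51 from Sc45, Ti49, Cr53 and Mn55 for one block:
print(modelled_yield([45, 49, 53, 55], [2100.0, 2050.0, 1990.0, 1960.0], 51))
```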
Fig. 4 An example showing a yield interpolation for V51 using surrounding elements in the “Scanlines_Example” dataset. The yields for each channel, arranged according to m/z, are normalised to isotopic abundance and first ionisation energy. The elements used to model the yield are Sc45, Ti49, Cr53 and Mn55, and are highlighted in red. Blocks are represented using colours (starting with red, then through to black in time order). The straight diagonal lines are the linear fits, with the colour of each line representing the calibration block. The dashed vertical black line represents the m/z of V51, and its intersection with each fit gives the yield used for that block. Linear fits have been used in this example, but other fit options are available. The set of elements to use in the fit is configurable (e.g. Table 2).

Although the applicability and accuracy of this approach will depend on the element of interest, and on the availability of surrounding masses of similar character, we demonstrate here the effectiveness of this approach using several elements: V, Y, La, Ho, Tm, Lu and Th (Fig. 5). The elements used to calculate the yield of each channel are shown in Table 2. This example shows that in most cases the yield interpolation is similar in accuracy to using the actual measured data. In the case of La the result is significantly worse, presumably because the yield calculated using the data is already quite accurate. However, it is interesting to note that in some cases the modelled yield is more accurate than that calculated from the measured data (e.g. Th). It may be that for elements with low concentrations but with nearby elements of higher concentration, a modelled yield is less affected by analytical noise and thus produces more reliable results. Similarly, elements that are affected by interferences present in the RM but absent in the sample may be better calibrated using a modelled yield rather than the actual data. Ultimately, however, this will depend on the element of interest and the reference materials used.

Fig. 5 A comparison of the results of using yield interpolation, with and without normalisation to each element's first ionisation energy (“IE norm”), to using the yield calculated from the measured data. The results show that for the selected elements in the “Scanlines_Example” dataset the yield interpolation/extrapolation produces similar accuracy to using the actual data. An exception is La, which shows significantly less accurate results using modelled yields, whereas V and Th results are more accurate using the modelled yields. In most cases, with the exception of Lu, normalising to the first ionisation energy produces a similar or better level of accuracy. Yields were modelled using a range of surrounding elements (see Table 2); all other parameters were kept constant.
Table 2 Yield modelling parameters

| Channel | Modelled using |
| --- | --- |
| V51 | Sc45, Ti49, Cr53, Mn55 |
| Y89 | Zr90 |
| La139 | Ce140, Nd146, Eu151 |
| Ho165 | Dy163, Er166, Yb172 |
| Tm169 | Dy163, Er166, Yb172 |
| Lu175 | Dy163, Er166, Yb172 |
| Th232 | Pb208, U238 |


9 Normalisation strategies

At this point, the data would normally be described as ‘semi-quantitative’. As discussed above, normalising to an internal standard can help correct for differences in ablation yield and allows for situations where the entire ablation is not selected for spot analyses. In the program described herein, if just one element is chosen as an internal standard, this is equivalent to Longerich et al.'s9 approach. However, there are situations where no internal standard can be selected: when no element in the sample has a well-known concentration, or when an analysis is performed on more than one phase, where ‘phase’ might refer to a mineral, alloy or some other compositional entity. In such cases, a calculation equivalent to Liu et al.'s13 ‘sum normalisation’ approach can be performed. If it is expected that elements will be present as oxides, there is an option for converting all channels to their oxide equivalents before normalisation. As described in the Background section, when analysing samples comprising a mixture of phases, the normalisation process is ideally optimised for each phase, but this requires the measurement of all major elements, which may result in compromises to analytical time and/or spatial resolution when using a quadrupole ICP-MS. On the other hand, ICP-TOF-MS provides multi-element detection (m/z 14–256) at a high scanning rate (33 000 Hz)21 and is thus well suited to the sum normalisation approach.

Here we present an approach to normalisation that uses criteria based on background-subtracted count rates to adjust the normalisation parameters for selected phases. The sample is a nelsonite from the Sept-Îles mafic intrusion22 and was analysed by LA-ICP-TOF-MS at LabMaTer (UQAC) following the procedures described in Savard et al.23 This example does not serve to demonstrate the accuracy of the sum normalisation approach (which has been demonstrated elsewhere13), but rather the concept of using sum normalisation in an imaging experiment by defining phases based on count rates.

This approach examines each criterion to determine the intervals where the following normalisation parameters are applied: the normalisation total (which may not be 100%, as described in the Background section); whether to convert to oxides; and the oxide forms (e.g., FeO vs. Fe2O3). This creates, in effect, a phase map of the sample (Fig. 6), with each phase having its own normalisation parameters. Phase identification is based on distinctive elements within each phase. In the presented example, high S34 counts were used to delimit the sulfide phase (pyrite, FeS2); high P31 counts for apatite; high Mg24 counts for serpentine; and high Ti47 counts for ilmenite. For magnetite, high Fe56 counts were used in conjunction with low S34 to distinguish magnetite from sulfide, rather than basing the comparison on Fe alone (any number of criteria may be used to define a phase). The absence of P in calcite was also useful to distinguish between calcite and apatite. The normalisation sum for calcite was set to 56 wt% to allow for the 44% CO2 present in calcite. Major elements in apatite were converted to oxide form and were normalised to 95 wt% to compensate for the estimated 5% contribution of OH and Cl. Ilmenite was normalised to 100% oxides assuming iron is present as Fe2+ (FeO), while Fe in magnetite was assumed to be a stoichiometric mixture of Fe2+ (FeO) and Fe3+ (Fe2O3), equivalent to Fe3O4. The H2O content in serpentine was calculated from external EMPA analysis and the normalisation value was set to 80 wt%. Finally, Fe and S in the sulfide (pyrite, FeS2) were not converted to oxides and the normalisation value was 100 wt%.
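A sketch of such count-rate criteria producing a per-pixel phase map (the thresholds are instrument- and session-specific assumptions; the ordering of the tests encodes the multi-criteria rules described above):

```python
import numpy as np

def assign_phases(S34, P31, Mg24, Ti47, Fe56, thr):
    """Label each pixel with a phase from background-subtracted count
    rates; `thr` maps channel name -> count-rate cutoff. Pixels that
    match no other criterion default to calcite (high Ca, no P)."""
    phase = np.full(np.shape(S34), 'calcite', dtype=object)
    phase[np.asarray(Mg24) > thr['Mg24']] = 'serpentine'
    phase[np.asarray(Ti47) > thr['Ti47']] = 'ilmenite'
    # Magnetite: high Fe but low S, distinguishing it from the sulfide.
    fe_mask = (np.asarray(Fe56) > thr['Fe56']) & (np.asarray(S34) < thr['S34'])
    phase[fe_mask] = 'magnetite'
    phase[np.asarray(P31) > thr['P31']] = 'apatite'
    phase[np.asarray(S34) > thr['S34']] = 'pyrite'
    return phase

# Each phase is then normalised with its own parameters, e.g. totals of
# 56 wt% (calcite), 95 wt% (apatite), 80 wt% (serpentine), 100 wt% (others).
```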

Fig. 6 An example of using criteria to assign normalisation parameters to phases within a nelsonite sample from the Sept-Îles mafic intrusion. The photomicrograph (top left) shows the minerals in the sample in reflected light. The sample area, automatically partitioned into phases based on count rate criteria, is shown lower left. Each phase is processed according to its own set of normalisation parameters allowing for sum normalisation to be used in an imaging experiment. The calcium concentration map calculated using the conventional semi-quantitative method (top right) suggests that the calcite has almost double the calcium content of the apatite. However, this is an analytical artefact due to the higher ablation yield of calcite. In the image created using the sum-normalisation approach (lower right) the calcium content of the apatite and calcite is similar, which is consistent with SEM analyses. The laser ablation image was acquired by LA-ICP-TOF-MS using a 6 × 12 μm beam at LabMaTer (UQAC). Ilm = ilmenite, Apa = apatite, Mgt = magnetite, Py = pyrite, Cal = calcite, Srp = serpentine.

This example shows that the sum normalisation approach may be used in a multiphase sample by employing criteria to define areas for similar treatment. In the example shown in Fig. 6, when using the conventional ‘semi-quantitative’ approach, the calcium content in the calcite phase appears to be approximately 1.5 times that of the apatite phase. However, this is due to the higher ablation yield of calcite; in fact the calcite and apatite have similar Ca contents (see ESI for additional details). Using the criteria-based sum normalisation approach avoids this analytical artefact and produces more accurate concentration data for imaging experiments. This approach is discussed in more detail in Savard et al.23

10 Affinities, downhole fractionation and residual corrections

The effect of downhole fractionation is most commonly taken into account in U–Pb geochronology studies (e.g. ref. 24 and 25); however, DHF may also occur in trace element spot analyses where fractionation occurs between an element of interest and the internal standard (e.g., Fig. 7(a)).
Fig. 7 An example of the DHF correction for Na concentrations in BCR-2G analyses in the “Spots_Example” dataset, where BCR-2G is treated as an unknown and calibrated with NIST 612 and BHVO-2G. In this example, Ca is used as the internal standard. Before DHF correction, aligning the start of each ablation and calculating the average for each time slice (bold black line) shows that Na concentrations follow an overall increasing trend as ablation continues (a), where ‘BeamSeconds’ (the x axis) is a proxy for pit depth. As the downhole fractionation trends of NIST 612 and BHVO-2G are appreciably different (b), it is important to match the unknown with a compositionally similar reference material. In this case, a nearest centroid approach has correctly identified BHVO-2G as the most appropriate RM to correct for DHF in the BCR-2G analyses. Correcting to a smoothed cubic spline model of the DHF results in an almost flat Na concentration profile with increasing pit depth (c). By performing this correction, the uncertainty of the individual Na analyses is reduced significantly.

As DHF is a matrix dependent phenomenon, RMs with different compositions may exhibit different amounts of DHF (e.g., Fig. 7(b)), and when using multiple RMs to accurately correct for DHF it may be necessary to determine which RM each sample ablation most closely matches in composition. Here, we term this similarity in composition ‘affinity’; it can be determined automatically using a subset of the measured elements in, for example, a nearest centroid classifier.26 Alternatively, affinity may be assigned manually if information about the sample composition is already known. With the affinity assigned, the RM with the most closely matching composition may be used to correct for DHF by fitting a model to the observed downhole trend and then correcting to this model. In the “Spots_Example” dataset, the BCR-2G analyses are treated as unknowns. A nearest centroid approach correctly identified BHVO-2G as having a greater affinity to these analyses than NIST 612. Using a smoothed spline model of the DHF for BHVO-2G, the correction removes the upward trend in the data and thus reduces the uncertainty on each analysis (Fig. 7(c)). In this example, the uncertainty for Na concentrations in BCR-2G decreases from 310–390 (2SE) before correction to 270–300 (2SE) post-correction: an average reduction of up to 20%. It should be noted that not all elements are affected by DHF, and the degree to which this correction improves precision will depend on the element of interest, the internal standard used and the composition of the sample.
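The nearest centroid step can be reproduced with scikit-learn;26 in this sketch the ‘training’ rows are RM mean compositions over a few discriminating channels, and all numbers are purely illustrative:

```python
import numpy as np
from sklearn.neighbors import NearestCentroid

# Mean compositions of the candidate RMs over a few discriminating
# channels (columns); values are illustrative only.
rm_compositions = np.array([[49.6, 7.2, 11.4],   # basaltic glass (BHVO-2G)
                            [72.0, 0.1, 11.9]])  # soda-lime glass (NIST 612)
rm_labels = np.array(['BHVO-2G', 'NIST 612'])

clf = NearestCentroid().fit(rm_compositions, rm_labels)

# Each unknown ablation is assigned the RM with the nearest centroid;
# that RM's downhole fractionation model is then used for correction.
unknown = np.array([[54.1, 3.6, 7.1]])  # a BCR-2G-like analysis
print(clf.predict(unknown))             # -> ['BHVO-2G']
```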

Affinities may also be used in residual corrections. By residual correction, here we refer to correcting samples for the offset between the calculated result for an RM and its accepted value. For example, if the final average calculated value for a selected element in an RM is 5% above its accepted value, and a sample shares an affinity with the RM, we can correct the sample's result for this element down by 5%. While it may appear circular to calculate an additional offset to an RM used as a calibrant, if this offset between the calculated and accepted values is considered a residual to the model, this correction becomes a residual correction. Note that in our implementation, the correction is not applied to any element with a residual greater than 15%, as this suggests a poor model fit for that element. This threshold is adjustable depending on the application.
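A sketch of the residual correction with the adjustable threshold (the function naming is ours):

```python
def residual_factor(measured_rm, accepted_rm, threshold_pct=15.0):
    """Correction factor from the residual of an affinity-matched RM.
    If the residual exceeds the threshold, the model fit is considered
    poor for this element and no correction is applied."""
    residual_pct = (measured_rm - accepted_rm) / accepted_rm * 100.0
    if abs(residual_pct) > threshold_pct:
        return 1.0
    return accepted_rm / measured_rm

# e.g. an RM element measured 5% high scales matching samples down ~5%:
# corrected = sample_value * residual_factor(5.25, 5.00)
```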

In our “Scanlines_Example” dataset, where the BIR-1G analyses are treated as unknowns and a nearest centroid calculation based on major elements determines that these analyses share the most affinity with BHVO-2G, a residual correction to BHVO-2G has been applied (Fig. 8). The sum of absolute percentage differences is 433 with no affinity correction and 368 with the correction applied. As shown in Fig. 8(c), this improvement is most marked for the lighter masses (Li to Rb) and the lanthanides. No correction is applied to the Zn, Tm or Pb results because the measured values for BHVO-2G are more than 15% from the accepted values (Fig. 8(a)). Results for Ta and Th are significantly worse after the affinity correction, going from 14% and 12% before correction to 28% and 23% post-correction, respectively. A more subtle effect is that the number of measured masses within 5% of the accepted value increases from 14 before the correction to 26 after it (Fig. 8(b)). This suggests that the correction may provide additional accuracy but should be used judiciously, given that some results may be worse post-correction.

Fig. 8 An example of a residual correction based on affinity in the example dataset, where the residuals to the calculated BHVO-2G results (a) are used to correct the BIR-1G results, which are treated as unknowns (b). The improvement from applying this correction for each channel is shown in (c), where positive values are closer to the accepted value and negative values are further away. In (a) the results for Pb are not shown, but are approximately 60% from the accepted value (2.7 μg g−1 measured vs. the accepted value of 1.7 μg g−1). In (a) the dashed horizontal lines represent the 15% threshold, beyond which no residual correction is applied.

11 Summary

Here we present a new approach to calibration curves that includes a time axis to allow for correction of time-dependent sensitivity drift when using multiple reference materials. The calculation of a yield normalisation factor between reference materials avoids ablation-yield effects when constructing calibration curves. Providing a number of methods for fitting calibration curves, along with a range of interpolation options, allows exploration of the best set of parameters for correcting trace element data. Yields for elements lacking an accepted value in the calibrating RMs, or where interferences affect the accuracy, may be calculated from nearby elements. In some cases, where the element of interest is of low concentration relative to those surrounding it, this may provide better accuracy than using the measured data (e.g. Th in the example dataset).

We also present the combination of multi-RM calibration curves with per-sample yield correction, in the form of either an internal standard or sum normalisation (both having the same effect), along with downhole fractionation corrections and residual corrections based on compositional affinity.

The importance and magnitude of each of the corrections demonstrated will depend largely on the samples being measured and the nature of each experiment. However, in the example dataset shown, improvements of up to 20% due to correction for sensitivity drift, and up to 20% for downhole fractionation correction, suggest that for some elements these corrections can significantly improve results. The use of a residual correction based on affinity may provide final additional gains to accuracy, although its use should be carefully evaluated in practice.

Conflicts of interest

BP and JP declare that they are currently in the employ of Elemental Scientific Inc., the company that sells the underlying software (iolite) used in this contribution. This does not, in their opinion, affect the underlying principles and advances described herein which were obtained predominantly in a university research environment prior to undertaking these roles.

Notes and references

  1. D. Hare, C. Austin, P. Doble and M. Arora, J. Dent., 2011, 39, 397–403 CrossRef CAS PubMed.
  2. R. Kovacs, S. Schlosser, S. P. Staub, A. Schmiderer, E. Pernicka and D. Günther, J. Anal. At. Spectrom., 2009, 24, 476–483 RSC.
  3. J. Almirall, A. Akmeemana, K. Lambert, P. Jiang, E. Bakowska, R. Corzo, C. M. Lopez, E. Pollock, K. Prasch, T. Trejos, P. Weis, W. Wiarda, H. Xie and P. Zoon, Spectrochim. Acta, Part B, 2021, 179, 106119 CrossRef CAS PubMed.
  4. M. Ogrizek, A. Kroflič and M. Šala, Trends Environ. Anal. Chem., 2022, 33, e00155 CrossRef CAS.
  5. D. Chew, K. Drost, J. H. Marsh and J. A. Petrus, Chem. Geol., 2021, 559, 119917 CrossRef CAS.
  6. Z. Chen, J. Anal. At. Spectrom., 1999, 14, 1823–1828 RSC.
  7. C. Paton, J. Hellstrom, B. Paul, J. Woodhead and J. Hergt, J. Anal. At. Spectrom., 2011, 26, 2508–2518 RSC.
  8. S. Seabold and J. Perktold, 9th Python in Science Conference, 2010 Search PubMed.
  9. H. P. Longerich, S. E. Jackson and D. Günther, J. Anal. At. Spectrom., 1996, 11, 899–904 RSC.
  10. J. D. Woodhead, J. Hellstrom, J. M. Hergt, A. Greig and R. Maas, Geostand. Geoanal. Res., 2007, 31, 331–343 CAS.
  11. H. Pan, L. Feng, Y. Lu, Y. Han, J. Xiong and H. Li, TrAC, Trends Anal. Chem., 2022, 156, 116710 CrossRef CAS.
  12. A. M. Leach and G. M. Hieftje, J. Anal. At. Spectrom., 2000, 15, 1121–1124 RSC.
  13. Y. Liu, Z. Hu, S. Gao, D. Günther, J. Xu, C. Gao and H. Chen, Chem. Geol., 2008, 257, 34–43 CrossRef CAS.
  14. B. Paul, J. D. Woodhead, C. Paton, J. M. Hergt, J. Hellstrom and C. A. Norris, Geostand. Geoanal. Res., 2014, 38, 253–263 CrossRef.
  15. K. P. Jochum, U. Nohl, K. Herwig, E. Lammel, B. Stoll and A. W. Hofmann, Geostand. Geoanal. Res., 2005, 29, 333–338 CrossRef CAS.
  16. P. T. Boggs, R. H. Byrd and R. B. Schnabel, SIAM J. Sci. Stat. Comput., 1987, 8, 1052–1078 CrossRef.
  17. D. York, N. M. Evensen, M. L. Martínez and J. De Basabe Delgado, Am. J. Phys., 2004, 72, 367–375 CrossRef.
  18. P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, J. Bright, S. J. van der Walt, M. Brett, J. Wilson, K. J. Millman, N. Mayorov, A. R. J. Nelson, E. Jones, R. Kern, E. Larson, C. J. Carey, İ. Polat, Y. Feng, E. W. Moore, J. VanderPlas, D. Laxalde, J. Perktold, R. Cimrman, I. Henriksen, E. A. Quintero, C. R. Harris, A. M. Archibald, A. H. Ribeiro, F. Pedregosa, P. van Mulbregt and SciPy 1.0 Contributors, Nat. Methods, 2020, 17, 261–272 CrossRef CAS PubMed.
  19. A. Noda, Bull. Geol. Surv. Jpn., 2017, 68, 131–140 CrossRef CAS.
  20. S. K. I. Funke, M. Sperling and U. Karst, Anal. Chem., 2021, 93, 15720–15727 CrossRef CAS PubMed.
  21. L. Hendriks, A. Gundlach-Graham, B. Hattendorf and D. Günther, J. Anal. At. Spectrom., 2017, 32, 548–561 RSC.
  22. N. Tollari, S.-J. Barnes, R. Cox and H. Nabil, Chem. Geol., 2008, 252, 180–190 CrossRef CAS.
  23. D. Savard, S. Dare, L. P. Bédard and S.-J. Barnes, Geostand. Geoanal. Res., 2023, 47, 243–265 CrossRef CAS.
  24. M. Guillong and D. Günther, J. Anal. At. Spectrom., 2002, 17, 831–837 RSC.
  25. C. Paton, J. D. Woodhead, J. C. Hellstrom, J. M. Hergt, A. Greig and R. Maas, Geochem., Geophys., Geosyst., 2010, 11, Q0AA06 CrossRef.
  26. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot and E. Duchesnay, J. Mach. Learn. Res., 2011, 12, 2825–2830 Search PubMed.

Footnotes

These authors contributed equally to this work.
All count rates referred to herein are background subtracted count rates.
§ Heteroskedasticity in this case refers to the fact that LA-ICP-MS measurement uncertainties increase in a non-linear fashion with decreasing concentration. This affects the model error and some assumptions about the weighting of datapoints in fitting the model. See Funke et al.20 for more details.
