Open Access Article
Samuel Furse *ab, Carlos Martel a, David F. Willer c, Daniel Stabler de, Denise S. Fernandez-Twinn f, Jennifer Scott d, Ryan Patterson-Cross g, Adam J. Watkins h, Samuel Virtue f, Thomas A. K. Prescott a, Ellen Baker d, Jennifer Chennells d, Antonio Vidal-Puig f, Susan E. Ozanne f, Geoffrey C. Kite a, Milada Vítová i, Davide Chiarugi gj, John Moncur k, Albert Koulman bf, Geraldine A. Wright d, Stuart G. Snowden l and Philip C. Stevenson *am
aRoyal Botanic Gardens, Kew, Kew Green, Richmond, Surrey TW9 3AE, UK. E-mail: s.furse@kew.org; samuel@samuelfurse.com; p.stevenson@kew.org; Tel: +44 (0) 20 8332 5867; Tel: +44 (0) 8332 5377
bCore Metabolomics and Lipidomics Laboratory, Wellcome-MRC Institute of Metabolic Science, University of Cambridge, Addenbrooke's Treatment Centre, Keith Day Road, Cambridge, CB2 0QQ, UK
cDepartment of Zoology, The David Attenborough Centre, University of Cambridge, Corn Exchange St., Cambridge, CB2 3QZ, UK
dDepartment of Zoology, University of Oxford, Oxford, OX1 3SZ, UK
eSchool of Biological Sciences, Faculty of Environmental and Life Sciences, University of Southampton, University Road, Southampton, SO17 1BJ, UK
fWellcome-MRC Institute of Metabolic Science and Medical Research Council Metabolic Diseases Unit, University of Cambridge, Keith Day Road, Cambridge, CB2 0QQ, UK
gBioinformatics Core, Wellcome-MRC Institute of Metabolic Science, University of Cambridge, Addenbrooke's Treatment Centre, Keith Day Road, Cambridge, CB2 0QQ, UK
hLifespan and Population Health, School of Medicine, University of Nottingham, Nottingham, NG7 2UH, UK
iInstitute of Botany, Czech Academy of Sciences, Department of Phycology, Dukelská 135, 379 01 Třeboň, Czech Republic
jMax Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Sachsen, Germany
kSpectralWorks Limited, The Heath Business and Technical Park, Runcorn, Cheshire WA7 4EB, UK
lDepartment of Biological Sciences, Royal Holloway College, University of London, Egham, Surrey TW20 0EX, UK
mNatural Resources Institute, University of Greenwich, Chatham, Kent ME4 4TB, UK
First published on 9th September 2024
Lipid metabolism is recognised as being central to growth, disease and health. Lipids, therefore, have an important place in current research on globally significant topics such as food security and biodiversity loss. However, answering questions in these important fields of research requires not only identification and measurement of lipids in a wider variety of sample types than ever before, but also hypothesis-driven analysis of the resulting ‘big data’. We present a novel pipeline that can collect data from a wide range of biological sample types, taking 1 000 000 lipid measurements per 384 well plate, and analyse the data systemically. We provide evidence of the power of the tool through proof-of-principle studies using edible fish (mackerel, bream, seabass) and colonies of Bombus terrestris. Bee colonies were found to behave more like mini-ecosystems than colonies of a single organism, and there was evidence for considerable changes in lipid metabolism in bees through key developmental stages. This is the first report of either high throughput LCMS lipidomics or systemic analysis in individuals, colonies and ecosystems. This novel approach provides new opportunities to analyse metabolic systems at different scales at a level of detail not previously feasible, to answer research questions about societally important topics.
There has been an exponential expansion of the genetics techniques and tools available for investigating how systems are controlled. These have been used in a wide range of applications, including improving the production of foods1–3 and investigating climate change,4–6 and have given invaluable insights into those systems and how they are constructed. However, genetics approaches are not able to directly measure how a system will respond to environmental challenges such as an increase in temperature. This requires more direct readouts of modifiable factors such as metabolites, i.e. the abundance and distribution of individual small molecules. Such an approach provides mechanistic insight into the phenotypic effect(s) observed. Recently, investigations of how lipid metabolism is controlled have been reported.7–9 These studies used systemic or network analyses to answer questions about how metabolism is controlled and how it responds to dietary challenges, either general changes (e.g., a high-fat diet) or individual nutrients (e.g., individual polyunsaturated fatty acids).
To answer questions about how metabolism is controlled or challenged in individual organisms or ecosystems, analysis of metabolites such as lipids is required from a range of sample types. This requires automation to make the scale of the analyses feasible and subsequent wide-scale analysis in silico possible. Lipids are a key focus in biology because they include molecules used to supply and store energy (triglycerides) and others with a structural role (e.g., phospholipids). Furthermore, as all cells need energy and membranes, studies of lipid metabolism are relevant to every cell type. The study of lipid metabolism therefore provides a broad and detailed way to investigate the health and behaviour of biological systems from individual organisms to whole ecosystems, i.e., across a range of scales.
Investigating lipid metabolism in ecosystems and individual organisms requires sample preparation techniques that cover the full range of sample types found in nature. This is a relatively new challenge and represents an emerging need for technological advancement, as most lipidomics pipelines are designed for human blood serum and so have not been optimised for the range of sample types required for complex biological systems. Some groundwork has been done on extending the range of tissue types in lipidomics studies,10,11 however, none of these methods encompasses diverse sample types such as plant material and insects.
A second challenge that emerges from the need to investigate whole ecosystems is the need to collect data from large numbers of samples in parallel. For example, high throughput techniques have emerged recently in metabolomics, with several studies using thousands of samples.12–15 For these analyses, extractions need to be automated16 with the minimum of steps to prepare samples.17 These and other methods have been reviewed18–20 and even tested.11,21,22 Direct Infusion Mass Spectrometry (DIMS) and semi-quantitative LCMS approaches have been reported for collecting lipidomics data. DIMS is an excellent tool for collecting lipidomics data from large numbers of samples without chromatography, and has been used in several of the largest lipidomics studies done to date.13,14 DIMS is a sensitive method that trades the number of variables measured for speed of data collection. Semi-quantitative high throughput LCMS has also been reported,23 measuring a greater number of lipids than DIMS, but requiring longer acquisition times per sample and offering lower sensitivity.
For systemic analyses, a comprehensive survey of lipids is required, along with efficient and effective identification. Big and urgent societal questions on climate change and global food security require scope for network analysis as well as candidate biomarker analysis and similar statistical tests. This points to the need for measurement of as many lipids as possible in the system, and as consistently as possible.
To meet the needs of systemic analysis of ecosystems and individual organisms, we suggest that three major advances are required to construct a lipidomics pipeline suitable for the task. First, the best extraction method for collecting the lipidome for high throughput LCMS in a 384 well plate format must be determined. Second, a rapid and reliable way to process raw lipidomics data into a signals sheet with all lipid variables ID-matched is needed. Third, a way to undertake network analysis in silico on the acquired data is required. We have responded to these needs by constructing a pipeline for metabolomics-based analysis of both individual organisms and multi-organism systems (Fig. 1) and using it for proof-of-principle studies on big questions in ecosystem performance and the health of individual organisms.
We applied our approach in proof-of-concept studies that highlight how lipid-based systems biology can address specific questions and hypotheses around biodiversity loss and other societally important topics.
: 1 : 0.002, DMT). These four extraction methods were tested on nine different sample types (mouse brain, heart and liver, cows’ milk, whole Desmodesmus quadricauda, leaves from Eucalyptus perriniana, polyfloral pollen, whole Bombus terrestris and whole Saccharomyces cerevisiae; BRA, HEA, LIV, BTM, DQu, EuL, PFH, WHB and YEA, respectively), with ten measurements of each stock. Extracts from all extraction methods were run on the same 384w plate. The extraction performance measures used were (i) the number of variables found, (ii) the total signal and (iii) the coefficient of variation, i.e., a measure of how consistent the methods were. The data were then processed using two processing methods before numerical analysis and determination of which extraction method performed best.
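For illustration, these three performance measures could be computed from a long-format signals table along the following lines. This is a minimal sketch in Python (pandas); the column names and the 20% CV threshold shown are assumptions for illustration rather than the exact implementation used in this study.

```python
import pandas as pd

# Hypothetical long-format signals table: one row per measurement.
# Assumed columns: 'method' (BAD/DMT/EAT/TBM), 'sample_type' (BRA, HEA, ...),
# 'replicate', 'variable' (m/z-Rt pair or lipid ID) and 'signal' (peak area).
signals = pd.read_csv("signals_long.csv")

def extraction_performance(df: pd.DataFrame) -> pd.DataFrame:
    """Summarise each extraction method on each sample type by (i) the number
    of variables found, (ii) the total signal and (iii) the fraction of
    variables with a coefficient of variation (CV) below 20%."""
    rows = []
    for (method, sample_type), grp in df.groupby(["method", "sample_type"]):
        per_var = grp.groupby("variable")["signal"]
        cv = per_var.std() / per_var.mean()   # CV across the ten replicate measurements
        rows.append({
            "method": method,
            "sample_type": sample_type,
            "n_variables": grp["variable"].nunique(),
            "total_signal": grp["signal"].sum(),
            "frac_cv_below_20pct": (cv < 0.20).mean(),
        })
    return pd.DataFrame(rows)

print(extraction_performance(signals).sort_values(["sample_type", "method"]))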
Data from the extraction methods were initially processed using a conventional processing method.26 The number of signals (each with a unique m/z and Rt, Fig. S1A, ESI†) showed little difference between methods, unlike the total signal, which did differ between methods (Fig. S1B, ESI†). Coefficients of variation (CV) of signal size were calculated for each variable in each method on each sample type (Table S2, ESI†). These showed that the BAD and DMT methods were similar, with the DMT method having slightly more variables with a CV below 20% and below 15%. This type of analysis provided some insight into the differences between methods; however, this approach to processing LCMS data is incompatible with a systems analysis, as the latter requires ID-matching for all variables and this approach also identified secondary ions of the more abundant signals. To overcome this limitation, we automated the matching of lipid IDs to lipidomics data using commercially-available software (AnalyzerPro® XD from SpectralWorks Ltd) with a comprehensive target library (TL) generated in-house. The TL consisted of around 7.5k triglycerides, ceramides and phospholipids and was used to assess the extraction methods.
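A simplified sketch of how signals can be ID-matched against a target library by m/z and retention time is shown below. The tolerances, column names and nearest-match rule are illustrative assumptions; they are not the settings used in AnalyzerPro® XD.

```python
import numpy as np
import pandas as pd

def match_to_library(peaks: pd.DataFrame, target_library: pd.DataFrame,
                     mz_tol_ppm: float = 5.0, rt_tol_min: float = 0.2) -> pd.DataFrame:
    """Assign each detected peak (columns 'mz', 'rt', 'signal') the nearest
    library lipid (columns 'lipid_id', 'mz', 'rt') within the m/z (ppm) and
    retention-time tolerances; unmatched peaks are dropped."""
    matched = []
    for _, peak in peaks.iterrows():
        mz_window = peak["mz"] * mz_tol_ppm / 1e6
        hits = target_library[
            (np.abs(target_library["mz"] - peak["mz"]) <= mz_window)
            & (np.abs(target_library["rt"] - peak["rt"]) <= rt_tol_min)
        ]
        if not hits.empty:
            # If several library lipids fall inside the window, take the closest m/z
            best = hits.iloc[(hits["mz"] - peak["mz"]).abs().argmin()]
            matched.append({**peak.to_dict(), "lipid_id": best["lipid_id"]})
    return pd.DataFrame(matched)
```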
ID-matched processed data were then used to assess the quality of the extraction procedures. Fig. S2 (ESI†) shows the number of variables and total signal of ID-matched signals for each method. These analyses show subtle differences between the total signal measured for each of the methods, with BAD and DMT being similar and DMT often, but not always, slightly higher than BAD. Student's t-tests showed that DMT gave greater total signal for BRA, BTM, DQu, EuL, HEA and WHB (p = 0.015272, 0.001395, 2.63 × 10−6, 3.53 × 10−13, 4.94 × 10−6 and 2.16 × 10−5, respectively), whereas BAD gave greater total signal for YEA (p = 0.001856). No difference in total signal was found between DMT and BAD for either LIV or PFH (p = 0.352035 and 0.684561). The total signal strength of extracts collected using EAT was higher than that of the TBM method, but not as high as BAD or DMT.
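The per-sample-type comparisons of total signal can be reproduced with Student's t-tests along these lines; the table layout is an assumption made for this sketch.

```python
import pandas as pd
from scipy.stats import ttest_ind

def compare_methods(totals: pd.DataFrame, a: str = "DMT", b: str = "BAD") -> pd.DataFrame:
    """Compare the total ID-matched signal of two extraction methods for each
    sample type. 'totals' is assumed to hold one row per extract with columns
    'method', 'sample_type' and 'total_signal'."""
    rows = []
    for sample_type, grp in totals.groupby("sample_type"):
        x = grp.loc[grp["method"] == a, "total_signal"]
        y = grp.loc[grp["method"] == b, "total_signal"]
        stat, p = ttest_ind(x, y)  # Student's t-test (equal variances assumed)
        rows.append({"sample_type": sample_type,
                     f"mean_{a}": x.mean(), f"mean_{b}": y.mean(), "p": p})
    return pd.DataFrame(rows)
```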
Processing the data using a TL simplified the workflow and reduced the computing power needed to produce a signals sheet. This facilitated assessment of the consistency of the extraction procedures (CV). The CVs of the four methods, calculated using only lipid variables, show that the BAD and DMT methods performed similarly, with DMT giving 1–3% more lipid variables overall than BAD (Table S3, ‘Sum’, ESI†). Here too, the EAT method was more consistent than the other three methods, and TBM was less consistent. The impressively consistent performance of the EAT method is encouraging; however, its lower total signal compared with the other methods suggested that this solvent was saturated. Of the methods tested, the DMT method therefore performed best and was the one used subsequently. These results answer the question of which of the tested extraction methods is best for high throughput LCMS lipidomics data collection across the range of sample types needed for the analysis of metabolic systems.
However, multi-variate analyses such as PCAs give very limited insight into the mechanism that drives the effect seen. This presents a problem for system-level studies. Interpreting lipidomics data from several different tissues within individual organisms using an MVA is limited in what it can explain about how the system is controlled, as any visible distinction relies on subgrouping of individual tissues in the different groups. Similarly, ecological studies of landscapes that comprise several trophic levels require strong distinctions in the molecular composition of individual samples in order for any difference between them to be visible. This type of analysis may therefore miss a range of sub-lethal differences between groups or locations ascribed to differences in dietary intake or nutrient availability from the landscape. For example, an important question in ecology at present is how pollination services are responding to climate change and how they can be maintained in order to protect the biodiversity of flowering plants. Thus, understanding how both social and solitary bees interact with the rest of their environment, and whether they visit a range of plants (generalist) or are more restricted (oligolectic), by preference or necessity, demands a more systemic approach than multi-variate analyses can give.
Second, MVAs fail to exploit the relationships between the samples, i.e., the structure of the biological system from which they come. Fig. 2E and F show tissues that describe the metabolic structure of edible fish and Bombus terrestris fed contrasting diets, respectively. The difference between the groups can be seen; however, what is accumulated where, and thus how the system is controlled, is not visible.
In order to understand how biological systems are controlled and what happens when they are stressed, the known connections between tissues or organisms must be exploited. Including the spatial distribution in the analysis sorts the metabolite composition data and allows it to be plotted such that the parts of the system where the biggest changes are found can be identified (shown schematically in Fig. 2G). We also judged that an approach that does not rely on controversial features, such as the p values associated with Student's t-tests, is attractive. We therefore updated and expanded a non-statistical approach to network analysis for analysing metabolic systems, and present Lipid Traffic Analysis v3.0 (LTA). This software plots the spatial distribution of variables according to their lipid type. A-type variables are lipids found in all compartments (tissues/sample types) of a given group. B-type lipids are variables found in pairs of adjacent compartments, for example in the liver and the serum in mammals, or the brain and ocular cortex in bees. U-type variables are found only in one compartment for a given group. We also introduce N2-type variables, which are found in pairs of non-adjacent compartments. The N2-type is useful for identifying variables that exist independently or that imply the existence of unexpected connections in a network.
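To make the classification concrete, a minimal sketch of how lipid variables could be assigned to A-, B-, U- and N2-types from per-compartment lipid lists and a list of adjacent compartment pairs is given below. This is not the lipidta implementation; the exclusivity rules (here, B- and N2-types are restricted to lipids found in exactly two compartments) and the toy data are simplifying assumptions.

```python
from itertools import combinations

def classify_lipid_types(presence: dict[str, set[str]],
                         adjacent_pairs: set[frozenset[str]]) -> dict[str, set[str]]:
    """Classify lipid variables by spatial distribution for one phenotype group.
    'presence' maps each compartment (tissue/sample type) to the set of lipid
    variables detected in it; 'adjacent_pairs' lists which compartments are
    connected in the metabolic network."""
    # Count how many compartments each lipid appears in
    counts: dict[str, int] = {}
    for lipids in presence.values():
        for lipid in lipids:
            counts[lipid] = counts.get(lipid, 0) + 1

    a_type = set.intersection(*presence.values())         # in every compartment
    u_type = {l for l, n in counts.items() if n == 1}      # in exactly one
    pair_only = {l for l, n in counts.items() if n == 2}   # in exactly two

    b_type, n2_type = set(), set()
    for c1, c2 in combinations(presence, 2):
        shared = presence[c1] & presence[c2] & pair_only
        if frozenset((c1, c2)) in adjacent_pairs:
            b_type |= shared      # shared by a pair of adjacent compartments
        else:
            n2_type |= shared     # shared by a pair of non-adjacent compartments
    return {"A": a_type, "B": b_type, "U": u_type, "N2": n2_type}

# Toy example (hypothetical data): a three-compartment network
presence = {"larvae": {"PC(34:1)", "TG(52:2)"},
            "pupae": {"PC(34:1)", "TG(54:3)"},
            "drones": {"PC(34:1)", "TG(52:2)"}}
adjacent = {frozenset(("larvae", "pupae")), frozenset(("pupae", "drones"))}
print(classify_lipid_types(presence, adjacent))
```

In this toy example, TG(52:2) is shared by larvae and drones but absent from pupae, so it is reported as N2-type, flagging a possible unexpected connection in the network.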
Analysing lipid data in this way is useful because (i) it is a plot of lipid distribution that does not rely on probability or other metrics, (ii) the plots can be used to characterise the system and (iii) the analysis sifts out the most important variables and parts of the network, identifying how the control of the systems differs. This approach therefore avoids a reliance on probability and thus the need for significance thresholds. The combination of the data collection strategy we have developed and the network analysis, i.e., the full pipeline, was used for two sets of proof-of-principle experiments addressing globally important societal challenges: one on rearing livestock (fish) and the other on protecting biodiversity through understanding a generalist pollinator (the bumble bee). These are two separate questions that require a similar approach and that this pipeline can be used to answer.
First, a proof-of-principle traffic analysis was performed on edible fish species from the same biome but different taxonomic orders (Moroniformes and Perciformes), and then with an Atlantic species of another order (Scombriformes). The LTA of Dicentrarchus labrax (seabass) against Sparus aurata (bream) showed a surprising uniformity of phosphatidylcholines (PCs), with several PCs found throughout the system in both species (A-type lipids, Fig. 3A). However, there is no general pattern of PCs throughout the network shared between D. labrax and S. aurata, and only a modest overlap (J) between the two species. This suggests that lipid metabolism has evolved differently in the two taxonomic orders. Importantly, the traffic analysis of triglycerides between D. labrax and S. aurata also showed that there are over 200 triglycerides found throughout each species (Fig. 3B), something that is also observed in S. scombrus (mackerel, Fig. S3, ESI†).
These analyses show that there is a remarkable complexity in the lipid metabolism of edible fish in general and hint that, for these fish to be healthy, the fatty acid profile of their dietary intake may also need to be very rich. This type of analysis therefore offers ways to manage the transition towards eliminating the use of wild fish in farmed fish feeds without negatively affecting farmed fish growth or nutritional profile.27 Determining the precise dietary intake even of humans is notoriously difficult,28 and thus that of a wild or farmed animal is yet more challenging. Gaining a greater understanding of the specific lipid requirements of farmed fish for optimum growth is critical for the aquaculture industry as it moves towards reducing its economically and environmentally costly reliance upon fishmeal and fish oil.29,30
Systemic analysis of a colony or mini-ecosystem of individuals is useful in studies related to biodiversity loss as it can tell us about the relationships between individuals. For example, pollinating insects such as bees provide an important service to plant-based habitats that are themselves a system. However, the living arrangements of bees also have a well-defined structure that represents a system. There is also scope for analysis of individual organisms. A proof-of-principle study in a commercially available species of bumble bee (Bombus terrestris) was done both on the queens and on the whole colonies of which they were part. The colonies (n = 1 per group) were fed honeybee-collected pollen from Fagopyrum esculentum (buckwheat) or Helianthus annuus (sunflower).
The traffic analysis of lipids within the queens showed that, for triglycerides, a diet of pollen from Fagopyrum esculentum was associated with a greater number of triglycerides throughout the system (Fig. S4A, ESI†). However, the traffic analysis of phosphatidylcholines suggested a more mixed picture for that lipid class (Fig. S4B, ESI†), and those of phosphatidylinositols and phosphatidylglycerols (Fig. S4C and D, ESI†) suggest that the distribution of these lipids is more complicated than simply more or fewer variables. These analyses suggest that the control of lipid metabolism changes according to dietary intake and that this differs between triglycerides (energy storage and distribution) and phospholipids (cellular structure). This has potentially far-reaching implications, as it means that feeding in bees may have short- and long-term consequences for individual bees. This raises questions about whether the effects are similar at colony level for social insects.
Traffic analysis showed a simpler picture for the colony than within the queens (Fig. 4), with a greater number of variables throughout for TG and PC in the colony fed pollen from Fagopyrum esculentum than in that fed pollen from Helianthus annuus. This is reflected in the traffic analyses of PG and PI (Fig. S5, ESI†). This therefore also shows that there are considerable diet-driven effects on the control of metabolism at colony level. These bee colonies also showed at least two fundamental features. First, both the phosphatidylcholine and triglyceride traffic showed that the lipid compositions of pupae, newly-emerged drones and week-old drones were similar; however, the lipid composition of larvae was rather different from that of pupae, whichever diet was fed. This suggests that there are considerable changes in lipid metabolism late in the larval development of bumble bees. Second, we see many more variables in 1 d old and 7 d old frass than in fresh frass. This suggests that new lipids are being made in the frass after it is produced. As several new phosphatidylcholines are found, we suggest that a eukaryotic species, presumably a fungus, is probably responsible for this change in lipid composition. Bumble bee colonies may therefore represent a micro-ecosystem rather than simply a colony of one organism. Together with other evidence,31 this suggests that fungi play an important role in the colony development of bumble bees.
Taken together, the evidence that dietary intake influences the control of lipid metabolism in colonies and individuals contextualises concerns about global challenges such as agricultural intensification and climate change, which can dramatically alter the nutrient landscape for bees. It suggests that changes to nutrient availability caused by biodiversity loss will affect the health of colonies of generalist pollinator bee species. This indicates that supporting pollination services is a key component of halting biodiversity loss.
The systemic analysis of individuals, such as fish and bees, and of ecosystems has myriad applications for several timely questions beyond understanding biodiversity loss and global food security. Lipid traffic analysis has already been used in medical research, on type 2 diabetes,32 gestational diabetes7,33 and the feeding of essential nutrients.9 Studies of obesity and associated factors also require analysis of whole organisms and thus will rely on network analyses. Similarly, conditions such as cancer and infectious disease are system-wide, and thus understanding these diseases using systemic analyses can form part of a hypothesis-driven investigation of the progress of the disease and of interventions to halt it. To date, much of the work on obesity, cancer, metabolic disease and infection has focused on lipid signatures of the conditions34–36 or on genetics.37–39
We present a pipeline that can collect data from a wide range of biological sample types, taking 1 000 000 lipid measurements per 384w plate, and then perform network analyses on the processed data to answer scientific questions. This novel approach represents a substantial advance in our ability to carry out the systemic metabolic analysis of individual organisms, colonies and even ecosystems. Thorough and objective testing of lipid extraction methods was used to identify the best method for resolution and consistency. The advances described relied upon the development of end-to-end methods for sample preparation and lipidomics data collection from a wide variety of tissue types (everything from leaf to liver) promptly and precisely. This enabled new insights in the proof-of-principle studies, which showed that triglyceride metabolism in edible fish was more varied and complicated than expected, and that colonies of bees represent mini-ecosystems rather than simply groups of co-habiting individuals. The study of bee colonies also found considerable changes in lipid metabolism through the development of the bees. The advances in breadth and capacity in lipidomics that this pipeline offers provide the necessary infrastructure to answer key questions about how metabolic systems are controlled and what happens when they are challenged. This technology has immediate application in research into metabolic disease, nutrition, conservation, sustainable farming and biodiversity loss, amongst others.
Leaf material and insect samples have not previously been used in large-scale lipidomics studies and presented unique challenges. Leaves and whole bees were made more brittle and partly preserved by being freeze-dried. Leaves were sliced to shorten the fibres (<5 mm) or crushed when dry, before being soaked in the buffer (2–6 h). The dry samples were then homogenised using a robust laboratory homogeniser (steel macerator). Bees required some blunt mechanical disruption immediately before mechanical homogenisation to break the head casing and the thoracic and abdominal exoskeleton. The constituent tissues of bees (brain, gut, hypopharyngeal gland, thoracic muscle, frass) and earlier developmental stages (larvae, pupae, newly emerged adults) behaved similarly to mammalian tissues (Mus musculus: brain, liver, adipose, heart; Homo sapiens: whole blood). Fish tissues (from Dicentrarchus labrax, Scomber scombrus and Sparus aurata: belly, gut, back, heart, tail, gill, head, cheek, skin, liver) also behaved in the same way. The amount of buffer used varied according to the amount of lipid in the sample, with fattier/more lipidic samples needing to be more dilute (see Table S1, ESI†).
In order that data from large numbers of samples can be collected in one batch, both for testing extractions and for continued use in a pipeline, extractions must be carried out in parallel. Parallel extractions were carried out in this study using a 96-channel pipette mounted onto a movable platform (Integra Viaflo, ∼£15k). This allows preparation of 384w microplates for data collection.
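For illustration, the mapping of four source 96-well plates into the quadrants of a 384-well plate can be expressed as a simple interleaved layout. The actual plate layout used in the pipeline is not specified here, so this Python sketch is an assumption made for illustration only.

```python
import string

def map_96_to_384(plate_index: int, row_96: int, col_96: int) -> str:
    """Map well (row_96, col_96), both 0-based, of one of four 96-well plates
    (plate_index 0-3) to an interleaved quadrant position on a 384-well plate."""
    row_384 = row_96 * 2 + (plate_index // 2)   # rows A-P
    col_384 = col_96 * 2 + (plate_index % 2)    # columns 1-24
    return f"{string.ascii_uppercase[row_384]}{col_384 + 1}"

# Example: well A1 of the third 96-well plate (index 2) lands in well B1 of the 384w plate
print(map_96_to_384(2, 0, 0))  # 'B1'
```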
Data collection poses a particular challenge in investigating whole systems as it requires large numbers of samples to be handled in parallel. High throughput techniques have emerged relatively recently in metabolomics, with several studies reporting thousands of samples per batch.12–15 For these analyses, extractions need to be automated16 with the minimum of steps to prepare samples.17 These and other methods have been reviewed18–20 and even tested.11,21,22 Liquid Chromatography Mass Spectrometry (LCMS) was chosen for this pipeline because it is the optimum approach to separate and measure the large number of lipids present in biological samples (the number of lipid species is only an order of magnitude smaller than the number of proteins41). Recent advances in autosampler hardware mean that 384w microplates can now be used in commercially-available LCMS set-ups.
Previously, we developed the concept of molecular traffic analysis and built software in R; Lipid Traffic Analysis (LTA) v1.0 and v2.3 were focused on spatial analyses within individuals.8,9,33,42 In order to be able to do systemic or network analysis suitable for colonies and ecosystems as well as individuals, we built LTA v3.0 in Python (https://pypi.org/project/lipidta/). This has additional features that are useful for complex networks (vide infra). Traffic analysis in the context of metabolomics is based on the concept of lipid types. A-type variables are lipids found in all compartments (tissues/sample types) of a given phenotype group. B-type lipids are variables found in pairs of adjacent compartments, for example in the liver and the serum in mammals, or the brain and ocular cortex in bees. U-type variables are found only in one compartment for a given group. We introduce N2-type variables, which are found in pairs of non-adjacent compartments. The N2-type is useful for identifying variables that exist independently or that imply the existence of unexpected connections in a network, something that is useful in complex networks or networks that have not been fully explored. These lipid types are represented on a traffic analysis diagram alongside statistics to inform interpretation of the numbers. Jaccard–Tanimoto coefficients (JTCs, J) are used to show the overlap between the identities of the variables, and associated p values are used as a non-parametric measure of the probability that the observed dissimilarity occurred by random chance (these are not the same as the p values used in t-tests).
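The Jaccard–Tanimoto coefficient itself is straightforward to compute, as in the sketch below; the example lipid sets are hypothetical, and the associated non-parametric p value is calculated separately within LTA.

```python
def jaccard_tanimoto(set_a: set[str], set_b: set[str]) -> float:
    """Jaccard-Tanimoto coefficient: |A intersection B| / |A union B|."""
    union = set_a | set_b
    if not union:
        return 1.0  # two empty sets are treated as identical
    return len(set_a & set_b) / len(union)

# Example: overlap between the A-type triglycerides of two phenotype groups
# (hypothetical variable names for illustration)
j = jaccard_tanimoto({"TG(50:1)", "TG(52:2)", "TG(54:3)"},
                     {"TG(52:2)", "TG(54:4)"})
print(f"J = {j:.2f}")  # J = 0.25
```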
We mapped the connectivity of samples in the proof-of-principle tests using their known metabolic connections (see Results). How these metabolites are distributed through two different systems shows how the systems differ and thus how their control differs. This is the principal information output of a traffic analysis. We ran two proof-of-principle experiments: one to understand how the control of biological systems differs between species (fish, Fig. 3), and one comparing colonies of Bombus terrestris fed different diets (Fig. 4).
: 1 w/v). Bees fed chestnut, poppy or a combination of these pollens were purchased from Agralan Growers (Wiltshire, UK) and reared in a laboratory incubator at the Wytham research station (Oxford, UK), being held at 22–27 °C and 35–40% humidity.
All algae were cultivated in glass photobioreactors in liquid media under continuous light to an OD750 of 1.5, harvested by centrifugation, frozen at −80 °C and freeze-dried.
Desmodesmus quadricauda (Turpin) Brébisson (strain Greifswald/15), Culture Collection of Autotrophic Organisms, Institute of Botany, Czechia. Starting cultures were inoculated into SS medium, and cultivated at 30 °C, 750 μmol photons m−2 s−1, 2% v/v CO2.43
Chlamydomonas reinhardtii wild type 21gr (CC-1690) Chlamydomonas Resource Center at the University of Minnesota, St. Paul, MN, USA. Starting cultures were inoculated into HS medium, and cultivated at 30 °C, 500 μmol photons m−2 s−1, 2% v/v CO2.44
Galdieria sulphuraria (Galdieri) Merola, 002, Algal Collection of the University “Federico II” of Naples, Italy. Starting cultures were inoculated into Galdieria medium, pH 3, and cultivated at 40 °C, 500 μmol photons m−2 s−1, 2% v/v CO2.45
Hibberdia magna K-1175, Norwegian Culture Collection of Algae, Norway. Starting cultures were inoculated into WC medium, and cultivated at 20 °C, 150 μmol photons m−2 s−1, 1% v/v CO2.46
: 1 v/w).
Once extracts from all four of the 96-well plates had been placed in the 384 well plate (glass-coated, SureSTART™ WebSeal™ Plate+), the dried films were re-dissolved (XMI-AF, 80 μL per well) and the plate was heat-sealed with aluminium foil (AB-0757, Fisher Scientific) and queued immediately, with the first injection within 5 min. The extractions were timed so that the instrument was available immediately after the completion of extractions.
000 (m/z 200), with the H-ESI spray voltage set to 2.86 kV, nitrogen gas flows of 45 (sheath), 5 (auxiliary) and 1 (sweep) arbitrary units, and ion transfer tube and vaporizer temperatures of 300 °C and 350 °C, respectively. The AGC was set to Standard (Full Scan 1 000 000 and SIM/PRM 200 000) with a maximum ion injection time of 200 ms. The mass acquisition window was m/z 480–1100, with the fluoranthene cation (m/z 202.077) used for internal mass calibration.
All signals for which the correlation was found to be >0.75 for at least one of the QC stocks used were regarded as passing the QC test. 3198 variables passed both tests, across all samples.
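A sketch of this QC step is shown below, assuming a QC stock injected as a dilution series at known relative concentrations and a long-format table of QC signals; the column names and the minimum number of dilution points are illustrative assumptions.

```python
import pandas as pd
from scipy.stats import pearsonr

def qc_filter(qc_signals: pd.DataFrame, threshold: float = 0.75) -> set[str]:
    """Keep variables whose signal correlates (r > threshold) with the known
    relative concentration of at least one QC stock dilution series.
    Expected columns: 'qc_stock', 'rel_conc', 'variable', 'signal'."""
    passing = set()
    for (stock, variable), grp in qc_signals.groupby(["qc_stock", "variable"]):
        if grp["rel_conc"].nunique() < 3:
            continue  # need enough dilution points for a meaningful correlation
        r, _ = pearsonr(grp["rel_conc"], grp["signal"])
        if r > threshold:
            passing.add(variable)
    return passing
```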
† Electronic supplementary information (ESI) available. See DOI: https://doi.org/10.1039/d4mo00083h
This journal is © The Royal Society of Chemistry 2024