Malcolm Burns,*a Gordon Wiseman,b Angus Knight,c Peter Bramley,d Lucy Foster,e Sophie Rollinson,e Andrew Damantf and Sandy Primroseg
aLGC, Queens Road, Teddington, Middlesex, TW11 0LY, UK. E-mail: Malcolm.Burns@lgcgroup.com; Tel: +44 (0)208 943 7000
bPremier Analytical Services, Premier Foods Group Ltd., The Lord Rank Centre, Lincoln Road, High Wycombe, Bucks HP12 3QS, UK
cLeatherhead Food Research, Randalls Road, Leatherhead, Surrey KT22 7RY, UK
dSchool of Biological Sciences, Royal Holloway, University of London, Egham, Surrey TW20 0EX, UK
eDepartment for the Environment, Food and Rural Affairs, 17 Smith Square, London SW1P 3JR, UK
fFood Standards Agency, Aviation House, 125 Kingsway, London, WC2B 6NH, UK
gBusiness & Technology Management, 21 Amersham Road, High Wycombe, Bucks HP13 6QS, UK
First published on 23rd November 2015
Following the detection of a significant amount of horse DNA in a beef burger product on sale to the public at a UK supermarket in early 2013, the Elliott report was published in 2014 and contained a list of recommendations for helping ensure food integrity. One of these recommendations was to improve laboratory testing capacity and capability so as to ensure a harmonised approach to testing for food authenticity. Molecular biologists have developed exquisitely sensitive methods based on the polymerase chain reaction (PCR) or mass spectrometry for detecting the presence of particular nucleic acid or peptide/protein sequences. These methods have been shown to be specific and sensitive in terms of lower limits of applicability, but they are largely qualitative in nature. Historically, the conversion of these qualitative techniques into reliable quantitative methods has been beset with problems, even when used on relatively simple sample matrices. When the methods are applied to complex sample matrices, as found in many foods, the problems are magnified, resulting in a high measurement uncertainty associated with the result which may mean that the assay is not fit for purpose. However, recent advances in the technology and in the understanding of molecular biology approaches have prompted a re-assessment of the quantitative potential of these methods. This review focuses on important issues to consider when validating a molecular biology assay, and on the various factors that can impact on the measurement uncertainty of results produced by the molecular biology approaches used to detect food fraud, with a particular focus on quantitative PCR-based and proteomics assays.
The fraudulent misdescription of foods for economic gain can mislead the consumer and harm businesses, and can occur through the substitution of high added-value products that command a premium price with cheaper products which are claimed to be authentic. To prove conclusively that fraud has occurred it is necessary first to establish whether the composition of the food is as claimed on the label, and then to quantify the analytes of interest or provide evidence that they are present above a legislative or agreed threshold. Often the substitutes are biochemically very similar to the materials that they replace, and this makes their identification and quantification problematic. The fact that food matrices are extremely complex, variable, and can be subject to varying degrees of processing and treatment further adds to the difficulty. Recently, methods based on the polymerase chain reaction (PCR) and proteomics have been shown to have the required discriminatory capability for the purposes of identification.4 These methods can also be used quantitatively.
A key issue with the misdescription of foods is distinguishing between adventitious contamination and deliberate substitution. The former can occur as a result of inadequate cleaning of equipment between processing different batches, but is generally not expected to exceed 5% on a weight or volume basis.5 On the other hand, if deliberate adulteration has occurred, the undeclared ingredient is likely to be present at more than 5%, since the substitution must be large enough to yield an economic advantage. Below the 5–10% level the economic gain is probably insufficient to make substitution worthwhile.
A reporting level of 1% (w/w) of meat species was adopted in the UK and the European Union (EU) following the finding of a significant amount of horse DNA in beef burgers.1 This level for enforcement action was a pragmatic choice, based on the experience of regulators, enforcement bodies and industry, of an appropriate level at which to distinguish trace contamination from deliberate adulteration.
In the EU, all materials originating from genetically modified (GM) sources must be labelled accordingly, subject to a threshold of 0.9% for the adventitious presence of material from EU-approved GM varieties.6 Basmati rice is a different case: in Europe a number of varieties can be imported tariff-free, but adventitious contamination with unapproved varieties must be below 7% w/w according to the Basmati Code of Practice.7 If international trade is not to be disrupted, it is essential that competent authorities have access to validated analytical methods.
Lack of harmonised best practice often leads to high measurement uncertainty associated with a result. However, the implications of poor practice frequently go beyond this, and have the potential to cause confusion in the minds of those who commission analytical and molecular biological services in food authenticity. This review makes a number of recommendations with respect to best practice guidance for the detection of food fraud, with a particular emphasis on quantitative approaches.
In order to demonstrate that the methods a laboratory implements are fit for the purpose for which they were originally intended, method validation must be undertaken. This comprises both obtaining data that demonstrate the fitness for purpose of a method and documenting this evidence. Method validation is an essential component of the measures that a laboratory should implement to allow it to produce reliable analytical data. Methods of analysis of food are governed by EU legislation8 which describes the required validation. “Full” validation of an analytical method is usually taken to comprise an examination of the characteristics of the method in an inter-laboratory method-performance study (also known as a collaborative study or collaborative trial). Internationally accepted protocols have been established for the “full” validation of a method of analysis by such a collaborative trial.9,10 These protocols/standards require a minimum number of laboratories and test materials to be included in the collaborative trial for the analytical method to be fully validated.
Most published literature on analytical method development, validation and quality control focuses on classic analytical chemistry methodology rather than molecular biology or proteomics/metabolomics. However, many of the guiding principles can be applied to molecular biology methods, which form a key part of the food authenticity detection “tool kit”. For example, the Codex Committee on Methods of Analysis and Sampling (CCMAS) has developed guidelines on criteria for methods for the detection and identification of foods derived from biotechnology.11 These guidelines provide information for the validation of methods for the detection, identification and quantification of specific DNA sequences and specific proteins in foods derived from modern biotechnology. They may also provide information on the validation of methods for other specific DNA sequences and proteins of interest in other foods. Information relating to general considerations for the validation of methods for the analysis of specific DNA sequences and specific proteins in foods is given in the first part of the CCMAS guidelines. Specific annexes contain information on definitions, validation of qualitative and quantitative PCR methods, validation of protein-based methods, and proficiency testing. A similar set of method-acceptance criteria and method-performance requirements has been compiled by the European Network of GMO Laboratories (ENGL). Method-acceptance criteria are those that have to be fulfilled before any method validation is initiated by the EU Reference Laboratory for GM Food and Feed (EU-RL-GMFF).12 The method-performance requirements define the minimum performance characteristics of the method that have to be demonstrated upon completion of a validation study carried out according to internationally accepted technical provisions. This latter requirement is needed in order to certify that the method validated is fit for the purpose of enforcement of Regulation (EC) No 1829/2003.6
In the field of genetically modified organisms (GMOs), the modular approach to method validation has been discussed in depth by molecular biologists.13 According to this approach, the analytical procedure can be described as a series of successive steps (sampling, sample processing and analyte extraction), ending in the interpretation of an analytical result produced with, for example, the real-time polymerase chain reaction. Precision estimates for each stage can be combined into a total precision estimate. In theory, this approach allows the analyst to tailor individual analytical steps to the analyte/matrix combination being analysed. Holst-Jensen and Berdal13 comment that the final analytical result depends on proper method selection and execution, and is valid only if valid methods (modules) are used throughout the analytical procedure.
Within the molecular biology area, the major work on measurement uncertainty estimation has again been undertaken within the GMO sector. Trapmann et al.14 presented two approaches for the estimation of the measurement uncertainty associated with a result. The first approach uses collaborative trial data in combination with in-house quality control data; the alternative approach uses data obtained from within-laboratory sample analysis. The approaches laid down by Trapmann et al.14 are being widely implemented by European laboratories undertaking GMO analyses, and the principles proposed are widely applicable to other molecular biology analyses. Despite these measures, a recent report published by the EU-RL-GMFF on an international comparative test for the detection and quantification of GM events in rice noodles in 2014 revealed that only 58% of participants in the study provided measurement uncertainty estimates associated with a result in a complete and consistent manner.15 This highlighted the need for improvements and harmonisation in the way that analytical testing laboratories report their measurement uncertainty estimates.
There is concern that some laboratories underestimate the size of their measurement uncertainty associated with a result. For chemical analyses, using the results from collaborative trials (i.e. the top-down approach), it would not be unreasonable to anticipate that the (expanded) uncertainties reported by laboratories would be of the orders shown in Table 1.9 Within the molecular biology sector the analyte concentration being determined is often less than 100 μg kg−1. Consequently, it is not uncommon to expect expanded relative measurement uncertainties of at least 44% for analytical results obtained using PCR-based approaches.
Table 1 Expected expanded measurement uncertainties at a range of analyte concentrations9

| Concentration | Expanded uncertainty (relative) | Range of acceptable concentrations |
|---|---|---|
| 100 g/100 g | 4% | 96 to 104 g/100 g |
| 10 g/100 g | 5% | 9.5 to 10.5 g/100 g |
| 1 g/100 g | 8% | 0.92 to 1.08 g/100 g |
| 1 g kg−1 | 11% | 0.89 to 1.11 g kg−1 |
| 100 mg kg−1 | 16% | 84 to 116 mg kg−1 |
| 10 mg kg−1 | 22% | 7.8 to 12.2 mg kg−1 |
| 1 mg kg−1 | 32% | 0.68 to 1.32 mg kg−1 |
| <100 μg kg−1 | 44% | 56 to 144 μg kg−1 |
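The figures in Table 1 are consistent with the widely used Horwitz function, in which the reproducibility relative standard deviation grows steadily as the analyte concentration falls. The short sketch below is a non-authoritative illustration of that relationship; that Table 1 derives from the Horwitz function with a coverage factor of k = 2 is an assumption, although the computed values match the table.

```python
import math

def horwitz_expanded_uncertainty(mass_fraction: float, k: float = 2.0) -> float:
    """Expanded relative uncertainty (%) assuming the Horwitz function:
    RSD_R(%) = 2^(1 - 0.5*log10(C)), with C the dimensionless mass
    fraction, expanded with a coverage factor k."""
    rsd_percent = 2 ** (1 - 0.5 * math.log10(mass_fraction))
    return k * rsd_percent

# Reproduce three rows of Table 1 (1 mg/kg is a mass fraction of 1e-6).
for c, label in [(1e-2, "1 g/100 g"), (1e-4, "100 mg/kg"), (1e-6, "1 mg/kg")]:
    u = horwitz_expanded_uncertainty(c)
    print(f"{label}: U = {u:.0f}%  ->  acceptable range "
          f"{100 - u:.0f}% to {100 + u:.0f}% of the nominal value")
```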
Measurement uncertainty is probably the single most important parameter that describes the quality of the measurement associated with a result. However, many laboratories report only the measurement uncertainty associated with the final analysis and do not normally include the measurement uncertainty associated with sampling itself. It is widely recognised that a major portion of the total measurement uncertainty budget can arise from the upstream sampling stage. The EURACHEM-CITAC Guide16 on the estimation of measurement uncertainty arising from sampling provides a set of useful tools with which the analyst can determine sampling uncertainty and thereby the total measurement uncertainty associated with a result.
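Where standard uncertainties for both sampling and analysis have been estimated, they can be combined as independent contributions by root-sum-of-squares before expansion. A minimal sketch, using hypothetical values rather than figures from the Guide, shows how a dominant sampling term inflates the total:

```python
import math

def combined_expanded_uncertainty(u_sampling: float, u_analysis: float,
                                  k: float = 2.0) -> float:
    """Combine independent sampling and analytical relative standard
    uncertainties (%) by root-sum-of-squares, then expand with k."""
    return k * math.sqrt(u_sampling ** 2 + u_analysis ** 2)

# Hypothetical figures: sampling (12%) dominates the analytical term (5%).
print(f"U = {combined_expanded_uncertainty(12.0, 5.0):.0f}%")  # U = 26%
```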
It is essential that the measurement uncertainty of a test result be known before deciding whether the result shows compliance or non-compliance with a specification. The reason for this is shown in Fig. 1, where four different results for the concentration of an analyte are assessed for their compliance with an agreed limit. For each result, the vertical lines show the expanded uncertainty ± U associated with the result. On the assumption of a normal distribution, the concentration of the analyte is more likely to lie near the centre of the expanded uncertainty interval than near its ends. For results (a) and (d) the analyte concentrations are, respectively, well above and well below the limit. However, for result (b) there is a high probability that the value of the analyte is above the limit, but the limit lies within the uncertainty interval, so compliance cannot be decided unequivocally. Similarly, for result (c) the probability that the analyte is below the limit is high but not absolute.
Fig. 1 Assessment of compliance with a specification limit. Mean values and associated 95% confidence intervals are shown.
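The decision logic illustrated in Fig. 1 can be expressed compactly; the sketch below is illustrative only, and the numerical values in the usage example are hypothetical:

```python
def assess_compliance(result: float, U: float, limit: float) -> str:
    """Classify a result against an upper specification limit using its
    expanded uncertainty interval [result - U, result + U] (cf. Fig. 1)."""
    if result - U > limit:
        return "non-compliant (interval entirely above the limit)"
    if result + U < limit:
        return "compliant (interval entirely below the limit)"
    return "inconclusive (the limit lies within the uncertainty interval)"

# Hypothetical: a 1.1% (w/w) result with U = 0.3% against a 1% reporting level
print(assess_compliance(result=1.1, U=0.3, limit=1.0))  # inconclusive
```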
It is a relatively simple matter to determine the factors contributing to the uncertainty associated with the reported result for an assay in which highly purified reagents are used. However, when real samples are to be analysed it is necessary to consider the total analytical procedure (Fig. 2). For example, a bottom-up estimate of the measurement uncertainty of results obtained using a PCR-based method will include the sample preparation, DNA extraction and DNA purification steps. If the material to be analysed is blood (e.g. in a clinical assay) there will be relatively little variation between samples, and this reduces uncertainty. In the case of foodstuffs the matrices are very complex and variable, and any processing that occurs only increases the variability. Consequently, the measurement uncertainty associated with the reported result can be expected to be high. Contributions to the overall measurement uncertainty can also arise during the PCR set-up, equipment operation, software analysis, manual analysis and user interpretation stages.17 Aspects such as plasticware consumables, the use of reference materials and the quality of primers/probes must be carefully considered in order to minimise the uncertainty associated with the analytical result. In particular, care must be taken to ensure that all analytical instruments (e.g. balances, thermal cyclers, centrifuges, etc.) are serviced and calibrated correctly.
Fig. 2 Example factors contributing to measurement uncertainty of a test result involving the use of real-time PCR. Adapted from Burns and Valdivia.17
Special attention should be paid to pipettes, as their accuracy and precision need to be determined more frequently than for other instruments. Using gravimetric analysis, the performance of individual pipettes should be compared with the manufacturer's specifications according to a routine schedule: for example, accuracy checks involving individual measurements may have to be conducted weekly, and precision tests involving multiple measurements may have to be done bi-annually. In addition, leak tests may have to be performed on a more regular and frequent basis.
The quality and quantity of DNA extracted from food products tend to decrease with the extent to which the food is processed, because physical, chemical and enzymatic treatment of food can result in a marked decrease in DNA fragment size.22–24 With highly sheared DNA there may not be enough template DNA available for the PCR.25 An added complication is that the amount of DNA extracted is governed by the particle size of the food: as particle size diminishes the amount of DNA extracted increases.21,26 However, homogenisation of the food sample to reduce particle size might itself result in shearing of the DNA. The preferred method for determining whether DNA has been extensively degraded is to determine its size using gel or capillary electrophoresis, to ensure that there is a high mean fragment size and minimal smearing or “tailing”, which is indicative of fragmented DNA.
A number of methods have been used for quantifying either the amount of DNA that has been extracted or the amount being added to a PCR reaction. These methods are: spectrophotometry, fluorimetry and chemiluminescence. For a solution of purified double-stranded DNA that is not degraded, an absorbance value of one at a wavelength of 260 nm (A260) corresponds to a concentration of 50 μg mL−1.27 However, as the DNA becomes degraded the absorbance increases, and this probably is due to the presence of single-stranded DNA. Note that single-stranded DNA can occur even in the absence of size degradation.28 If fluorimetry is used to determine DNA concentration then the samples first need to be incubated with a fluorescent dye such as PicoGreen®. There are three advantages of fluorimetry for determining DNA concentration. First, it is ∼100 times more sensitive than UV spectrophotometry. Second, the linear concentration range extends over four orders of magnitude. Third, it is relatively insensitive to the presence of contaminants, with the notable exception of CTAB, which is used in many DNA extraction protocols.28 Chemiluminescence can also be used to quantify DNA. It has a sensitivity similar to that of fluorimetry, but the DNA must be smaller than 6000 base pairs in length. If the DNA is larger than this it must be reduced in size by treatment with an appropriate restriction enzyme. Also, the degree of sensitivity to quenching by other constituents of the solution is not known.
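As a minimal illustration of the spectrophotometric conversion described above (the handling of the dilution factor is an assumption of typical practice):

```python
def dsdna_concentration_ug_per_ml(a260: float, dilution_factor: float = 1.0) -> float:
    """Estimate double-stranded DNA concentration from UV absorbance:
    an A260 of 1.0 corresponds to ~50 ug/mL of intact dsDNA."""
    return a260 * 50.0 * dilution_factor

# e.g. a 1:10 dilution reading A260 = 0.25 implies ~125 ug/mL in the stock
print(dsdna_concentration_ug_per_ml(0.25, dilution_factor=10))
```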
There also are issues associated with determining sample purity and this particularly is critical if the PCR is going to be used. A standard method of assessing DNA purity is to determine the A260:A280 ratio, which refers to the ratio of the absorbances at 260 and 280 nm wavelengths. The value obtained indicates if the DNA is contaminated with RNA, protein or aromatic compounds. However, many different substances can inhibit the PCR, even when present in trace amounts, and most of them will not be detected by simple spectrophotometry.29 These inhibitors can come from the test sample or the quality of reagents and plasticware used. The uncertainty associated with the quality of the reagents and plasticware can be minimised by specifying the grade and source in SOPs. Residual amounts of reagents such as CTAB, EDTA, ethanol, isopropanol and phenol also can be inhibitory to the PCR. Food ingredients such as acidic plant polysaccharides, polyphenolics, fat and protein also are inhibitory. Thus SOPs for nucleic acid purification need to ensure that these inhibitory materials are removed and the efficiency of removal needs to be demonstrated. This is best done by performing an inhibition test using either internal controls or evaluating the linearity of calibration curves.30,31 It should be noted that amplification of an endogenous positive control, if taken on its own, does not necessarily indicate the absence of PCR inhibitors.26 Equally well, examination of the A260:A230 ratio can be used as a quality metric to determine the likely presence of organic compounds or chaotropic salts (e.g. phenolate ions, EDTA and polysaccharides) that may have been co-extracted with the DNA and can inhibit the downstream PCR on that sample. If the A260:A280 or A260:A230 ratios are much lower than a value of around 2.0, then this is indicative of the presence of inhibitors. In such cases corrective action must be undertaken to remove these (e.g. by cleaning, re-precipitating and re-suspending the DNA pellet) or the DNA extraction procedure should be repeated.
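A short sketch of how these purity ratios might be screened in practice; the 1.8 cut-off used here is an illustrative choice, not a value taken from the text or from any standard:

```python
def purity_flags(a230: float, a260: float, a280: float,
                 threshold: float = 1.8) -> list[str]:
    """Flag possible co-extracted contaminants from absorbance ratios.
    Ratios well below ~2.0 suggest protein/aromatic carry-over (A260:A280)
    or salt/polysaccharide carry-over (A260:A230); the 1.8 threshold is
    an illustrative choice, not a regulatory value."""
    flags = []
    if a260 / a280 < threshold:
        flags.append("low A260:A280 - possible protein/aromatic carry-over")
    if a260 / a230 < threshold:
        flags.append("low A260:A230 - possible salt/polysaccharide carry-over")
    return flags

# Both ratios low: clean up (e.g. re-precipitate) or repeat the extraction
print(purity_flags(a230=0.9, a260=1.2, a280=0.8))
```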
Many different methods have been used for extracting and purifying DNA prior to amplification in the PCR and these have been reviewed.29 These methods fall into two main categories: variations on “home made” protocols, usually involving the use of cetyltrimethylammonium bromide (CTAB) or sodium dodecyl sulphate (SDS), and commercial kits. Within these two main categories, numerous variations on the exact type of DNA extraction exist, including solution based approaches (e.g. phenol/chloroform), solid based approaches (e.g. magnetic beads) or any combination of the two (e.g. CTAB followed by a column based clean up). The ideal method is the one that yields the greatest amount of DNA of the highest molecular weight and the lowest concentration of PCR inhibitors. Given the wide range of food matrices that are likely to be encountered this means that there is no generic method. For every new matrix examined it is essential to optimise the extraction and purification procedure and validate it.
The uncertainty associated with the DNA extraction phase has been minimised in some real-time PCR approaches for food authenticity testing. For example, for the quantitation of GMO ingredients, real-time PCR is used to quantify the amount of GM target analyte (e.g. DNA from GM soya) relative to the total amount of species-specific DNA present (e.g. DNA from the total soya content). In this manner a relative expression is derived and reported for the GMO content, and the impact of reduced DNA extraction efficiency is often minimised because the sources of measurement uncertainty tend to affect all DNA targets in a consistent manner.
Recognising the importance of the DNA extraction phase and the impact this can have upon downstream molecular biology analyses, the Department for the Environment, Food and Rural Affairs (Defra) commissioned a one day workshop in 2014 to discuss harmonised approaches to this area between UK enforcement laboratories.32
As with all aspects of producing an analytical result, it is good practice to put in place quality criteria for each phase of an analytical approach, to ensure that measurement uncertainty is minimised and that the results produced are fit for purpose. For real-time PCR, such quality criteria can include the use of an internal positive control (IPC) in the PCR, and checking the correlation coefficient (r2) and PCR efficiency of any dilution series of calibrants or test samples to ensure that these are close to the ideal expected values of 1 and 100%, respectively.
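The efficiency check can be derived from the slope of a standard curve of Ct against log10 concentration, using the standard relationship E = 10^(−1/slope) − 1 (100% corresponds to a slope of about −3.32). A sketch with hypothetical dilution-series data:

```python
import numpy as np

def standard_curve_qc(log10_conc: list[float], ct: list[float]) -> tuple[float, float]:
    """Fit Ct vs log10(concentration) and return (r_squared, efficiency %).
    Amplification efficiency E = 10^(-1/slope) - 1; a slope of ~-3.32
    corresponds to the ideal 100%."""
    slope, intercept = np.polyfit(log10_conc, ct, 1)
    predicted = slope * np.asarray(log10_conc) + intercept
    ss_res = float(np.sum((np.asarray(ct) - predicted) ** 2))
    ss_tot = float(np.sum((np.asarray(ct) - np.mean(ct)) ** 2))
    r_squared = 1 - ss_res / ss_tot
    efficiency = (10 ** (-1 / slope) - 1) * 100
    return r_squared, efficiency

# Hypothetical 10-fold dilution series of a calibrant
r2, eff = standard_curve_qc([5, 4, 3, 2, 1], [18.1, 21.4, 24.8, 28.1, 31.5])
print(f"r2 = {r2:.4f}, efficiency = {eff:.1f}%")
```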
The more usual approach for quantification is to express the measurement response of a test sample relative to a calibration curve. Methods using a calibration curve are ideal if one wishes to quantify a single substance in a sample relative to a reference material. However, in food authenticity work it usually is necessary to determine the relative proportions of one analyte versus another. In this case it is necessary to have standard curves for both analytes. The selection and development of suitable standards is made difficult by natural variation and any effects of processing. Ideally one uses a certified reference material (CRM) as the source of DNA for the standard curve but only a few such materials are available and only for GMOs.35
Some of the more recent certified reference materials are commercially available only as 100% GMO. With these, quantification can only be achieved using a “relative copy number” method. This involves making logarithmic dilutions of the reference material, with the PCR being carried out on each dilution to specifically amplify the event-specific and endogenous gene sequences. The Ct values obtained for the dilution series are plotted against arbitrary copy numbers for each dilution to generate a linear calibration curve. Test samples are assessed within the same series of PCRs and the calibration curves used to determine the “relative copy number” of each of the event-specific and endogenous gene sequences present in the test sample. It is important to note that, if the original CRM used to construct the calibration curve had its GM content certified on a mass per mass (m/m) basis, then the result from the test sample will also be expressed on a m/m basis.
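A minimal sketch of the relative copy number calculation described above; the calibration-curve parameters and Ct values are hypothetical:

```python
def copies_from_ct(ct: float, slope: float, intercept: float) -> float:
    """Invert a Ct = slope*log10(copies) + intercept calibration curve."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical curves built from serial dilutions of a 100% GM CRM
event_slope, event_intercept = -3.33, 38.0  # event-specific assay
ref_slope, ref_intercept = -3.35, 37.5      # endogenous (taxon-specific) assay

gm_copies = copies_from_ct(28.0, event_slope, event_intercept)
total_copies = copies_from_ct(24.5, ref_slope, ref_intercept)
print(f"relative GM content ~ {100 * gm_copies / total_copies:.2f}%")
```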
Plasmids have been investigated as an alternative calibration source to CRMs for use in detecting GMOs. These plasmids contain specific GM sequences and endogenous (reference) gene sequences. A comparison of genomic and plasmid-based calibrants concluded that plasmid calibration gave a closer mean estimate of the expected % GM content of samples and exhibited less variation.36 Plasmid calibrants also gave more accurate results in terms of trueness and precision when assessed using an inter-laboratory study. However, plasmids generated by gene manipulation can be unstable and it is necessary to be sure that there are no changes over time in the cloned genes. This could be a significant issue if the amounts of two species (e.g. chicken and beef) are being determined by exploiting nucleotide differences in the same gene. If both genes are on the same plasmid then deletions could occur through homologous recombination. Finally, quantification is only possible if the amplification efficiencies of DNA from test samples are the same as DNA used in construction of the standard curve. To be sure of this it is necessary to run a dilution series of the test sample.
A potential source of error when quantifying DNA is the concentration of magnesium ions in the buffer used in the amplification step. An assumption often is made that hybridization of the primers is highly specific but this may not be the case. If, as is usual, the magnesium is present at 5 mM then this permits non-specific PCR and the amount of amplicon may be over-estimated. This problem can be detected by measuring the melting temperature of the end product or analysing it by gel electrophoresis. If a probe is present (as in real-time PCR) then this gives added selectivity to help ensure that only DNA from the correct amplicon is quantified.
The only well documented example of the use of real-time PCR to quantify food adulteration, other than with GMOs, is the measurement of bread wheat (T. aestivum) in durum wheat (T. durum) used to make pasta.37 Durum wheat is tetraploid (AABB) whereas bread wheat is hexaploid (AABBDD). All three genomes carry the psr128 sequence and this shows little or no polymorphism except for the presence of a 53 base pair insertion in an intron sequence in the D-genome. Primers were selected that permit amplification of a 117 base pair D-genome-specific amplicon and a 121 base pair amplicon in the coding region of psr128. The latter is used to normalise for the amount of total amplifiable wheat DNA present in the sample.
To facilitate an understanding of the analytical variation involved in quantification, two pasta standards were prepared from flour mixtures containing 0.2% and 5.89% bread wheat in durum wheat. In an “in house” study the lower performance standard gave a value of 0.19% ± 0.04% bread wheat based on 36 replicates. The coefficient of variation was 21%, corresponding to an uncertainty at an approximate 95% confidence limit of 0.11 to 0.26%. Hence, for a single analytical determination of a material known to contain 0.19% contamination, the result could be expected to lie in the range 0.11–0.26% in 19 of every 20 analyses. The higher performance standard (value 5.89% ± 1.9% based on 12 replicates) had a coefficient of variation of 33%, corresponding to an uncertainty at an approximate 95% confidence limit of 2.02% to 9.75%. Given that these results were generated in a laboratory that fully understands all the factors affecting the PCR, they highlight the breadth of the range within which the true value may lie when real-time PCR is used for quantification in food authenticity investigations.
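For reference, the arithmetic behind these figures is straightforward; small differences from the quoted intervals reflect rounding of the published mean and standard deviation:

```python
def cv_and_interval(mean: float, sd: float, z: float = 1.96) -> tuple[float, tuple[float, float]]:
    """Coefficient of variation (%) and approximate 95% interval
    (mean +/- z*sd) for a single determination, assuming normality."""
    cv = 100 * sd / mean
    return cv, (mean - z * sd, mean + z * sd)

cv, (lo, hi) = cv_and_interval(0.19, 0.04)
print(f"CV = {cv:.0f}%, 95% interval ~ {lo:.2f} to {hi:.2f}%")  # ~21%, ~0.11-0.27%
```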
The 2013 horse meat incident provided evidence for the need to develop molecular biology approaches for the quantitative determination of important food ingredients. During the same year, Defra commissioned work at LGC to develop a real-time PCR approach for the quantitation of horse DNA.38 This approach used best measurement practice guidance in the area of real-time PCR to develop a method that would quantitate the amount of horse DNA relative to the total amount of mammalian DNA present in a sample. Sets of primers and probes were chosen that were equine specific and also targeted a universal growth differentiation factor gene. A range of gravimetrically prepared horse in beef meat mixtures, as well as horse and beef DNA mixtures, were prepared and used to demonstrate the trueness and precision associated with the quantitative estimation using the real-time PCR assay across a range of concentrations.
Given the importance and prevalence of real-time PCR as an analytical and diagnostic aid, both within and outside food authenticity testing, it is of paramount importance to ensure that results are reported to the highest level of quality and are repeatable and reproducible. The publication of the MIQE guidelines (minimum information for publication of quantitative real-time PCR experiments)39 has helped to address harmonisation in this area, providing a set of criteria to address and abide by when reporting results from real-time PCR.
The choice of DNA target for species detection and quantitation is equally important. The weight of current scientific evidence suggests that mitochondrial DNA, being present in very high abundance within a cell, provides suitable targets to facilitate sensitive detection of a species.40 However, because the number of mitochondria per cell is highly variable (between species, within species, and even between tissues within an organism), mitochondrial sequences may not be the most suitable targets for species quantitation. Nuclear DNA targets, being less abundant but generally of a stable copy number between cells, may provide a better target for species quantitation.41
Digital PCR facilitates absolute single-molecule detection without reference to a calibration curve. It achieves this through the process of limiting dilutions: the PCR is split into thousands of individual reactions and, by counting the number of positive reactions relative to negative ones, an accurate estimate of the starting number of molecules can be made. Because a calibration curve is no longer a necessity in digital PCR, any matrix differences between calibrant and test sample that might cause differential PCR amplification are mitigated. As digital PCR allows absolute single-molecule detection, it also has the advantage of producing results which are more traceable to the SI unit, instead of providing a result that is relative to a calibrant or expressed as a relative percentage. Additionally, because of the very high level of sample replication afforded, digital PCR can produce results with very tight precision. A number of digital PCR instruments (including chamber- and droplet-based formats) are currently on the market, providing evidence of the importance of this new technology in quantitative molecular biology approaches. Burns and colleagues42 pioneered early work applying digital PCR to food authenticity testing and demonstrated the applicability of the technique for estimating absolute limits of detection and quantifying the plasmid copy number associated with GMO analysis. In 2011, Sanders et al.43 examined some of the underlying factors that influence accurate measurements in a digital PCR instrument, and provided guidance on important issues to consider when designing digital PCR experiments. Corbisier et al.44 examined the suitability of this methodology for the absolute quantification of genetically modified maize and found the results to be identical to those obtained by real-time PCR. The major advantage of the digital PCR method was that it permitted accurate measurement without the need for a reference calibrator.
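Digital PCR estimates conventionally apply a Poisson correction for partitions that receive more than one target molecule; the sketch below illustrates this standard calculation, with hypothetical partition counts and volume:

```python
import math

def dpcr_copies_per_ul(positive: int, total: int, partition_volume_ul: float) -> float:
    """Estimate target concentration from digital PCR partition counts.
    The Poisson correction lambda = -ln(1 - p), where p is the fraction
    of positive partitions, accounts for partitions that received more
    than one target molecule."""
    p = positive / total
    lam = -math.log(1 - p)            # mean copies per partition
    return lam / partition_volume_ul  # copies per microlitre

# Hypothetical droplet run: 8000 of 20000 droplets positive, ~0.85 nL each
print(f"{dpcr_copies_per_ul(8000, 20000, 0.00085):.0f} copies/uL")  # ~601
```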
The growth in interest of digital PCR, both as an aid in metrological traceability and as a real-life application across a range of sectors inclusive of food testing, has meant that a plethora of data is being produced. This has led to the establishment of a set of guidelines for the production and publication of digital PCR data, as an aid to helping harmonise the approach and provide meaningful results which can be readily interpreted.45
There are a number of isothermal instruments currently available based on differing technologies, such as nucleic acid sequence-based amplification, single primer isothermal amplification, strand displacement amplification, rolling circle amplification, loop-mediated isothermal amplification (LAMP) and even whole genome amplification. Whilst there has been increased interest in the development and application of isothermal technologies in recent years, the process itself is not without its own limitations. Background noise can often interfere with an isothermal amplification, and non-specific priming has also been an issue. Agreement on a harmonised approach for regulating and inferring the starting point of an isothermal reaction would also be beneficial. Production of a set of harmonised guidelines for data produced by isothermal technologies could help towards standardisation and the expression of results in this interesting area, as well as fuelling debate about possible quantitative applications in the future.46
Reports in the published literature provide evidence for the application of isothermal technologies to speciation and food analysis. The application of LAMP to meat species detection, with potential quantitative capabilities, has previously been described,47 as has its application to the detection of horse meat in raw and processed meat products.48 In 2010 a LAMP-based approach for the detection of pork, chicken and beef was published,49 and isothermal approaches have also been described for the identification of mushroom species.50
There are a number of publications describing the application of isothermal technologies to the detection of genetically modified organisms.48,51,52
Whilst still considered a new and emerging technology, the current state of the art associated with isothermal approaches means that results produced from such technologies are still largely qualitative in nature, and their quantitative potential has yet to be fully realised.
Analytical chemistry is a well-established discipline, but analytical molecular biology is still at an early stage of development. Although the situation is rapidly improving, only a limited range of laboratories have the requisite skills to undertake quantification using real-time PCR, and most of these have applied the technique only to the determination of GM material in relatively simple matrices. An alternative and much simpler analytical platform is laboratory-on-a-chip capillary electrophoresis (LOC), which has been used successfully by analytical chemists to identify a range of food materials.4,55 LOC analysis is based on end-point PCR and, as noted above, will have a higher uncertainty than methods that use real-time PCR if used for quantitative purposes. However, the LOC approach has been successfully applied as a qualitative tool for the detection of adulteration across a range of matrices, including fish speciation, GMO identification, durum wheat determination, basmati rice identification and fruit juice adulteration. A number of protocols for food authenticity testing using the LOC approach have been published by the Food Standards Agency.56 However, there is another consideration and this relates to heteroduplex formation.57
The objective in many investigations of food authenticity is to determine the amount of an undeclared ingredient present in a sample relative to a declared ingredient. If the two ingredients are similar then the PCR may amplify DNA targets that have a high degree of homology. The consequence of this is that when the PCR plateau phase is reached the predominant product will be a heteroduplex. The expected proportions of the products can be calculated from the ratio p2 : 2pq : q2, where p and q represent the proportions of the authentic and adulterant sequences, p2 and q2 those of the respective homoduplexes, and 2pq that of the heteroduplexes. It should be noted that this ratio is valid only if: the amplification efficiencies of the two targets are equal; the two sources of DNA are haploid, such as the mitochondrial or chloroplast DNA markers frequently used in PCR-based tests for authenticity; and the intercalator dye used for quantification binds to heteroduplex and homoduplex molecules with the same efficiency.
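A worked illustration of this partition, valid only under the assumptions listed above:

```python
def duplex_fractions(p: float) -> tuple[float, float, float]:
    """Expected duplex fractions at the PCR plateau for a two-template mix:
    (authentic homoduplex p^2, heteroduplexes 2pq, adulterant homoduplex q^2).
    Assumes equal amplification efficiencies, haploid markers and equal
    dye binding, as stated in the text."""
    q = 1 - p
    return p * p, 2 * p * q, q * q

# A 90:10 authentic:adulterant mix: the heteroduplex fraction (0.18)
# greatly exceeds the adulterant homoduplex signal (0.01).
print(duplex_fractions(0.9))  # (0.81, 0.18, 0.01)
```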
An alternative method for quantifying adulterants using end-point PCR is Pyrosequencing™. This is a sequencing-by-synthesis method in which the results are presented as a series of peaks whose heights correspond to the number of nucleotides incorporated. The close correlation between nucleotide incorporation and peak height can be used to determine how many of the template molecules have incorporated the added nucleotide, thereby allowing allele (SNP) frequency determination in a mixed sample.58,59 Ortola-Vidal et al.60 used this method to detect and quantify “undeclared” fruit in fruit yogurts. The limit of detection of the assay was 2% w/w rhubarb yoghurt in raspberry yoghurt and the limit of quantification was 5% w/w. As with all PCR-based methods it is important to have equal amplification efficiencies for the different alleles.
This method of quantifying alleles using pyrosequencing has not been fully validated but it is very attractive for a number of reasons. First, reactions are internally controlled using the authentic species as control and allow the simultaneous detection of multiple adulterants. Second, the method is definitive since it depends on sequence determination rather than indirect characterisation using probes. Finally, the method is quick and simple with minimal operator intervention.
Advances in modern technologies now mean that whole genome sequencing is a reality, and this may help facilitate species identification in food samples based on Next Generation Sequencing (NGS). However, at the current time, there are only a limited number of papers describing the use of NGS for food authenticity testing, and the current high costs and complex workflow associated with NGS precludes its use for quantitative ingredient determination as part of routine food authenticity testing.
Enzyme-linked immunosorbent assays (ELISA) are also widely used for protein detection. However, generating antibodies with the ability to discriminate target analytes from closely related species can be extremely difficult, and this is the major limitation on the use of ELISA in food authenticity applications. ELISA approaches can also suffer from interference from other ingredients. Since ELISA is considered an immunological technique rather than a molecular biology approach it is not discussed further in this review.
The invention of SDS-polyacrylamide gel electrophoresis (SDS-PAGE) in the 1970s and, later, the development of two-dimensional PAGE (2-DE) were major breakthroughs in the analysis of proteins, allowing many individual proteins to be separated and analysed in a single experiment. The use of mass spectrometry (MS), following the invention of electrospray ionisation (ESI) and matrix-assisted laser desorption ionisation (MALDI) in the 1980s, allowed tryptic peptides and small proteins to be studied, as reviewed by Domon and Aebersold.63 However, it became apparent that 2-DE had limitations with respect to the range of relative abundance and the solubility of the proteins under investigation. These problems can be overcome by coupling liquid chromatography (LC) with tandem mass spectrometry (MS/MS), using so-called multidimensional protein identification technology (MudPIT). The use of cation exchange and reverse-phase LC, linked to MS/MS, has greatly extended the coverage of the proteome, including quantitative measurements.64
There are several reviews on the principles and applications of quantitative proteomics using 2-DE or LC-MS/MS,65–68 whilst a comprehensive text on all aspects of proteomics in foods has recently been published.69
A number of methods have been developed for labelling proteins or peptides with stable isotopes. In the context of the analysis of complex matrices these include a number of chemical methods, e.g. isotope-coded affinity tag (ICAT), isotope-coded protein labelling (ICPL), and isobaric tag for relative and absolute quantification (iTRAQ). For ICAT and ICPL the tagging reaction occurs before proteolytic digestion, whereas with iTRAQ it is the peptides that are labelled. When the identity of the protein to be quantified is known, as is often the case in issues of food authenticity, the ideal method is to use isotopically labelled synthetic reference peptides. In this absolute quantification (AQUA) method the reference peptide is synthesised with one of its amino acids labelled with 13C or 15N. Additionally, there are “label-free” approaches to quantitation. Two protocols have been reported: one is based on the frequency of identification, known as spectral counting,70 and the other uses peak intensity, in which the peak areas of peptides correlate with the amount of the parent protein from which they were derived.71 A recent application of the latter has been the assessment of GM tomato fruit,72 whilst Gong and Wang73 have reviewed the use of proteomics to identify unintended effects in GM crops.
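A minimal sketch of the AQUA calculation: the native peptide is quantified from the light/heavy peak-area ratio and the known amount of spiked, isotopically labelled reference peptide. All numbers here are hypothetical:

```python
def aqua_amount_fmol(area_native: float, area_heavy: float,
                     spiked_heavy_fmol: float) -> float:
    """AQUA-style absolute quantification: the native (light) peptide
    amount is the light/heavy peak-area ratio multiplied by the known
    amount of the 13C/15N-labelled reference peptide spiked in."""
    return (area_native / area_heavy) * spiked_heavy_fmol

# Hypothetical extracted-ion chromatogram areas with a 100 fmol heavy spike
print(f"{aqua_amount_fmol(4.2e6, 6.0e6, 100):.0f} fmol native peptide")  # 70
```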
Ocaña et al.74 found another source of variability associated with sample handling. They extracted EPSPS from soya containing 0.5, 0.9, 2 and 5% GM material and determined the signal ratios for the target and labelled peptides using the AQUA method. Although the area ratios showed a good linear relationship with the amount of transgenic material present, the correlation coefficient indicated some divergence from a perfect linear correlation. Furthermore, the coefficients of variation for three replicate analyses of the different samples varied from 16–29%. When the EPSPS was extracted from the 5% GM material and then diluted to 0.5, 0.9 and 2% before analysis there was a strong correlation (R2 = 0.9999) between the signal area ratios and the percentage of transgenic material. In this case, the coefficients of variation for four replicate analyses were 3% (0.9, 2 and 5% GM) and 14% (0.5% GM). These improved results are attributable to the elimination of potential variability from sample handling during extraction, precipitation and fractionation. Other groups have reported similar levels of variation from this source.74
The peptide that is used as the analyte must be unique to the protein of interest. If it is not, then over-estimation will occur. The selected peptide also must be efficiently liberated by digestion of the protein and must be stable in solution during the whole process. It also must chromatograph well and be easily detectable by MS. Finally, the selected peptide must withstand modification by any industrial processes used in the manufacture of the test sample.
The efficiency of digestion of the target protein by the selected protease is critically important, as incomplete digestion will lead to underestimation of the analyte. Usually the target protein will have multiple cleavage sites for the protease, and some will be cleaved more readily than others. In an ideal situation the peptide selected as the analyte will be flanked by readily cleavable sites, and this should be tested using purified protein of known provenance. In addition, when test samples are subjected to MS analysis a search should be made for larger peptides that incorporate the target sequence, as these indicate missed cleavages and make accurate quantification very difficult. In the case of the AQUA method this is not a problem. With the iTRAQ method all the peptides are labelled, and one or more peptides that are always produced need to be selected, even before complete cleavage can be ensured. In the case of the EPSPS study of Ocaña et al.,74 only one peptide (and its isotopomer) was consistently found.
A key factor affecting the accuracy and dynamic range of quantification is the choice of mass spectrometer. With some instruments the definition of very low and very strong signals can be problematic. Low-intensity spectra result in higher measurement uncertainty because of poor ion statistics. Saturation is more of a problem with quadrupole TOF instruments than with ion traps but, if it occurs, it will lead to erroneous quantification. The recent introduction of high-resolution/high-mass-accuracy instruments should facilitate accurate quantification, because the increased instrument performance permits the exact discrimination of peptide isotope clusters from interfering signals caused by nearly isobaric peptides. Interference can also be reduced by improving the purification of the target protein prior to digestion and LC-MS/MS analysis, but this can lead to increased losses and hence underestimation.
From their work on EPSPS, Ocaña et al.74 concluded that both the iTRAQ and AQUA methods have the potential to determine whether the presence of GM material is above the 0.9% limit set by the European Union. However, iTRAQ requires much more experimental work and data analysis than AQUA, and hence AQUA is the preferred approach when only a single protein is being quantified. Even so, the data obtained (Table 2) indicate the limitations of the method. Some of the discrepancies observed will be due to differential sample handling and processing, particularly as the reference standard is added at a late stage in the workflow.
Table 2 Theoretical versus observed GM ratios obtained using the AQUA method74

| GM ratio | Theoretical ratio | Observed ratio | % Inaccuracy |
|---|---|---|---|
| 5/0.9 | 5.56 | 4.73 | −15 |
| 2/0.9 | 2.22 | 2.41 | 9 |
| 0.5/0.9 | 0.56 | 0.40 | −28 |
As noted earlier, the development of quantitative proteomics is at a much earlier stage than that of quantitative PCR, and many issues affecting the measurement uncertainty of a reported result remain to be addressed. Whilst the results shown in Table 2 are encouraging, it needs to be borne in mind that they were obtained with a single food component (soya). If the methods are transferred to complex and processed foods then the problems to be overcome will be considerably greater. Highly processed foods provide a challengingly complex matrix from which to extract the analyte, and further work will show whether the issues associated with the analysis of nucleic acids from such matrices may in future be resolved using proteomics approaches.
| Topic | Issue | Recommendation |
|---|---|---|
| Ensuring food integrity in the supply chain | Improving laboratory testing capacity and capability to ensure a harmonised approach for testing for food authenticity | General recommendations outlined in the: |
| | | • HM Government Elliott Review into the Integrity and Assurance of Food Supply Networks2 |
| | | • Defra's AMWG: Response to Elliott review on “integrity and assurance of food supply networks” – recommendation 43 |
| Method validation and interpretation of results | When evidence for fraudulent activity is uncovered using a method that has not undergone validation | Development of validated methods and agreed standards |
| | | Agreement on values and criteria for minimum performance characteristics of a method |
| Procedures for the estimation of measurement uncertainty | Measurement uncertainty estimates may not be consistently reported and may be significant underestimates | Need for harmonised guidance in estimating and reporting measurement uncertainty |
| | | Use of SOPs |
| | | Servicing and calibration of analytical instruments |
| | | Choice of specific consumables and reference materials |
| Sampling | Uncertainty from sampling and sample preparation | Requirement to develop sampling protocols tailored to specific analytical areas (e.g. GMO analysis) |
| | | Samples chosen must be appropriate for the nature and complexity of the product |
| Nucleic acid extraction and purification | Ensuring integrity and purity of the DNA and efficiency of DNA extraction | Use of SOPs |
| | | Determine DNA purity using absorbances at 230, 260 and 280 nm wavelengths |
| | | Check degradation by gel/capillary electrophoresis |
| | | Relative quantitation of a sample (relative to both a target-specific and a normalising reference gene) can reduce the impact of poor DNA extraction efficiency |
| The polymerase chain reaction (PCR) and real-time PCR | Confidence in results and accurate quantitation | Use of SOPs |
| | | Use of suitable reference materials as controls and calibrants |
| | | Harmonisation regarding reporting of results (e.g. MIQE guidelines39) |
| | | Choice of DNA target (e.g. mitochondrial vs. chromosomal DNA) |
| | | Correlation coefficient (r2) and PCR efficiency associated with calibrant and test sample |
| | | Optimisation of primer and probe design |
| | | Use of an internal positive control (IPC) |
| New and emerging technologies (e.g. digital PCR, NGS, isothermal approaches) | Technologies yet to firmly establish themselves for quantitative analysis of foods | Establishment of a set of harmonised guidelines for the production and publication of results (e.g. dMIQE guidelines45) |
| Quantitative proteomics | Developing the quantitative potential of mass spectrometry for food analysis | Use of an identical peptide labelled isotopically to be used as a calibrant |
| | | Production of harmonised guidance for: extraction protocol; target peptide selection; digestion stage; design of the mass spectrometry analysis; choice of mass spectrometer |
Methods based on quantitative PCR that have the necessary precision and trueness for use in detection of food fraud have been developed but only for use in relatively unprocessed foods, e.g. GMOs in flour, bread wheat in pasta, non-Basmati varieties in Basmati rice and raw meat samples. Attempts to extend quantitative PCR to more processed food have met with additional challenges. Pyrosequencing might be a viable alternative to quantitative PCR for the evaluation of complex and highly processed foods but much more work on this method is required. Quantitative proteomics is at an early stage of development and its full potential remains unknown but it could provide an alternative to PCR for the examination of unprocessed ingredients.
There is an increasing requirement to develop approaches for the quantitative determination of food ingredients, to help detect food fraud and to ensure the traceability of materials in the food chain. A number of molecular biology approaches, for example digital PCR, show good potential for the sensitive, specific and traceable detection of target molecules. Given the rapid pace at which these methods are being developed, it is equally important to ensure that they are fully validated and that the measurement uncertainty associated with a result is correctly characterised. This will generate objective data providing evidence of the fitness for purpose of these methods, and will help towards the harmonisation of molecular biology results and the interpretation of data.
This journal is © The Royal Society of Chemistry 2016