Quality Assurance in ecological monitoring—towards a unifying perspective

Marco Ferretti
TerraData environmetrics at the Department of Environmental Sciences, University of Siena, Italy

Received 9th February 2009, Accepted 17th February 2009

Marco Ferretti is Technical Director at TerraData environmetrics, a spin-off company of the University of Siena, and a lecturer at the same University. He obtained his degree in Forest Sciences from the University of Firenze and received his PhD from the University of Siena. Dr Ferretti has done consultancy work for several Italian and international agencies and has held previous assignments at IUFRO (the International Union of Forest Research Organizations) and UN-ECE ICP-Forests (International Co-operative Programme on Assessment and Monitoring of Air Pollution Effects on Forests); for the latter he now serves as chairman of the Quality Assurance Committee. Dr Ferretti has over 200 scientific publications and presentations and has contributed to over 100 national and international reports in the areas of forest and environmental monitoring and quality assurance. His main scientific interests are the impact of environmental stressors on forests and ensuring that monitoring is properly designed, implemented and reported.


Ecological monitoring and environmental management

Assessment and monitoring are the basic actions that ecologists, environmental scientists and environmental protection agencies undertake to document the status, changes and trends of natural resources: the atmosphere and marine and terrestrial ecosystems have been, and are being, monitored by a variety of programmes.1 However, as acknowledged since the early work of Spellerberg2 and in the more recent book by Elzinga et al.,3 monitoring is not a mere data collection exercise: it is always related to specific problems and intended “…to evaluate changes in condition and progress toward meeting a management objective”.3 The message is that monitoring data should be conclusive, i.e. they should allow clear answers about the effect of the management action undertaken (e.g. reduction of pollutant emissions and of pollutant concentrations in the air; management for species conservation and population size) and about the attribute of concern (e.g. the amount of carbon stocks in forests and tree biomass estimates). Even in the case of long-term ecological monitoring, where the only objective seems to be the collection of baseline data to allow change and trend detection, data should be “good” enough (= reliable, i.e. unbiased, precise, accurate, consistent in space and time) to allow safe (= with known confidence) statements about baseline conditions and trends. Therefore, the question is: to what extent are ecological monitoring data reliable? The issue is of relevance for politicians, natural resource managers and the general public: if the monitoring is not able to provide clear answers, then it is not possible to evaluate the effect of the management action, and the money spent on both management and monitoring is wasted.3 Worse still, as Peterman4 wrote, “the results of inadequate monitoring can be both misleading and dangerous not only because of their inability to detect ecologically significant changes, but also because they create the illusion that something useful has been done.”

While an unprecedented effort in ecological monitoring has been expended since the 1980s, the degree to which the resulting data have been conclusive in the context of environmental management is controversial.5 Although this may be due to several reasons, a possible explanation needs to consider the actual ability of monitoring results to meet monitoring objectives. Several authors have provided convincing evidence that monitoring results can be seriously affected by the lack of a sound design and statistical concept5,6 and by poor data quality.7,8 Prompted by this concern, the Workshop “Quality Assurance in environmental data: to what extent are environmental monitoring data reliable?” was organized in Siena, Italy, on 7th March 2008. Although not all possible fields of ecological monitoring were covered (for example, there were no presentations on birds, mammals or the marine environment), the workshop was intended to provide concrete examples of the problems that can arise with monitoring data, from design to data management, and to suggest a working perspective for improving the value and use of monitoring data. Several papers arising from that Workshop are presented in this special issue.

Error sources in ecological monitoring

Moving towards improved monitoring requires recognizing that several types of error may occur in ecological assessment and monitoring programmes:9

(i) Sampling errors. These originate from an unsuited sampling design that does not represent the population of interest well enough. As reported by Fattorini,10 sampling errors can be estimated and documented. However, the control and management of sampling errors depend on the inferential approach (design-based vs. model-based) and/or the sampling concept adopted (probabilistic vs. judgemental).11 When, for example, haphazard sampling is misused in place of probabilistic sampling, not only can the sampling error not be controlled, it may also remain unknown. Yet probabilistic sampling designs are often disregarded in ecological monitoring programmes, either because they are perceived as impractical or because they are considered a sort of exercise in “statistical philosophy” with little connection to the “real world” of monitoring. This is unfortunate, as probabilistic sampling is essential to obtain assumption-free estimates of population parameters (such as totals, means and variances) and for change detection3,6,12 (a minimal design-based estimation example is sketched below). The reader is referred to the fundamental textbook by Cochran13 for the basics of sampling. In this special issue, questions related to sampling are presented by Baldaccini et al. for macroinvertebrate monitoring in freshwaters and by Gottardini et al. for estimates of pollen quantity and diversity in aerobiological monitoring. In the former, the authors report on an exercise based on judgemental sampling and concentrate on sampling effort in relation to species diversity, abundance and the objectivity of the investigation. In the latter, Gottardini et al. evaluate the bias and error arising in pollen counts when the official Italian standard technique is applied, and discuss the assumptions inherent in the standard technique in relation to their impact on pollen counts, both in terms of the number of grains and the number of species.
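
As a minimal illustration of design-based estimation, the Python sketch below computes the classical simple random sampling estimate of a population mean, its standard error and a 95% confidence interval, following the formulas in Cochran.13 The “forest plot biomass” population, the sample size and all numbers are hypothetical, invented purely for this example.

```python
import math
import random

def srs_estimate(population, n, z=1.96, seed=1):
    """Design-based estimate of the population mean under simple random
    sampling without replacement (classical textbook formulas)."""
    rng = random.Random(seed)
    sample = rng.sample(population, n)
    N = len(population)
    mean = sum(sample) / n
    s2 = sum((y - mean) ** 2 for y in sample) / (n - 1)   # sample variance
    se = math.sqrt((1 - n / N) * s2 / n)  # SE with finite population correction
    return mean, (mean - z * se, mean + z * se)

# Hypothetical population: biomass (t/ha) on 1000 forest plots.
rng = random.Random(42)
plots = [rng.gauss(120.0, 30.0) for _ in range(1000)]
est, ci = srs_estimate(plots, n=50)
print(f"estimated mean biomass: {est:.1f} t/ha (95% CI {ci[0]:.1f}-{ci[1]:.1f})")
```

Under judgemental (haphazard) site selection no analogous design-based standard error can be computed, which is precisely why the sampling error then remains unknown.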

(ii) Assessment or observer errors. These include measurement and classification errors and are rooted in how Standard Operating Procedures (SOPs) for field and laboratory measurements are prepared and applied, and in how personnel are trained. Measurement errors can be random or one-sided (biased) and can have a serious impact on survey estimators14 and on change/trend detection.7 They can be controlled only by adequate SOPs, proper training, timely audits and field checks.15 In this special issue, observer and measurement errors are discussed in relation to a variety of issues: diversity of vascular plants and lichens (Allegrini et al.; Bacaro et al.; Giordani et al.), pollen counting in aerobiology (Berti et al.), forest health assessment and forest inventory (Bussotti et al.; Gasparini et al.), biomonitoring of air pollution (Francini et al.) and the chemistry of atmospheric deposition (Marchetto et al.). Despite covering different environmental issues, targets and monitoring techniques (e.g., ground surveys, surveys based on aerial imagery, laboratory analyses), all the papers referred to above agree on several points: (a) SOPs are essential to promote consistency of measurements; (b) continuous training and control are necessary to reduce variability between observers/laboratories; (c) it is important to define data quality objectives in order to have a formal assessment and documentation of the achieved data quality; (d) the use of internal quality tools—e.g., blanks and control charts—promotes the achievement of the expected data quality objectives (a control chart example is sketched below).
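
As an illustration of point (d), the sketch below implements a basic Shewhart-type control chart check: new measurements of a control sample are compared against warning (±2s) and action (±3s) limits derived from an initial reference series. The sulphate values and tolerances are invented for this example; in practice the limits would follow from the stated measurement quality objectives.

```python
import statistics

def control_chart_flags(reference_runs, new_values):
    """Flag control-sample measurements falling outside warning (2 sigma)
    or action (3 sigma) limits derived from an initial reference series."""
    centre = statistics.mean(reference_runs)
    s = statistics.stdev(reference_runs)
    flags = []
    for v in new_values:
        dev = abs(v - centre)
        status = "ok" if dev <= 2 * s else ("warning" if dev <= 3 * s else "action")
        flags.append((v, status))
    return flags

# Hypothetical example: repeated sulphate analyses (mg/L) of a control solution.
baseline = [5.02, 4.98, 5.05, 4.95, 5.01, 5.03, 4.97, 5.00]
for value, status in control_chart_flags(baseline, [5.04, 5.08, 4.78]):
    print(f"{value:.2f} mg/L -> {status}")
```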

(iii) Prediction errors caused by models. These errors occur because (a) measurement errors can propagate into the model output; (b) models may be applied to data ranges not covered when the model was constructed; and (c) model assumptions may or may not be met (point (a) is sketched below). Models are also necessary when—for example—an unsuited sampling design makes it difficult to use the data and to assess the value of the information arising from the monitoring. Model-related issues are presented by Gorelli et al. in relation to the validation of results obtained from an ozone biomonitoring network. They carried out a formal spatial analysis of an ozone biomonitoring dataset based on a preferential selection of sampling sites. The analysis permitted the formal identification of problems in the monitoring network (site distribution over the study area), the quantification and control of the estimation error at unsampled points, and therefore the evaluation of the strength of the assumptions needed to endorse results about spatial and temporal comparisons.
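
To illustrate point (a) above, the sketch below propagates a random diameter measurement error through a hypothetical allometric biomass model by Monte Carlo simulation. The model form B = a·d^b and its coefficients are assumptions made purely for this example and are not taken from any of the cited papers.

```python
import math
import random

def propagate(d_measured, sd_error, a=0.05, b=2.4, n_sim=10_000, seed=7):
    """Monte Carlo propagation of a random diameter measurement error (cm)
    through a hypothetical allometric biomass model B = a * d**b."""
    rng = random.Random(seed)
    preds = [a * max(rng.gauss(d_measured, sd_error), 0.0) ** b
             for _ in range(n_sim)]
    mean = sum(preds) / n_sim
    sd = math.sqrt(sum((p - mean) ** 2 for p in preds) / (n_sim - 1))
    return mean, sd

naive = 0.05 * 30 ** 2.4        # prediction ignoring measurement error
mean, sd = propagate(30, 1.5)   # 30 cm diameter, +/-1.5 cm random error
print(f"naive: {naive:.1f} kg; with error propagation: {mean:.1f} +/- {sd:.1f} kg")
```

Because the model is non-linear, even a purely random (unbiased) measurement error shifts the mean prediction: one concrete way in which measurement errors propagate into model output.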

(iv) Non-statistical errors. These include a variety of events: registration errors, errors in data entry and data transfer, and errors in calculations are just examples. Such errors may occur at every stage, can be serious and cannot be fully controlled. Durrant Houston and Hiederer report a case study on data validation and management from an international forest monitoring programme. They concentrate on the various problems data managers have to face (unclear submission rules, unsuited formats, ambiguous relations between tables, missing values, implausible values, inconsistent values) and on the relevant QA procedures adopted, and emphasise the need for full integration of data management issues when defining the procedures for field data collection (typical validation checks are sketched below).
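
The sketch below illustrates the kind of automated completeness, plausibility and consistency checks underlying such validation procedures. The field names, plausible ranges and the cross-field rule are hypothetical and do not reproduce the programme's actual rules.

```python
def validate_record(rec, ranges):
    """Basic completeness, plausibility and consistency checks of the kind
    applied during data validation (hypothetical fields and rules)."""
    issues = []
    for field, (lo, hi) in ranges.items():
        value = rec.get(field)
        if value is None:
            issues.append(f"{field}: missing value")
        elif not lo <= value <= hi:
            issues.append(f"{field}: implausible value {value} (expected {lo}-{hi})")
    # Cross-field consistency: a strongly acidic deposition sample should
    # also show a clearly elevated conductivity.
    ph, cond = rec.get("ph"), rec.get("conductivity_uS")
    if ph is not None and cond is not None and ph < 3.5 and cond < 50:
        issues.append("inconsistent: very low pH with very low conductivity")
    return issues

ranges = {"ph": (2.0, 9.0), "conductivity_uS": (1.0, 500.0)}
records = [
    {"ph": 5.6, "conductivity_uS": 20.0},   # plausible
    {"ph": 11.2},                           # implausible pH, missing conductivity
    {"ph": 2.5, "conductivity_uS": 10.0},   # internally inconsistent
]
for i, rec in enumerate(records):
    for issue in validate_record(rec, ranges):
        print(f"record {i}: {issue}")
```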

Another type of error is the definition error. Definition errors are particularly important in large-scale monitoring programmes involving experts with different backgrounds across different countries, or when data originating from different programmes are combined to derive statistics for the integrated dataset. In these cases, different definitions adopted to identify the same object (and vice versa) may lead to very inconsistent results.9 A particular aspect of the problems caused by differences in definition relates to taxonomy, and taxonomy-related problems are discussed by Bacaro et al. in relation to plant species diversity assessment and monitoring. Apart from observer skill, “taxonomic inflation” is also a source of inconsistency between datasets and may pose a severe threat to the long-term and large-scale comparability of biodiversity data.

Quality Assurance: a unifying perspective to promote the value of ecological monitoring data

As is obvious from the above, errors are ubiquitous, and all the error types described may actually occur in practice. Errors cannot be eliminated, but they can be controlled. In most cases, problems originate at the design stage and propagate throughout the monitoring programme.5 Several reasons exist to explain the frequently reported weakness of monitoring designs: the traditionally difficult communication between statisticians and field ecologists, the inadequate coverage of monitoring design in University courses, the limited availability of digested (ready-to-use) scientific information for the agencies that commission the work, and the lack of peer review of tenders.5,11 This situation implies that, although much work has been done on data quality control, a broader approach is necessary to ensure that all steps of monitoring are properly considered. A Quality Assurance (QA) perspective can be very useful in this respect.16 Such a perspective is typically cross-sectorial, embraces all the steps of the monitoring and can be applied across monitoring programmes. This perspective has been adopted in the US for some time, and the series of US-EPA quality documents (http://www.epa.gov/quality) represents perhaps the most complete reference for those involved in ecological monitoring. QA is viewed as “an integrated system of management activities involving planning, implementation, documentation, assessment, reporting, and quality improvement to ensure that a process, item, or service is of the type and quality needed and expected by the customer”.17 Among its many advantages, the most important benefit of a QA-based perspective is that it provides guidance on, forces monitoring designers to respond to, and provides documentation of, critical issues such as:

(i) Identification of the right question the monitoring should answer. This has to be done in close co-operation with the end users of the monitoring results and will facilitate the subsequent phases.

(ii) Definition of unambiguous objectives. Monitoring objectives should be explicit in order to allow conclusive statements about the success of the monitoring and about the problem being investigated. If the objective is a population estimate, the required precision level (in terms of the width of the confidence interval and the probability level) should be reported. If the objective is detecting change, the time frame for change detection, the minimum detectable change and the acceptable risks of Type I and Type II errors should be explicit3,5 (a sample-size example is sketched below). Once the “right question” is known, the objectives can be defined and graded according to the importance of the question to be answered and the available resources.
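
As an example of how such explicit objectives translate into design decisions, the sketch below uses the standard normal-approximation sample-size formula for a two-sample comparison, n = 2((z_alpha + z_beta)·sigma/delta)^2, where z_alpha and z_beta are the standard normal quantiles corresponding to the chosen significance level and power. All the numbers are hypothetical.

```python
import math
from statistics import NormalDist

def units_per_survey(sigma, delta, alpha=0.05, power=0.80):
    """Normal-approximation sample size per survey occasion to detect a
    change of size delta between two independent surveys, given the
    between-unit standard deviation sigma, the acceptable Type I error
    rate alpha and the desired power (1 minus the Type II error rate)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided test
    z_beta = z.inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Hypothetical objective: detect a change of 5 percentage points in mean
# plant cover (between-plot standard deviation 12) at alpha = 0.05 with
# 80% power.
print(units_per_survey(sigma=12.0, delta=5.0))   # -> 91 plots per survey
```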

(iii) Selection of proper attributes to be measured. Attributes should be selected according to their nature, ease of measurement, known performance and responsiveness to the objective being targeted.18

(iv) Identification of the appropriate sampling strategy. A formal sampling design allows the control and management of sampling errors, thus helping in the achievement of the monitoring objectives. It is worth noting that, without a probabilistic approach, there is no guarantee that the results obtained represent the population characteristics. In addition, all the traditional data analysis techniques assume that data originate from probabilistic sampling.3,17

(v) Preparation of adequate Standard Operating Procedures (SOPs). This includes a comprehensive description of the data collection activity and covers field as well as laboratory and office activities. Definitions of terms, methods of measurement, ranges of application, equipment and reporting units, Data Quality Objectives (DQOs—see below), field forms, and hardware and software descriptions are typically covered in the SOPs.

(vi) Formal identification of the data quality objectives (DQOs), in order to document the degree to which measurements fall within an explicit acceptable range of variation. In general, DQOs consist of an expressed level of accuracy for each measurement, called Measurement Quality Objectives (MQOs), and of a compliance threshold, termed Data Quality Limits (DQLs)19 (a compliance check is sketched below).
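
A minimal sketch of how achieved data quality can be checked against such objectives follows: the MQO is taken as a maximum acceptable deviation between field-crew and control-team assessments, and the DQL as the minimum fraction of observations that must meet the MQO. The defoliation scores and thresholds are invented for this example.

```python
def dqo_compliance(reference, remeasured, mqo_tolerance, dql_fraction):
    """Check achieved data quality against a Measurement Quality Objective
    (MQO: maximum acceptable deviation per observation) and a Data Quality
    Limit (DQL: minimum fraction of observations meeting the MQO)."""
    within = sum(abs(r - m) <= mqo_tolerance
                 for r, m in zip(reference, remeasured))
    achieved = within / len(reference)
    return achieved, achieved >= dql_fraction

# Hypothetical example: crown defoliation (%) scored on the same trees by
# a control team (reference) and the regular field crew; MQO = +/-10
# percentage points, DQL = 90% of observations within the MQO.
ref = [15, 20, 35, 40, 5, 25, 60, 10]
crew = [20, 25, 30, 55, 5, 30, 50, 10]
achieved, ok = dqo_compliance(ref, crew, mqo_tolerance=10, dql_fraction=0.90)
print(f"{achieved:.0%} within MQO -> {'meets' if ok else 'fails'} DQL")
```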

(vii) Identification of safe rules for data submission, checking, validation, storage and management. This should be considered throughout the design and planning phases.

A formal QA plan covering all the QA activities involved in the acquisition of ecological data (from direct measurements as well as from other sources) is important and very useful in this context.17 At the QA Workshop in Siena there was broad agreement about the need for a comprehensive QA approach, and it was suggested that an overall QA-based perspective, with the relevant QA plan, be considered for adoption by monitoring programmes in Italy.20 Ideally, such a plan should be required by funding agencies before a grant is assigned to a monitoring programme.

Summing-up

While everybody agrees that effective decision making in environmental management must be based on reliable and defensible information, the extent to which these two characteristics have been achieved by monitoring data is questionable, and their achievement needs to be promoted. Reliability is rooted in the overall quality of the input data and of the data analysis, while defensibility rests not only on data quality, but also on the possibility of documenting it. Therefore, technical decisions made before, during and after the project need to be supported by adequate documentation. In most cases, inconclusive monitoring results stem from weaknesses in monitoring design, due to a substantial underestimation of the importance of this step in the overall monitoring concept. Together with other approaches (e.g. improving communication between ecologists and statisticians, extending the coverage of University courses, strengthening the preparation of practitioners, …), a QA-based perspective may help to improve the reliability of data and the defensibility of decision making, and to provide documentation of both. It cannot ensure the success of the monitoring, but it will provide considerable support to programme administrators, designers, staff and end-users, ensuring safeguards for programme implementation and ultimately reducing the overall costs. While in the US QA plans are officially required to obtain funds,17 this is not the case over much of Europe, including Italy. This must change: it should be the responsibility of the sponsoring bodies to ensure that a sufficiently high standard is maintained, and a QA plan will be beneficial in this respect. After all, ecological monitoring “need not be a waste of time”.5

Acknowledgements

I would like to acknowledge the role of several colleagues and institutions in the different phases of this work.

Workshop organization. I am grateful to the other members of the organizing committee: Giorgio Brunialti (TerraData environmetrics, Siena, Italy), Alessandro Chiarucci (University of Siena, Siena, Italy), Paolo Giordani (University of Genova, Genova, Italy), Elena Gottardini (FEM-IASMA, Trento, Italy) and Maurizio Perotti (ISMES Divisione Ambiente e Territorio di CESI S.p.A, Unità operativa Atmosfera, Piacenza, Italy). Elisa Baragatti (University of Siena, Siena, Italy), Elisa Santi (University of Siena, Siena, Italy) and Arianna Vannini (TerraData environmetrics, Siena, Italy) helped in the organization of the Workshop. I also acknowledge all the colleagues who gave presentations at the Workshop and submitted manuscripts for this special issue.

Editorials. I am indebted to Tracy Durrant Houston (JRC Ispra, Italy) for the linguistic revision of this paper. Tracy also reviewed other papers in this special issue, and I want to thank her on behalf of the relevant authors as well. I would also like to thank Dr Harpal Minhas (Editor, Journal of Environmental Monitoring) for his support and assistance in the preparation of this special issue.

Support. On behalf of the organizing committee of the Workshop, I would like to thank SET S.p.A. Servizi Energetici Teverola and ISMES Divisione Ambiente e Territorio di CESI S.p.A for their financial support. The Italian national Environmental Protection Agency (formerly APAT, now ISPRA) supported the organization of the event.

References

  1. T. W. Parr, M. Ferretti, I. C. Simpson, M. Forsius and E. Kovács-Láng, Environ. Monit. Assess., 2002, 78, 253–290.
  2. I. F. Spellerberg, Monitoring Ecological Change, Cambridge University Press, Cambridge, UK, 1994, 334 pp.
  3. C. L. Elzinga, D. W. Salzer, J. W. Willoughby and J. P. Gibbs, Monitoring Plant and Animal Populations, Blackwell Science, Malden, Massachusetts, USA, 2001, 337 pp.
  4. R. M. Peterman, Ecology, 1990, 71, 2024–2027.
  5. C. Legg and L. Nagy, J. Environ. Manage., 2006, 78, 194–199.
  6. N. S. Urquhart, S. G. Paulsen and D. P. Larsen, Ecological Applications, 1998, 8, 246–257.
  7. M. Sulkava, S. Luyssaert, P. Rautio, I. A. Jansenn and J. Hollmén, Ecological Informatics, 2007, 2, 167–176.
  8. J. J. Hellman and G. W. Fowler, Ecological Applications, 1999, 9(3), 824–834.
  9. M. Köhl, B. Traub and R. Päivinen, Environ. Monit. Assess., 2000, 63, 361–380.
  10. L. Fattorini, in Quality Assurance nei dati ambientali. Quanto sono affidabili i dati di monitoraggio ai fini della gestione delle risorse naturali? [Quality Assurance in environmental data: to what extent are monitoring data reliable for the management of natural resources?], ed. M. Ferretti, G. Brunialti, A. Chiarucci, P. Giordani, E. Gottardini and M. Perotti, Book of Abstracts and Presentations of the Workshop held in Siena, 7th March 2008, FEM-IASMA, San Michele all'Adige, 2008, pp. 17–25.
  11. D. Edwards, Ecological Applications, 1998, 8(2), 323–325.
  12. US EPA, Guidance for Choosing a Sampling Design for Environmental Data Collection, EPA QA/G-5S, Environmental Protection Agency, Washington, D.C., USA, 2002.
  13. W. G. Cochran, Sampling Techniques, 3rd edn, John Wiley & Sons, New York, NY, USA, 1977.
  14. G. Gertner and M. Köhl, in The Monte Verità Conference on Forest Survey Designs, ed. M. Köhl, P. Bachmann, P. Brassel and G. Preto, WSL Birmensdorf and ETH Zürich, pp. 177–190.
  15. J. E. Pollard, J. A. Westfall, P. L. Patterson, D. L. Gartner, M. Hansen and O. Kuegler, Forest Inventory and Analysis National Data Quality Assessment Report for 2000 to 2003, Gen. Tech. Rep. RMRS-GTR-181, U.S. Department of Agriculture, Forest Service, Rocky Mountain Research Station, Fort Collins, CO, 2006, 43 pp.
  16. S. P. Cline and W. G. Burkman, in Air Pollution and Forest Decline, Proc. 14th Int. Meeting for Specialists on Air Pollution Effects on Forest Ecosystems, IUFRO P2.05, Interlaken, Switzerland, 2–8 October 1988, ed. J. B. Bucher and I. Bucher-Wallin, Birmensdorf, 1989, pp. 361–365.
  17. US EPA, Guidance for Quality Assurance Project Plans, EPA QA/G-5, Environmental Protection Agency, Washington, D.C., USA, 2002.
  18. C. T. Hunsaker, Sci. Total Environ., 1993, (Suppl.), 77–95.
  19. N. G. Tallent-Halsell, Forest Health Monitoring 1994. Field Methods Guide, EPA/620/R-94/027, U.S. Environmental Protection Agency, Washington, D.C., 1994.
  20. M. Ferretti, in Quality Assurance nei dati ambientali. Quanto sono affidabili i dati di monitoraggio ai fini della gestione delle risorse naturali?, ed. M. Ferretti, G. Brunialti, A. Chiarucci, P. Giordani, E. Gottardini and M. Perotti, Book of Abstracts and Presentations of the Workshop held in Siena, 7th March 2008, FEM-IASMA, San Michele all'Adige, 2008, pp. 10–16.
