Keith S. Taber
Faculty of Education, University of Cambridge, UK. E-mail: kst24@cam.ac.uk
Recently CERP published its first articles in the categories of Comments and Replies. These are peer-reviewed articles that address specific issues raised in articles published in the journal. A Comment is an article, by authors other than those of the original paper, arguing that something in a published article should not stand in the literature without further comment; a Reply is a response by the original authors to that Comment. The option of publishing articles of this kind is common among research journals, including those published by the Royal Society of Chemistry. CERP is not seeking to encourage readers to submit comments on the papers they read in the journal as a matter of course, but rather is offering the possibility of challenging assertions made in published articles where readers strongly feel that there is some form of misrepresentation or limitation that should be pointed out to the chemistry education community. These types of article raise issues about the definitiveness of research contributions, the status of knowledge claims made in published papers, and the conversational nature of the research literature in general.
No study can stand on its own outside the context of the wider literature, as each research study is informed by, and indeed assumes, a great deal of background knowledge. For example, a good many studies in chemistry present and interpret data from spectrometers or other instruments, where a theory of instrumentation (how output graphs relate to inferred features of interest) and often analytical theory (such as the application of Fourier transformations) have to be assumed in order to reach the conclusions drawn. In addition, a good many widely accepted scientific principles will often be assumed as part of the theoretical framework underpinning the study (Taber, 2014a).
In general, then, a new research study adds incrementally to a research programme, and also depends upon the researcher's (and, if it is to be accepted by a reader, the reader's) commitment to some of the existing content of that programme. In particular, research programmes have certain ‘hard core’ assumptions that are unquestioned within that programme (Lakatos, 1970).
In writing a research report in chemistry, no author is expected to rehearse the arguments for the particulate nature of matter at submicroscopic scales, or to justify assuming that chemical elements exist. These rather extreme examples reflect chemical knowledge that – even if technically provisional (as with all scientific knowledge) – no one expects to be overturned ‘any time soon’. Chemists will assume that matter is particulate and that elements exist, and will not even think it necessary to point out that strictly these are assumptions or ‘commitments’ (that is, inferences drawn from previous rounds of interplay between theory and observation). In these cases this position is entirely reasonable, as any author who attempted to take every point back to first principles in writing a research report would likely produce a very long, complex text (recapitulating chapters of basic chemistry textbooks) that would be unwelcome if not impenetrable to potential readers. Yet there is clearly a matter of judgement in deciding what can be taken for granted as commitments shared with readers, and indeed some of the many assumptions drawn upon in research papers (whether or not made explicit) that seem sensible and reasonable at the time of publication are likely to seem less secure, or even invalid, at a later date.
Yet the peer reviewer still has to determine that a paper is sound in terms of being based on reasonable assumptions and in making explicit those supporting grounds which may not enjoy full community consensus – as well as acknowledging limitations in the study that readers should be aware of. This is always going to be a somewhat subjective judgement, in the sense that replacing one reviewer with another ‘equally qualified’ colleague (the nominal test of objectivity in science) will not necessarily lead to the same recommendation, as reviewing draws upon judgements that are subtle and complex and informed by the personal knowledge and experience of the reviewer (Polanyi, 1962/1969). In this sense, a necessary sense, peer review will always admit bias, even when reviewers have the highest personal and professional integrity and would never show prejudice. That is, reviewers may do their utmost to be fair to authors, yet their recommendations will still in part reflect background knowledge that is unique and idiosyncratic. It is hard to see how it could be any other way.
In educational research we have additional complications, as the phenomena we study are often complex and diverse, and – unlike in chemistry – we can seldom fully dis-embed the phenomena of interest from particular contexts. One isolated sample of a compound of high purity should be much the same as another pure sample under similar conditions. That does not apply so well to teachers, classrooms, students, and so forth. Moreover, because of this, educational research is often informed by conceptual frameworks built around theoretical perspectives that are not consensual in the field, and uses a wide range of research methods that are sometimes (grounded theory, Rasch analysis, cluster analysis, think-aloud protocols…) familiar to only a minority of colleagues, and that may in some cases still be in the process of being adapted from other source disciplines (Taber, 2014a). Peer reviewers sometimes have to take the view that, while they are not themselves entirely convinced by a theoretical perspective or a methodological approach, it is still admissible, and then attempt to judge submitted manuscripts on their own terms. Yet reviewers are also expected to make it clear when they have genuine doubts about the applicability of a perspective or technique being adopted in a particular study.
Given the professionalism and care of the vast majority of CERP reviewers (with due oversight, and sometimes additional input, at editorial level), I would not expect many articles that get into production to cause widespread concern among colleagues. Yet given the nature of knowledge, of the scientific process, and of the foci of educational research, it is only to be expected that sometimes some readers will strongly feel that the peer review process has resulted in a misjudgement in recommending acceptance of an article in the form in which it appears as a publication.
My conclusion from this analysis is that:
(a) if peer review could be expected always to make judgements on behalf of the community that colleagues would generally share, then we would have little need for ‘Comments’ and ‘Replies’; rather, the usual succession of publications citing earlier publications would offer sufficient formal dialogue between researchers;
(b) however, even when reviewers do thorough and careful evaluations during peer review, it is inevitable that there will sometimes be papers published which lead some other colleagues to have significant concerns such that they wish to challenge some aspect of the published work directly.
There is a well established tradition of exploring students' ideas in science to inform and evaluate teaching (Driver and Erickson, 1983; Gilbert and Watts, 1983; Duit, 2009). Much of this work is undertaken from what is often called a personal constructivist perspective (Taber, 2009). This sees knowledge as the personal construction of the individual, so that teaching cannot transfer scientific concepts into learners' minds, but rather learners have to interpret and make sense of teaching using whatever resources they have available to do so. Some of the assumptions adopted in this area of research may seem so familiar that they are not always made explicit in reports as researchers often expect they can be taken for granted (as core commitments of a common research programme) and will intrinsically be shared with readers (Taber, 2013).
As well as methodological norms, there are ontological and epistemological commitments informing research (see Fig. 1). Ontological assumptions relate to, for example, what we actually mean by such terms as knowledge, understanding and thinking. Is it reasonable to assume we all know exactly what colleagues mean when using such terms, such that they can be used as technical terms in research without operational definition or further clarification?
Epistemological assumptions relate to, for example, when it is reasonable to make definite claims about the contents of other minds. In everyday life we all employ our ‘mind reading abilities’ to make such pronouncements as “I know what you are thinking”, “I can see that you have changed your mind about that”, “you do not understand my point” and “you are confused about that”. Having a ‘theory of mind’ (Wellman, 2011) that allows us to draw inferences about the mental lives of others is an essential part of normal social cognition, and is something we often simply take for granted. However, we are not actually reading minds but rather drawing inferences, and these are based on limited and indirect evidence.
Fig. 1 Research involves interpretations that are informed by underlying commitments – whether these are made explicit in the report or not.
We never see another person's thinking, but only observe ‘representations’ of that thinking that are made publicly available (Taber, 2013): what the person says, their gestures, inscriptions, facial expressions, and so on. Such evidence is partial and needs to be carefully interpreted. People are not fully aware of all their own cognitive processes, and conscious thought may be too rapid for comprehensive reporting even when a person is motivated to provide a full and honest account of their thinking. Moreover, not all of our thinking is verbal: Einstein, for example, used imagery in his scientific thinking that could not be directly represented in a verbal report (Miller, 1986).
Research requires different standards to everyday conversation. When a research report makes claims about knowledge, learning, understanding, thinking and so forth, it is important both that it is clear what the claims are about (e.g., what does this author mean by a learner's conception?) and how they are derived (e.g., what are the grounds for claiming that most first-year undergraduates have a poor understanding of the nature of catalysis?). This is challenging, as authors need to balance thoroughness with readability (Pope and Denicolo, 1986), and texts can readily become convoluted when everything is being defined and is subject to chains of explicit provisos.
When students select responses from instruments of the kind used in the work I was reviewing – which I am not criticising, as such instruments offer useful tools (Treagust, 1988) and I have undertaken research with similar ones – they are making a choice between a limited number of options provided by the researcher, and may well be selecting the ‘best’ compromise option for an understanding that does not seem to fit any of the options especially well. Ideally the development of such tools involves cycles of interviews to ensure items are designed, and if necessary modified, to ‘catch’ student thinking: but that thinking is nuanced and often idiosyncratic (Taber, 2014b), so a modest set of written statements will only ever offer a first approximation to the thinking of many respondents. Nonetheless, it is still certainly of note if a high proportion of respondents select an option reflecting an alternative conception in place of an item representing the scientific concept or curriculum model held up as target knowledge. It is clearly significant for teaching if a substantial proportion of learners choose the ‘wrong’ answer as the best match. But there are complications: the epistemological chain between data and conclusions is seldom straightforward.
We cannot be sure (unless we talk to the respondents, and perhaps not even then!) whether students have understood the intended meaning of the items – so sometimes we may be picking up issues of limited literacy, or simply differences in the use of language (Watts and Gilbert, 1983), rather than clear indicators of poor or non-canonical conceptual understanding. Often students hold manifold conceptions in topic areas (Taber, 2014b), so selecting a particular response (either the correct one or a distractor) may not always indicate that they would have made the equivalent choice in a parallel question with different wording or a different example, or set in a different context (or even to the same item had it been sequenced differently in the instrument, such that different thinking was cued). This also ignores those students who may not be committed to reading and considering questions carefully, those who make random guesses, and those who may mistakenly tick a different response box from the one they intend.
Moreover, a student may select an answer because they think it is most likely to be the right answer, without this meaning they are committed to it as a matter of belief. For example, in biology, students who reject evolution because they consider it inconsistent with the values and beliefs of their family or community, as many do in the United States (Long, 2011), may still learn to pick the response item representing natural selection in a school test. They can know what the right answer is supposed to be, without believing it is actually right.
Much of our work in chemistry education concerns models, which are meant to be understood and applied – not necessarily believed! We do not necessarily want students to ‘believe in’ the Lewis model of acids or in an SN2 reaction mechanism – or even in the periodic table. Belief does not seem to be the right ontological category and, even if it were, this type of instrument does not offer the epistemological sophistication to equate a response choice with a strongly committed conception. That was an issue I raised in peer review, but perhaps another reviewer who had not themselves grappled with these issues in some depth (Taber, 2013) might simply have been happy to consider student response choices to be appropriate indicators of belief.
Langbeheim's comments have (after peer review) been published as a Comment on the original article (Langbeheim, 2015). The original authors did not fully accept Langbeheim's criticism – in part because of privileged information, that is, what was known to the original researchers who developed the research instrumentation but was not available in the public record. A researcher who has worked through an extended process, often with intimate engagement in a project that is only summarised in a report, may be convinced of their interpretation of events, while to readers the written report may seem to offer only one possible viable interpretation of what the data imply – without being conclusive.
After peer review, Smith and Villarreal's (2015a) Reply to Langbeheim's (2015) Comment has also been published. Smith and Villarreal (2015a) report how their experience in (unpublished) pilot studies supports their interpretation of the data in the published study. It would have been possible to include that information in the original submission – but there are always judgements to be made about how much detail is useful in explaining the development of research instruments and designs. As Medawar (1963/1990) long ago pointed out, the research paper traditionally offers a rational reconstruction of research, making a succinct and linear case for knowledge claims, rather than a narrative account of all the blind avenues and peripheral considerations that usually have to be worked through before arriving at publishable work. Although Smith and Villarreal (2015a) have defended the conclusions they drew in their study (2015b), their Reply acknowledges how Langbeheim's (2015) Comment indicates the potential value of “further investigation of how factors such as consideration of time and the nature of the representations in the instruments might affect students' conceptions and responses”. The Article–Comment–Reply triad offers an accelerated version of the dialogic conversation usually carried out in the literature between successive research papers that iteratively move a field forward.
As all research in chemistry education (or indeed any empirical field) involves interpretation, there are always alternative interpretations that could be made (see Fig. 1). The authors of a research report need to provide enough detail to show that their interpretation is viable and well supported, but also to allow readers to see where other viable interpretations might be drawn. This may mean making more explicit the logical chain back to those underlying commitments (i.e., assumptions) adopted by the researchers that may not be shared by all those working in the field.
When a manuscript is submitted for publication the peer reviewers do not need to find the case for the conclusions ‘proven’, but they do have to be convinced there is a strong case for the interpretations made. Given that there will always be alternative interpretations, and that in some cases the argument for the authors' preferred conclusions may not be overwhelming, we might consider that the strongest submissions are written so as to deliberately make other possibilities apparent. This supports the research process by contributing to what has been called the ‘positive heuristic’ of the research programme (Lakatos, 1970) – that is, indicating valuable directions for further studies. Authors who do their best to be open about interpretative alternatives to their own inferences encourage further work: work which might produce results that undermine their own studies – but in doing so allows research to move forward (Popper, 1989) – or, if their interpretations are on the right lines, further studies supporting their work. Either way, the implied invitation to challenge or develop their work is likely to be noticed by the research community, and the science of chemistry education progresses, which is surely what we all want.