Always a matter of interpretation: inferring student knowledge and understanding from research data

Keith S. Taber
Faculty of Education, University of Cambridge, UK. E-mail: kst24@cam.ac.uk

Received 14th December 2016, Accepted 14th December 2016

Correcting and annotating the literature

Chemistry Education Research and Practice (CERP) actively invites papers reporting research results, reviews of areas of literature, and theoretical perspectives that can inform work in the field. Astute readers will have noticed that CERP has also begun publishing some articles in the new categories of comments (Langbeheim, 2015) and replies (Smith and Villarreal, 2015a). These are peer-reviewed articles that address specific issues raised in articles published in the journal. A comment is an article by authors other than those of the original paper, who consider that something in that article should not stand in the literature without further comment; a reply is a response by the original authors in relation to the comment. The option of publishing articles of this kind is common among research journals.

Journals have the option of withdrawing advance articles, or retracting papers already assigned to issues, when substantive doubts about their merits arise after publication. This would normally happen if the editor came to believe there had been significant scientific malpractice (such as reporting fictional data); if an author accepted that a major flaw, highlighted by a reader but missed in peer review, effectively undermined the case for the conclusions; or if some finding reported in good faith was later found to be an artefact of something like instrumental failure or poor calibration.

The retraction of a published paper is an extreme step, merited only when a paper is so fundamentally flawed that leaving it in the literature could mislead the community. Interestingly, research looking at the medical literature suggests that even retracted papers may continue to be regularly cited in later research, usually without any recognition that the work had already been retracted (Budd et al., 1998). Withdrawal or retraction of published articles is rare in chemistry education. The only example I am aware of was published in the Journal of Chemical Education (Scerri, 2012), and then withdrawn because it was judged insufficiently distinct from an article previously published by the same author in Chemistry Education in New Zealand (Scerri, 2010). There was no suggestion that the article lacked scholarly merit, and it remains available on the journal website (as supporting information to an ‘Addition/Correction’ noting the withdrawal). Whether Scerri's (2012) paper is sufficiently developed and distinct from the 2010 publication to be considered to offer original new knowledge is a matter of nuanced judgement and interpretation. In an empirical paper it may be fairly clear that originality refers to new data, analysis and/or results. In the tradition of philosophical analysis (as in Scerri's work) it is common practice for a scholar to return to the same theme to further develop particular lines of argument, and it may be less straightforward to judge whether a new article is sufficiently different from published work to be considered publishable.

Formal comments on published articles

Sometimes a reader of a published study will consider that the way the work has been conceptualised or executed is sufficiently problematic to lead to reservations about the study's conclusions: reservations that are not sufficiently covered by any provisos offered by the authors themselves. The focus of concern may be sufficiently substantial that it provokes the reader to prepare a commentary challenging some aspect of the published account (e.g., Taber, 2011).

The addition of comments to CERP's categories of submission now provides a suitable route for raising such concerns. A published comment reports something that its author feels should be considered by readers of a published article. This gives the reader of the original article the opportunity to challenge some aspect of work that has been through peer review and stands as part of the published literature in the field, as long as that same process of peer review suggests the challenge itself has sufficient merit. However, this type of article raises issues about the definitiveness of research contributions, the status of knowledge claims made in published papers, and the conversational nature of the research literature in general.

Knowledge and the research literature

The primary research literature is often seen as the location of scientific knowledge in a field (McInerney et al., 2004, p. 49). A little thought shows that even if this is the case in principle, it is a problematic notion in practice (Taber, 2013b). For one thing, the literature is not a coherent and unitary thing. Even knowing which journals to take seriously in a field (when new journals are being started all the time) requires insider, expert judgement (raising the issues of who is expert enough to make such judgements, who decides who is expert enough, and so on). The literature is in flux, which certainly reflects how science progresses, but this means it is difficult to know which papers published a century, a decade, a year – or in some fields, even a month – ago reflect the current state of knowledge.

The literature in an active field is unlikely to offer consensus in its accounts. As one example, early work to characterise students’ ideas in scientific topics produced contrary claims about the nature of student thinking, with quite different implications for teachers: suggesting either that some published accounts were simply wrong, or that the issue was much more complex and nuanced than most of the published descriptions implied (Taber, 2009). That is, researchers should not have been asking whether (or, worse, assuming that) students’ ideas were – for example – stable or labile, but rather under what circumstances they were likely to be stable, and under what circumstances labile.

An even more serious problem with the notion that scientific knowledge is found in the primary literature, certainly for a personal constructivist like myself, is that knowledge has to be built up in the minds of individuals. Journal papers do not contain knowledge, but simply representations of (some of) the knowledge of their authors. This is not merely a pedantic point about semantics. The reader cannot find knowledge in journals, not even in CERP, but has to interpret the representations in articles to build a personal understanding. This is not a process to be taken for granted – it parallels what happens in chemistry classrooms around the world, where students make sense of textbooks and teaching and (to put it mildly) do not always arrive at canonical understanding. So, journal accounts “are public inscriptions that represent the thinking of authors but need to be interpreted through the idiosyncratic cognitive resources of readers to be understood” (Taber, 2013b, p. 201).

One model of science (Lakatos, 1970) involves the development of research programmes that remain worth supporting as long as they are seen by the community to be productive: that is, where the interplay of theory and empirical research seems to offer useful new insights into the phenomena under study. This reflects a post-positivist view of science (Phillips and Burbules, 2000), which acknowledges that science – even if it is often popularly said to seek ‘truth’ – cannot produce absolute knowledge, but rather develops reliable knowledge that is always considered in principle provisional, and so open to further critique and revision in the light of new evidence. Given that, the task for the author of a research report is not to ‘prove’ a conclusion, but rather to make a persuasive case for some knowledge claim as being reasonable in the light of current thinking within a research programme and well supported by robust evidence.

No study can stand on its own outside the context of the wider literature, as each research study is informed by, and indeed assumes, a great deal of background knowledge. For example, a good many studies in chemistry will present and interpret data from spectrometers or other instruments, where a theory of instrumentation (how output graphs relate to inferred features of interest) and often an analytical theory (e.g. the use of Fourier analysis) have to be assumed (in effect, taken as given) before the conclusions can be drawn. In addition, many widely accepted scientific principles will often be assumed as part of the theoretical framework for a study: assumed in the sense that it is sufficient to refer to them, or perhaps even imply them, and others in the field will not require or expect further justification of these points. For example, the author of a paper that uses the conservation of energy as a principle to support an argument is not expected to provide the grounds for considering this principle valid, as it has been well established and is widely accepted. In Lakatos’ terms, the notion that energy is conserved would be an intellectual commitment that is not questioned within a research programme in chemistry.
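To make this concrete, consider a minimal sketch of the kind of analytical theory that gets taken as given when a spectrum is read. The signal below is invented, and the code represents no particular instrument's software: the point is that reading the peaks of a Fourier transform as ‘the components in the sample’ only works because the underlying theory of instrumentation is assumed to be valid.

```python
# A minimal sketch (invented signal; not any real instrument's software)
# of an analytical theory taken as given when interpreting a spectrum:
# the time-domain output of a hypothetical FT instrument is assumed to be
# a sum of decaying sinusoids, so a Fourier transform recovers frequencies.
import numpy as np

rate = 1000.0                            # samples per second (assumed)
t = np.arange(0, 2.0, 1.0 / rate)        # 2 s acquisition

# Simulated decaying signal with components at 50 Hz and 120 Hz
signal = (np.sin(2 * np.pi * 50 * t)
          + 0.5 * np.sin(2 * np.pi * 120 * t)) * np.exp(-t / 0.5)

spectrum = np.abs(np.fft.rfft(signal))            # magnitude spectrum
freqs = np.fft.rfftfreq(signal.size, 1.0 / rate)  # frequency axis

# The 'peaks' in the output graph are interpreted as component frequencies
# only because the theory relating signal to spectrum is taken as given.
for f in freqs[spectrum > 0.3 * spectrum.max()]:
    print(f"peak near {f:.1f} Hz")
```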

Deciding what can be taken as given

In general, then, a new research study adds incrementally to a research programme, and also depends upon some of the existing content of that programme. In particular, research programmes have certain ‘hard core’ assumptions that are unquestioned in that programme (such as the conservation of energy). In writing a research report in chemistry, no author today is expected to rehearse the arguments for the particulate nature of matter at submicroscopic scales, or to justify assuming that there exist chemical elements. These rather extreme examples reflect chemical knowledge that – even if in principle it should be considered provisional – no one expects to be overturned any time soon.

Chemists will assume matter is particulate and that elements exist, and will not even think it necessary to point out that strictly these are assumptions or ‘commitments’. In these cases this position is entirely reasonable, as any author who attempted to take every point back to first principles in writing a research report would likely produce a very long, complex text that would be unwelcome, if not impenetrable, to potential readers. Yet what can be taken as given has to be judged, and some of the many assumptions drawn upon (and not always made explicit) in research papers that seem sensible and reasonable at the time of publication are likely to seem less secure, or even invalid, at a later date.

Shaving off less plausible interpretations

From this perspective, the role of the peer reviewer is a nuanced one. A reviewer cannot set a test of being persuaded beyond all possible doubt that a submitted manuscript offers new knowledge that will stand the test of time – as no one can know which reasonable assumptions of today may become anachronistic false notions in the future. Moreover, any set of research results will in principle admit of alternative interpretations, even if these alternatives may be too convoluted and apparently contrived to get past Occam's (or Ockham's) razor. This is a heuristic that leads scientists to prefer explanations with the fewest auxiliary assumptions, but it reflects a metaphysical commitment imposed on science (Taber, 2013a) as a pragmatic rule of thumb rather than a foolproof principle.

That is, our common sense notion of how the world is/should be tells us to prefer the simpler account, even if there is no strict rational or empirical basis for excluding more convoluted alternatives. This perhaps explains why, for example, a researcher who finds that students of a certain age demonstrate thinking about a particular chemistry topic which is labile, atheoretical, and romanced may prefer the explanation that this is because students’ scientific ideas tend to be labile, atheoretical, and romanced, rather than a more complex and nuanced account that student thinking about science is diverse in its nature, but that students at a certain level of development, with certain levels of background knowledge and personal experience relevant to a particular topic, in response to certain kinds of teaching about the topic, within a wider institutional (e.g. curriculum), cultural and linguistic context, and in response to being investigated through particular methodological approaches, tend to present with ideas about that topic which are labile, atheoretical, and romanced.
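The razor has a familiar quantitative analogue in model selection, where additional parameters have to be ‘paid for’. The sketch below is purely illustrative – invented data, and the standard Akaike Information Criterion rather than anything drawn from the studies cited here – but it shows the pragmatic rule at work: a more convoluted model is preferred only when it fits substantially better than a simpler one.

```python
# An illustrative sketch (invented data) of Occam's razor operationalised
# as a pragmatic rule: the Akaike Information Criterion (AIC) penalises
# extra parameters, so a more complex model must fit substantially better
# before it is preferred over a simpler one.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 40)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, x.size)  # noisy, simple trend

def aic(degree):
    """AIC for a least-squares polynomial fit of the given degree."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    n, k = x.size, degree + 1                   # k = number of parameters
    return n * np.log(np.mean(residuals ** 2)) + 2 * k

for degree in (1, 5, 9):
    print(f"degree {degree}: AIC = {aic(degree):.1f}")
# The straight line typically wins: higher-degree fits reduce the residuals
# slightly, but not by enough to offset the penalty for extra assumptions.
```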

The peer reviewer has to determine that a paper is sound in terms of being based on reasonable assumptions and of making explicit those supporting grounds which may not have full community consensus – as well as acknowledging limitations in the study that readers should be aware of. This is always going to be a somewhat subjective judgement, in the sense that replacing one reviewer with another apparently ‘equally qualified’ colleague will not necessarily lead to the same recommendation: reviewing draws upon judgements that are nuanced and complex and informed by the personal knowledge and experience of the reviewer. In this sense – a necessary sense – peer review will always admit bias, even when reviewers have the highest personal and professional integrity and would never admit prejudice. That is, reviewers may do their utmost to be fair to authors, yet their recommendations will still in part reflect background knowledge that is unique and idiosyncratic. It is hard to see how it could be any other way.

Peer review has to support readers across the field

Additionally, a journal like Chemistry Education Research and Practice, which considers submissions from across the whole field of chemistry education (and not only from within a single research programme), receives manuscripts drawing upon a wide range of theoretical perspectives, and using diverse methodologies to inform research design. Many of these papers will report work based on ‘hard core’ commitments, some of which are shared widely within the field, and some of which are only taken as given within specific traditions of enquiry. Some of these assumptions may be ontological (that people have alternative conceptions, and we all understand the nature of these), and some may be epistemological (that people's alternative conceptions can be characterised from their responses to interview questions); sometimes familiarity within a research programme means assumptions remain tacit and therefore largely unexamined – e.g. the assumption that we can know what people think simply by asking them (Taber, 2013b).

Peer reviewers working within the research programme from which a manuscript originates may not easily spot implicit assumptions, whereas peer reviewers from elsewhere in the field who are more likely to notice such assumptions may not have sufficient familiarity with the research literature in the topic to evaluate originality or whether the usual conventions in the particular research tradition have been adopted.

In educational research we have additional complications, as the phenomena we study are often complex and diverse, and – unlike in chemistry – we can seldom fully disembed the phenomena of interest from particular contexts. One isolated sample of a compound of high purity should be much the same as another pure sample under similar conditions. That does not apply so well to teachers, classrooms, students, etc. Moreover, because of this, educational research is often driven by conceptual frameworks built around theoretical perspectives that are not consensual in the field, and uses a wide range of research approaches and methods that are sometimes (grounded theory; Rasch analysis; cluster analysis; think-aloud protocols…) familiar only to a minority of colleagues, and may in some cases still be in the process of being adapted from other source disciplines. Peer reviewers sometimes have to take the view that, while they are not entirely convinced by a theoretical perspective or a methodological approach, it is still admissible, and then attempt to judge submitted manuscripts on their own terms. Yet reviewers are also expected to make it clear when they have genuine doubts about the applicability of a perspective or technique in the context of chemistry teaching and learning.
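To give a flavour of one of the less familiar methods just mentioned, the sketch below applies a simple cluster analysis (k-means) to invented binary response patterns on a hypothetical five-item diagnostic instrument. Nothing here is drawn from any study discussed in this editorial; it simply indicates the kind of analysis a reviewer might be asked to evaluate on its own terms.

```python
# A purely illustrative cluster analysis: grouping (invented) response
# patterns to a hypothetical five-item diagnostic instrument so that
# students with similar answer profiles fall into the same cluster.
import numpy as np
from sklearn.cluster import KMeans

# Rows = students, columns = items; 1 = option keyed to the target
# conception was chosen, 0 = a distractor was chosen. All values made up.
responses = np.array([
    [1, 1, 1, 0, 1],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [0, 1, 0, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 0, 1, 0],
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(responses)
for student, cluster in enumerate(model.labels_):
    print(f"student {student}: cluster {cluster}")

# Note the interpretive step that remains: what, if anything, each cluster
# *means* about the students' thinking is not supplied by the algorithm.
```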

Given the professionalism and care of the vast majority of CERP reviewers (with due oversight and sometimes additional input at editorial level), I would not expect many articles that get into production to cause widespread concern among colleagues. Yet given the nature of knowledge, the scientific process, and the foci of educational research, it is only to be expected that sometimes some readers will feel strongly that the peer review process has resulted in a misjudgement in recommending publication of an article in the form in which it was accepted: for example, without acknowledging alternative interpretations that seem reasonable to a reader.

My conclusion from this analysis is that

(a) if peer review could be expected always to make judgements on behalf of the community that colleagues would generally share, then we would have little need for ‘comments’ and ‘replies’; rather, the succession of publications citing earlier publications would offer sufficient dialogue between researchers;

(b) however, even when referees do thorough and careful work in peer review, it is inevitable that there will sometimes be papers published which lead some other colleagues to have significant concerns such that they wish to challenge some aspect of the published work directly.

Comments and replies: an example of how it works

Recently, Smith and Villarreal (2015b) published a research report in CERP entitled “Using animations in identifying general chemistry students’ misconceptions and evaluating their knowledge transfer relating to particle position in physical changes”. As the title suggests, this study falls within the major tradition of work in chemistry education, and more widely in science education, that engages with students’ knowledge and their ‘conceptions’ (Taber, 2009). There is a well-established tradition of exploring students’ ideas in science to inform and evaluate teaching (Driver and Oldham, 1986). Much of this work is undertaken from the personal constructivist perspective referred to above. This sees knowledge as the personal construction of the individual, so that teaching cannot transfer scientific concepts into learners’ minds; rather, learners have to interpret and make sense of teaching using whatever resources they have available to do so.

Some of the assumptions adopted in this area of research may seem so familiar that they are not always made explicit in reports, as researchers often assume they can be taken for granted (as core commitments of a common research programme) and are already shared with readers. Much of my own research has concerned such issues as student knowledge, understanding and learning in science topics. Working in such an area over time can make one very reluctant to make definitive claims, given some quite substantive challenges to drawing firm conclusions in this area of work (Taber, 2013b).

Ontological assumptions relate to what we actually mean by such terms as knowledge, understanding and thinking. Is it reasonable to assume we all know exactly what colleagues mean when using such terms, so that they can be used as technical terms in research without definition or further clarification? Epistemological assumptions relate to when it is reasonable to make definite claims about the contents of other minds. After all, in everyday life we all employ our ‘mind reading abilities’ to allow us to make such pronouncements as “I know what you are thinking”, “I can see that you have changed your mind about that”, “you do not understand my point” and “you are confused about that”. Having a theory of mind that allows us to draw inferences about the mental lives of others is an essential part of normal social cognition, and something we often simply take for granted. However, we are not actually reading minds; we are drawing inferences, based on limited and indirect evidence.

We never see another person's thinking, but only observe publicly available ‘representations’ of that thinking: what the person says, their gestures, inscriptions, facial expressions, etc. Such evidence is partial and needs to be carefully interpreted (Taber, 2013b). People are not fully aware of all their own cognitive processes, and conscious thought may be too rapid for detailed reporting even when a person is motivated to provide a full and honest account of their thinking. Moreover, not all of our thinking is verbal (Karmiloff-Smith, 1996): Einstein, for example, used a great deal of imagery in his scientific thinking (Miller, 1986), which could only be indirectly represented in a verbal report.

Research requires different standards from everyday social mind-reading. When a research report makes claims about knowledge, learning, understanding, thinking, and so forth, it is important both that it is clear what the claims are about (e.g., what does this author mean by a learner's conception?) and how they are derived (e.g., what are the grounds for claiming most first-year undergraduates have a poor understanding of the nature of catalysis?). This is challenging, as authors need to balance thoroughness with readability, and texts can readily become convoluted when everything is being defined and subject to chains of explicit provisos.

Research, then, collects indirect evidence of thinking. As one example, I recently reviewed a paper for another journal in which the authors made a claim about the proportion of respondents who held a certain belief about an aspect of chemistry. Research into beliefs is an important and valid part of work in science education, but the instrumentation used in the research was a pencil-and-paper diagnostic instrument which asked learners to select from provided options. I was not convinced this approach was suitable for uncovering beliefs. Indeed, the work being reviewed was actually about student conceptions, and I suspect the reference to beliefs was simply intended as a synonym for conceptions, perhaps to avoid text that seemed repetitious.

When students select responses from instruments of the kind used in the work I was reviewing, they are making a choice between a limited number of options provided by the researcher, and may well be making the ‘best’ available match to their actual understanding when none of the options seems to fit especially well. It is certainly still noteworthy, and likely significant, when a high proportion of respondents select an option reflecting an alternative conception in place of a response representing the scientific concept or curriculum model held up as target knowledge. It is clearly significant for teaching if a substantial proportion of learners choose the ‘wrong’ answer as the best match. But there are complications.

We cannot be sure (unless we talk to them, and perhaps not even then!) whether students have understood the intended meaning of the items – so sometimes we may be picking up issues of limited literacy rather than conceptual understanding. Often students hold manifold conceptions in topic areas (Taber, 2014), so selecting a particular response (either the correct one or a distractor) may not always indicate they would have made the equivalent choice in a parallel question with different wording or a different context, for example (or even in the same item if it were sequenced differently in the instrument; or in precisely the same instrument on a different day, when they had previously been thinking about different things). This also ignores students who are not committed to reading questions carefully, who make random guesses, or who – simply by mistake – tick a different response box from the one they intended.

Moreover, a student may select an answer because they think it is most likely to be the right answer, without this meaning they are committed to it as a belief. Much of our work in chemistry education concerns models, which are meant to be understood and applied – not necessarily believed! We do not necessarily want students to ‘believe in’ the Lewis model of acids or in an SN2 reaction mechanism – or even in the periodic table (Taber, 2010). Belief does not seem to be the right ontological category and, even if it were, this type of instrument does not offer the epistemological sophistication to equate a response choice with a strongly committed conception. That was an issue I raised in peer review, but perhaps another reviewer who had not grappled with these issues in some depth might simply have been happy to consider student response choices appropriate indicators of belief.

In Smith and Villarreal's (2015b) paper in this journal, they report student responses to imagery “which illustrates particulate-level representations of a melting–freezing cycle”, from which they inferred students have “misconceptions” about particle motion. Like all papers submitted to the journal, this article was subject to peer review, and reviewers felt the authors had made their case well enough to recommend publication. However, one reader, Langbeheim, was less convinced, and offered an alternative interpretation of student responses that need not necessarily imply the misconceptions that Smith and Villarreal (2015b) inferred. Langbeheim's comments have (after peer review) been published as a comment on the original article (Langbeheim, 2015).

The original authors did not fully accept Langbeheim's criticism – in part on the basis of privileged information that was known to those who developed the research instrumentation but was not available in the public record. Researchers who have worked through an extended process, often with intimate engagement with data that is only summarised in a report, may be convinced of their interpretation of events, while to others the written report may seem to offer only a viable interpretation of what the data mean, rather than a conclusive one.

Smith and Villarreal's (2015a) reply to Langbeheim's comment was also published, again after peer review. Smith and Villarreal report how their experience in (unpublished) pilot studies supports their interpretation of the data in the published study. It would have been possible to include that information in the original submission – but there are always judgements to be made about how much detail is useful in explaining the development of research instruments and designs. As Medawar (1963/1990) long ago pointed out, the research paper traditionally offers a rational reconstruction of the research undertaken, to make a succinct and linear case for knowledge claims, rather than a narrative of all the blind avenues and peripheral activities that usually have to be worked through before getting to publishable work. Although Smith and Villarreal have defended the conclusions they drew in their (2015b) study, their reply to Langbeheim's (2015) article acknowledges how this comment indicates the value of “further investigation of how factors such as consideration of time and the nature of the representations in the instruments might affect students’ conceptions and responses” (Smith and Villarreal, 2015a, p. 701). The article–comment–reply triad offers an accelerated version of the dialogic conversation carried out in the literature between successive research papers.

Acknowledging the centrality of interpretation

All research – whether in chemistry or chemistry education, whether using qualitative or quantitative methods – relies upon researchers making interpretations of data. In some research, such as when interview transcripts or think-aloud protocols are discussed and used to draw inferences about student thinking and knowledge, the process of interpretation is obvious to readers of the report. In other research (such as when using instruments with researcher-determined response options, or when using statistics to test hypotheses), the process of interpretation almost becomes invisible, because the apparatus of data collection and analysis channels the conclusions drawn. In effect, much of the work of interpretation in such research is undertaken before data collection. That is, data collection is set up so that it provides data that fall into categories for which the potential interpretations are already in place: if the statistic reaches the critical value, then the hypothesis is supported; if a student selects this option, then they hold that conception; and so on.
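The following sketch makes that front-loading of interpretation explicit for a hypothetical fixed-response item. The option-to-conception coding and the statistical decision rule are both specified before any student answers, so at analysis time ‘interpretation’ reduces to applying pre-set mappings. The item, options, counts and significance threshold are all invented.

```python
# A minimal sketch of interpretation 'built in' before data collection:
# the option-to-conception coding and the decision rule are fixed in
# advance, so the analysis channels responses into pre-labelled
# conclusions. The item, options and counts are hypothetical.
from scipy.stats import chisquare

# Fixed in advance: what choosing each option is taken to indicate.
coding = {
    "A": "scientific model of particle motion",
    "B": "alternative conception: particles stop moving",
    "C": "alternative conception: particles themselves expand",
}

observed = {"A": 18, "B": 52, "C": 30}   # hypothetical response counts

# Pre-set decision rule: chi-square against a uniform-choice null,
# with alpha fixed at 0.05 before any data were seen.
stat, p = chisquare(list(observed.values()))
verdict = "options not chosen at random" if p < 0.05 else "no evidence"
print(f"chi-square = {stat:.1f}, p = {p:.3f}: {verdict}")

for option, count in observed.items():
    print(f"{count} students coded as holding: {coding[option]}")
```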

All research in chemistry education involves interpretation, and there are always (more or less plausible) alternative interpretations that could be made. The authors of a research report need to provide enough detail to show their interpretation is viable and well supported, but should also help readers to see whether other interpretations might be drawn. Peer reviewers then do not need to find a case ‘proven’, but do have to be convinced there is a strong case for the interpretations made. Given that there are always going to be alternative interpretations, and that in some cases the argument for the authors’ preferred conclusions may not be so strong that even they would wish definitely to exclude other options, it becomes creditworthy for a paper to make other possibilities apparent. This will support the research process by contributing to what has been called the ‘positive heuristic’ of the research programme (Lakatos, 1970) – that is, indicating valuable directions for further studies. Arguably the scientific attitude involves facilitating others to test, and potentially falsify, our own work (Popper, 1989). Authors who do their best to be open about interpretative alternatives to their own inferences encourage further work which might either undermine or support and develop their own findings. Either way, the science of chemistry education progresses, which is surely what we all want.

References

  1. Budd J. M., Sievert M. and Schultz T. R., (1998), Phenomena of retraction: reasons for retraction and citations to the publications, J. Am. Med. Assoc., 280(3), 296–297. DOI: 10.1001/jama.280.3.296.
  2. Driver R. and Oldham V., (1986), A constructivist approach to curriculum development in science, Stud. Sci. Educ., 13, 105–122.
  3. Karmiloff-Smith A., (1996), Beyond Modularity: a developmental perspective on cognitive science, Cambridge, Massachusetts: MIT Press.
  4. Lakatos I., (1970), Falsification and the methodology of scientific research programmes, in Lakatos I. and Musgrave A. (ed.), Criticism and the Growth of Knowledge, Cambridge: Cambridge University Press, pp. 91–196.
  5. Langbeheim E., (2015), Reinterpretation of students' ideas when reasoning about particle model illustrations, Chem. Educ. Res. Pract., 16, 697–700. DOI: 10.1039/C5RP00076A.
  6. McInerney C., Bird N. and Nucci M., (2004), The Flow of Scientific Knowledge from Lab to the Lay Public: The Case of Genetically Modified Food, Sci. Commun., 26(1), 44–74. DOI: 10.1177/1075547004267024.
  7. Medawar P. B., (1963/1990), Is the scientific paper a fraud? in Medawar P. B. (ed.), The Threat and the Glory, New York: Harper Collins, pp. 228–233. (Reprinted from: The Listener, Volume 70: 12th September, 1963).
  8. Miller A. I., (1986), Imagery in Scientific Thought, Cambridge, Massachusetts: MIT Press.
  10. Phillips D. C. and Burbules N. C., (2000), Postpositivism and Educational Research, Oxford: Rowman & Littlefield.
  10. Popper K. R., (1989), Conjectures and Refutations: The Growth of Scientific Knowledge, 5th edn, London: Routledge.
  11. Scerri E. R., (2010), Comments on a recent defence of constructivism in chemical education, Chem. Educ. New Zealand, 15–18.
  12. Scerri E. R., (2012), Some Comments Arising from a Recent Proposal Concerning Instrumentalism and Chemical Education, J. Chem. Educ., 89(11), 1481. DOI: 10.1021/ed101025f.
  13. Smith K. C. and Villarreal S., (2015a), A Reply to “Reinterpretation of Students’ Ideas when Reasoning about Particle Model Illustrations”, Chem. Educ. Res. Pract., 16, 701–703. DOI: 10.1039/C5RP00095E.
  14. Smith K. C. and Villarreal S., (2015b), Using animations in identifying general chemistry students’ misconceptions and evaluating their knowledge transfer relating to particle position in physical changes, Chem. Educ. Res. Pract., 16(2), 273–282. DOI: 10.1039/C4RP00229F.
  15. Taber K. S., (2009), Progressing Science Education: constructing the scientific research programme into the contingent nature of learning science, Dordrecht: Springer.
  16. Taber K. S., (2010), Straw men and false dichotomies: overcoming philosophical confusion in chemical education, J. Chem. Educ., 87(5), 552–558. DOI: 10.1021/ed8001623.
  17. Taber K. S., (2011), Models, molecules and misconceptions: a commentary on “Secondary School Students’ Misconceptions of Covalent Bonding”, J. Turk. Sci. Educ., 8(1), 3–18.
  18. Taber K. S., (2013a), Conceptual frameworks, metaphysical commitments and worldviews: the challenge of reflecting the relationships between science and religion in science education, in Mansour N. and Wegerif R. (ed.), Science Education for Diversity: Theory and practice, Dordrecht: Springer, pp. 151–177.
  19. Taber K. S., (2013b), Modelling Learners and Learning in Science Education: Developing Representations of Concepts, Conceptual Structure and Conceptual Change to Inform Teaching and Research, Dordrecht: Springer.
  20. Taber K. S., (2014), Student Thinking and Learning in Science: Perspectives on the Nature and Development of Learners' Ideas, New York: Routledge.

This journal is © The Royal Society of Chemistry 2017