Keith S. Taber
Faculty of Education, University of Cambridge, UK. E-mail: kst24@cam.ac.uk
Arguably, learning can be described as providing potential for new behaviour: “learning is considered to be a process through which a change in the potential for behaviour is brought about” (Taber, 2009, p. 10). If a learner at time A has no potential for behaviour X (which might be explaining the reactivity of double bonds, for example), but by a later time B this potential has been acquired, then some learning has taken place. This does not mean, however, that failing to observe a behaviour at one time, and then observing it later, is automatically strong evidence of learning. The learner must have both the opportunity and the motivation to demonstrate the behaviour, and these are necessary rather than sufficient conditions: a behaviour’s absence may reflect a lack of opportunity or motivation rather than a lack of potential. Clearly, at any given time our behaviour reflects only a tiny fraction of what we have learnt over the years. Moreover, students often have the potential to offer multiple behaviours in a particular context, so that our observations sample only a limited number of the potential behaviours. A question such as ‘what is this?’ might legitimately invite different responses – say, ethene, an alkene, an unsaturated compound, a fuel, a hydrocarbon, a double bond, sp² hybridisation, a molecule, a symmetric arrangement, a planar structure, a formula, a symbolic representation, a diagram, a focus for a research interview…
To give an example from my own research, one student I worked with (‘Tajinder’) offered three quite distinct ways of understanding the nature of chemical bonding. This behaviour was interpreted as representing manifold conceptions of bonding (Taber, 2000). That is, what was observed (speech acts classified as evidence of the use of alternative forms of explanation) was used to draw inferences about mental states (the holding of different ideas, drawing upon different elements of a repertoire of conceptions assumed to be represented somehow in the brain). If Tajinder had simply been asked about bonding in a particular molecule near the start of his course, and again near the end, it is possible that he would have offered quite different forms of explanation (perhaps leading to an inference that learning had taken place as he abandoned one idea and adopted another). However, the data collected suggest it is equally possible that he would have offered similar forms of explanation on both occasions, based on the same underlying explanatory principle (potentially leading to an inference that learning had not taken place and that a tenacious alternative conception had not been challenged). In-depth data collection over an extended period showed that neither inference would be adequate, as the learning process was more complex. There was learning, as shown by the changing profile of responses given at different points in his studies – but it could not be summarised as a simple matter of conceptual substitution (Taber, 2001).
If the aim is to develop a descriptive account, one might imagine that a purely observational study exploring the ‘natural’ progression of student thinking about chemical ideas would be indicated. Indeed, one of the key distinctions made in describing educational research is between naturalistic studies (that seek to explore how things are, without influencing what is to be studied) and interventionist studies (that deliberately seek to change the state of affairs, and evaluate the intervention). Naturalism seeks to observe ‘given’ situations (Kemmis, 1980), but this can be a frustrating restriction. The developmental psychologist (or genetic epistemologist, as he framed his work) Jean Piaget is well known for introducing the clinical interview, which has indirectly served as a model for many research interviews in areas such as exploring learner thinking in science topics. Yet Piaget (1959/2002), who had published as a biologist while still a student, originally set out to do a naturalistic observation study. Several frustrating weeks of following a child around school, waiting for the target student to do or say something revealing, led Piaget to develop a more direct method: sitting the child down and asking some well-sequenced questions about the topic of interest.
However, as soon as the researcher probes the learner's thinking, she inevitably intervenes in the natural course of that thinking, and the thinking that is ‘revealed’ – which is actually inferred from the learner's behaviour in offering observable representations (Taber, 2013) – is cued by the specific probes and questions the researcher presents. Piaget (1929/1973) himself recognised that some answers children gave to his questions were romanced: that is, the child made up a feasible response to a novel question. Romanced responses may clearly be of interest, as they draw upon the cognitive resources the interviewee has available (Hammer, 2004), but they do not reveal existing stable patterns of thought. The difficulty of knowing whether what is cued in a research interview is, or is not, a core and stable feature of a participant's thinking about a topic is probably responsible for some of the debates about the status and significance of what was actually being reported in accounts of students’ misconceptions or conceptual frameworks (Taber, 2009). Of course, stable patterns of thought have their origin in something more tentative and provisional: a romanced response, put together in situ within the interview context to answer a previously unconsidered question, may subsequently be adopted as a more regular part of the learner's thinking. Research activities, such as answering diagnostic tests or being interviewed, are learning opportunities – and so students may learn from them.
Such learning is not necessarily a bad thing in itself: I recall an interview where a student in effect ‘invented’ the idea of van der Waals forces. The interview questions probing what the student did know led to her proposing an interaction she had not yet been formally taught about. Interview questions, like teachers' questions, can lead to a kind of Socratic dialogue that allows the participant to construct new knowledge structures by reorganising and juxtaposing existing knowledge. This reflects a deliberate teaching technique used to scaffold learning (Wood, 1988), where the teacher provides an activity to highlight and reorientate aspects of existing knowledge that provide the foundations for new learning, so acting as a scaffolding ‘plank’ (a platform for new knowledge) (Taber, 2002). The student clearly did not suggest that this novel (for her) idea would be called van der Waals forces, but it seems likely that when she was later taught the concept, her earlier insight would have provided an anchoring point for, and been reinforced by, the new learning.
In research contexts we are attempting to explore rather than teach, and it is often methodologically unsound to give feedback (the researcher will often say something like ‘I want to know what you are thinking, so there are no right or wrong answers’), especially when the research is longitudinal and we are interested in seeing how the learner's ideas develop over time. However, when students concoct original, non-canonical ideas in research interviews that then go unchallenged, the interview may act as a learning intervention, supporting the learning of alternative conceptions. This raises the question of when researchers have an ethical responsibility to intervene and feed back to participants that ideas offered in research interviews should be revisited and perhaps reconsidered. This issue certainly arose when I was interviewing one of my own students (‘Annie’) and found she had developed an alternative framework of ideas deriving from misconstruing what charge symbols denote in chemistry (Taber, 1995).
In cross-sectional research comparing samples of students at different ages, a national sample of sufficient size and representativeness may allow local variations to average out – as long as there are no systemic changes that undermine the study. It would be difficult to undertake such work in England, for example, where the curriculum, school organisation, examination specifications, assessment modes, and the like, tend to be in such flux that any kind of comparison over time is unlikely ever to simply reflect learning between two age levels.
This is not a problem in longitudinal studies, as the same individuals are investigated at different points in time. Whether learners who are co-operative and generous enough to regularly offer the gift of data are representative of wider cohorts remains a question to bear in mind. Moreover, as suggested above, whether their thinking can be considered typical of students under normal teaching conditions, once they have been through regular episodes of having their ideas probed and questioned, is even more dubious. I was blessed in my doctoral research to have one participant (Tajinder, see above) who was prepared to be interviewed at length numerous times over an extended period. This was in part because he himself recognised that the research sessions were learning opportunities that could help him in his studies (Taber and Student, 2003). The detailed data allowed me to identify subtle shifts over time – not a switch from one idea to another, but a slow evolution in the profile of explanations offered for aspects of chemical bonding and related concepts. Even if Tajinder's generosity and commitment to the research did not make him an outlier, the many hours of one-to-one conversation about his understanding of chemistry inevitably undermine any claim that his progression in the subject was typical.
This issue becomes especially pertinent in research designed to be microgenetic. Microgenetic studies (Brock and Taber, 2017) deliberately implement a high frequency of observations (often, in practice, observations of activity that must themselves be considered interventions) at a point where a learner is suspected of being ready to show development. This can help show whether learning occurs smoothly, through discontinuity, or in a somewhat jittery way with plenty of backsliding. It is difficult to explore such issues without this high intensity of observation – but that comes at the cost of investigating a somewhat artificial situation. A microgenetic study of a toddler learning to walk could be naturalistic; a microgenetic study of a student's progress in balancing chemical equations is probably going to rely upon setting up a series of opportunities for frequent observation.
Individual differences are known to be very significant in science learning, so we should not assume that any learner can be taken as typical. It is possible to study learning pathways by observing whole classes over time, but even here resource limitations are unlikely to allow both the depth needed to explore individual thinking and breadth across a range of students. Careful selection of cases can seek to avoid obvious atypicality, even if this has to be moderated by pragmatic considerations – such as working with students who are willing to be interviewed, or who contribute enough in class to be regularly observed using their ideas (Petri and Niedderer, 1998).
Teaching is a complex undertaking, and if individual students can be idiosyncratic, so can lesson sequences taught by particular teachers with specific classes. Some methodologies acknowledge this. Design research assumes an iterative process, where what is learnt from one iteration informs the next version – undertaken with a different class (Ruthven et al., 2009). Lesson study often involves teachers from different institutions observing and iteratively developing (and taking turns in teaching) versions of the same lesson in different classrooms (Allen et al., 2004), although the focus is usually a single lesson. Evaluation of teaching effectiveness meets some of the problems alluded to above: given the contextual, complex, and ongoing nature of learning, it may be difficult to produce evidence that will seem convincing to outsiders. Every student, class, and school is different, so simply testing quantitative learning gains offers only a simplistic basis for comparison across learning contexts. Studies that can test teaching sequences at scale are seldom viable, and in any case may disguise the contextual factors that lead different approaches to work to different degrees in different classes (Guthrie, 1977). In-depth case studies presented with ‘thick’ description (Geertz, 1973) may offer richer narratives supporting reader generalisation, but lack statistical generalisability.