Learning progressions and teaching sequences – old wine in new skins?

Sascha Bernholt *a and Hannah Sevian *b
aDepartment of Chemistry Education, Leibniz-Institute for Science and Mathematics Education (IPN), Kiel, Germany. E-mail: bernholt@ipn.uni-kiel.de
bDepartment of Chemistry, University of Massachusetts Boston, Boston, Massachusetts, USA. E-mail: hannah.sevian@umb.edu

Received 6th September 2018, Accepted 6th September 2018

Introduction

“Evidence […] shows that the social stock of knowledge is more like a free-choice cafeteria than a ladder to a hierarchy of stages. Our children can and do help themselves to new meanings and new knowledge according to social opportunity, preferences and intellectual tastes. All of these probably change with age, so knowledge both alters and develops: but learning is a much more intentional process.” (Solomon, 1992, p. 75)

Given the current foci in science education on building disciplinary ideas, scientific practices, and crosscutting concepts across educational timespans (Waddington et al., 2007; National Research Council [NRC], 2012), which are motivated by widespread stagnation of educational progress in science learning (Organisation for Economic Co-operation and Development [OECD], 2016), there is an amplified need for research that examines how understanding develops over periods longer than a lesson or unit, and for research that illuminates the ways in which particular instructional sequences result in the intended development of understanding.

Research on learning progressions (LPs) and teaching sequences has been highlighted by many as a means to advance the field (Méheut and Psillos, 2004; National Research Council, 2007; Duschl et al., 2011). “Learning progressions in science are empirically-grounded and testable hypotheses about how students’ understanding of, and ability to use, core scientific concepts and explanations and related scientific practices grow and become more sophisticated over time, with appropriate instruction” (Corcoran et al., 2009, p. 15; cf. National Research Council, 2007). This line of research holds the promise to “help us make informed conjectures regarding the most productive directions for science standards, curriculum, and assessment” (Duncan and Rivet, 2013, p. 397).

In chemistry, not including the contributions in this themed issue, work on LPs has been presented in diverse areas and with different foci. LPs on atomic-molecular structure (Smith et al., 2006; Stevens et al., 2010), matter conservation (Mohan et al., 2009), and energy in carbon-transforming processes (Jin and Anderson, 2012; Jin et al., 2013) have focused on stages of progression. Other work has focused on assessing whether students learn specific ideas in a particular order (e.g., Cooper et al., 2012). Still others have theorized overarching structures and theoretical commitments in a learning progression of chemistry that is intended to guide curricular, instructional, and assessment decisions as well as to structure measurement and refinement of the LP (Sevian and Talanquer, 2014).

Research on teaching–learning sequences (TLSs) has a longer tradition than LP research and pursues similar goals. The term TLS is “widely used to denote the close linkage between proposed teaching and expected student learning as a distinguishing feature of a research-inspired topic-oriented sequence […]. A TLS is both an interventional research activity and a product, like a traditional curriculum unit package, which includes well-researched teaching–learning activities empirically adapted to student reasoning.” (Méheut and Psillos, 2004, p. 516; cf. Leach and Scott, 2002). Research on TLSs is typically finer-grained than LP research, as it studies specific instructional interventions over shorter timespans. TLS research covering different chemical concepts, such as solubility (Kabapinar et al., 2004), conductivity (Buty et al., 2004), modelling states of matter (Saari and Viiri, 2003), and particle model structure and dynamics (Méheut, 2004), has been reported previously in the literature.

Both LP and TLS research are often defined by their methodologies, and they are sometimes defined by the types of questions they address. Both ways of defining them have drawbacks. Critiques of methodology-based definitions have been offered by others (Duschl et al., 2011; Duncan and Rivet, 2013; Hammer and Sikorski, 2015). These critiques have raised several main points: the overemphasis on linear, step-like developmental sequences (Gotwals and Songer, 2009); the one-sided focus on either developing test instruments or developing instructional materials (Duncan and Gotwals, 2015; Lehrer and Schauble, 2015); the use of cross-sectional data to validate an assumed developmental sequence (Duncan and Gotwals, 2015; Taber, 2017); and the preference for canonical coherence over idiosyncrasies in students’ learning (Hammer and Sikorski, 2015).

In this commentary, we treat the field of chemistry education, as represented by the papers in this themed issue, as a sort of “case study” of current LP and TLS research. Following on from the editorial written last year by the Editor of CERP in anticipation of this themed issue (Taber, 2017), we offer questions and tentative critiques of defining LP and TLS research by the types of questions they address. We wonder: to what extent is LP and TLS research “old wine in new skins”?

Contributions to this themed issue

Both the idea of LPs and the idea of TLSs have attracted wide attention in the last decade and longer, resulting not only in a large body of research and practice literature, but also in multifarious perspectives on how these approaches are interpreted and applied. The papers in this themed issue not only focus on the teaching and learning of different chemical concepts, but also cover a wide range of research approaches to investigating the teaching and learning of chemistry.

Talanquer (2018) analyzes students’ reasoning schemes when explaining structure–property relationships. He focuses on the development of cognitive resources, a large proportion of which are implicit. Based on both existing research and theoretical assumptions, he aims to consolidate previous findings about students’ implicit knowledge and heuristics in order to identify and characterize dominant stances in the progression of students’ thinking about the intrinsic properties and the explicit behaviors of materials. Based on theoretical perspectives outlined in the paper, Talanquer argues that confronting a task triggers the deployment of a variety of cognitive resources. Which resources are deployed depends on the actual resources available to an individual, the fluency with which these resources can be triggered based on prior experiences, the nature of the task at hand, the context in which the task is carried out, and the personal goals and levels of attention and motivation of the individual at that moment. Despite these differences, which are difficult to control, a major conclusion Talanquer draws is that “the types of components that students invoke to make sense of properties and phenomena may change with schooling, but the underlying reasoning persists” (Talanquer, 2018, p. 9).

Across broad populations, Talanquer concludes that the greatest difficulties in learning structure–property reasoning are the prioritization of chemical composition over molecular structure, and of centralized causality over emergence, in explaining and predicting substance properties. Generally, learners assume that observed properties and behaviors are directly related (associative relationships) to the types of atoms present in a system and that they are determined by these atoms’ inherent dispositions and tendencies. In addition, because of the complexities surrounding resource deployment, Talanquer emphasizes that there are no well-defined stages in the explanatory power of students’ explanations; rather, students switch between more or less sophisticated reasoning schemes, depending, for example, on context and familiarity with the problem. Talanquer envisions his contribution as serving to aid instructors in understanding specific challenges that students face in mastering structure–property reasoning, “to make explicit the cognitive difficulties students are likely to encounter in their learning”. In addition, his hypothesized LP can also be useful in guiding choices of curriculum. For example, he points out that the synthesis of research undergirding the LP would predict that atoms-first approaches are likely to reinforce students’ biases toward atom-centered explanations because these approaches have no anchor in real systems or phenomena.

Using conceptual profile theory (Mortimer and El-Hani, 2014), Amaral et al. (2018) investigated how secondary school students experience the process of conceptualization with regard to the concept ‘substance’ by means of a TLS. The theoretical grounding assumes that learning is a process of conceptualization, in which meanings of words (major ideas) become stabilized, driven by the collective and individual search for meaning. Individuals bring various senses of words, which are personal and dynamic, and depend on context. In this way, there is alignment with the assumptions taken by Talanquer (2018). Amaral et al. note that “collective discussion can lead to the construction and/or sharing of socially accepted and stabilized meanings”. When stabilized, these are zones in a conceptual profile of a concept or word, where different zones “represent the relevant aspects of our experience”. Learning, therefore, involves becoming aware of the diversity of modes of thinking, and seeking to understand the pragmatic value of these in different contexts, including situating the scientific view among them. Based on preceding work, the authors make use of six zones of the conceptual profile of ‘substance’ – generalist, pragmatic/utilitarian, substantialist, empirical, rationalist, and relational – to design specific activities to promote discussions among students and teachers as well as active participation of students. The authors report findings from questionnaires, classroom video analysis, and interviews of students to show that different students activate different zones of the profile across the TLS. They emphasize that learning does not follow a linear pattern, but can be better characterized as a complex and dynamic process that requires continuous and repeated opportunities for sharing and discussing ideas in the classroom (cf. Talanquer, 2018). The findings are relevant for teachers in recognizing when students intuitively choose informal (daily-life, non-scientific) zones. Teachers can strategically lead students to gain access to the more scientific zones of the conceptual profile and to choose when particular zones (informal or scientific) are appropriate in particular contexts.

Sevian and Couture (2018) investigated problem-solving approaches of students ranging from grade 8 to upper-level university in the area of substance characterization. Similar to Talanquer (2018), these authors argue that how students perceive situations or specific problems influences how they approach the problems, and likely influences how they learn as well. Students’ problem solving is analyzed with regard to the underlying epistemological assumptions that constrain the approaches they take to solve the problem, i.e., the students’ “epistemic games”. The epistemic games are characterized in terms of the structure of moves and the knowledge resources deployed in a given problem situation. The authors identify and characterize five distinct epistemic games that are present across all educational levels, but with differences in their prominence across these levels, and variations in their deployment depending on an overall epistemological frame related to sense-making (‘doing the lesson’ or ‘doing science’) that appears to be taken by the student. While the set of epistemic games does not represent an LP in itself, the authors derive, from an examination of how the games are instantiated across educational levels and problem contexts, the potential of specific teaching acts to foster students’ sense-making in the characterization of chemical substances. Thus, the results can help teachers recognize the major approaches students take when solving problems involving substance characterization, and may help to address specific challenges, with the goal of helping students to “gain access to and greater proficiency in” more scientific ways of thinking.

Chi et al. (2018) address students’ abilities to understand and use chemical symbol representations. The authors propose a four-level framework, based on Bloom's taxonomy (Bloom, 1956; Anderson and Krathwohl, 2001) and the SOLO taxonomy (Biggs and Collis, 1982), to reflect differences in the quality of students’ abilities, ranging from the ability to connect chemical symbols with macroscopic phenomena to the ability to use chemical symbols for reasoning in chemistry problems. The authors’ underlying assumption is that the expected knowledge must be organized systematically and coherently in order for students to be able to reach higher levels of the framework. Based on this framework and building upon a review of previous findings, the authors develop a test instrument to measure students’ progress in understanding of this topic across grades 10 to 12. The students’ answers to the multiple-choice items of the developed test instrument are then analyzed using the Rasch model with the aim of establishing a unidimensional scale of students’ ability to understand and use chemical symbol representations. With regard to the results, students’ scores on this ability scale increase across the three grades; however, specific difficulties seem to be present in all three cohorts of the cross-sectional sample. The authors compare the distribution of students’ answers across the four levels of the framework and across grades, but also discuss the pattern of students’ preferences for specific answer options in exemplary multiple-choice items within each level of the framework. In addition, the findings indicate different patterns in the development of understanding for girls and boys. The authors conclude that the factors that were identified to impact students’ test outcomes should be taken into account when planning instruction or when designing curricula.
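
For readers less familiar with item response theory, the core of such a Rasch analysis is the assumption that the probability of a correct answer depends only on the difference between a student's ability and an item's difficulty, P(correct) = 1/(1 + exp(−(θ − b))), which is what places all students and all items on one common scale. The following minimal Python sketch is our illustration only, not code or data from Chi et al.; it fits the model to an invented binary response matrix by joint maximum likelihood, whereas operational analyses use dedicated software with bias corrections and fit statistics.

```python
import numpy as np

def rasch_prob(theta, b):
    # Rasch (1PL) model: probability that a student of ability theta
    # answers an item of difficulty b correctly (both in logits).
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def fit_rasch_jml(responses, n_iter=500, lr=0.01):
    # Joint maximum-likelihood estimation of abilities and difficulties
    # from a (students x items) 0/1 matrix, via gradient ascent on the
    # Bernoulli log-likelihood. Note: all-correct or all-wrong response
    # patterns have no finite ML estimate and would slowly drift.
    n_students, n_items = responses.shape
    theta = np.zeros(n_students)            # student abilities
    b = np.zeros(n_items)                   # item difficulties
    for _ in range(n_iter):
        p = rasch_prob(theta[:, None], b[None, :])
        resid = responses - p               # d(logL)/d(theta) per cell
        theta += lr * resid.sum(axis=1)
        b -= lr * resid.sum(axis=0)         # d(logL)/d(b) = -sum(resid)
        b -= b.mean()                       # anchor scale: mean difficulty 0
    return theta, b

# Invented example: 6 students, 4 items of increasing difficulty.
rng = np.random.default_rng(0)
true_theta = np.array([-1.5, -0.5, 0.0, 0.5, 1.0, 2.0])
true_b = np.array([-1.0, 0.0, 0.5, 1.5])
data = (rng.random((6, 4)) < rasch_prob(true_theta[:, None], true_b[None, :])).astype(float)
theta_hat, b_hat = fit_rasch_jml(data)
print(np.round(theta_hat, 2), np.round(b_hat, 2))
```

Once abilities and difficulties are expressed on the same logit scale, statements such as “scores increase across the three grades” or “this item lies above most grade-10 students” can be read directly off the estimates.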

In a methodologically comparable approach, Emden et al. (2018) focus on students’ understanding of transformations of matter across grades 7 to 9. Embedded in the context of the German educational system, the authors focus on generalized performance expectations, termed “Kompetenzen”. With regard to the topic of transformations of matter, these performance expectations are selected and structured based on the logic of the subject's body of knowledge, as it is represented in chemistry textbooks or syllabi. Consequently, the assumed trajectory of students’ thinking when learning about transformations of matter is characterized by increasingly differentiated models of the particulate nature of matter, changes in substance properties, and theories of chemical bonding. Overall, students’ understanding of this topic is expected to start with interpreting observable phenomena and recognizing the formation of new substances, to then cover Dalton's atomic model and its application to describe chemical reactions as rearrangements of atoms, and finally to comprise Bohr's atomic model and its application to describe chemical reactions in terms of changes in the electron shell. Based on this layered framework as well as on previous findings regarding this topic, the authors assume that students either develop an understanding of a unified concept (i.e., transformations of matter) or that this understanding is derived from two fundamental concepts, i.e., composition and structure on the one hand, and chemical properties and change on the other.

The postulated framework is then operationalized by making use of strand maps (cf. American Association for the Advancement of Science and National Science Teachers Association, 2007). Here, the authors identify specific affordances and intermediate understandings (formulated as performance expectations, i.e., “students can…”) and arrange these “nodes” in the form of a hierarchical network. After operationalizing the network's nodes via multiple-choice items, the network's inherent structure is then tested by means of item response theory and Bayesian network analysis. Here, the pattern of students either reaching or not accomplishing specific performance expectations is used to analyze the assumed dependencies between specific nodes of the network by maximizing the conditional probability on the paths between nodes (which represent the hypothesized hierarchy between content aspects, tested for precursor–successor relationships). Thus, the analysis ultimately aims at identifying an idealized sequence of content aspects (or conglomerates of content aspects) and at providing evidence as to whether some Kompetenzen are more relevant to learning than others.
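
The logic of such a precursor–successor test can be illustrated simply: if mastery of node A is genuinely prerequisite to node B, then students who master B without A should be rare, i.e., P(B mastered | A not mastered) should be small relative to P(B mastered | A mastered). The following Python sketch is our illustration with invented mastery data and hypothetical node labels, not the authors' analysis, which estimates such dependencies jointly across the whole network rather than pair by pair.

```python
import numpy as np

def precursor_evidence(mastery_a, mastery_b):
    # Compare P(B mastered | A mastered) with P(B mastered | A not mastered)
    # for a hypothesized precursor A -> successor B in a strand map.
    a = np.asarray(mastery_a, dtype=bool)
    b = np.asarray(mastery_b, dtype=bool)
    p_b_given_a = b[a].mean() if a.any() else float("nan")
    p_b_given_not_a = b[~a].mean() if (~a).any() else float("nan")
    return p_b_given_a, p_b_given_not_a

# Invented mastery patterns (1 = performance expectation met) for ten
# students on two hypothetical nodes of a strand map.
a = [1, 1, 1, 0, 0, 1, 0, 1, 1, 0]
b = [1, 0, 1, 0, 0, 1, 0, 1, 0, 0]
print(precursor_evidence(a, b))  # (0.67, 0.0): B without A is rare,
                                 # consistent with the link A -> B
```

A Bayesian network analysis generalizes this pairwise check by estimating the conditional probabilities of the whole network jointly and comparing how well competing network structures reproduce the observed response patterns.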

Regarding the results, the initial assumption of a gap between phenomenological and discontinuous approaches to understanding transformations of matter was corroborated, whereas a differentiation (or a hierarchical sequence) between the different discontinuous approaches (i.e., those based on the Daltonian and Bohrian atomic models) was not confirmed. Some more detailed findings, however, are rather unexpected. Sevian and Talanquer (2014) and Talanquer (2018) stress that identifying substances with reference to their properties is critical to being able to understand transformations of matter. However, Emden et al. (2018) found, with the items they used in their instrument, that the former is of little importance for understanding transformations of matter in the long run. From the authors’ perspective, these findings, and more generally the approach used in this study, can provide evidence for informing instructional decisions, e.g., the allocation of time for teaching specific content aspects, in order to maximize the number of students who succeed.

Also focusing on Bohr's atomic model, van Vorst (2018) discusses the use of “Ladders of Learning” (LL) as a tool for structuring the learning content in a transparent way for students in grade 8. Here, the graphical representation of a ladder is used to provide a chronological and hierarchical structure of content aspects and learning activities as a kind of advance organizer for students’ learning. As students are assumed to have different prior knowledge and capacities with regard to doing chemistry, and different needs with regard to learning chemistry, they are expected to benefit from different scaffolding. In the present study, scaffolding is based on self-evaluation questionnaires and differentiated learning exercises. These elements are used to provide students with a plausible content structure (by sequencing the learning content), a coherent learning process (by a five-step structure of different learning activities that is repeated for each content unit, the completion of each unit being considered a “milestone”), goal clarity (by visualizing the sequence of both the content and the learning activities in the form of a LL on a poster in class), and individualized demands (by taking students’ initial knowledge into account and by providing exercises of different difficulty).

Using a mixed-methods approach, van Vorst provides evidence for the positive effects of these structuring tools on both students’ achievement and interest, as compared to a control group. When asked to evaluate the differentiated instruction they experienced during their previous chemistry lessons, students who were low and medium performers rated the structuring positively and criticized the lack of such structure in their regular chemistry courses. This contribution has some relationship to Sevian and Couture (2018), as it addresses approaches students take. Whereas Sevian and Couture characterized the approaches to solving problems that students naturally take, van Vorst examined the effects of LLs as an overt structure for students’ approaches to learning in a TLS.

Stavrou et al. (2018) describe the development and dissemination of a TLS on nanoscience and nanotechnology (NST). Building upon the model of educational reconstruction (Duit et al., 2012), the authors illustrate the design and evaluation of a learning environment that intends to foster students’ understanding of NST, but also includes aspects of responsible research and innovation (RRI). Both the development and the dissemination of the TLS are based on communities of learners, consisting of teachers, science researchers, science education researchers, and science museum experts. Making use of an extended version of the 5E model (Bybee et al., 2006), the teaching materials they developed aim to familiarize students with different aspects of scientific inquiry, e.g., making observations, posing questions, generating hypotheses, and analyzing and interpreting empirical data. By taking the teachers’ initial perspectives on TLS and NST into account, and by cooperating with teachers throughout the process from developing to enacting the teaching materials in class, the authors aim to gain insight into the implementation process, into the role of the teachers in this process, and into how pre-developed teaching materials are reconstituted and restructured when being transferred from research to practice. Here, the dissemination of the TLS led to modifications by the teachers, mainly pertaining to changing specific learning activities or to adopting or changing specific representations, with the goal of better mobilizing the knowledge and interests of their students, thus providing them with meaningful learning experiences. Only a few changes were made to the content of the TLS. The authors emphasize that the cooperative development of the TLS among teachers and researchers was highly consequential for the fidelity of implementation, as this likely created buy-in for the teachers as stakeholders in the work. Developing and repeatedly piloting the teaching materials in different grades and different schools rendered the TLS more flexible for local adaptations and implementations in different contexts (as opposed to a fixed set of teaching materials).

Contributions of and questions raised by this themed issue for the field

In his pre-editorial on the topics of LPs and TLSs, Taber (2017) anticipated four challenges in this area:

• Learning is not directly observable, so how can we tell when it has occurred?

• Research can be an intervention in learning.

• Longitudinal and microgenetic research may reveal different things.

• Teaching can be an intervention in learning that is studied.

In our commentary, we offer an analysis of the papers in the themed issue that addresses two of these challenges, we raise a challenge that is not on Taber's list, and we propose an overall challenge to the field.

The collected papers in this themed issue raise a pair of linked questions: (1) How stable is learning? (2) Is student performance based on what is learned, or is it an artifact of the measures of learning? The second of these questions relates to Taber's first and second points, that what we observe depends on how we measure it, and that research itself can be an intervention in learning.

On the one hand, van Vorst (2018) shows that when a teaching sequence is tightly controlled, students are put into a position of learning in that particular sequence. Notably, when there are deliberate measures to cause students to reflect on their learning at specific points, such as the self-evaluation tasks studied by van Vorst (2018), then learning is more predictable and reliable. Chi et al. (2018) and Emden et al. (2018) focus on analyzing, rating, and measuring students’ performance on test items in order to sequence the items and arrive at an idealized sequence of content. For example, Emden et al. (2018) found that mastery of performance competency SMP.3 (students can identify substances based on their substance properties) is not very necessary for mastery of SMP.6 (students can describe chemical elements and compounds as pure substances and can differentiate them from each other). However, might this depend upon what questions were asked in the items used to measure the students’ performance in these competencies? When considering performance as a mirror image of item demands, i.e., characterizing students’ performance by features of the test items that triggered this performance, students’ performance is generally, and often implicitly, assumed to be fixed: a student is expected to perform comparably across contexts, but also across repeated attempts at an item (cf. Steedle and Shavelson, 2009). In addition, the obtained interpretation also depends on which item or test characteristics are taken into account to illustrate students’ performance (Dawson et al., 2006; Bernholt and Parchmann, 2011).

Interpretations in large-scale studies also tend to treat individual differences as “noise” in the data that can be accounted for by sound measurement or statistical procedures. Even when such differences are taken into account, most approaches either focus on quantifying the extent of this “construct-irrelevant variance” to gauge whether it imperils the intended interpretation of the obtained test scores, i.e., their validity (Haladyna and Downing, 2004), or they search for systematic biases in item or test scores for pre-defined subgroups (e.g., by gender), for instance in the form of differential item functioning (Holland and Wainer, 1993). However, such analyses are carried out to ensure that there is no threat to the validity of the findings; they are not usually done to examine why the variation occurs. That is, it is often assumed that individual or contextual effects (in larger subsets of students or items) will level out in the form of random variation around the calculated mean, i.e., random measurement error. From this perspective, some researchers might indeed ask whether it is important to understand why such variations occur.
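
To make the subgroup screen just mentioned concrete: a standard check for differential item functioning is the Mantel–Haenszel procedure, which stratifies students by a matching score (typically the total test score) and asks whether, within strata, one subgroup still has systematically higher odds of answering an item correctly. The following minimal Python sketch is our illustration with invented data; operational analyses add continuity corrections and significance tests.

```python
import numpy as np

def mantel_haenszel_or(item, group, match):
    # Mantel-Haenszel common odds ratio for one test item.
    #   item:  1/0 correct on the studied item
    #   group: 0 = reference subgroup, 1 = focal subgroup (e.g., by gender)
    #   match: matching variable, typically the total test score
    # A pooled odds ratio far from 1 flags potential DIF.
    item = np.asarray(item, dtype=bool)
    group = np.asarray(group)
    match = np.asarray(match)
    num = den = 0.0
    for s in np.unique(match):                   # stratify by matching score
        k = match == s
        n = k.sum()
        A = np.sum(item[k] & (group[k] == 0))    # reference, correct
        B = np.sum(~item[k] & (group[k] == 0))   # reference, incorrect
        C = np.sum(item[k] & (group[k] == 1))    # focal, correct
        D = np.sum(~item[k] & (group[k] == 1))   # focal, incorrect
        num += A * D / n
        den += B * C / n
    return num / den if den > 0 else float("nan")

# Invented data: 8 students matched on total score.
item  = [1, 0, 1, 1, 0, 1, 0, 0]
group = [0, 0, 1, 1, 0, 0, 1, 1]
match = [3, 2, 3, 2, 3, 2, 3, 2]
print(mantel_haenszel_or(item, group, match))  # -> 1.0, no flagged DIF
```

Note that such a screen is designed to protect validity by flagging suspect items, not to explain why a subgroup responds differently, which is precisely the limitation discussed above.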

In contrast, smaller qualitative studies point out that variations are interesting because they provide insight into educating individuals. Findings from Amaral et al. (2018), Sevian and Couture (2018), and Talanquer (2018) indicate that performance depends greatly on the prior knowledge and current motivations of the individual, which influence the ways in which the individual interprets the situation/context as well as what is cued upon in a question or problem. Thus, while general trends are observable across broad populations (e.g., Talanquer's conclusion that the prioritization of chemical composition over molecular structure, and of centralized causality over emergence, dominates learners’ explanations of structure–property relationships), the sensitivity of student performance on items to other factors (e.g., motivation, triggers in the items, context features that correspond to a student's learning history) necessitates a qualitative approach to analyzing student thinking. This is seen starkly in the paper by Sevian and Couture (2018), where seemingly small variations in the context of a problem make big differences in how students solve those problems. Amaral et al. (2018) also showed that two students who took part in the same TLS expressed very different conceptual profile zones of substance. While these findings are convincing, it is difficult to imagine carrying out such an extent of qualitative research with large numbers of students in order to determine the curriculum and instructional approaches that maximize learning.

This issue relates to a second challenge that occurs with LPs and TLSs, as represented by the papers collected in this themed issue, and likely beyond it. What exactly is a pathway of learning? There appears to be an inherent assumption in TLS research, as well as in much of the LP research in the field, that a stepwise, sequential approach to instruction is valuable (cf. Gotwals and Songer, 2009). The idea is that a fixed and stable order of instructional elements results in the same learning path for (at least the majority of) students. However, structuring students’ learning along a predefined sequence of content aspects and activities certainly puts high demands on the quality of this sequence (Duschl et al., 2011). In addition, when the idea of a fixed sequence is overemphasized (as was done in the approach of programmed instruction; cf. Reigeluth et al., 2016), students’ learning might be restricted to following (and keeping step with) the pre-planned order of content aspects, bit by bit. This approach would leave little room for individual meaning-making processes (Amaral et al., 2018) and might even shift the focus in class to rote-learning and memorization instead of sharing and discussing concurrent ideas (Talanquer, 2018), let alone postulating alternative hypotheses, discussing conflicting or contradictory findings, or processing diffuse data (cf. White and Frederiksen, 1998; Vellom and Anderson, 1999; Schwarz and White, 2005).

However, it is already widely recognized in the field of science education that “if the aim is to enrich the pupils’ conceptions, and not merely replace them by some alleged ‘true knowledge’, then school knowledge cannot have a single formulation, but must have various formulations of increasing degrees of complexity” (Martín del Pozo et al., 2017, p. 292). Moreover, some of the papers in this themed issue raise the concern that even tiny variations in the contexts used in a TLS (even in a linear instructional sequence) influence the performances of students, and likely change their learning paths. Sevian and Talanquer (2014) proposed an alternative to such linearity, represented in their view of a theoretical potential energy surface as an analogy for a complex learning landscape, with local cognitive attractors as regions of relative stability, and many possible paths along this landscape (Fig. 1). Given that these two authors also contributed to this themed issue, it is not surprising that the results of their contributions take the form of such a landscape, i.e., characterizations of ways of thinking or of approaching problem solving that describe regions of relative stability, but do not recommend learning paths in any set sequence. Nevertheless, learning paths do occur, and they are substantially influenced by the sequences and logic of concepts in instruction; thus, it is important to investigate the outcomes of learning under the conditions of hypothesized “good” sequences and logic, as several of the papers in this themed issue have done.


Fig. 1 Potential energy surface as an analogy of a learning landscape. (Reprinted from Sevian and Talanquer (2014) with permission from the Royal Society of Chemistry.)

But what are the variables that allow for characterizing such a landscape? Sevian and Talanquer (2014) proposed two main types of variables: conceptual sophistication and modes of reasoning. Other researchers hold that non-cognitive dimensions are also relevant. For example, Bulte et al. (2006) have long argued that an important “way of describing conceptual development stresses the idea that it is the motives, the affective components, the purposefulness, and the usefulness of an activity (as behavioral environment) that drive the progression of learning” (Sevian et al., 2014, p. 301). We have already discussed that individual variables matter, whether they are features embedded in the questions or problems that students confront, or variables related to affect and behavior as students learn and as their performance is measured. Tytler (2018) proposes three dimensions in which learning progress can be measured: increased repertoire, conceptual progression, and epistemic practices. The papers in this themed issue each directly address some, but not all, of these dimensions. Some of the papers focus on conceptual progression (Chi et al., 2018; Emden et al., 2018), others foreground conceptual progression while accounting for epistemic practices indirectly (Stavrou et al., 2018; van Vorst, 2018), and another is the reverse, foregrounding epistemic practices while accounting for conceptual progression indirectly (Sevian and Couture, 2018). Another prioritizes increased repertoire while accounting for conceptual progression indirectly (Amaral et al., 2018). Finally, one paper addresses conceptual progression, while acknowledging the roles of increased repertoire and epistemic practices (Talanquer, 2018). Perhaps it is necessary to foreground all three of these dimensions in order to characterize learning and to design learning conditions that foster mastery of competencies in all three dimensions.

Chemistry is unique among the sciences in many respects, and as with other pursuits within the broader area of science education, the unique features of chemistry may have important implications for studying LPs and TLSs that relate to chemistry. First, chemistry merges both science practices and technology/engineering practices, as students and chemists not only investigate phenomena using models that can predict and explain them, but they also synthesize new materials and design processes for creating these, and they must also confront the consequences of chemically transforming matter (Chamizo, 2013; Sevian and Talanquer, 2014). Second, chemistry largely cannot be modeled precisely. Trends drawn from the analysis of chemical data are loose and have many exceptions that fracture into perturbations of the trends; e.g., ‘like dissolves like’ is later refined into distinctions among nonpolar, polar aprotic, and polar protic solvents. Models in chemistry are only partially applicable, and often more than one model is applied in order to synthesize a more complete understanding; e.g., there are many overlapping models of acids and bases (de Vos and Pilot, 2001). Thus, perhaps more so than in other sciences, students must learn many partially applicable scientific views, and then make judgments based on cues in problems as to which views are most relevant to be applied, sometimes simultaneously, for particular problems and with specific purposes. This requires operating with a wide repertoire, employing epistemic practices in judging problems to determine which cues are important, and deciding on levels of conceptual sophistication that are appropriate in the situation.

We propose that part of what constrains research on LPs and TLSs, particularly in the field of chemistry education, is a phenomenon akin to “old wine in new skins”. Consider, for example, the “old wine” of misconceptions research. As the field of chemistry education has advanced, studies have progressed from:

(1) Identifying misconceptions (e.g., Nakhleh, 1992), diagnosing them (e.g., Treagust, 1988), and studying how to overcome them (e.g., Taber, 2002; Barke et al., 2009), to

(2) Measuring the prevalence of misconceptions using, for example, concept inventories (e.g., Mulford and Robinson, 2002) and exams such as the American Chemical Society's conceptual exams, to

(3) Examining the cognitive underpinnings of misconceptions (e.g., Talanquer, 2006) and studying mental models and other explanations of these (e.g., Adbo and Taber, 2009).

While the field has advanced, research continues in all these arenas. New studies continue to identify misconceptions (e.g., Yan and Subramaniam, 2016; Lamichhane et al., 2018); new concept inventories are being developed (e.g., Luxford and Bretz, 2014); and syntheses across literature are presented which offer insights into the cognitive underpinnings of students’ misconceptions (e.g., Bain and Towns, 2016) and mental models that can explain them (e.g., Körhasan and Wang, 2016). As the field has progressed, shifts have occurred in the sets of theoretical assumptions behind various studies (cf.Cooper and Stowe, 2018). Early misconceptions research was pragmatically driven and guided by an assumption that incorrect ideas could be replaced by correct ones with proper identification of the misconceptions and appropriate teaching to eradicate them. Later, conceptual change and other theories evolved to consider naïve frameworks and conceptual ecologies that could explain stable and coherent models and reasoning relied upon by learners.

With regard to advancing the field of LP and TLS research toward new frontiers, proposing and researching new LPs and TLSs for each and every chemical concept would be an insurmountable challenge, not only because of the number of concepts, but also because of the difficulty of tying together so many LPs and TLSs in a way that is coherent for teaching. We suggest an alternative: perhaps as chemists we might take advantage of our unique ways of doing chemistry and apply these to LP and TLS research. Regarding the first “uniqueness” discussed above, the dual science-engineering/technology nature of chemistry, it might be fruitful to focus not only on the development and use of mental models or LPs to explain and predict students’ learning, but also on the design of processes and LPs that can inform TLSs, as well as on the consequences of altering students’ thinking about chemistry. Most contributions in the area of LP and TLS research focus either on the theoretical or empirical modeling of learning trajectories or on the development of instructional materials (Duncan and Gotwals, 2015; Lehrer and Schauble, 2015), as do the contributions in this themed issue; a more systematic co-development of theoretical (or empirical) models together with well-designed teaching acts and process studies might be necessary to gain deeper insights into students’ learning of core chemical ideas and practices. Regarding the second “uniqueness” discussed above, the simultaneous use of “imperfect” models, it might be advantageous to work with different sets of assumptions about learning and teaching when examining the learning and teaching of chemistry. Going beyond the widespread approach of proposing, enacting, and evaluating a single LP or TLS, and instead considering alternative LPs or TLSs as concurrent hypotheses about students’ learning (cf. Duncan and Gotwals, 2015), seems a productive way not only to provide new evidence, but also to build more systematically upon, and further consolidate, previous findings. A stronger focus of research at the intersection of teaching and learning would also provide more insight into the stability of learning, its responsiveness to instructional interventions, and the effects of different types of measurement.

The theoretical foundations and objectives of LP research, in particular the coherent integration of cognition, instruction, and assessment, are only partially realized in current approaches. This is because prioritizing specific aspects is often justified by the need to provide groundwork that can then be extended systematically. The development of LPs and TLSs over longer time spans is a very extensive undertaking. While current work is predominantly driven by individual research groups, more systematic cooperation could be key to moving the field forward (Bernholt et al., 2018). In this regard, integrating different perspectives would pertain not only to teaching and testing approaches, but also to methodological and conceptual stances (which are often intertwined). To take up the introductory quote of this commentary, we conclude with the observation that it is not only teaching, but also educational research, that is a kind of free-choice cafeteria. Researchers can and do help themselves to new meanings and knowledge according to social opportunity, personal preferences, and intellectual tastes (as well as funding contexts). Calling for more comprehensive and integrated research agendas is certainly not new, but meeting the demands of such an agenda would help our field to move beyond questions about wine and skins.

References

  1. Adbo K. and Taber K. S., (2009), Learners' mental models of the particulate nature of matter. A study of 16-year-old Swedish science students, Int. J. Sci. Educ., 31(6), 757–786.
  2. Amaral E. M. R. D., Ratis Tenório da Silva J. R. and Sabino J. D., (2018), Analysing processes of conceptualization for students in lessons on substance from the emergence of conceptual profile zones, Chem. Educ. Res. Pract., DOI: 10.1039/c8rp00050f.
  3. American Association for the Advancement of Science and National Science Teachers Association, (2007), Atlas of science literacy: Project 2061, Washington, DC: AAAS.
  4. Anderson L. W. and Krathwohl D. R. (ed.), (2001), A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives, White Plains, NY: Longman.
  5. Bain K. and Towns M. H., (2016), A review of research on the teaching and learning of chemical kinetics, Chem. Educ. Res. Pract., 17(2), 246–262.
  6. Barke H.-D., Hazari A. and Yitbarek S., (2009), Misconceptions in Chemistry: Addressing Perceptions in Chemical Education, Berlin: Springer.
  7. Bernholt S. and Parchmann I., (2011), Assessing the complexity of students' knowledge in chemistry, Chem. Educ. Res. Pract., 12(2), 167–173.
  8. Bernholt S., Neumann K. and Sumfleth E., (2018), Learning Progressions, in Krüger D., Parchmann I. and Schecker H. (ed.), Theorien in der naturwissenschaftsdidaktischen Forschung, Berlin: Springer, pp. 209–225.
  9. Biggs J. B. and Collis K. F., (1982), Evaluating the Quality of Learning: The SOLO taxonomy, New York, NY: Academic Press.
  10. Bloom B. S., (1956), Taxonomy of educational objectives: the classification of educational goals, Handbook 1: Cognitive domain, New York, NY: David McKay.
  11. Bulte A. M. W., Westbroek H. B., de Jong O. and Pilot A., (2006), A research approach to designing chemistry education using authentic practices as contexts, Int. J. Sci. Educ., 28(9), 1063–1086.
  12. Buty C., Tiberghien A. and Le Maréchal J.-F., (2004), Learning hypotheses and an associated tool to design and to analyse teaching–learning sequences, Int. J. Sci. Educ., 26(5), 579–604.
  13. Bybee R. W., Taylor J. A., Gardner A., van Scotter P., Powell J. C., Westbrook A. and Landes N., (2006), The BSCS 5E instructional model: origins and effectiveness, Colorado Springs: BSCS.
  14. Chamizo J. A., (2013), Technochemistry. One of the chemists’ ways of knowing, Found. Chem., 15(2), 157–170.
  15. Chi S., Wang Z., Luo M., Yang Y. and Huang M., (2018), Student progression on chemical symbol representation abilities at different grade levels (Grades 10–12) across gender, Chem. Educ. Res. Pract., DOI: 10.1039/c8rp00010g.
  16. Cooper M. M. and Stowe R. L., (2018), Chemistry education research-from personal empiricism to evidence, theory, and informed practice, Chem. Rev., 118(12), 6053–6087.
  17. Cooper M. M., Underwood S. M., Hilley C. Z. and Klymkowsky M. W., (2012), Development and assessment of a molecular structure and properties learning progression, J. Chem. Educ., 89(11), 1351–1357.
  18. Corcoran T., Mosher F. and Rogat A. D., (2009), Learning Progressions in Science: An Evidence-based Approach to Reform, Consortium for Policy Research in Education.
  19. Dawson T. L., Fischer K. W. and Stein Z., (2006), Reconsidering qualitative and quantitative research approaches. A cognitive developmental perspective, New Ideas in Psychol., 24(3), 229–239.
  20. de Vos W. and Pilot A., (2001), Acids and bases in layers. The stratal structure of an ancient topic, J. Chem. Educ., 78(4), 494.
  21. Duit R., Gropengießer H., Kattmann U., Komorek M. and Parchmann I., (2012), The model of educational reconstruction. A framework for improving teaching and learning science, in Jorde D. and Dillon J. (ed.), Science Education Research and Practice in Europe, Sense Publishers, Rotterdam, pp. 13–37.
  22. Duncan R. G. and Gotwals A. W., (2015), A tale of two progressions. On the benefits of careful comparisons, Sci. Educ., 99(3), 410–416.
  23. Duncan R. G. and Rivet A. E., (2013), Science learning progressions, Science, 339(6118), 396–397.
  24. Duschl R. A., Maeng S. and Sezen A., (2011), Learning progressions and teaching sequences. A review and analysis, Stud. Sci. Educ., 47(2), 123–182.
  25. Emden M., Weber K. and Sumfleth E., (2018), Evaluating a learning progression on ‘Transformation of Matter’ on the lower secondary level, Chem. Educ. Res. Pract., DOI: 10.1039/c8rp00137e.
  26. Gotwals A. W. and Songer N. B., (2009), Reasoning up and down a food chain. Using an assessment framework to investigate students' middle knowledge, Sci. Educ., III(1).
  27. Haladyna T. M. and Downing S. M., (2004), Construct-irrelevant variance in high-stakes testing, Educ. Meas., 23(1), 17–27.
  28. Hammer D. and Sikorski T.-R., (2015), Implications of complexity for research on learning progressions, Sci. Educ., 99(3), 424–431.
  29. Holland P. W. and Wainer H. (ed.), (1993), Differential item functioning, Hillsdale: Lawrence Erlbaum Associates.
  30. Jin H. and Anderson C. W., (2012), A learning progression for energy in socio-ecological systems, J. Res. Sci. Teach., 49(9), 1149–1180.
  31. Jin H., Zhan L. and Anderson C. W., (2013), Developing a fine-grained learning progression framework for carbon-transforming processes, Int. J. Sci. Educ., 35(10), 1663–1697.
  32. Kabapinar F., Leach J. and Scott P., (2004), The design and evaluation of a teaching–learning sequence addressing the solubility concept with Turkish secondary school students, Int. J. Sci. Educ., 26(5), 635–652.
  33. Körhasan N. D. and Wang L., (2016), Students' mental models of atomic spectra, Chem. Educ. Res. Pract., 17(4), 743–755.
  34. Lamichhane R., Reck C. and Maltese A. V., (2018), Undergraduate chemistry students’ misconceptions about reaction coordinate diagrams, Chem. Educ. Res. Pract., 19(3), 834–845.
  35. Leach J. and Scott P., (2002), Designing and evaluating science teaching sequences. An approach drawing upon the concept of learning demand and a social constructivist perspective on learning, Stud. Sci. Educ., 38(1), 115–142.
  36. Lehrer R. and Schauble L., (2015), Learning progressions. The whole world is not a stage, Sci. Educ., 99(3), 432–437.
  37. Luxford C. J. and Bretz S. L., (2014), Development of the bonding representations inventory to identify student misconceptions about covalent and ionic bonding representations, J. Chem. Educ., 91(3), 312–320.
  38. Martín del Pozo R., Porlán R. and Rivero A., (2017), The progression of prospective teachers’ conceptions of school science content, J. Sci. Teach. Educ., 22(4), 291–312.
  39. Méheut M., (2004), Designing and validating two teaching–learning sequences about particle models, Int. J. Sci. Educ., 26(5), 605–618.
  40. Méheut M. and Psillos D., (2004), Teaching–learning sequences. Aims and tools for science education research, Int. J. Sci. Educ., 26(5), 515–535.
  41. Mohan L., Chen J. and Anderson C. W., (2009), Developing a multi-year learning progression for carbon cycling in socio-ecological systems, J. Res. Sci. Teach., 46(6), 675–698.
  42. Mortimer E. F. and El-Hani C. N. (ed.), (2014), Conceptual Profiles: A Theory of Teaching and Learning Scientific Concepts, Contemporary Trends and Issues in Science Education, Springer Netherlands, Dordrecht, vol. 42.
  43. Mulford D. R. and Robinson W. R., (2002), An inventory for alternate conceptions among first-semester general chemistry students, J. Chem. Educ., 79(6), 739.
  44. Nakhleh M. B., (1992), Why some students don't learn chemistry. Chemical misconceptions, J. Chem. Educ., 69(3), 191.
  45. National Research Council [NRC], (2007), Taking science to school: Learning and teaching science in Grades K-8, Washington, DC: National Academies Press.
  46. National Research Council [NRC], (2012), A framework for K-12 science education: Practices, crosscutting concepts, and core ideas, Washington, DC: The National Academies Press.
  47. Organisation for Economic Co-operation and Development [OECD] (ed.), (2016), PISA 2015 Results: Excellence and Equity in Education, PISA, Paris: OECD Publishing, vol. I.
  48. Reigeluth C. M., Beatty B. J. and Myers R. D. (ed.), (2016), Instructional-design theories and models, New York, London: Routledge.
  49. Saari H. and Viiri J., (2003), A research-based teaching sequence for teaching the concept of modelling to seventh-grade students, Int. J. Sci. Educ., 25(11), 1333–1352.
  50. Schwarz C. V. and White B. Y., (2005), Metamodeling knowledge. Developing students' understanding of scientific modeling, Cognit. Instruct., 23(2), 165–205.
  51. Sevian H. and Couture S., (2018), Epistemic games in substance characterization, Chem. Educ. Res. Pract., DOI: 10.1039/c8rp00047f.
  52. Sevian H. and Talanquer V., (2014), Rethinking chemistry. A learning progression on chemical thinking, Chem. Educ. Res. Pract., 15(1), 10–23.
  53. Sevian H., Talanquer V., Bulte A. M. W., Stacy A. and Claesgens J., (2014), Development of understanding in chemistry, in Bruguière C., Tiberghien A. and Clément P. (ed.), Topics and trends in current science education: 9th ESERA conference selected contributions, Contributions from science education research, pp. 291–306.
  54. Smith C. L., Wiser M., Anderson C. W. and Krajcik J. S., (2006), Implications of research on children's learning for standards and assessment: a proposed learning progression for matter and the atomic-molecular theory, Measurement, 4(1–2), 1–98.
  55. Solomon J., (1992), Getting to know about energy – in school and society, London: Falmer Press.
  56. Stavrou D., Michailidi E. and Sgouros G., (2018), Development and dissemination of a teaching learning sequence on nanoscience and nanotechnology in a context of communities of learners, Chem. Educ. Res. Pract., DOI: 10.1039/c8rp00088c.
  57. Steedle J. T. and Shavelson R. J., (2009), Supporting valid interpretations of learning progression level diagnoses, J. Res. Sci. Teach., 46(6), 699–715.
  58. Stevens S. Y., Delgado C. and Krajcik J. S., (2010), Developing a hypothetical multi-dimensional learning progression for the nature of matter, J. Res. Sci. Teach., 47(6), 687–715.
  59. Taber K. S., (2002), Chemical Misconceptions: Prevention, Diagnosis and Cure, Cambridge: Royal Society of Chemistry, vol. 2.
  60. Taber K. S., (2017), Researching moving targets. Studying learning progressions and teaching sequences, Chem. Educ. Res. Pract., 18(2), 283–287.
  61. Talanquer V., (2006), Commonsense chemistry. A model for understanding students' alternative conceptions, J. Chem. Educ., 83(5), 811.
  62. Talanquer V., (2018), Progressions in reasoning about structure–property relationships, Chem. Educ. Res. Pract., DOI: 10.1039/c8rp00187h.
  63. Treagust D. F., (1988), Development and use of diagnostic tests to evaluate students’ misconceptions in science, Int. J. Sci. Educ., 10(2), 159–169.
  64. Tytler R., (2018), Learning progressions from a sociocultural perspective. Response to co-constructing cultural landscapes for disciplinary learning in and out of school: the next generation science standards and learning progressions in action, Cult. Stud. Sci. Educ., 13(2), 599–605.
  65. van Vorst H., (2018), Structuring learning processes by ladders of learning. Results from an implementation study, Chem. Educ. Res. Pract., DOI: 10.1039/c8rp00078f.
  66. Vellom R. P. and Anderson C. W., (1999), Reasoning about data in middle school science, J. Res. Sci. Teach., 36(2), 179–199.
  67. Waddington D., Nentwig P. and Schanze S. (ed.), (2007), Making it comparable – Standards in science education, Münster: Waxmann.
  68. White B. Y. and Frederiksen J. R., (1998), Inquiry, modeling, and metacognition. Making science accessible to all students, Cognit. Instruct., 16(1), 3–118.
  69. Yan Y. K. and Subramaniam R., (2016), Diagnostic appraisal of grade 12 students' understanding of reaction kinetics, Chem. Educ. Res. Pract., 17(4), 1114–1126.
