What you see is what you learn? The role of visual model comprehension for academic success in chemistry

Thomas Dickmann *a, Maria Opfermann b, Elmar Dammann c, Martin Lang a and Stefan Rumann a
aUniversity of Duisburg-Essen, 45127 Essen, Germany. E-mail: thomas.dickmann@uni-due.de
bRuhr-University Bochum, 44780 Bochum, Germany
cFurtwangen University, 78120 Furtwangen, Germany

Received 14th January 2019, Accepted 7th June 2019

First published on 7th June 2019


Abstract

Visualizations and visual models are of substantial importance for science learning (Harrison and Treagust, 2000), and it seems impossible to study chemistry without visualizations. More specifically, the combination of visualizations with text is especially beneficial for learning when dual coding is fostered (Mayer, 2014). At the same time, however, comprehending visualizations and visual models appears to be rather difficult for learners (e.g., Johnstone, 2000). This may be one reason for the difficulties students experience especially during the university entry phase, which in a worst-case scenario can result in the high university drop-out rates currently found in science-related study courses (Chen, 2013). Against this background, our study investigates how the ability to handle and learn with visualizations – which we call visual model comprehension – relates to academic success at the beginning of chemistry studies. To do so, we collected data from 275 chemistry freshmen during their first university year. Our results show that visual model comprehension is a key factor for students to be successful in chemistry courses. For instance, visual model comprehension predicts exam grades in introductory chemistry courses as well as general chemistry content knowledge. Furthermore, our analyses indicate that visual model comprehension acts as a mediator for the relation between prior knowledge and (acquired) content knowledge in chemistry studies. Given this importance of visual model comprehension, our findings could give valuable insights regarding approaches to foster chemistry comprehension and learning, especially for students at the beginning of their academic career.


Introduction

Imagine you learn chemistry with a textbook that doesn’t contain any visualizations – no ball-and-stick models, no structural formulas, no energy-level diagrams, just plain text. Impossible, isn’t it?!

Visualizations and visual models have always played a major role in chemistry learning and accordingly in learning materials such as textbooks. This refers to different kinds of visualizations (e.g., Johnstone, 2000; Gilbert and Treagust, 2009): those on a phenomenological or macro-type level (representing empirical properties of compounds), those on a model or sub-micro-type level (external representations, e.g., ball-and-stick models) and those on a symbolic level (the sub-micro type further simplified to symbols, e.g., structural formulas). In chemistry textbooks and learning materials, visual models as one kind of external representation are thus a useful means of making invisible microscopic aspects concrete and thereby helping learners understand chemical processes and reactions.

It is thus not surprising that research proposes visualizations and visual models to be highly important for substantial learning in chemistry (Harrison and Treagust, 2000; Ramadas, 2009; Coll and Lajium, 2011). Moreover, the use of visualizations and visual models in chemistry is omnipresent, so in short: “Chemistry is a visual science” (Wu and Shah, 2004).

When talking about the benefits of visualizations and visual models for chemistry learning, we can do so with regard to different aspects. In general, the literature indicates that visualizations can support learning when they are added to text (Schnotz, 2005; Mayer, 2014). This benefit, however, depends on the learners’ ability to comprehend and “read” the visualization itself, and this ability, in turn, relates to several other individual learner characteristics (Höffler et al., 2013).

Thus, when talking about the above-mentioned high dropout rates in chemistry-related university courses (Chen, 2013), we should take all of these aspects into account to get a comprehensive picture of the role of visual model comprehension. Furthermore, since dropout takes place especially in the first year at university (Chen, 2013; Heublein et al., 2017), our study focuses on the role of visual model comprehension with regard to the academic success of chemistry freshmen. Here, the influence of visual model comprehension on academic success and the question of which predictors in turn have an impact on visual model comprehension are of special interest.

Theoretical framework

Academic success in chemistry in the university entry phase

A major problem for many universities worldwide is the small number of students who start a chemistry-related course on the one hand and the high dropout rates among these few students on the other hand (OECD, 2011; Chen, 2013; Heublein et al., 2017). For example, in Germany, each year only 1.4% of students with a university entrance qualification decide to study a chemistry-related course (OECD, 2011; Prereira, 2018). With regard to drop-out rates, studies in the United States show that nearly 50% of the students who start a science-related course leave college or university without completing their degree or certificate (Chen, 2013).

When looking for learner-related reasons that account for success at the beginning of chemistry studies, research indicates that besides general variables such as cognitive abilities, grade point average, learning motivation or learning strategies (e.g., Baker and Talley, 1972; Wu and Shah, 2004; Tai et al., 2005), domain-specific variables such as prior knowledge (Cook et al., 2008; Seery, 2009), mathematical abilities (Derrick and Derrick, 2002; Nicoll and Francisco, 2001) and spatial ability (Wu and Shah, 2004) are predictive of good performance in chemistry and its sub-disciplines. These learner-related characteristics play a major role with regard to how learners are able to deal with chemistry content and learning materials. The latter comprise multiple forms of visualizations, which can be found on nearly every page of current chemistry textbooks. While these visualizations are meant to foster learning by adding something to the text, they must be recognized and understood in themselves to be beneficial. These issues are addressed in the next two sections.

Importance of visualizations in chemistry learning

Not only for chemistry, but for learning in general, visualizations bear great potential as an enrichment to text. This has been shown in a wide range of studies; as a consequence, instructional theories like the Cognitive Theory of Multimedia Learning (CTML; Mayer, 2009), the Integrated Model of Text and Picture Comprehension (ITPC; Schnotz, 2005) and the Cognitive Load Theory (CLT; Van Merriënboer and Sweller, 2005) emphasize that visualizations in combination with text can foster learning when they initiate dual coding (that is, making use of the verbal and visual information processing channels simultaneously) and thus off-load working memory. In line with this, Ainsworth (2006), in her DeFT (Design, Functions, Tasks) framework for learning with multiple external representations, extends these assumptions: not only visualizations plus text, but any combination of two or more external representations (e.g., text plus pictures, text plus graph and table, picture plus graph) is considered beneficial for learning if the representations either complement each other (e.g., the visualization contains information that the text does not give at first sight) or constrain each other's interpretation and thus foster deeper comprehension. However, for multiple external representations and visualizations to work, learners must be able to identify the relevant information that is presented in the representations, to “translate” it and to relate the respective elements to each other in order to build up coherent mental models (cf. Opfermann et al., 2017). That is, they need to possess some kind of representational competence (Rau, 2017), imagery ability (Clement et al., 2005), or imagistic reasoning ability (Stieff, 2010), all of which describe the ability to learn with and benefit from visually presented information.

Unfortunately, such ability seems to be lacking rather often when learning chemistry. In this regard, Gilbert and Treagust (2009), in line with Johnstone (2000), assume that chemistry students have difficulties especially with regard to the above-mentioned macroscopic, sub-microscopic and symbolic types of chemistry visualizations (Fig. 1). More specifically, Johnstone (2000) states that learners might not be able to distinguish between, and thus think at, these three levels simultaneously. In a similar fashion, Kararo et al. (2019) found that students especially in the university entry phase often struggle with understanding structure–property relationships, which is for instance reflected in inaccurate drawings (e.g., of hydrogen bonds) and the inability to predict, argue about and explain such relationships.


Fig. 1 Chemistry-specific classification of visualizations.

This demonstrates that students must handle quite different visualizations and should thus be able to learn with visualizations of various kinds. This ability is crucial because learning success depends at least partly on the comprehension of visualizations (Harrison and Treagust, 2000; Ramadas, 2009; Coll and Lajium, 2011).

These assumptions, in turn, are in line with research taking the view that acquiring knowledge about the structure of molecules depends on comprehending the figural nature of the molecules and their representations (Oliver-Hoyo and Sloan, 2014), that some kind of representational competence is needed to cope with the often multi-representational and three-dimensional depiction of molecules and organic chemistry contents (Stieff, 2010; Stieff, Hegarty and Deslongchamps, 2011), and that visualizations are essential elements of scientific communication (Coleman et al., 2010; Oliveira et al., 2013).

To sum up at this point, research and theory both underline the importance of visualizations in science learning but at the same time emphasize one major difficulty. When learning chemistry with visualizations, learners are often required to comprehend content they do not yet understand with the help of visualizations they are not used to (Ainsworth, 2006; McElhaney et al., 2015). This is called the representation dilemma (Rau, 2017) and implies that before visualizations can benefit chemistry learning, they need to be understood in themselves, followed by the ability to relate them to their textual counterparts (Ainsworth, 2008; McElhaney et al., 2015). In other words, visual model comprehension is a necessary prerequisite for successful chemistry learning.

Visual model comprehension

Taking the above-mentioned considerations into account, we state a working definition of visual model comprehension as follows. Visual model comprehension is the ability of learners, taking into account domain-specific content and individual learner characteristics, to extract relevant information from visualizations, to “translate” it, and to relate visualizations to each other and to their respective textual counterparts.

This definition includes assumptions about both the characteristics of the learner and those of the visualizations themselves. As a consequence, when investigating the impact of visual model comprehension on learning, both of these should be examined more closely.

With regard to the type of visualization, a first distinction can be made between decorative and instructional visualizations (Mayer, 2009). While decorative visualizations do not directly contribute to the comprehension of the content to be learned and are assumed to have an impact on learning through their motivational potential (Lenzner et al., 2013), instructional visualizations, as the name already suggests, are meant to foster comprehension directly through their explanatory character (Leutner et al., 2014).

Instructional visualizations can further be divided into symbolic and iconic visualizations. Iconic visualizations have a more depictive character (Schnotz, 2005) in that they have structural commonalities with their reference objects (Niegemann et al., 2008). For instance, the drawing of a car looks like a car in reality. In contrast, symbolic visualizations have a more descriptive character and no similarity to the object they are meant to describe. For instance, the formula C2H5OH represents, but does not look like, alcohol. With regard to (chemistry) learning, it can thus be said that iconic visualizations, like models of molecules or a picture of a distillation apparatus, are more suitable for conveying concrete knowledge, whereas symbolic visualizations, like Lewis structures or Newman projections, are more suitable for conveying abstract knowledge (Schnotz, 2005). When studying chemistry, both of these knowledge types are essential, and the question arises how they are used in relevant learning materials such as university chemistry textbooks. This question was addressed in the first step of our study, which is described later on.

Besides the types and characteristics of the visualizations themselves, a second variable that is crucial for visual model comprehension is the set of individual prerequisites with which a learner approaches a learning situation. In other words, individual learner characteristics (Höffler et al., 2013) are assumed to be substantial predictors of learning success. In chemistry learning, besides general cognitive abilities and prior knowledge, the spatial ability (especially when it comes to working with iconic visualizations) and mathematical ability (especially when it comes to learning from structural formulas or graphs) of learners might play a central role.

Among these, probably the best-investigated learner characteristic is prior knowledge. For instance, research has consistently shown that regardless of the domain, prior knowledge appears to be the strongest (but not the only) predictor of learning success (Parkerson et al., 1984; Leutner et al., 2006). At the same time, not all learners benefit equally from instructional materials. For instance, “instructional techniques that are highly effective with inexperienced learners can lose their effectiveness and even have negative consequences when used with more experienced learners” (Expertise Reversal Effect; Kalyuga et al., 2003, p. 23). With a special emphasis on learning with visualizations, this is taken up similarly by Mayer (2009) in his Individual Differences Principle, in which he states that text-picture materials that are beneficial for learners with low prior knowledge and high spatial ability can even have detrimental effects for learners with high prior knowledge or low spatial ability. The rationale behind this assumption is that for learners with high prior knowledge, adding pictures to a text might be redundant and merely stress working memory capacities without additional comprehension gains, while learners with low spatial ability might be stressed as well, as they are less able to “read” the spatial characteristics of the visualization and keep all necessary information active in their working memory (Mayer and Moreno, 1998).

Spatial ability is a frequent focus in chemistry learning and in science learning in general. For instance, Wu and Shah (2004) state that “Chemistry is a visual science” and emphasize that spatial abilities are one of the relevant predictors of chemistry learning. This can, for instance, be explained by the fact that in chemistry, learners must often identify key components of visualizations and rotate them in their minds, for instance when learning with ball-and-stick models. This ability is a central prerequisite for understanding chemical contents, structures and relations. In line with this, empirical studies show significant correlations between spatial abilities and chemistry performance (Carter et al., 1987; Staver and Jacks, 1988). Furthermore, chemistry performance can obviously be enhanced by providing students with a pre-training on visual-spatial tasks (Tuckey et al., 1991). Nevertheless, the findings on the role of spatial ability, especially in combination with or independently of other learner prerequisites, are still not as clear and systematic as theory would suggest (Wu and Shah, 2004). Our study takes this up and tries to fill the gap by investigating individual learner characteristics and their interplay when studying chemistry.

Objectives of the study

Our study aimed at investigating the role of visual model comprehension for academic success regarding chemistry studies. More specifically, we were interested in the following research questions:

RQ 1: Is visual model comprehension an individual learner prerequisite that students possess at the beginning of their university chemistry studies?

RQ 1a: Is the construct domain-specific, or does it comprise domain-independent aspects?

RQ 1b: Can visual model comprehension be assessed validly and reliably by means of a multiple-choice test instrument?

RQ 2: Is visual model comprehension a stable trait, or does it develop over time?

RQ 3: Does visual model comprehension predict chemistry study success in terms of content knowledge gains and exam grades?

RQ 4: Which individual learner characteristics, in turn, predict visual model comprehension?

To answer these research questions, a first step was to develop a test instrument that is suitable to assess visual model comprehension validly and reliably. This was done in two steps. First, a comprehensive textbook analysis was conducted to find out more about the types of visualizations that are used in common university textbooks aimed at beginning chemistry students. On the basis of this analysis, we developed general and domain-specific items that were validated in a pilot study.

A second step was then to investigate whether visual model comprehension, as assessed with the new test instrument, is able to predict the academic success of chemistry students during their first semester at university. Furthermore, we also investigated individual learner characteristics as predictors of visual model comprehension to find out more about the still open question of why learners differ in their visual model comprehension and how visual model comprehension might be supported if students appear to lack the respective abilities.

We focused on the university entry phase, as this is obviously the most crucial phase for studying in general (Heublein, 2014) and with regard to chemistry courses in particular (Lewis and Lewis, 2007; Jiang et al., 2010; Kennepohl et al., 2010). As described above, at the beginning of their university life, students might be overwhelmed by the multiple demands that their study programs place on them, of which the need to process the often complex visual presentations of the contents to be learned is only one. Underestimating these demands can accordingly lead to cognitive overload, frustration and, in the worst case, even early study drop-out (e.g., OECD, 2011).

Thus, in our study, which is part of a comprehensive long-term project on predictors of study success in the university entry phase of science and technology courses, we aimed at finding out more about the visual model comprehension of chemistry students, how it develops over the course of the first two university semesters, whether and how it predicts study success, and whether and how it is in turn predicted by individual learner characteristics. In sum, our study consisted of the following steps:

Step 1: Pilot study

– Step 1a: Chemistry textbook analysis: identifying common types of visualizations that are used in university textbooks and that students need to be able to work with.

– Step 1b: Development of the visual model comprehension test: using the visualization types identified in step 1a to create items that are able to measure students’ ability to comprehend visualizations in a general as well as in a chemistry-based context.

Step 2: Conducting the main study

– Step 2a: Beginning of first semester: Assessing visual model comprehension with the test developed in step 1b, prior chemistry-related knowledge, cognitive abilities, GPA, age, gender and other individual learner characteristics

– Step 2b: End of first semester: Assessing visual model comprehension (to be able to draw inferences about its stability versus malleability), chemistry-related knowledge (to be able to draw inferences about learning gains), cognitive load and exam grades for the first introductory lecture.

– Step 2c: End of second semester: Assessing visual model comprehension, chemistry-related knowledge, cognitive load and exam grades for the second introductory lecture.

Step 3: Analyzing the main study

– Step 3a: Development of visual model comprehension over time: Is it a stable construct, or can it increase during the first two semesters?

– Step 3b: Visual model comprehension and study success: Can visual model comprehension predict the (chemistry-related) knowledge gains of students and their exam grades for the introductory lectures of the first two semesters?

– Step 3c: Predictors of visual model comprehension: If visual model comprehension is a predictor of study success (in terms of knowledge gains and exam grades), can we shed more light on it by finding out more about variables that in turn predict visual model comprehension?

These steps as parts of the pilot study and the main study are described in more detail in the following. In both studies, ethical clearance was ensured in two ways. First, the project had been approved and funded by the German Research Foundation, which included a statement on compliance with good research practice (e.g., the voluntariness of participation). Second, the data protection departments of the participating universities were informed about the project and ensured that the handling of student data (which included demographic information as well as their answers on the test instruments and questionnaires) strictly followed data protection laws.

Pilot study: chemistry textbook analysis

The pilot study was mainly conducted to answer our first research question, namely whether visual model comprehension exists as a construct in its own right and whether it can be assessed validly and reliably by means of a test instrument that contains general as well as domain-specific items. To do so, we first conducted a textbook analysis to identify commonly used visualization types, which were then used to design the items of the visual model comprehension test.

For the textbook analysis, four chemistry textbooks were chosen that are among the most frequently used at German universities and that cover the different domains that are relevant in introductory chemistry courses. These are organic (Bruice, 2011), inorganic (Housecroft et al., 2006), physical (Atkins et al., 2013) and introductory chemistry (Mortimer and Müller, 2003). These textbooks were chosen based on interviews with professors who are responsible for the introductory lectures in chemistry at the universities that were part of the overall long-term project.

To classify the different visualizations in a theory-based manner, we used a scheme based on the above-mentioned distinctions (see Fig. 1), which, among others, can be traced back to the work of Mayer (2009), Schnotz (2005), Schnotz (2008), Niegemann et al. (2008) and Treagust and colleagues (Harrison and Treagust, 2000; Gilbert and Treagust, 2009). On a first level, the visualizations were labelled as either decorative or instructional. Second, if they were instructional, a further distinction was made between iconic and symbolic visualizations. On this second level, we added a third category, which we called “hybrid”, as a first exploratory analysis had shown that a substantial part of the instructional visualizations in these textbooks combine iconic and symbolic aspects (e.g., energy-level diagrams that include orbital visualizations).

All textbooks were analyzed by two independent expert raters, one of them being the first author of this paper and the other being a PhD candidate working in the department of chemistry education, with Cohen's kappas ranging from 0.89 to 0.99 depending on the category.
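For readers who want to reproduce such an agreement check, the following minimal sketch computes Cohen's kappa for one coding category in Python; the two rating vectors are hypothetical placeholders, not the original textbook codings, and the authors' own analysis may have used different software.

```python
# Minimal sketch: inter-rater agreement (Cohen's kappa) for one coding category.
# The two rating vectors below are hypothetical placeholders, not the study's data.
from sklearn.metrics import cohen_kappa_score

rater_1 = ["symbolic", "iconic", "hybrid", "symbolic", "symbolic", "iconic"]
rater_2 = ["symbolic", "iconic", "hybrid", "symbolic", "hybrid", "iconic"]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa: {kappa:.2f}")
```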

Table 1 shows the results of the textbook analysis. In line with expectations, visualizations of all kinds made up a large part of each of the textbooks, ranging from 87% of the textbook pages in inorganic chemistry up to 95.2% of the textbook pages in physical chemistry. This once again underlines Wu and Shah's (2004) statement that “Chemistry is a visual science” and emphasizes the need to investigate the predictors of successful learning with visualizations in chemistry.

Table 1 Findings of the textbook analysis
| | INC | PYC | ORC | IOC | IRR (Cohen's κ) |
|---|---|---|---|---|---|
| % of pages containing visualizations | 93.8 | 95.2 | 90.8 | 87.0 | 1 |
| Level 1: Decorative | 4.2 | 0 | 2.2 | 0 | 0.92–1 |
| Level 1: Instructional | 95.7 | 100 | 97.8 | 100 | 0.91–1 |
| Level 2: Iconic | 8.3 | 9.9 | 9.7 | 10.1 | 0.93–0.96 |
| Level 2: Symbolic | 75.3 | 83.6 | 71.5 | 68.5 | 0.93–0.99 |
| Level 2: Hybrid | 12.0 | 9.8 | 17.1 | 21.4 | 0.89–0.99 |

Note: INC = introductory chemistry; PYC = physical chemistry; ORC = organic chemistry; IOC = inorganic chemistry; IRR = inter-rater reliability. The first row gives the percentage of textbook pages containing visualizations; the remaining rows give percentages of the visualizations found.


The lower part of the table shows how decorative and instructional visualizations are distributed within these visualizations. As can be seen, decorative visualizations are rarely, if at all, used in university textbooks. They will thus not be considered in the remainder of this paper. Instructional visualizations, on the other hand, constitute the overwhelming majority of the visualizations analyzed. As described above, they were further divided into iconic, symbolic and hybrid visualizations. Of these three, symbolic visualizations appear to be the most commonly used (which is no surprise, taking into account that all structural formulas count as such), but still, between a tenth and a fifth of the visualizations are either purely iconic or hybrid, that is, they contain iconic as well as symbolic aspects. There seem to be slight domain-related differences in that the proportion of symbolic visualizations is highest in physical chemistry, whereas the proportion of iconic visualizations is highest in inorganic chemistry; neither of these findings is surprising.

The results of this textbook analysis served as a basis for the development of items for the visual model comprehension test, which will be described next.

Pilot study – visual model comprehension test

The idea for the visual model comprehension test was to design an instrument that assesses the ability to learn from and with visualizations. Since our study is part of a large long-term project on investigating and comparing academic success and study drop-out in different science- and technology-related study programs, the test was developed in cooperation with the corresponding engineering sub-study and therefore contains engineering-related items as well. Altogether, the visual model comprehension test was set up in three parts with chemistry-specific, engineering-specific and general (domain-independent) items. It is important to note that although the domain-specific items were embedded in respective (chemistry or engineering) contexts, they are supposed to be solvable without greater domain-specific prior knowledge. That is, chemistry students should be able to work on the engineering-specific items depending on their visual model comprehension ability and not depending on their engineering knowledge, and the same applies to engineering students working on the chemistry-specific items. To underline the importance of general visual model comprehension abilities, we also added a general part with domain-independent items. Examples for these items can be found in Table 2. It should be noted that although the test comprises engineering-specific items, the focus of our study was on the role of visual model comprehension for academic success in chemistry, and thus the results of the engineering students are not reported here.
Table 2 Item examples of the visual model comprehension test
Example of the chemistry-specific part (more examples for the chemistry-specific items can be found in Appendix 1)
The following picture shows the lattice of sodium chloride (grey → chloride, black → sodium). How many neighbouring ions does the marked chloride ion have?
image file: c9rp00016j-u1.tif
(a) 2
(b) 4
(c) 6
(d) 8

Example of the general part
What do the dotted lines stand for?
image file: c9rp00016j-u2.tif
(a) The dotted lines are in front of the solid lines.
(b) The dotted lines are behind the solid lines.
(c) The dotted lines are longer than the solid lines.
(d) The dotted lines are shorter than the solid lines.

Example of the engineering-specific part
The following figure shows gearwheels that mesh with each other. The motion of each gearwheel directly affects the other gearwheels. Which combinations of movements are possible for this construction?
image file: c9rp00016j-u3.tif
(a) Z5: Movement counterclockwise; Z1: Movement counterclockwise
(b) Z1: Movement counterclockwise; Z3: Movement clockwise
(c) Z1: Movement clockwise; Z4: Movement counterclockwise
(d) Z2: Movement counterclockwise; Z5: Movement clockwise


A first version of the test comprised 45 items, with 15 items on each scale. In the pilot study, the test was administered three times over the course of the first two semesters of chemistry studies (beginning of the first semester, end of the first semester, end of the second semester) at a large German university. The initial sample for the pilot study comprised 146 students at the beginning of the first semester, of whom 133 also took part at the end of the first semester. At the end of the second semester, the sample had decreased to 61 students. Table 5 shows the internal consistencies, represented by Cronbach's alpha, for all three time points of measurement as well as for each of the three scales and for the overall test. Although the internal consistencies for the overall test are satisfactory to good, they differ quite substantially between the single scales and time points of measurement. Thus, based on the results for the single scales, we deleted some items so that the final instrument comprises 33 items (11 per scale) with Cronbach's alphas between 0.800 and 0.875 for the overall test, which can be considered good internal consistencies.
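As an aside for readers who want to recompute such internal consistencies, the following sketch shows Cronbach's alpha calculated directly from its definition for a persons-by-items response matrix; the simulated dichotomous responses are placeholders, not the pilot-study data.

```python
# Minimal sketch: Cronbach's alpha for one subscale, computed from a
# persons-by-items response matrix (rows = students, columns = items).
# The simulated 0/1 responses below are placeholders, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(146, 11))  # 146 students, 11 dichotomous items

def cronbach_alpha(x: np.ndarray) -> float:
    k = x.shape[1]                               # number of items
    sum_item_var = x.var(axis=0, ddof=1).sum()   # sum of the item variances
    total_var = x.sum(axis=1).var(ddof=1)        # variance of the sum score
    return k / (k - 1) * (1 - sum_item_var / total_var)

print(f"alpha = {cronbach_alpha(responses):.3f}")  # meaningful only for real item data
```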

Validity and factor structure of the visual model comprehension test. To check whether the three scales really measure three different aspects of one overall construct, we aimed at confirming internal validity by means of bivariate correlation analyses between the subscales at the respective points of measurement and by conducting factor analyses to see whether the three scales are separable from each other. The results of the correlation analysis can be seen in Table 3.
Table 3 Bivariate correlations between the scales of the visual model comprehension test for the three points of measurement
| | 1G | 1E | 1O | 2C | 2G | 2E | 2O | 3C | 3G | 3E | 3O |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1C | 0.42 | 0.29 | 0.74 | 0.57 | 0.42 | 0.42 | 0.57 | 0.54 | 0.52 | 0.48 | 0.59 |
| 1G | 1 | 0.41 | 0.77 | 0.46 | 0.63 | 0.50 | 0.64 | 0.47 | 0.63 | 0.53 | 0.63 |
| 1E | | 1 | 0.77 | 0.50 | 0.45 | 0.59 | 0.63 | 0.48 | 0.48 | 0.53 | 0.61 |
| 1O | | | 1 | 0.67 | 0.65 | 0.67 | 0.81 | 0.64 | 0.70 | 0.71 | 0.79 |
| 2C | | | | 1 | 0.48 | 0.55 | 0.81 | 0.71 | 0.55 | 0.60 | 0.70 |
| 2G | | | | | 1 | 0.52 | 0.81 | 0.52 | 0.74 | 0.60 | 0.72 |
| 2E | | | | | | 1 | 0.85 | 0.61 | 0.63 | 0.77 | 0.77 |
| 2O | | | | | | | 1 | 0.73 | 0.77 | 0.79 | 0.88 |
| 3C | | | | | | | | 1 | 0.59 | 0.63 | 0.83 |
| 3G | | | | | | | | | 1 | 0.68 | 0.89 |
| 3E | | | | | | | | | | 1 | 0.89 |

Note: 1 = first point of measurement; 2 = second point of measurement; 3 = third point of measurement; C = chemistry-specific items; G = general items; E = engineering-specific items; O = overall score. All correlations are significant at p < 0.01 or higher.


All correlations were in a medium to high range and significant at a p < 0.001 level. It should be noted, however, that the highest correlations were between the respective scales (e.g., chemistry-specific items) at the different points of measurement and with the overall score, to which of course each single scale had contributed. The correlations between the different scales (e.g., between chemistry- and engineering-specific items) were consistently lower.

Nevertheless, the substantial overlap between the scales raised the question of whether they really measure different aspects of visual model comprehension or whether this finding rather points to one general construct. To check on this, we subsequently calculated confirmatory factor analyses (CFAs), which, according to Moosbrugger and Schermelleh-Engel (2008), can be used to examine the pre-specified structure of an instrument and are suitable if prior assumptions about the dimensionality are made. The results of this factor analysis can be seen in Table 4.

Table 4 Confirmatory factor analysis: Comparison of a one-dimensional with a three-dimensional model
| Model | χ²-value | df | Δχ² | Δdf | RMSEA | CFI | NFI |
|---|---|---|---|---|---|---|---|
| 1dim-Model | 418.85 | 495 | | | 0.00 | 1 | 1 |
| 3dim-Model | 368.85 | 492 | 50.06** | 3 | 0.00 | 1 | 1 |

**p < 0.001, N = 241.


As can be seen in the table, the descriptive criteria are good for both models and indicate that both a three-factor and a one-factor solution could explain the data structure. However, the χ2 difference test indicates that there is a significant difference between the two models, with the three-dimensional model being more consistent with the given data. We can thus conclude that although the three subdimensions of visual model comprehension relate to each other substantially, they still represent individual constructs that are empirically separable from each other but can be combined into an overall visual model comprehension score.
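For illustration, the p-value of this χ2 difference test can be recomputed from the values reported in Table 4; the short sketch below uses Python and scipy, whereas the original analyses were run in SPSS and R.

```python
# Minimal sketch: p-value of the chi-square difference test for the nested
# CFA models in Table 4 (difference values taken from the table).
from scipy.stats import chi2

delta_chi2 = 50.06   # reported difference in chi-square values
delta_df = 3         # reported difference in degrees of freedom
p_value = chi2.sf(delta_chi2, delta_df)
print(f"p = {p_value:.2e}")  # far below 0.001, favouring the three-dimensional model
```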

This final and three-scaled version of the visual model comprehension test was subsequently used during all points of measurement of the main study.

Table 5 Reliability scores of the visual model comprehension test (45 items) and its subscales (15 items per scale) in the pilot study

| MP | NOC | α of CP | α of GP | α of EP | α of OT |
|---|---|---|---|---|---|
| 1 | 146 | 0.753 | 0.664 | 0.678 | 0.844 |
| 2 | 133 | 0.658 | 0.652 | 0.711 | 0.840 |
| 3 | 61 | 0.557 | 0.611 | 0.570 | 0.791 |

Note: MP = measuring point; NOC = number of cases; α = Cronbach's α; CP = chemistry-specific part; GP = general part; EP = engineering part; OT = overall test.


Pilot study – summary

The pilot study was conducted to answer our first research question, namely whether visual model comprehension is a construct that can be assessed with a standardized multiple-choice test instrument. Taking the results into account, we can answer this research question positively. More specifically, we can say that visual model comprehension is indeed a learner prerequisite that students bring along at the beginning of their studies (RQ 1 overall), that it appears to comprise general as well as domain-specific abilities (RQ 1a) and that it can be assessed validly and reliably with the test instrument that we developed (RQ 1b).

These results and the visual model comprehension test served as the basis for the main study, which aimed to answer the second, third and fourth research questions and is described in the following section.

Main study – visual model comprehension and its relation to study success

The main study aimed at investigating the role of visual model comprehension for chemistry learning and accordingly the question of whether and how visual model comprehension, and thus learning success, can be fostered. To do so, we used the final version of the visual model comprehension test described in the previous section and assessed students’ visual model comprehension, chemistry-related content knowledge and several other learning-related variables three times over the course of two semesters. Analogously to the pilot study, these were the beginning of the first semester, the end of the first semester and the end of the second semester. The main study started in the semester immediately following the pilot study.

Participants

In the main study, 275 students from two large German universities took part. 102 of them were female, and their average age was 21 years. Age and gender distribution did not differ between the two universities (Table 6).
Table 6 Participant characteristics
| | University 1: N | University 1: % female | University 1: age (years) | University 2: N | University 2: % female | University 2: age (years) |
|---|---|---|---|---|---|---|
| Bachelor Chemistry | 118 | 38.1 | 21.1 | 157 | 36.5 | 20.8 |


The students were recruited from the introductory lecture on chemistry that they had to attend right from the beginning of their studies. If they agreed to take part in the study, they attended a seminar over the course of the first semester, in which they acquired knowledge about empirical research and assessment methods. During the seminar sessions, they also filled out the questionnaires and tests that were part of the long-term study. Participation in the seminar and the completion of all questionnaires and tests were rewarded with credit points. In addition, at the end of the second semester, the students were asked to take part in a third point of measurement, in which they again answered the visual model comprehension test as well as the chemistry content knowledge test. Furthermore, at the end of the first and the second semester, respectively, students’ performance in their study-related exams was assessed.

Besides receiving credit points, students who took part in all three points of measurement were rewarded with €100 per person (about $115). As expected, the participation rate decreased over the course of the two semesters, so that in the end, 137 students had completed all tests and questionnaires (Table 7).

Table 7 Participation rate overall and at the two universities
| | Overall: N | Overall: % | University 1: N | University 1: % | University 2: N | University 2: % |
|---|---|---|---|---|---|---|
| 1. MP | 275 | 100 | 118 | 100 | 157 | 100 |
| 2. MP | 245 | 89.1 | 106 | 89.3 | 139 | 88.5 |
| 3. MP | 137 | 49.8 | 72 | 61.0 | 65 | 41.4 |

Note: MP = measuring point.


Design and procedure

As mentioned, our study took place as part of a large long-term project over the course of the first two semesters within chemistry-related study programs. We applied a correlational design, within which the variables relevant for our specific purposes were assessed primarily at the beginning of the first semester, end of first semester and end of second semester. Each assessment took place during sessions in the seminar on empirical research that the students attended. More specifically, the variables that were assessed during the three points of measurement were as follows:
Beginning of first semester. At the beginning of the first semester, the following variables were assessed:

– Visual model comprehension

– Content-related chemistry knowledge

– General cognitive abilities (verbal and figural reasoning)

– Spatial abilities

– Mathematical abilities

– Grade point average

– Age and gender.

Furthermore, cognitive load in terms of perceived difficulty (Kalyuga et al., 1999) and invested mental effort (Paas, 1992) was assessed several times during the assessments to investigate how the working memory capacities of the students were stressed by the different tasks.

End of first semester. At the end of the first semester, the following variables were assessed:

– Visual model comprehension

– Content-related chemistry knowledge

– Exam grades for the first introductory lecture

– Cognitive load.

End of second semester. Finally, at the end of the second semester, the same variables as for the end of the first semester were assessed (the exam grade in this case related to the second lecture that was attended during the second semester).

This longitudinal approach enabled us not only to investigate whether and how visual model comprehension predicts academic success and is in turn predicted by other variables, but also to find out more about the variability and development of visual model comprehension and about the development of chemistry-related content knowledge (which should of course be a central goal of study programs).

The instruments used at these points of measurement are described in more detail next (excluding the visual model comprehension test, which has already been described as the focus of the pilot study).

Instruments

Content-related chemistry knowledge. The content-related chemistry knowledge test was a standardized instrument that had been developed and validated earlier (Freyer, 2013). This test has a specific focus on chemistry knowledge that is (or should be) acquired during the first semesters of chemistry studies at German universities. It assesses more general, introductory chemistry knowledge (which is taught right at the beginning of studies) as well as organic chemistry knowledge (which is taught a little later on). The content knowledge test comprised 30 items with satisfactory to good internal consistencies between α = 0.79 and α = 0.89, depending on the scale and point of measurement.

In our study, the content-related chemistry knowledge, just like visual model comprehension, had a special role in that it was included in our analyses as a dependent variable (predicted by visual model comprehension) as well as an independent variable (and thus a potential predictor of visual model comprehension, but also of exam grades as an indicator for study success).

General cognitive abilities. General cognitive abilities were assessed with a standardized instrument as well, the so-called “Kognitiver Fähigkeitstest” (Cognitive Abilities Test; KFT 4-12+R; Heller and Perleth, 2000). This test assesses the cognitive abilities of students from 4th grade (about 10 years) onwards. The original version comprises three scales (verbal, numerical and figural reasoning), of which we used two, the verbal and the figural reasoning scale. (For reasons of test economy, the numerical reasoning scale was excluded, since we already assessed these competencies with the mathematical abilities test described further below.) This resulted in an instrument with 50 items divided equally across the two scales (25 each). The internal consistencies for these two scales range between α = 0.80 and α = 0.90 in the standardized version of the instrument and were somewhat lower for our sample (α = 0.67 to α = 0.76).
Spatial abilities. When investigating visual model comprehension or, more generally, the ability to learn with any kind of visualization, the spatial abilities of learners are among the most frequently included variables, which is for instance reflected in the above-mentioned individual differences principle (Mayer, 2009). We assessed spatial abilities with one of the most popular standardized test instruments in this regard, the Paper Folding Test (PFT; Ekstrom et al., 1976). This test comprises 10 items, in which students are required to mentally fold and unfold (quadratic) pieces of paper through which holes have been punched (thus, this test is also well known as the “Hole Punching Test”). The internal consistency for our sample was α = 0.68.
Mathematical abilities. As argued earlier, mathematical abilities could be a potential predictor of chemistry performance, considering the frequently required work with structural formulas as well as with tables and diagrams. These abilities were assessed with a test instrument that had been developed within another study of the long-term project (Müller et al., 2018). The test comprised 23 items and had an internal consistency of α = 0.75.

Results

Data analysis

All data were analyzed with SPSS (version 24) and R (version 3.4.2). Since our project followed a longitudinal approach, we were interested in changes of variables (especially visual model comprehension) over time, which we analyzed by means of repeated-measures analyses of variance (ANOVAs). Furthermore, we were interested in relations between visual model comprehension and study success as well as in variables that might predict visual model comprehension. To shed more light on this, we calculated bivariate correlation analyses (with Pearson coefficients) as well as simple and multiple linear regressions (with a focus on beta coefficients and R2 as an indicator of the overall variance that can be explained by the respective models). The mediation analyses that were conducted to find out whether the role of visual model comprehension for study success in chemistry is more direct or indirect also relied on regression approaches; the direct versus indirect effects were tested by means of the Sobel test. The results of these analyses for the main study are reported in the following.
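To make the last step more concrete, the Sobel test combines the two unstandardized coefficients that form the indirect path (a: predictor to mediator; b: mediator to outcome, controlling for the predictor) with their standard errors. The sketch below illustrates this computation in Python with purely hypothetical numbers; it is not the authors' original SPSS/R syntax.

```python
# Minimal sketch of the Sobel test for an indirect (mediated) effect.
# a = effect of the predictor on the mediator; b = effect of the mediator on the
# outcome (controlling for the predictor); se_a and se_b are their standard errors.
# All numbers below are hypothetical placeholders.
import math
from scipy.stats import norm

a, se_a = 0.45, 0.08
b, se_b = 0.30, 0.10

sobel_z = (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
p_value = 2 * norm.sf(abs(sobel_z))  # two-tailed p-value
print(f"z = {sobel_z:.2f}, p = {p_value:.4f}")
```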

Visual model comprehension and its development over the course of study

One major question of interest when investigating visual model comprehension and its impact on study success is whether visual model comprehension is a static prerequisite with which students start their studies and which cannot be modified, or whether it is a dynamic construct. This question was the focus of our Research Question 2. Table 8 gives an overview over the scores for the three scales as well as for the overall scores for the three points of measurement.
Table 8 Visual model comprehension during the three points of measurement
| | N | Chemistry-specific items: M | SD | General items: M | SD | Engineering-specific items: M | SD | Overall score: M | SD |
|---|---|---|---|---|---|---|---|---|---|
| 1MP | 275 | 0.74 | 0.21 | 0.53 | 0.19 | 0.64 | 0.24 | 0.65 | 0.16 |
| 2MP | 245 | 0.79 | 0.20 | 0.57 | 0.21 | 0.69 | 0.23 | 0.68 | 0.17 |
| 3MP | 137 | 0.84 | 0.18 | 0.66 | 0.23 | 0.73 | 0.22 | 0.74 | 0.18 |
| Sig. | | <0.001 | | <0.001 | | <0.001 | | <0.001 | |

Note: 1MP–3MP = measuring points 1–3; M = mean solution probability; SD = standard deviation.


As can be seen in the table, the solution probability for the items consistently increases over time for all scales as well as for the overall score. To investigate whether these increases are significant, we calculated repeated-measures analyses of variance. It has to be noted that only cases with data available for all three points of measurement were included in these analyses. That is, the final sample consisted of 137 students, for whom the increase in all three scales as well as in the overall score was highly significant across the three points of measurement. In other words, students steadily increased their visual model comprehension during their first two semesters of chemistry studies at university. RQ 2 can thus be answered positively: visual model comprehension appears to be a dynamic construct that can increase over time, which means that we can apparently help students to improve their visual model comprehension. This is even more important if visual model comprehension has an impact on how successfully students learn chemistry overall. Whether this is the case is addressed in the next section.
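For illustration, a repeated-measures ANOVA of this kind can be specified as follows with statsmodels in Python (the study's analyses were run in SPSS and R); the long-format data frame and its column names (student, mp, ovc) are assumptions for the sketch, not the original data set.

```python
# Minimal sketch: repeated-measures ANOVA on visual model comprehension (ovc)
# across three measuring points (mp) for the 137 students with complete data.
# The simulated scores and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
student = np.repeat(np.arange(137), 3)                    # 137 complete cases
mp = np.tile([1, 2, 3], 137)                              # three measuring points
ovc = rng.normal(0.65, 0.15, size=137 * 3) + 0.04 * mp    # placeholder scores

df = pd.DataFrame({"student": student, "mp": mp, "ovc": ovc})
result = AnovaRM(data=df, depvar="ovc", subject="student", within=["mp"]).fit()
print(result)  # F test for the within-subject factor mp
```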

Visual model comprehension as a predictor for study success

To investigate how visual model comprehension relates to study success in chemistry, which was the focus of our Research Question 3, we focused on chemistry content knowledge as well as on lecture exam performance as indicators of how well students learn about chemistry during their first two semesters at university. The lecture exams took place twice, once at the end of the first semester (measurement point 2), with a focus on introductory chemistry, and once at the end of the second semester (measurement point 3), with a focus on organic chemistry. Content knowledge was assessed at all three points of measurement. In alignment with the lecture exams, the test focused on introductory chemistry at the first and second points of measurement and on organic chemistry at the second and third points of measurement. Table 9 shows the bivariate correlations between visual model comprehension and the respective study success scores. It has to be noted that the negative correlations between lecture exam grades and visual model comprehension are due to the fact that grades in Germany range from 1 to 6, with 1 being the best (comparable to an “A” in the USA).
Table 9 Bivariate correlations between visual model comprehension, content knowledge and lecture exam grades
| | ICK1 | ICK2 | OCK2 | OCK3 | LIC | LOC |
|---|---|---|---|---|---|---|
| OVC1: r | 0.54 | 0.58 | 0.54 | 0.54 | −0.37 | −0.42 |
| OVC1: N | 274 | 243 | 239 | 133 | 179 | 112 |
| OVC2: r | 0.53 | 0.60 | 0.56 | 0.55 | −0.38 | −0.36 |
| OVC2: N | 241 | 240 | 239 | 133 | 179 | 111 |
| OVC3: r | 0.52 | 0.63 | 0.50 | 0.60 | −0.46 | −0.39 |
| OVC3: N | 135 | 135 | 134 | 134 | 106 | 88 |

Note: OVC1–OVC3 = overall visual model comprehension at measuring points 1–3; ICK1 & ICK2 = introductory chemistry content knowledge at measuring points 1 and 2; OCK2 & OCK3 = organic chemistry content knowledge at measuring points 2 and 3; LIC = lecture exam introductory chemistry at measuring point 2; LOC = lecture exam organic chemistry at measuring point 3. All correlations are significant at p < 0.01 or higher.


As can be seen in the table, all correlations are in a medium to high range and significant at p < 0.01 or p < 0.001. More specifically, visual model comprehension at the very beginning of studies relates to chemistry content knowledge assessed at the same time, and it also correlates highly with content knowledge and lecture exam grades assessed later on. Similarly, visual model comprehension at the end of the first and the second semester, respectively, correlates highly with content knowledge and lecture exam grades at all points of measurement.

These strong correlations are a first indicator of the predictive value that visual model comprehension might have with regard to study success. To shed more light on this, however, regression analyses need to be calculated to give these correlations some directionality and to allow conclusions about the specific role of visual model comprehension while taking other potential predictors of study success into account.

We did so by calculating four multiple regression analyses. For the first two analyses, the criterion variables were the introductory lecture exam grade and the organic lecture exam grade, respectively. As predictors, we included all variables described above, that is, visual model comprehension, chemistry-related content knowledge, general cognitive abilities, spatial abilities, mathematical abilities, GPA, age and gender. The results for these two regression analyses are depicted in Table 10.

Table 10 Predictors of lecture exam grades: results of multiple regression analyses
| Predictor | Introductory chemistry lecture exam: β | p | Organic chemistry lecture exam: β | p |
|---|---|---|---|---|
| OVC2 | −0.195 | <0.05 | | |
| ICK2 | −0.173 | <0.05 | | |
| MAA | −0.187 | <0.05 | | |
| GPA | 0.189 | <0.05 | | |
| OCK3 | | | −0.538 | <0.001 |
| R² | 0.283 | | 0.289 | |
| N | 172 | | 86 | |

Note: OVC2 = overall visual model comprehension at measuring point 2; ICK2 = introductory chemistry content knowledge at measuring point 2; MAA = mathematical ability; GPA = grade point average; OCK3 = organic chemistry content knowledge at measuring point 3.


As can be seen in the table, the organic lecture exam grade is predicted only by organic chemistry knowledge that the students possess shortly before the exam is written. This variable alone explains almost 29% of the variance in these exam grades. On the other hand, the introductory lecture exam grade is predicted by a combination of visual model comprehension, GPA, mathematical abilities and chemistry-related content knowledge, which also explains more than 28% of the variance in the grades.

The results for the third and fourth regression analyses, in which the two aspects of chemistry-related content knowledge as measured by the standardized tests were the criterion variables, are depicted in Table 11.

Table 11 Predictors of chemistry-related content knowledge: Results of multiple regression analyses
| Predictor | Introductory chemistry: β | p | Organic chemistry: β | p |
|---|---|---|---|---|
| OVC1 | 0.165 | <0.05 | | |
| OVC2 | | | 0.226 | <0.001 |
| ICK1 | 0.461 | <0.001 | | |
| OCK2 | | | 0.427 | <0.001 |
| MAA | 0.144 | <0.05 | 0.142 | <0.05 |
| GPA | −0.183 | <0.001 | −0.261 | <0.001 |
| GEN | 0.154 | <0.001 | | |
| VER | 0.089 | <0.05 | | |
| R² | 0.647 | | 0.617 | |
| N | 231 | | 128 | |

Note: OVC1–OVC2 = overall visual model comprehension at measuring points 1 and 2; ICK1 = introductory chemistry content knowledge at measuring point 1; OCK2 = organic chemistry content knowledge at measuring point 2; MAA = mathematical ability; GPA = grade point average; GEN = gender; VER = verbal reasoning.


As can be seen in the table, the strongest single predictor for both kinds of content knowledge is the respective knowledge that students possess half a year before. However, in both regressions, other variables contribute to the respective model in a significant manner as well. Introductory chemistry knowledge is also predicted by the visual model comprehension that students possess at this point, by GPA, gender, mathematical abilities and verbal reasoning abilities. These variables together are able to explain almost 65% of the variance in introductory chemistry content knowledge.

The organic chemistry content knowledge is also predicted by the visual model comprehension that students possess at this point, by GPA and by students’ mathematical abilities. These variables are able to explain almost 62% of the variance in organic chemistry content knowledge.

To sum up at this point, RQ 3 can be answered positively as well. Visual model comprehension, among other variables (of which prior knowledge is consistently the strongest), is able to significantly predict study success in terms of the standardized content knowledge tests for introductory as well as organic chemistry, and in terms of lecture exam grades for introductory chemistry only. If visual model comprehension is such a meaningful predictor, the question emerges whether and how it can in turn be predicted by other variables, which would give an indication of its trainability. These analyses are described next.
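For readers who want to set up regressions of the kind reported in Tables 10 and 11, the following sketch shows one such model with statsmodels in Python; the data frame, its column names (exam_grade, ovc2, ick2, maa, gpa) and the simulated values are hypothetical placeholders, and standardized beta weights would additionally require z-standardizing all variables beforehand.

```python
# Minimal sketch: multiple regression predicting a lecture exam grade from
# visual model comprehension (ovc2), prior content knowledge (ick2),
# mathematical ability (maa) and GPA. All names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 172
df = pd.DataFrame({
    "ovc2": rng.normal(0.68, 0.17, n),
    "ick2": rng.normal(0.60, 0.15, n),
    "maa": rng.normal(0.50, 0.15, n),
    "gpa": rng.normal(2.3, 0.6, n),
})
# German grades: lower is better, hence negative weights for the ability measures.
df["exam_grade"] = (3.5 - 1.2 * df["ovc2"] - 1.0 * df["ick2"] - 0.8 * df["maa"]
                    + 0.4 * df["gpa"] + rng.normal(0, 0.5, n))

model = smf.ols("exam_grade ~ ovc2 + ick2 + maa + gpa", data=df).fit()
print(model.summary())
```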

Predictors of visual model comprehension

The question of which variables are able to predict visual model comprehension was the focus of our Research Question 4. As potential predictors in this regard, we considered the individual learner characteristics described above. These included chemistry-related content knowledge, general cognitive abilities in terms of verbal and figural reasoning, spatial abilities, mathematical abilities, gender and GPA. Table 12 gives an overview of the bivariate correlations between these variables and visual model comprehension at the three points of measurement.
Table 12 Bivariate correlations between visual model comprehension and its predictors
| | OVC1: r | N | OVC2: r | N | OVC3: r | N |
|---|---|---|---|---|---|---|
| CCK | 0.54 | 274 | 0.53 | 241 | 0.63 | 135 |
| VER | 0.44 | 259 | 0.47 | 238 | 0.43 | 134 |
| FIR | 0.43 | 255 | 0.47 | 238 | 0.54 | 132 |
| SPA | 0.40 | 255 | 0.43 | 235 | 0.52 | 133 |
| GEN | 0.28 | 275 | 0.31 | 242 | 0.42 | 135 |
| GPA | −0.19 | 268 | −0.12 | 235 | −0.22 | 133 |
| MAA | 0.43 | 267 | 0.40 | 240 | 0.47 | 135 |

Note: OVC1–OVC3 = overall visual model comprehension at measuring points 1–3; CCK = chemistry-related content knowledge (overall score); VER = verbal reasoning; FIR = figural reasoning; SPA = spatial ability; GEN = gender; GPA = grade point average; MAA = mathematical ability. With the exception of the correlation between GPA and OVC2, all correlations are significant at p < 0.001.


As can be seen in the table, with the exception of GPA at measuring point 2, all potential predictors correlate significantly with visual model comprehension at the beginning of the first, end of the first and end of the second semester.

Again, these correlations are a first indicator of the predictive value of some individual prerequisites for visual model comprehension. Just as in the previous analyses, we added multiple regression analyses to give these relations some directionality. Table 13 depicts the models that are best suited to predict visual model comprehension at the three points of measurement.

Table 13 Predictors of visual model comprehension: results of multiple regression analyses
| Predictor | OVC1: β | p | OVC2: β | p | OVC3: β | p |
|---|---|---|---|---|---|---|
| CCK | 0.305 | <0.001 | 0.442 | <0.001 | 0.403 | <0.001 |
| VER | 0.228 | <0.001 | 0.220 | <0.001 | 0.120 | 0.054 |
| FIR | 0.178 | <0.001 | 0.208 | <0.001 | 0.229 | <0.001 |
| SPA | 0.132 | <0.05 | 0.145 | <0.05 | 0.166 | <0.05 |
| MAA | 0.144 | <0.05 | | | | |
| GEN | 0.137 | <0.05 | | | 0.184 | <0.05 |
| R² | 0.482 | | 0.519 | | 0.611 | |
| N | 250 | | 230 | | 131 | |

Note: OVC1–OVC3 = overall visual model comprehension at measuring points 1–3; CCK = chemistry-related content knowledge (overall score); VER = verbal reasoning; FIR = figural reasoning; SPA = spatial ability; GEN = gender; GPA = grade point average; MAA = mathematical ability.


As can be seen in this table, visual model comprehension at all three points of measurement is predicted by a combination of chemistry-related content knowledge, general cognitive abilities in terms of verbal and figural reasoning, and spatial abilities. Gender adds to this at measuring points 1 and 3, and mathematical abilities appear to be an additional significant predictor at measuring point 1. The latter might be explained by the fact that at this very early stage of studies, the ability to handle formulas and read graphs and tables independently of content knowledge might be especially crucial, while at later stages, increased content knowledge could well compensate for a lack of these mathematical abilities. Depending on the point of measurement, these variables in combination are able to explain between about 48 and 61% of the variance in visual model comprehension. As expected, these predictors include figural reasoning and spatial abilities as “typical” visual competencies, but again, just as for the prediction of the study success variables, the strongest predictor of visual model comprehension at all three points of measurement is chemistry-related content knowledge. In short, RQ 4 can be answered positively in that we identified several significant predictors of visual model comprehension, most of which are individual learner characteristics that share some commonalities with visual model comprehension.

As a consequence of these findings, the strong mutual relation between visual model comprehension and chemistry-related content knowledge raised our interest. Obviously, both can have an impact on one another, and both, in turn, are also able to predict study success in terms of lecture exam grades. In this regard, the question arises whether they are indeed both direct predictors of study success, or whether it might make more sense to assume some kind of indirect effects. To shed more light on this, we calculated path analyses over the three points of measurement and investigated possible mediation effects. The results of these analyses are described next.

The interplay between visual model comprehension and chemistry-related content knowledge

Given that visual model comprehension and chemistry-related content knowledge both predict lecture exam grades, and that they are closely related to each other (with bivariate correlation coefficients considerably higher than the corresponding coefficients in the multivariate regression models), it could be that one of these effects is indirect rather than a direct prediction of study success. In this regard, two possibilities come to mind. First, students who start their chemistry studies with a certain amount of visual model comprehension could benefit from this ability because it helps them increase the content knowledge they acquire over the course of the semester, which in turn leads to better performance in the lecture exams. Second, it could well be the other way round: students have a certain amount of chemistry-related content knowledge (that is, prior knowledge) when they start their chemistry studies at university, and this knowledge helps them increase their visual model comprehension, which in turn leads to better performance in the lecture exams.

In the first case, chemistry-related content knowledge would act as a mediator between visual model comprehension and lecture exam grades, which would mean that there is no direct relation between visual model comprehension and grades, but that its predictive power operates via increased content knowledge. In the second case, vice versa, visual model comprehension would be the mediator.

We investigated both assumptions by means of path analyses, each of which tested for a double mediation effect. These analyses revealed significant models for the second assumption, in which visual model comprehension acts as a mediator between prior content knowledge, acquired content knowledge and lecture exam grades. Fig. 2 and 3 depict the results of these analyses for the two lecture exams in introductory chemistry and organic chemistry, respectively.
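
To illustrate what such a mediation analysis involves, the following sketch estimates the first mediation step of Fig. 2 with the pingouin package in Python. Note that pingouin handles a single mediator per call, so the serial (“double”) mediation reported here would have to be chained or fitted as a full path model in an SEM package; the variable names (ick1, ovc1, ick2) are illustrative placeholders, not our original variable labels.

import pandas as pd
import pingouin as pg

df = pd.read_csv("freshmen_data.csv")      # hypothetical file name

# First mediation step of Fig. 2: does visual model comprehension at measuring
# point 1 (ovc1) mediate the path from prior introductory chemistry knowledge
# (ick1) to acquired knowledge at the end of the first semester (ick2)?
data = df[["ick1", "ovc1", "ick2"]].dropna()
res = pg.mediation_analysis(data=data, x="ick1", m="ovc1", y="ick2",
                            n_boot=5000, seed=42)
print(res)   # paths a and b, direct and indirect effects with bootstrap CIs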


Fig. 2 Mediation analyses for the relation between chemistry-related content knowledge and lecture exam grades in introductory chemistry mediated by visual model comprehension (N = 243 & N = 177, **p < 0.001, *p < 0.05). Note: OVC1–OVC2 = overall visual model comprehension at measuring points 1 and 2; ICK1–2 = introductory chemistry content knowledge at measuring points 1 and 2; LIC = lecture exam introductory chemistry at measuring point 2.

Fig. 3 Mediation analyses for the relation between chemistry-related content knowledge and lecture exam grades in organic chemistry mediated by visual model comprehension (N = 131 & N = 86, **p < 0.001, *p < 0.05). Note: OVC2–OVC3 = overall visual model comprehension at measuring points 2 and 3; OCK2–3 = organic chemistry content knowledge at measuring points 2 and 3; LOC = lecture exam organic chemistry at measuring point 3.

As can be seen in Fig. 2, visual model comprehension in a first step mediates the relation between chemistry-related (prior) content knowledge and acquired content knowledge, and in a second step the relation between this acquired content knowledge and the lecture exam grades in introductory chemistry. This means that students start their studies with a certain amount of content knowledge about introductory chemistry. This helps them increase their visual model comprehension, which in turn predicts the amount of content knowledge that is acquired over the course of the first semester. This increased content knowledge again increases visual model comprehension, which then predicts how well students perform in their lecture exam on introductory chemistry. Both mediations are partial rather than complete. That is, although the relation between prior content knowledge and acquired content knowledge and the relation between acquired content knowledge and lecture exam grades are in both cases mediated by visual model comprehension, the direct effect does not completely disappear when taking visual model comprehension into account. In other words, chemistry-related content knowledge still has predictive value of its own.
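
In formal terms (our notation, not taken from the original analyses), each mediation step can be summarized by the standard effect decomposition

c = c′ + a·b,

where a is the path from the predictor to the mediator, b the path from the mediator to the outcome when controlling for the predictor, c the total effect and c′ the direct effect. Partial mediation means that the indirect effect a·b is significant while c′ also remains significantly different from zero; in a complete mediation, c′ would no longer differ from zero once the mediator is included.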

A similar pattern of results emerges when looking at the effects with regard to organic chemistry (Fig. 3). Again, visual model comprehension partially mediates the relation between prior organic chemistry content knowledge at the end of the first semester and the respective acquired knowledge at the end of the second semester. This acquired knowledge in turn increases visual model comprehension again. However, this time, the predictive value of visual model comprehension for the lecture exam grades in organic chemistry does not reach statistical significance. In other words, for the second step, we cannot statistically confirm the partial mediation that appears on a descriptive level.

Results summary

Summing up, our results largely confirm our expectations regarding the role that visual model comprehension plays at the beginning of chemistry studies and over the course of the first two semesters. We were able to show that visual model comprehension is an individual learner prerequisite that comprises domain-specific as well as domain-independent aspects. Furthermore, visual model comprehension, although closely related to learner characteristics such as spatial abilities and figural reasoning, is a distinct variable that is able to predict study success in terms of chemistry-related content knowledge gains and lecture exam grades. In this regard, visual model comprehension seems to act as a mediator between the knowledge that students already possess when they start their studies at university and the knowledge they acquire throughout their subsequent courses.

Discussion and outlook

In our study, we aimed to shed more light on the question of whether visual model comprehension is a distinct individual learner characteristic that can explain a certain amount of variance in chemistry study success. This question was embedded in the broader question of which predictors of study success and study drop-out can be identified in science and engineering university courses.

To do so, we first conducted a textbook analysis to find out which types of visualizations can be found in relevant university chemistry textbooks. Based on this analysis, we developed an instrument to assess visual model comprehension with items that are embedded within domain-specific as well as within domain-independent contexts. This visual model comprehension test was subsequently used in the main study to assess visual model comprehension and to investigate how it can predict study success in chemistry and how in turn it can be predicted by other individual learner characteristics.

Both the pilot study and the main study confirmed our expectations regarding the role of visual model comprehension to a large extent. However, there are of course limitations to both studies as well as aspects that need further discussion. These will be outlined below.

Pilot study – textbook analysis and visual model comprehension test

Research literature assumes that visualizations have a crucial function for knowledge acquisition and learning success in science (Harrison and Treagust, 2000; Wu and Shah, 2004). Our textbook analysis findings support this assumption in that visualizations can be found on more than 85% of the pages of university chemistry textbooks.

In this regard, the validity and reliability of our textbook analysis can be considered as given. The classification scheme as well as the coding guidelines were theory-based (Schnotz, 2005), and the empirical findings support this, for instance through the high interrater reliability with Cohen's kappas of at least 0.89.
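
For illustration, interrater agreement of this kind can be computed directly from a coding sheet; the following minimal sketch uses scikit-learn, with the file and column names as hypothetical placeholders rather than our actual coding files.

import pandas as pd
from sklearn.metrics import cohen_kappa_score

# Hypothetical coding sheet: one row per visualization, one column per rater,
# with the category assigned by each rater (e.g., iconic, symbolic, hybrid).
codes = pd.read_csv("textbook_codings.csv")
kappa = cohen_kappa_score(codes["rater_1"], codes["rater_2"])
print(f"Cohen's kappa = {kappa:.2f}")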

However, it should be taken into account that our analysis focused on the quantity rather than the quality of the visualizations in the textbooks. The division into decorative and instructional, and further into iconic, symbolic and hybrid, is rather coarse, so that high agreement between raters is comparatively easy to achieve. In future research, we might attempt to categorize visualizations in chemistry on a more fine-grained level to find out which types of iconic, symbolic or hybrid visualizations are especially prominent in instructional materials in chemistry studies and thus, when assessing visual model comprehension, which visualizations are the most difficult ones for students. This might give a more differentiated insight into the question of how visual model comprehension, and thus study success, can be fostered.

Furthermore, future analyses might also take chemistry textbooks from a broader range of subfields into account. Finally, our results are only generalizable to the domain of chemistry. It might well be that other sciences, such as physics or biology, although their textbooks might contain a similar quantity of visualizations, reveal different results with regard to quality, that is, to the visualization types (for instance, it may well be assumed that symbolic visualizations outweigh iconic ones in physics, whereas the reverse could be true in biology).

This textbook analysis served as the basis for the development of the visual model comprehension test, the central instrument of our research, whose validation in the pilot study was conducted to answer our first research question. The visual model comprehension test was developed in accordance with the visualization types found in the textbook analysis and shows satisfactory to good reliability values in terms of internal consistencies. After item deletion, the overall scale of the visual model comprehension test shows Cronbach's alphas of >0.80.
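
Internal consistency of this kind can be checked with a few lines of code; the sketch below uses the pingouin package, and both the file name vmc_items.csv and the assumption of 0/1-scored items are illustrative placeholders only.

import pandas as pd
import pingouin as pg

# Hypothetical item-level data: one row per student, one column per (0/1-scored)
# item of the visual model comprehension test after item deletion.
items = pd.read_csv("vmc_items.csv")
alpha, ci = pg.cronbach_alpha(data=items)
print(f"Cronbach's alpha = {alpha:.2f} (95% CI: {ci[0]:.2f}-{ci[1]:.2f})")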

The validity of the visual model comprehension test was a little more complicated to establish. This was due to the fact that, unlike cognitive abilities such as intelligence, visual model comprehension is not yet an exhaustively investigated construct, which is why we used our own preliminary working definition. What we can say, however, is that the subscales and the overall scale of the visual model comprehension test correlate significantly with each other and across time. Furthermore, confirmatory factor analyses show a three-dimensional structure of visual model comprehension and confirm that it is a construct that, although it correlates with general cognitive abilities, spatial ability and chemistry-related content knowledge, can still be separated from them. Future analyses might shed more light on this question by investigating in more detail what exactly separates visual model comprehension from other “visual” learner characteristics such as spatial ability or figural reasoning. In addition, although our results already give first indications that visual model comprehension is a dynamic construct that can change over time, future studies should investigate this more deeply, especially with regard to the question of whether this applies to the domain-specific as well as to the domain-independent scales of the visual model comprehension test or whether there are differences in the development (and thus the supportability) of visual model comprehension.
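
As an illustration of such a confirmatory factor analysis, the following sketch fits a three-factor measurement model with the Python package semopy. The factor and indicator names are placeholders for the actual subscales and items of the test, and we do not imply that the original analyses were carried out with this software.

import pandas as pd
import semopy

# Hypothetical three-factor measurement model; factor and indicator names are
# placeholders for the actual subscales and items.
desc = """
factor_1 =~ item_1 + item_2 + item_3
factor_2 =~ item_4 + item_5 + item_6
factor_3 =~ item_7 + item_8 + item_9
"""
data = pd.read_csv("vmc_items.csv")
model = semopy.Model(desc)
model.fit(data)
print(semopy.calc_stats(model))   # fit indices such as CFI and RMSEA
print(model.inspect())            # factor loadings and factor covariances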

In conclusion, the visual model comprehension test appears to be a reliable and valid instrument (RQ 1b) that is suitable to assess visual model comprehension as a distinct, multidimensional construct (RQ 1) on a domain-specific as well as on a domain-general level (RQ 1a). Our findings and the instrument scales are in line with assumptions derived from theory. Future research might investigate whether even better results could be achieved with items assigned to more fine-grained categories, or whether it might make sense to work with an instrument that, instead of domain-specific and domain-independent scales, divides items more generally into different types of iconic and symbolic ones.

Main study – visual model comprehension and its role for study success in chemistry

The main objective of our study was to examine the role of visual model comprehension with regard to academic success in chemistry. In this regard, our second, third and fourth research questions focused on characteristics of visual model comprehension, namely whether it is a stable trait or can change over time (RQ 2), whether and how it predicts study success in chemistry (RQ 3) and whether and how, in turn, it can be predicted by other relevant individual learner characteristics (RQ 4).

In this regard, and based on the overwhelming amount of research within the last decades that has investigated the role of visualizations for science learning and beyond (cf. Schnotz, 2008; Gilbert and Treagust, 2009; Mayer, 2009; Treagust et al., 2017), we assumed that visual model comprehension would predict grades in chemistry lecture exams as well as chemistry content knowledge at university. In line with this, our correlation and regression analyses show that visual model comprehension relates to the grades in the chemistry lecture exams of the first two semesters and to chemistry content knowledge in a medium to high range, and that these relations are significant, with the exception of the direct predictive value of visual model comprehension for the grades in the organic chemistry lecture exam at the end of the second semester.

Initially, this latter finding was a little surprising. Of course, the results could be due to the decreasing number of participants. As mentioned earlier, at the end of the second semester, only 137 of the initial 275 students had taken part in the third measuring point. In other words, the drop-out from the first to the third point of measurement comprised half of our initial sample. It could thus be the sheer number of cases that leads to these different results. It could also well be that the students who still took part were generally more motivated than the initial sample. Lastly, it could be that not motivation but general study success plays the deciding role here. For the 138 students who decided not to take part in the third measuring point, we cannot say whether they did so because they just weren’t interested anymore, because they had too many other obligations with regard to their chemistry studies in the meantime, or whether they had simply stopped studying chemistry altogether. This would be interesting to look at; however, data protection laws do not allow us to follow up on these cases once they are no longer part of the sample. From a pragmatic point of view, however, the latter interpretation seems more plausible. An argument in favor of this is that this dramatic drop-out took place only at the third point of measurement, that is, at the end of the second semester. During the first semester, when the first two points of measurement took place, we were able to continue working with almost 90% of the initial sample (see Table 7). It might thus well be that a remarkable number of students quit their chemistry studies (or changed to another study program) just after the first semester, which is a common finding (cf. Heublein, 2014; Heublein et al., 2017) and would also fit our observation that the overall number of students who remained in the chemistry study programs at the two universities involved in our project had decreased remarkably during that time.

Furthermore, a very simple explanation for the missing relation could also be that preparing for the organic chemistry lecture exam did not require content- and visualization-based learning, but consisted of some kind of “teaching to the test”, that is, students might simply have learned with exams from previous semesters instead of really revisiting the lecture contents. This is plausible because it would also explain why, at the same time that visual model comprehension does not predict the organic chemistry lecture exam grades, it still predicts content knowledge in organic chemistry.

Finally, the results of our mediation analyses point out that visual model comprehension plays the role of a mediator between chemistry-related (prior) content knowledge and the corresponding content knowledge gains and lecture exam grades. In short, visual model comprehension seems to play an important role for studying chemistry at university in that it enhances learning success on the one hand, but is also enhanced by the knowledge already available at the beginning of studies on the other. This finding bears two important implications. First, if visual model comprehension predicts knowledge acquisition, it might be worth fostering it by means of training programs that focus on the visual and spatial aspects (e.g., learning how to “read” visualizations and which conventions stand for which content-related aspects) rather than on the content itself. Such general training programs would also give more insight into the question of whether this construct is a chemistry-specific ability or also enhances learning success in other science-related domains. Second, if the knowledge that students bring from high school when starting their university studies has such an impact on visual model comprehension, it might also be worth investigating whether this works vice versa at school already.

Finally, we found that visual model comprehension is in turn also predicted by a number of individual prerequisites, of which spatial ability and figural reasoning appear to be the most plausible ones. However, we also found a relation with gender and mathematical abilities, especially at the very beginning of studies, and this needs further investigation, since, with the exception of gender, the predictors of visual model comprehension should also be taken into account when considering ways to increase visual model comprehension and thus study success.

In sum, visual model comprehension is a dynamic construct that increases over the course of the first two semesters (RQ 2), that can predict study success in chemistry in terms of content knowledge acquisition and exam grades (RQ 3) and that is in turn predicted by several other individual learner characteristics (RQ 4). This latter finding, however, also points to an important limitation of our study and of the interpretation of its results. Although the role of visual model comprehension is undoubtedly significant and positive, we have to keep in mind that it is neither the strongest nor the only predictor of chemistry study success. That is, if we want to support students in improving their learning as well as their grades in chemistry, we should think of more comprehensive intervention programs than just training visual model comprehension. In other words, the big picture is always made up of many small puzzle pieces, of which visual model comprehension is only one, albeit one that should not be missing.

Limitations and outlook

Our study was able to shed a little more light on the question of how visual model comprehension relates to study success in chemistry; however, several limitations should be taken into account. First, we defined study success in terms of performance in standardized content knowledge tests as well as grades in lecture exams. This operationalization does not include other possible criteria of study success such as student satisfaction or drop-out rate. We did assess cognitive load at the three measurement points, however, and the corresponding results show that perceived difficulty and invested mental effort continuously decrease over the course of the first two semesters. These two items were rather general and unspecific, though, and an upcoming study needs to analyze in more detail which aspects students refer to when indicating their perceived difficulty and invested mental effort (cf. validity of cognitive load measurement; Sweller et al., 2011).

With regard to our participants, the results were generated on the basis of initially 275 chemistry students from two large universities in Germany. These universities were very comparable with regard to their schedules and study requirements, so it might be worth having a look at different universities to find out whether the pattern of results stays the same, for instance for study courses with different foci. Furthermore, we accompanied the students over the course of the first two semesters “only”. It would be interesting to see whether the path models and mediations stay similar over the further course of studies in higher semesters as well. This could only be done by a comprehensive longitudinal study, which would mean facing challenges in terms of sample acquisition (as can already be guessed from the reduced number of participants after the second semester) and a project organization that does not coincide with the regular study progress.

With regard to visual model comprehension, although our instrument appears to be valid and reliable and can be administered economically and analyzed quickly due to its multiple-choice structure, it might also be worth taking into account more comprehensive approaches to assessing the ability to work with visualizations. For instance, Cooper et al. (2012) developed an instrument to assess students’ abilities to process implicit information from Lewis structures and accordingly decode learning-relevant information that is contained in structure–property connections. This instrument works with open-ended questions and student interviews, and combining the two approaches might give additional and valuable insights not only into the quantitative aspects of visual model comprehension (how much do students know?), but also into the qualitative side (what do they know, and if they don’t know, where exactly are their deficits and misconceptions?). In other words, extending the ways to assess visual model comprehension would also provide more information on what exactly we can support, especially if students appear to lack the necessary abilities to cope successfully with the demands of chemistry study programs.

Finally, one important limitation of our study is that we investigated the role of visual model comprehension for learning with the existing materials and requirements of the first two university semesters. Unlike traditional multimedia research, however, we did not investigate the role of the visualizations themselves. That is, we now know more about how important visual model comprehension (and other individual learner characteristics) is for studying chemistry within the given learning scenario, but we do not know which role is played by the learning materials (that is, the instructional design) themselves. In this regard, and coming back to classical multimedia research, it might be worth investigating whether, when and under which circumstances visual model comprehension and thus learning success is higher when chemistry students learn with iconic visualizations like space-filling models (which might be more helpful, for instance, for learning about the more concrete structure of matter) or with symbolic-mathematical visualizations like Lewis structures or diagrams (which, for instance, might be more helpful when learning about the more abstract concept of energy). This could again give valuable insights into the more general question of what should be fostered, and how, if we want all students to benefit equally from study courses at university.

Conflicts of interest

There are no conflicts to declare.

Appendix 1 – examples for the chemistry-specific items

Further examples of the chemistry-specific items of the visual model comprehension test. Note: The original (and validated) items and the test are in German. The items shown here are translated examples that have not yet been subject to validation with an English-speaking sample.
[Example items are shown as an image in the original article (c9rp00016j-u4.tif).]

Acknowledgements

The preparation of this paper was supported by grants RU 1437/5-1 and OP 192/2-1 from the German Research Foundation (DFG) within the research group “Academic Learning and Study Success in the Entry Phase of Science and Technology Study Programs” (ALSTER; FOR 2242).

References

  1. Ainsworth S., (2006), DeFT. A conceptual framework for considering learning with multiple representations, Learn. Instruct., 16(3), 183–198.
  2. Ainsworth S., (2008), The Educational Value of multiple representations when learning complex scientific concepts, in Gilbert J. K., Reiner M. and Nakhleh M. (ed.), Visualization: Theory and Practice in Science Education, Dordrecht, Netherlands: Springer, pp. 191–208.
  3. Atkins P. W., de Paula J. and Bär M., (2013), Physikalische Chemie. [Physical Chemistry.], Weinheim: Wiley-VCH.
  4. Baker S. R. and Talley L., (1972), The relationship of visualization skills to achievements in freshman chemistry, J. Chem. Educ., 49(11), 775–776.
  5. Bruice P. Y., (2011), Organische Chemie. Studieren kompakt [Organic chemistry. Compact studying.], München: Pearson Studium.
  6. Carter C. S., Larussa M. A. and Bodner G. M., (1987), A study of two measures of spatial ability as predictors of success in different levels of general chemistry, J. Res. Sci. Teach., 24(7), 645–657.
  7. Chen X., (2013), STEM Attrition: College Students’ Paths Into and Out of STEM Fields (NCES 2014-001), Washington, DC: National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education.
  8. Clement J., Zietsman A. and Monaghan J., (2005), Imagery in Science Learning in Students and Experts, in Gilbert J. K. (ed.), Visualization in Science Education. Models and Modeling in Science Education, Dordrecht: Springer, vol. 1.
  9. Coleman J. M., McTigue E. M. and Smolkin L. B., (2010), Elementary Teachers’ Use of Graphical Representations in Science Teaching, J. Sci. Teach. Educ., 22(7), 613–643.
  10. Coll R. K. and Lajium D., (2011), Modeling and the Future of Science Learning, in Saleh I. M. and Khine M. S. (ed.), Models and Modeling. Cognitive Tools for Scientific Enquiry, Dordrecht, Netherlands: Springer, pp. 3–21.
  11. Cook M., Wiebe E. N. and Carter G., (2008), The influence of prior knowledge on viewing and interpreting graphics with macroscopic and molecular representations, Sci. Educ., 92(5), 848–867.
  12. Cooper M., Underwood S. M. and Hilley C. Z., (2012), Development and validation of the implicit information from Lewis structures instrument (IILSI): do students connect structures with properties? Chem. Educ. Res. Pract., 13, 195–200.
  13. Derrick M. E. and Derrick F. W., (2002), Predictors of Success in Physical Chemistry, J. Chem. Educ., 79(8), 1013–1016.
  14. Ekstrom R. B., French J. W., Harman H. H. and Dermen D., (1976), Manual for Kit of Factor-Referenced Cognitive Tests, Princeton, NJ: Educational Testing Service.
  15. Freyer K., (2013), Zum Einfluss von Studieneingangsvoraussetzungen auf den Studienerfolg Erstsemesterstudierender im Fach Chemie. [Impact of study prerequisites on study success of first semester chemistry students.], Berlin: Logos.
  16. Gilbert J. K. and Treagust D. F., (2009), Multiple Representations in Chemical Education, Dordrecht, Netherlands: Springer.
  17. Harrison A. G. and Treagust D. F., (2000), A typology of school science models, Int. J. Sci. Educ., 22(9), 1011–1026.
  18. Heller K. A. and Perleth C., (2000), Kognitiver Fähigkeitstest für 4. bis 12. Klassen: KFT 4-12 + R. [Cognitive Abilities Test.], Göttingen: Beltz-Test.
  19. Heublein U., (2014), Student Drop-out from German Higher Education Institutions, Eur. J. Educ., 49(4), 497–513.
  20. Heublein U., Ebert J., Hutzsch C., Isleib S., König R., Richter J. and Woisch A., (2017), Zwischen Studienerwartungen und Studienwirklichkeit: Ursachen des Studienabbruchs, beruflicher Verbleib der Studienabbrecherinnen und Studienabbrecher und Entwicklung der Studienabbruchquote an deutschen Hochschulen. [Between study expectations and reality: Causes for study drop-out, further career of drop-outs and development of drop-out rates at German universities.], Hannover: Deutsches Zentrum für Hochschul- und Wissenschaftsforschung (DZHW).
  21. Höffler T., Schmeck A. and Opfermann M., (2013), Static and Dynamic Visual Representations. Individual Differences in Processing, in Robinson D. R., Schraw G. J. and McCrudden M. T. (ed.), Learning Through Visual Displays, Charlotte, NC: Information Age Publishing, pp. 133–163.
  22. Housecroft C. E., Sharpe A. G. and Rompel A., (2006), Anorganische Chemie. [Inorganic Chemistry.], München: Pearson Studium.
  23. Jiang B., Xu X., Garcia A. and Lewis J. E., (2010), Comparing Two Tests of Formal Reasoning in a College Chemistry Context, J. Chem. Educ., 87(12), 1430–1437.
  24. Johnstone A. H., (2000), Teaching of chemistry – logical or psychological? Chem. Educ. Res. Pract., 1(1), 9–15.
  25. Kalyuga S., Chandler P. and Sweller J., (1999), Managing split-attention and redundancy in multimedia instruction, Appl. Cognit. Psychol., 13(4), 351–371.
  26. Kalyuga S., Ayres P., Chandler P. and Sweller J., (2003), The Expertise Reversal Effect, Educ. Psychol., 38(1), 23–31.
  27. Kararo A. T., Colvin R. A., Cooper M. M. and Underwood S. M., (2019), Predictions and constructing explanations: an investigation into introductory chemistry students' understanding of structure–property relationships, Chem. Educ. Res. Pract., 20, 316–328.
  28. Kennepohl D., Guay M. and Thomas V., (2010), Using an Online, Self-Diagnostic Test for Introductory General Chemistry at an Open University, J. Chem. Educ., 87(11), 1273–1277.
  29. Lenzner A., Schnotz W. and Müller A., (2013), The role of decorative pictures in learning, Instruct. Sci., 41(5), 811–831.
  30. Leutner D., Fleischer J. and Wirth J., (2006), Problemlösekompetenz als Prädiktor für zukünftige Kompetenz in Mathematik und in den Naturwissenschaften. [Problem solving competency as a predictor for future competency in mathematics and science.], in Prenzel M., Baumert J., Blum W., Lehmann R., Leutner D., Neubrand M., Pekrun R., Rost J. and Schiefele U. (ed.), PISA 2003. Untersuchungen zur Kompetenzentwicklung im Verlauf eines Schuljahres, Münster: Waxmann, pp. 119–137.
  31. Leutner D., Opfermann M. and Schmeck A., (2014), Lernen mit Medien [Learning with media.], in Seidel T. and Krapp A. (ed.), Pädagogische Psychologie, Weinheim: Beltz, pp. 297–322.
  32. Lewis S. E. and Lewis J. E., (2007), Predicting at-risk students in general chemistry: comparing formal thought to a general achievement measure, Chem. Educ. Res. Pract., 8(1), 32–51.
  33. Mayer R. E., (2009), Multimedia learning, Cambridge: Cambridge University Press, 2nd edn.
  34. Mayer R. E., (2014), The Cambridge handbook of multimedia learning, 2nd edn, Cambridge: Cambridge University Press.
  35. Mayer R. E. and Moreno R., (1998), A split-attention effect in multimedia learning, J. Educ. Psychol., 90(2), 312–320.
  36. McElhaney K. W., Chang H. Y., Chiu J. L. and Linn M. C., (2015), Evidence for effective uses of dynamic visualisations in science curriculum materials, Stud. Sci. Educ., 51(1), 49–85.
  37. Moosbrugger H. and Schermelleh-Engel K., (2008), Exploratorische (EFA) und Konfirmatorische Faktorenanalyse (CFA). [Exploratory and confirmatory factor analyses.], in Moosbrugger H. and Kelava A. (ed.), Testtheorie und Fragebogenkonstruktion, Berlin: Springer, pp. 307–324.
  38. Mortimer C. E. and Müller U., (2003), Chemie. Das Basiswissen der Chemie. [Chemistry basic knowledge.], Stuttgart: Thieme.
  39. Müller J., Stender A., Fleischer J., Borowski A., Dammann E., Lang M. and Fischer H. E., (2018), Mathematisches Wissen von Studienanfängern und Studienerfolg. [Mathematical knowledge and study success of study beginners.], Chim. Didact., 24(1), 1–17.
  40. Nicoll G. and Francisco J. S., (2001), An Investigation of the Factors Influencing Student Performance in Physical Chemistry, J. Chem. Educ., 78(1), 99–102.
  41. Niegemann H. M., Domagk S., Hessel S., Hein A., Hupfer M. and Zobel A., (2008), Kompendium multimediales Lernen [Multimedia learning compendium.], Berlin, Heidelberg: Springer.
  42. OECD, (2011), Education at a Glance 2011: OECD Indicators, OECD Publishing.
  43. Oliveira A. W., Rivera S., Glass R., Mastroianni M., Wizner F. and Amodeo V., (2013), Teaching Science Through Pictorial Models During Read-Alouds, J. Sci. Teach. Educ., 24(2), 367–389.
  44. Oliver-Hoyo M. and Sloan C., (2014), The development of a Visual-Perceptual Chemistry Specific (VPCS) assessment tool, J. Res. Sci. Teach., 51(8), 963–981.
  45. Opfermann M., Schmeck A. and Fischer H. E., (2017), Multiple representations in physics and science education – Why should we use them? in Treagust D. F., Duit R. and Fischer H. E. (ed.), Multiple Representations in Physics Education, Cham: Springer International Publishing, pp. 1–22.
  46. Paas F., (1992), Training strategies for attaining transfer of problem-solving skill in statistics, J. Educ. Psychol., 84(4), 429–434.
  47. Parkerson J. A., Lomax R. G., Schiller D. P. and Walberg H. J., (1984), Exploring causal models of education achievement, J. Educ. Psychol., 76(4), 638–646.
  48. Prereira J. A., (2018), Fakten und Trends: Chemiestudiengänge in Deutschland. [Facts and trends: Chemistry study programs in Germany.], Nachr. Chem., 66(7–8), 785–793.
  49. Ramadas J., (2009), Visual and spatial modes in science learning, Int. J. Sci. Educ., 31(3), 301–318.
  50. Rau M. A., (2017), Conditions for the Effectiveness of Multiple Visual Representations in Enhancing STEM Learning, Educ. Psychol. Rev., 29(4), 717–761.
  51. Schnotz W., (2005), An integrated model of text and picture comprehension, in Mayer R. E. (ed.), The Cambridge handbook of multimedia learning, Cambridge: Cambridge University Press, pp. 49–69.
  52. Schnotz W., (2008), Why multimedia learning is not always helpful, in Rouet J. F., Lowe R. and Schnotz W. (ed.), Understanding multimedia documents, New York: Springer, pp. 17–41.
  53. Seery M. K., (2009), The role of prior knowledge and student aptitude in undergraduate performance in chemistry: a correlation-prediction study, Chem. Educ. Res. Pract., 10, 227–232.
  54. Staver J. R. and Jacks T., (1988), The influence of cognitive reasoning level, cognitive restructuring ability, disembedding ability, working memory capacity, and prior knowledge on students' performance on balancing equations by inspection, J. Res. Sci. Teach., 25(9), 763–775.
  55. Stieff M., (2010), When is a molecule three dimensional? A task-specific role for imagistic reasoning in advanced chemistry, Sci. Educ., 95, 310–336.
  56. Stieff M., Hegarty M. and Deslongchamps G., (2011), Identifying Representational Competence With Multi-Representational Displays, Cognit. Instruct., 29, 123–145.
  57. Sweller J., Ayres P. and Kalyuga S., (2011), Cognitive Load Theory, New York: Springer.
  58. Tai R. H., Sadler P. M. and Loehr J. F., (2005), Factors influencing success in introductory college chemistry, J. Res. Sci. Teach., 42(9), 987–1012.
  59. Treagust D. F., Duit R. and Fischer H. E., (2017), Multiple representations in physics education, Cham: Springer International Publishing.
  60. Tuckey H., Selvaratnam M. and Bradley J., (1991), Identification and rectification of student difficulties concerning three-dimensional structures, rotation and reflection, J. Chem. Educ., 48, 460–464.
  61. Van Merriënboer J. J. G. and Sweller J., (2005), Cognitive load theory and complex learning: recent developments and future directions, Educ. Psychol. Rev., 17(2), 147–177.
  62. Wu H.-K. and Shah P., (2004), Exploring visuospatial thinking in chemistry learning, Sci. Educ., 88 (3), 465–492.
