Thomas Dickmann,*a Maria Opfermann,b Elmar Dammann,c Martin Lang a and Stefan Rumann a
aUniversity of Duisburg-Essen, 45127 Essen, Germany. E-mail: thomas.dickmann@uni-due.de
bRuhr-University Bochum, 44780 Bochum, Germany
cFurtwangen University, 78120 Furtwangen, Germany
First published on 7th June 2019
Visualizations and visual models are of substantial importance for science learning (Harrison and Treagust, 2000), and it seems impossible to study chemistry without visualizations. More specifically, the combination of visualizations with text is especially beneficial for learning when dual coding is fostered (Mayer, 2014). At the same time, however, comprehending visualizations and visual models appears to be rather difficult for learners (e.g., Johnstone, 2000). This may be one reason for the difficulties students experience especially during the university entry phase, which in a worst-case scenario can result in the high university drop-out rates currently found in science-related study courses (Chen, 2013). Against this background, our study investigates how the ability to handle and learn with visualizations – which we call visual model comprehension – relates to academic success at the beginning of chemistry studies. To do so, we collected data from 275 chemistry freshmen during their first university year. Our results show that visual model comprehension is a key factor for students' success in chemistry courses. For instance, visual model comprehension predicts exam grades in introductory chemistry courses as well as general chemistry content knowledge. Furthermore, our analyses indicate that visual model comprehension acts as a mediator for the relation between prior knowledge and (acquired) content knowledge in chemistry studies. Given this importance of visual model comprehension, our findings provide valuable insights regarding approaches to fostering chemistry comprehension and learning, especially for students at the beginning of their academic career.
Visualizations and visual models have always played a major role in chemistry learning and accordingly in learning materials such as textbooks. This refers to different kinds of visualizations (e.g., Johnstone, 2000; Gilbert and Treagust, 2009): those on a phenomenological or macro-type level (representing empirical properties of compounds), those on a model or sub-micro-type level (external representations, e.g., ball-and-stick models) and those on a symbolic level (the sub-micro type further simplified to symbols, e.g., structural formulas). In chemistry textbooks and learning materials, visual models, as one kind of external representation, are thus a useful means of making invisible microscopic aspects concrete and thereby of supporting the understanding of chemical processes and reactions.
It is thus not surprising that research proposes visualizations and visual models to be highly important for substantial learning in chemistry (Harrison and Treagust, 2000; Ramadas, 2009; Coll and Lajium, 2011). Moreover, the use of visualizations and visual models in chemistry is omnipresent, so in short: “Chemistry is a visual science” (Wu and Shah, 2004).
When talking about the benefits of visualizations and visual models for chemistry learning, we can do so with regard to different aspects. In general, the literature indicates that visualizations can support learning in general when they are added to text (Schnotz, 2005; Mayer, 2014). This benefit, however, depends on the learners’ ability to comprehend and “read” the visualization itself, and this ability, in turn, relates to several other individual learner characteristics (Höffler et al., 2013).
Thus, when talking about the above-mentioned high dropout rates for chemistry-related university courses (Chen, 2013), we should take all of these aspects into account to get a comprehensive picture of the role of visual model comprehension. Furthermore, since dropout takes place especially in the first year at university (Chen, 2013; Heublein et al., 2017), our study focuses on the role of visual model comprehension with regard to the academic success of chemistry freshmen. Here, the influence of visual model comprehension on academic success and the question of which predictors in turn have an impact on visual model comprehension are of special interest.
When looking for learner-related reasons that account for success at the beginning of chemistry studies, research indicates that besides general variables such as cognitive abilities, grade point average, learning motivation or learning strategies (e.g., Baker and Talley, 1972; Wu and Shah, 2004; Tai et al., 2005), domain-specific variables such as prior knowledge (Cook et al., 2008; Seery, 2009), mathematical abilities (Derrick and Derrick, 2002; Nicoll and Francisco, 2001) and spatial ability (Wu and Shah, 2004) are predictive of good performance in chemistry and its sub-disciplines. These learner-related characteristics play a major role with regard to the question of how learners are able to deal with chemistry contents and learning materials. The latter comprise multiple forms of visualizations, which can be found on nearly every page of current chemistry textbooks. While these visualizations are meant to foster learning by adding something to the text, they must themselves be recognized and understood to be beneficial. These issues will be focused on in the next two paragraphs.
Unfortunately, such ability seems to be lacking rather often when learning about chemistry. In this regard, Gilbert and Treagust (2009), in line with Johnstone (2000), assume that chemistry students have difficulties especially with regard to the above-mentioned macroscopic, sub-microscopic and symbolic types of chemistry visualizations (Fig. 1). More specifically, Johnstone (2000) states that learners might not be able to distinguish between and thus think at these three levels simultaneously. In a similar fashion, Kararo et al. (2019) found that students, especially in the university entry phase, often struggle with understanding structure–property relationships, which is for instance reflected in inaccurate drawings (e.g., of hydrogen bonds) and the inability to predict, argue about and explain such relationships.
This demonstrates that students must handle quite different visualizations and should thus be able to learn with visualizations of different kinds. This ability is crucial because the learning success depends at least partly on the comprehension of visualizations (Harrison and Treagust, 2000; Ramadas, 2009; Coll and Lajium, 2011).
These assumptions, in turn, are in line with research that takes the view that the knowledge acquisition of the structure of molecules depends on the comprehension of the figural nature of the molecules and their representations (Oliver-Hoyo and Sloan, 2014), that some kind of representational competence is needed to cope with the often multi-representational and three-dimensional depiction of molecules and organic chemistry contents (Stieff, 2010; Stieff, Hegarty and Deslongchamps, 2011), and that visualizations are essential elements of scientific communication (Coleman et al., 2010; Oliveira et al., 2013).
To sum up at this point, research and theory both underline the importance of visualizations in science learning but at the same time emphasize one major difficulty. When learning chemistry with visualizations, learners are often required to comprehend content they do not yet understand with the help of visualizations they are not used to (Ainsworth, 2006; McElhaney et al., 2015). This is called the representation dilemma (Rau, 2017) and implies that before visualizations are beneficial for chemistry learning, they need to be understood in themselves, followed by the ability to relate them to their textual counterparts (Ainsworth, 2008; McElhaney et al., 2015). In other words, visual model comprehension is a necessary prerequisite for successful chemistry learning.
This definition includes assumptions about both the characteristics of the learner and those of the visualizations themselves. As a consequence, when investigating the impact of visual model comprehension on learning, both of these should be examined more closely.
With regard to the type of visualization, a distinction can be made in a first step between decorative and instructional visualizations (Mayer, 2009). While decorative visualizations do not directly refer to the comprehension of the content to be learned and are assumed to affect learning through their motivational potential (Lenzner et al., 2013), instructional visualizations, as the name suggests, are meant to foster comprehension directly through their explanatory character (Leutner et al., 2014).
Instructional visualizations can further be divided into symbolic and iconic visualizations. Iconic visualizations have a more depictive character (Schnotz, 2005) in that they have structural commonalities with their reference objects (Niegemann et al., 2008). For instance, the drawing of a car looks like a car in reality. In contrast, symbolic visualizations have a more descriptive character and no similarity to the object they are meant to describe. For instance, the formula C2H5OH represents alcohol but does not look like it. With regard to (chemistry) learning, it can thus be said that iconic visualizations, like models of molecules or a picture of a distillation apparatus, are more suitable for conveying concrete knowledge, whereas symbolic visualizations, like Lewis structures or Newman projections, are more suitable for conveying abstract knowledge (Schnotz, 2005). When studying chemistry, both of these knowledge types are essential, and the question arises how they are used in relevant learning materials such as university chemistry textbooks. This question was addressed in the first step of our study, which will be described later on.
Besides the types and characteristics of the visualizations themselves, a second variable that is crucial for visual model comprehension is the set of individual prerequisites with which a learner approaches a learning situation. In other words, individual learner characteristics (Höffler et al., 2013) are assumed to be substantial predictors of learning success. In chemistry learning, besides general cognitive abilities and prior knowledge, the spatial ability (especially when it comes to working with iconic visualizations) and mathematical ability (especially when it comes to learning from structural formulas or graphs) of learners might play a central role.
Among these, the probably best investigated learner characteristic is the prior knowledge of learners. For instance, research has consistently shown that regardless of the domain, prior knowledge appears to be the strongest (but not the only) predictor of learning success (Parkerson et al., 1984; Leutner et al., 2006). At the same time, not all learners benefit equally from instructional materials. For instance, “instructional techniques that are highly effective with inexperienced learners can lose their effectiveness and even have negative consequences when used with more experienced learners” (Expertise Reversal Effect; Kalyuga et al., 2003, p. 23). With a special emphasis on learning with visualizations, this is taken up similarly by Mayer (2009) in his Individual Differences Principle, in which he states that text-picture materials that are beneficial for learners with low prior knowledge and high spatial ability can even have detrimental effects for high prior knowledge or high spatial ability learners. The rationale behind this assumption is that for learners with high prior knowledge, adding pictures to a text might be redundant and merely stress working memory capacities without additional comprehension gains, while at the same time, learners with low spatial ability might be stressed as well as they are less able to “read” the spatial characteristics of the visualization and keep all necessary information active in their working memory (Mayer and Moreno, 1998).
Spatial ability is often focused on in chemistry learning and science learning in general. For instance, Wu and Shah (2004) state that “Chemistry is a visual science” and emphasize that spatial abilities are one of the relevant predictors of chemistry learning. This can be explained by the fact that in chemistry, learners must often identify key components of visualizations and rotate them in their minds, for instance when learning with ball-and-stick models. This ability is a central prerequisite for understanding chemical contents, structures and relations. In line with this, empirical studies show significant correlations between spatial abilities and chemistry performance (Carter et al., 1987; Staver and Jacks, 1988). Furthermore, chemistry performance can evidently be enhanced by providing students with pre-training on visual-spatial tasks (Tuckey et al., 1991). Nevertheless, the findings on the role of spatial ability, especially in combination with or independently of other learner prerequisites, are still not as clear and systematic as theory would suggest (Wu and Shah, 2004). Our study takes this up and tries to fill the gap by investigating individual learner characteristics and their interplay when studying chemistry.
RQ 1: Is visual model comprehension an individual learner prerequisite that students possess at the beginning of their university chemistry studies?
RQ 1a: Is the construct domain-specific, or does it comprise domain-independent aspects?
RQ 1b: Can visual model comprehension be assessed validly and reliably by means of a multiple-choice test instrument?
RQ 2: Is visual model comprehension a stable trait, or does it develop over time?
RQ 3: Does visual model comprehension predict chemistry study success in terms of content knowledge gains and exam grades?
RQ 4: Which individual learner characteristics, in turn, predict visual model comprehension?
To answer these research questions, a first step was to develop a test instrument suitable for assessing visual model comprehension validly and reliably. This was done in two steps. First, a comprehensive textbook analysis was conducted to find out more about the types of visualizations used in common university textbooks that address chemistry for beginning students. On the basis of this analysis, we developed general and domain-specific items that were validated in a pilot study.
A second step was then to investigate whether visual model comprehension, as assessed with the new test instrument, is able to predict the academic success of chemistry students during their first semester at university. Furthermore, we also investigated individual learner characteristics as predictors of visual model comprehension to find out more about the still open question of why learners differ in their visual model comprehension and how visual model comprehension might be supported if students appear to lack the respective abilities.
We focused on the university entry phase, as this is obviously the most crucial phase for studying in general (Heublein, 2014) and with regard to chemistry courses (Lewis and Lewis, 2007; Jiang et al., 2010; Kennepohl et al., 2010). As described above, at the beginning of their university life, students might be overwhelmed by the multiple demands that their study programs pose on them, of which the need to process the often complex visual presentations of the contents to be learned is only one. Underestimating these demands can accordingly lead to cognitive overload, frustration and, in the worst case, even early study drop-out (e.g., OECD, 2011).
Thus, in our study, which is part of a comprehensive long-term project on predictors of study success in the university entry phase of science and technology courses, we aimed at finding out more about the visual model comprehension of chemistry students, how it develops over the course of the first two university semesters, whether and how it predicts study success, and whether and how it is in turn predicted by individual learner characteristics. In sum, our study consisted of the following steps:
Step 1: Pilot study
– Step 1a: Chemistry textbook analysis: identifying common types of visualizations that are used in university textbooks and that students need to be able to work with.
– Step 1b: Development of the visual model comprehension test: using the visualization types identified in step 1a to create items that are able to measure students’ ability to comprehend visualizations in a general as well as in a chemistry-based context.
Step 2: Conducting the main study
– Step 2a: Beginning of first semester: Assessing visual model comprehension with the test developed in step 1b, prior chemistry-related knowledge, cognitive abilities, GPA, age, gender and other individual learner characteristics
– Step 2b: End of first semester: Assessing visual model comprehension (to be able to infer about stability versus increasability), chemistry-related knowledge (to be able to infer about learning gains), cognitive load and exam grades for the first introductory lecture.
– Step 2c: End of second semester: Assessing visual model comprehension, chemistry-related knowledge, cognitive load and exam grades for the second introductory lecture.
Step 3: Analyzing the main study
– Step 3a: Development of visual model comprehension over time: Is it a stable construct, or can it increase during the first two semesters?
– Step 3b: Visual model comprehension and study success: Can visual model comprehension predict the (chemistry-related) knowledge gains of students and their exam grades for the introductory lectures of the first two semesters?
– Step 3c: Predictors of visual model comprehension: If visual model comprehension is a predictor of study success (in terms of knowledge gains and exam grades), can we shed more light on it by finding out more about variables that in turn predict visual model comprehension?
These steps, as parts of the pilot study and the main study, will be described in more detail in the following. In both, ethical clearance was ensured in two ways. First, the project had been approved and funded by the German Research Foundation, which included a statement on compliance with good research practice (e.g., the voluntariness of participation). Second, the data protection departments of the participating universities were informed about the project and ensured that the handling of student data (which included demographic information as well as their answers on the test instruments and questionnaires) strictly followed data protection laws.
For the textbook analysis, four chemistry textbooks were chosen that are among the most frequently used at German universities and that cover the different domains that are relevant in introductory chemistry courses. These are organic (Bruice, 2011), inorganic (Housecroft et al., 2006), physical (Atkins et al., 2013) and introductory chemistry (Mortimer and Müller, 2003). These textbooks were chosen based on interviews with professors who are responsible for the introductory lectures in chemistry at the universities that were part of the overall long-term project.
To classify the different visualizations on a theoretical basis, we used a scheme based on the above-mentioned distinctions (see Fig. 1), which, among others, can be traced back to the work of Mayer (2009), Schnotz (2005), Schnotz (2008), Niegemann et al. (2008) or Treagust and colleagues (Harrison and Treagust, 2000; Gilbert and Treagust, 2009). On a first level, the visualizations were labelled as either decorative or instructional. Second, if they were instructional, a further distinction was made between iconic and symbolic visualizations. On this second level, we added a third category, which we called “hybrid”, as a first exploratory analysis had shown that a substantial part of the instructional visualizations in these textbooks combine iconic and symbolic aspects (e.g., energy-level diagrams that include orbital visualizations).
All textbooks were analyzed by two independent expert raters, one of them being the first author of this paper and the second being another PhD student working in the department for didactics of chemistry, with Cohen's kappas ranging from 0.89 to 0.99 depending on the category.
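For readers less familiar with the statistic, Cohen's kappa corrects the raters' raw agreement for the agreement expected by chance. The following sketch illustrates the computation with hypothetical ratings; it is not the study's actual data or analysis script:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater1)
    # Observed agreement: share of items both raters labelled identically.
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_expected = sum(c1[label] * c2[label] for label in c1) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical example: two raters classifying ten textbook visualizations.
r1 = ["iconic", "symbolic", "symbolic", "hybrid", "iconic",
      "symbolic", "symbolic", "hybrid", "symbolic", "iconic"]
r2 = ["iconic", "symbolic", "symbolic", "hybrid", "iconic",
      "symbolic", "symbolic", "symbolic", "symbolic", "iconic"]
print(round(cohens_kappa(r1, r2), 2))  # 0.83: nine of ten labels agree
```

Kappas around 0.9, as reported for the textbook analysis, thus indicate almost perfect agreement beyond chance.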
Table 1 shows the results of the textbook analysis. In line with expectations, visualizations of all kinds made up a large part of each of the textbooks, ranging from 87% of the textbook pages in inorganic chemistry up to 95.2% of the textbook pages in physical chemistry. This once again underlines Wu and Shah's (2004) statement that “Chemistry is a visual science” and emphasizes the need to investigate the predictors of successful learning with visualizations in chemistry.
| | INC | PYC | ORC | IOC | IRR (Cohen's κ) |
|---|---|---|---|---|---|
| % of pages containing visualizations | 93.8 | 95.2 | 90.8 | 87.0 | 1 |
| Level 1 | | | | | |
| Decorative | 4.2 | 0 | 2.2 | 0 | 0.92–1 |
| Instructional | 95.7 | 100 | 97.8 | 100 | 0.91–1 |
| Level 2 | | | | | |
| Iconic | 8.3 | 9.9 | 9.7 | 10.1 | 0.93–0.96 |
| Symbolic | 75.3 | 83.6 | 71.5 | 68.5 | 0.93–0.99 |
| Hybrid | 12.0 | 9.8 | 17.1 | 21.4 | 0.89–0.99 |

Note: INC – introductory chemistry; PYC – physical chemistry; ORC – organic chemistry; IOC – inorganic chemistry; IRR – inter-rater reliability.
The lower part of the table shows how decorative and instructional visualizations are distributed within these visualizations. As can be seen, decorative visualizations are rarely, if ever, used in university textbooks. They will thus not be considered further in the remainder of this paper. Instructional visualizations, on the other hand, constitute the overwhelming majority of the visualizations analyzed. As described above, they were further divided into iconic, symbolic and hybrid visualizations. Of these three, symbolic visualizations appear to be the most commonly used (no surprise, given that all structural formulas count as such), but still, between a tenth and a fifth of the visualizations are either purely iconic or hybrid, that is, they contain iconic as well as symbolic aspects. There seem to be slight domain-related differences in that the share of symbolic visualizations is highest in physical chemistry, whereas the share of iconic visualizations is highest in inorganic chemistry, neither of which is a surprising finding.
The results of this textbook analysis served as a basis for the development of items for the visual model comprehension test, which will be described next.
A first version of the test comprised 45 items with 15 items on each scale. In the pilot study, the test was administered three times over the course of the first two semesters of chemistry studies (beginning of the first semester, end of the first semester, end of the second semester) at a large German university. The initial sample for the pilot study comprised 146 students at the beginning of the first semester, of whom 133 also took part at the end of the first semester. At the end of the second semester, the sample had decreased to 61 students. Table 5 shows the internal consistencies, as represented by Cronbach's alpha, for all three time points of measurement as well as for each of the three scales and for the overall test. Although the internal consistencies for the overall test are satisfactory to good, they differ quite substantially between the single scales and time points of measurement. Thus, based on the results of the single scales, we deleted some items so that the final instrument comprises 33 items (11 per scale) with Cronbach's alphas between 0.800 and 0.875 for the overall test, which can be considered good internal consistencies.
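The internal consistencies reported here are Cronbach's alpha values, which relate the summed variance of the single items to the variance of the scale's sum score. The computation can be sketched as follows, using invented item scores rather than the pilot data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_persons x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items on the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each single item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical data: six students answering four dichotomous items (1 = correct).
scores = [[1, 1, 1, 0],
          [1, 0, 1, 1],
          [0, 0, 0, 0],
          [1, 1, 1, 1],
          [0, 1, 0, 0],
          [1, 1, 1, 1]]
print(round(cronbach_alpha(scores), 3))  # 0.79
```

Deleting items that correlate weakly with the rest of a scale reduces the summed item variance relative to the sum-score variance, which is why the shortened 33-item instrument reaches higher alphas than some of the original subscales.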
| | 1G | 1E | 1O | 2C | 2G | 2E | 2O | 3C | 3G | 3E | 3O |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1C | 0.42 | 0.29 | 0.74 | 0.57 | 0.42 | 0.42 | 0.57 | 0.54 | 0.52 | 0.48 | 0.59 |
| 1G | 1 | 0.41 | 0.77 | 0.46 | 0.63 | 0.50 | 0.64 | 0.47 | 0.63 | 0.53 | 0.63 |
| 1E | | 1 | 0.77 | 0.50 | 0.45 | 0.59 | 0.63 | 0.48 | 0.48 | 0.53 | 0.61 |
| 1O | | | 1 | 0.67 | 0.65 | 0.67 | 0.81 | 0.64 | 0.70 | 0.71 | 0.79 |
| 2C | | | | 1 | 0.48 | 0.55 | 0.81 | 0.71 | 0.55 | 0.60 | 0.70 |
| 2G | | | | | 1 | 0.52 | 0.81 | 0.52 | 0.74 | 0.60 | 0.72 |
| 2E | | | | | | 1 | 0.85 | 0.61 | 0.63 | 0.77 | 0.77 |
| 2O | | | | | | | 1 | 0.73 | 0.77 | 0.79 | 0.88 |
| 3C | | | | | | | | 1 | 0.59 | 0.63 | 0.83 |
| 3G | | | | | | | | | 1 | 0.68 | 0.89 |
| 3E | | | | | | | | | | 1 | 0.89 |

Note: 1 = first point of measurement; 2 = second point of measurement; 3 = third point of measurement; C = chemistry-specific items; G = general items; E = engineering-specific items; O = overall score. All correlations are significant at p < 0.01 or higher.
All correlations were in a medium to high range and significant at a p < 0.001 level. It should be noted, however, that the highest correlations were between the respective scales (e.g., chemistry-specific items) at the different points of measurement and with the overall score, to which of course each single scale had contributed. The correlations between the different scales (e.g., between chemistry- and engineering-specific items) were consistently lower.
Nevertheless, the substantial overlap between the scales raised the question of whether they really measure different aspects of visual model comprehension or whether this finding rather points to one general construct. To test this, we subsequently calculated confirmatory factor analyses (CFAs), which, according to Moosbrugger and Schermelleh-Engel (2008), can be used to examine the pre-specified structure of an instrument and are suitable if prior assumptions about the dimensionality are made. The results of this factor analysis can be seen in Table 4.
| | χ²-value | df | Δχ² | Δdf | RMSEA | CFI | NFI |
|---|---|---|---|---|---|---|---|
| 1dim-Model | 418.85 | 495 | | | 0.00 | 1 | 1 |
| 3dim-Model | 368.85 | 492 | 50.06** | 3 | 0.00 | 1 | 1 |

Note: **p < 0.001, N = 241.
As can be seen in the table, the descriptive criteria are good in both models and indicate that both a three-factor and a one-factor solution could explain the data structure. However, the χ²-difference test indicates a significant difference between the two models, with the three-dimensional model being more consistent with the given data. We can thus conclude that although the three subdimensions of visual model comprehension relate to each other substantially, they still represent individual constructs that are empirically separable from each other but can be combined into an overall visual model comprehension score.
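The comparison between the nested models rests on a χ²-difference test: the difference in χ² values is itself χ²-distributed with the difference in degrees of freedom. Its p-value can be checked from the Δχ² and Δdf values in Table 4, as in this sketch (assuming SciPy is available; not the authors' analysis script):

```python
from scipy.stats import chi2

# Values from Table 4: the three-dimensional model uses 3 more parameters
# (3 fewer degrees of freedom) than the one-dimensional model.
delta_chi2, delta_df = 50.06, 3
p_value = chi2.sf(delta_chi2, delta_df)  # survival function = 1 - CDF
print(p_value < 0.001)  # True: the 3dim model fits significantly better
```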
This final and three-scaled version of the visual model comprehension test was subsequently used during all points of measurement of the main study.
Reliability scores of the visual model comprehension test (45 items) and its subscales (15 items per scale):

| MP | NOC | α of CP | α of GP | α of EP | α of OT |
|---|---|---|---|---|---|
| 1 | 146 | 0.753 | 0.664 | 0.678 | 0.844 |
| 2 | 133 | 0.658 | 0.652 | 0.711 | 0.840 |
| 3 | 61 | 0.557 | 0.611 | 0.570 | 0.791 |

Note: MP = measuring point; NOC = number of cases; α = Cronbach's α; CP = chemistry-specific part; GP = general part; EP = engineering part; OT = overall test.
These results and the visual model comprehension test served as the basis for the main study, which aimed to answer the second, third and fourth research questions. The main study will be described in the following chapter.
| | University 1 | | | University 2 | | |
|---|---|---|---|---|---|---|
| | N | % Female | Age (years) | N | % Female | Age (years) |
| Bachelor Chemistry | 118 | 38.1 | 21.1 | 157 | 36.5 | 20.8 |
The students were recruited from the introductory lecture on chemistry that they had to attend right from the beginning of their studies. If they agreed to take part in the study, they attended a seminar over the course of the first semester, in which they acquired knowledge about empirical research and assessment methods. During the seminar sessions, they also filled out the questionnaires and tests that were part of the long-term study. Participation in the seminar and the completion of all questionnaires and tests were rewarded with credit points. In addition, at the end of the second semester, they were asked to take part in a third point of measurement, in which they again answered the visual model comprehension test as well as the chemistry content knowledge test. Furthermore, at the end of the first and the second semester, respectively, students’ performance in their study-related exams was assessed.
Besides receiving credit points, students who took part in all three points of measurement were rewarded with €100 per person (about $115). As expected, the participation rate decreased over the course of the two semesters, so that in the end, 137 students had completed all tests and questionnaires (Table 7).
| | Overall | | University 1 | | University 2 | |
|---|---|---|---|---|---|---|
| | N | % | N | % | N | % |
| 1. MP | 275 | 100 | 118 | 100 | 157 | 100 |
| 2. MP | 245 | 89.1 | 106 | 89.3 | 139 | 88.5 |
| 3. MP | 137 | 49.8 | 72 | 61.0 | 65 | 41.4 |

Note: MP = measuring point.
– Visual model comprehension
– Content-related chemistry knowledge
– General cognitive abilities (verbal and figural reasoning)
– Spatial abilities
– Mathematical abilities
– Grade point average
– Age and gender.
Furthermore, cognitive load in terms of perceived difficulty (Kalyuga et al., 1999) and invested mental effort (Paas, 1992) was assessed several times during the assessments to investigate how the working memory capacities of the students were stressed by the different tasks.
– Visual model comprehension
– Content-related chemistry knowledge
– Exam grades for the first introductory lecture
– Cognitive load.
This longitudinal approach enabled us not only to investigate whether and how visual model comprehension predicts academic success and is in turn predicted by other variables, but also to find out more about the variability and development of visual model comprehension and about the development of chemistry-related content knowledge (which should of course be a central goal of study programs).
The instruments used during these points of measurement are described in more detail next (excluding the visual model comprehension test, which was already described as the focus of the pilot study).
In our study, the content-related chemistry knowledge, just like visual model comprehension, had a special role in that it was included in our analyses as a dependent variable (predicted by visual model comprehension) as well as an independent variable (and thus a potential predictor of visual model comprehension, but also of exam grades as an indicator for study success).
| | N | Chemistry-specific items | | General items | | Engineering-specific items | | Overall score | |
|---|---|---|---|---|---|---|---|---|---|
| | | M | SD | M | SD | M | SD | M | SD |
| 1MP | 275 | 0.74 | 0.21 | 0.53 | 0.19 | 0.64 | 0.24 | 0.65 | 0.16 |
| 2MP | 245 | 0.79 | 0.20 | 0.57 | 0.21 | 0.69 | 0.23 | 0.68 | 0.17 |
| 3MP | 137 | 0.84 | 0.18 | 0.66 | 0.23 | 0.73 | 0.22 | 0.74 | 0.18 |
| Sig. | | <0.001 | | <0.001 | | <0.001 | | <0.001 | |

Note: 1MP–3MP = measuring points 1–3.
As can be seen in the table, the solution probability of the items consistently increases over time for all scales as well as for the overall score. To investigate whether these increases are significant, we calculated repeated-measures analyses of variance. It has to be noted that only cases with complete data for all three points of measurement were included in these analyses. That is, the final sample consisted of 137 students, for whom the increase was highly significant over the three points of measurement for all three scales as well as for the overall score. In other words, students steadily improved their visual model comprehension during their first two semesters of chemistry studies at university. RQ 2 can thus be answered positively: visual model comprehension appears to be a dynamic construct that can increase over time, or in other words, we can evidently help students to improve their visual model comprehension. This is all the more important if visual model comprehension has an impact on how successfully students learn chemistry overall. Whether this is the case will be addressed next.
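The logic of such a repeated-measures analysis can be sketched as follows. The design separates stable between-subject differences from the within-subject change over time, which makes the test more sensitive than an ordinary ANOVA. The scores below are invented for illustration, and the implementation is a minimal textbook version, not the authors' analysis:

```python
import numpy as np
from scipy.stats import f as f_dist

def rm_anova(data):
    """One-way repeated-measures ANOVA on an (n_subjects x n_timepoints) matrix."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand_mean = data.mean()
    # Effect of time: variation of the time-point means around the grand mean.
    ss_time = n * ((data.mean(axis=0) - grand_mean) ** 2).sum()
    # Stable between-subject differences, removed from the error term.
    ss_subj = k * ((data.mean(axis=1) - grand_mean) ** 2).sum()
    ss_error = ((data - grand_mean) ** 2).sum() - ss_time - ss_subj
    df_time, df_error = k - 1, (n - 1) * (k - 1)
    f_value = (ss_time / df_time) / (ss_error / df_error)
    return f_value, f_dist.sf(f_value, df_time, df_error)

# Hypothetical solution probabilities of five students at the three measuring points.
scores = [[0.55, 0.62, 0.70],
          [0.60, 0.66, 0.71],
          [0.48, 0.55, 0.63],
          [0.70, 0.74, 0.80],
          [0.52, 0.60, 0.66]]
f_value, p_value = rm_anova(scores)
print(p_value < 0.001)  # True: the increase over time is significant
```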
| | ICK1 | ICK2 | OCK2 | OCK3 | LIC | LOC |
|---|---|---|---|---|---|---|
| OVC1 | 0.54 | 0.58 | 0.54 | 0.54 | −0.37 | −0.42 |
| N | 274 | 243 | 239 | 133 | 179 | 112 |
| OVC2 | 0.53 | 0.60 | 0.56 | 0.55 | −0.38 | −0.36 |
| N | 241 | 240 | 239 | 133 | 179 | 111 |
| OVC3 | 0.52 | 0.63 | 0.50 | 0.60 | −0.46 | −0.39 |
| N | 135 | 135 | 134 | 134 | 106 | 88 |

Note: OVC1–OVC3 = overall visual model comprehension at measuring points 1–3; ICK1 & ICK2 = introductory chemistry content knowledge at measuring points 1 and 2; OCK2 & OCK3 = organic chemistry content knowledge at measuring points 2 and 3; LIC = lecture exam introductory chemistry at measuring point 2; LOC = lecture exam organic chemistry at measuring point 3. All correlations are significant at p < 0.01 or higher.
As can be seen in the table, all correlations are in a medium to high range and significant at p < 0.01 or p < 0.001. More specifically, visual model comprehension at the very beginning of studies relates to chemistry content knowledge assessed at the same time, and it also correlates highly with content knowledge and lecture exam grades assessed later on. Similarly, visual model comprehension at the end of the first and the second semester, respectively, correlates highly with content knowledge and lecture exam grades at all points of measurement.
These strong correlations are a first indicator for the predictive value that visual model comprehension might have with regard to study success. To shed more light on this, however, regression analyses need to be calculated to give these correlations some kind of direction and to be able to draw conclusions about the specific role of visual model comprehension taking other potential predictors of study success into account.
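The kind of multiple regression reported below, with standardized beta weights, can be sketched as OLS on z-scored variables. The data here are simulated for illustration (note that German grades run from 1 = best downward, which is why useful predictors show negative betas):

```python
import numpy as np

def standardized_betas(X, y):
    """OLS on z-scored predictors and criterion: returns standardized
    beta weights and the squared multiple correlation R^2."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yz = (y - y.mean()) / y.std(ddof=1)
    betas, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    r2 = 1 - ((yz - Xz @ betas) ** 2).sum() / (yz ** 2).sum()
    return betas, r2

# Hypothetical data: exam grade predicted by visual model comprehension
# and content knowledge (lower grade = better, hence negative true effects)
rng = np.random.default_rng(1)
n = 172
ovc = rng.normal(0, 1, n)                      # visual model comprehension
ick = 0.5 * ovc + rng.normal(0, 1, n)          # content knowledge, correlated with OVC
grade = -0.3 * ovc - 0.3 * ick + rng.normal(0, 1, n)
betas, r2 = standardized_betas(np.column_stack([ovc, ick]), grade)
print(f"betas = {betas.round(3)}, R^2 = {r2:.3f}")
```

Because predictors are correlated, each beta reflects a predictor's unique contribution after partialling out the others, which is why betas can be much smaller than the zero-order correlations reported above.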
We did so by calculating four multiple regression analyses. For the first two analyses, the criterion variables were the introductory lecture exam grade and the organic lecture exam grade, respectively. As predictors, we included all variables described above, that is, visual model comprehension, chemistry-related content knowledge, general cognitive abilities, spatial abilities, mathematical abilities, GPA, age and gender. The results for these two regression analyses are depicted in Table 10.
| | Introductory chemistry lecture exam: β | p | Organic chemistry lecture exam: β | p |
|---|---|---|---|---|
| OVC2 | −0.195 | <0.05 | | |
| ICK2 | −0.173 | <0.05 | | |
| MAA | −0.187 | <0.05 | | |
| GPA | 0.189 | <0.05 | | |
| OCK3 | | | −0.538 | <0.001 |
| R² | 0.283 | | 0.289 | |
| N | 172 | | 86 | |

Note: OVC2 = overall visual model comprehension at measuring point 2; ICK2 = introductory chemistry content knowledge at measuring point 2; MAA = mathematical ability; GPA = grade point average; OCK3 = organic chemistry content knowledge at measuring point 3.
As can be seen in the table, the organic lecture exam grade is predicted only by organic chemistry knowledge that the students possess shortly before the exam is written. This variable alone explains almost 29% of the variance in these exam grades. On the other hand, the introductory lecture exam grade is predicted by a combination of visual model comprehension, GPA, mathematical abilities and chemistry-related content knowledge, which also explains more than 28% of the variance in the grades.
The results for the third and fourth regression analysis, in which the two aspects of chemistry-related content knowledge as measured by standardized tests served as criteria, are depicted in Table 11.
| | Introductory chemistry: β | p | Organic chemistry: β | p |
|---|---|---|---|---|
| OVC1 | 0.165 | <0.05 | | |
| OVC2 | | | 0.226 | <0.001 |
| ICK1 | 0.461 | <0.001 | | |
| OCK2 | | | 0.427 | <0.001 |
| MAA | 0.144 | <0.05 | 0.142 | <0.05 |
| GPA | −0.183 | <0.001 | −0.261 | <0.001 |
| GEN | 0.154 | <0.001 | | |
| VER | 0.089 | <0.05 | | |
| R² | 0.647 | | 0.617 | |
| N | 231 | | 128 | |

Note: OVC1–2 = overall visual model comprehension at measuring points 1 and 2; ICK1 = introductory chemistry content knowledge at measuring point 1; OCK2 = organic chemistry content knowledge at measuring point 2; MAA = mathematical ability; GPA = grade point average; GEN = gender; VER = verbal reasoning.
As can be seen in the table, the strongest single predictor for both kinds of content knowledge is the respective knowledge that students possessed half a year before. However, in both regressions, other variables also contribute significantly to the respective model. Introductory chemistry knowledge is additionally predicted by the visual model comprehension that students possess at this point, by GPA, gender, mathematical abilities and verbal reasoning abilities. Together, these variables explain almost 65% of the variance in introductory chemistry content knowledge.
The organic chemistry content knowledge is also predicted by the visual model comprehension that students possess at this point, by GPA and by students’ mathematical abilities. These variables are able to explain almost 62% of the variance in organic chemistry content knowledge.
To sum up at this point, RQ 3 can be answered positively as well. Visual model comprehension, among other variables, of which prior knowledge is consistently the strongest, significantly predicts study success in terms of standardized content knowledge tests for introductory as well as organic chemistry, and in terms of lecture exam grades for introductory chemistry only. If visual model comprehension is such a meaningful predictor, the question emerges whether and how it can in turn be predicted by other variables, which would give an indication of its trainability. These analyses are described next.
| | OVC1: r | N | OVC2: r | N | OVC3: r | N |
|---|---|---|---|---|---|---|
| CCK | 0.54 | 274 | 0.53 | 241 | 0.63 | 135 |
| VER | 0.44 | 259 | 0.47 | 238 | 0.43 | 134 |
| FIR | 0.43 | 255 | 0.47 | 238 | 0.54 | 132 |
| SPA | 0.40 | 255 | 0.43 | 235 | 0.52 | 133 |
| GEN | 0.28 | 275 | 0.31 | 242 | 0.42 | 135 |
| GPA | −0.19 | 268 | −0.12 | 235 | −0.22 | 133 |
| MAA | 0.43 | 267 | 0.40 | 240 | 0.47 | 135 |

Note: OVC1–OVC3 = overall visual model comprehension at measuring points 1–3; CCK = chemistry-related content knowledge (overall score); VER = verbal reasoning; FIR = figural reasoning; SPA = spatial ability; GEN = gender; GPA = grade point average; MAA = mathematical ability. With the exception of the correlation between GPA and OVC2, all correlations are significant at p < 0.001.
As can be seen in the table, with the exception of GPA at measuring point 2, all potential predictors correlate significantly with visual model comprehension at the beginning of the first, end of the first and end of the second semester.
Again, these correlations are a first indicator of the predictive value of some individual prerequisites for visual model comprehension. Just as in the analyses before, we added multiple regression analyses to give these relations some kind of direction. Table 13 depicts the models best suited to predict visual model comprehension at the three points of measurement.
| | OVC1: β | p | OVC2: β | p | OVC3: β | p |
|---|---|---|---|---|---|---|
| CCK | 0.305 | <0.001 | 0.442 | <0.001 | 0.403 | <0.001 |
| VER | 0.228 | <0.001 | 0.220 | <0.001 | 0.120 | 0.054 |
| FIR | 0.178 | <0.001 | 0.208 | <0.001 | 0.229 | <0.001 |
| SPA | 0.132 | <0.05 | 0.145 | <0.05 | 0.166 | <0.05 |
| MAA | 0.144 | <0.05 | | | | |
| GEN | 0.137 | <0.05 | | | 0.184 | <0.05 |
| R² | 0.482 | | 0.519 | | 0.611 | |
| N | 250 | | 230 | | 131 | |

Note: OVC1–OVC3 = overall visual model comprehension at measuring points 1–3; CCK = chemistry-related content knowledge (overall score); VER = verbal reasoning; FIR = figural reasoning; SPA = spatial ability; GEN = gender; GPA = grade point average; MAA = mathematical ability.
As can be seen in this table, visual model comprehension at all three points of measurement is predicted by a combination of chemistry-related content knowledge, general cognitive abilities in terms of verbal and figural reasoning, and spatial abilities. Gender adds to this at measuring points 1 and 3, and mathematical abilities appear to be an additional significant predictor at measuring point 1. The latter might be explained by the fact that at this very early stage of studies, the ability to handle formulas and read graphs and tables independently of content knowledge is especially crucial, while at later stages, increased content knowledge could well compensate for a lack of these mathematical abilities. In combination, these variables explain between roughly 48% and 61% of the variance in visual model comprehension, depending on the point of measurement. As expected, figural reasoning and spatial abilities, as "typical" visual competencies, are among the predictors, but again, just as for the prediction of study success, the strongest predictor for visual model comprehension at all three points of measurement is chemistry-related content knowledge. In short, RQ 4 can be answered positively: we identified several significant predictors of visual model comprehension, most of which refer to individual learner characteristics that share some commonalities with it.
As a consequence of these findings, the strong mutual relation between visual model comprehension and chemistry-related content knowledge raised our interest. Obviously, both can have an impact on one another, and both in turn are also able to predict study success in terms of lecture exam grades. In this regard, the question arises whether they are indeed both direct predictors of study success, or whether it makes more sense to assume some kind of indirect effects. To shed more light on this, we calculated path analyses over the three points of measurement and investigated possible mediation effects. The results of these analyses are described next.
In the first case, chemistry-related (prior) content knowledge would act as a mediator between visual model comprehension and lecture exam grades, meaning that there is no direct relation between visual model comprehension and grades, but that this predictive power operates via increased content knowledge. In the second case, vice versa, visual model comprehension would be the mediator.
We investigated both assumptions by means of path analyses, each of which tested for a double mediation effect. These analyses revealed significant models for the second assumption, in which visual model comprehension acts as a mediator between prior content knowledge, acquired content knowledge and lecture exam grades. Fig. 2 and 3 depict the results of these analyses for the two lecture exams in introductory chemistry and organic chemistry, respectively.
As can be seen in Fig. 2, visual model comprehension in a first step mediates the relation between chemistry-related (prior) content knowledge and acquired content knowledge, and in a second step the relation between this acquired content knowledge and the lecture exam grades in introductory chemistry. This means that students start their studies with a certain amount of content knowledge about introductory chemistry. This helps them increase their visual model comprehension, which in turn predicts the amount of content knowledge acquired over the course of the first semester. This increased content knowledge again increases visual model comprehension, which then predicts how well students perform in their lecture exam on introductory chemistry. Both mediations are partial rather than complete. That is, although the relation between prior content knowledge and acquired content knowledge and the relation between acquired content knowledge and lecture exam grades are in both cases mediated by visual model comprehension, the direct effect does not completely disappear when visual model comprehension is taken into account. In other words, chemistry-related content knowledge still has its own predictive value.
A similar pattern of results emerges when looking at the effects with regard to organic chemistry (Fig. 3). Again, visual model comprehension partially mediates the relation between prior content knowledge at the beginning of studies and the respective acquired knowledge at the end of the first semester. This in turn increases visual model comprehension again. However, this time, the predictive value of visual model comprehension for the lecture exam grades in organic chemistry does not reach statistical significance. In other words, for the second step, we cannot statistically confirm the partial mediation that appears on a descriptive level.
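The indirect effect at the heart of such a mediation analysis can be sketched as the product of two regression coefficients, with a percentile bootstrap confidence interval to test it. The data below are simulated to mimic the prior knowledge → visual model comprehension → acquired knowledge pattern and are purely illustrative:

```python
import numpy as np

def indirect_effect(x, m, y):
    """a*b indirect effect: a from m ~ x, b from y ~ x + m (coefficient of m)."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), x, m])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return a * coef[2]

# Hypothetical data mimicking the mediation pattern reported above
rng = np.random.default_rng(2)
n = 240
prior = rng.normal(0, 1, n)                     # prior content knowledge
vmc = 0.5 * prior + rng.normal(0, 1, n)         # visual model comprehension (mediator)
acquired = 0.4 * prior + 0.4 * vmc + rng.normal(0, 1, n)  # acquired knowledge

point = indirect_effect(prior, vmc, acquired)
# Percentile-bootstrap 95% CI: mediation is supported if the CI excludes zero
idx = (rng.integers(0, n, n) for _ in range(2000))
boot = [indirect_effect(prior[i], vmc[i], acquired[i]) for i in idx]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Because the direct path (the coefficient of `x` in the second regression) remains in the model, a nonzero direct effect alongside a nonzero indirect effect corresponds to the partial mediation described above.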
To do so, we first conducted a textbook analysis to find out which types of visualizations can be found in relevant university chemistry textbooks. Based on this analysis, we developed an instrument to assess visual model comprehension with items that are embedded within domain-specific as well as within domain-independent contexts. This visual model comprehension test was subsequently used in the main study to assess visual model comprehension and to investigate how it can predict study success in chemistry and how in turn it can be predicted by other individual learner characteristics.
Both the pilot study and the main study confirmed our expectations and the role of visual model comprehension to a large extent. However, there are of course limitations to both studies, as well as aspects that need further discussion. These will be outlined below.
In this regard, the validity and reliability of our textbook analysis can be considered satisfactory. On the one hand, the classification scheme as well as the coding guidelines were theory-based (Schnotz, 2005). On the other hand, the empirical findings support this, for instance through the high interrater reliability with Cohen's kappas of at least 0.89.
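Cohen's kappa corrects raw rater agreement for agreement expected by chance. A minimal sketch, with a hypothetical set of 100 codings and three disagreements between two raters (not the study's actual coding data):

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters' categorical codings."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = (r1 == r2).mean()                               # observed agreement
    pe = sum((r1 == c).mean() * (r2 == c).mean()         # chance agreement
             for c in np.union1d(r1, r2))
    return (po - pe) / (1 - pe)

# Hypothetical codings: 100 visualizations, two raters, three disagreements
r1 = ["iconic"] * 40 + ["symbolic"] * 30 + ["hybrid"] * 20 + ["decorative"] * 10
r2 = r1.copy()
r2[0], r2[41], r2[71] = "symbolic", "iconic", "iconic"
print(f"kappa = {cohens_kappa(r1, r2):.3f}")
```

With 97% raw agreement across four categories, kappa lands around 0.96 here, comfortably above the conventional 0.8 threshold for "almost perfect" agreement.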
However, it should be taken into account that our analysis focused on the quantity rather than the quality of the visualizations in the textbooks. The division into decorative and instructional, and further into iconic, symbolic and hybrid, is rather simple, so the probability of achieving consistent classifications is rather high. In future research, we might attempt to categorize visualizations in chemistry on a more fine-grained level to find out which type of iconic, symbolic or hybrid visualization is especially prominent in instructional materials in chemistry studies and thus, when assessing visual model comprehension, which visualizations are the most difficult for students. This might give more differentiated insight into the question of how visual model comprehension, and thus study success, can be fostered accordingly.
Furthermore, future analyses might also take more chemistry textbooks from even broader fields into account. Finally, our results are only generalizable to the domain of chemistry. It might well be that other sciences, such as physics or biology, reveal different results with regard to the quality, that is, the visualization types, even if the quantity of visualizations in the textbooks is the same (for instance, it may well be assumed that in physics, symbolic visualizations outweigh iconic ones, whereas in biology it could be the other way round).
This textbook analysis served as the basis for the development of the visual model comprehension test, which is the central instrument of our research; the pilot study with its validation was conducted to answer our first research question. The visual model comprehension test was developed in accordance with the visualization types found in the textbook analysis and shows satisfactory to good reliability in terms of internal consistencies. After item deletion, the overall scale of the visual model comprehension test shows Cronbach's alphas of >0.80.
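Cronbach's alpha for such a scale can be computed directly from the item-score matrix. The sketch below simulates a hypothetical 20-item dichotomous test driven by a single underlying ability (the item model and sample are illustrative, not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Internal consistency for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of single-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the sum score
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 20-item dichotomous test driven by one underlying ability (275 students)
rng = np.random.default_rng(3)
ability = rng.normal(0, 1, 275)
difficulty = rng.normal(0, 1, 20)
p_correct = 1 / (1 + np.exp(-(ability[:, None] - difficulty[None, :])))
items = (rng.random((275, 20)) < p_correct).astype(float)
print(f"alpha = {cronbach_alpha(items):.2f}")
```

Alpha rises as items covary more strongly (i.e., are driven by the same underlying construct) and as the number of items grows, which is why item deletion can push the overall scale above 0.80.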
The validity of the visual model comprehension test proved somewhat more complicated to establish. This was due to the fact that, unlike cognitive abilities such as intelligence, visual model comprehension is not yet an exhaustively investigated construct, which is why we used our own preliminary working definition. However, what we can say is that the subscales and the overall scale of the visual model comprehension test correlate significantly with each other and over time. Furthermore, confirmatory factor analyses show a three-dimensional structure of visual model comprehension and confirm that, although it correlates with general cognitive abilities, spatial ability and chemistry-related content knowledge, it can still be separated from them. Future analyses might shed more light on this question by investigating in more detail what exactly separates visual model comprehension from other "visual" learner characteristics such as spatial ability or figural reasoning. In addition, although our results already give first indications that visual model comprehension is a dynamic construct that can change over time, future studies should investigate this more deeply, especially with regard to the question of whether this applies to the domain-specific as well as the domain-independent scales of the visual model comprehension test, or whether there are differences in the development (and thus the supportability) of visual model comprehension.
In conclusion, the visual model comprehension test appears to be a reliable and valid instrument (RQ 1b) that is suitable to assess visual model comprehension as a distinct, multidimensional construct (RQ 1) on a domain-specific as well as a domain-general level (RQ 1a). Our findings and the instrument scales are in line with assumptions derived from theory. Future research might investigate whether even better results could be achieved with items assigned to categories on more fine-grained levels, or whether it might make sense to work with an instrument that, instead of domain-specific and domain-independent scales, divides items more generally into different types of iconic and symbolic ones.
In this regard, and based on the extensive body of research from recent decades on the role of visualizations for science learning (cf. Schnotz, 2008; Gilbert and Treagust, 2009; Mayer, 2009; Treagust et al., 2017), we assumed that visual model comprehension would predict grades in chemistry lecture exams as well as chemistry content knowledge at university. In line with this, our correlation and regression analyses show that visual model comprehension relates to the grades in the chemistry lecture exams of the first two semesters and to chemistry content knowledge in a medium to high range, and that these relations are significant, with the exception of the direct predictive value of visual model comprehension for the grades in the organic chemistry lecture exam at the end of the second semester.
Initially, this latter finding was a little surprising. Of course, the result could be due to the decreasing number of participants. As mentioned earlier, at the end of the second semester, only 137 of the initial 275 students took part in the third measuring point. In other words, the drop-out from the first to the third point of measurement comprised half of our initial sample. It could thus be the sheer number of cases that leads to these different results. It could also be that the students who still took part were generally more motivated than the initial sample. Lastly, it could be that not motivation but general study success plays the deciding role here. For the 138 students who did not take part in the third measuring point, we cannot say whether this was because they were simply no longer interested, because they had too many other obligations with regard to their chemistry studies in the meantime, or because they had stopped studying chemistry altogether. This would be interesting to follow up on; however, data protection laws do not allow us to trace these cases once they are no longer part of the sample. From a pragmatic point of view, the latter interpretation seems most plausible. An argument in its favor is that this dramatic drop-out took place only at the third point of measurement, that is, at the end of the second semester. During the first semester, when the first two points of measurement took place, we were able to continue working with almost 90% of the initial sample (see Table 7).
It might thus well be that a remarkable number of students quit their chemistry studies (or changed to another study program) right after the first semester, which is a common finding (cf. Heublein, 2014; Heublein et al., 2017) and fits our observation that the overall number of students who remained in the chemistry study programs at the two universities involved in our project had decreased markedly during that time.
Furthermore, a very simple explanation for the missing relation could be that preparing for the organic chemistry lecture exam did not require content- and visualization-based learning, but was rather some kind of "teaching to the test"; that is, students might have simply learned with exams from previous semesters instead of actually reviewing the lecture contents. This is plausible, because it would also explain why, although visual model comprehension does not predict the organic chemistry lecture exam, it still predicts content knowledge in organic chemistry.
Finally, the results of our mediation analyses point out that the role of visual model comprehension for chemistry study success is that of a mediator between chemistry-related (prior) content knowledge and subsequent content knowledge gains and lecture exam grades, respectively. In short, visual model comprehension seems to play an important role for studying chemistry at university in that it enhances learning success on the one hand and is enhanced by the knowledge already available at the beginning of studies on the other. This finding bears two important implications. First, if visual model comprehension predicts knowledge acquisition, it might be worth fostering it by means of training programs that focus on the visual and spatial aspects (e.g., learning how to "read" visualizations and which conventions stand for which content-related aspects) rather than on the content itself. Such general trainings for visual model comprehension would also give more insight into the question of whether this construct is a chemistry-specific ability or also enhances learning success in other science-related domains. Second, if the knowledge that students bring from high school when starting their university studies has such an impact on visual model comprehension, it might also be worth investigating whether this works vice versa at school already.
In addition, we found that visual model comprehension is in turn also predicted by a number of individual prerequisites, of which spatial ability and figural reasoning appear to be the most plausible. However, we also found a relation with gender and mathematical abilities, especially at the very beginning of studies, and this needs further investigation, since, with the exception of gender, the predictors of visual model comprehension should also be taken into account when considering ways to increase visual model comprehension and thus study success.
In sum, visual model comprehension is a dynamic construct that increases over the course of the first two semesters (RQ 2), that can predict study success in chemistry in terms of content knowledge acquisition and exam grades (RQ 3), and that is in turn predicted by several other individual learner characteristics (RQ 4). This latter finding, however, also points to an important limitation of our study and the interpretation of its results. Although the role of visual model comprehension is undoubtedly significant and positive, we have to keep in mind that it is neither the strongest nor the only predictor of chemistry study success. That is, if we want to support students in improving their learning as well as their grades in chemistry, we should think of more comprehensive intervention programs than just training visual model comprehension. In other words, the big picture is always made up of many small puzzle pieces, of which visual model comprehension is only one, albeit one that should not be missing.
With regard to our participants, the results were generated on the basis of initially 275 chemistry students from two large universities in Germany. These universities were very comparable with regard to their schedules and study requirements, so it might be worth looking at different universities to find out whether the pattern of results stays the same for study courses with different foci, for instance. Furthermore, we accompanied the students over the course of the first two semesters only. It would be interesting to see whether the path models and mediations stay similar over the further course of studies in higher semesters as well. This could only be done by a comprehensive longitudinal study, which would mean facing challenges in terms of sample acquisition (as can already be guessed from the reduced number of participants after the second semester) and a project organization that does not coincide with the regular study progress.
With regard to visual model comprehension, although our instrument appears to be valid and reliable and can be economically administered and quickly analyzed due to its multiple-choice structure, it might also be worth considering more comprehensive approaches to assessing the ability to work with visualizations. For instance, Cooper et al. (2012) developed an instrument to assess students' abilities to process implicit information from Lewis structures and accordingly decode learning-relevant information contained in structure–property connections. This instrument works with open-ended questions and student interviews, and combining the two approaches might give additional and valuable insights not only into the quantitative aspects of visual model comprehension (how much do students know?) but also into the qualitative side (what do they know, and if they do not, where exactly are their deficits and misconceptions?). In other words, extending the ways to assess visual model comprehension would also provide more information on what exactly we can support, especially if students appear to lack the abilities necessary to cope successfully with the demands of chemistry study programs.
Finally, one important limitation of our study is that we investigated the role of visual model comprehension for learning with the existing materials and requirements of the first two university semesters. However, unlike traditional multimedia research, we did not investigate the role of the visualizations themselves. That is, we now know more about how important visual model comprehension (and other individual learner characteristics) is for studying chemistry within the given learning scenario, but we do not know which role is played by the learning materials themselves, that is, the instructional design. In this regard, and returning to classical multimedia research, it might be worth investigating whether, and under which circumstances, visual model comprehension and thus learning success is higher when chemistry students learn with iconic visualizations like space-filling models (which might be more helpful, for instance, for learning about the more concrete structure of matter) or with symbolic-mathematical visualizations like Lewis structures or diagrams (which might be more helpful, for instance, when learning about the more abstract concept of energy). This could again give valuable insights into the more general question of what should be fostered, and how, if we want all students to benefit equally from study courses at university.
This journal is © The Royal Society of Chemistry 2019