Revisiting the use of concept maps in a large enrollment general chemistry course: implementation and assessment

Lance E. Talbert , James Bonner , Kiana Mortezaei , Cybill Guregyan , Grace Henbest and Jack F. Eichler *
Department of Chemistry, University of California, Riverside, 501 Big Springs Road, Riverside, CA 92521, USA. E-mail: jack.eichler@ucr.edu; Tel: +1-(951)-827-3794

Received 26th February 2019 , Accepted 24th June 2019

First published on 25th June 2019


In an effort to improve student conceptual understanding and help students better connect pre-existing knowledge to new ideas, a concept map assignment was implemented in a first-year college level general chemistry course. This implementation included a quasi-experiment that was carried out in discussion group recitation sections within a third-quarter general chemistry course. Students enrolled in a single section of the course were divided into two groups in which a concept map treatment was compared to a control group that completed short journal entries. Comparison of concept inventory post-test scores using an independent samples t-test indicates students in the concept map treatment appear to perform better than the students in the journal control group (t = 2.34, mean difference = 0.844, p < 0.05). However, a multi-variable regression analysis in which the concept inventory post-test scores were compared between the treatment and control groups, while traits related to incoming academic preparation were held constant, suggests there was no significant difference in performance (unstandardized b = 0.222, p = 0.540). The quality of the students’ concept maps was also evaluated and correlated to student performance on the concept inventory, and it appears students who were better at concept mapping made greater gains in conceptual understanding (Pearson's r = 0.295, p < 0.05). When the relationship between the quality of concept mapping and concept inventory post-test performance was determined while holding constant covariates related to incoming academic preparation, the unstandardized b coefficient was positive but not significant at the p = 0.05 level (unstandardized b = 0.215, p = 0.134). This study does not provide unequivocal evidence that a concept map treatment leads to greater gains in conceptual understanding compared to a control population, or that students with better concept mapping skills performed better on the concept inventory instrument.
Nevertheless, a template for implementing a concept map assignment in a large enrollment course is provided, and the results presented herein might prompt chemistry instructors to consider including concept map assignments in their instructional toolbox.


Introduction

General chemistry is often the first science class taken by students in their undergraduate science curriculum. Unfortunately, historically high failure rates have given this class “gatekeeper” status for students who wish to major in STEM fields. One of the factors contributing to the lack of student success is that students often enter their first undergraduate chemistry course possessing misconceived mental models, which become a barrier to learning the foundational concepts covered in the general chemistry curriculum (Cros et al., 1986; Mulford and Robinson, 2002; Harrison and Treagust, 2018). Even students who have performed well on typical classroom assessments in their general chemistry courses have been found to struggle when asked to provide conceptual explanations for questions related to core learning objectives in the general chemistry curriculum (Cooper et al., 2013).

To overcome the limitations typical classroom assessments possess with regard to helping students develop clear and cogent mental models and conceptual understanding, considerable effort has been devoted to developing and employing metacognitive interventions (Novak, 1990; Rickey and Stacy, 2000). Metacognition is the process used to evaluate and monitor one's own understanding and performance. Several strategies are currently used to enhance metacognition, including paraphrasing and rewriting, working on homework problems, previewing material, and pretending to teach information (Rickey and Stacy, 2000). While the use of these strategies has been reported in the educational research literature (Cook et al., 2013), there remains a need to develop more widely applicable implementation strategies that can both help develop and assess student understanding of conceptual ideas (Johnstone, 1993; Gabel, 1999; Galloway and Bretz, 2015).

Anecdotal evidence suggests that students in the University of California-Riverside (UCR) general chemistry program also continue to struggle to provide scientifically acceptable conceptual explanations for many fundamental learning objectives. We therefore decided to revisit the use of concept maps both as an intervention to help students better develop conceptual models and as a means to measure this type of learning outcome. One of the first reports on the use of concept maps and their impact on student learning was published by Novak (1984), and concept maps have subsequently been proposed numerous times as a strategy to increase student retention and learning (Nicoll et al., 2001; Francisco et al., 2002; Kennedy, 2016). Generally speaking, a concept map requires students to reflect on their learning and the specific learning objectives by having them identify key concepts that have been covered in the course and use words or short phrases to link multiple concepts where appropriate. It is important for students to draw links between the concepts to establish the relationships between them and solidify their deeper understanding of the foundational course concepts. The process of making concept maps engages students in a form of active learning, and the continual cycle of reflection required to update the conceptual links makes concept mapping a potentially effective form of metacognitive engagement (Chevron, 2014).

The theoretical framework underpinning the impact of concept mapping on student conceptual understanding can be traced to Ausubel's assimilation theory of learning (Ausubel, 1968). Though this is reviewed in detail by Novak (1984), we highlight here that Ausubel proposes meaningful learning takes place when the learner relates new knowledge to concepts he/she already knows, and when the learner can identify the key concepts in the new knowledge and relate these to other concepts in other contexts. Ausubel points out that if this connection of new knowledge to other concepts does not occur, verbatim/non-substantive learning can still take place, but this type of learning has less value and can actually interfere with subsequent learning. It is quite clear that concept mapping can play an important role in promoting the type of meaningful learning described here (Nesbit and Adesope, 2006; Turan-Oluk and Ekmekci, 2018), and STEM instructors should certainly give it stronger consideration as part of their instructional arsenal.

As alluded to above, not only does a concept map assignment provide an opportunity for students to develop more complete mental models, it can also be a valuable tool for assessing students’ conceptual understanding (Novak and Gowin, 1984). Assessing student conceptual understanding might be a daunting task for many instructors, but fortunately a variety of rubrics have been developed for evaluating student conceptual thinking. Typical rubrics focus on scoring the hierarchies presented by students and/or assess the validity of the proposed links in traditional concept map assignments (Novak and Gowin, 1984; Francisco et al., 2002), whereas more recent studies have proposed creating new types of concept linking activities that can be more easily evaluated (Ye et al., 2015).

Though concept maps are clearly a valuable intervention for developing and assessing student conceptual understanding, their use in first-year college level chemistry courses has not been broadly demonstrated. Previous studies on the use of concept maps in general chemistry courses are generally limited to small enrollment implementations (n < 100; Regis et al., 1996; Markow and Lonning, 1998; Besterfield-Sacre et al., 2013; Luxford and Bretz, 2014), and concept maps are often used only to map concepts within specific learning units of the course. Furthermore, for studies that have been carried out in large enrollment courses, the analysis of student data has generally been limited to a small subset of the class population (Burrows and Mooring, 2015). Perhaps the most noteworthy implementation of a concept map intervention in a large enrollment general chemistry course comes from Francisco and coworkers (2002). In this study, the implementation focused not only on the students’ completion of the concept map assignments, but also incorporated graduate student teaching assistants (TAs) into the grading of concept maps and the delivery of feedback to the students. It was shown that concept maps in large enrollment classes do positively impact the development of student conceptual knowledge; in particular, the concept map intervention appeared to improve student performance on complex multi-step algorithmic problems. However, Francisco and coworkers also reported significant resistance from the students to including the concept maps as graded activities, and the implementation required significant training of the TAs in order to achieve an observable positive impact.

Hence, the goal was to create a concept map implementation for a large enrollment general chemistry course that achieved the following objectives: (1) balance the need to create a significant incentive for students to complete the concept maps with the desire to avoid student resistance to being graded on a subjective measure; (2) provide a template for a concept map assignment that can be more easily adopted by other instructors of large enrollment courses; and (3) allow for the design of a quasi-experiment that adds to the limited pool of data describing the impact of concept map assignments on student learning outcomes in large enrollment general chemistry courses. The experimental hypotheses that drove the research design were: (1) students who completed a quarter-long concept map intervention would achieve more significant learning gains compared to a control group of students who did not use concept mapping; and (2) higher proficiency in creating valid and well-developed concept maps would correlate with gains in conceptual understanding. Herein, we describe how a concept map assignment was administered in a streamlined fashion using TA-led discussion group recitation sessions. We also describe results from a quasi-experiment in which student performance on a concept inventory and student survey responses were compared between a concept map treatment group and a control group in which students completed weekly journal entries.

Implementation design and research methods

Course description

The concept map implementation was carried out in a third-quarter general chemistry course (CHEM 001C), the third course in the three-quarter general chemistry sequence offered at UCR. The topics covered in this course include: chemical equilibrium; acid–base chemistry; buffers and titrations; electrochemistry; coordination chemistry; and nuclear chemistry. This course was taught for an “on sequence” cohort of first-year students in the spring of 2018 (S18), but also enrolled second-year students and upper-class students who may have needed the course as a general college science requirement or a pre-requisite for medical and other health-related professional schools.

Quasi-experimental design

The S18 course consisted of a large enrollment lecture, which met twice per week for 80 minutes, and associated recitation sections that met once per week for 50 minutes (30–40 students each). Approximately one-third of the lecture meetings used flipped classroom modules (Eichler and Peeples, 2016), while the remaining class periods included a mixture of lecture, peer-to-peer discussion, and interactive clicker questions. In the quasi-experimental design, the recitation sections taught by a TA who had been prepared in advance to implement the concept map intervention were assigned as the treatment group (n = 115). The remaining recitation sections, taught by a different TA, were assigned as the control group, in which students completed a weekly journal entry in lieu of a concept map (n = 123). This design also allowed the instructors to assign similar workloads and use the same grading scheme in both groups. To minimize potential “treatment” effects in the journal entry control group, these students wrote weekly journals summarizing what they had learned in each week's lectures, and no guidance regarding how to structure the journal entries was provided. Students were simply instructed to describe what they learned in lecture each week. Conversely, the concept map treatment group was instructed to build on the map from the previous week, to emphasize important concepts from each chapter, and to show the relationships between the various chapters.

Each concept map or journal was awarded a single point for nine of the ten weeks, and four points for the final week. Though all assignments were graded simply for completion and these points were awarded as extra credit toward the overall lecture grade, students were informed that no points would be awarded if they simply submitted their previous concept map. During the course of the term the TA found no instances in which students handed in unchanged concept maps from previous weeks. This grading design was chosen in an effort to promote student compliance in completing the weekly concept map assignment without creating the type of resistance one might expect to arise when subjective assignments such as this are graded more rigorously.

To increase the convenience of collecting and evaluating the concept maps, students were required to use a concept mapping program. Cmap is a freely available software program that students were able to download to their computers for the development of the concept maps, and students were able to save and submit their maps as PDF files (Cañas et al., 2005; Novak and Cañas, 2008). The TA was familiarized with the use of the Cmap program and was able to answer any questions students had about the program. To help ensure students were able to navigate the Cmap program and properly save their concept maps, the TA created a video tutorial that the students could view at any time during the term on the online course management system.

Questions regarding the Cmap program or the creation of concept maps were covered in the first TA-facilitated recitation meetings. Students were instructed to develop their concept maps freely and were only provided an outline of a concept map containing the chapters that would be covered during the course (see Appendix 1, ESI). The TA spent approximately 30 minutes of the first discussion group session providing an overview of how to create a concept map and why the concept maps were being included as an ongoing assignment. Aside from being given the preliminary concept outline, students had complete control over their concept maps and were instructed to build them using the material they had learned each week. This gave students full control over how they would build the concept map and how they would connect key concepts to one another. Guidance in developing the concept maps was provided through class-wide verbal feedback, which focused on highlighting key misconceptions identified by the TA; approximately 10 minutes were devoted to this feedback each week. Following each midterm exam, an instructor-completed concept map was shown to the students in their discussion sections and the TA led a dialogue to help elucidate for the students how proper connecting terms should be constructed. These post-exam debriefs took up approximately 10–15 minutes of discussion group time, and helped model for the students the types of conceptual thinking that should be used for the remainder of the quarter as they continued to build their maps. In the last discussion group session prior to the final exam, the TA spent approximately 15–20 minutes discussing with the students how they should use their final concept map as a study tool for the final exam and reviewing the traits of a well-developed concept map.

Grading rubric for concept maps

Concept maps were scored using an adapted version of the concept map rubric developed by Besterfield-Sacre and co-workers (2013). Concept maps were scored on comprehensiveness of the covered material, organization and linking between chapters, the correctness of the material in each chapter, and the correctness of the links between concepts (see Table 1). To ensure there was consistency in how the scoring rubric was used to allocate points, two coders discussed how points were to be awarded and each independently scored a random sample of 10 concept maps for comprehensiveness and organization/links (the graders consisted of a graduate student teaching assistant and an undergraduate research assistant). The course instructor then met with the two coders to discuss how the rubrics were applied in the grading and clarified how the student responses should be evaluated. The coders then independently evaluated another random sample of 10 concept maps, and it was found that they agreed on greater than 80% of the items. The remaining concept maps were scored for the comprehensiveness and organization/links categories. To ensure consistency and accuracy for scoring in the correctness category, two experienced chemistry instructors evaluated a random sample of concept maps, and scores were compared in the same manner as described above to calibrate the application of the rubric. An additional random sample of concept maps was evaluated by both instructors to ensure over 80% of the items were coded in an equivalent fashion, and the remaining concept maps were coded for correctness.
Table 1 Scoring rubric for concept maps. Each category is scored 1–3, giving a total score out of 9 for each concept map
Points Comprehensiveness Organization/links Correctness
3 Contains 1 or more key concepts in each chapter. Map is clear and easy to follow. Contains links between 4 or more discussed chapters. Map contains 1 or fewer errors.
2 Contains 1 or more key concepts in 2–4 chapters. Map is slightly cluttered. Contains links between 3 chapters. Map contains 2–3 errors.
1 Contains 1 or more key concepts in 0–1 chapters. Map is not easy to follow. Contains links between only 2 chapters. Map contains 4 or more errors.
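As a side note for implementers, the inter-rater calibration described above (two coders independently scoring a random sample of maps and checking that they agree on more than 80% of the items) amounts to a simple percent-agreement calculation. A minimal sketch in Python, using hypothetical scores rather than the study's data:

```python
def percent_agreement(coder_a, coder_b):
    """Fraction of rubric items on which two coders gave identical scores."""
    if len(coder_a) != len(coder_b):
        raise ValueError("coders must score the same set of items")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical rubric scores (1-3) for 10 maps x 2 categories per coder.
coder_a = [3, 2, 2, 1, 3, 3, 2, 2, 1, 3, 2, 3, 1, 2, 3, 2, 2, 3, 1, 2]
coder_b = [3, 2, 2, 2, 3, 3, 2, 2, 1, 3, 2, 3, 1, 2, 3, 2, 1, 3, 1, 2]

print(f"{percent_agreement(coder_a, coder_b):.0%} agreement")  # 90% here
```

More sophisticated agreement statistics (e.g., Cohen's kappa) correct for chance agreement, but a raw percent-agreement threshold matches the calibration procedure described in the text.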


Fig. 1 and 2 illustrate examples of a well-developed and a poorly developed concept map, respectively. In Fig. 1, the student used a color-coded system to connect sub-topics to broader concepts, and the concept map has several branches linking the broader concepts to each other. The connecting phrases between the various key concepts are also well developed and almost universally accurate. The concept map shown in Fig. 2 lacks many of the key concepts covered in each chapter and does not include many of the sub-topics observed in the more developed concept maps. Additionally, many of the connections that should have been made between the broader key concepts were not observed (e.g., other than a single connection between ICE tables and salt solutions, there are no connections drawn between any of the other chapters).


Fig. 1 Example of a well-developed concept map with correct connections and connecting phrases (comprehensiveness = 2, organization/links = 3, correctness = 3, total score = 8/9).

Fig. 2 Example of a less well-developed concept map with significant numbers of incorrect connections and connecting phrases (comprehensiveness = 1, organization/links = 1, correctness = 1, total score = 3/9).

If readers are interested in implementing the concept map intervention as described here, an instructor guide is included in the Appendices (see Appendix 2). This guide includes instructor notes that suggest how to prepare TAs to facilitate the weekly assignments, describe how the TA incorporated the concept map assignment into the weekly recitation discussions, and explain how periodic feedback on the concept maps was provided to the students. Instructions for using the Cmap program and the concept inventory used in this study are also provided (see Appendix 2).

Concept inventory, SALG survey, and data collection

To test the first quasi-experimental hypothesis, a concept inventory and a student self-assessment of learning gains (SALG) survey were used in a pre-test/post-test alternative treatment/control group design as described by Barbera and co-workers (Mack, 2019a), and all student data were collected under an approved human subjects protocol (UCR Institutional Review Board protocol HS-10-135). Though to our knowledge the efficacy of a concept map intervention has not been previously evaluated using this type of quasi-experimental design in a chemistry course, we would like to highlight the report from Burdo and O’Dwyer (2015) that assessed the impact of a concept map treatment in an undergraduate physiology course. In this prior study the concept map treatment group was compared to a negative control group and a second treatment group in which retrieval practice was chosen as the independent variable of interest. The experimental design described by Burdo and O’Dwyer serves as a good model for the quasi-experiment reported herein, though it is noted this previous study assessed the impact of the concept map treatment on more traditional course exams as opposed to a measure of conceptual understanding.

The concept inventory consisted of a total of 16 multiple choice questions and was graded out of a total score of 16 (1 point for each item). This concept inventory was administered as a pre-test for students in the treatment and control groups in the first week of recitation. The pre-test was administered using the online quiz function in the course management system. Students were not able to access the quiz once it was completed and they were not allowed to view the questions or answers at any point during the quarter. The same questions from the concept pre-test were placed into the final exam for students in both the treatment and control groups, and this post-test was used to measure the improvement in conceptual understanding throughout the ten-week quarter. Any subsequent references to the concept inventory “post-test” are restricted to the 16 concept inventory questions that were embedded in the final exam. Any references to the “final exam” include the entire exam, which consisted of the 16 concept inventory questions, 14 additional algorithmic multiple-choice questions, and five free response questions. Finally, the authors note it would have been ideal to use a previously validated concept inventory to probe student gains in conceptual understanding. However, the unique combination of topics covered in this course dictated the use of a customized concept inventory that better matched the 12 respective course learning objectives. Because the concept inventory contained multiple dimensions (i.e., five different categories of concepts) the stratified alpha (αs) reliability coefficient was calculated as described by Widhiarso and Ravand (2014), and an item-analysis was carried out for the 16 test items (see Appendices 3–5, ESI).
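For readers unfamiliar with the item statistics referenced above, item difficulty is the proportion of students answering an item correctly, and the point-biserial correlation relates each dichotomous item to the total (or rest) score. A sketch of the calculation on a synthetic 0/1 response matrix (not the study's data) might look like:

```python
import numpy as np

# Synthetic students x items matrix of dichotomous (0/1) responses.
rng = np.random.default_rng(0)
responses = (rng.random((200, 16)) < 0.6).astype(int)

total = responses.sum(axis=1)

# Item difficulty: proportion of students answering each item correctly.
difficulty = responses.mean(axis=0)

def point_biserial(item, total):
    """Point-biserial of an item against the rest-score (total minus the
    item itself, to avoid inflating r by including the item in the total)."""
    rest = total - item
    return np.corrcoef(item, rest)[0, 1]

discrimination = np.array([point_biserial(responses[:, j], total)
                           for j in range(responses.shape[1])])

print(difficulty.round(2))
print(discrimination.round(2))
```

Some analyses correlate against the raw total score instead of the rest-score; the rest-score version used here is the more conservative choice.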

The SALG survey was also administered in a pre/post format using a modified version of the freely available instrument. Questions that focused on conceptual learning gains, learning gains related to specific course topics, and learning gains related to applying course concepts in other contexts were chosen for the purpose of this study (see Table 7). The pre-SALG survey was available to students during a seven-day period starting two days before the first class and the post-SALG was available to students during a seven-day period starting two days before the final exam, and both surveys were completed using the online SALG interface. Students were informed that if they completed both the pre- and post-SALG they would receive a small amount of extra credit toward their final course point total (5 points out of 1000 total course points).

Statistical analyses

In the process of testing the quasi-experimental hypotheses, student performance on the pre- and post-test concept inventory was compared between the treatment and control groups. All statistical analyses were carried out using the SPSS Statistics 24 software package.§ The incoming academic preparation of the students in the concept map treatment and journal entry control groups was compared using an analysis of variance (ANOVA). Concept inventory pre/post-test gains within the treatment and control groups were analyzed using paired t-tests, post-test scores were compared between the treatment and control group using an independent samples t-test, and a comparison of the performance on the post-test concept inventory between the treatment and control group was carried out using a multiple linear regression model. In order to impart more rigorous statistical control of the students' incoming academic preparation, multiple regression analyses were carried out in which the concept inventory post-test scores were compared between the treatment and control groups while holding constant the concept inventory pre-test scores, overall high school grade point averages (GPAs), and math SAT scores (Leech et al., 2003). The relationship between the ability to construct high quality concept maps and performance on the concept inventory post-test was also evaluated by determining the Pearson correlation coefficient, and a multiple linear regression model was used to determine whether the correlation between concept map rubric scores and concept inventory post-test scores remained significant while holding constant the incoming student academic preparation. Power analyses were carried out as described by Cohen (1988) to estimate the power to detect a significant regression coefficient in the multiple linear regression models.
Finally, in an effort to obtain a preliminary overview of the student perception of the concept map treatment the descriptive responses on the SALG survey were compared between the concept map treatment and journal control groups.

Results and discussion

Table 2 summarizes the descriptive statistics for the control and treatment groups participating in the quasi-experiment. The academic preparation of students in the treatment and control groups appeared to be roughly equivalent when considering the distribution of pre-test scores and high school GPAs within each group, yet the concept map treatment group did appear to have slightly higher math SAT scores compared to the journal control group. The concept inventory pre-test scores, math SAT scores, and overall high school GPAs were compared using an ANOVA, and these results appear to corroborate the notion that there is no significant difference in the incoming academic abilities of the students between the two populations as measured by high school GPA and concept inventory pre-test (the null hypothesis stating there is no difference between the mean scores for all three of these independent variables cannot be rejected at the p = 0.05 level; see Appendix 6, ESI). Conversely, the null hypothesis stating the math SAT scores are equivalent between groups can be rejected (F = 6.681, p < 0.05; see Appendix 5, ESI).
Table 2 Descriptive statistics for the recitation section treatment and control groups (control = journal group; treatment = concept map)
  Control (n = 115) Treatment (n = 123)
a Not all students submitted a final concept map/journal. The n value designates the number of students that completed the final concept map/journal assignment.
High School GPA Avg. 3.74 ± 0.26 3.77 ± 0.27
SAT Math Avg. 582 ± 80 612 ± 85
Pretest Avg. (out of 16) 5.57 ± 2.07 6.09 ± 2.60
Post-test Avg. (out of 16) 9.91 ± 2.81 10.8 ± 2.7
Exam 1 Avg. (out of 100) 64.0 ± 15.7 66.0 ± 15.3
Exam 2 Avg. (out of 100) 75.1 ± 15.9 78.4 ± 14.5
Final exam total Avg. (out of 400) 244 ± 60 253 ± 59
Avg. rubric score on final concept/journal entry (out of 9)a 7.00 ± 1.23 (n = 100) 6.17 ± 1.55 (n = 97)
Asian n = 46 n = 58
Black or African American n = 3 n = 0
Hispanic or Latino n = 45 n = 36
Multi-race/unknown/non-resident Alien n = 11 n = 13
White n = 9 n = 16
Male n = 55 n = 58
Female n = 58 n = 61
Gender not reported n = 2 n = 2


The comparison of the concept inventory pre- and post-tests suggests students in both the treatment and control groups made gains in conceptual understanding during the course of the term. A paired t-test in which the average pre- and post-test concept inventory scores were compared indicates a statistically significant increase in average score for the post-test within both the treatment and control groups (journal control: mean difference = 4.39; t = −15.058; p < 0.001; concept map treatment: mean difference = 4.61; t = −16.302; p < 0.001; see Table 3). These results are important to note because, although the concept inventory questions were not selected from a previously validated question set, this instrument appears to measure gains in conceptual understanding with statistically significant results for both groups. The general utility of the concept inventory was confirmed by an item analysis and a single-administration internal consistency analysis. The item analysis indicates all of the questions on the inventory exhibited an item difficulty between 0.30 and 0.85 (see Appendix 3, ESI), the item discrimination analysis revealed 12 of the 16 questions possess a point-biserial correlation greater than 0.20 (see Appendix 4, ESI), and the stratified alpha reliability coefficient (αs) was found to be 0.661, suggesting moderate reliability for the concept inventory (see Appendix 5, ESI).

Table 3 Paired t-test comparing concept inventory pre-test and post-test scores within the treatment and control groups for the concept map implementation
  n Mean difference Std. deviation Std. error mean of difference p
Journal control pre- vs. post-test mean 108 4.39 3.03 0.29 <0.001
Concept map treatment pre- vs. post-test mean 118 4.61 3.07 0.28 <0.001
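For context, the stratified alpha (αs) reported above combines the internal consistency of each concept stratum rather than treating the 16 items as unidimensional. A hedged sketch of the Widhiarso and Ravand (2014) formula on synthetic data (the five-category item grouping shown is hypothetical, not the study's actual stratification):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a students x items score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def stratified_alpha(data, strata):
    """alpha_s = 1 - sum_i sigma_i^2 (1 - alpha_i) / sigma_x^2,
    where i runs over strata and sigma_x^2 is the total score variance."""
    total_var = data.sum(axis=1).var(ddof=1)
    penalty = 0.0
    for s in set(strata):
        cols = [j for j, g in enumerate(strata) if g == s]
        sub = data[:, cols]
        penalty += sub.sum(axis=1).var(ddof=1) * (1 - cronbach_alpha(sub))
    return 1 - penalty / total_var

# Synthetic dichotomous responses driven by a shared latent ability.
rng = np.random.default_rng(1)
ability = rng.normal(size=(300, 1))
data = ((ability + rng.normal(size=(300, 16))) > 0).astype(int)
strata = [0] * 4 + [1] * 3 + [2] * 3 + [3] * 3 + [4] * 3  # five concept groups

alpha_s = stratified_alpha(data, strata)
print(round(alpha_s, 3))
```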


To begin evaluating the first research hypothesis, which stated the concept map treatment would yield greater gains in conceptual learning relative to the journal control group, an independent samples t-test was used to compare the mean scores on the concept inventory post-test between the two groups (see Table 4). Based on this independent samples t-test, it appears the concept map treatment group performed better on the post-test relative to the journal control group (mean difference = 0.844, p = 0.020). The mean difference observed between groups translates to an approximately 5% increase in exam performance for the concept map treatment group, and the Cohen's d suggests the concept map treatment had a moderate effect on the concept inventory performance. However, because the study groups in this quasi-experiment were not randomly assigned, caution must be taken not to over-interpret these results. More specifically, because the assignment of the study groups was not randomized, relying solely on a comparison of means in which no background academic traits were statistically controlled was not prudent. Therefore, further analysis was carried out in which the academic preparation characteristics of the treatment and control groups were statistically controlled.

Table 4 Independent samples t-test comparing concept inventory post-test between the concept map treatment and journal control groups
  t df Mean difference Std. error mean of difference p Cohen's d effect size
Journal control (mean = 9.91) vs. concept map treatment (mean = 10.8) 2.34 235 0.84 0.36 <0.05 0.308
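The between-groups comparison in Table 4 pairs a pooled-variance t statistic with Cohen's d, which expresses the mean difference in pooled standard deviation units. A compact sketch, assuming the pooled-variance form and using illustrative data:

```python
from math import sqrt
from statistics import mean, variance

def independent_t_and_d(x, y):
    """Pooled-variance independent-samples t statistic and Cohen's d for x vs. y."""
    n1, n2 = len(x), len(y)
    diff = mean(x) - mean(y)
    # Pooled variance across the two groups
    sp2 = ((n1 - 1) * variance(x) + (n2 - 1) * variance(y)) / (n1 + n2 - 2)
    t = diff / sqrt(sp2 * (1 / n1 + 1 / n2))
    d = diff / sqrt(sp2)  # effect size in pooled-SD units
    return t, d

# Hypothetical post-test scores: treatment group vs. control group
t_stat, d_effect = independent_t_and_d([11, 9, 12, 10, 13, 9], [9, 8, 11, 9, 10, 8])
```

By Cohen's conventional benchmarks, d values near 0.2, 0.5, and 0.8 are read as small, medium, and large, which is why the reported d = 0.308 is described as a moderate effect.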


Multiple regression analyses

Even though the quasi-experimental design is not able to control for the myriad independent variables that likely impact student performance, including incoming academic preparation, a multiple regression analysis can be used to statistically control for these variables. Given that math SAT scores appear to be slightly higher for the concept map treatment group, it is especially pertinent to hold this variable constant when comparing the concept inventory post-test scores between the treatment and control groups. The model employed here included three independent variables: concept inventory pre-test scores, overall high school GPAs, and math SAT scores. Because the quasi-experimental hypotheses concern the impact of the concept map treatment on conceptual understanding, the linear regression model included the concept inventory post-test as the dependent variable. The final exam contained a significant number of algorithmic problems that are likely less correlated with conceptual thinking; it was therefore not included in any of the analyses.

Since it was hypothesized that high school GPA, math SAT scores, and concept inventory pre-test scores would likely correlate with performance on the concept inventory post-test, these covariates were held constant when comparing the post-test score dependent variable between the treatment and control groups. Generally speaking, high school GPA is known to be a strong predictor of student success in college-level courses (Zwick and Sklar, 2005), and has in fact been shown to be a stronger predictor of success than standardized tests such as the ACT and SAT (Geiger and Santelices, 2007). Recent reports also suggest math SAT scores can be a strong predictor of student performance in general chemistry (Vincent-Ruz et al., 2018; Mack et al., 2019b), which suggests this variable is likely to be positively correlated with the dependent variable. These previous studies therefore suggest that including both high school GPA and math SAT scores as covariates in the regression model should help isolate the impact of group participation (treatment vs. control) on the post-test scores. Including the concept inventory pre-test scores as a covariate is consistent with the goal of building an explanatory model, as this assessment most directly measures students’ existing conceptual understanding. Holding this variable constant is also expected to aid in isolating the impact of the concept map treatment on final concept inventory performance.

The results of the final multiple regression analysis are shown in Table 5. The concept inventory post-test dependent variable was compared between all participants in the treatment and control groups, while holding constant the independent variables related to student academic preparation (high school GPA, SAT math scores, and concept inventory pre-test scores). Though participation in the treatment group was positively related to performance on the concept inventory post-test, this result was not statistically significant (unstandardized b = 0.222, p = 0.540). The independent variables of high school GPA, math SAT, and concept inventory pre-test were also positively related to performance on the concept inventory post-test (unstandardized b = 0.174, 0.015, and 0.235, respectively), though interestingly the impact of high school GPA was not statistically significant. Unsurprisingly, the concept inventory pre-test appeared to have the strongest relationship to performance on the post-test measure. Because the high school GPA, math SAT, and concept inventory pre-test covariates may be redundant in terms of predicting student performance on the post-test, the correlations between these covariates were calculated to determine whether collinearity might be present in the regression model (see Appendix 7, ESI). None of the pairs of independent variables was found to have a correlation coefficient greater than 0.282, therefore no further changes were made to the regression model.

Table 5 Multiple regression analysis. Includes full class; dependent variable = concept inventory post-test. Group indicates coded treatment/control (journal control group = 0; concept map treatment group = 1)a
  Unstandardized coefficients Standardized coefficients t p
b Std. error Beta
a R = 0.534; R2 = 0.285; adjusted R2 = 0.270; standard error of the estimate = 2.46; estimated power to detect a significant regression coefficient ≈0.05 (see Appendix 8, ESI for description of the power estimation).
Constant −1.03 2.92   −0.355 0.723
Group 0.222 0.362 0.0390 0.614 0.540
High School GPA 0.174 0.674 0.0160 0.258 0.796
SAT Math 0.015 0.002 0.439 6.71 <0.001
Concept inventory pre-test 0.235 0.080 0.439 2.94 <0.05
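The model in Table 5 is an ordinary least-squares fit of the post-test on group membership plus the three covariates. A sketch with synthetic, noise-free data, where the generating coefficients loosely echo the reported unstandardized b values; none of these numbers are the study's records, and real data would include a residual error term:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
group = rng.integers(0, 2, n).astype(float)  # 0 = journal control, 1 = concept map
gpa = rng.normal(3.5, 0.3, n)                # high school GPA
sat = rng.normal(600.0, 60.0, n)             # math SAT
pre = rng.normal(6.0, 2.0, n)                # concept inventory pre-test

# Noise-free response so OLS recovers the generating coefficients exactly
post = -1.0 + 0.22 * group + 0.17 * gpa + 0.015 * sat + 0.24 * pre

# Design matrix with an intercept column; least-squares solve
X = np.column_stack([np.ones(n), group, gpa, sat, pre])
coefs, *_ = np.linalg.lstsq(X, post, rcond=None)
```

Each entry of `coefs` corresponds to one row of Table 5 (constant, group, GPA, SAT math, pre-test); the standard errors and p-values reported in the table come from the usual OLS variance estimates, which are omitted here for brevity.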


A post hoc power analysis was carried out to estimate the model's power to detect a significant regression coefficient, as described by Cohen (1988). This power estimate was obtained by comparing the R2 of the full model to the R2 of the model in which the group participation independent variable was not included (estimated power ≈0.05; see Table 5; see Appendix 8 (ESI) for the regression model without the group participation independent variable). Though the multiple regression model suggests there was not a significant correlation between study group participation and the concept inventory post-test, the power analysis suggests there is a relatively strong likelihood this model yields a false negative conclusion (i.e., the erroneous retention of the null hypothesis). The post hoc power analysis included an estimate of the effect size index, f2, for the change in R2 of the full regression model compared to the regression model in which the class treatment independent variable was removed. Cohen identifies f2 effect size indexes of 0.02, 0.15, and 0.35 as small, medium, and large, respectively (1988). The effect size index for the regression model reported here was estimated to be 0.0014 (see Appendix 8, ESI). This suggests the concept map treatment had a minimal effect on the concept inventory post-test, and provides some explanation as to why the sample size used in this study resulted in low statistical power. Cohen also describes how to estimate the sample size required for a desired level of power (1988); a model that includes the same number of independent variables used in the current study would require approximately 618 participants to achieve a power of 0.80 with an f2 effect size index of 0.02.
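Cohen's f2 is the increment in R2 scaled by the unexplained variance of the full model. A worked check of the reported value: the full-model R2 of 0.285 comes from Table 5, while the reduced-model R2 used here (0.284) is back-calculated for illustration rather than read from Appendix 8:

```python
def cohens_f2(r2_full, r2_reduced):
    """Cohen's f^2 effect size index for the increment contributed by a predictor set."""
    return (r2_full - r2_reduced) / (1.0 - r2_full)

# Full-model R^2 from Table 5; reduced-model R^2 is an assumed stand-in
f2 = cohens_f2(0.285, 0.284)  # ~0.0014, consistent with the reported estimate
```

Because the group variable adds almost nothing to R2, f2 falls far below even Cohen's "small" benchmark of 0.02, which is why the estimated power to detect the group coefficient is so low.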

To evaluate the second research hypothesis, which stated that more well-developed concept mapping skills will correlate with gains in conceptual understanding, the correlation between concept inventory post-test scores and final concept map rubric scores was determined. The Pearson's correlation coefficient suggests there is indeed a correlation between concept mapping skills and performance on the final concept inventory post-test (r = 0.295, p < 0.05; see Fig. 3). Because this correlation did not account for the potential impact of other confounding variables on concept inventory performance, a multiple regression model was also used to estimate the relationship between concept mapping skills and performance on the concept inventory post-test while holding constant the students’ incoming academic preparation. The model indicates there is a positive relationship between concept mapping skills and concept inventory post-test scores, yet it was not found to be statistically significant (unstandardized b = 0.210, p = 0.147; see Table 6). A post hoc power analysis was used to determine the power of the model to detect a significant regression coefficient, as described above, and it is estimated this model might very well yield a false negative result (estimated power ≈0.24; see Table 6). Appendix 9 (ESI) contains the regression model without the concept map rubric independent variable, the calculation of the f2 effect size index, and a detailed description of how the power was estimated; it is noted this was a separate power analysis from that described above for the regression model summarized in Table 5, with each analysis carried out for the estimated effect size of its respective model. In short, the analyses described above make it difficult to arrive at a definitive conclusion regarding whether students who created more well-developed concept maps performed better on the concept inventory post-test.
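The Pearson coefficient relating rubric scores to post-test scores can be computed directly from the centered cross-products; a minimal sketch with illustrative vectors, not the study data:

```python
from math import sqrt
from statistics import mean

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

# Hypothetical concept map rubric scores vs. concept inventory post-test scores
rubric = [4, 7, 5, 9, 6, 8]
post = [8, 10, 9, 13, 9, 12]
r = pearson_r(rubric, post)
```

A coefficient of 0.295, as reported here, indicates a modest positive association; squaring it shows the rubric scores share under 10% of their variance with the post-test scores, which is why the covariate-adjusted regression can render the relationship non-significant.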


Fig. 3 Correlation of final concept map/journal entry rubric score with concept inventory post-test score (concept map rubric scores vs. concept inventory post-test scores; two-tailed Pearson correlation = 0.295; p < 0.05).
Table 6 Multiple regression of concept map rubric scores correlated to concept inventory post-test scores, holding constant math SAT, HS GPA, and concept inventory pre-testa
  Unstandardized coefficients Standardized coefficients t p
b Std. error Beta
a R = 0.586; R2 = 0.344; adjusted R2 = 0.307; standard error of the estimate = 2.21; estimated power to detect a significant regression coefficient ≈0.24 (see Appendix 9, ESI for a description of the power estimation).
Constant −0.881 4.300   −0.205 0.839
Final concept map rubric score 0.210 0.144 0.147 1.47 0.147
High School GPA 0.274 1.010 0.0260 0.271 0.787
SAT Math 0.0130 0.0030 0.424 4.00 <0.001
Concept inventory pre-test 0.225 0.106 0.217 2.11 <0.05


Table 7 SALG survey questions that were included in the quasi-experimental analysis
Survey questions
a Likert scale: 1 = not at all, 2 = just a little, 3 = somewhat, 4 = a lot, 5 = a great deal. b Likert scale: 1 = no gains, 2 = a little gain, 3 = moderate gain, 4 = good gain, 5 = great gain.
Pre-SALG: a presently, I understand…
Post-SALG: b as a result of your work in this class, what gains did you make in your understanding of…
 1. Chemical equilibrium.
 2. Acid base chemistry.
 3. How ideas we will explore in this class relate to ideas I have encountered in other classes within this subject area.
 4. How ideas we will explore in this class relate to ideas I have encountered in classes outside of this subject area.
 5. How studying this subject helps people address real world issues.
Pre-SALG: a presently, I am in the habit of…
Post-SALG: b as a result of your work in this class, what gains did you make in the following skills…
 6. Connecting key ideas I learn in my classes with other knowledge.
 7. Applying what I learn in classes to other situations.
 8. Using systematic reasoning in my approach to problems.
Pre-SALG: a presently, I…
Post-SALG: b as a result of your work in this class, what gains did you make in the following skills…
 9. Feel(ing) comfortable working with complex ideas.


Affective perceptions of students – SALG survey

The quasi-experimental design employed in this study relied on delivering the concept map treatment during co-curricular recitation sections taught by graduate TAs. Because this treatment was a relatively small part of the larger course structure, its impact on the students’ ability to make gains in conceptual understanding of the course content may have been limited relative to the learning interventions carried out in the main lecture during the course of the academic term. Therefore, the Student Assessment of Learning Gains (SALG) was used to survey students’ perceptions of affective outcomes. Students were asked to gauge the gains made in connecting chemistry concepts to real world issues, their ability to connect chemistry concepts to concepts covered in other STEM disciplines, and their general interest in the course content. These types of affective learning outcomes cannot be measured in a content-based concept inventory, but are certainly connected to long-term classroom success (Middlecamp et al., 2006). The SALG was administered in this study in a pre-post format in which students estimated their existing knowledge state and affective engagement in the course content prior to the beginning of the course, and then reported on their final learning outcomes at the end of the term.

The SALG was administered to the entire concept map treatment and journal entry control groups, and the survey respondents represent a sub-population of each group due to compliance issues related to survey completion (post-SALG respondents: n = 49 for the treatment group; n = 30 for the control group). The pre- and post-survey responses were paired, providing an opportunity to track student perceptions of gains in self-reported affective outcomes. Unfortunately, compliance issues further limited the number of students who completed both the pre- and post-survey instruments (pre/post paired respondents: n = 36 for treatment; n = 18 for control). The average Likert scale responses for all of the pre- and post-SALG questions among both study groups are summarized in Appendix 10 (ESI), and the distributions of Likert scale responses on the nine post-SALG survey questions are illustrated in Fig. 4 (questions 3–6) and Appendix 11 (ESI) (questions 1–2, 7–9). These descriptive results indicate the concept map treatment group had higher proportions of positive responses (Likert ratings of 4 or 5) than the control group on all but one of the post-SALG survey questions. However, any interpretation of these survey responses must be tempered given that the response rate on the post-SALG survey was quite low and the pre-SALG baseline responses could not be matched for a large number of these respondents. In an effort to compare how many students from each study group made gains on the various survey questions, the percentage of students who made gains was plotted against the number of questions for which an improvement was reported (see Appendix 12, ESI). This analysis indicates that 78% of the concept map treatment students reported improvement in the pre- to post-SALG responses for five or more of the nine questions, whereas 72% of the journal control group students reported such gains.
Because of the limited number of students who responded to both the pre- and post-SALG survey these results do not provide unambiguous evidence as to whether the concept map treatment led to greater impact on student perceptions of conceptual thinking or affective outcomes relative to the journal control group.


Fig. 4 Post-SALG responses to SALG questions: (A) #3; (B) #4; (C) #5; (D) #6. Post-SALG sample sizes: treatment = 49, control = 30.

Though the SALG survey data did not clearly show that the concept map treatment improved students’ affective learning outcomes, using this type of data to supplement more quantitative exam or concept inventory data could provide useful insights in future studies. Students who feel they are better prepared may have additional confidence, which in turn helps them overcome test anxiety and perform better on high-stakes assessments (Hackathorn et al., 2012). One could take the view that even if a concept map treatment were to yield exam scores equivalent to those from traditional assignments, any resulting gains in measures of self-confidence, self-efficacy, and the ability to connect chemistry to real world ideas would likely have a positive impact on student interest and success (Middlecamp et al., 2006; Lindstrom and Middlecamp, 2017).

Limitations of the study

Quasi-experimental studies conducted within an entire academic term are often limited by history and maturation effects within the study groups, and by the non-random distribution of background traits among the participants in the study (Mack et al., 2019a). In the current study, the history and maturation internal validity threats were minimized by using treatment and control groups within the same class during the same academic term. The validity threat associated with the non-random distribution of incoming academic preparation traits was also reduced by the multiple linear regression model that was used to compare the treatment and control group concept inventory post-test scores. Perhaps the most obvious limitation of this study was that the treatment and control groups were taught by two different graduate TAs; therefore, instructor effects could not be controlled. Despite possible biases in instructional effectiveness that might have been inherent between the two study groups, the two TAs did have similar experience teaching the discussion group sections and had collaborated in preparing classroom activities for previous offerings of those sections. These factors likely minimized the potential differences in student learning outcomes usually linked to instructor effects. Finally, it is noted that the single-institution nature of this study might lead to potential external validity threats. This study was conducted on an extremely diverse campus within a large enrollment course, suggesting the successful implementation of the concept map intervention should translate to many higher education settings. With that said, future studies should ultimately aim to replicate the experimental implementation across a diverse set of institutions.

The comparison of means using the independent samples t-test, and the Pearson's correlation of concept inventory scores and concept map rubric scores, appear to suggest the two research hypotheses should not be rejected. However, when multiple linear regression analyses were used to control for the participants’ incoming academic preparation, the difference in concept inventory post-test scores between the study groups was no longer significant, nor was the relationship between concept mapping skills and performance on the concept inventory. Arriving at a definitive judgement regarding the impact of the concept map intervention is further complicated by the post hoc power analyses that estimated the power to detect significant regression coefficients. These results suggest there is a chance the research hypotheses are being erroneously rejected based on the multiple linear regression analyses. Future studies should therefore attempt to more effectively isolate the impact of the concept map intervention (thereby increasing the effect size of the treatment) and/or increase the sample size in order to improve statistical power. This would aid in arriving at a more decisive conclusion regarding the impact of the concept map intervention on student conceptual learning gains.

Even if there is some correlation between improved concept mapping and conceptual understanding, this finding is limited by the fact that it could not be determined whether the ability to create well-developed concept maps resulted from the concept map intervention itself, or whether students possessed a pre-existing ability to create high-level concept maps upon entering the study. Because concept mapping skills were not measured for incoming students, it is simply not possible to resolve this question in the current study. The data seem to suggest there is some chance improved concept mapping leads to gains in conceptual understanding, therefore finding a way to help students become more proficient in developing higher quality concept maps appears to be a worthwhile instructional goal. Future studies could attempt to measure incoming concept mapping skills, though this would still pose problems from an experimental standpoint. Initial skill in concept mapping a pre-existing knowledge state may not translate to concept mapping skills related to the learning objectives covered in the study, and the preliminary concept mapping evaluation itself could impact performance on the final concept map created by students. Nevertheless, creating an experimental design that addresses this limitation should be a priority.

Though the SALG instrument was used in an attempt to gain insight about student perceptions of affective learning outcomes, compliance issues resulted in a limited subset of study participants who provided feedback. Caution is therefore warranted in interpreting any general trends in the student responses to the SALG survey provided herein. Future studies should aim to increase the participant response rate, and might consider including focus group interviews to gather additional data on students’ perceptions of affective learning outcomes that could help corroborate data collected in the SALG.

Conclusion

In summary, the purpose of this study was to create a streamlined concept map implementation and generate new evidence about the impact of concept map assignments in large enrollment general chemistry courses. Though no unambiguous conclusion can be made regarding the impact of the concept map treatment on conceptual understanding under these quasi-experimental conditions, the results presented here suggest the ability to create well-developed concept maps might correlate with improved learning gains in conceptual understanding. Future research should therefore aim to ascertain whether there is indeed a causal relationship between conceptual learning and creating well-developed concept maps, and how students can be coached into creating more advanced concept maps.

It is noted the concept map assignment was structured in a way that would allow easy integration into a traditional large enrollment course. More specifically, the goal was to make the concept map assignment substantial enough to impact student performance without making the administration and grading of concept maps a burden in terms of TA/instructor workload. Additionally, it was desired to avoid creating student resistance to the concept map assignments, which can arise if a subjective grading rubric is used to assign grades for the concept maps. The concept map implementation described in this study can act as a template for instructors who wish to adopt this type of learning intervention and assessment; however, future implementations might be re-designed to improve student engagement and comfort using concept maps while maintaining the ease of implementation. Increasing student engagement with concept mapping without relying on a subjective grading rubric could be accomplished by coupling the concept map assignment with a close-ended task that assesses students’ ability to properly connect concepts. For instance, linking the concept map assignments described here with the Measure of Linked Concepts (MLCs) described by Lewis and coworkers (Ye et al., 2015) or the more recently reported Creative Exercises (CEs) designed by Ye and coworkers (Gilewski et al., 2019) would provide a less subjective method of evaluating the students. This would likely reduce student anxiety about being judged on their concept maps while simultaneously increasing the incentive for students to take the concept map assignment seriously. Including the MLCs or CEs in a quasi-experimental research design might also provide a means to determine if concept mapping skills can be taught to selected student populations and if true gains in concept mapping lead to improved conceptual understanding.
Regardless of how instructors might adapt the use of concept maps into their course, making this tool a more prominent component of the instructional tool box can help students attain the type of meaningful learning described in Ausubel's assimilation theory of learning.

Conflicts of interest

There are no conflicts to declare.

Acknowledgements

The authors gratefully acknowledge the assistant editor and anonymous referees for providing extensive feedback during the review process. Their suggestions and advice greatly improved the overall quality of the final manuscript.

Notes and references

  1. Ausubel D. P., (1968), Educational psychology: A cognitive view, New York: Holt, Rinehart, & Winston.
  2. Besterfield-Sacre M., Gerchak J., Lyons M. R., Shuman L. J. and Wolfe H., (2004), Scoring Concept Maps: An Integrated Rubric for Assessing Engineering Education, J. Eng. Educ., 93(2), 105–115.
  3. Burdo J. and O’Dwyer L., (2015), The effectiveness of concept mapping and retrieval practice as learning strategies in an undergraduate physiology course, Adv. Physiol. Educ., 39, 335–340.
  4. Burrows N. L. and Mooring S. R., (2015), Using concept mapping to uncover students’ knowledge structures of chemical bonding concepts, Chem. Educ. Res. Pract., 16(1), 53–66.
  5. Cañas A. J., Carff R., Hill G., Carvalho M., Arguedas M., Eskridge T. C., Carvajal R., (2005), Concept Maps: Integrating Knowledge and Information Visualization BT, in Tergan S.-O. and Keller T. (ed.), Knowledge and Information Visualization: Searching for Synergies, Berlin, Heidelberg: Springer Berlin Heidelberg, pp. 205–219.
  6. Chevron M.-P., (2014), A metacognitive tool: theoretical and operational analysis of skills exercised in structured concept maps, Perspect. Sci., 2(1), 46–54.
  7. Cohen J., (1988), Statistical Power Analysis for the Behavioral Sciences, 2nd edn, New York, NY: Lawrence Erlbaum Associates, ch. 9.
  8. Cook E., Kennedy E. and McGuire S. Y., (2013), Effect of Teaching Metacognitive Learning Strategies on Performance in General Chemistry Courses, J. Chem. Educ., 90(8), 961–967.
  9. Cooper M. M., Corley L. M. and Underwood S. M., (2013), An investigation of college chemistry students’ understanding of structure–property relationships, J. Res. Sci. Teach., 50(6), 699–721.
  10. Cros D., Maurin M., Amouroux R., Chastrette M., Leber J. and Fayol M., (1986), Conceptions of First-year University Students of the Constituents of Matter and the Notions of Acids and Bases, Eur. J. Sci. Educ., 8(3), 305–313.
  11. Eichler J. F. and Peeples J., (2016), Flipped classroom modules for large enrollment general chemistry courses: a low barrier approach to increase active learning and improve student grades, Chem. Educ. Res. Pract., 17(1), 197–208.
  12. Francisco J. S., Nakhleh M. B., Nurrenbern S. C. and Miller M. L., (2002), Assessing Student Understanding of General Chemistry with Concept Mapping, J. Chem. Educ., 79(2), 248.
  13. Gabel D., (1999), Improving Teaching and Learning through Chemistry Education Research: A Look to the Future, J. Chem. Educ., 76(4), 548.
  14. Galloway K. R. and Bretz S. L., (2015), Measuring Meaningful Learning in the Undergraduate General Chemistry and Organic Chemistry Laboratories: A Longitudinal Study, J. Chem. Educ., 92(12), 2019–2030.
  15. Geiger S. and Santelices M. V., (2007), Validity of High School Grades in Predicting Student Success Beyond the Freshman Year: High School Record vs. Standardized Tests as Indicators of Four-Year College Outcomes, University of California, Berkeley Center for Studies in Higher Education, https://escholarship.org/uc/item/7306z0zf.
  16. Gilewski A., Mallory E., Sandoval M., Litvak M. and Ye L., (2019), Does linking help? Effects and student perceptions of a learner-centered assessment implemented in introductory chemistry, Chem. Educ. Res. Pract., 20, 399–411.
  17. Hackathorn J., Cornell K., Garczynski A., Solomon E., Blankmeyer K. and Tennial R., (2012), Examining exam reviews: a comparison of exam scores and attitudes, J. Scholarship Teach. Learn., 12(3), 78–87.
  18. Harrison A. G. and Treagust D. F., (1996), Secondary students’ mental models of atoms and molecules: implications for teaching chemistry, Sci. Educ., 80(5), 509–534.
  19. Johnstone A. H., (1993), The development of chemistry teaching: a changing response to changing demand, J. Chem. Educ., 70(9), 701.
  20. Kennedy S. A., (2016), Design of a Dynamic Undergraduate Green Chemistry Course, J. Chem. Educ., 93(4), 645–649.
  21. Leech N. L., Gliner J. A., Morgan G. A. and Harmon R. J., (2003), Use and Interpretation of Multiple Regression, J. Am. Acad. Child Adolesc. Psychiatry, 42(6), 738–740.
  22. Lindstrom T. and Middlecamp C., (2017), Campus as a Living Laboratory for Sustainability: The Chemistry Connection, J. Chem. Educ., 94(8), 1036–1042.
  23. Luxford C. J. and Bretz S. L., (2014), Development of the Bonding Representations Inventory To Identify Student Misconceptions about Covalent and Ionic Bonding Representations, J. Chem. Educ., 91(3), 312–320.
  24. Mack M. R., Hensen C. and Barbera J., (2019a), Metrics and Methods Used To Compare Student Performance Data in Chemistry Education Research Articles, J. Chem. Educ., 96, 401–413.
  25. Mack M. R., Stanich C. A. and Goldman L. M., (2019b), Math Self-Beliefs Relate to Achievement in Introductory Chemistry Courses, in It's Just Math: Research on Students’ Understanding of Chemistry and Mathematics, ACS Symp. Ser., 1316, 81–104.
  26. Markow P. G. and Lonning R. A., (1998), Usefulness of concept maps in college chemistry laboratories: Students’ perceptions and effects on achievement, J. Res. Sci. Teach., 35(9), 1015–1029.
  27. Middlecamp C. H., Jordan T., Shachter A. M., Kashmanian Oates K. and Lottridge S., (2006), Chemistry, Society, and Civic Engagement (Part 1): The SENCER Project, J. Chem. Educ., 83(9), 1301.
  28. Mulford D. R. and Robinson W. R., (2002), An Inventory for Alternate Conceptions among First-Semester General Chemistry Students, J. Chem. Educ., 79(6), 739  DOI:10.1021/ed079p739.
  29. Nesbit J. C. and Adesope O. O., (2006), Learning with concept and knowledge maps: a meta-analysis, Rev. Educ. Res., 76, 413–448.
  30. Nicoll G., Francisco J. S. and Nakhleh M., (2001), An Investigation of the Value of Using Concept Maps in General Chemistry, J. Chem. Educ., 78(8), 1111.
  31. Novak J. D., (1984), Application of advances in learning theory and philosophy of science to the improvement of chemistry teaching, J. Chem. Educ., 61(7), 607.
  32. Novak J. D., (1990), Concept maps and Vee diagrams: two metacognitive tools to facilitate meaningful learning, Instr. Sci., 19(1), 29–52.
  33. Novak J. D. and Cañas A. J., (2008), The Theory Underlying Concept Maps and How to Construct and Use Them, Technical Report IHMC CmapTools 2006-01 Rev 01-2008, Florida Institute for Human and Machine Cognition.
  34. Novak J. D. and Gowin D. B., (1984), Learning How to Learn, Cambridge: Cambridge University Press.
  35. Rickey D. and Stacy A. M., (2000), The Role of Metacognition in Learning Chemistry, J. Chem. Educ., 77(7), 915.
  36. Regis A., Albertazzi P. G. and Roletto E., (1996), Concept Maps in Chemistry Education, J. Chem. Educ., 73(11), 1084.
  37. Turan-Oluk N. and Ekmekci G., (2018), The effect of concept maps, as an individual learning tool, on the success of learning the concepts related to gravimetric analysis, Chem. Educ. Res. Pract., 19(3), 819–833.
  38. Vincent-Ruz P., Binning K., Schunn C. D. and Grabowski J., (2018), The Effect of Math SAT on Women's Chemistry Competency Beliefs, Chem. Educ. Res. Pract., 19, 342–351.
  39. Widhiarso W. and Ravand H., (2014), Estimating reliability coefficient for multidimensional measures: a pedagogical illustration, Rev. Psychol., 21(2), 111–121.
  40. Ye L., Oueini R. and Lewis S. E., (2015), Developing and Implementing an Assessment Technique To Measure Linked Concepts, J. Chem. Educ., 92(11), 1807–1812.
  41. Zwick R. and Sklar J.C., (2005), Predicting College Grades and Degree Completion Using High School Grades and SAT Scores: The Role of Student Ethnicity and First Language, Am. Educ. Res. J., 42, 439–464.

Footnotes

Electronic supplementary information (ESI) available. See DOI: 10.1039/c9rp00059c
Cmap concept mapping software: https://cmap.ihmc.us/.
§ IBM Corp. Released 2016. IBM SPSS Statistics for Windows, Version 24.0. Armonk, NY: IBM Corp.
SALG survey: https://salgsite.net/about.

This journal is © The Royal Society of Chemistry 2020