Alex Gilewski,ab Emily Mallory,a Matthew Sandoval,a Mikhail Litvak c and Li Ye*a
aDepartment of Chemistry and Biochemistry, California State University, Northridge, Northridge, California 91330, USA. E-mail: li.ye@csun.edu
bDepartment of Chemistry, Glendale Community College, Glendale, California 91208, USA
cDepartment of Biology, California State University, Northridge, USA
First published on 13th February 2019
This study developed and implemented a learner-centered assessment, Creative Exercises (CEs), in introductory chemistry courses at a four-year university and a two-year community college. CEs were implemented as an intervention for the treatment groups, while the control groups used only traditional assessments such as multiple-choice and short-answer questions. A mixed-methods approach was employed to evaluate the effectiveness of CEs in improving student learning and performance. First, quantitative data, including student exam scores, DFW rates, and letter-grade distributions, were analyzed and compared between treatment and control groups. Second, student responses to CEs were coded as chemistry concepts and then organized into chemistry topics; a series of visual maps was plotted to show students’ linking of chemistry topics and the progress made throughout the semester. Lastly, student perceptions of the use of CEs were investigated via a free-response survey. Quantitative results showed that CEs improved students’ academic performance and retention in introductory chemistry courses in both college settings. The implementations at the two settings indicated that the frequency and quality of the use of CEs may affect their effectiveness. The qualitative analyses converged with these positive effects: students were able to connect prior and newly learned topics in response to CEs and made progress in linking more topics over time. Open coding of the free-response survey data identified four themes explaining why the use of CEs helped students: knowledge integration, conceptual understanding, flexibility, and more effective study habits. Best practices for implementing learner-centered assessments learned in this study and future directions for research are discussed.
The two theoretical frameworks mentioned above both emphasize that learners learn more effectively when they can make connections between prior knowledge and newly learned knowledge. The ideas that learning occurs in light of pre-existing knowledge and that the connection-making process leads to meaningful learning provide the theoretical basis for this study. Introductory chemistry contains a breadth of topics between which students may not naturally make connections. As instructors, it is important that we communicate to students the importance of linking knowledge: linking previously learned knowledge to new content is essential for students to build meaningful knowledge structures. In their recent chemistry education research review, Cooper and Stowe (Cooper and Stowe, 2018) summarized three criteria for meaningful learning to occur: “(1) the student must have appropriate prior knowledge to which the new knowledge can be connected. (2) The new knowledge must be perceived as relevant to this prior knowledge. (3) The student must choose to make these connections.” Creative Exercises provide students with a prompt designed around new knowledge and ask them to write as many statements as they can using what they have learned. This requires students to retrieve prior knowledge associated with the prompt and apply it to the new situation. In the process of generating statements, students need to identify the relevant prior knowledge in the prompt, transfer that knowledge to new problems, and choose to write statements that they believe meet the criteria of CEs. These cognitive processes satisfy the three criteria of meaningful learning stated above. Therefore, using Creative Exercises regularly in introductory chemistry courses has the potential to promote greater linking of chemistry concepts and more meaningful learning.
A meta-analysis (Nesbit and Adesope, 2006) analyzed 55 studies reporting the effects of concept mapping on students’ cognitive and affective outcomes. A mean effect size of 0.60 was found for concept maps as an intervention tool for improving student performance outcomes. Of these 55 studies, only 6 reported self-reported affective outcomes, including self-efficacy, motivation, and affect (anxiety, frustration, satisfaction); positive effects were found on all the affective measures. In 2009, another study reported a medium effect size of 0.57 across six meta-analyses on the effectiveness of concept mapping (Hattie, 2009). In a recent study, Turan-Oluk and Ekmekci (Turan-Oluk and Ekmekci, 2018) used concept maps to measure learning gains on the topic of gravimetric analysis in analytical chemistry. Concept maps were used as a pre-test after gravimetric analysis was introduced in a traditional lecture; then the theory, objectives, and application of concept maps were introduced as an intervention. They administered a post-test using concept maps and investigated student opinions of their use. The results showed that the differences between pre- and post-tests were statistically significant. Students’ comments revealed that concept maps enabled them to see the relations between concepts and to learn chemistry more effectively, for example by focusing on understanding instead of memorization.
According to the assumptive theory of learning and the constructivist model of knowledge, CEs have the potential to assist students in retrieving prior knowledge and applying it to new situations, thereby fostering meaningful learning in introductory chemistry. However, the effects of CEs on students’ academic achievement in chemistry courses have not yet been investigated. In this study, we examined how CEs impact students’ academic performance and retention in introductory chemistry courses at two different college settings. The effectiveness of CEs can then be compared with that of other learner-centered assessments designed to help students make connections in chemistry, such as concept mapping. Additionally, student perceptions of the use of CEs were explored to help understand why CEs are or are not helpful for student learning in chemistry. Another novel contribution of this study is that, for the first time, we showed students’ progress in linking chemistry concepts through visual maps. These visual maps revealed the extent of current and prior topics used by students while completing the CEs given at different time points throughout the semester. This study was guided by the following research questions:
(1) To what extent do Creative Exercises impact students’ performance and retention in introductory chemistry?
(2) How do students link chemistry topics through answering Creative Exercises across time?
(3) What are students’ perceptions of the use of Creative Exercises?
| Setting 1 | Setting 2 | |||
|---|---|---|---|---|
| Treatment | Control | Treatment | Control | |
| a TD: traditional assessments including multiple choice, true-false, short-answer questions. CE: creative exercises. MC: multiple-choice questions. | ||||
| Semester | Fall 2017 | Fall 2017 | Spring 2018 | Spring 2018 |
| Class format | Lecture only | Lecture only | Lecture A + lab A | Lecture A + lab B |
| Enrolment | 73 | 71 | 25 | 19 |
| Instructor | Both groups were taught by the same instructor | Lecture: instructor A (both groups were in the same lecture) | ||
| Lab A: instructor A; Lab B: instructor B | ||||
| Class time | Twice a week 75 min each | Twice a week 75 min each | Lecture A: twice a week, 2 hours each | Lecture A: twice a week, 2 hours each |
| Lab A: once a week, 3 hours | Lab B: once a week, 3 hours | |||
| Textbook (Tro) | 10 chapters | 10 chapters | 17 chapters | 17 chapters |
| Activities and Examsa | Group activities: TD + CE | Group activities: TD | Group activities: CE | Group activities: n/a |
| Midterms: MC + CE | Midterms: MC | Midterms: TD + CE | Midterm: TD | |
| Extra Credit: CE | Extra Credit: MC | Extra Credit: CE | Extra Credit: TD | |
| Final exam: MC only | Final exam: MC only | Final exam: TD only | Final exam: TD only | |
| Grading | 12% Mastering online homework | 10% weekly online quizzes | ||
| 10% Group activities | 50% Four midterms | |||
| 4% Attendance | 20% Final exam | |||
| 32% Two midterms | 20% Lab | |||
| 42% Final exam | ||||
At the second setting, multiple sections of an introductory chemistry course are offered each semester. It is designed for students who have never taken chemistry, who took it in high school but did not pass the chemistry placement exam, or who are returning students. It satisfies general education requirements and serves as a prerequisite for general chemistry and biology courses. It uses the same textbook as the course at the first setting but covers seventeen chapters of the original version of “Introductory Chemistry Essentials”. The content sequence taught was: the scientific method, measurement, matter and energy, atoms and elements, molecules and compounds, the mole concept, chemical reactions, oxidation–reduction reactions, stoichiometry, electrons and atomic structure, chemical bonding, intermolecular forces, liquids and solids, gas laws, solutions, chemical equilibrium, acid–base chemistry, and nuclear chemistry. A laboratory component is also required for the course. Students can only choose a laboratory section associated with the lecture section for which they sign up; as such, the same cohort is present in both the lecture and the linked laboratory sections. Students had the option of two laboratory sections, one with the lecture instructor and one with a separate instructor. The laboratory section taught by the lecture instructor was the treatment group, and the other section served as the control. Both groups attended the exact same lectures, twice a week, for 2 hours each. CEs were administered in the laboratory and on lecture exams to maintain the separation between the two groups. The week before each exam, students completed a CE as homework, brought it to laboratory, and discussed it collectively in permanent groups of 4 students before the experiment. The instructor circulated around the room and facilitated the group discussions by providing quick feedback on the correctness of student responses to the CEs.
Student grades were determined as follows: 10% for weekly online quizzes, 50% for four midterm exams, 20% for a cumulative final exam, and 20% for the laboratory.
At both settings, the same instructors taught the lecture for both treatment and control groups. CEs were implemented for the treatment groups multiple times over the semester in different formats, including in-class group activities, midterm exams, and extra-credit assignments. For midterm exams, 90% of the content was the same for the treatment and control groups. The treatment groups were given CEs worth 10% of each midterm exam, while the control groups were given traditional assessment questions matched to the prompts of the CEs (see examples in Fig. 2). The matched questions were written for the control groups so that students had the opportunity to earn equivalent points. Within each setting, students took a common final exam that did not include CEs. More details about the timelines of the implementation of CEs at the two settings can be found in Fig. 3.
Each student response to CEs was assigned a concept by the authors based on what facet of the prompt the student attempted to answer. In many cases, one response merited multiple concepts, as the student must have utilized more than one to come up with their statement. For example, for a prompt describing hydrochloric acid and sodium hydroxide reacting together, a student wrote:
“HCl(aq) + NaOH(aq) → NaCl(aq) + H2O(l)”. This was coded as ‘nomenclature,’ ‘reactions,’ and ‘reactions’ because the student must have translated names into chemical formulas and must have utilized two concepts of reactions: states in chemical reactions and writing molecular equations. Once the concepts were assigned to each statement, they were then coded as the major topic to which the concepts belonged. To establish reliability of the coding, two authors independently assigned statements to concepts and major topics for 10% of a data set. An inter-coder agreement of 76% was found, and any discrepancies were discussed and resolved. One author then coded the rest of the data. After the coding was completed, Gephi software (https://gephi.org/) was used to generate the visual maps showing students’ linking of topics and the progress made in the course. Each visual map was created by preparing and importing two files into the software: nodes (topics and frequencies of the topics) and edges (source topics, target topics, and frequencies of the links between topics). The size of a node represents the frequency of the topic, i.e., the number of statements by all students coded as that topic; larger nodes mean more statements referenced that topic. For edges, each student's coded statements were used to determine all the unique two-topic combinations among the topics used by that student. These combinations formed the edges of the visual maps, and the frequency of each combination was represented as the thickness of the edge. The thickness of an edge thus represents the number of students who used both of the two topics connected by the edge.
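The node and edge files described above can be derived from coded statements with a short script. A minimal Python sketch, assuming each student's statements have already been reduced to a set of topics (the student IDs and topic names below are illustrative, not the study's actual codebook):

```python
from itertools import combinations
from collections import Counter

# Hypothetical coded data: each student's CE statements reduced to the set of
# major topics they used (illustrative only).
student_topics = {
    "s1": {"nomenclature", "reactions", "stoichiometry"},
    "s2": {"reactions", "gases"},
    "s3": {"nomenclature", "reactions"},
}

# Nodes file: frequency of each topic across all students (drives node size).
node_freq = Counter(t for topics in student_topics.values() for t in topics)

# Edges file: every unique two-topic combination per student; the edge weight
# (drawn as thickness) is the number of students who used both topics.
edge_freq = Counter()
for topics in student_topics.values():
    for pair in combinations(sorted(topics), 2):
        edge_freq[pair] += 1
```

The two `Counter` objects map directly onto Gephi's nodes table (topic, frequency) and edges table (source, target, weight).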
Visual maps were generated by Gephi using a metric called betweenness centrality, which quantifies the number of times a node acts as a bridge along the shortest path between two other nodes (https://gephi.org/users/). Darker nodes in the maps have higher betweenness centrality; in general terms, this means that darker nodes display more connections to other nodes. The color of each edge matches the color of the nodes it connects. To summarize, the nodes represent the frequency of each topic across all students, and the edge connecting two nodes corresponds to how frequently students used both topics in their responses. While this platform is widely used in sociological network analysis (Bastian et al., 2009), Gephi visual maps are not unknown in the education literature. A study in healthcare education analytics describes Gephi and similar programs that can be used to visualize data, allowing a reader to more rapidly understand large and/or complex datasets (Vaitsis et al., 2014). Another uses Gephi to demonstrate how science education organizations are interconnected with schools, universities, museums, and other educational institutions or groups (Falk et al., 2015). In chemical education research, Galloway and colleagues used Gephi to visualize and qualitatively compare how undergraduate students, graduate students, and professors categorize organic chemistry reactions in card-sorting tasks (Galloway et al., 2018). In their Gephi maps, a node represents a reaction card, two cards placed in the same category are connected by an edge, and thicker edges indicate that more participants sorted that card pair together; the visualizations for the multiple groups are then qualitatively compared.
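The study relied on Gephi's built-in betweenness centrality, but the metric itself can be illustrated with a small stdlib-only sketch: for every pair of nodes, find all shortest paths and credit each intermediate node with its share. The toy topic graph below is hypothetical; a "bridge" topic connecting otherwise separate topics receives the highest score.

```python
from collections import defaultdict, deque
from itertools import permutations

def shortest_paths(graph, s, t):
    """All shortest paths from s to t in an unweighted graph, via BFS."""
    dist = {s: 0}
    preds = defaultdict(list)  # predecessor lists along shortest paths
    q = deque([s])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
            if dist[v] == dist[u] + 1:
                preds[v].append(u)
    if t not in dist:
        return []
    paths = []
    def walk(node, tail):  # reconstruct paths backwards from t
        if node == s:
            paths.append([s] + tail)
            return
        for p in preds[node]:
            walk(p, [node] + tail)
    walk(t, [])
    return paths

def betweenness(graph):
    """Unnormalized betweenness centrality for an undirected graph."""
    bc = {v: 0.0 for v in graph}
    for s, t in permutations(graph, 2):  # each unordered pair seen twice
        paths = shortest_paths(graph, s, t)
        for path in paths:
            for v in path[1:-1]:  # interior nodes only
                bc[v] += 1 / len(paths)
    return {v: c / 2 for v, c in bc.items()}  # undo the double count

# Hypothetical star-shaped topic graph: 'reactions' bridges the other topics,
# so it gets the highest betweenness (and would be drawn darkest in Gephi).
graph = {
    "sig figs": ["reactions"],
    "nomenclature": ["reactions"],
    "reactions": ["sig figs", "nomenclature", "gases"],
    "gases": ["reactions"],
}
bc = betweenness(graph)
```

In this star graph every leaf-to-leaf shortest path passes through "reactions", so its score is 3 (one per leaf pair) while the leaves score 0.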
| CE1 | CE2 | CE3 | CE4 | Avg. CE | |
|---|---|---|---|---|---|
* Means correlation was significant at the 0.05 level. ** Means correlation was significant at the 0.01 level. |
| Setting 1 | |||||
| Final exam | 0.531* | 0.544* | N/A | N/A | 0.636** |
| Setting 2 | |||||
| Final exam | 0.558* | 0.454* | 0.152 | 0.356 | 0.584** |
| Variables | Setting 1 | Setting 2 | ||
|---|---|---|---|---|
| Treatment | Control | Treatment | Control | |
| Number of students | 73 | 71 | 25 | 19 |
| % Female | 41 | 46 | 47 | 66 |
| % URMs | 61 | 59 | 53 | 30 |
| SAT math | 492 | 482 | N/A | N/A |
| High School GPA | 3.34 | 3.42 | 3.32 | 3.42 |
The first research question investigated the effectiveness of the learner-centered assessment (i.e., Creative Exercises) as an intervention on student performance and retention in the introductory chemistry courses. First, students’ exam scores were converted to percentages by dividing the actual scores by the maximum scores students could obtain on the exams. At both settings, the mean exam score of the treatment group was higher than that of the control group on every exam (see Table 4).
| Treatment | Control | ||||
|---|---|---|---|---|---|
| Mean (%) | SD (%) | Mean (%) | SD (%) | ||
| Setting 1 | Exam 1 | 61 | 16 | 58 | 16 |
| Exam 2 | 60 | 17 | 54 | 19 | |
| Final exam | 60 | 18 | 55 | 20 | |
| Avg. exam | 61 | 15 | 56 | 17 | |
| Setting 2 | Exam 1 | 88 | 8 | 80 | 22 |
| Exam 2 | 80 | 23 | 57 | 34 | |
| Exam 3 | 81 | 19 | 66 | 28 | |
| Exam 4 | 52 | 33 | 41 | 28 | |
| Final exam | 69 | 19 | 61 | 34 | |
| Avg. exam | 75 | 14 | 61 | 23 | |
The differences between the treatment and control groups ranged from 3% to 6% at the first setting and 8% to 23% at the second setting. To visualize the differences in exam scores between treatment and control groups, student exam scores were also converted to z-scores, and the mean z-scores of the two groups were plotted in Fig. 4. The figure shows how far each group was from the mean score of the class (z-score = 0). To analyze whether the effect of the intervention on exam scores was statistically significant, independent t-tests were conducted to compare the mean differences in average exam scores between treatment and control groups (see Table 5). The results showed that the mean difference in average exam scores at the second setting was statistically significant (t = 2.388, p = 0.024), but not at the first setting (t = 1.588, p = 0.115). Because of the relatively small sample sizes in our study, we also calculated and reported effect size (Cohen's d) to quantify the size of the differences between treatment and control groups. Effect size takes into account sample size and the amount of variation in scores; it is independent of the inferential statistics and allows us to move beyond “does it work” to “how well does it work”. Cohen suggested that d = 0.2, 0.5, and 0.8 be considered small, medium, and large effect sizes, respectively (Cohen, 2005). As listed in Table 5, the average effect size was 0.31 for average exam scores at the first setting and 0.75 at the second setting, considered small and medium, respectively. More specifically, at the first setting, the average exam score of the average person in the treatment group was 0.31 standard deviations above that of the average person in the control group, and this difference between groups more than doubled at the second setting.
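Cohen's d as used here is the difference in group means divided by the pooled standard deviation. A minimal stdlib sketch with fabricated score lists (the numbers below are illustrative, not the study's data):

```python
from statistics import mean, stdev
from math import sqrt

def cohens_d(a, b):
    """Cohen's d with pooled standard deviation for two independent groups."""
    na, nb = len(a), len(b)
    pooled = sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                  / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled

# Fabricated average exam percentages for illustration only.
treatment = [75, 82, 68, 90, 71, 78, 85, 64]
control = [61, 70, 55, 66, 72, 58, 49, 63]

d = cohens_d(treatment, control)  # positive d favors the treatment group
```

The companion t-test (for the p-values reported above) is typically computed with a statistics package such as `scipy.stats.ttest_ind`; the d value is interpreted against Cohen's 0.2/0.5/0.8 benchmarks.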
| Mean difference (treatment – control) | p-Value | Effect size (Cohen's d) | |
|---|---|---|---|
* Mean difference was significant at the 0.05 level. |
| Setting 1 (N = 144, treatment = 73, control = 71) | |||
| Avg. exam | 5% | 0.115 | 0.31 |
| Setting 2 (N = 44, treatment = 25, control = 19) | |||
| Avg. exam | 14% | 0.024* | 0.75 |
Finally, the distribution of letter-grade percentages (Fig. 5) and DFW rates were compared between treatment and control groups at the two settings. As shown in Fig. 5, the red bars are in general shifted toward the top relative to the blue bars; that is, students in the treatment groups earned higher grades than those in the control groups overall. At the first setting, the difference in DFW rates between the treatment and control groups was 7%, with 37% for the control group and 30% for the treatment group. Similarly, at the second setting, the treatment group obtained more A, B, and C grades and fewer Ds and Fs, resulting in a difference in DFW rates of 8% between the two groups, with 32% for the control group and 25% for the treatment group. Chi-square analyses were carried out to determine whether the differences in letter grades (DFW vs. non-DFW) between the two groups were statistically significant. The results indicated a statistical difference with a moderate effect size at the second setting (χ2 = 4.565, p = 0.033, Cramer's V = 0.322), but no difference was found at the first setting (χ2 = 1.350, p = 0.245, Cramer's V = 0.097).
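For a 2×2 table such as treatment/control versus DFW/non-DFW, the Pearson chi-square statistic has a closed form, and Cramér's V reduces to sqrt(χ²/N). A stdlib sketch with hypothetical cell counts (the study's exact DFW counts are not given in the text, so the table below is illustrative):

```python
from math import sqrt

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) and Cramer's V
    for a 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    v = sqrt(chi2 / n)  # for a 2x2 table, min(rows-1, cols-1) = 1
    return chi2, v

# Hypothetical counts: rows = treatment/control, cols = non-DFW/DFW.
chi2, v = chi_square_2x2(19, 6, 13, 6)
```

A p-value would then come from the chi-square distribution with 1 degree of freedom (e.g., via `scipy.stats.chi2_contingency` in practice).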
Comparing the effect of CEs on students’ cognitive outcomes with the medium effect size of 0.57 (Cohen's d) across the six meta-analyses on concept mapping reported by Hattie (Hattie, 2009), we found a similar average effect size (0.53) for the use of CEs on exam scores across the two settings. It is worth noting that the impact of CEs on average exam scores at the second setting was more than twice that at the first setting. We believe the difference between the two settings is mainly due to the frequency and quality of the implementation of CEs. First, the frequency of using CEs was nearly double at the second setting: CEs were used five times at the first setting and nine times at the second. Relating to the assumptive theory of learning, the higher frequency of using CEs provides more opportunities for students to link concepts and enables them to link topics with shorter time gaps within the course. These experiences allow students to construct a more coherent understanding and to undergo more meaningful learning through the additional CE activities, leading to a larger effect size. More importantly, for the treatment groups, the class time spent on CEs at the second setting was probably of higher quality. At the first setting, when students were given CEs as in-class group activities, the CEs came along with other traditional chemistry problems. At the second setting, by contrast, the instructor pre-assigned the CEs to students individually before class as homework, and the CEs were the only problems students had to complete during the in-class group activities. As such, students arrived in class with their own answers, ready to share and discuss them with peers in groups. Students were also given sufficient time to compile individual answers and to ask the instructor for feedback on the correctness of group answers during class time.
This mechanism allows adequate time for group discussion and student–instructor interactions, which makes the implementation of CEs more efficient. The larger effect size at the second setting could also be due to differences between the control groups at the two settings: students in the control group at the second setting had no group activities, whereas the control group at the first setting completed a similar amount of group activities using traditional assessments. In sum, the frequency and quality of the implementation of CEs may be important factors that enhance the effectiveness of CEs on students’ performance.
The maps were generated from student responses to the CEs on the four exams administered to the treatment group at the second setting. The first map shows the smallest number of topics mentioned among all the maps. Students focused on three topics: units and significant figures (sig. figs), elements, and compounds. They chose compounds most heavily, but the other two were not far behind; the least-mentioned topic was atoms. In exam 2, the data show a sharp contrast: one topic, chemical reactions, was mentioned far more than any other. Reactions were the target topic of the prompt, indicating that students used the current topic more than prior topics. Interestingly, stoichiometry was mentioned only once. This may reflect a desire to avoid performing calculations on these types of problems or may indicate a lack of understanding of stoichiometry. Exam 3 showed the largest number of topics, and the topics used were more spread out; only one, units and significant figures, was much larger than the others. The major topics for that exam were gases, chemical bonding, and intermolecular forces, and stoichiometry was again a rarely used topic. In exam 4, which focused on solutions, equilibrium, and acid–base chemistry, students again chose units and significant figures most. Students tended to focus on the topics presented on the exam, but similarly to exams 2 and 3, stoichiometry was the least-used topic.
Overall, it appears that students preferred to apply units and significant figures most often. This could be because of the “ease” of identifying how many significant figures a number has, or of simply identifying what a unit represents. It could also result from students being accustomed to using this topic and having it marked correct on previous exams as “easy points.” Nevertheless, students tended not only to apply recently learned concepts but to access previous topics as well. Finally, stoichiometry and atoms were the least-used topics, which may be due to their relative difficulty or a lack of understanding. Ralph and Lewis (Ralph and Lewis, 2018) recently reported that students with low SAT math scores struggled disproportionately with stoichiometry across all general chemistry topics. This may speak to our results, since the students enrolled in the selected courses at both settings were in general less prepared in math. Atoms, in particular, may be a less obvious or less applicable topic to students, or students may have difficulty understanding its relevance.
There were four themes found under the helpful category: knowledge integration, conceptual understanding, flexibility, and study habits. First and foremost, the majority of the students stated that CEs helped them think of chemistry as a coherent whole instead of learning it as separate facts: “It helped in a way that allowed us to think about the whole concept of chemistry and not just the specific topics we learn every week.”; “It makes me think back to everything I've learned from every chapter and using all of the information together to answer one question.” These comments are in line with the purpose of using CEs as a learner-centered assessment: helping students recognize the importance of linking chemistry concepts and encouraging them to make connections and build more coherent knowledge structures. Moreover, CEs also assisted students in understanding chemistry concepts conceptually and deeply: “It helps tremendously. These questions force me to understand the concept and apply it rather than just memorize and regurgitate it”; “You can explain why and how you found such answers by showing work or writing out sentences to explain your reasoning.” Reducing reliance on memorization in learning chemistry and being able to internalize and explain chemistry concepts are signs that students are adopting meaningful learning.
The third theme was flexibility: students felt that they had the freedom to choose the topics they would like to present: “I could utilize the areas which I felt the most comfortable in to answer the question.” Additionally, the open-ended nature of CEs gave students a sense of ownership because they were not restricted to a single correct answer: “It gives you space to think the way you want, and the answers are unlimited.” One of the most noticeable themes throughout students’ responses was the impact of CEs on students’ study habits. Many students indicated that they put more effort into learning prior knowledge because they needed to use that knowledge consistently throughout the semester: “It motivates me to keep the prior material learned fresh in my mind”; “You had to remember the previous content long term because you knew it would relate to other exams”. Students also appreciated the value of working with others through doing CEs in group activities: “You connect with people…input from other classmates adds to my own answer and thinking.” Lastly, CEs improved students’ metacognitive skills, such as reflecting on and evaluating the content they learned in the past: “It serves as a reminder of what areas I need to study more or seek help from a professor and tutor if I'm still having difficulty understanding it”; “It stimulated me to think critically and challenge myself to make sure I understand past and current lecture”. These changes in students’ study habits, including reinforcement of prior knowledge, engagement in teamwork, and better metacognitive skills, have been reported as more effective study habits in chemistry (Cook et al., 2013; Sinapuelas and Stacy, 2015; Chan and Bauer, 2016; Ye et al., 2016).
In addition to the helpful category, student responses also revealed unhelpful sides of the use of CEs. Three themes were found: challenging, need for more instruction or feedback, and self-doubt. Some students thought CEs were too challenging, or they could not come up with enough statements to meet the required number for full credit: “to pull a lot of concepts and apply it to the general question itself was beyond difficult”; “I feel as if I run out of things to say and just end up getting frustrated.” Second, students mentioned that they would like more instruction on how to meet the criteria of CEs and more immediate feedback on their answers. The last theme was self-doubt: certain students were not confident about their answers to CEs: “I would make connections and they would be wrong”, which may cause additional anxiety during exams.
| Fig. 6 Visual maps showing students’ linking of concepts and progress over time using CE responses: (a) CE1 in Exam 1, (b) CE2 in Exam 2, (c) CE3 in Exam 3, (d) CE4 in Exam 4. | ||
In sum, the majority of students thought that the use of CEs was beneficial for their learning in chemistry. The helpful themes converge with the positive effects of CEs: CEs encourage knowledge integration of topics in introductory chemistry, promote conceptual understanding, and help students form more effective study habits. These findings explain why the use of CEs leads to better academic performance and retention.
Among these helpful themes, some viewpoints are consistent with students’ views of the use of concept mapping, such as appreciating the connections among chemistry concepts and improving conceptual understanding (Turan-Oluk and Ekmekci, 2018). The unhelpful themes provide insight into the implementation of CEs. They also help explain why the effect of CEs on exam scores was roughly double at the second setting compared to the first: with the improved instruction and feedback at the second setting, where students were assigned CEs individually before class and given adequate time to work together in groups in class, students were clearer on how to answer CEs and better at constructing responses through immediate feedback from the instructor.
For practitioners who might adopt Creative Exercises as an assessment in their courses, the frequency and quality of implementation are important factors to consider. First, implementing Creative Exercises multiple times throughout the semester is necessary. Providing example responses to Creative Exercises and explaining them before assigning Creative Exercises would help students understand the criteria. The more practice and feedback students have with Creative Exercises, the better their responses become. Second, Creative Exercises should be used as both formative and summative assessments. Assigning Creative Exercises individually before class and having students discuss their responses in in-class group activities can be an efficient way to implement Creative Exercises as a formative assessment. In addition, instructors should allocate sufficient in-class time to give immediate feedback on Creative Exercises. To maximize the effect of Creative Exercises on student learning, using them alongside traditional assessments as summative assessments will help students value Creative Exercises more and promote knowledge integration, conceptual understanding, and more effective study habits.
For researchers, a limitation of this study is the relatively small sample sizes of the courses at the two settings. Future studies implementing Creative Exercises with larger samples would test the generalizability of the findings reported here. The authors are currently utilizing student responses to Creative Exercises to develop a series of assessments that measure linked chemistry concepts and to evaluate the effect of the follow-up assessment in a large-scale study. Additionally, the students in our settings were in general less prepared for chemistry courses, with weaker math preparation and high proportions of underrepresented minorities; researchers should be cautious when their student body differs substantially from ours. Moreover, the visual maps showed that students connected stoichiometry and atoms to other topics least often. Future research on developing instructional methods or interventions to improve understanding of the relevance of those topics would be valuable. Another insight from this study comes from the visual maps of student responses to Creative Exercises: we were able to show evidence of students’ linking of chemistry topics and the progress made across the semester. Researchers interested in investigating student written responses to chemistry assessments may utilize this method to create visual representations of their data. Finally, positive effects on student performance and encouraging student attitudes have been shown by implementing concept maps and Creative Exercises separately in chemistry courses. Future research might implement the two assessments simultaneously in one study to examine whether coupling concept maps with Creative Exercises amplifies the effects and to explore similarities and differences between the two assessments for meaningful learning.
Footnote |
| † Electronic supplementary information (ESI) available. See DOI: 10.1039/c8rp00248g |
| This journal is © The Royal Society of Chemistry 2019 |