Sara H. Kyne *ab, Martin M. H. Lee b and Charisse T. Reyes bc
aSchool of Chemistry, Faculty of Science, University of New South Wales, Sydney, NSW 2052, Australia. E-mail: s.kyne@unsw.edu.au
bSchool of Chemistry, Faculty of Science, Monash University, Clayton, VIC 3800, Australia
cFaculty of Education, University of the Philippines Open University, Los Baños, Laguna 4031, Philippines
First published on 13th April 2023
Recent developments in digital technologies, including learning analytics, are changing educational practices due to the wealth of information available and its utility in informing academic interventions for students. This study investigates the impact of personalised feedback emails on academic performance and student success in large first-year undergraduate chemistry courses. Learning analytics was used to inform and generate feedback emails to students at various timepoints during the semester. The feedback emails included an evaluation of students’ current performance and advice on support systems and resources to support their learning. We analysed the marks of 6334 students at three timepoints during the semester, in addition to their final course grade, and compared academic performance across three years of course offerings (2019–2021). We compared students who did not receive feedback (2019 control groups, n = 2391) with students who did receive feedback (2020–2021 experimental groups, n = 3943). Our findings suggest that students receiving personalised emails during the semester were more likely to pass their first-year chemistry course. Furthermore, our data showed that sending personalised feedback emails fostered success among a greater number of students within the cohort, and that students appraised the personalised feedback positively.
A significant body of research reports the positive impacts of students receiving feedback (Hounsell et al., 2008; Ferguson, 2011; Evans, 2013). Nicol et al. (2014) reported that students viewed receiving feedback as “beneficial primarily because it alerts them to deficiencies or gaps in their work”. The functions of feedback, however, depend on the “learning environment, the needs of the learner, the purpose of the task, and the particular feedback paradigm adopted” (Evans, 2013) such as prescribed by cognitivist and socio-constructivist views. Nicol (2010) further argues that feedback “should be conceptualised as a dialogical and contingent two-way process” involving teacher–student and student–student interactions. In a dialogical context wherein adaptive, discursive, and reflective feedback is shared between teacher and student, interactive oral and written communication becomes more effective (Nicol, 2010).
As acknowledged by Hepburn et al. (2022), the first-year experience is challenging for both educators and students with the demand on the educators to improve low engagement particularly in large classes. Providing feedback could address this, as evidenced by the higher levels of students’ attention and satisfaction in a course reported by Pérez-Segura et al. (2022). Moreover, feedback has also been shown to increase student motivation to invest greater efforts in completing their learning tasks (Lim et al., 2021b). Students’ development of self-regulated skills (e.g., planning, monitoring, and evaluation) was also promoted by feedback designed with a pedagogical agent, as demonstrated by Karaoğlan Yılmaz et al. (2018).
Despite the known positive impacts of providing feedback to students, feedback systems are not yet routinely employed at an institutional level across all university courses. Particularly for large-cohort courses, such as first-year undergraduate chemistry, the demands of providing effective feedback that drives individual students’ performance towards their learning goals continue to pose a great challenge in terms of teachers’ workload (i.e. the time investment required) and the scalability of feedback (Henderson et al., 2019). Research has also found that student feedback encounters challenges in the timeliness of delivery and the effectiveness of the information received by students (Weaver, 2006; Bailey and Garner, 2010).
A growing number of studies have investigated the use of learning analytics to generate feedback and its impact on student performance (Wong et al., 2018) in undergraduate language (Karaoğlan Yılmaz and Yilmaz, 2020), psychology and communication (Lewis et al., 2021), mathematics (Dart and Spratt, 2020), science (Iraj et al., 2020) and other fields such as health and computer engineering (Lim et al., 2021a). Given this, there is a need to explore how feedback mechanisms based on learning analytics specifically impact chemistry students, especially in first-year undergraduate courses where students are often learning how to navigate university life and academic expectations.
This paper aims to gain insight into how personalised feedback emails, generated through learning analytics measurements based on students’ performance in assessment tasks, impact their academic performance and student success in two first-year undergraduate chemistry courses. Furthermore, this study aims to uncover aspects of students’ behaviour upon receiving feedback. The following research questions guided this study:
RQ1. What impact could be attributed to learning analytics-based personalised feedback on the academic performance of students enrolled in first-year undergraduate chemistry?
RQ2. What differences are observed in student success in first-year undergraduate chemistry comparing 2019–2021?
RQ3. What unsolicited individual student responses are evident upon receiving feedback emails at specific timepoints during the semester?
Semester and year | First-year chemistry course | Number of students (n) | Mode of instructional delivery |
---|---|---|---|
Semester 1 2019 | CHEM A | 1377 | Blended (online preparation materials, in-person workshops, tutorials and laboratories) |
Semester 2 2019 | CHEM B | 1014 | Blended (online preparation materials, in-person workshops, tutorials and laboratories) |
Semester 1 2020 | CHEM A | 1002 | Fully online (online preparation materials and lecture videos, online synchronous large-group tutorials and laboratories) |
Semester 2 2020 | CHEM B | 866 | Fully online (online preparation materials and lecture videos, online synchronous large-group tutorials and laboratories) |
Semester 1 2021 | CHEM A | 1205 | Blended (online preparation materials and lecture videos, online or in-person large-group tutorials and in-person laboratories) |
Semester 2 2021 | CHEM B | 870 | Fully online (online preparation materials and lecture videos, online synchronous large-group tutorials and laboratories) |
Prior to the pandemic in 2020, the courses followed a blended design, where students would complete their weekly preparation learning tasks including an assessed online quiz through the Moodle learning management system (LMS). Each week there were two one-hour workshops and a one-hour tutorial. A three-hour laboratory practical was scheduled during eight weeks of the semester. Students were required to complete a compulsory pre-laboratory quiz prior to their laboratory practical. Following the practical, students completed an online laboratory report.
Early in Semester 1 2020, CHEM A teaching delivery was switched to a fully online mode and the layout and delivery of classes were modified. The weekly preparation materials on the LMS remained in place. In addition, students were provided with pre-recorded online lecture videos and were expected to attend a synchronous one-hour large-group tutorial delivered via Zoom. Additionally, students attended six one-hour virtual laboratory sessions delivered via Zoom over the semester. Assessment of the laboratory component remained in the same format as in 2019; however, the laboratory reports were modified for the online format. Two in-semester assessments and a final end-of-semester examination were administered online. It should be noted that CHEM B in Semester 2 of 2020 and 2021 also followed this teaching approach. In Semester 1 2021, CHEM A students were allowed to return to campus, subject to strict social distancing laws and density restrictions. Weekly preparation materials and online lecture videos were again delivered via the LMS. Large-group tutorials were delivered both in-person and online (students could choose their preferred mode of delivery). The only compulsory in-person class was a 2.5-hour laboratory practical (scheduled in six weeks of the semester).
The personalised feedback emails were sent at three distinct timepoints during the semester (Weeks 5, 8 and 12). These timepoints were chosen based on the percentage of the in-semester assessment tasks that had been completed (Week 5: approximately 25%, Week 8: approximately 50%, Week 12: 100%). The messages were generated using the Mail Merge add-on for Mozilla Thunderbird. Mail Merge automatically generates and sends personalised emails from a pre-written email template and a spreadsheet containing specific details about the intended recipients and the category they were classified into (for example, student name, email address and the specific feedback for the category). Before the messages were generated, the students’ marks were collated and processed to determine their cumulative total marks at that timepoint in the semester. Students were then classified into three categories (Good, Midway, Poor). The ‘Good’ category corresponded to at least 50% of the cumulative marks possible at the timepoint, ‘Midway’ to at least 25% but less than 50%, and ‘Poor’ to less than 25%. Aiming to stimulate dialogue between the coordinator and student, or inner dialogue for the student, personalised feedback emails were generated with pre-written notes and advice (Table 2) using an email template (Fig. 1) based on the student categorisations, and were designed to be dialogic in their content (Espasa et al., 2022). Fig. 2 illustrates the flowchart for sending personalised emails during a semester.
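The sketch below illustrates the kind of pre-processing this workflow implies: cumulative marks are converted to a percentage of the marks available at the timepoint, each student is assigned a category, and a spreadsheet is written for a mail-merge tool. It is a minimal sketch, not the authors' actual script, and the column names (student_name, email, cumulative_mark) are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the study's actual script) of categorising
# students and writing a mail-merge spreadsheet for a tool such as the
# Thunderbird Mail Merge add-on.
import csv

def categorise(cumulative_mark: float, marks_available: float) -> str:
    """Return the feedback category for one student at a given timepoint."""
    pct = 100 * cumulative_mark / marks_available
    if pct >= 50:
        return "Good"
    elif pct >= 25:
        return "Midway"
    return "Poor"

def write_mail_merge_csv(students, marks_available, outfile="week5_mailmerge.csv"):
    """students: iterable of dicts with keys student_name, email, cumulative_mark."""
    with open(outfile, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=["student_name", "email", "category"])
        writer.writeheader()
        for s in students:
            writer.writerow({
                "student_name": s["student_name"],
                "email": s["email"],
                "category": categorise(s["cumulative_mark"], marks_available),
            })
```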
Category | Notes | Advice |
---|---|---|
Good | We are pleased to know that you are doing very well based on your marks in the laboratory assessments and preparation quizzes that you recently completed. Congratulations on your hard work so far. | We want to encourage you to do even better, and to reach out to us in case you need support. |
Midway | We noticed that you may be struggling in this [course] based on your marks in the laboratory assessments and preparation quizzes that you recently completed. However, don’t be discouraged, as there is still plenty of time to improve your results. | To help you do better in the upcoming assessments, we want to encourage you to reach out to us in case you need further support. |
Poor | We noticed that you may be struggling in this [course] as you have not completed all of the laboratory assessments and/or preparation quizzes that have been due so far this semester. | To help you do better in the upcoming assessments, we want to encourage you to reach out to us in case you need further support. |
Fig. 1 A sample (a) template and (b) email sent to students in the Good category in Week 8 (Semester 2 2021). |
Although qualitative data was not explicitly collected as part of this study, the course coordinator did receive unsolicited emails explicitly mentioning the personalised feedback emails. In addition, open-text responses to the end-of-semester student evaluations of each course referring to the personalised feedback emails were collected. Thematic analysis of these sources of qualitative data was carried out using inductive coding (Braun and Clarke, 2006). NVivo (R1.6.1, QSR International, MA) was used for computer-assisted coding. Interrater reliability between the two raters was good, as indicated by a Krippendorff's alpha (α) of 0.877, calculated using IBM SPSS Statistics 27 (IBM Corporation, NY).
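As an illustration only (the study used IBM SPSS, not Python), the sketch below shows how a comparable two-rater Krippendorff's alpha could be computed with the open-source krippendorff package; the rating codes are toy values, not study data.

```python
# Illustrative sketch: Krippendorff's alpha for two raters coding the same
# responses, using the open-source "krippendorff" package (pip install krippendorff).
# The codes below are toy values, not the study's coded data.
import numpy as np
import krippendorff

# rows = raters, columns = coded units; np.nan marks units a rater did not code
codes_rater_1 = [1, 2, 2, 3, 1, 2, np.nan, 3]
codes_rater_2 = [1, 2, 3, 3, 1, 2, 2, 3]
reliability_data = np.array([codes_rater_1, codes_rater_2], dtype=float)

alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha = {alpha:.3f}")
```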
In our study, academic performance was determined through students’ final course grades (Richardson et al., 2012; Narad and Abdullah, 2016) at the end of the semester. The mean final course grade of students who did not receive feedback emails in Semesters 1 and 2 of 2019 (CHEM A and CHEM B, two semesters) was compared against the mean final course grade of students who received feedback emails in Semesters 1 and 2 of 2020 and 2021 (CHEM A and CHEM B, four semesters).
A Mann–Whitney U test shows that the mean final course grade of students who received personalised feedback emails (mean = 63.5%, sd = 20.0%, n = 3943) was significantly higher than that of students who did not (mean = 59.2%, sd = 27.5%, n = 2391), with a medium effect size of 0.187 (Snyder and Lawson, 1993; Field, 2013; Tomczak and Tomczak, 2014). Furthermore, the confidence interval plots shown in Appendix Fig. 5 indicate that, at the 95% confidence level, the mean grade of students who received personalised feedback emails lies within a narrower range (±0.6%) than that of students who did not (±1.1%). This finding suggests that when students received feedback emails, academic performance across the cohort was more uniform. The positive impact of receiving feedback emails suggested by these findings is consistent with results reported by Pardo et al. (2019), indicating that the provision of feedback emails in the weeks prior to a major assessment had “a significant and positive impact on the learning experience of the students.”
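For readers wishing to reproduce this type of comparison, the sketch below runs a Mann–Whitney U test and derives an approximate effect size r = |Z|/√N from the normal approximation of U. The grade arrays are synthetic placeholders, and the exact effect-size convention used in the study is not restated here, so this is an illustration rather than the study's pipeline.

```python
# Sketch: Mann-Whitney U test on final course grades with an approximate
# effect size r = |Z| / sqrt(N). Synthetic data only; not the study's records.
import numpy as np
from scipy import stats

def mann_whitney_with_effect_size(grades_feedback, grades_no_feedback):
    u, p = stats.mannwhitneyu(grades_feedback, grades_no_feedback,
                              alternative="two-sided")
    n1, n2 = len(grades_feedback), len(grades_no_feedback)
    # normal approximation of U (ignoring tie correction) to obtain Z
    mu_u = n1 * n2 / 2
    sigma_u = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu_u) / sigma_u
    r = abs(z) / np.sqrt(n1 + n2)
    return u, p, r

# usage with synthetic grades (illustrative only)
rng = np.random.default_rng(0)
grades_feedback = rng.normal(63.5, 20.0, 3943).clip(0, 100)
grades_no_feedback = rng.normal(59.2, 27.5, 2391).clip(0, 100)
print(mann_whitney_with_effect_size(grades_feedback, grades_no_feedback))
```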
Table 3 shows that a greater proportion of students passed their course when personalised feedback emails were sent compared to students who did not receive feedback emails. Pearson's chi-squared test suggests that there is a significant association between sending personalised feedback emails and passing the course (χ²(1) = 12.5, p ≤ 0.001), although the association is weak (Cramér's V = 0.0445). These results suggest that students receiving personalised emails during the semester were more likely to pass their first-year chemistry course, which is similar to the results obtained by Lim et al. (2021a), where the final grades of first-year biological sciences students who received feedback were significantly higher than those of students who did not receive feedback.
 | Pass % (n) | Fail % (n) |
---|---|---|
No feedback emails received (n = 2391) | 76.1 (1820) | 23.9 (571) |
With feedback emails received (n = 3943) | 79.9 (3150) | 20.1 (793) |
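The association above can be re-computed directly from the counts in Table 3; the sketch below applies an uncorrected Pearson's chi-squared test and Cramér's V, and should reproduce the reported values within rounding (whether Yates' continuity correction was applied in the original analysis is not stated, so correction=False is an assumption).

```python
# Sketch: Pearson's chi-squared test and Cramer's V on the 2x2 pass/fail
# contingency table built from the Table 3 counts.
import numpy as np
from scipy.stats import chi2_contingency

# rows: no feedback, with feedback; columns: pass, fail
observed = np.array([[1820, 571],
                     [3150, 793]])

chi2, p, dof, expected = chi2_contingency(observed, correction=False)
n = observed.sum()
cramers_v = np.sqrt(chi2 / (n * (min(observed.shape) - 1)))
print(f"chi2({dof}) = {chi2:.1f}, p = {p:.4f}, Cramer's V = {cramers_v:.4f}")
```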
In addition, student success was measured by successful completion of the course, that is, the number of students who passed the course at the end of the semester.
Fig. 3 illustrates the progress of students within each cohort over the semester through Sankey network diagrams. Students received a personalised feedback email based on the category they were classified into (Good, Midway, Poor). The highlighted links (in yellow) indicate an improvement in performance from Week 5 to Week 8 and on to Week 12, whilst blue links indicate a decrease in performance. Students who remained in the same category are shown in grey.
Fig. 3 Sankey network diagrams showing students who improved from the Poor category in Week 5 to the Midway/Good categories (in latter weeks) for all six semesters (Semesters 1 and 2 2019–2021). |
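A diagram of this kind can be generated from the week-to-week category transition counts. The sketch below uses plotly (an assumption; the study does not name the tool used to produce Fig. 3) with illustrative counts only.

```python
# Sketch: a Week 5 -> Week 8 category-transition Sankey diagram using plotly.
# Node labels and link counts are illustrative, not the study's data.
import plotly.graph_objects as go

labels = ["Good W5", "Midway W5", "Poor W5",
          "Good W8", "Midway W8", "Poor W8"]

fig = go.Figure(go.Sankey(
    node=dict(label=labels, pad=15, thickness=12),
    link=dict(
        # indices into `labels`: e.g. Good W5 -> Good W8, Poor W5 -> Midway W8, ...
        source=[0, 0, 1, 1, 2, 2],
        target=[3, 4, 3, 4, 4, 5],
        value=[900, 40, 60, 80, 10, 120],  # illustrative student counts only
    ),
))
fig.update_layout(title_text="Week 5 to Week 8 category transitions (illustrative)")
fig.show()
```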
Fig. 3(A) and (B) illustrate the diagrams for the cohorts of students in 2019, when no feedback emails were sent. During these semesters, a large proportion of students were categorised as Good throughout the semester (Semester 1 2019 = 77.1%; Semester 2 2019 = 90.8%). However, both diagrams also show a significant proportion of students who were categorised as Poor at Week 5 and continued with little to no improvement for the rest of the semester. In Semester 1 2019, 15.9% (n = 219) of the students were categorised as Poor in all weeks and failed the course at the end of the semester. As shown in Table 4 and Fig. 3(A) and (B), a very small percentage of students improved from the Poor category in Week 5 to the Midway or Good category by the end of the semester in Semester 1 or Semester 2 2019 (0.2% and 0.3%, respectively). Note that the number of students who finish the semester in the Midway category at the final grade is an artefact of students failing one or more course hurdles, which resulted in those students receiving a final grade of 45.
Semester and year | Improved to Week 8 % (n) | Improved to Week 12 % (n) | Improved to final grade % (n) |
---|---|---|---|
Semester 1 2019 (n = 1377) | 0.3 (4) | 0.3 (4) | 0.2 (3) |
Semester 2 2019 (n = 1014) | 0.1 (1) | 0.1 (1) | 0.3 (3) |
Semester 1 2020 (n = 1002) | 1.3 (13) | 1.2 (12) | 1.6 (16) |
Semester 2 2020 (n = 866) | 0.2 (2) | 0.7 (6) | 0.5 (4) |
Semester 1 2021 (n = 1205) | 1.3 (16) | 1.4 (17) | 1.7 (20) |
Semester 2 2021 (n = 870) | 2.5 (22) | 1.8 (16) | 2.3 (20) |
Fig. 4 Comparing the percentage of students in each category (Good, Midway, Poor) for all six semesters (Semesters 1 and 2 2019–2021). |
In contrast to Semester 1 2019, a higher percentage of students in Semester 1 2020 was observed in the Good category at each timepoint (average 85.3 ± 3.4% across the semester) and a lower percentage in the Poor category at each timepoint (average 6.8 ± 1.5% across the semester). Furthermore, Semester 1 2021 had an average of 84.5% (±5.3%) of students in the Good category at each timepoint, with an average of only 6.1% (±0.8%) of students in the Poor category. Whilst there were significant differences in the learning conditions between 2019 and 2021, comparing findings from Semester 1 2019 and Semester 1 2021, where the instructional delivery modes were relatively similar (Table 1), reveals positive impacts of personalised feedback on students’ performance.
In Semester 2 2020 and 2021, a similarly high percentage of students was observed in the Good category (average 89.0 ± 5.3% across Semester 2 2020, and average 89.3 ± 2.0% across Semester 2 2021) and a low percentage in the Poor category (average 4.6 ± 1.5% across Semester 2 2020, and average 5.3 ± 0.8% across Semester 2 2021) when feedback was given (Fig. 4). A similar pattern was observed in Semester 2 2019, with an average of 90.1% (±5.9%) of students in the Good category at each timepoint and an average of 4.3% (±1.0%) of students in the Poor category. Note that in Semester 2 2019, one email was sent midway through the semester by the course coordinator to students who were not on track to pass the laboratory hurdle. Failing the laboratory hurdle would result in these students failing the course overall. This may have prompted students to withdraw from the course at this time, and therefore these students would not appear in the Poor category at the end of the semester as they do for Semester 1 2019.
Fig. 4 further illustrates the increase in the percentage of students in the Poor category during Semester 1 2019, rising from Week 5 (16.1%) to the final course grade (20.9%). In contrast, this increase was not observed in Semester 1 2020 or 2021. At the final course grade, the percentages of students in the Poor category for Semester 1 2020 (6.8%) and Semester 1 2021 (6.8%) are significantly lower than for Semester 1 2019 (20.9%). Similar trends were observed in Semester 2 2020 and 2021, where the percentages of students in the Poor category at the final course grade were 4.3% and 5.4%, respectively. These results corroborate the observation from Fig. 3 that lower proportions of students in both Semesters 1 and 2 of 2020 and 2021 remained in the Poor category from Week 5 to the final course grade, unlike in Semester 1 2019.
Taken together, these results suggest that personalised feedback emails fostered student success among a greater number of students within the cohort compared to when students did not receive feedback emails. This apparent positive impact on students’ performance upon receipt of personalised feedback emails is consistent with the results reported by Stuart (2004) who found that accounting students who received feedback outperformed those who did not.
A Kruskal–Wallis H test further suggests that the mean student marks at Weeks 5, 8 and 12, and the final course grade, in Semester 1 2019 were significantly lower than in all other semesters (Table 5). Post hoc analyses by Dwass–Steel–Critchlow–Fligner (DSCF) pairwise comparisons further suggest that the mean student marks from Semester 1 2019 at Weeks 5, 8 and 12 were significantly lower in individual comparisons with each of the other semesters. Moreover, the personalised feedback emails sent in Week 5 may have contributed to a medium increase in students’ performance in Weeks 8 and 12, and a small increase in students’ final grade, according to the effect size estimated by ε² (Table 5) (Snyder and Lawson, 1993; Field, 2013; Tomczak and Tomczak, 2014). This finding is supported by Tinto (2012), who stated that, particularly for first-year students who are “trying to adjust their behaviours to the new academic and social demands of college or university life”, students who receive frequent feedback about their performance are more likely to successfully complete their first-year courses.
Semester and year | Mean mark, Week 5 | Mean mark, Week 8 | Mean mark, Week 12 | Mean final grade |
---|---|---|---|---|
Semester 1 2019 (n = 1377) | 68.2 | 67.6 | 66.1 | 57.4 |
Semester 2 2019 (n = 1014) | 82.6 | 81.0 | 81.4 | 68.6 |
Semester 1 2020 (n = 1002) | 76.5 | 69.7 | 73.0 | 63.3 |
Semester 2 2020 (n = 866) | 87.6 | 81.4 | 77.2 | 64.9 |
Semester 1 2021 (n = 1205) | 78.1 | 76.3 | 66.2 | 63.4 |
Semester 2 2021 (n = 870) | 78.3 | 78.9 | 76.6 | 63.1 |
Kruskal–Wallis H test: χ² | 727.4 | 545.0 | 721.3 | 21.6 |
p | <0.001 | <0.001 | <0.001 | <0.001 |
ε² | 0.119 | 0.089 | 0.118 | 0.031 |
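The per-timepoint comparison in Table 5 can be reproduced with a Kruskal–Wallis H test plus the ε² effect size of Tomczak and Tomczak (2014), ε² = H(n + 1)/(n² − 1). The sketch below uses synthetic mark arrays as placeholders for the per-student marks of the six cohorts at a single timepoint.

```python
# Sketch: Kruskal-Wallis H across the six semester cohorts with the
# epsilon-squared effect size. Synthetic marks only; not the study's data.
import numpy as np
from scipy.stats import kruskal

def kruskal_with_epsilon_squared(*groups):
    h, p = kruskal(*groups)
    n = sum(len(g) for g in groups)
    eps_sq = h * (n + 1) / (n ** 2 - 1)  # Tomczak and Tomczak (2014)
    return h, p, eps_sq

# usage with illustrative cohort sizes matching Table 5
rng = np.random.default_rng(1)
cohorts = [rng.normal(mu, 15, size) for mu, size in
           [(68, 1377), (83, 1014), (77, 1002), (88, 866), (78, 1205), (78, 870)]]
print(kruskal_with_epsilon_squared(*cohorts))
```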
Benefits of receiving feedback emails | Sample quotes from unsolicited emails | Sample quotes from student evaluation responses |
---|---|---|
Increased motivation towards further self-improvement | I’m definitely keen to push myself to improve my outcomes (Semester 1 2020) | The three personalised messages throughout the semester from the [course] Coordinator addressing feedback on students' performance motivated me to continue to do well (Semester 1 2020) |
Increased motivation to pursue a chemistry major | Chemistry has always been my passion, and as such, the [course] I’ve done this year have really helped me further my development and understanding of chemistry. This year has confirmed that I want to continue into an extended major in chemistry. (Semester 2 2020) | None |
Increased awareness of available support | It is relieving to hear that there is a great amount of support offered to myself and everyone else doing [the course]. (Semester 1 2020) | [T]he [course] has general feedback every 4 weeks to reflect on how I am going and what should I do to keep updated with the work. (Semester 2 2020) |
Enjoyable learning experience | I love chemistry and, despite the current predicament of the pandemic and the shift of the semester online, am enjoying studying [the course]. (Semester 1 2020) | The positive feedback emails were a lovely surprise that none of my other [course] have done before. Highly motivating! (Semester 1 2021) |
Reflection on the quality of teaching of the course | I'd also like to thank you and the rest of the [teaching] team for managing to provide a very high level of teaching despite the conditions of this semester. This is actually my third year of [university]…, and I don't think I have seen a more passionate group of teachers. (Semester 1 2020) | [Course coordinator]'s monthly performance feedback emails felt like a good amount of communication and helped make the first-year chemistry team more approachable if I needed help. (Semester 1 2021) |
Furthermore, the results of this study may likewise be limited by the scope of the data collection methods, in particular the lack of systematically collected student perceptions. Whilst unsolicited emails and student evaluations provided a glimpse of the positive impacts of receiving personalised feedback emails, this study was unable to explore in more depth students’ perceptions of where further improvements may be desirable.
Another limitation is that the impact of the timing and frequency of sending personalised feedback emails was not explored in this study. Exploring these factors may have provided further insight into the usefulness of feedback emails at different timepoints throughout the course, and into how to maximise their benefit to students’ learning experiences.
Our future studies will employ survey instruments, focus group discussions and interviews with students to gather a deeper understanding of the impact of the feedback emails. This will enable teachers and instructional designers to better understand how the quantity and timing of feedback emails contribute to academic performance and student success. In addition, this will allow students’ attitudes and behaviours upon receiving personalised emails to be explored.
The differences in academic performance and student success between students who received and did not receive feedback were found to be small but significant (as shown in Table 3). As Evans (2013) argued, giving feedback alone is not sufficient for students to achieve their learning goals, and improving the quality of the feedback merits greater consideration given the growing number and diversity of students in higher education. Hence, further studies will be undertaken to identify parameters for providing well-designed, actionable, high-quality feedback in first-year chemistry, as perceived by both students and the teacher.
In addition, further studies could explore students’ activities within the LMS after receiving feedback to identify their reactions to the evaluation of their past performance, for example by investigating which resources students access before and after they receive feedback emails. Students’ activities in the LMS upon receipt of the emails may reflect their response to the actionable items included in the feedback. These data would provide an important perspective for teachers and instructional designers on the effect of the feedback and, importantly, on the learning design of the course in the LMS.
Fig. 5 Comparing the mean final course grades of students who received and did not receive personalised feedback emails (n = 6334). |
Semester and year | Course pass grade (%) | Assessment component | Weighting (%) | Hurdle (%)a | Mode of delivery |
---|---|---|---|---|---|
Semester 1 2019 | 50 | In-semester | 15 | — | Online (10 × weekly quizzes); in-class tests (3) (paper-based, timed, invigilated) |
 | | Laboratory | 30 | 45 | In-person attendance, online report |
 | | End-of-semester examination | 55 | 30 | In-person (paper-based, timed, invigilated) |
Semester 2 2019 | 50 | In-semester | 15 | — | Online (10 × weekly quizzes); online tests (2) (timed, not invigilated) |
 | | Laboratory | 30 | 45 | In-person attendance, online report |
 | | End-of-semester examination | 55 | 30 | In-person (paper-based, timed, invigilated) |
Semester 1 2020 | 50 | In-semester | 15 | — | Online (10 × weekly quizzes); online tests (2) (timed, not invigilated) |
 | | Laboratory | 30 | 40 | Online attendance, online report |
 | | End-of-semester examination | 55 | 40 | Online (timed, not invigilated) |
Semester 2 2020 | 50 | In-semester | 15 | — | Online (10 × weekly quizzes); online tests (2) (timed, not invigilated) |
 | | Laboratory | 30 | 40 | Online attendance, online report |
 | | End-of-semester examination | 55 | 40 | Online (timed, not invigilated) |
Semester 1 2021 | 50 | In-semester | 15 | — | Online (10 × weekly quizzes); online tests (2) (timed, not invigilated) |
 | | Laboratory | 30 | 45 | In-person attendance, online report |
 | | End-of-semester examination | 55 | 45 | Online (timed, not invigilated) |
Semester 2 2021 | 50 | In-semester | 15 | — | Online (10 × weekly quizzes); online tests (2) (timed, not invigilated) |
 | | Laboratory | 30 | 40 | Online attendance, online report |
 | | End-of-semester examination | 55 | — | Online (timed, not invigilated) |

a Hurdle: if students did not meet this mark threshold, they failed the course regardless of their raw grade.
Footnote |
† Electronic supplementary information (ESI) available. See DOI: https://doi.org/10.1039/d3rp00032j |