Enhancing academic performance and student success through learning analytics-based personalised feedback emails in first-year chemistry

Sara H. Kyne *ab, Martin M. H. Lee b and Charisse T. Reyes bc
aSchool of Chemistry, Faculty of Science, University of New South Wales, Sydney, NSW 2052, Australia. E-mail: s.kyne@unsw.edu.au
bSchool of Chemistry, Faculty of Science, Monash University, Clayton, VIC 3800, Australia
cFaculty of Education, University of the Philippines Open University, Los Baños, Laguna 4031, Philippines

Received 9th February 2023, Accepted 7th April 2023

First published on 13th April 2023


Abstract

Recent developments in digital technologies, including learning analytics, are changing educational practices due to the wealth of information available and its utility in informing academic interventions for students. This study investigates the impact of personalised feedback emails on academic performance and student success in large first-year undergraduate chemistry courses. Learning analytics was used to inform and generate feedback emails to students at various timepoints during the semester. The feedback emails included an evaluation of each student's current performance and advice on support systems and resources available to support their learning. We analysed the marks of 6334 students at three timepoints during the semester, in addition to their final course grades, and compared academic performance across three years of course offerings (2019–2021). We compared students who did not receive feedback (2019 control groups, n = 2391) with students who did receive feedback (2020–2021 experimental groups, n = 3943). Our findings suggest that students receiving personalised emails during the semester were more likely to pass their first-year chemistry course. Furthermore, our data showed that sending personalised feedback emails fostered student success among a greater number of students within the cohort, and that students appraised the personalised feedback positively.


Introduction

Student feedback

Feedback is “among the most common features of successful teaching and learning” (Hattie, 2011, p. 115); hence, it can be a compelling influence on a student's learning experience. Effective feedback requires an understanding of how to alter the gap between where the student “is” and where they are “meant to be” (Sadler, 1989; Hattie and Timperley, 2007) and this requires a teacher's explicit expert role in assessing the quality of students’ work or performance. Particularly for first-year students, feedback practices should aim to aid students to develop their self-regulated learning capabilities as many of them come to higher education “unprepared for independent self-reflective learning” (Carless, 2015).

A significant body of research reports the positive impacts of students receiving feedback (Hounsell et al., 2008; Ferguson, 2011; Evans, 2013). Nicol et al. (2014) reported that students viewed receiving feedback as “beneficial primarily because it alerts them to deficiencies or gaps in their work”. The functions of feedback, however, depend on the “learning environment, the needs of the learner, the purpose of the task, and the particular feedback paradigm adopted” (Evans, 2013) such as prescribed by cognitivist and socio-constructivist views. Nicol (2010) further argues that feedback “should be conceptualised as a dialogical and contingent two-way process” involving teacher–student and student–student interactions. In a dialogical context wherein adaptive, discursive, and reflective feedback is shared between teacher and student, interactive oral and written communication becomes more effective (Nicol, 2010).

As acknowledged by Hepburn et al. (2022), the first-year experience is challenging for both educators and students, with demands on educators to improve low engagement, particularly in large classes. Providing feedback could address this, as evidenced by the higher levels of student attention and satisfaction in a course reported by Pérez-Segura et al. (2022). Moreover, feedback has also been shown to increase student motivation to invest greater effort in completing learning tasks (Lim et al., 2021b). Students' development of self-regulated skills (e.g., planning, monitoring, and evaluation) was also promoted by feedback designed with a pedagogical agent, as demonstrated by Karaoğlan Yılmaz et al. (2018).

Despite the known positive impacts of providing feedback to students, feedback systems are not yet routinely employed at an institutional level across all university courses. Particularly for large-cohort courses, such as first-year undergraduate chemistry, the demands of providing effective feedback that drives individual students' performance towards their learning goals continue to pose a great challenge in terms of teachers' workload (i.e. the time investment required) and the scalability of feedback (Henderson et al., 2019). Research has also found that student feedback encounters challenges in the timeliness of its delivery and the effectiveness of the information received by students (Weaver, 2006; Bailey and Garner, 2010).

Learning analytics and personalised feedback

Personalised or individualised feedback is considered a “vital and feasible strategy” to support student learning and promote motivation and performance (Koenka and Anderman, 2019). Compared to general feedback, students prefer specific feedback, as shown by Poulos and Mahony (2008). In a foreign language course, personalised feedback was shown to play a critical role in the development and improvement of students' learning skills, including listening and reading (Pérez-Segura et al., 2022). Gould and Day (2013) reported that students valued personalised feedback as it invokes emotional influence, with students perceiving their teachers as “more approachable.” However, personalising feedback is still challenged by timeliness and effectiveness, particularly when attempting to implement it at scale. One of the emerging solutions to address these challenges is to employ learning analytics to generate personalised feedback. The discipline of learning analytics was first defined as the “measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimi[s]ing learning and the environments in which it occurs” (Siemens and Gašević, 2012). Whilst research literature on learning analytics dashboards has proliferated in the last decade, a number of studies have highlighted either the lack of personalised feedback (Jivet et al., 2017; Zheng et al., 2022) or the importance of personalising feedback for improving students' academic performance (Pérez-Segura et al., 2022; Zheng et al., 2022). Tsai et al. (2021) explored students' individual expectations and experiences with a learning analytics-based feedback system delivered through OnTask software and identified the importance of developing feedback literacy amongst students. Interestingly, Iraj et al. (2020) reported positive benefits of early engagement with feedback on student success in a foundation science course, additionally finding that students' previous engagement with feedback was a useful predictive metric for student success.

A growing number of studies have investigated the use of learning analytics to generate feedback and the impact this has on student performance (Wong et al., 2018) in undergraduate language (Karaoğlan Yılmaz and Yilmaz, 2020), psychology and communication (Lewis et al., 2021), mathematics (Dart and Spratt, 2020), science (Iraj et al., 2020) and other fields such as health and computer engineering (Lim et al., 2021a). Given this, there is a need to explore how feedback mechanisms based on learning analytics specifically impact chemistry students, especially in first-year undergraduate courses where students are often learning how to navigate university life and academic expectations.

This paper aims to gain insight into how personalised feedback emails, generated through learning analytics measurements based on students’ performance in assessment tasks, impact their academic performance and student success in two first-year undergraduate chemistry courses. Furthermore, this study aims to uncover aspects of students’ behaviour upon receiving feedback. The following research questions guided this study:

RQ1. What impact could be attributed to learning analytics-based personalised feedback on the academic performance of students enrolled in first-year undergraduate chemistry?

RQ2. What differences are observed in student success in first-year undergraduate chemistry comparing 2019–2021?

RQ3. What unsolicited individual student responses are evident upon receiving feedback emails at specific timepoints during the semester?

Methodology

Ethical statement

The research described in this study adhered to ethical standards and guidelines as the nature of the study demanded and was approved by the institutional ethics committee.

Institutional context

Data for this study were gathered from two first-year chemistry courses offered at a public research university based in Victoria, Australia. CHEM A is usually offered in the first semester (Semester 1) and CHEM B in the second semester (Semester 2); each semester runs for 12 weeks, followed by a one-week revision period and then three weeks of end-of-semester assessments. Course grades were determined through in-semester assessments (15%), a laboratory component (30%) and one final end-of-semester examination (55%). Further details of the modes of delivery and course pass requirements are provided in Appendix Table 7. The data for this research were taken from six semesters over three years (2019, 2020, and 2021). In 2019, the first-year chemistry courses were offered in blended mode. Due to the COVID-19 pandemic, in 2020 the two courses were offered fully online as the situation at that time required. In 2021, Semester 1 moved to a modified blended mode as governmental restrictions were eased, allowing restricted on-campus teaching and learning. However, in Semester 2, learning again moved fully online due to government-enforced lockdowns. Table 1 presents the different delivery modes for the two courses offered across the six semesters included in the study.
Table 1 Delivery modes for the two first-year chemistry courses offered across six semesters (2019–2021)
Semester and year | First-year chemistry course | Number of students (n) | Mode of instructional delivery
Semester 1 2019 | CHEM A | 1377 | Blended (online preparation materials, in-person workshops, tutorials and laboratories)
Semester 2 2019 | CHEM B | 1014 | Blended (online preparation materials, in-person workshops, tutorials and laboratories)
Semester 1 2020 | CHEM A | 1002 | Fully online (online preparation materials and lecture videos, online synchronous large-group tutorials and laboratories)
Semester 2 2020 | CHEM B | 866 | Fully online (online preparation materials and lecture videos, online synchronous large-group tutorials and laboratories)
Semester 1 2021 | CHEM A | 1205 | Blended (online preparation materials and lecture videos, online or in-person large-group tutorials and in-person laboratories)
Semester 2 2021 | CHEM B | 870 | Fully online (online preparation materials and lecture videos, online synchronous large-group tutorials and laboratories)


Prior to the pandemic in 2020, the courses followed a blended design, where students would complete their weekly preparation learning tasks including an assessed online quiz through the Moodle learning management system (LMS). Each week there were two one-hour workshops and a one-hour tutorial. A three-hour laboratory practical was scheduled during eight weeks of the semester. Students were required to complete a compulsory pre-laboratory quiz prior to their laboratory practical. Following the practical, students completed an online laboratory report.

Early in Semester 1 2020, CHEM A teaching delivery was switched to a fully online mode and the layout and delivery of classes were modified. The weekly preparation materials on the LMS remained in place. In addition, students were provided with pre-recorded online lecture videos and were expected to attend a synchronous one-hour large-group tutorial delivered via Zoom. Additionally, students attended six one-hour virtual laboratory sessions delivered via Zoom over the semester. Assessment of the laboratory component remained in the same format as in 2019; however, the laboratory reports were modified for the online format. Two in-semester assessments and a final end-of-semester examination were administered online. CHEM B in Semester 2 of 2020 and 2021 also followed this teaching approach. In Semester 1 2021, CHEM A students were allowed to return to campus, subject to strict social distancing laws and density restrictions. Weekly preparation materials and online lecture videos were again delivered via the LMS. Large-group tutorials were delivered both in person and online (students could choose their preferred mode of delivery). The only compulsory in-person class was a 2.5-hour laboratory practical (scheduled in six weeks of the semester).

Personalised feedback emails

No personalised feedback emails were sent to students in CHEM A and CHEM B offered in 2019. Beginning in Semester 1 2020, personalised feedback emails were introduced into the courses.

The personalised feedback emails were sent at three distinct timepoints during the semester (Weeks 5, 8 and 12). These timepoints were chosen based on the percentage of the in-semester assessment tasks that had been completed (Week 5: approximately 25%; Week 8: approximately 50%; Week 12: 100%). The messages were generated using the Mail Merge add-on for Mozilla Thunderbird. Mail Merge automatically generates and sends personalised emails based on pre-written email templates and a spreadsheet that contains specific details about the intended recipients and the category into which they were classified (for example, student name, email address and the specific feedback for the category). Before the messages were generated, the students' marks were collated and processed to determine their cumulative total marks at that timepoint in the semester. Students were then classified into three categories (Good, Midway, Poor). The ‘Good’ category corresponded to greater than or equal to 50% of the cumulative marks possible at the timepoint, ‘Midway’ to between 25% and 50%, and ‘Poor’ to less than 25%. Aiming to stimulate dialogue between the coordinator and student, or inner dialogue for the student, personalised feedback emails were generated with pre-written notes and advice (Table 2) using an email template (Fig. 1) based on the student categorisations, and were designed to be dialogic in their content (Espasa et al., 2022). Fig. 2 illustrates the flowchart for sending personalised emails during a semester.
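To make the categorisation and mail-merge step concrete, the sketch below shows, in Python, how a gradebook export might be turned into the spreadsheet that a mail-merge tool consumes. The thresholds follow the description above; the file layout, column names and example marks are illustrative assumptions only, not the authors' actual workflow (the study used the Mail Merge add-on with Mozilla Thunderbird rather than a script).

```python
import csv

def categorise(cumulative_mark, marks_possible):
    """Apply the thresholds described above: Good >= 50%, Midway 25-50%, Poor < 25%."""
    pct = 100 * cumulative_mark / marks_possible
    if pct >= 50:
        return "Good"
    elif pct >= 25:
        return "Midway"
    return "Poor"

# Pre-written advice keyed by category (abridged from Table 2).
ADVICE = {
    "Good": "Congratulations on your hard work so far...",
    "Midway": "There is still plenty of time to improve your results...",
    "Poor": "Please reach out to us in case you need further support...",
}

def build_mail_merge_sheet(gradebook_rows, marks_possible, outfile="week5_merge.csv"):
    """Write a spreadsheet that a mail-merge tool could consume.

    `gradebook_rows` is assumed to be an iterable of (name, email, cumulative_mark)
    tuples exported from the gradebook; the column names are illustrative only.
    """
    with open(outfile, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["name", "email", "category", "feedback"])
        for name, email, mark in gradebook_rows:
            cat = categorise(mark, marks_possible)
            writer.writerow([name, email, cat, ADVICE[cat]])

# Example: three (hypothetical) students at Week 5, when roughly 25% of the
# in-semester marks were available (5.0 marks possible in this toy example).
build_mail_merge_sheet(
    [("A. Student", "a.student@example.edu", 4.0),
     ("B. Student", "b.student@example.edu", 1.5),
     ("C. Student", "c.student@example.edu", 0.5)],
    marks_possible=5.0,
)
```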

Table 2 Coded notes and advice provided to students for each category (Good, Midway, Poor)
Category | Notes | Advice
Good | We are pleased to know that you are doing very well based on your marks in the laboratory assessments and preparation quizzes that you recently completed. | Congratulations on your hard work so far. We want to encourage you to do even better, and to reach out to us in case you need support.
Midway | We noticed that you may be struggling in this [course] based on your marks in the laboratory assessments and preparation quizzes that you recently completed. | However, don’t be discouraged, as there is still plenty of time to improve your results. To help you do better in the upcoming assessments, we want to encourage you to reach out to us in case you need further support.
Poor | We noticed that you may be struggling in this [course] as you have not completed all of the laboratory assessments and/or preparation quizzes that have been due so far this semester. | To help you do better in the upcoming assessments, we want to encourage you to reach out to us in case you need further support.



Fig. 1 A sample (a) template and (b) email sent to students in the Good category in Week 8 (Semester 2 2021).

Fig. 2 Three timepoints during the semester for sending personalised emails to students.

Collection and processing of data and statistical analyses

To determine the impact of feedback emails on academic performance and student success, data from the Moodle gradebook were employed. In particular, academic performance was measured by students' marks on assessments at each timepoint, and student success was determined through their successful completion of the course. RStudio (v4.0.3) was used to process and visualise the data. Statistical analyses (Mann–Whitney U test, Pearson's chi-squared test of association, and Kruskal–Wallis H test with post hoc analysis using Dwass–Steel–Critchlow–Fligner (DSCF) pairwise comparisons) were performed using jamovi (v2.3).

Although qualitative data were not explicitly collected as part of this study, the course coordinator did receive unsolicited emails explicitly mentioning the personalised feedback emails. In addition, open-text responses to the end-of-semester student evaluations of each course that referred to the personalised feedback emails were collected. Thematic analysis of these sources of qualitative data was carried out using inductive coding (Braun and Clarke, 2006). NVivo (R1.6.1, QSR International, MA) was used for computer-assisted coding. Good interrater reliability between two raters was indicated by a Krippendorff's alpha (α) of 0.877, calculated using IBM SPSS Statistics 27 (IBM Corporation, NY).
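As a point of reference, interrater agreement of the kind reported above (Krippendorff's α) can also be computed outside SPSS. The sketch below is a minimal Python example assuming the third-party `krippendorff` package is installed; the coded excerpts are placeholder values, not the study's data.

```python
import numpy as np
import krippendorff  # third-party package, assumed installed: pip install krippendorff

# Each row is one rater; each column is one coded excerpt. The integer codes are
# placeholders standing in for the inductively derived themes; np.nan marks an
# excerpt a rater did not code.
reliability_data = np.array([
    [1, 1, 2, 3, 2, 1, np.nan, 3],   # rater 1
    [1, 1, 2, 3, 1, 1, 2,      3],   # rater 2
], dtype=float)

alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha = {alpha:.3f}")  # the paper reports 0.877 for its own data
```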

Results and discussion

This study explores the impact of learning analytics-based personalised feedback emails on students' academic performance and student success. Furthermore, students' responses to the feedback emails were examined to determine whether they encouraged positive behaviours as students undertook first-year chemistry courses. Note that there were differences in the teaching and learning conditions during the implementation period that were caused by the COVID-19 pandemic and thus beyond our control (discussed in the limitations section).

RQ1. What impact could be attributed to learning analytics-based personalised feedback on the academic performance of students enrolled in first-year undergraduate chemistry?

Narad and Abdullah (2016) defined academic performance as “the knowledge attained and designated by marks”, a measure of the extent to which learning goals are achieved by a student over a certain period. Academic performance is typically measured either by examinations or by other assessment tasks that determine the achievement of learning goals. Richardson et al. (2012) identify the use of grade point average as a measure of academic performance with good internal reliability, as it reflects students' overall degree grades and semester, course, or test marks. Previous studies have employed final grades to measure academic performance (Pekrun et al., 2017; Tatar and Düştegör, 2020).

In our study, academic performance was determined through students’ final course grades (Richardson et al., 2012; Narad and Abdullah, 2016) at the end of the semester. The mean final course grade of students who did not receive feedback emails in 2019 Semesters 1 and 2 (CHEM A and CHEM B, two semesters) were compared against the mean final course grades of students who received feedback emails in Semesters 1 and 2 of 2020 and 2021 (CHEM A and CHEM B, four semesters).

A Mann–Whitney U test shows that the mean final course grade of students who received personalised feedback emails (mean = 63.5%, sd = 20.0%, n = 3943) was significantly higher than that of those who did not (mean = 59.2%, sd = 27.5%, n = 2391), with a medium effect size of 0.187 (Snyder and Lawson, 1993; Field, 2013; Tomczak and Tomczak, 2014). Furthermore, confidence interval plots shown in Appendix Fig. 5 indicate that, at the 95% confidence level, the mean grade of students who received personalised feedback emails lies within a narrower range (±0.6%) than that of those who did not (±1.1%). This finding suggests that when students received feedback emails, the cohort's academic performance was more uniform. The positive impact of receiving feedback emails suggested by these findings is consistent with results reported by Pardo et al. (2019), indicating that the provision of feedback emails in the weeks prior to a major assessment had “a significant and positive impact on the learning experience of the students.”
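A minimal sketch of this comparison is shown below using SciPy and synthetic placeholder grades (the real gradebook data are not reproduced here). The rank-biserial correlation is included as one common effect-size measure for the Mann–Whitney U test; the paper does not state which measure underlies the reported 0.187, so that choice is an assumption.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Placeholder final-course-grade vectors; the study used n = 3943 (feedback)
# and n = 2391 (no feedback) grades from the Moodle gradebook.
feedback    = rng.normal(63.5, 20.0, 3943).clip(0, 100)
no_feedback = rng.normal(59.2, 27.5, 2391).clip(0, 100)

u_stat, p_value = mannwhitneyu(feedback, no_feedback, alternative="two-sided")

# Rank-biserial correlation, one common effect-size measure for Mann-Whitney U:
# r = 1 - 2U / (n1 * n2), where U is the statistic for the first sample.
n1, n2 = len(feedback), len(no_feedback)
rank_biserial = 1 - 2 * u_stat / (n1 * n2)

print(f"U = {u_stat:.0f}, p = {p_value:.2g}, rank-biserial r = {rank_biserial:.3f}")
```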

Table 3 shows that a greater proportion of students passed their course when personalised feedback emails were sent, compared to students who did not receive feedback emails. Pearson's chi-squared test suggests that there is a significant association between sending personalised feedback emails and passing the course (χ²(1) = 12.5, p ≤ 0.001), although the association is weak (Cramér's V = 0.0445). These results suggest that students receiving personalised emails during the semester were more likely to pass their first-year chemistry course, which is similar to the results obtained by Lim et al. (2021a), where the final grades of first-year biological sciences students who received feedback were significantly higher than those of students who did not receive feedback.

Table 3 Percentage of students who passed/failed a course who received or did not receive personalised feedback emails (n = 6334)
Percentage for each course outcome, % (n) | Pass | Fail
No feedback emails received (n = 2391) | 76.1 (1820) | 23.9 (571)
With feedback emails received (n = 3943) | 79.9 (3150) | 20.1 (793)
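The association test above can be reproduced directly from the counts in Table 3. The sketch below assumes Pearson's chi-squared without the Yates continuity correction (the exact jamovi settings are not stated, so this is an assumption); it returns values close to the reported χ²(1) = 12.5 and Cramér's V = 0.0445.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Pass/fail counts from Table 3 (rows: no feedback, feedback; columns: pass, fail).
observed = np.array([[1820, 571],
                     [3150, 793]])

# correction=False gives the uncorrected Pearson statistic; whether a continuity
# correction was applied in the original analysis is not stated in the paper.
chi2, p, dof, expected = chi2_contingency(observed, correction=False)

n = observed.sum()
k = min(observed.shape)                    # smaller table dimension (2 for a 2x2 table)
cramers_v = np.sqrt(chi2 / (n * (k - 1)))  # Cramer's V

print(f"chi2({dof}) = {chi2:.1f}, p = {p:.2g}, Cramer's V = {cramers_v:.4f}")
```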


RQ2. What differences are observed in student success in first-year undergraduate chemistry comparing 2019–2021?

Student success can be measured not only by “conventional terminal completions” (Mullin, 2012), such as having successfully passed a course or not (Gardner and Brooks, 2018), but also by intermediate outcomes measured at different timepoints throughout the course. These intermediate outcomes may include marks from mid-semester exams, assignments and the completion of other assessment tasks (Gardner and Brooks, 2018). In our study, student success was determined by quantifying progress measures (Mullin, 2012) at specific, regular timepoints (Weeks 5, 8, and 12) during the semester (Fig. 2).

In addition, students' successful completion of the course at the end of the semester was measured by the number of students who passed the course.

Fig. 3 illustrates the progress of students within each cohort over the semester through Sankey network diagrams. Students received a personalised feedback email based on the category into which they were classified (Good, Midway, Poor). The highlighted links (in yellow) indicate improvement in performance from Week 5 to 8 and further to 12, whilst blue indicates a decrease in performance. Students who remained in the same category are shown in grey.


Fig. 3 Sankey network diagrams showing students who improved from the Poor category in Week 5 to the Midway/Good categories (in latter weeks) for all six semesters (Semesters 1 and 2 2019–2021).
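For readers wishing to reproduce this style of visualisation, the sketch below shows one way to draw a category-flow Sankey diagram in Python with plotly. The study's diagrams were produced in RStudio, and the link counts used here are illustrative placeholders rather than the study data.

```python
import plotly.graph_objects as go

# Node labels: categories at Week 5, Week 8, Week 12 and the final grade.
labels = ["Good W5", "Midway W5", "Poor W5",
          "Good W8", "Midway W8", "Poor W8",
          "Good W12", "Midway W12", "Poor W12",
          "Good Final", "Midway Final", "Poor Final"]

# Illustrative link counts only (student numbers are placeholders, not the study data).
source = [0, 0, 1, 1, 2, 2, 3, 4, 5, 6, 7, 8]
target = [3, 4, 3, 5, 4, 5, 6, 7, 8, 9, 10, 11]
value  = [800, 40, 30, 60, 10, 45, 830, 50, 105, 830, 50, 105]

fig = go.Figure(go.Sankey(
    node=dict(label=labels, pad=15, thickness=15),
    link=dict(source=source, target=target, value=value),
))
fig.update_layout(title_text="Category flow across the semester (illustrative data)")
fig.show()
```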

No feedback emails: semesters 1 and 2 2019 (two semesters)

In Semesters 1 and 2 2019, the first-year chemistry courses were delivered without any systematic feedback mechanism; hence this period was considered the control in examining the impact of receiving personalised feedback emails on student success.

Fig. 3(A) and (B) illustrate the graphs for the cohorts of students in 2019, when no feedback emails were sent. During these semesters, a large proportion of students were categorised as Good throughout the semester (Semester 1 2019 = 77.1%; Semester 2 2019 = 90.8%). However, both graphs also display a significant proportion of students who were categorised as Poor at Week 5 and continued with little to no improvement for the rest of the semester. In Semester 1 2019, 15.9% (n = 219) of the students were categorised as Poor in all weeks and failed the course at the end of the semester. As shown in Table 4 and Fig. 3(A), (B), a very small percentage of students improved from the Poor category in Week 5 to the Midway or Good category by the end of the semester in Semester 1 or Semester 2 2019 (0.2% and 0.3%, respectively). Note that the number of students who finish the semester in the Midway category at the final grade is an artefact of students failing one or more course hurdles, which resulted in those students receiving a final grade of 45.

Table 4 Percentage of students who improved from the Poor category in Week 5 to the Midway/Good categories (in latter weeks) for all six semesters (Semesters 1 and 2 2019–2021)
Semester and year | To Week 8, % (n) | To Week 12, % (n) | To final grade, % (n)
Semester 1 2019 (n = 1377) | 0.3 (4) | 0.3 (4) | 0.2 (3)
Semester 2 2019 (n = 1014) | 0.1 (1) | 0.1 (1) | 0.3 (3)
Semester 1 2020 (n = 1002) | 1.3 (13) | 1.2 (12) | 1.6 (16)
Semester 2 2020 (n = 866) | 0.2 (2) | 0.7 (6) | 0.5 (4)
Semester 1 2021 (n = 1205) | 1.3 (16) | 1.4 (17) | 1.7 (20)
Semester 2 2021 (n = 870) | 2.5 (22) | 1.8 (16) | 2.3 (20)


Sent feedback emails: semesters 1 and 2 2020 and 2021 (four semesters)

Consistent with Table 3, Fig. 3(C)–(F) show that fewer students finished the semester with a failing final course grade in 2020 and 2021 compared with 2019. Moreover, these graphs provide perspective on the students who improved from the Poor to the Midway or Good categories by the end of the semester, as shown by the thicker yellow links. In Semester 1 2020, 16 out of 54 students who were categorised as Poor at Week 5 showed improvement in the latter weeks and in the final course grades (1.6%) [Table 4 and Fig. 3(C)]. In Semester 1 and Semester 2 2021, the percentages of students in the Poor category at Week 5 who improved by the end of the semester to the Midway or Good categories were 1.7% (n = 20) and 2.3% (n = 20), respectively [Table 4 and Fig. 3(E), (F)].

Comparing the semesters with and without personalised feedback emails

Fig. 4 presents the percentage of students in the Poor, Midway, and Good categories at each timepoint for all six semesters included in this study. Data presented in this figure reveal that Semester 1 2019 had the lowest percentage of students in the Good category at each timepoint (average 78.5% ± 3.0% across the semester), as well as the highest percentage of students in the Poor category at each timepoint (average 17.9% ± 1.9% across the semester).
Fig. 4 Comparing the percentage of students in each category (Good, Midway, Poor) for all six semesters (Semesters 1 and 2 2019–2021).

In contrast to Semester 1 2019, a higher percentage of students in Semester 1 2020 were in the Good category at each timepoint (average 85.3 ± 3.4% across the semester) and a lower percentage of students were in the Poor category at each timepoint (average 6.8 ± 1.5% across the semester). Furthermore, Semester 1 2021 had an average of 84.5% (±5.3%) of students in the Good category at each timepoint, with an average of only 6.1% (±0.8%) of students in the Poor category. Whilst there were significant differences in the learning conditions between 2019 and 2021, comparing findings from Semester 1 2019 and Semester 1 2021, where the instructional delivery modes were relatively similar (Table 1), reveals significantly positive impacts of personalised feedback on students' performance.

In Semester 2 2020 and 2021, a similarly high percentage of students were observed in the Good category (average 89.0% ± 5.3% across Semester 2 2020, and average 89.3% ± 2.0% across Semester 2 2021) and a low percentage of students in the Poor category (average 4.6% ± 1.5% across Semester 2 2020, and average 5.3% ± 0.8% across Semester 2 2021) when feedback was given (Fig. 4). A similar pattern was observed in Semester 2 2019, with an average of 90.1% (±5.9%) of students in the Good category at each timepoint and an average of 4.3% (±1.0%) of students in the Poor category. Note that in Semester 2 2019, one email was sent midway through the semester by the course coordinator to students who were not going to pass the laboratory hurdle. Failing the laboratory hurdle would result in these students failing the course overall. This may have prompted students to withdraw from the course at that time, which means these students would not appear in the Poor category at the end of the semester, as they do for Semester 1 2019.

Fig. 4 further illustrates the increase in the percentage of students in the Poor category during Semester 1 2019, rising from Week 5 (16.1%) to the final course grade (20.9%). In contrast, this increase was not observed in Semester 1 2020 or 2021. At the final course grade, the percentages of students in the Poor category for Semester 1 2020 (6.8%) and Semester 1 2021 (6.8%) are significantly lower than for Semester 1 2019 (20.9%). Similar patterns were observed in Semester 2 2020 and 2021, where the percentages of students in the Poor category at the final course grade were 4.3% and 5.4%, respectively. These results corroborate the observation from Fig. 3 wherein lower proportions of students in both Semesters 1 and 2 in 2020 and 2021 remained in the Poor category from Week 5 to the final course grade, unlike in Semester 1 2019.

Taken together, these results suggest that personalised feedback emails fostered student success among a greater number of students within the cohort compared to when students did not receive feedback emails. This apparent positive impact on students’ performance upon receipt of personalised feedback emails is consistent with the results reported by Stuart (2004) who found that accounting students who received feedback outperformed those who did not.

A Kruskal–Wallis H test further suggests that the mean student marks at Weeks 5, 8, and 12, and the final course grade, in Semester 1 2019 were significantly lower than in all other semesters (Table 5). Post hoc analyses by DSCF pairwise comparisons further suggest that the mean student marks from Semester 1 2019 at Weeks 5, 8, and 12 were significantly lower upon individual comparison with other semesters. Moreover, personalised feedback emails sent in Week 5 may have contributed to a medium increase in students' performance in Weeks 8 and 12, and a small increase in students' final grade, according to the effect size estimated by ε² (Table 5) (Snyder and Lawson, 1993; Field, 2013; Tomczak and Tomczak, 2014). This finding is supported by Tinto (2012), who stated that, particularly for first-year students who are “trying to adjust their behaviours to the new academic and social demands of college or university life”, those who receive frequent feedback about their performance are more likely to successfully complete their first-year courses.

Table 5 Results of Kruskal–Wallis H test on mean marks in Weeks 5, 8 and 12 and final course grades for all six semesters (Semesters 1 and 2 2019–2021)
Semester and year | Mean mark Week 5 | Mean mark Week 8 | Mean mark Week 12 | Mean final grade
Semester 1 2019 (n = 1377) | 68.2 | 67.6 | 66.1 | 57.4
Semester 2 2019 (n = 1014) | 82.6 | 81.0 | 81.4 | 68.6
Semester 1 2020 (n = 1002) | 76.5 | 69.7 | 73.0 | 63.3
Semester 2 2020 (n = 866) | 87.6 | 81.4 | 77.2 | 64.9
Semester 1 2021 (n = 1205) | 78.1 | 76.3 | 66.2 | 63.4
Semester 2 2021 (n = 870) | 78.3 | 78.9 | 76.6 | 63.1
Kruskal–Wallis H test
χ² | 727.4 | 545.0 | 721.3 | 21.6
p | <0.001 | <0.001 | <0.001 | <0.001
ε² | 0.119 | 0.089 | 0.118 | 0.031
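A sketch of the omnibus test and ε² effect size reported in Table 5 is given below, using SciPy and placeholder mark vectors (the study used the real Moodle gradebook marks). The DSCF post hoc comparisons reported in the paper were run in jamovi and are not reproduced here.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(1)

# Placeholder Week 5 mark vectors for the six semesters; group sizes follow Table 1
# and the group means follow Table 5, but the spreads are invented for illustration.
semesters = {
    "S1 2019": rng.normal(68.2, 15, 1377),
    "S2 2019": rng.normal(82.6, 15, 1014),
    "S1 2020": rng.normal(76.5, 15, 1002),
    "S2 2020": rng.normal(87.6, 15, 866),
    "S1 2021": rng.normal(78.1, 15, 1205),
    "S2 2021": rng.normal(78.3, 15, 870),
}

h_stat, p_value = kruskal(*semesters.values())

# Epsilon-squared effect size for Kruskal-Wallis: eps^2 = H / (n - 1), the measure
# recommended by Tomczak and Tomczak (2014) and reported in Table 5.
n_total = sum(len(v) for v in semesters.values())
eps_squared = h_stat / (n_total - 1)

print(f"H = {h_stat:.1f}, p = {p_value:.2g}, epsilon^2 = {eps_squared:.3f}")
```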


RQ3. What unsolicited individual student responses are evident upon receiving feedback emails at specific timepoints during the semester?

Table 6 shows the results of the thematic analysis of unsolicited emails and student evaluation responses, revealing that the feedback emails positively influenced student learning. Affirmation from the feedback emails strengthened students' belief in their own capabilities. For example, an email from a student in the Good category (across each timepoint during the semester) expressed their appreciation “to see that [their] efforts have paid off”. For this student, feedback appeared to provide support that allowed them to acknowledge their ability and appreciate their own work (Boud, 2015). Evidence of increased motivation to pursue chemistry as a major and to further improve performance in the course was likewise present (Lim et al., 2021b). For example, a student in the Good category (across each timepoint) commented that they were motivated to push themselves to improve their outcomes after receiving feedback emails. Furthermore, students expressed their appreciation for the support they received from the course (coordinator and teaching team), especially during the difficult pandemic times. Positive appraisal of the course delivery indicated student attention and satisfaction with the course, similar to the findings reported by Pérez-Segura et al. (2022). A student described how they enjoyed studying and maintained their interest in chemistry “despite the current predicament of the pandemic and the shift of the semester online”. Reflections about the perceived quality of teaching and learning were likewise stimulated among students, as evidenced by their remarks about how they “really appreciate the hard work [the coordinator] and the teaching team put into making this online semester easy to follow for [them]”.
Table 6 Perceived student benefits from receiving feedback emails based on students’ unsolicited emails and student evaluation responses (Semester 1 and Semester 2 2020–2021)
Benefits of receiving feedback emails | Sample quotes from unsolicited emails | Sample quotes from student evaluation responses
Increased motivation towards further self-improvement | I’m definitely keen to push myself to improve my outcomes (Semester 1 2020) | The three personalised messages throughout the semester from the [course] Coordinator addressing feedback on students' performance motivated me to continue to do well (Semester 1 2020)
Increased motivation to pursue a chemistry major | Chemistry has always been my passion, and as such, the [course] I’ve done this year have really helped me further my development and understanding of chemistry. This year has confirmed that I want to continue into an extended major in chemistry. (Semester 2 2020) | None
Increased awareness of available support | It is relieving to hear that there is a great amount of support offered to myself and everyone else doing [the course]. (Semester 1 2020) | [T]he [course] has general feedback every 4 weeks to reflect on how I am going and what should I do to keep updated with the work. (Semester 2 2020)
Enjoyable learning experience | I love chemistry and, despite the current predicament of the pandemic and the shift of the semester online, am enjoying studying [the course]. (Semester 1 2020) | The positive feedback emails were a lovely surprise that none of my other [course] have done before. Highly motivating! (Semester 1 2021)
Reflection on the quality of teaching of the course | I'd also like to thank you and the rest of the [teaching] team for managing to provide a very high level of teaching despite the conditions of this semester. This is actually my third year of [university]…, and I don't think I have seen a more passionate group of teachers. (Semester 1 2020) | [Course coordinator]'s monthly performance feedback emails felt like a good amount of communication and helped make the first-year chemistry team more approachable if I needed help. (Semester 1 2021)


Limitations of the study

The findings of this study are limited by the differences in the teaching and learning conditions during the implementation period, particularly in 2020 and 2021 when the education sector was impacted by the COVID-19 pandemic. With the onset of the pandemic, teaching and learning were urgently shifted to a fully online learning environment. Quality appraisal of students' performance and the provision of useful, individualised feedback at scale, particularly for large cohorts of first-year students, were among the major concerns that became more pronounced amidst the challenge of online learning when face-to-face interactions were restricted. Consequently, the results of this study suggest that positive outcomes arose from employing a learning analytics-based feedback system that enabled the provision of feedback in large-cohort first-year chemistry courses. However, it is acknowledged that a more comprehensive investigation into other measures of students' performance and success in the course, such as students' engagement with the feedback and students' subsequent actions in the LMS after receiving the feedback, could have provided insights into other potential factors that may have contributed to the positive student outcomes observed.

Furthermore, the results of this study may likewise be limited by the scope of the data collection methods in terms of the lack of student perceptions. Whilst unsolicited emails and student evaluations provided a glimpse of the positive impacts of receiving personalised feedback emails, this study was unable to explore in more depth students' perceptions of where further improvements may be desirable.

Another limitation is that the impact of the timing and frequency of sending personalised feedback emails was not explored in this study. Exploring these factors may have provided enhanced insight into the usefulness of feedback emails at different timepoints throughout the course, which could help optimise the benefits for students in improving their learning experiences.

Implications for practice

Findings from this study suggest that there is potential in our design of a robust and systematic feedback mechanism, which is likely to be particularly effective for courses with large enrolments. Our findings align with recommendations from previous studies that it is possible to maximise student learning by providing timely and actionable feedback about students' success as they undertake a course. In this regard, we strongly encourage teachers and institutions to invest in the development and implementation of personalised feedback systems that are likely to promote students' self-reflection and influence their self-regulated learning.

Our future studies will employ survey instruments and focus group discussions and interviews with students to gather a depth of understanding of the impact of the feedback emails. This will enable teachers and instructional designers to better understand how the quantity and timing of feedback emails contribute to academic performance and student success. In addition, this will allow for students’ attitudes and behaviours upon receiving personalised emails to be explored.

The differences in academic performance and student success between students who received and did not receive feedback were found to be small but significant (as shown in Table 3). As Evans (2013) argued, giving feedback alone is not sufficient to improve students' attainment of their learning goals, and improving the quality of the feedback merits greater consideration given the growing number and diversity of students in higher education. Hence, further studies will be undertaken to identify parameters for providing well-designed, actionable, quality feedback in first-year chemistry, as perceived by both students and teachers.

In addition, further studies may explore students' activities inside the LMS after receiving feedback to identify students' reactions to the evaluation of their past performance, for example by investigating what resources students access before and after they receive feedback emails. Students' activities in the LMS upon receipt of the emails may reflect their response to the actionable items included in the feedback. These data will provide an important perspective for teachers and instructional designers in terms of the effect of the feedback and, importantly, the learning design of the course in the LMS.

Conclusions

One of the important aspects of students' learning experience is the opportunity to reflect and act on their performance whilst they are undertaking a course. Whilst students' numerical marks from assessment tasks and final course grades serve as metrics of their academic performance and student success, a lack of actionable feedback accompanying these metrics is not uncommon. This poses challenges, particularly for first-year students, many of whom are transitioning into the university academic environment and the new expectations that it entails. Findings from our study suggest that providing feedback at regular timepoints throughout the semester promotes improved academic performance and higher student success. In this study, personalised feedback emails sent to students included not only an evaluation of their performance in assessment tasks, but also advice on available support systems. Our findings further underscore the importance of an explicit instructional system that affords regular feedback to promote students' awareness of how they are progressing during a course, and of the real-time support available to help improve their progress. A regular and consistent feedback loop further allows students to develop self-reflection and, consequently, self-monitoring to assist them to better evaluate their own progress towards their learning goals.

Author contributions

S. H. K. conceptualised, designed the methodology, supervised, and administered the project. S. H. K. and C. T. R. developed the methodology and carried out the investigation. All authors formally analysed, validated, and visualised the data. All the authors contributed to the original draft preparation and reviewing and editing the manuscript.

Conflicts of interest

There are no conflicts to declare.

Appendix


Fig. 5 Comparing the mean final course grades of students who received and did not receive personalised feedback emails (n = 6334).
Table 7 Assessment details for the two first-year chemistry courses offered across six semesters (2019–2021)
Semester and year | Course pass grade (%) | Assessment component | Weighting (%) | Hurdle^a (%) | Mode of delivery
Semester 1 2019 | 50 | In-semester | 15 | | Online (10 × weekly quizzes); in-class tests (3) (paper-based, timed, invigilated)
 | | Laboratory | 30 | 45 | In-person attendance, online report
 | | End-of-semester examination | 55 | 30 | In-person (paper-based, timed, invigilated)
Semester 2 2019 | 50 | In-semester | 15 | | Online (10 × weekly quizzes); online tests (2) (timed, not invigilated)
 | | Laboratory | 30 | 45 | In-person attendance, online report
 | | End-of-semester examination | 55 | 30 | In-person (paper-based, timed, invigilated)
Semester 1 2020 | 50 | In-semester | 15 | | Online (10 × weekly quizzes); online tests (2) (timed, not invigilated)
 | | Laboratory | 30 | 40 | Online attendance, online report
 | | End-of-semester examination | 55 | 40 | Online (timed, not invigilated)
Semester 2 2020 | 50 | In-semester | 15 | | Online (10 × weekly quizzes); online tests (2) (timed, not invigilated)
 | | Laboratory | 30 | 40 | Online attendance, online report
 | | End-of-semester examination | 55 | 40 | Online (timed, not invigilated)
Semester 1 2021 | 50 | In-semester | 15 | | Online (10 × weekly quizzes); online tests (2) (timed, not invigilated)
 | | Laboratory | 30 | 45 | In-person attendance, online report
 | | End-of-semester examination | 55 | 45 | Online (timed, not invigilated)
Semester 2 2021 | 50 | In-semester | 15 | | Online (10 × weekly quizzes); online tests (2) (timed, not invigilated)
 | | Laboratory | 30 | 40 | Online attendance, online report
 | | End-of-semester examination | 55 | | Online (timed, not invigilated)
^a Hurdle: if students did not meet this mark threshold, they failed the course regardless of their raw grade.


Acknowledgements

This work was supported by an Undergraduate Vacation Research Scholarship (Monash University, Australia). We thank Professor Christopher Thompson (Monash University, Australia) for helpful discussions in the writing of this manuscript.

Notes and references

  1. Bailey R. and Garner M., (2010), Is the feedback in higher education assessment worth the paper it is written on? Teachers’ reflections on their practices, Teach. High. Educ., 15(2), 187–198 DOI:10.1080/13562511003620019.
  2. Boud D., (2015), Feedback: ensuring that it leads to enhanced learning, Clin. Teach., 12(1), 3–7 DOI:10.1111/tct.12345.
  3. Braun V. and Clarke V., (2006), Using thematic analysis in psychology, Qual. Res. Psychol., 3(2), 77–101 DOI:10.1191/1478088706qp063oa.
  4. Carless D., (2015), Promoting student engagement with feedback, Excellence in University Assessment, Routledge.
  5. Dart S. and Spratt B., (2020), Personalised emails in first-year mathematics: Exploring a scalable strategy for improving student experiences and outcomes, Stud. Success, 11(2), 1–12 DOI:10.5204/ssj.1543.
  6. Espasa A., Mayordomo R. M., Guasch T., and Martinez-Melo M., (2022), Does the type of feedback channel used in online learning environments matter? Students’ perceptions and impact on learning, Act. Learn. High. Educ., 23(1), 49–63 DOI:10.1177/1469787419891307.
  7. Evans C., (2013), Making sense of assessment feedback in higher education, Rev. Educ. Res., 83(1), 70–120 DOI:10.3102/0034654312474350.
  8. Ferguson P., (2011), Student perceptions of quality feedback in teacher education, Assess. Eval. High. Educ., 36(1), 51–62 DOI:10.1080/02602930903197883.
  9. Field A., (2013), Discovering statistics using IBM SPSS Statistics.
  10. Gardner J. and Brooks C., (2018), Student success prediction in MOOCs, User Model. User - Adapt. Interact., 28(2), 127–203 DOI:10.1007/s11257-018-9203-z.
  11. Gould J. and Day P., (2013), Hearing you loud and clear: student perspectives of audio feedback in higher education, Assess. Eval. High. Educ., 38(5), 554–566 DOI:10.1080/02602938.2012.660131.
  12. Hattie J., (2011), The flow of the lesson: The place of feedback, Visible learning for teachers: Maximizing impact on learning, Taylor & Francis Group, pp. 115–137.
  13. Hattie J. and Timperley H., (2007), The power of feedback, Rev. Educ. Res., 77(1), 81–112 DOI:10.3102/003465430298487.
  14. Henderson M., Ryan T., and Phillips M., (2019), The challenges of feedback in higher education, Assess. Eval. High. Educ., 44(8), 1237–1252 DOI:10.1080/02602938.2019.1599815.
  15. Hepburn L.-A., Borthwick M., Kerr J., and Vasnev A., (2022), A strategic framework for delivering ongoing feedback at scale, Assess. Eval. High. Educ., 47(5), 742–754 DOI:10.1080/02602938.2021.1959517.
  16. Hounsell D., McCune V., Hounsell J., and Litjens J., (2008), The quality of guidance and feedback to students, High. Educ. Res. Dev., 27(1), 55–67 DOI:10.1080/07294360701658765.
  17. Iraj H., Fudge A., Faulkner M., Pardo A., and Kovanović V., (2020), Understanding students’ engagement with personalised feedback messages, Proc. Tenth Int. Conf. Learn. Anal. Amp Knowl., pp. 438–447 DOI:10.1145/3375462.3375527.
  18. Jivet I., Scheffel M., Drachsler H., and Specht M., (2017), Awareness is not enough. Pitfalls of learning analytics dashboards in the educational practice, Data Driven Approaches in Digital Education, Springer, pp. 82–96 DOI:10.1007/978-3-319-66610-5.
  19. Karaoğlan Yılmaz F. G. and Yilmaz R., (2020), Student opinions about personalized recommendation and feedback based on learning analytics, Technol. Knowl. Learn., 25(4), 753–768 DOI:10.1007/s10758-020-09460-8.
  20. Karaoğlan Yılmaz F. G., Olpak Y. Z., and Yılmaz R., (2018), The Effect of the Metacognitive Support via Pedagogical Agent on Self-Regulation Skills, J. Educ. Comput. Res., 56(2), 159–180 DOI:10.1177/0735633117707696.
  21. Koenka A. C. and Anderman E. M., (2019), Personalized feedback as a strategy for improving motivation and performance among middle school students, Middle Sch. J., 50(5), 15–22 DOI:10.1080/00940771.2019.1674768.
  22. Lewis S., Heath G., Lim L., and Roberts R., (2021), “I’m not a number, I’m someone to them”: Supporting commencing university students’ through technology-mediated personalised communication, Stud. Success, 12(1), 24–34 DOI:10.5204/ssj.1623.
  23. Lim L.-A., Gentili S., Pardo A., Kovanović V., Whitelock-Wainwright A., Gašević D., and Dawson S., (2021a), What changes, and for whom? A study of the impact of learning analytics-based process feedback in a large course, Learn. Instr., 72, 1–11 DOI:10.1016/j.learninstruc.2019.04.003.
  24. Lim L.-A., Dawson S., Gašević D., Joksimović S., Pardo A., Fudge A., and Gentili S., (2021b), Students’ perceptions of, and emotional responses to, personalised learning analytics-based feedback: an exploratory study of four courses, Assess. Eval. High. Educ., 46(3), 339–359 DOI:10.1080/02602938.2020.1782831.
  25. Mullin C. M., (2012), Student success: Institutional and individual perspectives, Community Coll. Rev., 40(2), 126–144 DOI:10.1177/0091552112441501.
  26. Narad A. and Abdullah B., (2016), Academic performance of senior secondary school students: Influence of parental encouragement and school environment, Rupkatha J. Interdiscip. Stud. Humanit., 8(2), 12–19 DOI:10.21659/rupkatha.v8n2.02.
  27. Nicol D., (2010), From monologue to dialogue: Improving written feedback processes in mass higher education, Assess. Eval. High. Educ., 35(5), 501–517 DOI:10.1080/02602931003786559.
  28. Nicol D., Thomson A., and Breslin C., (2014), Rethinking feedback practices in higher education: a peer review perspective, Assess. Eval. High. Educ., 39(1), 102–122 DOI:10.1080/02602938.2013.795518.
  29. Pardo A., Jovanovic J., Dawson S., Gašević D., and Mirriahi N., (2019), Using learning analytics to scale the provision of personalised feedback, Br. J. Educ. Technol., 50(1), 128–138 DOI:10.1111/bjet.12592.
  30. Pekrun R., Lichtenfeld S., Marsh H. W., Murayama K., and Goetz T., (2017), Achievement emotions and academic performance: longitudinal models of reciprocal effects, Child Dev., 88(5), 1653–1670 DOI:10.1111/cdev.12704.
  31. Pérez-Segura J. J., Sánchez Ruiz R., González-Calero J. A., and Cózar-Gutiérrez R., (2022), The effect of personalized feedback on listening and reading skills in the learning of EFL. Comput. Assist. Lang. Learn., 35(3), 469–491 DOI:10.1080/09588221.2019.1705354.
  32. Poulos A. and Mahony M. J., (2008), Effectiveness of feedback: The students’ perspective, Assess. Eval. High. Educ., 33(2), 143–154 DOI:10.1080/02602930601127869.
  33. Richardson M., Abraham C., and Bond R., (2012), Psychological correlates of university students’ academic performance: A systematic review and meta-analysis, Psychol. Bull., 138(2), 353–387 DOI:10.1037/a0026838.
  34. Sadler D. R., (1989), Formative assessment and the design of instructional systems, Instr. Sci., 18(2), 119–144 DOI:10.1007/BF00117714.
  35. Siemens G. and Gašević D., (2012), Learning analytics special issue, J. Educ. Technol. Soc., 15(3), 1–2.
  36. Snyder P. and Lawson S., (1993), Evaluating results using corrected and uncorrected effect size estimates, J. Exp. Educ., 61(4), 334–349 DOI:10.1080/00220973.1993.10806594.
  37. Stuart I., (2004), The impact of immediate feedback on student performance: An exploratory study in Singapore, Glob. Perspect. Account. Educ., 1, 1–15.
  38. Tatar A. E. and Düştegör D., (2020), Prediction of academic performance at undergraduate graduation: Course grades or grade point average? Appl. Sci. Switz., 10(14), 1–15 DOI:10.3390/app10144967.
  39. Tomczak M. and Tomczak E., (2014), The need to report effect size estimates revisited. An overview of some recommended measures of effect size, Trends Sport Sci., 1(21), 19–25.
  40. Tsai Y. S., Mello R. F., Jovanović J., and Gašević D., (2021), Student appreciation of data-driven feedback: A pilot study on OnTask, ACM Int. Conf. Proceeding Ser., 511–517 DOI:10.1145/3448139.3448212.
  41. Weaver M. R., (2006), Do students value feedback? Student perceptions of tutors’ written responses, Assess. Eval. High. Educ., 31(3), 379–394 DOI:10.1080/02602930500353061.
  42. Wong B. T.-M., Li K. C., and Choi S. P.-M., (2018), Trends in learning analytics practices: a review of higher education institutions, Interact. Technol. Smart Educ., 15(2), 132–154 DOI:10.1108/ITSE-12-2017-0065.
  43. Zheng L., Zhong L., and Niu J., (2022), Effects of personalised feedback approach on knowledge building, emotions, co-regulated behavioural patterns and cognitive load in online collaborative learning, Assess. Eval. High. Educ., 47(1), 109–125 DOI:10.1080/02602938.2021.1883549.

Footnote

Electronic supplementary information (ESI) available. See DOI: https://doi.org/10.1039/d3rp00032j

This journal is © The Royal Society of Chemistry 2023