Establishing a delicate balance in the relationship between artificial intelligence and authentic assessment in student learning
Abstract
Across the past few decades, a gamut of technologies has emerged and been adopted to enable student learning. These technologies and digital tools have been explored in terms of their affordances and their limitations during implementation in teaching practices. Teachers have actively worked to balance how a technology serves as a vehicle for learning against the challenges introduced through its implementation. In recent years, given our increased reliance on digital tools and online learning environments, our education communities have first railed against, and then rallied for, each appearance of a new website, tool or platform. Whilst initial reactions can be negative (such as those recently observed on the appearance of the artificial intelligence (AI) based chatbot tool ChatGPT), many teachers will progress towards adopting technologies in their practices once the affordances have been teased out. As an analogy, teaching practice could be considered an equilibrium reaction responding to the pressures of change. In this Editorial, I recognise that teachers are adaptive and creative; therefore, research that evidences authentic practice using AI to support student learning will increase. In parallel, as a journal, we face new challenges relating to the role of AI in authorship and peer review; most publishers are grappling with establishing their position on authorship that involves AI-generated text.
Thinking and doing chemistry in the context of learning with technology
Looking back across the past five decades, the rapidly expanding integration of technologies and tools across teaching environments has offered significant advances for teaching and assessment. Indeed, across this period, students have transitioned from resources that were entirely text-based printed matter (with static images) to fully digital resources that are often multimodal, dynamic and interactive. This transition has catalysed researchers' exploration of the associated demands on learning, in terms of cognitive load and mechanisms of information processing.
Taking the liberty of a brief reflection on my own experience to illustrate how far we have progressed: one of my most entrenched memories as an undergraduate student is of sitting in a lecture theatre every Friday afternoon. My lecturer taught crystal field theory by writing in chalk on a blackboard, moving from left to right; we had to copy our notes quickly before this text was rubbed out to make room for more information! Our supporting resource was a textbook. The internet did not exist, so forms of multimodal representation were scarce; only the spoken word enhanced the written word. Fast forward 43 years, recognising a few teaching-technology milestones along the way: overhead projectors, access to the World Wide Web (its contemporary abbreviation is 'www'), Netscape Navigator as a web browser, Wikipedia, email, learning management systems, and wifi! Today, our students are presented with a vast array of digital platforms, tools and websites that generate and collate knowledge and information. The teacher must select a combination of tools, representations and resources that they believe will assist students to construct their understanding in chemistry, and CERP has been enriched by diverse published examples of effective practices. Technologies have no doubt enabled wider participation and access to learning resources for students, as well as different forms of formative feedback. At the same time, however, new barriers have emerged in terms of digital equity of access (for example, linguistic diversity and financial considerations).
Current considerations in assessment of student learning in chemistry through technologies
The recent worldwide reaction to the emergence of ChatGPT (a generative pretrained transformer) saw teaching communities immediately view this tool as a threat to the integrity of assessment of student learning. As educators, we seek to establish that the work submitted by a student is their own and, as such, represents their individual thinking. Educators had only recently worked through the challenges raised by remote assessment during the COVID pandemic, particularly instances of students accessing cheating websites and the loss of verified identity. It is natural that the assessment-weary might wave the white flag and wilt when faced with the potential for chatbot-generated submissions to assessment tasks.
On a positive note, many practitioners have begun to develop new approaches to assessment that are based on the inherent differences in thinking observed between a student and an AI response generator. It is also worth respecting that AI is regarded as a core technology used by students in many disciplines, where it supports design thinking. Several studies have reported applications of machine learning (ML), a subset of AI, and its adoption in chemistry teaching practices (for example Thrall et al., 2021; Lafuente et al., 2021). ML has also been recognised for its potential to achieve a deeper analysis of student work, with recent articles published in CERP providing inspiration and guidance for teachers. Two articles in the current issue include a review of studies where ML has been used to assess mechanistic reasoning in organic chemistry (Martin and Graulich, 2023) and an application of ML in the assessment of student explanations (Frost et al., 2023). Rubrics and frameworks have been shared (Raker et al., 2023; Yik et al., 2023) that can be further built upon in future studies.
A need to define the boundaries in terms of publishing
While reflecting on the potential for AI to add affordances to teaching and learning practices, we need to recognise the impact that is likely to arise in the process of publishing research. A question tabled by publishers is whether generative AI can be accepted as an 'author' through its contribution to a research paper; currently, the position appears to be 'no'! A recent editorial by the Editor-in-Chief of Science (Thorp, 2023) challenges the value and integrity of AI-generated content. While RSC Publishing, and hence CERP, do not yet have a formal position, we are guided by the Committee on Publication Ethics (Watson and Štiglic, 2023) in setting boundaries for potential authors regarding AI-generated text. In our own journal, we are facing a balance between practice and publishing. On one hand, we encourage teachers to integrate new digital tools and paradigms into their work, followed by evaluation of their effectiveness in facilitating student learning. On the other hand, we need to exclude manuscripts where human authors have not cognitively engaged in the evaluation and communication of findings. The road ahead is not straightforward, but for the foreseeable future AI-generated authorship will not be acceptable in CERP.
References
- Frost S. J. H., Yik B. J., Dood A. J., de Arellano D. C. R., Fields K. B. and Raker J. R., (2023), Evaluating electrophile and nucleophile understanding: a large-scale study of learners’ explanations of reaction mechanisms, Chem. Educ. Res. Pract., 24(2), DOI: 10.1039/D2RP00327A.
- Lafuente D., Cohen B., Fiorini G., García A. A., Bringas M., Morzan E. and Onna D., (2021), A gentle introduction to machine learning for chemists: an undergraduate workshop using python notebooks for visualization, data processing, analysis, and modeling, J. Chem. Educ., 98(9), 2892–2898.
- Martin P. P. and Graulich N., (2023), When a machine detects student reasoning: a review of machine learning-based formative assessment of mechanistic reasoning, Chem. Educ. Res. Pract., 24(2), DOI: 10.1039/D2RP00287F.
- Raker J. R., Yik B. J. and Dood A. J., (2023), Development of a generalizable framework for machine learning-based evaluation of written explanations of reaction mechanisms from the postsecondary organic chemistry curriculum, in Graulich N. and Shultz G. V. (ed.), Student Reasoning in Organic Chemistry: Research advances and evidence-based instructional practices, The Royal Society of Chemistry.
- Thorp H. H., (2023), ChatGPT is fun, but not an author. Science, 379(6630), 313.
- Thrall E. S., Lee S. E., Schrier J. and Zhao Y., (2021), Machine learning for functional group identification in vibrational spectroscopy: a pedagogical lab for undergraduate chemistry students, J. Chem. Educ., 98(10), 3269–3276.
- Watson R. and Štiglic G., (2023), Guest Editorial: The challenge of AI chatbots for journal editors, Committee on Publication Ethics, https://publicationethics.org/news/challenge-ai-chatbots-journal-editors.
- Yik B. J., Dood A. J., Frost S. J., de Arellano D. C. R., Fields K. B. and Raker J. R., (2023), Generalized rubric for level of explanation sophistication for nucleophiles in organic chemistry reaction mechanisms, Chem. Educ. Res. Pract., 24(1), 263–282.
This journal is © The Royal Society of Chemistry 2023