In this month’s blog, Dr Alex Buckley from Heriot-Watt’s Learning and Teaching Academy reflects on the flurry of discussion about artificial intelligence and its implications for university assessment.
In the last few months, the university sector has seen a flurry of commentary and concern about artificial intelligence content-creation tools in general, and ChatGPT in particular. In a sense we’re late to the party; higher education is just the latest in a long line of industries to feel the impact of automation. In our case, the challenge comes from ChatGPT’s ability to produce fluent writing that is hard to distinguish from that of a competent human writer. Thinking of our current predicament as part of the wider impact of artificial intelligence on society does put things into perspective: it’s about a lot more than how universities assess their students. Or as someone else has said: “while the sudden advent of generative AI may be disruptive to educators, it is even more disruptive to the futures of the students we teach”. While the scale and nature of the impact of ChatGPT and similar tools is hard to predict with any certainty, we can confidently say that if they are sophisticated enough to substantially change how we assess students, then they’re going to be a big part of our students’ careers.
It’s ok not to know
Lurking in the recent flood of interest in ChatGPT is a sense that we’ve been blindsided by technology, in a way that threatens our fundamental approaches to assessment. I think it’s true that we individually don’t really know how to approach the problem. And I think that’s fine. When we talk to students about academic integrity, we so often connect it to professional integrity because we want our rules of good academic conduct to link to the kinds of rules that operate in wider society and in the professions and industries. We use our academic integrity rules to help prepare students to act appropriately in their future lives. But when it comes to ChatGPT and other AI tools, those rules of appropriate use have not yet been developed. It isn’t the fault of educators that norms of acceptable use of AI tools haven’t yet formed, but we are at the forefront of dealing with the consequences.
On the positive side, we will also be involved in shaping those norms. Within Heriot-Watt, there are people who can come together to help figure it out: people with expertise in artificial intelligence, people passionate about creating effective assessments, people interested in using technology for learning and teaching. There are specific groups (like the Academic Integrity Group and our Digital Pedagogy Hub) navigating a route through the short- and long-term challenges and opportunities. And as members of our disciplines, professions and industries we can – and should – be involved in helping to shape expectations about how AI tools like ChatGPT can appropriately be used. The academic profession itself is a good example: we can be part of the ongoing conversation about the right and wrong ways of using AI tools for writing a journal article or a funding proposal.
So there is a lot we can’t say yet about the right and wrong ways of using tools like ChatGPT; that clarity will come in time. And it will – universities have come to an accommodation with all sorts of disruptive technologies: calculators, Wikipedia, gramophones, Google Translate, spellcheckers, etc. (though there are also enough examples of over-hyped technology to encourage an approach of constructive scepticism towards claims about the scale of disruption).
On the other hand, there are some things we can say right now. We can say that copying wholesale from a text-generation tool like ChatGPT, or copying wholesale and doing some limited paraphrasing, is plagiarism (though we may have to update the wording of our policies, given that up to now only human beings have been able to create the kind of text that a student would be tempted to copy).
What can we do right now?
Beyond that, and beyond waiting as society – with our help – develops a better sense of how these tools should and shouldn’t be used, what can we do to ensure that students aren’t using them inappropriately in assessments?
- We can think about assessment security at the level of the programme. Cutting down substantially on the opportunities for cheating doesn’t come cheap, and it makes sense to focus our effort at particularly crucial points in the programme. Including an oral assessment somewhere in the final year of an undergraduate programme might require some reallocation of resourcing, but it helps ensure that a student faces at least one assessment where they would struggle to use something like ChatGPT
- We can figure out which assessments the currently available AI tools aren’t good at. We’re all on a steep learning curve about how these things work, but there are particular tasks that are harder for ChatGPT to complete well at the moment, e.g. focusing on recent events, asking for a high level of detail, drawing on class discussions
- Of course, there are some who see a solution in traditional exams. In-person invigilated assessments are one way of preventing the use of ChatGPT, but they come with well-known downsides. If we’re going to use them we should probably be exploring a wider range of formats. Two-stage exams, in-tray exams, open-book exams; there are lots of ways of testing students in locked-down environments that don’t rely on the standard memory-based approach
- We tend to focus on marking output (e.g. an essay) rather than process (e.g. drafts of the essay), but assessing students on the basis of how they approach a task makes it a bit harder to use AI tools as a short-cut. Ask students to submit plans, literature reviews, drafts, redrafts, reflections on how they approached the process, reflections on how they acted on feedback on drafts; anything that reveals the steps they take to arrive at the final output
Looking on the bright side
Those are examples of the more negative approaches to protecting our assessments. They should be part of the picture, but we also need to focus on the positive approaches.
- It’s easy to focus on the students who actively and consciously choose to cheat, but we shouldn’t lose sight of the fact that most students don’t. Most students want to learn rather than simply get a degree in the easiest way possible: maybe because of intrinsic interest in the subject, maybe because they want to succeed in the workplace. That is partly up to the students themselves, but we can make a difference by ensuring that assessments are clearly helpful learning opportunities. We can try to make our assessments interesting and meaningful, less like artificial hoops to jump through and more like complex (and even fun) real-world activities. And we can make sure that assessments are clearly linked to what the course is supposed to help students learn (and that courses are aimed at learning outcomes that make sense to students)
- One potential consequence of the new technology is that we’ll have to rethink what we want students to learn. If AI tools are going to be an essential part of our students’ working lives, university programmes are going to have to help students develop their AI literacy: how those tools are made and how they work, their blind spots and biases, their strengths and limitations. Assessments will have to actively involve the use of AI tools, just as students now use Google and Wikipedia as part of developing the ability to assess and use information they find on the internet
- Beyond a simple rule like ‘Don’t cut-and-paste from ChatGPT’, there are currently few well-developed conventions about the appropriate use of AI tools. This creates challenges, but it also creates an opportunity: to involve students in an open discussion about how they could and should be using them. And that’s a discussion they are probably already having among themselves. The benefits of approaching academic integrity in positive terms – as part of helping students develop essential skills around writing, research, communication etc. – are well-known. The fact that when it comes to AI tools, the rules are currently being written (literally) means that we’re in a good position to involve students in conversations about honesty and integrity in academic work
There is a sense of crisis and excitement around ChatGPT at the moment, and I do think that’s because we’re doing things a bit backwards. We need to help students learn how to use these tools in ways that are appropriate for the professions and industries they will join, but at the moment those professions and industries are still figuring out for themselves what appropriate use looks like. As members of those professions and industries (including an academic profession that is grappling with the role of AI tools in academic writing), we are also helping to develop those rules.
As educators, we’re on the frontline of dealing with something that is rippling through society, so it’s ok that we’re figuring it out as we go along, alongside contributing to the wider conversations that will provide the longer-term answers. What matters, as always, is that we are preparing students to flexibly navigate a changing world.
Heriot-Watt colleagues can find guidance about the impact of ChatGPT on learning, teaching and assessment on the Global Digital Pedagogies Hub on SharePoint.