Universities in the UK have seen a rise in the number of students committing academic misconduct through fraudulent use of generative artificial intelligence (AI), such as ChatGPT. Data obtained by the online magazine Times Higher Education show that the University of Sheffield investigated 92 cases related to the suspected use of AI in the 2023–24 academic year, punishing 79 students. In the 2022–23 academic year, only six cases were recorded, all of which resulted in punishments. At the University of Glasgow, 130 cases were investigated in 2023–24, leading to 78 punishments, more than triple the previous year's total. At Queen Mary University of London, the 2023–24 tally stood at 89 cases investigated and 89 punished, compared with 10 investigations and nine sanctions in the prior 12 months. Some institutions, such as the University of Southampton, said they did not record any cases of misconduct involving AI, while others did not systematically collect data on the issue, saying it is handled at departmental level.
Thomas Lancaster, an expert in academic integrity from Imperial College London, told Times Higher Education he was concerned that some institutions have recorded no cases. “It is disappointing that more universities are not tracking this information, given the ease of which generative AI access is now available to students,” he said.
The story above was published with the title “Misconduct linked to the use of AI on the rise in UK institutions” in issue 346 (December 2024).