Guardian investigation finds almost 7,000 proven cases of cheating – and experts say these are the tip of the iceberg
Thousands of university students in the UK have been caught misusing ChatGPT and other artificial intelligence tools in recent years, while traditional forms of plagiarism show a marked decline, a Guardian investigation can reveal.
A survey of academic integrity violations found almost 7,000 proven cases of cheating using AI tools in 2023-24, equivalent to 5.1 for every 1,000 students. That was up from 1.6 cases per 1,000 in 2022-23.
Figures up to May suggest that number will increase again this year to about 7.5 proven cases per 1,000 students – but recorded cases represent only the tip of the iceberg, according to experts.
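For scale, here’s a quick back-of-the-envelope check using only the figures quoted above (a sketch that assumes the ~7,000 total and the per-1,000 rates refer to the same student population, which the article doesn’t spell out):

```python
# Rough sanity check of the quoted figures; the inputs are from the article,
# the derived values are only what those inputs imply.
cases_2023_24 = 7_000   # "almost 7,000 proven cases" in 2023-24
rate_2023_24 = 5.1      # proven cases per 1,000 students, 2023-24
rate_2022_23 = 1.6      # proven cases per 1,000 students, 2022-23
rate_projected = 7.5    # projected proven cases per 1,000 students this year

students = cases_2023_24 / rate_2023_24 * 1_000
print(f"Implied student population: ~{students:,.0f}")                       # ~1,372,549
print(f"Year-on-year increase: ~{rate_2023_24 / rate_2022_23:.1f}x")         # ~3.2x
print(f"Projected proven cases: ~{rate_projected * students / 1_000:,.0f}")  # ~10,294
```

Even the projected rate of 7.5 per 1,000 is still under 1% of students, which is what makes the “tip of the iceberg” caveat so important.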
The data highlights a rapidly evolving challenge for universities: trying to adapt assessment methods to the advent of technologies such as ChatGPT and other AI-powered writing tools.
Maybe we need a new way to approach school. I don’t think I agree with turning education into a competition, where difficulty is curved towards the most competitive students, creating a system so difficult that students need to edge each other out any way they can.
I guess what I don’t understand is: what changed? Is everything homework now? When I was in school, even college, a significant percentage of learning was in-class work, pop quizzes, and weekly closed-book tests. How are these kids using LLMs so much for class if a large portion of the work is still done in the classroom? Or is that just not the case anymore? It’s not like ChatGPT can handwrite an essay in pencil or give an in-person presentation (yet).
In the US we went to Common Core. That means the school board decides the courses at the beginning of the year, and they set tests designed to ensure the students are learning. But there are two issues. 1. The students are not being taught. Teachers don’t get paid enough to care or to provide learning materials, so they just have the students read the textbook and do homework until the test. This means students are not learning critical thinking or the material; they merely memorize this week’s material long enough to pass the test. 2. The tests are poorly designed. As I hinted at with point 1, the tests merely ensure that you have memorized this week’s material. They do not, and are not designed to, ensure that you actually learn.
These issues are by design, not by accident. Teachers’ pay rates have stagnated along with the rest of the working class, the idea being to slowly give the working class less and less proportional buying power and therefore economic control. In addition, educating your populace runs directly contrary to what the current reigning faction wants. An educated populace is harder to lie to.
Actually caught, or caught with “AI detection” software?
Actually caught. That’s why it’s the tip of the iceberg: all the cases that were not caught.
The article does not state that. It does, however, mention that AI detection tools were used, and that they failed to detect AI writing 90-something percent of the time. It seems extremely likely they used AI detection software.
I’m saying this as someone who has worked for multiple institutions, raised hundreds of conduct cases, and has more on the horizon.
The article says proven cases, which means the academic conduct case was not just raised but upheld. AI detection may have been used (there is a distinct lack of consensus between institutions on that) but it would not be the only piece of evidence. Much like the use of Turnitin for plagiarism detection, it is an indication for further investigation, but a case would not be raised based solely on a high Turnitin score.
There are variations in process between institutions, and they are changing their processes year on year in direct response to AI cheating. But being upheld would mean that there was direct evidence (a prompt left in the text), they admitted it (“I didn’t know I wasn’t allowed to”, “yes, but I only…”, etc.) and/or there was a viva, and based on the discussion with the student it was clear that they did not know the material.
It is worth mentioning that in a viva it is normally abundantly clear if a given student did/didn’t write the material. When it is not clear, then (based on the institutions I have experience with) universities are very cautious and will give the students the benefit of the doubt (hence tip of iceberg).
Surprise motherfuckers. Maybe don’t give grant money to LLM snakeoil fuckers, and maybe don’t allow mass for-profit copyright violations.
So is it snake oil, or dangerously effective (to the point it enables evil)?
It is snake oil in the sense that it is being sold as “AI”, which it isn’t. It is dangerous because LLMs can be used for targeted manipulation of millions, if not billions, of people.
Yeah, I do worry about that. We haven’t seen much in the way of propaganda bots or even LLM scams, but the potential is there.
Hopefully, people will learn to be skeptical the way they did with photoshopped photos, and not the way they didn’t with where their data is going.
Evidence says people aren’t skeptical for the most part and LLMs are good enough to fool all of us some of the time and some of us all of the time :(
ban photoshop too
No shit. I’m in postsecondary as an instructor and it is so beyond frustrating. They all use it; they don’t want to read or learn.
None of our institutions encourage “learning”; they are built to encourage “making the grade”. Why they need the grade and what it represents is irrelevant to students. It’s just a barrier that society has placed in front of them.
There needs to be something done about how we, as a society, approach education because whatever we are doing ain’t working. It apparently only worked at a very surface level and that was only because A.I. wasn’t available yet to be an easy out.
god i love ppl outsourcing their learning to Microsoft
we’re doomed
We are indeed. Not looking forward to my old age, where doctors, accountants, and engineers cheated their way into being qualified by using a glorified autocorrect.
Doctors and engineers are probably much harder to cheat your way into, because you would need to apply the knowledge on a hands-on basis, and you would be found out and washed out eventually. I can see it in fields that require a lot of writing; originally people were hired to write their prompts or essays (pre-lawyer, or whatever), but they always get caught down the line.
We live in a world where this building was signed off on and built, and that was before AI, so multiple incompetent people are getting through engineering.
As for incompetent doctors, there is now an agency tasked with catching them.
“Get back in that bottle you stupid genie!”
Three magic words: “Open Note Exam”
Students prep their own notes (usually limited to “X pages”), take them into the exam, and get to use them for answering questions.
Tests application and understanding over recall. If students use AI to write their notes, the notes will be useless.
Been running my exams as open note for 3 years now - so far so good. Students are happy, I don’t have to worry about cheating, and the university remains permanently angry because they want everything to be coursework so everyone gets an AI A ^_^
Exmatriculation, that should be.
Oh man, the BBC is surely already preparing for Adolescence: Rise of the Robots.
Should be expelled and banned for life.
The output is often really good, even for STEM questions about niche topics.
Not always. I teach a module where my lectures are fully coursework-assessed and, my god, a lot of the submissions are clearly AI. It’s super hard to prove, though; I just mark them the same as any other, and half-hallucinated, school-grade garbage scores pretty damn low.
(edit: this is because we are trained in how to write questions AI struggles with. It makes writing exams harder, but it is possible. AI is terrible at chemistry. My personal favourite was when Google AI told me the melting point of pyrrole was about −2000 °C, so colder than absolute zero)
Of course it is only a tool, the same way an untrained person cannot operate an excavator without causing lots of damage. I just wanted to say how impressed I often am by how good the responses are.
Bold move voicing such an opinion on Lemmy! (I agree with you, and you are also objectively correct. There are also many things it is terrible at, but if one knows what one is doing, that really doesn’t detract from the quality stuff)
Not from the UK and also not a student, but IMO this is more a school problem than a student problem. The teachers just do not understand how to cope with AI. With open-note exams and traditional exam-style questions, I would be an idiot if I used AI.
Professors were already bordering on using AI, when before they would just use software to look at your essay and flag any cheating it might detect.
If ChatGPT can effectively do the work for you, then is it really necessary to do the work? Nobody is saying to go to the library and find a book instead of letting a search engine do the work for you. Education has to evolve, and so does the testing. There are a lot of things GPTs can’t do well. Grade on that.
The “work” that LLMs are doing here is “being educated”.
Like, when a prof says “read this book and write a paper answering these questions”, they aren’t doing that because the world needs another paper written. They are inviting the student to go on a journey, one that is designed to change the person who travels that path.
Education needs to change too. Have students do something hands on.
Hands on, like engage with prior material on the subject and formulate complex ideas based on that…?
Sarcasm aside, asking students to do something in the lab often requires them to have gained an understanding of the material, an understanding they utterly lack if they used AI to do their work. Although, tbf, that in-person lack of understanding is really the #1 way we catch students who are using AI.
Class discussion. Live presentations with question and answer. Save papers for supplementing hands on research.
Have you seen the size of these classrooms? It’s not uncommon for lecture halls to seat 200+ students. You’re thinking that each student is going to present? Are they all going to create a presentation for each piece of info they learn? 200 presentations a day every day? Or are they each going to present one thing? What does a student do during the other 199 presentations? When does the teacher (the expert in the subject) provide any value in this learning experience?
There’s too much to learn to have people only learning by presenting.
Have you seen the cost of tuition? Hire more professors and smaller classes.
Anyways, undergrad isn’t even that important in the grand scheme of things. Let people cheat and let that show when they apply for entry level jobs or higher education. If they can be successful after cheating in undergrad, then does it even matter?
When you get to grad school and beyond is what really matters. Speaking from a US perspective.
But they can’t do grad school work, they lack undergraduate level skills because they skipped it all.
“Let them cheat”
I mean, yeah, that’s one way to go. You could say “the students who cheat are only cheating themselves” as well. And you’d be half right about that.
What I see most often is that there are two reasons we get articles from professors who are waving warning flags. First, these students aren’t just cheating themselves. There are only so many spots available for post-grad work or jobs that require a degree. Folks who are actually putting the time into learning the material are being drowned in a sea of folks who have gotten just as far without doing so.
And the second reason, I think, is more important. Many of these professors have dedicated their lives to teaching their subject to the next generation. They want to help others learn. That is being compromised by a massively disruptive technology. The article linked here provides evidence of that, and therefore deserves more than a casual “teach better! the tech isn’t going away”.
Hire more? A lot of universities are quite stingy, as they don’t want to have too many tenured positions; they are in fact trying to reduce that trend. Some are also cutting back because of enrollment issues in some areas.
If using ChatGPT for tests is cheating, I’d argue calculators are cheating for math… it’s just another tool at people’s disposal as far as I’m concerned.
A calculator isn’t a computer where you can search up the answers lol. It’s literally: plug in a formula and numbers and it spits out whatever you input; it doesn’t give you the answer to a question. Also, many math questions are abstract, so you have to discern the correct formula/mathematics to use.
How can you be so dense?
Using a calculator for math is cheating unless it has been explicitly allowed. Which it isn’t until the higher grades, because before that people are supposed to do math without a calculator. Which they should, to get a proper understanding of the subject.
The same holds for literally any tool. If the goal is to get students to be able to convincingly communicate their thoughts, or to see if they understood a topic by making them explain it, having them use ChatGPT accomplishes nothing and just wastes everybody’s time. If the goal is to see if they can produce enough bullshit to satisfy an average public administration, then letting them use LLMs might be valid. Just like any other tool, it’s legitimate to allow LLMs or not, based on whatever is supposed to end up in a student’s head. But using one without it being allowed is cheating, simple as that.