Let's talk about Education: AI and Writing
Education is evolving faster than ever - from digital classrooms to the psychology of learning.
In our special blog series "Let's Talk About Education", Dr. John Collick, international education expert and author, explores trends, ideas, and challenges shaping the future of teaching and learning.
Giving up Thinking: AI and Essay Writing and Marking
Let’s talk about the processes of thinking and learning, but this time examine the differences between the human mind and Artificial Intelligence - specifically Large Language Model (LLM) AI and its application to writing and marking essays.
The AI revolution
Large Language Model (LLM) AI first hit the headlines in 2022. As more users signed up to applications like ChatGPT, the world was overwhelmed with a tidal wave of industry-generated hype. We were told that AI would revolutionise everything - industry, science, the creative arts and, of course, education. With just a few simple prompts, programs like OpenAI's ChatGPT or Midjourney could write articles, create novels and works of art, program more efficiently, and take away the time-consuming effort needed to create projects and lesson plans.
So where are we three years later?
According to an MIT report, 95% of companies' generative AI pilots are delivering no return on their investment.[1] Furthermore, a number of class actions have been brought by authors and artists against AI companies that used their works to train their systems. Meanwhile the internet is flooded with what is now called "AI slop" - LLM-generated text, imagery and video - with the added danger that bad actors are using it to promote fake news, hatred and pornography.
In education, AI is having a hugely disruptive influence. Teachers are spending an increasing amount of time dealing with students who use AI to cheat, sometimes using AI themselves to catch machine-generated assignments. According to a survey conducted by Oxford University Press, one in four students say AI "makes it too easy" for them to find answers, and "pupils fear that using artificial intelligence is eroding their ability to study, with many complaining it makes schoolwork 'too easy' and others saying it limits their creativity and stops them learning new skills".[2]
Meanwhile, in the world of work, people use AI to write job applications, which companies then assess with AI, leaving us with a ridiculous situation in which two sets of machines interact with little to no input from the humans involved. All of this can prompt some second thoughts about the usefulness and role of AI, especially in schools.
The Myth of Intelligence
So what is the role of AI in education, and how can it be integrated into teaching and learning without compromising either? The first step is to understand what AI is (specifically LLMs like ChatGPT) and how it works in comparison to human intelligence.
How humans learn and think
Humans think and learn by creating incredibly complex networks in their brains. As signals travel along brain cells and jump from neuron to neuron, repetition embeds the pathways until they become permanent. A typical human brain contains roughly 86 billion neurons and on the order of 100 trillion synaptic connections. While a computer is faster, the human mind's ability to adapt, learn and interact with its environment is unsurpassed.
The human mind is also an astonishing pattern-recognition engine. For example, our ability to parse faces is phenomenally accurate, with the average human able to recognise around 5,000 different people.
Humans have also developed the ability to think in abstract terms, so we can imagine and talk about people and objects that aren't present, or that don't even physically exist, like love, decency and fairness. Most crucially, we have metacognition: we can think about thinking, both our own thoughts and the process of analysis in its abstract form. That's why we have developed the concepts of narrative, argument and conclusion, and use them to construct our speech and written work. We easily move from abstract overviews down to specific concrete details and back again.
How machines 'learn' and 'think'
Human minds learn by creating complex networks inside the brain, focussed on patterns and pattern recognition, and governed by an overall metacognition (our sense of 'self'). Artificial Intelligence works on mathematical probabilities. Specifically, in the case of LLMs, the system is trained on an immense data set trawled from the internet - Wikipedia, web pages, books etc. It uses this vast reservoir of text to calculate the most statistically likely next word in a sentence. To all intents and purposes, LLMs are nothing more than very powerful and complex text completion engines.
When I type "What shall I have for..." into a messaging app, it tries to complete the sentence with 'dinner', because it calculates that this is the most likely word. LLM AI works in exactly the same way. Sentence by sentence it answers the question: what is the most likely word or phrase to come next? There is no overview, no structure or narrative, just ultra-powerful guesswork operating on a word-by-word basis.
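To make the "most likely next word" idea concrete, here is a deliberately crude sketch of a text completion engine: a bigram model that counts which word follows which in a tiny corpus and always predicts the most frequent successor. (A real LLM is vastly more sophisticated - it weighs long stretches of context, not just one word - but the underlying principle of statistical continuation is the same. The corpus and function names here are invented for illustration.)

```python
from collections import Counter, defaultdict

# Toy training corpus: the model will only ever "know" these words.
corpus = (
    "what shall i have for dinner tonight . "
    "what shall i have for dinner today . "
    "what shall i have for lunch today ."
).split()

# Count how often each word follows each other word.
successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word seen in training."""
    return successors[word].most_common(1)[0][0]

print(predict_next("for"))  # 'dinner' follows 'for' twice, 'lunch' only once
```

The model has no idea what 'dinner' means; it simply reports that, in its training data, 'dinner' followed 'for' more often than any other word.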
As the mathematician Stephen Wolfram explained, "ChatGPT is always fundamentally trying to ... produce a “reasonable continuation” of whatever text it’s got so far, where by “reasonable” we mean “what one might expect someone to write after seeing what people have written on billions of webpages”".[3]
Unlike humans, who bring metacognition to the act of writing, an LLM has no intelligence at work. We might feel that the AI bot we're talking to has a mind like our own. In reality, all we're seeing are words chosen as the most statistically likely response to our input.
If you tell a bot it has got something wrong, it will often apologise. This isn't because it realises it's in error, but because it has seen the sentence 'you made a mistake' before and responds with what it has calculated to be the most common reply to that specific set of words.
In summary, using a Large Language Model to produce text removes all intelligence from writing. It's not substituting a clever robot mind for a human one, even if that appears to be the case. In fact there is no mind at all, just a probability-based text completion engine.
Writing and marking essays: How humans do it
In the past, students were given assignments that required them to write and submit essays. I taught at university for over 12 years, so I experienced this from both sides. As a student I researched and wrote essays. Both the preparation and the writing process built up my knowledge and analytical skills, and taught me how to create an argument that was well structured, marshalled suitable evidence and, hopefully, ended with a robust conclusion.
As a tutor I read my students' essays, looking for, and judging, all those features I've described above and offering feedback where I thought things could be improved. Of course this wasn't an infallible process, but overall, both writing and reading were self-aware cognitive processes that reinforced and enhanced the knowledge and skills of two human beings.
How machines write and mark essays
A number of technology companies now offer AI-based essay-marking systems. Most are promoted as ways of saving teachers' time, and occasionally as better and fairer than human markers. Like Large Language Models, these systems also work in terms of probabilities.
Typically, an essay-marking AI system will use a huge database of uploaded essays, which it breaks down into probability maps to determine what makes a good essay as opposed to a poor one. A student-submitted essay is likewise broken down into a collection of probabilities (keyword frequency, location, relationships with other words etc.). This is then compared to the pre-existing data: an 'A' grade essay will fall within certain statistical parameters, as will 'B' grade and 'C' grade work. The comparison happens in an instant, and AI grading companies boast that their programs can mark thousands of essays in minutes.
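The statistical comparison described above can be sketched in a few lines. This is a hypothetical, deliberately simplified illustration, not any vendor's actual product: it reduces an essay to a handful of numbers (keyword frequencies and length), then assigns whichever grade's average profile is numerically nearest. The keyword list and grade profiles are invented for the example.

```python
import math
from collections import Counter

# Hypothetical features: how often "essay-like" words appear, per 100 words.
KEYWORDS = ["evidence", "therefore", "however", "conclusion"]

def features(essay: str) -> list[float]:
    """Reduce an essay to a small numeric feature vector."""
    words = essay.lower().split()
    counts = Counter(words)
    per_100 = [100 * counts[k] / max(len(words), 1) for k in KEYWORDS]
    return per_100 + [len(words)]  # keyword rates plus overall length

# Pretend training data: average feature vectors of previously graded essays.
grade_profiles = {
    "A": [2.0, 1.5, 1.5, 1.0, 800],
    "B": [1.0, 0.8, 0.8, 0.5, 500],
    "C": [0.2, 0.1, 0.1, 0.1, 250],
}

def grade(essay: str) -> str:
    """Assign the grade whose profile is nearest by Euclidean distance."""
    f = features(essay)
    return min(grade_profiles, key=lambda g: math.dist(f, grade_profiles[g]))
```

Nothing in this sketch reads the essay: a submission stuffed with the right keywords at the right length would score well regardless of whether its argument makes any sense, which is precisely the weakness the next paragraph describes.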
But nothing actually reads the essays in the way a human would. Instead, the machine deconstructs the words into numbers and compares them to a data set. It's small wonder that, in our grade-focussed world, students use AI to write essays that are then graded by AI. We are back to the ridiculous situation we saw with job applications: two machines creating and consuming content without any true active intelligence involved.
Does AI have a role in education?
This is why, in many schools and universities, essay writing has degenerated into a technology arms race. Teachers and administrators pay companies to use AI to identify student essays written by AI. Of course, as the systems improve and fakes become harder to spot, so the detection mechanisms evolve to keep pace. Nobody benefits from this, apart from the companies selling AI.
Large Language Model Artificial Intelligence is no more intelligent than the text completion algorithm on your phone's messaging app that finishes your sentences for you. In the case of essay writing it has no grasp of structure, argument or narrative coherence. It doesn't even know it's writing an essay; it merely serves up the next most statistically probable word in each sentence. By handing off writing and marking to a machine, we are removing all self-aware cognition from the process. We are not substituting machine thought for human thought, we are simply giving up thinking altogether.
Some people argue that there is skill in constructing the prompts that are fed into the LLM in the first place, and this is what we should be teaching students. But how long does it take to come up with a sentence like 'Write me a 2000 word essay in the style of Terry Eagleton about Nietzsche's influence on Kafka and its implications for European Modernism between the two World Wars'? And how much skill and knowledge goes into creating that prompt? Conversely, what is lost in terms of a student's learning - the development of research and analysis skills, the ability to structure and articulate a persuasive and evidence-based argument?
Possible solutions: using AI to challenge the students instead of dulling their brains
How do we deal with this situation? How do we ensure that thinking and learning to think is preserved in the face of machines that can fake it in milliseconds? Part of the problem is the over-emphasis on grades, rather than the processes of learning, and the devaluation of failure. Getting good grades should not be the be-all and end-all of education (although this is difficult in a world where students are increasingly seen as consumers paying for marks they feel entitled to).
Other techniques that can be used to break the cycle, or make it harder to incorporate AI, include returning to the hand-written short-form essay, the teaching of verbal reasoning and analysis through debate, and above all, metacognition. Once students and teachers understand how their human minds work, think and learn, as opposed to AI, there will be less motivation to palm off thinking onto something incapable of it.
AI does have a role in education, but it certainly is not in writing or marking essays. The students who say that AI makes things 'too easy' are echoing a complaint often aimed at games. From a neuroscientific point of view we know that the brain embraces failure as a path to learning, and happily rises to ever-harder challenges. Perhaps creating those challenges is the role we can give to AI. It should be making students' lives harder in fun ways, not easier.
____________
[1] "MIT report: 95% of generative AI pilots at companies are failing", Fortune, August 18 2025, https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/, Accessed 20/10/25
[2] "Pupils fear AI is eroding their ability to study, research finds", The Guardian, 15 October 2025, https://www.theguardian.com/technology/2025/oct/15/pupils-fear-ai-eroding-study-ability-research?CMP=share_btn_url, Accessed 20/10/25
[3] "What Is ChatGPT Doing … and Why Does It Work?", Stephen Wolfram, February 14 2023, https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/, Accessed 20/10/25
Dr. John Collick
Dr. John Collick is an internationally recognised expert on education technology, the impact of ICT on society and the classroom, and the development of innovative education strategies from school to ministerial policy level. Throughout his career he has worked closely with Ministries of Education to develop solutions and programmes to meet the needs of 21st Century education systems worldwide.