Why AI-written essays quietly cap your VCE English score

 


 

By now, almost every serious VCE English student has experimented with AI. Some talk about it openly, usually framing it as a productivity tool or a way of “checking” their work. Most never admit to using it at all. Parents often sense it before they can properly name it: the writing looks cleaner, the sentences flow more smoothly, the paragraphs feel more polished, and yet something about the work feels oddly hollow. The essay reads well on the surface, but it doesn’t do very much. It gestures. It sounds competent. It moves, but without direction.

The marks tend to tell the same story. Sometimes there is a small lift at first, enough to reinforce the idea that something has finally clicked. Sometimes the marks stall immediately. Very often, students perform adequately — even comfortably — in SACs, only to underperform in the exam when the conditions are harsher and the safety nets disappear. In almost every case, the student is confused. They feel as though they are working harder than ever before, writing more than they did the previous year, producing pieces that look undeniably “better”, and yet they cannot seem to break into the higher bands. The effort no longer translates into momentum.

 


 

That pattern is not coincidence, and it has very little to do with whether assessors approve of AI. VCAA assessors are not sitting there trying to catch students out or punish technological change. The issue runs much deeper than policy. It exists because of a fundamental mismatch between what VCE English actually rewards and what AI, by its nature, strips out of the learning process.

VCE English is widely misunderstood as a writing subject. It is not. Writing is simply the medium through which assessment occurs. The skill being tested is judgment. VCE English asks students to read a prompt and determine what that prompt is really demanding — not superficially, but structurally. It asks them to form an interpretation of the text that can be defended under scrutiny, and then to maintain control of that interpretation across an entire piece of writing while working under time pressure. That control is subtle. It involves knowing which ideas deserve space and which should be excluded, how to sequence paragraphs so that meaning accumulates rather than fragments, and how to deploy evidence selectively so that it advances an argument rather than decorating it.

None of this is intuitive. Strong English students are not born with this skill set. It is built slowly and unevenly through repeated uncertainty, wrong turns, and recalibration. Students improve not by producing flawless essays, but by producing imperfect ones and learning, often painfully, where their thinking collapsed, overreached, or remained vague. The moments that feel uncomfortable — when a student is unsure how to frame a contention, or cannot quite justify a reading — are not signs of weakness. They are the moments where judgment is being formed.

 


 

AI removes almost all of that friction.

When AI is introduced early or casually into the process, it resolves uncertainty before the student has had to sit with it. It offers a contention that sounds reasonable. It provides a structure that appears coherent. It suggests interpretations that feel balanced and safe. The student moves straight to refinement without ever having grappled with decision-making. The writing improves in appearance, but the thinking underneath remains largely unchanged. Over time, that gap widens. The surface becomes smoother, while the underlying capacity to judge, prioritise, and commit stagnates.

What looks like progress is often simply polish layered over a skill that has stopped developing.

When a student uses AI to generate an essay, the most cognitively expensive part of the task has already been completed before the student ever begins to engage. The contention has been selected for them, usually framed in a way that sounds balanced and sensible. The line of argument has been imposed, complete with a logical sequence that feels reassuringly “essay-like”. The structure has been resolved in advance, often mirroring familiar VCE templates. Even the tone has been calibrated to sound appropriately analytical, confident without being bold, articulate without taking real risk. What remains for the student is surface-level refinement: adjusting phrasing, trimming sentences, perhaps inserting or swapping out quotations to make the piece feel more personalised.

On the surface, this feels productive. The resulting piece often looks impressive, especially when compared to a student’s earlier work. Fluency is high. Vocabulary is polished. Sentences are smoother and more controlled. There are fewer obvious grammatical errors, fewer moments of clumsiness, fewer signs of struggle. To a parent reading the work, it can look like a clear step forward. To a teacher skimming quickly, it can read as “strong writing”. But what has been bypassed is the very skill that separates a mid-range English student from a high-performing one: the ability to decide what matters and commit to it.

That ability does not show up immediately in sentence-level feedback. It shows up in how an essay moves, how ideas build, and how tightly the writing is tethered to the prompt. When AI supplies the thinking, students are no longer practising interpretive judgment. They are rehearsing presentation. Over time, that distinction becomes decisive.

 

 


 

This is why AI use creates such a convincing illusion of progress. Early on, everything feels easier. Essays are produced more quickly, often in half the time they previously took. Students feel less resistance sitting down to write because the hardest part — working out what to say — has already been done. Teachers comment positively on expression or coherence. Parents notice fewer annotations about awkward phrasing or unclear sentences. Confidence lifts, sometimes dramatically, and with it comes the belief that English has finally “clicked”.

What’s actually happened is more subtle and more dangerous. The visible layer of performance — fluency, polish, surface coherence — has improved faster than the invisible one.

The invisible layer is judgment. It is the capacity to read a prompt closely and recognise what it privileges and what it excludes. It is the ability to choose a reading of the text that is specific rather than generic, defensible rather than safe. It is the discipline to sustain that reading across multiple paragraphs without drifting, contradicting oneself, or reverting to theme-spotting. It is knowing when a quote advances an argument and when it merely decorates it. This layer is slow to build, fragile under pressure, and essential at the top end of VCE English. It is also precisely the layer AI does not train, because it resolves these decisions instantly rather than forcing the student to make them.

For a while, the mismatch between surface improvement and underlying capacity remains hidden. School-based assessments, particularly earlier SACs, can mask the problem. Familiar texts, predictable prompts, and generous interpretation of criteria allow competent writing to pass comfortably. Students receive results that seem to confirm they are on the right track. The ceiling has not yet appeared.

The consequences become obvious only when the conditions change.

 


 

In Units 3 and 4, marking is no longer buffered by classroom context or local expectations. It is externally aligned. Assessors are trained to reward specificity, interpretive commitment, and control, and to penalise vague engagement, thematic drift, and safe generalisation. Essays that sound polished but fail to commit are capped quickly. They are not “bad” essays. They are often technically sound, clearly written, and easy to follow. But they do not distinguish themselves. They could have been written by almost anyone.

Markers read hundreds of responses. They develop a finely tuned sense of when an argument is being actively driven by a student’s thinking and when it is simply following a pre-fabricated track. They do not need AI detection software to recognise this difference. They see it in topic sentences that announce ideas without advancing them, that signal intent without delivering substance. They see it in paragraphs that gesture towards complexity — mentioning multiple concepts or perspectives — but never unpacking any of them fully. They see it in quotations that are relevant on paper but never truly interrogated, never made to do analytical work.

Most tellingly, they see it in the absence of hierarchy. High-scoring essays know what matters most and organise themselves accordingly. AI-assisted essays tend to flatten importance, treating all ideas as equally worthy of space, resulting in writing that feels balanced but directionless. The essay moves, but it does not build. And at the top end of VCE English, that difference is everything.

Most of all, markers see AI reliance in risk avoidance. High-scoring VCE English essays are not defined by how polished they sound, but by the positions they are willing to take. They commit to readings that narrow the field of interpretation rather than trying to accommodate every possible angle. They privilege one idea and subordinate others. They make claims that could plausibly be challenged, and then they justify those claims with control and consistency. This is what gives an essay shape and authority. AI-generated essays almost never do this. They balance instead of deciding. They qualify instead of committing. They soften claims until nothing sharp remains. They sound reasonable, measured, and careful — and in doing so, they forfeit distinctiveness. At the top end of VCE English, sounding reasonable is not enough. Distinction comes from interpretive ownership, not balance.


 

The deeper issue here is not plagiarism, compliance, or rule-breaking. It is habit formation. Every time a student allows AI to resolve interpretive uncertainty for them, they are training themselves out of the habit VCE English depends on most: sitting with uncertainty long enough to think clearly. Writing a strong English essay is uncomfortable because it forces students to decide what they think the text is doing, knowing full well that another reading could exist and that their own reading might not be perfect. That discomfort is not incidental to the task. It is the task. Judgment is forged in moments where there is no obviously correct answer, only a decision that must be made and defended.

AI short-circuits that process almost instantly. It offers an interpretation that sounds plausible, balanced, and academically respectable. It removes the need to choose by presenting something that feels safely “right enough”. In the short term, this feels helpful. In the long term, it is corrosive. The cost of that safety is growth. Students who are never required to tolerate interpretive discomfort never fully develop the capacity to judge, prioritise, and commit under pressure — precisely the capacities VCE English rewards at the highest level.

This is why so many capable students follow the same frustrating trajectory. They write more essays than ever before. They seek feedback consistently. They refine their expression. They expand their vocabulary. They even walk into SACs feeling more confident than they did in earlier years. And yet their scores plateau, often stubbornly in the low-to-mid 30s. From the outside, it looks like something is missing. Parents understandably assume the issue is exam technique, pressure, or nerves. Teachers may suggest more practice. In reality, the student’s writing has improved faster than their interpretive judgment, and judgment is the currency VCE English trades in at the top end.

 


 

The strongest students I work with are not anti-AI, and they are certainly not technophobic. They are disciplined. They draft first, without assistance, even when the draft feels clumsy, inefficient, or uncertain. They wrestle with prompts rather than outsourcing that struggle. They make interpretive decisions that feel slightly uncomfortable, knowing that comfort is not the goal. Only after that work is done do they turn to AI — not to replace thinking, but to interrogate it. They might test whether an alternative reading exposes a weakness in their argument. They might use it to sharpen expression once the structure is locked in. But at no point do they surrender ownership of the central decisions. From beginning to end, the thinking remains theirs.

That distinction is subtle, but it is decisive. Once a student hands over interpretive control — even occasionally — they lose an opportunity to practise the very skill that determines exam performance. That loss compounds quietly over time. When the final exam arrives, with unseen prompts, unfamiliar phrasing, and no safety net, the difference becomes stark. The student may have fluent language, but no direction. Well-constructed sentences, but no hierarchy. Words without judgment. And at that point, the ceiling is not psychological or technical. It is structural.

AI does not ruin VCE English. Used carefully and deliberately, it can support reflection, refinement, and metacognitive awareness. Used casually, it quietly replaces the hardest part of the task and stalls development long before students realise what has happened. The difference is not access. Everyone has access now.

The difference is who is still doing the thinking.

 

 
