How SAC marking really works: Moderation, scaling and the statistics behind VCE internal assessment
Few parts of the VCE generate as much confusion, frustration and anxiety as School-Assessed Coursework (SACs).

Parents regularly ask:
- “Will my child’s SAC score change later on when the VCAA gets the results?”
- “How does VCE moderation actually work?”
- “Are SACs graded on a curve?”
- “Is the school marking too harshly or too easily?”
Students often ask the same questions, particularly when they’ve heard through the grapevine that strong SAC scores do not seem to translate into equally strong study scores.
The problem is that most explanations of SAC moderation are either incomplete or wrong.
To understand how SAC marking really works, you must clearly separate four ideas:
- internal SAC marking
- ranking versus raw scores
- statistical moderation
- the external exam as the neutralising benchmark shared by all students in a particular subject
Once these are understood, SAC results stop feeling arbitrary and start becoming predictable.
What SACs are designed to do (and what they are not)
SACs are school-based assessments that are:
- written or selected by schools
- marked internally by teachers
- aligned to VCAA Study Designs
- used to rank students within the same school
This is the critical point:
SACs are not designed to be comparable between schools.
A given SAC percentage at one school is not meant to mean the same thing as the same percentage at another.
This is why:
- VCAA does not compare SAC tasks across schools
- raw SAC percentages are not carried directly into study score calculations
- rank within a cohort is the meaningful outcome of SACs

The most common misconception: “SACs are graded on a bell curve”

Parents often hear:
- “The SACs get curved”
- “The school scales marks down”
- “VCAA adjusts the percentages”
These explanations are inaccurate.
There is:
- no VCAA requirement that schools impose a bell curve on SAC results
- no forced distribution of scores
- no target number of high or low marks
Schools may apply their own grading systems to SAC results, but ultimately the VCAA applies statistical moderation to every result. Statistical moderation serves a very specific function.
Why statistical moderation exists
Schools differ significantly in:
- cohort strength
- task difficulty
- marking strictness
- interpretation of performance descriptors
Without moderation, students could be unfairly advantaged or disadvantaged simply by:
- attending a school with easier SACs
- being assessed more harshly than peers elsewhere
Statistical moderation exists to ensure:
Students are judged on their performance, not on the school they attend.
The external exam: the great neutraliser

The external VCE exam is the single element of the system that:
- is identical across the state
- is externally marked
- uses standardised marking criteria
- is sat under the same conditions by all students
Because of this, the exam acts as the great neutraliser in the VCE.
Regardless of:
- how hard or easy SACs were
- how generous or strict internal marking was
- how strong or weak a cohort appeared on paper
Every student ultimately sits the same exam.
VCAA treats the exam as the most statistically reliable measure of performance across schools. It is the anchor point that allows internal school results to be aligned fairly.
This is why the exam plays a dual role:
- it directly contributes to the study score
- it provides the benchmark used to moderate SACs
Without the exam, there would be no fair way to compare internal assessments statewide.
The single most important rule: VCAA protects rank, not raw SAC scores
This principle governs everything.
VCAA assumes that:
- teachers are best placed to rank students within their own cohort
- the exam best measures performance between cohorts
The moderation process works as follows (a code sketch after the lists below makes this concrete):
- schools submit SAC rankings
- students sit the external exam
- VCAA analyses the exam performance of each school’s cohort
- SAC rankings are aligned to the exam score distribution of that cohort
This means:
- your position relative to classmates is preserved
- your raw SAC percentage may change
- fairness is maintained across schools
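
To make the rank-alignment idea concrete, here is a minimal Python sketch. It is a deliberate simplification for illustration: the function `moderate_by_rank` and its one-for-one substitution of exam scores are assumptions of this sketch, not the VCAA's published procedure, which broadly adjusts the level and spread of a school's SAC scores to match the cohort's exam distribution while preserving rank.

```python
def moderate_by_rank(sac_scores, exam_scores):
    """Align each student's SAC result with the exam score holding the
    same rank position within the cohort (simplified illustration only).

    Both arguments map student -> score; the result maps
    student -> moderated SAC score.
    """
    # Order students by internal SAC result, best first
    # (ties broken arbitrarily in this sketch).
    sac_rank_order = sorted(sac_scores, key=sac_scores.get, reverse=True)

    # Sort the cohort's exam scores, best first. Only the distribution
    # matters, not which individual earned which exam score.
    exam_distribution = sorted(exam_scores.values(), reverse=True)

    # The student ranked n-th on SACs is aligned with the n-th highest
    # exam score: rank is preserved, raw SAC percentages are not.
    return {student: exam_distribution[i]
            for i, student in enumerate(sac_rank_order)}
```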
A detailed example to make this concrete
Consider a cohort of 20 students.
- Student A is ranked 1st on SACs
- Student B is ranked 2nd
- Student C is ranked 3rd
On the exam:
- the highest exam score in the cohort is 44
- the second-highest is 42
- the third-highest is 40
After moderation:
- Student A’s moderated SAC score aligns with 44
- Student B’s aligns with 42
- Student C’s aligns with 40
Even if Student A originally scored 93% and Student B scored 89%, those raw percentages are replaced.
This is why students say:
“My SAC scores went down.”
What changed was the scale.
What did not change was the rank.
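
Running the sketch above on this cohort reproduces the numbers. The individual exam scores below are invented for illustration; notice that Student B still aligns with 42 even though, in this invented data, Student C scored higher on the exam, because it is internal rank, not individual exam marks, that drives the alignment.

```python
# Invented illustrative data; the article only specifies the cohort's
# top three exam scores, not which student earned which.
cohort_sacs  = {"Student A": 93, "Student B": 89, "Student C": 84}
cohort_exams = {"Student A": 44, "Student B": 40, "Student C": 42}

print(moderate_by_rank(cohort_sacs, cohort_exams))
# {'Student A': 44, 'Student B': 42, 'Student C': 40}
```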
Why raw SAC percentages are misleading
Raw SAC scores are influenced by:
- assessment difficulty
- marking generosity
- cohort strength
An 85% at one school may represent stronger performance than a 95% at another.
This is why:
- comparing SAC scores across schools is meaningless
- small percentage differences should not be over-interpreted
- ranking matters far more than the number
For VCAA:
- rank is signal
- raw score is noise
What happens when a cohort performs strongly on the exam?

If a cohort performs well on the exam:
- SACs are moderated upwards
- strong internal rankings are reinforced
- top students benefit significantly
This is where the exam acts as a reward for genuine strength. Strong cohorts are not penalised — they are validated.
What happens when a cohort performs poorly?
If a cohort performs poorly:
- SACs are moderated downwards
- scores may compress
- differences between students narrow
However:
- rank is still preserved
- top-ranked students are still advantaged
This is the exam acting as a correction mechanism rather than a punishment.
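
The same sketch illustrates the compression effect. With an invented set of weak exam results, the moderated scores move down and bunch together, yet the internal order of the students is untouched:

```python
# Reusing cohort_sacs from above with invented weak exam results.
weak_exams = {"Student A": 31, "Student B": 30, "Student C": 29}

print(moderate_by_rank(cohort_sacs, weak_exams))
# {'Student A': 31, 'Student B': 30, 'Student C': 29}
# Raw SAC gaps of 4 and 5 points shrink to single points,
# but the rank order is preserved.
```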
Why SAC ranking matters more than any single SAC

Because moderation operates on overall rank:
- one poor SAC rarely ruins a study score
- one inflated SAC rarely guarantees a high one
What matters is:
- consistent performance across the year
- rank stability
- exam performance that supports that rank
Ultimately,
SAC results are not awarded against a bell curve.
They are not compared across schools.
They are not judged on raw percentages.
They are ranked internally, then moderated using the external exam, which acts as the great neutraliser across the state.
Once this system is understood, SAC results stop feeling unfair or mysterious — and start functioning as what they are meant to be: fair, relative measures of performance.
In the VCE, understanding how the system works is not gaming it.
It is understanding how performance is genuinely measured.
