The experimental-design mistake examiners flag every single year
Where capable students slip without realising
Questions about experimental validity, reliability and accuracy look deceptively simple. Students recognise the terms immediately. They’ve revised them. They’ve practised the definitions.
And yet, examiner reports consistently show that these questions are among the worst-performing across the paper.
The reason is not confusion about the words.
It is confusion about what the question is actually asking students to do with them.
Validity is not “the experiment worked”
One of the most common errors noted by examiners is students stating that an experiment is valid because it “measures what it is supposed to measure”, and then stopping.
That definition is correct.
It is also insufficient.
Validity questions require students to explain why the experiment measures the intended variable. This means identifying:
- the independent variable
- the dependent variable
- the factor that has been isolated
- what has been controlled to prevent confounding
Responses that restate the definition without applying it to the specific experiment rarely earn full marks.
Examiners consistently reward answers that reference what was changed, what was measured, and what was kept constant.
Reliability answers that confuse repetition with consistency
Reliability questions expose another recurring misunderstanding.
Many students state that an experiment is reliable because it was repeated, or that reliability can be improved by repeating trials. That is only half the idea.
Examiner feedback shows that students often fail to explain how repetition improves consistency of results. Simply stating “repeat the experiment” does not demonstrate understanding unless students link repetition to:
- reduced impact of random error
- increased consistency of measurements
- greater confidence in observed trends
High-scoring responses explain what repeating achieves, not just that it occurs.
Accuracy answers that drift into validity
Accuracy is one of the most frequently misused terms in Biology responses.
Examiner reports repeatedly note students describing accuracy as “how close the results are to the true value” and then immediately discussing experimental design.
That’s a problem.
Accuracy is about measurement, not design. Answers should refer to:
- calibration of equipment
- precision of measuring instruments
- systematic error
- how close measurements are to accepted values
When students discuss control variables or fair testing in accuracy questions, they are addressing the wrong concept, and marks are lost quickly.
When students name improvements without justification
Another issue highlighted by examiners is students listing ways to improve validity, reliability or accuracy without explaining how those changes help.
For example, students might write:
- use more samples
- control more variables
- use better equipment
These suggestions are not wrong, but without explanation they are incomplete.
Marks are awarded when students link the improvement to the concept. For instance:
- increasing sample size reduces the effect of anomalies
- controlling temperature prevents it from influencing enzyme activity
- using calibrated equipment improves measurement accuracy
The why matters more than the suggestion itself.
A recurring enzyme experiment mistake
Enzyme-based experiments are a common context for these questions, and examiner reports consistently highlight the same errors.
Students often identify the wrong variable when discussing validity. For example, they claim pH must be controlled when pH is actually the independent variable, or they list substrate concentration as a controlled variable when it is being manipulated.
This indicates that students are applying memorised templates rather than analysing the experiment in front of them.
Validity questions punish autopilot thinking.
Why “fair test” language can cap marks
Some students rely heavily on the phrase “fair test” when discussing experimental design.
While this phrase is not incorrect, examiner commentary shows that it is often used as a shortcut in place of explanation. Saying an experiment is fair without explaining how it is fair adds little value.
Examiners reward specificity, not slogans.
What high-performing Biology students do differently
High-scoring students treat validity, reliability and accuracy as tools, not definitions.
They identify what the experiment is testing, decide which concept is relevant, and apply it precisely to the scenario provided. They explain relationships clearly and avoid mixing concepts.
Their answers are short, targeted and anchored to the experiment.
A practical checklist for these questions
Before writing, strong students ask:
- What is being changed?
- What is being measured?
- What could interfere with the measurement?
- Am I talking about design (validity), consistency (reliability) or measurement quality (accuracy)?
That internal sorting prevents most mark-losing errors.
What this means for Biology preparation
Students need practice applying validity, reliability and accuracy to unfamiliar experiments, not memorising their definitions.
These questions are predictable, but only for students who know how to interpret experimental contexts carefully.
Working with ATAR STAR
ATAR STAR Biology tutoring focuses heavily on experimental reasoning.
We train students to read experiments precisely, identify variables correctly, and respond to validity, reliability and accuracy questions with control and confidence. This is an area where small improvements produce immediate mark gains.
If these questions feel easy but your marks don’t reflect that, the issue is almost never content. It’s how precisely the concepts are being applied — and that is exactly where ATAR STAR helps students sharpen their responses.