The 2024 VCE Psychology exam: what the paper actually rewarded and where students lost marks

The 2024 VCE Psychology examination was not unusually difficult, but it was highly discriminating. Students who understood the content but lacked precision, scientific language, or exam awareness consistently lost marks across both sections. The Examiner’s Report makes it clear that the issue was not unfamiliar material, but how students interpreted questions, applied science skills, and justified their answers under exam conditions.

The structure of the 2024 paper and why it matters

The paper followed the standard structure of 40 multiple-choice questions in Section A and nine short-answer and extended-response questions in Section B. What is often overlooked is that even in Section B, the majority of marks were allocated to short-answer responses worth between one and four marks. This means that the exam overwhelmingly rewarded accuracy, interpretation, and disciplined explanation rather than extended writing stamina.

The Examiner’s Report repeatedly notes that students often understood the relevant topic but failed to tailor their response to the specific command term or the scenario provided. This was especially evident in questions that required application of key science skills rather than recall of content.

Section A: strong performance, but predictable traps

Overall performance in Section A was solid, particularly on questions that tested straightforward knowledge of key concepts such as neurotransmitters, sleep stages, and basic learning processes. However, the Examiner’s Report highlights that the most discriminating multiple-choice questions were those that embedded knowledge within data, research descriptions, or subtle wording changes.

For example, questions involving correlational research required students to recognise the absence of manipulation and control. A significant number of students incorrectly selected options that implied causation, despite the study being clearly described as correlational. This shows that many students still default to surface reasoning rather than reading the methodology closely.

Other common Section A errors included confusing the mean and median when interpreting graphs, misreading what the bold lines in scatter plots represented, and misinterpreting standard deviation as an indicator of average performance rather than of variability. These errors were not due to a lack of content knowledge, but to weak data literacy.

Identifying variables: one of the most consistent weaknesses

One of the clearest patterns in the Examiner’s Report was difficulty identifying independent variables in experimental designs. In Question 6, which involved classical conditioning, many students listed the stimuli themselves as independent variables rather than identifying what was manipulated by the researcher, namely the order of presentation and the time delay between stimuli.

This error appeared even in otherwise strong scripts and reflects a deeper misunderstanding of what makes a variable independent. The Study Design is explicit that the independent variable is the variable deliberately changed or selected by the researcher. Students who name objects instead of conditions demonstrate conceptual confusion rather than careless reading.

Section B short-answer questions: where precision mattered most

In Section B, many students performed well when asked to outline or describe roles of brain regions, memory processes, or physiological mechanisms. However, marks were frequently lost because students failed to link their explanation to the scenario provided.

A clear example was Question 1a on the interaction between the basal ganglia and neocortex. Full marks required students to do three things: identify each region’s role, link those roles to the specific task of teeth brushing, and explain how the regions interact. Students who described the regions in isolation without addressing their interaction were capped at lower marks, even if their descriptions were correct.

This pattern repeated across multiple questions. Students who wrote accurate but generic responses often scored lower than students who wrote slightly less content but explicitly anchored their explanation to the scenario.

Long-term potentiation and observational learning

Long-term potentiation was another area where marks were commonly lost. The Examiner’s Report notes that many students could define LTP but failed to explain how it related to the retention stage of observational learning. In particular, students often used vague phrases such as “mental image” or “visual memory” instead of the required term “mental representation”. This imprecision of language was enough to cost marks.

High-scoring responses explained that repeated stimulation of neural pathways during observation strengthens synaptic connections, allowing the observed behaviour to be retained in long-term memory. Students who did not explicitly link repeated stimulation to synaptic strengthening did not receive full marks.

Application of key science skills under pressure

Questions involving sampling, validity, reliability, and error types revealed ongoing weaknesses in scientific reasoning. In Question 4d, many students misidentified the population by including both injured and uninjured participants, failing to recognise that the population refers to the group of interest, not all participants in the study. Others defined random sampling incorrectly, stating that every participant had an equal chance of selection, when it is every member of the population who must have an equal chance.

Similarly, in Question 4e, students often recognised that the H-index reduced error but could not explain why. High-scoring responses explicitly linked the adjustment to participant height and explained how this reduced systematic error and improved accuracy or internal validity.

Data representation and graphing conventions

Question 7c, which required students to draw a graph, was one of the lowest-scoring questions on the paper. The Examiner’s Report makes it clear that the issue was not mathematical difficulty but a misunderstanding of graphing conventions. Many students plotted inappropriate graph types, mislabelled axes, or included an incorrect number of bars.

This reinforces the importance of practising data representation throughout the year, rather than treating it as an isolated skill.

The extended-response question: not all or nothing

The extended-response question revealed an important insight. Students who structured their response clearly, used correct terminology, and addressed most of the assessment criteria often scored well even if their evaluation was incomplete. The Examiner’s Report explicitly notes that some students achieved six or seven marks without fully addressing the evaluation criterion, due to the strength of their explanations and analysis elsewhere.

This demonstrates that the extended-response question is assessed holistically. Clear scientific reasoning and accurate application of concepts can still earn substantial marks even when time is limited.

What the 2024 exam ultimately rewarded

The 2024 VCE Psychology exam rewarded students who could read carefully, think scientifically, and explain precisely. It penalised vague language, generic explanations, and assumptions not supported by the scenario or data provided.

Students who treated Psychology as a science, rather than a subject of memorised facts, were consistently advantaged.

How ATAR STAR supports this transition

ATAR STAR works with students to develop the specific exam skills highlighted by the 2024 paper. This includes identifying variables correctly, interpreting data accurately, using Study Design language precisely, and tailoring responses to the exact wording of each question.

This approach supports students who are already strong but want to eliminate costly errors, as well as students who understand the content but struggle to convert that understanding into marks under exam conditions.
