A close reading of key questions from the 2024 VCE Psychology exam

The 2024 Psychology exam contained several questions that appeared straightforward on the surface but proved highly discriminating once marking criteria were applied. The Examiner’s Report makes it clear that students often understood the relevant content, yet still lost marks due to imprecision, misinterpretation of the task, or failure to integrate key science skills. Looking closely at a few specific questions illustrates how these errors arose.

Question 1a: interaction between the basal ganglia and the neocortex

This question required students to explain how the basal ganglia and neocortex interacted during a familiar motor task. Many students approached this by describing each structure separately, outlining the basal ganglia’s role in motor learning and the neocortex’s role in planning and conscious control. While these descriptions were often accurate, they were not sufficient for full marks.

The Examiner’s Report notes that high-scoring responses explicitly addressed the interaction. These responses explained that the neocortex is involved in the conscious planning and initiation of the task, while the basal ganglia support the execution of learned motor sequences through habit formation and procedural memory. Crucially, students needed to show that as the task became more automatic, control shifted from conscious cortical processing to subcortical motor pathways. Responses that treated the two structures independently were capped, even when both descriptions were correct.

This question highlights a recurring issue in Psychology exams. When the question asks how systems interact, listing functions in parallel is not enough. Students must describe how those functions connect within the context provided.

Question 2b: long-term potentiation and observational learning

This question tested students’ understanding of how long-term potentiation supports the retention stage of observational learning. The Examiner’s Report indicates that many students could define long-term potentiation but failed to link it to the process described in the question.

High-scoring responses explained that repeated observation of the modelled behaviour leads to repeated activation of specific neural pathways, strengthening synaptic connections through long-term potentiation. This strengthening supports the formation of a mental representation of the behaviour, allowing it to be stored in long-term memory and later reproduced.

Lower-scoring responses often used vague language such as ‘memory trace’ or ‘visual memory’, or they failed to connect repeated stimulation to synaptic change. In some cases, students described observational learning generally without explaining how long-term potentiation enabled retention. These responses demonstrated content knowledge but did not meet the explanatory requirement of the task.

This question illustrates how much precise terminology matters. The Examiner’s Report explicitly noted that correct use of the term ‘mental representation’ was a distinguishing feature of full-mark responses.

Question 4d: identifying the population and sampling issues

This question required students to identify the population relevant to a study. According to the Examiner’s Report, many students incorrectly described the population as all participants in the study, including both injured and uninjured individuals. This response reflects a common misunderstanding of ‘population’ as meaning everyone involved in the research.

Full-mark responses correctly identified the population as the broader group of interest that the researchers intended to draw conclusions about. In this case, that group was defined by the research aim, not by participation status. Students who tied the population to the sample rather than to the research question lost marks, even when the rest of their response was well written.

This error is significant because it demonstrates how sampling and generalisability are assessed indirectly. The question was not asking students to define ‘population’. It was asking them to apply the concept accurately to a specific investigation.

Question 4e: use of an index to reduce error

In this question, students were asked to explain how an index was used to reduce error. Many students recognised that the index adjusted for participant differences, but the Examiner’s Report shows that a large proportion could not explain why this adjustment mattered.

High-scoring responses explained that the index accounted for individual differences in height, reducing systematic error by ensuring that measurements reflected proportional change rather than absolute difference. This improved the accuracy of the results and strengthened the validity of conclusions drawn from the data.
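To make the mechanism concrete, consider a hypothetical illustration (the figures, and the exact index used on the paper, may differ): if a 160 cm participant records a movement of 8 cm and a 200 cm participant records 10 cm, the raw scores suggest a difference, yet dividing each measurement by height gives 8 ÷ 160 = 0.05 and 10 ÷ 200 = 0.05. The index values are identical, showing that both participants changed by the same proportion and that the apparent difference was an artefact of body size rather than of the variable being studied.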

Lower-scoring responses often stated that the index made the results fairer or more accurate without explaining the mechanism. These responses were capped because they asserted improvement without justifying it.

This question demonstrates a recurring pattern in the exam. When students are asked about error reduction, they must explain how the method changes the quality of the data, not simply state that it does.

Question 7c: drawing and interpreting a graph

This was one of the lowest-scoring questions on the paper. The Examiner’s Report makes it clear that the issue was not lack of understanding of the data, but failure to follow graphing conventions. Many students selected an inappropriate graph type, mislabelled axes, or failed to represent all conditions correctly.

Full-mark responses used the correct graph format, labelled axes accurately, and ensured that the data presentation matched the information provided. Students who omitted labels or used an incorrect graph type lost marks even if their plotted values were otherwise correct.
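As a general guide, and not a description of this specific question, data from discrete experimental conditions is conventionally presented as a bar chart rather than a line graph, with the independent variable on the horizontal axis, the dependent variable and its unit on the vertical axis, and every condition from the data table represented.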

This question highlights an important point. Data representation is not assessed as an artistic or stylistic task. It is assessed as a scientific skill with specific conventions. Partial correctness is not rewarded if those conventions are not followed.

What these questions reveal about the 2024 exam

Taken together, these questions show that the 2024 exam rewarded students who could integrate content knowledge with scientific reasoning and precise language. Students who relied on general explanations, intuitive phrasing, or familiar but imprecise terminology were consistently capped.

The Examiner’s Report repeatedly emphasises that marks were lost not because the content was unknown, but because responses did not do exactly what the question required.

How ATAR STAR addresses these question types

At ATAR STAR, exam preparation involves working through questions at this level of detail. Students are trained to recognise when a question requires interaction rather than description, mechanism rather than assertion, and application rather than recall.

This approach benefits students who are already strong but want to eliminate subtle errors, as well as students who leave the exam feeling confident but are then disappointed by their results.
