The ACER General Ability Tests, commonly known as AGAT, are designed to measure general reasoning by requiring students to demonstrate an ability to identify relationships, process information and solve problems. The assessments have been developed especially, but not exclusively, for use in Australian schools.
This article covers the following topics:
- When to assess
- Choosing the right test
- Administering the tests
- Using the results
- About AGAT
- Administration instructions
- Assessment framework
- Australian norms
- Primary sample test
- Secondary sample test
AGAT 2nd Edition
AGAT 2nd Edition (2022) is the most recently developed assessment to use the AGAT construct. Updated in line with the contemporary needs of educators and students, AGAT 2nd Edition offers robust measures of general ability with an updated, colourful and engaging format.
AGAT 2nd Edition comprises test forms ranging from Test 1 to Test 9 and can be administered according to student ability, based on previous scale scores and the educator’s professional judgement. The test forms are targeted at students aged from seven to sixteen years and assess five aspects, or strands, of general ability.
Updated AGAT norms are based on testing completed during October – December 2019. This period was selected because 2019 was the last stable year of data collection before the COVID-19 global pandemic, which affected schools and students across Australia in 2020 and 2021. The norms capture the performance of students from Year 2 to Year 10 who completed AGAT 1st Edition tests.
AGAT 1st Edition
The AGAT 1st Edition assessments were the first assessments released in the AGAT series, but followed over 60 years of research and experience in the field of general ability testing. Initially released as paper-based assessments, AGAT was also made available as a suite of online assessments in 2012.
From 2023, AGAT 1st Edition is only available for purchase as a paper-based assessment.
When to assess
Generally, AGAT is administered to students once per year. The AGAT scale enables teachers to monitor and compare students’ general ability measures over time. For the purpose of monitoring student progress, a gap of 9 to 12 months between AGAT testing sessions is recommended. Learning progress may not be reflected in a student’s AGAT scale scores over a shorter period of time. Longitudinal growth should be measured over a minimum of two years of schooling, or three separate testing sessions, in most contexts. This will help account for possible scale score variation, for example where external factors may affect a student’s performance on a particular testing occasion.
Choosing the right test
Choosing the right test is necessary to ensure that students’ results provide useful information about their current ability in the learning domain.
The difficulty of a test and the teacher’s knowledge of a student should be taken into consideration in selecting an appropriate test.
When a student can answer around 50% of the questions correctly, the test is well targeted and provides considerable information about both the skills the student is demonstrating and those they are still developing. While very high or very low AGAT scores do have larger error margins, measurement error should not be the motivating factor in test selection.
There is often a wide range of ability within a classroom, and reasoning skills exist on a continuum, so it is not necessary to give all students in a class the same test. Instead, the focus should always be each student’s ability at the time of the assessment, not where they are expected to be.
To make decisions about which test is most appropriate for a particular student or group of students, it is essential that the teacher previews and becomes familiar with the content of the tests.
|Test |Generally suitable for |No. of items* |
|---|---|---|
|Test 1 |Years 2 or 3 |35 (7 items per strand) |
|Test 2 |Years 2, 3 or 4 |35 (7 items per strand) |
|Test 3 |Years 3, 4 or 5 |40 (8 items per strand) |
|Test 4 |Years 4, 5 or 6 |40 (8 items per strand) |
|Test 5 |Years 5, 6 or 7 |40 (8 items per strand) |
|Test 6 |Years 6, 7 or 8 |40 (8 items per strand) |
|Test 7 |Years 7, 8 or 9 |40 (8 items per strand) |
|Test 8 |Years 8, 9 or 10 |40 (8 items per strand) |
|Test 9 |Years 9, 10 or 11 |40 (8 items per strand) |

*All tests contain an equal number of items from each of the 5 strands – abstract reasoning, kinetic reasoning, numerical reasoning, spatial reasoning and verbal reasoning.

**The recommended time allocated for the test is 50 minutes, though some students may require additional time. In this case, an extra 10 minutes can be given at the school’s discretion.
Administering the tests
Teachers are required to supervise test administration and manage the time allowed to students. Students will not be automatically timed out by the online system. The recommended test administration time is 50 minutes. This should be sufficient for most students to complete their tests. If a student requires more time to complete the test, an additional 10 minutes can be given at the school’s discretion. However, it is recommended that any additional time given is noted and taken into consideration when viewing reports.
Students’ focus and energy levels are important factors in their capacity to accurately demonstrate their ability on the assessment. For this reason, it is generally best to test students in the morning and not immediately before or after an exciting school event.
Prior to administering the tests to students, teachers should download or print a list of student login details as well as the test administration instructions. It is recommended that the school’s unique account URL is saved as a shortcut or link on students’ devices or on the school intranet for easy access.
Using the results
AGAT reports provide educators with valuable data that can be used to:
- provide a broad estimate of a student’s reasoning ability and monitor its development over time
- identify students who could be selected for extension programs, as well as those who may require special diagnostic and remedial attention
- confirm or supplement other estimates (for example, classroom tests) of a student’s stage of learning achievement
- provide information for setting realistic goals and planning effective programs of work
- compare students’ general ability to Australian norms.
A scale score is a numerical value given to a student whose achievement has been measured by completing an assessment. A student's scale score lies at a point somewhere on the achievement scale, and it indicates that student's level of achievement in that particular learning area — the higher the scale score, the more able the student.
Regardless of the test level or items administered to students, they will be placed on the same scale for the learning area. This makes it possible to directly compare students' achievement and to observe students' progress within a learning area by comparing their scale scores from multiple testing periods over time.
A score on a Reading scale, for example, has no meaning on the Maths scale; the units of each scale also mean different things. This is because the units are calculated from the range of student achievement, which varies widely between learning areas.
General ability bands
While a scale score indicates a student’s general ability according to the AGAT scale, and can be used to quantitatively track a student’s growth, it is only in understanding what the number represents that teachers can successfully inform their practice to support students. For this reason, the AGAT scale is divided into eight general ability bands that demonstrate and describe general ability as a continuum.
Student test performance can be reported according to the five reasoning strands. This can provide insight into possible strengths, gaps, and weaknesses in different reasoning skills. The strands are evenly distributed across all levels to ensure that the general ability measures are not influenced by one strand over others.
Item difficulty is a measure of the extent of the skills and knowledge required to answer the item successfully. This makes it possible to allocate each test item a score on the same scale used to measure student achievement. An item with a high scale score is more difficult for students to answer correctly than an item with a low scale score. A student can generally be expected to respond successfully to more of the items located below their scale score than above it.
Item difficulties are estimated from the performance of individuals with a range of abilities who respond to the item, first at the item-trial stage and later verified against real test results. The concept being assessed is one aspect of an item’s difficulty; other factors may combine to make an item more or less complex – for example, the level of abstraction, the number of steps required, whether the question involves problem-solving or computation, the question context, the required precision of the response, and the cognitive load. An item assessing a concept introduced early in the curriculum may still be quite complex; conversely, an item assessing a concept introduced later may be simpler.
By referencing the difficulty of an item, or a group of items, and the proportion of correct responses by a student or within a group, it may be possible to identify particular items, or types of items, that have challenged students.
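Placing items and students on one common scale, where a student has roughly even odds on an item at their own level, is characteristic of Rasch-style item response models. The sketch below is illustrative only – the specific model, scale units and scores are assumptions, not AGAT's published methodology:

```python
import math

def p_correct(ability, difficulty):
    """Probability of a correct response under a simple Rasch-style
    model (an assumption for illustration; AGAT's actual scaling
    model is not specified in this article)."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# Hypothetical scale values: an item at the student's own level
# gives about a 50% chance of success; items below the student's
# scale score are answered correctly more often, items above less often.
print(p_correct(60.0, 60.0))  # item matches student's level
print(p_correct(60.0, 59.0))  # slightly easier item
print(p_correct(60.0, 61.0))  # slightly harder item
```

This is why the article's "around 50% correct" targeting advice and the statement that students succeed on more items below their scale score than above both follow from the same scale construction.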
AGAT norm data collected in October – December 2019, representing the achievement of students across Australia, are available as a reference against which your students' ability can be compared.
The comparison between a student's scale score achievement and the Australian norm sample can be expressed as a percentile rank.
The percentile rank of a score is the percentage of students who achieve less than that score. For example, a student at the 75th percentile of the Year 3 norm sample has a scale score higher than that of 75% of Australian Year 3 students.
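The percentile-rank calculation can be sketched in a few lines. The norm scores below are made up for illustration and are not real AGAT norm data:

```python
def percentile_rank(score, norm_scores):
    """Percentage of norm-sample scores strictly below `score`,
    matching the definition above."""
    below = sum(1 for s in norm_scores if s < score)
    return 100.0 * below / len(norm_scores)

# Hypothetical Year 3 norm sample of scale scores
year3_norms = [38, 42, 45, 47, 50, 52, 55, 58, 61, 66]

# 7 of the 10 norm scores fall below 56, so the percentile rank is 70
print(percentile_rank(56, year3_norms))  # → 70.0
```

In practice the reporting system performs this comparison against the full Australian norm sample; the sketch just makes the definition concrete.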
Students' percentile ranks can be found in both the Individual report and the Group report. The table shows the year level in the AGAT norms that students are compared against, based on the AGAT test they completed.
|Australian norm comparison group
Abstract reasoning is the ability to see patterns and logic in pictures and diagrams. Abstract reasoning questions require students to complete visual patterns that follow simple rules, deduce which rules have been applied to change the states of images, and identify the next steps in 2-D visual sequences. Abstract reasoning is de-contextualised in the sense that the problems do not relate to any real-world context – they deal purely with abstract concepts.
- At lower levels, students are required to spot simple rotational patterns in 2-D images, and identify the next step in that sequence.
- At higher levels, students must identify the rules that have been applied to transform multi-faceted shapes and apply those rules to new scenarios.
Kinetic reasoning is the ability to anticipate the results from the movement of objects in real-life situations. Kinetic reasoning questions require students to recognise the effects of turning gears, pulling levers and manipulating pulleys. They require students to understand the flow of water and rolling balls in simple networks, and the position of objects on a grid after a series of commands.
- At lower levels, students must identify what happens when a force is applied to a lever in a simple system.
- At higher levels, students are required to apply rules by tracking backwards to establish the starting point of a dynamic situation.
Numerical reasoning items require students to recognise numerical patterns and sequences, categorise objects to match numerical quotas, link input and output from number machines, and apply rules to arithmetic puzzles.
- At lower levels, students need to apply basic numerical deduction to calculate unknown values in simple word problems.
- At higher levels, students must take into account multiple inter-related variables to find the outcomes of non-standard scenarios.
Spatial reasoning is the ability to visualise the transformations of objects on a page. Spatial reasoning questions require students to identify different viewpoints when looking at 3-D objects, recognise where shapes appear in complex images, identify how shapes have been manipulated through reflection and rotation, and rearrange pieces of an image to form a complete picture.
- At lower levels, students must identify how to rotate two simple objects to make them fit together.
- At higher levels, students need to recognise how a set of objects has been manipulated between photographs taken from different perspectives.
Verbal reasoning is the ability to understand how words connect to each other and how words within a sentence affect meaning. Verbal reasoning questions require students to understand the hierarchy of words, identify relationships between words, rearrange words to form a sentence, and make sense of competing sentences.
- At lower levels, students need to identify specific and general words from a group of similar words.
- At higher levels, students must take into account multiple sentences providing related information to specify the order in which things can be organised.