Using results from PAT Maths Adaptive

The information provided by the PAT Maths Adaptive reports is intended to assist you in understanding students' abilities in the learning area, diagnosing gaps, strengths and weaknesses in students' learning, and measuring learning progress over time.

This article explains the results available from PAT Maths Adaptive.

Related article: Generating reports with the ACER Data Explorer

Scale score

A scale score is a numerical value given to a student whose achievement has been measured by completing an assessment. A student's scale score lies at a point somewhere on the achievement scale, and it indicates that student's level of achievement in that particular learning area — the higher the scale score, the more able the student.

Regardless of the test level or items administered to students, they will be placed on the same scale for the learning area. This makes it possible to directly compare students' achievement and to observe students' progress within a learning area by comparing their scale scores from multiple testing periods over time.

A score on a Reading scale, for example, has no meaning on the Maths scale. The units of each scale have their own meaning, because they are calculated from the range of student achievement levels, which varies widely between learning areas.

Achievement bands

Students in the same achievement band are operating at approximately the same achievement level within a learning area regardless of their school year level.

Viewing student achievement in terms of achievement bands may assist you to group students of similar abilities. By referencing the achievement band descriptions, you can understand the types of skills typical of students according to their band.

Item difficulty

Item difficulty is a measure of the skills and knowledge required to answer an item successfully. This makes it possible to allocate each test item a score on the same scale used to measure student achievement. An item with a high scale score is more difficult for students to answer correctly than an item with a low scale score. A student can generally be expected to respond successfully to more of the items located below their scale score than to those above it.

Item difficulties are estimated from the performance of individuals with a range of abilities who respond to the item, first at the trial stage and later verified against real test results. The concept being assessed is one aspect of an item's difficulty, but other factors can combine to make an item more or less complex: the level of abstraction, the number of steps required, whether the question involves problem-solving or computation, the question context, the required precision of the response, and the cognitive load. An item assessing a concept introduced early in the curriculum may still be quite complex; conversely, an item assessing a concept introduced later may be simpler.

By referencing the difficulty of an item, or a group of items, and the proportion of correct responses by a student or within a group, it may be possible to identify particular items, or types of items, that have challenged students.

Australian norms

Norm data that represents the achievement of students across Australia is available as a reference sample against which student achievement can be compared.

The comparison between a student's scale score achievement and the Australian norm sample can be expressed as a percentile rank.

The percentile rank of a score is the percentage of students in the norm sample who achieved less than that score. For example, a student at the 75th percentile of the Year 3 norm sample has a scale score higher than those of 75% of Australian Year 3 students.
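As a rough illustration of how a percentile rank relates a scale score to a norm sample, the sketch below counts the proportion of sample scores falling below a student's score. The sample values are hypothetical, not actual PAT norm data, and real norm tables are derived by ACER rather than computed this way by users.

```python
def percentile_rank(score, norm_sample):
    """Percentage of scores in the norm sample that fall below the given scale score."""
    below = sum(1 for s in norm_sample if s < score)
    return 100 * below / len(norm_sample)

# Hypothetical Year 3 norm sample of scale scores (illustrative only).
year3_sample = [95, 100, 105, 110, 115, 120, 125, 130]

# A scale score of 118 exceeds 5 of the 8 sample scores.
print(percentile_rank(118, year3_sample))  # 62.5
```

In practice you would simply read the percentile rank from the report; this only shows what the number means.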

