PAIS Mathematics Adaptive allows educators to accurately and efficiently measure students’ abilities in Mathematics, to diagnose gaps, strengths and weaknesses in student learning, and to monitor student progress over time.
This article covers the following topics:
- Assessment overview
- When to assess
- Choosing the right test
- Administering the tests
- Using the results
- Supporting documents
For more information on non-adaptive PAIS tests, see the supporting documents at the bottom of this page.
Assessment overview
PAIS Mathematics Adaptive evaluates students’ mathematical abilities across four fundamental strands:
- Number and Algebra
- Measurement and Geometry
- Space and Shape
- Chance and Data.
These assessments automatically assign an entry level to each student and adapt the test pathway based on their responses. Administered within a 40-minute timeframe, they include both multiple-choice and interactive questions.
The PAIS assessments do not follow any single national curriculum, which allows students from all countries to be compared fairly. Each subject has been influenced by curriculum content found in the national curricula of the UK, Australia, Singapore, the USA and others, but the PAIS assessments are skill-based rather than content-based and do not rely on specific content from those curricula. The tests require students to think about topics rather than just recall information, and to apply their subject knowledge rather than just remember facts. The PAIS program has similar curriculum coverage to that of large-scale international assessments such as TIMSS and PIRLS.
When to assess
As a school, you should be clear on your purpose for using PAIS assessments and carefully plan your assessment approach, especially when monitoring student learning progress over time.
Most schools administer their assessments towards the end of each school year to identify the skills students have attained, pinpoint specific areas where students need support in their learning, and measure growth from the previous year.
Some schools choose to use PAIS at the beginning of the year, placing a greater emphasis on teachers acting on the diagnostic information. Others use PAIS both at the beginning and end of the year in order to maximise the diagnostic information available to monitor growth throughout the year.
PAIS Adaptive allows you to gather richer diagnostic information about what each student can do in a particular learning area, and 'parallel' testlets minimise the chance of students being repeatedly exposed to the same content. For this reason, PAIS Adaptive may also be administered at times between those main longitudinal measurement points. Scale scores from these interim sittings would not necessarily be considered when monitoring progress over multiple years.
Monitoring progress
For the purpose of monitoring student progress, a gap of 9 to 12 months between testing sessions is recommended. Learning progress may not be reflected in a student's PAIS scale scores over a shorter period of time. Longitudinal growth should be measured over a minimum of two years of schooling, or three separate testing sessions, in most contexts. This will help account for possible scale score variation, for example where external factors may affect a student's performance on a particular testing occasion.
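As a rough illustration of the recommended gap only (the function and date format below are not part of PAIS), deciding whether two sittings are far enough apart for progress monitoring comes down to a simple date calculation:

```python
from datetime import date

def months_between(first: date, second: date) -> int:
    """Whole months between two testing dates (illustrative helper)."""
    return (second.year - first.year) * 12 + (second.month - first.month)

def suitable_gap_for_progress(first: date, second: date) -> bool:
    """Apply the recommended minimum of roughly 9 months between sittings
    before comparing scale scores for progress monitoring."""
    return months_between(first, second) >= 9

# End-of-year sittings roughly one school year apart:
print(suitable_gap_for_progress(date(2023, 11, 20), date(2024, 11, 18)))  # True
```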
Choosing the right test
For an assessment to produce valuable information about students’ abilities, it needs to be able to demonstrate what students can do and understand, as well as what they cannot yet do and understand.
When a student responds correctly to approximately 50% of questions, the test is well targeted and provides the maximum information about the skills the student is demonstrating and those they are still developing.
There is no need to choose a test when using PAIS Adaptive. Students' entry testlets – the first block of items the student sees – are automatically assigned according to their estimated abilities. If a student previously completed a PAIS test in the same learning area within the preceding two years, that scale score is used to determine a suitably difficult starting point.
In some cases you may decide to override the automatically assigned entry level, for example where a student has not previously completed a PAIS test but you know their ability is relatively high or low compared with the 'average' for their year level. To do so:
- Select the student’s name from the Students page
- Click Tests
- Click Edit starting level next to the assigned PAIS Adaptive test
For reference, the default PAIS Adaptive entry levels for each year level are listed below:
- Level 1: foundation, year 1
- Level 2: year 2
- Level 3: year 3, year 4
- Level 4: year 5, year 6
- Level 5: year 7, year 8
- Level 6: year 9, year 10
- Level 7: year 11, year 12, year 13
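The mapping above amounts to a simple lookup by year level. The sketch below is illustrative only (the function and data structure are not part of the platform); where a prior PAIS scale score from the last two years exists, it takes precedence as described above.

```python
# Illustrative lookup of the default entry levels listed above.
# Year levels are given as strings ("foundation", "1" .. "13").
DEFAULT_ENTRY_LEVELS = {
    "foundation": 1, "1": 1,
    "2": 2,
    "3": 3, "4": 3,
    "5": 4, "6": 4,
    "7": 5, "8": 5,
    "9": 6, "10": 6,
    "11": 7, "12": 7, "13": 7,
}

def default_entry_level(year_level: str) -> int:
    """Return the default PAIS Adaptive entry level for a given year level."""
    return DEFAULT_ENTRY_LEVELS[year_level.lower()]

print(default_entry_level("4"))  # 3
```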
Administering the tests
The Test Administration Instructions linked at the bottom of this page provide further detail, including teacher scripts and troubleshooting tips.
Preparation
The following steps need to be completed ahead of time:
- Check the technical requirements and run the browser exam from a student device to identify any potential technical issues.
- Schedule your testing date and time. It is best to administer tests in the morning and not immediately before or after an exciting school event.
- Ensure that all students are listed within your school's online assessment account and have been assigned the necessary tests.
- Review and assign appropriate test levels to all students according to their ability (non-Adaptive tests only).
- Download or print a list of your students' login details.
- Make note of your school's online assessment login page, or make sure that the URL is saved on student devices, or available to your students as a link. The address will be similar to https://oars.acer.edu.au/your-school-name.
Instructions
- Tests should be administered under standard testing conditions with invigilation.
- Students' screens should be monitored as part of test invigilation.
- Students are permitted to use pen/pencil and paper to make notes during the test.
Delivery
- Students will not be automatically locked out of the tests after the allowed time passes. You must monitor and manage the time, including accommodating toilet breaks or other interruptions that may occur.
- Student responses are automatically saved each time they navigate to another question.
- If technical problems make it necessary to postpone completion of the tests, students may close the browser without losing their progress.
- iPads and tablet devices must be held in landscape orientation.
Adaptive testing
The nature of adaptive assessment means that students will likely see different items (questions), and different numbers of items, in their tests. To reduce student anxiety, it is important to explain that there will be differences between students' tests and to assure them that they will all have enough time to attempt their questions.
All students will see a total of 35 items in their PAIS Mathematics Adaptive test.
The multi-stage structure of PAIS Mathematics Adaptive is not obvious to students; simply discourage them from skipping items without reading them.
Students must view (not necessarily respond to) at least half of the items in Stage 3 of their test in order to receive a scale score.
Tests submitted prematurely will be flagged as 'Invalid: insufficient items viewed' and excluded from the reporting. These tests may be re-opened to allow students to complete them.
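The check below is only a sketch of the validity rule as described above, not the platform's actual implementation: a scale score is reported only when at least half of the Stage 3 items were viewed.

```python
def receives_scale_score(stage3_items_viewed: int, stage3_items_total: int) -> bool:
    """Illustrative check of the rule above: the student must have viewed
    (not necessarily answered) at least half of their Stage 3 items."""
    return stage3_items_viewed * 2 >= stage3_items_total

print(receives_scale_score(5, 12))  # False -> 'Invalid: insufficient items viewed'
print(receives_scale_score(6, 12))  # True
```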
Calculators
PAIS Mathematics Adaptive items assess students’ abilities to perform simple mathematical operations without the use of a calculator. Students should not be permitted to use calculators.
Changes to administration conditions
If you determine that some students require changed testing conditions due to specific learning needs, these changes should be recorded for future reference. The process for determining and implementing any changes to test conditions should be consistent between classes and across the school.
Using the results
The information provided by the PAIS reports is intended to assist you in understanding students' abilities in the learning area, diagnosing gaps, strengths and weaknesses in students' learning, and measuring learning progress over time.
Read more about online reports available in the ACER Data Explorer.
Scale score
A scale score is a numerical value given to a student whose achievement has been measured by completing an assessment. A student's scale score lies at a point somewhere on the achievement scale, and it indicates that student's level of achievement in that particular learning area — the higher the scale score, the more able the student.
Regardless of the test level or items administered to students, they will be placed on the same scale for the learning area. This makes it possible to directly compare students' achievement and to observe students' progress within a learning area by comparing their scale scores from multiple testing periods over time.
A score on the Reading scale, for example, has no meaning on the Mathematics scale. In fact, the units of each scale have a different meaning, because they are calculated based on the range of student achievement, which varies widely between learning areas.
Achievement bands
Students in the same achievement band are operating at approximately the same achievement level within a learning area regardless of their school year level.
Viewing student achievement in terms of achievement bands may assist you to group students of similar abilities. By referencing the achievement band descriptions, you can understand the types of skills typical of students according to their band.
Item difficulty
Item difficulty is a measure of the extent of skills and knowledge required to be successful on the item. This makes it possible to allocate each test item a score on the same scale used to measure student achievement. An item with a high scale score is more difficult for students to answer correctly than an item with a low scale score. In general, a student can be expected to respond successfully to more of the items located below their scale score than above it.
Item difficulties are estimated based on the performance of individuals with a range of abilities who respond to that item, first at the item trial stage and later verified against real test results. The concept being assessed is one aspect of item difficulty, but other factors may combine to make an item more or less complex: the level of abstraction, the number of steps required, whether the question involves problem-solving or computation, the question context, the required precision of response, cognitive load, and so on. An item assessing a concept introduced earlier in the curriculum may still be quite complex; conversely, an item assessing a concept introduced later may be simpler.
By referencing the difficulty of an item, or a group of items, and the proportion of correct responses by a student or within a group, it may be possible to identify particular items, or types of items, that have challenged students.
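For example, combining item difficulties with the proportion of correct responses in a group can surface the items that challenged students most. The sketch below assumes a hand-entered data structure for illustration; it is not an export format defined by PAIS.

```python
# item_id: (item difficulty on the scale, one 1/0 result per student in the group)
responses = {
    "ITEM_A": (95.0, [1, 1, 1, 0, 1, 1]),
    "ITEM_B": (118.0, [0, 1, 0, 0, 1, 0]),
    "ITEM_C": (104.0, [1, 0, 1, 1, 0, 1]),
}

# List items from easiest to hardest with the group's proportion correct.
for item_id, (difficulty, results) in sorted(responses.items(), key=lambda kv: kv[1][0]):
    proportion_correct = sum(results) / len(results)
    print(f"{item_id}: difficulty {difficulty:.0f}, {proportion_correct:.0%} correct")

# Items with a high difficulty and a low proportion correct are the ones that
# most challenged this group.
```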