The PAT Spelling tests are designed to assist teachers in assessing students’ knowledge of spelling. The tasks are based on written sentences that contain a misspelt word to be corrected.
Note that the updated PAT Spelling Skills assessment uses a wider range of authentic spelling tasks and is suitable for students from Foundation to Year 10. Audio instructions make the tests accessible to students with limited reading skills.
Read more about the differences between the two assessments.
The following topics are covered in this article:
- Introduction
- When to assess
- Choosing the right test
- Administering the tests
- Using the results
- Supporting documents
Introduction
As recommended in the national English curriculum for Australian schools, each PAT Spelling test assesses a range of phonetic, visual and morphemic relationships in words that are likely to be familiar, as well as words likely to be less familiar, to students at the focus year level.
The assessment has been developed especially, but not exclusively, for use in Australian schools, and is structured so that skills across a wide range of year levels can be assessed validly. PAT Spelling consists of eight tests, ordered according to difficulty.
There are two types of questions in each test. In the first type, the misspelt word is identified for the student; in the second, the student must identify the misspelt word before correcting it.
When to assess
As a school, you should be clear on your purpose for using PAT assessments and carefully plan your assessment approach, especially when monitoring student learning progress over time.
Most schools administer their PAT assessments towards the end of each school year to identify the skills students have attained, to pinpoint specific areas where students need support in their learning, and to measure growth from the previous year. This timing also corresponds with when the Australian norm or reference group data are typically collected, allowing for more meaningful comparisons.
Some schools choose to use PAT at the beginning of the year, placing a greater emphasis on teachers acting on the diagnostic information. Others use PAT both at the beginning and end of the year in order to maximise the diagnostic information available to monitor growth throughout the year.
Monitoring progress
For the purpose of monitoring student progress, a gap of 9 to 12 months between PAT testing sessions is recommended. Learning progress may not be reflected in a student's PAT scale scores over a shorter period of time. Longitudinal growth should be measured over a minimum of two years of schooling, or three separate testing sessions, in most contexts. This will help account for possible scale score variation, for example where external factors may affect a student's performance on a particular testing occasion.
Choosing the right test
For an assessment to produce valuable information about students’ abilities, it needs to be appropriately targeted to uncover what students can do and understand, as well as what they cannot yet do and understand.
When a student responds correctly to approximately 50% of the questions, the test is well targeted: it provides the maximum information about the skills the student is demonstrating and those they are still developing.
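As a rough illustration of this targeting idea, the sketch below (in Python) flags how close a student's result sits to the 50% mark; the 40% to 60% band used here is an assumed rule of thumb, not an official PAT threshold.

```python
def targeting_check(correct: int, total: int) -> str:
    """Report how well a test was targeted for one student.

    The 40-60% band is an assumed rule of thumb for "approximately 50%",
    not an official PAT threshold.
    """
    proportion = correct / total
    if 0.4 <= proportion <= 0.6:
        return f"{proportion:.0%} correct: test appears well targeted."
    if proportion > 0.6:
        return f"{proportion:.0%} correct: a more difficult test may tell you more."
    return f"{proportion:.0%} correct: an easier test may tell you more."

# Example: 12 of 30 questions answered correctly
print(targeting_check(12, 30))  # 40% correct: test appears well targeted.
```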
To make decisions about which test is most appropriate for a particular student or group of students, it is essential that you preview the test content:
- Click Students
- Click Tests
- Click Preview
The difficulty of a test and your knowledge of each student should be taken into consideration when selecting an appropriate test form. Curriculum appropriateness and the context of the classroom also need to be taken into account when making this decision.
There is often a wide range of ability within the classroom, so it is not necessary to provide all students in a class with the same test. Instead, the focus should always be on each student’s ability at the time of the assessment, not where they are expected to be.
| Test level | Generally suitable for | No. of questions | Time allowed |
|---|---|---|---|
| Test 3 | Year 2, 3 or 4 | 30 | 20 minutes |
| Test 4 | Year 3, 4 or 5 | 30 | 20 minutes |
| Test 5 | Year 4, 5 or 6 | 30 | 20 minutes |
| Test 6 | Year 5, 6 or 7 | 30 | 20 minutes |
| Test 7 | Year 6, 7 or 8 | 30 | 20 minutes |
| Test 8 | Year 7, 8 or 9 | 30 | 20 minutes |
| Test 9 | Year 8, 9 or 10 | 29 | 20 minutes |
| Test 10 | Year 8, 9 or 10 | 30 | 20 minutes |

Norm data collected in September 2007.
Administering the tests
The Test Administration Instructions linked at the bottom of this page provide further detail, including teacher scripts and troubleshooting tips.
Preparation
The following steps need to be completed ahead of time:
- Check the technical requirements and run the browser exam from a student device to identify any potential technical issues.
- Schedule your testing date and time. It is best to administer tests in the morning and not immediately before or after an exciting school event.
- Ensure that all students are listed within your school's online assessment account and have been assigned the necessary tests.
- Review and assign appropriate test levels to all students according to their ability (non-Adaptive tests only).
- Download or print a list of your students' login details.
- Make note of your school's online assessment login page, or make sure that the URL is saved on student devices, or available to your students as a link. The address will be similar to https://oars.acer.edu.au/your-school-name.
Instructions
- Tests should be administered under standard testing conditions with invigilation.
- Students' screens should be monitored as part of test invigilation.
- Students are permitted to use pen/pencil and paper to make notes during the test.
If you determine that some students require changed testing conditions due to specific learning needs, these changes should be recorded for future reference. The process for determining and implementing any changes to test conditions should be consistent between classes and across the school.
Delivery
- Students will not be automatically locked out of the tests after the allowed time passes. You must monitor and manage the time, including accommodating toilet breaks or other interruptions that may occur.
- Student responses are automatically saved each time they navigate to another question.
- If technical problems make it necessary to postpone completion of a test, students may close the browser without losing their progress.
- iPads and tablet devices must be held in landscape orientation.
Using the results
The information provided by the PAT reports is intended to assist you in understanding students' abilities in the learning area, diagnosing gaps, strengths and weaknesses in students' learning, and measuring learning progress over time.
- Read more about PAT scales and achievement bands
Scale score
A scale score is a numerical value given to a student whose achievement has been measured by completing an assessment. A student's scale score lies at a point somewhere on the achievement scale, and it indicates that student's level of achievement in that particular learning area — the higher the scale score, the more able the student.
Regardless of the test level or the items administered, all students are placed on the same scale for the learning area. This makes it possible to compare students’ achievement directly and to observe their progress within a learning area by comparing scale scores from multiple testing periods over time.
A score on a Reading scale, for example, has no meaning on the Maths scale. The units of measurement also have different meanings on each scale, because they are calculated from the range of student achievement, which varies widely between learning areas.
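Because scale scores from different testing sessions within the same learning area sit on the same scale, growth can be read as a simple difference between them. The sketch below illustrates this with hypothetical scores; the values and dates are not real PAT data.

```python
# Hypothetical scale scores for one student on the same PAT scale,
# recorded at successive testing sessions.
sessions = [
    ("Nov 2022", 118.4),
    ("Nov 2023", 124.1),
    ("Nov 2024", 131.0),
]

# Because every session reports on the same scale for the learning area,
# growth is simply the difference between scale scores.
for (when_a, score_a), (when_b, score_b) in zip(sessions, sessions[1:]):
    print(f"{when_a} -> {when_b}: growth of {score_b - score_a:.1f} scale points")
```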
Item difficulty
Item difficulty is a measure of the extent of skills and knowledge required to be successful on the item. This makes it possible to allocate each test item a score on the same scale used to measure student achievement. An item with a high scale score is more difficult for students to answer correctly than a question with a low scale score. It could generally be expected that a student is able to successfully respond to more items located below their scale score than above.
Item difficulties are estimated from the performance of individuals with a range of abilities who respond to the item, first at the item trial stage and later verified against real test results. The concept being assessed is one aspect of an item’s difficulty, but other factors may combine to make an item more or less complex: for example, the level of abstraction, the number of steps required, whether the question involves problem solving or computation, the question context, the required precision of the response, and the cognitive load. An item assessing a concept introduced earlier in the curriculum may still be quite complex; conversely, an item assessing a concept introduced later may be simpler.
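This article does not specify the scaling model, but one way to picture the relationship between a student's scale score and item difficulty is a Rasch-style logistic model, assumed here purely for illustration: the further an item sits below a student's scale score, the higher the chance of a correct response. The scores and the scaling constant below are hypothetical.

```python
import math

def p_correct(student_score: float, item_difficulty: float, scale: float = 10.0) -> float:
    """Assumed Rasch-style sketch: probability of answering an item correctly,
    modelled as a logistic function of (ability - difficulty) on a shared scale.
    The divisor `scale` is an arbitrary illustrative constant, not a PAT value."""
    return 1 / (1 + math.exp(-(student_score - item_difficulty) / scale))

student = 120.0  # hypothetical scale score
for difficulty in (100.0, 120.0, 140.0):
    print(f"item at {difficulty:.0f}: {p_correct(student, difficulty):.0%} chance of success")
# Items located below the student's scale score are more likely to be answered
# correctly (here roughly 88%, 50% and 12% respectively).
```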
By referencing the difficulty of an item, or a group of items, and the proportion of correct responses by a student or within a group, it may be possible to identify particular items, or types of items, that have challenged students.
Achievement bands
Students in the same achievement band are operating at approximately the same achievement level within a learning area regardless of their school year level.
Viewing student achievement in terms of achievement bands may assist you to group students of similar abilities. By referencing the achievement band descriptions, you can understand the types of skills typical of students according to their band.
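As a minimal sketch of grouping students by band (the student names and band labels below are hypothetical; in practice each student's band comes from your PAT reports):

```python
from collections import defaultdict

# Hypothetical data: achievement bands as reported for each student.
student_bands = {
    "Student A": "Band 5",
    "Student B": "Band 6",
    "Student C": "Band 5",
    "Student D": "Band 7",
}

# Group students of similar ability by their achievement band.
groups = defaultdict(list)
for student, band in student_bands.items():
    groups[band].append(student)

for band in sorted(groups):
    print(band, "->", ", ".join(groups[band]))
```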
Australian norms
Norm data that represents the achievement of students across Australia is available as a reference sample against which student achievement can be compared.
The comparison between a student's scale score achievement and the Australian norm sample can be expressed as a percentile rank.
The percentile rank of a score is the percentage of students in the norm sample who achieved a lower score. For example, a student at the 75th percentile of the Year 3 norm sample has a scale score higher than those of 75% of Australian Year 3 students.
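For illustration, a percentile rank can be computed directly from that definition; the norm sample scale scores below are hypothetical, not actual PAT norm data.

```python
def percentile_rank(student_score: float, norm_scores: list[float]) -> float:
    """Percentage of the norm sample with a scale score lower than the student's."""
    below = sum(score < student_score for score in norm_scores)
    return 100 * below / len(norm_scores)

# Hypothetical Year 3 norm sample scale scores
norm_sample = [98.0, 103.5, 110.2, 112.0, 115.8, 118.3, 121.0, 125.4]

# 6 of the 8 norm scores are below 119.0, so the student sits at the 75th percentile.
print(f"Percentile rank: {percentile_rank(119.0, norm_sample):.0f}")  # Percentile rank: 75
```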