PAT Reading 5th Edition and PAT Reading Adaptive are underpinned by the PAT Reading construct, which allows educators to accurately and efficiently measure students’ abilities in Reading, to diagnose gaps, strengths and weaknesses in student learning, and to monitor student progress over time.
The following topics are covered in this article:
- The assessments
- When to assess
- Choosing the right test
- Administering the tests
- Using the results
- About PAT Reading
- Acknowledgements
Supporting documents:
- Administration instructions
- Achievement band descriptions
- Assessment framework
- Australian norms
The assessments
PAT Reading Adaptive
PAT Reading Adaptive (2021) uses students’ responses to actively determine the content as they progress through the test. Each student sees their own mix of test items as they progress through one of dozens of available testing pathways.
This article about PAT Adaptive provides more detailed information about how the assessment works.
PAT Reading 5th Edition
PAT Reading 5th Edition (2018) comprises test forms ranging from Test 1 to Test 10 and can be administered according to student ability, based on previous scale score and educators' professional judgement. All students in a group respond to the same test items according to which of the ten available test forms they were assigned.
Differences and similarities
Both assessments are suitable for years 1–10, share the same construct and measurement scale, and have some test content in common. All test items are mapped to the Australian Curriculum, Victorian Curriculum and New South Wales Syllabus.
The diagnostic strengths of the adaptive test structure make PAT Reading Adaptive well suited to identifying the individual needs of students and providing rich evidence to inform teaching and learning. Comparisons of students' performance based on shared test content are more difficult because of the variety of test pathways students may follow.
As a conventional, linear assessment, PAT Reading 5th Edition allows you to administer the same test items to a single group of students. By assessing students on a shared set of items, you can more easily compare student performance on a single body of content.
When to assess
As a school, you should be clear on your purpose for using PAT assessments and carefully plan your assessment approach, especially when monitoring student learning progress over time.
Most schools administer their PAT assessments towards the end of each school year to identify the skills students have attained, to identify specific areas where students need support in their learning and to measure growth from the previous year. This also corresponds with the time of year that the Australian norm or reference group data are typically collected, allowing for more meaningful comparisons.
Some schools choose to use PAT at the beginning of the year, placing a greater emphasis on teachers acting on the diagnostic information. Others use PAT both at the beginning and end of the year in order to maximise the diagnostic information available to monitor growth throughout the year.
Monitoring progress
For the purpose of monitoring student progress, a gap of 9 to 12 months between PAT testing sessions is recommended. Learning progress may not be reflected in a student's PAT scale scores over a shorter period of time. Longitudinal growth should be measured over a minimum of two years of schooling, or three separate testing sessions, in most contexts. This will help account for possible scale score variation, for example where external factors may affect a student's performance on a particular testing occasion.
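As a rough illustration of this recommendation, the sketch below (Python, using entirely hypothetical dates and scale scores, not output from the PAT platform) keeps only testing sessions that are at least about nine months apart before reporting growth between them.

```python
from datetime import date

# Hypothetical testing history for one student: (test date, PAT Reading scale score).
sessions = [
    (date(2022, 11, 10), 118.4),
    (date(2023, 3, 2), 119.1),    # only ~4 months later: too soon to infer progress
    (date(2023, 11, 14), 124.7),
    (date(2024, 11, 12), 129.3),
]

MIN_GAP_DAYS = 9 * 30  # roughly the recommended nine-month minimum between sessions

# Keep each session only if it falls far enough after the previously kept session.
kept = [sessions[0]]
for session in sessions[1:]:
    if (session[0] - kept[-1][0]).days >= MIN_GAP_DAYS:
        kept.append(session)

# Report scale score growth between consecutive kept sessions.
for (d1, s1), (d2, s2) in zip(kept, kept[1:]):
    print(f"{d1} -> {d2}: growth of {s2 - s1:.1f} scale score points")
```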
Choosing the right test
For an assessment to produce valuable information about students’ abilities, it needs to be appropriately targeted to uncover what students can do and understand, as well as what they cannot yet do and understand.
A test is well targeted when a student responds correctly to approximately 50% of its questions; at that level of difficulty the test provides the maximum information about the skills the student is demonstrating and those they are still developing.
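The 50% figure reflects the fact that student ability and item difficulty sit on the same measurement scale. As a rough illustration only (a generic logistic, Rasch-style sketch with an assumed scaling constant, not the actual PAT scoring model), the probability of a correct response approaches 50% as an item's difficulty approaches the student's ability:

```python
import math

def p_correct(ability: float, difficulty: float, scale: float = 5.0) -> float:
    """Illustrative logistic (Rasch-style) probability of a correct response.

    `scale` is an assumed constant for converting scale score units to logits;
    it is not the actual PAT scaling.
    """
    return 1.0 / (1.0 + math.exp(-(ability - difficulty) / scale))

student_ability = 120.0  # hypothetical scale score
for item_difficulty in (110.0, 120.0, 130.0):
    print(item_difficulty, round(p_correct(student_ability, item_difficulty), 2))
# An item matched to the student's ability (120.0) gives a probability of about 0.5;
# easier items give higher probabilities, harder items lower ones.
```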
PAT Reading Adaptive
In some cases you may decide to override the automatically assigned PAT Adaptive entry level, for example where a student has not previously completed a PAT test but you know their ability is relatively high or low compared with the 'average' for their year level:
- Select the student’s name from the Students page
- Click Tests
- Click Edit starting level next to the assigned PAT Adaptive test
For reference, the default PAT Adaptive entry levels for each year level are listed below:
- Level 1: foundation, year 1
- Level 2: year 2
- Level 3: year 3, year 4
- Level 4: year 5, year 6, year 7
- Level 5: year 8, year 9, year 10
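For illustration only, this mapping could be represented as a simple lookup with an optional educator override, as described above (a minimal sketch; 'F' stands for foundation and the function name is hypothetical):

```python
# Default PAT Reading Adaptive entry level by year level (from the list above).
DEFAULT_ENTRY_LEVEL = {
    "F": 1, "1": 1,
    "2": 2,
    "3": 3, "4": 3,
    "5": 4, "6": 4, "7": 4,
    "8": 5, "9": 5, "10": 5,
}

def entry_level(year_level: str, override: int | None = None) -> int:
    """Return the default entry level for a year level, or an educator-supplied override."""
    return override if override is not None else DEFAULT_ENTRY_LEVEL[year_level]

print(entry_level("6"))              # 4 (default for year 6)
print(entry_level("6", override=3))  # 3 (manually lowered, e.g. for a new student)
```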
PAT Reading 5th Edition
To make decisions about which test is most appropriate for a particular student or group of students, it is essential that you preview the test content:
- Click Students
- Click Tests
- Click Preview
The difficulty of a test and your knowledge of each student should be taken into consideration when selecting an appropriate test form. Curriculum appropriateness and the context of the classroom also need to be taken into account when making this decision.
There is often a wide range of ability within the classroom, so it is not necessary to provide all students in a class with the same test. Instead, the focus should always be on each student’s ability at the time of the assessment, not where they are expected to be.
Test level | Generally suitable for | No. of questions | Time allowed |
---|---|---|---|
Test 1 | Year 1 | 14 | 25 minutes |
Test 2 | Years 1, 2 or 3 | 29 | 40 minutes |
Test 3 | Years 2, 3 or 4 | 32 | 40 minutes |
Test 4 | Years 3, 4 or 5 | 29 | 40 minutes |
Test 5 | Years 4, 5 or 6 | 29 | 40 minutes |
Test 6 | Years 5, 6 or 7 | 34 | 40 minutes |
Test 7 | Years 6, 7 or 8 | 35 | 40 minutes |
Test 8 | Years 7, 8 or 9 | 34 | 40 minutes |
Test 9 | Years 8, 9 or 10 | 35 | 40 minutes |
Test 10 | Years 9 or 10 | 35 | 40 minutes |
Administering the tests
Preparation
The following steps need to be completed ahead of time:
- Check the technical requirements and run the browser exam from a student device to identify any potential technical issues.
- Schedule your testing date and time. It is best to administer tests in the morning and not immediately before or after an exciting school event.
- Ensure that all students are listed within your school's online assessment account and have been assigned the necessary tests.
- Review and assign appropriate test levels to all students according to their ability (non-Adaptive tests only).
- Download or print a list of your students' login details.
- Make note of your school's online assessment login page, or make sure that the URL is saved on student devices, or available to your students as a link. The address will be similar to https://oars.acer.edu.au/your-school-name.
Instructions
- Tests should be administered under standard testing conditions with invigilation.
- Students' screens should be monitored as part of test invigilation.
- Students are permitted to use pen/pencil and paper to make notes during the test.
If you determine that some students require changed testing conditions due to specific learning needs, these changes should be recorded for future reference. The process for determining and implementing any changes to test conditions should be consistent between classes and across the school.
Delivery
- Students will not be automatically locked out of the tests after the allowed time passes. You must monitor and manage the time, including accommodating toilet breaks or other interruptions that may occur.
- Student responses are automatically saved each time they navigate to another question.
- If technical problems make it necessary to postpone the completion of a test, students may close the browser without losing their progress.
- iPads and tablet devices must be held in landscape orientation.
PAT Reading Adaptive
- The nature of adaptive assessment means that students will likely see different items (questions), and different numbers of items, in their tests. To reduce student anxiety, it is important to explain that there will be differences between students' tests and to assure them that they will all have enough time to attempt their questions.
- All students will see at least 30 items in their PAT Reading Adaptive test, and no more than 36.
- The multi-stage structure of PAT Reading Adaptive is not obvious to students, so students should simply be discouraged from skipping items without reading them.
- Students must view (not necessarily respond to) at least half of the items in Stage 3 of their test in order to receive a scale score.
- Tests submitted prematurely will be flagged as 'Invalid: insufficient items viewed' and excluded from reporting. These tests may be re-opened to allow students to complete them.
Using the results
The information provided by the PAT reports is intended to assist you in understanding students' abilities in the learning area, diagnosing gaps, strengths and weaknesses in students' learning, and measuring learning progress over time.
- Read more about online reports available in the ACER Data Explorer.
- Read more about PAT scales and achievement bands.
Scale score
A scale score is a numerical value given to a student whose achievement has been measured by completing an assessment. A student's scale score lies at a point somewhere on the achievement scale, and it indicates that student's level of achievement in that particular learning area — the higher the scale score, the more able the student.
Regardless of the test level or items administered to students, they will be placed on the same scale for the learning area. This makes it possible to directly compare students' achievement and to observe students' progress within a learning area by comparing their scale scores from multiple testing periods over time.
A score on the Reading scale, for example, has no meaning on the Maths scale. In fact, the units of measurement have a different meaning on each scale, because they are calculated from the range of student achievement in each learning area, and these ranges vary widely between learning areas.
Achievement bands
Students in the same achievement band are operating at approximately the same achievement level within a learning area regardless of their school year level.
Viewing student achievement in terms of achievement bands may assist you to group students of similar abilities. By referencing the achievement band descriptions, you can understand the types of skills typical of students according to their band.
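As a minimal sketch of this kind of grouping (using entirely hypothetical band boundaries and scale scores; the published cut-offs are in the achievement band descriptions document listed under Supporting documents), students could be bucketed by band like so:

```python
from bisect import bisect_right
from collections import defaultdict

# Hypothetical lower boundaries for bands 2 and above; scores below the first
# value fall into band 1. These are NOT the published PAT Reading band cut-offs.
BAND_LOWER_BOUNDS = [90.0, 105.0, 120.0, 135.0]

def band_for(scale_score: float) -> int:
    """Return an illustrative 1-based band number for a scale score."""
    return bisect_right(BAND_LOWER_BOUNDS, scale_score) + 1

students = {"Aiko": 98.2, "Ben": 122.5, "Cara": 119.8, "Dev": 141.0}

groups = defaultdict(list)
for name, score in students.items():
    groups[band_for(score)].append(name)

for band in sorted(groups):
    print(f"Band {band}: {', '.join(sorted(groups[band]))}")
```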
Item difficulty
Item difficulty is a measure of the extent of skills and knowledge required to respond successfully to the item. This makes it possible to allocate each test item a score on the same scale used to measure student achievement. An item with a high scale score is more difficult for students to answer correctly than an item with a low scale score. In general, a student can be expected to respond successfully to more of the items located below their scale score than of those located above it.
Item difficulties are estimated from the performance of individuals with a range of abilities who respond to the item, first at the trial stage and later verified against real test results. The concept being assessed is one aspect of an item's difficulty, but other factors may combine to make it more or less complex: for example, the level of abstraction, the number of steps required, whether the question involves problem solving or computation, the question context, the required precision of the response, and the cognitive load involved. An item assessing a concept that is introduced earlier in the curriculum may still be quite complex; conversely, an item assessing a concept introduced later may be simpler.
By referencing the difficulty of an item, or a group of items, and the proportion of correct responses by a student or within a group, it may be possible to identify particular items, or types of items, that have challenged students.
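As a minimal sketch of that comparison (with hypothetical item difficulties and responses, not real PAT data), you could check whether a student's incorrect responses cluster among items located above or below their scale score:

```python
# Hypothetical data: a student's scale score, plus each item's difficulty
# (on the same scale) and whether the student answered it correctly.
student_score = 121.0
responses = [
    (104.5, True), (111.2, True), (117.8, False),   # items below the student's score
    (124.1, True), (129.6, False), (136.3, False),  # items above the student's score
]

below = [correct for difficulty, correct in responses if difficulty <= student_score]
above = [correct for difficulty, correct in responses if difficulty > student_score]

print(f"Correct on items at or below {student_score}: {sum(below)}/{len(below)}")
print(f"Correct on items above {student_score}: {sum(above)}/{len(above)}")
# Items answered incorrectly despite sitting well below the student's scale score
# may point to specific skills worth following up.
```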
Australian norms
Norm data that represents the achievement of students across Australia is available as a reference sample against which student achievement can be compared.
The comparison between a student's scale score achievement and the Australian norm sample can be expressed as a percentile rank.
The percentile rank of a score is the percentage of students in the norm sample who achieve a lower score. For example, a student with a percentile rank of 75 against the Year 3 norm sample has a scale score higher than those of 75% of Australian Year 3 students.
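In code, this calculation is a simple count against the norm sample (a sketch using made-up scores, not the published Australian norm data):

```python
def percentile_rank(score: float, norm_sample: list[float]) -> float:
    """Percentage of the norm sample scoring lower than the given score."""
    lower = sum(1 for s in norm_sample if s < score)
    return 100.0 * lower / len(norm_sample)

# Hypothetical Year 3 norm sample of scale scores (illustrative only).
year_3_sample = [96.0, 101.5, 104.2, 108.8, 112.3, 115.0, 119.6, 123.1]

print(percentile_rank(115.0, year_3_sample))  # 62.5: higher than 62.5% of the sample
```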
About PAT Reading
PAT Reading aims to measure the essential skill of reading comprehension. The assessments require students to utilise a variety of processes in a range of contexts as reflected by different written texts on an online platform. PAT Reading captures the range of skills competent readers adopt in the construction of meaning, from retrieving discrete pieces of information, to maintaining and developing understanding, to using previous knowledge in the critical evaluation of information.
The PAT Reading construct is the organising principle of the assessments; it is used to guide test development and to structure the PAT reports. This structure is also part of the Progressive Achievement approach, because the way knowledge, skills and understanding are represented in the assessments is designed to support educators in identifying student needs.
Four overarching elements guide PAT Reading assessment development:
- Strands
- Strand processes
- Text types
- Text formats
The PAT Reading Assessment Framework document linked at the bottom of this page provides further detail about these components.
Strands are the core competencies that form the foundation of textual understanding. There are four strands used in PAT Reading:
- Retrieve
- Interpret explicit
- Interpret implied
- Reflect
Strand processes offer a way of further focusing learning intentions. By being able to identify the processes by which students retrieve, interpret, or reflect on texts, you can achieve a finer grained understanding of students’ gaps and strengths within each of the core skill areas, and are therefore better able to target students’ learning needs.
A range of text types must be used when assessing students' reading comprehension abilities. PAT Reading assesses five text types:
- Narrative
- Information
- Persuasive
- Procedural
- Word or sentence
Different text formats are used to reflect and assess different Reading processes. There are three text formats used in PAT Reading:
- Continuous (e.g. prose texts, made up of sentences and commonly organised into paragraphs)
- Non-continuous (e.g. lists, diagrams, graphs, advertisements and schedules)
- Mixed
Acknowledgements
Related article: PAT Reading copyrighted material
PAT Reading Adaptive
PAT Reading Adaptive is the most recently developed assessment to use the PAT Reading construct and builds on the earlier PAT Reading 5th Edition and PAT-R Comprehension (4th Edition) assessments. Sandra Knowles was the lead test developer. Trisha Reimers, David Kelly, Jude Alexander, Adam Wardell, Karin Halpin, Daniel Vine, Greta Rollo, Kathryn Miller, Brad Jackel, Roslyn Gross, Lynn Sendy-Smithers and Daniel Duckworth all made valuable contributions to the development of the test content. Siek Toon Khoo and Ling Tan led the psychometrics and methodology work. Steve Kambouris worked on PAT Adaptive design and item banking, Liang-Cheng Zhang on trial analysis, Fuchun Huang on trial analysis and testlet selection, and Clare Ozolins on PAT Adaptive norming. Other staff members who made valuable contributions include Penny Pearson and Kathy He.
PAT Reading 5th Edition
The Progressive Achievement Tests in Reading: Comprehension and Vocabulary was first produced in New Zealand by Warwick Elley and Neil Reid, and was published by the New Zealand Council for Educational Research (NZCER). The first Australian manual, referred to as a Teacher's Handbook, was based on the Teacher's Manual by these authors and was prepared by Milton Clark of ACER. The second Australian edition of the Handbook was a revision of the first one and was prepared by Graham Ward, also of ACER. The third Australian edition of the PAT-R tests represented a substantial revision of the earlier editions with considerable new material and a completely reworked manual and report forms. Joy McQueen was responsible for the test development and Andrew Stephanou for the psychometrics, norming study, data analysis and reporting.
The fourth Australian edition of the PAT-R tests represented a substantial expansion in the number of reading comprehension test booklets, with two-thirds of the content being new material from a wider range of text types than in previous editions. An additional vocabulary test booklet was added to the previous set, and spelling tests were included for the first time. Prue Anderson was responsible for the test development and Andrew Stephanou for the psychometrics, norming study, and reporting. Daniel Urbach was responsible for the analysis of the data. Other ACER staff who made valued contributions include Penny Pearson, Mark Butler, Margaret McGregor, Dara Searle, Melissa Hughes and Lucy Bastecky.
PAT Reading 5th Edition contains a significant amount of recently developed content; two-thirds of the content is new material. Sandra Knowles was largely responsible for the test development and Daniel Urbach for the psychometrics and the analysis of the data. Other staff members who made valued contributions include David Kelly, Clare Ozolins, Jude Alexander and Siham Barakat.