The following topics are covered in this article:
- Assessment overview
- When to assess
- Choosing the right test
- Administering the tests
- Using the results
- Supporting documents
While PAT Vocabulary measures one important strand – the ability to identify synonyms – the newly released PAT Vocabulary Skills assesses a much broader conception of vocabulary and a wide range of associated processes.
Results from the new PAT Vocabulary Skills cannot be directly compared to the existing PAT Vocabulary assessment.
If your school has not previously used PAT Vocabulary, ACER recommends using PAT Vocabulary Skills to assess your students' abilities.
Assessment overview
The PAT Vocabulary tests are designed to assist teachers in their assessment of students’ knowledge of vocabulary.
PAT Vocabulary consists of five word-knowledge tests, ordered according to difficulty. Each test contains between 35 and 40 items. Each item is a short sentence in which a focus word is used in context. Students select an appropriate synonym for this word from a set of alternatives in a multiple-choice format.
Each test requires up to 25 minutes of testing time, plus time for administration.
When to assess
To monitor student progress, a gap of 9 to 12 months between testing sessions is recommended. Learning progress may not be reflected in a student’s PAT Vocabulary scale scores over a shorter period of time.
If schools assess students towards the end of the calendar year, the administration will correspond with the time of year that the national norms were collected (September). The assessment data can then be passed on to teachers to inform teaching practice the following year.
Choosing the right test
Choosing the right test is necessary to ensure that students’ results provide useful information about their current ability in the learning domain.
The difficulty of each test form and the teacher's knowledge of the student should both be considered when selecting an appropriate form, along with curriculum appropriateness and the context of the classroom.
When a student can answer around 50% of the questions correctly, the test is well targeted and provides considerable information about the skills the student is demonstrating and those they are still developing. Very high or very low PAT scores do carry larger error margins, but measurement error should not be the motivating factor in test selection.
There is often a wide range of ability within a classroom, and vocabulary skills exist on a continuum, so it is not necessary to give all students in a class the same test. Instead, the focus should always be on each student's ability at the time of the assessment, not where they are expected to be.
To make decisions about which test is most appropriate for a particular student or group of students, it is essential that the teacher previews and becomes familiar with the content of the tests.
| Test level | Generally suitable for | No. of questions | Time allowed |
| --- | --- | --- | --- |
| Test 1 | Year 3, 4 or 5 | 35 | 25 minutes |
| Test 2 | Year 4, 5 or 6 | 36 | 25 minutes |
| Test 3 | Year 6, 7 or 8 | 38 | 25 minutes |
| Test 4 | Year 7, 8 or 9 | 40 | 25 minutes |
| Test 5 | Year 8, 9 or 10 | 40 | 25 minutes |

Norm data collected in September.
Administering the tests
Schools should take care in planning the administration of PAT Vocabulary to ensure consistency. PAT Vocabulary is a standardised assessment that must be completed in 25 minutes. Time must be managed by the teacher invigilating the assessment. Students will not be automatically timed out by the online system.
Students’ focus and energy levels are important factors in their capacity to accurately demonstrate their ability on the assessment. For this reason, it is generally best to test students in the morning and not immediately before or after an exciting school event.
Prior to administering the tests to students, teachers should download or print a list of student login details as well as the test administration instructions. It is recommended that the school’s unique account URL is saved as a shortcut or link on students’ devices or on the school intranet for easy access.
Using the results
Scale score
A scale score is a numerical value given to a student whose achievement has been measured by completing an assessment. A student's scale score lies at a point somewhere on the achievement scale, and it indicates that student's level of achievement in that particular learning area — the higher the scale score, the more able the student.
Regardless of the test level or items administered to students, they will be placed on the same scale for the learning area. This makes it possible to directly compare students' achievement and to observe students' progress within a learning area by comparing their scale scores from multiple testing periods over time.
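Because all results in a learning area sit on a single scale, progress can be read as the difference between a student's scale scores across testing periods. A minimal illustration (the dates and scores below are hypothetical, not real PAT results):

```python
# Hypothetical scale scores for one student across annual testing periods
scores = {"2022-09": 112.4, "2023-09": 119.8, "2024-09": 124.1}

# Compare consecutive testing periods in date order
dates = sorted(scores)
for earlier, later in zip(dates, dates[1:]):
    change = scores[later] - scores[earlier]
    print(f"{earlier} -> {later}: {change:+.1f} scale score points")
```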
A score on a Reading scale, for example, has no meaning on the Maths scale. In fact, the units of the scale will have different meanings for each scale. This is because these units are calculated based on the range of student levels of achievement, which vary widely between learning areas.
Achievement bands
Students in the same achievement band are operating at approximately the same achievement level within a learning area regardless of their school year level.
Viewing student achievement in terms of achievement bands may help you group students of similar ability. By referencing the achievement band descriptions, you can understand the types of skills typical of students in each band.
Item difficulty
Item difficulty is a measure of the skills and knowledge required to be successful on the item. Because item difficulty and student achievement are measured on the same scale, each test item can be allocated a score on the scale used to measure student achievement. An item with a high scale score is more difficult for students to answer correctly than an item with a low scale score. A student can generally be expected to respond successfully to more of the items located below their scale score than those above it.
Item difficulties are estimated from the performance of individuals with a range of abilities who respond to the item, first at the item trial stage and later verified against real test results. The concept being assessed is one aspect of an item's difficulty; other factors may combine to make it more or less complex, such as the level of abstraction, the number of steps required, whether the item involves problem solving or computation, the context, the required precision of response, and the cognitive load. An item assessing a concept introduced earlier in the curriculum may still be quite complex, while an item assessing a concept introduced later may be simpler.
By referencing the difficulty of an item, or a group of items, and the proportion of correct responses by a student or within a group, it may be possible to identify particular items, or types of items, that have challenged students.
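The comparison described above can be sketched in code: given each item's difficulty and the proportion of correct responses within a group, flag the items that challenged students and order them by difficulty. The item IDs, difficulty values, and proportions below are hypothetical, for illustration only.

```python
def flag_challenging_items(items, correct_threshold=0.5):
    """Return items whose proportion of correct responses in the group
    fell below the threshold, ordered from least to most difficult."""
    flagged = [item for item in items if item["proportion_correct"] < correct_threshold]
    return sorted(flagged, key=lambda item: item["difficulty"])

# Hypothetical item data: scale-score difficulty and group proportion correct
items = [
    {"id": "V01", "difficulty": 110.0, "proportion_correct": 0.82},
    {"id": "V02", "difficulty": 125.0, "proportion_correct": 0.44},
    {"id": "V03", "difficulty": 131.0, "proportion_correct": 0.30},
]

for item in flag_challenging_items(items):
    print(item["id"], item["difficulty"])
```

Grouping flagged items of similar difficulty, or of a similar type, can then point to the skills a class has yet to consolidate.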
Australian norms
Norm data that represents the achievement of students across Australia is available as a reference sample against which student achievement can be compared.
The comparison between a student's scale score achievement and the Australian norm sample can be expressed as a percentile rank.
The percentile rank of a score is the percentage of students in the norm sample who achieved less than that score. For example, a student at the 75th percentile of the Year 3 norm sample has a scale score higher than those of 75% of Australian Year 3 students.
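The percentile rank definition above translates directly into a calculation: count the norm-sample scores below the student's scale score and express that as a percentage. A minimal sketch, using a hypothetical norm sample rather than actual Australian norm data:

```python
def percentile_rank(score, norm_sample):
    """Percentage of the norm sample with a scale score below the given score."""
    below = sum(1 for s in norm_sample if s < score)
    return 100 * below / len(norm_sample)

# Hypothetical norm sample of scale scores (not real PAT norms)
norm_sample = [98, 104, 110, 112, 115, 118, 121, 125, 130, 134]

print(percentile_rank(121, norm_sample))  # 60.0: higher than 60% of the sample
```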