Standardized test data can be useful
It seems like just yesterday that my school finally finished up state testing. These tests are seriously no joke. The only good thing about last year's tests is that they won't count against students who didn't do well. Of course, the same is not true for teachers, whose evaluations will still be impacted. But we can use standardized test data to help improve education.
Shocker right?
When we think about standardized tests, we tend to think about the end of year assessments that states make kids go through.
We also call them high-stakes tests. But those aren't the only standardized tests we give kids throughout the year. Most schools also give a universal screener four times a year.
Universal screeners are the assessments that schools and districts use to track student learning gains over the school year, or to determine whether students are learning at the same rate as their peers. Some examples are Star tests, iReady, and MAP testing.
These assessments are usually pretty quick, and they give the teacher an idea of the student's grade-level ability as well as indicate areas of strength and weakness in reading and math. That standardized test data should be used to figure out where a low-performing student may be weak. Teachers can then dig deeper into the cause so that the most basic skill deficit can be addressed.
Many teachers don't really think of these as standardized tests because they aren't nearly as stressful as the big state tests. But the fact is, these assessments are standardized against same-age and same-grade students to give an idea of how our students stack up against others.
Other assessments that are standardized are the evaluations that we use to determine eligibility for special education.
For example, IQ and academic evaluations are all standardized. Really, standardized tests are just assessments that have to be administered in the same way to all students. They provide comparisons to other students based on how those students answered the same questions. We use a lot of them throughout the education world, even if we don't think about it that way. Standardized test data drives a lot of what we do in school.
So, after a standardized assessment has been given, we tend to get a printout with a scaled score and maybe a percentile rank.
Most of my friends look at the scaled score to figure out if the student “passed” the test. They then look to see if the score was higher than last time the student took the test. When the student made a better score than last time and “passed” the test, teachers celebrate a job well done.
But, that doesn’t really create a complete picture does it?
Let’s talk a little bit more about what all that standardized test data means for a student.
First of all, what is a scaled score anyway?
A scaled score is really just the number of correct answers run through a formula to produce a standardized score. This score helps us know how our student did compared to other students who answered similar, but not identical, test questions.
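To make that idea concrete, here is a toy sketch in Python. The 100–350 scale and the simple linear formula are invented for illustration only; real screeners like Star or MAP use their own proprietary scaling (typically based on item response theory), not a straight-line conversion like this.

```python
# Hypothetical example: map a raw count of correct answers onto a
# made-up 100-350 scale. Real test publishers use their own formulas.

def scaled_score(raw_correct: int, total_items: int,
                 scale_min: int = 100, scale_max: int = 350) -> int:
    """Convert a raw score into a standardized scaled score."""
    fraction = raw_correct / total_items
    return round(scale_min + fraction * (scale_max - scale_min))

# A student who got 18 of 30 items correct:
print(scaled_score(18, 30))  # 250 on this invented scale
```

The point is only that the scaled score is a transformation of the raw score, which is why scores from different test forms can be compared at all.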
The percentile rank is based on the scaled scores of the students who took that assessment. While the scaled score shows what the student did as an individual, the percentile rank tells us how they did compared to others. This is when we start to see if students are performing with their peers or not.
The higher the percentile, the “better” the student did on the assessment. When we look at growth over time, a student who performs at the same percentile as on the previous assessment is making the same amount of growth as the peers they scored with before. But when a student's percentile rank increases, they made more growth than the normed peers. That is what we want for students who are struggling: these students need to make more growth in the same amount of time to catch up with grade-level expectations. A dropping percentile rank means the student made fewer gains than their peers.
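That percentile-rank logic can be sketched in a few lines of Python. The norm scores below are made-up numbers, not real norming data; the sketch just shows how a rank is computed from a norm group and how a rising rank signals gap-closing growth.

```python
from bisect import bisect_left

# Toy illustration: norm scores are invented, not from any real norming sample.

def percentile_rank(scaled: float, norm_scores: list[float]) -> float:
    """Percent of the norm group scoring below this scaled score."""
    ranked = sorted(norm_scores)
    return 100 * bisect_left(ranked, scaled) / len(ranked)

fall_norms   = [420, 450, 470, 480, 500, 510, 530, 550, 570, 600]
spring_norms = [460, 490, 505, 520, 540, 555, 570, 590, 610, 640]

fall_pr   = percentile_rank(470, fall_norms)    # 20.0 - 20th percentile
spring_pr = percentile_rank(545, spring_norms)  # 50.0 - 50th percentile

if spring_pr > fall_pr:
    print("More growth than the normed peers: the gap is closing")
```

Notice that the student's scaled score rose from 470 to 545, but the more telling number is the percentile jump from the 20th to the 50th, which means they grew faster than the norm group over the same period.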
One misconception going around is that the percentile rank is based on the current class of students, which is very inaccurate.
Standardized test data is normed, or compared, to a previous year's data. For example, for the 2021 school year, Star assessments were normed against 2017 data. This means that students being assessed after a pandemic were really being compared to student performance data from 2017.
There are two reasons for this. First, it takes a lot of time to norm the data, and we would not get percentile scores for a very long time after an assessment. Second, when a crazy year like last year happens, schools can rate their progress against more stable years to see what is actually going on.
Some schools are starting to put more weight in percentile growth rather than scaled scores, which tends to make teachers uncomfortable. I understand both sides to this.
Scaled score growth shows that the student made gains during the school year, indicating the steady march onward for student learning.
I personally prefer the percentile to measure growth for struggling students because we don’t want these students to stay at the same level of struggle throughout their careers. We want these students to make big strides forward so that their level of struggle is reduced as the school year goes on.
They need to be closing the gap between themselves and their peers, rather than just maintaining the same gap year after year.
Standardized assessments only give a snapshot of a student's abilities on a given day or week. Standardized test data shouldn't be given as much weight as we tend to put on it as a society, but it can help teachers get an idea of what they need to focus on with different students to help them progress.