Every year, thousands of students in Austrian primary and middle schools take a reading test to see how well they can distinguish nonsensical sentences from meaningful ones. It's called the "Salzburger Lese-Screening" (SLS), and it's probably one of the best-designed tests these students will ever take; it's certainly much better than most of those that will ultimately determine their grades, on which the SLS has no effect.
(That said, I still don't see how it can be of much use for policymakers. Sure, you get reliable and valid results for every student at a given age -- but if you wanted to know how to make students read better, you'd have to correlate this with data on how and what they're being taught. But the school authorities, at least in Austria, do not have this information, except at the extremely coarse-grained level of school types. So, fine, students at one type of school do better than those at another type of school -- but what exactly is it that causes this difference, provided it's not all selection effects? To be clear, this is something that could be investigated, but that would require a much greater dedication to evidence-based teaching -- and policy-making -- than you tend to get in this country.)
So students take this little test, which takes all of ten minutes if you factor in preparation time and collecting the reams of paper. -- Speaking of paper: it's five sheets for every student. One with instructions, one practice sheet to familiarize students with the format of the test, and three sheets filled with increasingly complex sentences that students have to mark as either correct or incorrect. -- Now it's the responsibility of the class teacher (usually a language teacher) to evaluate the test. At most schools, teachers share a set of transparencies with the correct answers so they don't have to work out every answer themselves. They then write down the number of correctly marked sentences per student. Next, a handy little table included with the instructions maps this raw score to a "reading quotient", which works like the IQ in that it's centered on the established national average at 100 points.
The final step, as far as the administering teacher is concerned, is to fill out a report sheet for the class. The report sheet does not ask for the raw data, but for the percentage (to be calculated by the teacher) of students with a reading quotient lower than 95 or higher than 115. This sheet is then tied together with the ~140 pages of tests and handed in for further processing.
The problem should be obvious. No matter how well-designed the test -- this way of collecting the results introduces multiple unnecessary points of failure, by having teachers manually perform not only the evaluation but also two further transformations of the raw data, both of which could be done by a computer program in seconds. Everybody who has ever worked with data knows that most problems derive from input errors, i.e. human failure -- and to multiply that risk by introducing three separate lists of hand-written numbers, shaded tables printed in ten-point font, and having language teachers calculate percentages (no offense) is just asking for trouble.
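To make the point concrete, here's what "done by a computer program in seconds" looks like. This is a minimal sketch, not the real thing: the actual SLS norm tables are age-dependent and not reproduced here, so the lookup values below are purely hypothetical placeholders, and `class_report` simply mirrors what the report sheet asks for.

```python
# Sketch of automating the SLS pipeline: raw score -> reading quotient
# -> class-level percentages. The norm table is HYPOTHETICAL; the real
# SLS norms are age-dependent and not public domain.

HYPOTHETICAL_NORMS = {  # raw score threshold -> reading quotient
    20: 85, 25: 90, 30: 95, 35: 100, 40: 105, 45: 110, 50: 115, 55: 120,
}

def reading_quotient(raw_score: int) -> int:
    """Map a raw score to a quotient via the nearest lower table entry."""
    eligible = [s for s in HYPOTHETICAL_NORMS if s <= raw_score]
    if not eligible:  # below the lowest tabulated score
        return min(HYPOTHETICAL_NORMS.values())
    return HYPOTHETICAL_NORMS[max(eligible)]

def class_report(raw_scores):
    """Return the two percentages the report sheet asks for:
    students with a quotient below 95 and above 115."""
    quotients = [reading_quotient(s) for s in raw_scores]
    n = len(quotients)
    below = sum(q < 95 for q in quotients) / n * 100
    above = sum(q > 115 for q in quotients) / n * 100
    return round(below, 1), round(above, 1)

# Example: a hypothetical class of ten students
print(class_report([22, 28, 31, 36, 38, 41, 44, 47, 52, 57]))
```

Ten lines of actual logic -- which is the whole argument: the error-prone part of the process is exactly the part that is trivial to automate.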
Especially when the solution is equally obvious. Coding a digital version of the test is trivial (the reading quotient might have to be recalibrated for students clicking or tapping instead of circling the correct sign, though) -- and even failing that, digitally processing printed multiple-choice tests is a routine affair these days. At the very least, just take the teachers' lists with the raw data, if you can't make them input the numbers directly into a computer, instead of wasting the time of highly educated professionals with work that has nothing to do with their core competencies.
Of course, I'm saying all of this because I just wasted two hours doing just that (and explaining to colleagues how to calculate percentages). The upside? It helped me finally figure out what the overarching topic of my blog should be. So here it is: low-hanging fruit for a start, but I'll be sure to venture further out in future posts.
(More thoughts on optimizing education: here)