Commentary: States Undercutting Student Proficiency Drive
Education Op-Ed
By: Lance T. Izumi, J.D.
Chattanooga Times Free Press, July 25, 2004
Imagine two students. Billy, who lives in an affluent suburb and attends a highly regarded public elementary school, is an average-performing pupil whose reading and math test results are just below the state's definition of proficiency in those subjects. Maria, who lives in a low-income neighborhood and attends a problem-plagued public elementary school, performs very poorly on state tests and is nowhere near the proficient level.

Question: Which student is likely to receive more state and local attention? Although most people would likely say Maria, the truth is that many state accountability systems would focus on Billy.

The reason for this counterintuitive choice is that states have crafted perverse incentives for themselves as they strive to meet the requirements of the federal No Child Left Behind Act. Under NCLB, every student must reach the proficient level on state reading and math tests by 2013-14. State plans to meet this goal often involve simplistic annual growth targets. Thus, for example, California's plan calls for 13.6 percent of students to be proficient in English/language arts in 2003-04, plus increases of 10.8 percentage points in most years until 2013-14.

Using aggregate percentage targets, however, provides an irresistible temptation for states to focus on Billy rather than Maria. This temptation is due to the "low-hanging-fruit" syndrome. Since Billy is performing just below the proficiency bar, it will be much easier to get him, rather than Maria, over that bar and meet the annual proficiency target. States will focus on Billy first and only later get to Maria. By that time, it may be too late for Maria ever to reach proficiency. At that point, the next perverse incentive will be for states to dumb down their definition of proficiency so that Maria can be classified as "proficient" even if her knowledge and skills don't warrant it. In the end, the real loser will be Maria and the many children like her.
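The arithmetic behind an aggregate schedule like California's can be sketched in a few lines. The assumption that exactly eight of the remaining years carry the 10.8-point increase is mine, inferred from the figures above (13.6 + 8 × 10.8 = 100):

```python
# Aggregate proficiency targets as described above. Which specific years
# carry an increase is an assumption here; only the endpoints are given.
start = 13.6          # percent proficient, 2003-04
annual_step = 10.8    # percentage-point increase in a growth year
growth_years = 8      # assumed number of years with an increase

final_target = start + growth_years * annual_step  # roughly 100 percent by 2013-14
```

Note that the schedule tracks the percentage of students above the bar, not how far below it any individual student sits, which is what invites the low-hanging-fruit strategy.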
There is an alternative, however. States must focus not on groups of students, but on the achievement growth of each individual student. Value-added testing, which has been pioneered in Tennessee, addresses this problem. Under the Tennessee Value-Added Accountability System, individual student performance data on the state test are collected over time and analyzed. TVAAS data can be used as a diagnostic device to spot individual student weaknesses and help teachers improve individual student achievement. Also, since the individual student data are linked to specific teachers in the classroom, the effectiveness of individual teachers can be estimated.

While the TVAAS provides a great deal of useful information, individual student performance is not compared to a proficiency benchmark on the state test. This omission is important since NCLB calls for all students to hit that proficiency benchmark. What is needed, therefore, is a value-added model that compares student performance on state tests to the proficiency benchmark on those tests.

Such a model has just been developed. This measurement model, detailed in a new study by the Pacific Research Institute, calculates a rate of expected academic change, or REACH, using an individual student's test scores to produce an annual individual improvement target, based on the proficiency level on the state test, for that student. In other words, given a student's current location on the ability scale, the REACH model tells teachers, principals, parents and officials how much both Billy's and Maria's achievement needs to grow each year to be classified as proficient by the time each leaves school. Other measurement models often report gains in student achievement in terms of simple point increases or through comparisons with local, state and national averages.
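As described, REACH turns a student's current score, the proficiency cutoff, and the years remaining into an annual growth target. A minimal sketch of that idea in Python, with invented score scales and cutoffs (the actual PRI model is more sophisticated than this):

```python
# Hypothetical REACH-style annual growth target. The function name,
# the 300-point cutoff, and both students' scores are illustrative
# assumptions, not figures from the PRI study.

def annual_reach_target(current_score, proficiency_cutoff, years_left):
    """Points of growth needed per year to reach the proficiency
    cutoff by the time the student leaves school."""
    if years_left <= 0:
        raise ValueError("student must have at least one year remaining")
    gap = proficiency_cutoff - current_score
    return max(gap / years_left, 0.0)  # already-proficient students need no gain

# Billy is just below the bar; Maria is far below it.
billy = annual_reach_target(current_score=295, proficiency_cutoff=300, years_left=5)
maria = annual_reach_target(current_score=220, proficiency_cutoff=300, years_left=5)
```

With these made-up numbers, Billy needs only 1 point of growth per year while Maria needs 16, which is exactly the individualized information that an aggregate percentage target hides.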
The REACH scores, in contrast, focus not on comparison to average scores, but on Billy's and Maria's progress toward subject-matter proficiency, which is the goal of NCLB. Using the REACH model, teachers can identify whether Billy or Maria needs remedial help and who requires greater attention.

Also, because the REACH model measures and projects how each student is progressing toward proficiency, it can be used to evaluate whether a student's exposure to a particular education program has helped or hurt that progress. For instance, if Maria has been in smaller classes but is still not making progress toward proficiency, then policy-makers might have to rethink reducing class sizes.

Further, by measuring student achievement gains under individual teachers who may be using teaching methods similar to or different from those used by their colleagues, the REACH model can inform educators and the public about which instructional practices are best able to move students toward subject-matter proficiency. Since the model can identify especially effective teachers, incentives can be given to these teachers to teach in classrooms with low-performing students. Finally, by showing where students aren't becoming proficient, the REACH model can individualize teacher professional development and training to address teacher weaknesses.

There are those who criticize value-added analysis, saying that it is too complicated. Yet Tennessee has overcome the challenge of instituting such a system, and other states, such as California, are giving each student an individual student identification number, which is the prerequisite for creating the database needed for value-added analysis. Others claim that value-added analysis sets lower expectations for learning, implying that as long as students, especially those from low socioeconomic backgrounds, show some growth, then that's enough, even if that growth is far below the proficiency level on state tests.
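The program-evaluation use described above amounts to comparing a student's observed annual gain against her REACH target. A self-contained sketch, with every number invented for illustration:

```python
# Hedged sketch of using a REACH-style target to flag a program for
# review. The target, the observed gain, and the program are all
# hypothetical examples, not data from the PRI study.

def on_track(observed_gain, reach_target):
    """True if this year's gain meets or beats the annual REACH target."""
    return observed_gain >= reach_target

maria_target = 16.0   # assumed points-per-year needed to reach proficiency
maria_gain = 9.0      # assumed points actually gained after a year in smaller classes

# If Maria's gain falls short of her target, the class-size-reduction
# program she was in merits a second look from policy-makers.
needs_rethink = not on_track(maria_gain, maria_target)
```

The same comparison, aggregated across the students a teacher serves, is what would let the model surface especially effective teachers and instructional practices.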
The REACH model, however, is geared to ensuring that every student, including those who are poor or minority, achieves the same goal of subject-matter proficiency on the state tests. States are beginning to wake up to the deficiencies in the way they currently measure student achievement. Colorado, for example, is moving toward a REACH-based achievement measurement system. With the NCLB clock ticking, states need to stop finding excuses for poor student performance and use tools such as REACH that provide the individual diagnostic information necessary to help both Billy and Maria become successful.

Lance T. Izumi is Director of Education Studies at the Pacific Research Institute. He is co-author with Harold C. Doran of the PRI study "Putting Education to the Test: A Value-Added Model for California." He can be reached at lizumi@pacificresearch.org.