Mend, not end, educational testing

The Common Core State Standards and accompanying K-12 assessments have recently sparked a fierce national backlash against testing. Sound educational testing and assessment are integral to good teaching and learning in classrooms and necessary for evaluating school performance and assuring quality in education. Rather than throw the baby out with the bathwater, I propose a more considered, "mend, not end" approach to testing, assessment, and accountability in America's schools, with validity at the forefront of the conversation.
Mending begins with understanding that most commercial standardized tests are designed to serve particular purposes well, for particular populations, and can support only particular decisions at best. To uphold validity principles in practice, it is worthwhile to ask: Are we using the test for the originally intended purpose, or for another purpose that taxes the tool beyond its technical limits? Multi-purposing a test indiscriminately is not a good idea from a validity standpoint, despite its efficiency.
Validity deals with the meaningfulness of test scores and reports. Technically, validity is determined by the built-in features of a test, including its overall content, the quality of test questions, the suitability of metrics for the domains tested, and the reliability of scores. In addition, how and where a test's results are applied, and the defensibility of inferences drawn, or actions taken, with test-based information affect the levels of validity we can claim from the scores and reports.
[Illustration: Jori Bolton for Education Week]
According to testing standards published by the American Educational Research Association, the National Council on Measurement in Education, and the American Psychological Association, once a validated test is taken out of its originally intended context, we may no longer be able to claim as much validity for a new population, purpose, or decisionmaking context, nor with as much certainty.
New proposed uses call for more tests of a test—a process called "validation." New evidence must be secured to support a new or different action. Too often, this basic guideline is overlooked, particularly under high-stakes accountability policies like the federal No Child Left Behind Act or the common core. Validity oversights also happen with relatively low-stakes international-assessment programs like the Program for International Student Assessment, or PISA.
No Child Left Behind, signed into law in 2002, mandated annual achievement testing of all students in grades 3-8 to measure the progress of schools. Variable state-set standards, pegged to manifestly unattainable growth targets of "adequate yearly progress" and "universal proficiency" by 2014, stretched many school evaluation systems beyond their technical capabilities. NCLB's public rewards and sanctions based on school performance led to "teaching to the test," which spuriously raised student test scores without lasting or replicable learning gains. This repercussion, in and of itself, undermined the validity of inferences from test scores, which no longer clearly indicated what students actually knew in the tested domains.
Ripple effects of NCLB took hold in other school evaluation contexts, too, threatening validity in additional ways. Even the most enlightened and progressive of districts were pressured into missteps by high-stakes-testing requirements. In 2005, for example, Montgomery County, Md., sought to ratchet up performance and close achievement gaps districtwide by identifying its own model schools and school practices—a laudable goal. However, the county's selected measure of student achievement, aggregated to serve as an indicator of school performance in "value added" evaluation models, was the combined math and verbal SAT score of high school students.
Recent efforts have sought to align the SAT more with college-readiness and common-core standards, but at the time of the 2005 report, "Value-Added Models in Education: Theory and Applications," the validity of the SAT as an indicator of school-level outcomes was questionable. A college-entrance exam, the SAT is designed to predict how well students will perform as college freshmen, with limited validity as a curriculum-based achievement test. Variability in the levels and kinds of coursework taken by students could significantly affect the meaning of the scores, weakening inferences about student achievement in K-12 scholastic programs.
Further, because students opt to take the SAT, test-takers are likelier to be stronger academically and inclined toward college, come from wealthier families, or have exposure to stronger schooling experiences. Self-selection biases schools' aggregate SAT scores, complicating interpretations of what caused them to rise or fall.
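The self-selection effect described above can be sketched numerically. The following simulation is illustrative only (the parameters and the score model are assumptions, not figures from the article): a school's average SAT score rises substantially when only its stronger students opt to take the test, even though nothing about the school's quality has changed.

```python
# Illustrative simulation: self-selection alone can inflate a school's
# aggregate SAT average, with no change in teaching quality.
# All numbers here (score scale, noise, cutoffs) are assumed for illustration.
import random

random.seed(0)

def school_sat_average(selectivity):
    """Average SAT score among the students who opt in.

    `selectivity` is the ability percentile below which students skip the
    test; a higher cutoff means only stronger students become test-takers.
    """
    abilities = [random.gauss(0, 1) for _ in range(10_000)]
    cutoff = sorted(abilities)[int(selectivity * len(abilities))]
    takers = [a for a in abilities if a >= cutoff]
    # Model each score as 1000 + 200 * latent ability + measurement noise.
    scores = [1000 + 200 * a + random.gauss(0, 50) for a in takers]
    return sum(scores) / len(scores)

everyone = school_sat_average(0.0)       # every student takes the test
self_selected = school_sat_average(0.5)  # only the top half opts in

# The aggregate rises purely because of who tested, not better schooling.
assert self_selected - everyone > 100
```

Under these assumptions, restricting test-takers to the upper half of the ability distribution raises the school's average by well over 100 points, which is why an aggregate SAT score is hard to interpret as a school-quality indicator.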
Neither the school district nor the SAT is at fault. Rather, it is the punitive accountability measures tied to test results, in the larger context of reforms, that may be called into question. Such mandates are powerful enough to sway the decisions of even trained analysts, regardless of the stakes tied to local actions.
In the current context of the common core, a parallel drama is playing out. The common-core tests now being developed have been criticized as too long, superficial or overly narrow, and out of alignment with the curriculum and the common-core standards. Educators, parents, and local officials reasonably fear that, yet again, tests are serving as blunt policy instruments to drive top-down reforms. These reforms allow inadequate time and resources for designing the deeper curricula and assessments to match, provide little or no professional development for teachers and school leaders, and neglect critical supports that schools need to succeed.
With ill-prepared schools and students, what will the test results really tell us about student learning and the quality of schooling?
Yet, were the same tests implemented after standards were refined, teachers and schools readied, parents and students oriented, tests validated to better measure what students actually learned, and results freed from external rewards and sanctions, the results might be more meaningful. Further, the anti-testing backlash might well disappear.
No one was celebrating the recently released results of the 2012 PISA, which ranked American 15-year-olds below their peers in many other industrialized countries, particularly in math and science. But how meaningful and defensible are the intercountry comparative averages, given the differences in culture, educational opportunity, and backgrounds of the 15-year-olds tested in different nations?
Despite popular claims, these sample survey statistics also cannot tell us much about whether particular regional reforms failed or succeeded. Interpreted carefully, PISA results yield useful benchmarks within particular nations, opening opportunities for education systems to improve.
Misinterpretation of PISA's intercountry rankings, however, reflects a larger syndrome of misuse of educational assessment results and hand-wringing about public education that could easily be avoided.
Most standardized instruments rest on a solid base of scientific knowledge that dates back to the first half of the 20th century. These tools have reliably documented achievement gaps across ethnic, gender, and socioeconomic groups, furnishing policymakers, educators, and our society at large with evidence for improving conditions.
But misuse and misinterpretation of standardized-test results is a pervasive problem in educational assessment that threatens levels of validity, especially in high-stakes testing contexts. Here's an area where scholars and practitioners; test-makers and test users; educators, parents, and students; and the media could work together to make a difference.
These and other issues will be open for debate and discussion in a time-limited blog hosted by edweek.org, to be launched next week and facilitated by James Harvey of the National Superintendents Roundtable and me. Assessing the Assessments: K-12 Measurement and Accountability in the 21st Century will feature expert commentary from scholars and practitioners, offering a variety of perspectives on today's critical assessment challenges.