Wednesday, June 2, 2010

A plea to slow down

Valerie Strauss is laying on her criticism of Race to the Top even thicker on her Washington Post blog.



And while I admit to being a proponent of reform, I can't help but think that Strauss is right about one thing: this all seems too rushed. She writes:

To make today’s deadline, many of the participating states rushed major education bills through their legislatures to meet the contest’s requirements and engaged in furious negotiations with unions against artificially set deadlines.


Unfortunately, this statement is all too true. Despite MSDE claims to the contrary, it's clear to me that stakeholders have been brushed aside in the effort to create quick change. I'm sure there's enough blame to go around, but my own school district and union have yet to say "boo" about where they stand on the issue of reform. I'm sure that statement is on the way, though.

Strauss goes even further:

[Legislatures] passed laws allowing more charter schools to open -- even though studies show that charter schools on average are no better than regular public schools -- and tying teacher compensation to standardized test scores, even though the tests aren't designed to assess teachers.


Academics call this the construct validity of a test, and it has always been a stumbling block in the use of high-stakes tests. For instance, a national algebra test might effectively measure the knowledge, skills, and abilities of an algebra student. But that same test cannot and should not be used to decide whether a student should graduate from high school. Why not? Because that was not what the test was designed to do when it was created. In other words, the test may be a valid measure of a student's math knowledge, but we cannot generalize from it to whether someone should graduate.

Likewise, that same algebra test cannot and should not be used to determine whether a teacher is an effective teacher. That is not what this particular test was designed to capture.

This does not mean that student work, or student performance on tests, cannot be used to help us evaluate teachers. However, it does mean we need to be wary of judging teacher effectiveness solely or primarily on student performance data drawn from these tests. And I'm not sure those conversations are happening as we speed through the RttT process.

What we really need first is a comprehensive evaluation system for teachers. And I believe Race to the Top has the cornerstones of a reform system that most hard-working teachers are dying to get behind. RttT demands that we more carefully identify which teachers (and principals) are strongest, which are weakest, and which fall somewhere in between. It demands that teachers and principals be rewarded when they show they are capable. And it demands that we dismiss those teachers who are unable to show improvement, rather than those who have simply been teaching the shortest length of time. These are real reforms for education, and reforms that will benefit students around the country if we take the time to get them right.

My blogging friends over in California have put real time and effort into this question of evaluation. And I applaud their efforts, though I believe evaluation systems can and should also include input from students, parents, and colleagues (both master and non-master teachers).

But with that said, I'll leave you with InterAct's basic tenets for comprehensive teacher evaluation:

Here are the principles on which improved evaluations should be constructed:
-Teacher evaluation should be based on professional standards.
-Teacher evaluation should include performance assessments to guide a path of professional learning throughout a teacher’s career.
-The design of a new evaluation system should build on successful, innovative practices in current use.
-Evaluations should consider teacher practice and performance, as well as an array of student outcomes for teams of teachers as well as individual teachers.
-Evaluation should be frequent and conducted by expert evaluators, including teachers who have demonstrated expertise in working with their peers.
-Evaluation leading to permanent status (“tenure”) must be more intensive and must include more extensive evidence of quality teaching.
-Evaluation should be accompanied by useful feedback, connected to professional development opportunities, and reviewed by evaluation teams or an oversight body to ensure fairness, consistency, and reliability.

1 comment:

  1. Mike,

    Thanks for the shout-out, and the thoughtful treatment of the issues. On the state testing question, I would even say that some state tests are not even valid measures for that one student. To get an accurate measure of one student's vocabulary, you'd probably need to ask more questions than the typical state test requires. But to get a sense of the vocabulary level of 1,000 or 100,000 students, you can ask each of them fewer questions. I think state tests are generally designed to get a snapshot of the state more than the student. Likewise, if one student misses one out of two questions about the French Revolution, how much do we know? If most of the students in a school, district, or state answer a certain way, then we have a sense of how the system is covering that content.

    On the evaluation front, I think you'd find that many of us who worked on the report are open to student and parent feedback, but that did not emerge as a consensus principle that would apply to all evaluations - all age levels, types of schools and teachers. Regarding peer feedback as part of evaluation, that's an interesting idea as well. I don't recall that we discussed it except in the context of trained peer evaluators who serve as a mentor/coach and possibly even serve on a panel that has stronger evaluative authority. If I were working on a team or grade level, I'd be interested in developing some protocols that involved mutual observations and feedback. Thanks for pushing the conversation forward.