Without a TV in my home, I haven’t been privy to the awesomeness that is Rachel Maddow until last night, in my hotel room in Little Rock. She was speaking with Richard Holbrooke, US special rep for Afghanistan and Pakistan. Maddow had recently visited Afghanistan to report on the front there, so of course, the topic of the interview was the ongoing war. Why am I blogging about war in the context of evaluation?
Maddow tells Holbrooke about the police forces she saw in Afghanistan, where, in January of THIS YEAR, recruits were exiting marksmanship classes with a 30-35 percent accuracy rate. Then the US worked out some magical intervention that, after only a few months, raised the exit accuracy rates to 95 percent. Our allied forces are now much better at killing people. Whew.
But the clincher was when Maddow, only half-rhetorically, asks her guest: What the hell have we been doing there for the last eight years???
How was the situation allowed to go on for so long with such poor performance, at the expense of thousands of lives and billions of taxpayer dollars? Well?
A lack of evaluation got us there, of course. It was the simple fact that no one took the care to collect and/or use data on marksmanship accuracy.
But much like any social service intervention that promises a lot and shows early signs of success, we still have work to do. A single posttest is necessary but insufficient. Continued evaluation will be needed to demonstrate that the skill levels have been maintained two months, six months, one year down the road. The story is not yet complete, Ms. Maddow. Keep asking the hard questions.
To view part of the interview: http://www.msnbc.msn.com/id/26315908/
Posted by Stephanie Evergreen on July 14, 2010
I found the cutest old-man optometrist. He puttered around the room, in cute old man fashion. He had a little cute old man mantra: “if it ain’t broke…”
Him: Are your contacts working okay for you?
Me: Sure, I guess.
Him: Well, if it ain’t broke…
Me: But aren’t you going to check my eyes???
He eventually did. But he must have repeated his mantra three or four more times during our appointment together.
It was while I was waiting for my eyes to dilate that I realized how “if it ain’t broke…” might be the worst phrase for an evaluator to hear. Why wait until things are broken to start fixing them? Waiting until things are broken means enduring a period of decline, a period of broken-ness, and a period of rebuilding to get things back to the same operating level as before. That sort of downtime impacts an organization’s productivity, effectiveness, and bottom line. When there are clear patterns and signposts established (especially in the eyecare industry), it would be much more efficient to watch for those early warning signals and take action, rather than wait until it is broken. This is why pattern recognition is such a valuable skill for evaluators.
Now whenever I hear “if it ain’t broke…,” I cringe. Must be hard to examine my eyes that way.
Posted by Stephanie Evergreen on June 16, 2010
This post has been a long time coming.
In the not-so-distant past, I tried to publicly criticize (I know, I know…) how the authors of an evaluation book mistaught formative and summative evaluation. Not such a big deal if they are personally in error, but a much larger offense when publishing. As a brief review:
Formative: When evaluation findings are used, typically internally, to make improvements to the organization. As Stake put it, “when the cook tastes the soup.”
Summative: When evaluation findings are used, typically externally, to make judgments about the organization. As Stake put it, “when the guests taste the soup.”
The authors in question tried to establish that formative was when an evaluation looks at the activities of an organization. By contrast, they said summative was when the evaluation looks at the impacts of those activities. Of course, this is not exactly the case. For example, evaluative information about the impacts of an organization can be used to make judgments, yes (that’s summative), but can also be used to make improvements to the organization (formative, here). So the authors were incorrectly conflating why organizations do evaluation (formative or summative) with the organizational areas an evaluation can examine (activities v. impacts).
My rant about this mistake began with “These ‘experts’…” and ended with “…and make twice as much as me.” (In other words, a typical tirade from me.)
But my listeners shut it down. They agreed that I was correct, but condemned my urge to be so public in my critique, saying something to the effect of “a lot of people make this same mistake.” I am fairly sure the larger mistake may be to let such misconceptions go uncorrected.
And now you have had your vocabulary lesson for the day. It might make you smarter than your average evaluator.
Posted by Stephanie Evergreen on May 5, 2010