Why Proposals Fail

Summer, as you know, is proposal season. I’ve been up to my neck (literally – these proposals are huge) in stacks of papers, reviewing ideas seeking support from various federal agencies. Regardless of the agency, proposals tend to fail for common reasons. Here’s my breakdown (and strictly mine – the weaknesses I identified were not always shared by my fellow panelists) of why proposals fail, in no particular order:

1. The evaluation plans don’t clearly match the project’s goals and objectives. If the project is seeking to change the consumer experience but the evaluation is only looking at production of the consumer good, it will never be able to tell whether the project has met its goal. This could mean a review of the evaluation plans OR a revision to the project’s goals and objectives.

2. The evaluation is not evaluative. No targets or performance standards are set. The way the evaluation is structured will only enable the project team, in the end, to say descriptive things about what the project did – not how good or worthwhile it was.

3. Surprisingly, experimental designs typically lacked a power analysis to determine whether the project’s planned sample size – and thus its recruitment targets – would be adequate. In the era of accountability – and at a time when technology allows us to see ahead of time where we should focus our efforts – there is no excuse for a missing power analysis, at least in those designs where it is called for.
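For anyone wondering how heavy a lift this is: it isn’t. A basic power analysis fits in a few lines. The sketch below uses the standard normal approximation for a two-sided, two-sample comparison of means – the function name and defaults are mine, for illustration, not from any agency’s guidance:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate n per arm for a two-sample comparison of means,
    using the normal approximation (illustrative sketch)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # two-sided significance threshold
    z_beta = z(power)           # desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A medium effect (Cohen's d = 0.5) at 80% power needs about 63 per arm
print(sample_size_per_group(0.5))
```

That number – roughly 63 participants per arm in this example – is exactly the kind of concrete target a reviewer wants to see recruitment plans checked against.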

4. Letters of support were clearly written by project staff and cut-and-pasted by the supporters. Letter content was identical, save for the letterhead and signature line. I know it is unreasonable, in most cases, to ask supporters to draft original letters. However, the letters I saw frequently left out key responsibilities of the supporting organizations. For example, if the school district will need to commit to providing control condition classrooms, where no benefit to participation will be derived, that needs to be clearly agreed to up front. The danger is looking like your evaluation isn’t well-planned and hasn’t been thoroughly communicated to all parties.

5. The evaluation organization appears to have the collective experience necessary, but the specific individuals assigned in the proposal have no direct relevant experience in the tasks on the table. Too much narrative space is spent defending the established history of Evaluation Consultants, LLC, particularly when the actual evaluation staff bios – buried in the appendices – are weeeeeeeak.

6. It pains me to even have to write this one – but sometimes I saw proposals that did not yet have an evaluator identified. Sheesh! It is okay, Principal Investigators, to contact an evaluation team during proposal development and ask them to help you draft your evaluation plan. They will probably even help you write the evaluation section of the proposal. You might want to draft a memorandum of understanding that ensures they will be selected as your evaluator, should the award be granted. In the evaluation business, most of us are used to devoting a little free time to writing up plans that are (or sometimes aren’t) funded in the future. It is part of our work and it is okay for you to start talking to your evaluator the moment you start thinking about your program. In fact, it is highly encouraged. What? You don’t have one now? Go forth!

Okay, there it is – the top six reasons I saw proposals fail this summer. I’m hoping next year it will be a totally different bag. Did you see something different? Post it in the comments!

Don’t Even Try

I love being on the other side. I am in the midst of reviewing evaluator letters of interest – miniproposals – to evaluate one of my work projects. Rarely am I in the position to need the evaluator. Usually I am the one submitting my ideas and credentials. The pile sitting in front of me holds an incredible range of quality. For some, I am honored that they would be interested in working with us. For others, I am reminded of a mistake I made early on in my professional evaluation career.

I was hired on to a grant, which had proposed to evaluate a community initiative, after the proposal was accepted and funding had landed. My team was geeked, particularly because the local community initiative had been so successful, other cities were adopting the model. We saw this rapid replication as an opportunity – perhaps even as a meat market. Hmmmm, which one of these pretties shall we go after? We, naturally, went for the largest, the richest, the most popular options and courted those community leaders around the country. We submitted evaluation proposals to them that were all basically the same, with selected search-and-replacing. At the time, I had never actually written an evaluation proposal and I use my naivete as an excuse, thankyouverymuch.

When the first rejection letter was returned to us, I was devastated (I mean, I cried. First rejection.) It was from Denver. And their chief complaint was that the proposal didn’t reflect an understanding of the Denver context. We had talked about this particular community initiative being so necessary because the larger community of Fill-In-The-Blank was a waning industrial center that needed revitalization. Hello? Been to Denver lately? That’s not them at all. They were right to reject us. We should have done more homework before submitting that proposal.

The same mistakes are sitting in front of me: boilerplate language that shows no evidence of even trying to understand who we are and what we do. While this might seem like an easy strategy (and who knows, one of the 400 letters sent out might actually land a job…), one shouldn’t be surprised by rejection. Just like the guy who sidles up to me at the bar, I am thinking in my head, “don’t even try.”