LESSON 11

THE EVALUATION PLAN: HOW CAN YOU BE SURE IF YOUR PROGRAM WORKS?

OPENING REMARKS

I was evaluating a grant-funded program designed to help a small number of homeless families in shelters find and remain in permanent housing by providing intensive, long-term assistance.

A variety of measures showed that it was a very successful program. For example, the families remained in permanent housing significantly longer than families without this assistance—and this information enhanced the organization’s later proposals. But staff interviews indicated that the staff felt overwhelmed and burned out by the amount of work they were doing and thought there was a need for more staff. A review of the case files showed that they were overworked, but not because they had to be. It turned out that in addition to the small group of clients who were their primary focus, they were providing information or short-term help to hundreds of other families in the shelters; they couldn’t say no.

Among other recommendations, I suggested that the program managers either rethink their objectives—and possibly hire additional staff to do the short-term assistance—or retrain staff to focus only on the smaller group that was the original target population.—ASF

LEADING QUESTIONS

I Have Four (or Three, or Two, or One) Programs to Run. Who Has Time for Evaluation?

First of all, you’d better make time if you ever want another grant from the funding organization that asked for the evaluation. In the wake of corporate scandals in the early 2000s and the financial-sector meltdown in 2008, Congress passed new laws requiring, among other things, greater transparency and accountability in the private sector, and it has been working to apply these regulations to the nonprofit sector. Although some of these regulations have since been dropped or modified, foundations and government funders (often under pressure from their boards or the legislative bodies that set the rules for government programs) increasingly are looking for proof that their grants make a significant impact in the lives of people and in the community.

Besides, as you’d agree if you had time to think about it, you really do want to know whether the programs that keep you so busy are operating the way you want them to, are meeting your objectives—and are worth the effort. Too many people wait to think about evaluation until the report is due, a surefire way to create serious headaches for everyone involved with the program and no way to measure its success or failure.

Remember: Programs aren’t funded and conducted to look good in reports. They are designed, funded, and implemented to address compelling problems. Well-designed and well-executed evaluations tell you if you’ve been successful.

The best time to think about the evaluation is when you’re first designing a program, when you can select those outcomes and indicators that will satisfy you that the program is working, whether the grant application actually requires an evaluation plan or not. If the information is carefully and systematically collected, and it convinces you that you’re getting the results you intended, it almost certainly will be sufficient to convince the funder—and potential funders—as well. And every grant proposal should discuss how a program will be evaluated. It’s common sense (we hope) to describe to a funder how you will know if the program is working and what you’ll do if it’s not. Don’t let the absence of an evaluation requirement stop you. Your evaluation plan can be addressed, at least briefly, along with your measurable objectives or in your program description. The key to any good relationship with funders is transparency; you need to keep this in mind as you plan.

My Staff Members Work Hard. They Know If Programs Are Working. Why Do They Have to Have Someone Evaluating Them?

People don’t like the feeling that someone is looking over their shoulders, monitoring everything they do. But that really is not what an evaluation is about. Almost all program staff members and managers we’ve ever met are interested in how well their program is doing and whether their teenagers or seniors or children or students of English for speakers of other languages are getting something out of it; more important, they want to know whether participants are getting what the staff hope they will. And almost all staff members and managers truly want to know how they can improve their programs. The purpose of an evaluation is not to judge an individual worker but to consider the entire program and determine what works, what doesn’t, and how to fix what needs fixing—and sometimes, as in the evaluation described in the Opening Remarks, to reduce the amount of work a staff person is doing!

I Work for a City Government Agency. Of Course We Know How to Run Grant Programs. But Federal Applications Ask for Such Complicated Evaluation Plans…

If a government agency is giving grants for hundreds of thousands of dollars—often millions of dollars—of taxpayers’ money over one, two, or three years, why wouldn’t it expect to see the most rigorous, methodical evaluation plan? Proposal writers, like program staff, sometimes take offense at the whole notion of evaluation, as if the funder is prying or being just plain nosy. The evaluation plan should be viewed as an important element of the proposal, linked in an orderly way to the objectives and the activities planned to achieve the objectives. As the program is developed, the evaluator should be involved in the process of identifying realistic, measurable objectives.

Whoa… You Said “Evaluator”! Do I Need to Hire—and Pay for—an Evaluator?

The answer to this question depends on your organization’s capacity to develop meaningful evaluation plans and conduct meaningful evaluations, whether you are a not-for-profit organization or a government agency or a school district. Federal or state grants often require that the applicant spend a specific percentage of the grant funds on evaluation activities. That’s usually a big hint that you should work with an outside evaluator—whether it be someone from a local university, a research organization, a state or city agency that conducts evaluations, or an individual consultant.

Most evaluators who work with government and nonprofit agencies understand the grants process and are willing to help develop the proposal (and the evaluation plan) “on spec,” meaning they get money only if the grant is funded, even if they donate considerable time to the planning process. If the evaluator expects to be paid to participate in the development of the proposal for a grant, this is probably not the right evaluator for you. (At least, it wouldn’t be the right evaluator for us.) It is reasonable for not-for-profit organizations and government agencies to try to find competent evaluators who do not view a grant as a cash cow. Evaluators should be as much a part of the program development team as every other partner, helping to define and refine the objectives in measurable terms and devising a comprehensive plan that will be included in the grant proposal.

Some evaluators—especially if they are from universities—also may be willing to donate space for meetings and activities, recruit student interns, or provide professors’ expertise for all facets of the program. In this way, the evaluators become real partners—collaborators—not just hired hands. And remember, colleges and universities are often eager to work with and support nonprofits and municipal agencies for many reasons—an important one being to build good will in the community. Even if you are a grassroots organization just getting started, you might want to approach a local college for help figuring out evaluation and other strategies for your program.

The cost of conducting the program evaluation should be outlined in the budget; sometimes a separate evaluation budget should be attached, explaining how many people will be conducting the evaluation and in what capacity, along with other relevant details. The evaluator should be able to help with this too.

DISCUSSION

An evaluation, like a needs statement, can range from the simple collection of information on a few indicators (e.g., attendance, demonstrated improvement in a skill, or other concrete measures) to extremely complex research projects that can assess the long-term outcomes of the program or compare it with other programs to determine which is most effective. Evaluations generally are of two types: process evaluation and outcome evaluation. Whenever possible, it’s smart to use both, as did the evaluation in the Opening Remarks.

Process evaluation. Process evaluations, sometimes called formative evaluations, are used to assess the functioning of the project and provide feedback to allow for program corrections. Process evaluations consider such questions as whether activities are occurring when and where they should, who is receiving the services, how well they are being implemented, whether they could be done more efficiently, and whether participants are satisfied.

Process evaluations generally make use of qualitative methods, which might include focus groups, personal reports, observation notes, case files, surveys, and interviews. You might use this type of evaluation during the first year of a new program, or you might maintain some form of process evaluation throughout the life of the program to keep it functioning at the highest level.

Outcome evaluation. As the name indicates, outcome (or summative) evaluations measure outcomes, program effectiveness, and the program’s impact on the problem that it is designed to address. The questions that outcome evaluations raise include whether program objectives have been achieved, whether the target population has changed as a result, whether unanticipated results have occurred and whether they are desirable, what factors may have contributed to the changes that have occurred, how cost-effective the program is compared with others with the same objectives, what impacts the program has had on the problem, and what new knowledge has been generated.

Outcome evaluations generally, but not always, are formal in approach and designed according to professional research procedures. They use primarily quantitative methods but may draw on systematically obtained qualitative data to help explain the research findings. Such evaluations probably would involve “before and after” measures of attitudes and/or behavior and/or knowledge of members of the intervention group (the group that experiences the program) and at least one control or comparison group (which does not receive services, at least until a later time).
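For a simple quantitative design of this kind, the core of the analysis is often just a comparison of average change between the intervention group and the comparison group. Here is a minimal sketch in Python, offered only to show the structure of the calculation; the scores and variable names are invented for illustration and do not come from any real program.

```python
# Minimal sketch of a "before and after" comparison between an
# intervention group and a comparison group. All scores are
# hypothetical and stand in for any pre/post measure (test scores,
# attitude scales, and so on).

from statistics import mean

# Pre- and post-program scores for participants (intervention group)
intervention_pre = [52, 61, 47, 58, 66, 50]
intervention_post = [68, 72, 60, 70, 75, 63]

# Pre- and post-program scores for a comparison group that did not
# receive services during the same period
comparison_pre = [55, 60, 49, 57, 64, 51]
comparison_post = [57, 62, 50, 60, 66, 52]

def average_change(pre, post):
    """Average per-person change from the pre measure to the post measure."""
    return mean(after - before for before, after in zip(pre, post))

intervention_change = average_change(intervention_pre, intervention_post)
comparison_change = average_change(comparison_pre, comparison_post)

print(f"Average change, intervention group: {intervention_change:+.1f}")
print(f"Average change, comparison group:   {comparison_change:+.1f}")
print(f"Difference tentatively attributable to the program: "
      f"{intervention_change - comparison_change:+.1f}")
```

A real outcome evaluation would, of course, rely on validated instruments and appropriate statistical tests chosen by the evaluator; the point here is only the logic of the design: measure before, measure after, and compare the change against a group that did not receive the services.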

Which specific data collection methods are used depends on the nature of the evaluation and the questions to be answered. They may include standard attitudinal or behavioral measures that have been tested on similar populations, or they may be developed and tested for the specific target population. Questionnaires, observations, systematic collection of data from various sources, and similar techniques may be used.

Keep in mind that a program’s impact on a broad population or community might require multiple measures over time and might be well outside the scope of your project or your ability to evaluate. It may take the resources of many researchers just to identify which factors need to be examined to determine the real outcomes. You should be comfortable describing the need for long-term evaluation to funders, and perhaps suggesting that additional funding be provided to conduct such an evaluation, or that it become the subject of a separate grant.

Okay, Evaluation Is Useful. But How Much of the Evaluation Design Do I Have to Put into the Proposal?

At the most basic level, unless you’re writing a proposal for a large grant that requires an external evaluation—in which case the evaluator will write the section—you often just need to let the reader know that you care about outcomes, want to know whether you’ve succeeded, and have thought carefully about how you will know. If you can make the case that collecting a few pieces of information, and looking at changes in those indicators between the beginning and end of a program, will tell you whether the program has worked, and if that information clearly relates to your objectives, the reader probably will accept this as a reasonable effort at evaluation.

Some examples might include the number of adults who were hired for jobs after completing a job training program and, if you can follow them over time, the number who remain in those jobs; children’s reading scores before and after a semester of tutoring in an after-school program; or the number and percentage of teenagers accepted to college after a college-bound program, along with the number and dollar value of the scholarships they receive. A longer-term measure might track whether they stay in college and whether they graduate. You might use a standardized measure to assess older persons’ feelings of depression or loneliness before and after they’ve participated in a discussion club for a few months.
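If your organization keeps even a simple list or spreadsheet of participant records, indicators like these can be tallied with very little machinery. The short Python sketch below shows the kind of tally that would support the job-training example above; the participant records, field names, and follow-up period are entirely hypothetical.

```python
# Minimal sketch of simple indicator tracking for a single cohort.
# The records below are invented for illustration; a real program
# would pull them from its own intake and follow-up files.

participants = [
    {"name": "A", "completed_training": True,  "hired": True,  "employed_6_months": True},
    {"name": "B", "completed_training": True,  "hired": True,  "employed_6_months": False},
    {"name": "C", "completed_training": True,  "hired": False, "employed_6_months": False},
    {"name": "D", "completed_training": False, "hired": False, "employed_6_months": False},
]

completers = [p for p in participants if p["completed_training"]]
hired = [p for p in completers if p["hired"]]
still_employed = [p for p in hired if p["employed_6_months"]]

def pct(part, whole):
    """Percentage of `whole` represented by `part`, guarding against an empty denominator."""
    return 100 * len(part) / len(whole) if whole else 0

print(f"Completed training: {len(completers)} of {len(participants)} "
      f"({pct(completers, participants):.0f}%)")
print(f"Hired after completion: {len(hired)} of {len(completers)} "
      f"({pct(hired, completers):.0f}%)")
print(f"Still employed at six months: {len(still_employed)} of {len(hired)} "
      f"({pct(still_employed, hired):.0f}%)")
```

The mechanics matter far less than the habit: decide at the start which indicators you will track, record them consistently, and report them against your objectives.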

How Do I Decide What to Include?

We can’t tell you exactly what to include, of course; it depends on your program and your resources. But here are some basic principles that underlie the evaluation section of a proposal and will be at the back of a reviewer’s mind.

• The linkages between the activities (program components) proposed and the expected outcomes of the program must be clear and explicit in the evaluation design. This linkage is spelled out in the program objectives. Here’s an example: By the end of the project, 80 percent of the 21- to 24-year-olds who participate in a comprehensive work-study program will pass their GED exams. The GED scores of the work-study participants will be collected and analyzed.

• “Dosage,” the actual amount or level of services provided, can influence outcomes. For example, children who participate for two hours several times a week in a tutoring program may show better results than children who participate for an hour once a week—or may not, which has implications for program planning. Another way to think about dosage is in terms of the intervention: what activity, and how much of it, the target population actually receives, which can be expected to yield a measurable result.

• As suggested by the last example, negative findings can be just as important as positive results because they help in understanding why a program did not work and how it might be modified to be effective. Why continue providing many hours a week of tutoring when one hour is sufficient?

• Qualitative data (informal interviews, for instance) can be useful in determining the effectiveness of an intervention, especially when quantitative data (e.g., scores on standardized tests) are not appropriate or available. Qualitative data also can supplement quantitative data to explain the results and make the evaluation even more comprehensive.

Reviewers will look at the evaluation section to see if it answers several important questions beyond the obvious ones: How did the project work? Were the specific objectives achieved? Which ones were or were not? Were the activities that you planned actually conducted the way you planned? (You may have planned for four workshops to take place in the evening but revised that plan when too few people wanted to come out after dark. Instead, you ran the workshops on Saturday mornings, and you provided child care so people could attend—which greatly improved attendance.) Were there any unexpected world, national, or local events that seemed to affect the success of your project? What did you learn from them? Did the staff members who were hired to run the project follow the job descriptions stipulated in the proposal? Was there community and/or organization buy-in—and if so, how do you know? Could the project be replicated by other organizations in your town or across the country as it is, or should elements be changed for replication purposes?

This Can Be Serious Stuff

As we discussed earlier in this lesson (and in Roundtable 1, at the beginning of the book), funders increasingly want to know that their grants have a real impact on the problem they’re intended to solve. Many grants, especially those from the federal government or large foundations, may require a systematic outcome evaluation of the funded project, using valid and reliable measures (as opposed to homemade quizzes and surveys), to demonstrate whether the program has had an impact and why it did or did not. In many cases the funding agency expects, or even requires, that the grantee hire an external evaluator, and tells you in the guidelines to build the costs of the evaluation into the budget. But even when the evaluation is done “in house,” it should be as rigorous as possible.

If you think you may be interested in seeking funding to test a model program that you have created (or are replicating), you should establish a relationship with a college or a consulting group that has experience with the funding agency or experience evaluating the type of program proposed. As we said before, the earlier an evaluator becomes involved, the more useful the evaluation plan will be and the stronger the proposal is likely to be. Ideally, the evaluator will help you formulate realistic goals, achievable objectives, and even appropriate activities that can be expected to lead to the results you hope to see. The relationship between you and the evaluator is a special type of collaboration.

POP QUIZ

True or False?

1. Evaluation is only necessary for large government grants.

2. Process evaluation looks at how many, how much, how well, and how often.

3. A large, multiyear grant should have only an outcome evaluation to tell you what the program’s impact was.

4. If a grant application requires an outside evaluator, you have to hire a consultant to help you with the evaluation section of the proposal.

5. An evaluation has to be a formal process using an academic research approach.

6. An evaluation considers only statistical data.

7. Whew… the grant application doesn’t require an evaluation plan, so I’m off the hook.

8. The best time to think about evaluation is at the very beginning—when you first begin designing the program.

9. Small demonstration projects don’t need to be evaluated.

10. Frankly, you should always work with an outside evaluator, even if you are applying for a $5,000 grant.

Short Answer

Describe three important things that you can learn from an evaluation.