Any of several analytic methods or designs may be proposed to evaluate a particular program. The most common are case studies, simple descriptive statistics, before/after comparisons, cohort studies, time series analyses, cross-sectional comparisons, cost-benefit analyses, controlled experiments, and quasi-experimental designs. Some are quicker and easier to carry out; others are more complex and time-consuming.
Complicated research designs or sophisticated statistical analyses that take long periods of time to complete do not necessarily lead to better results. Often, useful information can be gleaned from comparatively simple designs and basic data analysis that can be completed in a reasonable time frame.
The following sections describe the different types of evaluations. More information about and examples of evaluations are available here.
Case Studies
A case study presents detailed information about a particular participant or small group, frequently including the accounts of subjects themselves. A form of qualitative descriptive research, case studies look closely at an individual or small group of participants, drawing conclusions only about that participant or group and only in that specific context. Case studies do not seek findings that can be generalized to other groups, nor cause-and-effect relationships; instead, they emphasize exploration and description.
Case studies typically examine the interplay of multiple variables in order to provide as complete an understanding of an event or situation as possible. They are often referred to interchangeably with ethnography, field study, and participant observation. Unlike quantitative methods of research, case studies are preferred when "how" or "why" questions are asked. They are also used when the researcher has little control over the events, and when there is a contemporary focus within a real-life context.
Advocates of case studies suggest they produce much more detailed information than is available through statistical analysis. On the down side, case study findings are difficult to generalize because of their inherent subjectivity, and because they rest on qualitative, subjective data, they are relevant only to a particular context.
Before/After Comparisons
Collecting data on project measures before and after the project intervention helps assess the possible impact of the project. Quantitative data collection begins before project interventions start (the baseline point) and continues throughout the project period and beyond. Measures may be plotted on a graph to show increases or decreases in variables or outcomes over time. Pre-intervention and post-intervention periods (e.g., the number of arrests during a pre-intervention time period and for the same period after the project intervention) may be compared. Outcome measures (e.g., fear of crime assessed via neighborhood surveys) that are not available on a continual (e.g., monthly) basis may be gathered before interventions begin and again after the interventions have been implemented.
If the desired changes occur after the intervention, this supports the assertion that the project caused them. Confidence in the findings of pre-post analyses depends on whether other factors changed as well. That is, did something else occur that could be claimed as the cause of any measured change?
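The basic pre-post comparison described above can be sketched in a few lines. This is a minimal illustration with made-up arrest counts, not real data; the six-month windows and the figures are assumptions chosen for the example.

```python
# Hypothetical pre/post comparison: monthly arrest counts for the same
# six-month window before and after an intervention (illustrative data).
pre_arrests = [42, 38, 45, 40, 44, 41]    # baseline months
post_arrests = [35, 31, 33, 30, 34, 29]   # post-intervention months

pre_mean = sum(pre_arrests) / len(pre_arrests)
post_mean = sum(post_arrests) / len(post_arrests)
change = post_mean - pre_mean
pct_change = 100 * change / pre_mean

print(f"Mean monthly arrests before: {pre_mean:.1f}")
print(f"Mean monthly arrests after:  {post_mean:.1f}")
print(f"Change: {change:+.1f} ({pct_change:+.1f}%)")
```

As the paragraph above cautions, a drop of this kind supports, but does not prove, a project effect: something else may have changed over the same period.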
Time Series Analyses
Taking several measures before and after an intervention can compensate for fluctuations in conditions. This design, called time series analysis, first observes trends in the conditions that exist before the project begins and then statistically projects that trend for some time into the future, past the project's implementation. By comparing what actually occurred with what would have been expected to occur had the project not been implemented, the evaluator can attribute any departure from the expected trend to the project.
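The projection step can be sketched as follows: fit a linear trend to the pre-intervention observations, extend it over the post-intervention months, and compare projected with observed values. All numbers are illustrative assumptions, and a real analysis would typically use more observations and account for seasonality.

```python
# Hypothetical time series sketch: project the pre-intervention trend
# forward and compare it with post-intervention observations.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

pre = [50, 52, 51, 54, 53, 55]            # months 0-5, before the project
observed_post = [49, 47, 46, 44, 43, 41]  # months 6-11, after the project

slope, intercept = linear_fit(list(range(len(pre))), pre)
expected_post = [slope * m + intercept for m in range(6, 12)]

for m, (exp, obs) in enumerate(zip(expected_post, observed_post), start=6):
    print(f"month {m}: expected {exp:.1f}, observed {obs}, gap {obs - exp:+.1f}")
```

Here the pre-intervention trend was rising, so the gap between the projected and observed values, rather than the raw pre/post difference, is the quantity attributed to the project.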
Quasi-Experimental Studies (Pre-Post with Treatment Group/Area and Comparison Group/Area)
Where random assignment of participants to treatment or control groups is not feasible, the evaluator may propose a quasi-experimental design to approximate the advantages of random selection. Adding a comparison area or group to an impact evaluation gives one even more confidence to say that a project has been responsible for observed changes in a project population. A comparison area is an area comparable to the project area (in key ways, such as demographics, geography, policing practices, etc.) that does not have the particular project being evaluated. A comparison group contains people of the same age, race/ethnicity, gender, socioeconomic status, severity of condition, or other key attributes as those in the group receiving project services, but does not receive the project intervention.
The strength of this design rests on the extent to which all the influential characteristics are accounted for in selecting the comparison group/area. Ideally, the comparison area/group experiences no changes in factors affecting the evaluation measures that the project area/group does not also experience. This can be a challenge in project evaluation, especially if there is a time lag between project implementation and completion of the evaluation.
Because the use of a non-random comparison group/area does not eliminate all alternative explanations for the relationship between intervention and outcome, this type of design requires more complicated analyses and yields less definitive results than experimental designs.
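One common analysis for a pre-post design with a comparison group is a difference-in-differences comparison: the change in the comparison area stands in for what would have happened without the project, and the project effect is estimated as the treatment-area change minus the comparison-area change. The figures below are illustrative assumptions, not real data.

```python
# Hypothetical difference-in-differences sketch for a quasi-experimental
# design (illustrative burglary counts).
treatment = {"pre": 60, "post": 45}   # project area
comparison = {"pre": 58, "post": 55}  # comparable area without the project

treatment_change = treatment["post"] - treatment["pre"]     # change in project area
comparison_change = comparison["post"] - comparison["pre"]  # change elsewhere
did = treatment_change - comparison_change                  # estimated project effect

print(f"Estimated project effect (difference-in-differences): {did}")
```

This estimate is only as credible as the comparison area itself: if the two areas would not have followed parallel trends absent the project, the difference-in-differences figure misstates the effect, which is why the design yields less definitive results than a randomized experiment.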
Experimental Designs (Pre-Post with Treatment Group/Area and Control Group/Area)
The investigative technique that provides the maximum control, so that the relationship between a particular element of a project and the desired outcome can be isolated fr