Saturday 8 February 2014

Program Evaluation Model Selection

For this program I suggest a primarily goals-based evaluation focus (as defined by McNamara, 2002) within the framework of a case study evaluation model (as defined by Stufflebeam, 2001). I see goal specification as conspicuously absent from the information provided in the program document. It talks at some length about incidence of, and risk factors for, gestational diabetes mellitus (GDM).

However, there are no statements regarding targeted outcomes of the program. Is the goal of the program decreased incidence of GDM, increased education about risk factors, better overall health, or establishment of a social support network?

Somewhat tangential to goals-based evaluation is the question of participation level. A very high percentage of inquiring women participated (69%), but a very low percentage of the estimated population inquired (11%). Were there resources in place to accommodate 69% participation at the population level? That could potentially be 300 women (plus buddies) looking for child care, bathing suits, food, exercise space, and so on. The participation question starts to move the evaluation towards a process-based model, but I believe it ties into the goal of serving the target community.

The main goal of a case study model (Stufflebeam, 2001) is to “delineate and illuminate” (p. 34) a program rather than to direct it or assess its worth. Its strengths include the incorporation of various stakeholders and multiple methodologies. Perhaps most importantly in this application, a case study is a natural fit for a focused program evaluation. Its primary limitation is a direct offshoot of that strength: it is not well suited to whole-program evaluation. As that is not a requirement in this situation, the strengths of this model greatly outweigh its limitations.

References 

McNamara, C. (2002). A Basic Guide to Program Evaluation. Retrieved from the Grantsmanship Center website: http://www.tgci.com/sites/default/files/pdf/A%20Basic%20Guide%20to%20Program%20Evaluation_0.pdf

Stufflebeam, D. L. (2001). Evaluation Models [Monograph]. New Directions for Evaluation, 89, 7-98. doi:10.1002/ev.3

Saturday 1 February 2014

Program Evaluation Assessment

The Upward Bound Math and Science initiative (UBMS) was established in 1990 within the scope of the existing Upward Bound (UB) program. It was created, within the United States Department of Education, with the goal of improving the performance and enrollment of economically disadvantaged K-12 students in math and science. Additionally, it sought to address the underrepresentation of disadvantaged groups in math and science careers. These goals were pursued by funding UBMS projects at colleges and universities that provided hands-on experience in laboratories, at field sites, and with computers, and through summer programs targeted towards university preparation in math and science. UBMS grew from 30 programs in 1990 to 126 programs in 2006, serving 6,707 students at an annual cost of $4,990 per student.

There have been evaluations of UBMS in 1998, 2007, and 2010. My assessment will focus on the 2010 evaluation, which combined data from 2001-2002 and 2003-2004 with a focus entirely on postsecondary outcomes. Each subsequent evaluation builds on previous work, which presented the first difficulty I had with this report. Old data were often presented for comparison with new data and, in some situations where new data were not available, old data were incorporated into the analysis. Without very careful reading it was often quite difficult to distinguish what was being reported.

This was a purely summative, outcomes-based evaluation presented exclusively from a quantitative, statistical perspective. The summative, quantitative approach is effective in justifying the cost per student because it provides hard numbers to compare against hard numbers. The method does not match up particularly well with any specific program evaluation model, although it comes closest to goal attainment as defined by Tyler, since the purpose of the report is to connect the objectives of UBMS with measurable outcomes.

The evaluation had two components. The first was a descriptive analysis of survey information about staff credentials and demographics, student recruitment and enrollment, program offerings, and other demographic aspects; this analysis was not the focus of the study and was used solely for descriptive purposes. The bulk of the study consisted of an impact analysis in which outcomes of UBMS participants were measured against participants in the regular Upward Bound program rather than the general populace. This enabled measurement of the additional success provided by UBMS over and above any success achieved by UB. However, since the successes of UB are not enumerated in this evaluation, the numbers presented are somewhat misleading, as they are primarily relevant in comparison to data that are missing.

Even with this caveat, the results are impressive. The evaluation found statistically significant increases in enrollment at four-year institutions (12%), enrollment at selective institutions (18.6%), enrollment in math and science courses (36.5% more credits completed), and social science degree completion (11%). There was also an increase in completion of math and physical science degrees, though it was not statistically significant. Overall, despite some minor data and analysis issues, the UBMS evaluation paints a clear picture of the program's successes.