Evaluators in any context should have working knowledge of multiple evaluation models. Models provide conceptual frameworks for determining the types of questions to be addressed, which stakeholders should be involved and how, the kinds of evidence needed, and other important considerations for an evaluation. However, evaluation practitioners rarely adhere strictly to any one model (Christie, 2003). Rather, they draw on them selectively. Below are a few popular models:

EvaluATE has previously highlighted the Kirkpatrick Model, developed by Donald Kirkpatrick for evaluating training effectiveness in business contexts. It provides a useful framework for focusing an evaluation of any type of professional development activity. It calls for evaluating a training intervention on four levels of impact: reaction, learning, behavior, and results. A limitation is that it does not direct evaluators to consider whether the right audiences were reached or to assess the quality of an intervention’s content and implementation, only its effects. See bit.ly/1fkdKfh.

Etienne Wenger reconceptualized the Kirkpatrick “levels” for evaluating value creation in communities of practice. He offers useful suggestions for the types of evidence that can be gathered to evaluate a community of practice’s impact at multiple levels. However, the emphasis on identifying types of “value” could lead those using this approach to overlook evidence of harm or overestimate net benefits. See bit.ly/18x5aLc.

Three models that figure prominently in most formal evaluation training programs are Daniel Stufflebeam’s CIPP Model, Michael Scriven’s Key Evaluation Checklist, and Michael Quinn Patton’s Utilization-Focused Evaluation, described below. These authors have distilled their models into checklists; see bit.ly/1fSXu5H.

Stufflebeam’s CIPP Model is especially popular for education and human service evaluations. CIPP calls for evaluators to assess a project’s Context, Input, Process, and Products (the latter encompasses effectiveness, sustainability, and transportability). CIPP evaluations ask: What needs to be done? How should it be done? Is it being done? Did it succeed?

Scriven’s Key Evaluation Checklist calls for assessing a project’s processes, outcomes, and costs. It emphasizes the importance of identifying the needs being served by a project and determining how well those needs were met. Especially useful is its list of 21 sources of values/criteria to consider when evaluating nearly anything.

Patton’s Utilization-Focused Evaluation calls for planning an evaluation around the information needs of its “primary intended users,” i.e., those who are positioned to make decisions based on the results. He provides numerous practical tips for engaging stakeholders to maximize an evaluation’s utility.

This short list barely scratches the surface; for an overview of 22 different models, see Stufflebeam (2001). A firm grounding in evaluation theory will enhance any evaluator’s ability to design and conduct evaluations that are useful, feasible, ethical, and accurate (see jcsee.org).
References

Christie, C. A. (2003). Understanding evaluation theory and its role in guiding practice. New Directions for Evaluation, 97, 91–93.

Stufflebeam, D. (2001). Evaluation models. New Directions for Evaluation, 89, 7–98.

About the Author

Lori Wingate

Executive Director, The Evaluation Center at Western Michigan University

Lori has a Ph.D. in evaluation and more than 20 years of experience in the field of program evaluation. She is co-principal investigator of EvaluATE and leads a variety of evaluation projects at WMU focused on STEM education, health, and higher education initiatives. Dr. Wingate has led numerous webinars and workshops on evaluation in a variety of contexts, including CDC University and the American Evaluation Association Summer Evaluation Institute. She is an associate member of the graduate faculty at WMU.

Creative Commons

Except where noted, all content on this website is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

EvaluATE is supported by the National Science Foundation under grant numbers 0802245, 1204683, 1600992, and 1841783. Any opinions, findings, and conclusions or recommendations expressed on this site are those of the authors and do not necessarily reflect the views of the National Science Foundation.