Conducting evaluations for multisite projects presents unique challenges and opportunities. For example, evaluators must take care to capture consistent data across sites, which can be difficult. At the same time, results from multiple sites can support stronger conclusions about an intervention's impact. The following are helpful tips for evaluating multisite projects.

1. Investigate the consistency of project implementation. Just because the same guidelines have been provided to each site does not mean that they have been implemented the same way! Variations in implementation can complicate both data collection and the interpretation of evaluation results.

2. Standardize data collection tools across sites. This will minimize confusion and yield a single dataset with information on all sites. The tradeoff is that the combined dataset may be limited to the subset of information available across all sites (see the first sketch after this list).

3. Help the project managers at each site understand the evaluation plan. Provide a clear, comprehensive overview of the plan, including what is expected of the managers, and simplify their roles as much as possible.

4. Be sensitive in reporting side-by-side results of the sites. Consult with project stakeholders to determine whether it is appropriate or helpful to include side-by-side comparisons of the sites' performance.

5. Analyze the extent to which differences in outcomes are due to variations in project implementation. Variation in results across sites may provide clues about factors that facilitate or impede the achievement of certain outcomes (see the second sketch after this list).

6. Report the evaluation results back to the site managers in whatever form is most useful to them. This is an excellent opportunity to recruit the site managers as supporters of evaluation, especially if they see that the results can aid their participant recruitment and fundraising efforts.
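
For tip 2, here is a minimal sketch of combining standardized site data into a single dataset, assuming Python with pandas; the file names, site labels, and columns are hypothetical, not from the original handout.

```python
import pandas as pd

# Hypothetical per-site exports from a shared, standardized collection tool.
site_files = {"site_a": "site_a.csv", "site_b": "site_b.csv", "site_c": "site_c.csv"}

frames = []
for site, path in site_files.items():
    df = pd.read_csv(path)
    df["site"] = site  # tag each record with its site
    frames.append(df)

# join="inner" keeps only the columns present in every site's file,
# that is, the subset of information available across all sites.
combined = pd.concat(frames, join="inner", ignore_index=True)
combined.to_csv("all_sites.csv", index=False)
```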
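For tip 5, here is a minimal sketch of summarizing outcomes by implementation variant, continuing the hypothetical combined dataset above; the variant coding and the outcome_score column are assumptions for illustration only.

```python
import pandas as pd

combined = pd.read_csv("all_sites.csv")

# Hypothetical coding of each site's implementation approach,
# informed by the consistency review in tip 1.
implementation = {"site_a": "full", "site_b": "full", "site_c": "partial"}
combined["variant"] = combined["site"].map(implementation)

# Summarize a hypothetical outcome measure by implementation variant
# and by site to surface patterns worth a closer look.
print(combined.groupby("variant")["outcome_score"].describe())
print(combined.groupby("site")["outcome_score"].mean())
```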

* This blog post is a reprint of a conference handout from an EvaluATE workshop at the 2011 ATE PI Conference.

FOR MORE INFORMATION

Smith-Moncrieffe, D. (2009, October). Planning multi-site evaluations of model and promising programs. Paper presented at the Canadian Evaluation Society Conference, Ontario, Canada.

Lawrenz, F., & Huffman, D. (2003). How can multi-site evaluations be participatory? American Journal of Evaluation, 24(4), 471–482.

About the Author

Candiya Mann

Senior Research Manager, Social & Economic Sciences Research Center at Washington State University

Candiya Mann is the independent evaluator for several National Science Foundation (NSF) grantees across multiple programs, including 10 Advanced Technology Education (ATE) centers and projects. She specializes in K-16 education and youth workforce issues and has conducted evaluations for clients including the US Department of Labor, Washington State Office of the Superintendent of Public Instruction, United Way, school districts, community-based organizations, and workforce development agencies. Mann served on the advisory group for the NSF ATE Evaluation Community of Practice. She is a senior research manager with the Social and Economic Sciences Research Center at Washington State University, where she has spent over 18 years.

EvaluATE is supported by the National Science Foundation under grant numbers 0802245, 1204683, 1600992, and 1841783. Any opinions, findings, and conclusions or recommendations expressed on this site are those of the authors and do not necessarily reflect the views of the National Science Foundation.