Data don’t speak for themselves. Yet the social and educational research traditions in which many evaluators have been trained offer few tools for translating data into meaningful, evaluative conclusions in transparent and justifiable ways (see Jane Davidson’s article). We can, however, draw on what educators already do when they develop and use rubrics for grading student writing, presentations, and other assessment tasks.

Rubrics can be used in similar ways to aid in the interpretation of project evaluation results. A rubric can be developed for an individual indicator, such as the number of women in a degree program or the percentage of participants expressing satisfaction with a professional development workshop. Alternatively, a holistic rubric can be created to assess larger aspects of a project that are impractical to parse into distinct data points. Either way, rubrics increase transparency about how conclusions are generated from data.

For example, if a project claimed it would increase enrollment of students from underrepresented minority (URM) groups, an important variable would be the percentage increase in URM enrollment. The evaluator could engage project stakeholders in developing a rubric to interpret the data for this variable, consulting secondary sources such as the research literature and/or national data. When the results are in, the evaluator can refer to the rubric to determine the degree to which the project was successful on this dimension (a minimal sketch of this idea appears below).

To learn more about how to connect the dots between data and conclusions, see the recording, handout, and slides from EvaluATE’s March 2013 webinar: evalu-ate.org/events/march_2013/.
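For readers who like to see the logic made explicit, an indicator rubric is essentially a set of thresholds mapped to evaluative ratings. Here is a minimal Python sketch of that idea; the level names and cutoff values are purely hypothetical illustrations, not values from this article or any national data source.

```python
# Sketch: encoding an evaluative rubric for a single indicator.
# All rating labels and thresholds below are hypothetical examples;
# a real rubric would be set with stakeholders and secondary sources.

RUBRIC = [
    # (minimum percentage-point increase in URM enrollment, rating)
    (10.0, "Excellent"),
    (5.0, "Good"),
    (1.0, "Adequate"),
    (0.0, "Poor"),
]

def rate_increase(pct_point_increase: float) -> str:
    """Return the rubric rating for an observed enrollment increase."""
    for threshold, rating in RUBRIC:
        if pct_point_increase >= threshold:
            return rating
    return "Declined"  # enrollment decreased rather than increased

if __name__ == "__main__":
    observed = 6.5  # hypothetical result: 6.5 percentage-point increase
    print(f"An increase of {observed} points rates as: {rate_increase(observed)}")
```

The point is not the code itself but the transparency it forces: the thresholds and their meanings are agreed on in advance, so the conclusion follows mechanically from the data rather than from post hoc judgment.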