
As an evaluator, have you ever spent hours working on an evaluation report only to find that your client skimmed it or didn’t read it?

As a project PI, have you ever glanced at an evaluation report and wished it had been presented in a different format so it would be more useful?

In this blog post, an independent evaluator and a principal investigator (the client) interview each other to unearth key points in their professional relationship that lead to clarity and increased evaluation use. This is a real conversation that took place between the two of us as we brainstormed ideas to contribute to the EvaluATE blog. We believe these key points (understanding of evaluation, evaluation reporting, and "ah ha" moments) will be useful to other STEM evaluators and clients. Here, the evaluator interviews the client, and key takeaways are suggested for evaluators (watch for our follow-up post, in which the tables are turned).

Understanding of Evaluation

Evaluator (Ayesha): What were your initial thoughts about evaluation before we began working together?
PI (Manu): “Before this I had no idea about evaluation, never thought about it. I had probably been involved in some before as a participant or subject but never really thought about it.”

Key takeaway: Clients come to a project with varying prior experience with evaluation, which can make it harder for them to appreciate its value at first.

Evaluation Reports

Evaluator: What were your initial thoughts about the evaluation reports provided to you?
PI: “So for the first year, I really didn’t look at them. And then you would ask, “Did you read the evaluation report?” and I responded, “Uuuuhhh… no.””

Key takeaway: Don’t assume that your client is reading your evaluation reports. It might be necessary to check in with them to ensure utilization.

Evaluator: Then I pushed you to read them thoroughly, and what happened?
PI: “Well, I heard the way you put it and thought, “Oh I should probably read it.” I found out that it was part of your job and not just your Ph.D. project and it became more important. Then when I read it, it was interesting! Part of the thing I noticed – you know we’re three institutions partnering – was what people thought about the other institutions. I was hearing from some of the faculty at the other institutions about the program. I love the qualitative data even more nowadays. That’s the part that I care about the most.”

Key takeaway: Check with your client to see what type of data and what structure of reporting they find most useful. Sometimes a final summative report isn’t enough.

Ah Ha Moment!

Evaluator: When did you have your “Ah ha! – the evaluation is useful” moment?
PI: “I had two. I realized as diversity director that I was the one who was supposed to stand up and comment on evaluation findings to the National Science Foundation representatives during the project’s site visit. I would have to explain the implementation, satisfaction rate, and effectiveness of our program. I would be standing there alone trying to explain why there was unhappiness here, or why the students weren’t going into graduate school at these institutions.

The second was, as you’ve grown as an evaluator and worked with more and more programs, you would also give us comparisons to other programs. You would say things like, “Oh other similar programs have had these issues and they’ve done these things. I see that they’re different from you in these aspects, but this is something you can consider.” Really, the formative feedback has been so important.”

Key takeaway: You may need to talk to your client about how they plan to use your evaluation results, especially when it comes to being accountable to the funder. Also, if you evaluate similar programs, it can be valuable to share triumphs and challenges across them (without compromising confidentiality; share feedback without naming specific programs).

About the Authors

Ayesha Boyce


Assistant Professor, Department of Educational Research Methodology, University of North Carolina at Greensboro

Dr. Ayesha Boyce received her Ph.D. in Educational Psychology with a specialization in Evaluation from the University of Illinois Urbana-Champaign. She is an assistant professor at the University of North Carolina at Greensboro. Her research interests focus on addressing issues related to diversity, equity, access, climate, and cultural responsiveness while judging the quality of implementation, effectiveness, impact, and institutionalization of educational programs, especially those that are multi-site and/or STEM. Dr. Boyce has evaluated many programs funded by the National Science Foundation, National Institutes of Health, Title VI, and others. She is the Chair of the American Evaluation Association STEM TIG.

Manu Platt


Associate Professor, Department of Biomedical Engineering, Georgia Institute of Technology and Emory University

Dr. Manu O. Platt earned his B.S. in Biology from Morehouse College and Ph.D. in Biomedical Engineering from Georgia Institute of Technology and Emory University. After postdoctoral work at MIT, he returned to Georgia Tech/Emory where he was recently promoted and tenured. The Platt Lab studies strokes in children with sickle cell disease, HIV-mediated cardiovascular disease, and predictive medicine in cancer. He is also Diversity Director for the NSF Center on Emergent Behaviors of Integrated Cellular Systems (EBICS). He co-founded and co-directs Project ENGAGES, a biotech and engineering research program for African-American high school students in Georgia Tech laboratories. Website: Platt Lab



EvaluATE is supported by the National Science Foundation under grant numbers 0802245, 1204683, 1600992, and 1841783. Any opinions, findings, and conclusions or recommendations expressed on this site are those of the authors and do not necessarily reflect the views of the National Science Foundation.