Recently, I attended the "Building Pathways and Partnerships in STEM for a Global Network" conference, hosted by the State University of New York (SUNY) system. It focused on innovative practices in STEM higher education, particularly those aimed at increasing retention, completion, and cultural diversity.
As an evaluator, I found it enlightening to hear about new practices higher education faculty and staff are using to encourage students, particularly those in groups traditionally underrepresented in STEM, to stay enrolled and complete their degrees. These included:
- Research opportunities! Students should be exposed to real research if they are going to engage in STEM. This is important not only for four-year degree students but also for community college students, whether they plan to continue their education or move into the workforce.
- Internships (PAID!) are crucial for gaining practical experience before entering the workforce.
- Partnerships, partnerships, partnerships. Internships and research opportunities are most useful if they are with organizations outside of the school. This means considerable outreach and relationship-building.
- One-on-one peer mentoring. Systems in which upper-level students work directly with new students to help them get through tough classes or labs have been shown to keep students enrolled not only in STEM programs but in college in general.
The main takeaway from this conference is that the SUNY system is getting more creative in engaging students in STEM, making a concerted effort to reach underrepresented students. And the trend is not limited to New York; many colleges and universities are focusing on these issues.
What does all this mean for evaluation? Evidence is more important than ever to sort out what types of new practices work and for whom. Evaluation designs and methods need to be just as innovative as the programs they are reviewing. As evaluators, we need to channel program designers’ creativity and apply our knowledge in useful ways. Examples include:
- Being flexible. Many methods are brand new, or new to the institution or department, so implementers may tweak them along the way. That means we need to pay attention to how we assess outcomes, perhaps taking guidance from Patton's work on Developmental Evaluation.
- Considering cultural viewpoints. We should always be mindful of the diversity of perspectives and backgrounds when developing instruments and data collection methods. This is especially important when programs are meant to improve outcomes for underrepresented groups. Think about how individuals will access an instrument (online or on paper), and pay attention to language when writing questionnaire items. The American Evaluation Association offers useful resources on this: http://aea365.org/blog/faheemah-mustafaa-on-pursuing-racial-equity-in-evaluation-practice/
- Thinking beyond immediate outcomes. What do students accomplish in the long term? Do they go on to earn advanced degrees? Do they find jobs that match their expectations? If you can't measure these due to budget or timeline constraints, help institutions design ways to track them on their own. Doing so can help them continue to identify program strengths and weaknesses.
Keep these points in mind, and your evaluation can provide valuable information for programs geared toward making a real difference.