Project staff implementing ATE-funded initiatives often overlook developing a functional strategy to sustain crucial program components. At the same time, evaluators may miss the opportunity to help decision makers identify the program components most vital to sustain. In this blog, I suggest a few strategies to avoid both traps, developed through my work at Hezel Associates, specifically with colleague Sarah Singer.

Defining sustainability is a difficult task in its own right, often eliciting a plethora of interpretations that could be deemed “correct.” However, the most recent NSF ATE program solicitation specifically asks grantees to produce a “realistic vision for sustainability” and defines the term as meaning “a project or center has developed a product or service that the host institution, its partners, and its target audience want continued.” Two phrases jump out of this definition: realistic vision and what stakeholders want continued. NSF’s definition, and these terms in particular, frame my tips for evaluating for sustainability for an ATE project while addressing three common challenges.

Challenge 1: The project staff doesn't know which components to sustain.

I use a logic model to address this problem. Reverting to the definition of sustainability provided by the NSF-ATE program, it’s possible to replace “product” with “outputs” and “service” with “activities” (taking some liberties here) to put things in terms common to typical logic models. This produces a visual tool useful for an open discussion with project staff regarding the products or services they want continued and which ones are realistic to continue. The exercise can identify program elements to assess for sustainability potential, while unearthing less obvious components not described in the logic model.

Challenge 2: Resources are not available to evaluate for sustainability.

Embedding data collection for sustainability into the evaluation increases efficiency. First, I create a specific evaluation question (or questions) focusing on sustainability, using what stakeholders want continued and what is realistic as a framework to generate additional questions. For example, “What are the effective program components that stakeholders want to see continued post-grant-funding?” and “What inputs and strategies are needed to sustain desirable program components identified by program stakeholders?” Second, I use the components isolated in the aforementioned logic model discussion to inform qualitative instrument design. I explore those components’ utility through interviews with stakeholders, eliciting informants’ ideas for how to sustain them. Information collected from interviews allows me to refine potentially sustainable components based on stakeholder interest, possibly using the findings to create questionnaire items for further refinement. I’ve found that resources are not an issue if evaluating for sustainability is planned for from the start.

Challenge 3: High-level decision makers are responsible for sustaining project outcomes or activities, but they don’t have the right information to make a decision.

This is a key reason why evaluating for sustainability throughout the entire project is crucial. Ultimately, the decision makers to whom project staff report determine which program components are continued beyond the NSF funding period. A report consisting of three years of sustainability-oriented data, detailing what stakeholders want continued while addressing what is realistic, allows project staff to make a compelling case to decision makers for sustaining essential program elements. Evaluating for sustainability supports project staff with solid data, enabling comparisons between more and less desirable components that can easily be presented to decision makers. For example, findings focusing on sustainability might help a project manager reallocate funds to support crucial components (perhaps sacrificing others), change staffing, replace personnel with technology (or vice versa), or engage partners to provide resources.

The end result could be realistic, data-supported strategies to sustain the program components that stakeholders want continued.

About the Authors

Andrew Hayman

Research Analyst, Hezel Associates

Andrew Hayman is a research analyst with Hezel Associates. With a background in environmental studies, he brings a holistic perspective to his evaluations. His interests include program sustainability, qualitative research methods, and evaluation utility. Mr. Hayman leads evaluation activities for one ATE program in the northeast and is involved in three large TAACCCT evaluations as well. Mr. Hayman earned a master’s in public administration from Syracuse University and a master’s in professional studies from State University of New York-College of Environmental Science and Forestry.

Creative Commons

Except where noted, all content on this website is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

EvaluATE is supported by the National Science Foundation under grant numbers 0802245, 1204683, 1600992, and 1841783. Any opinions, findings, and conclusions or recommendations expressed on this site are those of the authors and do not necessarily reflect the views of the National Science Foundation.