We are Nena Bloom and Lori Rubino-Hare, the internal evaluator and principal investigator, respectively, of the Advanced Technological Education project Geospatial Connections Promoting Advancement to Careers and Higher Education (GEOCACHE). GEOCACHE is a professional development (PD) project that aims to enable educators to incorporate geospatial technology (GST) into their classes, with the ultimate goal of promoting careers that use these technologies. Below, we share how we collaborated on creating a rubric for the project’s evaluation.
One important outcome of effective PD is the mastery of new knowledge and skills (Guskey, 2000; Haslam, 2010). GEOCACHE defines “mastery” as participants’ effective application of the new knowledge and skills in educator-created lesson plans.
GEOCACHE helps educators teach their content through Project Based Instruction (PBI) that integrates GST. In PBI, students collaborate and critically examine data to solve a problem or answer a question. Educators received 55 hours of PD, during which they experienced model lessons that integrated GST with subject-matter content. Educators then created lesson plans tied to the curricular goals of their courses, infusing opportunities for students to learn appropriate subject matter through the exploration of spatial data. “High-quality GST integration” was defined as opportunities for learners to collaboratively use GST to analyze and/or communicate patterns in data to describe phenomena, answer spatial questions, or propose solutions to problems.
We analyzed the educator-created lesson plans using a rubric to determine whether GEOCACHE PD supported participants’ ability to effectively apply the new knowledge and skills within lessons. We believe this is a more objective indicator of the effectiveness of PD than self-report measures alone. Rubrics are widely used to assess student performance, and they also provide meaningful information for program evaluation (Davidson, 2004; Oakden, 2013). A rubric lays out a clear standard and a set of criteria for identifying different levels of performance quality. Davidson (2004) proposes that rubrics are useful in evaluation because they help make judgments transparent. In program evaluation, scores for each criterion are aggregated across all participants; the objective is to understand the average skill level of participants in the program on the particular dimensions of interest.
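To make the aggregation step concrete, here is a minimal sketch in Python. The criterion names are hypothetical (loosely echoing the project’s stated goals), and the 1–4 scores are invented for illustration; none of this is GEOCACHE data.

```python
# Minimal sketch: aggregating rubric scores across participants.
# Criterion names and all scores are hypothetical, not GEOCACHE data.
from statistics import mean

# Each participant's lesson plan receives a 1-4 score on each criterion.
scores_by_criterion = {
    "Alignment with curricular goals": [3, 4, 2, 3],
    "Quality of GST integration": [2, 3, 3, 4],
    "Use of PBI approaches": [3, 3, 2, 2],
}

# Aggregate each criterion across all participants to estimate the
# group's average skill level on that dimension of interest.
for criterion, participant_scores in scores_by_criterion.items():
    print(f"{criterion}: mean = {mean(participant_scores):.2f}")
```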
Practices we used to develop and utilize the rubric included the following:
- We developed the rubric collaboratively with the program team to create a shared understanding of performance expectations.
- We focused on aligning the criteria and expectations of the rubric with the goal of the lesson plan (i.e., to use GST to support learning goals through PBI approaches).
- Because good rubrics existed but were not entirely aligned with our project goal, we chose to adapt existing technology-integration rubrics (Britten & Cassady, 2005; Harris, Grandgenett, & Hofer, 2010) and PBI rubrics (Buck Institute for Education, 2017) to include GST use, rather than start from scratch.
- We checked that the criteria at each level were clearly defined, to ensure that scoring would be accurate and consistent.
- We pilot tested the rubric with several units, using several scorers, and revised accordingly (a sketch of one such consistency check follows this list).
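As an illustration of the kind of consistency check a pilot test can include, here is a minimal sketch of exact percent agreement between two scorers. The scores are hypothetical, and a project might instead use a chance-corrected statistic such as Cohen’s kappa.

```python
# Minimal sketch: exact percent agreement between two scorers who
# rated the same pilot lesson plans. All scores are hypothetical.
scorer_a = [3, 2, 4, 3, 2, 3]
scorer_b = [3, 2, 3, 3, 2, 4]

matches = sum(a == b for a, b in zip(scorer_a, scorer_b))
agreement = matches / len(scorer_a)

# Low agreement on a criterion signals that its level descriptions
# need clearer definition before full-scale scoring.
print(f"Exact agreement: {agreement:.0%}")  # 67% for these sample scores
```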
This authentic assessment of educator learning informed the evaluation, providing information about the knowledge and skills educators were able to master and about how the PD might be improved.
References and resources
Britten, J. S., & Cassady, J. C. (2005). The Technology Integration Assessment Instrument: Understanding planned use of technology by classroom teachers. Computers in the Schools, 22(3), 49-61.
Buck Institute for Education. (2017). Project design rubric. Retrieved from http://www.bie.org/object/document/project_design_rubric
Davidson, E. J. (2004). Evaluation methodology basics: The nuts and bolts of sound evaluation. Thousand Oaks, CA: Sage Publications, Inc.
Guskey, T. R. (2000). Evaluating professional development. Thousand Oaks, CA: Corwin Press.
Harris, J., Grandgenett, N., & Hofer, M. (2010). Testing a TPACK-based technology integration assessment instrument. In C. D. Maddux, D. Gibson, & B. Dodge (Eds.), Research highlights in technology and teacher education 2010 (pp. 323-331). Chesapeake, VA: Society for Information Technology and Teacher Education.
Haslam, M. B. (2010). Teacher professional development evaluation guide. Oxford, OH: National Staff Development Council.
Oakden, J. (2013). Evaluation rubrics: How to ensure transparent and clear assessment that respects diverse lines of evidence. Melbourne, Australia: BetterEvaluation.