Curriculum development projects are designed to create new content or to present existing content to students in a new format, with new activities or approaches. The following are important things to know about evaluating these projects.

1.     Understand the underlying model, pedagogy, and process used to develop the curriculum. There are several curriculum development models, including the DACUM model (Developing a Curriculum), the Backward Design Method, and the ADDIE (Analysis, Design, Development, Implementation, and Evaluation) model of instructional design. Whatever approach is used, make sure you understand its methodology and underlying philosophy so that these can help guide the evaluation.

2.     Establish a baseline. If possible, determine what student performance was before the new curriculum was available, to assess the change or learning gains it produces. This baseline could come from student grades or performance data from the year before the new curriculum is introduced, from job performance data, or from another indicator.

3.     Clearly identify the outcomes expected of the curriculum. What should students know or be able to do when they have completed the curriculum? Take the time to understand the desired outcomes and how the curriculum content, activities, and approach support those outcomes. The outcomes should be directly linked to the project goals and objectives. Look for possible disconnects or gaps.

4.     Employ a pre/post test design. One method to establish that learning has occurred is to measure student knowledge of a subject before and after the curriculum is introduced. If you are comparing two curricula, consider using one group as a control that does not use the new curriculum and comparing the two groups' performance in a pre/post test design (see the sketch below).
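One common way to analyze such a design is to compute each student's gain score (post minus pre) and test whether the treatment group's gains exceed the control group's. The sketch below is not from the original handout: it assumes scores have already been collected into simple lists, the numbers are purely illustrative, and it uses the open-source SciPy library for the significance test.

```python
# Minimal sketch of a pre/post comparison with a control group.
# All scores below are illustrative placeholders, not real data.
from scipy import stats

treatment_pre  = [52, 48, 61, 55, 49, 58]   # scores before the new curriculum
treatment_post = [74, 69, 80, 77, 70, 79]   # scores after the new curriculum
control_pre    = [51, 50, 59, 54, 47, 57]   # comparison group, old curriculum
control_post   = [60, 58, 66, 63, 55, 64]

# Gain scores isolate each student's change, which adjusts for starting level.
treatment_gains = [post - pre for pre, post in zip(treatment_pre, treatment_post)]
control_gains   = [post - pre for pre, post in zip(control_pre, control_post)]

# An independent-samples t-test asks whether the treatment group's gains
# exceed the control group's by more than chance alone would explain.
t_stat, p_value = stats.ttest_ind(treatment_gains, control_gains)
print(f"Mean gain (new curriculum): {sum(treatment_gains)/len(treatment_gains):.1f}")
print(f"Mean gain (control):        {sum(control_gains)/len(control_gains):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```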

5.     Employ content analysis techniques. Content analysis is the process of analyzing documents (student guides, instructor guides, online content, videos, and other materials) to determine the type and frequency of content, internal coherence (consistency among the different elements of the curriculum), and external coherence (whether the interpretations in the curriculum fit the theories accepted within and outside the discipline). A simple frequency pass is sketched below.
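As a loose illustration of the frequency side of content analysis, the hypothetical sketch below counts keyword hits per coding category across a folder of curriculum documents. The folder name, categories, and keywords are all assumptions made for the example; real content analysis codes passages against a validated coding scheme rather than counting keywords.

```python
# Minimal sketch of a content-frequency pass over curriculum documents.
# Assumes the materials have been exported as plain-text files; the
# directory, categories, and keywords here are purely illustrative.
from collections import Counter
from pathlib import Path

# Simplified coding scheme: category -> keywords that signal it.
categories = {
    "safety":      ["safety", "hazard", "protective"],
    "measurement": ["measure", "calibrate", "tolerance"],
    "teamwork":    ["team", "collaborate", "group"],
}

counts = Counter()
for doc in Path("curriculum_docs").glob("*.txt"):   # student/instructor guides, etc.
    text = doc.read_text(encoding="utf-8").lower()
    for category, keywords in categories.items():
        counts[category] += sum(text.count(word) for word in keywords)

# Frequencies hint at emphasis and internal coherence: a topic stressed in
# the objectives but rare in the materials is a gap worth flagging.
for category, n in counts.most_common():
    print(f"{category}: {n}")
```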

6.     Participate in the activities. One effective method for helping evaluators understand the impact of activities and exercises is to participate in them. This helps determine the quality of the instructions, the level of engagement, and the learning outcomes that result from the activities.

7.     Ensure assessment items match instructional objectives. Student progress is typically assessed through written tests. To ensure written tests assess the student's grasp of the course objectives and curriculum, match the assessment items to the instructional objectives. Create a chart matching objectives to assessment items to verify that every objective is assessed and that every assessment item is pertinent to the curriculum (see the sketch below).
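The chart can be as simple as a mapping from assessment items to objectives, checked in both directions. The sketch below is illustrative only; the objectives and item IDs are hypothetical placeholders that would in practice come from the syllabus and the test blueprint.

```python
# Minimal sketch of an objectives-to-assessment-items coverage check.
# Objectives and item IDs are hypothetical examples.
objectives = ["Obj 1: explain Ohm's law",
              "Obj 2: use a multimeter",
              "Obj 3: interpret wiring diagrams"]

# Which assessment item targets which objective.
item_map = {
    "Q1": "Obj 1: explain Ohm's law",
    "Q2": "Obj 1: explain Ohm's law",
    "Q3": "Obj 2: use a multimeter",
    "Q4": "Obj 4: not in the syllabus",   # deliberately orphaned item
}

covered = set(item_map.values())

# Objectives with no assessment item represent untested content.
for obj in objectives:
    if obj not in covered:
        print(f"UNASSESSED: {obj}")

# Items pointing at nothing in the objective list are not pertinent.
for item, obj in item_map.items():
    if obj not in objectives:
        print(f"ORPHAN ITEM: {item} -> {obj}")
```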

8.     Review guidance and instruction provided to teachers/facilitators in guides. Determine whether the materials are properly matched across the instructor guide, student manual, slides, and in-class activities, and whether the instructions are clear and complete and the activities feasible.

9.     Interview students, faculty, and, possibly, workforce representatives. Faculty can provide insights into the usefulness and effectiveness of the materials, and students can provide input on level of engagement, learning effort, and overall impression of the curriculum. If the curriculum is tied to a technician profession, involve industry representatives in reviewing the curriculum. This should happen as part of the development process, but if it did not, consider having a representative review the curriculum for alignment with industry expectations.

10.  Use Kirkpatrick's four levels of evaluation. A highly effective model for curriculum evaluation is the Kirkpatrick Model. Its four levels (reaction, learning, behavior, and results) measure initial learner reactions, knowledge gained from the instruction, behavioral changes that result from the instruction, and overall impact on the organization, field, or students.

11.  Pilot the instruction. Conduct pilot sessions as part of the formative evaluation to ensure that the instruction functions as designed. After the pilot, collect end-of-day reaction sheets (or similar tools) and trainer observations of learners. An end-of-program product, such as an action-planning tool for implementing changes around curriculum focus issues, is also useful.

 

RESOURCES

For a detailed discussion of content analysis, see chapter 9 of Gall, M. D., Gall, J. P., & Borg, W. R. (2007). Educational research: An introduction (8th ed.). Boston: Pearson.

DACUM Job Analysis Process: https://s3.amazonaws.com/static.nicic.gov/Library/010699.pdf

Backward Design Method: https://educationaltechnology.net/wp-content/uploads/2016/01/backward-design.pdf

ADDIE Model: http://www.nwlink.com/~donclark/history_isd/addie.html

Kirkpatrick Model: http://www.nwlink.com/~donclark/hrd/isd/kirkpatrick.html

 

* This blog post is a reprint of a conference handout from an EvaluATE workshop at the 2011 Advanced Technological Education PI Conference.

About the Author

Karl Kapp


Professor of Instructional Technology, Bloomsburg University of Pennsylvania

Karl Kapp, Ed.D., is a professor of instructional technology at Bloomsburg University of Pennsylvania. He has served as an external evaluator on several National Science Foundation grants and is currently a researcher on a National Institutes of Health grant investigating methods to help childcare workers detect child abuse.



EvaluATE is supported by the National Science Foundation under grant numbers 0802245, 1204683, 1600992, and 1841783. Any opinions, findings, and conclusions or recommendations expressed on this site are those of the authors and do not necessarily reflect the views of the National Science Foundation.