
Evaluate Instructional & Non-Instructional Interventions (1)

Implement formative evaluation plans.

The artifact I have selected to represent the challenge area “Implement formative evaluation plans” is the implementation and evaluation report created for a preteen drawing workshop, part of a larger project (additional artifacts from this project are available throughout this portfolio, notably in the Planning & Analysis and Design & Development tabs). This artifact was created in my EDCI 572 class (Introduction to Learning Systems Design), after the field trial of the preteen drawing workshop I designed was conducted. The implementation and evaluation report is a section of the final project reporting documentation; it includes a description of the evaluation process used, the raw evaluation data and its synthesis and interpretation, and, based on those findings, recommendations for modifications to future iterations of the workshop.


Conducting a field test is an important step in designing instruction. It gives the instructional designer an opportunity to ensure the learning materials and strategies will yield the desired results. Additionally, formative evaluation illuminates areas that may require refinement, modification, removal, or addition in order for learners to meet the stated learning goals.


In this implementation and evaluation report, I provided a full description of my data collection methods, the learning environment, and its impact on the workshop. I also included the surveys (one for learners and one for observers) that were used to gather reaction and learning data. Rubrics were used to collect learning data as well; the raw rubric data is not included in the report because of its length, but the rubric results are referenced throughout. The surveys primarily collected data on the learners’ confidence before and after the workshop, since self-efficacy was a major consideration based on the target audience analysis and content research. The surveys also gathered data on self-reported areas of improvement (showing effective areas of the workshop), topics learners wanted more instruction in (showing areas of the workshop that could be built out further), and how engaged learners felt in workshop activities (showing whether modifications were needed to the types or amount of activity in the workshop).

The findings confirmed what I experienced as the instructor during the workshop: there was opportunity for improvement, especially in the overall length of the workshop and in some of the less critical activities, but overall the workshop was successful in teaching learners how to use proportion and shading to create more realistic pencil drawings.


I had used surveys prior to this project, but this one represented an expansion in how intentionally I approached survey design. Additionally, although I have been involved in survey data collection before, I had never created a rubric. While the rubric itself was a learner assessment tool, interpreting its resulting data was part of my evaluation process. Rubrics are particularly useful for evaluating the effectiveness of instruction in many skills because they can show varying degrees of success. Because I have primarily designed leadership development programs and projects, being able to create and use a rubric represents an advancement of my own professional skills; many previous projects I have worked on would have benefited from the inclusion of one.


The implementation and evaluation report I created for the preteen drawing workshop is a strong example of my ability to implement formative evaluation plans. It was interesting to see how the learners actually received the content and which areas of content could be removed. Although the design was intended to meet the needs of the learners based on the target audience analysis, the evaluation showed that some content could be delivered effectively with less instruction than originally believed; without conducting the formative evaluation, I may not have known that. Because I have worked as a contractor (and have not been involved in much evaluation on those projects), I plan in the future to seek out projects that include that component of the process. Additionally, when I conduct a field trial, pilot test, or beta test of a program, I am now far more skilled and confident in creating an evaluation plan, interpreting the data, and making recommendations for program modifications.
