Learning Designer & Technical Writer
Evaluation & Implementation
The supra-badge “Evaluation and Implementation” represents projects that initially intimidated me. With time and continued personal and professional development, I now view them as strong learning opportunities and as evidence of my commitment to continuing education throughout my career.
As a course-long project using the Dick and Carey model, I created a realistic pencil drawing workshop for preteens. One component of the Dick and Carey model is formative evaluation, which evaluates the effectiveness of instructional interventions. The formative evaluation process allows designers to identify the strategies, activities, and other elements that worked well during the learning experience, along with those that would benefit from revision. For the workshop, “working well” was defined in terms of the effectiveness of instruction, the general attitudes of the learners, and the overall experience. Through the formative evaluation, I determined that one activity within the design did not add value proportionate to the time it took to complete. Although the activity potentially enhanced a skill (perspective and scale), it was designed only to provide additional practice. I suggested that in future iterations of the workshop this activity be removed; the materials could instead be offered to participants as an optional resource after the course concludes, for those who wish to practice independently.
Where formative evaluation determines the overall effectiveness of instruction prior to wide distribution, summative evaluation determines the overall effectiveness of the instructional program after it has been implemented and is, presumably, driving the desired results. The full evaluation plan I created with a partner for an eLearning series I had designed and developed for a client several years earlier walks through all four levels of the Kirkpatrick evaluation model. Careful analysis and planning were done to ensure a cohesive plan that addressed the key goals of the series. Because the series included a required summative assessment (which speaks to Kirkpatrick’s Level 2), this project illuminated areas of the assessment that should be changed to maximize the value of the data and to identify learning outcomes and areas of need. A summative evaluation is often used to determine whether a program will be continued or retired. Using mock data, my partner and I determined that the program was successful and should be continued.
Instructional design programs are often complex, seeking to integrate a number of sometimes competing ideas, desires, and measures. Part of the work of an instructional designer is to navigate these varying goals and create an instructionally sound, effective solution, typically within a specified timeline and budget. The sub-badge “Design a Plan for Dissemination and Diffusion of Instructional and/or Non-Instructional Interventions” requires that all of those items be taken into account. As part of a small-scale design project tied to a case analysis, I created a sample mock-up of a kiosk interface to be stationed in an aquarium wetlands exhibit. Given the case parameters, I identified several challenges and constraints, each of which needed to be addressed in some way. Notably, the design features scrolling pictures with animated transitions rather than a full-scale animated video. This compromise kept the project within budget while still gaining and sustaining user attention. The design also featured optional “Learn More” interactions that allowed advanced users to dive deeper into subjects without burdening novice users with more information than they may want or need to successfully use the kiosk.
My contracting experience has not typically included much involvement in the evaluation or implementation phases of ADDIE (the most common instructional design model I have used in corporate training). Because of this gap, many of the evaluation and implementation skills I learned were new to me. However, because instructional design is an iterative process, I can clearly see and appreciate how strong implementation and evaluation practices draw directly from the analysis, design, and development phases. Instructional design does not occur in a vacuum; each step is inextricably connected to the previous one and the next.
As I move into the next phase of my career, my goal is to find a role in which I can use the implementation and evaluation skills I have developed. I plan to continue sharpening these skills by infusing the strategies and considerations of these two phases, particularly evaluation, into my work even when I am not formally participating in them. For example, as I design and develop assessments, I will do so with a dedicated alignment to the instructional goals and the key metrics being used to evaluate effectiveness. There will undoubtedly be projects for which a full application of the Kirkpatrick model’s four levels is not deemed appropriate. But the first two levels are very often considered during design and development, and it is this inclusion that excites me for what comes next in my career.