WETP promotes logic model concept for training assessment
By Eddy Ball
A workshop at NIEHS Oct. 11-12 aspired to fundamentally transform ways of thinking about worker health and safety training evaluation, as well as share best practices. More than 100 grantees and contractors attended the workshop, which was part of the annual meeting (http://tools.niehs.nih.gov/wetp/events.cfm?id=2521) of the NIEHS Worker Education and Training Program (WETP).
The workshop developed a theme that WETP Director Chip Hughes introduced in his opening comments on the importance of being able to demonstrate the effectiveness of worker health and safety training and its value to the nation. "Program evaluation has always been a core part of our mission," he said.
As veteran evaluator Ruth Ruttenberg said during a panel session on evaluation tools and methods, “Training only matters if it makes a difference.” Her challenge, and the challenge of the other panelists and presenters, involved helping participants think critically about how they design training and how they can develop metrics to show its impact on workers on the job, at home, and in the larger political context of their lives.
Following Hughes' introduction, WETP grantee Craig Slatin, Sc.D., presented a big-picture keynote address on “Current and Future Safety and Health Training Expectations Under 21st Century Workplace and Socioeconomic Conditions.” In the face of economic restructuring that continues to impact job security, Slatin explained, the objectives of training can be difficult to realize. “We try to get workers’ voices out through the training,” he said, “[but the question of] how far do I push is always in workers’ minds.”
Something old, something new — Kirkpatrick and logic models
Hughes' introduction, and nearly every presentation at the workshop, acknowledged the tremendous contribution and continuing relevance of the four-level model of training course evaluation first published by University of Wisconsin Professor Emeritus Donald Kirkpatrick, Ph.D., (http://www4.uwm.edu/sce/instructor.cfm?id=12884) in 1959. Modified over the years by the author and his partners, Kirkpatrick's framework (http://www.kirkpatrickpartners.com/) breaks down evaluation into how well learners respond, what they learn specifically, how much training influences performance, and whether the education program accomplished its original organizational goals.
Not surprisingly, Kirkpatrick's influence is also evident in the WETP logic model, which was the topic of presentations by NIEHS Division of Extramural Research and Training program analyst and evaluation specialist Kristi Pettibone, Ph.D., WETP program analyst Jim Remington, and WETP senior intern Hannah Leker, as well as a hands-on group exercise in which participants worked collectively to flesh out logic models for their own programs.
The logic model reflects Kirkpatrick's striving for exhaustive analytical precision, as it helps users construct a timeline flow chart of inputs, activities, outputs, and impacts — short term, midterm, and long term. The WETP handout also included mission and organizational priorities from the WETP 2008-2013 strategic plan (http://www.niehs.nih.gov/careers/assets/docs/wetp_strategic_plan.pdf) as a benchmark for planning activities.
Numbers, narratives, and post-training audits
Early in the workshop, panelist Sue Ann Sarpy, Ph.D., introduced the concept of 360-degree evaluation. She described the feedback evaluation scheme as a multisource system that includes all stakeholders — the director, students, employers, program coordinator, and community members — and also serves as a continuing quality improvement instrument. Sarpy's approaches range from questionnaires and open-ended questions to focus groups and success stories, which help lend a human touch to other metrics, as well as strengthen the case for support.
The 360-degree approach can also involve audits of trainees after they've completed training, using a checklist of observed behaviors. Yale University project manager Thomas Ouimet warned against the number one fatal assumption, that attending equates with learning, as he advocated for on-the-jobsite follow-up observations. Like many of his co-presenters, Ouimet is a strong advocate of blended training, such as e-learning simulation, role playing, and case studies, as well as blended assessment with instruments matched to Kirkpatrick's levels of evaluation.
As he wrapped up the meeting, Hughes looked toward funding realities, and the aging of the current group of experienced health and safety program operators and trainers. The workshop title, "Prove It Makes a Difference," referred to both the immediate and the long-term health of worker training. "The impact on the next generation is one of our big challenges for the future," he said.
Worker safety and health training — a 25-year odyssey
Although the workshop focused on evaluation as a defensive strategy in an atmosphere of flat and even declining budget resources, participants have a rich history of accomplishment to build on as they make their case. "You have to prove to somebody what you already know is true," Hughes said to the audience. He also reflected on the wealth of crowd-sourced knowledge that trainers can bring to the effort.
Keynote rapporteur Eula Bingham, Ph.D., of the University of Cincinnati, framed her talk on evaluation in historical terms. "Let people know that you're saving lives," she said, "[by] helping us to tell the people it was all worthwhile."
National Institute for Occupational Safety and Health researcher Paul Schulte, Ph.D., added, “WETP is one of the unsung heroes of this country.” Looking ahead to the potential for taking an effective program, with a proven track record, to an even higher level, Don Elisburg, J.D., of the National Clearinghouse for Worker Safety and Health Training, talked optimistically about what the talent on display during the workshop could mean for the future of worker training. “Looking at what everybody has done with these projects is mind-blowing,” he said.