Is There Any Learning in That eLearning?

Several weeks ago I published a blog post, How eLearning Delivers. As you may recall, the term eLearning has been around for about 15 years and generally refers to online learning courses and other digital content delivered through an internet-enabled or other technology-based interface, such as a CD-ROM. All eLearning requires an investment of time and money, and those writing the checks should ask questions such as:
  • Should we do more eLearning?
  • How effective is eLearning compared to instructor-led training (ILT), peer-to-peer coaching, and other types of learning?
  • How do we know if eLearning actually produces learning?
  • If we don’t measure it, how will we know its effectiveness? How will we improve upon it?
This article discusses program evaluation of online learning. Through various program evaluation methods, we can determine how online learning stacks up against its objectives and ensure that eLearning programs do indeed deliver. The “E” in the ADDIE model (Analyze, Design, Develop, Implement, Evaluate) of instructional design stands for evaluation, an important step in any training and development program. Program evaluation methods for eLearning include research activities before, during, and after the program.

First, What is the Goal?

We refer to program evaluation before the start of a program as needs assessment. We can’t design, develop, and implement an impactful eLearning program without first understanding the objectives. Needs assessment involves all stakeholders, including management, front-line employees, and customers. By understanding the current situation and what behaviors we want to change or improve upon, we can design an appropriate eLearning program. Program evaluation methods in the needs assessment phase include observation, informal dialogue, formal interviews, and surveys. Instructional design personnel seek answers to questions such as:
  • What new knowledge, skills, and/or behaviors do key stakeholders (employees, management, customers) need to meet business challenges?
  • What measurable results will help determine the success of the online learning program?
  • How does the eLearning program align with other learning activities supporting the learning objectives?

How is the Process Going?

After determining the objectives of an eLearning program, the instructional design team gets to work designing and developing a program to meet them. In-process evaluation, also known as formative evaluation, helps the instructional design team pre-test how well the program aligns with objectives. Formative program evaluation methods include a review of content and activity materials, and may include a more formal product test, or prototype evaluation, before going “live.” Formative evaluation may be informal and conducted internally by the instructional design group. Instructional design team members ask questions such as:
  • Does the design include all identified objectives?
  • How might design elements encourage more engagement in the content?
  • How will each element facilitate learning?
  • How will we measure results?
After a number of iterations, the instructional design team determines that the program is “ready for prime time” and the eLearning program proceeds to the implementation phase.

What are the Results?

After implementing the eLearning program, we conduct summative evaluation to assess its impact. The focus of summative evaluation is on outcomes. Its methods encompass formal assessments, often external to the instructional design organization, and incorporate questionnaires, interviews, observations, and traditional end-of-course testing. Summative program evaluation is an integral part of program design. Common topics for a summative program evaluation include:
  • Most helpful elements
  • Most engaging elements
  • Elements most pertinent to the participant’s job
  • Elements participants expect to immediately apply on the job
  • Suggestions for improvement
Program evaluation surveys contain rating, ranking, agreement, and other questions designed to produce numerical data. Open-ended survey questions seek a deeper understanding of how stakeholders feel and think about the program. Both numbers and thoughts have value.

Numbers, Thoughts, or Both?

Consider the following question for an eLearning program evaluation: “Thinking about learning the material in this course, please rate the following course elements on the basis of how helpful each element was to you, using a scale of 1 to 5, with 1 meaning ‘not helpful at all’ and 5 meaning ‘very helpful.’” We might list course elements such as reading material, a list of resources for future reading, mini-lectures, case study activities, diagrams and charts, peer chat groups, technology elements, experiential activities, question and answer databases, and many others.

The numerical responses can be summed and averaged, and with an adequate number of responses we can do significance testing between elements. By comparing the scores, we can report the “most helpful” elements among those asked about on the survey.

Of course, it is nearly impossible to list every element of the course. We could improve the design of the evaluation by including an “Other” option with a request for the participant to write in one or more additional elements. Even better, we could follow up the closed-ended question above with an open-ended question, such as: “Thinking about all elements of the course, what do you feel was most helpful to you in learning and applying the material to your job?” With this question, we may learn about parts of the course that we did not list in the previous question, or acquire a deeper understanding of how the listed elements affect participants’ learning.

We could go a step further and actually observe participants before and after completing an online learning program to determine any change in their behavior. Case study program evaluation methods include ethnographic research incorporating field observation, informal and in-depth interviews, surveys, and document review. Document review comprises a review of job aids, communications between work group members and management, and other written material.
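The summing, averaging, and significance testing described above can be sketched in a few lines of Python. The element names and ratings below are hypothetical, and Welch’s t statistic is just one common way to compare two elements’ mean scores, not necessarily the method any particular evaluation team would choose:

```python
# Hypothetical 1-5 helpfulness ratings for two course elements.
from statistics import mean, variance

ratings = {
    "case_study_activities": [5, 4, 5, 3, 4, 5, 4, 4, 5, 4],
    "reading_material":      [3, 2, 4, 3, 3, 2, 3, 4, 2, 3],
}

# Average helpfulness score per element, highest ("most helpful") first.
for avg, element in sorted(((mean(v), k) for k, v in ratings.items()), reverse=True):
    print(f"{element}: {avg:.2f}")

def welch_t(a, b):
    """Welch's t statistic for two independent samples of ratings."""
    return (mean(a) - mean(b)) / ((variance(a) / len(a) + variance(b) / len(b)) ** 0.5)

t = welch_t(ratings["case_study_activities"], ratings["reading_material"])
print(f"t = {t:.2f}")
```

For reasonably large samples, a |t| value well above roughly 2 suggests the difference in average scores is unlikely to be chance; a full analysis would compute a p-value from the t distribution (e.g., with SciPy) and correct for multiple comparisons across elements.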
A case study is one way to address all four levels (or steps) of Donald Kirkpatrick’s Four Level Evaluation Model, which was introduced nearly five decades ago and remains relevant with some minor tweaks:
  • Level 1: Reaction of Learners – How well did they like the learning process? Did it motivate them to learn?
  • Level 2: Learning – What did participants learn? Can they apply it?
  • Level 3: Behavior – What observable changes in job performance resulted?
  • Level 4: Results – What are the measurable results in terms of reduced costs, improved quality, increased production, and other measures of efficiency and effectiveness for our business?
If we think of the Kirkpatrick levels as a system, level 1 pertains to the individual, level 2 represents the learning environment, and level 3 represents the work environment. The learning environment bridges the individual and the work environment. Level 4 incorporates the entire organization, or overall system: an integration of individuals and the work environment. Organizational results occur as a byproduct of learning by and between individuals and teams. (Diagram adapted from Don Clark)


To make good decisions about online learning as part of our portfolio of learning activities, and to continuously improve eLearning programs for performance improvement, we conduct program evaluation before, during, and after each program. Quality eLearning program evaluation requires planning, design, and implementation using appropriate evaluation methods. By reviewing both quantitative (numerical) and qualitative (thoughts and feelings) data, we build a richer data set to analyze and on which to base future actions. Our ultimate objective for any eLearning program, or other training and development activity, is improved organizational results, which we measure through evaluation methods focused on Level 4 of the Kirkpatrick Evaluation Model.


CATMEDIA is an award-winning Inc. 500 company based in Atlanta, Georgia. Founded in 1997, the company specializes in advertising, creative services, media production, program management, training, and human resource management. As a Women Owned Small Business (WOSB), CATMEDIA provides world-class customer service and innovative solutions to government and commercial clients. Current CATMEDIA clients include Centers for Disease Control and Prevention (CDC), Federal Aviation Administration (FAA), Office of Personnel Management (OPM), and the Department of Veterans Affairs (VA).

Stay Connected with CATMEDIA: For more information, please visit our website, like us on Facebook, and follow us on Twitter.
