Evaluating Training and Technical Assistance

 

Overview

Welcome to the e-learning lesson on Evaluating Training and Technical Assistance. Effective training and technical assistance providers embrace evaluation and outcome measurement, set high expectations for their own performance, and ensure that they are offering clients the best value by continuously improving their services. Evaluation and outcome measurement can help organizations measure their effectiveness, identify areas of service that are effective or in need of improvement, and develop clarity of purpose, uniting staff around a set of common goals and expectations. Most importantly, however, proper evaluation techniques provide your organization with proof of its value to existing funders, potential funders, and the larger community. Whether this value is communicated in dollars or in the number of individuals served, quantifiable performance measures are becoming increasingly important in the competitive social service industry.

At the end of this lesson you will be able to explain how proper outcome measurement depends on effective evaluation, recall the basic levels of evaluation, and identify valuable tools and techniques that your organization can use to incorporate evaluation into all the services you provide.

Evaluation processes validate program outcomes.

An outcome is a change in individuals, groups, organizations, systems, or communities that occurs during or after program activities. An outcome answers the question “so what?” So what if you provide an organization with 10 hours of technical assistance on fundraising techniques? Is the organization better able to raise money? Do they actually raise more money now? So what if you train an organization on how to develop a strategic planning process? Can the organization effectively perform the steps involved? Do they actively engage in strategic planning now?

Quantitative and qualitative evaluation measures help to answer this “so what?” question by methodically linking an organization’s actions to client results. Proper evaluation processes and procedures help a training and technical assistance provider to answer the questions: What has changed as a result of this program? How has this program made a difference? How are the lives of our clients better as a result of the program?

Keep in mind that logic models and evaluation processes can provide insight regarding your organization’s contribution to positive results. In order to prove direct causation, however, an organization will need to take part in experimental research and a controlled study to link training and technical assistance to results.

Kirkpatrick’s four levels of evaluation provide a framework.

Donald L. Kirkpatrick is a Professor Emeritus at the University of Wisconsin and a former President of the American Society for Training and Development. He is well known throughout the educational and training community for his work in creating a framework for training evaluation.
Kirkpatrick identifies four levels of evaluation: reaction, learning, behavior, and results.

Each level of evaluation is discussed in more detail in Chapters 2-5 of this lesson.

CHAPTER 1: Logic Models and Outcome Measurement

An organization should have a well-developed logic model in place before it begins to develop a comprehensive evaluation plan. A logic model maps out an overview of an organization’s tools and resources, the services it provides, and the intended impacts of these services. A basic logic model documents inputs or resources, activities, outputs, and short-term, intermediate, and long-term outcomes. Inputs or resources are the assets that an organization is prepared to invest to support or implement a program, including things like money, staff, and equipment. Activities capture the methodologies an organization plans to use in order to implement a project, while outputs describe activities in more finite, numerical terms, such as the number of training hours provided. Finally, outcomes capture the changes, benefits, and overall impact that the program or initiative has had on an organization’s client population. Once a well-developed logic model is in place, an organization can begin to analyze its stated outcomes and develop performance measures and a detailed evaluation plan.
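
To make these elements concrete, here is a minimal sketch of how a logic model’s components might be recorded as structured data, assuming a hypothetical fundraising-training program; all names, figures, and outcome statements are illustrative placeholders rather than required values.

```python
# A minimal, hypothetical logic model for a fundraising-training program.
# Field names and values are illustrative placeholders, not a required format.
logic_model = {
    "inputs": ["2 trainers", "$15,000 budget", "training facility", "curriculum"],
    "activities": ["deliver fundraising workshops", "provide one-on-one coaching"],
    "outputs": ["50 hours of training delivered", "20 organizations served"],
    "outcomes": {
        "short_term": "participants gain knowledge of fundraising techniques",
        "intermediate": "client organizations adopt a written fundraising plan",
        "long_term": "client organizations raise more money and serve more people",
    },
}

# The "if/then" reading: if we invest the inputs and complete the activities
# and outputs, then we expect the outcomes below to follow.
for level, statement in logic_model["outcomes"].items():
    print(f"{level}: {statement}")
```

A spreadsheet or the sample template referenced later in this chapter works just as well; the point is simply that each element is captured explicitly and can be revisited when developing performance measures.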

Clearly defined outcomes become organizational goals and hypotheses.

Organizations may find it helpful to analyze their activities and outputs through the “if/then” lens. When developing outcomes, an organization should ask itself, “If we provide these activities and outputs, what do we hope will then happen?”

The answer to this question should provide an organization with short-term, intermediate, and long-term outcomes.

Short-term outcomes are those that occur while clients are receiving your services, such as knowledge gain or changes in attitude within the organizations you work with. Achievement of short-term outcomes can generally be measured using Kirkpatrick’s second level of evaluation.

Intermediate outcomes are those that occur within the client organization itself, including changes in behavior or skill gains that you expect to result from the training and technical assistance you provided. Achievement of intermediate outcomes is usually measured through tests for learning and observations of changes in behavior, Kirkpatrick’s second and third levels of evaluation.

Long-term or end outcomes refer to the resulting ability of a client organization to operate more efficiently and effectively by serving more people, or becoming more sustainable in accomplishing its larger purpose. Achievement of long-term outcomes can be measured through Kirkpatrick’s fourth level of evaluation.

Logic models document relationships.

While not all logic models look the same, they all serve the same purpose: to graphically capture the assumptions and cause-and-effect relationships that drive your organization’s work on a project.

Download a sample logic model template and test your understanding of the different elements of a logic model using the activity on the right.

Build from a foundation of data.

Experienced training and technical assistance providers know that in order to prove the effectiveness of their services, they must incorporate evaluation into all that they do and build on a foundation of data collection. Organizations may decide to collect this information through in-person or online surveys, or through site visits to client organizations.

Conducting regular surveys and needs assessments with your client population can help you to determine client demographics, experience, training and technical assistance needs, motivations, job satisfaction levels, and baseline performance.

While these surveys are incredibly helpful in providing insight into what sort of training and technical assistance opportunities would most benefit the client, they also offer long-term value, providing points of comparison that your organization can reference throughout the evaluation process.

Site visits can also present training and technical assistance providers with important insight into how client organizations are performing and operating. Site visits can be an excellent source of qualitative information, most of which is not easily conveyed through surveys.

CHAPTER 2: Evaluating Reaction

Evaluating for reaction is, without question, the easiest level of evaluation included within the Kirkpatrick model, as reaction is basically synonymous with customer satisfaction. When organizations evaluate for reaction, they aim to discover a client’s “gut reaction” to a training or technical assistance event.

Has a customer service agent ever asked you to remain on the phone in order to answer a quick survey and provide feedback about your call-in experience? This is an example of a level 1 survey. Training and technical assistance providers usually find it easiest to distribute level 1 surveys electronically, using an online survey tool, or in-person, using a simple hand-out or comment card.

Level 1 surveys will generally enquire into topics like the training venue, schedule, food or snack services, training materials such as handouts and audiovisual aids, and the facilitator.

Experienced training and technical assistance providers recognize that something as small as room temperature or infrequent breaks can have a large impact on participants’ abilities to learn and retain information. Regular review and analysis of level 1 survey results can help your organization to improve training and technical assistance opportunities by making them more convenient, comfortable, and relevant to the client.

Make the most out of your surveys.

The length and type of level 1 survey will often depend on the length and type of training or technical assistance delivered. Regardless of the format of the survey, organizations should try to ensure that 100% of participants respond, that participants remain anonymous, and that results are quantifiable yet allow for comments and written feedback. Download a sample level 1 training evaluation here.

There are a number of web-based survey applications, including Zoomerang, SurveyMonkey, and SurveyGizmo, that organizations can use to create and distribute electronic surveys. Each survey application offers different editions, allowing you to compare functionality and choose a plan and price point that works for your organization. If you are unable to financially invest in a survey tool, check out the free versions of Zoomerang and SurveyMonkey.

Both in-person and electronic surveys can also be used to evaluate technical assistance offerings. Whether technical assistance takes place over the phone, via email, or in person, organizations should be prepared to deploy a survey enquiring into whether the individual providing the technical assistance was helpful and whether the client’s questions were answered.

Develop performance measures and keep high standards.

Performance measures are the data points that support the achievement of a larger outcome or goal. At initial stages of evaluation, performance measures are usually easy to identify, as they relate directly to organizational outputs. When formulating performance measures, an organization should ask, “How do we know we’ve been successful?”

For example, suppose your organization identifies fifty hours of training as one of your outputs. In order to assess whether you’ve successfully delivered this output, you might collect a series of performance measures, including attendance rates, contact hours, and participant level 1 surveys that capture opinions about the usefulness of the training.

Acceptable quality levels (AQLs) are the quantifiable standards that your organization has set for its own performance measures. For instance, your organization might say that, in order to be considered a successful training event, 100% of all registered participants must attend the training and 90% of training participants must agree that they would recommend the training to a coworker.

The development of AQLs should be a collaborative process, involving all those that play a role in implementing training or technical assistance events. After you have developed a level 1 survey tool and AQLs, you can begin to tabulate results and measure them against your organization’s standards of performance.
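
As a simple illustration, the sketch below tabulates hypothetical level 1 results against the two example AQLs above (100% attendance and 90% would recommend). The figures, variable names, and thresholds are assumptions for demonstration only, not prescribed values.

```python
# Hypothetical level 1 results for one training event; all values are made up.
registered = 25
attended = 24

# "Would you recommend this training to a coworker?" responses (True = yes).
would_recommend = [True] * 22 + [False] * 2

attendance_rate = attended / registered
recommend_rate = sum(would_recommend) / len(would_recommend)

# Acceptable quality levels (AQLs) set by the organization (assumed thresholds).
aqls = {"attendance_rate": 1.00, "recommend_rate": 0.90}
results = {"attendance_rate": attendance_rate, "recommend_rate": recommend_rate}

for measure, target in aqls.items():
    status = "met" if results[measure] >= target else "not met"
    print(f"{measure}: {results[measure]:.0%} (target {target:.0%}) -> {status}")
```

Reviewing a simple tally like this after every event makes it easy to see which standards were met and which areas of the training need attention.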

CHAPTER 3: Evaluating for Learning

Kirkpatrick’s second level of evaluation measures whether or not the participant learned anything from the training or technical assistance event. Tests for learning are developed to measure gains in knowledge, skills, or attitude. Level 2 evaluations can vary in length and type, depending on the event being evaluated. After a training event, for instance, it might be appropriate to distribute level 2 evaluations in the form of a written test, while after providing technical assistance, your organization might find it more appropriate to collect from clients a written plan that documents the issues discussed and the client’s plan for action. Regardless of what sort of gain you are testing for and how you are planning to measure it, level 2 evaluations play an invaluable role in linking your organization’s actions to the success of your clients. Without level 2 evaluations, it is impossible for an organization to prove that its work has resulted in positive behavioral changes and overall improvements in clients’ efficiency or effectiveness.

Document level 2 gains with pre- and post-tests.

In order to prove that your clients have gained new knowledge, skills, or attitudes as a result of your training or technical assistance, your organization will need to be able to quantify those gains using performance measures. Pre-tests or pre-event surveys can help to capture your client’s baseline understanding or knowledge of the training and technical assistance subject area.

Just as with level 1 surveys, level 2 pre- and post-tests should be developed in a consistent manner so that you can easily compare the two and identify the impact of your training or technical assistance.
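
For instance, a minimal sketch of that pre/post comparison, assuming hypothetical test scores keyed by participant, might look like the following; the participant identifiers and scores are made up for illustration.

```python
# Hypothetical pre- and post-test scores (percent correct) for the same
# participants; identifiers and values are illustrative only.
pre_scores = {"P01": 55, "P02": 40, "P03": 70, "P04": 60}
post_scores = {"P01": 85, "P02": 75, "P03": 90, "P04": 65}

# Knowledge gain per participant, and the average gain across the group.
gains = {pid: post_scores[pid] - pre_scores[pid] for pid in pre_scores}
average_gain = sum(gains.values()) / len(gains)

for pid, gain in gains.items():
    print(f"{pid}: pre {pre_scores[pid]}, post {post_scores[pid]}, gain {gain:+d}")
print(f"Average knowledge gain: {average_gain:+.1f} points")
```

Because the same questions are asked before and after the event, the difference in scores becomes a straightforward performance measure for knowledge gain.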

Pre-tests or surveys can also be very informative, as they help identify a client’s strengths and weaknesses and highlight areas that your staff should focus on in more detail, or spend less time on, depending on the client’s skill level and knowledge.

Develop level 2 evaluations that are relevant to the learning content.

Depending on the objectives of your training or technical assistance event, you may find it useful to use a variety of methods to evaluate clients’ learning. Learning evaluations will vary depending on whether the training and technical assistance (TTA) event is designed to increase participants’ knowledge, improve their skills, or change their attitudes.

Level 2 evaluations can include written or electronic tests or surveys, presentations, essays, or small projects. For longer or more dynamic training events, a combination of these elements might be more appropriate.

CHAPTER 4: Evaluating Behavior

Kirkpatrick’s third level of evaluation aims to unearth the changes in behavior that have taken place within the client as a result of the provided training or technical assistance. Level 3 assessments enquire as to whether an individual actually applied the knowledge they gained in a valuable way. Although evaluating for behavior changes takes time and patience, level 3 surveys can help to showcase how the training and technical assistance you provide inspires your clients to take action and make organizational improvements. In order to properly evaluate changes in behavior, your organization should be prepared to collect both quantitative and qualitative information. This might include methods such as surveys, interviews, and even on-site observation. Some organizations may decide to evaluate behavior at multiple points in time after the training or technical assistance event to see whether clients have maintained momentum and continue to make positive changes. Naturally, it is up to your organization to develop and implement a level 3 evaluation plan that works within your budget and scheduling constraints.

Client interviews reveal behavioral changes.

In order to effectively evaluate for changes in behavior, you will need to reconnect with training and technical assistance participants. Whether you reach out via electronic survey, email, telephone, or in-person interview, you will be looking to answer the same core questions about whether and how participants have applied what they learned.

Your organization may also find it beneficial to interview client staff members who regularly interact with the individual who took part in the TTA event.  These staff interviews may include colleagues, supervisors, or subordinates – anyone who might be able to provide insight into the individual’s behavior.  When interviewing client staff members, TTA providers should enquire as to whether the individual left the training or technical assistance event energized and excited to make positive changes, whether or not the individual actually made a change, and whether this change was well-received and sustainable within the client’s overall organizational climate.

Gain perspective through pre- and post-tests of behavior.

Just as with level 2 evaluations, level 3 evaluations are often more informative when organizations evaluate behavior both before and after a training or technical assistance event. These pre- and post-tests or surveys provide insight into how your clients have historically performed certain processes and procedures, and how new knowledge, skills, or attitudes have impacted or changed how those processes and procedures are performed.

Level 3 evaluations require patience.

It takes time to observe how learning impacts behavior. Because of this, your organization will need to review the content and objectives of your training and technical assistance efforts and decide on a reasonable length of time that provides your clients an opportunity to put their new knowledge or skills to work. You will also want to provide your clients with sufficient time to consider these behavioral changes and formulate an opinion as to whether they think the changes were positive and sustainable.

CHAPTER 5: Evaluating for Results

The fourth and final level of evaluation deals with results and return on investment. Level 4 assessments ask: How have organizational outcomes changed as a result of a change in behavior? While evaluating for results can be incredibly challenging, it is often the most rewarding level of evaluation. The data unearthed through level 4 evaluations is well worth the effort, as it provides organizations with evidence or proof that their training and technical assistance efforts are ultimately impacting the performance of the client in a positive way. Like level 3 evaluations, evaluating for results takes time and patience. Prior to beginning the level 4 evaluation process, you and your clients should map out the outcomes and performance measures that you hope to achieve through the implementation of a training and technical assistance program. After these data points have been identified, you can begin to analyze whether your work has resulted in tangible benefits for your client, allowing them to operate more efficiently and effectively, and empowering them to increase their capacity.

Evaluate long-term outcomes and identify results.

Outcomes are the desired measurable changes in efficiency or effectiveness that are meaningful to the client. In the early stages of developing a training or technical assistance program, outcomes become goals or hypotheses as to the impact you hope to have on your client. When evaluating for results, an organization should revisit the long-term outcomes identified in their logic model, and consider ways to evaluate these outcomes.

Performance measures are the data points that support the achievement of a larger outcome. While an outcome generally represents a larger goal or aim for the organization, performance measures are the concrete factors used to quantitatively gauge progress toward the established outcome.
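
As an illustration, if a long-term outcome is that a client organization raises more money, the supporting performance measures might include dollars raised and the number of active donors, captured at baseline and again at follow-up. The sketch below compares hypothetical figures; the measure names, values, and twelve-month interval are all assumptions for demonstration.

```python
# Hypothetical level 4 performance measures for one client organization,
# captured at baseline (before TTA) and at a twelve-month follow-up.
baseline = {"dollars_raised": 40_000, "active_donors": 120}
follow_up = {"dollars_raised": 55_000, "active_donors": 150}

for measure in baseline:
    change = follow_up[measure] - baseline[measure]
    pct = change / baseline[measure]
    print(f"{measure}: {baseline[measure]:,} -> {follow_up[measure]:,} "
          f"({pct:+.0%} change)")
```

Tracked this way, each long-term outcome from the logic model is backed by concrete before-and-after figures that you and your client agreed on in advance.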

Level 4 evaluations do not exist in a vacuum.

Level 4 evaluations are compelling. Evaluating for results helps to affirm that your organization’s efforts were well spent, that your clients came away with meaningful knowledge that motivated them to change their behavior, and that this behavior change led to improvements in the way they do business.

Because results are so compelling, it is important that your organization be able to show the link between your services and each level of client evaluation. If your organization has not taken the time to make level 2 and level 3 evaluations a priority, it will be hard to make the case that your client’s successes can be attributed to the training and technical assistance opportunities you provided.

While all evaluation processes require you to make assumptions, effective evaluation of all four levels will make it far easier to correlate the activities and outputs your organization provides with the positive results of your clients. Keep in mind that correlation does not imply direct causation. In order to prove direct causation, an organization will need to take part in experimental research and a controlled study to link training and technical assistance to results.

Summary

Effective training and technical assistance providers embrace evaluation and outcome measurement, set high expectations for their own performance, and ensure that they are offering clients the best value. Logic models provide a great starting point for organizations, as they help you to graphically lay out your organization’s resources, activities, outputs, and outcomes. Once you’ve identified these elements, Kirkpatrick’s four levels of evaluation provide a framework around which to analyze and explore these offerings. Tests for learning and behavior can provide evidence that you are meeting short-term and intermediate outcomes, while tests for results eventually prove the accomplishment of long-term objectives.

Thorough evaluation processes take time, effort, and patience. However, when done properly and consistently, evaluation can provide you with the edge over your competitors. Methodical evaluation plans and regular analysis can provide your organization with calculations of your value not only to your clients, but to the larger community you serve. Carefully consider the costs and benefits of evaluation, and develop a plan that meets your expectations, yet works within your organization’s budget. Thank you for taking the time to learn about Evaluating Training and Technical Assistance.

Let improvement drive your evaluation process.

Effective training and technical assistance organizations develop cultures of continuous improvement and constantly strive to make their offerings more convenient and relevant. Kirkpatrick’s four levels of evaluation can help your organization to identify both small and large changes that, when implemented, can significantly impact the quality of services you provide.

Whether evaluation results are positive or negative, they can help your organization to fine-tune training and technical assistance processes. To keep this drive for improvement at the forefront, end all evaluations with some variation of the question, “How can we make this program more helpful?”

Consider cost versus benefits.

When crafting an evaluation plan, an organization should always consider costs versus benefits.  Consider the who, what, when, and how of your evaluation plan.

Consider the size and projected impact of the training and technical assistance you are providing, and develop a complementary evaluation plan that works within your available resources.