Analyzing Data and Communicating Results



Welcome to the e-learning lesson on analyzing data and communicating results. By this point you should be familiar with the structure and purpose of an outcome measurement plan and with the first two phases of developing one: identifying outcomes and developing performance indicators, and creating and implementing a data collection plan. The next two phases are analyzing your data and communicating the results of your analysis. Data analysis is more than figuring out ways to make beautiful pie charts and other graphics; it is about looking at the information you have collected and asking yourself what it means. Once you have the data, it is up to you to use it to inform decisions about your programs. This can be achieved by effectively communicating your results to your audience, which centers on marketing your outcomes to external stakeholders. By the end of this lesson you will be able to recall the types of descriptive and inferential statistics, utilize data analysis strategies when developing an evaluation report, and apply communication strategies to gather support from external stakeholders.

Analyzing data and communicating results is part of the outcome measurement plan.

Although there are many uses for the information generated by outcome measurement, organizations often engage in outcome measurement because they are required to do so. They are asked to be accountable for the use of their grantmakers' funds. This includes foundations and grantmaking organizations such as United Way, as well as local, state, and Federal governments.

Every organization hopes to deliver quality services. Outcome measurement will help you understand whether or not you do. With the information you collect, you can determine which activities to continue and build upon and which you may need to change in order to improve the effectiveness of your program.

Analysis and communication play a critical role in outcome measurement.

Data analysis is a useful component of outcome measurement because it helps you quantify your support and provides a much more compelling message when communicating your investment to stakeholders. Although analyzing data can help you make informed decisions, it is worth noting that data does not substitute for judgment or managerial decision-making.

CHAPTER 1: Statistics

Analysis of data can be simple or complex, depending on the tools you choose to apply. You can simply count, sort, and order the pieces of data. You can perform statistical tests to determine the relationship between two sets of data, or use the information to find patterns that allow you to predict future behavior. Whichever tools you use for analysis, continue to ask yourself, “to what end?” and make sure that the analysis is used in service of the mission of your program.

It is useful to be familiar with common descriptive statistics.

It is likely that most analysis will use measures of total, arithmetic mean, and standard deviation. However, it is useful to be familiar with all the descriptive statistics.


Total - the total is the sum of all parts.

Considerations: The total by itself is less useful than when it is compared against other totals to show increases: for example, the total number of organizations served, the total number of youth participating in a program, or the total amount of additional funds raised by an organization.

Arithmetic Mean - The arithmetic mean is the sum of the observations divided by the number of observations. It is the most common statistic of central tendency, and when someone says simply "the mean" or "the average," this is what they mean.

Considerations: This measurement is easily skewed by extreme outliers. For example, if you are reporting the average additional funds that organizations raised and one organization raised many times the amount of the others, the average rises significantly.
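To illustrate how a single outlier pulls the mean, here is a short Python sketch; the dollar amounts are invented for illustration:

```python
from statistics import mean

# Hypothetical additional funds (in dollars) raised by five organizations.
funds = [4_000, 5_500, 6_000, 7_000, 5_000]
print(mean(funds))  # the typical organization raised about $5,500

# A single extreme outlier pulls the mean up to roughly ten times that,
# even though five of the six organizations raised under $7,000.
funds_with_outlier = funds + [300_000]
print(mean(funds_with_outlier))
```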

Median - The median is found by sorting all the data from lowest to highest, and taking the value of the number in the middle. If there is an even number of observations, the median is the average of the two numbers in the middle.

Considerations: If the distribution of data is very skewed, the median is a more useful tool to indicate the central tendency because it is less influenced by outliers.
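The same hypothetical fundraising figures show why the median resists the outlier:

```python
from statistics import mean, median

# Hypothetical fundraising data (in dollars), including one extreme outlier.
funds = [4_000, 5_000, 5_500, 6_000, 7_000, 300_000]

print(mean(funds))    # dragged far above what most organizations raised
print(median(funds))  # the two middle values ($5,500 and $6,000) average to $5,750
```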

Mode - The mode is the most frequently occurring value in the data set.

Considerations: Mode is particularly useful when you have data that is grouped into a small number of classes, for example, the type of organization you are serving, or what county the organization operates in. The mode is simply the type of organization you serve most frequently, or the county where the largest number of organizations operate.
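For grouped data such as organization type, the mode can be computed directly. The organization types below are invented for illustration:

```python
from statistics import mode

# Hypothetical list of the types of organizations served.
org_types = [
    "food pantry", "youth program", "food pantry",
    "shelter", "food pantry", "youth program",
]

print(mode(org_types))  # "food pantry" -- the most frequently served type
```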

Standard Deviation - Standard deviation is a measure of the variability or dispersion of a data set.

Considerations: A low standard deviation indicates that the data points tend to be very close to the same value (the mean), while a high standard deviation indicates that the data are spread out over a large range of values. For example, if looking at the amount of money each organization devoted to hiring consultants, a range of $40 to $200 would produce a low standard deviation compared to a range of $20 to $300,000.

Ratio - A ratio is an expression that compares quantities relative to each other.

Considerations: A ratio is a proportional relationship and therefore compares two quantities against each other, such as the number of client organizations served per staff member.
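As a minimal example, a staff-to-client ratio can be reduced to lowest terms; the counts are hypothetical:

```python
from math import gcd

# Hypothetical: 8 staff consultants serve 120 client organizations.
staff, clients = 8, 120

# Divide both sides by their greatest common divisor to simplify the ratio.
divisor = gcd(staff, clients)
print(f"{staff // divisor}:{clients // divisor}")  # 1:15 staff-to-client ratio
```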

Inferential statistics help you draw conclusions that may be more widely applicable.

Where descriptive statistics help you understand the data as you have it, inferential statistics help you draw conclusions that may be more widely applicable beyond the specific data set you are working with. Essentially, inferential statistics allow you to “infer” additional information. Two common operations are correlation and regression.

Correlation uses statistical formulas to calculate the relationship between two variables. For example, the two variables might be the number of hours of technical assistance an organization receives and the capacity index score at the beginning or end of the intervention, or even the difference between the two. You might expect that organizations with lower capacity index scores required more technical assistance, or that those who received more hours of technical assistance saw more increases in capacity index scores. The statistical methods of calculating correlative relationships help define the degree of interdependence between the two variables. Regression is the process of plotting the two variables on a graph, and then finding a line that “best fits” the trends in the data. It helps predict what values you might expect to see in one variable given another variable.
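Both operations can be computed from their textbook formulas. The sketch below uses invented (hours of technical assistance, change in capacity index score) pairs to compute the Pearson correlation coefficient and a least-squares regression line:

```python
# Hypothetical paired data: hours of technical assistance received and the
# change in each organization's capacity index score.
hours = [5, 10, 15, 20, 25, 30]
change = [2, 4, 5, 9, 10, 12]

n = len(hours)
mean_x = sum(hours) / n
mean_y = sum(change) / n

# Pearson correlation: covariance divided by the product of the spreads.
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(hours, change))
var_x = sum((x - mean_x) ** 2 for x in hours)
var_y = sum((y - mean_y) ** 2 for y in change)
r = cov / (var_x * var_y) ** 0.5

# Least-squares line of best fit: predicted change = slope * hours + intercept.
slope = cov / var_x
intercept = mean_y - slope * mean_x

print(round(r, 3))      # close to 1: a strong positive relationship
print(round(slope, 3))  # each extra hour predicts ~0.41 more index points
```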

For both correlation and regression, it is important to understand that the equations do not definitively provide evidence of a cause and effect relationship. For example, number of hours of technical assistance may go up as the capacity index score decreases (a negative relationship). One possible explanation would be that your capacity building program design requires that you spend more time with organizations with lower capacity scores. On the other hand, more hours of technical assistance may be associated with organizations with lower capacity scores because those organizations did not exhibit high levels of readiness for change.

CHAPTER 2: Analysis and Data Displays

This chapter contains samples of analysis you might find in real capacity builders’ evaluation reports. Text displays can be simple and ineffective. Capacity builders face a number of circumstances where a graphic display offers a more intuitive, and in some cases more compelling, way of delivering the message. Explore the examples and use your knowledge of descriptive statistics to apply data analysis strategies and create a visual representation of your data.

Sample Measure 1 - Increase in knowledge of best practices in managing Federal grants.

The goal of this capacity builder’s training was to equip faith-based and community organizations with the knowledge, skills, and tools to successfully manage Federal grants. The capacity builder measured the outcome with a post-training survey in which participants were asked to agree or disagree with the statement, “I gained new knowledge about managing Federal grants.” These outcomes were self-reported; if the capacity builder wanted a more objective assessment of whether the organizations did, in fact, learn something about managing Federal grants, they might have administered a brief survey or quiz before and after the workshop.

Let us suppose that 54 percent of recipients reported that they gained new knowledge on the topic. The analysis would ask, “Is this good? What does this tell us about the training, and possibly about the participants?” On one hand, this tells us that nearly half of the participants did not learn anything new from the training. You might conclude that your training participants already have all the knowledge they need about grants management, and schedule other training topics. But it is possible that your staff observes something very different: perhaps most of the participating organizations are not tracking employee time in accordance with Federal rules and regulations, or retaining records. Now you have a different situation, where your training contained very important information that the participants simply did not receive. Your next step, rather than moving on to other topics, might be to find other ways to deliver the necessary information.
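The percentage itself is simple arithmetic. A hypothetical tally of survey responses (the counts below are invented) might be computed like this:

```python
# Hypothetical post-training survey tally for the statement
# "I gained new knowledge about managing Federal grants."
agreed = 27
total_respondents = 50

# Share of participants reporting new knowledge, as a percentage.
pct_gained_knowledge = 100 * agreed / total_respondents
print(f"{pct_gained_knowledge:.0f} percent reported gaining new knowledge")
```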

Sample Measure 2 - Organizations implement management best practices.

One capacity builder used index scores for a capacity assessment and tracked the progress of each organization, as well as the cohort, across several categories of assessment. Here is a sample of the data, which summarizes the average and median index scores of all organizations in a cohort for each category. The assessment asks whether the organization has key practices in place for each of the categories. The results are tabulated into scores as the percentage of “yes” answers. The index scores are tracked across time by category and in aggregate for the organization and the cohort.
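A sketch of the tabulation described above, using invented yes/no answers for two categories:

```python
# Hypothetical key-practice answers from one organization's assessment,
# grouped by category. Each index score is the percentage of "yes" answers.
answers = {
    "financial management": ["yes", "no", "yes", "no"],
    "governance": ["yes", "yes", "yes", "no"],
}

def index_score(responses):
    """Percentage of 'yes' answers in a category."""
    return 100 * responses.count("yes") / len(responses)

for category, responses in answers.items():
    print(category, index_score(responses))
```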

As expected by the program, results from the analysis indicated that index scores were increasing across the cohort. However, in the area of financial management, the scores actually dropped. According to this capacity builder, this was because after the initial assessment, the participating organizations increased their knowledge about the topic and then answered the questions on the assessment differently, realizing that they did not have as many effective financial management practices in place as they initially thought. What could have been interpreted as a “negative” outcome is instead explained as a possible “positive” outcome.
The capacity builder might also be interested in which capacity dimensions saw the most frequent improvements overall. This might help them define future capacity building. The program manager might start by identifying the single biggest area of improvement for each organization.
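One way to sketch that step, with invented before-and-after index scores for two hypothetical organizations:

```python
# Hypothetical before/after index scores per organization, by category.
before = {
    "Org A": {"governance": 40, "financial": 60, "fundraising": 50},
    "Org B": {"governance": 70, "financial": 30, "fundraising": 55},
}
after = {
    "Org A": {"governance": 65, "financial": 70, "fundraising": 55},
    "Org B": {"governance": 75, "financial": 60, "fundraising": 80},
}

for org in before:
    # Gain per category, then the category with the largest gain.
    gains = {cat: after[org][cat] - before[org][cat] for cat in before[org]}
    biggest = max(gains, key=gains.get)
    print(org, biggest, gains[biggest])
```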

Sample Measure 3 – Hours of Technical Assistance

Let’s suppose that the individual capacity index scores from the previous example are compared with the hours of technical assistance each organization received, and the pairs are plotted. Why would this be of interest? In many programs, a central theory of capacity building is that more technical assistance yields better results. If an organization is measuring capacity through index scores and measuring the amount of technical assistance in hours, it makes sense to explore the relationship between those sets of data to see what information is available to managers. Contracted evaluators may perform rigorous statistical analysis, including the correlation and regression analysis discussed in the previous section; however, even managers without statistical training can examine the relationship between two variables by plotting them.
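Even without charting software, sorting the pairs and printing a rough text plot lets a manager eyeball the trend. The (hours, score) pairs below are hypothetical:

```python
# Hypothetical (hours of technical assistance, capacity index score) pairs.
data = [(30, 82), (5, 55), (20, 74), (10, 60), (25, 78), (15, 68)]

# Sort by hours so the rows read like the x-axis of a scatter plot;
# the bar of "#" marks grows with the index score.
for hours, score in sorted(data):
    print(f"{hours:3d} hrs | {'#' * (score // 5)} {score}")
```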


CHAPTER 3: Communicating Results

Now that capacity builders have the data in hand, it is time to focus on communicating the results. The process of communicating results focuses on marshalling support from various external stakeholders and generating enthusiasm about the results of the program. If the capacity builder has dedicated marketers or fundraisers, they should lead, or serve as key advisors on, the plans you make. When considering how to use the results externally, create a plan that incorporates what to communicate, how to communicate it, and the target audience.

The following suggestions are great ways to communicate your findings.

You will determine what is the right mix of these six forms of communication:

  1. Issue a formal report. At the conclusion of the project, complete a full report of your evaluation efforts. Describe your desired outcomes and your logic model, the data collection plan, your results, and any recommendations or actions you have taken or plan to take as a result.
  2. Present case studies or stories of impact. These may be part of your formal report, but they can also be teased out and made available as marketing or teaching tools. Focus on a single organization and the results that organization achieved through your capacity building intervention.
  3. Develop press releases. Draft a press release highlighting the strongest results you have discovered and distribute to local newspapers, columnists, bloggers, community email lists, and neighborhood organizations.
  4. Create snapshots or postcards. Distill your key results to a short list, and turn that short list into a snappy display for print or online. Printed, laminated cards make great promotional materials and can be handed out at community meetings, mailed to constituents, and displayed on webpages.
  5. Incorporate visual aids. Whenever possible, reinforce the numerical results with pictures, graphs, or charts. You can also include photos.
  6. Produce a promotional video. Record interviews with organizational leaders discussing how the capacity building help they received improved their organization. Leaders of the capacity builder can discuss the program’s goals and results. The video can be posted free of charge to a video-sharing site or hosted on the organization’s web site.

Having put together great materials to communicate your results, it is important to put them to good use.

The following are tried-and-true methods of connecting external stakeholders to the materials that communicate the evaluation results.

  1. Enhance your web presence. Snapshots, case studies, stories, visual aids, and even your final report should be made available on your program’s website, Facebook page, or any other online communities you are part of, including a nonprofit management association or any associations of management-support organizations.
  2. Invite the media. Send the press releases to local media outlets and invite them to take a tour. Newspapers that cover local events may be interested in reporting on the results you have achieved. Invite members of the media, including local bloggers or activists, to interview the leadership and take a tour of your facilities or programs.
  3. Give presentations. Invite stakeholders (board, partners, funders) for a meeting. Large organizations may be able to call a press conference. Go to your funders or stakeholders. If your organization is invited to give a speech or present information at any forum, be sure to include the key results in your introduction of your organization.  Note that these presentations can be more affordable and touch a wider geographic spread if done as a virtual presentation (webinar) using a product like GoToWebinar or Acrobat Connect.
  4. Identify a program champion. Identify a charismatic leader from an organization that has benefited greatly from your program and invite them to be a partner with you to help get the word out about the great results they have experienced in the program. This person or organization can participate in panels and forums, write letters on behalf of your organization, and participate in other outreach activities.  Remember to recognize this person for his/her efforts in ways that will be meaningful to him/her.

It will be important to include many audiences when discussing the results of your outcome measurement.

Some possible audiences you may want to include are:

  1. Funders, potential and current. This is most likely the first audience that capacity builders will think to target, and rightly so; communicating the effectiveness of your services is key to sustainability.
  2. Partners. Include all the organizations and individuals who contribute to your program’s operations.
  3. Your client organizations. Throughout the process, we have encouraged capacity builders to work closely with the organizations that are part of the capacity building engagement. This should not stop when it comes to the distribution of results. In addition to any results for that particular organization, provide the overall results of the program to give the organization an idea of how they compare to others in their cohort.  This builds goodwill that organizations have towards your organization, and builds your reputation across the nonprofit sector.  This can help create “buzz” that is sure to get back to funders and also create more demand for your services.
  4. Organizations you did not serve. Seeing the results of another organization can be a powerful marketing pitch and can generate more interest in your programs.  You may consider, for this target audience, engaging individuals from organizations you once served to help deliver the message to the organizations that you have not yet served via community panels, webinars, etc.
  5. Community leaders. Think about who benefits from the success of your capacity building program. Stronger nonprofits mean better results for the individuals being served by those organizations. Therefore, include local politicians (mayors, county executives, congressmen and women, state legislators) and institutions that benefit from the stronger organizations, such as school systems or corrections officials.
  6. Management and staff inside your organization.  The subsequent phase of Measuring Outcomes addresses in greater detail how to take the results and use them to examine internal performance and processes.


Two key phases of your outcome measurement plan are analyzing data and communicating the results of your analysis. In completing this lesson you should now have a firm understanding of how these two phases fit into your outcome measurement plan. Over the course of this lesson you have learned how to recall the various types of descriptive and inferential statistics, utilize data analysis strategies when developing an evaluation report, and apply communication strategies to gather support from external stakeholders. Thank you for taking the time to learn more about analyzing data and communicating results.

Analyzing data and communicating results are critical phases in your outcome measurement plan.

After engaging in this lesson, you now have a better understanding of the key elements required to develop an outcome measurement plan and accurately measure a program’s outcomes. Analyzing data and delivering the results in a clear and concise way will assist you when building your outcome measurement plan.

Visit these additional resources to assist you with your outcome measurement plan.

TCC Group

Innovation Network