Ways to evaluate programs
Janie Hermann offers tips on how to collect program data and feedback in a virtual environment.

Instead of going for the infamous Happy Sheets, try asking for targeted feedback on each session as the day progresses. After 3 days tops, send a follow-up quiz testing knowledge, skill level and attitude.

See how they advise implementing each level and how you can adapt that for your organization. Look at the graph featured in Learning Solutions Magazine to see how much your employees will have forgotten with each passing day. Around two weeks after the training, send a questionnaire asking for detailed answers and concrete examples of the ways in which the information they were given was useful and applicable in their day-to-day jobs.
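The rapid forgetting that such graphs illustrate is commonly modeled with the Ebbinghaus forgetting curve. Here is a minimal Python sketch; the exponential form is the standard textbook model, but the stability constant of 5 days is purely an illustrative assumption, not a figure from this article:

```python
import math

def retention(days, stability=5.0):
    """Estimated fraction of material retained `days` after training,
    without reinforcement, per the exponential Ebbinghaus forgetting curve.
    `stability` is an illustrative decay constant, not a measured value."""
    return math.exp(-days / stability)

# Retention falls fastest in the first days after training:
for d in (0, 1, 3, 7, 14):
    print(f"day {d:2d}: ~{retention(d):.0%} retained")
```

This is why the article suggests quizzing within a few days and following up again weeks later: the two measurements land on very different points of the curve.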

Resist doing this right after the training. Not even our guy Nostradamus could read the future all that well. Check out the short-term vs. long-term retention comparison. Determine what kinds of learning you can assess at what points after the training. Send out a pre-evaluation briefing to all attendees letting them know why they are attending and what is expected of them, based on their current roles.

Start the evaluation even before the training by giving participants context, as well as by checking the training's purpose and strategic role within the company objectives. We summarized it for you below: state the organizational context for the training and its importance; state the course schedule and explore possible impediments; state the objectives of the training program; discuss how the learnings could be applied and possible barriers to application. Ask trainees, no later than one month after the training, for examples of when they applied the lessons learnt and how these altered their usual patterns of behavior.

Was the process effective? What is the status of the program's progress toward achieving the goals? Will the goals be achieved according to the timelines specified in the program implementation or operations plan?

If not, then why not? Do personnel have adequate resources (money, equipment, facilities, training, etc.)? How should priorities be changed to put more focus on achieving the goals?

Depending on the context, this question might be viewed as a program management decision more than an evaluation question. How should timelines be changed? (Be careful about making these changes: know why efforts are behind schedule before timelines are changed.) How should goals be changed? (Be careful about making these changes: know why efforts are not achieving the goals before changing the goals.)

Should any goals be added or removed? How should goals be established in the future? Process-based evaluations are geared to fully understanding how a program works: how it produces the results that it does. These evaluations are useful if programs are long-standing and have changed over the years, if employees or customers report a large number of complaints about the program, or if there appear to be large inefficiencies in delivering program services. They are also useful for accurately portraying to outside parties how a program truly operates.

There are numerous questions that might be addressed in a process evaluation. These questions can be selected by carefully considering what is important to know about the program. What is required of employees in order to deliver the product or services? How are employees trained to deliver the product or services? How do customers or clients come into the program? What is required of customers or clients?

How do employees select which products or services will be provided to the customer or client? What is the general process that customers or clients go through with the product or program?

What do customers or clients consider to be strengths of the program? What do staff consider to be strengths of the product or program? Program evaluation with an outcomes focus is increasingly important for nonprofits and is often requested by funders.

An outcomes-based evaluation facilitates asking whether your organization is really doing the right program activities to bring about the outcomes you believe (or, better yet, have verified) to be needed by your clients, rather than just engaging in busy activities which seem reasonable at the time.

Outcomes are benefits to clients from participation in the program. Outcomes are often confused with program outputs, or units of service, e.g., the number of clients who went through a program. The following information is a top-level summary of information from this site. To accomplish an outcomes-based evaluation, you should first pilot, or test, the approach on one or two programs at most before applying it to all programs. The general steps in an outcomes-based evaluation include the following:

Identify the major outcomes that you want to examine or verify for the program under evaluation. You might reflect on your mission (the overall purpose of your organization) and ask yourself what impact you will have on your clients as you work toward that mission. For example, if your overall mission is to provide shelter and resources to abused women, ask yourself what benefits effectively providing shelter and other services or resources will have on those women.

As a last resort, you might ask yourself, "What major activities are we doing now?" and infer outcomes from those activities. This "last resort" approach, though, may just end up justifying ineffective activities you are doing now, rather than examining what you should be doing in the first place. Choose the outcomes that you want to examine, prioritize them and, if your time and resources are limited, pick the two to four most important outcomes to examine for now.

For each outcome, specify what observable measures, or indicators, will suggest that you're achieving that key outcome with your clients. This is often the most important and enlightening step in outcomes-based evaluation. However, it is often the most challenging and even confusing step, too, because you're suddenly going from a rather intangible concept to concrete, observable measures.

It helps to have a "devil's advocate" during this phase of identifying indicators, i.e., someone who challenges whether each indicator really demonstrates the outcome. Specify a "target" goal for clients, i.e., the number or percentage of clients you expect to achieve each outcome. Identify what information is needed to show these indicators, e.g., what data must be collected about program participants.
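Specifying a target and checking an indicator against it reduces to simple arithmetic. A minimal sketch follows; the function name, the client counts and the 70% target are hypothetical illustrations, not figures from the text:

```python
def target_met(clients_achieving, clients_served, target_pct):
    """Return the observed outcome percentage and whether it meets the target.

    clients_achieving -- clients for whom the indicator was observed
    clients_served    -- total clients who went through the program
    target_pct        -- the "target" goal, e.g. 70 for 70%
    """
    observed_pct = 100.0 * clients_achieving / clients_served
    return observed_pct, observed_pct >= target_pct

# Hypothetical figures: 46 of 60 sheltered women reached stable housing,
# measured against a target of 70%.
pct, met = target_met(46, 60, 70)
print(f"{pct:.1f}% of clients achieved the outcome; target met: {met}")
```

Making the target explicit up front keeps the later analysis honest: you compare the observed percentage to a number chosen before the data came in.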

If your program is new, you may need to evaluate the process in the program to verify that the program is indeed carried out according to your original plans. Michael Patton, a prominent researcher, writer and consultant in evaluation, suggests that the most important type of evaluation to carry out may be this implementation evaluation, verifying that your program ended up being implemented as you originally planned. Decide how that information can be efficiently and realistically gathered (see Selecting Which Methods to Use below).

Consider program documentation, observation of program personnel and clients in the program, questionnaires and interviews about clients' perceived benefits from the program, case studies of program failures and successes, etc. You may not need all of the above. Analyze and report the findings (see Analyzing and Interpreting Information below). The following table provides an overview of the major methods used for collecting data during evaluations. Also consider Appreciative Inquiry Survey Design.

Note that if your evaluation will focus on and report personal information about the customers or clients participating, you should first gain their consent to do so. They should understand what you're doing with them in the evaluation and how any information associated with them will be reported. You should clearly convey the terms of confidentiality regarding access to evaluation results.

They should have the right to participate or not. Have participants review and sign an informed consent form (see the sample informed-consent form). The overall goal in selecting evaluation method(s) is to get the most useful information to key decision makers in the most cost-effective and realistic fashion. Consider the following questions: 1. What information is needed to make current decisions about a product or program?

2. Of this information, how much can be collected and analyzed in a low-cost and practical manner? 3. How accurate will the information be (reference the above table for the disadvantages of each method)? 4. Will the methods get all of the needed information? 5. What additional methods should and could be used if additional information is needed? 6. Will the information appear credible to decision makers, e.g., to funders or top management? What types of program evaluations are there?

EPA has used program evaluation to support new and innovative approaches and emerging practices, identify opportunities to improve efficiency and effectiveness, continuously improve existing programs and, subsequently, improve human health and the environment.

Program Evaluation: Program evaluations can assess the performance of a program at all stages of its development.
Design Evaluation: A design evaluation is conducted early in the planning stages or implementation of a program.
Process Evaluation: A process evaluation assesses whether a program or process is implemented as designed or operating as intended, and identifies opportunities for improvement.

Outcome Evaluation: Outcome evaluations examine the results of a program (intended or unintended) to determine why there are differences between the outcomes and the program's stated goals and objectives.
Impact Evaluation: An impact evaluation is a subset of an outcome evaluation.

Cost-Effectiveness Evaluation: Cost-effectiveness evaluations identify program benefits, outputs or outcomes and compare them with the internal and external costs of the program.
What is performance measurement? How do we determine good measures? How is performance measurement different from program evaluation? We strive to meet three key criteria in our measurement work. Is it meaningful? Measurement should be consistent and comparable to help sustain learning.
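The cost-effectiveness comparison described above amounts to dividing total (internal plus external) program cost by the outcomes achieved, so that alternative programs can be compared per outcome. A minimal sketch; the program names and every figure are hypothetical:

```python
def cost_per_outcome(internal_costs, external_costs, outcomes_achieved):
    """Total program cost divided by the number of outcomes achieved."""
    return (internal_costs + external_costs) / outcomes_achieved

# Hypothetical programs producing the same outcome via different delivery models.
programs = {
    "in-person workshops": cost_per_outcome(40_000, 10_000, 125),
    "online modules": cost_per_outcome(15_000, 5_000, 80),
}
# Rank from most to least cost-effective (lowest cost per outcome first).
for name, cpo in sorted(programs.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${cpo:,.2f} per outcome")
```

The per-outcome figure only supports a fair comparison when the programs pursue the same outcome, measured the same way, which is why consistent and comparable measurement matters.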
