Evaluation

From Learning and training wiki
Revision as of 09:56, 19 July 2011

EVALUATION

An in-depth study which takes place at a discrete point in time, and in which recognized research procedures are used in a systematic and analytically defensible manner to form a judgment on the value of an intervention. It is an applied inquiry process for collecting and synthesizing evidence to produce conclusions on the state of affairs, value, merit, worth, significance or quality of programmes, projects, policies, proposals or plans (Fournier, 2005). Conclusions arising from an evaluation encompass both an empirical aspect (that something is the case) and a normative aspect (a judgment about the value of something). This value dimension differentiates evaluation from other types of inquiry, such as investigative journalism or public polling.

Evaluation can be conducted for the purposes of:

  1. Generating general knowledge about and principles of programme effectiveness
  2. Developing programmes and organizations
  3. Focusing management efforts
  4. Creating learning organizations
  5. Empowering project/programme participants
  6. Directly supporting and enhancing programme interventions (by fully integrating evaluation into the intervention)
  7. Stimulating critical reflection on the path to more enlightened practice


Evaluation should ideally be undertaken selectively: to answer specific questions that guide decision-makers and/or programme managers, and to provide information on whether the underlying theories and assumptions used in programme development were valid, what worked, what did not work, and why (UN OIOS).

Characteristics of evaluation can be summarized as follows:

  • Analytical – based on recognized research techniques
  • Systematic – carefully planned, applying the chosen techniques consistently
  • Objective – the evaluator is as neutral as possible and avoids bias, values and/or prejudice
  • Valid – internally valid, in that the causal link between the intervention and the observed effects is established; and externally valid, in that conclusions about the intervention can be generalized to other people, settings and times
  • Reliable – findings are reproducible by a different evaluator with access to the same (or a similar) context, using the same or similar methods of data analysis (a minimal inter-rater agreement sketch follows this list)
  • Issue-oriented – addresses important issues relating to the programme, including its relevance, efficiency and effectiveness
  • User-driven – the design and implementation of the evaluation should provide useful information to decision-makers
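
The "Reliable" characteristic is often checked in practice by comparing the ratings of two independent evaluators on the same items. The article does not prescribe a metric, so the percent-agreement calculation below is only a minimal sketch; the function name and all ratings are illustrative assumptions.

  # Minimal inter-rater agreement check for the "Reliable" characteristic.
  # Illustrative only: the article does not prescribe a reliability metric.
  def percent_agreement(ratings_a: list[str], ratings_b: list[str]) -> float:
      """Share of items on which two independent evaluators agree."""
      if len(ratings_a) != len(ratings_b):
          raise ValueError("Both evaluators must rate the same set of items")
      matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
      return matches / len(ratings_a)

  # Two evaluators rate the same five programme components.
  evaluator_1 = ["effective", "effective", "partial", "ineffective", "effective"]
  evaluator_2 = ["effective", "partial", "partial", "ineffective", "effective"]
  print(f"Agreement: {percent_agreement(evaluator_1, evaluator_2):.0%}")  # Agreement: 80%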


Training Evaluation Approaches


Training evaluation is generally considered the final stage in a systematic approach, its purpose being either to improve interventions (formative evaluation) or to make a judgment about the worth and effectiveness of the training intervention (summative evaluation) (Gustafson & Branch, 1997). Goal-based and systems-based approaches are predominantly used in the evaluation of training (Phillips, 1991), with the most influential approach being the Kirkpatrick model (1959). This model follows the goal-based evaluation approach and is based on four simple questions that translate into four levels of evaluation: reaction, learning, behavior, and results (a minimal data-model sketch of these levels follows the list below). Under the systems approach, the most widely applied models include:

  • Context, Input, Process, Product (CIPP) Model (Worthen & Sanders, 1987)
  • Training Validation System (TVS) Approach (Fitz-Enz, 1994)
  • Input, Process, Output, Outcome (IPO) Model (Bushnell, 1990)
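
The four Kirkpatrick levels mentioned above map naturally onto a small data model. The sketch below is a minimal illustration, not part of any cited model specification; the enum, the helper function and the sample scores are all assumptions.

  from enum import Enum
  from statistics import mean

  class KirkpatrickLevel(Enum):
      """The four levels of the Kirkpatrick (1959) goal-based model."""
      REACTION = 1   # Did participants find the training favourable and relevant?
      LEARNING = 2   # Did participants acquire the intended knowledge and skills?
      BEHAVIOR = 3   # Do participants apply what they learned on the job?
      RESULTS = 4    # Did the targeted organizational outcomes occur?

  def summarize_reactions(scores: list[int], scale_max: int = 5) -> str:
      """Aggregate Level 1 (reaction) survey scores into a one-line summary."""
      return (f"{KirkpatrickLevel.REACTION.name}: "
              f"mean {mean(scores):.1f}/{scale_max} over {len(scores)} respondents")

  # Illustrative end-of-course satisfaction scores on a 1-5 scale.
  print(summarize_reactions([5, 4, 4, 3, 5, 4]))
  # REACTION: mean 4.2/5 over 6 respondents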


In the final analysis, the purpose of evaluating training programmes is to:

  1. Establish whether the training intervention is fully meeting its stated objectives
  2. Make training programmes more efficient and effective in enhancing individual and organizational performance
  3. Provide an opportunity for organizational learning, with lessons learned applied to improve service delivery and meet beneficiary expectations
  4. Determine the value (ROI) of the training intervention to both participants and the organization (a worked ROI sketch follows this list)
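
Purpose 4 refers to ROI without giving a formula. One common formulation, associated with Phillips's ROI methodology rather than stated in this article, expresses net programme benefits as a percentage of programme costs; the figures below are purely illustrative.

  def training_roi(benefits: float, costs: float) -> float:
      """ROI (%) = (net programme benefits / programme costs) x 100.
      Formulation commonly associated with Phillips's ROI methodology;
      the article itself does not specify a formula."""
      return (benefits - costs) / costs * 100

  # Illustrative: monetized benefits of 150,000 against full costs of 100,000.
  print(f"ROI: {training_roi(150_000, 100_000):.0f}%")  # ROI: 50%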


Training Evaluation Tools

  • Flow chart to determine if Level 1 evaluation is required
  • Flow chart to determine if Level 2 evaluation is required
  • Steps for conducting Level 1 Training Evaluation (for UNITAR training events)


Additional Materials
Approaches to Training Evaluation

