{{Term|EVALUATION|Is an in-depth study which takes place at a discrete point in time, and in which recognized research procedures are used in a systematic and analytically defensible manner to form a [[Judgement|judgment]] on the value of an intervention. It is an applied inquiry process for collecting and synthesizing evidence to produce [[Conclusions|conclusions]] on the state of affairs, value, merit, worth, significance or quality of a programme, project, policy, proposal or plan.<ref>Fournier M. Deborah in Mathison, Sandra. Encyclopaedia of Evaluation, pp 138, Ed. University of British Columbia. Thousand Oaks, CA: Sage Publications, 2005.</ref>
Conclusions arising from an evaluation encompass both an empirical aspect (that something is the case) and a normative aspect (a judgment about the value of something). The value feature of evaluation differentiates it from other types of inquiry, such as investigative journalism or public polling.

Evaluation can be conducted for purposes of:
Evaluation should ideally be undertaken selectively to answer specific questions to guide decision-makers and/or programme managers, and to provide information on whether the underlying theories and assumptions used in programme development were valid, what worked, what did not work, and why.<ref>[http://www.un.org/Depts/oios/mecd/mecd_glossary/index.htm Office of Internal Oversight Services (OIOS). Monitoring, Evaluation and Consulting Division, 2006.]</ref>
'''See also''': [[A.D.D.I.E Model]], [[Formative Evaluation]]

__TOC__

=='''Characteristics of evaluation'''==
* '''Analytical''' – based on recognized research techniques
* '''Systematic''' – carefully planned and using chosen techniques consistently
* '''User-driven''' – the design and implementation of the evaluation should provide useful information to decision-makers.
=='''Planning an Evaluation'''==
Planning an evaluation entails determining what it will accomplish and specifying the methods and resources necessary to achieve the intent of the evaluation. Evaluation planning calls for the definition of key evaluation questions, a description of the information to be acquired and its sources, data collection and analysis techniques, reporting protocols, and the required resources.<ref>Imas Linda G. Morra, Rist C. Ray. The Road To Results; Designing and Conducting Effective Development Evaluations pp 240. The World Bank, Washington DC, 2009.</ref><ref>Smith M. F. in Mathison, Sandra. Encyclopaedia of Evaluation, pp 345, Ed. University of British Columbia. Thousand Oaks, CA: Sage Publications, 2005.</ref>
}}
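The elements of an evaluation plan listed above can be captured in a simple structure. The following Python sketch is illustrative only; the field names and sample values are shorthand chosen for this example, not a standard schema:

<pre>
# Illustrative sketch of the elements of an evaluation plan described above.
# Field names and sample values are hypothetical, not a standard schema.
from dataclasses import dataclass
from typing import List

@dataclass
class EvaluationPlan:
    key_questions: List[str]       # what the evaluation must answer
    information_needed: List[str]  # evidence required to answer the questions
    sources: List[str]             # where that evidence will come from
    collection_methods: List[str]  # e.g. surveys, interviews, document review
    analysis_methods: List[str]    # how the collected data will be analysed
    reporting: str                 # how and to whom findings are reported
    resources: str                 # staff, time and funds required

plan = EvaluationPlan(
    key_questions=["Did the training improve on-the-job performance?"],
    information_needed=["Pre- and post-training performance indicators"],
    sources=["Participants", "Supervisors"],
    collection_methods=["Survey", "Follow-up interviews"],
    analysis_methods=["Descriptive statistics"],
    reporting="Summary report to programme managers",
    resources="One evaluator, three months",
)
</pre>

Laid out row by row, this is essentially the information that the evaluation design matrix below records for each key question.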
{{Tool|EVALUATION DESIGN MATRIX|

[[Image:Evaluation Design Matrix_croped.jpg]]
=='''Evaluation Approaches'''==
Evaluation is generally considered the final stage in a systematic approach whose purpose is to improve interventions (formative evaluation) or to make a judgment about the worth and [[Effectiveness|effectiveness]] of the training intervention (summative evaluation).<ref>Gustafson, K. L., & Branch, R. B. Survey of instructional development models. 3rd ed. Syracuse, 1997.</ref> Goal-based and systems-based approaches are predominantly used in the evaluation of training, the most influential being the [[Kirkpatrick Model|Kirkpatrick model]].<ref>Kirkpatrick, D. L. Techniques for evaluating training programs. Journal of the American Society of Training Directors, 13, 3-26, 1959.</ref> This model follows the goal-based evaluation approach and is built on four simple questions that translate into four levels of evaluation: reaction, learning, behavior, and results (a short sketch of these four levels follows the list below). Under the systems approach, the most widely applied models include:
* Context, Input, Process, Product (CIPP) Model<ref>Worthen, B. R., & Sanders, J. R. Educational evaluation. New York: Longman, 1987.</ref>
* Training Validation System (TVS) Approach<ref>Fitz-Enz, J. Yes…you can weigh training’s value. Training, 31(7), 54-58, July, 1994.</ref>
* Input, Process, Output, Outcome (IPO) Model<ref>Bushnell, D. S. Input, process, output: A model for evaluating training. Training and Development Journal, 44(3), 41-43, March, 1990.</ref>
View the key evaluation approaches: [[Media:Approaches_to_Training_Evaluation.pdf|Approaches to training evaluation]] [http://www.click4it.org/images/e/e5/Approaches_to_Training_Evaluation.pdf]
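As a minimal sketch of the goal-based logic described above, the four Kirkpatrick levels can be written out with their guiding questions; the wording of the questions below is paraphrased for illustration, not quoted from the model:

<pre>
# Illustrative outline of the four Kirkpatrick levels; the guiding
# questions are paraphrased for this example, not quoted from the model.
KIRKPATRICK_LEVELS = {
    1: ("Reaction", "How did participants respond to the training?"),
    2: ("Learning", "What knowledge or skills did participants acquire?"),
    3: ("Behavior", "Do participants apply what they learned on the job?"),
    4: ("Results", "What organizational outcomes followed from the training?"),
}

for level, (name, question) in KIRKPATRICK_LEVELS.items():
    print(f"Level {level} ({name}): {question}")
</pre>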
=='''Evaluation Questions'''==
Evaluation questions serve as a guide for an evaluation. They define the scope of an evaluation and communicate its focus to [[Stakeholder|stakeholders]]. A key evaluation question on the value of [[E-Learning|e-learning]] might be constructed as follows:

''To what extent does the design and delivery of e-learning contribute to or impede participants' learning and transfer of training to the job, as compared with the design and delivery of face-to-face classroom-based training?''

Key evaluation questions are not the same as the survey questions used in the evaluation: they are broader and more comprehensive than the specific questions used in a [[Data Collection Tools|data collection tool]], and they can be useful in determining the type of questions to be asked in a data collection effort.<ref>Russ-Eft F. Darlene in Mathison, Sandra. Encyclopaedia of Evaluation, pp 355, Ed. University of British Columbia. Thousand Oaks, CA: Sage Publications, 2005.</ref>
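To illustrate the distinction, a broad key question can be mapped to the narrower items that might appear in a data collection tool; all sample wording below is hypothetical:

<pre>
# Illustrative mapping from one broad key evaluation question to the
# narrower survey items it might generate; all wording is hypothetical.
key_question = (
    "To what extent does e-learning contribute to or impede learning and "
    "transfer of training, compared with face-to-face training?"
)

survey_items = [
    "Rate how well the course design supported your learning (1-5).",
    "How often have you applied the course content in your work?",
    "Which format did you complete: e-learning or classroom?",
]

print(key_question)
for item in survey_items:
    print(" -", item)
</pre>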
=='''Evaluation Tools'''==
[[Media:Level_I_Evaluation_Flow_Chart.pdf|Flow chart]] to determine if Level 1 evaluation is required [http://www.click4it.org/images/5/50/Level_I_Evaluation_Flow_Chart.pdf]

[[Media:Level_II_Evaluation_Flow_Chart.pdf|Flow chart]] to determine if Level 2 evaluation is required [http://www.click4it.org/images/1/11/Level_II_Evaluation_Flow_Chart.pdf]

[[Media:L1_Evaluation.pdf|Steps]] for conducting Level 1 Training Evaluation (for UNITAR training events) [http://www.click4it.org/images/d/d9/L1_Evaluation.pdf]}}
− | |||
{{Addlink|Below is a list of selected websites where you can find additional information:}}
{| border="1" width="100%"
!Link
!Content
|-
|[http://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/9672.pdf 10 must-knows about evaluation]
|An infographic summarizing ten must-knows about evaluation.
|-
|[http://www.doview.com/ DoView Software]
|DoView is software that provides a visual approach to monitoring, evaluating and communicating your outcomes.
|}
[[Image: pdf.png]] [[Media:Level_II_Evaluation_Flow_Chart.pdf|Flow chart to determine if Level 2 evaluation is required]]
[[Image: pdf.png]] [[Media:L1_Evaluation.pdf|Steps for conducting Level 1 Training Evaluation (for UNITAR training events)]]
[[Image: pdf.png]] [[Media:Evaluation Design Matrix croped.jpg|Evaluation Design Matrix]]}}
=='''References'''==
<references/>