Investment in evaluation is necessary to produce reliable data that can guide choices about programs and policies. By providing data-based feedback, evaluation supports decision-making and policy development. Evaluation is the systematic assessment of a project, program, or policy to provide feedback for programmatic change. It helps identify what works, for whom, and in what circumstances, as well as what may be improved or scaled up. Beyond this definition and goal, several evaluation approaches may be used, including theory-based evaluation.
As the name implies, a theory-based approach is founded on the program theory, which explains the intervention pathways, i.e., the stages by which the objectives will be attained and the key assumptions embedded within those pathways. According to Coryn, Noakes, Westine, and Schröter (2011), theory-driven evaluation refers to any evaluation strategy or approach that explicitly integrates and uses stakeholder, social science, or other theories in conceptualizing, designing, conducting, interpreting, and applying an evaluation.
Attributing impact to the intervention is a crucial part of the theory-based approach. The method offers a framework for evaluating how impacts are achieved and how causes are linked to those impacts (Stame, 2004). It focuses on the causal relationships between intervention and results (Chen and Rossi, 1989) and on uncovering the “mechanisms” that make things happen. Theory-driven or theory-based evaluation aims to grasp the program’s logic and clarify the connection between the problem the program addresses and the actions it takes.
Proponents of theory-based evaluation argue that understanding a program requires addressing the space between inputs and outcomes: the so-called black box. Opening this black box is the distinctive feature that sets theory-based evaluation apart from other approaches. Theory-based evaluation models and designs typically blend stakeholder theories with relevant social science research, so evaluators must be method-neutral, with knowledge and skills spanning both qualitative and quantitative techniques.
Stewart Donaldson (2007) proposes that theory-based evaluation consists of seven steps. In step one, the objective is to engage relevant stakeholders: the evaluator speaks with as many representatives as feasible to obtain their perspectives on the program’s intended long-term objectives and the process it employs to accomplish those outcomes. In step two, the evaluator or assessment team writes a draft of the program theory. In step three, they give the draft to stakeholders for further debate, reaction, and input.
In the fourth step, the evaluators run a plausibility check: they study relevant existing research and evaluations to assess the plausibility of each link, asking whether the program’s actions could plausibly produce the desired outcomes. In the fifth step, they report these findings to key stakeholders and revise the program theory. The plausibility check may indicate that significant program modifications are required, or that stakeholders have been overly optimistic about the potential outcomes.
In the final steps, the evaluators submit their findings to stakeholders and collaborate with them to update the program theory, and the program itself, so that the model accurately depicts what will be done and what can be accomplished. The evaluators also examine the model’s arrows to probe its specificity, often focusing on crucial links and details, such as how long an outcome takes to emerge and the nature of the process expected to deliver it. The stakeholders have the final say in approving the model that will serve as the basis for studying the program.
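To make the idea of intervention pathways and the plausibility check concrete, the program theory can be pictured as a chain of cause-and-effect links, each carrying its embedded assumptions. The sketch below is a hypothetical illustration only; the class names, fields, and the example program are invented for this post and are not drawn from Donaldson (2007):

```python
# Hypothetical sketch: a program theory as a chain of causal links,
# each with embedded assumptions and a flag set during the plausibility check.
from dataclasses import dataclass, field


@dataclass
class Link:
    cause: str
    effect: str
    assumptions: list          # key assumptions embedded in this pathway
    plausible: bool = True     # revised during the plausibility check (step four)


@dataclass
class ProgramTheory:
    name: str
    links: list = field(default_factory=list)

    def pathway(self) -> str:
        """Render the intervention pathway as a readable chain."""
        return " -> ".join([self.links[0].cause] + [l.effect for l in self.links])

    def implausible_links(self) -> list:
        """Links flagged for revision after the plausibility check."""
        return [l for l in self.links if not l.plausible]


# Invented example: a school feeding program's draft theory.
theory = ProgramTheory("school feeding program")
theory.links = [
    Link("daily school meals", "higher attendance",
         ["meals are a meaningful draw for families"]),
    Link("higher attendance", "better learning outcomes",
         ["instruction quality is adequate"], plausible=False),
]

print(theory.pathway())
# -> daily school meals -> higher attendance -> better learning outcomes
print(len(theory.implausible_links()))  # -> 1 link needs revision with stakeholders
```

A structure like this makes the black box explicit: every arrow in the model becomes an inspectable link whose assumptions stakeholders can debate and whose plausibility evaluators can test against existing research.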
The model’s validity is determined by comparing actual results to the theoretically predicted outcomes and analyzing the degree to which the actual change process adheres to the model. Incorporating contextual analysis constitutes a second refinement of the model. Importantly, the theory-based method has been criticized on theoretical grounds: the models are frequently too generic to be falsifiable and seldom identify and test all plausible competing hypotheses (Bamberger, Rugh, and Mabry, 2006, pp. 187-88; Cook, 2000).
Bamberger, M., Rugh, J., & Mabry, L. (2006). RealWorld evaluation: Working under budget, time, data, and political constraints. Sage Publications.
Chen, H. T., & Rossi, P. H. (1989). Issues in the theory-driven perspective. Evaluation and Program Planning, 12(4), 299-306.
Coryn, C. L., Noakes, L. A., Westine, C. D., & Schröter, D. C. (2011). A systematic review of theory-driven evaluation practice from 1990 to 2009. American Journal of Evaluation, 32(2), 199-226.
Cook, T. D. (2000). The false choice between theory-based evaluation and experimentation. New Directions for Evaluation, 2000(87), 27-34.
Donaldson, S. I. (2007). Program theory-driven evaluation science: Strategies and applications. Routledge.
Stame, N. (2004). Theory-based evaluation and types of complexity. Evaluation, 10(1), 58-76.
Kultar Singh – Chief Executive Officer, Sambodhi