Evaluation research comprises a body of knowledge and methodology developed to assess the effectiveness of programs of change. It originated in response to questions about the effectiveness of business and government consulting programs. The field has grown alongside the expansion of international and domestic change programs. However, research on consulting evaluation remains underdeveloped; it is not a focal point of consulting. Questions often arise as to what is evaluated, how it is evaluated, and what should be evaluated. Often, the client and consultant may claim success without drawing upon a systematic, data-based approach. Failures may therefore go under-reported, and results may be tainted by a bias toward the advocating consultant’s approaches and the client’s sentiments.
Regardless of the drivers for consulting, the commonly used evaluation concepts, processes, and methods tend to be afflicted by local biases and unchecked assumptions. Although managers are interested in creating value with consulting resources, the evaluation and measurement of the value-added aspects of consulting are often neglected, avoided, or deemed unnecessary.
Evidence-Based Evaluation and Consulting Engagements
In many cases, consulting efforts are led by consultants who sell or advocate their services and pre-packaged products using claims of success. Often, clients rely on remedies offered by consultants for quick results—without proper diagnosis, intervention, and evaluation. To improve consulting practices, the consulting field would benefit from evaluating and learning about the effectiveness, outcomes, and impacts of consulting efforts. Managers would also benefit by learning the results of the efforts and resources expended on consulting, finding better ways to logically plan and justify consulting efforts in their own organizations.
An evidence-based approach to evaluation offers valid and reliable information on the efficacy of consulting efforts and provides opportunities to generate knowledge, learning, improvements, and innovation. This methodology relies on a logical approach to consulting assessments and includes planning, designing, implementing, and controlling: a disciplined approach to evaluation. Developing and adopting a logical approach that fits the specific drivers of change efforts tends to ensure greater validity and reliability in evaluating the effectiveness of the consulting approach, processes, and results, versus using unchecked, idiosyncratic consultant- and client-dominated approaches.
Resource expenditures for consulting engagements are justified based on views, hopes, and aspirations—whether implicit or explicit—that consultants can draw upon to help their clients deal with the realities of their situations. The evidence-based evaluation approach emphasizes a logical approach that underpins the evaluation of the consultation process from start to finish, or at any phase of the consulting activity. This approach can help reveal the level of thought, scope, and usefulness of the engagement and associated interventions. A logical approach can guide the design and selection of the appropriate type of thinking, concepts, methodologies, and processes to be used in evaluations. It can also provide key elements and play a critical role in facilitating a valid and reliable evaluation. Evidence-based evaluation can incorporate features and attributes that offer the best assessments of the unique situational conditions and driving forces of the consulting engagement.
An effective consulting evaluation requires adopting the appropriate type of thinking, concepts, methodologies, design, and implementation. To ensure valid and reliable evidence-based evaluation outcomes, evaluation methodologies should reflect the rationale of the consulting engagement. When the purpose and drivers of the consulting engagement are not considered, the evaluation concepts, methodology, and subsequent results will most likely be confused and unreliable. It is important to account for the unique, subtle, and special conditions of the consulting effort, whether explicit or implicit. An evidence-based evaluation approach guides the type and level of thinking, applicable concepts, process design, and implementation, and provides a framework for the valid and reliable evaluation of consulting engagements.
Ideally, evidence-based evaluation is incorporated into consulting engagements from beginning to end, with resources and support for the evaluation effort integrated into the consulting engagement. Furthermore, key stakeholders are identified and, if appropriate, included in the process of consulting evaluation planning and implementation efforts, according to the circumstances and drivers of the consulting engagement.
Drivers of Consulting Engagements
The call for consulting services is driven by the client’s need to add value to his or her organization and activities. Effective consulting enhances and enables the client’s system to create value and achieve its goals effectively and efficiently. Three common drivers for those seeking consulting services are: (1) external conditions, (2) internal factors, and (3) a strategic need to respond to combined external (opportunities and threats) and internal (strengths and weaknesses) situations. These three thematic modalities of consulting should be recognized from the outset. For example, external and internal impetuses for change should be clearly understood and diagnosed, with cause-and-effect relations analyzed. Errors in determining the correct impetuses for consulting engagements and evaluations can become costly and self-defeating.
External drivers, such as global, macro trends, industry dynamics, disruptions, transorganizational dynamics, institutional requirements, competition, market shifts, technological changes, regulations, laws, and similar forces, are beyond an organization’s domain of direct control and influence. Requirements for success are often elusive and changing. Adroit consultants provide insights and resources to help the client learn and understand the new and emerging realities, plan, and take appropriate actions to succeed. The overall success of externally driven consulting engagements is best evaluated with externally based success criteria and measures. Self-assessment and internal measures are necessary, but not sufficient. Consulting evaluations address the requisite factors that lead to eventual success in the external environment. From beginning to end, the primary measures and metrics for the consulting process likewise focus on the requisite external success requirements.
Second, internal drivers are focused on improving efficiencies within the firm. Consulting resources are directed at the departmental, sectional, team, or individual levels. The changes required may be behavioral, technological, structural, or procedural, and the focus may be on a department or subset of the organization. The rationale and measure of successful consulting outcomes are bound within internal core processes, resources, expectations, or even stylistic managerial and cultural preferences. For example, members of a department and their supervisor may aim to improve teamwork in order to boost their productivity and morale. Other interventions may include structural, technical, personnel, and procedural changes. Evaluations of internal consulting engagements have a local flavor, too. Rousseau offers an example of internally driven consulting, as demonstrated by a health care case:
… with input from clinic staff and feedback from clinic staff, a redesigned feedback system takes shape. The new system uses three performance categories—care quality, cost, and employee satisfaction—and provides a summary measure for each of the three. Over the next year, through provision of feedback in a more interpretable form, the performance of the health system improves across the board, with low performing units showing the greatest improvement.
This example illustrates internal drivers for improvements and their subsequent internal evaluation.
Third, strategy drivers for change and consulting facilitate rethinking, renewing, building, maintaining, and rebalancing to achieve an effective equilibrium between the client’s vision, mission, and capabilities, and dynamic external environment conditions. According to de Kluyver and Pearce, strategic thinking drives decisions regarding what resources are employed to plan and implement firm-level strategies, as well as how those resources are employed. Value migration creates conditions that require firms to make adjustments—or leap forward—to maintain or rebuild their strategic advantage by reconciling external and internal requirements.
Strategic consulting engagements may include initiatives to diversify, acquire and merge businesses, harvest cash cows, and spin off or abandon non-core businesses. A client’s system for value-creating activities is renewed, developed, modified, and used to enhance the firm’s competitive and strategic advantages. Strategy-consulting drivers make use of internal resources to assess market opportunities and risks, develop new product lines and services, build and adopt new technologies, competencies, and capabilities, and fashion and implement strategic change initiatives. Their success may be evaluated by assessing the level of revitalization reflected by the interactive balancing of the firm’s internal needs, capabilities, and strengths, and the related external opportunities and risks.
Five Elements of Engagement Evaluations
Consulting engagement evaluations involve five separate and interactive elements (Table 1): the level of thinking or logic type adopted for the evaluation, a conceptual map of the relevant and interactive concepts and knowledge to be applied in the evaluation, the logic model, implementation and outcome assessments, and their impacts. These five elements and their interactive sequential dynamics are discussed below.
Table 1. Elements of Evidence-Based Consulting Evaluations
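The sequential dependence among the five elements can be sketched in code. The following Python fragment is purely illustrative—the element names follow Table 1, but the list ordering as a checklist and the `next_element` helper are our own construction:

```python
# The five elements of an evidence-based consulting evaluation, in the
# order in which each one builds on the previous (illustrative sketch).
ELEMENTS = [
    "logic type",           # level of thinking adopted for the evaluation
    "conceptual map",       # relevant concepts and knowledge to apply
    "logic model",          # plan and sequence of evaluation activities
    "implementation",       # data collection guided by the logic model
    "outcomes assessment",  # efficacy, impacts, benefits, and costs
]

def next_element(completed):
    """Return the next element to address, preserving the sequence."""
    for element in ELEMENTS:
        if element not in completed:
            return element
    return None  # all five elements are complete
```

The point of the sketch is the ordering constraint: an evaluation that jumps to outcomes assessment without first settling the logic type, conceptual map, and logic model skips the elements that give the assessment its validity.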
1. Logic Type
The first element addresses the level of thinking or logic type used to clarify a consulting engagement for the purpose of analyzing and evaluating it. The logic type, whether implicit or explicit, is a precursor to the evaluation effort. It reflects the level and nature of thinking and, in turn, determines the concepts and measures that will be used in the evaluation. The theory of logic types advanced by Bateson is relevant to the understanding, application, and level of thinking adopted in a given situation. An effective logic type elevates thinking to an appropriately high level, enabling participants to understand and grasp seminal and pivotal issues that extend beyond the point where problems present themselves.
When an evidence-based evaluation utilizes logic types that are at the same level where problems arise, the evaluation efforts will become short-sighted, suboptimal, and risky, leading to faulty results. It is a human tendency to reduce various situations to simpler forms and, ultimately, to escape uncertainty and complexity. This tendency becomes stronger when the consultant and clients are faced with added responsibility for high-stakes situations, as well as uncertain and complex consulting engagements, decisions, and outcomes.
The common phrase “The medicine cured the disease but killed the patient” illustrates an evidence-based evaluation developed at the same faulty level of logic as the problem itself. The disease was targeted and cured successfully, but the intervention mortally damaged the patient. The effort focused on the disease level and failed to consider a higher level of reality and logic for total patient care. To effectively evaluate the outcome of a prescribed medicine, the level of thought should include not only the disease but also higher levels of logic that assess side effects and the patient’s overall well-being and health over both the short and long term. Thus, evaluating a solution to an epidemic disease may require a still broader view and a higher level of thought—at the level of public health policy rather than solely at the lower levels of patient treatment or personal action, which are important but not sufficient for resolving the health problem at its core. A robust evaluation effort uses an appropriate logic type that is specific to the client and the related conditions.
2. Conceptual Map
The second element of evidence-based evaluation is the conceptual map. Conceptual maps play a useful role in surfacing the theories, concepts, and research in use and to be used. They outline the relevant concepts specific to a consulting engagement and portray the logic type and level of thinking as the cognitive driving forces of the evaluation. The conceptual map integrates seminal concepts to guide the evaluation and, subsequently, the design of the evaluation logic model. For example, to treat many forms of cancer, the choices are to radiate, cut, or chemically treat the patient. Both diagnosis and treatment draw upon a variety of concepts and research approaches; when these are chosen carefully, they combine into successful treatment, whereas inappropriate concepts lead to failure.
3. Logic Model
The third element of evaluation is the logic model. The logic type delineates the level of thinking employed for the consulting evaluation, while the conceptual map outlines the key concepts that provide the conceptual framework for the evaluation effort. The logic model is derived from both. It focuses on planning, designing, and sequencing the implementation of evaluation activities alongside consulting engagement efforts. The logic model embodies the methods, techniques, and processes of evaluation and measurement, and it requires appropriate evaluation methodologies, processes, and measures. It also delineates the need for and use of pre- and post-assessments of conditions related to the consulting engagement, objective and subjective measures, surveys of stakeholders’ sentiments and satisfaction, and financial measures of key success factors. Finally, the logic model outlines a disciplined approach to data collection, analysis, and diagnosis.
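As a concrete illustration of the pre- and post-assessments a logic model might call for, the sketch below compares measures taken before and after an engagement. The measure names and values are invented (loosely echoing the health care example’s three categories); only the arithmetic of the comparison is the point:

```python
# Hypothetical pre- and post-engagement measures (all values invented).
pre = {"care_quality": 62.0, "cost_per_case": 180.0, "staff_satisfaction": 3.1}
post = {"care_quality": 71.0, "cost_per_case": 165.0, "staff_satisfaction": 3.6}

def pre_post_change(pre, post):
    """Percent change on each measure between pre- and post-assessment."""
    return {k: round(100.0 * (post[k] - pre[k]) / pre[k], 1) for k in pre}
```

Even this minimal comparison presumes the discipline the logic model imposes: the same measures, collected the same way, at both points in time.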
4. Implementation
Implementation is the fourth element of evidence-based evaluation. It follows the detailed plans and action items of the logic model. It may use responsibility charts and performance metrics and measures, and it collects data and evidence on conditions and issues before, during, and after the consulting engagement, according to the logic model, conceptual map, and logic type incorporated in the evaluation effort. Implementation embodies the assessment of phases, which are the milestones of consulting engagements.
5. Outcomes Assessment
The fifth and final element of an evidence-based evaluation is outcomes assessment of the overall consulting engagement’s efficacy, impacts and cross-impacts, intended and unintended consequences, and benefits and costs. It is also helpful to include a post-mortem evaluation to capture the lessons learned and knowledge gained from the consulting engagement. The outcomes assessment builds on and incorporates all elements of the evidence-based evaluation. It considers the drivers of change, whether external, internal, or strategic, and provides valuable knowledge about the consulting engagement’s efficacy, pre- and post-consultation conditions, and possible causalities and linkages to events, whether intended or not. This final element of evaluation provides a great opportunity to generate new knowledge, learning, education, and client and consultant training.
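Where benefits and costs can be quantified, an outcomes assessment may summarize them with a simple ratio. The sketch below uses invented figures purely for illustration; real engagements also carry impacts and unintended consequences that resist this kind of quantification:

```python
def benefit_cost_ratio(benefits, costs):
    """Total quantified benefits divided by total engagement costs."""
    total_costs = sum(costs)
    if total_costs <= 0:
        raise ValueError("total costs must be positive")
    return sum(benefits) / total_costs

# Invented figures: productivity gains and cost savings versus
# consulting fees and internal staff time.
ratio = benefit_cost_ratio(benefits=[120_000, 45_000], costs=[90_000, 20_000])
```

A ratio above 1.0 suggests the quantified benefits exceeded the quantified costs, but the number is only as trustworthy as the pre- and post-assessment data behind it.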
Support for Consulting Engagement Evaluation
Evaluations of consulting engagements require support, openness to inquiry, and a willingness to learn on the part of sponsors and stakeholders. They require the allocation of appropriate and sufficient resources for the related activities. Knowledge and competencies in evaluation research and their applications are critical to conducting a robust, valid, and reliable evaluation program. It may be necessary to educate and train both the clients and consultants in the use of the five elements of evidence-based evaluation: logic types, a conceptual map, a logic model, implementation, and outcomes assessment. Individual, team, organizational, and institutional collaboration would offer a broader set of knowledge and views for building an effective evidence-based evaluation effort, and enable timely feedback, feedforward, and continuous improvements. These potential benefits provide a compelling argument to support and advance consulting evaluation research, knowledge, and practice (see Appendix A: “Guidelines for Evidence-Based Evaluation of Consulting”).
Guidelines for Evidence-Based Consulting Evaluations
Based on the above, we propose the following preliminary set of procedures to facilitate the design and development of evidence-based evaluations of consulting engagements. The evaluation design would include the participation of clients, consultants, and key stakeholders.
- Involve the key stakeholders in a dialogue regarding the need, scope, and level of the consulting engagement and its desired process and outcomes.
- Educate, learn, and collaboratively designate the key elements of evidence-based evaluations of consulting engagements and apply them throughout the consulting engagement.
- Recognize the appropriate level of evaluation thinking and the logic type that fit a specific consulting engagement.
- Develop the conceptual model and integrate the concepts to be used in the evaluation effort, based on the selected logic type.
- Design the logic model for the purpose of applying the logic type and conceptual map to a given consulting engagement. Build outcomes and performance methodologies, processes, and measurement metrics for implementation. Construct a decision support system to promote and support evidence-based practices to be implemented effectively.
- Implement the logic model to collect the relevant situation-specific, valid, and reliable evaluation data accurately and in a timely manner throughout the consulting engagement phases and processes, end to end.
- Analyze the data and evidence, assessing outcomes, their impacts, anticipated and unanticipated consequences, and possible drivers of change in the consulting engagement.
- Hold post-mortem sessions to examine, surface, diagnose, and discuss the cause-and-effect links and assumptions regarding the consulting engagement process, outcomes, and intended and unintended consequences and impacts.
- Plan, organize, and allocate resources for total evidence-based evaluation of the consulting engagement effort.
- Manage timely and targeted information-sharing processes among stakeholders, clients, and consultants to avoid redundancies, overuse, and misuse of information and resources.
- Build a repository of lessons learned to be incorporated into future consulting engagements.
- Build evidence-based evaluation processes into the consulting engagement from start to finish.
Summary and Conclusions
Organization practitioners and researchers and the field of management consulting as a whole will benefit from evidence-based evaluations of consulting efforts. Similar to other applied fields, such as medicine, there is a growing need to generate valid, reliable, and timely data to assess and objectively evaluate the processes, outcomes, and efficacy of consulting engagements. Scholars and practitioners have an important role in educating managers, consultants, and organization actors, elevating their awareness of the need for, and benefits of, rigorous evidence-based evaluations. The five elements of evidence-based evaluation for consulting engagements are intended to provide a framework to advance consulting knowledge and practices. In addition, there is great opportunity to generate and use the reliable, valid knowledge derived from consulting engagement evaluations to advance organizational and consulting research and practice.
 Bledsoe, K. L., and Graham, J. A. (2005). “The use of multiple evaluation approaches in program evaluation,” American Journal of Evaluation, 26, no. 3, pp. 302–319.
 McClintock, C. (2003). “Commentary: The evaluator as scholar/practitioner/change agent,” American Journal of Evaluation, 24, 91.
 Torres, R. T., and Preskill, H. (2001). “Evaluation and organizational learning: Past, present, and future,” American Journal of Evaluation, 22, 387.
 Davidson, P., Motamedi, K., and Raia, T. (2009). “Using evaluation research to improve consulting practice,” in A. F. Buono (Ed.), Emerging Trends and Issues in Management Consulting: Consulting as a Janus-Faced Reality. Charlotte, NC: Information Age Publishing, pp. 61–74.
 Nicholas, J. N. (1979). “Evaluation research in organizational change interventions: Considerations and some suggestions,” Journal of Applied Behavioral Science, 15, 23.
 Chen, W. W., Cato, B. M., and Rainford, N. (1998–99). “Using a logic model to plan and evaluate a community intervention program: A case study,” International Quarterly of Community Health Education, 18, no. 4, pp. 449–458.
 McLaughlin, J. A., and Jordan, G. B. (1999). “Logic models: A tool for telling your program’s performance story,” Evaluation and Program Planning, 22, pp. 65–72.
 Dwyer, J. (1997). “Using a program logic model that focuses on performance measurement to develop a program,” Canadian Journal of Public Health, 88 no. 6, pp. 421–425.
 Renger, R., Carver, J., Custer, A. and Grogan, K. (2002). “How to engage multiple stakeholders in developing a meaningful and manageable evaluation plan: A case study.” Manuscript submitted for publication.
 Slywotzky A. J. (1995). Value Migration. Boston, MA: Harvard Business School Press.
 Slywotzky A. J., and Morrison, D. (1997). Profit Zone. New York: Crown Books.
 Bateson, G. (1972). Steps to an Ecology of Mind: Collected Essays in Anthropology, Psychiatry, Evolution, and Epistemology. New York: Chandler Publishing Co.
 Rosas, S. (2005). “Concept mapping as a technique for program theory development: An illustration using family support programs,” American Journal of Evaluation 26, 389–401.
 Stevahn, L. (2005). “Establishing essential competencies for program evaluators,” American Journal of Evaluation, 26, 1, pp. 43–59.