
Position Papers

The position papers in German can be found here.

Last modified: 22 December 2020

Statement of the Board on the Role of Evaluation in the Context of Current Pandemic-related Challenges

November 2020

In recent months, societies, politicians, administrators and institutions have been confronted by major challenges with respect to identifying appropriate strategies and measures to deal with the risks and consequences of the ongoing coronavirus pandemic.

As the leading professional association for evaluation in Germany and Austria, DeGEval (Gesellschaft für Evaluation e.V.) expressly welcomes the role played by scientifically supported evidence as a basis for discussion and decision-making in relation to these measures. We would like to encourage all parties involved in public discourse to make decisions with reference to the best available knowledge wherever possible.

The same applies, in our view, to the discussion and assessment of the rationale, practical feasibility, effectiveness and efficiency of measures taken. These measures, too, should be designed wherever possible on the basis of evidence, which requires systematic science-based investigation and an assessment based on transparent criteria with due consideration of intended and unintended consequences.

We therefore support all efforts already made in this regard for the evaluation of measures and encourage all responsible decision-makers to systematically embed evaluation and link it with newly approved measures. Particularly in view of the fact that many decisions still have to be made under time pressure and on an uncertain decision-making basis, it is essential for transparent democratic discourse, the establishment of legitimacy and the ability to learn from experience to provide information about the implementation and impacts of measures which is as sound as possible.

Evaluation is a science-based process for the systematic and transparent assessment of measures, strategies and other objects. The Standards for Evaluation upheld by DeGEval define good evaluations as fair, feasible, accurate and above all useful for the purposes of better-informed decision-making at all levels. Used correctly, evaluations are an important tool for supporting transparent and democratic negotiation processes and strengthening an open society.

DeGEval therefore calls for public action to be accompanied systematically and in an appropriate manner by transparent evaluation processes aligned with the standards of good evaluation, and will continue to do so even after the current crisis is over.

DeGEval - Gesellschaft für Evaluation e.V.
Wilhelm-Theodor-Römheld-Straße 20, D-55130 Mainz
Tel. +49 (0) 6131 / 2173887
info@degeval.org
www.degeval.org


Position paper of the DeGEval Board as PDF

Last modified: 22 December 2020

The Future of Evaluation - Position paper 10 of the DeGEval

Position Paper of the Board of DeGEval - Evaluation Society

November 2017

 

The theme of the 20th annual conference of DeGEval - Gesellschaft für Evaluation e.V. was ‘The Future of Evaluation – Evaluation for the Future’. DeGEval took advantage of this anniversary conference to reflect on its 20-year history, during which the society has contributed to the recent success story of evaluation in the German-speaking countries. Looking ahead, it was also apparent that although good evaluation is now used more than ever before in a wide range of societal areas, the continuation of this success story cannot be taken for granted.

In the points that follow, the DeGEval board has summarised the discussions that took place and the conclusions reached.


Evaluation and civil society

  • In recent times, the use of fake news has reached unprecedented levels as an instrument of public debate. Sections of the public have begun not only to sceptically challenge scientific evidence to a substantial degree, but to fundamentally reject it and replace it with self-confirming ‘realities’ of their own. As a professional evaluation society, we are very concerned about this trend.
  • However, we believe that good evaluation can be an effective tool for achieving greater transparency with regard to state and non-state action. Through the systematic examination of policies, it is able to provide a substantiated, evidence-based assessment of their planning, implementation and effect. We therefore see evaluation as an important instrument with which to support debate and decision-making in civil society.
  • With this in mind, we would call on everyone who is concerned with strengthening civil society to be more active in calling for and supporting the evaluation of public action. This includes:
      • Pressing politicians, administrators, public institutions, foundations and other state and non-state actors to apply a verifiable evidence basis in decision-making processes
      • Calling for an examination of the effects of policies and how these policies take effect
      • Calling for available evidence always to be taken into consideration in political decision-making processes
     

Politics and administration

  • In recent years there has been a steady increase in the use of the term ‘evaluation’ in the political sphere, for example in parliamentary work. However, this increase remains largely nominal as there has been little rise in demand for evaluation, for example from parliaments. In politics, there is still therefore an implementation deficit.
  • The same applies to administration. Although the Federal Budget Code (Bundeshaushaltsordnung) stipulates that checks should be implemented to monitor achievement of aims, effect and cost-effectiveness, this is by no means carried out continuously or across the board. There is a lack of systematic preparation, of definition of measurable goals, and of resources and competencies to implement examinations of effectiveness and efficiency. The Bundesrechnungshof (Federal Court of Auditors) regularly points to this shortcoming in its annual reports.
  • Politicians are therefore also called upon to make greater use of evaluation in the political process. In particular, evaluation can be used as part of parliamentary checks to review the implementation and effects of policies. Due to its explicitly evaluative perspective, it is superior to purely descriptive control instruments such as auditing and monitoring in this respect.
     

Evaluation in organizations and institutions

  • Many public, commercial and non-commercial organizations and institutions use evaluation. These include, for example, administrative bodies, educational and healthcare institutions, foundations and companies which evaluate their activities and initiatives internally or commission evaluations from external service providers.
  • It is likely that we will see an increase in the importance of internal evaluations in which evaluation tasks are performed by sub-units of the organization. It is important that these too are subject to professional evaluation standards, as formulated in DeGEval’s Standards for Evaluation.
  • In various sectors, for example higher education, the question is regularly raised as to whether ongoing evaluation systems in particular are appropriate and beneficial. We consider this question to be a legitimate one, as the Standards for Evaluation define utility as the first criterion of good evaluation.
  • In organizations and institutions, the utility of evaluations can best be ensured when evaluation is firmly embedded in organizational structures and processes. The best way to achieve this is an evaluation policy, which clarifies for a given area of application what is evaluated, when, how often, by whom and for what purpose. It also specifies in advance where the results are to be used and who is responsible for their use, ensures the conscious use of resources, and prevents purely ritualized forms of evaluation.
  • We encourage all organizations and institutions which aim to use evaluation beneficially to develop an evaluation policy of this nature.
     

Evaluation and academia

  • The fact that evaluation is increasingly becoming an academic field of research may be regarded as a sign of success. This is reflected, for example, in the number of professorships for which evaluation forms part of the title, often in addition to research methods or subject-specific areas.
  • However, as demonstrated by DeGEval’s Standards for Evaluation, among other things, evaluation itself is transdisciplinary. This transdisciplinary nature cannot be reduced to methodological aspects alone, because good evaluation consists of more than merely the application of social science methodologies.
  • The further development of evaluation therefore requires the establishment of dedicated professorships for evaluation at universities in Germany and Austria which take this transdisciplinary aspect into account. This is also an important prerequisite to allow the many early career researchers who are qualifying in evaluation to pursue this area of research as a specific career path.
  • Like other areas of activity, evaluation depends upon an empirical knowledge basis for its professional practice. So far, however, the corresponding research has mostly taken place in the context of subject-specific investigations (e.g. the use of evaluation results in schools), which results in a fragmented knowledge basis. What is therefore needed is more explicit research into evaluation which adopts a transdisciplinary approach and seeks to promote a dialogue on evaluation research in various areas of action.
     

Professionalization of evaluation

  • Evaluation is an unprotected term. In practice, a range of very different activities are referred to as evaluation. Because evaluation is still a relatively young field and subject to dynamic changes, efforts to restrict access to evaluation as a professional activity seem to us to be premature.
  • However, open access should not be misunderstood as a licence to interpret the term freely. The most important reference point for evaluation is professional evaluation standards. The DeGEval standards require that evaluation should not be limited to simply measuring, but should offer utility, propriety, feasibility and accuracy. We call upon everyone who conducts evaluation and offers it as a service to commit themselves to compliance with the DeGEval Standards for Evaluation.
  • We also call on those who commission evaluations to be aware of their responsibility to enable good evaluation. Evaluation clients can contribute to the quality of evaluation through realistic expectations, by allowing adequate time for the work to be carried out, and by making adequate resources available.
     

Evaluation practice

  • As a young, transdisciplinary profession, evaluation has so far only developed uniformly accepted and unambiguous terminology to a limited extent. The professionalization of an activity requires the development of unambiguous terminology as an important prerequisite for internal and external comprehension. With the newly revised Standards for Evaluation, DeGEval has adopted a glossary of key evaluation terms that is intended to be used as a reference to clarify terminology issues in cases of doubt.
  • As the use of evaluation increases, as can already be observed in some areas, it is important to ensure that no negative saturation occurs. The risk of ‘off-the-peg’ evaluation is that it may become little more than well-developed monitoring and make it difficult to discern unintended or unusual factors. This risk may arise from both routine evaluations by evaluators and detailed, extensive specifications on the part of clients.
     

Discussions at DeGEval’s 20th annual conference also revealed that the tools and methods required for the evaluation of the future are available. Trends can be systematically extrapolated, scenarios can be developed, and discontinuities of trends can be identified and assigned a probability of occurrence. The best way to actively shape the future is to discuss the past and the present situation and to formulate requirements. DeGEval will continue to contribute to this in the years ahead by creating the opportunity for dialogue through the newsletter, magazine, conferences and workshops. Everyone with an interest in evaluation is invited to participate: internal and external evaluators, clients, stakeholders and decision-makers in administration and politics. We would be delighted if this position paper encouraged you to get in touch with us.

 

www.degeval.org/en/about-us/board/


The Future of Evaluation - Position paper of the DeGEval as PDF

Last modified: 22 November 2017

Utilization, Influence and Long-Term Impact – What is the Effect of Evaluation in Different Systems? – Position paper 09 of the DeGEval

Utilization, Influence and Long-Term Impact – What is the Effect of Evaluation in Different Systems?

Position paper of the Board of DeGEval-Evaluation Society

January 2017

The growing importance of evaluation is repeatedly pointed out. Evaluation often seeks to examine impacts, but does it have enough impact of its own? Can evaluation results change society, or sections of society, in the intended way? The question as to the use of evaluation links to the first set of criteria in the recently revised Standards for Evaluation produced by DeGEval – Gesellschaft für Evaluation (see www.degeval.de). However, this standard of utility in evaluation only relates to the conceptual design of the evaluation project, the extent to which all stakeholders were involved and how clearly the purposes of the evaluation were stated. Whether these are then actually implemented relates to the question addressed here as to the use and long-term impact of evaluation.
The long-term effect of evaluation in different social systems is the subject of intense debate, which commenced at DeGEval’s 19th annual conference in Salzburg, Austria between 21 and 23 September 2016. The discussion ranged from issues relating to the requirements and conditions for the profitable utilization of evaluation, i.e. the planned usage or application of evaluation by stakeholders, to the direct and indirect influences and effects of evaluation. The term ‘long-term impact’ implies that the utilization of evaluation and its results produces a lasting positive effect, with equal consideration being given to social, ecological and economic aspects. The annual conference shed light on the utilization, influence, effect and long-term impact of evaluation in relation to overarching issues in the educational, political, cultural, health, economic and administrative systems, and from different perspectives.
Significant differences were identified between sectors depending on the extent to which evaluation is embedded in practice. Let’s take a closer look at certain sectors by way of example:

•    In the education sector, which was the focus of the 2016 conference owing to its host, the School of Education at the University of Salzburg, evaluation and the implementation of evaluation results are firmly embedded in many areas. The first models introduced in schools were self-evaluation models. Participation was mostly voluntary, with the result that conducting evaluation largely built upon the participating teachers’ engagement and desire for self-improvement and the expansion of their own skills, and thus led to more lasting effects. In recent years, evaluation in schools has increasingly been based on the assessment of pupils’ competencies, which has produced some very thought-provoking results (the reaction in Germany to the results of the first PISA assessment was referred to as the ‘PISA shock’) but has not always contributed to changes in the school system. In higher education, evaluation is required by law in many countries. In Austria, for example, universities are required to subject their quality management systems to regular external assessment; this may result in obligations which must then be fulfilled.

•    In industry, evaluative measures – even if they do not always constitute evaluation in the proper sense – are primarily found in quality management. They are normally strongly embedded in the institution. Owing to economic considerations, companies have an interest in translating results directly into optimization measures and thus ensuring effect and long-term impact.

•    In the health sector, defined forms of evaluation are required by law for the approval of drugs and treatments (experimental quantitative studies in the form of randomized controlled trials). Effectiveness in practice is therefore defined. In other areas, such as the evaluation of health prevention measures, processes are less compulsory and methodologies are ‘softer’ (e.g. quasi-experimental). Here, the long-term usefulness of the measures often remains unclear.

What can we learn from this? We believe that utilization, influence and long-term impact can be increased above all through two approaches. Firstly, participatory elements in evaluation augment its long-term utility. Stakeholders feel that they are being taken seriously, develop an interest in the evaluation results, apply the evaluation methodology where they identify a need and are therefore able to collaborate directly in the implementation of the results. Secondly, the long-term usefulness of evaluation is facilitated when evaluation is firmly embedded in practice, for example through legal obligations. However, it is important to stipulate not only that evaluation be conducted, but also that the results be implemented.

These two approaches are certainly opposed, often incompatible, and therefore represent different routes which may be more or less promising in different fields of practice. Both, however, contribute to the stronger embedding of evaluation in society.


Utilization, Influence and Long-Term Impact – What is the Effect of Evaluation in Different Systems? - Position paper of the DeGEval as PDF

Last modified: 11 August 2017

Evaluation and knowledge society - Position paper 08 of the DeGEval

Evaluation and knowledge society

Position paper of the Board of DeGEval-Evaluation Society

The move towards a “knowledge society” attracted broad public and scientific attention as early as the 1970s with the work of Daniel Bell. Knowledge and its role in society have since been analysed in manifold ways in connection with knowledge politics and the knowledge economy. It is no coincidence that the development of the knowledge society has been accompanied by the increasing importance of evaluation: evaluation is, in essence, a process for the generation of knowledge.

 

“Evaluation and knowledge society” was the underlying theme of the annual conference of the DeGEval – Evaluation Society in Speyer in 2015.

  • Evaluation fulfils a need generated by the move towards a knowledge society


Societal governance processes as well as individual actions nowadays increasingly rely on knowledge, a reliance enhanced by new technological developments in data processing and digitalization, as Wolfgang Böttcher stressed in his introductory speech at the conference. This is relevant at the organizational level, but also for other parts of society or even entire societies. Modernization concepts such as “better or smarter regulation” and “open government” are hardly conceivable without specific forms of knowledge generation, processing and exploitation.

  • Evidence from evaluation is generated through a wide range of methods


Evaluations provide evidence in the widest sense, which needs to be framed by theory and methodology. In his conference-opening keynote, Stefan Kuhlmann stressed that one should not speak overhastily of a hierarchy of “good” or “bad” evidence, or of “good” or “bad” knowledge. In calls for an evidence-based approach, priority is usually given to quantitative randomized controlled studies that follow an experimental design. This overlooks the fact that several alternative evaluation designs can also contribute to our knowledge. Indeed, it may be more appropriate to collect different types of evidence on the same subject in order to triangulate the findings.

  • Ensuring the connectivity of knowledge produced by evaluation


An important function of evaluation in the knowledge society is to prepare knowledge in a way that makes connectivity and use possible. The results of evaluation projects should not end up in a drawer, but should be merged into a knowledge base on the effects of measures in different fields. This newly generated knowledge should be compatible with discourses in different fields or systems – for example, by contributing to the dissemination of knowledge generated in the scientific system and its inclusion in politics. Evaluation thus acts as a “translator”: it mediates between scientific knowledge aimed at general contexts on the one hand and the specific information needs of actors in particular settings on the other. Evaluation can help to translate general knowledge and adapt it to specific contexts.

  • Aim: contributing to public debate and strengthening democratization


Knowledge bases fed by evaluations can be used for social decision-making processes in different areas. Contemporary evaluation is developing into knowledge-based, multi-method research on specific questions which places the interests of its clients in focus. Through the knowledge it provides to an ever-growing degree, evaluation can thus support stakeholders in the design of their actions. Wherever the use of public funds is involved, evaluation always concerns society as a whole. The professionalization of evaluators rests on a growing understanding of theory and methods, and can thereby also remain true to the emancipatory promise of the knowledge society.


Evaluation and knowledge society - Position paper of the DeGEval as PDF

Last modified: 11 August 2017

Professionalization in and for Evaluation: Position paper 07 of the DeGEval

Professionalization in and for Evaluation

 

Position Paper of the DeGEval – Gesellschaft für Evaluation (Evaluation Society) Management Board

February 2015

The 16th Annual Conference of the DeGEval – Gesellschaft für Evaluation e.V., held jointly with the Swiss Evaluation Society SEVAL in Zurich in 2014, dealt with the issue of professionalization in and for evaluation: How can professionalism be ensured? What can or should be meant when we talk about professionalism? And what exactly should our evaluation society contribute in order to make evaluation an indispensable instrument in decision-making processes across all societal and political fields?

The choice of this topic was not based solely on the fact that support for the continuous development of evaluation practice and standards is explicitly stipulated in DeGEval’s statutes, which in itself raises the question of what “professionalism” means. It is also a highly topical issue, since intensive discussions on this set of questions are under way in several evaluation societies in Germany and other countries. Last but not least, the success story of evaluation necessitates a debate that can provide better orientation.

However, the success story of evaluation is in fact quite ambivalent. On the one hand, hardly any political decision is made today without at least some reference to evaluation. This applies to Europe in particular, and in the German-speaking countries “evaluation” has for quite some time no longer been regarded as a foreign word that evokes resistance in those affected. On the other hand, a number of activities such as simple feedback exercises, audits or psychometric tests are wrongly called evaluations, since they do not meet the standards of professional program or organization evaluation, even though there may be some overlap. Furthermore, it is not at all clear whether someone who calls himself or herself an evaluator is in fact appropriately qualified. A reliable outline of the profession has so far been missing.

The topic of the “professionalization of evaluation” and the challenges attached to it may be discussed in the form of thesis and antithesis. In this way, guidelines can be drawn up that mark the lines within which discussion can take place. They can be phrased as questions:

  • Is a thorough knowledge of the respective political field sufficient to evaluate within it? Or does evaluation competence alone suffice to evaluate in any given field?
  • Are the quality and quantity of the admittedly very heterogeneous offerings for the development of evaluation competence sufficient? Or must the range of courses be systematically extended and accredited?
  • Should training and professional development courses be situated within the respective policy fields? Or does it make more sense to provide generic offerings?
  • Do the available evaluation standards suffice for quality development? Or are further measures required that go beyond them, such as controls and examinations to safeguard the standards and, if necessary, sanction noncompliance?
  • Can the quality of evaluation only be ensured by more stringent and systematic regulation, such as the certification of evaluators? Or do self-definition and, possibly, membership of DeGEval in Germany and Austria suffice to act as an evaluator?
  • Is the quality of an evaluation defined by the parties involved? Or is an external assessment and complaints body required?

This addresses a wide range of possible issues and challenges. At one end of the spectrum stands the question of what quality evaluations must have, and which competencies evaluators must possess, before we can speak of “good” evaluation. At the other end lies the question of whether, and to what extent, the accreditation and certification of evaluators should define the field of professional evaluation.

Following the discussions at our annual conference and numerous internal debates, involving several experts, on the two poles of the problem – “good evaluation” and “certification and accreditation” – we feel that we are closer to the former. Wherever our position tends to open up to the latter, in particular with regard to certification, it is in favor of voluntary schemes.

Given the political fields in which evaluators move and the importance of evaluation results as evidence in highly relevant societal decision-making processes, this is indeed quite a moderate position.

Our general objections and concerns regarding strong regulation stem from the fact that, despite its development, evaluation has not yet advanced far enough along the “classical” path towards a profession. In Austria and Germany, we have only just taken the first steps on this path: evaluation is carried out on a commercial basis, it is taught and researched, and its practitioners have organized themselves. But evaluation is still far from fulfilling an institutionalized function within the political sector, and a professional profile has not yet been reliably defined either. Owing to the multidisciplinarity of evaluation, which brings together the methodological knowledge of empirical social research and assessment with the special expertise of the respective subject area, the development of a professional profile that embraces this duality is by no means trivial. We are still far away from governmental recognition and approval as a profession.

A number of questions remain unanswered along this path: Who should determine, in a mandatory and legally binding way, what evaluation – that is, good evaluation – really is? Who should be certified by whom in order subsequently to certify others? In a legal dispute between a commissioning and an evaluating body, who could pass a judgment on the quality of an evaluation – possibly with considerable consequences – that is both professional and binding for all participants? And what would be the consequences of mandatory certification, with the potential exclusion of several of today’s evaluating bodies?

Last but not least, the number of individuals and institutions organized in DeGEval currently limits both the possibilities and the need for stronger accreditation and certification. We have not yet exhausted our membership potential in Germany and Austria, and all of the work in the research groups and on the board is done on an honorary basis.

We are, however, following with interest voluntary procedures such as the mutual assessment initiative (Voluntary Evaluator Peer Review) or the standardized procedure in Canada for becoming a “credentialed evaluator”. Our impression is that these procedures are very intricate and costly, and by no means undisputed as to their legitimacy.

Against the background of developments in Germany and Austria so far, the priority is first and foremost to develop the qualification landscape further. Some initiatives are already under way. DeGEval provides a platform that collects offerings for training and professional development in the evaluation sector and describes them systematically on the basis of standard criteria. One project group is currently working on a teaching course in evaluation that could be integrated into the methodology seminars of social science degree programs.

We will continue to work on strengthening DeGEval’s Standards for Evaluation. Each member can contribute to this by initiating project-related discussions of these standards in the context of evaluations. We also believe that the closer inclusion of those bodies that actually commission evaluations is worthwhile: cooperation between evaluating and commissioning institutions and organizations is one of the conditions for high-quality evaluation. As an evaluation society, we will certainly step up our efforts to engage with those who commission evaluations. The description of evaluators’ competencies might also lead to a revision process and thus to a firm establishment of such competencies by means of precisely described curricular elements.

One further stimulus we owe to the European Evaluation Society (EES), which discusses selected “cases” on a special platform. The idea is that it can be useful to learn more about actual evaluations: wherever possible, real evaluation studies should be made visible – learning from the case. This is closely related to the challenge of intensifying research on evaluation.

DeGEval’s strength also depends on its size. We would like to remain open to people who are “somehow” involved with evaluation. All our activities, such as our publications, the conferences organized by our research groups and the annual convention, aim to professionalize our members in matters of evaluation. However, interested candidates will not have to pass an entry test in order to become members.

There are quite a few interesting prospects for DeGEval as an organization, as well as for its members, as far as the advancing professionalization of evaluation is concerned: winning new, interested and qualified members; spreading the standards of good evaluation even further; cooperating with professional associations whose members are also active in the evaluation sector; strengthening external communication with those who commission evaluations; and cooperating with evaluation societies in other countries. Ultimately, it is exchange and reflection that, in our opinion, lead the way towards a stronger professionalization of evaluation. These first steps might even be stepping stones towards a certification process.


Professionalization in and for Evaluation: Position paper 07 of the DeGEval as PDF.

 

Last modified: 11 August 2017

Complexity and Evaluation - Position paper 06 of the DeGEval

Complexity and Evaluation

Position Paper of the DeGEval Gesellschaft für Evaluation (Evaluation Society) Management Board

January 2014


Complexity is a characteristic feature of human action and behaviour, and a particularly formative issue in modern societies: in most cases, the major and minor challenges and problems of human cooperation can only be resolved through the interplay of highly differentiated and specialised systems or persons. These ever-changing and evolving interaction relationships partly account for the efficiency of societies, but they are also constitutive of their complexity. Overall, complexity can increase problem-solving capacities and open up opportunities for individual development – albeit at the cost of uncertainty and a permanent need for coordination and alignment. Evaluation, which inter alia attempts to trace causal relationships – i.e. activities and their desired or undesired effects – therefore faces very special challenges against this background. For this reason, the DeGEval – Gesellschaft für Evaluation addressed the issue of “Complexity and Evaluation” at its Annual Conference 2013.

Evaluation supports comprehension

Action and behaviour in social contexts under conditions of complexity are characterised by permanent and manifold correlations and interdependencies. Globalisation, acceleration and networking, as well as technological innovations, are drivers of these developments. This limits the possibility of detecting clear causalities. In order to collect data on effect relationships, tailor-made designs, appropriate methods, high-quality data processing and analysis, and a competent execution of the evaluation process are required, depending on the respective subject and context of an evaluation. Non-linearity and emergence, but also differing interpretations and assessments, make it harder to ascertain effect relationships. Evaluation can help to comprehend and explain how interactions take place, which changes occur, how they proceed and which results they generate. Evaluation supports competent and evidence-based action.

Evaluation means multiplicity of methods

Bearing these circumstances in mind, the methods that evaluation applies to approach its subject matter – e.g. a labour market programme – cannot be of a purely quantitative kind. In particular, they cannot merely aim at gathering “hard” causality data, as in experimental control-group designs. Control-group approaches and quantitative methods in the broader sense are indeed indispensable for evaluation. However, evaluation will only be able to unfold its full potential if it applies and combines a more or less wide, context-specific range of methods according to the concrete subject matter and the precise problem in question. Qualitative methods, particularly those able to pick up the diverging viewpoints and assessments of participating stakeholders and feed them into the evaluation, are absolutely crucial. Evaluation uses and combines a wide range of methods.

Evaluation is communication

The evaluation of complex programmes poses specific challenges. On the one hand, sufficient contextual knowledge and information, especially about the respective cultures, is indispensable in order to capture complexity adequately. On the other hand, the evaluation process itself gains in significance. The incorporation of all relevant stakeholders, their perceptions and assessments into the process is of vital importance for a truly successful evaluation. Handling complexity involves participation, negotiation, visualisation, comprehensibility and co-management, and it outlines the challenges facing evaluating bodies in practice. The DeGEval Standards for Evaluation offer helpful advice on the successful execution of the evaluation process.

Evaluation requires adequate competencies and resources

Evaluations under conditions of complexity must be carefully planned and carried out in a way that is appropriate to the subject matter and the problem of the respective evaluation. The evaluating team, as well as those who steer and accompany an evaluation project, must therefore possess comprehensive competencies. These comprise not only an understanding of appropriate approaches to planning and design, the selection and implementation of suitable methods, and the ability to communicate during the evaluation process, but also excellent knowledge of the respective field. It is the combination of this knowledge with all these skills and abilities that allows the full potential of an evaluation project to unfold. A further requirement is the allocation of sufficient resources. The availability of resources does not merely refer to financial aspects; it also includes the availability of qualified personnel on both sides of the project: the evaluating team as well as the team that steers and accompanies the evaluation.

Evaluation creates orientation

Concepts of modern political and administrative steering and of organisational management in general frequently rely on an evidence-based orientation. In these approaches, procedures that gather and feed back information from the environment by means of a limited number of indicators and parameters often play an important role in management and steering processes. In most cases, however, concentrating on a few selected parameters cannot do justice to the complexity of social reality. Those who attempt to manage processes solely or predominantly on the basis of indicators, without continuously examining and adjusting the selected parameters, are ignoring complexity. This carries the risk of mismanagement and wasted resources. Evaluation can counteract such tendencies. It can help to comprehend complexity and possibly reduce it, so that targeted action in social contexts becomes possible despite a residual uncertainty that can never be entirely eliminated. This applies to politics as well as to decisions in individual organisations. Evaluation can increase transparency with regard to changes as well as to the participating stakeholders and structures. Metaphorically speaking, evaluation can help political bodies and organisations navigate their perpetually confusing environment – without, however, ignoring too much of that environment.

Evaluation and complexity are tightly connected. To the extent that evaluation manages to elucidate patterns and regularities in the social environment of organisations and policies, it can contribute significantly to handling complexity. Using evaluation in this sense also means developing an understanding of the context. Evaluation helps stakeholders in the political arena and in organisations to find orientation in their respective environments. To this end, evaluation helps to understand whether, and if so how, targeted interventions are feasible at all in a given setting. It is thus not paramount that evaluation provide specific details or explicit recommendations for decisions. Evaluation’s strength lies in contributing to the “enlightenment” of politicians and those in charge. If it succeeds in this respect, it will have contributed to the handling of complexity and to societal development towards the objectives aspired to.

Evaluation is the systematic analysis and assessment of the merit or worth of a given subject matter. Evaluation objects include, for example, programmes, projects, products, measures, achievements, organisations, policies, technologies and research projects. Results, conclusions and recommendations must be comprehensible and documented in accordance with the DeGEval standards, and they must be based on empirical qualitative and/or quantitative data. Approximately 750 persons and institutions working in the evaluation sector, mainly from Germany and Austria, have joined forces in the DeGEval – Gesellschaft für Evaluation. The objectives of the DeGEval include information and exchange on evaluation issues, the integration of different perspectives on evaluation, and its professionalization. Apart from the work in 14 thematically structured working groups, the annual conferences are an important venue for such exchange. The topic of the Annual Conference 2013 was “Complexity and Evaluation”. The Annual Conference 2014 will be organised jointly with the Swiss evaluation society SEVAL. It takes place from 10 to 12 September in Zurich, on the topic “Professionalization in and for Evaluations”.

 


Complexity and Evaluation - Position paper 06 of DeGEval as PDF

Last modified: 11 August 2017

Evidence and Evaluation - Position paper 05 of the DeGEval

Evidence and Evaluation

Position paper of the DeGEval – Gesellschaft für Evaluation

At its annual convention in 2012, the DeGEval – Gesellschaft für Evaluation discussed the topic “Evidence and Evaluation”. At present, this issue is highly significant in policy and practice settings. Decisions on introducing and implementing political programmes should be based on knowledge. The connection between evidence and evaluation is self-evident, since evaluation is meant to generate essential information for the rational design and steering of programmes and organisations. Those in charge of decision making and of carrying out evaluations expect findings based on scientific research in order to support their own actions. Whereas “evident” frequently means “self-evident” or “obvious” in everyday use, the term “evidence” in an evaluation context means “proof” or “argument”, enabling knowledge-based and well-founded decisions.

Until only a few years ago, talks preceding evaluation projects took place on a relatively simple level and were often limited to questions such as how to explain to the persons being evaluated what an evaluation is, or how to explain to contracting bodies the advantages of a specific evaluation. Nowadays, however, evaluators increasingly encounter a well-developed understanding of and sound knowledge about evaluation. A certain maturity in the handling of evaluations now seems to be widespread.

In the past, evaluations covered a wide range of different issues: they examined the basic requirements for projects or programmes, they assessed their structure and meaningfulness, and there was a strong interest in accompanying measures by means of evaluation and in continuously developing them in cooperation with the persons to be evaluated. This went hand in hand with the application of a large variety of methods.

This wide thematic and methodological range of evaluations has now narrowed in the face of increasing (cost) pressure on the initiators and executors of political programmes. At present, evaluations are more and more expected to supply evidence in the form of sound proof of the effectiveness of interventions. The occurrence of desired effects should be demonstrated as clearly as possible.

Within the methodological debates in the social sciences, the idea that evidence for the effectiveness of programmes can only be produced by applying the methodological “gold standard” currently dominates the discussion. Put simply, this means that effects can only be proven when a systematic comparison between at least two statistically identical groups, one with and one without the “intervention”, yields a significant difference. The intervention must therefore be controlled, i.e. carried out in an exactly prescribed manner, and must thus be repeatable.
However, such a randomised controlled experiment is difficult to realise in social contexts for a number of reasons. In social or political programmes, the variables influencing success can scarcely be isolated. Programmes can usually not be carried out in a strictly mechanical sense, and a great many programmes lack a logical causal model. For ethical and practical reasons, it is hardly possible to establish control groups, and quantitative studies require large samples.

Furthermore, one might ask what substantive statements can be derived from quantitative effect sizes of programmes. Merely looking at effects can easily obscure the view of unintended or even negative effects of programmes. This applies in particular to underfinanced evaluations.

The expectation of evidence of effects is quite understandable; indeed, impact evaluations have always been an essential part of evaluators’ work. However, the dominance of requests for evidence of effects tends to overlook the fact that the quality of a programme can neither be sufficiently assessed nor further developed by measuring effects alone. Such a shift or narrowing of focus increasingly alters the function of an evaluation: it is aimed not so much at optimising as at legitimising a programme or measure.

Let us look at the problem from another perspective. If an evaluation has produced sound evidence, it is certainly disappointing when the decision maker or the body in charge of practical implementation does not act on the evidence provided. Those who evaluate must learn to understand that decisions can well be supported by evidence, but that they are still influenced by other factors as well. Evidence does not determine particular decisions. How one acts always depends on contexts, norms and standards that are beyond the scope of proof. Decisions might be based on information gathered as evidence, but they might still take their orientation from standards and benchmarks other than measured evidence. Action in political, social and educational contexts cannot be reduced to “something that can be measured” in the sense of a normatively neutral currency.

In the exchange between the evaluating body and the awarding body, the plausible request for generating evidence should always be weighed against the difficulties involved in such a requirement. Evidence should also serve the purpose of improving programmes, increasing their benefits and strengthening the people involved – and not just prove measurable effects. This approach requires feedback loops and their analysis in the sense of structured learning processes. It also requires a multi-method procedure, which is not possible without adequate time and effort and the corresponding expenditure.

Ideas of straightforward and unproblematic impact evaluations are unrealistic and reduce the potential of evaluations. Professor Geert Biesta (at present at Luxembourg University), the keynote speaker at the DeGEval conference in 2012, emphasised that the overpowering idea of evidence creates a tendency for decision makers to deem important only what is actually measurable in its effects. Instead, it should be quite the other way round: the main question should be what is important to us with regard to political, societal, educational and ecological issues. The question of whether this important issue is in fact measurable should then be secondary.

 


Evidence and Evaluation - Position paper 05 of DeGEval as PDF

Last modified: 11 August 2017

Participation in evaluation - Position paper 04 of the DeGEval – Gesellschaft für Evaluation (Society for Evaluation)

Participation – nothing extraordinary

Today, participation is a central component of many evaluations, and quite frequently active participation in evaluation processes is important for the evaluation’s success. When those concerned by and participating in an evaluation have an influence on its questions, the criteria used, the interpretation of results, and the development of assessments and recommended follow-up actions, this results in a higher level of acceptance and thus in better utilisation of evaluation results, which significantly improves the quality of evaluations.

What does participation mean?

Participation means that clients and the persons concerned are actively involved in carrying out the evaluation. It may, however, also include all groups with a legitimate interest. The process of a participative evaluation can be decisively steered by the evaluating body or, alternatively, the persons concerned can exert strong influence on the process and procedure. Participation ranges from adequate consideration of the participants’ points of view to cooperation in the interpretation of results. A classic distinction among participative evaluations concerns their claim to bring about change. Participative evaluations can serve to provide better foundations for decisions on the assessment and continuation of projects and programmes. However, participative evaluations may also be guided by the idea of initiating social change by incorporating the persons concerned.

Why participation?

Participation is based on a fundamentally democratic and democracy-enhancing understanding of evaluation. Systematic, value-based assessments can only be made adequately within the respective context of the people participating in the processes. For example, the evaluation of labour market programmes is of little use without considering the perspective of job seekers; the same applies to the evaluation of university teaching without taking the students’ perspectives into consideration.
Furthermore, participation can also be an important foundation of practical evaluation processes. The crucial standards of utility, feasibility, propriety and accuracy can only be achieved through corresponding participatory procedures. Participative procedures make it possible to arrive at professionally grounded and useful evaluation results.

Participation – how?

If we understand participation as a crucial component of evaluation processes, early and comprehensible participation is important. Before an evaluation commences, all persons concerned and all participants must be identified. Only then is it possible to formulate all problems and questions adequately and to avoid ‘blind spots’ and one-sided attributions.
A diplomatic approach and a sure instinct are vital. On the one hand, this concerns cooperation with the clients who commissioned the evaluation; on the other hand, integrative, fair and transparent cooperation with the persons directly or indirectly affected by the evaluation results is equally important. There are numerous procedures to ensure this, and their choice depends on the respective objectives and processes.
It is of crucial importance to incorporate and illustrate the different perspectives of participants and persons concerned with regard to the evaluation. Taking the various points of view into account increases the precision of evaluation results. A multitude of perspectives on objectives as well as evaluation criteria is thoroughly positive for carrying out programmes, since their long-term success does not depend on the opinion of experts alone. For example, when evaluating an urban redevelopment programme, the assessment of the local inhabitants is of vital importance; after all, they decide what ‘catches on’.
Moreover, the early incorporation of different participants ensures mutual learning and a step-by-step development of knowledge and skills. It is not only programmes and their evaluation that benefit from participation. Through processes of change that are jointly initiated, evaluated, reflected upon and further developed, all participants and persons concerned can expand their competencies. In the area of education, a new level of quality for schools and universities can gradually develop, resulting in a new learning culture.

Challenges with participation

As a matter of course, participation raises both fundamental and evaluation-specific problems. The crucial questions are: Who is allowed to participate, why, and what is the scope of their participation? And who makes the final decision in cases of doubt?
For evaluations in particular, there is always the question of whether basic principles, such as the grounding in empirical research methods, are being called into question. To what extent is comprehensive participation compatible with the scientific quality criteria that ensure evaluation is more than mere feedback or an interest-driven expression of opinion? To what extent do practicability and efficiency remain manageable with such a number of participants? There are no simple answers. The methodological questions can only be answered with regard to the objective of the evaluation. A certain degree of participation, however, seems to be vital.

Permanent quality assurance

Decisive for a participative evaluation is a mutually agreed start to the evaluation process. Possibilities and necessities must be determined and negotiated together. Apart from a basic understanding on all sides of the possibilities, limits and costs of participation, the evaluators’ methodological competencies and skills in organising the process are of utmost importance. Their quality is the most important factor. Accordingly, these competencies and skills must be established among all participants in an evaluation, and efforts must be made to ensure them permanently.

In order to support professional and relevant evaluations, the DeGEval – Gesellschaft für Evaluation e.V. (Society for Evaluation) has published standards for evaluation, for further training and qualification with regard to evaluations as well as recommendations for potential clients that also include information on participation.

 


Position paper 04 of the DeGEval - Participation in evaluation as PDF

Last modified: 11 August 2017

Methods of Evaluation - Position paper 03 of the DeGEval – Gesellschaft für Evaluation

Evaluations analyse and assess goal achievement and the effects of measures. Irrespective of its scope, funding and time frame, an evaluation can support decision makers and practitioners in objectively reviewing and, if necessary, systematically improving the accuracy and effectiveness of measures, strategies and other subject matters. One frequently asked question is which methodological requirements must be met under practical conditions to enable an evaluation to perform such tasks reliably, competently and professionally. This position paper summarises central answers to this question. It is primarily aimed at individuals and institutions in charge of commissioning evaluations, accounting for them or utilising their results, as well as at the interested public.

1. What characterizes evaluation methods?

Evaluation methods quite frequently stem from empirical social research. Beyond that, however, evaluation uses additional methodological approaches such as the Delphi method, group discussions and cost-benefit analyses. These are instruments at the interface of empirical data collection (qualitative and/or quantitative) and assessment. Evaluations often employ a mix of methods, combining different methodological procedures (triangulation) in order to take different perspectives appropriately into account.

2. Are there correct and wrong methods?

Unlike fundamental research, evaluation, which is frequently commissioned work, must strike an intelligent balance between high methodological standards on the one hand and a rather pragmatic, economically justifiable and often time-pressured approach on the other. There is therefore no simple “right or wrong” in the selection of methods, but at best a “right or wrong” with regard to appropriateness for the respective evaluation object. There is rarely one single method of choice, though; instead, there are often a number of well-founded options.
Certain evaluation objectives, however, require specific methodological approaches. For impact analyses, (quasi-)experimental designs with comparison groups are of particular importance, where practicable. In the case of self-selective comparison group allocation, possible confounding factors should be compensated for by appropriate statistical methods (matching) or other suitable research designs.

3. Which methodical competences should evaluating bodies have?

Evaluating bodies should command a broad range of methods. Although the entire range is rarely required in any single evaluation, evaluating bodies ought to have sufficient methodological knowledge to assess the potentials as well as the limits of the procedures and approaches employed. They must thus be capable of giving well-founded reasons for their use, which implies knowledge of alternative procedures.

4. What are the consequences for commissioning bodies?

Appropriate methods and a high quality of evaluation provide a sound foundation for the utilisation of evaluation results; however, they also incur costs. The commissioning of evaluations must therefore not be based merely on economic considerations. Issues of content and quality ought to be given priority.

In order to support professional and relevant evaluation procedures the DeGEval – Gesellschaft für Evaluation has published evaluation standards, recommendations for commissioning bodies as well as information on further training and education with regard to evaluation including details on evaluation methods.

 


Position paper 03 of the DeGEval - Methods of Evaluation as PDF

Last modified: 11 August 2017

Evaluation and Society - Policy Document 02 of the DeGEval – Evaluation Society

1. What relevance and function does evaluation have in society?

 

Evaluation contributes substantially to the assessment of projects, programmes and organisations. One of evaluation’s central fields of application is that of public and state programmes initiated through political legislation. Thus, the assessment of ‘public’ action is very much at the forefront of evaluation. Beginning with programmes in the education, social and health sectors, evaluation’s field of application has now grown to encompass all areas of public action. Due to the increasing autonomy and self-monitoring of political subsystems and organisations (e.g. universities), the obligation to legitimise programmes vis-à-vis political decision makers and the public has grown accordingly. It is the task of evaluation in this context to examine the consequences of political and administrative decisions and to provide public and political debates with sound and factual information. In this sense, evaluation takes on an informative dimension, providing a decision basis founded in factual knowledge; moreover, it serves the legitimation and transparency of processes within organisations. It supports processes of quality assurance and development and should encourage learning processes in organisations.

2. How much evaluation does society need?

Compared to its lengthy history in the USA, evaluation in Europe is a relatively new development. In today’s Europe, however, evaluation procedures are regularly used in various fields of socio-political action. Unfortunately, these procedures often comply only partly with professional standards or are frequently underfunded. If evaluations are to provide well-informed consultation for the direction of future policies, it is imperative that they be carried out professionally and adequately funded. It is especially important that the findings of past evaluations are taken into consideration more actively and that the focus of future evaluations shifts more towards the lasting impact of measures and programmes on society. It is not the quantity of evaluations, but their relevance and their contribution to deepening insight that are of central importance.

3. Should evaluation take a socio-political position?

Evaluation is not just mere measuring and quantification, but inherently also an assessment. Therefore, evaluations should take diverse and contrary positions into account and make clear the evaluation criteria used. An evaluative practice conceived in this way can enlighten, support, and create trust. Consequently, evaluation does not take a socio-political position itself, but sheds light on diverse positions.

4. How can evaluation findings make their way into societal practice?

Evaluation contributes to making societal processes more transparent and thus promotes a fruitful dialogue. The participation of stakeholders in the evaluation process and clarity in the presentation of evaluation findings increase the utility of evaluations. This decidedly does not apply only to positive findings: the ‘failure’ of a programme is equally informative and beneficial to further societal development. To what extent evaluation results are actually taken up in political processes depends mostly on the respective client and the political decision makers. It requires that politics and practice be open to learning from evaluation results. At the same time, the public should have a vested interest in demanding evaluation as a tool of political and public accountability.


The DeGEval – Evaluation Society has published standards for evaluation, recommendations for clients, and recommendations for education and training in evaluation in order to support and further professional evaluation in relevant fields of application. For further information please visit: http://www.degeval.de.


Policy Document 02 of the DeGEval – Evaluation Society as pdf

Last modified: 11 August 2017

Governance Needs Evaluation - Policy Document 01 of DeGEval - Gesellschaft fuer Evaluation

Modern societies are shaped by complex governance processes within which the economic, political, educational and social systems, together with further stakeholders, pursue diverse and oftentimes contradictory interests, each with a different logic of action. Moreover, the globalisation of markets places even further demands on governance processes, as the recent financial crisis has demonstrated. This complex relationship between governance and evaluation was the topic of the 11th annual meeting of the DeGEval - Gesellschaft fuer Evaluation in Klagenfurt, Austria.

 

In recent years, evaluations have become more important in all fields of politics and practice. They can provide well-founded information for policy planning, the improvement of operational sequences in organisations and for the further development of professional practices in general. In the opinion of DeGEval, this is both an opportunity and a risk: an opportunity in that evaluations can improve governance processes via well-founded knowledge and transparent evaluation policies; a risk, however, insofar as evaluations can be used solely to legitimise political decisions and the evaluations themselves - due to growing demand and often low budgets - can lack essential quality standards.

From the perspective of DeGEval, the interconnection of evaluations with decision-making processes in political, economic and other fields of practice is an essential requirement to contribute to a reflective attitude of decision-makers, to found governance on a factual basis and to assess and anticipate consequences accurately. Thus, evaluations contribute to politics and practice founded upon empirical facts.

The executive board of DeGEval calls upon decision-makers in politics and other fields of practice to create adequate conditions for strengthening the use of evaluation in rational governance. The following aspects are of particular importance:
1. Supporting governance processes with evaluation requires a binding agreement on the form and transparency of the use of results. Unclear or missing agreements, made without orientation towards potential users, reduce the relevance of evaluation for governance.

2. For the implementation of an evaluation, it is of utmost importance to define a time frame in accordance with the evaluation's purpose, so that methodological and field-specific standards can be observed. In particular, evaluators should be involved early in the explication of objectives and the development of instruments for programmes, strategies and institutions, so that the feasibility of particular evaluation designs can be discussed with the client.

3. Evaluations must be backed by sufficient resources. DeGEval notes with concern that, while the number of evaluations has increased drastically, the financial resources necessary for evaluations of appropriate quality are often not available.

4. Overall, evaluations should be commissioned according to professional standards that, alongside the aspects already mentioned, also define the relationship between evaluators and clients as well as the evaluation schedule.

5. Evaluators, for their part, are required to have adequate methodological and field-specific knowledge, evaluation experience, social skills and know-how regarding evaluation approaches and models. This is essential because evaluation results are useful only if the evaluations are carried out professionally.

6. Evaluations should therefore be conducted in accordance with national and international standards on which all stakeholders agree.

DeGEval - Gesellschaft für Evaluation, Europe's largest association for evaluation, has developed several resources to support stakeholders in evaluations. Alongside the DeGEval Standards for Evaluation, which are widely used in Europe, these include in particular the Recommendations for Clients of Evaluation and the Recommendations on Education and Training in Evaluation.


DeGEval Policy Document 01 "Governance Needs Evaluation" as pdf

Last modified: 11 August 2017