Professor; Minnesota Evaluation Studies Institute (MESI) Director
Ph.D., Cornell University, curriculum and instruction, 1979
M.S., Cornell University, curriculum and instruction, 1978
A.B., Cornell University, English, 1971
Current Research Interests
The many forms of collaborative and participatory evaluation share a commitment to engaging staff and community members in evaluation activities, and research has demonstrated the positive effects of such involvement, both on people's learning from the process and on the eventual use of results. But participatory evaluation faces a persistent challenge in contexts common today that require "rigorous" designs (typically quantitative research studies) to "prove" that a program's actions are causally linked to its outcomes. How can evaluators in this world of direct accountability adapt participatory techniques and document their inherent strengths? Several possibilities emerge from practice, including shoestring interactive evaluation practice (IEP), evaluation capacity building (ECB), and the strategic instruction of policy makers.
Profile
Given that my mother was a third grade teacher and my father a school administrator, I've long felt at home in schools. As an adult, I became a practitioner in my own right as a seventh and ninth grade English teacher in upstate New York. (I like to joke that I'm a junior high school teacher gone bad.) After earning my graduate degrees in curriculum and instruction at Cornell, I moved to New Orleans, where I spent a decade running the secondary teacher education program at Tulane University and teaching courses related to middle and high school certification: the social foundations of education, methods, and student teaching. Beyond teacher education, my research centered in part on the functioning of the research and evaluation unit in the Orleans Parish Schools.
In 1989 I moved upriver to the University of Minnesota as the founding director of the Center for Applied Research and Educational Improvement (CAREI), a collaborative research organization designed to link university research with school-based practice. That role had me working closely with school superintendents. I left CAREI in 1993 to help develop the evaluation studies program in the Department of Organizational Leadership, Policy, and Development; the program now includes a master's degree and a Ph.D. in evaluation studies, a post-master's evaluation certificate, and a Graduate School minor in program evaluation. I also helped coordinate a professional practice site at Patrick Henry High School in Minneapolis for a number of years.
From 1999 to 2001, I took a leave from my professorial role to serve as an internal evaluator/coordinator of research and evaluation for Anoka-Hennepin ISD #11, now the state's largest district. Anoka-Hennepin is Garrison Keillor's alma mater, and its children and professional staff are truly above average. I was quickly reminded that it is far easier to talk about educational change than to make it happen. While at Anoka, I had the opportunity to work on a number of participatory evaluations, including a special education project with a 50-member study committee, and to collaborate with central office administrators to build an evaluation infrastructure. The passage of No Child Left Behind demanded a refocusing of district resources to expand standardized testing, making it difficult to sustain program evaluation. As luck would have it, though, I have continued to work with ISD #11, most recently on a three-year evaluation of its Elementary Curriculum Specialization Project (2006-2009) with my colleague Jennifer York-Barr, and I am now helping once again to evaluate the district's special education programs and to facilitate evaluation capacity building.
As an evaluator who spends a lot of time teaching, I'm constantly bridging the research and practitioner worlds. For thirty years I have studied educational practice, consistently focusing on evaluation use and the mechanisms of organizational change. Increasingly, my work concerns the role that the systematic use of data by practitioners plays in effecting and documenting change, both in schools and in other organizations. Since moving to Minnesota, my primary research emphasis has remained in program evaluation, with special interest in the areas of participatory evaluation, evaluation capacity building, and evaluator competencies.
With my grounding in the world of schools and social service organizations, my research has addressed two broad topics: (1) evaluation practice in these settings, especially during change efforts, and (2) the role and function of program evaluation, including the use of both the evaluation process and its results. The ultimate goal of my work as it has evolved is to determine how to foster and support evaluation processes (by whatever name) in educational and social service organizations over time. The terms I use to describe what I study have evolved as well: from action research and process evaluation, to participatory or collaborative evaluation (where evaluators work with program staff and participants), and finally to evaluation capacity building (purposeful efforts to build evaluation infrastructure and skills into an organization, also known as organizational learning). Since 1998, when I introduced the phrase in a speech, I have often referred to my focus as "free range evaluation": a collaborative evaluation process that lives freely in the world and that, when it survives (and it often does not), is more viable because it lives in a natural setting and reproduces itself in its organizational context. Free range evaluation is longitudinal, and it focuses on building the capacity of individuals and organizations to sustain evaluation activities. I have been fortunate to give workshops and presentations on these ideas around the world, including in Sweden, England, Israel, Japan, Australia, and New Zealand.
This summer my collaborator Laurie Stevahn of Seattle University and I completed a small book that is part of a kit on needs assessment. Ours is the final book in the series and discusses what people can do to use needs assessment data to implement change in their organizations. With that book in press, Laurie and I have returned to our magnum opus, a book on what we call interactive evaluation practice, or the "interpersonal factor." Over ten years in the making, the book will apply theory-based principles from social psychology and evaluation research to program evaluation processes and record what we've learned over more than a quarter century of evaluation experience.
On a personal note, I am a proud but aging tent camper who, with my husband, purchased a pop-up camper in the year 2000 with the express goal of camping at all 63 Minnesota state parks and as many national parks as possible. I love children and cats and have two of each (Ben, age 26, and Hannah, age 25; Marvel Rodham Dawn, age 17, and Gus, age 4).
Selected Publications
King, J. A., & Stevahn, L. (in progress; expected 2011). Interactive evaluation practice: Managing the interpersonal dynamics of program evaluation. Newbury Park, CA: Sage Publications.
Stevahn, L., & King, J. A. (In press). Needs assessment phase III: Taking action for change (Book 5). Newbury Park, CA: Sage Publications.
Johnson, K., Greenseid, L. O., Toal, S. A., King, J. A., Lawrenz, F., & Volkov, B. (2009). Research on evaluation use: A review of the empirical literature from 1986 to 2005. American Journal of Evaluation, 30(3), 377-410.
King, J. A., & Ehlert, J. (2008). What we learned from three evaluations that involved stakeholders. Studies in Educational Evaluation, 34(4), 194-200.
Toal, S. A., King, J. A., Johnson, K., & Lawrenz, F. (2008). The unique character of involvement in multi-site evaluation settings. Evaluation and Program Planning, 32(2), 91-98.
King, J. A. (2008). Bringing evaluative learning to life. American Journal of Evaluation, 29(2), 151-155.
King, J. A. (2007). Developing evaluation capacity through process use. New Directions for Evaluation, 116, 45-59.
Volkov, B., & King, J. A. (2007). A checklist for building organizational evaluation capacity. Evaluation Checklists website, Western Michigan University.
Stevahn, L., King, J. A., Ghere, G., & Minnema, J. (2005). Establishing essential competencies for program evaluators. American Journal of Evaluation, 26(1), 43-59.
For more information about Jean King, see her full curriculum vitae [PDF].