

Using the attached article, please write a short summary of the article. Then answer each question separately.

In light of what you read in the article, what are your thoughts with regard to considering a multi-disciplinary approach during the program evaluation process?

Do you think that sociological literature can help build a clear business case for using the sociological approach? Any examples you can elaborate on from the article?

In the “Kentucky Community and Technical College System” project, what are your thoughts on the goal setting process? What other examples of sociological interventions could have been proposed to the evaluation process?


To what extent did the author/evaluator use his sociological imagination? Please give examples to elaborate on this.

Sociology as a Partial Influence on Evaluation Research

James G. Hougland Jr.1

Published online: 4 June 2015. © Springer Science+Business Media New York 2015

Abstract Although most evaluators are not sociologists, Sociology is represented in the evaluation profession. To what extent does the presence of sociologists affect the content and emphases of evaluation projects? Reflecting on four projects on which I have served as an evaluator, I conclude that a background in Sociology has been important in the sense that it has led to an emphasis on structural arrangements that may provide a basis for a program’s lasting impact. However, it is apparent that an evaluator’s disciplinary background is only one of several influences on the content of an evaluation project. Stakeholder mandates and standard evaluation practices also have important influences on how an evaluation is conducted. The impact of discipline-based content on the thinking and decisions of program administrators will vary according to the willingness of evaluation researchers to maintain regular communications, to ensure that the rationale for discipline-based content is understood, and to present results in terms that can be understood by people outside one’s discipline.

Keywords Evaluation · Discipline-based content · Capacity development · Job training programs · Higher education · Information technology · Women in science

Most evaluators are not sociologists. Rossi et al. (2004: 28) note that members of the American Evaluation Association are more likely to have degrees in Education (22 %), Psychology (18 %), “Evaluation” (14 %), and Statistical Methods (10 %) than in Sociology (6 %). In fact, most disciplines are outnumbered by “Other and unknown” (21 %). The foregoing statistics are rather dated (1993), but there is no reason to believe that Sociology has achieved new-found dominance among evaluation professionals.

Am Soc (2015) 46:467–479 DOI 10.1007/s12108-015-9273-x

This article is based in part on results of projects funded by the National Science Foundation: DUE-0101573, DUE-0101455, DUE-0532651, DUE-0101577, HRD-1007978. Any opinions, findings, and conclusions or recommendations expressed in this article are those of the author and do not necessarily reflect the views of the National Science Foundation.

* James G. Hougland, Jr. [email protected]

1 Department of Sociology, University of Kentucky, Lexington, KY 40506-0027, USA

Does this matter? Given that “Systematic evaluation is grounded in social science research techniques” (Rossi et al. 2004: 26), evaluators almost certainly benefit from a grounding in one or more social science disciplines. Because Sociology includes enthusiastic advocates of quantitative and qualitative methodologies, some sociologists may be equipped to approach evaluation with a flexible set of methodological strategies. Wide-ranging methodological expertise can be an important advantage, but methodological training alone is unlikely to seal any discipline’s case for its unique contribution to evaluation research. Most graduate programs in any of the social sciences are likely to offer several sophisticated seminars and workshops in methodology. Proponents of a discipline might do better to argue that the theoretical perspectives associated with the discipline will influence what researchers think about and the kinds of questions that they raise as they work on evaluations. Sociology, with its intellectual origins in structural and social-psychological analyses and its frequent emphasis on critical perspectives, is likely to call attention to questions that go beyond a mechanistic determination of whether a program is meeting its formal objectives. However, it is appropriate to exercise caution before claiming that any single discipline holds the key to effective evaluation.

In at least two senses, it is reasonable to question the importance of an evaluator’s disciplinary background. First, most evaluations are formally driven by official questions formulated by the stakeholders who have funded the evaluation. The officially mandated questions are likely to emphasize summative analysis focusing on whether a program is achieving its officially stated goals. To the extent that stakeholders expect official counts of successful outcomes to be provided, the evaluator’s disciplinary background may have little impact on the actual content of an evaluation report. However, attention to the formal questions identified by those funding the evaluation does not necessarily preclude other analyses that reflect the evaluator’s observations or the interests of less powerful stakeholders (Rossi et al. 2004: 399). In addition, evaluators can enhance their usefulness to the extent that they can offer explanations of the factors that have led programs to achieve or fail to achieve their formal objectives. The insights provided by one’s disciplinary background may prove useful as one attempts to interpret the dynamics underlying officially reported outcomes.

Second, many large-scale evaluations are conducted by multidisciplinary teams rather than individual researchers. While any one discipline is likely to lead to a narrow understanding, a combination of disciplines may lead to “collective comprehensiveness through overlapping patterns of unique narrownesses” (Campbell 1969: 328). However, this argument hardly discounts the importance of any one discipline. Each discipline that is included in a multidisciplinary effort makes a unique contribution to the overall intellectual product. Thus, to the extent that sociological insights can make valuable contributions to an evaluation effort, it is important that sociologists play meaningful roles in an evaluation team.

Based on these considerations, I am inclined to argue that a background in Sociology (or any other academic discipline) is likely to affect (for better or worse) the content and emphases of evaluation efforts. For that reason, I hope that an effort to reflect on Sociology’s impacts on my own evaluation efforts will prove worthwhile.

Working from degrees and a faculty appointment in Sociology, I have pursued a research program that has included several evaluation projects, including eight on which I served as Principal Investigator and four others. Five of the projects involved large, multi-year budgets for evaluation, while others were modest in scope.


Most have involved job-related training programs that overlap to a degree with my academic interests in organizational sociology and the sociology of work. All were funded by stakeholders with specific information needs, but I tended to have some freedom to identify specific research questions and to make decisions about methodological strategies. My decisions almost certainly have been influenced by my background in Sociology, but I would consider some of the influences to be more apparent than others. Project sponsors varied in the enthusiasm with which they received my sociological insights. The following are observations and reflections concerning several of the projects. Each of the projects described below was comparatively large in scope, allowing some room for personal influence over the formulation of evaluation questions.

Lessons from Four Evaluation Projects

CETA Evaluation, Funded by Kentucky Cabinet for Human Resources (1981–83)

This was my first evaluation project, initiated 5 years after earning my Ph.D. and 1 year after being promoted to Associate Professor with tenure. As a graduate student, I had participated in two seminars that touched on evaluation research, but my exposure to the evaluation literature was limited. I served as Principal Investigator but also worked closely with the Directors of two Centers that had been jointly awarded the project. In cooperation with them, I hired two sociologists as professional staff members. After 1 year, the senior sociologist relocated and was replaced by an anthropologist. The project included Research Assistants who were working toward degrees in several social science disciplines.

The project was intended to evaluate the success of employment-training programs administered in Kentucky with federal funding awarded through the Comprehensive Employment and Training Act (CETA). As the project progressed, CETA was replaced by the Job Training Partnership Act (JTPA), which also was part of the evaluation. Under JTPA, a program to assist “Displaced Homemakers” (individuals who were re-entering the job market due to divorce or the death of a spouse) was of particular interest to program administrators.

Program administrators indicated that they wanted to gain a rich understanding of participants’ experiences, so the research team decided to survey former participants to learn about their views of the training programs and their experiences on the job market after completing training. Using contact information provided by the sponsor, participants were contacted for telephone interviews. The interviews included information on job placements and the extent to which post-CETA placements represented an improvement over jobs held prior to CETA, but complexities in the data set prevented an immediate report of results for these tangible outcomes. Based on the centrality of satisfaction in the sociology of work, we also collected information on satisfaction with program processes. This aspect of the data required less detailed analysis, so we quickly prepared a report on program satisfaction for the sponsor. Results regarding satisfaction were generally favorable, so we assumed that the sponsor would be pleased with the report. However, when we met with key administrators to present the report, we quickly learned that this was not the case.


It had been self-evident to the research team that clients’ reactions to programs that supposedly were serving them would be important, but the program administrators criticized the satisfaction data as “soft” and unrelated to the outcomes for which they would be held accountable. Rather than being pleased to have some findings in exchange for their investment in the evaluation, they were concerned that we were delaying their receipt of the information they had anticipated (and possibly wasting their money) by devoting our time and energy to the presentation of findings that they considered superfluous.

A presentation of data on program satisfaction could have been justified. Katz et al. (1975) and Holzer (1976), whose work was available for citation at the time of our research, are among the influential researchers who have argued that clients’ reactions to public programs are worthy of careful scrutiny, and Ostrom (1977) has reported a close correspondence between subjective and objective measures of the quality of some public services. However, we presented the findings without offering the program administrators a preview or a prior justification of the data’s importance. I would attribute this to my inexperience in dealing with stakeholders in nonacademic positions as well as my assumption that variables that had been shown to be important in the research literature would immediately be recognized as important by program administrators.

I should note that the administrators’ collective mood improved as more objective data regarding job placement became available. I also should note that their suspicion of satisfaction data was not completely off-base. In a later analysis of predictors of CETA clients’ satisfaction with program outcomes, I found that job placement, wage level, and occupational social standing were poor predictors of satisfaction with program outcomes. Instead, satisfaction with program processes was the best predictor of satisfaction with program outcomes. While subjective reactions to a program may have value, my later analysis led me to conclude (with some regret) that “perceptions of a program’s quality do not correspond precisely to advantages gained from the program” (Hougland 1987a: 393).
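As an illustration only (not the author’s original analysis, which predates these tools), a predictor analysis of this general kind might be sketched as follows in Python; the data file and variable names are hypothetical.

    # A minimal, hypothetical sketch of a predictor analysis along these lines.
    # The data file and variable names are illustrative, not from the original study.
    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per former CETA client (hypothetical follow-up survey file).
    clients = pd.read_csv("ceta_followup.csv")

    # Outcome: satisfaction with program outcomes (e.g., a 1-5 rating).
    # Predictors: tangible gains (placement, wage, occupational standing)
    # alongside satisfaction with program processes.
    model = smf.ols(
        "outcome_satisfaction ~ job_placement + wage_level"
        " + occupational_standing + process_satisfaction",
        data=clients,
    ).fit()

    # Compare coefficients and model fit to see which predictors carry the weight.
    print(model.summary())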

Communications between the administrative staff and the evaluation team were more firmly established by the time attention turned to programs for displaced homemakers. Moreover, programs were doing a better job of entering outcomes data into information systems than had been the case during the early stages of the CETA evaluation. As a result, interviews with program participants were not necessary to compare programs’ success in meeting official expectations regarding job placement. This allowed a more timely report of outcomes to administrators. However, I did face the challenge of explaining why programs differed in their success. To that end, the evaluation staff and I used interviews with program coordinators and available data sources to collect data on local unemployment rates, client characteristics (e.g., prior job experience, educational attainment), staff characteristics, and program characteristics (e.g., types of training provided). It is safe to say that such information would have been collected regardless of the evaluator’s disciplinary training, and some of it proved useful.

However, my exposure to organizational sociology also led me to pay attention to programs’ efforts to establish and maintain appropriate relationships with external actors. This focus was encouraged by such researchers as Whetten and Aldrich (1979), who have argued that:


Because of the nature of their technology, people-processing organizations devote a high proportion of their resources to boundary-spanning activities, and to building and maintaining a large organization set (1979: 253).

In addition to size, Whetten and Aldrich note the importance of diversity within the organization set, and Pfeffer and Salancik (1978) note that an organization’s ability to derive benefits from its involvement in an organization set is tied to its success in identifying external groups that control important resources. The latter point about the importance of ties to groups with important resources received particularly clear support from our data set. Including business representatives on an advisory board proved to be an important predictor of successful job placement for clients in the private sector. Because the evaluation analysis was influenced by sociologists and other scholars engaged in the study of organizations, I was able to argue persuasively that establishing appropriate external ties could have a significant impact on a program’s success (Hougland 1987b).

SmartTeam Evaluation, Conducted for Statewide Community Colleges,1 Funded by National Science Foundation (2002–05)

SmartTeam was introduced in 2002 to provide a combination of integrated courses and group-based processes to enhance the academic progress of community college students whose entrance test scores indicate that they should be placed in developmental (non-College credit) Algebra or English courses. Traditionally, students who test into developmental courses have high drop-out rates because their eligibility for financial aid ends while their work in courses bearing college credit has barely begun. SmartTeam spoke to this problem by allowing students to take developmental courses on an accelerated schedule while also earning college credit for Physics and Computer Literacy. They completed the course work as part of a cohort whose members attended all or most classes together. As members of a cohort, they were expected to learn teamwork and to provide each other with peer support. The courses themselves were designed to tie academic content to realistic situations in which the applicability of knowledge is clear.

As evaluator, I visited campuses that were involved in the effort. During the visits, I interviewed faculty and students, administered questionnaires to students, and arranged to receive reports of academic progress. I also attended and observed workshops for faculty members who were either involved in SmartTeam or interested in learning about it. I also interviewed selected System-level administrators about their perceptions of the program and strategies for sustaining it.

Interviews with faculty led to a conclusion that involvement in SmartTeam had led to cross-disciplinary collaborations that could help to overcome a tendency for faculty to work primarily within their personal academic divisions. Interviews with students suggested that many were gaining increased confidence in their ability to solve problems on their own, and some (but not all) were developing a sense of mutual responsibility, resulting in improved attendance and participation.

1 Project and client names are pseudonyms.


Many students also were getting to know faculty members better than they expected, increasing their willingness to request assistance and support. Such support can be particularly important for students from households whose members are not strong supporters of education.

On one campus, I was able to compare the academic progress of SmartTeam students with a probability-based sample of similarly underprepared students who were not enrolled in SmartTeam but who were taking pre-algebra or elementary algebra courses.2 By every measure, students who were involved in SmartTeam experienced more success than their counterparts who were not involved in SmartTeam. During their first semester, they completed a higher percentage of credit hours with a grade of C or better. In later semesters, they were more likely to have passed the next level math and writing courses with grades of C or higher. They also were more likely to have maintained their enrollment over time.
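Purely as an illustration, a comparison of this kind could be organized along the following lines in Python; the file and column names are hypothetical, and, as the footnote notes, assignment was not randomized, so any such test would be descriptive rather than causal.

    # A minimal, hypothetical sketch of a two-group comparison of binary outcomes.
    # File and column names are illustrative, not from the original evaluation.
    import pandas as pd
    from statsmodels.stats.proportion import proportions_ztest

    students = pd.read_csv("smartteam_comparison.csv")  # hypothetical file

    binary_outcomes = [
        "completed_credits_c_or_better",   # first-semester credit completion
        "passed_next_level_course",        # later math/writing success
        "still_enrolled",                  # continued enrollment over time
    ]

    for outcome in binary_outcomes:
        grouped = students.groupby("in_smartteam")[outcome]
        successes = grouped.sum()    # students meeting the outcome, by group
        totals = grouped.count()     # group sizes
        stat, p_value = proportions_ztest(successes.values, totals.values)
        rates = (successes / totals).round(2).to_dict()
        print(f"{outcome}: rates by group = {rates}, p = {p_value:.3f}")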

Did my sociological background influence this evaluation? Because of the importance of student outcomes to program stakeholders, the evaluation project’s emphasis on outcomes did not reflect my sociological background. On the other hand, my interest in organizational processes probably led me to devote more time and energy to semi-structured interviews than system administrators would have required. However, I think that Sociology’s influence probably was most powerful as I formulated recommendations for strengthening and sustaining the program. I noted three problems that needed attention. In each instance, my sociological background led me to propose a structural solution that would be embedded in organizational practice.

First, student enrollment in SmartTeam was voluntary, and recruitment was problematic throughout the period of the evaluation. With its accelerated and concurrent courses, the program could appear intimidating. For that reason, I recommended that administrators establish a stronger expectation that advisors encourage students with appropriate test scores to enroll.

Second, individual campuses in the system had not developed a unified approach to providing students with appropriate training in writing. Without trying to undermine individual campus autonomy, I recommended more intensive communication regarding how writing should be handled.

Finally, attendance at System-level workshops designed to familiarize faculty members with SmartTeam was voluntary. Although workshops were well-received by most individuals who attended, it is not clear that they led to sustained structural changes in the curriculum at most institutions. Because the SmartTeam approach has shown itself capable of producing positive outcomes, I recommended that workshops be combined with formal expectations that Colleges send teams including high-level administrators and that participating institutions develop a formal plan for implementing SmartTeam as a tangible product of the workshop. In other words, I have encouraged attention to structural change as opposed to individual voluntarism.

To my knowledge, my recommendations received some support but were not implemented in the form that I had suggested. My hope is that they have encouraged ongoing consideration of how students with problematic academic backgrounds can best be served.

2 Participation in SmartTeam was voluntary and often based on advisors’ recommendations, so no claims can be made for randomized assignment to the different types of courses.


KITCenter Evaluation, Conducted for Kentucky Community and Technical College System, Funded by National Science Foundation (Multiple Grants; 2001–09)

The Kentucky Information Technology Center (KITCenter) was a major initiative to enhance instruction and capabilities regarding information technology (IT) throughout Kentucky. A major emphasis involved professional development on the part of community and technical college faculty to enhance their ability to develop a new IT curriculum and to teach courses within it. As external evaluator, I worked with project leaders to identify a number of short-term and longer-term goals against which the project would be evaluated. Short-term goals involved faculty credentials, curriculum development, course offerings, and student enrollment. Longer-term goals focused on the success of students in the enhanced IT courses in obtaining degrees or certificates, and, following completion, their success in entering more advanced educational programs or obtaining employment in their field. An additional goal concerned the level of satisfaction on the part of employers of graduates who obtained employment. For each goal, I worked with the faculty to establish quantitative indicators of success, but, as I have reported elsewhere (Hougland 2008), the quantitative standards proved rather artificial and arbitrary when compared with the actual experiences of those involved in the program. Goals certainly have a place in evaluation research.

[G]oals can be helpful in conceptualizing and organizing an evaluation. Moreover, the funding agencies that often require evaluations are unlikely to be satisfied with an evaluation that pays no attention at all to goals (Hougland 2009: 72).

However, tendencies for goals to become contested by the many stakeholders involved in any given organization (Cyert and March 1963; Pennings and Goodman 1977; Pfeffer and Salancik 1978) suggest that a goal-oriented evaluation may fail to capture all important aspects of a program. For that reason, my focus in evaluating KITCenter gradually shifted from a quantitative analysis of whether specific numeric targets were being met to a focus on the program’s ability to sustain itself over time.3

Borrowing from work on international development, I attempted to focus attention on capacity development, thinking of “capacity” as an organization’s ability to achieve its mission effectively and to develop “strategies that are sustainable and have impact” (Alliance for Nonprofit Management 2015). Following Todd and Risby (2007: 2), I looked for evidence that the KITCenter initiative was contributing to the development of capacity at three levels:

Individual: Were individuals obtaining new knowledge and skills that could be applied to new challenges and opportunities?

Organizational: Are the colleges and schools involved in the effort developing appropriate structures and procedures? Are they showing evidence of an ability to modify structures and practices as situations change?

3 To be clear, I continued to report quantitative outcomes, but I attempted to shift the emphasis in written and oral reports.


Systemic: What is happening in the overall system’s headquarters to support initiatives at local colleges?
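Purely as an illustration of how evidence for these three levels might be organized during an evaluation, consider the following sketch; the categories follow Todd and Risby (2007) as described above, but the class, field names, and example entries are hypothetical.

    # A hypothetical sketch of a structure for organizing capacity-development
    # evidence at the three levels described above. Entries are illustrative.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class CapacityLevel:
        name: str                      # "individual", "organizational", "systemic"
        guiding_questions: List[str]
        evidence: List[str] = field(default_factory=list)

    framework = [
        CapacityLevel("individual", [
            "Are individuals gaining knowledge and skills for new challenges?",
        ]),
        CapacityLevel("organizational", [
            "Are colleges developing appropriate structures and procedures?",
            "Can they modify structures and practices as situations change?",
        ]),
        CapacityLevel("systemic", [
            "Is the system office supporting initiatives at local colleges?",
        ]),
    ]

    # As interviews, documents, and institutional data accumulate, findings
    # can be attached to the relevant level for reporting purposes.
    framework[0].evidence.append("Some faculty report new certifications after workshops.")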

As I have reported in more detail elsewhere (Hougland 2009: 73–77), evidence obtained during the final months of KITCenter’s NSF funding provided some reason to be optimistic about the continuing viability of IT-related efforts within KCTCS.

At the individual level, some faculty who participated in KITCenter workshops reported that they had earned formal certification because of the knowledge they gained (12 %), more have started teaching new courses because of the knowledge they gained (29 %), and most report that they are better equipped to meet the needs of IT students (93 %) and of co-workers with IT problems (77 %). At the organizational level, all KCTCS institutions developed an IT curriculum while KITCenter was in effect. Several that offered no IT certificates prior to KITCenter have begun to offer them. Moreover, the percent of KCTCS IT faculty who were employed on a full-time basis increased from 50 % when KITCenter started to 76 % six years later. I have noted that “a critical mass of full-time faculty members is important for ongoing program development because it is full-time faculty members who will be involved in curriculum development, negotiations for resources, and other critical tasks outside the classroom” (Hougland 2009: 76). Less can be said about capacity development at the systemic level, but it is noteworthy that an individual has been given responsibility for maintaining IT-related communications and that an IT curriculum committee continues to meet on a system-wide basis. While individual colleges make their own decisions about which specializations to offer, they do so within the context of a system-wide curricular framework. In addition, the curriculum committee allows geographically dispersed faculty members to maintain professional contact over time.

The KITCenter evaluation began as a fairly standard evaluation effort, but my exposure to the organizational literature allowed me to perceive loose linkages between formally articulated goals as well as barriers that needed to be overcome for KITCenter to achieve long-term viability even after its eligibility for NSF funding had expired. I believe the emerging emphasis on capacity development expedited the effort of faculty and administrators to maintain support for an effort that was showing promise for students as well as faculty.

External Evaluation of NSF ADVANCE at East-Central University,4 Funded by National Science Foundation (2010–Present)

The National Science Foundation’s ADVANCE initiative is intended to improve opportunities and organizational climates for women in academic careers in science, technology, engineering, and mathematics (STEM), thereby contributing to the development of a more diverse science and engineering workforce. ADVANCE includes several types of grants. East-Central University applied for and received an Institutional Transformation (IT) award. IT awards are expected to promote comprehensive programs for change that will be institution-wide in their impacts. At East-Central University, the ADVANCE program is designed to promote the success of all faculty members by creating a diverse scientific community promoting constructive interactions leading to professional and personal development.

4 This is a pseudonym.


The program is intended to develop a sense of interdependence and collective efficacy based on changes introduced at multiple levels (ranging from University-wide policies to individual opportunities for professional development).

I serve as External Evaluator for the project, which also has an Internal Evaluator and a graduate student serving as a research assistant. The Internal Evaluator is a faculty member in Mathematics, while the discipline of the graduate assistant varies from year to year. In every other project I have evaluated, other sociologists could be found, if at all, only on the evaluation team, but the ADVANCE leadership includes three sociologists. As a result, sociological theory has influenced the content of the effort as well as some emphases of the evaluation.

The evaluation team has found NSF to be highly prescriptive and specific regarding types of data that must be collected, but we also have had the freedom to identify additional topics that should be investigated and to establish an overall framework for the evaluation. To that end, we have worked with project leaders to identify several specific research questions that are tied to specific aims of the project. Research questions have been investigated using a combination of official documents, institutional data, semi-structured interviews, written questionnaires, and focus groups.

Several aspects of the project have sociological implications. Some might have proceeded in their current form regardless of the evaluators’ disciplinary background, but other emphases probably attained a more prominent profile because of the representation of Sociology on the evaluation team. First, the evaluation team has used structural foundations as its starting point. We have been attentive to the formal establishment and staffing of an ADVANCE office and evidence of its official recognition by university officials. We also have been attentive to the establishment and implementation of official policies that support work-life balance. Examples include policies related to extension of the tenure clock, parental work assignments, and lactation procedures.

Second, participation in formal events sponsored by ADVANCE has been monitored, as have participants’ assessments of the events. Aside from their specific content, the events provide opportunities for networking that may have a lasting impact on the quality of discourse within the University.

Third, we have been attentive to ties between hiring authorizations and mandated participation in workshops that will encourage equity in the hiring process. The ADVANCE program began holding search committee trainings in 2011. In many cases, searches were not authorized to begin until training had been completed and departments developed approved recruitment plans. Our interviews with Chairs indicate that some department leaders readily understood what was expected, but others required considerable help. It is in the latter cases that ADVANCE may have made some of its most significant contributions. While departments vary in their success, the percentage of women among the faculty of targeted departments has increased over the life of the grant.

Fourth, we have been attentive to the ability of women to be promoted over