Resources for Middle Eastern Language Programs

2011 Western Consortium Middle East Language Program Evaluation Workshop

"Making the most of program evaluation"

Sponsored and hosted by the National Middle East Language Resource Center & the Center for Middle Eastern Studies at the University of Texas at Austin; facilitated by the University of Hawaii National Foreign Language Resource Center

Scroll down to see the schedule of events, and to download PowerPoint presentations, handouts, and discussion summaries.

FRIDAY, JULY 29

Summary:
Middle East language programs (MELPs) need to engage in evaluation for
a variety of reasons, not least mounting pressure to answer questions
about the value and effectiveness of contemporary language education
in the U.S. Given these and related demands, how can MELPs pursue evaluation in
ways that support our efforts; improve our teaching, learning, and other activities;
and demonstrate the value of what we do to a variety of audiences? This
presentation reviews key findings emerging from current research
and practice in language program evaluation, highlighting particularly useful
strategies for initiating, sustaining, and acting upon evaluation, both within
individual programs and across the discipline.
[View PowerPoint]

2:45-4:15 Workshop by John Davis: Using surveys for understanding and improving foreign language programs

Summary:
Surveys are often the first method we think of for collecting data in program
evaluations, yet the development and use of good surveys may be less
straightforward than presumed. This workshop provides advice (and examples) on
using surveys in tertiary language programs, from the beginning planning stages
through to reporting and acting on survey findings. The overall goal of the
workshop is to help language educators develop and administer quality surveys
that produce useful information for various program development and evaluation
aims.
[View PowerPoint and handout]

Summary:
Academic programs are regularly encouraged or required to engage in so-called
'program review', typically involving a self-study and a brief site visit by
faculty from peer programs or other domain experts. Unfortunately, the utility
of such reviews is often threatened by a variety of challenges, including the
lack of a guiding framework or evaluation questions, minimal or
non-participation by important stakeholders, inadequate/invalid/unreliable data
to illuminate program activities and outcomes, and external reviewers with
insufficient understanding of the target program and/or of evaluation purposes
and methods. In this roundtable discussion, participants offer suggestions for
how to improve program reviews, with an eye towards developing recommendations
for practice in MELPs.
[View Norris Handout]
[View Panel Discussion]

SATURDAY, JULY 30

Summary:
Current accountability and accreditation systems require college foreign
language (FL) programs--including academic programs, National
Resource Centers, area studies programs, and others--to engage in evaluation of
program- or project-level outcomes, though often such activities are seen as
daunting and bureaucratic. How can we build evaluative culture within
organizations and create a proactive evaluation framework that addresses the
demand for outcomes? The presenters will provide examples of (a) transformative
organizational and evaluative culture in FL departments and National Resource
Centers as well as (b) changes in curriculum, pedagogical practices, and
project designs as we engaged faculty and staff in stating, mapping, and
assessing/evaluating outcomes. We explore the strategies and factors that seem
to affect how assessment and evaluation are valued, as well as the value
these processes contribute.
[View PowerPoint and handout]

Materials:
Below is a list of generic learning outcomes for 1st- through 6th-semester Arabic that
were developed by graduate students at the University of Texas at Austin in a curriculum development course.

10:40-11:20 Nahal Akbari: Using program logic models to understand and improve Persian language programs

Summary: College language programs are often critiqued for lacking clear curricular scope and sequence,
meaningful articulation across courses/semesters/years of study, or valued outcomes that respond to specific societal and educational needs. At the same time, it is clear that language programs consist of
multiple elements, from materials and instruction to trained teachers to fitting assessments, all of which interact to produce the educational experience. How can the distinct parts of a language
program be combined intentionally into an overall effective educational design? How can our theories about language teaching and learning be translated consistently into practice across courses and within the
different pedagogic efforts we make? In this presentation, we report on the use of "logic models" as one way of literally mapping out the various elements of a language program and demonstrating how they
are linked together. Using the example of the Persian language programs at the University of Maryland, we show how logic models can help to explicate the theory underlying our educational program, the needs
to which the program responds, the outcomes it seeks to achieve, and the pedagogic practices we pursue. Further, we highlight the contribution of logic models to identifying strengths and weaknesses,
as well as indicating aspects of the program which may require evaluation and/or improved design.
[View PowerPoint]

11:20-12:00 Esther L. Raizen and Joanna Caravita: The Use of 'Sabras' as Mentors for Advanced Hebrew Students

Summary: In the spring of 2010,
we offered the course
"Hebrew via Popular Culture," an upper-division course conducted entirely in Hebrew. The course immersed students in a variety of cultural issues, and because
of the heavy reliance on current events and blogs/talkbacks, fairly quickly focused on three aspects of opposition in Israeli society: political left and
right, religious and secular, and Ashkenazi/Sephardic Jews. Two weeks into the course, students were assigned individual mentors from the Israeli
community, either from their parents' generation or from an earlier generation. The mentorship experience was meant to add cultural depth in
terms of both time span and emotional attachment to historical and social issues. It was also designed to provide broader exposure to the language.
In this presentation we will discuss the parameters of student-mentor work and relationships, and the impact of the mentoring component of the course,
as evaluated in the spring of 2010 and again in the summer of 2011.
[View PowerPoint]

1:30-3:15 Evaluation-topic-specific breakout sessions, facilitated by the UH team and other presenters

Summary:
This session provides an opportunity for individuals to meet and discuss
evaluation issues specific to their programs and interests, topics to be
determined based on a survey of attendees' interests, targeting 4-6 topics.
Facilitators will provide a short overarching commentary on the particular
topic, and each group will plan to report back.

Discussion topics:

1. How should outcomes assessment help our programs? Stating and assessing outcomes with an eye towards use and impact.
2. What is the best way to get started with program evaluation? Strategies for initiating feasible, useful evaluation projects.
3. How can we develop an 'evaluation culture' in our programs? Encouraging participation, buy-in, and a willingness to change.
4. What are the alternatives for collecting data in language programs? Key methods and ethics for empirical evaluation practice.

Framing questions for each group:

- What are the key challenges associated with your particular topic, in MELPs?

- Are there any good examples of practice that can/should be shared?

- Which strategies might be pursued by ME language educators in responding to the challenges associated with this topic?

3:15-3:30 Break

3:30-4:30 Reporting session

Breakout groups report back to the full group on the challenges, examples, and strategies discussed in the breakout sessions, with an eye towards informing the Sunday strategic planning session.
[View discussion highlights]

Summary:
Evaluation calls upon empirical data as a primary basis for informing decisions
and taking actions in language programs. Yet there are numerous possible
methodologies for gathering data, from assessments of student learning, to
observations of how well programs are delivered, to perceptions of satisfaction
and impact. Indeed, many of the outcomes associated with language programs may
defy easy 'measurement'. In this roundtable discussion, participants will
provide insights into useful methods for collecting meaningful data on the
distinct kinds of outcomes (learning and otherwise) that ME language programs
seek to encourage.
[View Norris Handout]
[View Panel Discussion]