SLO Handbook

Introduction

Mission

Cerro Coso Community College is committed to the ongoing assessment of student learning in academic programs and student services through a systematic, college-wide assessment plan. The results of assessment provide clear evidence of student learning and are used to make further improvements to instruction and services.

Definition

Student learning outcome assessment is an activity in which institutional and instructional effectiveness is certified by evidence of student learning. Specific measurable learning behaviors are identified and assessed, and the results of the assessment are used to improve programs, courses, and services. Assessment, in this context, is not an evaluation of individual students or faculty.

There are several other concepts implicit to assessment:

Its primary purpose is to improve student learning at Cerro Coso.

It is a faculty-driven, collaborative process.

It is a process that is on-going and cyclical.

It does not encroach upon academic freedom.

The results are used constructively, not punitively.

It is a process by which individual student learning outcomes are defined at the institutional, program, and course level. For a particular outcome, expected student achievement is compared with actual outcomes, using predetermined benchmarks. If the results are lower than what has been determined to be acceptable, a plan to improve student learning is developed and implemented.

Assessment, in this context, is not related to grades or faculty evaluation. Although students provide evidence of learning, this is not an assessment of individuals, but an assessment of curriculum design and institutional best practices to the end that students are successfully learning.

Philosophy

Self-assessment is a natural extension of instruction and student services, and all members of the College share in this responsibility. It is a means to an end, with the result being continuous improvement in student learning. Student populations are becoming more diverse, and a rapidly changing employment economy creates challenges to meeting all students’ needs effectively. Consequently, the teaching methods of today may not work as well for tomorrow’s learners. We need to continually assess what is working and what requires improvement. Another trend that makes self-assessment a natural academic activity is that the culture of teaching and learning is shifting from independence and autonomy to interdependence and collaboration. Intra-departmental, collaborative assessment is a natural extension of this culture. We want to ensure that students are learning, so we should be interested in verifying this. Finally, we are accountable to external organizations and to students, as consumers, for our learning effectiveness. Assessment certifies the quality of the education we offer.

Some people fear that student learning outcome assessment is a slippery slope, and that if we "give in" to it, instruction will eventually become heavily regulated by the Department of Education. In truth, if we do nothing or respond slowly to implementing assessment, such regulation will be a foregone conclusion. Rather, it is by thorough, self-initiated assessment that we will retain our autonomy and, thus, the quality of our instruction. Regional accrediting agencies are our ally in this effort and are working diligently on our behalf. The Accrediting Commission for Community and Junior Colleges (ACCJC) requests reports about our progress, not to micro-manage us, but rather to make a case to the Department of Education that we are effectively conducting assessment and that intervention by the Department is not necessary. The more we can accomplish, the more progress we can report to the ACCJC, and the better protected we are against regulation.

Several models and approaches to assessment have been discussed during faculty flex days, faculty chair meetings, and learning outcome workshops. Several major themes have emerged from these discussions:

We favor a philosophy and a process that best supports full faculty participation and the successful completion of an assessment cycle, including the definition of outcomes and assessments, assessment design and collection of data, analysis of the data, and implementation of improvements based on the data.

We favor a process that is simple, but not simplistic. Outcome assessment should be simple enough to be manageable and sustainable, but it should be thorough enough to assess and improve instructional programs and services.

We favor quality over quantity. Due to the large number of course student learning outcomes, we cannot assess every single outcome in a sustainable cyclical fashion. Therefore, priority should be given to high impact learning outcomes. There are considerably fewer program and institutional learning outcomes, however, and we should strive to incorporate all of those into a sustainable cycle.

We favor assessment being a faculty-driven process to ensure that it is constructive and non-punitive.

Roles and Responsibilities

Faculty/Department Chairs

Faculty or Department Chairs assume primary responsibility for all aspects of student learning outcome or administrative unit outcome assessment, although the process should be collaborative within departments and/or programs, and it may be necessary to rely more heavily on particular faculty members who have more expertise in a course's subject matter. The ACCJC is interested in seeing evidence of collaboration and dialog, so it is important to maintain detailed department meeting minutes evidencing this discussion.

Student Learning Outcome Coordinator

The Student Learning Outcome Coordinator (SLOC) provides college-wide leadership in the implementation of student learning outcome assessment. This includes the following:

Obtain training in regional and state-wide SLO workshops.

Work with the Curriculum and Instruction Council to establish an institutional process.

Mentor and train the College faculty in the design of outcomes and assessments and the implementation of assessment studies.

Document progress that is being made at the College in each phase of the assessment process.

Report progress to constituents, including the College community and ACCJC.

The SLOC should have a strong understanding of curriculum, program review, and accreditation standards and is a member of the Curriculum and Instruction Council. The following is a list of other skills identified as necessary for SLOCs, based on input from SLOCs, curriculum chairs, and administrators throughout California:

An understanding of student learning outcomes and assessment

Classroom teaching experience

Educational research

Sensitivity to diverse backgrounds

Faculty leadership

Strong interpersonal and motivational skills

Organization and ability to keep current records

Knowledge of institutional processes

Institutional Researcher

Cerro Coso has access to the District Institutional Researcher 2 days per month for support of unit plan, program review, and student learning outcome assessment data. However, this support is not adequate—especially with respect to the need for a researcher's guidance on the design of effective assessment studies. Although many faculty are familiar with research practices, few have any experience with educational research. We are in need of a researcher who is dedicated to our campus to provide guidance in the crafting of assessments that are valid and reliable and to assist in the collection of data that is not easily attainable through classroom-embedded assessments or through Oracle Discoverer. It is important that we have a researcher who is a member of our college culture and understands the complexities of serving students across multiple sites over a large geographic area.

Process

There are 3 primary phases to outcome assessment:

Outcome and Assessment Definition

The Assessment Study

Implementation, Planning, Budgeting

I. Outcome and Assessment Definition

Student Learning Outcomes identify what students can DO to demonstrate that they are learning. There should be clear linkages between student behavior, the production of a learning artifact, and assessment of that artifact. Other characteristics of student learning outcomes include:

They are NOT instructional objectives or goals

They are an observable behavior that demonstrates that learning has occurred

They focus on the end result, not on the learning process.

They are learner-centered, rather than instructor-centered.

They may or may not be content specific

They should take a diverse student population into account

They should be delivery platform independent (classroom, ITV, online)

As much as possible, they should engage the higher levels of the cognitive, affective, and/or psychomotor domains.

Student learning outcomes should be defined for:

the institution, defining broad learning outcomes under which all courses fall

general education competency areas

certificates and degrees

individual courses

instructional support and student services

Ideally, program learning outcomes should be defined first, resulting from input from advisory committees or academic organizations for the discipline. Course learning outcomes should emerge from program learning outcomes. A matrix is useful in presenting how courses align or map to program learning outcomes.

Administrative Unit Outcomes (AUOs) identify what students (or clients) will experience or receive as a result of a given service. AUOs may also be business related, identifying particular goals related to efficiency or achievement.

Structure

To be fully descriptive and useful, the structure of a student learning outcome includes:

The condition in which student learning takes place

An observable outcome to demonstrate that learning has occurred

A specification of acceptable results

The assessment tool and method.

Conditions. In our course outlines of record and program documents, the condition is either "upon successful completion of the program" or "upon successful completion of the course."

Outcomes. We refer to Bloom's Taxonomy of Educational Objectives (see Appendix A) for suggestions about appropriate observable outcomes (although Bloom's is not an exhaustive list). Bloom organized outcomes into three domains: cognitive, psychomotor, and affective. The cognitive domain relates to knowledge, the psychomotor domain relates to skills, and the affective domain relates to attitudes and values. If possible, we favor a set of outcomes that draw from each domain, although the psychomotor domain may not be appropriate for all programs or courses. Each of those domains has outcomes further organized according to depth of processing. We favor higher level outcomes that demonstrate critical thinking, a high degree of skill mastery, or personal integration of attitudes and values. Such higher level outcomes are listed in the right columns of the outcome tables.

Acceptable Results. It is also useful to determine what the acceptable benchmark of student achievement will be. This has nothing to do with students passing courses or obtaining credit. Although we are measuring student learning in assessment, the objective is to determine how well we are doing with respect to instruction or student services. The question to be considered is: at what level would we determine that there is nothing more that we can do to improve student learning? There are student success factors that are outside of our control, so 100% student success is not realistic. However, something less than 100% will be appropriate, perhaps 90%, 85%, 80%, etc.

The determination of what will be acceptable is dependent upon many factors and, at first, may have to be a best guess among departments and program areas. That benchmark may differ from department to department, and it may differ between courses within a department. It may even differ between outcomes within a single course. An illustration of why this may differ is the following:

In some programs, entry level courses may have greater attrition than advanced courses because some students likely discover sooner rather than later that the program is not a good fit for their interests or aptitudes. This is a factor over which we have no control. Defining 75% as an acceptable result for assessment may be appropriate for an entry level course, during which many students are determining whether they are really interested in that program of study, whereas 95% might be appropriate for the capstone course of the same program because presumably by that time, students are confident about their academic goals. We would expect greater success, given the same quality of instruction.

Again, you are determining the point at which you believe institutional enhancements will no longer improve the results. This benchmark will inform you about what to do with the assessment data—make improvements or congratulate yourselves. There isn't a science to this. Determining appropriate levels is best achieved through continuous dialog within your department, as well as reassessment of the criteria after an assessment cycle.

Assessment Artifacts. Finally, the student learning outcome assessment definition needs to specify how the outcome will be measured. This includes an artifact and a method for scoring the quality of that artifact. Examples of common assessment artifacts include:

Projects

Portfolios

Essays

Speeches

Performances

Skill Demonstrations

Athletic performances

Exit Interviews

Multiple Choice Exams

Essay Exams

Surveys

Critiques

The artifact(s) chosen for the assessment should be appropriate for the outcome verb. For example, a learning outcome of describe is better measured by an essay than a multiple choice exam. Another consideration for the selection of an artifact is the relative ease or difficulty with which the assessment can be conducted. Exams and surveys are easier to administer than portfolio assessments that are scored with a rubric. Departments should give careful thought to choosing an assessment that effectively measures the learning outcome, but is also reasonable to administer. An ideal assessment definition that is never implemented has little value.

Assessment Scoring. Some of the above artifacts can be simply scored for correctness, as is the case with multiple choice exams. Rubrics are appropriate for scoring projects, portfolios, essays, speeches, performances, skill demonstrations, critiques, or essay exams. Response scales, such as Likert (respondents choose Strongly Agree, Somewhat Agree, Neutral, Somewhat Disagree, Strongly Disagree), may be useful in scoring surveys, interviews, or critiques. A scale might also be used to score an artifact holistically.
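As an illustration of Likert-based scoring, the following sketch tabulates 5-point survey responses for a single outcome and reports the percentage of respondents at or above a chosen level. The response labels match those above; the function name and the "success" threshold are illustrative assumptions, not prescribed values.

```python
# Hypothetical sketch: tabulating 5-point Likert survey responses
# for one learning outcome. The threshold (>= "Somewhat Agree")
# is an assumption a department would set for itself.

LIKERT_VALUES = {
    "Strongly Agree": 5,
    "Somewhat Agree": 4,
    "Neutral": 3,
    "Somewhat Disagree": 2,
    "Strongly Disagree": 1,
}

def percent_successful(responses, threshold=4):
    """Return the percentage of respondents scoring at or above threshold."""
    if not responses:
        return 0.0
    scores = [LIKERT_VALUES[r] for r in responses]
    successes = sum(1 for s in scores if s >= threshold)
    return 100.0 * successes / len(scores)

survey = ["Strongly Agree", "Somewhat Agree", "Neutral", "Strongly Agree"]
print(f"{percent_successful(survey):.0f}% of respondents agreed")  # → 75%
```

The same pattern applies to rubric scores: map each artifact to a numeric score, then compute the share of artifacts meeting the department's chosen level.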

Assignment or course grades are not a valid means of assessing student learning outcomes, for the following reasons. Course grades and many assignments reflect multiple skills and outcomes, and we need to tease out a specific outcome for measurement. Course grades may also reflect criteria that have nothing to do with course learning outcomes, but are imposed within a course to motivate participation and the development of a learning community. Grades are an individual evaluation, whereas outcome assessment is collaborative and the results are generalized.

However, certain types of course assignments can be leveraged for student assessment AND course assessment. To do so, you would need to ensure that the same assignments and measuring tools are used in every single section of a course over multiple semesters and among all faculty. There must be a way to tease out a specific outcome and assess only that outcome.

Instructional Student Learning Outcome Examples

The following are examples of several complete outcome statements, each comprising the condition, the outcome, the acceptable result, and the assessment tool and method:

Upon successful completion of the course, students will be able to:

Evaluate the merits of various learning theories and reflect on how they might be applied to their own practice with 80% accuracy and thoroughness. This will be measured with an essay, scored with a rubric.

Compose basic lyric and prose poems that integrate the key elements of poetry writing with 85% accuracy and completeness. This will be assessed through an appropriate sample of student poetry, scored by a rubric.

Apply the use of communication as a helping skill. This is to be measured by an oral presentation, scored by a rubric.

Construct diagrams that accurately explain and demonstrate such earth science processes as the hydrologic cycle, the rock cycle, and the plate tectonic cycle. This will be evaluated through lab reports, scored with a rubric, based upon professional society guidelines.

Analyze case studies and identify the best possible solution to a problem. This will be measured by a multiple choice exam, with the criterion of success being that 80% of students answer correctly.

Demonstrate stage techniques in the performance of cold readings and monologues. This will be measured through peer critiques, scored through a faculty-developed rubric.

Upon successful completion of the program, students will be able to:

Develop and display a portfolio of visual art works from a variety of visual art disciplines that reflects a personal direction and individual creativity. The portfolio will be assessed by a rubric.

Demonstrate that they are prepared for one or more of the occupations specified in the program descriptions for digital animation. This will be measured by exit surveys, scored with a Likert scale.

Upon graduation from the institution, students will be able to:

Use inductive and deductive reasoning to analyze complex problems and synthesize appropriate resolutions. This will be measured with a post graduation alumni survey that uses a 5-point Likert scale to determine the extent to which students believe they are competent.

Connect the contributions of the humanities to the development of the political and cultural institutions of contemporary society. This will be measured with a post graduation alumni survey that uses a 5-point Likert scale to determine the extent to which students believe they are competent.

Student Services SLO Examples:

Given the completion of a counseling session, 80% of students will be able to articulate, identify, develop, and clarify educational, career, vocational, degree, and transfer goals. This will be measured with a survey that uses a 5-point Likert scale to determine the extent to which students believe they are competent.

Given Web-based and printed instructions, 90% of students will be able to successfully apply, update, register, and drop classes. This will be measured using data collected from the computer in Admissions and Records and from BanWeb.

Given a detailed orientation, 90% of EOPS students will effectively use services and support to make satisfactory academic progress and achieve career goals. This will be assessed with a rubric that is completed by the counselor following meeting with a student.

Given print and electronic resources, 90% of veterans will be able to identify the necessary steps for obtaining educational services for veterans. This will be measured and scored by a survey in which steps are ranked in the order in which they should occur.

Given the Student Conduct Policy, 90% of student athletes will act in accordance with Cerro Coso’s Student Conduct Policy both on and off the field. This will be assessed by documenting incidents of athlete misconduct and deriving related data.

Administrative Unit Outcome Examples

Given appropriate staffing and resources, 90% of graduating students will rate service received from the Office of Admissions & Records as "excellent." This will be assessed with a survey, scored by a Likert scale.

Given appropriate staffing and resources, 90% of students will evaluate campus maintenance and security as "excellent." This will be assessed with a survey, scored by a Likert scale.

Given appropriate resources and recruitment, the girls’ basketball team will place among the top 3 schools in the league. This will be assessed through conference points awarded to individual teams.

Approval of Student Learning Outcome and Assessment Definitions

Student learning outcomes are identified in the appropriate curriculum documents, such as the program curriculum form or the course outline of record and are approved by the Curriculum and Instruction Council (CIC) via the approval of those documents. CIC and the SLOC are faculty resources to provide input and guidance on the crafting of outcomes so that the outcomes are observable, measurable, and use higher order learning domains (critical thinking) whenever possible. Bloom's Taxonomy is recommended as a resource for the selection of outcome verbs.

CIC requires that ALL student learning outcomes have assessment statements included in all new or revised course outlines of record and program documents. Assessment statements simply follow as a second sentence in the Student Learning Outcome Assessment sections of the CORs and program documents (see the examples above).

II. The Assessment Study

The Assessment Study is the process by which a learning outcome is actually measured and the results analyzed. It is important to understand that only 1 outcome is assessed in a particular study.

This phase occurs over an appropriate period of time, to allow data to be collected from a sufficient sample. For the assessment of course student learning outcomes, this is usually 2-3 semesters. For program learning outcomes, it could be 2-3 years. There are 3 steps to the Assessment Study phase:

Design the study

Collect the data

Analyze the results

Design the Study

In the previous phase, the assessment artifact, scoring method, and possibly the criteria for success will have already been defined. At this point, however, departments or program areas will need to work out the details of how the assessment will be conducted. The following issues/questions should be considered:

What outcome will be assessed?
Some suggestions...

A program's highest impact learning outcome

An outcome from a high impact course

An outcome that faculty are extremely passionate about

Who is on the assessment team?

Subjective assessments must have 2 or more assessors to reduce bias

Objective assessments must have someone to tabulate the results

What assessment artifacts or scoring devices need to be developed? For example:

Exam questions

Surveys

Interview questions

Likert scales

Rubrics

What criteria will be used to determine whether the outcome of the study is successful or not (if not previously defined)?

How will assessment artifacts be collected from students and archived until a review of those artifacts occurs? For example:

Videotaping performances

Photocopying written works

Photographing visual works

Electronically storing digital works

Generating a mailing list for surveys, and budgeting for postage and self-addressed, stamped return envelopes

What will constitute a sufficient sample?

Data should ideally be collected from multiple course sections across terms, sites, delivery modes, and instructors. Depending upon the number of sections offered each semester, a sufficient sample may require data collection over 2-3 semesters. A few courses, however, may provide a sufficient sample over a single semester, due to the number of sections offered and variation of delivery locations, modes, and instructors. ENGL C101 may be an example.

Similarly, program learning outcomes should be assessed with a sufficient sample, which may mean several groups of graduating classes over 2-3 years.

How will the results be recorded?

Both objective and subjective data need to be tabulated and compiled.

Who will write the analysis of the findings?
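As one way of recording results, the design questions above can be answered with a simple tally of scores against the predetermined benchmark. The sketch below is illustrative only: the 1-4 rubric scale, the passing score, and the 80% benchmark are assumptions a department would set during study design.

```python
# Hypothetical sketch: recording rubric scores for one outcome and
# comparing the aggregate result against a predetermined benchmark.
# Score scale, passing score, and benchmark are illustrative only.

def summarize(scores, passing_score, benchmark_pct):
    """Tabulate rubric scores and report whether the benchmark was met."""
    passed = sum(1 for s in scores if s >= passing_score)
    pct = 100.0 * passed / len(scores)
    return {
        "n": len(scores),
        "percent_passing": pct,
        "benchmark": benchmark_pct,
        "benchmark_met": pct >= benchmark_pct,
    }

# Rubric scores (1-4) collected across several sections over two semesters
scores = [4, 3, 2, 4, 3, 3, 1, 4, 3, 2]
result = summarize(scores, passing_score=3, benchmark_pct=80)
print(result)
```

Recording the summary in this structured form makes it easy to archive results from one assessment cycle and compare them with the next.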

Collect the Data

With thorough planning, the data collection process is fairly straightforward. There are a few points of note, however:

The same artifact and scoring method must be used throughout the study. In other words, exams cannot be given to some students and not others. If the assessment artifact is an embedded class assignment, all course sections, regardless of instructor or delivery mode, must include the identical assignment. If multiple assessment artifacts are used, then all must similarly be used consistently.

Given that only 1 outcome is assessed in a particular study, and course-embedded exams usually cover multiple course outcomes, the exam questions that pertain to the particular outcome need to be identified and the results somehow teased out and tabulated separately. Unless the entire exam only pertains to the single outcome, the general exam results cannot be used.

Subjective evaluations must be conducted with a team of assessors to reduce bias. All assessors must evaluate all artifacts. In other words, the team cannot divide the work up to get through the process faster.
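One common way to combine a team's subjective evaluations is to have every assessor score every artifact and then average the per-artifact scores, so no single assessor's bias dominates. The sketch below assumes this averaging approach; all names and scores are hypothetical.

```python
# Hypothetical sketch: combining rubric scores from a team of assessors.
# Every assessor scores every artifact; per-artifact scores are averaged
# to reduce individual bias. All data shown are illustrative.

def average_scores(scores_by_assessor):
    """Average each artifact's scores across all assessors.

    scores_by_assessor maps assessor name -> list of scores, where
    index i in every list refers to the same student artifact.
    """
    assessors = list(scores_by_assessor.values())
    n_artifacts = len(assessors[0])
    assert all(len(s) == n_artifacts for s in assessors), \
        "every assessor must score every artifact"
    return [
        sum(s[i] for s in assessors) / len(assessors)
        for i in range(n_artifacts)
    ]

scores = {
    "assessor_a": [4, 3, 2, 4],
    "assessor_b": [3, 3, 2, 4],
}
print(average_scores(scores))  # → [3.5, 3.0, 2.0, 4.0]
```

The assertion enforces the requirement above that the team not divide the artifacts up among themselves.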

It might be useful to document other data in association with learning outcome results, such as semester term, delivery mode, instructor, and/or campus site. This will be useful for analyzing the results and planning for improvement.

Departments might consider incorporating retention/attrition data into the results. We should not merely be interested in the students that persisted to the end of the course, but also those who dropped out along the way.

Analyze the Results

After tabulating the results and having already determined a benchmark of success, it will be clear whether students are achieving the outcome above, at, or below the expected level. If the result is at or above the expected level, congratulations are in order! This implies that there is nothing department faculty can do to improve the result. However, it may be worthwhile to discuss whether the criterion was set too low. This may be obvious if the department faculty can identify practices that could improve the result further.

If the result is lower than expected, there should be discussion about why that is the case and what can be done to improve the result. This is where the identification of other data in association with the outcome data is useful. If on-site courses have a better result than online courses, what can be done to improve student learning in online sections? If results are better for 16-week semester courses than for 8-week summer courses, is there a way to improve the outcome for summer courses? Perhaps the solution is that the course should not be offered during the summer because there is not enough time on task. If one instructor produced better results than others, what is that instructor doing that should be replicated throughout the department?
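The subgroup comparisons just described can be made concrete by breaking results down by an associated attribute such as delivery mode, term length, or site. The following sketch groups hypothetical per-student records by delivery mode; the record fields and data are illustrative assumptions.

```python
# Hypothetical sketch: breaking outcome results down by an associated
# attribute (here, delivery mode) to see where improvement efforts
# should focus. Record fields and data are illustrative only.
from collections import defaultdict

def pass_rate_by_group(records, key):
    """Return the percent of students meeting the outcome, per group."""
    totals = defaultdict(lambda: [0, 0])  # group -> [passed, total]
    for rec in records:
        group = rec[key]
        totals[group][1] += 1
        if rec["met_outcome"]:
            totals[group][0] += 1
    return {g: 100.0 * p / t for g, (p, t) in totals.items()}

records = [
    {"mode": "on-site", "met_outcome": True},
    {"mode": "on-site", "met_outcome": True},
    {"mode": "on-site", "met_outcome": False},
    {"mode": "online", "met_outcome": True},
    {"mode": "online", "met_outcome": False},
    {"mode": "online", "met_outcome": False},
]
print(pass_rate_by_group(records, "mode"))
```

The same function can be reused with `key="term"`, `key="site"`, or `key="instructor"` to surface the other comparisons discussed above, keeping in mind that instructor-level results are for identifying best practices, never for evaluation.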

Please note that this data should not be used to penalize faculty or to point out failures. It should only be used to identify best practices and implement what works well more consistently. This is a constructive process and faculty should have that spirit about it. (This is also a good time to point out that while faculty are asked to discuss student learning outcome assessment as a part of the Faculty Evaluation process, this should simply be a discussion of the instructor's involvement in the process. The results of assessment are not included in faculty evaluation.)

Based on a collaborative departmental process, the results should be analyzed and a plan for improvement developed. Be sure to take detailed minutes of all meetings in order to provide evidence of collegial dialogue.

III. Implementation, Planning, Budgeting

If a plan to improve student learning was developed, it should be implemented and reassessed in a new Assessment Study to verify that student learning has, indeed, improved. As has been previously mentioned, assessment is an on-going and cyclical activity. If the results of the previous study were acceptable, the next Assessment Study should focus on a different outcome.

Assessment results and plans for improvement must be integrated into our other institutional plans and processes. Because Cerro Coso Community College exists so that students may learn, there must be a link between the results of Assessment Studies and everything else that we do at Cerro Coso.

Assessment results and plans should be included in the Department Unit Plan and in Program Review. The Unit Plan is included in the Educational Master Plan, which drives the College's Technology Plan, the Staffing Plan, the Facilities Master Plan, and the College's budget. Some improvements to student learning can be made with instructional practices, but sometimes institutional support is needed, and this process accomplishes that.