Glossary

| A |

Accommodations: Modifications in the way assessments are designed or administered to create fair testing conditions for students with learning disabilities. Students are entitled to accommodations after documenting their disabilities through DSP&S.

Active Learning: Active learning is an approach in which students participate in learning beyond passively absorbing knowledge, as in a didactic session. Actively learning students solve problems, apply knowledge, work with other students, and engage the material to construct their own understanding and use of the information. In active learning methods, deeper thinking and analysis are the responsibility of the student, and the faculty member acts as a coach or facilitator to achieve specified outcomes. Examples of active learning include inquiry-based learning, case-study methods, project development, modeling, collaborative learning, problem-based learning, brainstorming, and simulations.

Affective Domain: Skills in the affective domain describe the way people react emotionally and their ability to feel another living thing's pain or joy. Affective objectives typically target awareness of and growth in attitudes, emotions, and feelings.

Analytic Scoring: Evaluating student work across multiple dimensions of performance rather than from an overall impression (holistic scoring). In analytic scoring, individual scores for each dimension are assigned and reported. For example, analytic scoring of a history essay might include scores for the following dimensions: use of prior knowledge, application of principles, use of original source material to support a point of view, and composition. An overall impression of quality may also be included in analytic scoring.
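For illustration only (this sketch is not part of the glossary source, and the dimension names and 1-5 scale are assumptions), analytic scoring of the history essay described above might be tallied like this:

```python
# Hypothetical analytic scores for one history essay (1-5 scale per dimension).
scores = {
    "prior_knowledge": 4,
    "application_of_principles": 3,
    "use_of_sources": 5,
    "composition": 4,
}

# Each dimension is reported individually...
for dimension, score in scores.items():
    print(f"{dimension}: {score}")

# ...and an overall impression may also be included (here, a simple average).
overall = sum(scores.values()) / len(scores)
print(f"overall: {overall:.2f}")
```

The point of the structure is that every dimension keeps its own reported score; the overall figure is optional and supplementary, which is what distinguishes analytic from holistic scoring.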

Anchor: A sample of student work that exemplifies a specific level of performance. Raters use anchors to score student work, usually comparing student performance to the anchor. For example, if student work were being scored on a scale of 1-5, there would typically be anchors (previously scored student work) exemplifying each point on the scale.

Example: The Business Office will produce timely reports and information accurately communicating the fiscal status of the institution for use by administrators, faculty, and staff.

Assessment: Assessment refers to methods used by a faculty member, department, program or institution to generate and collect data for evaluation of processes, courses, and programs with the ultimate purpose of evaluating overall educational quality and improving student learning. Results of assessment may include both quantitative and qualitative data.

Assessment Plan: A planning document outlining the college's assessment procedures for each component of the college.

Assessment of Student Learning Outcomes: The systematic collection, review, and use of information about educational programs undertaken for the purpose of improving student learning and development (Source: Assessment Essentials, Palomba & Banta).

Assessment Analysis Report: For departments in Instruction, Student Services and the Library, a reporting document containing an analysis of departmental SLO assessment activities and recommendations for improvement. These are submitted as part of each area's Instructional Plan or departmental review to the Council for Instructional Planning or the Student Services Manager's Team. After scrutiny by those groups, the reports are forwarded to the College SLO Assessment Review Committee. Administrative Services departments, which indirectly aid instruction, embed their assessment reports in their departmental reviews. These are also forwarded to the SLO Assessment Review Committee after scrutiny by manager teams.

Attitudinal Outcomes: These outcomes relate to the development of certain values or changes in beliefs, often measured through questionnaires.

Authentic Assessment: Authentic assessment evaluates students' ability to use their knowledge and to perform tasks that approximate those found in the workplace or other venues outside the classroom. It is designed to allow students to actively demonstrate what they know rather than recognize or recall answers to questions.

| B |

Baseline: A starting point or point in time from which to compare and measure selected changes in the course, program, or institution. A baseline always invites a comparison.

Basic Skills: Below college-level reading, writing, ESL, mathematics, and student success skills; any skill, ability, or understanding that is necessary for students to succeed in college-level courses.

Benchmark: A detailed description of a specific level of student performance expected of students at particular stages of development levels. Benchmarks are often represented by samples of student work. A set of benchmarks can be used as "checkpoints" to monitor progress toward meeting performance goals within and across levels. Benchmarks can be interim outcome measures.

Bloom's Taxonomy: Bloom's taxonomy of learning objectives is used to define how well a skill or competency is learned or mastered.

| C |

Classroom-based Assessment: Classroom-based assessment is the formative and summative evaluation of student learning within a single course. This assessment involves evaluating the curriculum as designed, taught, and learned. It entails the collection of data aimed at measuring successful learning in the individual course and improving instruction, with the goal of improving learning.

Cognitive Domain: Skills in the cognitive domain revolve around knowledge, comprehension, and "thinking through" a particular topic. Traditional education tends to emphasize the skills in this domain, particularly the lower-order objectives.

Communication: "Meaningful Communication uses subject, audience, and purpose to influence, inform, and/or connect with others. This could include, but is not limited to, organizational structures, appropriate support, and various delivery methods."

Community/Global Awareness and Responsibility: Learners recognize and analyze the interconnectedness of global, national, and local concerns, analyzing cultural, political, social and environmental issues from multiple perspectives; they recognize the interdependence of the global environment and humanity.

Core Competencies: These college-wide skills describe what students are able to do at the end of the General Education curriculum or when receiving an AA or AS degree. Chaffey's core competencies are: 1) Communication, 2) Critical Thinking and Information Competency, 3) Global Awareness, and 4) Personal Responsibility and Professional Development.

Course Objectives: Statements that tell students what supporting skills, knowledge, and attitudes they will learn during a course that lead to mastery of the course SLOs. They are usually discrete skills that require lower-level thinking skills and form the building blocks of course SLOs.

Criteria: Guidelines, rules, characteristics, or dimensions that are used to judge the quality of student performance. Criteria indicate what we value in student responses, products or performances. They may be holistic, analytic, general, or specific.

Criterion-based Assessments: Instructors evaluate or score such assessment using a set of criteria to appraise work. Criterion-referenced evaluation is based on proficiency, not subjective measures such as improvement.

Critical Thinking: Learners intentionally apply rational, higher-order thinking skills, such as analysis, synthesis, problem recognition, and problem solving.

| D |

Direct Measures: Methods of collecting information about student learning that require students to display their knowledge, skills, and/or abilities. Direct measures often require a systematic scoring system that employs a rubric.

| E |

Ends Policies: Institutional goals that support the mission of the college. These Ends Policies have been approved by the Chaffey College Governing Board and cover the broad areas of the learning environment, accountability and institutional effectiveness, infrastructure, human resource planning, fiscal management, and the Chaffey College Core Competencies.

Equity: The extent to which an institution or program achieves a comparable level of outcomes, direct and indirect, for various groups of enrolled students; the concern for fairness, i.e., that assessments are free from bias or favoritism. An assessment that is fair enables all students to show what they know or can do.

Evidence of Performance: Quantitative or qualitative, direct or indirect data that provide information concerning the extent to which courses, programs, student services, and the institution meet their established and publicized goals.

| F |

Formative Assessment: Formative assessment generates useful feedback for development and improvement. The purpose is to provide an opportunity to perform and receive guidance (such as in-class assignments, quizzes, discussion, lab activities, etc.) that will improve or shape a final performance.

| I |

Indicator: Specific items of information used to track or monitor success in accomplishing an outcome. Indicators are numerical measures characterizing the results or impact of a program activity, service, or intervention and are used to measure performance.

Indirect Assessment: Methods of collecting information about student learning that ask students (or others) to reflect on their learning rather than demonstrate it. Indirect measures often involve collecting opinions and perceptions from surveys and/or focus groups, as well as gathering pertinent statistics from department or college records.

Information Competency: Learners demonstrate the ability to state a research question, problem, or issue; determine the information requirements in various disciplines for the research question, problem, or issue; use information technology tools to locate and retrieve relevant information; communicate using a variety of information technologies; understand the ethical and legal issues surrounding information and information technologies; and apply the skills gained in information competency to enable lifelong learning.

| L |

Local Assessment: This type of assessment is developed and validated for a specific purpose, course, or function and is usually criterion-referenced to promote validity, e.g. a department placement or exit exam.

| M |

Means of Assessment: Means of Assessment refers to methods used by a faculty member, department, program, or institution to generate and collect data for evaluation of processes, courses, and programs with the ultimate purpose of evaluating overall educational quality and improving student learning. Results of assessment may include both quantitative and qualitative data. Components of Means of Assessment include: the type of assessment tool, the population being assessed, where and when the assessment will take place, and the criteria for assessment.

Example: 70% of selected students will be able to demonstrate knowledge of essay structure on the English 1A common final.

Metacognition: Metacognition is the act of thinking about one's own thinking and regulating one's own learning. It involves critical analysis of how decisions are made. Vital material is consciously learned and acted upon.

Mission: The purpose of an organization or institution, why it was created, who its clients are and what it intends to do and accomplish.

| N |

Norming: The process of educating raters to evaluate student work and produce dependable scores. Typically, this process uses anchors to acquaint raters with criteria and scoring rubrics. Open discussions between raters and the trainer help to clarify scoring criteria and performance standards, and provide opportunities for raters to practice applying the rubric to student work. Rater training often includes an assessment of rater reliability that raters must pass in order to score actual student work.

Norm-referenced Assessment: An assessment where student performance or performances are compared to a larger group. Usually the larger group or "norm group" is a national sample representing a wide and diverse cross-section of students. Students, schools, districts, and even states are compared or rank-ordered in relation to the norm group. The purpose of a norm-referenced assessment is usually to sort students and not to measure achievement towards some criterion of performance.

| O |

Objective: General intent or purpose of the program broken down into measurable components.

Outcome: Knowledge, skills, and abilities that a student has attained at the end (or as a result) of his or her engagement in a particular set of collegiate experiences (WASC).

Outcomes or Results: Positive benefits and behaviors accruing to individuals, families, and communities as the result of a program or service intervention; a result achieved and a change in the condition, functioning, or problems of an individual, group, or community. Outcomes are always measurable and can be positive, neutral, or negative.

Outcome Statement: "Statements describing what students should know, understand, and be able to do with their knowledge when they complete courses, degrees, certificates, or enter employment, or transfer to other higher education institutions or programs" (Huba and Freed, 9). An outcome statement must contain three components: Audience, Behavior, and Condition.

Example: By successfully completing CHEM 111 (grade of 'C' or higher), students will demonstrate the ability to analyze sample laboratory data.

| P |

Performance-based Assessment (also known as Authentic Assessment): Items or tasks that require students to apply knowledge in real-world situations.

Personal, Academic and Career Development: Learners demonstrate an understanding of the consequences, both positive and negative, of their own actions; set personal, academic and career goals; and seek and utilize the appropriate resources to reach such goals.

Placement Testing: The process of assessing the basic skills proficiencies or competencies of entering college students.

Portfolio: A representative collection of a student's work, including some evidence that the student has evaluated the quality of his or her work.

Primary Trait Analysis (PTA): PTA is the process of identifying the major traits or characteristics that are expected in student work. After the primary traits are identified, specific criteria with performance standards are defined for each trait.

Program: A program is defined as a set of courses that lead to degrees or certificates OR services that support student learning.

Program Goals: Broad statements that describe the overarching, long-range intended outcomes of a program. They are primarily used for general planning and serve as the starting point for the development and refinement of outcome statements. Program goals are usually not measurable and need to be further developed as separate, distinguishable outcomes.

Example: Help students clarify and implement individual educational plans which are consistent with their skills, interests, and values.

Program and Services Review (PSR): A process of systematic evaluation of multiple variables of effectiveness and assessment of student learning outcomes of an instructional or student services program.

Prompt: A short statement or question that provides students a purpose for writing; also used in areas other than writing.

Proxy measures: Surrogate or stand-in measures for the actual outcome. Proxies are assumed to be highly correlated with the outcome of interest, so that a trend in a proxy is likely to correspond to the trend that would be observed if the actual outcome could be measured.

Psychomotor Domain: Skills in the psychomotor domain describe the ability to physically manipulate a tool or instrument like a hand or a hammer. Psychomotor objectives usually focus on change and/or development in behavior and/or skills.

| Q |

Qualitative Data: Qualitative data are data collected as descriptive information, such as a narrative or portfolio. These types of data, often collected in open-ended questions, feedback surveys, or summary reports, are more difficult to compare, reproduce, and generalize. They are bulky to store and to report; however, they can offer insightful information, often providing potential solutions or modifications in the form of feedback. Qualitative data, such as opinions, can be displayed as numerical data by using Likert-scaled responses that assign a numerical value to each response (e.g., 5 = strongly agree to 1 = strongly disagree).
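The Likert conversion described above can be sketched as follows; this is illustrative only, and the response labels and helper function are assumptions, not part of the glossary source:

```python
# Map Likert labels to the scale described above
# (5 = strongly agree ... 1 = strongly disagree).
LIKERT = {
    "strongly agree": 5,
    "agree": 4,
    "neutral": 3,
    "disagree": 2,
    "strongly disagree": 1,
}

def mean_likert(responses):
    """Convert qualitative responses to their numeric values and return the mean."""
    values = [LIKERT[r.lower()] for r in responses]
    return sum(values) / len(values)

# Example: three survey responses to one feedback item.
print(mean_likert(["Strongly agree", "Agree", "Neutral"]))  # 4.0
```

Converting labels to numbers this way lets otherwise qualitative survey responses be stored, compared, and summarized like quantitative data, at the cost of flattening the nuance of the original open-ended feedback.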

Quantitative Data: Quantitative data objectively measure a quantity (i.e., a number), such as students' scores or completion rates. These data are easy to store and manage; they can be generalized and reproduced, but they have limited value due to the rigidity of the responses and must be carefully constructed to be valid.

| R |

Reliability: Reliability refers to the reproducibility of results over time or a measure of the consistency when an assessment tool is used multiple times. In other words, if the same person took a test five times, the data should be consistent. This refers not only to reproducible results from the same participant but also to repeated scoring by the same or multiple evaluators.

Rubric: A rubric is a set of criteria used to determine scoring for an assignment, performance, or product. Rubrics may be holistic, providing general guidance, or analytical, assigning specific scoring point values. Descriptors provide standards for judging the work and assigning it to a particular place on the continuum.

| S |

Service or Activity: A key service area that typically consumes a significant portion of the budget or is critical to the success of the institution's mission.

Standardized Assessments: Assessments developed through a consistent set of procedures for designing, administering, and scoring. The purpose of standardization is to assure that all students are assessed under the same conditions so that their scores have the same meaning and are not influenced by differing conditions.

Student Learning Outcomes (SLO): An SLO is a clear statement of what a student should learn and be able to demonstrate upon completing a course or program. It describes the assessable and measurable knowledge, skills, abilities or attitudes that students should attain by the end of a learning process.

Example: By successfully completing CHEM 111 (grade of 'C' or higher), students will demonstrate the ability to analyze sample laboratory data.

Summative Assessment: A summative assessment is a final determination of knowledge, skills, and abilities. This could be exemplified by exit or licensing exams, senior recitals, or any final evaluation that is not created to provide feedback for improvement but is used only for final judgments. A midterm exam may fit in this category if it is the last time the student has an opportunity to be evaluated on specific material. See Formative assessment.

| V |

Validity: The extent to which an assessment measures what it is supposed to measure. A valid standards-based assessment is aligned with the standards intended to be measured, provides an accurate and reliable estimate of students' performance relative to the standard, and is fair.

Vision: A description of what the organization or institution could look like or achieve at some time in the future.

| W |

Wurtz Wheel: A tool developed by the Chaffey College Institutional Research Department to help understand and track student learning outcomes and to visualize the SLO Assessment Cycle. Adapted from the Nichols Five Column model, the Wurtz Wheel provides a visual representation of each of the five steps in the SLO Assessment Cycle and can be used to assess SLOs or Administrative Unit Outcomes (AUOs) at the course, program, or institutional level.