Frequently Asked Questions: Assessment Policy

Yes. A crediting plan, which is the same as a rating schedule (i.e., an occupational questionnaire), can be considered an approved alternative assessment to the ACWA, provided the crediting plan is based on a thorough job analysis and is related to the competencies or knowledge, skills, and abilities (KSAs) required for successful performance in the job being filled.
If an agency uses a crediting plan/rating schedule, the development and implementation of the assessment(s) must be consistent with the technical standards in the Uniform Guidelines on Employee Selection Procedures (http://uniformguidelines.com/).
If you have any further questions regarding ACWA or use of validated selection procedures, please contact Assessment_Information@opm.gov.

The ACWA can be administered both online and as a paper-and-pencil (written) test, with the online option being used much more frequently.
If you are applying for an ACWA position through a job opportunity announcement on OPM's USAJOBS (http://www.usajobs.gov), you will see a reference to an "On-line assessment questionnaire" under the How to Apply tab or section. That online assessment is the ACWA questionnaire.
If you would like to find out more about ACWA, please contact Assessment_Information@opm.gov.

5 CFR 300.103 states that selection procedures must have a rational relationship to job performance, which includes showing that the procedures were "professionally developed."
The phrase "professionally developed" comes from Title VII (Sect. 703h) (http://www.eeoc.gov/laws/statutes/titlevii.cfm). The Uniform Guidelines on Employee Selection Procedures (http://uniformguidelines.com/) interpret "professionally developed" to mean a selection procedure that is validated according to the technical standards in the Uniform Guidelines. The Uniform Guidelines also state that a validity study can be "performed by any person competent to apply the principles of validity research."
If you have any further questions regarding ACWA or validating selection procedures, please contact Assessment_Information@opm.gov.

General policy guidance on assessment tools is provided in Chapter 2 of the Delegated Examining Operations Handbook (DEOH), http://www.opm.gov/policy-data-oversight/hiring-authorities/competitive-hiring/deo_handbook.pdf. Writing evaluations belong to a class of assessments referred to as "work sample tests." The guidance in the DEOH is not specific to writing assessments, but the same principles apply. As with any other procedure used to make an employment decision, a writing assessment should be:

Supported by a job analysis,

Linked to one or more critical job competencies,

Included in the vacancy announcement, and

Based on standardized reviewing and scoring procedures.

Other considerations may be important, such as the proposed method of use (e.g., as a selective placement factor, quality ranking factor) and specific measurement technique.
Writing performance has been evaluated using a wide range of techniques such as portfolio assessment, timed essay assignments, multiple-choice tests of language proficiency, self-reports of writing accomplishments (e.g., winning an essay contest, getting published), and grades in English writing courses. Each technique has its advantages and disadvantages.
For example, with the portfolio technique, applicants are asked to provide writing samples from school or work. The advantage of this technique is that it has high face validity (that is, applicants perceive that the measure is valid based on simple visual inspection). Disadvantages include difficulty verifying authorship, lack of opportunity (e.g., prior jobs may not have required report writing, the writing samples are proprietary or sensitive), and positive bias (e.g., only the very best writing pieces are submitted and others are selectively excluded).
Timed essay tests are also widely used to assess writing ability. The advantage of timed essay tests is that all applicants are assessed under standardized conditions (e.g., same topic, same time constraints). The disadvantage is that writing skill is based on a single work sample. Many experts believe truly realistic evaluations of writing skill require several samples of writing without severe time constraints and the use of multiple judges to enhance scoring reliability.
Multiple-choice tests of language proficiency have also been successfully employed to predict writing performance (perhaps because they assess the knowledge of grammar and language mechanics thought to underlie writing performance). Multiple-choice tests are relatively cheap to administer and score, but unlike the portfolio or essay techniques, they lack a certain amount of face validity. Research shows that the very best predictions of writing performance are obtained when essay and multiple-choice tests are used in combination.
There is also an emerging field based on the use of automated essay scoring (AES) in assessing writing ability. Several software companies have developed different computer programs to rate essays by considering both the mechanics and content of the writing.
The typical AES program needs to be "trained" on what features of the text to extract. This is done by having expert human raters score 200 or more essays written on the same prompt (or question) and entering the results into the program. The program then looks for these relevant text features in new essays on the same prompt and predicts the scores that expert human raters would generate. AES offers several advantages over human raters such as immediate online scoring, greater objectivity, and capacity to handle high-volume testing. The major limitation of current AES systems is that they can only be applied to pre-determined and pre-tested writing prompts, which can be expensive and resource-intensive to develop.
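The train-then-predict cycle described above can be sketched as a small supervised-learning loop. This is a minimal illustration, not the design of any actual AES product: the surface features, the linear least-squares model, and the tiny data set are simplifying assumptions (real systems extract far richer features and, as noted above, are trained on 200 or more human-rated essays per prompt).

```python
import re

def features(essay):
    """Surface features an AES program might extract (illustrative choices,
    not those of any actual commercial system)."""
    words = re.findall(r"[A-Za-z']+", essay)
    sents = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    n = len(words)
    return [
        1.0,                                          # intercept term
        float(n),                                     # essay length
        n / max(len(sents), 1),                       # average sentence length
        len({w.lower() for w in words}) / max(n, 1),  # vocabulary diversity
    ]

def solve(A, b):
    """Solve Ax = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[c][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * x for a, x in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def train(essays, human_scores):
    """'Train' the scorer: fit least-squares weights mapping text features
    to the scores expert human raters assigned on the same prompt
    (normal equations: (X^T X) w = X^T y)."""
    X = [features(e) for e in essays]
    k = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    Xty = [sum(row[i] * s for row, s in zip(X, human_scores)) for i in range(k)]
    return solve(XtX, Xty)

def predict(weights, essay):
    """Predict the score expert raters would give a new essay on the same prompt."""
    return sum(w * f for w, f in zip(weights, features(essay)))
```

The key point the sketch captures is that the program never "understands" the essay; it predicts the score human experts would have given, based on features correlated with their past ratings on that specific prompt.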
However, please keep in mind that scoring writing samples can be very time-consuming regardless of method (e.g., whether the samples are obtained through a portfolio or a timed essay). A scoring rubric (that is, a set of standards or rules for scoring) is needed to guide judges in applying the criteria used to evaluate the writing samples. Scoring criteria typically cover different aspects of writing such as content, organization, grammar, sentence structure, and fluency. We recommend that only individuals with the appropriate background and expertise be involved in the review, analysis, evaluation, and scoring of the writing samples.
For more information regarding the development of written assessments, please contact Assessment_Information@opm.gov.

Yes, the ACWA occupational questionnaires were designed for entry-level applicants who are expected to have only limited work experience. ACWA assessments are intended to distinguish among applicants on the basis of their self-reported education and life experiences rather than occupation-specific work experience.
The accomplishment-related questions give credit for experience gained not only through paid work but also through other activities (e.g., school, volunteer, or charitable work). Applicants who have worked on a temporary or part-time basis in the occupations they are applying for do receive credit, but only to the extent that their work experience provides evidence of entry-level competencies.
More information about ACWA is provided in Chapter 2 of the Delegated Examining Operations Handbook: http://www.opm.gov/policy-data-oversight/hiring-authorities/competitive-hiring/deo_handbook.pdf
If you would like to find out more about ACWA, please contact Assessment_Information@opm.gov.

Recency is typically defined by the number of years that have elapsed since the job-relevant training or experience was acquired. The implication is that those lacking "recent" experience (however that term is defined by the employer) may be given less credit in the assessment process.
However an employer or agency decides to use recency, the Uniform Guidelines on Employee Selection Procedures (http://uniformguidelines.com/) require that a selection procedure's method of application or use be justified.* It is conceivable that recency may be a job-related factor for occupational areas subject to rapid change (e.g., medicine, information technology, and engineering) or for skills or knowledge that decay from lack of use.
A recency requirement is no different from any other selection procedure used as a basis for disqualification. In each case, the employer or agency must provide evidence of a demonstrable relationship between the employee selection procedure and job performance. In terms of a recency factor, justification may involve providing evidence of major changes in the nature of the work (e.g., in terms of basic theory, practice, or subject matter) or degradation in critical job competencies over time.

*See Uniform Guidelines at 15c(7), http://uniformguidelines.com/:
"(7) Uses and applications. The methods considered for use of the selection procedure (e.g., as a screening device with a cutoff score, for grouping or ranking, or combined with other procedures in a battery) and available evidence of their impact should be described (essential). This description should include the rationale for choosing the method for operational use, and the evidence of the validity and utility of the procedure as it is to be used (essential)."

In short, OPM does not offer specific guidance to agencies on how accommodation requests are processed. Please check your agency's policies on processing accommodation requests because policies may vary by agency.
For general information regarding the processing of accommodation requests, you may want to consult the Association on Higher Education And Disability (AHEAD) "Guidelines for Documentation of a Learning Disability in Adolescents and Adults" (available on several websites, for example: http://www.ldonline.org/article/6127/).
The guidelines are used by numerous educational institutions, testing organizations, and employers to evaluate the adequacy of documentation used to support the existence of learning disabilities. Documentation of a "specific learning disability" (Equal Employment Opportunity Commission's (EEOC) term) often requires gathering information from aptitude (intelligence quotient, or IQ), achievement, and information processing tests. The results should be summarized in a report by a qualified evaluator (e.g., a mental health professional with experience and training in the assessment of learning problems).
If the applicant does indeed have a specific learning impairment, the battery of test results should show average to above-average mental ability (IQ of 100 or higher) in combination with pronounced difficulties in one or more specific areas of achievement. IQ/achievement score differences that reach statistical significance are used to support a clinical diagnosis of a learning disorder. However, a clinical diagnosis only rises to the level of a "disability" under the Americans with Disabilities Act (ADA) if the individual is unable to perform, or is substantially limited in the ability to perform, an activity "compared to an average person in the general population" (see EEOC's "Technical Assistance Manual: Title I of the ADA, Section II," http://askjan.org/links/ADAtam1.html).
In terms of achievement scores, the average adult is assumed to function at the high school level. For instance, an applicant whose intellectual functioning is at the college level but who achieves only at the high school level (because of a learning impairment) would normally not meet the ADA's legal criterion for having a disability. Not all clinically diagnosed impairments meet the "substantially limited" criterion under the ADA.
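The two-step logic in the paragraphs above can be sketched numerically. This is an illustrative sketch only: a mean of 100 and standard deviation of 15 are the conventional scaling for IQ and achievement standard scores, but the 1.5-SD discrepancy threshold and the one-SD "substantially limited" cutoff below are assumptions chosen for illustration, not OPM, EEOC, or clinical criteria.

```python
MEAN, SD = 100.0, 15.0   # conventional scaling for IQ/achievement standard scores
DISCREPANCY_SDS = 1.5    # assumed clinical discrepancy threshold; practice varies

def clinical_discrepancy(iq, achievement):
    """Is the aptitude/achievement gap large enough to support a clinical
    diagnosis under a simple discrepancy model? (illustrative threshold)"""
    return iq >= MEAN and (iq - achievement) >= DISCREPANCY_SDS * SD

def substantially_limited(achievement):
    """Proxy for the ADA test: is achievement well below the average person
    in the general population? (one SD below the mean is an assumed cutoff)"""
    return achievement < MEAN - SD

# The example from the text: college-level aptitude (e.g., a score of 125)
# with average, high-school-level achievement (e.g., 100) shows a clinical
# discrepancy, yet the person is not "substantially limited" relative to
# the general population, so the ADA threshold is not met.
```

The sketch makes the distinction concrete: a clinical diagnosis compares the applicant against his or her own aptitude, while the ADA compares the applicant against the average person in the general population, and only the second comparison determines disability status.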
For more information regarding assessment accommodations, please contact Assessment_Information@opm.gov.
