Frequently Asked Questions: Assessment Policy

Assessment Methods

General policy guidance on assessment tools is provided in Chapter 2 of the Delegated Examining Operations Handbook (DEOH), http://www.opm.gov/policy-data-oversight/hiring-authorities/competitive-hiring/deo_handbook.pdf. Writing evaluations belong to a class of assessments referred to as "work sample tests." The guidance in the DEOH is not specific to writing assessments, but the same principles apply. As with any other procedure used to make an employment decision, a writing assessment should be:

Supported by a job analysis,

Linked to one or more critical job competencies,

Included in the vacancy announcement, and

Based on standardized reviewing and scoring procedures.

Other considerations may be important, such as the proposed method of use (e.g., as a selective placement factor or quality ranking factor) and the specific measurement technique.
Writing performance has been evaluated using a wide range of techniques such as portfolio assessment, timed essay assignments, multiple-choice tests of language proficiency, self-reports of writing accomplishments (e.g., winning an essay contest, getting published), and grades in English writing courses. Each technique has its advantages and disadvantages.
For example, with the portfolio technique, applicants are asked to provide writing samples from school or work. The advantage of this technique is that it has high face validity (that is, applicants perceive that the measure is valid based on simple visual inspection). Disadvantages include difficulty verifying authorship, lack of opportunity (e.g., prior jobs may not have required report writing, the writing samples are proprietary or sensitive), and positive bias (e.g., only the very best writing pieces are submitted and others are selectively excluded).
Timed essay tests are also widely used to assess writing ability. The advantage of timed essay tests is that all applicants are assessed under standardized conditions (e.g., same topic, same time constraints). The disadvantage is that writing skill is based on a single work sample. Many experts believe truly realistic evaluations of writing skill require several samples of writing without severe time constraints and the use of multiple judges to enhance scoring reliability.
Multiple-choice tests of language proficiency have also been successfully employed to predict writing performance (perhaps because they assess the knowledge of grammar and language mechanics thought to underlie writing performance). Multiple-choice tests are relatively cheap to administer and score, but unlike the portfolio or essay techniques, they lack a certain amount of face validity. Research shows that the very best predictions of writing performance are obtained when essay and multiple-choice tests are used in combination.
There is also an emerging field based on the use of automated essay scoring (AES) in assessing writing ability. Several software companies have developed different computer programs to rate essays by considering both the mechanics and content of the writing.
The typical AES program needs to be "trained" on what features of the text to extract. This is done by having expert human raters score 200 or more essays written on the same prompt (or question) and entering the results into the program. The program then looks for these relevant text features in new essays on the same prompt and predicts the scores that expert human raters would generate. AES offers several advantages over human raters such as immediate online scoring, greater objectivity, and capacity to handle high-volume testing. The major limitation of current AES systems is that they can only be applied to pre-determined and pre-tested writing prompts, which can be expensive and resource-intensive to develop.
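The training step described above can be sketched with a toy model. This is only an illustration: real AES systems extract many linguistic features covering mechanics and content, whereas this hypothetical example fits a single surface feature (word count) to human rater scores by ordinary least squares.

```python
def word_count(essay: str) -> int:
    """One crude surface feature; real AES programs extract many more."""
    return len(essay.split())

def train(essays, human_scores):
    """Fit score = slope * word_count + intercept by ordinary least squares,
    using essays already scored by expert human raters on the same prompt."""
    xs = [word_count(e) for e in essays]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(human_scores) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, human_scores))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def predict(model, essay):
    """Predict the score expert human raters would assign to a new essay."""
    slope, intercept = model
    return slope * word_count(essay) + intercept
```

In practice the training set would contain 200 or more human-scored essays on the same prompt, and the model would weight many features of mechanics and content, not just length.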
However, please keep in mind that scoring writing samples can be very time-consuming regardless of method (e.g., whether the samples are obtained using the portfolio or by a timed essay). A scoring rubric (that is, a set of standards or rules for scoring) is needed to guide judges in applying the criteria used to evaluate the writing samples. Scoring criteria typically cover different aspects of writing such as content organization, grammar, sentence structure, and fluency. We would recommend that only individuals with the appropriate background and expertise be involved in the review, analysis, evaluation, and scoring of the writing samples.
For more information regarding the development of written assessments, please contact Assessment_Information@opm.gov.

Yes, a hiring manager may ask candidates to perform a writing exercise (i.e., respond to a pre-determined question) once the candidates have been found to be the best qualified and are at the end of the hiring process (i.e., at the final interview).
However, such a writing sample is not scored; it is used only to inform the hiring manager of the candidate's writing ability. Even so, just as with a "real" assessment or test, several aspects of the process still need to be followed:

Clearly notify applicants in the job opportunity announcement that they may be required to participate in a writing exercise and at what point in the assessment process this will occur.

Develop the question(s), or "prompt(s)," the candidates will respond to (or address).

Ensure each candidate receives the same question and testing considerations.

You will also need to consider:

Testing environment

Type of room – Will it be the hiring manager's office, conference room, or another room?

Use of computers – Will a laptop be needed? Will you want the candidates to be able to access the Internet? Can they use spell-check?

Geographically remote applicants – For any candidates who are not local, consider using technology as appropriate.

Maximum amount of time for writing

Maximum and minimum lengths of writing sample

How you (or the hiring manager) collect the writing sample is at your discretion. For example, you could ask the candidate to respond to a question or have them correct a grammatically incorrect paper. Just keep in mind which writing skill you want to measure so it can guide the development of the assessment.
As with all testing practices, it's paramount to standardize the process, meaning that all candidates who are asked to produce a writing sample are treated the same, given the same question, and so forth.
For more information regarding the development of writing samples, please contact Assessment_Information@opm.gov.

OPM's Assessment Decision Guide (http://www.opm.gov/policy-data-oversight/assessment-and-selection/reference-materials/assessmentdecisionguide.pdf) can help you:

Learn about the benefits and limitations of various assessment methods and strategies,

Evaluate and implement assessment tools that help improve the match between jobs and employees, and

Become familiar with the professional and legal guidelines to follow when administering an assessment program.

The guide also contains an extensive list of resource materials if you need more information on a particular topic and a glossary for quick clarification of assessment terms and concepts.
The Assessment Decision Tool (http://apps.opm.gov/adt/ADTClientMain.aspx?JScript=1) is designed to assist HR professionals in developing assessment strategies based on specific competencies and other factors relevant to their hiring situations (e.g., applicant volume, level of available resources). The issues to consider when selecting or developing an assessment strategy or specific assessment tool are complex, and the level of expertise needed to develop most assessments can be quite substantial.
If an agency is interested in purchasing an assessment, Appendix B of the Delegated Examining Operations Handbook (http://www.opm.gov/policy-data-oversight/hiring-authorities/competitive-hiring/deo_handbook.pdf) lists criteria you may want to consider when choosing an assessment vendor. Under delegated examining, the decision to administer assessments for particular occupations and the responsibility to defend the use of those assessments rest with the agencies. Also, many vendors, including OPM, offer professionally developed assessments: http://www.opm.gov/services-for-agencies/assessment-evaluation/
For more information regarding assessment development, please contact Assessment_Information@opm.gov.

Yes, under delegated competitive examining, an agency may establish its own retesting policy and procedures. The Uniform Guidelines on Employee Selection Procedures (Section 12: Retesting of Applicants, http://uniformguidelines.com/uniformguidelines.html#50) require employers to provide applicants with "a reasonable opportunity for retesting and reconsideration." It is good practice to provide a reasonable opportunity for retesting and to communicate that policy consistently to all applicants. Unless the examination announcement states otherwise, the default policy is that applicants may reapply and be reassessed at any time as long as the examination is still open.
The technical and/or administrative basis for a retesting policy should be clearly explained and documented (e.g., availability of alternate forms of an assessment, impact on the validity or integrity of the assessment process). Additional retesting information appears in other authoritative sources such as:

Professional standards (e.g., see the Society for Industrial and Organizational Psychology (SIOP), "Principles for the Validation and Use of Personnel Selection Procedures," under Reassessing Candidates, http://www.siop.org/_Principles/principles.pdf).

A retesting policy typically addresses the following considerations:

Eligibility: The conditions that prompt or allow retesting, and how retesting will take place, need to be explained to applicants. Conditions typically include the elapse of a specific time interval or the completion of a required developmental activity or corrective action (e.g., correcting errors or disruptions in assessment administration, delivery, or scoring).

Opportunity: Some employers will only open their examinations when a job vacancy needs to be filled, while others offer continuous testing with an open continuous announcement.

Change in the Assessment: If an assessment changes, applicants are required to retest. Examples may include significant modifications to the scoring process, qualifying level, or assessment content.

Type of Assessment: Retesting considerations must be appropriate to the nature of the assessment.

For cognitive measures (e.g., mental ability, job knowledge), repeated exposure to the examination material may threaten test security (e.g., test items could be copied or memorized) and can undermine the validity and usefulness of the assessment. For example, applicants can obtain high scores by "learning the test" rather than mastering the work content domain from which the items are sampled.

For non-cognitive measures (e.g., personality, biodata, training and experience questionnaires), there is minimal harm in repeated exposure to assessment items because responses are usually based on the applicant's past experiences, opinions, or preferences rather than on objectively verifiable facts.

Waiting Period: The retest waiting period is the time interval that an applicant must wait to retake an assessment, starting at the assessment administration date. Typical intervals are three months to three years for cognitive assessments and zero days to three years for non-cognitive assessments. Some employers allow a limited number of retests to be given within a specified time period (e.g., up to three retests will be given at any time within a 12-month period).

Score Replacement: When applicants are permitted to retest, consideration must be given to which score becomes the score of record: the highest, lowest, or most recent? Most employers and college admissions screeners use the scores earned on the most recent test administration when making eligibility and selection decisions.

Availability of Multiple Forms: Multiple, parallel (equivalent or possibly adaptive) versions of an assessment may need to be developed for assessments where repeated exposure to the examination content undermines validity (e.g., most cognitive measures). Where feasible, applicants should not be given the same form of the assessment twice in a row.
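The waiting-period and score-of-record considerations above can be sketched as a few lines of code. This is a hypothetical illustration: the 90-day interval and the "most recent score" rule are example policy choices, not OPM requirements.

```python
from datetime import date, timedelta

RETEST_WAITING_PERIOD = timedelta(days=90)  # hypothetical policy choice

def may_retest(last_administration: date, today: date) -> bool:
    """An applicant may retest once the waiting period has elapsed,
    counted from the most recent assessment administration date."""
    return today >= last_administration + RETEST_WAITING_PERIOD

def score_of_record(attempts):
    """attempts: list of (administration_date, score) tuples.

    This sketch follows the common practice of using the most recent
    score as the score of record; a policy could instead use the highest."""
    return max(attempts, key=lambda a: a[0])[1]

attempts = [(date(2014, 1, 15), 78), (date(2014, 6, 2), 85)]
print(may_retest(date(2014, 6, 2), date(2014, 8, 1)))  # False: only 60 days elapsed
print(score_of_record(attempts))  # 85, the most recent score
```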

Employers and other users of high-stakes assessments are subject to legal and other pressures to provide reassessment and reconsideration opportunities to applicants. The major consideration is the potential for retesting to undermine the integrity and usefulness of the assessment procedure.
For more information regarding retesting policy, please contact Assessment_Information@opm.gov.

In short, OPM does not offer specific guidance to agencies on the use of personality tests to assess candidates. Please check your agency's policies on using personality tests to assess candidates because policies may vary by agency.
In general, personality tests that are designed to measure work-related traits in normal adult populations are permissible. The personality factors assessed most frequently in work situations include Conscientiousness, Extraversion, Agreeableness, and Openness to Experience. As with any assessment tool used to make an employment decision, personality tests must meet the technical standards established in the Uniform Guidelines on Employee Selection Procedures (http://uniformguidelines.com/).
It is important to recognize that some personality tests are designed to diagnose psychiatric conditions (e.g., paranoia, schizophrenia, compulsive disorders) rather than work-related personality traits. The Americans with Disabilities Act (ADA) considers any test designed to reveal such psychiatric disorders as a "medical examination." Examples of such medical tests include the Minnesota Multiphasic Personality Inventory (MMPI) and the Millon Clinical Multi-Axial Inventory (MCMI).
Under the ADA, personality tests meeting the definition of a medical examination may only be administered after an offer of employment has been made. The following memorandum, "OPM Adjudication of Psychiatric/Psychological Objections," contains further information on making the distinction between medical and non-medical psychological and personality tests: http://www.chcoc.gov/Transmittals/TransmittalDetails.aspx?TransmittalID=1742.
For information on the validity and proper use of personality tests, see OPM's Assessment Decision Guide: http://www.opm.gov/policy-data-oversight/assessment-and-selection/reference-materials/assessmentdecisionguide.pdf
For more information regarding the use of personality tests, please contact Assessment_Information@opm.gov.

Alternatives to written narrative responses in the initial application include:

Brief list of verifying sources (e.g., when and where the applicant gained minimum qualifications)

Fill-in-the-blank responses, typically a single word or number

Multiple-choice responses or self-report ratings (e.g., as used on occupational questionnaires)

Another option is to use a multi-step, hurdled approach such that the written narratives are introduced after the initial application is submitted but before the formation of the Certificate of Eligibles.
For more information, please contact Assessment_Information@opm.gov.

If the intended results are not achieved with a particular question, it may be considered for elimination before final scoring of the assessment (i.e., given an effective weight of zero). Any adjustments to the scoring procedure should be based on a sound rationale, implemented uniformly for all applicants, evaluated for potential negative impact (e.g., maintaining coverage of critical competencies), and thoroughly documented.
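The zero-weight adjustment described above can be sketched as follows. The question labels, ratings, and weights are hypothetical; the point is that an eliminated question simply carries an effective weight of zero, applied identically for every applicant.

```python
def weighted_score(ratings, weights):
    """Combine per-question ratings into a final score using effective weights.

    A question that failed to perform as intended can be removed from
    scoring by assigning it an effective weight of zero; the same weights
    must be applied uniformly to all applicants."""
    total_weight = sum(weights.values())
    if total_weight == 0:
        raise ValueError("at least one question must carry a nonzero weight")
    return sum(ratings[q] * w for q, w in weights.items()) / total_weight

# Hypothetical example: question "Q3" is eliminated (weight 0) before final scoring.
weights = {"Q1": 2.0, "Q2": 1.0, "Q3": 0.0}
applicant = {"Q1": 4, "Q2": 5, "Q3": 1}
print(weighted_score(applicant, weights))  # weighted mean over Q1 and Q2 only
```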
It is highly recommended that you administer the interview questions in a trial run (or pilot) before using them in the "real" interview(s). A trial run allows you to determine whether the questions are clearly worded and elicit an acceptable range of responses, and it will often reveal whether any revisions need to be made. To be useful, the pilot test should mimic the actual structured interview process as closely as possible. Refer to page 14 of OPM's Structured Interview Guide (http://www.opm.gov/policy-data-oversight/assessment-and-selection/structured-interviews/guide.pdf) for a discussion of pilot testing interview questions and evaluating the interview process.
For additional information regarding structured interviews, please contact Assessment_Information@opm.gov.

Yes, OPM approval is required when tests are used to determine basic eligibility or as the sole basis for ranking applicants for inservice placement (reference Part E.9[d][3] of the Operating Manual on Qualification Standards for General Schedule Positions, http://www.opm.gov/policy-data-oversight/classification-qualifications/general-schedule-qualification-policies/#url=app). For occupations not requiring an OPM test, agencies may develop and implement their own tests for inservice placement without OPM approval as long as such tests are used as part of a comprehensive set of assessment procedures.
However, for delegated competitive examining, OPM approval is not required as long as the assessment procedure is consistent with the technical standards of the Uniform Guidelines on Employee Selection Procedures (http://uniformguidelines.com/). Specifically, the Uniform Guidelines require that the method of test use (e.g., as a screening device with a passing score, for grouping or ranking, combined with other assessments) be supported by the findings of a job analysis and test validation study. For example, if the test is to be used for ranking, the agency should have evidence showing that higher scores on the test are related to better job performance.
When a test is used as a "screen out," it becomes part of the minimum requirements for the position and is subject to the same job-relatedness requirements as any other selective placement factor (see the guidance in the Delegated Examining Operations Handbook on the use of selective factors in Chapter 5, Section B, http://www.opm.gov/policy-data-oversight/hiring-authorities/competitive-hiring/deo_handbook.pdf).
For more information regarding delegated examining, please contact Assessment_Information@opm.gov.
