CPWA Exam & Pass Rates

Exam Format

Candidates may use one of the following approved calculators: HP 10b, HP 10bII, HP 10bII Plus, HP 12C, HP 12C Platinum, HP 17B, HP 17bII, or HP 17bII Plus, as well as the Texas Instruments BA II Plus, BA II Plus Professional, and BA II Plus Business Analyst (newer and older versions are allowed). The Institute does not endorse or recommend any specific model.

Candidates are required to clear their financial calculator's memory prior to the exam. Notes of any kind, including manually programmed formulas, are not allowed in the testing area. If notes or formulas are printed on the calculator, or the calculator includes any other reference information, they must be removed or covered with solid-color tape. Calculators are subject to inspection by test center staff.

Developing and Scoring the Examination

A reliable and defensible exam begins with a job analysis, a study of the knowledge, skills, activities, and tasks performed by a typical candidate seeking CPWA certification. The process requires a representative sample of volunteer certification holders to write knowledge, skill, and task statements (KSAs). These statements are put before the industry at large in the form of a survey. Practitioners rate the KSA statements based on criteria such as level of importance, frequency performed, etc. The results directly inform which categories are included on the examination and the percentage of questions selected for each category. A new job analysis is conducted approximately every five years to identify major changes in the work activities covered by the certification.
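As an illustrative sketch only (not the Institute's actual procedure), the step of converting practitioner survey ratings into exam category weights might look like the following, where the categories and ratings are hypothetical:

```python
# Hypothetical mean practitioner ratings (importance x frequency, 1-5 scale)
# per KSA category from the job-analysis survey. Values are invented.
ratings = {
    "Human Dynamics": 3.8,
    "Wealth Management Strategies": 4.6,
    "Client Specialization": 4.1,
    "Legacy Planning": 3.5,
}

total = sum(ratings.values())

# Each category's share of the total rating weight becomes its approximate
# percentage of exam questions (rounded, so the sum may be slightly off 100)
weights = {cat: round(100 * r / total) for cat, r in ratings.items()}

for cat, pct in weights.items():
    print(f"{cat}: {pct}% of exam questions")
```

In practice a test program would also apply expert review and rounding rules so the weights sum exactly to 100, but the proportional logic is the same.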

Standard setting is the process by which test programs establish a cut-score, or minimum score required to pass a test. Criterion-referencing compares people to an objective standard of performance or knowledge regardless of test form, time, and location by explicitly linking the passing standard to the purpose of the exam. Criterion-referenced standard setting is not strictly data driven. Rather, it is based on the sound professional judgment of subject matter experts (SMEs).

Before beginning the standard setting activity, SME participants often take the test themselves, so they can read the items (test questions) in a context similar to that of test candidates. Next, SMEs think about a hypothetical person who performs just well enough on the job to be considered successful. Then, SMEs describe the performance level required to just pass the test (i.e., just good enough to be certified or move on to the next level). This is the minimum standard required to be certified, licensed, or considered for selection or promotion. Test candidates who meet that criterion are traditionally referred to as Just Sufficiently Qualified (JSQ) Candidates or Minimally Qualified Candidates.

Once the performance level is defined, SMEs review the test content and make multiple independent rounds of judgments about what test score constitutes the JSQ level. Between rounds, SMEs share their initial judgments with one another, and facilitators provide impact data, such as the percentage of all candidates who answered a selected-response item correctly. The discussion and impact data help ensure that SMEs have a shared understanding of the JSQ level, which enhances their level of agreement. After the discussions are complete, the SMEs independently make a final judgment without further discussion. The analyst later calculates a cut-score from those final judgments and provides the recommendation to the policy-making body, using equating techniques across forms to ensure that candidates are treated equitably regardless of which items appear.
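One common criterion-referenced method for turning final-round SME judgments into a cut-score is the modified Angoff approach (used here purely as an illustration; the actual CPWA procedure may differ). Each SME estimates, for each item, the probability that a JSQ candidate would answer correctly; each SME's expected raw score is the sum of those probabilities, and the recommended cut-score is the average across SMEs. All names and numbers below are invented:

```python
# Final-round Angoff judgments: per-item probabilities that a Just
# Sufficiently Qualified candidate answers correctly (hypothetical data,
# 5-item test, 3 SMEs)
final_round = {
    "SME_1": [0.70, 0.55, 0.80, 0.60, 0.75],
    "SME_2": [0.65, 0.60, 0.85, 0.55, 0.70],
    "SME_3": [0.75, 0.50, 0.80, 0.65, 0.70],
}

n_items = 5

# Each SME's expected JSQ raw score is the sum of their item probabilities
sme_scores = {sme: sum(probs) for sme, probs in final_round.items()}

# The recommended cut-score is the mean of the SMEs' expected raw scores
cut_score = sum(sme_scores.values()) / len(sme_scores)

print(f"Recommended cut-score: {cut_score:.2f} of {n_items} items")
```

On a real exam the analyst would then round this to a whole-number passing score and, as the text notes, equate it across test forms before it reaches the policy-making body.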