Elemental Impurity Analysis

The United States Pharmacopeia (USP) has revised its elemental impurities limits and procedural chapters, with implementation set for May 2014. The author explains the need for these revisions and provides a look at some of USP's proposed techniques for elemental impurity detection and identification.

Revisions to US Pharmacopeia General Chapter <231> Heavy Metals have been mooted and proposed for more than a decade, and it has long been known that the current methods are highly subjective and likely to prove inaccurate, at least for certain metals. The road to reform has been somewhat stuttering, but after a long period of review and commentary, on Dec. 1, 2012, General Chapter <232> Elemental Impurities—Limits and General Chapter <233> Elemental Impurities—Procedures will be published in the second supplement of the US Pharmacopeia 35–National Formulary 30 (USP–NF).

On May 1, 2014, when USP 37–NF 32 becomes official, all references to Chapter <231> will cease to exist, and conformance to Chapter <232> and Chapter <233> within the General Notices will be required. The acceptance of these chapters will open the door for laboratories to use a wider range of methods for analyzing heavy metal contaminants. Of course, these methods will still need to be validated, and there may still be room for debate about which methods are best for any given situation, but at least the dubious methods of Chapter <231> will cease to be available for medicines marketed in the United States. (For the time being, the comparable methods used in the European and Japanese pharmacopoeias will continue to be available.) This paper addresses the need for new compendial requirements, with a focus on elemental impurity detection and identification.

The need for change

Testing for heavy metals is actually one of the most established ideas in the national pharmacopeias around the world. USP has included a general test for heavy metals since 1905, in the eighth volume of the pharmacopeia, which used sulfide precipitation to detect antimony, arsenic, cadmium, copper, iron, lead, and zinc. As it happens, the purpose of the test had more to do with preventing mislabeling than preventing contamination, because heavy metal salts were often used in therapy and one had to know which salts were present in a treatment. The need to detect residual contamination was established in 1942, with the introduction of USP volume XII, in which a lead-containing standard was included in the test. The goal was to detect potentially poisonous heavy metal residues, such as lead and copper, because these metals were widely used in production equipment at the time. Interestingly, metals such as iron, chromium, and nickel were not revealed by the test (1). Ultimately, it is the inapplicability of a "standard" test (such as that defined by Chapter <231>) that has led to its demise and the need for more flexibility.

Industry knowledge of common metal contaminants. Metal impurities are rightly a cause for concern in pharmaceutical products, and there are many means by which a product might become contaminated. Some inorganic impurities are deliberately added during pharmaceutical processing (e.g., catalysts). Others can arise as undetected contaminants in starting materials or reagents, or can come from the process itself (e.g., leaching from pipes and other equipment). Then, of course, there are metal ions that occur naturally within the plant or mineral sources used to produce the active ingredients of pharmaceuticals and herbal medicines.

Regardless of how metals may get into a product, and regardless of any previous certification of the materials involved, pharmaceutical producers must carry out tests to demonstrate the absence of impurities before using materials in a pharmaceutical product.