
Proficiency Testing FAQs | LGC Standards

Q1: Which international standards are relevant to LGC PT schemes?

A1: All our PT schemes are operated in accordance with the international standard ISO/IEC 17043. The statistical analysis undertaken is in accordance with the international standard ISO 13528. LGC is accredited by the United Kingdom Accreditation Service (UKAS) for the provision of proficiency testing schemes against ISO/IEC 17043; a copy of our current scope of accreditation which lists the accredited schemes is available on our website: www.lgcstandards.com.

Q2: How are your schemes organised?

A2: The day-to-day operation of each scheme is the responsibility of LGC Standards. Individual schemes are managed by a team of Scheme Coordinators, to cover reporting, customer service and technical functions. For some schemes, external advisors may also be used to provide the full range of relevant knowledge and expertise needed to operate the scheme effectively. A small number of schemes are run in collaboration with other organisations.

Q3: Do you use Advisors and Advisory Groups?

A3: Yes, depending upon the scheme in question. Advisors are selected on the basis of their technical knowledge and experience of the industry to which the scheme is related. Advisors may be used on an ad-hoc basis and contacted when specific issues need to be addressed. Alternatively, formal Advisory Groups may be used. Advisory Groups consist of members who may or may not be participants on the scheme but who are experienced in the field of testing covered by the PT scheme. The composition and terms of reference of each Advisory Group will be agreed on a scheme-by-scheme basis. Membership of the Advisory Groups is subject to change but members’ names are available on request.

Q4: Do you run schemes that are jointly managed?

A4: Yes, some schemes are operated jointly with a partner organisation. Where schemes are operated jointly, a Management Committee may be set up to address operational issues for the scheme.

Q5: How do I join a scheme?

A5: Participants are advised to take part in the scheme(s) best suited to their own area of testing. Where necessary, appropriate staff at LGC Standards can advise on which scheme is most suitable for participants. For each scheme, a scheme description and application form will be available, containing information about the test materials included in the scheme and the intended distribution dates. Participants are invited to complete the application form, indicating which test materials they wish to receive during the scheme year. If the availability of test materials changes during the scheme year, participants are kept fully informed. Once a completed application form is received, an order confirmation will be sent to the participant, confirming the test materials selected and the distribution date.

Q6: Can you guarantee my laboratory’s confidentiality?

A6: In order to ensure confidentiality, participants in all schemes are allocated a unique laboratory reference number. This number enables results to be reported without divulging the identities of participant laboratories. Only staff within the proficiency testing team and the laboratory itself will know this number.

Q7: How often do I need to participate?

A7: The frequency with which a laboratory needs to participate in proficiency testing depends on a wide range of factors specific to each laboratory, such as the other quality tools used, the volume of work undertaken and the risk associated with the measurements. Every laboratory may therefore have a different need, which is why schemes provided by LGC Standards offer flexible participation, although some do have a minimum participation level. Third parties, such as regulatory bodies, may recommend minimum levels of participation. To gain the benefit of trend analysis, participation in a minimum of four rounds over a scheme year is normally recommended.

Q8: What are the fees for participation?

A8: Fees for participation are reviewed annually and the current fees for each scheme are available on application. Payment terms are detailed on the application form and invoice. Participants are advised that delays with payments may result in test materials and / or reports being withheld until payments are made.

Q9: Where do you source your test materials?

A9: The vast majority of test materials are manufactured by LGC Standards. Where this is not possible, test materials are carefully sourced to meet the needs of participants. Wherever practical, test materials will be as similar as possible to those routinely tested by participating laboratories. However, in some cases, in order to achieve the required degree of homogeneity and stability, test materials may be in the form of simulated matrices or concentrated spiking solutions. The analyte concentration range of test materials will usually be varied from round to round in order to be realistic and challenging. Details of individual test material types are available in the relevant scheme description.

Q10: How are test materials packaged and transported?

A10: Test materials are packaged appropriately to protect the contents during transit. The majority of test materials are sent using priority courier. Overseas customers must provide the relevant documents, such as import permits, to prevent delays in customs, and may be required to pay import duties locally. Once packages have been delivered, LGC Standards cannot be held responsible if they subsequently fail to reach the correct personnel or are not stored under the recommended conditions. Participants are asked to check the contents of packages immediately on receipt and to contact LGC Standards if there are any problems with the condition of the test materials or accompanying documentation.

Q11: How is test material stability affected by time, distance and temperature?

A11: The test materials are all stable at the stated storage temperatures for at least the period of the proficiency test round. Studies have shown no significant difference between results for test materials tested the day after despatch and those tested on the deadline date. There is also no evidence that results are influenced by the different climatic conditions of participating countries, and distance travelled does not affect test material results: we have compared average results by distance travelled for a number of our PT test materials, and no correlations have been found. Stability is an important consideration in the design and feasibility process for a PT scheme, where transport conditions such as temperature, humidity, pressure and exposure to X-rays are taken into account.

Q12: How do I treat my PT test material?

A12: It is important for laboratories to understand how to get the optimum benefit from PT participation. To do this, a laboratory must participate in an open and honest fashion, being prepared to, on occasion, be evaluated as unsatisfactory. If PT is to achieve its aims, laboratories need to treat the PT test materials the same as routine test materials, and staff must be encouraged to treat them appropriately and learn from their results in a constructive manner. Laboratories will learn very little about the quality of their routine work if the PT test materials are given special treatment, such as carrying out a much higher number of analyses, in order to be evaluated as satisfactory. This may in fact compromise the quality of routine measurements as a disproportionate level of effort is being expended for the PT.

Q13: Do I have to use specific methods to analyse the test materials?

A13: Unless otherwise instructed, participants should analyse the test materials using any method that they feel is appropriate. Participants are asked to treat the PT material in the same way as a routine test material. Participants may be asked to state their method when reporting results. It is important that this information is accurate, as results are analysed and reported according to the method stated.

Q14: Are microbiology results obtained from MPN methods comparable to those obtained using plate count methods?

A14: MPN and plate counts are both estimates of the number of microbial cells in the original sample; therefore, provided all dilutions and calculations have been performed correctly, results should be comparable. For QWAS and QMS, comparing MPN results against results obtained from all other methods shows no significant differences.

Q15: Do I have to report my results within a specific timescale?

A15: Deadlines are specified for the return of results, to ensure the timely issue of assigned values and reports to participants. For each scheme a closure date will therefore be specified. For certain tests there may also be a date specified by which examination of the test material is recommended to have been commenced. This is to ensure that sufficient time is available to complete the test and report results in time for the deadline date.

Q16: How should I report my results using PORTAL?

A16: For the majority of schemes, results are returned through our bespoke electronic reporting software, PORTAL. Once you are ready to report your results, please go to www.lgcpt.com/portal and log in using your lab ID, username and password. Before using PORTAL, we advise that you read the user guide, which is available at www.lgcpt.com/portal (select ‘Help’ from the menu).

For some schemes (or parts of a scheme) alternative reporting mechanisms are provided, details of which will be emailed to participants prior to receipt of the test materials.

It is recommended that results and calculations are checked thoroughly before reporting. Results should be reported clearly, in the format and units detailed in the scheme description. If calculations are used, unless instructed otherwise, the laboratory is to report only the final calculated result.

In general, results of zero should not be reported; results should instead be reported relative to the detection limit of the method used, for example, <10. The exception is a small number of parameters where, depending on the measurement scale being used, it may be appropriate to report a result of zero. Results of zero and truncated results (< or >) often cannot be included in the data analysis and therefore cannot be allocated a performance score. Results will be rounded up or down to the number of reporting decimal places stipulated in the scheme description and may not therefore be identical to your original reported result. The effects of rounding may also mean that occasionally percentage totals do not add up to exactly 100 %.

Part of the challenge of proficiency testing is the ability to perform calculations and transcribe results correctly. The proficiency testing team cannot interpret or calculate results on participants’ behalf. Once the round has closed, results cannot be amended, and no changes can be made after the report has been issued. However, if you notice an error in your result before the reporting deadline, it can be corrected in PORTAL until the round closes.

Q17: How many results may I submit?

A17: Although it is desirable for participants to submit multiple results in order to compare results between different analysts, methods or instruments, a single laboratory reporting a large number of results could potentially bias the dataset. In order to minimise the effects of bias, LGC Standards therefore limits the number of results participants are able to report. Each participant is able to enter up to 13 different results, of which a maximum of 3 can be ‘nominated’. Nominated results are included in the statistical analysis of the dataset whilst non-nominated results are not; however, all results will receive z performance scores and assessments as appropriate.

Nominated results must be obtained using different methods, again to minimise the effects of bias.

Further information is available in the PORTAL User Guide and the PORTAL Nominated Results FAQ; both documents are available for download from the PORTAL website, and further information is available from ptsupport@lgcgroup.com.

Q18: Can my results be included in the report if I’ve missed the deadline for reporting?

A18: Participants are asked to return results by the given deadline in order to ensure that their results are included in the statistical analysis and the scheme report. Results received after the closure date will not be included in the report.

For schemes where a generic report is issued, this is available to all participants subscribing to the round regardless of whether their results were submitted or not.

Q19: How do you prevent collusion and falsification of results?

A19: It defeats the objective of taking part in proficiency testing if participants are not returning genuine results. Certain measures are built into the schemes to try to prevent collusion but, ultimately, the responsibility rests with each participating laboratory to behave in a professional manner.

Q20: How is the assigned value established?

A20: ISO 13528: ‘Statistical Methods for use in Proficiency Testing by Interlaboratory Comparisons’ sets out how the assigned value and performance assessment criteria can be established and describes the options for the various performance scoring systems.

The assigned value is the value selected as being the best estimate of the ‘true value’ for the parameter under test. The method used to determine the assigned value may vary depending upon the particular scheme and test material and is detailed in the relevant scheme description.

For quantitative tests, where it is appropriate, practicable and technically feasible, the assigned value will be derived through formulation (or occasionally through the use of a certified reference material) to provide metrological traceability; the associated uncertainty of the value can therefore be estimated. However, in many cases the use of a consensus value is the only practicable and technically feasible approach to use. When the assigned value is determined from the consensus value of participant results, or from expert laboratories, robust statistical methods are used for calculation of the consensus value, the estimated standard uncertainty and the robust standard deviation.

For qualitative tests, participant results are compared against the intended result (assigned value) based on formulation or expert assessment.

For interpretive schemes where the result is subjective rather than quantifiable, a model answer produced by appropriate experts will be published in the report.

For microbiology test materials, all participant results are converted to log10 values before the statistical analysis is undertaken.

Q21: How do I evaluate measurement uncertainty?

A21: The aim when evaluating measurement uncertainty is to combine the effects of all the errors that will influence the measurement result, into a single value. There are many different guides available which provide advice on evaluating measurement uncertainty.

Further information on approaches to evaluating measurement uncertainty may also be available from your national accreditation body.

Q22: Can I use PT data to estimate my measurement uncertainty?

A22: It is possible, but the result must be regarded as a very rough estimate, and this is not an approach addressed in many guides to evaluating measurement uncertainty. However, some guidance documents do address the use of PT data.

Q23: What is the Standard Deviation for Proficiency Assessment (SDPA)?

A23: The SDPA expresses the acceptable difference between the laboratory result and the assigned value. An acceptable z performance score represents a result that does not deviate from the assigned value by more than twice the SDPA. The method used to determine the SDPA may vary depending upon the particular scheme and test material and is detailed in the relevant scheme description.

Q24: What standard deviation for proficiency assessment (SDPA) is used in microbiology PT schemes?

A24: There are many sources of variation in microbiological testing and the SDPA used to assess performance therefore needs to be fit-for-purpose and take all possible sources of variation into account. From experience and historical data, LGC Standards PT uses a fixed SDPA value of 0.35 log10 for the majority of microbiological tests.
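As an illustrative sketch of how the fixed 0.35 log10 SDPA is applied on the log scale, the short Python function below computes a score for a microbiology count; the colony counts used are hypothetical illustration values, not taken from any scheme.

```python
import math

def micro_z(result_cfu, assigned_cfu, sdpa=0.35):
    """Score on the log10 scale, as used for most microbiology tests.

    sdpa=0.35 log10 is the fixed value quoted in this FAQ; the counts
    passed in below are hypothetical.
    """
    return (math.log10(result_cfu) - math.log10(assigned_cfu)) / sdpa

# A result of 150,000 cfu against an assigned value of 100,000 cfu
z = micro_z(result_cfu=150_000, assigned_cfu=100_000)
print(round(z, 2))  # log10(1.5) / 0.35 ≈ 0.50
```

Note that a 1.5-fold difference in raw counts corresponds to only about 0.5 SDPA on the log scale, which is why microbiology results are log-transformed before scoring.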

Q25: How do I report a ‘presumptive’ result in microbiology?

A25: Report your result as usual but record in the comments section that the result is ‘Presumptive’.

Q26: What is the purpose of scoring my result?

A26: Once the assigned value for the parameters under test has been established, participant laboratories are assessed on the difference between their result and the assigned value, with this difference being represented by a performance score called a z score. This provides a simple and consistent measure of performance which is the key to monitoring competence and implementing an improvement programme as required.

Q27: How is a z performance score calculated?

A27: The participant’s result, x, is converted into a z performance score using the following formula:

z = (x − X) / SDPA

Where:
x = the participant’s result
X = the assigned value
SDPA = Standard Deviation for Proficiency Assessment

For small data sets, there will be increased uncertainty around the assigned value if derived from a consensus value from participants’ results. In such cases, z performance scores may not be provided, or may be given for information only.

The z score expresses performance in relation to the assigned value and the standard deviation for proficiency assessment (SDPA). A z performance score of 2 represents a result that is a distance of 2 x SDPA from the assigned value.

A fit-for-purpose value for the SDPA, rather than one derived from participant results, is preferable, as this enables z scores to be compared from round to round to demonstrate general trends.

For each scheme, the value of SDPA and the method used to derive it is reported in the scheme description and / or report.

Q28: How do I interpret my results?

A28: For quantitative examinations, participant performance is assessed using the z score, and the following interpretation is given to results:

|z| ≤ 2.00           Satisfactory result
2.00 < |z| < 3.00    Questionable result
|z| ≥ 3.00           Unsatisfactory result

For qualitative examinations or semi-qualitative results, laboratories reporting the assigned result or range of results will be considered correct, and therefore have satisfactory performance.
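The z score calculation and the interpretation bands above can be combined in a short sketch; the assigned value, SDPA and participant results below are hypothetical illustration values.

```python
def z_score(x, assigned, sdpa):
    """z = (x - X) / SDPA, as defined in Q27."""
    return (x - assigned) / sdpa

def interpret(z):
    """Map |z| onto the interpretation bands given in Q28."""
    a = abs(z)
    if a <= 2.00:
        return "Satisfactory"
    if a < 3.00:
        return "Questionable"
    return "Unsatisfactory"

# Hypothetical round: assigned value 50.0 mg/kg, SDPA 2.5 mg/kg
for x in (51.0, 56.0, 58.0):
    z = z_score(x, 50.0, 2.5)
    print(x, round(z, 2), interpret(z))
```

For these inputs the three results score 0.40, 2.40 and 3.20, i.e. satisfactory, questionable and unsatisfactory respectively.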

Q29: What are the advantages of a z performance score?

A29:
• Results can be expressed in a form that is easy to interpret and understand
• Results can be summarised in graphical or tabular form to depict overall performance
• A performance score allows participants to directly compare their own result with others
• If consistent statistical values are applied, a performance score enables participants to monitor and trend their own performance over time

It is important to interpret any performance score in the full context of the overall results and in the context of a laboratory’s own quality control measures.

Q30: What is the estimated uncertainty of the assigned value?

A30: The assigned value has a standard uncertainty (ux) that depends upon the method used to derive the assigned value. When the assigned value is determined by the consensus of participants’ results, the estimated standard uncertainty of the assigned value can be calculated by:

ux = 1.25 × robust standard deviation / √n

Where n = number of results

When the assigned value is determined by formulation, the standard uncertainty is estimated by the combination of uncertainties of all sources of error, such as gravimetric and volumetric measurements.

If ux is ≤ 0.3 x SDPA, then the uncertainty of the assigned value can be considered negligible and need not be considered in the interpretation of results.

If ux is > 0.3 x SDPA, then the uncertainty of the assigned value is not negligible in relation to the SDPA, and z’ performance scores (z-prime), which take into account the standard uncertainty of the assigned value in their calculation, will be reported in place of z performance scores.
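The uncertainty formula from Q30 and the 0.3 x SDPA negligibility check can be sketched as below; the robust standard deviation, number of results and SDPA are hypothetical illustration values.

```python
import math

def assigned_value_uncertainty(robust_sd, n):
    """u_x = 1.25 * robust SD / sqrt(n), for a consensus assigned value (Q30)."""
    return 1.25 * robust_sd / math.sqrt(n)

def use_z_prime(ux, sdpa):
    """True when u_x > 0.3 * SDPA, i.e. z' scores replace z scores."""
    return ux > 0.3 * sdpa

# Hypothetical round: robust SD 2.0, 25 returned results, SDPA 2.5
ux = assigned_value_uncertainty(2.0, 25)   # 1.25 * 2.0 / 5 = 0.5
print(ux, use_z_prime(ux, 2.5))            # 0.5 <= 0.75, so plain z scores suffice
```

Note how quickly ux shrinks as participation grows: with only 4 results the same robust SD would give ux = 1.25, well above the 0.75 threshold, and z' scores would be reported instead.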

Q31: How is z’ performance score (z-prime) calculated?

A31: A z’ performance score (z-prime) incorporates the standard uncertainty of the assigned value and is calculated as follows:

z’ = (x − X) / √(SDPA² + ux²)

Where: x = participant result

X = the assigned value

SDPA = Standard Deviation for Proficiency Assessment

ux = standard uncertainty of the assigned value X

A z’ performance score is interpreted in exactly the same way as a z performance score: |z’| ≤ 2.00 is satisfactory, 2.00 < |z’| < 3.00 is questionable and |z’| ≥ 3.00 is unsatisfactory.
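As a sketch, assuming the standard ISO 13528 form z' = (x − X) / √(SDPA² + ux²), the calculation looks like this in Python; the inputs are hypothetical illustration values.

```python
import math

def z_prime(x, assigned, sdpa, ux):
    """z' = (x - X) / sqrt(SDPA^2 + ux^2); the denominator inflates to
    absorb the standard uncertainty of the assigned value."""
    return (x - assigned) / math.sqrt(sdpa**2 + ux**2)

# Hypothetical: result 56.0, assigned value 50.0, SDPA 2.5, ux 1.0
print(round(z_prime(56.0, 50.0, sdpa=2.5, ux=1.0), 2))  # ≈ 2.23
```

For the same inputs a plain z score would be 2.40, so accounting for a non-negligible ux slightly softens the score, as intended.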

Q32: Do you include outlying results due to ‘errors and blunders’ in the statistical analysis of the data?

A32: Although robust estimators are used in order to minimise the influence of outlying results, extreme results or results that are identifiably invalid should not be included in the statistical analysis of the data. For example, these may be results caused by calculation errors or the use of incorrect units. However, such results can be difficult for the PT organiser to identify. For this reason, the robust mean and standard deviation are first calculated in the usual way; results outside the range of the assigned value ± 5 x SDPA are then excluded, and the robust mean and standard deviation are recalculated. These recalculated values are used for the statistical analysis. Removing these ‘blunders’ from the dataset removes their influence on the summary statistics. All results, including excluded results, will be given performance scores.
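The exclude-and-recalculate procedure can be sketched as follows. Schemes typically use a robust estimator such as ISO 13528 Algorithm A; here the median and a scaled MAD stand in as simplified robust estimates, and the result values are hypothetical (the last one simulating a decimal-point blunder).

```python
import statistics

def robust_stats(results):
    """Median and scaled MAD as simple robust stand-ins for the robust
    mean and standard deviation (real schemes use e.g. Algorithm A)."""
    med = statistics.median(results)
    mad = statistics.median(abs(r - med) for r in results)
    return med, 1.4826 * mad  # 1.4826 scales MAD to an SD for normal data

def exclude_blunders(results, assigned, sdpa):
    """Drop results outside assigned ± 5 × SDPA, then recompute."""
    kept = [r for r in results if abs(r - assigned) <= 5 * sdpa]
    return kept, robust_stats(kept)

results = [49.8, 50.2, 51.0, 49.5, 50.6, 505.0]  # 505.0: decimal-point blunder
kept, (robust_mean, robust_sd) = exclude_blunders(results, assigned=50.0, sdpa=1.0)
print(len(kept))  # 5 — 505.0 is excluded from the statistics but still scored
```

The blunder is removed before the summary statistics are finalised, but, as the answer above notes, the excluded laboratory still receives a performance score.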

Q33: What could be the cause of my poor performance?

A33: A single poor result is not indicative of overall laboratory performance but neither is a single good result. Ideally, PT results should be monitored over time to detect potential bias or repeated unsatisfactory results. There are many possible reasons for a single poor result. It is therefore important to interpret the results from PT schemes within the context of an all-round quality assurance programme, including internal quality control, use of validated methods and reference materials. There are numerous potential causes of poor performance in a PT scheme which may include analytical and non-analytical errors.

Test materials are subjected to rigorous quality control testing before being distributed to participants, and are unlikely to be the cause of a poor z performance score. All possible reasons for a poor performance should be investigated fully in order to identify the most likely cause and to enable action to be taken to prevent recurrence. Repeat test materials are available after every distribution, but it is most important to investigate and understand the reason(s) for the failure, document this fully, and carry out corrective actions before repeating a test.

Q34: How can I measure my laboratory’s performance over time?

A34: You can do this by trend analysis. A single result simply reflects the performance of the laboratory on the particular day that the test was carried out and therefore gives limited information. Frequent participation in PT schemes over time can give greater insight into long-term performance and can help identify where internal bias may be occurring. One of the best methods of summarising performance scores over time is graphically, as this gives a clear overview and is less prone to misinterpretation than numerical methods. Participants are therefore advised to monitor their PT results over time. Further information regarding interpretation and trend analysis of proficiency results is given in the Eurachem guide ‘Selection, Use and Interpretation of Proficiency Testing (PT) Schemes’ (available at www.eurachem.org), the IUPAC ‘International Harmonised Protocol for the Proficiency Testing of Analytical Chemistry Laboratories’ (2006) and ISO 13528.

Q35: How can I graphically plot and analyse trends for qualitative results?

A35: Qualitative results are difficult to depict graphically as they are not normally allocated a performance score. However, for qualitative results, a correct result could be allocated a performance score of 0 to represent a satisfactory result. A false positive result can be represented by a performance score of +3, whilst a false negative result can be represented by a performance score of −3. If plotted graphically over time, this should give a clear visual indicator of performance in qualitative tests.
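The 0 / +3 / −3 convention described above can be sketched as a small scoring function; the detect / non-detect results below are hypothetical.

```python
def qualitative_score(reported_positive, assigned_positive):
    """0 = correct, +3 = false positive, -3 = false negative,
    following the plotting convention suggested in this answer."""
    if reported_positive == assigned_positive:
        return 0
    return 3 if reported_positive else -3

# Hypothetical detect/non-detect results over four rounds: (reported, assigned)
rounds = [(True, True), (False, True), (True, False), (False, False)]
scores = [qualitative_score(r, a) for r, a in rounds]
print(scores)  # [0, -3, 3, 0]
```

Plotting these scores against round number gives a control-chart-style view in which false positives sit above the axis and false negatives below it.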

Q36: How will I receive my report?

A36: Following statistical evaluation of the results, the reports will generally be available on the website within 4 to 10 working days of round closure (see the specific scheme description). We aim to provide 95 % of our reports to participants within 5 working days. Participants will be emailed when the report is available. The content of reports varies from scheme to scheme but includes details of the composition of test materials, the assigned values, and tabular and / or graphical representations of participants’ results. Paper copies are also available for an additional charge.

Q37: How do I assess the reproducibility standard deviation from the PT report?

A37: The robust standard deviation provided in the PT report for a specific method can be taken as an estimate of the reproducibility standard deviation for the PT round for that specific method.

Q38: Can I have a report that only includes my group laboratories?

A38: Yes, we can produce reports tailored to a customer’s specific requirement. There may be an additional charge for administration and computer programming costs. Copyright to all reports remains with LGC Standards but permission is granted to participants to make copies for their own internal use, for example for quality control and regulatory purposes. No other copies may be made without obtaining permission.

Q39: My results have not been included in the report; can I calculate my z / z’ performance score?

Q40: How can I receive advice and feedback?

A40: Communication with participants will be carried out through scheme-related documentation, emails, letters, or through local LGC Standards offices. Open meetings may also be organised and all interested parties invited to attend.

Q41: How can I send feedback?

A41: Comments on any aspect of our products and services are welcome either by phone, fax, letter, email or by contacting your local LGC Standards office.

Q42: Can I suggest a scheme or test material?

A42: We welcome suggestions any time. Please complete the ‘Wish list’ form on our website: www.lgcstandards.com.