VALIDITY OF CET RESULT
The result of the DNB-CET shall be valid only for the current admission session, i.e. the January 2013 admission session, and cannot be carried forward to the next session of the CET.

DECLARATION OF RESULT
The results of the DNB-CET shall be declared within 6 weeks of the conduct of the examination. The mark sheet-cum-result certificate for the DNB-CET examination can be downloaded from the NBE website www.natboard.edu.in/CET after the declaration of the result.

TIE-BREAKER CRITERIA
In the event of two or more candidates obtaining the same percentage, their merit position shall be determined by the number of wrong responses. The candidate with the smaller number of wrong responses shall be placed higher in the merit list.

In case of a tie with the same percentage rank and the same number of wrong responses, date of birth shall be considered to determine inter-se merit: the elder candidate shall be placed higher. If the tie still persists, marks obtained in the qualifying examination, i.e. MBBS, shall be considered.
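The tie-breaker rules above amount to a lexicographic sort. The following is a minimal sketch in Python; the candidate fields and sample data are illustrative assumptions, not part of the bulletin.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Candidate:
    name: str
    percentage: float      # CET percentage score
    wrong_responses: int   # number of incorrect answers
    dob: date              # date of birth (elder = earlier date)
    mbbs_marks: float      # qualifying-exam (MBBS) marks

def merit_key(c: Candidate):
    # Higher percentage first, then fewer wrong responses,
    # then the elder candidate (earlier DOB), then higher MBBS marks.
    return (-c.percentage, c.wrong_responses, c.dob, -c.mbbs_marks)

candidates = [
    Candidate("A", 72.5, 18, date(1988, 5, 1), 61.0),
    Candidate("B", 72.5, 18, date(1986, 3, 9), 58.0),
    Candidate("C", 72.5, 15, date(1990, 7, 2), 65.0),
]
merit_list = sorted(candidates, key=merit_key)
# C ranks first (fewer wrong responses), then B (elder than A), then A.
```

Because Python sorts tuples lexicographically, each criterion is consulted only when all the earlier ones are tied, exactly as the bulletin prescribes.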

RESULTS – EQUATING & SCALING
The question paper of the DNB-CET comprises 180 multiple-choice questions, each with four options and only one correct response. Multiple question papers are used for the DNB-CET across different sessions and days.

A standard, psychometrically sound approach is employed for the scoring of the DNB-CET. This approach is applied to score all large-scale computer-based examinations that utilize multiple question papers.

Step 1: Calculation of Raw Marks
Raw marks are calculated based on the number of questions answered.
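As a minimal sketch of the raw-mark tally, assuming one mark per correct response and no negative marking (the exact scoring rule is not specified in the bulletin):

```python
# Hypothetical answer key and candidate responses (question -> option).
answer_key = {1: "B", 2: "D", 3: "A"}
responses  = {1: "B", 2: "C", 3: "A"}

# Raw marks: count of responses matching the key (assumed rule).
raw_marks = sum(1 for q, ans in responses.items() if answer_key.get(q) == ans)
# Wrong responses feed into the tie-breaker criteria described earlier.
wrong = sum(1 for q, ans in responses.items() if answer_key.get(q) != ans)
# raw_marks == 2, wrong == 1
```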

Step 2: Raw Marks are Equated
While all papers (forms) are carefully assembled to ensure that the content is comparable, the overall difficulty of each form may still vary slightly. Such minor differences in difficulty can be accurately measured only after all the question papers (forms) have been administered and the results analyzed. A post-equating process is therefore necessary to ensure validity and fairness.

Equating is a psychometric process that adjusts for differences in difficulty so that scores from different test papers (forms) are comparable on a common metric, and therefore fair to candidates taking different papers (forms). To facilitate this comparison, each form contains a predefined number of questions (items) selected from a large item bank, called an equating block, which is used as an anchor to adjust candidates' scores to the metric of the item bank. Taking into account candidates' differential performance on these equating blocks, each individual's raw marks are adjusted for differences in paper (form) difficulty.

During post-equating, test items are concurrently analyzed and the estimated item parameters (item difficulty and discrimination) are put onto a common metric. Item Response Theory (IRT), a psychometrically supported statistical model, is utilized in this process. The result is a statistically equated raw score that takes into account the performance of the candidate along with the difficulty of the form administered.
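To make the IRT step concrete, here is a hedged sketch of the two-parameter logistic (2PL) model, a common IRT formulation relating a candidate's ability to an item's difficulty and discrimination. The parameter values are purely illustrative; the bulletin does not publish the model or parameters NBE actually uses.

```python
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL IRT model: probability that a candidate of ability `theta`
    answers an item with discrimination `a` and difficulty `b` correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# At the same ability level, a harder item (larger b) yields a lower
# probability of a correct response -- this is what lets the analysis
# place items from different forms on a common difficulty metric.
easy_p = p_correct(theta=0.0, a=1.0, b=-1.0)
hard_p = p_correct(theta=0.0, a=1.0, b=1.0)
# easy_p > hard_p
```

In post-equating, such item parameters are estimated from the observed responses and anchored via the equating blocks, so that raw scores on different forms can be adjusted to the same metric.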

Step 3: Equated Raw Score is Scaled
In order to ensure appropriate interpretation of an equated raw score, the scores must be placed on a common scale or metric. A linear transformation is used for this scaling, which is standard practice for such test administrations. Post-equating accounts for any statistical differences in examination difficulty and ensures that all candidates are evaluated on a common scale. The aforesaid steps ensure that all examination scores are valid, equitable and fair. The merit list shall be prepared on the basis of the scaled scores obtained by the candidates.
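A linear transformation of this kind can be sketched as follows. The target scale (mean 500, standard deviation 100) is an illustrative assumption; the actual scaling constants used by NBE are not published in this bulletin.

```python
def scale(equated: float, raw_mean: float, raw_sd: float,
          target_mean: float = 500.0, target_sd: float = 100.0) -> float:
    """Linearly map an equated raw score onto a common reporting scale:
    standardize against the equated-score distribution, then re-center
    and re-spread to the target mean and standard deviation."""
    return target_mean + target_sd * (equated - raw_mean) / raw_sd

# One SD above the equated-score mean lands one SD above the scale mean.
scaled = scale(equated=120.0, raw_mean=100.0, raw_sd=20.0)
# scaled == 600.0
```

Because the transformation is linear, it preserves the ordering of equated scores, so the merit list is unaffected by the choice of reporting scale.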

There is no provision for re-checking, re-totaling, or re-evaluation of the question paper, answers, or score, and no query in this regard will be entertained.