Abstract: We investigate the issue of model selection and the use of the nonconformity (strangeness) measure in batch learning. Using the nonconformity measure we propose a new training algorithm that helps avoid the need for Cross-Validation or Leave-One-Out model selection strategies. We provide a new generalisation error bound using the notion of nonconformity to upper bound the loss of each test example, and show that our proposed approach is comparable to standard model selection methods, but with theoretical guarantees of success and faster convergence. We demonstrate our novel model selection technique using the Support Vector Machine.
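The abstract does not spell out the algorithm, but the idea it names can be illustrated with a minimal sketch: score a held-out calibration set with a standard SVM nonconformity (strangeness) measure, the negative signed margin, and select the hyperparameter whose model makes the calibration examples least strange on average, in place of cross-validation. The score definition, toy data, and candidate grid below are all illustrative assumptions, not the paper's actual procedure.

```python
# Illustrative sketch only: a margin-based nonconformity score used for
# SVM hyperparameter selection. This is NOT the paper's algorithm.
import numpy as np
from sklearn.svm import SVC


def nonconformity_scores(clf, X, y):
    # A common SVM strangeness measure: the negative signed margin.
    # Confidently correct examples get low scores; misclassified or
    # near-boundary examples get high scores.
    return -y * clf.decision_function(X)


def select_by_nonconformity(X_train, y_train, X_cal, y_cal, C_grid):
    # Pick the C whose fitted model makes the calibration set least
    # "strange" on average, standing in for cross-validation.
    best_C, best_score = None, np.inf
    for C in C_grid:
        clf = SVC(kernel="linear", C=C).fit(X_train, y_train)
        score = nonconformity_scores(clf, X_cal, y_cal).mean()
        if score < best_score:
            best_C, best_score = C, score
    return best_C


# Toy data: two Gaussian blobs with labels in {-1, +1}.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 1.0, (40, 2)), rng.normal(2.0, 1.0, (40, 2))])
y = np.array([-1] * 40 + [1] * 40)
idx = rng.permutation(len(y))
X, y = X[idx], y[idx]

C_grid = [0.01, 0.1, 1.0, 10.0]
best_C = select_by_nonconformity(X[:60], y[:60], X[60:], y[60:], C_grid)
print("selected C:", best_C)
```

A single pass over the grid fits one model per candidate, so the cost is one training run per hyperparameter rather than the k runs per candidate that k-fold cross-validation requires.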