Introduced as a new subtask of the ImageCLEF 2010 challenge, modality classification aims at recognizing the modality of a medical image based on its content alone. To address it, we propose to rely on a representation of images in terms of words from a visual dictionary. To this end, we introduce a very fast approach that learns implicit dictionaries, permitting the construction of compact and discriminative bags of visual words. Instead of a single, computationally expensive clustering to create the dictionary, we propose a multiple random partitioning method based on Extreme Random Subspace Projection Ferns. By concatenating these multiple partitions, we very efficiently create an implicit global quantization of the feature space and build a dictionary of visual words. Taking advantage of extreme randomization, our approach achieves very good speed on a real medical database, with better accuracy than K-means clustering.
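The core idea above can be sketched as follows: each fern applies a small set of binary tests (random subspace projections compared against thresholds) to a local descriptor, the resulting bit string indexes one leaf per fern, and concatenating the leaves of all ferns yields the implicit visual dictionary over which a bag-of-words histogram is built. This is a minimal illustrative sketch, not the authors' implementation; the median-based threshold calibration, the Gaussian projection weights, and all function names and parameters (`build_ferns`, `depth`, `subspace`, etc.) are assumptions for illustration.

```python
import numpy as np

def build_ferns(dim, n_ferns=10, depth=4, subspace=8, seed=None):
    """Create random ferns: each fern holds `depth` binary tests, where a
    test projects a random feature subspace with random Gaussian weights."""
    rng = np.random.default_rng(seed)
    ferns = []
    for _ in range(n_ferns):
        idx = rng.integers(0, dim, size=(depth, subspace))  # random subspace per test
        w = rng.standard_normal((depth, subspace))          # random projection weights
        ferns.append((idx, w))
    return ferns

def calibrate_thresholds(ferns, sample):
    """Set each test's threshold to the median response over a descriptor
    sample, giving roughly balanced partitions (an assumed calibration rule)."""
    thresholds = []
    for idx, w in ferns:
        resp = np.einsum('nds,ds->nd', sample[:, idx], w)  # (n, depth) responses
        thresholds.append(np.median(resp, axis=0))
    return thresholds

def bag_of_words(ferns, thresholds, descriptors):
    """Map each local descriptor to one leaf per fern, then histogram all
    leaf hits: the concatenated fern leaves act as the implicit dictionary."""
    depth = ferns[0][1].shape[0]
    n_leaves = 2 ** depth
    hist = np.zeros(len(ferns) * n_leaves)
    for f, ((idx, w), thr) in enumerate(zip(ferns, thresholds)):
        resp = np.einsum('nds,ds->nd', descriptors[:, idx], w)
        bits = (resp > thr).astype(int)
        leaves = bits @ (1 << np.arange(depth))  # bit string -> leaf index
        np.add.at(hist, f * n_leaves + leaves, 1)  # one vote per descriptor per fern
    return hist / hist.sum()  # L1-normalized bag-of-visual-words vector
```

Because the "dictionary" is just random projections and thresholds, no iterative clustering pass is needed; building it costs only the median calibration, which is the source of the speed advantage over K-means claimed above.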

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.