This thesis introduces two novel semantic representation spaces for text documents and semantically annotated data, both based on an intrinsic geometry approach, together with several further results, among them: (1) a novel ontology-based semantic distance that we call the weighted Jiang-Conrath distance, and (2) a generalization of the normal distribution to differential manifolds, called the geodesic normal distribution, which leads us to the definition of the geodesic Mahalanobis distance. Finally, we prove that any Bayes classifier on a manifold defines a dual Voronoi diagram on it. The ontology-based IR model looks promising, but it has not yet been evaluated experimentally. On the other hand, the text document classifier yielded a first discouraging result, due to the difficulties encountered in training the model.

The common thread of our research is the use of notions of intrinsic differential geometry and geometric invariance as a means to bridge some gaps in the literature. Both the ontology-based IR model and the text classifier proposed in this thesis are inspired by a geometric approach whose core idea is the integration of the geometric structure of the problem into the semantic representation spaces of the information. In summary, our approach attempts to build better models of semantic spaces by incorporating the properties and constraints of the mathematical objects involved in their definition. The first part of the thesis introduces a novel ontology-based IR model, based on a structure-preserving embedding of a populated ontology into a metric space, that we call Intrinsic Ontological Spaces. The second part introduces a novel text classifier, called Intrinsic Bayes-Voronoi, which is based on the representation of the document vectors by a manifold-based generative model whose distribution function is defined on the unit hypersphere instead of the Euclidean ambient space.
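As context for the weighted variant contributed by the thesis, the classic (unweighted) Jiang-Conrath distance can be sketched on a toy taxonomy. The taxonomy, the concept probabilities, and all function names below are illustrative assumptions, not part of the thesis; only the formula d(c1, c2) = IC(c1) + IC(c2) - 2·IC(LCS(c1, c2)) is the standard definition.

```python
import math

# Hypothetical toy taxonomy: child -> parent (the root "animal" has no parent).
PARENT = {"dog": "mammal", "cat": "mammal", "mammal": "animal", "bird": "animal"}

# Hypothetical corpus probabilities, from which IC(c) = -log p(c).
PROB = {"animal": 1.0, "mammal": 0.5, "bird": 0.3, "dog": 0.2, "cat": 0.2}

def ancestors(c):
    # Chain of subsumers from c up to the root, c included.
    chain = [c]
    while c in PARENT:
        c = PARENT[c]
        chain.append(c)
    return chain

def lcs(c1, c2):
    # Lowest common subsumer: first ancestor of c1 that also subsumes c2.
    anc2 = set(ancestors(c2))
    for a in ancestors(c1):
        if a in anc2:
            return a

def ic(c):
    return -math.log(PROB[c])

def jiang_conrath(c1, c2):
    # Classic Jiang-Conrath distance: IC(c1) + IC(c2) - 2 * IC(LCS(c1, c2)).
    return ic(c1) + ic(c2) - 2.0 * ic(lcs(c1, c2))

print(jiang_conrath("dog", "cat"))   # siblings under "mammal"
print(jiang_conrath("dog", "bird"))  # larger: LCS is the root "animal"
```

The weighted variant proposed in the thesis modifies this base measure; the sketch above only fixes the classic formulation it departs from.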
Intrinsic Ontological Spaces is a novel theoretical IR model that looks promising, although it has not yet been evaluated experimentally. The proposed IR model is described in depth and validated with regard to our design axioms. The motivation behind our model is the finding of a set of geometric inconsistencies in some ontology-based IR models in the literature, which derive from certain overlooked properties in their adaptations of the Vector Space Model (VSM). In essence, our model refutes the unreflective use of the VSM in the fields of natural language processing (NLP) and information retrieval (IR). Although the theoretical approach is interesting in its own right, our main hypothesis is that the structure-preserving approach proposed by our model should improve the quality of the ranking, as well as the precision and recall measures, of semantic information retrieval systems. Intrinsic Ontological Spaces is, to the best of our knowledge, the first ontology-based IR model to build a complete ontology-based, structure-preserving representation for any sort of semantically annotated data in a populated ontology. In our model, every component has been designed with the aim of preserving the intrinsic geometry of the base ontology, which is defined by three algebraic structures: (1) the order relation of the taxonomy, (2) the set inclusion relation, and (3) its intrinsic semantic metric. In this way, the methods for the representation of queries and information units, as well as for weighting, ranking and retrieval, have been designed from principled geometric axioms, with the aim of capturing all the semantic knowledge encoded in the base ontology. In the language of category theory, our model builds a natural equivalence, or morphism, between the input populated ontology and the representation space of the indexed information units.
Finally, the Bayes-Voronoi classifier, introduced in the second part of the thesis, represents documents with a manifold-based generative model defined by a vector normal distribution on the unit hypersphere, which we have called the geodesic normal distribution. The distribution is defined on the unit hypersphere, considered as a manifold, instead of on the ambient space. The core idea is the observation that normalized vectors lie on the unit hypersphere, rather than in the whole Euclidean ambient space, and the proposed model explicitly integrates this constraint. The model removes one dimension from the normalized vectors, the dimension lost in the projection of the data vectors onto the unit hypersphere (normalization). The geodesic normal distribution leads us to the definition of the Mahalanobis distance on a differential manifold, a distance that we call the geodesic Mahalanobis distance. We also prove that any Bayes classifier on a manifold defines a dual Voronoi diagram on it.
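The flavor of such a distance can be sketched with the standard Riemannian logarithm map on the unit sphere: points are mapped to the tangent space at an intrinsic mean, where a Mahalanobis-style quadratic form is evaluated. This is a minimal sketch under our own assumptions (function names, the identity precision matrix, and the S^2 example are all illustrative), not the thesis's exact construction.

```python
import numpy as np

def log_map(mu, x):
    # Riemannian logarithm on the unit sphere S^{n-1}: maps the unit vector x
    # to the tangent space at mu; the result's norm equals the geodesic
    # (great-circle) distance arccos(mu . x).
    cos_t = np.clip(mu @ x, -1.0, 1.0)
    theta = np.arccos(cos_t)
    v = x - cos_t * mu              # component of x orthogonal to mu
    norm = np.linalg.norm(v)
    if norm < 1e-12:                # x coincides with mu (or its antipode)
        return np.zeros_like(x)
    return theta * v / norm

def geodesic_mahalanobis(mu, cov_inv, x):
    # Mahalanobis-style distance evaluated in the tangent space at the
    # intrinsic mean mu: sqrt(v^T Sigma^{-1} v) with v = Log_mu(x).
    v = log_map(mu, x)
    return float(np.sqrt(v @ cov_inv @ v))

# Example on S^2: mean at the north pole, point at geodesic distance 0.5.
mu = np.array([0.0, 0.0, 1.0])
x = np.array([np.sin(0.5), 0.0, np.cos(0.5)])
cov_inv = np.eye(3)  # identity precision: reduces to plain geodesic distance
print(geodesic_mahalanobis(mu, cov_inv, x))  # ≈ 0.5
```

With an identity precision matrix the quadratic form collapses to the geodesic distance itself; an anisotropic covariance estimated from tangent vectors is what would make the distance Mahalanobis-like, mirroring how the text notes that normalization removes one dimension (the tangent space at mu is (n-1)-dimensional).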