This paper proposes a novel cross-media retrieval approach. First, an isomorphic subspace is constructed with Canonical Correlation Analysis (CCA) to learn the multi-modal correlations among media objects; second, polar coordinates are used to measure the distance between media objects of different modalities in this subspace. Because complete semantic correlations are unlikely to be learned from limited training samples, users' relevance feedback is employed to refine cross-media similarities. We also propose methods to map new media objects into the learned subspace, so that any new media object can serve as a query example. Experimental results show that our approaches are effective for cross-media retrieval and achieve a significant improvement over content-based image retrieval and content-based audio retrieval.
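To make the two core ingredients concrete, the following is a minimal NumPy sketch of (1) learning a shared CCA subspace from two feature views and (2) a polar-coordinate distance between projected objects. The synthetic "image"/"audio" features, the dimensions, the regularization, and the law-of-cosines distance combination are all illustrative assumptions for this example, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_cca(X, Y, d=2, reg=1e-6):
    """Fit linear CCA; return projections Wx, Wy that map each view
    into a shared d-dimensional (isomorphic) subspace."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    # Regularized covariance and cross-covariance matrices
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        # Inverse square root of a symmetric positive-definite matrix
        w, V = np.linalg.eigh(C)
        return V @ np.diag(w ** -0.5) @ V.T

    Kx, Ky = inv_sqrt(Cxx), inv_sqrt(Cyy)
    # SVD of the whitened cross-covariance gives the canonical directions
    U, _, Vt = np.linalg.svd(Kx @ Cxy @ Ky)
    return Kx @ U[:, :d], Ky @ Vt[:d].T

# Synthetic two-modality data sharing a latent variable z
z = rng.normal(size=(200, 2))
X = z @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(200, 10))  # "images"
Y = z @ rng.normal(size=(2, 8)) + 0.1 * rng.normal(size=(200, 8))    # "audio"

Wx, Wy = fit_cca(X, Y)
px = (X - X.mean(axis=0)) @ Wx  # images mapped into the shared subspace
py = (Y - Y.mean(axis=0)) @ Wy  # audio clips mapped into the shared subspace

def polar_distance(a, b):
    """Distance between two 2-D subspace points expressed via their
    polar coordinates (r, theta); equals the Euclidean distance."""
    ra, rb = np.hypot(*a), np.hypot(*b)
    ta, tb = np.arctan2(a[1], a[0]), np.arctan2(b[1], b[0])
    return np.sqrt(ra**2 + rb**2 - 2 * ra * rb * np.cos(ta - tb))
```

Because both modalities are projected into the same subspace, a query of one modality (e.g., an image) can be compared directly against objects of another (e.g., audio) via `polar_distance`; the relevance-feedback refinement described above would then adjust these similarities.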