Kernel Learning for Extrinsic Classification of Manifold Features

Abstract: In computer vision applications, features often lie on Riemannian manifolds with known geometry. Popular learning algorithms such as discriminant analysis, partial least squares, and support vector machines are not directly applicable to such features due to the non-Euclidean nature of the underlying spaces. Hence, classification is often performed in an extrinsic manner by first mapping the manifolds to Euclidean spaces using kernels. However, for kernel-based approaches, a poor choice of kernel often results in reduced performance. In this paper, we address the issue of kernel selection for the classification of features that lie on Riemannian manifolds using the kernel learning approach. We propose two criteria for jointly learning the kernel and the classifier in a single optimization problem. Specifically, for the SVM classifier, we formulate the problem of learning a good kernel-classifier combination as a convex optimization problem and solve it efficiently following the multiple kernel learning approach. Experimental results on image set-based classification clearly demonstrate the superiority of the proposed approach over existing methods for classification of manifold features.
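The extrinsic strategy described above, mapping manifold features into a Euclidean space where standard classifiers apply, can be illustrated with a minimal sketch. The example below is not the paper's method: it simply embeds symmetric positive definite (SPD) matrices into a vector space via the matrix logarithm (the standard log-Euclidean embedding), after which any Euclidean kernel or classifier can be used. All function names here are illustrative.

```python
import numpy as np

def spd_log(X):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return (V * np.log(w)) @ V.T

def extrinsic_features(spd_mats):
    """Map SPD matrices to Euclidean vectors (log-Euclidean embedding)."""
    return np.stack([spd_log(X).ravel() for X in spd_mats])

# Toy SPD matrices: sample covariances of random data.
rng = np.random.default_rng(0)
mats = [np.cov(rng.standard_normal((3, 50))) for _ in range(4)]
feats = extrinsic_features(mats)

# Ordinary Euclidean tools now apply, e.g. a linear kernel matrix
# that could be fed to a kernel SVM.
K = feats @ feats.T
```

In a multiple kernel learning setup, several such base kernels (e.g. from different embeddings or bandwidths) would be combined with learned nonnegative weights, which is what makes the joint kernel-classifier problem tractable as a convex program.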

Contributions

We introduced a general framework for developing extrinsic classifiers for features that lie on Riemannian manifolds (such as the Grassmann manifold and the space of Symmetric Positive Definite matrices) using the kernel learning approach.

We proposed a geodesic distance-based regularizer for data-driven kernel learning.

Focusing on the SVM classifier, we showed that the problem of learning a good kernel-classifier combination can be formulated as a convex optimization problem.

We introduced new kernels for the space of Symmetric Positive Definite matrices.
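The specific new kernels are not detailed in this excerpt. As an illustration of what a valid kernel on the space of SPD matrices looks like, the sketch below implements the widely used log-Euclidean Gaussian kernel, k(X, Y) = exp(-||log X - log Y||_F² / (2σ²)), which is positive definite because it is a Gaussian kernel on the vectorized matrix logarithms; the paper's proposed kernels may differ.

```python
import numpy as np

def spd_log(X):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return (V * np.log(w)) @ V.T

def log_euclidean_kernel(mats, sigma=1.0):
    """Gram matrix of k(X, Y) = exp(-||log X - log Y||_F^2 / (2 sigma^2))."""
    logs = [spd_log(X) for X in mats]
    n = len(logs)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            d = np.linalg.norm(logs[i] - logs[j], "fro")
            K[i, j] = np.exp(-d**2 / (2 * sigma**2))
    return K

# Toy SPD matrices: sample covariances of random data.
rng = np.random.default_rng(1)
mats = [np.cov(rng.standard_normal((3, 60))) for _ in range(5)]
K = log_euclidean_kernel(mats)
```

The Frobenius distance between matrix logarithms used here is the log-Euclidean distance, a common computationally cheap surrogate for the affine-invariant geodesic distance on the SPD manifold.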