Kernel Machines
http://www.kernel-machines.org

Search results for the query, showing results 811 to 825.

Learning with Kernels
http://www.kernel-machines.org/publications/Smola98
Unified presentation of regularized risk functionals, kernels, and cost functions for regression and classification. New uniform convergence bounds in terms of kernel functions are given. (PhD thesis)

Dynamic Alignment Kernels
http://www.kernel-machines.org/publications/Watkins00
A new concept using generative models to construct dynamic alignment kernels is presented. It is based on the observation that the sum of products of conditional probabilities $\sum_c p(x|c) p(x'|c)$ is a valid SV kernel. This is particularly well suited to Hidden Markov Models, thereby opening the door to a large class of applications such as DNA analysis and speech recognition. (Conference paper)

Natural Regularization from Generative Models
http://www.kernel-machines.org/publications/OliSchSmo00
A class of kernels including the Fisher kernel recently proposed by Jaakkola and Haussler is analyzed. The analysis hinges on information-geometric properties of the log probability density function (the generative model) and on known connections between support vector machines and regularization theory. It proves that the maximal margin term induced by the considered kernel corresponds to a penalizer computing the $L_2$ norm weighted by the generative model. Moreover, it is shown that the feature map corresponding to the kernel whitens the data. (Conference paper)

Probabilities for SV Machines
http://www.kernel-machines.org/publications/Platt00
SVMs do not immediately provide confidence ratings. This problem is addressed by fitting a logistic to the function values of an SVM in order to obtain probabilities. The results are comparable to classical statistical techniques such as logistic regression while preserving the sparseness, and thus the numerical efficiency, of SVMs. Pseudocode is given for easy implementation. (Conference paper)

Maximal Margin Perceptron
http://www.kernel-machines.org/publications/Kowalczyk00
Overview of sequential update algorithms for the maximal margin perceptron. In particular, a new update method is derived, based on the observation that the normal vector of the separating hyperplane can be found as the difference between two points lying in the convex hulls of the positive and negative examples respectively. This method has the advantage that only one Lagrange multiplier has to be updated at each iteration, leading to a potentially faster training algorithm. Bounds on the speed of convergence are stated, and an experimental comparison with other training algorithms shows the good performance of the method. (Conference paper)

Large Margin Rank Boundaries for Ordinal Regression
http://www.kernel-machines.org/publications/HerGraObe00
Based on ideas from SV classification, an algorithm is designed to obtain large margin rank boundaries for ordinal regression; in other words, an SV algorithm for learning preference relations. In addition, the paper contains a detailed derivation of the corresponding cost functions and risk functionals, and proves uniform convergence bounds for this setting. Experimental evidence shows the good performance of this distribution-independent approach. (Conference paper)

Generalized Support Vector Machines
http://www.kernel-machines.org/publications/Mangasarian00
Arbitrary kernel functions, which need not satisfy Mercer's condition, can be used. This goal is achieved by separating the regularizer from the actual separation condition. For quadratic regularization this leads to a convex quadratic program that is no more difficult to solve than the standard SV optimization problem. Sparse expansions are achieved when the $1$-norm of the expansion coefficients is chosen to restrict the class of admissible functions. The problems are formulated in a way that is compatible with the mathematical programming literature. (Conference paper)

Linear Discriminant and Support Vector Classifiers
http://www.kernel-machines.org/publications/GuySto00
Review of linear discriminant algorithms. SVMs in feature space are one special case, and the authors point out similarities to and differences from the other cases. Placing SVMs in this wider context provides a useful backdrop that should help SVM specialists avoid losing sight of the general picture. (Conference paper)

Regularization Networks and Support Vector Machines
http://www.kernel-machines.org/publications/EvgPonPog00
Uniform convergence results for kernel methods are reviewed, and a new theoretical justification of SVMs and regularization networks based on structural risk minimization is given. Furthermore, the paper contains an overview of the current state of the art regarding connections between reproducing kernel Hilbert spaces, Bayesian priors, feature spaces, and sparse approximation techniques. (Conference paper)

Robust Ensemble Learning
http://www.kernel-machines.org/publications/RatSchSmoMiketal00
(Conference paper)

Towards a Strategy for Boosting Regressors
http://www.kernel-machines.org/publications/ShaKar00
(Conference paper)

Bounds on Error Expectation for SVM
http://www.kernel-machines.org/publications/VapCha00
Bounds on the error expectation for SVMs in terms of the leave-one-out estimate and the expected value of certain properties of the SVM are given. It is shown that previous bounds involving the minimum margin and the diameter $D$ of the set of support vectors can be improved by replacing $D^2$ with $SD$, where $S$ is a new geometric property of the support vectors called the span. Experimental results show that this improvement gives significantly better predictions of test error than the previous bounds, and it seems likely to be useful for model selection. (Conference paper)
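The observation behind the Dynamic Alignment Kernels entry above, that $\sum_c p(x|c)\,p(x'|c)$ is a valid SV kernel, can be checked numerically: the sum is an inner product between feature vectors $\phi(x) = (p(x|c))_c$, so the Gram matrix it induces is symmetric positive semidefinite. The toy discrete generative model below is an illustration invented for this sketch, not an example from the paper:

```python
import numpy as np

# Toy setup (hypothetical, not from the paper): 3 latent classes c, each
# with a discrete conditional distribution p(x | c) over 5 symbols x.
rng = np.random.default_rng(0)
cond = rng.random((3, 5))
cond /= cond.sum(axis=1, keepdims=True)   # row c is the distribution p(. | c)

# Feature map phi(x)_c = p(x | c), so that
#   k(x, x') = sum_c p(x|c) p(x'|c) = <phi(x), phi(x')>.
phi = cond.T                              # phi[x] has shape (3,), one row per symbol
K = phi @ phi.T                           # Gram matrix over all 5 symbols

# An inner-product Gram matrix is symmetric and positive semidefinite,
# which is exactly the property that makes k a valid SV kernel.
min_eig = np.linalg.eigvalsh(K).min()
```

The same argument goes through for Hidden Markov Models, where $c$ ranges over hidden state paths; only the feature map changes, not the positive-semidefiniteness reasoning.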
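The logistic fit from the Probabilities for SV Machines entry above can be sketched in a few lines. This is a simplified stand-in for the paper's pseudocode, not a reproduction of it: it fits $P(y=1 \mid f) = \sigma(a f + b)$ to SVM decision values $f$ by plain gradient descent on the cross-entropy loss (the paper uses the reverse sign convention and a more careful second-order optimizer), and the decision values below are made up for illustration:

```python
import numpy as np

def fit_platt(decision_values, labels, n_iter=5000, lr=0.5):
    """Fit P(y=1 | f) = sigmoid(a*f + b) to SVM decision values f.

    Gradient descent on the cross-entropy loss; a simplified stand-in
    for the Newton-style pseudocode given in the paper.
    """
    f = np.asarray(decision_values, dtype=float)
    t = (np.asarray(labels, dtype=float) + 1.0) / 2.0   # map {-1,+1} -> {0,1}
    a, b = 0.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(a * f + b)))          # current probabilities
        a -= lr * np.mean((p - t) * f)                  # dLoss/da
        b -= lr * np.mean(p - t)                        # dLoss/db
    return a, b

# Hypothetical decision values of a trained SVM, with their labels:
a, b = fit_platt([-3.0, -2.0, -1.0, 1.0, 2.0, 3.0], [-1, -1, -1, 1, 1, 1])
prob = lambda f: 1.0 / (1.0 + np.exp(-(a * f + b)))
```

Because the sigmoid is fitted after training, the SVM's sparse expansion is untouched; only two extra parameters are learned, which is the source of the efficiency claim in the abstract.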