Kernel canonical correlation analysis (KCCA) is a general technique for subspace learning that includes PCA and LDA as special cases. Data acquired with functional magnetic resonance imaging (fMRI) are naturally amenable to these techniques because the data are well aligned, and fMRI recordings of the human brain are a particularly interesting candidate. In this study we implemented several supervised and semi-supervised variants of KCCA on human fMRI data, with regression to single- and multivariate labels (corresponding to the video content subjects viewed during image acquisition). In each condition, the semi-supervised variants of KCCA outperformed the supervised variants, including a supervised variant with Laplacian regularization. We additionally analyze the weights learned by the regression in order to infer brain regions that are important to different types of visual processing.
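To make the central technique concrete, the following is a minimal sketch of regularized KCCA on two data views. The function names, the linear kernels, and the regularization constant are illustrative choices, not the exact pipeline used in the study; the reduction to an eigenproblem follows the standard regularized-KCCA formulation.

```python
import numpy as np

def center_kernel(K):
    # Double-center a kernel matrix (centering in feature space)
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kcca_top_correlation(Kx, Ky, reg=1e-2):
    """Regularized KCCA via the standard reduction to an eigenproblem:
    the leading eigenvalue of (Kx + rI)^-1 Ky (Ky + rI)^-1 Kx is the
    squared top canonical correlation between the two views."""
    n = Kx.shape[0]
    I = np.eye(n)
    M = np.linalg.solve(Kx + reg * I, Ky) @ np.linalg.solve(Ky + reg * I, Kx)
    top = max(float(np.max(np.linalg.eigvals(M).real)), 0.0)
    return np.sqrt(top)
```

Without the `reg` term the dual problem is degenerate (perfect correlations are always attainable), which is why regularization is essential in the kernelized setting.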

Recent research has shown that the use of contextual cues significantly improves performance
in sliding-window localization systems. In this work, we propose a method
that incorporates both global and local context information through appropriately defined
kernel functions. In particular, we make use of a weighted combination of kernels defined
over local spatial regions, as well as a global context kernel. The relative importance of
the context contributions is learned automatically, and the resulting discriminant function
has a form such that localization at test time can be solved efficiently using a branch-and-bound
optimization scheme. By specifying context directly with a kernel learning
approach, we achieve high localization accuracy with a simple and efficient representation.
This is in contrast to other systems that incorporate context but require expensive
inference at test time. We show experimentally on the PASCAL VOC
datasets that the inclusion of context can significantly improve localization performance,
provided the relative contributions of the context cues are learned appropriately.
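The weighted kernel combination described above can be sketched as follows. In the paper the weights are learned automatically (e.g. by multiple kernel learning); here they are simply given, and the function names are illustrative:

```python
import numpy as np

def combined_kernel(local_kernels, global_kernel, local_weights, global_weight):
    """Weighted combination of kernels over local spatial regions plus a
    global context kernel. A nonnegative combination of positive
    semidefinite kernels is again a valid (positive semidefinite) kernel."""
    K = global_weight * global_kernel
    for w, K_local in zip(local_weights, local_kernels):
        K = K + w * K_local
    return K
```

Because the combination is linear, the resulting discriminant decomposes over the component kernels, which is what allows efficient branch-and-bound maximization over candidate windows at test time.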

Resting-state activity is brain activation that arises in the absence of any task; it is usually measured in awake subjects during prolonged fMRI scanning sessions in which the only instruction is to close the eyes and do nothing. It has been recognized in recent years that resting-state activity is implicated in a wide variety of brain functions. While certain networks of brain areas show different levels
of activation at rest and during a task, there is nevertheless significant similarity between activations in the two cases. This suggests that recordings of resting-state
activity can be used as a source of unlabeled data to augment discriminative regression techniques in a semi-supervised setting. We evaluate this setting
empirically and obtain three main results: (i) regression tends to be improved by Laplacian regularization even when no additional unlabeled data are available; (ii) resting-state data appear to have a marginal distribution similar to that recorded during the execution of a visual processing task, implying largely similar types of activation; and (iii) this source of information can be broadly exploited to improve the robustness of empirical inference in fMRI studies, an inherently data-poor domain.
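The semi-supervised setting above can be sketched in the style of Laplacian-regularized least squares: unlabeled points (e.g. resting-state scans) enter only through a graph smoothness penalty. All names and parameter values here are illustrative assumptions, not the study's actual configuration:

```python
import numpy as np

def graph_laplacian(W):
    # Unnormalized graph Laplacian L = D - W
    return np.diag(W.sum(axis=1)) - W

def lap_rls(K, y_labeled, n_labeled, W, lam=1e-1, lam_lap=1e-2):
    """Laplacian-regularized kernel ridge regression. The first n_labeled
    points carry labels; the remaining (unlabeled) points influence the
    solution only through the similarity graph W, which penalizes
    predictions that vary across similar points."""
    n = K.shape[0]
    J = np.zeros((n, n))
    J[:n_labeled, :n_labeled] = np.eye(n_labeled)  # selects labeled rows
    y = np.zeros(n)
    y[:n_labeled] = y_labeled
    L = graph_laplacian(W)
    # Representer-theorem solution: f(.) = sum_i alpha_i k(., x_i)
    alpha = np.linalg.solve(J @ K + lam * np.eye(n) + lam_lap * L @ K, y)
    return alpha
```

Transductive predictions on all points are then `f = K @ alpha`; setting `lam_lap = 0` recovers plain kernel ridge regression on the labeled points alone.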

We present a new technique for structured prediction that works in a hybrid generative/discriminative way, using a one-class support vector machine to model the joint probability of (input, output) pairs in a joint reproducing kernel Hilbert space. Compared to discriminative techniques, like conditional random fields or structured output SVMs, the proposed method has the advantage that its training time depends only on the number of training examples, not on the size of the label space. Due to its generative aspect, it is also very tolerant against ambiguous, incomplete or incorrect labels. Experiments on realistic data show that our method works efficiently and robustly in situations for which discriminative techniques have computational or statistical problems.
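A minimal sketch of prediction with a joint kernel follows. For simplicity it uses uniform dual weights, i.e. a Parzen-window score (which corresponds to the nu = 1 special case of a one-class SVM); the kernel choices and function names are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def joint_kernel(x1, y1, x2, y2, gamma=1.0):
    # Product joint kernel: Gaussian kernel on inputs times a delta kernel
    # on the outputs (an assumption made here for simplicity)
    kx = np.exp(-gamma * np.sum((x1 - x2) ** 2))
    return kx if y1 == y2 else 0.0

def predict(x, train_x, train_y, label_space, gamma=1.0):
    """Score each candidate output under the estimated joint support and
    return the argmax. With uniform weights the score is just the average
    joint-kernel similarity to the training pairs."""
    def score(y):
        return sum(joint_kernel(x, y, xi, yi, gamma)
                   for xi, yi in zip(train_x, train_y))
    return max(label_space, key=score)
```

Note that training only estimates the support of the (input, output) distribution, so its cost depends on the number of training pairs; the label space enters only through the argmax at prediction time.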


Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.