Participants coming from Msida, Gzira, Sliema, St Julian’s, or anywhere else other than Valletta can first catch any bus numbered 12, 13 or 15, which will take them to the bus terminus in Valletta.

From Valletta, take any bus numbered 51, 52, 53, 54 or 55 and get off at the bus stop called Mile End (facing Schembri Street); you may ask the driver to let you off there. This stop is about a 10-minute drive from Valletta. Then follow Schembri Street for about 200 metres until you reach the roundabout. The Saint Martin’s Institute building is on the left side of the roundabout.

For further details you may call Saint Martin’s Institute of Higher Education on +356 21235451.

This tutorial provides an introduction to distance- or similarity-based systems in the context of supervised learning. So-called Learning Vector Quantization (LVQ), in which classes are represented by prototype vectors, serves as a particularly intuitive example framework for distance-based classification. In this context, a key step in the design of a classifier is the choice of an appropriate distance or similarity measure. Besides standard Euclidean metrics, unconventional measures such as statistical divergences and kernelized distances are discussed. Furthermore, the elegant framework of relevance learning is introduced, in which adaptive distance measures are employed and optimized in the data-driven training process. Benchmark problems and real-world applications, mainly from the biomedical domain, will be presented in order to illustrate the different concepts and approaches and to demonstrate their usefulness.

In this tutorial Prof. Biehl will provide MATLAB code for hands-on experience with different extensions of the LVQ algorithm. It would be beneficial if you could bring a laptop with a basic MATLAB installation (no extra toolboxes will be needed).
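To give a flavour of prototype-based classification before the hands-on session, the basic LVQ1 update rule can be sketched in a few lines. This is an illustrative sketch in Python (the tutorial's own material is in MATLAB), and the toy data, prototype initializations and learning rate below are assumptions for demonstration only: the winning (closest) prototype is attracted to a training sample of the same class and repelled otherwise.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=20):
    """Basic LVQ1: attract the nearest prototype if its label matches the
    sample's label, repel it otherwise. Uses squared Euclidean distance."""
    P = prototypes.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            d = np.sum((P - xi) ** 2, axis=1)  # distances to all prototypes
            j = np.argmin(d)                   # winner: closest prototype
            sign = 1.0 if proto_labels[j] == yi else -1.0
            P[j] += sign * lr * (xi - P[j])    # attract (+) or repel (-)
    return P

def lvq_predict(X, prototypes, proto_labels):
    """Assign each sample the label of its nearest prototype."""
    d = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
    return proto_labels[np.argmin(d, axis=1)]

# Toy two-class problem: two well-separated Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
proto_labels = np.array([0, 1])
protos = lvq1_train(X, y, np.array([[1.0, 1.0], [2.0, 2.0]]), proto_labels)
acc = (lvq_predict(X, protos, proto_labels) == y).mean()
```

Replacing the squared Euclidean distance in `lvq1_train` with another (possibly adaptive) measure is exactly the design choice the tutorial explores.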

Title: Deep Learning and Lifelogging - how far are we from being able to explain a person's lifestyle using Computer Vision?

The recently emerged technology of visual lifelogging consists of acquiring images that capture our everyday experience by wearing a camera over long periods of time. The collected data have a number of potential applications, since they can document the circumstances of a person’s activities, state, environment and social context. However, due to the low temporal resolution of lifelogging data (2 to 3 frames per minute) and the huge number of images collected over a long period of time (up to 100,000 images per month), extracting and locating relevant content in the collection are major challenges that strongly limit its utility and usability in practice.

The aim of this tutorial is to give participants insight into state-of-the-art techniques for automatic analysis of visual egocentric data, and how to apply them to real-world problems. First, we will give an overview of the Deep Learning techniques for image analysis that are currently revolutionising the Computer Vision field. Then, after a brief introduction to the field of lifelogging, we will focus on the goals of extracting meaningful semantic information and enabling fast and easy access to the content of visual lifelogs. We will address three main problems: temporal segmentation, detection of social events, and discovery of commonly used objects. In the last part of the talk, we will touch upon applications to health in order to illustrate the different techniques and approaches and to demonstrate their usefulness and applicability.
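The first of those problems, temporal segmentation, can be illustrated with a deliberately simple baseline: start a new segment whenever the cosine distance between consecutive per-image feature vectors exceeds a threshold. This Python sketch is an assumption-laden stand-in (random toy "frames" instead of real images, a hand-picked threshold, no learned features), not any method presented in the tutorial:

```python
import numpy as np

def segment_lifelog(features, threshold=0.5):
    """Split a sequence of per-image feature vectors into temporal segments,
    opening a new segment whenever the cosine distance between consecutive
    frames exceeds `threshold`. A simple baseline, not a learned method."""
    # L2-normalize so the dot product equals cosine similarity
    F = features / np.linalg.norm(features, axis=1, keepdims=True)
    boundaries = [0]
    for t in range(1, len(F)):
        if 1.0 - float(F[t] @ F[t - 1]) > threshold:  # cosine distance
            boundaries.append(t)
    # Return (start, end) index pairs, end exclusive
    return [(s, e) for s, e in zip(boundaries, boundaries[1:] + [len(F)])]

# Toy "lifelog": 6 frames near feature direction A, then 4 frames near B
rng = np.random.default_rng(1)
A, B = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
frames = np.vstack([A + rng.normal(0, 0.05, 3) for _ in range(6)] +
                   [B + rng.normal(0, 0.05, 3) for _ in range(4)])
segments = segment_lifelog(frames, threshold=0.5)  # two segments expected
```

In practice the feature vectors would come from a convolutional network rather than raw pixels, which is where the Deep Learning techniques covered in the tutorial come in.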

Title: The Bag of Visual Words model and recent advancements in image classification

Automatic image classification and concept detection are important tasks that allow multimedia systems to bridge the Semantic Gap and permit efficient search of multimedia data with textual keywords. In this tutorial we will review the standard image classification pipeline based on the quantization of local features (e.g. SIFT) following the bag-of-visual-words paradigm. We will take a historical perspective on the literature, describing the major improvements proposed, with particular reference to commonly adopted public datasets (e.g. the Caltech and Pascal VOC datasets). We will present recent local descriptor aggregation techniques and explicit feature mappings, in order to stress the importance of careful data treatment and their connection with previous approaches. We will further show how these approaches can be used with online linear classifiers, in order to scale to a large number of images and categories. A final part will touch upon Deep Learning approaches, to mark the current state of the art.
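The core of the pipeline reviewed above — quantizing local descriptors against a learned visual vocabulary and pooling them into a histogram — can be sketched as follows. This is a minimal Python illustration with random vectors standing in for SIFT descriptors and a tiny k-means in place of a production clusterer; vocabulary size and all data are illustrative assumptions:

```python
import numpy as np

def kmeans(descriptors, k, iters=25, seed=0):
    """Tiny k-means to build the visual vocabulary (in practice one would
    cluster millions of SIFT descriptors from a training set)."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        d = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)              # nearest center per descriptor
        for j in range(k):
            if np.any(labels == j):
                centers[j] = descriptors[labels == j].mean(axis=0)
    return centers

def bovw_histogram(image_descriptors, vocabulary):
    """Quantize each local descriptor to its nearest visual word and return
    an L1-normalized word-frequency histogram for the image."""
    d = ((image_descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

# Toy setup: 128-D vectors standing in for SIFT descriptors
rng = np.random.default_rng(0)
train_desc = rng.normal(size=(500, 128))       # descriptors from "training" images
vocab = kmeans(train_desc, k=8)                # visual vocabulary of 8 words
image_hist = bovw_histogram(rng.normal(size=(60, 128)), vocab)
```

The resulting fixed-length histogram is what gets fed to a (possibly online, linear) classifier, which is why the representation scales so well to many images and categories.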

Connected filters have rapidly become one of the most important classes of morphological filters. They allow edge-preserving image simplification using a variety of strategies, and can be applied to many different tasks, ranging from image de-noising at the low-level end of the spectrum to object recognition at the high-level end. Besides their edge-preserving nature, connected filters can model the Gestalt notion of perceptual grouping by using more generalized notions of connectivity, allowing, e.g., a flock of birds to be viewed as a single entity. Furthermore, they allow very fast multi-scale analysis of images and volumes, and can be made scale- or even affine-invariant very easily. They have deep theoretical links to segmentation, as witnessed by the recent development of the notion of connective segmentation. Hyperconnected filters initially formed an extension of connected filters which allowed overlap between objects. More recently it has been shown that they encompass a large set of adaptive morphological filters, and form a bridge between connected filters and more traditional mathematical morphology. In this tutorial the foundations of (hyper)connected filters and (hyper)connectivity will be presented. The aim is to give participants insight into the properties of these methods, and how to apply them to practical problems.
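The simplest example of a connected filter is a binary area opening: connected components smaller than a given area are removed outright, while surviving components are kept pixel-for-pixel, which is exactly the edge-preserving behaviour described above. The Python sketch below uses a plain flood fill on a toy binary image; efficient real implementations rely on max-tree algorithms instead, and the image and threshold here are illustrative assumptions:

```python
from collections import deque

def binary_area_opening(image, min_area):
    """Remove 4-connected foreground components with fewer than `min_area`
    pixels. Edge-preserving: components that survive are kept exactly as-is.
    (Real implementations use max-trees; this is a plain BFS sketch.)"""
    rows, cols = len(image), len(image[0])
    out = [[0] * cols for _ in range(rows)]
    seen = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if image[r][c] and not seen[r][c]:
                # Flood-fill one connected component
                comp, q = [], deque([(r, c)])
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) >= min_area:      # keep large components only
                    for y, x in comp:
                        out[y][x] = 1
    return out

# A 3-pixel blob survives intact; the isolated pixel is removed entirely
img = [[1, 1, 0, 0],
       [1, 0, 0, 1],
       [0, 0, 0, 0]]
filtered = binary_area_opening(img, min_area=3)
```

Replacing the 4-neighbour adjacency with a more generalized connectivity relation is what lets such filters treat, say, a cluster of nearby blobs (the flock of birds) as a single component.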