Teaching

I teach on a regular basis on various topics (scientific computing, machine
learning, computational neuroscience, computer science). Some courses have
taken place at the national level and are consequently in French; others have
been taught at the international level and are in English. Sources for the
slides are available upon request (by mail).

Computational Neuroscience

From single neuron to behavior (2016)

This lecture introducing computational neuroscience was given during the VIII
course of the International School of Bioelectromagnetics “Alessandro
Chiabrera”: Electromagnetic Fields and the Nervous System: Biological Effects,
Biophysical Mechanisms, Methods, and Medical Applications, 2016, Erice, Italy.

Advanced neural fields (2015)

This lecture on advanced neural fields and cognition was given during the
“Neural mass and neural field models” tutorial session of the Computational
Neuroscience Symposium (CNS), 2015, Prague, Czech Republic.

Scientific visualization

Matplotlib tutorial (2015)

Matplotlib is probably the single most used Python package for 2D graphics. It
provides both a very quick way to visualize data from Python and the ability to
output publication-quality figures in many formats. This tutorial covers the
main aspects of matplotlib through a series of exercises.
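As a taste of the two use cases mentioned above, here is a minimal sketch (an
illustration, not an excerpt from the tutorial): a quick look at some data,
followed by the same figure tuned and saved as a publication-quality file.

```python
import numpy as np
import matplotlib.pyplot as plt

# Quick look at some data: a couple of lines is enough to get a figure on screen.
x = np.linspace(0, 2 * np.pi, 256)
plt.plot(x, np.sin(x), label="sin(x)")
plt.plot(x, np.cos(x), label="cos(x)")
plt.legend()
plt.show()

# The same figure, tuned and exported as a publication-quality vector file.
fig, ax = plt.subplots(figsize=(6, 4))
ax.plot(x, np.sin(x), color="C0", linewidth=1.5, label="sin(x)")
ax.plot(x, np.cos(x), color="C1", linewidth=1.5, label="cos(x)")
ax.set_xlabel("x (radians)")
ax.set_ylabel("amplitude")
ax.legend(frameon=False)
fig.savefig("figure.pdf", dpi=300, bbox_inches="tight")
```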

10 Simple rules for better figures (2014)

Scientific visualization is classically defined as the process of graphically
displaying scientific data. However, this process is far from direct or
automatic. There are so many different ways to represent the same data: scatter
plots, linear plots, bar plots, and pie charts, to name just a
few. Furthermore, the same data, using the same type of plot, may be perceived
very differently depending on who is looking at the figure. A more accurate
definition for scientific visualization would be a graphical interface between
people and data.

Numerical computing

From Python to Numpy (2017)

There is already a fair number of books about Numpy (see Bibliography), and a
legitimate question is whether another book is really necessary. As you may
have guessed by reading these lines, my personal answer is yes, mostly because
I think there is room for a different approach, concentrating on the migration
from Python to Numpy through vectorization. There are a lot of techniques that
you won't find in books; they are mostly learned through experience. The goal
of this book is to explain some of them and to help you acquire experience in
the process.
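To illustrate what migrating from Python to Numpy through vectorization means
in practice, here is a small sketch (a toy example for this page, not taken
from the book): the same computation written first as a plain Python loop,
then as a single vectorized NumPy expression.

```python
import numpy as np

def distances_python(points, origin):
    # Plain Python: iterate explicitly over every point.
    result = []
    for x, y in points:
        result.append(((x - origin[0]) ** 2 + (y - origin[1]) ** 2) ** 0.5)
    return result

def distances_numpy(points, origin):
    # NumPy: express the same computation on the whole array at once.
    points = np.asarray(points, dtype=float)
    return np.sqrt(((points - origin) ** 2).sum(axis=1))

points = [(1.0, 2.0), (3.0, 4.0), (5.0, 6.0)]
print(distances_python(points, (0.0, 0.0)))
print(distances_numpy(points, np.array([0.0, 0.0])))
```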

100 Numpy exercises (2016)

This lesson gathers 100 Numpy exercises ranging from the very easy to the
extremely esoteric (not for the faint of heart). These exercises have been
collected from the numpy mailing list, from Stack Overflow and from other
sources as well (mostly personal communication). The goal is both to offer a
quick reference for new and old users and to provide a set of exercises for
those who teach.
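To give a flavor of the format, two typical entries might look roughly like
this (paraphrased for illustration, not quoted verbatim from the collection):

```python
import numpy as np

# Easy: create a 10x10 array with 1 on the border and 0 inside.
Z = np.ones((10, 10))
Z[1:-1, 1:-1] = 0

# Harder: given a 1D array, find the value closest to a given scalar.
Z = np.random.uniform(0, 10, 100)
target = 5.0
closest = Z[np.abs(Z - target).argmin()]
print(closest)
```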

Introduction to Numpy (2015)

NumPy is the fundamental package for scientific computing with Python. Besides
its obvious scientific uses, NumPy can also be used as an efficient
multi-dimensional container of generic data. Arbitrary data-types can be
defined and this allows NumPy to seamlessly and speedily integrate with a wide
variety of projects. This lesson is based on the programming of the game of
life using numpy for efficient computation.
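The vectorized core of such a Game of Life implementation can be sketched as
follows (a sketch in the spirit of the lesson, not necessarily the exact code
used there): all neighbour counts are computed at once with shifted array
slices instead of looping over cells.

```python
import numpy as np

def step(Z):
    # Count the eight neighbours of every inner cell using shifted slices.
    N = (Z[0:-2, 0:-2] + Z[0:-2, 1:-1] + Z[0:-2, 2:] +
         Z[1:-1, 0:-2]                 + Z[1:-1, 2:] +
         Z[2:  , 0:-2] + Z[2:  , 1:-1] + Z[2:  , 2:])
    # Apply the rules: birth on exactly 3 neighbours, survival on 2 or 3.
    birth = (N == 3) & (Z[1:-1, 1:-1] == 0)
    survive = ((N == 2) | (N == 3)) & (Z[1:-1, 1:-1] == 1)
    Z[...] = 0
    Z[1:-1, 1:-1][birth | survive] = 1
    return Z

# A glider on a small board.
Z = np.zeros((16, 16), dtype=int)
Z[1:4, 1:4] = [[0, 0, 1], [1, 0, 1], [0, 1, 1]]
for _ in range(4):
    Z = step(Z)
```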

Embodied Cognition (2010)

Twenty years ago, R. Brooks revealed to the A.I. community that elephants don’t
play chess. Ten years later, A. Clark explained that “we ignore the fact that
the biological mind is, first and foremost, an organ for controlling the
biological body. Minds make motions, and they must make them fast - before the
predator catches you, or before your prey gets away from you. Minds are not
disembodied logical reasoning devices”. This lecture proposes to look back at
(almost) 60 years of Artificial Intelligence research in order to address the
question of what has been accomplished so far towards our understanding of
intelligence and cognition. In this context, we’ll introduce the
action-perception loop, the embodied cognition paradigm and the symbol
grounding problem as it has been identified by Stevan Harnad. This problem has
become prominent in the cognitive science community, and the idea that a symbol
is much more than a mere meaningless token that can be processed through some
algorithm sheds a new light on higher brain functions. More specifically, we’ll
explain how those theories can impact modeling in computer vision.

Visual attention (2010)

This lecture proposes to review current psychological and physiological data
and classical experiments related to visual attention, as well as anatomical
and physiological data related to oculomotor control in the primate. We will
introduce the two main forms of visual attention, namely exogenous (bottom-up)
and endogenous (top-down) visual attention, which are known to play a critical
role in the perception and processing of a visual scene. Facilitating and
inhibitory effects of visual attention will be presented in light of Posner's
experiments (1980) on the concept of inhibition of return, which plays a major
role in a number of computational models of visual attention. Finally,
integrative theories related to visual attention will be introduced, namely the
premotor theory of attention, the active perception paradigm and the deictic
codes for the embodiment of cognition.

Introduction to Neural Fields (2010)

This lecture introduces the main concepts related to classical artificial
neural networks as well as computational neuroscience. Standard artificial
neural network models related to supervised, unsupervised and reinforcement
learning will be briefly introduced, as well as key concepts from neuro-anatomy
and neuro-physiology. This lecture will also focus on dynamic neural field
(DNF) theory as originally introduced by Wilson and Cowan in the early
seventies and later formalized by S.-I. Amari and J.G. Taylor. These theories
explain the dynamics of pattern formation for lateral-inhibition type
homogeneous neural fields with general connections. They show that, under some
conditions, continuous attractor neural networks are able to maintain a
localised bubble of activity in direct relation with the excitation provided by
a stimulation. We will investigate these theories further in order to explain
how their functional properties can be linked to visual attention, defined as
the capacity to attend to one stimulus in spite of noise, distractors or
saliency effects.
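For reference, a minimal numerical sketch of the kind of field equation
discussed in this lecture (the standard Amari formulation, discretised here
with arbitrary illustrative parameter values, not the code used in the lecture)
is given below.

```python
import numpy as np

# A minimal 1D discretisation of the Amari neural field equation
#   tau du(x,t)/dt = -u(x,t) + integral w(x-y) f(u(y,t)) dy + I(x,t) + h
# Illustrative sketch only; parameter values are arbitrary examples.
n, dt, tau, h = 256, 0.1, 1.0, 0.0
x = np.linspace(-1, 1, n)
dx = x[1] - x[0]

def w(d, Ae=1.50, se=0.10, Ai=0.75, si=1.00):
    # Difference-of-Gaussians lateral connectivity:
    # local excitation, broader inhibition.
    return Ae * np.exp(-d**2 / (2 * se**2)) - Ai * np.exp(-d**2 / (2 * si**2))

W = w(np.abs(x[:, None] - x[None, :]))
f = lambda u: (u > 0).astype(float)        # Heaviside firing-rate function
I = np.exp(-(x - 0.3)**2 / (2 * 0.05**2))  # localized stimulation

u = np.zeros(n)
for _ in range(500):
    u += dt / tau * (-u + (W @ f(u)) * dx + I + h)
# The intent is that u settles into a localized bubble of activity around the
# stimulated location (x = 0.3); some parameter tuning may be required.
```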

Models of Visual Attention (2010)

The visual exploration of a scene involves the interplay of several competing
processes (for example to select the next saccade or to keep fixation) and the
integration of bottom-up (e.g. contrast) and top-down information (the target
of a visual search task). Identifying the neural mechanisms involved in these
processes and in the integration of this information remains a challenging
question. Visual attention refers to all these processes, both when the eyes
remain fixed (covert attention) and when they are moving (overt
attention). Popular computational models of visual attention consider that the
visual information remains fixed while attention is deployed, whereas primates
actually execute around three saccadic eye movements per second, abruptly
changing the whole visual input. We’ll introduce in this lecture a model
relying on dynamic neural fields and show that covert and overt attention can
emerge from such a substratum. We’ll identify and propose a possible
interaction of four elementary mechanisms for selecting the next locus of
attention, memorizing the previously attended locations, anticipating the
consequences of eye movements and integrating bottom-up and top-down
information in order to perform a visual search task with saccadic eye
movements.