My work mainly focuses on Recurrent Neural Network modelling (especially of the prefrontal cortex), Language Acquisition (applied to Robotics), and the exploration of the neural codes of bird song syntax. For modelling, I mainly use Echo State Networks, which are part of the Reservoir Computing framework (here is an introduction to RC).
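As a minimal illustration of the idea, an Echo State Network can be sketched in a few lines of numpy: the input and recurrent weights stay fixed and random, and only the linear readout is trained. The hyperparameters below (reservoir size, leak rate, spectral radius, ridge term) are illustrative choices for a toy task, not values taken from my models.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative dimensions (not from any specific model)
n_inputs, n_reservoir = 1, 100

# Fixed random input and recurrent weights; only the readout is trained
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # rescale spectral radius to 0.9

leak = 0.3  # leak rate of the leaky-integrator units

def run_reservoir(inputs):
    """Collect reservoir states for an input sequence of shape (T, n_inputs)."""
    states = np.zeros((len(inputs), n_reservoir))
    x = np.zeros(n_reservoir)
    for t, u in enumerate(inputs):
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        states[t] = x
    return states

# Toy task: one-step-ahead prediction of a sine wave
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t)[:, None]
target = np.roll(u, -1, axis=0)  # target[t] = u[t + 1]

washout = 100  # discard initial transient states
X = run_reservoir(u)
Xw, Yw = X[washout:-1], target[washout:-1]

# Train the readout with ridge regression (closed form)
ridge = 1e-6
W_out = np.linalg.solve(Xw.T @ Xw + ridge * np.eye(n_reservoir), Xw.T @ Yw).T

pred = Xw @ W_out.T
mse = np.mean((pred - Yw) ** 2)
print(f"train MSE: {mse:.2e}")
```

Because training reduces to a single linear regression on the collected states, such models are cheap to train compared to backpropagation through time, which is one reason they are attractive for modelling cortical sequence processing.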

The common thread of my research is the exploration of the neural coding and the modelling
of complex sequence processing, chunking, learning and production for
“syntax-based” sequences, and the application of these models to
robotics (for future embodiment purposes). In particular, my artificial-neural-network-based models focus on the dynamics
of the prefrontal cortex and basal ganglia.

With these models, I am interested in
human (and robot) grammar learning and acquisition, as well as the
categorization of monkey motor action sequences. In addition, I have worked on
experimental protocols and on the analysis of the neural coding of canary
song. Canary song is particularly interesting for better understanding human
language mechanisms because it is highly variable, has a
complex “syntax”, and continues to evolve even during adult life.

Short bio

After obtaining my Ph.D. at the
University of Lyon in January 2013 at the Stem Cell and Brain Research
Institute (SBRI / INSERM 846) under the supervision of Peter Ford
Dominey, I did postdoctoral fellowships at the University of
Hamburg in 2013 and 2015 (Marie Curie Individual Fellowship) in the team
of Stefan Wermter, and a postdoctoral position (CNRS) in 2014 at the Paris-Saclay Neuroscience Institute
(NeuroPSI) in the team of Catherine Del Negro and Jean-Marc Edeline.

Main paper

If you read only one of my papers, it should be this one. For a more general overview, see the presentation slides about language processing with Reservoir Computing.

Some news!

October 2018

Summary presentation of my research on language processing with Reservoir Computing and application to Human-Robot Interaction.

NEW! IJCAI 2015 video: "Humanoidly Speaking": how the Nao humanoid robot can learn the names of objects and interact with them through everyday speech.
IJCAI 2015 video competition link.
Software: take a look at "Syntactic Reservoir Model" and "DOCKS" on the WTM team software webpage to download and try the source code and reproduce the same experiment.

Humanoidly Speaking

March 2015

I just began a new post-doc at the University of Hamburg with a grant from the European Union: