Publications

Abstract

We introduce a Maximum Entropy model able to capture the statistics of melodies in music. The model can be used to generate new melodies that emulate the style of a given musical corpus. Instead of the n-body interactions of (n−1)-order Markov models, traditionally used in automatic music generation, we use a k-nearest-neighbour model with pairwise interactions only. In this way, we keep the number of parameters low and avoid the over-fitting problems typical of Markov models. We show that long-range musical phrases need not be explicitly enforced through high-order Markov interactions, but can instead emerge from multiple, competing, pairwise interactions. We validate our Maximum Entropy model by assessing how well the generated sequences capture the style of the original corpus without plagiarizing it. To this end, we use a data-compression approach to discriminate between the levels of borrowing and innovation featured by the artificial sequences. Our modelling scheme outperforms both fixed-order and variable-order Markov models. This shows that, despite being based only on pairwise interactions, our scheme can generate musically sensible alterations of the original phrases, providing a way to produce innovation.
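The pairwise model described above can be illustrated with a minimal sketch: a sequence model whose energy is a sum of pairwise couplings between symbols at distances up to k, sampled with Gibbs updates. The alphabet, the coupling values, and the sequence length below are toy assumptions for illustration only, not the corpus-fitted parameters of the paper.

```python
import math
import random

def energy(seq, J, k):
    """Energy (negative log-probability up to a constant) of a sequence
    under a pairwise Maximum Entropy model with interactions up to distance k."""
    e = 0.0
    for i in range(len(seq)):
        for d in range(1, k + 1):
            if i + d < len(seq):
                e -= J.get((d, seq[i], seq[i + d]), 0.0)
    return e

def gibbs_sweep(seq, J, k, alphabet, rng):
    """One Gibbs-sampling sweep: resample each position from its
    conditional distribution given the rest of the sequence."""
    seq = list(seq)
    for i in range(len(seq)):
        logits = []
        for a in alphabet:
            s = 0.0
            for d in range(1, k + 1):
                if i - d >= 0:
                    s += J.get((d, seq[i - d], a), 0.0)
                if i + d < len(seq):
                    s += J.get((d, a, seq[i + d]), 0.0)
            logits.append(s)
        m = max(logits)
        weights = [math.exp(l - m) for l in logits]
        seq[i] = rng.choices(alphabet, weights=weights)[0]
    return seq

rng = random.Random(0)
alphabet = ["C", "D", "E", "G", "A"]          # toy pentatonic pitch alphabet
# Toy couplings: favour repeats at distance 1 and returns at distance 2.
J = {(1, a, a): 0.5 for a in alphabet}
J.update({(2, a, a): 1.0 for a in alphabet})

melody = [rng.choice(alphabet) for _ in range(16)]
for _ in range(20):
    melody = gibbs_sweep(melody, J, 2, alphabet, rng)
final_energy = energy(melody, J, 2)
```

In the actual model the couplings would be fitted to a musical corpus so that the pairwise statistics of the generated sequences match those of the data; here they are set by hand only to make the sampler runnable.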

Abstract

This article presents a mathematical framework based on information theory to compare temporally-extended embodied sensorimotor organizations. Central to this approach is the notion of configuration: a set of distances between information sources, statistically evaluated for a given time span. Because information distances capture simultaneously effects of physical closeness, intermodality, functional relationship and external couplings, a configuration characterizes an embodied interaction with a particular environment. In this approach, collections of skills can be mapped in a unified space as configurations of configurations. This article describes these different abstractions in a formal manner and presents results of preliminary experiments showing how this framework can be used to capture the behavioral organization of an autonomous robot.
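A minimal sketch of the central quantity, assuming the information distance between two discrete sources X and Y is taken as d(X, Y) = H(X|Y) + H(Y|X) = 2·H(X, Y) − H(X) − H(Y); the example time series are invented for illustration, and a configuration would be the full matrix of such distances over all source pairs.

```python
import math
from collections import Counter

def entropy(symbols):
    """Shannon entropy (in bits) of a discrete symbol sequence."""
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in Counter(symbols).values())

def info_distance(xs, ys):
    """Information distance between two jointly sampled discrete sources:
    d(X, Y) = H(X|Y) + H(Y|X) = 2*H(X, Y) - H(X) - H(Y)."""
    joint = entropy(list(zip(xs, ys)))
    return 2 * joint - entropy(xs) - entropy(ys)

def configuration(channels):
    """A configuration: the matrix of pairwise information distances
    between a set of sensorimotor channels, evaluated over one time span."""
    return [[info_distance(a, b) for b in channels] for a in channels]

# Toy discretized sensor streams (hypothetical data).
xs = [0, 1, 0, 1, 0, 1, 0, 1]
ys = [0, 0, 1, 1, 0, 0, 1, 1]
config = configuration([xs, ys])
```

Identical sources have distance zero, and the distance is symmetric and non-negative, which is what lets a configuration be treated as a point in a metric space of interactions.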

BibTeX entry

@INPROCEEDINGS{kaplan:05d,
  AUTHOR    = "Kaplan, F. and Hafner, V.V.",
  TITLE     = "Mapping the space of skills: An approach for comparing embodied sensorimotor organizations",
  BOOKTITLE = "Proceedings of the 4th IEEE International Conference on Development and Learning (ICDL-05)",
  PAGES     = "129--134",
  PUBLISHER = "IEEE",
  YEAR      = "2005",
}

Abstract

Continuous, real-time learning is a difficult problem in robotics. To learn efficiently, it is important to recognize the current situation and learn appropriately for that context. To be effective, this requires the integration of a large number of sensorimotor and cognitive signals, yet few principles for performing this integration have been proposed. Another limitation is the difficulty of including complete contextual information so as to avoid destructive interference when learning different tasks.
We suggest that a vertebrate brain structure important for sensorimotor coordination, the cerebellum, may provide answers to these difficult problems. We investigate how learning in the input layer of the cerebellum may successfully encode contextual knowledge in a representation useful for coordination and life-long learning. We propose that a sparsely-distributed and statistically-independent representation provides a valid criterion for the self-organizing classification and integration of context signals. A biologically motivated unsupervised learning algorithm that approximates such a representation is derived from maximum likelihood. This representation is beneficial for learning in the cerebellum because it simplifies the credit assignment problem between what must be learned and the signals in the current context that are relevant for learning it. Due to its statistical independence, this representation is also beneficial for life-long learning by reducing destructive interference across tasks, while retaining the ability to generalize. The benefits of the learning algorithm are investigated in a spiking model that learns to generate predictive smooth-pursuit eye movements to follow target trajectories.
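To give a flavour of maximum-likelihood learning of a statistically independent representation, the sketch below uses a standard natural-gradient infomax (ICA) update on linearly mixed sparse sources. This is a generic stand-in chosen for illustration, not the spiking cerebellar algorithm of the paper; the mixing matrix, learning rate, and source model are all toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two sparse (Laplacian) "context" sources, linearly mixed --
# a toy stand-in for the mixed signals reaching the input layer.
S = rng.laplace(size=(2, 5000))
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])          # hypothetical mixing matrix
X = A @ S

# Natural-gradient infomax ICA (Bell & Sejnowski / Amari):
# maximum-likelihood unmixing under a sparse (super-Gaussian) source prior.
W = np.eye(2)
lr = 0.01
n = X.shape[1]
for _ in range(200):
    U = W @ X                        # current estimate of the sources
    Y = 1.0 / (1.0 + np.exp(-U))     # logistic nonlinearity (source CDF model)
    W += lr * (np.eye(2) + (1 - 2 * Y) @ U.T / n) @ W

U = W @ X                            # recovered, approximately independent signals
```

The update drives the outputs toward statistical independence, which is the property the abstract argues reduces destructive interference across tasks.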