This paper describes a method for building semantic scene models from video data using observed motion. We do this through unsupervised clustering of simple yet novel motion descriptors, which provide a quantized representation ...

This paper describes a method for building visual “maps” from video data using quantized descriptions of motion. This enables unsupervised classification of scene regions based upon the motion patterns observed within them. ...

This paper describes new work that models human behaviour not at the level of visible motion patterns, but at the level of intentions. By inferring intentions in terms of known goals, ...

We propose a new method that models visible human behaviour at the level of navigational strategies. By inferring intentions in terms of known goals, it is possible to explain the behaviour of people moving around within ...
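The idea of inferring intentions in terms of known goals can be illustrated with a minimal sketch: given a person's trajectory and a set of candidate goal locations, score each goal by how consistently the person's step directions point towards it, and take the best-scoring goal as the inferred intention. The scoring rule, the goal names, and `infer_goal` are all hypothetical illustrations, not the paper's actual model.

```python
import numpy as np

def infer_goal(trajectory, goals):
    """Score each known goal by the mean cosine similarity between the
    person's step directions and the direction towards that goal.

    trajectory: sequence of (x, y) positions over time.
    goals: dict mapping goal name -> (x, y) location.
    Returns (best goal name, per-goal scores).
    Hypothetical scoring rule for illustration only.
    """
    traj = np.asarray(trajectory, dtype=float)
    steps = np.diff(traj, axis=0)                      # per-frame displacement
    step_dirs = steps / np.linalg.norm(steps, axis=1, keepdims=True)
    scores = {}
    for name, g in goals.items():
        to_goal = np.asarray(g, dtype=float) - traj[:-1]   # vector to goal at each step
        goal_dirs = to_goal / np.linalg.norm(to_goal, axis=1, keepdims=True)
        # mean cosine similarity between actual heading and goal direction
        scores[name] = float((step_dirs * goal_dirs).sum(axis=1).mean())
    return max(scores, key=scores.get), scores

# Toy example: two hypothetical goals and a trajectory heading rightwards.
goals = {"doorway": (10.0, 0.0), "kiosk": (0.0, 10.0)}
walk = [(0, 0), (2, 0.2), (4, 0.1), (6, 0.3), (8, 0.2)]
best, scores = infer_goal(walk, goals)
```

Here the rightward walk is explained far better by the "doorway" goal than by the "kiosk" goal, so `best` comes out as `"doorway"`; a fuller model would also account for obstacles and changes of intention along the way.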

This paper describes a method for building visual scene models from video data using quantized descriptions of motion. This method enables us to make meaningful statements about video scenes as a whole (such as “this video ...
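The pipeline sketched in these abstracts, quantizing observed motion into simple descriptors and then clustering scene regions unsupervised, can be illustrated roughly as follows. The direction-histogram descriptor, the toy region names, and the plain k-means step are assumptions for the sketch; the papers' actual descriptors and clustering method are not reproduced here.

```python
import numpy as np

def quantize_flow(dx, dy, n_bins=8):
    """Quantize a motion vector into one of n_bins direction bins
    (a hypothetical stand-in for the papers' motion descriptors)."""
    angle = np.arctan2(dy, dx) % (2 * np.pi)
    return int(angle / (2 * np.pi / n_bins)) % n_bins

def region_histograms(flows, n_bins=8):
    """One normalised direction histogram per scene region.
    flows: dict region_id -> list of (dx, dy) motion observations."""
    hists = {}
    for region, vectors in flows.items():
        h = np.zeros(n_bins)
        for dx, dy in vectors:
            h[quantize_flow(dx, dy, n_bins)] += 1
        hists[region] = h / max(h.sum(), 1)
    return hists

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: regions whose motion histograms are similar
    end up with the same (unsupervised) label."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy scene: two regions dominated by rightward motion, two by upward motion.
flows = {
    "road_a": [(1, 0.0), (1, 0.05)],
    "road_b": [(1, 0.1), (1, 0.02)],
    "path_a": [(-0.05, 1), (-0.1, 1)],
    "path_b": [(-0.02, 1), (-0.08, 1)],
}
X = np.stack(list(region_histograms(flows).values()))
labels = kmeans(X, k=2)
```

With this toy data the two "road" regions receive one label and the two "path" regions the other, which is the sense in which motion patterns alone let the system make statements about what kind of region each part of the scene is.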