EPSRC funded research project hosted by the Centre for Vision, Speech and Signal Processing (CVSSP) of the University of Surrey (UK) and the Acoustics Research Centre at the University of Salford (UK)

The research project focuses on how sound data can be converted into understandable and actionable information by humans and machines. It started on 14 March 2016 and will run until 13 March 2019, funded by the Engineering and Physical Sciences Research Council (EPSRC) with a funding value of £1,275,401. The project is a joint effort between the Centre for Vision, Speech and Signal Processing (CVSSP) at the University of Surrey and the Acoustics Research Centre at the University of Salford.

Christian Kroos gave an interview to the Russian news agency TASS on the question of whether AI and robots can create art, prompted by his upcoming invited talk at the VII St. Petersburg International Cultural Forum (Saint Petersburg, Russia).

Yong Xu, Qiuqiang Kong, Wenwu Wang and Mark Plumbley won first prize in Task 4, ‘large-scale weakly supervised sound event detection for smart cars’, Subtask A, ‘audio tagging’, of the IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events (DCASE 2017). The DCASE challenge is the most prominent challenge in the non-speech audio domain. It is organised by Tampere University of Technology, Carnegie Mellon University and INRIA, and sponsored by Google and Audio Analytic. Because of its unique standing, the leading groups in the field take part, including CMU, New York University, Bosch, USC, TUT, Singapore A*STAR, the Korea Advanced Institute of Science and Technology, Seoul National University, National Taiwan University and CVSSP.