Search on speech (SoS) is a challenging area due to the huge amount of information stored in audio and video repositories. Spoken term detection (STD) is an SoS-related task aiming to retrieve data from a spee...

Voice-enabled interaction systems in domestic environments have attracted significant interest recently, being the focus of smart home research projects and commercial voice assistant home devices. Within the ...

Speech emotion recognition methods combining articulatory information with acoustic features have been previously shown to improve recognition performance. Collection of articulatory data on a large scale may ...

In this paper, we apply a latent class model (LCM) to the task of speaker diarization. LCM is similar to Patrick Kenny’s variational Bayes (VB) method in that it uses soft information and avoids premature hard...

We propose a new method for music detection from broadcasting contents using convolutional neural networks with a Mel-scale kernel. In this detection task, music segments should be annotated from the broad...

Singing voice analysis has been a topic of research to assist several applications in the domain of music information retrieval systems. One such major area is singer identification (SID). There has been enormo...

Authors: Deepali Y. Loni and Shaila Subbaraman

Citation: EURASIP Journal on Audio, Speech, and Music Processing 2019, 2019:10

There are many studies on detecting human speech from artificially generated speech, and on automatic speaker verification (ASV), which aims to detect and identify whether a given speech sample belongs to a given speaker....

In this paper, an adaptive averaging a priori SNR estimation employing critical band processing is proposed. The proposed method modifies the current decision-directed a priori SNR estimation to achieve faster...

Authors: Lara Nahma, Pei Chee Yong, Hai Huyen Dam and Sven Nordholm

Citation: EURASIP Journal on Audio, Speech, and Music Processing 2019, 2019:7
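The classical decision-directed estimator that the work above modifies can be sketched as follows; the function name, the smoothing-weight default, and the SNR floor are illustrative choices, not values from the paper:

```python
import numpy as np

def decision_directed_xi(noisy_power, noise_power, prev_clean_power,
                         alpha=0.98, xi_min=1e-3):
    """One step of the decision-directed a priori SNR estimate.

    noisy_power      : |Y(k,t)|^2, noisy-speech periodogram of the current frame
    noise_power      : estimated noise PSD sigma_n^2(k)
    prev_clean_power : |A(k,t-1)|^2, clean-speech power estimated last frame
    alpha            : smoothing weight; larger alpha -> smoother but slower tracking
    """
    gamma = noisy_power / noise_power  # a posteriori SNR
    xi = (alpha * prev_clean_power / noise_power
          + (1.0 - alpha) * np.maximum(gamma - 1.0, 0.0))
    return np.maximum(xi, xi_min)      # floor to limit musical noise
```

The trade-off the abstract alludes to lives in `alpha`: a large value smooths the estimate but tracks SNR changes slowly, which is what a modified (e.g., adaptively averaged) rule tries to improve.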

Dynamic time warping (DTW) can be used to compute the similarity between two sequences of generally differing length. We propose a modification to DTW that performs individual and independent pairwise alignmen...

Authors: Lerato Lerato and Thomas Niesler

Citation: EURASIP Journal on Audio, Speech, and Music Processing 2019, 2019:6
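For reference, standard DTW (the algorithm the paper modifies) is computed with the usual dynamic-programming recursion; the absolute-difference local cost below is an illustrative choice:

```python
import numpy as np

def dtw_distance(x, y):
    """Standard dynamic time warping between two 1-D sequences of
    possibly different lengths (not the paper's modified pairwise
    alignment). Local cost: absolute difference."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)  # cumulative cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # extend the cheapest of the three admissible predecessors
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because each element of one sequence may align with several elements of the other, the recursion handles sequences of differing length naturally.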

In response to renewed interest in virtual and augmented reality, the need for high-quality spatial audio systems has emerged. The reproduction of immersive and realistic virtual sound requires high resolution...

Authors: Zamir Ben-Hur, David Lou Alon, Boaz Rafaely and Ravish Mehra

Citation: EURASIP Journal on Audio, Speech, and Music Processing 2019, 2019:5

This paper proposes two novel linguistic features extracted from text input for prosody generation in a Mandarin text-to-speech system. The first feature is the punctuation confidence (PC), which measures the ...

Current automatic speech recognition (ASR) systems achieve 90–95% accuracy, depending on the methodology applied and datasets used. However, the level of accuracy decreases significantly when the same ASR...

Authors: Kacper Radzikowski, Robert Nowak, Le Wang and Osamu Yoshie

Citation: EURASIP Journal on Audio, Speech, and Music Processing 2019, 2019:3

The overall recognition rate decreases as emotional confusion increases in multi-class speech emotion recognition. To address this problem, we propose a speech emotion recognition method based on the dec...

Authors: Linhui Sun, Sheng Fu and Fu Wang

Citation: EURASIP Journal on Audio, Speech, and Music Processing 2019, 2019:2

Filter banks on spectra play an important role in many audio applications. Traditionally, the filters are linearly distributed on a perceptual frequency scale such as the Mel scale. To make the output smoother, th...

Authors: Teng Zhang and Ji Wu

Citation: EURASIP Journal on Audio, Speech, and Music Processing 2019, 2019:1
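A minimal sketch of the traditional layout the abstract refers to, with triangular filters placed linearly on the Mel scale; all parameter defaults are illustrative:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters=26, n_fft=512, sr=16000, fmin=0.0, fmax=None):
    """Triangular filters spaced linearly on the Mel scale, returned as
    a (n_filters, n_fft//2 + 1) weight matrix for a magnitude spectrum."""
    fmax = fmax or sr / 2.0
    # equally spaced center points on the Mel scale, mapped back to Hz
    mel_points = np.linspace(hz_to_mel(fmin), hz_to_mel(fmax), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        left, center, right = bins[i], bins[i + 1], bins[i + 2]
        for k in range(left, center):          # rising slope
            fb[i, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):         # falling slope
            fb[i, k] = (right - k) / max(right - center, 1)
    return fb
```

Equal spacing on the Mel scale means neighboring filters overlap by half their width, which is the smoothing behavior the abstract mentions.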

This paper deals with a project of Automatic Bird Species Recognition Based on Bird Vocalization. Eighteen bird species of 6 different families were analyzed. At first, human factor cepstral coefficients repre...

Authors: Jiri Stastny, Michal Munk and Lubos Juranek

Citation: EURASIP Journal on Audio, Speech, and Music Processing 2018, 2018:19

In this paper, a web-based spoken dialog generation environment is developed that enables users to edit dialogs with a video virtual assistant and to select the 3D motions and tone of voice for the assis...

In this paper, a robust and highly imperceptible audio watermarking technique is presented based on discrete cosine transform (DCT) and singular value decomposition (SVD). The low-frequency components of the a...

Authors: Aniruddha Kanhe and Aghila Gnanasekaran

Citation: EURASIP Journal on Audio, Speech, and Music Processing 2018, 2018:16
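One common way to combine SVD with transform coefficients for watermarking is quantization-index modulation of the largest singular value. The sketch below illustrates that idea on a generic coefficient matrix (in the paper, a matrix of low-frequency DCT coefficients); it is not the paper's exact embedding rule, and the step size `delta` is an assumption:

```python
import numpy as np

def embed_bit(block, bit, delta=0.5):
    """Embed one watermark bit by forcing the parity of the quantization
    index of the largest singular value of `block`."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    q = int(np.round(s[0] / delta))
    if q % 2 != bit:          # adjust index until its parity encodes the bit
        q += 1
    s[0] = q * delta
    return U @ np.diag(s) @ Vt

def extract_bit(block, delta=0.5):
    """Recover the bit from the parity of the quantized largest singular value."""
    s = np.linalg.svd(block, compute_uv=False)
    return int(np.round(s[0] / delta)) % 2
```

Because singular values are stable under common signal-processing attacks, the embedded parity tends to survive, which is the robustness property such schemes rely on.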

The emerging field of computational acoustic monitoring aims at retrieving high-level information from acoustic scenes recorded by some network of sensors. These networks gather large amounts of data requiring...

Several factors contribute to the performance of speaker diarization systems. For instance, the appropriate selection of speech features is one of the key aspects that affect speaker diarization systems. The o...

Authors: Abraham Woubie Zewoudie, Jordi Luque and Javier Hernando

Citation: EURASIP Journal on Audio, Speech, and Music Processing 2018, 2018:14

Recently, sound recognition has been used to identify sounds, such as the sound of a car, or a river. However, sounds have nuances that may be better described by adjective-noun pairs such as “slow car” and ve...

As the foundation of many applications, the multipitch estimation problem has long been a focus of acoustic music processing; however, existing algorithms perform poorly due to its complexity. In this pap...

Authors: Xingda Li, Yujing Guan, Yingnian Wu and Zhongbo Zhang

Citation: EURASIP Journal on Audio, Speech, and Music Processing 2018, 2018:11

Voice activity detection (VAD) is an important preprocessing step for various speech applications to identify speech and non-speech periods in input signals. In this paper, we propose a deep neural network (DN...

Authors: Suci Dwijayanti, Kei Yamamori and Masato Miyoshi

Citation: EURASIP Journal on Audio, Speech, and Music Processing 2018, 2018:10
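As a point of reference for what a VAD outputs, here is a minimal energy-threshold baseline (deliberately not the paper's DNN approach); frame length, hop, and threshold are illustrative values for 16 kHz audio:

```python
import numpy as np

def energy_vad(signal, frame_len=400, hop=160, threshold_db=-30.0):
    """Minimal energy-threshold VAD: one boolean (speech / non-speech)
    decision per analysis frame, relative to the signal's peak level."""
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    decisions = np.zeros(n_frames, dtype=bool)
    peak = np.max(np.abs(signal)) + 1e-12
    for t in range(n_frames):
        frame = signal[t * hop : t * hop + frame_len]
        rms = np.sqrt(np.mean(frame ** 2))
        # frame is "speech" if its level is within threshold_db of the peak
        decisions[t] = 20.0 * np.log10(rms / peak + 1e-12) > threshold_db
    return decisions
```

A learned VAD replaces the fixed energy rule with a classifier over richer features, which is what makes it robust where this baseline fails (e.g., low-SNR or non-stationary noise).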

The performance of automatic speech recognition systems degrades in the presence of emotional states and in adverse environments (e.g., noisy conditions). This greatly limits the deployment of speech recogniti...

Authors: Meysam Bashirpour and Masoud Geravanchizadeh

Citation: EURASIP Journal on Audio, Speech, and Music Processing 2018, 2018:9

The successful treatment of hearing loss depends on the individual practitioner’s experience and skill. So far, there is no standard available to evaluate the practitioner’s testing skills. To assess every pra...

This work studies a wind noise reduction approach for communication applications in a car environment. An endfire array consisting of two microphones is considered as a substitute for an ordinary cardioid micr...

Authors: Simon Grimm and Jürgen Freudenberger

Citation: EURASIP Journal on Audio, Speech, and Music Processing 2018, 2018:7

Recurrent neural networks (RNNs) have shown an ability to model temporal dependencies. However, the problem of exploding or vanishing gradients has limited their application. In recent years, long short-term m...

In this paper, a novel parametric prosody coding approach for Mandarin speech is proposed. It employs a hierarchical prosodic model (HPM) as a prosody-generating model in the encoder to analyze the speech pros...

Authors: Chen-Yu Chiang

Citation: EURASIP Journal on Audio, Speech, and Music Processing 2018, 2018:5

Filter banks on the short-time Fourier transform (STFT) spectrogram have long been studied for analyzing and processing audio. The frame shift in the STFT procedure determines the temporal resolution. However, in many discr...

Authors: Teng Zhang and Ji Wu

Citation: EURASIP Journal on Audio, Speech, and Music Processing 2018, 2018:4
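The role of the frame shift can be seen in a minimal framing routine: halving the hop doubles the number of analysis frames per second. The function below is an illustrative sketch, not code from the paper:

```python
import numpy as np

def stft_frames(x, frame_len=512, hop=128):
    """Slice a signal into overlapping, Hann-windowed analysis frames.
    The hop (frame shift) sets the temporal resolution: a smaller hop
    yields more frames for the same signal."""
    n = 1 + (len(x) - frame_len) // hop
    # index matrix: row t selects samples [t*hop, t*hop + frame_len)
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n)[:, None]
    return x[idx] * np.hanning(frame_len)
```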

The speech intelligibility of indoor public address systems is degraded by reverberation and background noise. This paper proposes a preprocessing method that combines speech enhancement and inverse filtering ...

Authors: Huan-Yu Dong and Chang-Myung Lee

Citation: EURASIP Journal on Audio, Speech, and Music Processing 2018, 2018:3

Query-by-example Spoken Term Detection (QbE STD) aims to retrieve data from a speech repository given an acoustic (spoken) query containing the term of interest as the input. This paper presents the systems su...

Automatic extraction of acoustic regions of interest from recordings captured in realistic clinical environments is a necessary preprocessing step in any cry analysis system. In this study, we propose a hidden...

Audio signals are a type of high-dimensional data, and their clustering is critical. However, distance calculation failures, inefficient index trees, and cluster overlaps, derived from the equidistance, redund...

Authors: Wenfa Li, Gongming Wang and Ke Li

Citation: EURASIP Journal on Audio, Speech, and Music Processing 2017, 2017:26

Large vocabulary continuous speech recognition (LVCSR) has naturally been demanded for transcribing daily conversations, while developing spoken text data to train LVCSR is costly and time-consuming. In this p...

Authors: Vataya Chunwijitra and Chai Wutiwiwatchai

Citation: EURASIP Journal on Audio, Speech, and Music Processing 2017, 2017:24

Robustness against background noise is a major research area for speech-related applications such as speech recognition and speaker recognition. One of the many solutions for this problem is to detect speech-d...

Authors: Gökay Dişken, Zekeriya Tüfekci and Ulus Çevik

Citation: EURASIP Journal on Audio, Speech, and Music Processing 2017, 2017:23

Within search-on-speech, Spoken Term Detection (STD) aims to retrieve data from a speech repository given a textual representation of a search term. This paper presents an international open evaluation for sea...

The task of speaker diarization is to answer the question "who spoke when?" In this paper, we present different clustering approaches which consist of Evolutionary Computation Algorithms (ECAs) such as Genetic...

Authors: Karim Dabbabi, Salah Hajji and Adnen Cherif

Citation: EURASIP Journal on Audio, Speech, and Music Processing 2017, 2017:21

This paper presents a chaotic shift keying-based speech encryption and decryption method. In this method, the input speech signals are sampled and their values are segmented into four levels, namely L...

Authors: P. Sathiyamurthi and S. Ramakrishnan

Citation: EURASIP Journal on Audio, Speech, and Music Processing 2017, 2017:20
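A simplified illustration of chaotic speech encryption: a logistic-map keystream XORed with quantized samples. This is a generic sketch, not the paper's four-level chaotic shift keying scheme, and the map parameters `x0` and `r` are assumptions:

```python
import numpy as np

def logistic_keystream(n, x0=0.7, r=3.99):
    """Keystream from the logistic map x <- r*x*(1-x), a common chaotic
    generator; each state is quantized to one byte."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def chaotic_xor(samples_u8, x0=0.7, r=3.99):
    """Encrypt or decrypt 8-bit speech samples by XOR with the chaotic
    keystream; applying it twice with the same key recovers the input."""
    return samples_u8 ^ logistic_keystream(len(samples_u8), x0, r)
```

The security of such schemes rests on the sensitivity of the chaotic map to its initial condition, which here plays the role of the secret key.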

In normal modally voiced utterances, voiceless fricatives like [s], [ʃ], [f], and [x] vary such that their aperiodic pitch impressions mirror the pitch level of the adjacent F0 contour. For instance, if the F0...

Authors: Oliver Niebuhr

Citation: EURASIP Journal on Audio, Speech, and Music Processing 2017, 2017:19

An artificial neural network is an important model for training features of voice conversion (VC) tasks. Typically, neural networks (NNs) are very effective in processing nonlinear features, such as Mel Cepstr...

Authors: Zhaojie Luo, Jinhui Chen, Tetsuya Takiguchi and Yasuo Ariki

Citation: EURASIP Journal on Audio, Speech, and Music Processing 2017, 2017:18

Audio fingerprinting has been an active research field typically used for music identification. Robust audio fingerprinting technology is used to successfully perform content-based audio identification regardl...

Authors: Dominic Williams, Akash Pooransingh and Jesse Saitoo

Citation: EURASIP Journal on Audio, Speech, and Music Processing 2017, 2017:17

In this paper, we present a voice conversion (VC) method that does not use any parallel data while training the model. Voice conversion is a technique where only speaker-specific information in the source spee...

Authors: Toru Nakashika and Yasuhiro Minami

Citation: EURASIP Journal on Audio, Speech, and Music Processing 2017, 2017:16

Onset detection still has room for improvement, especially when dealing with polyphonic music signals. For certain purposes in which the correctness of the result is a must, user intervention is hence required...

Authors: Jose J. Valero-Mas and José M. Iñesta

Citation: EURASIP Journal on Audio, Speech, and Music Processing 2017, 2017:15

The autocorrelation domain is well suited to separating the clean speech signal from noise. In this paper, a method is proposed to reduce the effects of noise on the clean speech signal, autocorrelation-based noise ...

Authors: Gholamreza Farahani

Citation: EURASIP Journal on Audio, Speech, and Music Processing 2017, 2017:13