Speech and music signals are multifractal phenomena. The time
displacement profiles of speech and music signals show strikingly
different scaling behaviour; however, a full complexity analysis of
their frequency and amplitude variation has not been carried out so
far. We propose a novel complex-network-based approach, the Visibility
Graph, to study the scaling behaviour of the frequency-wise amplitude
variation of speech and music signals over time, and then extract the
corresponding PSVG (Power of Scale-freeness of the Visibility Graph)
exponents. From this analysis it emerges that the
scaling behaviour of the amplitude profile of music varies considerably
from frequency to frequency, whereas it is almost consistent for the speech
signal. The left auditory cortical areas of the human brain are
proposed to be neurocognitively specialised for speech perception and
the right ones for music. Hence we may conclude that the human brain
might have adapted to the distinctly different scaling behaviours of
speech and music signals and developed different decoding mechanisms
for them, as if following the so-called Fractal Darwinism. Using this
method, we can capture all non-stationary
aspects of the acoustic properties of the source signal in fine detail,
which has considerable neurocognitive significance. Further, we propose
a novel non-invasive application for detecting neurological disorders
(here autism spectrum disorder, ASD) using the quantitative parameters
deduced from the variation of the scaling behaviour of speech and music
signals.