Abstract

Conscious awareness plays a major role in human cognition and adaptive behaviour, yet its function in multisensory
integration is not fully understood, and questions remain open: How does the brain integrate incoming
multisensory signals with respect to different external environments? And how are the roles of these multisensory signals
defined so as to adhere to the anticipated behavioural constraints of the environment? This work articulates a novel
theory of conscious multisensory integration that addresses these research challenges. Specifically, the
well-established contextual field (CF) of pyramidal cells in coherent infomax theory [1][2] is split into two functionally
distinct integrated input fields: the local contextual field (LCF) and the universal contextual field (UCF). The LCF defines
the modulatory sensory signal coming from other parts of the brain (in principle from anywhere in space-time),
while the UCF defines the outside environment and anticipated behaviour (based on past learning and reasoning). Both the LCF
and UCF are integrated with the receptive field (RF) to develop a new class of contextually-adaptive neuron (CAN),
which adapts to changing environments. The proposed theory is evaluated using human contextual audio-visual (AV)
speech modelling. Simulation results provide new insights into contextual modulation and selective multisensory
information amplification/suppression. The central hypothesis reviewed here suggests that the pyramidal cell, in addition
to the classical excitatory and inhibitory signals, receives LCF and UCF inputs. The UCF (acting as a steering force or
tuner) plays a decisive role in precisely selecting whether to amplify or suppress the transmission of relevant or irrelevant
feedforward signals, without changing their content, e.g., deciding which information is worth paying more attention to. This,
as opposed to the unconditional excitatory and inhibitory activity in existing deep neural networks (DNNs), is called
conditional amplification/suppression.
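The RF/LCF/UCF integration described above can be illustrated with a minimal numerical sketch. The weight vectors, the `tanh` nonlinearity, and the multiplicative gain form are all illustrative assumptions, not the paper's actual transfer function; the sketch only shows the key idea that context scales the transmission of the feedforward signal without altering its content:

```python
import numpy as np

def can_activation(rf, lcf, ucf, w_r, w_l, w_u):
    """Toy contextually-adaptive neuron (CAN).

    rf  : feedforward receptive-field input
    lcf : local contextual field (modulatory signal from elsewhere in the brain)
    ucf : universal contextual field (environment / anticipated behaviour)
    The weights and nonlinearities here are hypothetical choices.
    """
    r = np.dot(w_r, rf)          # integrated driving (feedforward) input
    c = np.dot(w_l, lcf)         # local contextual modulation
    u = np.dot(w_u, ucf)         # universal context acting as a tuner
    # The gain amplifies transmission when local and universal context agree,
    # and suppresses it otherwise, without changing the content carried by r.
    gain = 1.0 + np.tanh(c * u)  # conditional amplification/suppression
    return np.tanh(r * gain)
```

With zero context the neuron behaves like a plain feedforward unit (gain of 1); agreeing contextual inputs amplify the same driving signal.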

Type

Journal article

Language

en

Description

This is an accepted manuscript of an article published by Frontiers Media in Frontiers in Computational Neuroscience (in press).
The accepted version of the publication may differ from the final published version.

Human speech processing is inherently multi-modal: visual cues (lip movements) help us better understand speech in noise. Our recent work [1] has shown that lip-reading-driven, audio-visual (AV) speech enhancement can significantly outperform benchmark audio-only approaches at low signal-to-noise ratios (SNRs). However, consistent with our cognitive hypothesis, visual cues were found to be relatively less effective for speech enhancement at high SNRs or low levels of background noise, where audio-only cues worked well enough. A more cognitively-inspired, context-aware AV approach is therefore required, one that contextually utilises both visual and noisy audio features and thus accounts more effectively for different noisy conditions. In this paper, we introduce a novel context-aware AV framework that contextually exploits AV cues with respect to different operating conditions to estimate clean audio, without requiring any prior SNR estimation. The switching module is developed by integrating a convolutional neural network (CNN) and a long short-term memory (LSTM) network, which learns to switch between visual-only (V-only), audio-only (A-only), and combined audio-visual cues at low, high, and moderate SNR levels, respectively. For testing, the estimated clean audio features are fed into an enhanced visually-derived Wiener filter (EVWF) for noisy speech filtering.

The context-aware AV speech enhancement framework is evaluated under dynamic real-world scenarios (including cafe, street, bus, and pedestrian) at SNR levels ranging from low to high, using the benchmark Grid and CHiME-3 corpora. For objective testing, perceptual evaluation of speech quality (PESQ) is used to evaluate the quality of the restored speech; for subjective testing, the standard mean opinion score (MOS) method is used. Comparative experimental results show the superior performance of our context-aware AV approach over A-only, V-only, spectral subtraction (SS), and log-minimum mean square error (LMMSE) based speech enhancement methods at both low and high SNRs. These preliminary findings demonstrate the capability of the proposed approach to deal with spectro-temporal variations in any real-world noisy environment by contextually exploiting the complementary strengths of audio and visual cues. In conclusion, our deep learning-driven AV framework is posited as a benchmark resource for the multi-modal speech processing and machine learning communities.
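The switching policy described above, where the CNN-LSTM module's decision selects which cues drive the clean-audio estimate, can be sketched as follows. The class ordering, the function names, and concatenation as the AV fusion step are all assumptions for illustration; the real switching module is a learned network, not a hand-written rule:

```python
import numpy as np

def select_modality(posterior):
    """Map class posteriors from a (hypothetical) CNN-LSTM switching module
    over {low, moderate, high} SNR conditions to the cues used for
    clean-audio estimation: low SNR -> V-only, moderate -> AV, high -> A-only.
    The class ordering is an assumption."""
    policy = {0: "V-only", 1: "AV", 2: "A-only"}
    return policy[int(np.argmax(posterior))]

def fuse_features(audio_feats, visual_feats, mode):
    """Combine per-frame feature streams according to the selected mode.
    Frame-wise concatenation for the AV case is an illustrative choice."""
    if mode == "A-only":
        return audio_feats
    if mode == "V-only":
        return visual_feats
    return np.concatenate([audio_feats, visual_feats], axis=-1)
```

In the framework described above, the fused features would then feed the EVWF stage for filtering; here they are simply returned.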

The creation of an effective construction schedule is fundamental to the successful completion of a construction project. Effectively communicating the temporal and spatial details of this schedule is vital; however, current planning approaches often lead to differing interpretations or misinterpretations of the schedule across the planning team. Four-Dimensional Computer-Aided Design (4D CAD) has emerged over the last twenty years as an effective tool for construction project planning. In recent years, Building Information Modelling (BIM) has emerged as a valuable approach to construction informatics throughout the whole lifecycle of a building. Additionally, emerging trends in location-aware and wearable computing offer future potential for untethered, contextual visualisation and data delivery away from the office. The purpose of this study was to develop a novel computer-based approach to facilitate on-site 4D construction planning through interaction with a 3D construction model and corresponding building information data in outdoor Augmented Reality (AR). Based on a wide-ranging literature review, a conceptual framework was put forward to represent the software development requirements for supporting the sequencing of construction tasks in AR. Based on this framework, an approach was developed that represented the main processes required to plan a construction sequence using an on-site, model-based 4D methodology. Using this proposed approach, a prototype software tool, 4DAR, was developed. The implemented tool mapped elements within an interactive 3D model to corresponding BIM data objects, providing an interface for two-way communication with the underlying Industry Foundation Classes (IFC) data model. Positioning data from RTK-GPS and an electronic compass enabled the geo-located 3D model to be registered in world coordinates and visualised using a head-mounted display fitted with a forward-facing video camera.

The scheduling of construction tasks was achieved using a novel interactive technique that removed the need for an existing construction schedule to be input into the system. The resulting 4D simulation can be viewed at any time during the scheduling process, allowing an iterative approach to project planning to be adopted. Furthermore, employing the IFC file as a central read/write repository for schedule data reduces the amount of disparate documentation and centralises the storage of schedule information, while improving communication and facilitating collaborative working practices within a project planning team. Postgraduate students and construction professionals evaluated the implemented prototype tool to test its usefulness against construction planning requirements. It emerged from the evaluation sessions that the implemented tool had achieved the essential requirements highlighted in the conceptual framework and proposed approach. Furthermore, the evaluators expressed that the implemented software and the proposed novel approach to construction planning had the potential to assist with the planning process for both experienced and inexperienced construction planners. This study has made the following contributions to knowledge in the areas of 4D CAD, construction applications of augmented reality, and Building Information Modelling:

- 4D construction planning in outdoor Augmented Reality (AR)
- The development of a novel 4D planning approach through decomposition
- The deployment of Industry Foundation Classes (IFC) in AR
- Leveraging IFC files for centralised data management within a real-time planning and visualisation environment
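The idea of a central schedule repository driving a 4D simulation can be illustrated with a minimal sketch. The GUIDs and dates below are invented, and a dictionary stands in for schedule data that a real system would read from and write to IfcTask entities in the IFC file:

```python
from datetime import date

# Hypothetical in-memory stand-in for schedule data keyed by IFC element
# GUIDs (a real implementation would round-trip this through the IFC model).
schedule = {
    "2O2Fr$t4X7Zf8NOew3FLKA": (date(2024, 3, 1), date(2024, 3, 14)),  # column
    "1hOSvn6df7F8_7GcBWlRGQ": (date(2024, 3, 10), date(2024, 4, 2)),  # slab
}

def elements_built_by(day):
    """4D simulation step: return the elements whose construction task
    has started on or before `day`, i.e. what the AR view should show."""
    return sorted(guid for guid, (start, _) in schedule.items() if start <= day)
```

Stepping `day` through the project duration yields the 4D sequence; because the same store is both read and written, schedule edits made on site are immediately reflected in the simulation.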

As coaching psychology finds its feet, demands for evidence-based approaches are increasing from both inside and outside the industry. There is an opportunity in the many evidence-based interventions from other areas of applied psychology that are of direct relevance to coaching psychology; however, there may also be risks associated with unprincipled eclecticism. Existing approaches gaining popularity in the coaching field, such as Dialectical Behaviour Therapy and Mindfulness, enjoy close affiliation with Contextual Behavioral Science (CBS). In this article, we provide a brief overview of CBS as a coherent philosophical, scientific, and practice framework for empirically supported coaching work. We review its evidence base and its direct applicability to coaching by describing CBS's most explicitly linked intervention, Acceptance and Commitment Therapy/Training (ACT). We highlight key strengths of ACT, including its great flexibility with regard to the kinds of client change it can support, the variety of materials and exercises available, and the varied modes of delivery through which it has been shown to work. The article lays out guiding principles and provides a brief illustrative case study of Contextual Behavioural Coaching.
