David Gueorguiev

I work mostly on touch and haptic interactions. My long-term goal is to bring haptics and the neuroscience of touch to education.

After a degree in physics at the Free University of Brussels, I completed a Master's in computational neuroscience in Paris (EHESS, ENS, Paris 5) and an internship at the Cognition and Brain Sciences Unit in Cambridge. I wrote my Master's thesis on attention and consciousness. During my PhD at the Université catholique de Louvain, I studied the perception of natural textures and ultrasonic frictional feedback. In 2016, I became a post-doc in the MINT team at INRIA Lille, working on subjective tactile perception during human-machine interaction. Since October 2017, I have been a post-doc in the Haptic Intelligence department at the Max Planck Institute in Stuttgart. My current projects investigate finger behavior during 4D haptic interaction and the importance of timing for the perception of haptic cues.

The frictional forces we experience when our body interacts with objects provide essential sensory cues that help us adapt our behavior. We rely on these sensory cues daily, for example when we feel the smoothness of...

Little is known about the shape and properties of the human finger during haptic interaction, even though these are essential parameters for controlling wearable finger devices and delivering realistic tactile feedback. This study explores a framework for four-dimensional scanning (3D over time) and modelling of finger-surface interactions, aiming to capture the motion and deformations of the entire finger with high resolution while simultaneously recording the interfacial forces at the contact. Preliminary results show that when the fingertip is actively pressing a rigid surface, it undergoes lateral expansion and proximal/distal bending, deformations that cannot be captured by imaging of the contact area alone. Therefore, we are currently capturing a dataset that will enable us to create a statistical model of the finger's deformations and predict the contact forces induced by tactile interaction with objects. This technique could improve current methods for tactile rendering in wearable haptic devices, which rely on general physical modelling of the skin's compliance, by developing an accurate model of the variations in finger properties across the human population. The availability of such a model will also enable a more realistic simulation of virtual finger behaviour in virtual reality (VR) environments, as well as the ability to accurately model a specific user's finger from lower-resolution data. It may also be relevant for inferring the physical properties of the underlying tissue from observing the surface mesh deformations, as previously shown for body tissues.

Little is known about the shape and properties of the human finger during haptic interaction, even though this knowledge is essential to control wearable finger devices and deliver realistic tactile feedback. This study explores a framework for four-dimensional scanning and modeling of finger-surface interactions, aiming to capture the motion and deformations of the entire finger with high resolution. The results show that when the fingertip is actively pressing a rigid surface, it undergoes lateral expansion of about 0.2 cm and proximal/distal bending of about 30°, deformations that cannot be captured by imaging of the contact area alone. This project constitutes a first step towards an accurate statistical model of the finger's behavior during haptic interaction.

A realistic keyclick sensation is a serious challenge for haptic feedback, since vibrotactile rendering cannot reproduce the contact force experienced on physical buttons. It has been shown that a keyclick sensation can be created with stepwise ultrasonic friction modulation. However, the intensity of the sensation is limited by the impedance of the fingertip and by the absence of a lateral force component external to the finger. In our study, we compare this technique to rendering with an ultrasonic travelling wave, which exerts a lateral force on the fingertip. For both techniques, participants were asked to report the detection (or not) of a keyclick during a one-interval forced-choice procedure. In experiment 1, participants could press the surface as many times as they wanted for a given trial. In experiment 2, they were constrained to press only once. The results show a lower perceptual threshold for travelling waves. Moreover, participants pressed fewer times per trial and exerted a smaller normal force on the surface. The subjective quality of the sensation was found to be similar for both techniques. Overall, haptic feedback based on travelling ultrasonic waves is promising for applications without lateral motion of the finger.
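In a one-interval forced-choice detection task like this, the perceptual threshold is typically estimated from the proportion of "detected" responses at each stimulus intensity. The sketch below is illustrative only (the data and the 50% criterion are assumptions, not the study's analysis code): it interpolates the intensity at which the detection rate crosses the criterion.

```python
def detection_threshold(intensities, hit_rates, criterion=0.5):
    """Estimate the stimulus intensity at which the detection rate
    crosses `criterion`, by linear interpolation between the two
    bracketing measurement points.

    Assumes `intensities` are sorted ascending and `hit_rates` rise
    monotonically through the criterion (a common simplification;
    real analyses often fit a psychometric function instead).
    """
    points = list(zip(intensities, hit_rates))
    for (x0, p0), (x1, p1) in zip(points, points[1:]):
        if p0 <= criterion <= p1:
            # linear interpolation between the bracketing points
            return x0 + (x1 - x0) * (criterion - p0) / (p1 - p0)
    raise ValueError("criterion is not bracketed by the data")

# Hypothetical keyclick stimulus amplitudes (a.u.) and detection rates:
amps = [0.2, 0.4, 0.6, 0.8, 1.0]
rates = [0.05, 0.20, 0.55, 0.85, 0.95]
print(detection_threshold(amps, rates))  # roughly 0.57
```

A lower value returned by such an estimate for travelling waves than for stepwise friction modulation is what "a lower perceptual threshold" means operationally.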

Recent research in haptic feedback is motivated by the crucial role that tactile perception plays in everyday touch interactions. In this paper, we describe psychophysical experiments investigating the perceptual thresholds of individual fingers on both the right and left hands of right-handed participants, using active dynamic touch for spatial-period discrimination of both sinusoidal and square-wave gratings on ultrasonic haptic touchscreens. Both one-finger and multi-finger touch were studied and compared. Our results indicate that finger identity (index finger, middle finger, etc.) significantly affects the perception of both gratings in one-finger exploration. We show that the index finger and thumb are the most sensitive in all conditions, whereas the little finger, followed by the ring finger, is the least sensitive. For multi-finger exploration, the right hand was found to be more sensitive than the left hand for both gratings. Our findings also demonstrate similar perceptual sensitivity between multi-finger exploration and the index finger of the right hand (the dominant hand in our study), while a significant difference was found between single- and multi-finger sensitivity for the left hand.

2017

Journal of The Royal Society Interface, 14(137), The Royal Society, 2017 (article)

Abstract

When we touch an object or explore a texture, frictional strains are induced by the tactile interactions with the surface of the object. Little is known about how these interactions are perceived, although this question is becoming crucial for the nascent industry of interactive displays with haptic feedback (e.g. smartphones and tablets), where tactile feedback based on friction modulation is particularly relevant. To investigate the human perception of frictional strains, we mounted a high-fidelity friction-modulating ultrasonic device on a robotic platform performing controlled rubbing of the fingertip and asked participants to detect induced decreases of friction during a forced-choice task. The ability to perceive the changes in friction was found to follow Weber's law of just-noticeable differences, as it consistently depended on the ratio between the reduction in tangential force and the pre-stimulation tangential force. The Weber fraction was 0.11 in all conditions, demonstrating a very high sensitivity to transient changes in friction. Humid fingers experienced less friction reduction than drier ones for the same intensity of ultrasonic vibration, but the Weber fraction for detecting changes in friction was not influenced by the humidity of the skin.
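The Weber's-law relation reported above can be made concrete: the just-noticeable reduction in tangential force scales linearly with the pre-stimulation tangential force through a constant fraction (0.11 in the study). The snippet below is a minimal numerical sketch of that relation, not the authors' analysis code; the 0.5 N example baseline is an assumed value for illustration.

```python
def jnd_friction_drop(pre_stim_force_n, weber_fraction=0.11):
    """Just-noticeable reduction in tangential force (in newtons).

    Weber's law: delta_F / F = k, a constant, so the smallest
    detectable drop in friction force grows in proportion to the
    baseline tangential force. k = 0.11 is the fraction reported
    for all conditions in the study.
    """
    return weber_fraction * pre_stim_force_n

# Example: with an assumed 0.5 N baseline tangential force, a
# reduction of about 0.055 N should be just detectable.
print(jnd_friction_drop(0.5))  # prints 0.055
```

This constancy of the ratio is why the detectability of a friction change depends on the pre-stimulation force rather than on the absolute size of the reduction alone.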

Ultrasonic vibration and electrovibration can modulate the friction between a surface and a sliding finger. The power consumption of these devices is critical to their integration in modern mobile devices such as smartphones. This paper presents a simple control solution that reduces this power consumption by up to 68.8% by taking advantage of human perceptual limits.

Long-lasting mechanical vibrations applied to the skin induce a reversible decrease in the perception of vibration at the stimulated skin site. This phenomenon of vibrotactile adaptation has been studied extensively, yet there is still no clear consensus on the mechanisms leading to vibrotactile adaptation. In particular, the respective contributions of 1) changes affecting mechanical skin impedance, 2) peripheral processes, and 3) central processes are largely unknown. Here we used direct electrical stimulation of nerve fibers to bypass mechanical transduction processes and thereby explore the possible contribution of central vs. peripheral processes to vibrotactile adaptation. Three experiments were conducted. In the first, adaptation was induced with mechanical vibration of the fingertip (51- or 251-Hz vibration delivered for 8 min, at 40× detection threshold). In the second, we attempted to induce adaptation with transcutaneous electrical stimulation of the median nerve (51- or 251-Hz constant-current pulses delivered for 8 min, at 1.5× detection threshold). Vibrotactile detection thresholds were measured before and after adaptation. Mechanical stimulation induced a clear increase of vibrotactile detection thresholds. In contrast, thresholds were unaffected by electrical stimulation. In the third experiment, we assessed the effect of mechanical adaptation on the detection thresholds to transcutaneous electrical nerve stimuli, measured before and after adaptation. Electrical detection thresholds were unaffected by the mechanical adaptation. Taken together, our results suggest that vibrotactile adaptation is predominantly the consequence of peripheral mechanoreceptor processes and/or changes in biomechanical properties of the skin.

There is increasing evidence that human perception is realized by a hierarchy of neural processes in which predictions sent backward from higher levels result in prediction errors that are fed forward from lower levels, to update the current model of the environment. Moreover, the precision of prediction errors is thought to be modulated by attention. Much of this evidence comes from paradigms in which a stimulus differs from that predicted by the recent history of other stimuli (generating a so-called “mismatch response”). There is less evidence from situations where a prediction is not fulfilled by any sensory input (an “omission” response). This situation arguably provides a more direct measure of “top-down” predictions in the absence of confounding “bottom-up” input. We applied Dynamic Causal Modeling of evoked electromagnetic responses recorded by EEG and MEG to an auditory paradigm in which we factorially crossed the presence versus absence of “bottom-up” stimuli with the presence versus absence of “top-down” attention. Model comparison revealed that both mismatch and omission responses were mediated by increased forward and backward connections, differing primarily in the driving input. In both responses, modeling results suggested that the presence of attention selectively modulated backward “prediction” connections. Our results provide new model-driven evidence of the pure top-down prediction signal posited in theories of hierarchical perception, and highlight the role of attentional precision in strengthening this prediction.

SIGNIFICANCE STATEMENT: Human auditory perception is thought to be realized by a network of neurons that maintain a model of and predict future stimuli. Much of the evidence for this comes from experiments where a stimulus unexpectedly differs from previous ones, which generates a well-known “mismatch response.” But what happens when a stimulus is unexpectedly omitted altogether? By measuring the brain’s electromagnetic activity, we show that it also generates an “omission response” that is contingent on the presence of attention. We model these responses computationally, revealing that mismatch and omission responses only differ in the location of inputs into the same underlying neuronal network. In both cases, we show that attention selectively strengthens the brain’s prediction of the future.

Hierarchical predictive coding suggests that attention in humans emerges from increased precision in probabilistic inference, whereas expectation biases attention in favor of contextually anticipated stimuli. We test these notions within auditory perception by independently manipulating top-down expectation and attentional precision alongside bottom-up stimulus predictability. Our findings support an integrative interpretation of commonly observed electrophysiological signatures of neurodynamics, namely mismatch negativity (MMN), P300, and contingent negative variation (CNV), as manifestations along successive levels of predictive complexity. Early first-level processing indexed by the MMN was sensitive to stimulus predictability: here, attentional precision enhanced early responses, but explicit top-down expectation diminished it. This pattern was in contrast to later, second-level processing indexed by the P300: although sensitive to the degree of predictability, responses at this level were contingent on attentional engagement and in fact sharpened by top-down expectation. At the highest level, the drift of the CNV was a fine-grained marker of top-down expectation itself. Source reconstruction of high-density EEG, supported by intracranial recordings, implicated temporal and frontal regions differentially active at early and late levels. The cortical generators of the CNV suggested that it might be involved in facilitating the consolidation of context-salient stimuli into conscious perception. These results provide convergent empirical support to promising recent accounts of attention and expectation in predictive coding.