Haptic Modeling and Rendering Techniques for Material Simulation and Modulation in Virtual and Mixed Reality


Student thesis: Doctoral Thesis



Award date: 14 Aug 2023


Haptic technology has attracted increasing attention with the rise of the Metaverse. Such techniques allow users to interact with virtual objects (e.g., touch, grasp, and squeeze) and perceive their haptic properties (e.g., roughness, stiffness, and temperature) through touch in virtual and mixed reality (VR/MR) environments, providing immersive and realistic experiences. While commercial products for haptic feedback generation in VR exist, such as HaptX, reproducing or altering the haptic properties of physical materials in VR or MR remains challenging due to limitations in haptic modeling and rendering techniques. In this dissertation, I present three works that address different research problems in haptic modeling and rendering for VR and MR.

My first work introduces ThermAirGlove, a pneumatic glove that provides thermal feedback to support the haptic experience of grasping objects of different temperatures and materials in VR. The system features a glove with five inflatable airbags on the fingers and the palm, two temperature chambers (one hot and one cold), and a closed-loop pneumatic thermal control system. Technical experiments show that the system achieves a maximum temperature-change rate of 2.75°C/s when cooling, and that the pneumatic-control mechanism can generate the thermal cues of different materials. User-perception experiments show that the system enables five distinct levels of perceived thermal sensation and supports users' identification of materials among foam, glass, and copper with an average accuracy of 87.2%, with no significant difference from perceiving real physical objects. User studies on the VR experience show that using ThermAirGlove in immersive VR significantly improves users' sense of presence compared to conditions without temperature or material simulation.
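The closed-loop behavior described above can be sketched as a simple controller that routes air from the hot or cold chamber until the airbag reaches a target temperature. This is a minimal illustration only; the function names, deadband, and first-order temperature model are assumptions, not the thesis's actual control law.

```python
# Hypothetical sketch of a ThermAirGlove-style closed-loop thermal
# controller. All parameters and the temperature model are illustrative.

def control_step(current_temp, target_temp, deadband=0.2):
    """Choose which chamber to draw from, or hold, based on the error."""
    error = target_temp - current_temp
    if error > deadband:
        return "hot"    # inflate from the hot chamber to warm the airbag
    if error < -deadband:
        return "cold"   # inflate from the cold chamber to cool the airbag
    return "hold"       # within the deadband: maintain the current state

def simulate(start_temp, target_temp, rate=0.5, steps=100):
    """Crude fixed-rate simulation of airbag temperature over time."""
    temp = start_temp
    for _ in range(steps):
        action = control_step(temp, target_temp)
        if action == "hot":
            temp += rate
        elif action == "cold":
            temp -= rate
    return temp

final = simulate(start_temp=32.0, target_temp=25.0)
```

In practice a real controller would also model chamber pressure and airflow, but the same sense-compare-actuate loop underlies the closed-loop design.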

As ThermAirGlove adopts an explicit haptic-modeling approach, which requires determining parameters and calculating haptic signals in advance, my next work explores implicit haptic modeling, specifically deep-learning-based haptic modeling, to simulate the tactile properties of materials. First, I present FrictGAN, a framework based on generative adversarial networks (GANs) that directly generates frictional signals from textured images of fabric materials for tactile rendering on an electrovibration tactile display. FrictGAN generates displacement-based frictional-coefficient data for the display to simulate the tactile feedback of different fabric materials. Experimental results show that FrictGAN can generate frictional-coefficient signals visually and statistically close to the ground-truth signals. User studies on fabric-texture simulation show that users could not distinguish between the generated and the ground-truth frictional signals rendered on the electrovibration tactile display, suggesting the effectiveness of the deep frictional-signal-generation model. Beyond image-to-friction generation, I also present a deep-learning-based approach for two-way cross-modal visual-tactile data generation using GANs. This approach leverages a conditional-GAN structure with a residue-fusion (RF) module and incorporates additional feature-matching (FM) and perceptual losses during training. Experimental results show that including the RF module and the FM and perceptual losses significantly improves cross-modal data-generation performance, measured by the classification accuracy of the generated data and the visual similarity between the ground-truth and generated data. The proposed approach could potentially be applied to robotic cross-modal visual-tactile perception.
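The generator objective described above, an adversarial term augmented with feature-matching and perceptual losses, can be sketched as a weighted sum. The helper names, loss weights, and L1 distance are illustrative assumptions; the thesis's actual formulation and hyperparameters may differ.

```python
# Hypothetical sketch of a conditional-GAN generator objective combining
# adversarial, feature-matching (FM), and perceptual terms. Weights and
# the L1 distance are illustrative, not the thesis's exact formulation.

def l1(a, b):
    """Mean absolute error between two flat feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def generator_loss(adv_loss, real_feats, fake_feats,
                   real_percep, fake_percep,
                   lambda_fm=10.0, lambda_percep=10.0):
    """Weighted sum: adversarial + FM + perceptual terms.

    - adv_loss: scalar adversarial loss from the discriminator
    - real_feats / fake_feats: lists of intermediate discriminator
      features (FM compares them layer by layer)
    - real_percep / fake_percep: lists of features from a pretrained
      network (e.g., VGG) used for the perceptual loss
    """
    fm = sum(l1(r, f) for r, f in zip(real_feats, fake_feats)) / len(real_feats)
    percep = sum(l1(r, f) for r, f in zip(real_percep, fake_percep)) / len(real_percep)
    return adv_loss + lambda_fm * fm + lambda_percep * percep
```

The FM term pulls the generated sample's intermediate discriminator activations toward those of the real sample, while the perceptual term does the same in a fixed pretrained feature space, which tends to stabilize training and sharpen outputs.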

While FrictGAN facilitates friction rendering on electrovibration tactile displays, it is limited to two-dimensional friction rendering on surface haptic displays and, due to hardware limitations, does not cover haptic rendering of multi-modal material properties or shape information. In my next work, I introduce ViboPneumo, a finger-worn haptic device that uses vibratory-pneumatic feedback to modulate (i.e., increase and decrease) the perceived roughness of the material surface contacted by the user's fingerpad, while preserving the perceived sensation of other haptic properties (e.g., temperature or stickiness) in MR. The device includes a silicone-based pneumatic actuator that lifts the user's fingerpad off the physical surface, reducing the contact area to decrease perceived roughness, and an on-finger vibrator to increase it. User-perception experiments show that participants could perceive both increases and decreases in roughness compared to the original material surface. I also observed overlapping roughness ratings between certain haptic stimuli (i.e., vibrotactile and pneumatic) and the roughness originally perceived for some materials without any haptic feedback. This suggests the potential to alter the perceived texture of one material to resemble another in terms of roughness (e.g., making ceramics feel like glass). Lastly, a user study of the MR experience shows that ViboPneumo significantly improves the MR user experience, particularly visual-haptic matching, compared to the bare-hand condition.
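The actuation logic above, vibration to raise perceived roughness and pneumatic lifting to lower it, can be sketched as a simple dispatch from a desired roughness change to an actuator command. The 0-10 rating scale, threshold, and linear intensity mapping are illustrative assumptions, not the device's actual control scheme.

```python
# Hypothetical sketch of ViboPneumo-style roughness modulation: vibrate
# to feel rougher, pneumatically lift the fingerpad to feel smoother.
# Scale, threshold, and intensity mapping are illustrative only.

def modulation_command(perceived, target):
    """Return (actuator, intensity in [0, 1]) for a roughness change.

    perceived / target are roughness ratings on an arbitrary 0-10 scale.
    """
    delta = target - perceived
    if abs(delta) < 0.5:          # close enough: no actuation needed
        return ("none", 0.0)
    if delta > 0:                 # need rougher: drive the vibrator
        return ("vibrate", min(delta / 10.0, 1.0))
    # need smoother: inflate the actuator to lift the fingerpad and
    # reduce the fingerpad-surface contact area
    return ("lift", min(-delta / 10.0, 1.0))
```

The overlapping roughness ratings reported above are what make such a mapping plausible: if a vibrated ceramic surface is rated like bare glass, a controller can target one material's rating while the finger touches another.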

Through these three projects, I present the design, implementation, and evaluation of two wearable haptic-rendering devices and a series of haptic-modeling algorithms for material-property reproduction and modulation in VR/MR. These works contribute haptic techniques for rendering and altering the material properties of physical objects in VR/MR, improving realism and the user experience through haptic feedback.

    Research areas

  • Human-computer interaction, Virtual reality, Haptic interface, Cross-modal visual-tactile generation, Generative adversarial network, Wearable computing