GaussianHair: Hair Modeling and Rendering with Light-aware Gaussians

1ShanghaiTech University, 2Huazhong University of Science and Technology, 3Deemos Technology, 4LumiAni Technology

Abstract

Hairstyle reflects culture and ethnicity at first glance. In the digital era, realistic human hairstyles are likewise critical to high-fidelity digital human assets for beauty and inclusivity. Yet realistic hair modeling and real-time rendering for animation remain a formidable challenge due to the sheer number of strands, intricate geometric structure, and sophisticated interaction with light. This paper presents GaussianHair, a novel explicit hair representation. It enables comprehensive modeling of hair geometry and appearance from images, fostering innovative illumination effects and dynamic animation capabilities. At the heart of GaussianHair is the novel concept of representing each hair strand as a sequence of connected cylindrical 3D Gaussian primitives. This approach not only retains the hair’s geometric structure and appearance but also allows for efficient rasterization onto a 2D image plane, facilitating differentiable volumetric rendering. We further enhance this model with the “GaussianHair Scattering Model”, adept at recreating the slender structure of hair strands and accurately capturing their local diffuse color under uniform lighting. Through extensive experiments, we substantiate that GaussianHair achieves breakthroughs in both geometric and appearance fidelity, transcending the limitations of state-of-the-art methods for hair reconstruction. Beyond representation, GaussianHair supports editing, relighting, and dynamic rendering of hair, offering seamless integration with conventional CG pipeline workflows. Complementing these advancements, we have compiled an extensive dataset of real human hair, each sample with meticulously detailed strand geometry, to propel further research in this field.
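To make the core representation concrete, here is a minimal NumPy sketch of converting a strand polyline into per-segment Gaussian parameters: each segment gets a mean at its midpoint and an anisotropic covariance whose long axis follows the segment tangent while the two radial axes stay thin, approximating a cylinder. This is an illustrative toy, not the paper's implementation; the function name `strand_to_gaussians` and the `radius` parameter are assumptions for the sketch.

```python
import numpy as np

def strand_to_gaussians(points, radius=1e-3):
    """Turn a hair strand polyline (N x 3) into per-segment Gaussians:
    mean at each segment midpoint, covariance elongated along the
    segment direction with thin radial axes of scale `radius`."""
    points = np.asarray(points, dtype=np.float64)
    means, covs = [], []
    for p0, p1 in zip(points[:-1], points[1:]):
        seg = p1 - p0
        length = np.linalg.norm(seg)
        t = seg / length                              # tangent (long axis)
        # Build an orthonormal frame around the tangent.
        a = np.array([1.0, 0.0, 0.0]) if abs(t[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        n1 = np.cross(t, a)
        n1 /= np.linalg.norm(n1)
        n2 = np.cross(t, n1)
        R = np.stack([t, n1, n2], axis=1)             # columns: local axes
        S = np.diag([(length / 2.0) ** 2, radius ** 2, radius ** 2])
        means.append((p0 + p1) / 2.0)
        covs.append(R @ S @ R.T)                      # world-space covariance
    return np.array(means), np.array(covs)

# A straight 4-point strand along the x-axis gives 3 segment Gaussians.
strand = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]], dtype=float)
means, covs = strand_to_gaussians(strand)
```

Keeping two covariance axes near the hair radius is what gives the "cylindrical" character: the splatted footprint stays strand-thin regardless of viewing angle, while the long axis preserves connectivity along the strand.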

Video

Overview

Our method employs a multi-stage process for hair modeling. Initially, an off-the-shelf decoder extracts coarse hair strands from multi-view images, which are then refined using differentiable strand-based splatting. This optimization aligns the rendered images with the ground truth. Finally, we apply a scattering model to the optimized strands, enhancing their relighting and dynamics modeling capabilities.
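The three stages above can be sketched as a toy end-to-end skeleton. Every function here is a hypothetical stand-in, not the authors' code: the real stages use a learned decoder, differentiable Gaussian splatting, and the GaussianHair scattering model, whereas this sketch replaces the renderer with a scalar statistic so the refinement loop is runnable.

```python
import numpy as np

def extract_coarse_strands(n_strands=8, n_points=16):
    """Stage 1 stand-in: an off-the-shelf decoder would predict coarse
    strands from multi-view images; here we return random polylines."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(n_strands, n_points, 3))

def splat_render(strands):
    """Stage 2 forward pass (stub): a differentiable splatter would
    rasterize strand Gaussians to an image; we return one scalar."""
    return strands.mean()

def refine(strands, target, lr=0.5, steps=60):
    """Refinement loop aligning the render with the ground truth.
    The toy render's gradient is uniform over elements, so the
    gradient step reduces to a broadcast update."""
    for _ in range(steps):
        residual = splat_render(strands) - target
        strands = strands - lr * residual
    return strands

coarse = extract_coarse_strands()
refined = refine(coarse, target=0.0)   # match a dummy "ground truth"
# Stage 3 would attach the scattering model to `refined` for relighting.
```

The structure mirrors the text: geometry initialization, image-space optimization through a differentiable renderer, then appearance modeling layered on the optimized strands.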

Dataset

Our RealHair dataset is a comprehensive and culturally diverse collection of human hairstyles, encompassing a variety of distinctive styles reflective of global hair characteristics. It comprises 281 high-resolution (4K) videos, totaling approximately 3000 frames, each meticulously annotated with detailed geometry segmentation and individual hair strand information.

Results

Editing results. From left to right: (1) neural rendering result, (2) lighting change, (3) roughness adjustment, (4) hair cutting, (5) base color alteration.

Relighting. GaussianHair renders photorealistic relighting results under various lighting conditions. Column 1 is the ground truth reference. Columns 2 and 3 are rendering results under two ordinary composite lighting setups. Columns 4 and 5 show results under Cyber-style and “Avatar”-style illuminations.

Dynamic results. After importing our strand model into a conventional CG rendering engine, the simulated animation it returns is used to drive the rendered hair, reproducing the effect of wind blowing.

BibTeX


      @article{luo2024gaussianhair,
        title={GaussianHair: Hair Modeling and Rendering with Light-aware Gaussians},
        author={Luo, Haimin and Ouyang, Min and Zhao, Zijun and Jiang, Suyi and Zhang, Longwen and Zhang, Qixuan and Yang, Wei and Xu, Lan and Yu, Jingyi},
        journal={arXiv preprint arXiv:2402.10483},
        year={2024}
      }