From a single monocular video → consistent 3D Gaussian avatar in pure Rust. Full differentiable Gaussian Splatting (wgpu) + Multi-View Diffusion (Candle) + FLAME binding. 796 passing tests, zero C/Fortran, production-ready digital twin reconstruction.
A major leap in digital humans for the COOLJAPAN ecosystem.
On February 24 we released OxiGAF 0.1.0 — a complete pure-Rust implementation of Gaussian Avatar Reconstruction from monocular videos using multi-view diffusion.
This is the first production-ready Rust implementation of the GAF method (arXiv:2412.10209) that turns a casual selfie video into a high-fidelity, animatable 3D Gaussian avatar — all without multi-camera rigs, heavy Python dependencies, or unsafe code.
Traditional avatar reconstruction (e.g., Gaussian Splatting papers, NeRF-based methods) typically relies on multi-camera capture rigs, heavy Python dependency stacks, and cloud-hosted processing.
OxiGAF changes the game.
The core innovation combines three pillars:
- **Multi-View Diffusion** (`oxigaf-diffusion`)
- **Differentiable Gaussian Splatting** (`oxigaf-render`)
- **FLAME Parametric Binding** (`oxigaf-flame`)
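To give a flavor of what the rendering pillar does, here is a minimal, self-contained sketch of the front-to-back alpha compositing at the heart of Gaussian Splatting. All names (`Splat`, `composite`) are illustrative, not OxiGAF's actual API:

```rust
/// One Gaussian's contribution at a single pixel (illustrative, not OxiGAF's types).
struct Splat {
    depth: f32,     // camera-space depth, used for front-to-back sorting
    alpha: f32,     // opacity * Gaussian falloff at this pixel, in [0, 1]
    color: [f32; 3],
}

/// Front-to-back compositing: C = Σ_i c_i · α_i · Π_{j<i} (1 − α_j).
fn composite(mut splats: Vec<Splat>) -> [f32; 3] {
    splats.sort_by(|a, b| a.depth.total_cmp(&b.depth));
    let mut color = [0.0f32; 3];
    let mut transmittance = 1.0f32;
    for s in &splats {
        let w = s.alpha * transmittance;
        for c in 0..3 {
            color[c] += s.color[c] * w;
        }
        transmittance *= 1.0 - s.alpha;
        if transmittance < 1e-4 {
            break; // early termination once the pixel is effectively opaque
        }
    }
    color
}

fn main() {
    let splats = vec![
        Splat { depth: 2.0, alpha: 0.5, color: [0.0, 0.0, 1.0] }, // blue, behind
        Splat { depth: 1.0, alpha: 0.5, color: [1.0, 0.0, 0.0] }, // red, in front
    ];
    // Red contributes 0.5; blue contributes 0.5 * (1 - 0.5) = 0.25.
    println!("{:?}", composite(splats)); // [0.5, 0.0, 0.25]
}
```

Because every step here is a smooth arithmetic operation (sorting aside), the same blend is what makes the renderer differentiable with respect to each Gaussian's color and opacity.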
These pillars are tied together by `oxigaf-bridge`. The full training loop verifies gradients end-to-end, ensuring the entire pipeline is differentiable and optimizable.
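The usual way to verify gradients end-to-end is to compare analytic derivatives against central finite differences. The sketch below does this for a toy loss (squared error on a sigmoid opacity logit); the loss and tolerance are assumptions for illustration, not OxiGAF's actual objective:

```rust
/// Toy loss: L(x) = (sigmoid(x) − target)², like optimizing an opacity logit.
fn loss(x: f64, target: f64) -> f64 {
    let s = 1.0 / (1.0 + (-x).exp());
    (s - target).powi(2)
}

/// Analytic gradient dL/dx = 2(s − target) · s(1 − s).
fn grad(x: f64, target: f64) -> f64 {
    let s = 1.0 / (1.0 + (-x).exp());
    2.0 * (s - target) * s * (1.0 - s)
}

/// Central finite difference: (L(x + h) − L(x − h)) / 2h.
fn grad_fd(x: f64, target: f64, h: f64) -> f64 {
    (loss(x + h, target) - loss(x - h, target)) / (2.0 * h)
}

fn main() {
    let (x, target) = (0.3, 0.8);
    let analytic = grad(x, target);
    let numeric = grad_fd(x, target, 1e-5);
    assert!(
        (analytic - numeric).abs() < 1e-8,
        "gradient check failed: {analytic} vs {numeric}"
    );
    println!("analytic {analytic:.8} matches numeric {numeric:.8}");
}
```

A production pipeline runs this same comparison over every parameter group (positions, scales, rotations, colors, FLAME coefficients) to confirm nothing breaks the chain rule.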
Optional feature flags: `cuda`, `metal`, `simd`, `flash_attention`, `mixed_precision`. No `unwrap()` in critical paths.

OxiGAF is now the avatar reconstruction layer for the entire COOLJAPAN stack.
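The feature-flag names above come from the release notes; the exact crate layout below is an assumption, shown only to illustrate how such flags would typically be enabled in a dependent project:

```toml
# Hypothetical Cargo.toml fragment — crate name and version from the release,
# but the feature wiring is an illustrative sketch, not confirmed API.
[dependencies]
oxigaf = { version = "0.1", features = ["metal", "simd", "mixed_precision"] }
```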
Repository: https://github.com/cool-japan/oxigaf
Star the repo if you want digital humans that are fast, safe, and truly sovereign.
The era of “upload to a cloud service for avatar reconstruction” is over.
Pure Rust Gaussian avatars are here — and they run everywhere.
— KitaSan at COOLJAPAN OÜ
February 24, 2026