Non-linear sphere tracing for rendering deformed signed distance fields

We tackle the problem of rendering deformed signed distance fields (a) by phrasing sphere tracing in object space (b) as an initial value problem. Under non-linear deformation, the straight deformed-space ray becomes a curve, which we follow via numerical integration (c). We go to great lengths to avoid computing the inverse deformation. This enables us to easily apply many modern deformation techniques to signed distance fields (d).
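Concretely, the initial value problem mentioned above can be written as follows (notation here is ours, for illustration): if $D$ maps undeformed (object) space to deformed (world) space, the undeformed-space curve $\mathbf{x}(t)$ whose image traces the cast ray $\mathbf{o} + t\,\mathbf{d}$ satisfies

```latex
D(\mathbf{x}(t)) = \mathbf{o} + t\,\mathbf{d}
\;\Longrightarrow\;
\mathbf{J}_D(\mathbf{x})\,\frac{d\mathbf{x}}{dt} = \mathbf{d}
\;\Longrightarrow\;
\frac{d\mathbf{x}}{dt} = \mathbf{J}_D(\mathbf{x})^{-1}\,\mathbf{d},
\qquad D(\mathbf{x}(0)) = \mathbf{o},
```

where $\mathbf{J}_D$ is the Jacobian of the forward deformation. Note that only this local linear solve is needed at each step, never the global inverse map $D^{-1}$.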


Signed distance fields (SDFs) are a powerful implicit representation for modeling solids, volumes and surfaces. Their infinite resolution, controllable continuity and robust constructive solid geometry operations, coupled with smooth blending, enable powerful and intuitive sculpting tools for creating complex SDF models. SDF metric properties also admit efficient surface rendering with sphere tracing. Unfortunately, SDFs remain incompatible with many popular direct deformation techniques which re-position a surface via its explicit representation. Linear blend skinning used in character articulation, for example, directly displaces each vertex of a triangle mesh. To overcome this limitation, we propose a variant of sphere tracing for directly rendering deformed SDFs. We show that this problem reduces to integrating a non-linear ordinary differential equation. We propose an efficient numerical solution, with controllable error, which first automatically computes an initial value along each cast ray before walking conservatively along a curved ray in the undeformed space according to the signed distance. Importantly, our approach does not require knowledge, computation or even global existence of the inverse deformation, which allows us to readily apply many existing forward deformations. We demonstrate our method’s effectiveness for interactive rendering of a variety of popular deformation techniques that were, to date, limited to explicit surfaces.
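The marching procedure described above can be sketched in a few lines of code. The following is a minimal, simplified illustration, not the paper's implementation: it uses plain forward Euler (the paper uses an integrator with controllable error), a finite-difference Jacobian, and it assumes the undeformed preimage `x0` of the ray origin is supplied (the paper computes this initial value automatically). The function names `sdf`, `deform`, and the twisted-sphere test scene are our own hypothetical examples.

```python
import numpy as np

def jacobian(deform, x, h=1e-5):
    """Central finite-difference Jacobian of the forward deformation at x."""
    J = np.empty((3, 3))
    for j in range(3):
        e = np.zeros(3)
        e[j] = h
        J[:, j] = (deform(x + e) - deform(x - e)) / (2.0 * h)
    return J

def nonlinear_sphere_trace(sdf, deform, x0, d, eps=1e-3, h_max=0.05, max_steps=500):
    """Walk a curved ray in undeformed space by integrating dx/dt = J^{-1} d.

    x0 is the undeformed preimage of the ray origin (supplied here; the paper
    derives it automatically per ray). Each step length is capped by the signed
    distance, so the curve never pierces the surface; only a local 3x3 linear
    solve is needed, never the global inverse deformation.
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_steps):
        f = sdf(x)
        if f < eps:
            return x                                  # converged to the surface
        v = np.linalg.solve(jacobian(deform, x), d)   # tangent of the curved ray
        h = min(f / np.linalg.norm(v), h_max)         # conservative, capped step
        x = x + h * v                                 # forward Euler step
    return None                                       # ray missed (or budget spent)

# Hypothetical test scene: a unit sphere deformed by a twist about the z-axis.
def sphere_sdf(x):
    return np.linalg.norm(x) - 1.0

def twist(x, k=1.0):
    t = k * x[2]
    c, s = np.cos(t), np.sin(t)
    return np.array([c * x[0] - s * x[1], s * x[0] + c * x[1], x[2]])
```

Because the twist rotates about the sphere's own axis, the deformed surface is still the unit sphere, which makes the hit point easy to sanity-check, while the traced curve in undeformed space is genuinely bent.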


We thank Alex Evans, Milan Grajetzki and the members of the Dartmouth Visual Computing Lab for helpful discussions, and the reviewers for their insightful feedback. The dinosaur and elephant models, created by Inigo Quilez, as well as the human head model, created by Thomas Hooper, are shared under the Creative Commons Attribution-NonCommercial-ShareAlike license. The robot head model was created by Milan Grajetzki. This work was partially supported by the National Science Foundation (Grant 1844538) and the Canada Research Chairs program.

ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia)