The ability to edit 3D scenes—whether through altering appearances, reshaping geometry, or transforming objects—has been a cornerstone of digital content creation. Traditional 3D editing methods, while effective in certain scenarios, often require labor-intensive manual adjustments or struggle with computational efficiency and detail preservation. The advent of radiance field-based methods, particularly Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS), has revolutionized the field by offering scalable, flexible, and high-fidelity solutions for 3D scene representation and editing.
Recently, a research team led by Chenyang Zhu and Xinyao Liu from the National University of Defense Technology (NUDT) published a comprehensive survey on 3D editing techniques based on NeRF and 3DGS in Frontiers of Computer Science. The survey systematically reviews the latest techniques and outlines their key advancements, open challenges, and potential future research directions.
NeRF, which leverages deep neural networks to model complex scenes, and 3DGS, which utilizes a collection of Gaussians for efficient rendering, have bridged the gap between computational efficiency and visual fidelity. These technologies provide a robust foundation for advanced 3D editing techniques.
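As a rough illustration of the rendering principle the survey builds on — not code from the survey itself — the sketch below shows NeRF's core volume-rendering step: per-sample colors and densities along a camera ray are composited into a single pixel color using transmittance-weighted quadrature. The function name and inputs are illustrative assumptions.

```python
import numpy as np

def composite_ray(colors, sigmas, deltas):
    """Minimal NeRF-style volume rendering along one ray.

    colors: (N, 3) RGB predicted at each sample point
    sigmas: (N,)   volume densities at each sample point
    deltas: (N,)   distances between consecutive samples
    """
    # Per-sample opacity from density and step size: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance T_i: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    # Each sample contributes in proportion to T_i * alpha_i
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# Example: an opaque red sample in front of a green one; the red dominates.
colors = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
sigmas = np.array([10.0, 10.0])
deltas = np.array([1.0, 1.0])
pixel = composite_ray(colors, sigmas, deltas)
```

3DGS follows the same alpha-compositing idea, but the per-sample opacities come from projected Gaussians rasterized in depth order rather than from densities sampled along rays, which is what makes its rendering so much faster.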
The survey systematically categorizes 3D editing tasks into five major areas: appearance editing, object transformation, shape deformation, scene inpainting, and creative editing. Each category is thoroughly analyzed, with a focus on the advantages and limitations of existing methods.
Through a comprehensive overview and a forward-looking perspective on advancements, this survey aims to stimulate further innovation in the realm of 3D editing based on radiance fields. It seeks to deepen the scholarly and practical understanding of radiance field-based techniques and inspire the creation of more advanced and user-centric solutions in the field of 3D content manipulation and visualization.
DOI: 10.1007/s11704-025-41176-9