# Awesome-AIGC-3D
A curated list of awesome AIGC 3D papers, inspired by awesome-NeRF.
How to submit a pull request?
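Contributions are welcome: add a new paper as a single list item under the matching section. A minimal sketch of the one-line entry format this list already uses (the title, author, and year below are placeholders, not a real paper):

```markdown
- Paper Title: An Optional Subtitle, FirstAuthor et al., arxiv 2024 | github | bibtex
```

Here `github` and `bibtex` are meant to carry links to the official code release and the citation entry, respectively.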
## Table of Contents
- Survey
- Papers
  - 3D Native Generative Methods
    - Object
    - Scene
## Survey
- Deep Generative Models on 3D Representations: A Survey, Shi et al., arxiv 2022 | bibtex
- Generative AI meets 3D: A Survey on Text-to-3D in AIGC Era, Li et al., arxiv 2023 | bibtex
- AI-Generated Content (AIGC) for Various Data Modalities: A Survey, Foo et al., arxiv 2023 | bibtex
- Advances in 3D Generation: A Survey, Li et al., arxiv 2024 | bibtex
- A Comprehensive Survey on 3D Content Generation, Liu et al., arxiv 2024 | bibtex
- Geometric Constraints in Deep Learning Frameworks: A Survey, Vats et al., arxiv 2024 | bibtex
## Papers
### 3D Native Generative Methods
#### Object
- Text2Shape: Generating Shapes from Natural Language by Learning Joint Embeddings, Chen et al., ACCV 2018 | github | bibtex
- ShapeCrafter: A Recursive Text-Conditioned 3D Shape Generation Model, Fu et al., NeurIPS 2022 | github | bibtex
- GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images, Gao et al., NeurIPS 2022 | github | bibtex
- LION: Latent Point Diffusion Models for 3D Shape Generation, Zeng et al., NeurIPS 2022 | github | bibtex
- Diffusion-SDF: Conditional Generative Modeling of Signed Distance Functions, Chou et al., ICCV 2023 | github | bibtex
- MagicPony: Learning Articulated 3D Animals in the Wild, Wu et al., CVPR 2023 | github | bibtex
- DiffRF: Rendering-guided 3D Radiance Field Diffusion, Müller et al., CVPR 2023 | bibtex
- SDFusion: Multimodal 3D Shape Completion, Reconstruction, and Generation, Cheng et al., CVPR 2023 | github | bibtex
- Point-E: A System for Generating 3D Point Clouds from Complex Prompts, Nichol et al., arxiv 2022 | github | bibtex
- 3DShape2VecSet: A 3D Shape Representation for Neural Fields and Generative Diffusion Models, Zhang et al., TOG 2023 | github | bibtex
- 3DGen: Triplane Latent Diffusion for Textured Mesh Generation, Gupta et al., arxiv 2023 | bibtex
- MeshDiffusion: Score-based Generative 3D Mesh Modeling, Liu et al., ICLR 2023 | github | bibtex
- HoloDiffusion: Training a 3D Diffusion Model using 2D Images, Karnewar et al., CVPR 2023 | github | bibtex
- HyperDiffusion: Generating Implicit Neural Fields with Weight-Space Diffusion, Erkoç et al., ICCV 2023 | github | bibtex
- Shap-E: Generating Conditional 3D Implicit Functions, Jun et al., arxiv 2023 | github | bibtex
- LAS-Diffusion: Locally Attentional SDF Diffusion for Controllable 3D Shape Generation, Zheng et al., TOG 2023 | github | bibtex
- Michelangelo: Conditional 3D Shape Generation based on Shape-Image-Text Aligned Latent Representation, Zhao et al., NeurIPS 2023 | github | bibtex
- DiffComplete: Diffusion-based Generative 3D Shape Completion, Chu et al., NeurIPS 2023 | bibtex
- DiT-3D: Exploring Plain Diffusion Transformers for 3D Shape Generation, Mo et al., arxiv 2023 | github | bibtex
- 3D VADER - AutoDecoding Latent 3D Diffusion Models, Ntavelis et al., arxiv 2023 | github | bibtex
- Large-Vocabulary 3D Diffusion Model with Transformer, Cao et al., ICLR 2024 | github | bibtex
- TextField3D: Towards Enhancing Open-Vocabulary 3D Generation with Noisy Text Fields, Huang et al., ICLR 2024 | bibtex
- HyperFields: Towards Zero-Shot Generation of NeRFs from Text, Babu et al., arxiv 2023 | github | bibtex
- LRM: Large Reconstruction Model for Single Image to 3D, Hong et al., ICLR 2024 | bibtex
- DMV3D: Denoising Multi-View Diffusion using 3D Large Reconstruction Model, Xu et al., ICLR 2024 | bibtex
- WildFusion: Learning 3D-Aware Latent Diffusion Models in View Space, Schwarz et al., ICLR 2024 | bibtex
- Functional Diffusion, Zhang et al., CVPR 2024 | github | bibtex
- MeshGPT: Generating Triangle Meshes with Decoder-Only Transformers, Siddiqui et al., arxiv 2023 | github | bibtex
- SPiC·E: Structural Priors in 3D Diffusion Models using Cross-Entity Attention, Sella et al., arxiv 2023 | github | bibtex
- ZeroRF: Fast Sparse View 360° Reconstruction with Zero Pretraining, Shi et al., arxiv 2023 | github | bibtex
- Learning the 3D Fauna of the Web, Li et al., arxiv 2024 | bibtex
- Pushing Auto-regressive Models for 3D Shape Generation at Capacity and Scalability, Qian et al., arxiv 2024 | github | bibtex
- LN3Diff: Scalable Latent Neural Fields Diffusion for Speedy 3D Generation, Lan et al., arxiv 2024 | github | bibtex
- GRM: Large Gaussian Reconstruction Model for Efficient 3D Reconstruction and Generation, Xu et al., arxiv 2024 | github | bibtex
- Lift3D: Zero-Shot Lifting of Any 2D Vision Model to 3D, Varma T et al., CVPR 2024 | github | bibtex
- MeshLRM: Large Reconstruction Model for High-Quality Meshes, Wei et al., arxiv 2024 | bibtex
- Interactive3D🪄: Create What You Want by Interactive 3D Generation, Dong et al., CVPR 2024 | github | bibtex
- BrepGen: A B-rep Generative Diffusion Model with Structured Latent Geometry, Xu et al., SIGGRAPH 2024 | github | bibtex
- Direct3D: Scalable Image-to-3D Generation via 3D Latent Diffusion Transformer, Wu et al., arxiv 2024 | bibtex
- MeshXL: Neural Coordinate Field for Generative 3D Foundation Models, Chen et al., arxiv 2024 | github | bibtex
- MeshAnything: Artist-Created Mesh Generation with Autoregressive Transformers, Chen et al., arxiv 2024 | github | bibtex
- CLAY: A Controllable Large-scale Generative Model for Creating High-quality 3D Assets, Zhang et al., TOG 2024 | github | bibtex
- L4GM: Large 4D Gaussian Reconstruction Model, Ren et al., arxiv 2024 | bibtex
- LaRa: Efficient Large-Baseline Radiance Fields (a feed-forward 2DGS model), Chen et al., ECCV 2024 | github | bibtex
- MeshAnything V2: Artist-Created Mesh Generation With Adjacent Mesh Tokenization, Chen et al., arxiv 2024 | github | bibtex
- SF3D: Stable Fast 3D Mesh Reconstruction with UV-unwrapping and Illumination Disentanglement, Boss et al., arxiv 2024 | github | bibtex
#### Scene
- GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis, Schwarz et al., NeurIPS 2020 | github | bibtex
- ATISS: Autoregressive Transformers for Indoor Scene Synthesis, Paschalidou et al., NeurIPS 2021 | github | bibtex