
awesome-conditional-content-generation

This repository contains a collection of resources and papers on Conditional Content Generation, especially human motion generation, image generation, and video generation. This repo is maintained by Haofan Wang.

If you are interested in controllable content generation (2D/3D), would like to establish a broader academic collaboration with me, or are looking for an internship, and you have published at least one paper at a top-tier conference, feel free to email me at haofanwang.ai@gmail.com. Applicants from both academia and industry are welcome.

Contents

Papers

Music-Driven motion generation

Taming Diffusion Models for Music-driven Conducting Motion Generation
NUS, AAAI 2023 Summer Symposium, [Code]

Music-Driven Group Choreography
AIOZ AI, CVPR'23

Discrete Contrastive Diffusion for Cross-Modal and Conditional Generation
Illinois Institute of Technology, ICLR'23, [Code]

Magic: Multi Art Genre Intelligent Choreography Dataset and Network for 3D Dance Generation
Tsinghua University, 7 Dec 2022

Pretrained Diffusion Models for Unified Human Motion Synthesis
DAMO Academy, Alibaba Group, 6 Dec 2022

EDGE: Editable Dance Generation From Music
Stanford University, 19 Nov 2022

You Never Stop Dancing: Non-freezing Dance Generation via Bank-constrained Manifold Projection
MSRA, NeurIPS'22

GroupDancer: Music to Multi-People Dance Synthesis with Style Collaboration
Tsinghua University, ACMMM'22

A Brand New Dance Partner: Music-Conditioned Pluralistic Dancing Controlled by Multiple Dance Genres
Yonsei University, CVPR 2022, [Code]

Bailando: 3D Dance Generation by Actor-Critic GPT with Choreographic Memory
NTU, CVPR 2022 (Oral), [Code]

Dance Style Transfer with Cross-modal Transformer
KTH, 22 Aug 2022, [Upcoming Code]

Music-driven Dance Regeneration with Controllable Key Pose Constraints
Tencent, 8 July 2022

AI Choreographer: Music Conditioned 3D Dance Generation with AIST++
USC, ICCV 2021, [Code]

Text-Driven motion generation

ReMoDiffuse: Retrieval-Augmented Motion Diffusion Model
NTU, CVPR'23, [Code]

GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents
Peking University, CVPR'23

Human Motion Diffusion as a Generative Prior
Anonymous Authors, [Code]

T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations
Tencent AI Lab, 16 Jan 2023, [Code]

Modiff: Action-Conditioned 3D Motion Generation with Denoising Diffusion Probabilistic Models
Beihang University, 10 Jan 2023

Executing your Commands via Motion Diffusion in Latent Space
Tencent, 8 Dec 2022, [Code]

MultiAct: Long-Term 3D Human Motion Generation from Multiple Action Labels
Seoul National University, AAAI 2023 Oral, [Code]

MoFusion: A Framework for Denoising-Diffusion-based Motion Synthesis
Max Planck Institute for Informatics, 8 Dec 2022

UDE: A Unified Driving Engine for Human Motion Generation
Xiaobing Inc, 29 Nov 2022, [Upcoming Code]

MotionBERT: Unified Pretraining for Human Motion Analysis
SenseTime Research, 12 Oct 2022, [Code]

Human Motion Diffusion Model
Tel Aviv University, 3 Oct 2022, [Code]

FLAME: Free-form Language-based Motion Synthesis & Editing
Korea University, 1 Sep 2022

MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model
NTU, 22 Aug 2022, [Code]

TEMOS: Generating diverse human motions from textual descriptions
MPI, ECCV 2022 (Oral), [Code]

GIMO: Gaze-Informed Human Motion Prediction in Context
Stanford University, ECCV 2022, [Code]

MotionCLIP: Exposing Human Motion Generation to CLIP Space
Tel Aviv University, ECCV 2022, [Code]

Generating Diverse and Natural 3D Human Motions from Text
University of Alberta, CVPR 2022, [Code]

AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars
NTU, SIGGRAPH 2022, [Code]

Text2Gestures: A Transformer-Based Network for Generating Emotive Body Gestures for Virtual Agents
University of Maryland, VR 2021, [Code]

Audio-Driven motion generation

For more recent papers, see here

Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation
NTU, CVPR'23, [Code]

GeneFace: Generalized and High-Fidelity Audio-Driven 3D Talking Face Synthesis
Zhejiang University, ICLR'23, [Code]

DiffMotion: Speech-Driven Gesture Synthesis Using Denoising Diffusion Model
Macau University of Science and Technology, 24 Jan 2023

DiffTalk: Crafting Diffusion Models for Generalized Talking Head Synthesis
Tsinghua University, 10 Jan 2023

Diffused Heads: Diffusion Models Beat GANs on Talking-Face Generation
University of Wrocław, 6 Jan 2023, [Upcoming Code]

Generating Holistic 3D Human Motion from Speech
Max Planck Institute for Intelligent Systems, 8 Dec 2022

Audio-Driven Co-Speech Gesture Video Generation
NTU, 5 Dec 2022

Listen, denoise, action! Audio-driven motion synthesis with diffusion models
KTH Royal Institute of Technology, 17 Nov 2022

ZeroEGGS: Zero-shot Example-based Gesture Generation from Speech
York University, 23 Sep 2022, [Code]

BEAT: A Large-Scale Semantic and Emotional Multi-Modal Dataset for Conversational Gestures Synthesis
The University of Tokyo, ECCV 2022, [Code]

EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model
Nanjing University, SIGGRAPH 2022, [Code]

Learning Hierarchical Cross-Modal Association for Co-Speech Gesture Generation
The Chinese University of Hong Kong, CVPR 2022, [Code]

SEEG: Semantic Energized Co-speech Gesture Generation
Alibaba DAMO Academy, CVPR 2022, [Code]

FaceFormer: Speech-Driven 3D Facial Animation with Transformers
The University of Hong Kong, CVPR 2022, [Code]

Freeform Body Motion Generation from Speech
JD AI Research, 4 Mar 2022, [Code]

Audio2Gestures: Generating Diverse Gestures from Speech Audio with Conditional Variational Autoencoders
Tencent AI Lab, ICCV 2021, [Code]

Learning Speech-driven 3D Conversational Gestures from Video
Max Planck Institute for Informatics, IVA 2021, [Code]

Learning Individual Styles of Conversational Gesture
UC Berkeley, CVPR 2019, [Code]

Human motion prediction

For more recent work, see here

InterDiff: Generating 3D Human-Object Interactions with Physics-Informed Diffusion
UIUC, ICCV 2023, [Code]

Stochastic Multi-Person 3D Motion Forecasting
UIUC, ICLR 2023 (Spotlight), [Code]

HumanMAC: Masked Motion Completion for Human Motion Prediction
Tsinghua University, ICCV 2023, [Code]

BeLFusion: Latent Diffusion for Behavior-Driven Human Motion Prediction
University of Barcelona, 25 Nov 2022, [Upcoming Code]

Diverse Human Motion Prediction Guided by Multi-Level Spatial-Temporal Anchors
UIUC, ECCV 2022 (Oral), [Code]

PoseGPT: Quantization-based 3D Human Motion Generation and Forecasting
NAVER LABS, ECCV 2022, [Code]

NeMF: Neural Motion Fields for Kinematic Animation
Yale University, NeurIPS 2022 (Spotlight), [Code]

Multi-Person Extreme Motion Prediction
Inria, CVPR 2022, [Code]

MotionMixer: MLP-based 3D Human Body Pose Forecasting
Mercedes-Benz, IJCAI 2022 (Oral), [Code]

Multi-Person 3D Motion Prediction with Multi-Range Transformers
UCSD, NeurIPS 2021

Motion Applications

MIME: Human-Aware 3D Scene Generation
MPI

Scene Synthesis from Human Motion
Stanford University, SIGGRAPH Asia 2022, [Code]

TEACH: Temporal Action Compositions for 3D Humans
MPI, 3DV 2022
