
Awesome-Controllable-T2I-Diffusion-Models

A survey of research progress on controllable text-to-image diffusion models

This project collects cutting-edge research on controllable generation with text-to-image diffusion models. It covers personalization, spatial control, advanced text-conditioned generation, and other directions, and also summarizes methods for multi-condition and universal controllable generation. The repository gives researchers and developers a comprehensive resource for following the latest progress in controllable T2I diffusion models and aims to help advance the field.



Awesome Controllable T2I Diffusion Models


We focus on how to control text-to-image diffusion models with novel conditions.

For more detailed information, please refer to our survey paper: Controllable Generation with Text-to-Image Diffusion Models: A Survey


💖 Citation

If you find value in our survey paper or curated collection, please consider citing our work and starring our repo to support us.

@article{cao2024controllable,
  title={Controllable Generation with Text-to-Image Diffusion Models: A Survey},
  author={Pu Cao and Feng Zhou and Qing Song and Lu Yang},
  journal={arXiv preprint arXiv:2403.04279},
  year={2024}
}

🎁 How to contribute to this repository?

The content below is generated from our database, so please do not submit a PR directly. Instead, open an issue with the following information so we can add the new paper to the database (an example is given after the list).

1. Paper title
2. arXiv ID (if any)
3. Publication status (if any)
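
For example, an issue for a new paper could look like this (an illustrative entry only; substitute the details of the paper you want added):

1. Paper title: DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation
2. arXiv ID: 2208.12242
3. Publication status: CVPR 2023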

🌈 Contents

🚀Generation with Specific Condition

🍇Personalization

🍉Subject-Driven Generation

PartCraft: Crafting Creative Objects by Parts.
Kam Woh Ng, Xiatian Zhu, Yi-Zhe Song, Tao Xiang.
ECCV 2024. [PDF]

ClassDiffusion: More Aligned Personalization Tuning with Explicit Class Guidance.
Jiannan Huang, Jun Hao Liew, Hanshu Yan, Yuyang Yin, Yao Zhao, Yunchao Wei.
arXiv 2024. [PDF]

Personalized Residuals for Concept-Driven Text-to-Image Generation.
Cusuh Ham, Matthew Fisher, James Hays, Nicholas Kolkin, Yuchen Liu, Richard Zhang, Tobias Hinz.
arXiv 2024. [PDF]

Improving Subject-Driven Image Synthesis with Subject-Agnostic Guidance.
Kelvin C. K. Chan, Yang Zhao, Xuhui Jia, Ming-Hsuan Yang, Huisheng Wang.
arXiv 2024. [PDF]

MMTryon: Multi-Modal Multi-Reference Control for High-Quality Fashion Generation.
Xujie Zhang, Ente Lin, Xiu Li, Yuxuan Luo, Michael Kampffmeyer, Xin Dong, Xiaodan Liang.
arXiv 2024. [PDF]

Infusion: Preventing Customized Text-to-Image Diffusion from Overfitting.
Weili Zeng, Yichao Yan, Qi Zhu, Zhuo Chen, Pengzhi Chu, Weiming Zhao, Xiaokang Yang.
arXiv 2024. [PDF]

CAT: Contrastive Adapter Training for Personalized Image Generation.
Jae Wan Park, Sang Hyun Park, Jun Young Koh, Junha Lee, Min Song.
arXiv 2024. [PDF]

MoMA: Multimodal LLM Adapter for Fast Personalized Image Generation.
Kunpeng Song, Yizhe Zhu, Bingchen Liu, Qing Yan, Ahmed Elgammal, Xiao Yang.
arXiv 2024. [PDF]

U-VAP: User-specified Visual Appearance Personalization via Decoupled Self Augmentation.
You Wu, Kean Liu, Xiaoyue Mi, Fan Tang, Juan Cao, Jintao Li.
arXiv 2024. [PDF]

Automated Black-box Prompt Engineering for Personalized Text-to-Image Generation.
Yutong He, Alexander Robey, Naoki Murata, Yiding Jiang, Joshua Williams, George J. Pappas, Hamed Hassani, Yuki Mitsufuji, Ruslan Salakhutdinov, J. Zico Kolter.
arXiv 2024. [PDF]

Attention Calibration for Disentangled Text-to-Image Personalization.
Yanbing Zhang, Mengping Yang, Qin Zhou, Zhe Wang.
arXiv 2024. [PDF]

Selectively Informative Description can Reduce Undesired Embedding Entanglements in Text-to-Image Personalization.
Jimyeong Kim, Jungwon Park, Wonjong Rhee.
arXiv 2024. [PDF]

MM-Diff: High-Fidelity Image Personalization via Multi-Modal Condition Integration.
Zhichao Wei, Qingkun Su, Long Qin, Weizhi Wang.
arXiv 2024. [PDF]

Generative Active Learning for Image Synthesis Personalization.
Xulu Zhang, Wengyu Zhang, Xiao-Yong Wei, Jinlin Wu, Zhaoxiang Zhang, Zhen Lei, Qing Li.
arXiv 2024. [PDF]

Harmonizing Visual and Textual Embeddings for Zero-Shot Text-to-Image Customization.
Yeji Song, Jimyeong Kim, Wonhark Park, Wonsik Shin, Wonjong Rhee, Nojun Kwak.
arXiv 2024. [PDF]

Tuning-Free Image Customization with Image and Text Guidance.
Pengzhi Li, Qiang Nie, Ying Chen, Xi Jiang, Kai Wu, Yuhuan Lin, Yong Liu, Jinlong Peng, Chengjie Wang, Feng Zheng.
arXiv 2024. [PDF]

Fast Personalized Text-to-Image Syntheses With Attention Injection.
Yuxuan Zhang, Yiren Song, Jinpeng Yu, Han Pan, Zhongliang Jing.
arXiv 2024. [PDF]

OMG: Occlusion-friendly Personalized Multi-concept Generation in Diffusion Models.
Zhe Kong, Yong Zhang, Tianyu Yang, Tao Wang, Kaihao Zhang, Bizhu Wu, Guanying Chen, Wei Liu, Wenhan Luo.
arXiv 2024. [PDF]

StableGarment: Garment-Centric Generation via Stable Diffusion.
Rui Wang, Hailong Guo, Jiaming Liu, Huaxia Li, Haibo Zhao, Xu Tang, Yao Hu, Hao Tang, Peipei Li.
arXiv 2024. [PDF]

Block-wise LoRA: Revisiting Fine-grained LoRA for Effective Personalization and Stylization in Text-to-Image Generation.
Likun Li, Haoqi Zeng, Changpeng Yang, Haozhe Jia, Di Xu.
arXiv 2024. [PDF]

FaceChain-SuDe: Building Derived Class to Inherit Category Attributes for One-shot Subject-Driven Generation.
Pengchong Qiao, Lei Shang, Chang Liu, Baigui Sun, Xiangyang Ji, Jie Chen.
arXiv 2024. [PDF]

RealCustom: Narrowing Real Text Word for Real-Time Open-Domain Text-to-Image Customization.
Mengqi Huang, Zhendong Mao, Mingcong Liu, Qian He, Yongdong Zhang.
arXiv 2024. [PDF]

DiffuseKronA: A Parameter Efficient Fine-tuning Method for Personalized Diffusion Models.
Shyam Marjit, Harshit Singh, Nityanand Mathur, Sayak Paul, Chia-Mu Yu, Pin-Yu Chen.
arXiv 2024. [PDF]

Direct Consistency Optimization for Compositional Text-to-Image Personalization.
Kyungmin Lee, Sangkyung Kwak, Kihyuk Sohn, Jinwoo Shin.
arXiv 2024. [PDF]

ComFusion: Personalized Subject Generation in Multiple Specific Scenes From Single Image.
Yan Hong, Jianfu Zhang.
arXiv 2024. [PDF]

Visual Concept-driven Image Generation with Text-to-Image Diffusion Model.
Tanzila Rahman, Shweta Mahajan, Hsin-Ying Lee, Jian Ren, Sergey Tulyakov, Leonid Sigal.
arXiv 2024. [PDF]

Textual Localization: Decomposing Multi-concept Images for Subject-Driven Text-to-Image Generation.
Junjie Shentu, Matthew Watson, Noura Al Moubayed.
arXiv 2024. [PDF]

DreamMatcher: Appearance Matching Self-Attention for Semantically-Consistent Text-to-Image Personalization.
Jisu Nam, Heesu Kim, DongJae Lee, Siyoon Jin, Seungryong Kim, Seunggyu Chang.
CVPR 2024. [PDF]

SeFi-IDE: Semantic-Fidelity Identity Embedding for Personalized Diffusion-Based Generation.
Yang Li, Songlin Yang, Wei Wang, Jing Dong.
arXiv 2024. [PDF]

Pick-and-Draw: Training-free Semantic Guidance for Text-to-Image Personalization.
Henglei Lv, Jiayu Xiao, Liang Li, Qingming Huang.
arXiv 2024. [PDF]

Object-Driven One-Shot Fine-tuning of Text-to-Image Diffusion with Prototypical Embedding.
Jianxiang Lu, Cong Xie, Hui Guo.
arXiv 2024. [PDF]

BootPIG: Bootstrapping Zero-shot Personalized Image Generation Capabilities in Pretrained Diffusion Models.
Senthil Purushwalkam, Akash Gokul, Shafiq Joty, Nikhil Naik.
arXiv 2024. [PDF]

PALP: Prompt Aligned Personalization of Text-to-Image Models.
Moab Arar, Andrey Voynov, Amir Hertz, Omri Avrahami, Shlomi Fruchter, Yael Pritch, Daniel Cohen-Or, Ariel Shamir.
arXiv 2024. [PDF]

Cross Initialization for Personalized Text-to-Image Generation.
Lianyu Pang, Jian Yin, Haoran Xie, Qiping Wang, Qing Li, Xudong Mao.
CVPR 2024. [PDF]

DreamTuner: Single Image is Enough for Subject-Driven Generation.
Miao Hua, Jiawei Liu, Fei Ding, Wei Liu, Jie Wu, Qian He.
arXiv 2023. [PDF]

Decoupled Textual Embeddings for Customized Image Generation.
Yufei Cai, Yuxiang Wei, Zhilong Ji, Jinfeng Bai, Hu Han, Wangmeng Zuo.
arXiv 2023. [PDF]

Compositional Inversion for Stable Diffusion Models.
Xulu Zhang, Xiao-Yong Wei, Jinlin Wu, Tianyi Zhang, Zhaoxiang Zhang, Zhen Lei, Qing Li.
AAAI 2024. [PDF]

Customization Assistant for Text-to-image Generation.
Yufan Zhou, Ruiyi Zhang, Jiuxiang Gu, Tong Sun.
CVPR 2024. [PDF]

VideoBooth: Diffusion-based Video Generation with Image Prompts.
Yuming Jiang, Tianxing Wu, Shuai Yang, Chenyang Si, Dahua Lin, Yu Qiao, Chen Change Loy, Ziwei Liu.
arXiv 2023. [PDF]

HiFi Tuner: High-Fidelity Subject-Driven Fine-Tuning for Diffusion Models.
Zhonghao Wang, Wei Wei, Yang Zhao, Zhisheng Xiao, Mark Hasegawa-Johnson, Humphrey Shi, Tingbo Hou.
arXiv 2023. [PDF]

VideoAssembler: Identity-Consistent Video Generation with Reference Entities using Diffusion Model.
Haoyu Zhao, Tianyi Lu, Jiaxi Gu, Xing Zhang, Zuxuan Wu, Hang Xu, Yu-Gang Jiang.
arXiv 2023.
