Awesome-Controllable-T2I-Diffusion-Models

A survey of research progress on controllable text-to-image diffusion models

This project collects cutting-edge research on controllable generation with text-to-image diffusion models. It covers personalization, spatial control, advanced text-conditioned generation, and other directions, and also summarizes multi-condition and universal controllable generation methods. The repository offers researchers and developers a comprehensive view of the latest advances in controllable T2I diffusion models, helping to advance the field.



Awesome Controllable T2I Diffusion Models


We focus on how to control text-to-image diffusion models with novel conditions.

For more detailed information, please refer to our survey paper: Controllable Generation with Text-to-Image Diffusion Models: A Survey


💖 Citation

If you find value in our survey paper or curated collection, please consider citing our work and starring our repo to support us.

@article{cao2024controllable,
  title={Controllable Generation with Text-to-Image Diffusion Models: A Survey},
  author={Pu Cao and Feng Zhou and Qing Song and Lu Yang},
  journal={arXiv preprint arXiv:2403.04279},
  year={2024}
}

🎁 How to contribute to this repository?

The content below is generated from our database. To add a new paper, please open an issue with the following information so that we can add it to the database (please do not submit a PR directly):

1. Paper title
2. arXiv ID (if any)
3. Publication status (if any)
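For example, a new-paper issue body might look like the following (the title, arXiv ID, and venue here are placeholders, not a real submission):

```
Paper title: <full paper title>
arXiv ID: 24xx.xxxxx (omit if not on arXiv)
Publication status: CVPR 2024 / preprint (omit if unknown)
```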

🌈 Contents

🚀Generation with Specific Condition

🍇Personalization

🍉Subject-Driven Generation

PartCraft: Crafting Creative Objects by Parts.
Kam Woh Ng, Xiatian Zhu, Yi-Zhe Song, Tao Xiang.
ECCV 2024. [PDF]

ClassDiffusion: More Aligned Personalization Tuning with Explicit Class Guidance.
Jiannan Huang, Jun Hao Liew, Hanshu Yan, Yuyang Yin, Yao Zhao, Yunchao Wei.
arXiv 2024. [PDF]

Personalized Residuals for Concept-Driven Text-to-Image Generation.
Cusuh Ham, Matthew Fisher, James Hays, Nicholas Kolkin, Yuchen Liu, Richard Zhang, Tobias Hinz.
arXiv 2024. [PDF]

Improving Subject-Driven Image Synthesis with Subject-Agnostic Guidance.
Kelvin C. K. Chan, Yang Zhao, Xuhui Jia, Ming-Hsuan Yang, Huisheng Wang.
arXiv 2024. [PDF]

MMTryon: Multi-Modal Multi-Reference Control for High-Quality Fashion Generation.
Xujie Zhang, Ente Lin, Xiu Li, Yuxuan Luo, Michael Kampffmeyer, Xin Dong, Xiaodan Liang.
arXiv 2024. [PDF]

Infusion: Preventing Customized Text-to-Image Diffusion from Overfitting.
Weili Zeng, Yichao Yan, Qi Zhu, Zhuo Chen, Pengzhi Chu, Weiming Zhao, Xiaokang Yang.
arXiv 2024. [PDF]

CAT: Contrastive Adapter Training for Personalized Image Generation.
Jae Wan Park, Sang Hyun Park, Jun Young Koh, Junha Lee, Min Song.
arXiv 2024. [PDF]

MoMA: Multimodal LLM Adapter for Fast Personalized Image Generation.
Kunpeng Song, Yizhe Zhu, Bingchen Liu, Qing Yan, Ahmed Elgammal, Xiao Yang.
arXiv 2024. [PDF]

U-VAP: User-specified Visual Appearance Personalization via Decoupled Self Augmentation.
You Wu, Kean Liu, Xiaoyue Mi, Fan Tang, Juan Cao, Jintao Li.
arXiv 2024. [PDF]

Automated Black-box Prompt Engineering for Personalized Text-to-Image Generation.
Yutong He, Alexander Robey, Naoki Murata, Yiding Jiang, Joshua Williams, George J. Pappas, Hamed Hassani, Yuki Mitsufuji, Ruslan Salakhutdinov, J. Zico Kolter.
arXiv 2024. [PDF]

Attention Calibration for Disentangled Text-to-Image Personalization.
Yanbing Zhang, Mengping Yang, Qin Zhou, Zhe Wang.
arXiv 2024. [PDF]

Selectively Informative Description can Reduce Undesired Embedding Entanglements in Text-to-Image Personalization.
Jimyeong Kim, Jungwon Park, Wonjong Rhee.
arXiv 2024. [PDF]

MM-Diff: High-Fidelity Image Personalization via Multi-Modal Condition Integration.
Zhichao Wei, Qingkun Su, Long Qin, Weizhi Wang.
arXiv 2024. [PDF]

Generative Active Learning for Image Synthesis Personalization.
Xulu Zhang, Wengyu Zhang, Xiao-Yong Wei, Jinlin Wu, Zhaoxiang Zhang, Zhen Lei, Qing Li.
arXiv 2024. [PDF]

Harmonizing Visual and Textual Embeddings for Zero-Shot Text-to-Image Customization.
Yeji Song, Jimyeong Kim, Wonhark Park, Wonsik Shin, Wonjong Rhee, Nojun Kwak.
arXiv 2024. [PDF]

Tuning-Free Image Customization with Image and Text Guidance.
Pengzhi Li, Qiang Nie, Ying Chen, Xi Jiang, Kai Wu, Yuhuan Lin, Yong Liu, Jinlong Peng, Chengjie Wang, Feng Zheng.
arXiv 2024. [PDF]

Fast Personalized Text-to-Image Syntheses With Attention Injection.
Yuxuan Zhang, Yiren Song, Jinpeng Yu, Han Pan, Zhongliang Jing.
arXiv 2024. [PDF]

OMG: Occlusion-friendly Personalized Multi-concept Generation in Diffusion Models.
Zhe Kong, Yong Zhang, Tianyu Yang, Tao Wang, Kaihao Zhang, Bizhu Wu, Guanying Chen, Wei Liu, Wenhan Luo.
arXiv 2024. [PDF]

StableGarment: Garment-Centric Generation via Stable Diffusion.
Rui Wang, Hailong Guo, Jiaming Liu, Huaxia Li, Haibo Zhao, Xu Tang, Yao Hu, Hao Tang, Peipei Li.
arXiv 2024. [PDF]

Block-wise LoRA: Revisiting Fine-grained LoRA for Effective Personalization and Stylization in Text-to-Image Generation.
Likun Li, Haoqi Zeng, Changpeng Yang, Haozhe Jia, Di Xu.
arXiv 2024. [PDF]

FaceChain-SuDe: Building Derived Class to Inherit Category Attributes for One-shot Subject-Driven Generation.
Pengchong Qiao, Lei Shang, Chang Liu, Baigui Sun, Xiangyang Ji, Jie Chen.
arXiv 2024. [PDF]

RealCustom: Narrowing Real Text Word for Real-Time Open-Domain Text-to-Image Customization.
Mengqi Huang, Zhendong Mao, Mingcong Liu, Qian He, Yongdong Zhang.
arXiv 2024. [PDF]

DiffuseKronA: A Parameter Efficient Fine-tuning Method for Personalized Diffusion Models.
Shyam Marjit, Harshit Singh, Nityanand Mathur, Sayak Paul, Chia-Mu Yu, Pin-Yu Chen.
arXiv 2024. [PDF]

Direct Consistency Optimization for Compositional Text-to-Image Personalization.
Kyungmin Lee, Sangkyung Kwak, Kihyuk Sohn, Jinwoo Shin.
arXiv 2024. [PDF]

ComFusion: Personalized Subject Generation in Multiple Specific Scenes From Single Image.
Yan Hong, Jianfu Zhang.
arXiv 2024. [PDF]

Visual Concept-driven Image Generation with Text-to-Image Diffusion Model.
Tanzila Rahman, Shweta Mahajan, Hsin-Ying Lee, Jian Ren, Sergey Tulyakov, Leonid Sigal.
arXiv 2024. [PDF]

Textual Localization: Decomposing Multi-concept Images for Subject-Driven Text-to-Image Generation.
Junjie Shentu, Matthew Watson, Noura Al Moubayed.
arXiv 2024. [PDF]

DreamMatcher: Appearance Matching Self-Attention for Semantically-Consistent Text-to-Image Personalization.
Jisu Nam, Heesu Kim, DongJae Lee, Siyoon Jin, Seungryong Kim, Seunggyu Chang.
CVPR 2024. [PDF]

SeFi-IDE: Semantic-Fidelity Identity Embedding for Personalized Diffusion-Based Generation.
Yang Li, Songlin Yang, Wei Wang, Jing Dong.
arXiv 2024. [PDF]

Pick-and-Draw: Training-free Semantic Guidance for Text-to-Image Personalization.
Henglei Lv, Jiayu Xiao, Liang Li, Qingming Huang.
arXiv 2024. [PDF]

Object-Driven One-Shot Fine-tuning of Text-to-Image Diffusion with Prototypical Embedding.
Jianxiang Lu, Cong Xie, Hui Guo.
arXiv 2024. [PDF]

BootPIG: Bootstrapping Zero-shot Personalized Image Generation Capabilities in Pretrained Diffusion Models.
Senthil Purushwalkam, Akash Gokul, Shafiq Joty, Nikhil Naik.
arXiv 2024. [PDF]

PALP: Prompt Aligned Personalization of Text-to-Image Models.
Moab Arar, Andrey Voynov, Amir Hertz, Omri Avrahami, Shlomi Fruchter, Yael Pritch, Daniel Cohen-Or, Ariel Shamir.
arXiv 2024. [PDF]

Cross Initialization for Personalized Text-to-Image Generation.
Lianyu Pang, Jian Yin, Haoran Xie, Qiping Wang, Qing Li, Xudong Mao.
CVPR 2024. [PDF]

DreamTuner: Single Image is Enough for Subject-Driven Generation.
Miao Hua, Jiawei Liu, Fei Ding, Wei Liu, Jie Wu, Qian He.
arXiv 2023. [PDF]

Decoupled Textual Embeddings for Customized Image Generation.
Yufei Cai, Yuxiang Wei, Zhilong Ji, Jinfeng Bai, Hu Han, Wangmeng Zuo.
arXiv 2023. [PDF]

Compositional Inversion for Stable Diffusion Models.
Xulu Zhang, Xiao-Yong Wei, Jinlin Wu, Tianyi Zhang, Zhaoxiang Zhang, Zhen Lei, Qing Li.
AAAI 2024. [PDF]

Customization Assistant for Text-to-image Generation.
Yufan Zhou, Ruiyi Zhang, Jiuxiang Gu, Tong Sun.
CVPR 2024. [PDF]

VideoBooth: Diffusion-based Video Generation with Image Prompts.
Yuming Jiang, Tianxing Wu, Shuai Yang, Chenyang Si, Dahua Lin, Yu Qiao, Chen Change Loy, Ziwei Liu.
arXiv 2023. [PDF]

HiFi Tuner: High-Fidelity Subject-Driven Fine-Tuning for Diffusion Models.
Zhonghao Wang, Wei Wei, Yang Zhao, Zhisheng Xiao, Mark Hasegawa-Johnson, Humphrey Shi, Tingbo Hou.
arXiv 2023. [PDF]

VideoAssembler: Identity-Consistent Video Generation with Reference Entities using Diffusion Model.
Haoyu Zhao, Tianyi Lu, Jiaxi Gu, Xing Zhang, Zuxuan Wu, Hang Xu, Yu-Gang Jiang.
arXiv 2023.
