Awesome Controllable T2I Diffusion Models
We focus on how to control text-to-image diffusion models with novel conditions.
For more detailed information, please refer to our survey paper: Controllable Generation with Text-to-Image Diffusion Models: A Survey
💖 Citation
If you find value in our survey paper or curated collection, please consider citing our work and starring our repo to support us.
@article{cao2024controllable,
  title={Controllable Generation with Text-to-Image Diffusion Models: A Survey},
  author={Pu Cao and Feng Zhou and Qing Song and Lu Yang},
  journal={arXiv preprint arXiv:2403.04279},
  year={2024}
}
🎁 How to contribute to this repository?
The list below is generated from our database. To add a new paper, please open an issue with the following information so we can add it to the database (please do not submit a PR directly); see the example after this list.
1. Paper title
2. arXiv ID (if any)
3. Publication status (if any)
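As a rough sketch, an issue for a new paper could simply list these three items; the values below are placeholders, not a real submission:

```
Paper title: <full title of the paper>
arXiv ID: <e.g., 2403.xxxxx, if available>
Publication status: <e.g., CVPR 2024 / arXiv preprint, if available>
```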
🌈 Contents
- Generation with Specific Condition
- Generation with Multiple Conditions
- Universal Controllable Generation
🚀 Generation with Specific Condition
🍇 Personalization
🍉 Subject-Driven Generation
PartCraft: Crafting Creative Objects by Parts.
Kam Woh Ng, Xiatian Zhu, Yi-Zhe Song, Tao Xiang.
ECCV 2024. [PDF]
ClassDiffusion: More Aligned Personalization Tuning with Explicit Class Guidance.
Jiannan Huang, Jun Hao Liew, Hanshu Yan, Yuyang Yin, Yao Zhao, Yunchao Wei.
arXiv 2024. [PDF]
Personalized Residuals for Concept-Driven Text-to-Image Generation.
Cusuh Ham, Matthew Fisher, James Hays, Nicholas Kolkin, Yuchen Liu, Richard Zhang, Tobias Hinz.
arXiv 2024. [PDF]
Improving Subject-Driven Image Synthesis with Subject-Agnostic Guidance.
Kelvin C. K. Chan, Yang Zhao, Xuhui Jia, Ming-Hsuan Yang, Huisheng Wang.
arXiv 2024. [PDF]
MMTryon: Multi-Modal Multi-Reference Control for High-Quality Fashion Generation.
Xujie Zhang, Ente Lin, Xiu Li, Yuxuan Luo, Michael Kampffmeyer, Xin Dong, Xiaodan Liang.
arXiv 2024. [PDF]
Infusion: Preventing Customized Text-to-Image Diffusion from Overfitting.
Weili Zeng, Yichao Yan, Qi Zhu, Zhuo Chen, Pengzhi Chu, Weiming Zhao, Xiaokang Yang.
arXiv 2024. [PDF]
CAT: Contrastive Adapter Training for Personalized Image Generation.
Jae Wan Park, Sang Hyun Park, Jun Young Koh, Junha Lee, Min Song.
arXiv 2024. [PDF]
MoMA: Multimodal LLM Adapter for Fast Personalized Image Generation.
Kunpeng Song, Yizhe Zhu, Bingchen Liu, Qing Yan, Ahmed Elgammal, Xiao Yang.
arXiv 2024. [PDF]
U-VAP: User-specified Visual Appearance Personalization via Decoupled Self Augmentation.
You Wu, Kean Liu, Xiaoyue Mi, Fan Tang, Juan Cao, Jintao Li.
arXiv 2024. [PDF]
Automated Black-box Prompt Engineering for Personalized Text-to-Image Generation.
Yutong He, Alexander Robey, Naoki Murata, Yiding Jiang, Joshua Williams, George J. Pappas, Hamed Hassani, Yuki Mitsufuji, Ruslan Salakhutdinov, J. Zico Kolter.
arXiv 2024. [PDF]
Attention Calibration for Disentangled Text-to-Image Personalization.
Yanbing Zhang, Mengping Yang, Qin Zhou, Zhe Wang.
arXiv 2024. [PDF]
Selectively Informative Description can Reduce Undesired Embedding Entanglements in Text-to-Image Personalization.
Jimyeong Kim, Jungwon Park, Wonjong Rhee.
arXiv 2024. [PDF]
MM-Diff: High-Fidelity Image Personalization via Multi-Modal Condition Integration.
Zhichao Wei, Qingkun Su, Long Qin, Weizhi Wang.
arXiv 2024. [PDF]
Generative Active Learning for Image Synthesis Personalization.
Xulu Zhang, Wengyu Zhang, Xiao-Yong Wei, Jinlin Wu, Zhaoxiang Zhang, Zhen Lei, Qing Li.
arXiv 2024. [PDF]
Harmonizing Visual and Textual Embeddings for Zero-Shot Text-to-Image Customization.
Yeji Song, Jimyeong Kim, Wonhark Park, Wonsik Shin, Wonjong Rhee, Nojun Kwak.
arXiv 2024. [PDF]
Tuning-Free Image Customization with Image and Text Guidance.
Pengzhi Li, Qiang Nie, Ying Chen, Xi Jiang, Kai Wu, Yuhuan Lin, Yong Liu, Jinlong Peng, Chengjie Wang, Feng Zheng.
arXiv 2024. [PDF]
Fast Personalized Text-to-Image Syntheses With Attention Injection.
Yuxuan Zhang, Yiren Song, Jinpeng Yu, Han Pan, Zhongliang Jing.
arXiv 2024. [PDF]
OMG: Occlusion-friendly Personalized Multi-concept Generation in Diffusion Models.
Zhe Kong, Yong Zhang, Tianyu Yang, Tao Wang, Kaihao Zhang, Bizhu Wu, Guanying Chen, Wei Liu, Wenhan Luo.
arXiv 2024. [PDF]
StableGarment: Garment-Centric Generation via Stable Diffusion.
Rui Wang, Hailong Guo, Jiaming Liu, Huaxia Li, Haibo Zhao, Xu Tang, Yao Hu, Hao Tang, Peipei Li.
arXiv 2024. [PDF]
Block-wise LoRA: Revisiting Fine-grained LoRA for Effective Personalization and Stylization in Text-to-Image Generation.
Likun Li, Haoqi Zeng, Changpeng Yang, Haozhe Jia, Di Xu.
arXiv 2024. [PDF]
FaceChain-SuDe: Building Derived Class to Inherit Category Attributes for One-shot Subject-Driven Generation.
Pengchong Qiao, Lei Shang, Chang Liu, Baigui Sun, Xiangyang Ji, Jie Chen.
arXiv 2024. [PDF]
RealCustom: Narrowing Real Text Word for Real-Time Open-Domain Text-to-Image Customization.
Mengqi Huang, Zhendong Mao, Mingcong Liu, Qian He, Yongdong Zhang.
arXiv 2024. [PDF]
DiffuseKronA: A Parameter Efficient Fine-tuning Method for Personalized Diffusion Models.
Shyam Marjit, Harshit Singh, Nityanand Mathur, Sayak Paul, Chia-Mu Yu, Pin-Yu Chen.
arXiv 2024. [PDF]
Direct Consistency Optimization for Compositional Text-to-Image Personalization.
Kyungmin Lee, Sangkyung Kwak, Kihyuk Sohn, Jinwoo Shin.
arXiv 2024. [PDF]
ComFusion: Personalized Subject Generation in Multiple Specific Scenes From Single Image.
Yan Hong, Jianfu Zhang.
arXiv 2024. [PDF]
Visual Concept-driven Image Generation with Text-to-Image Diffusion Model.
Tanzila Rahman, Shweta Mahajan, Hsin-Ying Lee, Jian Ren, Sergey Tulyakov, Leonid Sigal.
arXiv 2024. [PDF]
Textual Localization: Decomposing Multi-concept Images for Subject-Driven Text-to-Image Generation.
Junjie Shentu, Matthew Watson, Noura Al Moubayed.
arXiv 2024. [PDF]
DreamMatcher: Appearance Matching Self-Attention for Semantically-Consistent Text-to-Image Personalization.
Jisu Nam, Heesu Kim, DongJae Lee, Siyoon Jin, Seungryong Kim, Seunggyu Chang.
CVPR 2024. [PDF]
SeFi-IDE: Semantic-Fidelity Identity Embedding for Personalized Diffusion-Based Generation.
Yang Li, Songlin Yang, Wei Wang, Jing Dong.
arXiv 2024. [PDF]
Pick-and-Draw: Training-free Semantic Guidance for Text-to-Image Personalization.
Henglei Lv, Jiayu Xiao, Liang Li, Qingming Huang.
arXiv 2024. [PDF]
Object-Driven One-Shot Fine-tuning of Text-to-Image Diffusion with Prototypical Embedding.
Jianxiang Lu, Cong Xie, Hui Guo.
arXiv 2024. [PDF]
BootPIG: Bootstrapping Zero-shot Personalized Image Generation Capabilities in Pretrained Diffusion Models.
Senthil Purushwalkam, Akash Gokul, Shafiq Joty, Nikhil Naik.
arXiv 2024. [PDF]
PALP: Prompt Aligned Personalization of Text-to-Image Models.
Moab Arar, Andrey Voynov, Amir Hertz, Omri Avrahami, Shlomi Fruchter, Yael Pritch, Daniel Cohen-Or, Ariel Shamir.
arXiv 2024. [PDF]
Cross Initialization for Personalized Text-to-Image Generation.
Lianyu Pang, Jian Yin, Haoran Xie, Qiping Wang, Qing Li, Xudong Mao.
CVPR 2024. [PDF]
DreamTuner: Single Image is Enough for Subject-Driven Generation.
Miao Hua, Jiawei Liu, Fei Ding, Wei Liu, Jie Wu, Qian He.
arXiv 2023. [PDF]
Decoupled Textual Embeddings for Customized Image Generation.
Yufei Cai, Yuxiang Wei, Zhilong Ji, Jinfeng Bai, Hu Han, Wangmeng Zuo.
arXiv 2023. [PDF]
Compositional Inversion for Stable Diffusion Models.
Xulu Zhang, Xiao-Yong Wei, Jinlin Wu, Tianyi Zhang, Zhaoxiang Zhang, Zhen Lei, Qing Li.
AAAI 2024. [PDF]
Customization Assistant for Text-to-image Generation.
Yufan Zhou, Ruiyi Zhang, Jiuxiang Gu, Tong Sun.
CVPR 2024. [PDF]
VideoBooth: Diffusion-based Video Generation with Image Prompts.
Yuming Jiang, Tianxing Wu, Shuai Yang, Chenyang Si, Dahua Lin, Yu Qiao, Chen Change Loy, Ziwei Liu.
arXiv 2023. [PDF]
HiFi Tuner: High-Fidelity Subject-Driven Fine-Tuning for Diffusion Models.
Zhonghao Wang, Wei Wei, Yang Zhao, Zhisheng Xiao, Mark Hasegawa-Johnson, Humphrey Shi, Tingbo Hou.
arXiv 2023. [PDF]
VideoAssembler: Identity-Consistent Video Generation with Reference Entities using Diffusion Model.
Haoyu Zhao, Tianyi Lu, Jiaxi Gu, Xing Zhang, Zuxuan Wu, Hang Xu, Yu-Gang Jiang.
arXiv 2023.