ECON: Explicit Clothed humans Optimized via Normal integration
Yuliang Xiu · Jinlong Yang · Xu Cao · Dimitrios Tzionas · Michael J. Black
CVPR 2023 (Highlight)
ECON is designed for human digitization from a color image. It combines the best properties of implicit and explicit representations to infer high-fidelity 3D clothed humans from in-the-wild images, even with loose clothing or in challenging poses. ECON also supports multi-person reconstruction and SMPL-X based animation.
Applications
"3D guidance" for SHHQ Dataset | multi-person reconstruction w/ occlusion |
"All-in-One" Blender add-on | SMPL-X based Animation (Instruction) |
News :triangular_flag_on_post:
- [2023/08/19] We released TeCH, which extends ECON with full texture support.
- [2023/06/01] Lee Kwan Joong updates a Blender Addon (GitHub, Tutorial).
- [2023/04/16] is ready to use!
- [2023/02/27] ECON was accepted to CVPR 2023 as a Highlight (top 10%)!
- [2023/01/12] Carlos Barreto creates a Blender Addon (Download, Tutorial).
- [2023/01/08] Teddy Huang creates install-with-docker for ECON.
- [2023/01/06] Justin John and Carlos Barreto create install-on-windows for ECON.
- [2022/12/22] is now available, created by Aron Arzoomand.
- [2022/12/15] Both demo and arXiv are available.
Key idea: d-BiNI
d-BiNI jointly optimizes front-back 2.5D surfaces such that: (1) high-frequency surface details agree with normal maps, (2) low-frequency surface variations, including discontinuities, align with SMPL-X surfaces, and (3) front-back 2.5D surface silhouettes are coherent with each other.
(Figures: d-BiNI results shown from the front, back, and side views.)
Please consider citing BiNI if it also helps your project:
@inproceedings{cao2022bilateral,
title={Bilateral normal integration},
author={Cao, Xu and Santo, Hiroaki and Shi, Boxin and Okura, Fumio and Matsushita, Yasuyuki},
booktitle={Computer Vision--ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23--27, 2022, Proceedings, Part I},
pages={552--567},
year={2022},
organization={Springer}
}
Instructions
- See the installation doc for Docker to run a Docker container with a pre-built image for the ECON demo
- See the installation doc for Windows to install all the required packages and set up the models on Windows
- See the installation doc for Ubuntu to install all the required packages and set up the models on Ubuntu
- See magic tricks for a few technical tricks that further improve and accelerate ECON
- See testing to prepare the testing data and evaluate ECON
Demos
- Quick Start
# For single-person image-based reconstruction (w/ all visualization steps, 1.8min)
python -m apps.infer -cfg ./configs/econ.yaml -in_dir ./examples -out_dir ./results
# For multi-person image-based reconstruction (see ./configs/econ.yaml)
python -m apps.infer -cfg ./configs/econ.yaml -in_dir ./examples -out_dir ./results -multi
# To generate the demo video of reconstruction results
python -m apps.multi_render -n <file_name>
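If you want to script these commands over several input folders, a small wrapper like the sketch below works; it only reuses the flags documented above, and the folder names are placeholders.

```python
import subprocess

# Run the documented single-person command once per input folder.
for in_dir in ["./examples", "./my_photos"]:  # placeholder folders
    subprocess.run(
        ["python", "-m", "apps.infer",
         "-cfg", "./configs/econ.yaml",
         "-in_dir", in_dir,
         "-out_dir", "./results"],
        check=True,
    )
```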
- Animation with SMPL-X sequences (ECON + HybrIK-X)
# 1. Use HybrIK-X to estimate SMPL-X pose sequences from input video
# 2. Rig ECON's reconstructed mesh so that it is compatible with SMPL-X's parametrization (add -dress for dresses/skirts)
# 3. Animate it with the SMPL-X pose sequences obtained from HybrIK-X, producing <file_name>_motion.npz
# 4. Render the frames with Blender (rgb: partial texture, normal: normal colors) and combine them into the final video
python -m apps.avatarizer -n <file_name>
python -m apps.animation -n <file_name> -m <motion_name>
# Note: to install missing python packages into Blender
# blender -b --python-expr "__import__('pip._internal')._internal.main(['install', 'moviepy'])"
wget https://download.is.tue.mpg.de/icon/econ_empty.blend
blender -b --python apps/blender_dance.py -- normal <file_name> 10 > /tmp/NULL
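To check what step 3 hands to the Blender rendering in step 4, you can peek inside the exported motion file. The snippet below is a generic sketch (the script name is arbitrary and no particular key names are assumed); it just lists whatever arrays the .npz contains.

```python
import sys
import numpy as np

# Usage: python inspect_motion.py <file_name>_motion.npz
motion = np.load(sys.argv[1], allow_pickle=True)
for key in motion.files:
    arr = motion[key]
    print(key, getattr(arr, "shape", type(arr)))
```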
Please consider citing HybrIK-X if it also helps your project:
@article{li2023hybrik,
title={HybrIK-X: Hybrid Analytical-Neural Inverse Kinematics for Whole-body Mesh Recovery},
author={Li, Jiefeng and Bian, Siyuan and Xu, Chao and Chen, Zhicun and Yang, Lixin and Lu, Cewu},
journal={arXiv preprint arXiv:2304.05690},
year={2023}
}
- Gradio Demo
We also provide a UI for testing our method, built with Gradio. The demo also supports pose- and prompt-guided human image generation. Run the following command in a terminal to launch it:
git checkout main
python app.py
This demo is also hosted on Hugging Face Spaces.
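For orientation, a minimal Gradio app with the same overall shape as such a demo is sketched below. This is not ECON's actual app.py: the callback is a placeholder where the real demo would run reconstruction and the pose/prompt-guided generation.

```python
import gradio as gr

def reconstruct(image):
    # Placeholder: the real demo runs ECON here and returns renders/meshes.
    return image

demo = gr.Interface(
    fn=reconstruct,
    inputs=gr.Image(type="numpy", label="Input photo"),
    outputs=gr.Image(label="Reconstruction preview"),
)

if __name__ == "__main__":
    demo.launch()
```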
- Full Texture Generation
Method 1: ECON+TEXTure
Please first follow TEXTure's installation guide to set up its environment.
# generate required UV atlas
python -m apps.avatarizer -n <file_name> -uv
# generate new texture using TEXTure
git clone https://github.com/YuliangXiu/TEXTure
cd TEXTure
ln -s ../ECON/results/econ/cache
python -m scripts.run_texture --config_path=configs/text_guided/avatar.yaml
Then check ./experiments/<file_name>/mesh for the results.
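A quick way to sanity-check the textured output is to load it with trimesh, as in the sketch below; the exact file name inside that folder is not guaranteed here, so pass whichever mesh file you find.

```python
import sys
import trimesh

# Usage: python check_mesh.py experiments/<file_name>/mesh/<mesh_file>
mesh = trimesh.load(sys.argv[1], force="mesh", process=False)
uv = getattr(mesh.visual, "uv", None)
print("vertices:", len(mesh.vertices),
      "| faces:", len(mesh.faces),
      "| UV coords:", "none" if uv is None else len(uv))
```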
Please consider citing TEXTure if it also helps your project:
@article{richardson2023texture,
title={Texture: Text-guided texturing of 3d shapes},
author={Richardson, Elad and Metzer, Gal and Alaluf, Yuval and Giryes, Raja and Cohen-Or, Daniel},
journal={ACM Transactions on Graphics (TOG)},
publisher={ACM New York, NY, USA},
year={2023}
}
Method 2: TeCH
Please check out our new paper, TeCH: Text-guided Reconstruction of Lifelike Clothed Humans (Page, Code)
Please consider citing TeCH if it also helps your project:
@inproceedings{huang2024tech,
title={{TeCH: Text-guided Reconstruction of Lifelike Clothed Humans}},
author={Huang, Yangyi and Yi, Hongwei and Xiu, Yuliang and Liao, Tingting and Tang, Jiaxiang and Cai, Deng and Thies, Justus},
booktitle={International Conference on 3D Vision (3DV)},
year={2024}
}
More Qualitative Results
(Figures: results on challenging poses and loose clothes.)
Citation
@inproceedings{xiu2023econ,
title = {{ECON: Explicit Clothed humans Optimized via Normal integration}},
author = {Xiu, Yuliang and Yang, Jinlong and Cao, Xu and