Awesome-Multimodal-LLM
✨✨✨ A curated collection of Multimodal Large Language Model (MLLM) resources! 📚🔍 It covers datasets, multimodal instruction tuning, multimodal in-context learning, multimodal chain-of-thought, visual reasoning aided by large language models, foundation models, and more. 🌟🔥
✨✨✨ This list is continuously updated to track the latest advances in MLLM research 🔄, so you never miss a new development. 🚀💡
✨✨✨ We are also preparing a survey paper on recent LLMs and MLLMs. Stay tuned for its release! 🎉📑
LLM Learning MindMap
Trending LLM Projects
- llm-course - Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks.
- Mixtral 8x7B - a high-quality sparse mixture-of-experts (SMoE) model with open weights.
- promptbase - All things prompt engineering.
- ollama - Get up and running with Llama 2 and other large language models locally.
- Devika - an open-source agentic AI software engineer, positioned as an alternative to Devin.
- anything-llm - A private ChatGPT to chat with anything!
- phi-2 - a 2.7 billion-parameter language model that demonstrates outstanding reasoning and language understanding capabilities, showcasing state-of-the-art performance among base language models with less than 13 billion parameters.
Practical Guides for Prompting (Helpful)
High-quality generation
- [2023/10] Towards End-to-End Embodied Decision Making via Multi-modal Large Language Model: Explorations with GPT4-Vision and Beyond. Liang Chen et al. arXiv. [paper] [code]
- This work proposes PCA-EVAL, a benchmark for embodied decision making that evaluates MLLM-based end-to-end methods and LLM-based tool-using methods at the perception, cognition, and action levels.
- [2023/08] A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity. Yejin Bang et al. arXiv. [paper]
- This work evaluates the multitask, multilingual, and multimodal aspects of ChatGPT using 21 datasets covering 8 common NLP application tasks.
- [2023/06] LLM-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models. Yen-Ting Lin et al. arXiv. [paper]
- LLM-EVAL scores open-domain conversations along multiple dimensions, such as content, grammar, relevance, and appropriateness, in a single model call (a minimal sketch of this idea appears after this list).
- [2023/04] Is ChatGPT a Highly Fluent Grammatical Error Correction System? A Comprehensive Evaluation. Tao Fang et al. arXiv. [paper]
- The evaluation results show that ChatGPT has excellent error detection capabilities and freely corrects errors to produce very fluent sentences. Its performance in non-English and low-resource settings also highlights its potential for multilingual GEC tasks.
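The single-prompt, multi-dimensional scoring idea behind LLM-EVAL can be sketched as follows. This is a minimal illustration, not the paper's implementation: the prompt wording, the dimension names, and the `call_llm` placeholder are assumptions.

```python
import json

# Hypothetical prompt; LLM-EVAL's actual schema and wording differ.
EVAL_PROMPT = """Evaluate the response to the dialogue below.
Score each dimension from 0 to 5 and answer with JSON only, e.g.
{{"content": 4, "grammar": 5, "relevance": 3, "appropriateness": 4}}.

Dialogue context:
{context}

Response:
{response}
"""

def call_llm(prompt: str) -> str:
    """Placeholder: plug in whatever LLM backend you use."""
    raise NotImplementedError

def llm_eval(context: str, response: str) -> dict:
    """One LLM call returns scores for all evaluation dimensions at once."""
    raw = call_llm(EVAL_PROMPT.format(context=context, response=response))
    return json.loads(raw)

# Usage (requires a real call_llm implementation):
# scores = llm_eval("User: How do I reset my password?",
#                   "Click 'Forgot password' on the login page.")
# print(scores["relevance"])
```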
Deep understanding
- [2023/06] Clever Hans or Neural Theory of Mind? Stress Testing Social Reasoning in Large Language Models. Natalie Shapira et al. arXiv. [paper]
- LLMs exhibit certain theory of mind abilities, but this behavior is far from being robust.
- [2022/08] Inferring Rewards from Language in Context. Jessy Lin et al. ACL. [paper]
- This work presents a model that infers rewards from language and predicts optimal actions in unseen environments.
- [2021/10] Theory of Mind Based Assistive Communication in Complex Human Robot Cooperation. Moritz C. Buehler et al. arXiv. [paper]
- This work designs an agent, Sushi, that maintains an understanding of the human partner during interaction.
Memory capability
Raising the length limit of Transformers
- [2023/10] MemGPT: Towards LLMs as Operating Systems. Charles Packer (UC Berkeley) et al. arXiv. [paper] [project page] [code] [dataset]
- [2023/05] Randomized Positional Encodings Boost Length Generalization of Transformers. Anian Ruoss (DeepMind) et al. arXiv. [paper] [code] (a sketch of the core idea appears after this list)
- [2023/03] CoLT5: Faster Long-Range Transformers with Conditional Computation. Joshua Ainslie (Google Research) et al. arXiv. [paper]
- [2022/03] Efficient Classification of Long Documents Using Transformers. Hyunji Hayley Park (University of Illinois) et al. arXiv. [paper] [code]
- [2021/12] LongT5: Efficient Text-To-Text Transformer for Long Sequences. Mandy Guo (Google Research) et al. arXiv. [paper] [code]
- [2019/10] BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. Mike Lewis (Facebook AI) et al. arXiv. [paper] [code]
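As a rough illustration of the randomized positional encodings idea referenced above (Ruoss et al.): each training sequence gets its position indices sampled as a sorted random subset of a much larger range, so the model sees large position values even on short inputs. This is a minimal sketch under that reading of the paper; `max_len` and the sampling details are illustrative, not the official implementation.

```python
import numpy as np

def randomized_positions(seq_len: int, max_len: int = 2048,
                         rng: np.random.Generator | None = None) -> np.ndarray:
    """Sample `seq_len` distinct positions from [0, max_len) and sort them.

    Using these indices instead of 0..seq_len-1 during training exposes the
    model to the full positional range, which is the mechanism credited for
    better generalization to sequences longer than those seen in training.
    """
    rng = rng or np.random.default_rng()
    positions = rng.choice(max_len, size=seq_len, replace=False)
    return np.sort(positions)

# A length-8 training example might receive positions such as [37, 120, 411, ...]
print(randomized_positions(8, max_len=512, rng=np.random.default_rng(0)))
```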
Summarizing memory
- [2023/10] Walking Down the Memory Maze: Beyond Context Limit through Interactive Reading. Howard Chen (Princeton University) et al. arXiv. [paper]
- [2023/09] Empowering Private Tutoring by Chaining Large Language Models. Yulin Chen (Tsinghua University) et al. arXiv. [paper]
- [2023/08] ExpeL: LLM Agents Are Experiential Learners. Andrew Zhao (Tsinghua University) et al. arXiv. [paper] [code]
- [2023/08] ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate. Chi-Min Chan (Tsinghua University) et al. arXiv. [paper] [code]
- [2023/05] MemoryBank: Enhancing Large Language Models with Long-Term Memory. Wanjun Zhong (Harbin Institute of Technology) et al. arXiv. [paper] [code]
- [2023/05] RecurrentGPT: Interactive Generation of (Arbitrarily) Long Text. Wangchunshu Zhou (AIWaves) et al. arXiv. [paper] [code]
- [2023/04] Generative Agents: Interactive Simulacra of Human Behavior. Joon Sung Park (Stanford University) et al. arXiv. [paper] [code]
- [2023/04] Unleashing Infinite-Length Input Capacity for Large-scale Language Models with Self-Controlled Memory System. Xinnian Liang (Beihang University) et al. arXiv. [paper] [code]
- [2023/03] Reflexion: Language Agents with Verbal Reinforcement Learning. Noah Shinn (Northeastern University) et al. arXiv. [paper] [code]
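The papers above share a common mechanism: when the dialogue or task history outgrows the context window, older turns are folded into a running summary produced by the LLM itself. Below is a minimal sketch of that loop; the `call_llm` placeholder, the turn budget, and the prompt wording are assumptions, not code from any specific paper.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for an LLM completion call."""
    raise NotImplementedError

class SummarizingMemory:
    """Keep recent turns verbatim; compress older turns into a rolling summary."""

    def __init__(self, max_recent_turns: int = 6):
        self.summary = ""
        self.recent: list[str] = []
        self.max_recent_turns = max_recent_turns

    def add(self, turn: str) -> None:
        self.recent.append(turn)
        while len(self.recent) > self.max_recent_turns:
            oldest = self.recent.pop(0)
            # Ask the LLM to fold the evicted turn into the running summary.
            self.summary = call_llm(
                "Update the summary with the new dialogue turn.\n"
                f"Summary so far: {self.summary}\n"
                f"New turn: {oldest}\n"
                "Updated summary:"
            )

    def context(self) -> str:
        """Text to prepend to the next prompt: summary plus verbatim recent turns."""
        return f"Summary of earlier conversation: {self.summary}\n" + "\n".join(self.recent)
```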
Compressing memories with vectors or data structures
- [2023/07] Communicative Agents for Software Development. Chen Qian (Tsinghua University) et al. arXiv. [paper] [code]
- [2023/06] ChatDB: Augmenting LLMs with Databases as Their Symbolic Memory. Chenxu Hu (Tsinghua University) et al. arXiv. [paper] [code]
- [2023/05] Ghost in the Minecraft: Generally Capable Agents for Open-World Environments via Large Language Models with Text-based Knowledge and Memory. Xizhou Zhu (Tsinghua University) et al. arXiv. [paper] [code]
- [2023/05] RET-LLM: Towards a General Read-Write Memory for Large Language Models. Ali Modarressi (LMU Munich) et al. arXiv. [paper] [code]
- [2023/05] RecurrentGPT: Interactive Generation of (Arbitrarily) Long Text. Wangchunshu Zhou (AIWaves) et al. arXiv. [paper] [code]
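Several of the entries above write memories as embedding vectors and read them back by similarity search (ChatDB instead uses a database as symbolic memory). The sketch below shows the vector variant; the `embed` stand-in is a placeholder for a real sentence-embedding model and is an assumption, not code from any of these papers.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding: deterministic random unit vector per text.

    Replace with a real sentence-embedding model; retrieval quality
    depends entirely on that choice.
    """
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

class VectorMemory:
    """Store (text, vector) pairs; retrieve the k most similar memories."""

    def __init__(self):
        self.texts: list[str] = []
        self.vectors: list[np.ndarray] = []

    def write(self, text: str) -> None:
        self.texts.append(text)
        self.vectors.append(embed(text))

    def read(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        sims = np.array([float(q @ v) for v in self.vectors])  # cosine similarity (unit vectors)
        top = np.argsort(-sims)[:k]
        return [self.texts[i] for i in top]

memory = VectorMemory()
memory.write("The user prefers concise answers.")
memory.write("The user's project is written in Rust.")
print(memory.read("Which language does the user code in?", k=1))
```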
Memory retrieval
- [2023/08] Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents. Ziheng Huang (University of California, San Diego) et al. arXiv. [paper]
- [2023/08] AgentSims: An Open-Source Sandbox for Large Language Model Evaluation. Jiaju Lin (PTA Studio) et al. arXiv. [paper] [project page] [code]
- [2023/06] ChatDB: Augmenting LLMs with Databases as Their Symbolic Memory. Chenxu Hu (Tsinghua University) et al. arXiv. [paper] [code]
- [2023/05] MemoryBank: Enhancing Large Language Models with Long-Term Memory. Wanjun Zhong (Harbin Institute of Technology) et al. arXiv. [paper] [code]
- [2023/05] RecurrentGPT: Interactive Generation of (Arbitrarily) Long Text. Wangchunshu Zhou (AIWaves) et al. arXiv. [paper] [code]
- [2023/04] Generative Agents: Interactive Simulacra of Human Behavior. Joon Sung Park (Stanford University) et al. arXiv. [paper] [code]
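Generative Agents, listed above, gives a concrete recipe for memory retrieval: each stored memory is scored by a weighted combination of recency, importance, and relevance to the current query. The sketch below follows that description; the weights, the hourly decay factor, and the flat dictionary layout are illustrative choices, not the paper's exact constants.

```python
import time

def retrieval_score(memory: dict, query_relevance: float, now: float,
                    w_recency: float = 1.0, w_importance: float = 1.0,
                    w_relevance: float = 1.0, decay: float = 0.995) -> float:
    """Score one memory for retrieval.

    memory = {"text": ..., "created_at": unix_time, "importance": 0..1}
    query_relevance should come from embedding similarity with the query.
    """
    hours_since = (now - memory["created_at"]) / 3600.0
    recency = decay ** hours_since  # decays exponentially as the memory ages
    return (w_recency * recency
            + w_importance * memory["importance"]
            + w_relevance * query_relevance)

m = {"text": "Met Alice at the cafe", "created_at": time.time() - 7200, "importance": 0.6}
print(retrieval_score(m, query_relevance=0.8, now=time.time()))
```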
Awesome Papers
Multimodal Instruction Tuning
| Title | Venue | Date | Code | Demo |
| --- | --- | --- | --- | --- |
| Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models | arXiv | 2023-06-08 | Github | Demo |
| MIMIC-IT: Multi-Modal In-Context Instruction Tuning | arXiv | 2023-06-08 | Github | Demo |
| M3IT: A Large-Scale Dataset towards Multi-Modal Multilingual Instruction Tuning | arXiv | 2023-06-07 | - | - |
| Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding | arXiv | 2023-06-05 | | |