# Awesome AGI Survey
Must-read papers on Artificial General Intelligence.
## 🔔 News
- [2024-05] 🎉 We released our paper on arXiv: "How Far Are We From AGI?".
- [2024-05] 🥳 We organized the ICLR 2024 Workshop "How Far Are We From AGI?". Learn more about the workshop.
🔥 Our project is an ongoing, open initiative that will evolve alongside advancements in AGI. We plan to add more work soon, and we warmly welcome pull requests!
If you find our work/resources useful, please cite:
```bibtex
@article{feng2024far,
  title={How Far Are We From AGI},
  author={Feng, Tao and Jin, Chuanyang and Liu, Jingyu and Zhu, Kunlun and Tu, Haoqin and Cheng, Zirui and Lin, Guanyu and You, Jiaxuan},
  journal={arXiv preprint arXiv:2405.10313},
  year={2024}
}
```
## 📜 Content
*(Figure: the framework design of our paper.)*
## 1. Introduction
*(Figure: proportion of human activities surpassed by AI.)*
## 2. AGI Internal: Unveiling the Mind of AGI
### 2.1 AI Perception
- Flamingo: a Visual Language Model for Few-Shot Learning. Jean-Baptiste Alayrac et al. NeurIPS 2022. [paper]
- BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models. Junnan Li et al. ICML 2023. [paper]
- SPHINX: The Joint Mixing of Weights, Tasks, and Visual Embeddings for Multi-modal Large Language Models. Ziyi Lin et al. EMNLP 2023. [paper]
- Visual Instruction Tuning. Haotian Liu et al. NeurIPS 2023. [paper]
- GPT4Tools: Teaching Large Language Model to Use Tools via Self-instruction. Rui Yang et al. NeurIPS 2023. [paper]
- Otter: A Multi-Modal Model with In-Context Instruction Tuning. Bo Li et al. arXiv 2023. [paper]
- VideoChat: Chat-Centric Video Understanding. KunChang Li et al. arXiv 2023. [paper]
- mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality. Qinghao Ye et al. arXiv 2023. [paper]
- A Survey on Multimodal Large Language Models. Shukang Yin et al. arXiv 2023. [paper]
- PandaGPT: One Model To Instruction-Follow Them All. Yixuan Su et al. arXiv 2023. [paper]
- LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention. Renrui Zhang et al. arXiv 2023. [paper]
- Gemini: A Family of Highly Capable Multimodal Models. Rohan Anil et al. arXiv 2023. [paper]
- Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic. Keqin Chen et al. arXiv 2023. [paper]
- ImageBind: One Embedding Space To Bind Them All. Rohit Girdhar et al. CVPR 2023. [paper]
- MobileVLM: A Fast, Strong and Open Vision Language Assistant for Mobile Devices. Xiangxiang Chu et al. arXiv 2023. [paper]
- What Makes for Good Visual Tokenizers for Large Language Models? Guangzhi Wang et al. arXiv 2023. [paper]
- MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models. Deyao Zhu et al. ICLR 2024. [paper]
- LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment. Bin Zhu et al. ICLR 2024. [paper]
### 2.2 AI Reasoning
- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. Jason Wei et al. NeurIPS 2022. [paper]
- Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs. Maarten Sap et al. EMNLP 2022. [paper]
- Inner Monologue: Embodied Reasoning through Planning with Language Models. Wenlong Huang et al. CoRL 2022. [paper]
- Survey of Hallucination in Natural Language Generation. Ziwei Ji et al. ACM Computing Surveys 2022. [paper]
- ReAct: Synergizing Reasoning and Acting in Language Models. Shunyu Yao et al. ICLR 2023. [paper]
- Decomposed Prompting: A Modular Approach for Solving Complex Tasks. Tushar Khot et al. ICLR 2023. [paper]
- Complexity-Based Prompting for Multi-Step Reasoning. Yao Fu et al. ICLR 2023. [paper]
- Least-to-Most Prompting Enables Complex Reasoning in Large Language Models. Denny Zhou et al. ICLR 2023. [paper]
- Towards Reasoning in Large Language Models: A Survey. Jie Huang et al. ACL Findings 2023. [paper]
- ProgPrompt: Generating Situated Robot Task Plans using Large Language Models. Ishika Singh et al. ICRA 2023. [paper]
- Reasoning with Language Model is Planning with World Model. Shibo Hao et al. EMNLP 2023. [paper]
- Evaluating Object Hallucination in Large Vision-Language Models. Yifan Li et al. EMNLP 2023. [paper]
- Tree of Thoughts: Deliberate Problem Solving with Large Language Models. Shunyu Yao et al. NeurIPS 2023. [paper]
- Self-Refine: Iterative Refinement with Self-Feedback. Aman Madaan et al. NeurIPS 2023. [paper]
- Reflexion: Language Agents with Verbal Reinforcement Learning. Noah Shinn et al. NeurIPS 2023. [paper]
- Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents. Zihao Wang et al. NeurIPS 2023. [paper]
- LLM+P: Empowering Large Language Models with Optimal Planning Proficiency. Bo Liu et al. arXiv 2023. [paper]
- Language Models, Agent Models, and World Models: The LAW for Machine Reasoning and Planning. Zhiting Hu et al. arXiv 2023. [paper]
- MMToM-QA: Multimodal Theory of Mind Question Answering. Chuanyang Jin et al. arXiv 2024. [paper]
- Graph of Thoughts: Solving Elaborate Problems with Large Language Models. Maciej Besta et al. AAAI 2024. [paper]
- Achieving >97% on GSM8K: Deeply Understanding the Problems Makes LLMs Perfect Reasoners. Qihuang Zhong et al. arXiv 2024. [paper]
### 2.3 AI Memory
- Dense Passage Retrieval for Open-Domain Question Answering. Vladimir Karpukhin et al. EMNLP 2020. [paper]
- Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. Patrick Lewis et al. NeurIPS 2020. [paper]
- REALM: Retrieval-Augmented Language Model Pre-Training. Kelvin Guu et al. ICML 2020. [paper]
- Retrieval Augmentation Reduces Hallucination in Conversation. Kurt Shuster et al. EMNLP Findings 2021. [paper]
- Improving Language Models by Retrieving from Trillions of Tokens. Sebastian Borgeaud et al. ICML 2022. [paper]
- Generative Agents: Interactive Simulacra of Human Behavior. Joon Sung Park et al. UIST 2023. [paper]
- Cognitive Architectures for Language Agents. Theodore R. Sumers et al. TMLR 2024. [paper]
- Voyager: An Open-Ended Embodied Agent with Large Language Models. Guanzhi Wang et al. arXiv 2023. [paper]
- **A Survey on the Memory Mechanism of Large Language Model based