Survey: Tool Learning with Large Language Models
Recently, tool learning with large language models (LLMs) has emerged as a promising paradigm for augmenting the capabilities of LLMs to tackle highly complex problems.
This repository collects papers on tool learning with LLMs, organized according to our survey paper "Tool Learning with Large Language Models: A Survey".
Chinese: PaperAgent and 旺知识 have provided a brief and a comprehensive introduction in Chinese, respectively. We greatly appreciate their assistance.
Please feel free to contact us if you have any questions or suggestions!
Contribution
:tada::+1: Please feel free to open an issue or make a pull request! :tada::+1:
Citation
If you find our work helpful to your research, please kindly cite our paper:
@article{qu2024toolsurvey,
    author={Qu, Changle and Dai, Sunhao and Wei, Xiaochi and Cai, Hengyi and Wang, Shuaiqiang and Yin, Dawei and Xu, Jun and Wen, Ji-Rong},
    title={Tool Learning with Large Language Models: A Survey},
    journal={arXiv preprint arXiv:2405.17935},
    year={2024}
}
📋 Contents
- Survey: Tool Learning with Large Language Models
🌟 Introduction
Recently, tool learning with large language models (LLMs) has emerged as a promising paradigm for augmenting the capabilities of LLMs to tackle highly complex problems. Despite growing attention and rapid advancements in this field, the existing literature remains fragmented and lacks systematic organization, posing barriers to entry for newcomers. This gap motivates us to conduct a comprehensive survey of existing works on tool learning with LLMs. In this survey, we focus on reviewing the existing literature from two primary aspects: (1) why tool learning is beneficial and (2) how tool learning is implemented, enabling a comprehensive understanding of tool learning with LLMs. We first explore the “why” by reviewing both the benefits of tool integration and the inherent benefits of the tool learning paradigm from six specific aspects. In terms of “how”, we systematically review the literature according to a taxonomy of four key stages in the tool learning workflow: task planning, tool selection, tool calling, and response generation. Additionally, we provide a detailed summary of existing benchmarks and evaluation methods, categorizing them according to their relevance to different stages. Finally, we discuss current challenges and outline potential future directions, aiming to inspire both researchers and industrial developers to further explore this emerging and promising area.
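To make the four-stage workflow above concrete, here is a minimal, self-contained Python sketch of how such a pipeline can be wired together. It is an illustrative assumption only: the toy tool registry, the keyword-based selector, and the stubbed planner and response generator stand in for the LLM-driven components described in the survey and do not correspond to any specific system reviewed here.

```python
"""Minimal sketch of the four-stage tool learning workflow:
task planning -> tool selection -> tool calling -> response generation.
All tools, names, and stubs below are illustrative assumptions only."""

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]  # takes a textual argument, returns a textual result


# A toy tool registry; real systems expose many APIs with parameter schemas.
TOOLS: Dict[str, Tool] = {
    "calculator": Tool("calculator", "evaluate arithmetic expressions",
                       lambda expr: str(eval(expr, {"__builtins__": {}}, {}))),
    "search": Tool("search", "look up a fact on the web",
                   lambda q: f"[stub search result for '{q}']"),
}


def plan_tasks(query: str) -> List[str]:
    """Task planning: decompose the user query into sub-tasks.
    A real system would prompt an LLM; here the query is returned as one task."""
    return [query]


def select_tool(task: str) -> Tool:
    """Tool selection: pick the most relevant tool for a sub-task.
    A naive keyword check stands in for retriever- or LLM-based selection."""
    if any(ch.isdigit() for ch in task):
        return TOOLS["calculator"]
    return TOOLS["search"]


def call_tool(tool: Tool, task: str) -> str:
    """Tool calling: extract parameters and invoke the selected tool."""
    argument = task.split(":", 1)[-1].strip()  # trivial parameter extraction
    return tool.run(argument)


def generate_response(query: str, observations: List[str]) -> str:
    """Response generation: fold tool results back into the final answer.
    A real system would prompt an LLM with the accumulated observations."""
    return f"Answer to '{query}' based on tool results: {'; '.join(observations)}"


if __name__ == "__main__":
    query = "compute: 23 * 19"
    observations = []
    for task in plan_tasks(query):                   # 1. task planning
        tool = select_tool(task)                     # 2. tool selection
        observations.append(call_tool(tool, task))   # 3. tool calling
    print(generate_response(query, observations))    # 4. response generation
```

In practice, each of the four functions would be backed by an LLM prompt or a fine-tuned model, and the registry would describe real APIs rather than local lambdas.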
The overall workflow for tool learning with large language models.
<div align="center"> <img src="assets/Framework.png" height="500"/> </div>

📄 Paper List
Why Tool Learning?
Benefit of Tools.
- Knowledge Acquisition.
  - Search Engine
Internet-Augmented Dialogue Generation, ACL 2022. [Paper]
WebGPT: Browser-assisted question-answering with human feedback, Preprint 2021. [Paper]
Internet-augmented language models through few-shot prompting for open-domain question answering, Preprint 2022. [Paper]
REPLUG: Retrieval-Augmented Black-Box Language Models, Preprint 2023. [Paper]
Toolformer: Language Models Can Teach Themselves to Use Tools, NeurIPS 2023. [Paper]
ART: Automatic multi-step reasoning and tool-use for large language models, Preprint 2023. [Paper]
ToolCoder: Teach Code Generation Models to use API search tools, Preprint 2023. [Paper]
CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing, ICLR 2024. [Paper]
  - Database & Knowledge Graph
LaMDA: Language Models for Dialog Applications, Preprint 2022. [Paper]
Gorilla: Large Language Model Connected with Massive APIs, Preprint 2023. [Paper]
ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings, NeurIPS 2023. [Paper]
ToolQA: A Dataset for LLM Question Answering with External Tools, NeurIPS 2023. [Paper]
Syntax Error-Free and Generalizable Tool Use for LLMs via Finite-State Decoding, NeurIPS 2023. [Paper]
Middleware for LLMs: Tools are Instrumental for Language Agents in Complex Environments, Preprint 2024. [Paper]
  - Weather or Map
On the Tool Manipulation Capability of Open-source Large Language Models, NeurIPS 2023. [Paper]
ToolAlpaca: Generalized Tool Learning for Language Models with 3000 Simulated Cases, Preprint 2023. [Paper]
Tool Learning with Foundation Models, Preprint 2023. [Paper]
- Expertise Enhancement.
  - Mathematical Tools
Training verifiers to solve math word problems, Preprint 2021. [Paper]
MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning, Preprint 2022. [Paper]
Chaining Simultaneous Thoughts for Numerical Reasoning, EMNLP 2022. [Paper]
Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems, EMNLP 2023. [Paper]
Solving math word problems by combining language models with symbolic solvers, NeurIPS 2023. [Paper]
Evaluating and improving tool-augmented computation-intensive math reasoning, NeurIPS 2023. [Paper]
ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving, ICLR 2024. [Paper]
MATHSENSEI: A Tool-Augmented Large Language Model for Mathematical Reasoning, Preprint 2024. [Paper]
Calc-CMU at SemEval-2024 Task 7: Pre-Calc -- Learning to Use the Calculator Improves Numeracy in Language Models, NAACL 2024. [Paper]
MathViz-E: A Case-study in Domain-Specialized Tool-Using Agents, Preprint 2024. [Paper]
  - Python Interpreter
PAL: Program-aided Language Models, ICML 2023. [Paper]
Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks, TMLR 2023. [Paper]
Fact-Checking Complex Claims with Program-Guided Reasoning, ACL 2023. [Paper]
Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models, NeurIPS 2023. [Paper]
LeTI: Learning to Generate from Textual Interactions, NAACL 2024. [Paper]
MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback, ICLR 2024. [Paper]
Executable Code Actions Elicit Better LLM Agents, ICML 2024. [Paper]
CodeNav: Beyond tool-use to using real-world codebases with LLM agents, Preprint 2024. [Paper]
APPL: A Prompt Programming Language for Harmonious Integration of Programs and Large Language Model Prompts, Preprint 2024. [Paper]
BigCodeBench: Benchmarking Code Generation with Diverse Function Calls and Complex Instructions, Preprint 2024. [Paper]
CodeAgent: Enhancing Code Generation with Tool-Integrated Agent Systems for Real-World Repo-level Coding Challenges, ACL 2024. [Paper]
  - Others
Chemical: MultiTool-CoT: GPT-3 Can Use Multiple External Tools with Chain of Thought Prompting, ACL 2023. [Paper]
ChemCrow: Augmenting large-language models with chemistry tools, Nature Machine Intelligence 2024. [Paper]
A Review of Large Language Models and Autonomous Agents in Chemistry, Preprint 2024. [Paper]
Biomedical: GeneGPT: Augmenting Large Language Models with Domain Tools for Improved Access to Biomedical Information, ISMB 2024. [Paper]
Financial: Equipping Language Models with Tool Use Capability for Tabular Data Analysis in Finance, EACL 2024. [Paper]
Financial: Simulating Financial Market via Large Language Model based Agents, Preprint 2024. [Paper]
Medical: AgentMD: Empowering Language Agents for Risk Prediction with Large-Scale Clinical Tool Learning, Preprint 2024. [Paper]
MMedAgent: Learning to Use Medical Tools with Multi-modal Agent, Preprint 2024. [Paper]
Recommendation: Let Me Do It For You: Towards LLM Empowered Recommendation via Tool Learning, SIGIR 2024. [Paper]
Gas Turbines: Domain-Specific ReAct for Physics-Integrated Iterative Modeling: A Case Study of LLM Agents for Gas Path Analysis of Gas Turbines, Preprint 2024. [Paper]
WorldAPIs: The World Is Worth How Many APIs? A Thought Experiment, ACL 2024 Workshop. [Paper]
- Automation and Efficiency.
  - Schedule Tools
ToolQA: A Dataset for LLM Question Answering with External Tools, NeurIPS 2023. [Paper]
  - Set Reminders
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs, ICLR 2024. [Paper]
  - Filter Emails
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs, ICLR 2024. [Paper]
  - Project Management
ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs, ICLR 2024. [Paper]
  - Online Shopping Assistants
WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents, NeurIPS 2022. [Paper]
- Interaction Enhancement.
  - Multi-modal Tools
ViperGPT: Visual Inference via Python Execution for Reasoning, ICCV 2023. [Paper]
MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action, Preprint 2023. [Paper]
InternGPT: Solving Vision-Centric Tasks by Interacting with ChatGPT Beyond Language, Preprint 2023. [Paper]
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn, Preprint 2023. [Paper]
CLOVA: A closed-loop visual assistant with tool usage and update, CVPR 2024. [Paper]
DiffAgent: Fast and Accurate Text-to-Image API Selection with Large Language Model, CVPR 2024. [Paper]
MLLM-Tool: A Multimodal Large Language Model For Tool Agent Learning, Preprint 2024. [Paper]
m&m's: A Benchmark to Evaluate Tool-Use for Multi-step Multi-modal Tasks, Preprint 2024. [Paper]
From the Least to the Most: Building a Plug-and-Play Visual Reasoner via Data Synthesis, Preprint 2024. [Paper]
  - Machine Translator
Toolformer: Language Models Can Teach Themselves to Use Tools, NeurIPS 2023. [Paper]
Tool Learning with Foundation Models, Preprint 2023. [Paper]
  - Natural Language Processing Tools
HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face, NeurIPS 2023. [Paper]
GitAgent: Facilitating Autonomous Agent with GitHub by Tool Extension, Preprint 2023. [Paper]
Benefit of Tool Learning.
- Enhanced Interpretability and User Trust.
- Improved Robustness and Adaptability.
How Tool Learning?