
The Prompt Report Code Repository

Generative Artificial Intelligence (GenAI) systems are being increasingly deployed across all parts of industry and research settings. Developers and end users interact with these systems through the use of prompting or prompt engineering. While prompting is a widespread and highly researched concept, there exists conflicting terminology and a poor ontological understanding of what constitutes a prompt due to the area’s nascency. This repository is the code for The Prompt Report, our research that establishes a structured understanding of prompts, by assembling a taxonomy of prompting techniques and analyzing their use. This code allows for the automated review of papers, the collection of data, and the running of experiments. Our dataset is available on Hugging Face and our paper is available on ArXiv.org. Information is also available on our website.

Table of Contents

Install requirements

After cloning, run pip install -r requirements.txt from the repository root.

Setting up API keys

Make a file at root called .env.

For OpenAI: https://platform.openai.com/docs/quickstart
For Hugging Face: https://huggingface.co/docs/hub/security-tokens, also run huggingface-cli login
For Semantic Scholar: https://www.semanticscholar.org/product/api#api-key

Use the reference example.env file to fill in your API keys/tokens.

OPENAI_API_KEY=sk-...
SEMANTIC_SCHOLAR_API_KEY=...
HF_TOKEN=...
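To check that the keys in .env are actually being picked up, the usual tool is python-dotenv; the snippet below is a minimal stdlib-only sketch of what such a loader does (the helper name load_env is ours, not part of the repository):

```python
import os

def load_env(path=".env"):
    """Minimal .env parser: reads KEY=VALUE lines into os.environ.

    Blank lines, '#' comments, and lines without '=' are ignored;
    a missing file is treated as empty.
    """
    values = {}
    try:
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                values[key.strip()] = value.strip()
    except FileNotFoundError:
        pass
    os.environ.update(values)
    return values
```

In practice you would call load_env() once at startup and then read os.environ.get("OPENAI_API_KEY") and friends as usual.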

Setting up keys for running tests

To have pytest load the .env file automatically, install pytest-dotenv:
pip install pytest-dotenv

You can also point pytest at a specific env file:
py.test --envfile path/to/.env

If you have several .env files, add an env_files option to your pytest configuration file and list them:

env_files =
.env
.test.env
.deploy.env
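Concretely, in a pytest.ini at the repository root (the file name is illustrative; setup.cfg or tox.ini work the same way with pytest-dotenv), the option sits under the [pytest] section:

```ini
[pytest]
env_files =
    .env
    .test.env
    .deploy.env
```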

Structure of the Repository

The script main.py calls the necessary functions to download all the papers, deduplicate and filter them, and then run all the experiments.

The core of the repository is in src/prompt_systematic_review. The config_data.py script contains configurations that are important for running experiments and saving time. You can see in main.py how some of these options are used.

The source folder is divided into four main sections: three scripts (automated_review.py, collect_papers.py, config_data.py) that collect the data and run the automated review; the utils folder, which contains utility functions used throughout the repository; the get_papers folder, which contains the scripts to download the papers; and the experiments folder, which contains the scripts to run the experiments.

At the root, there is a data folder. It comes pre-loaded with some data used by the experiments; however, the bulk of the dataset must either be generated by running main.py or downloaded from Hugging Face. The results of the experiments are saved in data/experiments_output.

Notably, the keywords used in the automated review/scraping process are in src/prompt_systematic_review/utils/keywords.py. Anyone who wishes to run the automated review can adjust these keywords to their liking in that file.
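As a rough illustration of how such a keyword list drives the scraping filter (the list and function below are hypothetical; the real keywords live in src/prompt_systematic_review/utils/keywords.py):

```python
# Hypothetical keyword list; the repository's actual list is in
# src/prompt_systematic_review/utils/keywords.py
KEYWORDS = ["prompt engineering", "in-context learning", "chain-of-thought"]

def matches_keywords(title: str, keywords=KEYWORDS) -> bool:
    """Case-insensitive check: does the title mention any search keyword?"""
    lowered = title.lower()
    return any(kw.lower() in lowered for kw in keywords)

print(matches_keywords("A Survey of Prompt Engineering Methods"))  # True
```

Editing the list in keywords.py is therefore enough to retarget the automated review at a different topic.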

Running the code

TLDR;

git clone https://github.com/trigaten/Prompt_Systematic_Review.git && cd Prompt_Systematic_Review
pip install -r requirements.txt
# create a .env file with your API keys
nano .env
git lfs install
git clone https://huggingface.co/datasets/PromptSystematicReview/ThePromptReport
mv ThePromptReport/* data/
python main.py

Running main.py will download the papers, run the automated review, and run the experiments. However, if you wish to save time and only run the experiments, you can download the data from Hugging Face and move the papers folder and all the CSV files in the dataset into the data folder (it should look like data/papers/*.pdf, data/master_papers.csv, etc.). Adjust main.py accordingly.

Every experiment script has a run_experiment function that is called in main.py. The run_experiment function is responsible for running the experiment and saving the results. However, each script can also be run individually with python src/prompt_systematic_review/experiments/<experiment_name>.py from the root.
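A new experiment script following this convention might be sketched as below; the module name, output file, and placeholder result are illustrative, not taken from the repository:

```python
# Hypothetical experiment module, e.g.
# src/prompt_systematic_review/experiments/count_papers.py
import csv
import os

OUTPUT_DIR = os.path.join("data", "experiments_output")

def run_experiment(output_dir: str = OUTPUT_DIR) -> str:
    """Run the experiment and save its results, as main.py expects."""
    os.makedirs(output_dir, exist_ok=True)
    out_path = os.path.join(output_dir, "count_papers.csv")
    with open(out_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["metric", "value"])
        writer.writerow(["paper_count", 0])  # placeholder result
    return out_path

if __name__ == "__main__":
    # Allows running the script individually from the repo root
    print(run_experiment())
```

Because main.py only needs the run_experiment entry point, a script like this can be dropped into the experiments folder and registered in experiments/__init__.py.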

One experiment, graph_internal_references, runs into parallelism issues when invoked alongside the others, so it is better run from the root as an individual script. To keep it from interfering with the other experiments, it is ordered last in experiments/__init__.py.

Notes

  • Sometimes a paper title may appear differently on the arXiv API. For example, "Visual Attention-Prompted Prediction and Learning" (arXiv:2310.08420) is titled "A visual encoding model based on deep neural networks and transfer learning" according to the arXiv API.