
🛩️ Fleet Context


A CLI tool & API over the top 1221 Python libraries.
Used for library q/a & code generation with all available OpenAI models

Website      |      Data Visualizer      |      PyPI      |      @fleet_ai


https://github.com/fleet-ai/context/assets/44193474/80381b25-551e-4602-8987-071e92354f6f




Quick Start

Install the package and run context to ask questions about the most up-to-date Python libraries. You will have to provide your OpenAI key to start a session.

pip install fleet-context
context

If you'd like to run the CLI tool locally, you can clone this repository, cd into it, then run:

pip install -e .
context

If you have an existing package that already uses the keyword context, you can also activate Fleet Context by running:

fleet-context




API

Downloading embeddings

You can download any library's embeddings and load it up into a dataframe by running:

from context import download_embeddings

df = download_embeddings("langchain")
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 901k/901k [00:00<00:00, 2.64MiB/s]
                                     id                                   dense_embeddings                                           metadata                                      sparse_values
0  91cd9f22-b3b6-49e1-8672-e1e42a1cf766  [-0.014795871, -0.013938751, 0.02374646, -0.02...  {'id': '91cd9f22-b3b6-49e1-8672-e1e42a1cf766',...  {'indices': [4279915734, 3106554626, 771291085...
1  80cd620e-7408-4649-aaa7-3fe3c719b4ed  [-0.0027519625, 0.013772411, 0.0019546314, -0....  {'id': '80cd620e-7408-4649-aaa7-3fe3c719b4ed',...  {'indices': [1497795724, 573857107, 2203090375...
2  87a406ad-e413-42fc-8813-6fa042f80f6a  [-0.022883521, -0.0036436971, 0.0026068306, 0....  {'id': '87a406ad-e413-42fc-8813-6fa042f80f6a',...  {'indices': [1558403699, 640376310, 358389376,...
3  8bdd8dae-8384-414d-87d2-4390ca29d857  [-0.024882555, -0.0041470923, -0.011419726, -0...  {'id': '8bdd8dae-8384-414d-87d2-4390ca29d857',...  {'indices': [1558403699, 3778951566, 274301652...
4  8cc5eb61-317a-4196-8099-51c47ef70406  [-0.036361936, 0.0027855083, -0.013214805, -0....  {'id': '8cc5eb61-317a-4196-8099-51c47ef70406',...  {'indices': [3586802366, 1110127215, 161253108...
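
If you'd like to peek at what each row contains, you can expand the metadata column. A minimal sketch, assuming each metadata entry is a dict shaped like the query results shown below:

from context import download_embeddings

df = download_embeddings("langchain")

# Each row's metadata holds fields like 'text', 'title', and 'url'
# (assumed shape; inspect your dataframe to confirm).
first = df["metadata"].iloc[0]
print(first.get("title"), first.get("url"))
print(first.get("text", "")[:200])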

You can see a full list of supported libraries & search through them on our website at the bottom of the page.


Querying

If you'd like to directly query from our hosted vector database, you can run:

from context import query

results = query("How do I set up Langchain?")
for result in results:
    print(f"{result['metadata']['text']}\n{result['metadata']['text']}")
[
    {
        'id': '859e8dff-f9ec-497d-aa07-344e48b2f67b',
        'score': 0.848275101,
        'values': [],
        'metadata': {
            'library_id': '4506492b-70de-49f1-ba2e-d65bd7048a28',
            'page_id': '732e264c-c077-4978-bc93-380d7dc28983',
            'parent': '3be9bbcc-b5d6-4a91-9f72-a570c2db33e5',
            'section_id': '',
            'section_index': 0.0,
            'text': "Quickstart ## Installation\u200b To install LangChain run: - Pip - Conda pip install langchain conda install langchain -c conda-forge For more details, see our Installation guide. ## Environment setup\u200b Using LangChain will usually require integrations with one or more model providers, data stores, APIs, etc. For this example, we'll use OpenAI's model APIs. First we'll need to install their Python package: pip install openai Accessing the API requires an API key, which you can get by creating an account and heading here.",
            'title': 'Quickstart | 🦜️🔗 Langchain',
            'type': '',
            'url': 'https://python.langchain.com/docs/get_started/quickstart'
        }
    },
    # ...and 9 more
]

You can also set a custom k value and filter by any metadata field we support (listed below), plus library_name:

results = query("How do I set up Langchain?", k=15, filters={"library_name": "langchain"})

Using Fleet Context's rich metadata

One of the biggest advantages of using Fleet Context's embeddings is the amount of information preserved throughout the chunking and embeddings process. You can take advantage of the metadata to improve the quality of your retrievals significantly.

Here's a full list of metadata that we support.

IDs:

  • library_id: the uuid of the library referenced
  • page_id: the uuid of the page the chunk was retrieved from
  • parent: the uuid of the section the chunk was retrieved from (not to be confused with section_id)

Page/section information:

  • url: the url of the section or page the chunk was retrieved from, formatted as f"{page_url}#{section_id}"
  • section_id: the section's id field from the html
  • section_index: the ordering of the chunk within the section. If there are 2 chunks that have the same parent, this will tell you which one was presented first.

Chunk information:

  • title: the title of the section or of the page (if section title does not exist)
  • text: the text, formatted in markdown. Note that markdown is removed from the embeddings for better retrieval results.
  • type: the type of the chunk. Can be None (most common) or a defined value like class, function, attribute, data, exception, and more.
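
For example, each query result exposes these fields under result['metadata']; a minimal sketch based on the sample result above:

from context import query

results = query("How do I set up Langchain?")
meta = results[0]["metadata"]

print(meta["title"])                # e.g. 'Quickstart | 🦜️🔗 Langchain'
print(meta["url"])                  # deep link, f"{page_url}#{section_id}"
print(meta["type"] or "(untyped)")  # may be empty outside Sphinx-generated docs
print(meta["text"][:300])           # markdown-formatted chunk text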

Improving retrievals with Fleet Context

Re-ranking with section_index

Re-ranking is known to improve results dramatically. We can take that a step further: because the ordering within each section/page is preserved, presenting chunks to the model in the same order they are presented to the reader is likely to produce the best results.

Use section_index to do a smart reranking of your chunks.
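
A minimal sketch of that idea, reordering the hits returned by query:

from context import query

results = query("How do I set up Langchain?", k=15)

# Restore reading order: group chunks by the section they came from (parent),
# then sort within each section by section_index.
reordered = sorted(
    results,
    key=lambda r: (r["metadata"]["parent"], r["metadata"]["section_index"]),
)
context_text = "\n\n".join(r["metadata"]["text"] for r in reordered)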


Parent/child retrieval with parent

If you notice two or more chunks that share the same parent field and sit close together on the page (per section_index), you can go up one level: query all chunks with the same parent uuid and pass in the entire parent section.
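
A minimal sketch, assuming parent is accepted as a filters field (the metadata fields listed above are filterable):

from collections import Counter

from context import query

question = "How do I set up Langchain?"
results = query(question, k=15)

# If several hits share a parent section, fetch that whole section instead.
parents = Counter(r["metadata"]["parent"] for r in results)
top_parent, hits = parents.most_common(1)[0]
if hits >= 2:
    section = query(question, k=50, filters={"parent": top_parent})
    section_text = "\n\n".join(
        c["metadata"]["text"]
        for c in sorted(section, key=lambda c: c["metadata"]["section_index"])
    )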


Better filtering and prompt construction with type

On retrieval, you can map user intent to a type filter. If the user intends to generate code, you can pre-filter retrieval to only class or function chunks. You can use this in creative ways; we've found that pairing it with OpenAI's function calling works really well.

Also, type allows you to construct your prompt with more clarity and display richer information to the user. For example, prefixing each chunk with its type produces better results, because it helps the language model understand what the chunk is trying to say.

Note that type is not guaranteed to be present and defined for all libraries — only the ones that have had their documentation generated by Sphinx/readthedocs.
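
A minimal sketch of intent-based filtering, done post-retrieval so that libraries without type still work (you could equally pass a type filter to query):

from context import query

results = query("How do I create a Pydantic model?", k=15)

# Keep only code-reference chunks; fall back to all results if none are typed.
code_chunks = [
    r for r in results
    if r["metadata"].get("type") in ("class", "function")
] or results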


Rich prompt construction & information presentation with text

Our text field preserves all information from the HTML by converting it to Markdown. This has two big advantages:

  1. From our tests, we've discovered that language models perform better with markdown formatting than without
  2. You're able to display rich information (titles, urls, images) to the user if you're sourcing a chunk
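
Putting the pieces together, a minimal prompt-construction sketch (the prompt wording is illustrative):

from context import query

results = query("How do I set up Langchain?", k=5)

# Prefix each chunk with its type and source so the model knows what it is
# reading, and so you can surface the same links to the user.
blocks = []
for r in results:
    m = r["metadata"]
    kind = m.get("type") or "docs"
    blocks.append(f"[{kind}] {m['title']}\nSource: {m['url']}\n\n{m['text']}")
prompt = "Answer using the documentation below.\n\n" + "\n\n---\n\n".join(blocks)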

Precise sourcing with url and section_id

You can link the user to the exact section with url (where supported, it already includes the section anchor within the page).




CLI Tool

Limit libraries

You can use -l or --libraries followed by a list of libraries to limit your session to a specific set of libraries. Defaults to all. View a list of all supported libraries on our website.

context -l langchain pydantic openai

Use a different OpenAI model

You can select a different OpenAI model by using -m or --model. Defaults to gpt-4. You can set your model to gpt-4-1106-preview (gpt-4-turbo), gpt-3.5-turbo, or gpt-3.5-turbo-16k.

context -m gpt-4-1106-preview

Use non-OpenAI models

You can use Claude, CodeLlama, Mistral, and many other models by

  1. creating an API key on OpenRouter (visit the Keys page after signing up)
  2. setting OPENROUTER_API_KEY as an environment variable
  3. specifying your model using the company prefix, e.g.:
context -m phind/phind-codellama-34b

OpenAI models work this way as well; just use e.g. openai/gpt-4-32k. Other model options are available here.

Optionally, you can attribute your inference token usage to your app or website by setting OPENROUTER_APP_URL and OPENROUTER_APP_TITLE. Your app will show on the homepage of https://openrouter.ai if ranked.
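
For example, in your shell (key, URL, and title values are placeholders):

export OPENROUTER_API_KEY=<your-openrouter-key>
export OPENROUTER_APP_URL=https://myapp.example   # optional attribution
export OPENROUTER_APP_TITLE="My App"              # optional attribution
context -m phind/phind-codellama-34b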


Using local models

Local model support is powered by LM Studio. To use local models, you can use --local or -n:

context --local

You need to download your local model through LM Studio. To do that:

  1. Download LM Studio. You can find the download link here: https://lmstudio.ai
  2. Open LM Studio and download your model of choice.
  3. Click the ↔ icon on the very left sidebar
  4. Select your model and click "Start Server"

The context window defaults to 3000 tokens. You can change this by using --context_window or -w:

context --local --context_window 4096

Advanced settings

You can control the number of retrieved chunks with -k or --k_value (defaults to 15), and toggle whether the model cites its sources with -c or --cite_sources (defaults to true).

context -k 25 -c false




Evaluations

Results

Sampled libraries

We saw a 37-point improvement for gpt-4 generation scores and a 34-point improvement for gpt-4-turbo generation scores amongst a randomly sampled set of 50 libraries.

For gpt-4, we attribute the gains to its lack of knowledge of the most up-to-date versions of these libraries; for gpt-4-turbo, to the combination of having relevant, up-to-date information to generate from and the relevance of the retrieved information.




Embeddings

Check out our visualized data here.

You can download all embeddings here.

