📚 Table of Contents
✨ Highlights
Support both Sync and Async
Authentication
- Log in with your Poe tokens
- Auto Proxy requests
- Specify Proxy context
Message Automation
- Create new chat thread
- Send messages
- Stream bot responses
- Send concurrent messages
- Retry the last message
- Support file attachments
- Retrieve suggested replies
- Stop message generation
- Delete chat threads
- Clear conversation context
- Purge messages of 1 bot
- Purge all messages of user
- Fetch previous messages
- Share and import messages
- Get citations
Chat Management
- Get Chat Ids & Chat Codes of bot(s)
- Get subscription info and remaining points
Bot Management
- Get bot info
- Get available creation models
- Create custom bot
- Edit custom bot
- Delete a custom bot
Knowledge Base Customization
- Get available knowledge bases
- Upload knowledge bases for custom bots
- Edit knowledge bases for custom bots
Discovery
- Get available bots
- Get a user's bots
- Get available categories
- Explore 3rd party bots and users
Bots Group Chat (Beta)
- Create a group chat
- Delete a group chat
- Get created groups
- Get group data
- Save group chat history
- Load group chat history
🔧 Installation
- First, install this library with the following command:

```shell
pip install -U poe-api-wrapper
```

- Or install the auto-proxy version of this library (requires Python 3.9+):

```shell
pip install -U 'poe-api-wrapper[proxy]'
```
Quick setup for the Async Client:

```python
from poe_api_wrapper import AsyncPoeApi
import asyncio

tokens = {
    'p-b': ...,
    'p-lat': ...,
}

async def main():
    client = await AsyncPoeApi(tokens=tokens).create()
    message = "Explain quantum computing in simple terms"
    async for chunk in client.send_message(bot="gpt3_5", message=message):
        print(chunk["response"], end='', flush=True)

asyncio.run(main())
```
- You can run an example of this library:

```python
from poe_api_wrapper import PoeExample

tokens = {
    'p-b': ...,
    'p-lat': ...,
}

PoeExample(tokens=tokens).chat_with_bot()
```
- This library also supports a command-line interface:

```shell
poe -b P-B_HERE -lat P-LAT_HERE -f FORMKEY_HERE
```

> [!TIP]
> Type `poe -h` for more info.
🦄 Documentation
Available Default Bots
Display Name | Model | Token Limit | Words | Access Type |
---|---|---|---|---|
Assistant | capybara | 4K | 3K | |
Claude-3.5-Sonnet | claude_3_igloo | 4K | 3K | |
Claude-3-Opus | claude_2_1_cedar | 4K | 3K | |
Claude-3-Sonnet | claude_2_1_bamboo | 4K | 3K | |
Claude-3-Haiku | claude_3_haiku | 4K | 3K | |
Claude-3.5-Sonnet-200k | claude_3_igloo_200k | 200K | 150K | |
Claude-3-Opus-200k | claude_3_opus_200k | 200K | 150K | |
Claude-3-Sonnet-200k | claude_3_sonnet_200k | 200K | 150K | |
Claude-3-Haiku-200k | claude_3_haiku_200k | 200K | 150K | |
Claude-2 | claude_2_short | 4K | 3K | |
Claude-2-100k | a2_2 | 100K | 75K | |
Claude-instant | a2 | 9K | 7K | |
Claude-instant-100k | a2_100k | 100K | 75K | |
GPT-3.5-Turbo | chinchilla | 4K | 3K | |
GPT-3.5-Turbo-Raw | gpt3_5 | 2K | 1.5K | |
GPT-3.5-Turbo-Instruct | chinchilla_instruct | 2K | 1.5K | |
ChatGPT-16k | agouti | 16K | 12K | |
GPT-4-Classic | gpt4_classic | 2K | 1.5K | |
GPT-4-Turbo | beaver | 4K | 3K | |
GPT-4-Turbo-128k | vizcacha | 128K | 96K | |
GPT-4o | gpt4_o | 4K | 3K | |
GPT-4o-128k | gpt4_o_128k | 128K | 96K | |
GPT-4o-Mini | gpt4_o_mini | 4K | 3K | |
GPT-4o-Mini-128k | gpt4_o_mini_128k | 128K | 96K | |
Google-PaLM | acouchy | 8K | 6K | |
Code-Llama-13b | code_llama_13b_instruct | 4K | 3K | |
Code-Llama-34b | code_llama_34b_instruct | 4K | 3K | |
Solar-Mini | upstage_solar_0_70b_16bit | 2K | 1.5K | |
Gemini-1.5-Flash-Search | gemini_pro_search | 4K | 3K | |
Gemini-1.5-Pro-2M | gemini_1_5_pro_1m | 2M | 1.5M | |
> [!IMPORTANT]
> The token limits and word counts listed above are approximate and may not be entirely accurate, as poe.com's pre-prompt engineering process is private and not publicly disclosed. The table only lists bots whose display names differ from their model names; all other bots on poe.com share the same display name and model name.
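The table implies a roughly 4:3 token-to-word ratio (e.g. 4K tokens ≈ 3K words, 128K tokens ≈ 96K words). A quick estimator can be sketched from that observation; note the 0.75 factor is an assumption inferred from the table rows, not a figure published by poe.com:

```python
def approx_words(token_limit: int) -> int:
    """Estimate usable word count from a bot's token limit.

    The 0.75 ratio is inferred from the table above (4K -> 3K,
    128K -> 96K, 200K -> 150K); it is an approximation only.
    """
    return int(token_limit * 0.75)

print(approx_words(4000))    # 3000
print(approx_words(128000))  # 96000
```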
How to get your Token
Getting `p-b` and `p-lat` cookies (required)

- Sign in at https://poe.com/
- Press F12 to open Devtools (or Right-click > Inspect)
  - Chromium: Devtools > Application > Cookies > poe.com
  - Firefox: Devtools > Storage > Cookies
  - Safari: Devtools > Storage > Cookies
- Copy the values of the `p-b` and `p-lat` cookies
Getting formkey (optional)
> [!IMPORTANT]
> By default, poe-api-wrapper will automatically retrieve the formkey for you. If that doesn't work, pass this token manually by following these steps:
There are two ways to get the formkey. Press F12 to open Devtools (or Right-click > Inspect), then:

- 1st method: Devtools > Network > gql_POST > Headers, then copy the value of `Poe-Formkey`
- 2nd method: Devtools > Console, type `allow pasting`, then paste this script: `window.ereNdsRqhp2Rd3LEW()` and copy the result
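Once retrieved, the formkey can be passed alongside the required cookies. A minimal sketch, assuming the `formkey` key name follows the library's token-dict convention (the `XXXXXXXX` values are placeholders):

```python
tokens = {
    'p-b': 'XXXXXXXX',      # required cookie
    'p-lat': 'XXXXXXXX',    # required cookie
    'formkey': 'XXXXXXXX',  # value copied from the Poe-Formkey header or the console script
}

# The same dict is then passed to PoeApi / AsyncPoeApi exactly as in the
# quick-setup examples above.
```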
OpenAI
Read Docs
Available Routes
- /models
- /chat/completions
- /images/generations
- /images/edits
- /v1/models
- /v1/chat/completions
- /v1/images/generations
- /v1/images/edits
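As the list above shows, each endpoint is exposed both with and without the `/v1` prefix, so clients hardcoding either base-path style reach the same four handlers. Trivially:

```python
# The four endpoints listed above, each served under two prefixes
endpoints = ["/models", "/chat/completions", "/images/generations", "/images/edits"]
routes = endpoints + ["/v1" + e for e in endpoints]

print(len(routes))  # 8
```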
Quick Setup
- First, install the additional packages:

```shell
pip install -U 'poe-api-wrapper[llm]'
```

- Clone the repo or reuse the setup in the `openai` folder:

```shell
git clone https://github.com/snowby666/poe-api-wrapper.git
cd poe-api-wrapper/poe_api_wrapper/openai
```

- Modify `secrets.json` with your own tokens
- Run the FastAPI server:

```shell
python api.py
```

- Run the examples:

```shell
python example.py
```
Built-in completion (WIP)
OpenAI Proxy Server
- Start the server:

```python
from poe_api_wrapper import PoeServer

tokens = [
    {"p-b": "XXXXXXXX", "p-lat": "XXXXXXXX"},
    {"p-b": "XXXXXXXX", "p-lat": "XXXXXXXX"},
    {"p-b": "XXXXXXXX", "p-lat": "XXXXXXXX"}
]

PoeServer(tokens=tokens)

# You can also specify the address and port (default is 127.0.0.1:8000)
PoeServer(tokens=tokens, address="0.0.0.0", port="8080")
```
Chat
- Non-streamed example:

```python
import openai

client = openai.OpenAI(
    api_key="anything",
    base_url="http://127.0.0.1:8000/v1/",
    default_headers={"Authorization": "Bearer anything"}
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)
print(response.choices[0].message.content)
```
- Streaming example:

```python
import openai

client = openai.OpenAI(
    api_key="anything",
    base_url="http://127.0.0.1:8000/v1/",
    default_headers={"Authorization": "Bearer anything"}
)

stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "this is a test request, write a short poem"}
    ],
    stream=True
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="", flush=True)

# Set max_tokens
stream_2 = client.chat.completions.create(
    model="claude-instant",
    messages=[
        {"role": "user", "content": "Can you tell me about the creation of blackholes?"}
    ],
    stream=True,
    max_tokens=20  # if max_tokens is reached, finish_reason will be 'length'
)
for chunk in stream_2:
    print(chunk.choices[0].delta.content or "", end="", flush=True)
```
# Include usage
stream_3 = client.chat.completions.create(
model="claude-instant",
messages = [
{"role": "user", "content": "Write a 100-character meta description for my blog post about llamas"}
],