llamafile
llamafile lets you distribute and run LLMs with a single file. (announcement blog post)
Our goal is to make open LLMs much more
accessible to both developers and end users. We're doing that by
combining llama.cpp with Cosmopolitan Libc into one
framework that collapses all the complexity of LLMs down to
a single-file executable (called a "llamafile") that runs
locally on most computers, with no installation.
llamafile is a Mozilla Builders project.
Quickstart
The easiest way to try it for yourself is to download our example llamafile for the LLaVA model (license: LLaMA 2, OpenAI). LLaVA is a new LLM that can do more than just chat; you can also upload images and ask it questions about them. With llamafile, this all happens locally; no data ever leaves your computer.
- Download llava-v1.5-7b-q4.llamafile (4.29 GB).
- Open your computer's terminal.
- If you're using macOS, Linux, or BSD, you'll need to grant permission for your computer to execute this new file. (You only need to do this once.)

  chmod +x llava-v1.5-7b-q4.llamafile

- If you're on Windows, rename the file by adding ".exe" on the end.
- Run the llamafile. e.g.:

  ./llava-v1.5-7b-q4.llamafile

- Your browser should open automatically and display a chat interface. (If it doesn't, just open your browser and point it at http://localhost:8080.)
- When you're done chatting, return to your terminal and hit Control-C to shut down llamafile.
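For reference, the whole macOS/Linux/BSD flow looks roughly like this, assuming the llamafile was downloaded into your current directory:

  chmod +x llava-v1.5-7b-q4.llamafile   # one-time: mark the file as executable
  ./llava-v1.5-7b-q4.llamafile          # starts a local server at http://localhost:8080
  # When you're finished chatting, press Control-C in this terminal to stop it.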
Having trouble? See the "Gotchas" section below.
JSON API Quickstart
When llamafile is started, in addition to hosting a web UI chat server at http://127.0.0.1:8080/, it also provides an OpenAI API-compatible chat completions endpoint. It's designed to support the most common OpenAI API use cases, in a way that runs entirely locally. We've also extended it to include llama.cpp-specific features such as mirostat. For further details on what fields and endpoints are available, refer to both the OpenAI documentation and the llamafile server README.
Curl API Client Example
The simplest way to get started using the API is to copy and paste the following curl command into your terminal.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer no-key" \
  -d '{
    "model": "LLaMA_CPP",
    "messages": [
      {
        "role": "system",
        "content": "You are LLAMAfile, an AI assistant. Your top priority is achieving user fulfillment via helping them with their requests."
      },
      {
        "role": "user",
        "content": "Write a limerick about python exceptions"
      }
    ]
  }' | python3 -c '
import json
import sys
json.dump(json.load(sys.stdin), sys.stdout, indent=2)
print()
'
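The inline python3 snippet above is only there to pretty-print the JSON response. If you have jq installed, you can pipe through it instead; for example, a minimal request looks like this:

  curl -s http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer no-key" \
    -d '{"model": "LLaMA_CPP", "messages": [{"role": "user", "content": "Say hello"}]}' | jq .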
The response that's printed should look like the following:
{
  "choices" : [
    {
      "finish_reason" : "stop",
      "index" : 0,
      "message" : {
        "content" : "There once was a programmer named Mike\nWho wrote code that would often choke\nHe used try and except\nTo handle each step\nAnd his program ran without any hike.",
        "role" : "assistant"
      }
    }
  ],
  "created" : 1704199256,
  "id" : "chatcmpl-Dt16ugf3vF8btUZj9psG7To5tc4murBU",
  "model" : "LLaMA_CPP",
  "object" : "chat.completion",
  "usage" : {
    "completion_tokens" : 38,
    "prompt_tokens" : 78,
    "total_tokens" : 116
  }
}
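As noted above, the endpoint also accepts llama.cpp-specific sampling options such as mirostat alongside the standard OpenAI fields. The field names below (mirostat, mirostat_tau, mirostat_eta) follow llama.cpp's sampling parameters and may vary between releases, so treat this as a sketch and check the llamafile server README for what your build supports:

  curl http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer no-key" \
    -d '{
      "model": "LLaMA_CPP",
      "temperature": 0.7,
      "mirostat": 2,
      "mirostat_tau": 5.0,
      "mirostat_eta": 0.1,
      "messages": [
        {"role": "user", "content": "Write a haiku about local inference"}
      ]
    }'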
Python API Client Example
If you've already developed your software using the openai Python package (that's published by OpenAI), then you should be able to port your app to talk to llamafile instead by making a few changes to base_url and api_key. This example assumes you've run pip3 install openai to install OpenAI's client software, which is required by this example. Their package is just a simple Python wrapper around the OpenAI API interface, which can be implemented by any server.
#!/usr/bin/env python3
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # "http://<Your api-server IP>:port"
    api_key="sk-no-key-required"
)
completion = client.chat.completions.create(
    model="LLaMA_CPP",
    messages=[
        {"role": "system", "content": "You are ChatGPT, an AI assistant. Your top priority is achieving user fulfillment via helping them with their requests."},
        {"role": "user", "content": "Write a limerick about python exceptions"}
    ]
)
print(completion.choices[0].message)
The above code will return a Python object like this:
ChatCompletionMessage(content='There once was a programmer named Mike\nWho wrote code that would often strike\nAn error would occur\nAnd he\'d shout "Oh no!"\nBut Python\'s exceptions made it all right.', role='assistant', function_call=None, tool_calls=None)
Other example llamafiles
We also provide example llamafiles for other models, so you can easily try out llamafile with different kinds of LLMs.
Here is an example for the Mistral command-line llamafile:
./mistral-7b-instruct-v0.2.Q5_K_M.llamafile --temp 0.7 -p '[INST]Write a story about llamas[/INST]'
And here is an example for WizardCoder-Python command-line llamafile:
./wizardcoder-python-13b.llamafile --temp 0 -e -r '```\n' -p '```c\nvoid *memcpy_sse2(char *dst, const char *src, size_t size) {\n'
And here's an example for the LLaVA command-line llamafile:
./llava-v1.5-7b-q4.llamafile --temp 0.2 --image lemurs.jpg -e -p '### User: What do you see?\n### Assistant:'
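In these command-line examples, --temp sets the sampling temperature, -p supplies the prompt, -e enables processing of escape sequences such as \n in the prompt, -r sets a reverse prompt (a stop string, here the closing code fence) that ends generation, and --image attaches an image file for the multimodal LLaVA model.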
As before, macOS, Linux, and BSD users will need to use the "chmod" command to grant execution permissions to the file before running these llamafiles for the first time.
Unfortunately, Windows users cannot make use of many of these example llamafiles because Windows has a maximum executable file size of 4GB, and all of these examples exceed that size. (The LLaVA llamafile works on Windows because it is 30MB shy of the size limit.) But don't lose heart: llamafile allows you to use external weights; this is described later in this document.
Having trouble? See the "Gotchas" section below.
How llamafile works
A llamafile is an executable LLM that you can run on your own computer. It contains the weights for a given open LLM, as well as everything needed to actually run that model on your computer. There's nothing to install or configure (with a few caveats, discussed in subsequent sections of this document).
This is all accomplished by combining llama.cpp with Cosmopolitan Libc, which provides some useful capabilities:
- llamafiles can run on multiple CPU