# BuildKit
BuildKit is a toolkit for converting source code to build artifacts in an efficient, expressive and repeatable manner.
Key features:
- Automatic garbage collection
- Extendable frontend formats
- Concurrent dependency resolution
- Efficient instruction caching
- Build cache import/export
- Nested build job invocations
- Distributable workers
- Multiple output formats
- Pluggable architecture
- Execution without root privileges
Read the proposal from https://github.com/moby/moby/issues/32925

Introductory blog post: https://blog.mobyproject.org/introducing-buildkit-17e056cc5317

Join the `#buildkit` channel on Docker Community Slack.
> [!NOTE]
> If you are visiting this repo for the usage of BuildKit-only Dockerfile features like `RUN --mount=type=(bind|cache|tmpfs|secret|ssh)`, please refer to the Dockerfile reference.
> [!NOTE]
> `docker build` uses Buildx and BuildKit by default since Docker Engine 23.0. You don't need to read this document unless you want to use the full-featured standalone version of BuildKit.
- Used by
- Quick start
- Cache
- Metadata
- Systemd socket activation
- Expose BuildKit as a TCP service
- Containerizing BuildKit
- OpenTelemetry support
- Running BuildKit without root privileges
- Building multi-platform images
- Contributing
## Used by
BuildKit is used by the following projects:
- Moby & Docker (`DOCKER_BUILDKIT=1 docker build`)
- img
- OpenFaaS Cloud
- container build interface
- Tekton Pipelines (formerly Knative Build Templates)
- the Sanic build tool
- vab
- Rio
- kim
- PouchContainer
- Docker buildx
- Okteto Cloud
- Earthly earthfiles
- Gitpod
- Dagger
- envd
- Depot
- Namespace
- Unikraft
## Quick start
> :information_source: For Kubernetes deployments, see `examples/kubernetes`.
BuildKit is composed of the `buildkitd` daemon and the `buildctl` client.
While the `buildctl` client is available for Linux, macOS, and Windows, the `buildkitd` daemon is currently only available for Linux and Windows.

The latest binaries of BuildKit are available on the GitHub releases page for Linux, macOS, and Windows.
### Linux Setup
The `buildkitd` daemon requires the following components to be installed:

- runc or crun
- containerd (if you want to use the containerd worker)

Starting the `buildkitd` daemon:
You need to run `buildkitd` as the root user on the host.

```bash
$ sudo buildkitd
```

To run `buildkitd` as a non-root user, see `docs/rootless.md`.
The `buildkitd` daemon supports two worker backends: OCI (runc) and containerd.

By default, the OCI (runc) worker is used. You can set `--oci-worker=false --containerd-worker=true` to use the containerd worker.

We are open to adding more backends.
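As a minimal sketch, switching to the containerd worker might look like the following (assuming containerd is already installed and running with its default configuration):

```bash
# Run buildkitd with the containerd worker instead of the default OCI (runc) worker.
# Assumes containerd is already running on the host.
sudo buildkitd \
  --oci-worker=false \
  --containerd-worker=true
```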
To start the `buildkitd` daemon using systemd socket activation, you can install the buildkit systemd unit files. See Systemd socket activation.
The `buildkitd` daemon listens for gRPC API requests on `/run/buildkit/buildkitd.sock` by default, but you can also use TCP sockets.
See Expose BuildKit as a TCP service.
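As a quick sketch (without TLS, so only suitable for local experimentation; the address and port below are arbitrary), listening on TCP and pointing the client at it might look like:

```bash
# Daemon: listen on a TCP address instead of the default unix socket.
sudo buildkitd --addr tcp://127.0.0.1:1234

# Client: point buildctl at the same address, either per invocation...
buildctl --addr tcp://127.0.0.1:1234 debug workers
# ...or via the BUILDKIT_HOST environment variable.
export BUILDKIT_HOST=tcp://127.0.0.1:1234
```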
### Windows Setup
See instructions and notes at `docs/windows.md`.
### macOS Setup
An unofficial Homebrew formula is available for macOS.

```bash
$ brew install buildkit
```

The Homebrew formula does not contain the daemon (`buildkitd`).

For example, Lima can be used for launching the daemon inside a Linux VM.

```bash
brew install lima
limactl start template://buildkit
export BUILDKIT_HOST="unix://$HOME/.lima/buildkit/sock/buildkitd.sock"
```
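Once the VM is running, a quick way to verify that the client can reach the daemon is to list the registered workers:

```bash
# Should print the worker(s) of the buildkitd running inside the Lima VM.
buildctl debug workers
```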
### Build from source
To build BuildKit from source, see `.github/CONTRIBUTING.md`.

For a `buildctl` command reference, see the reference documentation.
## Exploring LLB
BuildKit builds are based on a binary intermediate format called LLB that is used for defining the dependency graph for the processes running as part of your build. tl;dr: LLB is to Dockerfile what LLVM IR is to C.
- Marshaled as Protobuf messages
- Concurrently executable
- Efficiently cacheable
- Vendor-neutral (i.e. non-Dockerfile languages can be easily implemented)
See `solver/pb/ops.proto` for the format definition, and see `./examples/README.md` for example LLB applications.
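As a sketch of how LLB is produced and consumed, the Go programs under `./examples/` marshal an LLB definition to stdout, which can then be inspected or built by piping it into `buildctl` (`buildkit0` is just one of the bundled examples, and `jq` is used purely for pretty-printing):

```bash
# Dump the marshaled LLB graph as JSON for inspection.
go run ./examples/buildkit0 | buildctl debug dump-llb | jq .

# Or feed the LLB definition straight to the daemon and build it.
go run ./examples/buildkit0 | buildctl build
```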
Currently, the following high-level languages have been implemented for LLB:
- Dockerfile (See Exploring Dockerfiles)
- Buildpacks
- Mockerfile
- Gockerfile
- bldr (Pkgfile)
- HLB
- Earthfile (Earthly)
- Cargo Wharf (Rust)
- Nix
- mopy (Python)
- envd (starlark)
- Blubber
- Bass
- kraft.yaml (Unikraft)
- (open a PR to add your own language)
## Exploring Dockerfiles
Frontends are components that run inside BuildKit and convert any build definition to LLB. There is a special frontend called gateway (`gateway.v0`) that allows using any image as a frontend.

During development, the Dockerfile frontend (`dockerfile.v0`) is also part of the BuildKit repo. In the future, this will be moved out, and Dockerfiles can be built using an external image.
### Building a Dockerfile with `buildctl`
```bash
buildctl build \
    --frontend=dockerfile.v0 \
    --local context=. \
    --local dockerfile=.
# or
buildctl build \
    --frontend=dockerfile.v0 \
    --local context=. \
    --local dockerfile=. \
    --opt target=foo \
    --opt build-arg:foo=bar
```
`--local` exposes local source files from the client to the builder. `context` and `dockerfile` are the names the Dockerfile frontend uses to look up the build context and the Dockerfile location.

If the Dockerfile has a different filename, it can be specified with `--opt filename=./Dockerfile-alternative`.
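For instance, a build where the Dockerfile is named differently (the filename below is just an illustration) could be invoked as:

```bash
buildctl build \
    --frontend=dockerfile.v0 \
    --local context=. \
    --local dockerfile=. \
    --opt filename=./Dockerfile-alternative
```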
### Building a Dockerfile using external frontend
External versions of the Dockerfile frontend are pushed to https://hub.docker.com/r/docker/dockerfile-upstream and https://hub.docker.com/r/docker/dockerfile and can be used with the gateway frontend. The source for the external frontend is currently located in `./frontend/dockerfile/cmd/dockerfile-frontend` but will move out of this repository in the future (#163). For automatic builds from the master branch of this repository, the `docker/dockerfile-upstream:master` or `docker/dockerfile-upstream:master-labs` image can be used.
```bash
buildctl build \
    --frontend gateway.v0 \
    --opt source=docker/dockerfile \
    --local context=. \
    --local dockerfile=.
```

```bash
buildctl build \
    --frontend gateway.v0 \
    --opt source=docker/dockerfile \
    --opt context=https://github.com/moby/moby.git \
    --opt build-arg:APT_MIRROR=cdn-fastly.deb.debian.org
```
## Output
By default, the build result and intermediate cache will only remain internal to BuildKit. An output needs to be specified to retrieve the result.
### Image/Registry
```bash
buildctl build ... --output type=image,name=docker.io/username/image,push=true
```
To export the image to multiple registries:
```bash
buildctl build ... --output type=image,\"name=docker.io/username/image,docker.io/username2/image2\",push=true
```
To export the cache embedded with the image and push them to the registry together, type `registry` is required to import the cache; you should specify `--export-cache type=inline` and `--import-cache type=registry,ref=...`. To export the cache to a local directory, you should specify `--export-cache type=local`.

Details in Export cache.
```bash
buildctl build ... \
  --output type=image,name=docker.io/username/image,push=true \
  --export-cache type=inline \
  --import-cache type=registry,ref=docker.io/username/image
```
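Similarly, a sketch of exporting the cache to a local directory instead of inline (the paths are placeholders; `dest` and `src` point at a directory the client can write to and read from):

```bash
# Export the build cache to a local directory...
buildctl build ... \
  --export-cache type=local,dest=path/to/cache-dir

# ...and import it again on a later build.
buildctl build ... \
  --import-cache type=local,src=path/to/cache-dir
```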
Keys supported by image output:

- `name=<value>`: specify image name(s)
- `push=true`: push after creating the image
- `push-by-digest=true`: push unnamed image
- `registry.insecure=true`: push to insecure HTTP registry
- `oci-mediatypes=true`: use OCI mediatypes in configuration JSON instead of Docker's
- `unpack=true`: unpack image after creation (for use with containerd)
- `dangling-name-prefix=<value>`: name image with `prefix@<digest>`, used for anonymous images
- `name-canonical=true`: add additional canonical name `name@<digest>`
- `compression=<uncompressed|gzip|estargz|zstd>`: choose compression type for layers newly created and cached, gzip is the default value. estargz should be used with `oci-mediatypes=true`.
- `compression-level=<value>`: compression level for gzip, estargz (0-9) and zstd (0-22)
- `rewrite-timestamp=true`: rewrite the file timestamps to the `SOURCE_DATE_EPOCH` value. See `docs/build-repro.md` for how to specify the `SOURCE_DATE_EPOCH` value.
- `force-compression=true`: forcefully apply the `compression` option to all layers (including already existing layers)
- `store=true`: store the result images to the worker's (e.g. containerd) image store and ensure that the image has all blobs in the content store (default `true`). Ignored if the worker doesn't have an image store (e.g. OCI worker).
- `annotation.<key>=<value>`: attach an annotation with the respective `key` and `value` to the built image
  - Using the extended syntaxes, `annotation-<type>.<key>=<value>`, `annotation[<platform>].<key>=<value>` and both combined with `annotation-<type>[<platform>].<key>=<value>`, allows configuring exactly where to attach the annotation.
  - `<type>` specifies what object to attach to, and can be any of `manifest` (the default), `manifest-descriptor`, `index` and `index-descriptor`.
  - `<platform>` specifies which objects to attach to (by default, all), and is the same key passed into the `platform` opt, see `docs/multi-platform.md`.
  - See `docs/annotations.md` for more details.
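Putting a few of these keys together, a hypothetical invocation that pushes an OCI image with zstd-compressed layers and a single annotation might look like this (the image name and annotation are placeholders):

```bash
buildctl build \
    --frontend=dockerfile.v0 \
    --local context=. \
    --local dockerfile=. \
    --output type=image,name=docker.io/username/image,push=true,oci-mediatypes=true,compression=zstd,annotation.org.opencontainers.image.source=https://github.com/username/repo
```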
If credentials are required, `buildctl` will attempt to read the Docker configuration file `$DOCKER_CONFIG/config.json`.
`$DOCKER_CONFIG` defaults to `~/.docker`.
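In practice, credentials written by `docker login` (or any tool producing the same config format) are picked up automatically; overriding `DOCKER_CONFIG` is only needed for a non-default location (the path below is just an example):

```bash
# Credentials end up in $DOCKER_CONFIG/config.json (~/.docker/config.json by default).
docker login docker.io

# Optional: point buildctl at a non-default Docker config directory.
export DOCKER_CONFIG=$HOME/.docker-alt
```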
### Local directory
The `local` output type copies the resulting files directly back to the client. This is useful if BuildKit is being used for building something other than container images.
```bash
buildctl build ... --output type=local,dest=path/to/output-dir
```
To export specific files, use multi-stage builds with a scratch stage and copy the needed files into that stage with `COPY --from`.
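A minimal sketch of that pattern, assuming a Go project purely for illustration (stage names, paths, and the base image are placeholders):

```bash
# Dockerfile whose final scratch stage holds only the files to export.
cat > Dockerfile <<'EOF'
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .

FROM scratch AS artifacts
COPY --from=build /out/app /app
EOF

# Build only the scratch stage and copy its contents to ./output on the client.
buildctl build \
    --frontend=dockerfile.v0 \
    --local context=. \
    --local dockerfile=. \
    --opt target=artifacts \
    --output type=local,dest=./output
```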