Core ML Stable Diffusion
Run Stable Diffusion on Apple Silicon with Core ML
This repository comprises:

- `python_coreml_stable_diffusion`, a Python package for converting PyTorch models to Core ML format and performing image generation with Hugging Face diffusers in Python
- `StableDiffusion`, a Swift package that developers can add to their Xcode projects as a dependency to deploy image generation capabilities in their apps. The Swift package relies on the Core ML model files generated by `python_coreml_stable_diffusion`
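As a quick orientation, below are representative invocations of the two Python workflows. Angle-bracketed paths are placeholders, and flag names may drift across versions; the module's `--help` output and the repository's usage sections are authoritative.

```bash
# Convert the PyTorch models to Core ML
python -m python_coreml_stable_diffusion.torch2coreml \
    --convert-unet --convert-text-encoder --convert-vae-decoder --convert-safety-checker \
    --model-version stabilityai/stable-diffusion-2-1-base \
    -o <output-mlpackages-directory>

# Generate an image in Python with the converted models
python -m python_coreml_stable_diffusion.pipeline \
    --prompt "a photo of an astronaut riding a horse on mars" \
    -i <output-mlpackages-directory> -o <output-image-directory> \
    --compute-unit ALL --seed 93
```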
If you run into issues during installation or runtime, please refer to the FAQ section. Please refer to the System Requirements section before getting started.
System Requirements
Details (Click to expand)
Model Conversion:
| macOS | Python | coremltools |
|:-----:|:------:|:-----------:|
| 13.1  | 3.8    | 7.0         |
Project Build:
| macOS | Xcode | Swift |
|:-----:|:-----:|:-----:|
| 13.1  | 14.3  | 5.8   |
Target Device Runtime:
| macOS | iPadOS, iOS |
|:-----:|:-----------:|
| 13.1  | 16.2        |
Target Device Runtime (With Memory Improvements):
| macOS | iPadOS, iOS |
|:-----:|:-----------:|
| 14.0  | 17.0        |
Target Device Hardware Generation:
| Mac | iPad | iPhone |
|:---:|:----:|:------:|
| M1  | M1   | A14    |
Performance Benchmarks
Details (Click to expand)
stabilityai/stable-diffusion-2-1-base (512x512)
| Device | `--compute-unit` | `--attention-implementation` | End-to-End Latency (s) | Diffusion Speed (iter/s) |
|--------|:----------------:|:----------------------------:|:----------------------:|:------------------------:|
| iPhone 12 Mini    | `CPU_AND_NE` | `SPLIT_EINSUM_V2` | 18.5* | 1.44 |
| iPhone 12 Pro Max | `CPU_AND_NE` | `SPLIT_EINSUM_V2` | 15.4  | 1.45 |
| iPhone 13         | `CPU_AND_NE` | `SPLIT_EINSUM_V2` | 10.8* | 2.53 |
| iPhone 13 Pro Max | `CPU_AND_NE` | `SPLIT_EINSUM_V2` | 10.4  | 2.55 |
| iPhone 14         | `CPU_AND_NE` | `SPLIT_EINSUM_V2` | 8.6   | 2.57 |
| iPhone 14 Pro Max | `CPU_AND_NE` | `SPLIT_EINSUM_V2` | 7.9   | 2.69 |
| iPad Pro (M1)     | `CPU_AND_NE` | `SPLIT_EINSUM_V2` | 11.2  | 2.19 |
| iPad Pro (M2)     | `CPU_AND_NE` | `SPLIT_EINSUM_V2` | 7.0   | 3.07 |
Details (Click to expand)
- This benchmark was conducted by Apple and Hugging Face using public beta versions of iOS 17.0, iPadOS 17.0 and macOS 14.0 Seed 8 in August 2023.
- The performance data was collected using the `benchmark` branch of the Diffusers app.
- Swift code is not fully optimized, introducing up to ~10% overhead unrelated to Core ML model execution.
- The median latency value across 5 back-to-back end-to-end executions is reported.
- The image generation procedure follows the standard configuration: 20 inference steps, 512x512 output image resolution, 77 text token sequence length, classifier-free guidance (batch size of 2 for unet).
- The actual prompt length does not impact performance because the Core ML model is converted with a static shape that computes the forward pass for all 77 elements (`tokenizer.model_max_length`) of the text token sequence regardless of the actual length of the input text (see the sketch after this list).
- Weights are compressed to 6-bit precision. Please refer to the Weight Compression (6-bits and higher) section for details.
- Activations are in float16 precision for both the GPU and the Neural Engine.
- `*` indicates that the `reduceMemory` option was enabled, which loads and unloads models just-in-time to avoid memory shortage. This added up to 2 seconds to the end-to-end latency.
- In the benchmark table, we report the best performing `--compute-unit` and `--attention-implementation` values per device. The former does not modify the Core ML model and can be applied at runtime, whereas the latter modifies the Core ML model. Note that the best performing compute unit is model version and hardware-specific.
- The performance optimizations in this repository (e.g. `--attention-implementation`) are generally applicable to Transformers and not customized to Stable Diffusion. Better performance may be observed upon custom kernel tuning, so these numbers do not represent peak HW capability.
- Performance may vary across different versions of Stable Diffusion due to architecture changes in the model itself. Each reported number is specific to the model version mentioned in that context.
- Performance may vary due to factors like increased system load from other applications or suboptimal device thermal state.
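To illustrate the static-shape point above, here is a purely illustrative sketch (assuming Hugging Face `transformers` is installed and using `stabilityai/stable-diffusion-2-1-base` only as an example): the tokenizer pads or truncates every prompt to `tokenizer.model_max_length` (77) tokens, so the text encoder's input shape never changes.

```python
from transformers import CLIPTokenizer

# Example only: load the tokenizer that ships with the benchmarked model.
tokenizer = CLIPTokenizer.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", subfolder="tokenizer"
)

# Short and long prompts both yield a (1, 77) token tensor because the
# input is padded/truncated to tokenizer.model_max_length.
for prompt in ["a surfing dog", "a high quality photo of a surfing dog at sunset"]:
    ids = tokenizer(
        prompt,
        padding="max_length",
        max_length=tokenizer.model_max_length,
        truncation=True,
        return_tensors="np",
    ).input_ids
    print(prompt, ids.shape)  # -> (1, 77) in both cases
```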
stabilityai/stable-diffusion-xl-base-1.0-ios (768x768)
| Device | `--compute-unit` | `--attention-implementation` | End-to-End Latency (s) | Diffusion Speed (iter/s) |
|--------|:----------------:|:----------------------------:|:----------------------:|:------------------------:|
| iPhone 12 Pro     | `CPU_AND_NE` | `SPLIT_EINSUM` | 116* | 0.50 |
| iPhone 13 Pro Max | `CPU_AND_NE` | `SPLIT_EINSUM` | 86*  | 0.68 |
| iPhone 14 Pro Max | `CPU_AND_NE` | `SPLIT_EINSUM` | 77*  | 0.83 |
| iPhone 15 Pro Max | `CPU_AND_NE` | `SPLIT_EINSUM` | 31   | 0.85 |
| iPad Pro (M1)     | `CPU_AND_NE` | `SPLIT_EINSUM` | 36   | 0.69 |
| iPad Pro (M2)     | `CPU_AND_NE` | `SPLIT_EINSUM` | 27   | 0.98 |
Details (Click to expand)
- This benchmark was conducted by Apple and Hugging Face using iOS 17.0.2 and iPadOS 17.0.2 in September 2023.
- The performance data was collected using the `benchmark` branch of the Diffusers app.
- The median latency value across 5 back-to-back end-to-end executions is reported.
- The image generation procedure follows this configuration: 20 inference steps, 768x768 output image resolution, 77 text token sequence length, classifier-free guidance (batch size of 2 for unet).
- `Unet.mlmodelc` is compressed to 4.04-bit precision following the Mixed-Bit Palettization algorithm recipe (see the Advanced Weight Compression section below).
- All models except for `Unet.mlmodelc` are compressed to 16-bit precision.
- madebyollin/sdxl-vae-fp16-fix by @madebyollin was used as the source PyTorch model for `VAEDecoder.mlmodelc` in order to enable float16 weight and activation quantization for the VAE model.
- `--attention-implementation SPLIT_EINSUM` is chosen in lieu of `SPLIT_EINSUM_V2` due to the prohibitively long compilation time of the latter.
- `*` indicates that the `reduceMemory` option was enabled, which loads and unloads models just-in-time to avoid memory shortage. This added significant overhead to the end-to-end latency. Note the end-to-end latency difference between iPad Pro (M1) and iPhone 13 Pro Max despite their near-identical diffusion speeds.
- The actual prompt length does not impact performance because the Core ML model is converted with a static shape that computes the forward pass for all 77 elements (`tokenizer.model_max_length`) of the text token sequence regardless of the actual length of the input text.
- In the benchmark table, we report the best performing `--compute-unit` and `--attention-implementation` values per device. The former does not modify the Core ML model and can be applied at runtime, whereas the latter modifies the Core ML model. Note that the best performing compute unit is model version and hardware-specific.
- The performance optimizations in this repository (e.g. `--attention-implementation`) are generally applicable to Transformers and not customized to Stable Diffusion. Better performance may be observed upon custom kernel tuning, so these numbers do not represent peak HW capability.
- Performance may vary across different versions of Stable Diffusion due to architecture changes in the model itself. Each reported number is specific to the model version mentioned in that context.
- Performance may vary due to factors like increased system load from other applications or suboptimal device thermal state.
stabilityai/stable-diffusion-xl-base-1.0 (1024x1024)
| Device | `--compute-unit` | `--attention-implementation` | End-to-End Latency (s) | Diffusion Speed (iter/s) |
|--------|:----------------:|:----------------------------:|:----------------------:|:------------------------:|
| MacBook Pro (M1 Max)  | `CPU_AND_GPU` | `ORIGINAL` | 46 | 0.46 |
| MacBook Pro (M2 Max)  | `CPU_AND_GPU` | `ORIGINAL` | 37 | 0.57 |
| Mac Studio (M1 Ultra) | `CPU_AND_GPU` | `ORIGINAL` | 25 | 0.89 |
| Mac Studio (M2 Ultra) | `CPU_AND_GPU` | `ORIGINAL` | 20 | 1.11 |
Details (Click to expand)
- This benchmark was conducted by Apple and Hugging Face using public beta versions of iOS 17.0, iPadOS 17.0 and macOS 14.0 in July 2023.
- The performance data was collected by running the `StableDiffusion` Swift pipeline.
- The median latency value across 3 back-to-back end-to-end executions is reported.
- The image generation procedure follows the standard configuration: 20 inference steps, 1024x1024 output image resolution, classifier-free guidance (batch size of 2 for unet).
- Weights and activations are in float16 precision.
- Performance may vary across different versions of Stable Diffusion due to architecture changes in the model itself. Each reported number is specific to the model version mentioned in that context.
- Performance may vary due to factors like increased system load from other applications or suboptimal device thermal state. Given these factors, we do not report sub-second variance in latency.
Weight Compression (6-bits and higher)
Details (Click to expand)
coremltools-7.0 supports advanced weight compression techniques for pruning, palettization and linear 8-bit quantization. For these techniques, `coremltools.optimize.torch.*` includes APIs that require fine-tuning to maintain accuracy at higher compression rates, whereas `coremltools.optimize.coreml.*` includes APIs that are applied post-training and are data-free.
We demonstrate how data-free post-training palettization, implemented in `coremltools.optimize.coreml.palettize_weights`, enables greatly improved performance for Stable Diffusion on mobile devices. This API implements the Fast Exact k-Means algorithm for optimal weight clustering, which yields more accurate palettes. Passing `--quantize-nbits {2,4,6,8}` during conversion applies this compression to the unet and text_encoder models.
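For reference, here is a minimal post-training palettization sketch using the `coremltools.optimize.coreml` API directly. The `.mlpackage` paths are placeholders, and 6 bits is shown only as an example value.

```python
import coremltools as ct
import coremltools.optimize.coreml as cto

# Load a previously converted Core ML model (placeholder path).
mlmodel = ct.models.MLModel("Stable_Diffusion_unet.mlpackage")

# Palettize all eligible weights to a 6-bit palette via k-means clustering.
op_config = cto.OpPalettizerConfig(mode="kmeans", nbits=6)
config = cto.OptimizationConfig(global_config=op_config)
compressed_mlmodel = cto.palettize_weights(mlmodel, config)

compressed_mlmodel.save("Stable_Diffusion_unet_palettized.mlpackage")
```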
For best results, we recommend training-time palettization via `coremltools.optimize.torch.palettization.DKMPalettizer` if fine-tuning your model is feasible. This API implements the Differentiable k-Means (DKM) learned palettization algorithm. In this exercise, we stick to post-training palettization for the sake of simplicity and ease of reproducibility.
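For orientation, the training-time flow looks roughly like the following sketch, assuming the prepare/step/finalize pattern of coremltools' `optimize.torch` APIs. The tiny network, random data and 4-bit setting are all stand-ins, not a recipe for Stable Diffusion itself.

```python
import torch
from coremltools.optimize.torch.palettization import (
    DKMPalettizer,
    DKMPalettizerConfig,
)

# Toy stand-in for a pretrained network; in practice this would be the
# model you are fine-tuning.
model = torch.nn.Sequential(
    torch.nn.Linear(64, 64), torch.nn.ReLU(), torch.nn.Linear(64, 8)
)

# Request a 4-bit learned palette on all supported layers.
config = DKMPalettizerConfig.from_dict({"global_config": {"n_bits": 4}})
palettizer = DKMPalettizer(model, config)
model = palettizer.prepare()  # wraps layers with palettization-aware modules

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
for _ in range(10):  # stand-in fine-tuning loop on random data
    x, y = torch.randn(32, 64), torch.randn(32, 8)
    loss = torch.nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    palettizer.step()  # advances the DKM palettization schedule

model = palettizer.finalize()  # bakes the learned palettes into the weights
```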
The Neural Engine is capable of accelerating models with low-bit palettization: 1, 2, 4, 6 or 8 bits. With iOS 17 and macOS 14, compressed weights for Core ML models can be just-in-time decompressed during runtime (as opposed to ahead-of-time decompression upon load) to match the precision of activation tensors. This yields significant memory savings and enables models to run on devices with smaller RAM (e.g. iPhone 12 Mini). In addition, compressed weights are faster to fetch from memory which reduces the latency of memory bandwidth-bound layers. The just-in-time decompression behavior depends on the compute unit, layer type and hardware generation.
| Weight Precision | `--compute-unit` | `stabilityai/stable-diffusion-2-1-base` generating *"a high quality photo of a surfing dog"* |
|:----------------:|:--------------------:|:---------:|
| 6-bit            | `cpuAndNeuralEngine` | *(image)* |
| 16-bit           | `cpuAndNeuralEngine` | *(image)* |
| 16-bit           | `cpuAndGPU`          | *(image)* |
Note that there are minor differences between the 16-bit (float16) and 6-bit results. These differences are comparable to the differences between float16 and float32, or across compute units, as exemplified above. We recommend a minimum of 6 bits for palettizing Stable Diffusion. Lower bit depths (1, 2 and 4) will require either fine-tuning or advanced palettization techniques such as MBP.
Resources:
Advanced Weight Compression (Lower than 6-bits)
Details (Click to expand)
This section describes an advanced compression algorithm called Mixed-Bit Palettization (MBP) built on top of the Post-Training Weight Palettization tools and using the Weights Metadata API from coremltools.
MBP builds a per-layer "palettization