https://github.com/ml-explore/mlx-examples
Clarification in the stable diffusion readme
parent e488831e03
commit a1b4a475e1
@@ -65,8 +65,9 @@ Performance

 The following table compares the performance of the UNet in stable diffusion.
-We report throughput in images per second for the provided `txt2image.py`
-script and the `diffusers` library using the MPS PyTorch backend.
+We report throughput in images per second **processed by the UNet** for the
+provided `txt2image.py` script and the `diffusers` library using the MPS
+PyTorch backend.
 
 At the time of writing this comparison convolutions are still some of the least
 optimized operations in MLX. Despite that, MLX still achieves **~40% higher
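For context on how a UNet throughput number like this can be obtained, here is a minimal timing sketch. It assumes a generic `unet` callable and pre-built MLX inputs; the function name, call signature, and `n_iters` are illustrative assumptions and are not taken from `txt2image.py`.

```python
# Minimal sketch (assumed API, not the repository's benchmark code):
# time repeated UNet calls and report images per second through the UNet.
import time
import mlx.core as mx

def unet_throughput(unet, x, t, cond, n_iters=20):
    # Warm-up call so one-time initialization does not skew the timing.
    mx.eval(unet(x, t, cond))
    start = time.time()
    for _ in range(n_iters):
        # MLX is lazy; mx.eval forces the computation to actually run.
        mx.eval(unet(x, t, cond))
    elapsed = time.time() - start
    # Each call processes x.shape[0] images, so throughput is:
    return n_iters * x.shape[0] / elapsed
```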
@@ -93,3 +94,7 @@ The above experiments were made on an M2 Ultra with PyTorch version 2.1,
 diffusers version 0.21.4 and transformers version 4.33.3. For the generation we
 used classifier free guidance which means that the above batch sizes result
 double the images processed by the UNet.
+
+Note that the above table means that it takes about 90 seconds to fully
+generate 16 images with MLX and 50 diffusion steps with classifier free
+guidance and about 120 for PyTorch.
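The point about classifier-free guidance doubling the images processed by the UNet follows from how guidance is typically implemented: each diffusion step evaluates the UNet on both the conditional and the unconditional prompt. The sketch below illustrates that idea only; the names `cfg_unet_step` and `guidance_scale` and the UNet call signature are assumptions, not the code in this repository.

```python
# Illustrative sketch (assumed UNet signature, not the repository's code):
# classifier-free guidance runs the UNet on a doubled batch each step.
import mlx.core as mx

def cfg_unet_step(unet, x_t, t, cond, uncond, guidance_scale=7.5):
    # Stack the latents so one UNet call covers both branches: a batch of N
    # images becomes 2 * N samples through the UNet.
    x_in = mx.concatenate([x_t, x_t], axis=0)
    c_in = mx.concatenate([uncond, cond], axis=0)
    eps = unet(x_in, t, c_in)
    eps_uncond, eps_cond = mx.split(eps, 2, axis=0)
    # Guidance pushes the prediction toward the conditional branch.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```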