Author: Angelos Katharopoulos
Date:   2025-03-24 22:15:37 -07:00
Commit: d06c4cde92
Parent: c4d08de8b3
2 changed files with 2 additions and 2 deletions


@@ -226,7 +226,7 @@ this section assumes you can launch distributed MLX programs using `mlx.launch
 ### Distributed Finetuning
 Distributed finetuning scales very well with FLUX and all one has to do is
-simply to adjust the gradient accumulation and iterations so that the batch
+adjust the gradient accumulation and training iterations so that the batch
 size remains the same. For instance, to replicate the following training
 ```shell
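
The README change above is about keeping the effective batch size (per-node batch size x gradient accumulation x number of nodes) constant when scaling out with `mlx.launch`. A minimal sketch of that adjustment follows; the script name `dreambooth.py`, the `--grad-accumulate` and `--iterations` flags, and the host names are assumptions for illustration, not taken from this commit:

```shell
# Hypothetical single-node run: 1 node x gradient accumulation 8 = effective batch 8.
python dreambooth.py --grad-accumulate 8 --iterations 1200 path/to/dataset

# Hypothetical 4-node run: gradients are now aggregated across 4 nodes per
# optimizer step, so divide the accumulation by 4 (4 x 2 = 8) to keep the
# effective batch size, and leave the iteration count unchanged.
mlx.launch --hosts host1,host2,host3,host4 dreambooth.py \
  --grad-accumulate 2 --iterations 1200 path/to/dataset
```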


@@ -35,7 +35,7 @@ def to_latent_size(image_size):
 if __name__ == "__main__":
     parser = argparse.ArgumentParser(
-        description="Generate images from a textual prompt using stable diffusion"
+        description="Generate images from a textual prompt using FLUX"
     )
     parser.add_argument("--quantize", "-q", action="store_true")
     parser.add_argument("--model", choices=["schnell", "dev"], default="schnell")