mirror of
https://github.com/ml-explore/mlx-examples.git
synced 2025-09-01 12:49:50 +08:00
Fix unsupported ScalarType BFloat16
@@ -29,7 +29,7 @@ Once you've converted the weights to MLX format, you can interact with the
 LLaMA model:
 
 ```
-python llama.py mlx_llama.npz tokenizer.model "hello"
+python llama.py mlx_llama_weights.npz <path_to_tokenizer.model> "hello"
 ```
 
 Run `python llama.py --help` for more details.
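The commit title, "Fix unsupported ScalarType BFloat16," refers to a common PyTorch error: calling `.numpy()` on a `bfloat16` tensor raises `TypeError: Got unsupported ScalarType BFloat16`, because NumPy has no native bfloat16 dtype. The actual fix in this commit is not shown here, but a typical workaround during weight conversion is to upcast before the NumPy conversion. The sketch below is a hypothetical illustration of that pattern, not the commit's code; the helper name `to_numpy` is invented for this example.

```python
import torch

def to_numpy(t: torch.Tensor):
    # NumPy cannot represent bfloat16, so upcast to float32 first;
    # .detach().cpu() ensures the tensor is off-graph and on the host.
    if t.dtype == torch.bfloat16:
        t = t.to(torch.float32)
    return t.detach().cpu().numpy()

# A bfloat16 tensor converts cleanly after the cast.
a = to_numpy(torch.ones(2, 2, dtype=torch.bfloat16))
```

Upcasting to `float32` is lossless for bfloat16 values; converters that want smaller files sometimes cast to `float16` instead, at the cost of range.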