mirror of
https://github.com/ml-explore/mlx-examples.git
synced 2025-09-01 12:49:50 +08:00
comments + readme updates
@@ -1,4 +1,4 @@
-## Generate Text in MLX
+## Generate Text with MLX and Hugging Face
 
 This is an example of Llama style large language model text generation that can
 pull models from the Hugging Face Hub.
@@ -17,6 +17,14 @@ pip install -r requirements.txt
 python generate.py --model <model_path> --prompt "hello"
 ```
+
+For example:
+
+```
+python generate.py --model mistralai/Mistral-7B-v0.1 --prompt "hello"
+```
+
+will download the Mistral 7B model and generate text using the given prompt.
 
 The `<model_path>` should be either a path to a local directory or a Hugging
 Face repo with weights stored in `safetensors` format. If you use a repo from
 the Hugging Face hub, then the model will be downloaded and cached the first
@@ -27,17 +35,22 @@ Run `python generate.py --help` to see all the options.
 
 ### Models
 
-The example supports Hugging Face format Llama-style models. If the
-model you want to convert is not supported, file an
+The example supports Hugging Face format Mistral and Llama-style models. If the
+model you want to run is not supported, file an
 [issue](https://github.com/ml-explore/mlx-examples/issues/new) or better yet,
 submit a pull request.
 
-Here is a list of a few Hugging Face models which work with this example:
+Here is a list of a few Hugging Face model repos which work with this example:
 
-- meta-llama/Llama-2-7b-hf
-- mistralai/Mistral-7B-v0.1
-- TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
+- [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
+- [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf)
+- [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T)
+
+Most
+[Mistral](https://huggingface.co/models?library=transformers,safetensors&other=mistral&sort=trending)
+and
+[Llama](https://huggingface.co/models?library=transformers,safetensors&other=llama&sort=trending)
+style models should work out of the box.
 
 ### Convert new models
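
As a rough illustration of the "Mistral and Llama-style" support note above, a name-based check might look like the sketch below. This is a toy heuristic only: the constant, function, and matching rule are invented here, and the example itself would decide support from the model's configuration and weights, not from its repo name.

```python
# Toy heuristic only: flag repo ids whose names mention a supported family.
# SUPPORTED_FAMILIES and looks_supported are illustrative, not the example's API.
SUPPORTED_FAMILIES = ("llama", "mistral")


def looks_supported(repo_id: str) -> bool:
    name = repo_id.lower()
    return any(family in name for family in SUPPORTED_FAMILIES)


print(looks_supported("mistralai/Mistral-7B-v0.1"))  # → True
print(looks_supported("openai-community/gpt2"))      # → False
```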
@@ -77,9 +77,11 @@ def upload_to_hub(path: str, name: str):
 
     api = HfApi()
 
+    repo_id = f"mlx-community/{name}"
+    api.create_repo(repo_id=repo_id, exist_ok=True)
     api.upload_folder(
         folder_path=path,
-        repo_id=f"mlx-community/{name}",
+        repo_id=repo_id,
         repo_type="model",
         multi_commits=True,
         multi_commits_verbose=True,
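
The diff above hoists the repo id into a variable so that `create_repo` (with `exist_ok=True`, making the call idempotent) and `upload_folder` agree on the same id by construction. A minimal sketch of that pattern, with a `FakeApi` stub standing in for `huggingface_hub.HfApi` so it runs offline:

```python
class FakeApi:
    """Stand-in for huggingface_hub.HfApi, recording calls for illustration."""

    def __init__(self):
        self.calls = []

    def create_repo(self, repo_id, exist_ok=False):
        self.calls.append(("create_repo", repo_id))

    def upload_folder(self, folder_path, repo_id, repo_type):
        self.calls.append(("upload_folder", repo_id))


def upload_to_hub(api, path: str, name: str) -> str:
    # Compute the repo id once, then reuse it everywhere: create the repo
    # if it does not exist yet, and upload the folder into the same id.
    repo_id = f"mlx-community/{name}"
    api.create_repo(repo_id=repo_id, exist_ok=True)
    api.upload_folder(folder_path=path, repo_id=repo_id, repo_type="model")
    return repo_id


api = FakeApi()
print(upload_to_hub(api, "weights/", "My-Model-4bit"))  # → mlx-community/My-Model-4bit
```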