mlx-examples/llms/mlx_lm
JosefAlbers · 10853b57d9 · Add model_config parameter to load() and load_model() (#770) · 2024-05-10 10:13:34 -07:00
* Add `model_config` parameter to `load()` and `load_model()`

For easy editing of the loaded model configuration (e.g., for changing RoPE theta or scaling of Phi-3 model)

Example:

```python
from mlx_lm import load, generate

model, tokenizer = load(
    "mlx-community/Phi-3-mini-4k-instruct-4bit-no-q-embed",
    model_config={"rope_theta": 50000.0},  # override the config before loading
)
# The prompt and token budget here are illustrative placeholders.
response = generate(model, tokenizer, prompt="Why is the sky blue?", max_tokens=256)
```

* Possible bug (default_loss)

* Revert "Possible bug (default_loss)"

This reverts commit 70a55ace18.

* Fix default_loss for lora

* Move `load_model`'s new optional `model_config` argument to the end so that `fetch_from_hub()`'s existing positional call, `model = load_model(model_path, lazy)`, keeps working (see the sketch below), and fix indentation (`black` hook)
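Why the argument order matters: below is a minimal sketch of the parameter order implied by this change. It is an assumption drawn from the description above, not the verified `mlx_lm` source, and the real signature may differ in defaults and additional arguments.

```python
# Sketch only: putting model_config last means existing positional calls
# such as load_model(model_path, lazy) continue to work unchanged.
def load_model(model_path, lazy=False, model_config=None):
    model_config = model_config or {}
    ...  # load weights, then apply model_config overrides (e.g. rope_theta)
```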

Generate Text with MLX and 🤗 Hugging Face

This is an example of large language model text generation that can pull models from the Hugging Face Hub.
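For instance, a minimal generation script might look like the following. The prompt and token limit are illustrative placeholders; the model repo is the one used in the commit above.

```python
from mlx_lm import load, generate

# Pulls the model from the Hugging Face Hub on first use and caches it locally.
model, tokenizer = load("mlx-community/Phi-3-mini-4k-instruct-4bit-no-q-embed")

response = generate(model, tokenizer, prompt="Why is the sky blue?", max_tokens=100)
print(response)
```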

For more information on this example, see the README in the parent directory.

This package also supports fine-tuning with LoRA or QLoRA. For more information, see the LoRA documentation.