mlx-examples/llms/mlx_lm
Sugato Ray 2cd793dd69
feat: add update_config functionality (#531)
* feat: add `update_config` functionality

- sorts the config for better readability
- updates "_name_or_path" key in config with upload_repo
- sets indentation of 4 spaces
- allows adding other key-value pairs via kwargs
- reduces code duplication
- standardizes config-update across mlx-lm

* feat: standardize updating config

Impacts:
- fuse.py
- merge.py

* update formatting

* remove commented out code

* update func: update_config to save_config

- drop kwargs
- rename func as save_config
- incorporate review suggestions

* update func: save_config

- ensure only config-saving functionality
- function does not return config as a dict anymore
- added review suggestions

* fixed formatting

* update formatting instruction in contribution guide

* nits

---------

Co-authored-by: Awni Hannun <awni@apple.com>
2024-03-14 06:36:05 -07:00
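
For reference, a minimal sketch of what such a `save_config` helper could look like, based only on the behavior described in the commit above (sorted keys, 4-space indentation, no return value); the exact name, signature, and placement in utils.py may differ from the upstream code:

```python
# Sketch of a shared config-saving helper as described in the commit above.
# The argument names and module placement are assumptions, not the exact
# upstream implementation.
import json


def save_config(config: dict, config_path: str) -> None:
    """Write a model config to disk with sorted keys and 4-space indentation."""
    # Sort keys so the saved config is easier to read and diff.
    config = dict(sorted(config.items()))

    # Callers (e.g. fuse.py, merge.py) can set keys such as "_name_or_path"
    # before calling; this function only handles writing the file.
    with open(config_path, "w") as f:
        json.dump(config, f, indent=4)
```

Centralizing the write in one helper like this is what lets fuse.py and merge.py drop their duplicated config-writing code.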
examples Some improvements to LoRA (#528) 2024-03-12 20:02:03 -07:00
models Add support for Cohere's Command-R (#565) 2024-03-13 07:03:36 -07:00
tuner add peak_memory info to training callback (#572) 2024-03-13 20:17:10 -07:00
__init__.py Fix import warning (#479) 2024-02-27 08:47:56 -08:00
convert.py Fix import warning (#479) 2024-02-27 08:47:56 -08:00
fuse.py feat: add update_config functionality (#531) 2024-03-14 06:36:05 -07:00
generate.py chore(mlx-lm): add adapter support in generate.py (#494) 2024-02-28 07:49:25 -08:00
LORA.md YAML configuration for mlx_lm.lora (#503) 2024-03-08 07:57:52 -08:00
lora.py LoRA: some minor optimizations (#573) 2024-03-13 20:26:30 -07:00
MERGE.md Support for slerp merging models (#455) 2024-02-19 20:37:15 -08:00
merge.py feat: add update_config functionality (#531) 2024-03-14 06:36:05 -07:00
py.typed Add py.typed to support PEP-561 (type-hinting) (#389) 2024-01-30 21:17:38 -08:00
README.md feat: move lora into mlx-lm (#337) 2024-01-23 08:44:37 -08:00
requirements.txt [mlx-lm] Use sdpa in llama / mistral model (#515) 2024-03-07 17:41:23 -08:00
SERVER.md Prevent llms/mlx_lm from serving the local directory as a webserver (#498) 2024-02-27 19:40:42 -08:00
server.py Refactoring of mlx_lm example (#501) 2024-03-06 06:24:31 -08:00
UPLOAD.md Mlx llm package (#301) 2024-01-12 10:25:56 -08:00
utils.py feat: add update_config functionality (#531) 2024-03-14 06:36:05 -07:00
version.py version (#570) 2024-03-13 10:09:36 -07:00

Generate Text with MLX and 🤗 Hugging Face

This is an example of large language model text generation that can pull models from the Hugging Face Hub.

For more information on this example, see the README in the parent directory.
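
As a quick illustration, here is a hedged sketch of the Python API; the model repository name is only an example, and the exact keyword arguments of `generate()` may vary between versions of the package:

```python
# Minimal text-generation sketch with mlx_lm; the model repo below is just an
# example 4-bit conversion hosted under the mlx-community organization.
from mlx_lm import load, generate

# Pull the model and tokenizer from the Hugging Face Hub (or the local cache).
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")

# Generate a completion for a prompt, printing the output as it is produced.
response = generate(model, tokenizer, prompt="Write a haiku about the ocean.", verbose=True)
```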

This package also supports fine tuning with LoRA or QLoRA. For more information, see the LoRA documentation.
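
For fine tuning, the package provides an `mlx_lm.lora` entry point. A hedged example invocation follows; the model name, data path, and iteration count are placeholders, and the authoritative flag list is in LORA.md:

```shell
# Fine-tune a Hub model with LoRA on a local dataset directory
# (expected to contain train.jsonl / valid.jsonl).
python -m mlx_lm.lora \
    --model mlx-community/Mistral-7B-Instruct-v0.2-4bit \
    --train \
    --data ./data \
    --iters 600
```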