mlx-examples/llms/mlx_lm

| Path | Last commit | Date |
| --- | --- | --- |
| examples/ | Configuration-based use of HF hub-hosted datasets for training (#701) | 2024-06-26 10:20:50 -07:00 |
| models/ | Fix mypy errors with models/{qwen2,qwen2_moe,startcoder2}.py (#835) | 2024-06-14 09:44:50 -07:00 |
| tuner/ | Configuration-based use of HF hub-hosted datasets for training (#701) | 2024-06-26 10:20:50 -07:00 |
| __init__.py | mlx_lm: Add Streaming Capability to Generate Function (#807) | 2024-06-03 09:04:39 -07:00 |
| convert.py | Create executables for generate, lora, server, merge, convert (#682) | 2024-04-16 16:08:49 -07:00 |
| fuse.py | Block sparse MM MoEs (#782) | 2024-05-21 15:58:08 -07:00 |
| generate.py | mlx_lm: Add Streaming Capability to Generate Function (#807) | 2024-06-03 09:04:39 -07:00 |
| gguf.py | fix(mlx-lm): type hints in gguf.py (#621) | 2024-03-26 07:56:01 -07:00 |
| LORA.md | Configuration-based use of HF hub-hosted datasets for training (#701) | 2024-06-26 10:20:50 -07:00 |
| lora.py | Fixing "NameError: name 'resume_adapter_file' is not defined" (#817) | 2024-06-05 10:07:31 -07:00 |
| MANAGE.md | Add model management functionality for local caches (#736) | 2024-05-03 12:20:13 -07:00 |
| manage.py | Add model management functionality for local caches (#736) | 2024-05-03 12:20:13 -07:00 |
| MERGE.md | Create executables for generate, lora, server, merge, convert (#682) | 2024-04-16 16:08:49 -07:00 |
| merge.py | Create executables for generate, lora, server, merge, convert (#682) | 2024-04-16 16:08:49 -07:00 |
| py.typed | Add py.typed to support PEP-561 (type-hinting) (#389) | 2024-01-30 21:17:38 -08:00 |
| README.md | feat: move lora into mlx-lm (#337) | 2024-01-23 08:44:37 -08:00 |
| requirements.txt | Port of phi3small (#794) | 2024-05-31 12:54:14 -07:00 |
| sample_utils.py | Use async eval (#670) | 2024-04-11 13:18:23 -07:00 |
| SERVER.md | Logprobs info to completion API (#806) | 2024-06-23 10:35:13 -07:00 |
| server.py | Logprobs info to completion API (#806) | 2024-06-23 10:35:13 -07:00 |
| tokenizer_utils.py | Kv cache (#643) | 2024-05-08 08:18:13 -07:00 |
| UPLOAD.md | Mlx llm package (#301) | 2024-01-12 10:25:56 -08:00 |
| utils.py | Logprobs info to completion API (#806) | 2024-06-23 10:35:13 -07:00 |
| version.py | Configuration-based use of HF hub-hosted datasets for training (#701) | 2024-06-26 10:20:50 -07:00 |

Generate Text with MLX and 🤗 Hugging Face

This is an example of large language model text generation that can pull models from the Hugging Face Hub.
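
As a quick illustration, the package's `load` and `generate` helpers can be used from Python. The sketch below is only an example: the model repository named here is just one converted model hosted on the Hub, and the prompt and generation settings are arbitrary.

```python
# Minimal text-generation sketch with mlx_lm.
# The model repository below is only an example; other converted models
# on the Hugging Face Hub (e.g. from the mlx-community organization) work the same way.
from mlx_lm import load, generate

# Downloads the model from the Hugging Face Hub on first use and caches it locally.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")

response = generate(
    model,
    tokenizer,
    prompt="Write a haiku about the ocean.",
    max_tokens=100,
    verbose=True,  # print the prompt, streamed output, and timing info
)
print(response)
```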

For more information on this example, see the README in the parent directory.

This package also supports fine-tuning with LoRA or QLoRA. For more information, see the LoRA documentation in LORA.md.
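
Once adapters have been trained as described in LORA.md, they can typically be applied at generation time as well. The sketch below assumes that `load` accepts an `adapter_path` argument pointing at the directory of trained adapter weights (check LORA.md for the exact interface) and reuses the example model from above.

```python
# Sketch only: apply LoRA adapters produced by training at load time.
# Assumptions: `load` supports an `adapter_path` argument, and ./adapters
# contains the trained adapter weights and config from a LoRA run.
from mlx_lm import load, generate

model, tokenizer = load(
    "mlx-community/Mistral-7B-Instruct-v0.2-4bit",
    adapter_path="./adapters",
)
print(generate(model, tokenizer, prompt="Write a haiku about the ocean.", max_tokens=100))
```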