Generate Text with MLX and 🤗 Hugging Face

This is an example of large language model text generation that can pull models from the Hugging Face Hub.
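
As a quick illustration, the snippet below sketches loading a converted model from the Hub and generating text with the Python API. It assumes `mlx-lm` is installed (e.g. with `pip install mlx-lm`), and the model repository name is only an example of a quantized model hosted on the Hugging Face Hub.

```python
# Minimal sketch of text generation with the mlx_lm Python API.
# The model repo below is illustrative; any compatible model on the Hub works.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")

response = generate(
    model,
    tokenizer,
    prompt="Write a short poem about the ocean.",
    max_tokens=100,
    verbose=True,
)
```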

For more information on this example, see the README in the parent directory.

This package also supports fine-tuning with LoRA or QLoRA. For more information, see the LoRA documentation.
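
As a sketch of the fine-tuning entry point, the command below shows a typical LoRA run. The model name, data path, and iteration count are placeholders; the LoRA documentation covers the full set of options and the expected data format.

```shell
# Illustrative LoRA fine-tuning run; the model, data path, and iteration
# count are placeholders. Using a quantized model corresponds to QLoRA.
python -m mlx_lm.lora \
    --model mlx-community/Mistral-7B-Instruct-v0.2-4bit \
    --train \
    --data ./data \
    --iters 600
```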