mlx-examples/llms/mlx_lm
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| `examples` | Configurable LR schedulers (#604) | 2024-03-29 13:41:10 -07:00 |
| `models` | Add support for phi-3 (#712) | 2024-04-23 09:20:00 -07:00 |
| `tuner` | Add support for phi-3 (#712) | 2024-04-23 09:20:00 -07:00 |
| `__init__.py` | Fix import warning (#479) | 2024-02-27 08:47:56 -08:00 |
| `convert.py` | Create executables for generate, lora, server, merge, convert (#682) | 2024-04-16 16:08:49 -07:00 |
| `fuse.py` | Save lora config (#636) | 2024-04-02 13:52:53 -07:00 |
| `generate.py` | Create executables for generate, lora, server, merge, convert (#682) | 2024-04-16 16:08:49 -07:00 |
| `gguf.py` | fix(mlx-lm): type hints in gguf.py (#621) | 2024-03-26 07:56:01 -07:00 |
| `LORA.md` | Create executables for generate, lora, server, merge, convert (#682) | 2024-04-16 16:08:49 -07:00 |
| `lora.py` | Couple fixes for LoRA (#711) | 2024-04-25 14:16:13 -07:00 |
| `MERGE.md` | Create executables for generate, lora, server, merge, convert (#682) | 2024-04-16 16:08:49 -07:00 |
| `merge.py` | Create executables for generate, lora, server, merge, convert (#682) | 2024-04-16 16:08:49 -07:00 |
| `py.typed` | Add py.typed to support PEP-561 (type-hinting) (#389) | 2024-01-30 21:17:38 -08:00 |
| `README.md` | feat: move lora into mlx-lm (#337) | 2024-01-23 08:44:37 -08:00 |
| `requirements.txt` | Quantize embedding / Update quantize API (#680) | 2024-04-18 18:16:10 -07:00 |
| `sample_utils.py` | Use async eval (#670) | 2024-04-11 13:18:23 -07:00 |
| `SERVER.md` | Create executables for generate, lora, server, merge, convert (#682) | 2024-04-16 16:08:49 -07:00 |
| `server.py` | Use CORS headers for streaming for MLX Server (#716) | 2024-04-25 07:26:04 -07:00 |
| `tokenizer_utils.py` | Quantize embedding / Update quantize API (#680) | 2024-04-18 18:16:10 -07:00 |
| `UPLOAD.md` | Mlx llm package (#301) | 2024-01-12 10:25:56 -08:00 |
| `utils.py` | Couple fixes for LoRA (#711) | 2024-04-25 14:16:13 -07:00 |
| `version.py` | Quantize embedding / Update quantize API (#680) | 2024-04-18 18:16:10 -07:00 |

Generate Text with MLX and 🤗 Hugging Face

This is an example of large language model text generation that can pull models from the Hugging Face Hub.
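
As a quick illustration of that flow, the sketch below loads a model from the Hub and generates a completion. It assumes the `load` and `generate` helpers exported by `mlx_lm`; the model id is only an example, and any compatible Hugging Face or local model path should work.

```python
# A minimal sketch, assuming the load()/generate() helpers exported by mlx_lm;
# the model id is only an example.
from mlx_lm import load, generate

# Fetch the model and tokenizer from the Hugging Face Hub (or the local cache).
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")

# Produce a completion for the prompt and print it.
text = generate(model, tokenizer, prompt="Write a haiku about the ocean.", max_tokens=100)
print(text)
```

The same functionality is also exposed as a command-line entry point for generation (see `generate.py` above and the parent README).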

For more information on this example, see the README in the parent directory.

This package also supports fine-tuning with LoRA or QLoRA. For more information, see the LoRA documentation (LORA.md in this directory).
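
As a rough, version-dependent sketch (the authoritative options live in the LoRA documentation and `lora.py`), a fine-tuning run can be launched by invoking the `mlx_lm.lora` module; the model id, data path, and iteration count below are placeholders.

```python
# Rough sketch of launching a LoRA fine-tuning run by shelling out to the
# mlx_lm.lora entry point. The flags shown (--model, --data, --train, --iters)
# and their values are illustrative; consult the LoRA documentation for the
# options supported by your version of the package.
import subprocess

subprocess.run(
    [
        "python", "-m", "mlx_lm.lora",
        "--model", "mistralai/Mistral-7B-v0.1",  # placeholder Hugging Face model id
        "--data", "./data",                      # directory with train/valid JSONL files
        "--train",                               # train rather than run in test-only mode
        "--iters", "600",                        # placeholder number of training iterations
    ],
    check=True,  # surface a non-zero exit code as a CalledProcessError
)
```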