mlx-examples/llms/mlx_lm
Latest commit: Update model card describe (#654) by madroid (5079af62db), 2024-05-02 21:22:04 -07:00

* Update model card describe
  - Add full link jump
  - Add the address of the model uploader's Hugging Face homepage
* Add user_info to reduce whoami calls
* Remove the -U argument
* remove HF user info
* run pre-commit
| Name | Last commit | Date |
| --- | --- | --- |
| `examples` | Update lora_config.yaml (#735) | 2024-04-28 10:24:34 -07:00 |
| `models` | Fixes Typo in Starcoder2 (#740) | 2024-04-29 13:14:45 -07:00 |
| `tuner` | Add support for OpenELM (#719) | 2024-04-25 16:49:28 -07:00 |
| `__init__.py` | Fix import warning (#479) | 2024-02-27 08:47:56 -08:00 |
| `convert.py` | Create executables for generate, lora, server, merge, convert (#682) | 2024-04-16 16:08:49 -07:00 |
| `fuse.py` | Save lora config (#636) | 2024-04-02 13:52:53 -07:00 |
| `generate.py` | Create executables for generate, lora, server, merge, convert (#682) | 2024-04-16 16:08:49 -07:00 |
| `gguf.py` | fix(mlx-lm): type hints in gguf.py (#621) | 2024-03-26 07:56:01 -07:00 |
| `LORA.md` | MiniCPM implementation (#685) | 2024-04-25 15:29:28 -07:00 |
| `lora.py` | Couple fixes for LoRA (#711) | 2024-04-25 14:16:13 -07:00 |
| `MERGE.md` | Create executables for generate, lora, server, merge, convert (#682) | 2024-04-16 16:08:49 -07:00 |
| `merge.py` | Create executables for generate, lora, server, merge, convert (#682) | 2024-04-16 16:08:49 -07:00 |
| `py.typed` | Add py.typed to support PEP-561 (type-hinting) (#389) | 2024-01-30 21:17:38 -08:00 |
| `README.md` | feat: move lora into mlx-lm (#337) | 2024-01-23 08:44:37 -08:00 |
| `requirements.txt` | Quantize embedding / Update quantize API (#680) | 2024-04-18 18:16:10 -07:00 |
| `sample_utils.py` | Use async eval (#670) | 2024-04-11 13:18:23 -07:00 |
| `SERVER.md` | Validate server params & fix logit bias bug (#731) | 2024-04-30 07:27:40 -07:00 |
| `server.py` | Validate server params & fix logit bias bug (#731) | 2024-04-30 07:27:40 -07:00 |
| `tokenizer_utils.py` | Quantize embedding / Update quantize API (#680) | 2024-04-18 18:16:10 -07:00 |
| `UPLOAD.md` | Mlx llm package (#301) | 2024-01-12 10:25:56 -08:00 |
| `utils.py` | Update model card describe (#654) | 2024-05-02 21:22:04 -07:00 |
| `version.py` | MiniCPM implementation (#685) | 2024-04-25 15:29:28 -07:00 |

Generate Text with MLX and 🤗 Hugging Face

This is an example of large language model text generation that can pull models from the Hugging Face Hub.
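
As a minimal sketch of what that looks like in Python, the snippet below loads a model from the Hub with the package's `load` helper and produces a completion with `generate`; the model repository name is only a placeholder example, and keyword arguments may differ slightly between mlx-lm versions.

```python
# Minimal sketch: download a model from the Hugging Face Hub and generate text.
# The repo name below is a placeholder example; substitute any MLX-compatible model.
from mlx_lm import load, generate

# Fetches (or reuses a cached copy of) the model weights and tokenizer.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")

# Produce a short completion for a prompt.
text = generate(
    model,
    tokenizer,
    prompt="Write a haiku about the ocean.",
    max_tokens=100,
)
print(text)
```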

For more information on this example, see the README in the parent directory.

This package also supports fine-tuning with LoRA or QLoRA. For more information, see the LoRA documentation, and the sketch below.
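
A LoRA run is typically started from the command-line entry point installed with the package; the invocation below is only a sketch with placeholder paths, and the authoritative list of flags is in LORA.md.

```shell
# Sketch of a LoRA fine-tuning run; <path-or-hf-repo> and <data-dir> are placeholders.
# Consult LORA.md for the full and current set of options.
mlx_lm.lora \
    --model <path-or-hf-repo> \
    --train \
    --data <data-dir> \
    --iters 600
```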