mlx-examples/llms/mlx_lm
Latest commit: Add support for Llama-3.1 (#907) by Alex Cheema (cd8efc7fbc), 2024-07-23 13:21:32 -07:00

* add dynamicNTK scaling rope
* remove unused var
* fix rope base
* llama3.1 fixes
* TODO for rope eval
* vectorise llama3 base freq calculation
* removed the arbitrary 2.0 rope_scale default case
* fix slow llama3.1 generation by evaluating stateless part of DynamicNTKScalingRoPE in init
* nits + format
* use mx.pi
* fix tests and add test for 3.1

Co-authored-by: Prince Canuma <prince.gdt@gmail.com>
Co-authored-by: Awni Hannun <awni@apple.com>
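
The commit above adds Llama 3.1's rescaling of the RoPE base frequencies. As a rough, standalone sketch of the idea (not the exact code in models/llama.py), the inverse frequencies are split by wavelength into three bands: high frequencies are left alone, low frequencies are divided by a fixed scaling factor, and the band in between is smoothly interpolated. The default values below (base 500000, factor 8, original context length 8192) are the published Llama 3.1 settings, assumed here purely for illustration.

```python
import mlx.core as mx


def llama3_scaled_inv_freqs(
    dims: int,
    base: float = 500000.0,      # assumed Llama 3.1 rope theta
    factor: float = 8.0,         # assumed rope_scaling "factor"
    low_freq_factor: float = 1.0,
    high_freq_factor: float = 4.0,
    old_context_len: int = 8192,  # original max position embeddings
) -> mx.array:
    # Standard RoPE inverse frequencies for the even dimensions.
    exponents = mx.arange(0, dims, 2).astype(mx.float32) / dims
    inv_freqs = mx.power(base, -exponents)
    wavelens = 2 * mx.pi / inv_freqs

    low_freq_wavelen = old_context_len / low_freq_factor
    high_freq_wavelen = old_context_len / high_freq_factor

    # Low-frequency (long-wavelength) components are slowed down by `factor`.
    scaled = mx.where(wavelens > low_freq_wavelen, inv_freqs / factor, inv_freqs)

    # Components between the two thresholds are smoothly interpolated
    # between the scaled and unscaled values.
    smooth = (old_context_len / wavelens - low_freq_factor) / (
        high_freq_factor - low_freq_factor
    )
    smoothed = (1 - smooth) * inv_freqs / factor + smooth * inv_freqs
    is_medium = mx.logical_and(
        wavelens >= high_freq_wavelen, wavelens <= low_freq_wavelen
    )
    return mx.where(is_medium, smoothed, scaled)
```

Because this rescaling depends only on the configuration, not on the input, it can be computed once at construction time, which is what the "evaluating stateless part of DynamicNTKScalingRoPE in init" item in the commit refers to.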
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| `examples` | Example of response generation with optional arguments (#853) | 2024-07-09 06:49:59 -07:00 |
| `models` | Add support for Llama-3.1 (#907) | 2024-07-23 13:21:32 -07:00 |
| `tuner` | Add GPT-neox model (#863) | 2024-07-11 06:13:17 -07:00 |
| `__init__.py` | mlx_lm: Add Streaming Capability to Generate Function (#807) | 2024-06-03 09:04:39 -07:00 |
| `convert.py` | Create executables for generate, lora, server, merge, convert (#682) | 2024-04-16 16:08:49 -07:00 |
| `fuse.py` | Block sparse MM MoEs (#782) | 2024-05-21 15:58:08 -07:00 |
| `generate.py` | mlx_lm: Add Streaming Capability to Generate Function (#807) | 2024-06-03 09:04:39 -07:00 |
| `gguf.py` | fix(mlx-lm): type hints in gguf.py (#621) | 2024-03-26 07:56:01 -07:00 |
| `LORA.md` | Configuration-based use of HF hub-hosted datasets for training (#701) | 2024-06-26 10:20:50 -07:00 |
| `lora.py` | Pass use_dora parameter to linear_to_lora_layers (#885) | 2024-07-11 14:34:34 -07:00 |
| `MANAGE.md` | Add model management functionality for local caches (#736) | 2024-05-03 12:20:13 -07:00 |
| `manage.py` | Add model management functionality for local caches (#736) | 2024-05-03 12:20:13 -07:00 |
| `MERGE.md` | Create executables for generate, lora, server, merge, convert (#682) | 2024-04-16 16:08:49 -07:00 |
| `merge.py` | Create executables for generate, lora, server, merge, convert (#682) | 2024-04-16 16:08:49 -07:00 |
| `py.typed` | Add py.typed to support PEP-561 (type-hinting) (#389) | 2024-01-30 21:17:38 -08:00 |
| `README.md` | feat: move lora into mlx-lm (#337) | 2024-01-23 08:44:37 -08:00 |
| `requirements.txt` | Example of response generation with optional arguments (#853) | 2024-07-09 06:49:59 -07:00 |
| `sample_utils.py` | Use async eval (#670) | 2024-04-11 13:18:23 -07:00 |
| `SERVER.md` | Logprobs info to completion API (#806) | 2024-06-23 10:35:13 -07:00 |
| `server.py` | keep the server in a valid state (#889) | 2024-07-15 18:35:36 -07:00 |
| `tokenizer_utils.py` | fix yi (#852) | 2024-06-27 06:38:19 -07:00 |
| `UPLOAD.md` | Mlx llm package (#301) | 2024-01-12 10:25:56 -08:00 |
| `utils.py` | Add recurrent gemma (#856) | 2024-07-07 12:10:04 -07:00 |
| `version.py` | Configuration-based use of HF hub-hosted datasets for training (#701) | 2024-06-26 10:20:50 -07:00 |

Generate Text with MLX and 🤗 Hugging Face

This is an example of large language model text generation that can pull models from the Hugging Face Hub.
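
For example, a minimal Python sketch of pulling a model from the Hub and generating a completion (the repo name below is only an illustrative choice from the mlx-community organization):

```python
from mlx_lm import load, generate

# Downloads the model from the Hugging Face Hub on first use,
# then reuses the local cache on subsequent runs.
model, tokenizer = load("mlx-community/Meta-Llama-3.1-8B-Instruct-4bit")

# Returns the generated text; verbose=True also prints the prompt
# and streams the generation as it is produced.
text = generate(model, tokenizer, prompt="Write a haiku about the ocean.", verbose=True)
print(text)
```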

For more information on this example, see the README in the parent directory.

This package also supports fine-tuning with LoRA or QLoRA. For more information, see the LoRA documentation.
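
As a hedged sketch of how a trained adapter is reused (LORA.md documents the actual training commands and options), the adapter produced by a fine-tuning run can be applied when loading the model. This assumes the adapter_path keyword of load and uses "./adapters" as a placeholder for wherever the adapter files were written:

```python
from mlx_lm import load, generate

# Load the base model and apply LoRA adapter weights from a fine-tuning run.
model, tokenizer = load(
    "mlx-community/Meta-Llama-3.1-8B-Instruct-4bit",
    adapter_path="./adapters",  # placeholder path to the saved adapters
)

print(generate(model, tokenizer, prompt="Summarize the plot of Hamlet."))
```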