mlx-examples/llms/mlx_lm
Cavit Erginsoy 7ee76a32a4 Add memory estimation tool for MLX language models
This commit introduces a comprehensive memory estimation utility for MLX language models, supporting:
- Dynamic parameter calculation across diverse model architectures
- Handling of quantized and standard models
- Estimation of model weights, KV cache, and overhead memory
- Support for bounded and unbounded KV cache modes
- Flexible configuration via command-line arguments

The new tool provides detailed memory usage insights for different model configurations and generation scenarios.
2025-03-10 03:03:01 +00:00
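For intuition, below is a rough sketch of the kind of arithmetic such an estimator performs. All names, default values, and the 10% overhead factor are illustrative assumptions for this sketch, not the actual interface of estimate_memory.py.

```python
# A minimal sketch of LLM memory estimation: model weights + KV cache
# + runtime overhead. Names and defaults are hypothetical.

def estimate_kv_cache_bytes(
    num_layers: int,
    num_kv_heads: int,
    head_dim: int,
    seq_len: int,
    bytes_per_element: int = 2,  # float16/bfloat16
) -> int:
    # Keys and values are each cached per layer: two tensors of shape
    # (num_kv_heads, seq_len, head_dim) at batch size 1.
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * bytes_per_element


def estimate_weight_bytes(num_params: int, bits_per_weight: float) -> int:
    # Quantized models store weights at a reduced effective bit width
    # (e.g. roughly 4.5 bits/weight for 4-bit quantization once group
    # scales and biases are included).
    return int(num_params * bits_per_weight / 8)


if __name__ == "__main__":
    # Example: a 7B-parameter model at ~4.5 bits/weight, generating
    # with a bounded KV cache of 4096 tokens.
    weights = estimate_weight_bytes(7_000_000_000, 4.5)
    kv = estimate_kv_cache_bytes(num_layers=32, num_kv_heads=8,
                                 head_dim=128, seq_len=4096)
    overhead = int(0.1 * weights)  # assumed 10% activation/runtime overhead
    print(f"weights  ~ {weights / 2**30:.2f} GiB")
    print(f"kv cache ~ {kv / 2**30:.2f} GiB")
    print(f"total    ~ {(weights + kv + overhead) / 2**30:.2f} GiB")
```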
examples Adding multiple optimizers to mlx lm (#1315) 2025-03-05 13:54:54 -08:00
models adding OLMoE architecture (#1321) 2025-03-05 13:46:06 -08:00
tuner adding OLMoE architecture (#1321) 2025-03-05 13:46:06 -08:00
__init__.py Add memory estimation tool for MLX language models 2025-03-10 03:03:01 +00:00
_version.py support kimi + more options in chat mode (#1312) 2025-02-28 11:33:18 -08:00
cache_prompt.py Fix prompt cache for models without chat template (#1250) 2025-02-06 11:10:58 -08:00
chat.py Change DEFAULT_SEED to None for stochastic generation by default (#1323) 2025-03-06 06:49:35 -08:00
convert.py Mixed quant recipes (#1300) 2025-02-26 11:32:36 -08:00
estimate_memory.py Add memory estimation tool for MLX language models 2025-03-10 03:03:01 +00:00
evaluate.py Use max tokens from options in mlx_lm evaluate (#1302) 2025-02-26 15:46:16 -08:00
fuse.py Adding full finetuning (#903) 2024-09-29 17:12:47 -07:00
generate.py Change DEFAULT_SEED to None for stochastic generation by default (#1323) 2025-03-06 06:49:35 -08:00
gguf.py Fix export to gguf (#993) 2024-09-20 13:33:45 -07:00
LORA.md Completion only fine-tuning of instruction models with collections of HF datasets (#1103) 2025-02-09 20:12:34 -08:00
lora.py Adding multiple optimizers to mlx lm (#1315) 2025-03-05 13:54:54 -08:00
MANAGE.md Add model management functionality for local caches (#736) 2024-05-03 12:20:13 -07:00
manage.py fix manage for new transformers (#1304) 2025-02-26 15:44:57 -08:00
MERGE.md Create executables for generate, lora, server, merge, convert (#682) 2024-04-16 16:08:49 -07:00
merge.py Create executables for generate, lora, server, merge, convert (#682) 2024-04-16 16:08:49 -07:00
py.typed Add py.typed to support PEP-561 (type-hinting) (#389) 2024-01-30 21:17:38 -08:00
README.md feat: move lora into mlx-lm (#337) 2024-01-23 08:44:37 -08:00
requirements.txt deepseek v3 model with pipeline parallelism (#1191) 2025-01-09 15:55:53 -08:00
sample_utils.py batched min p and fix spec gen sampling (#1222) 2025-01-27 15:40:31 -08:00
SERVER.md Fix object property value in mlx_lm.server chat completions response to match OpenAI spec (#1119) 2024-11-24 16:37:37 -08:00
server.py chore(mlx-lm): support text type content in messages (#1225) 2025-01-27 17:13:50 -08:00
tokenizer_utils.py Completion only fine-tuning of instruction models with collections of HF datasets (#1103) 2025-02-09 20:12:34 -08:00
UPLOAD.md Mlx llm package (#301) 2024-01-12 10:25:56 -08:00
utils.py support kimi + more options in chat mode (#1312) 2025-02-28 11:33:18 -08:00

Generate Text with MLX and 🤗 Hugging Face

This is an example of large language model text generation that can pull models from the Hugging Face Hub.
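A minimal usage sketch with the package's `load` and `generate` helpers; the model repo below is one example of a compatible MLX model on the Hub, and any other should work the same way.

```python
# Load a model from the Hugging Face Hub and generate text.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")
text = generate(model, tokenizer, prompt="Write a haiku about the sea.", verbose=True)
print(text)
```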

For more information on this example, see the README in the parent directory.

This package also supports fine-tuning with LoRA or QLoRA. For more information, see the LoRA documentation.
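As a quick orientation, a typical LoRA training invocation looks roughly like the following; the model repo, data path, and iteration count are placeholders, and LORA.md documents the full set of options.

```shell
# Sketch of LoRA fine-tuning from the command line. The data directory
# is assumed to contain train.jsonl / valid.jsonl files.
python -m mlx_lm.lora \
    --model mlx-community/Mistral-7B-Instruct-v0.3-4bit \
    --train \
    --data ./my_data \
    --iters 600
```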