| File | Last commit | Date |
| --- | --- | --- |
| `examples/` | fix deepseek sharding (#1242) | 2025-02-03 16:59:50 -08:00 |
| `models/` | fix deepseek sharding (#1242) | 2025-02-03 16:59:50 -08:00 |
| `tuner/` | Fix no validation in lora (#1241) | 2025-02-03 09:55:24 -08:00 |
| `__init__.py` | Fix detokenizer space match for quote (#1072) | 2024-10-27 15:06:07 -07:00 |
| `_version.py` | deepseek v3 model with pipeline parallelism (#1191) | 2025-01-09 15:55:53 -08:00 |
| `cache_prompt.py` | fix encoding with special tokens + chat template (#1189) | 2025-01-03 10:50:59 -08:00 |
| `chat.py` | fix encoding with special tokens + chat template (#1189) | 2025-01-03 10:50:59 -08:00 |
| `convert.py` | override dtype with quant (#1062) | 2024-10-22 09:56:45 -07:00 |
| `evaluate.py` | Add support for fewshot and apply chat template lm_eval functionality (#1180) | 2025-01-06 07:58:43 -08:00 |
| `fuse.py` | Adding full finetuning (#903) | 2024-09-29 17:12:47 -07:00 |
| `generate.py` | Add a speculative decoding generator (#1155) | 2025-01-10 15:27:08 -08:00 |
| `gguf.py` | Fix export to gguf (#993) | 2024-09-20 13:33:45 -07:00 |
| `LORA.md` | Custom local dataset features (#1085) | 2025-01-13 10:01:18 -08:00 |
| `lora.py` | fix(lora): add back store_true default args (#1205) | 2025-01-16 11:15:42 -08:00 |
| `MANAGE.md` | Add model management functionality for local caches (#736) | 2024-05-03 12:20:13 -07:00 |
| `manage.py` | Improvements to mlx_lm.manage (#1178) | 2025-01-01 07:25:57 -08:00 |
| `MERGE.md` | Create executables for generate, lora, server, merge, convert (#682) | 2024-04-16 16:08:49 -07:00 |
| `merge.py` | Create executables for generate, lora, server, merge, convert (#682) | 2024-04-16 16:08:49 -07:00 |
| `py.typed` | Add py.typed to support PEP-561 (type-hinting) (#389) | 2024-01-30 21:17:38 -08:00 |
| `README.md` | feat: move lora into mlx-lm (#337) | 2024-01-23 08:44:37 -08:00 |
| `requirements.txt` | deepseek v3 model with pipeline parallelism (#1191) | 2025-01-09 15:55:53 -08:00 |
| `sample_utils.py` | batched min p and fix spec gen sampling (#1222) | 2025-01-27 15:40:31 -08:00 |
| `SERVER.md` | Fix object property value in mlx_lm.server chat completions response to match OpenAI spec (#1119) | 2024-11-24 16:37:37 -08:00 |
| `server.py` | chore(mlx-lm): support text type content in messages (#1225) | 2025-01-27 17:13:50 -08:00 |
| `tokenizer_utils.py` | Change the eos-token argument for mlx_lm.generate (#1176) | 2025-01-05 22:26:05 -08:00 |
| `UPLOAD.md` | Mlx llm package (#301) | 2024-01-12 10:25:56 -08:00 |
| `utils.py` | only download local shard (#1240) | 2025-02-02 13:58:44 -08:00 |