| Name | Last commit message | Last commit date |
| --- | --- | --- |
| examples | More cache improvements (#1015) | 2024-10-07 20:45:51 -07:00 |
| models | Tokenizer updates + tests (#1024) | 2024-10-14 10:48:46 -07:00 |
| tuner | Feature: QDoRA (#891) | 2024-09-30 08:01:11 -07:00 |
| __init__.py | Make sure to import the correct "version" module when installing mlx_whisper and mlx_lm from local source code. (#969) | 2024-09-03 13:16:21 -07:00 |
| _version.py | More cache improvements (#1015) | 2024-10-07 20:45:51 -07:00 |
| cache_prompt.py | More cache improvements (#1015) | 2024-10-07 20:45:51 -07:00 |
| chat.py | More cache improvements (#1015) | 2024-10-07 20:45:51 -07:00 |
| convert.py | Create executables for generate, lora, server, merge, convert (#682) | 2024-04-16 16:08:49 -07:00 |
| fuse.py | Adding full finetuning (#903) | 2024-09-29 17:12:47 -07:00 |
| generate.py | More cache improvements (#1015) | 2024-10-07 20:45:51 -07:00 |
| gguf.py | Fix export to gguf (#993) | 2024-09-20 13:33:45 -07:00 |
| LORA.md | LoRA: Support HuggingFace dataset via data parameter (#996) | 2024-09-30 07:36:21 -07:00 |
| lora.py | LoRA: Support HuggingFace dataset via data parameter (#996) | 2024-09-30 07:36:21 -07:00 |
| MANAGE.md | Add model management functionality for local caches (#736) | 2024-05-03 12:20:13 -07:00 |
| manage.py | Add model management functionality for local caches (#736) | 2024-05-03 12:20:13 -07:00 |
| MERGE.md | Create executables for generate, lora, server, merge, convert (#682) | 2024-04-16 16:08:49 -07:00 |
| merge.py | Create executables for generate, lora, server, merge, convert (#682) | 2024-04-16 16:08:49 -07:00 |
| py.typed | Add py.typed to support PEP-561 (type-hinting) (#389) | 2024-01-30 21:17:38 -08:00 |
| README.md | feat: move lora into mlx-lm (#337) | 2024-01-23 08:44:37 -08:00 |
| requirements.txt | Use fast rope (#945) | 2024-08-23 13:18:51 -07:00 |
| sample_utils.py | Min P implementation (#926) | 2024-08-15 15:45:02 -07:00 |
| SERVER.md | Prompt caching in mlx_lm.server (#1026) | 2024-10-14 10:57:22 -07:00 |
| server.py | Prompt caching in mlx_lm.server (#1026) | 2024-10-14 10:57:22 -07:00 |
| tokenizer_utils.py | Tokenizer updates + tests (#1024) | 2024-10-14 10:48:46 -07:00 |
| UPLOAD.md | Mlx llm package (#301) | 2024-01-12 10:25:56 -08:00 |
| utils.py | Make llm async eval less brittle (#1040) | 2024-10-14 10:25:24 -07:00 |