Param Thakkar
4c9f9f9be7
Made llama and mistral files mypy compatible (#1359)
...
* Made mypy compatible
* reformatted
* Added more fixes
* Added fixes to speculative-decoding
* Fixes
* fix CircleCI
* revert some stuff
---------
Co-authored-by: Awni Hannun <awni@apple.com>
2025-04-23 14:23:46 -07:00
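Note: a mypy-compatibility pass like the one above mostly means adding explicit type annotations. A minimal sketch of the flavor of change, using a hypothetical `sample` helper rather than code from the PR:

```python
from typing import Optional

import mlx.core as mx


def sample(logits: mx.array, temperature: Optional[float] = None) -> mx.array:
    # Annotated arguments and return types are what mypy checks;
    # Optional[...] makes the implicit None default explicit.
    if temperature is None or temperature == 0:
        return mx.argmax(logits, axis=-1)
    return mx.random.categorical(logits * (1 / temperature))
```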
dmdaksh
7d7e236061
- Removed unused Python imports (#683)
...
- bert/model.py:2: dataclass
- bert/model.py:8: numpy
- bert/model.py:10: tree_unflatten
- cifar/resnet.py:6: Any
- clip/model.py:9: Union
- clip/model.py:15: tree_flatten
- gcn/main.py:8: download_cora
- gcn/main.py:9: cross_entropy
- llms/gguf_llm/models.py:9: numpy
- llms/gguf_llm/models.py:12: tree_flatten, tree_unflatten
- llms/mixtral/mixtral.py:12: tree_map
- llms/mlx_lm/models/dbrx.py:2: Dict, Union
- llms/mlx_lm/tuner/trainer.py:5: partial
- llms/speculative_decoding/decoder.py:1: dataclass, field
- llms/speculative_decoding/decoder.py:2: Optional
- llms/speculative_decoding/decoder.py:5: mlx.nn
- llms/speculative_decoding/decoder.py:6: numpy
- llms/speculative_decoding/main.py:2: glob
- llms/speculative_decoding/main.py:3: json
- llms/speculative_decoding/main.py:5: Path
- llms/speculative_decoding/main.py:8: mlx.nn
- llms/speculative_decoding/model.py:6: tree_unflatten
- llms/speculative_decoding/model.py:7: AutoTokenizer
- llms/tests/test_lora.py:13: yaml_loader
- lora/lora.py:14: tree_unflatten
- lora/models.py:3: glob
- lora/models.py:11: numpy
- speechcommands/kwt.py:1: Any
- speechcommands/main.py:7: mlx.data
- stable_diffusion/stable_diffusion/model_io.py:4: partial
- whisper/benchmark.py:5: sys
- whisper/test.py:5: subprocess
- whisper/whisper/audio.py:6: Optional
- whisper/whisper/decoding.py:8: mlx.nn
2024-04-16 07:50:32 -07:00
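Note: removals like the list above are easy to find mechanically (flake8's F401 check reports exactly this). A rough single-file sketch of the idea in pure Python, with no handling of __all__, re-exports, or string annotations:

```python
import ast
import sys

# Crude unused-import finder: flags imported names that never appear as a
# Name node in the module. flake8's F401 is the robust way to do this.
tree = ast.parse(open(sys.argv[1]).read())
imported, used = {}, set()
for node in ast.walk(tree):
    if isinstance(node, ast.Import):
        for a in node.names:
            imported[(a.asname or a.name).split(".")[0]] = node.lineno
    elif isinstance(node, ast.ImportFrom):
        for a in node.names:
            imported[a.asname or a.name] = node.lineno
    elif isinstance(node, ast.Name):
        used.add(node.id)
for name, lineno in sorted(imported.items(), key=lambda kv: kv[1]):
    if name not in used:
        print(f"{sys.argv[1]}:{lineno}: {name}")
```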
Awni Hannun
b8a348c1b8
Switch to fast RMS/LN Norm (#603)
...
* use nn.RMSNorm, use sdpa, cleanup
* bump mlx versions
* minor update
* use fast layer norm
* version bump
* update requirement for whisper
* update requirement for gguf
2024-03-23 07:13:51 -07:00
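Note: the switch here is from hand-rolled normalization code to MLX's built-in modules, which dispatch to fused kernels. A minimal sketch of the "after" state (nn.RMSNorm and mx.fast.rms_norm are real MLX APIs; the shapes are illustrative):

```python
import mlx.core as mx
import mlx.nn as nn

x = mx.random.normal((2, 8, 512))

# nn.RMSNorm replaces a hand-written mean-square/rsqrt implementation
# and runs as a single fused kernel.
norm = nn.RMSNorm(512, eps=1e-5)
y = norm(x)

# The fused primitive can also be called directly with an explicit weight.
y2 = mx.fast.rms_norm(x, mx.ones((512,)), eps=1e-5)
```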
Anchen
1415595409
chore(lora): support mixtral in lora example (#343)
2024-01-20 06:07:45 -08:00
Anchen
7cfda327fd
fix(lora): tokenizer returning incompatible mx array (#271)
...
* fix(lora): tokenizer returning encoding incompatible with mx array
* add readme nit
---------
Co-authored-by: Awni Hannun <awni@apple.com>
2024-01-09 19:46:38 -08:00
Awni Hannun
7b258f33ac
Move lora example to use the same model format / conversion as hf_llm (#252)
...
* hugging face the lora example to allow more models
* fixes
* comments
* more readme nits
* fusion + works better for qlora
* nits
* comments
2024-01-09 11:14:52 -08:00
Awni Hannun
37b41cec60
Qlora (#219)
...
qlora
2024-01-04 21:05:59 -08:00
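Note: QLoRA here means freezing the base model's linear layers in quantized form and training only small low-rank adapters on top. A sketch of the pattern under that assumption, not the repo's exact LoRALinear (freezing the wrapped layer is omitted):

```python
import math

import mlx.core as mx
import mlx.nn as nn


class LoRALinear(nn.Module):
    # Wraps a frozen (typically quantized) linear layer and adds a trainable
    # low-rank update: y = linear(x) + (x A) B, with only A and B trained.
    def __init__(self, linear: nn.Module, in_dims: int, out_dims: int, rank: int = 8):
        super().__init__()
        self.linear = linear
        scale = 1 / math.sqrt(in_dims)
        self.lora_a = mx.random.uniform(-scale, scale, (in_dims, rank))
        self.lora_b = mx.zeros((rank, out_dims))

    def __call__(self, x: mx.array) -> mx.array:
        return self.linear(x) + (x @ self.lora_a) @ self.lora_b
```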
Awni Hannun
27c0a8c002
Add llms subdir + update README (#145)
...
* add llms subdir + update README
* nits
* use same pre-commit as mlx
* update readmes a bit
* format
2023-12-20 10:22:25 -08:00
Awni Hannun
8c8f9d6440
keep base weights in fp16
2023-12-15 10:42:18 -08:00
Awni Hannun
84f02ef58b
use lower precision base weights
2023-12-15 10:29:42 -08:00
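Note: the two commits above amount to casting the frozen base weights down to half precision while the adapter parameters train in float32. A minimal sketch of the idea, assuming a generic nn.Module:

```python
import mlx.core as mx
import mlx.nn as nn
from mlx.utils import tree_map


def cast_base_weights_fp16(model: nn.Module) -> None:
    # Halve memory for the frozen base weights; non-float32 leaves
    # (e.g. quantized integer buffers) pass through unchanged.
    def to_fp16(p: mx.array) -> mx.array:
        return p.astype(mx.float16) if p.dtype == mx.float32 else p

    model.update(tree_map(to_fp16, model.parameters()))
```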
Awni Hannun
b8332a1e66
generalize lora finetuning for llama and mistral
2023-12-09 14:13:55 -08:00