paramthakkar123
d7cab9d5f5
Made mypy compatible
2025-04-04 07:34:43 +05:30
Awni Hannun
2146bcd7ee
Quantize embedding / Update quantize API ( #680 )
* more async eval
* quantize embedding / update quantize api
* more updates for quantize
* update for quantize embeddings
* update sd quant API
* update sdxl quants
* error for datasets < batch_size
* async
* fix config loading
* fix quant
* fix tests
* fix req
* remove lm head if tie weights is true
* fix test
2024-04-18 18:16:10 -07:00
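As context for the API change above, here is a minimal sketch of quantizing a toy model with the updated interface, assuming mlx's `nn.quantize` helper (which swaps `nn.Linear` and `nn.Embedding` layers for their quantized counterparts). This is an illustration, not the repository's conversion code.

```python
import mlx.core as mx
import mlx.nn as nn

class TinyLM(nn.Module):
    def __init__(self, vocab_size: int = 1000, dims: int = 64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, dims)
        self.lm_head = nn.Linear(dims, vocab_size)

    def __call__(self, tokens):
        return self.lm_head(self.embedding(tokens))

model = TinyLM()
# Quantize in place: Linear and Embedding layers are replaced by quantized
# versions (group_size must divide the feature dimension).
nn.quantize(model, group_size=64, bits=4)
out = model(mx.array([[1, 2, 3]]))
print(out.shape)  # (1, 3, 1000)
```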
dmdaksh
7d7e236061
- Removed unused Python imports ( #683 )
- bert/model.py:10: tree_unflatten
- bert/model.py:2: dataclass
- bert/model.py:8: numpy
- cifar/resnet.py:6: Any
- clip/model.py:15: tree_flatten
- clip/model.py:9: Union
- gcn/main.py:8: download_cora
- gcn/main.py:9: cross_entropy
- llms/gguf_llm/models.py:12: tree_flatten, tree_unflatten
- llms/gguf_llm/models.py:9: numpy
- llms/mixtral/mixtral.py:12: tree_map
- llms/mlx_lm/models/dbrx.py:2: Dict, Union
- llms/mlx_lm/tuner/trainer.py:5: partial
- llms/speculative_decoding/decoder.py:1: dataclass, field
- llms/speculative_decoding/decoder.py:2: Optional
- llms/speculative_decoding/decoder.py:5: mlx.nn
- llms/speculative_decoding/decoder.py:6: numpy
- llms/speculative_decoding/main.py:2: glob
- llms/speculative_decoding/main.py:3: json
- llms/speculative_decoding/main.py:5: Path
- llms/speculative_decoding/main.py:8: mlx.nn
- llms/speculative_decoding/model.py:6: tree_unflatten
- llms/speculative_decoding/model.py:7: AutoTokenizer
- llms/tests/test_lora.py:13: yaml_loader
- lora/lora.py:14: tree_unflatten
- lora/models.py:11: numpy
- lora/models.py:3: glob
- speechcommands/kwt.py:1: Any
- speechcommands/main.py:7: mlx.data
- stable_diffusion/stable_diffusion/model_io.py:4: partial
- whisper/benchmark.py:5: sys
- whisper/test.py:5: subprocess
- whisper/whisper/audio.py:6: Optional
- whisper/whisper/decoding.py:8: mlx.nn
2024-04-16 07:50:32 -07:00
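A report like the list above would normally come from a linter such as pyflakes or autoflake; purely as an illustration of the idea (not how this commit was produced), here is a minimal `ast`-based sketch that flags top-level imports never referenced by name.

```python
import ast

def unused_imports(path):
    tree = ast.parse(open(path, encoding="utf-8").read())
    imported = {}   # bound name -> line number of the import
    used = set()    # every bare name referenced anywhere in the module
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                # "import mlx.nn" binds the top-level name "mlx"
                imported[(alias.asname or alias.name).split(".")[0]] = node.lineno
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                imported[alias.asname or alias.name] = node.lineno
        elif isinstance(node, ast.Name):
            used.add(node.id)
    return sorted((line, name) for name, line in imported.items() if name not in used)

for line, name in unused_imports("bert/model.py"):
    print(f"bert/model.py:{line}: {name}")
```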
Awni Hannun
d3f8e4aee9
Fix argpartition call in Mixtral and other MoEs ( #676 )
* Update mixtral.py
* fix all moes
---------
Co-authored-by: yuhai-china <yuhai.china@gmail.com>
2024-04-12 11:00:56 -07:00
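The call being fixed selects the top-k experts per token; below is a minimal sketch of that routing step with `mx.argpartition` (my illustration, not the repository's exact diff).

```python
import mlx.core as mx

def top_k_experts(gate_logits: mx.array, k: int):
    # Partitioning the negated logits with kth=k-1 puts the k largest logits
    # (in arbitrary order) in the first k slots along the last axis.
    inds = mx.argpartition(-gate_logits, kth=k - 1, axis=-1)[..., :k]
    scores = mx.take_along_axis(gate_logits, inds, axis=-1)
    weights = mx.softmax(scores, axis=-1)   # mixing weights over chosen experts
    return inds, weights

gates = mx.random.normal((2, 8))            # 2 tokens, 8 experts
inds, weights = top_k_experts(gates, k=2)
print(inds.shape, weights.shape)            # (2, 2) (2, 2)
```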
Awni Hannun
b8a348c1b8
Switch to fast RMS/LN Norm ( #603 )
* use nn.RMSNorm, use sdpa, cleanup
* bump mlx versions
* minor update
* use fast layer norm
* version bump
* update requirement for whisper
* update requirement for gguf
2024-03-23 07:13:51 -07:00
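The switch replaces hand-written normalization with mlx's fused kernels; a minimal sketch comparing the two, assuming the `mx.fast.rms_norm` op (with `mx.fast.layer_norm` playing the same role for layer norm):

```python
import mlx.core as mx

x = mx.random.normal((2, 16, 64))
weight = mx.ones((64,))

# Hand-rolled RMS norm of the kind the examples used before this change.
def rms_norm_reference(x, weight, eps=1e-5):
    return weight * x * mx.rsqrt(mx.mean(mx.square(x), axis=-1, keepdims=True) + eps)

# Fused kernel the examples switched to (nn.RMSNorm uses it internally).
y_fast = mx.fast.rms_norm(x, weight, 1e-5)
y_ref = rms_norm_reference(x, weight)
print(mx.allclose(y_fast, y_ref, atol=1e-4))  # expect True up to numerics
```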
Juan B. Rodriguez
838990b33b
fix: remove custom rope ( #470 )
2024-02-20 13:46:16 -08:00
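A minimal sketch of what the change points to: using the built-in `nn.RoPE` module instead of a hand-written rotary embedding (toy shapes, my illustration).

```python
import mlx.core as mx
import mlx.nn as nn

head_dim, n_heads, seq_len = 64, 4, 16
rope = nn.RoPE(head_dim, traditional=False, base=10000)

queries = mx.random.normal((1, n_heads, seq_len, head_dim))
keys = mx.random.normal((1, n_heads, seq_len, head_dim))

# The offset argument lets the same module handle incremental decoding
# against a KV cache.
queries = rope(queries, offset=0)
keys = rope(keys, offset=0)
print(queries.shape)  # (1, 4, 16, 64)
```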
Angelos Katharopoulos
f71e965d57
Change gqa to use repeat instead of concatenate ( #443 )
2024-02-14 17:40:11 -08:00
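A minimal sketch of the idea (my illustration, not the exact diff): expanding grouped key/value heads to the query head count with a single `mx.repeat` rather than `expand_dims` + `concatenate` + `reshape`.

```python
import mlx.core as mx

B, n_kv_heads, n_heads, L, head_dim = 1, 2, 8, 16, 64
repeats = n_heads // n_kv_heads
keys = mx.random.normal((B, n_kv_heads, L, head_dim))

# Before: stack copies of each kv head, then flatten back into the head axis.
old = mx.concatenate([mx.expand_dims(keys, 2)] * repeats, axis=2)
old = mx.reshape(old, (B, n_heads, L, head_dim))

# After: repeat each kv head in place along the head axis.
new = mx.repeat(keys, repeats, axis=1)
print(mx.array_equal(old, new))  # True
```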
Daniel Strobusch
85258b2be7
Make parameter naming consistent with other examples ( #214 )
2024-01-02 08:18:12 -08:00
devonthomas35
939086e6a3
Mixtral: Stop at EOS token ( #183 )
* Stop at EOS token
* Precommit format files
* Fix precommit hooks
* Fix precommit hooks
2023-12-23 21:25:42 -08:00
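The change makes generation stop once the model emits the end-of-sequence token; below is a minimal sketch of that check in a greedy decode loop (hypothetical `model` and `eos_token_id`, no KV cache for brevity).

```python
import mlx.core as mx

def generate(model, prompt_tokens, eos_token_id, max_tokens=256):
    tokens = list(prompt_tokens)
    for _ in range(max_tokens):
        logits = model(mx.array(tokens)[None])        # (1, len, vocab_size)
        next_token = mx.argmax(logits[0, -1]).item()  # greedy pick
        if next_token == eos_token_id:
            break                                     # stop at EOS instead of running to max_tokens
        tokens.append(next_token)
    return tokens
```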
Alvaro Bartolome
f4709cb807
Align CLI args and some smaller fixes ( #167 )
* Add `.DS_Store` files to `.gitignore`
* Fix variable naming of `config` in `mixtral/convert.py`
* Align CLI args and minor fixes
* standardize
* one more
---------
Co-authored-by: Awni Hannun <awni@apple.com>
2023-12-22 14:34:32 -08:00
Awni Hannun
3cf436b529
Quantize example ( #162 )
* testing quantization
* conversion + quantization working
* one config processor
* quantization in mistral / nits in llama
* args for quantization
* llama / mistral conversion in good shape
* phi2 quantized
* mixtral
* qwen conversion
2023-12-21 12:59:37 -08:00
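At the weight level, the conversion in these examples boils down to packing each matrix into low-bit groups; a minimal round-trip sketch with `mx.quantize` / `mx.dequantize` (my illustration, not `convert.py` itself):

```python
import mlx.core as mx

w = mx.random.normal((512, 512))

# Pack into 4-bit values in groups of 64, keeping per-group scales and biases.
w_q, scales, biases = mx.quantize(w, group_size=64, bits=4)

# Round trip to inspect the quantization error.
w_hat = mx.dequantize(w_q, scales, biases, group_size=64, bits=4)
print(mx.max(mx.abs(w - w_hat)))
```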
Daniel Strobusch
43b6522af2
rename --model_path to --model-path ( #151 )
Use the same argument convention for mistral/mixtral as for the llama convert script.
2023-12-21 06:28:57 -08:00
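The rename follows the standard argparse convention: dashes in the flag, underscores in the parsed attribute, so only command-line invocations change. A minimal sketch:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--model-path", type=str, default="mlx_model")

args = parser.parse_args(["--model-path", "weights/"])
print(args.model_path)  # argparse exposes --model-path as args.model_path
```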
Awni Hannun
27c0a8c002
Add llms subdir + update README ( #145 )
* add llms subdir + update README
* nits
* use same pre-commit as mlx
* update readmes a bit
* format
2023-12-20 10:22:25 -08:00