Anchen
a39b735c3b
chore(mlx-lm): update phi2 model args to sync with hf config format. (#311)
* chore(mlx-lm): update phi2 model args to sync with hf config format
* chore: fix type hint
2024-01-13 07:51:45 -08:00
Yousif
7575125d5d
Added lora support for Phi-2 (#302)
* Added lora support for Phi-2
* Added Phi-2 support in fuse and convert
* format + readme
---------
Co-authored-by: Awni Hannun <awni@apple.com>
2024-01-12 13:45:30 -08:00
Alexandre Boucaud
3ac731dd4f
Fix TypeError in whisper benchmark script (#306)
* Add missing keyword to the decoding options
* Reverting last commit
* Fixing transcribe keyword in benchmark.py
* Add argument name to load_model
This is intended to avoid confusion
2024-01-12 13:07:15 -08:00
Pedro Cuenca
ef93979973
Update model card uploaded with converted models (#309)
2024-01-12 13:03:52 -08:00
Angelos Katharopoulos
1fa40067fe
Change tuple type definitions to use Tuple (#308)
2024-01-12 11:15:09 -08:00
Awni Hannun
c1342b8e89
Use pip for mlx data with speech commands (#307)
* update to use pypi mlx data
* nit in readme
2024-01-12 11:06:33 -08:00
Awni Hannun
c6440416a2
Mlx llm package (#301)
* fix converter
* add recursive files
* remove gitignore
* remove gitignore
* add packages properly
* readme update
* remove dup readme
* relative
* fix convert
* fix community name
* fix url
* version
2024-01-12 10:25:56 -08:00
Markus Enzweiler
2b61d9deb6
Updated CIFAR-10 ResNet example to use BatchNorm instead of LayerNorm (#257)
* replaced nn.LayerNorm by nn.BatchNorm
* mlx>=0.0.8 required
* updated default to 30 epochs instead of 100
* updated README after adding BatchNorm
* requires mlx>=0.0.9
* updated README.md with results for mlx-0.0.9
2024-01-12 05:43:11 -08:00
Anchen
6217d7acd0
Delete llms/hf_llm/models/.gitignore (#300)
2024-01-11 16:56:50 -08:00
Anchen
a2402116ae
refactor(hf_llm): moving phi2 example into hf_llm (#293)
* refactor: moving phi2 example into hf_llm
* chore: clean up
* chore: update phi2 model args so it can load args from config
* fix phi2 + nits + readme
* allow any HF repo, update README
* fix bug in llama
---------
Co-authored-by: Awni Hannun <awni@apple.com>
2024-01-11 12:29:12 -08:00
Anjor Kanekar
e74889d0fa
prompt parameter (#291)
2024-01-11 06:04:57 -08:00
Anchen
7380ebfb0d
fix: undefined hf_path (#292)
2024-01-11 05:53:52 -08:00
Konstantin Kerekovski
047d4650c4
Add --local flag to llms/hf_llm/convert.py for reading source HF models from filesystem. (#260)
* Add --local flag for reading models from filesystem and related code for doing so
* Disable uploading to huggingface if --local flag is set
* Remove code related to .bin files and merge fetch_from_local and fetch_from_hub into one function.
* Update llms/hf_llm/convert.py
Co-authored-by: Awni Hannun <awni.hannun@gmail.com>
* format / nits
---------
Co-authored-by: Awni Hannun <awni.hannun@gmail.com>
Co-authored-by: Awni Hannun <awni@apple.com>
2024-01-10 19:53:01 -08:00
Awni Hannun
80d18671ad
[Lora] Fix generate (#282)
* fix generate
* update readme, fix test, better default
* nits
* typo
2024-01-10 16:13:06 -08:00
Rishi Narang
a2bc8426f2
Update txt2image.py (#285)
added np alias
2024-01-10 09:31:59 -08:00
Alwin Arrasyid
2bbe9d3bd8
fix use of args in generate function (#284)
2024-01-10 08:09:21 -08:00
Vaibhav Srivastav
44f86092ea
Fix Tokenizer save error. (#278)
2024-01-10 05:49:32 -08:00
Awni Hannun
841c8f7b30
fix max tokens (#275)
2024-01-09 21:41:12 -08:00
Anchen
7cfda327fd
fix(lora): tokenizer returns incompatible mx array (#271)
* fix(lora): tokenizer returns incompatible encoding mx array
* add readme nit
---------
Co-authored-by: Awni Hannun <awni@apple.com>
2024-01-09 19:46:38 -08:00
Awni Hannun
7b258f33ac
Move lora example to use the same model format / conversion as hf_llm (#252)
* hugging face the lora example to allow more models
* fixes
* comments
* more readme nits
* fusion + works better for qlora
* nits
* comments
2024-01-09 11:14:52 -08:00
Awni Hannun
bbd7172eef
Some fixes / cleanup for BERT example (#269)
* some fixes/cleaning for bert + test
* nit
2024-01-09 08:44:51 -08:00
Awni Hannun
6759dfddf1
Fix SD image conversion (#266)
2024-01-09 08:41:31 -08:00
Alwin Arrasyid
6e6eff326e
fix: use of undefined args in generate function in phi-2 example (#265)
2024-01-09 06:43:59 -08:00
Vaibhav Srivastav
bb35e878cb
[Whisper] Add load from Hub. (#255)
* Add load from Hub.
* Up.
2024-01-08 06:20:00 -08:00
Vaibhav Srivastav
d4c3a9cb54
[Whisper] Add HF Hub upload option. (#254)
* Add HF Hub upload option.
* up.
* Add missing requirements.
2024-01-08 06:18:24 -08:00
Anchen
6e5b0de4d3
refactor: make the phi2 example load the model directly from hf without conversion needed (#253)
* refactor: make the phi2 example load the model directly from hf without conversion needed
* chore: add super().__init__() for all modules, otherwise it will cause an error in lora
2024-01-08 06:01:23 -08:00
Nino Risteski
9742ad0f51
Update README.md (#248)
fixed a few typos
2024-01-07 20:13:58 -08:00
Awni Hannun
485fb9ac0f
quantize linear (#250)
2024-01-07 18:48:59 -08:00
Ikko Eltociear Ashimine
737b4c81a3
Update README.md (#251)
minor fix
2024-01-07 11:35:39 -08:00
bofeng huang
bf9926489e
[Whisper] Add word timestamps and confidence scores (#201)
* Add word timestamps and confidence scores
* Create a separate forward_with_cross_qk function
* Move multiple ops from np to mlx, clean comments
* Save alignment_heads
* Cast qk to fp32
* Add test for word-level timestamps and confidence scores
* format + readme
* nit
---------
Co-authored-by: Awni Hannun <awni@apple.com>
2024-01-07 10:01:29 -08:00
mc0ps
25ebd36112
Fix typo in lora convert.py (#245)
2024-01-07 03:30:30 -08:00
Nino Risteski
b152d12d7b
Update README.md (#243)
a few typos
2024-01-06 11:44:49 -08:00
Anchen
758f05c09a
refactor: merge deepseek coder example into hf_llm example (#234)
* refactor: merge deepseek coder example into hf_llm example
* remove deepseek example
* chore: fix format in readme
* chore: remove default rope_scaling dict and use get to access type and factor to avoid a KeyError
* Update llms/hf_llm/models.py
Co-authored-by: Awni Hannun <awni.hannun@gmail.com>
* chore: fix lint
---------
Co-authored-by: Awni Hannun <awni.hannun@gmail.com>
2024-01-06 07:53:46 -08:00
Awni Hannun
cf0ad26a89
force fp16 for quantized models (#240)
2024-01-05 21:29:15 -08:00
Lawrence Wu
37856f70a8
add numpy as a requirement to run lora.py (#238)
* add numpy as a requirement to run lora.py
* removed unused imports
2024-01-05 16:16:28 -08:00
Awni Hannun
37b41cec60
Qlora (#219)
qlora
2024-01-04 21:05:59 -08:00
Christian Bieniak
4fa659acbd
Handle receiving 0 tokens gracefully (#231)
* handle 0 tokens gracefully
* Formatting
* Move no token check to statistics section
2024-01-04 19:14:13 -08:00
Andy Peatling
12c9bafbf5
Update README.md to fix --hf-model param call. (#229)
Update `--hf-model` to `--hf-path` since the `--hf-model` param does not exist in convert.py.
2024-01-04 11:53:51 -08:00
Awni Hannun
e14afb3e77
fix to use actual prompt (#227)
2024-01-04 11:12:05 -08:00
Vaibhav Srivastav
f95cf30a31
Fix upload to hub for HF LLMs conversion script. (#221)
* Fix upload to hub snippet.
* Weights -> model.
* reverting last commit.
2024-01-04 06:06:05 -08:00
Awni Hannun
a5d6d0436c
Support Hugging Face models (#215)
* support hf direct models
2024-01-03 15:13:26 -08:00
Daniel Strobusch
1d09c4fecd
keep dtype on model conversion (#186)
2024-01-02 11:20:29 -08:00
Daniel Strobusch
85258b2be7
make parameter naming consistent with other examples. (#214)
2024-01-02 08:18:12 -08:00
Anchen
e632d7aaaa
fix: deepseek coder tokenizer error (#211)
2024-01-01 06:10:37 -08:00
Anchen
ee3c44d231
chore: make the Deepseek example compatible with Yi models. (#205)
* Update convert.py
* Update convert.py
* Update deepseek_coder.py
2023-12-30 06:11:33 -08:00
bofeng huang
581a5733a1
[Whisper] Load customized MLX model & Quantization (#191)
* Add option to load customized mlx model
* Add quantization
* Apply reviews
* Separate model conversion and loading
* Update test
* Fix benchmark
* Add notes about conversion
* Improve doc
2023-12-29 10:22:15 -08:00
Anchen
1cdbf9e886
chore: fix loading the quantized model for deepseek coder (#203)
* chore: fix loading the quantized model
* change to explicitly check for quantization config
2023-12-29 05:25:38 -08:00
Anchen
31ddbd7806
add deepseek coder example (#172)
* feat: add example for deepseek coder
* chore: remove hardcoded rope_scaling_factor
* feat: add quantization support
* chore: update readme
* chore: clean up the rope scaling factor param in create cos sin theta
* feat: add repetition_penalty
* style /consistency changes to ease future integration
* nits in README
* one more typo
---------
Co-authored-by: Awni Hannun <awni@apple.com>
2023-12-28 21:42:22 -08:00
Angelos Katharopoulos
37fd2464dc
Add an image2image example to the stable diffusion example (#198)
2023-12-28 18:31:45 -08:00
Benjamin Anderson
09566c7257
add speculative decoding example for llama (#149)
* speculative decoding
* add sample 0
* spec decode gives same results as regular decode
* rebase
* use accept reject criteria
* switch to t5
* update readme
* readme nit
* nits
* nits
* nits
---------
Co-authored-by: Benjamin Anderson <benjamin@Benjamins-MBP.lan>
Co-authored-by: Awni Hannun <awni@apple.com>
2023-12-28 15:20:43 -08:00