09b641aaa7 | Usama Ahmed | 2025-02-22 06:08:54 -08:00
Fix FutureWarning in torch.load by setting weights_only=True (#1295)
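
Recent PyTorch releases emit a FutureWarning when torch.load is called without weights_only, since the default is moving to safe, weights-only unpickling. A minimal sketch of the fix, with a placeholder checkpoint path:

```python
import torch

# weights_only=True restricts unpickling to tensors and other allowlisted
# types, silencing the FutureWarning and avoiding arbitrary code execution
# from untrusted pickles. "model.pt" is a placeholder path.
state_dict = torch.load("model.pt", weights_only=True)
```
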
4971462bf0 | Saurav Maheshkar | 2024-10-24 21:56:17 -07:00
feat(clip): add linear probe evaluation script (#960)
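
For context, a linear probe evaluates frozen encoder features by training only a linear classifier on top of them. A minimal sketch of the protocol, with random arrays standing in for real CLIP image embeddings (not the actual script from #960):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder features; in the real script these would be embeddings from a
# frozen CLIP image encoder.
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(1000, 512))
train_labels = rng.integers(0, 10, size=1000)
test_feats = rng.normal(size=(200, 512))
test_labels = rng.integers(0, 10, size=200)

# The probe is the only trained component: a logistic regression on top of
# the frozen features.
probe = LogisticRegression(max_iter=1000)
probe.fit(train_feats, train_labels)
print(f"linear-probe accuracy: {probe.score(test_feats, test_labels):.3f}")
```
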
47060a8130 | M. Ali Bayram | 2024-07-23 13:10:20 -07:00
refactor: add force_download parameter to get_model_path function (#800)
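
A sketch of what the new parameter plausibly enables, assuming the function resolves Hugging Face repos via snapshot_download (which does accept force_download); this is illustrative, not the exact code from #800:

```python
from pathlib import Path

from huggingface_hub import snapshot_download


def get_model_path(path_or_hf_repo: str, force_download: bool = False) -> Path:
    """Return a local model path, downloading from the HF Hub if needed.

    force_download=True re-fetches files even when a cached copy exists,
    which helps recover from corrupted or stale caches.
    """
    model_path = Path(path_or_hf_repo)
    if not model_path.exists():
        model_path = Path(
            snapshot_download(
                repo_id=path_or_hf_repo,
                force_download=force_download,
            )
        )
    return model_path
```
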
7d7e236061 | dmdaksh | 2024-04-16 07:50:32 -07:00
Removed unused Python imports (#683); a detection sketch follows the list.
- bert/model.py:10: tree_unflatten
- bert/model.py:2: dataclass
- bert/model.py:8: numpy
- cifar/resnet.py:6: Any
- clip/model.py:15: tree_flatten
- clip/model.py:9: Union
- gcn/main.py:8: download_cora
- gcn/main.py:9: cross_entropy
- llms/gguf_llm/models.py:12: tree_flatten, tree_unflatten
- llms/gguf_llm/models.py:9: numpy
- llms/mixtral/mixtral.py:12: tree_map
- llms/mlx_lm/models/dbrx.py:2: Dict, Union
- llms/mlx_lm/tuner/trainer.py:5: partial
- llms/speculative_decoding/decoder.py:1: dataclass, field
- llms/speculative_decoding/decoder.py:2: Optional
- llms/speculative_decoding/decoder.py:5: mlx.nn
- llms/speculative_decoding/decoder.py:6: numpy
- llms/speculative_decoding/main.py:2: glob
- llms/speculative_decoding/main.py:3: json
- llms/speculative_decoding/main.py:5: Path
- llms/speculative_decoding/main.py:8: mlx.nn
- llms/speculative_decoding/model.py:6: tree_unflatten
- llms/speculative_decoding/model.py:7: AutoTokenizer
- llms/tests/test_lora.py:13: yaml_loader
- lora/lora.py:14: tree_unflatten
- lora/models.py:11: numpy
- lora/models.py:3: glob
- speechcommands/kwt.py:1: Any
- speechcommands/main.py:7: mlx.data
- stable_diffusion/stable_diffusion/model_io.py:4: partial
- whisper/benchmark.py:5: sys
- whisper/test.py:5: subprocess
- whisper/whisper/audio.py:6: Optional
- whisper/whisper/decoding.py:8: mlx.nn
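
The commit does not say how the unused imports were found; one common tool for this is pyflakes, which reports them in the same file:line format as the list above. A small programmatic sketch, assuming pyflakes is installed:

```python
import sys

from pyflakes.api import checkPath
from pyflakes.reporter import Reporter

# Check a single file; pyflakes prints lines like
# "bert/model.py:8:1: 'numpy' imported but unused".
reporter = Reporter(sys.stdout, sys.stderr)
checkPath("bert/model.py", reporter)
```
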
47dd6bd17f | Anchen | 2024-02-23 06:49:53 -08:00
chore(clip): update the clip example to make it compatible with HF format (#472)
* chore(clip): update the clip model to be HF format
* Update clip/convert.py
* chore: address comments
* chore: rename ClipVisionModel and ClipTextModel
* chore: add output hidden_states support
* chore: remove custom conv2d and apply weight transpose during weight sanitizing (sketched after this entry)
* Update clip/model.py
* Update clip/model.py
Co-authored-by: Awni Hannun <awni.hannun@gmail.com>
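
A sketch of the weight-sanitizing idea referenced above (key names are assumptions; the real logic lives in clip/model.py). PyTorch stores conv weights as (out_channels, in_channels, kH, kW), while MLX's nn.Conv2d expects (out_channels, kH, kW, in_channels), so the patch-embedding weight is transposed at load time instead of keeping a custom conv2d module:

```python
def sanitize(weights: dict) -> dict:
    """Transpose conv weights from PyTorch layout to MLX layout."""
    sanitized = {}
    for key, value in weights.items():
        # Hypothetical key name for the ViT patch-embedding convolution.
        if "patch_embedding.weight" in key:
            # (out, in, kH, kW) -> (out, kH, kW, in)
            value = value.transpose(0, 2, 3, 1)
        sanitized[key] = value
    return sanitized
```
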
94358219cf | Gabrijel Boduljak | 2024-01-31 14:19:53 -08:00
CLIP (ViT) (#315)
* probably approximately correct CLIPTextEncoder
* implemented CLIPEncoderLayer using the built-in nn.TransformerEncoderLayer
* replaced embedding layer with simple matrix
* implemented ViT
* added ViT tests
* fixed tests
* added pooler_output for text
* implemented complete CLIPModel
* implemented init
* implemented convert.py and from_pretrained
* fixed some minor bugs and added the README.md
* removed unused tokenizer comments
* removed unused deps
* updated ACKNOWLEDGEMENTS.md
* Feat: Image Processor for CLIP (#1), from @nkasmanoff:
  * clip image processor
  * added example usage
  * refactored image preprocessing
  * deleted unused image_config.py
  * removed preprocessing port
  * added dependency on mlx-data
  * fixed attribution and moved photos to assets
  * implemented a simple port of CLIPImageProcessor
* review changes
* PR review changes
* renamed overly verbose arg
* updated README.md
* nits in readme / conversion
* simplified some stuff, removed unneeded inits
* removed more init stuff
* more simplification
* made test a unit test
* updated main readme
* readme nits
Co-authored-by: Noah Kasmanoff <nkasmanoff@gmail.com>
Co-authored-by: Awni Hannun <awni@apple.com>
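
As a refresher on what the finished CLIPModel computes: images and texts are embedded by their respective encoders, L2-normalized, and compared by cosine similarity. A self-contained MLX sketch with random placeholders for the encoder outputs (the real model also applies a learned logit scale):

```python
import mlx.core as mx


def clip_similarity(image_embeds: mx.array, text_embeds: mx.array) -> mx.array:
    # L2-normalize each embedding, then take pairwise cosine similarities.
    image_embeds = image_embeds / mx.linalg.norm(image_embeds, axis=-1, keepdims=True)
    text_embeds = text_embeds / mx.linalg.norm(text_embeds, axis=-1, keepdims=True)
    return image_embeds @ text_embeds.T


# Random stand-ins for pooled vision/text encoder outputs.
image_embeds = mx.random.normal((2, 512))
text_embeds = mx.random.normal((3, 512))
print(clip_similarity(image_embeds, text_embeds))  # (2, 3) similarity matrix
```
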