Mirror of https://github.com/ml-explore/mlx-examples.git, synced 2025-06-28 03:41:17 +08:00

* probably approximately correct CLIPTextEncoder
* implemented CLIPEncoderLayer as built-in nn.TransformerEncoderLayer
* replaced embedding layer with simple matrix
* implemented ViT
* added ViT tests
* fixed tests
* added pooler_output for text
* implemented complete CLIPModel
* implemented init
* implemented convert.py and from_pretrained
* fixed some minor bugs and added the README.md
* removed tokenizer unused comments
* removed unused deps
* updated ACKNOWLEDGEMENTS.md
* Feat: Image Processor for CLIP (#1) @nkasmanoff:
* clip image processor
* added example usage
* refactored image preprocessing
* deleted unused image_config.py
* removed preprocessing port
* added dependency to mlx-data
* fixed attribution and moved photos to assets
* implemented a simple port of CLIPImageProcessor
* review changes
* PR review changes
* renamed too verbose arg
* updated README.md
* nits in readme / conversion
* simplify some stuff, remove unneeded inits
* remove more init stuff
* more simplify
* make test a unit test
* update main readme
* readme nits

Co-authored-by: Noah Kasmanoff <nkasmanoff@gmail.com>
Co-authored-by: Awni Hannun <awni@apple.com>
# Individual Contributors

If you wish to be acknowledged for your contributions, please list your name
with a short description of your contribution(s) below. For example:

- Jane Smith: Added the `foo` example.

MLX Examples was developed with contributions from the following individuals:

- Juarez Bochi: Added support for T5 models.
- Sarthak Yadav: Added the `cifar` and `speechcommands` examples.
- Shunta Saito: Added support for PLaMo models.
- Gabrijel Boduljak: Implemented `CLIP`.