mlx-examples/llms/mlx_lm/models
Latest commit: Markus Enzweiler 9b387007ab
Example of a Convolutional Variational Autoencoder (CVAE) on MNIST (#264)
* initial commit

* style fixes

* update of ACKNOWLEDGMENTS

* fixed comment

* minor refactoring; removed unused imports

* added cifar and cvae to top-level README.md

* removed mention of cuda/mps in argparse

* fixed training status output

* load_weights() with strict=True (see the sketch after this commit message)

* pretrained model update

* fixed imports and style

* requires mlx>=0.0.9

* updated with results using mlx 0.0.9

* removed mention of private repo

* simplify and combine into one file; more consistency with other examples

* few more nits

* nits

* spell

* format

---------

Co-authored-by: Awni Hannun <awni@apple.com>
2024-02-06 20:02:27 -08:00
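
One of the bullets above switches the example to load_weights() with strict=True. A minimal sketch of what that looks like for an MLX module, assuming a toy TinyModel and the file name weights.npz purely for illustration (neither is from the actual example):

```python
# Sketch of a strict weight load in MLX; TinyModel and weights.npz are
# illustrative stand-ins, not code from the CVAE example itself.
import mlx.nn as nn


class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def __call__(self, x):
        return self.fc(x)


model = TinyModel()
model.save_weights("weights.npz")

# strict=True makes load_weights() raise if the checkpoint's parameter
# names or shapes do not exactly match the module's parameters.
model.load_weights("weights.npz", strict=True)
```

With strict=False, mismatched or missing keys can pass silently, so opting into the strict check surfaces a bad checkpoint immediately.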
__init__.py Mlx llm package (#301) 2024-01-12 10:25:56 -08:00
base.py Mlx llm package (#301) 2024-01-12 10:25:56 -08:00
llama.py two minor fixes (#335) 2024-01-18 14:18:13 -08:00
mixtral.py feat: move lora into mlx-lm (#337) 2024-01-23 08:44:37 -08:00
olmo.py Example of a Convolutional Variational Autoencoder (CVAE) on MNIST (#264) 2024-02-06 20:02:27 -08:00
phi2.py chore(mlx-lm): update phi2 model args to sync with hf config format. (#311) 2024-01-13 07:51:45 -08:00
plamo.py Add PLaMo-13B model as an LLM example (#303) 2024-01-23 07:17:24 -08:00
qwen2.py add qwen2 (#411) 2024-02-04 08:31:38 -08:00
qwen.py refactor(qwen): moving qwen into mlx-lm (#312) 2024-01-22 15:00:07 -08:00
stablelm_epoch.py Add StableLM-2 1.6B (#378) 2024-01-26 10:28:00 -08:00
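
The files above are the per-architecture model implementations used by the mlx_lm package; mlx_lm.load reads a checkpoint's config and dispatches to the matching module here (llama.py, phi2.py, qwen2.py, and so on). A minimal usage sketch, with a placeholder Hugging Face repo id rather than a specific model:

```python
# Usage sketch for the mlx_lm package; the repo id below is a placeholder.
from mlx_lm import load, generate

# load() fetches the weights and config, then builds the matching model
# class from mlx_lm/models based on the config's model_type.
model, tokenizer = load("mlx-community/Some-Model-4bit")  # placeholder repo id

# generate() performs autoregressive decoding with the loaded model.
text = generate(model, tokenizer, prompt="Write a haiku about MLX.", max_tokens=64)
print(text)
```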