# MLX Examples
This repo contains a variety of standalone examples using the MLX framework.
The MNIST example is a good starting point to learn how to use MLX.
Some more useful examples are listed below.
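To give a feel for the framework before diving into the examples, here is a minimal sketch (not taken from any example in this repo) of the basic MLX workflow: arrays from `mlx.core` and gradients via function transforms.

```python
# Minimal sketch of the core MLX workflow, assuming `mlx` is installed (pip install mlx).
import mlx.core as mx

def loss(w):
    # A tiny least-squares objective on toy data.
    x = mx.array([1.0, 2.0, 3.0])
    y = mx.array([2.0, 4.0, 6.0])
    return mx.mean((w * x - y) ** 2)

grad_fn = mx.grad(loss)      # function transform: returns d(loss)/dw
w = mx.array(1.5)
print(loss(w), grad_fn(w))   # lazy arrays are evaluated when printed
```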
## Text Models
- MLX LM, a package for LLM text generation, fine-tuning, and more (see the short sketch after this list).
- Transformer language model training.
- Minimal examples of large scale text generation with LLaMA, Mistral, and more in the LLMs directory.
- A mixture-of-experts (MoE) language model with Mixtral 8x7B.
- Parameter efficient fine-tuning with LoRA or QLoRA.
- Text-to-text multi-task Transformers with T5.
- Bidirectional language understanding with BERT.
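As a quick illustration of the MLX LM package mentioned above, the following is a minimal sketch, assuming `mlx-lm` is installed (`pip install mlx-lm`); the model name is an illustrative repo id, not a recommendation, and the MLX LM documentation is the authoritative reference for the current API.

```python
# Minimal sketch: text generation with the mlx-lm package.
from mlx_lm import load, generate

# Downloads (or loads from cache) an MLX-converted checkpoint from Hugging Face.
# The repo id below is an example quantized model from the MLX Community.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

# Generate a completion for a prompt.
text = generate(model, tokenizer, prompt="Write a haiku about Apple silicon.", max_tokens=100)
print(text)
```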
## Image Models
- Image classification using ResNets on CIFAR-10.
- Generating images with Stable Diffusion or SDXL.
- Convolutional variational autoencoder (CVAE) on MNIST.
## Audio Models
- Speech recognition with OpenAI's Whisper.
- Audio compression and generation with Meta's EnCodec.
## Multimodal Models
- Joint text and image embeddings with CLIP.
- Text generation from image and text inputs with LLaVA.
- Image segmentation with Segment Anything (SAM).
## Other Models
- Semi-supervised learning on graph-structured data with GCN.
- Real NVP normalizing flow for density estimation and sampling.
## Hugging Face
**Note:** You can now directly download a few converted checkpoints from the [MLX Community](https://huggingface.co/mlx-community) organization on Hugging Face. We encourage you to join the community and contribute new models.
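For example, one way to fetch a converted checkpoint locally is with the `huggingface_hub` client. This is a minimal sketch assuming that library is installed; the repo id is an illustrative example.

```python
# Minimal sketch: download an MLX-converted checkpoint from the MLX Community.
# Browse https://huggingface.co/mlx-community for available models.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="mlx-community/Mistral-7B-Instruct-v0.3-4bit")
print("Checkpoint downloaded to:", local_dir)
```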
## Contributing
We are grateful for all of our contributors. If you contribute to MLX Examples and wish to be acknowledged, please add your name to the list in `ACKNOWLEDGMENTS.md` in your pull request.
## Citing MLX Examples
The MLX software suite was initially developed with equal contribution by Awni Hannun, Jagrit Digani, Angelos Katharopoulos, and Ronan Collobert. If you find MLX Examples useful in your research and wish to cite it, please use the following BibTeX entry:
```
@software{mlx2023,
  author = {Awni Hannun and Jagrit Digani and Angelos Katharopoulos and Ronan Collobert},
  title = {{MLX}: Efficient and flexible machine learning on Apple silicon},
  url = {https://github.com/ml-explore},
  version = {0.0},
  year = {2023},
}
```