# MLX Examples
This repo contains a variety of standalone examples using the MLX framework.
The MNIST example is a good starting point to learn how to use MLX. Some more useful examples are listed below. Check out MLX LM for a more fully featured Python package for LLMs with MLX.
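If you are new to MLX itself, a few lines of the core array API convey the basic model. The sketch below is illustrative only (assuming `mlx` is installed, e.g. `pip install mlx`) and is not taken from any particular example:

```python
import mlx.core as mx

# MLX arrays are lazy: operations build a compute graph, and nothing
# runs until a result is needed or forced with mx.eval.
a = mx.random.normal((4, 4))
b = a @ mx.transpose(a) + 1.0  # matmul plus a broadcast scalar
mx.eval(b)                     # force evaluation
print(b.shape, b.dtype)
```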
## Text Models
- Transformer language model training.
- Minimal examples of large-scale text generation with LLaMA, Mistral, and more in the LLMs directory.
- A mixture-of-experts (MoE) language model with Mixtral 8x7B.
- Parameter-efficient fine-tuning with LoRA or QLoRA.
- Text-to-text multi-task Transformers with T5.
- Bidirectional language understanding with BERT.
## Image Models
- Generating images with FLUX and Stable Diffusion or SDXL.
- Image classification using ResNets on CIFAR-10.
- Convolutional variational autoencoder (CVAE) on MNIST.
## Audio Models
- Speech recognition with OpenAI's Whisper (see the sketch after this list).
- Audio compression and generation with Meta's EnCodec.
- Music generation with Meta's MusicGen.
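As a quick illustration, the Whisper example transcribes audio in a couple of lines. This is a minimal sketch, assuming the example's dependencies are installed and you run it from the `whisper` example directory; `speech.mp3` is a placeholder path, and the package name may differ in newer versions of the example:

```python
import whisper  # the package provided by the whisper example

# Transcribe a local audio file; the returned dict includes the text.
result = whisper.transcribe("speech.mp3")
print(result["text"])
```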
## Multimodal Models
- Joint text and image embeddings with CLIP.
- Text generation from image and text inputs with LLaVA.
- Image segmentation with Segment Anything (SAM).
## Other Models
- Semi-supervised learning on graph-structured data with GCN.
- Real NVP normalizing flow for density estimation and sampling.
## Hugging Face
You can directly use or download converted checkpoints from the MLX Community organization on Hugging Face. We encourage you to join the community and contribute new models.
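For example, with the MLX LM package mentioned above (`pip install mlx-lm`), you can load a converted checkpoint directly from the Hugging Face Hub and generate text. The repo name below is only an illustration of the `mlx-community/...` naming scheme, not a specific recommendation:

```python
from mlx_lm import load, generate

# Downloads (or reuses a cached copy of) a converted checkpoint from
# the mlx-community organization on Hugging Face.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")
text = generate(model, tokenizer, prompt="Write a haiku about the sea.",
                verbose=True)
```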
## Contributing
We are grateful for all of our contributors. If you contribute to MLX Examples and wish to be acknowledged, please add your name to the list in your pull request.
## Citing MLX Examples
The MLX software suite was initially developed with equal contribution by Awni Hannun, Jagrit Digani, Angelos Katharopoulos, and Ronan Collobert. If you find MLX Examples useful in your research and wish to cite it, please use the following BibTeX entry:
```bibtex
@software{mlx2023,
  author = {Awni Hannun and Jagrit Digani and Angelos Katharopoulos and Ronan Collobert},
  title = {{MLX}: Efficient and flexible machine learning on Apple silicon},
  url = {https://github.com/ml-explore},
  version = {0.0},
  year = {2023},
}
```