
* feat: Nemotron

  https://huggingface.co/nvidia/Minitron-4B-Base

  This is basically Llama with partial RoPE and LayerNorm instead of
  RMSNorm. They also add 1 to the LayerNorm weight (zero-centered gamma).

* fixup! feat: Nemotron

* nits

---------

Co-authored-by: Awni Hannun <awni@apple.com>
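A minimal sketch of the two deviations from Llama described above, assuming MLX's public API (`mx.fast.layer_norm`, `nn.RoPE`); the class name `NemotronLayerNorm` and the numeric values are illustrative, not the actual identifiers used in the PR:

```python
import mlx.core as mx
import mlx.nn as nn


class NemotronLayerNorm(nn.LayerNorm):
    """LayerNorm whose effective scale is (weight + 1)."""

    def __call__(self, x):
        # Nemotron stores the scale as a deviation from 1
        # (zero-centered gamma), so add 1 back before normalizing.
        return mx.fast.layer_norm(x, self.weight + 1.0, self.bias, self.eps)


# Partial RoPE: rotate only a fraction of each head's dimensions.
# nn.RoPE applies the rotation to the first `dims` features and
# passes the remaining features through unchanged.
head_dim, partial_rotary_factor = 128, 0.5  # illustrative values
rope = nn.RoPE(int(partial_rotary_factor * head_dim), traditional=False)
```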