mirror of
https://github.com/ml-explore/mlx-examples.git
synced 2025-12-16 02:08:55 +08:00
adding the model names in the LORA.md file and removing unused functions from mamba2.py
@@ -7,12 +7,37 @@ LoRA (QLoRA).[^qlora] LoRA fine-tuning works with the following model families:
- Mistral
- Llama
- Phi2
- Phi3
- Phi3 Small
- PhiMOE
- Phixtral
- Plamo
- Mixtral
- Qwen
- Qwen2
- Qwen2 MOE
- Gemma
- Gemma2
- OLMo
- OLMo2
- MiniCPM
- InternLM2
- Mamba
- Mamba2
- EXAONE
- Hunyuan
- GPT 2
- GPT Neo
- GPT BigCode
- Deepseek
- Deepseek2
- OpenLM
- StableLM
- Cohere
- DBRX
- Nemotron
- Recurrent Gemma
- Starcoder
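For orientation, LoRA fine-tuning for the families above is driven by the `mlx_lm.lora` command documented in this repo's LORA.md. A minimal sketch; the model id and data directory below are placeholders, not values from this commit:

```shell
# LoRA fine-tune one of the supported families (Mistral as an example).
# --model takes a local path or a Hugging Face repo id;
# --data points at a directory containing train.jsonl / valid.jsonl.
mlx_lm.lora \
    --model mistralai/Mistral-7B-v0.1 \
    --train \
    --data <path_to_data> \
    --iters 600
```

The same invocation pattern should apply to any family in the list; only the `--model` argument changes.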
## Contents