	typos in readme
@@ -2,8 +2,7 @@
 
 Run the Mixtral[^mixtral] 8x7B mixture-of-experts (MoE) model in MLX on Apple silicon.
 
-Note, this model needs a machine with substantial RAM (>= 128GB) to run in
-16-bit precision.
+Note, for 16-bit precision this model needs a machine with substantial RAM (~100GB) to run.
 
 ### Setup
 
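As a rough sanity check on that memory figure (the 46.7B total parameter count is taken from Mistral's blog post linked in the footnote, not from this diff):

```python
# Back-of-the-envelope estimate of the memory needed just to hold the 16-bit weights.
total_params = 46.7e9   # Mixtral 8x7B total parameters (per Mistral's announcement)
bytes_per_param = 2     # 16-bit precision
print(f"~{total_params * bytes_per_param / 1e9:.0f} GB")  # -> ~93 GB, hence the ~100GB guidance
```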
@@ -15,7 +14,7 @@ For example with Homebrew:
 brew install git-lfs
 ```
 
-Download the models from HugginFace:
+Download the models from HuggingFace:
 
 ```
 git clone https://huggingface.co/someone13574/mixtral-8x7b-32kseqlen
@@ -27,7 +26,8 @@ cd mixtral-8x7b-32kseqlen/
 cat consolidated.00.pth-split0 consolidated.00.pth-split1 consolidated.00.pth-split2 consolidated.00.pth-split3 consolidated.00.pth-split4 consolidated.00.pth-split5 consolidated.00.pth-split6 consolidated.00.pth-split7 consolidated.00.pth-split8 consolidated.00.pth-split9 consolidated.00.pth-split10 > consolidated.00.pth
 ```
 
-Now from `mlx-exmaples/mixtral` conver the weights to NumPy so MLX can read them:
+Now from `mlx-examples/mixtral` convert and save the weights as NumPy arrays so
+MLX can read them:
 
 ```
 python convert.py --model_path mixtral-8x7b-32kseqlen/
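For orientation, the conversion step boils down to loading the PyTorch checkpoint and re-saving its tensors in a NumPy container that MLX can read. A minimal sketch of that idea (the repository's `convert.py` is the authoritative script; the `weights.npz` output name and the float16 cast here are assumptions):

```python
# Sketch of the checkpoint-to-NumPy conversion, not the repository's actual convert.py.
import numpy as np
import torch

# Load the concatenated PyTorch checkpoint on the CPU.
state = torch.load("mixtral-8x7b-32kseqlen/consolidated.00.pth", map_location="cpu")

# Re-save every tensor as a 16-bit NumPy array in a single .npz archive.
np.savez(
    "mixtral-8x7b-32kseqlen/weights.npz",  # assumed output name
    **{name: tensor.to(torch.float16).numpy() for name, tensor in state.items()},
)
```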
@@ -49,4 +49,4 @@ As easy as:
 python mixtral.py --model_path mixtral mixtral-8x7b-32kseqlen/
 ```
 
-[^mixtral] Refer to Mistral's [blog post](https://mistral.ai/news/mixtral-of-experts/) for more details.
+[^mixtral]: Refer to Mistral's [blog post](https://mistral.ai/news/mixtral-of-experts/) for more details.
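Once the weights are in `.npz` form, MLX can read them back as a dictionary of arrays; a small illustration (the loading logic actually used lives in `mixtral.py`, and the `weights.npz` name is the same assumption as above):

```python
# Load the converted weights with MLX; returns a dict mapping parameter names to arrays.
import mlx.core as mx

weights = mx.load("mixtral-8x7b-32kseqlen/weights.npz")
print(f"loaded {len(weights)} arrays")
```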