diff --git a/llama/README.md b/llama/README.md
index 39c0267c..e6dcc132 100644
--- a/llama/README.md
+++ b/llama/README.md
@@ -20,10 +20,8 @@ weights you will need to request access from Meta:
 
 - [Request Llama v1](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform)
 - [Request Llama v2](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
-
-Alternatively, you can also download a select converted checkpoints from the
-[mlx-llama](https://huggingface.co/mlx-llama) community organisation on Hugging
-Face and skip the conversion step.
+> [!TIP]
+> Alternatively, you can download select converted checkpoints from the [mlx-community](https://huggingface.co/mlx-community) organisation on Hugging Face and skip the conversion step.
 
 You can download the TinyLlama models directly from
 [Hugging Face](https://huggingface.co/TinyLlama).
diff --git a/mistral/README.md b/mistral/README.md
index c2406b6d..ddc25a57 100644
--- a/mistral/README.md
+++ b/mistral/README.md
@@ -28,6 +28,9 @@ python convert.py
 
 The conversion script will save the converted weights in the same location.
 
+> [!TIP]
+> Alternatively, you can download select converted checkpoints from the [mlx-community](https://huggingface.co/mlx-community) organisation on Hugging Face and skip the conversion step.
+
 ### Run
 
 Once you've converted the weights to MLX format, you can generate text with
diff --git a/phi2/README.md b/phi2/README.md
index f5d80696..18d8afc5 100644
--- a/phi2/README.md
+++ b/phi2/README.md
@@ -17,6 +17,9 @@ python convert.py
 
 This will make the `weights.npz` file which MLX can read.
 
+> [!TIP]
+> Alternatively, you can download select converted checkpoints from the [mlx-community](https://huggingface.co/mlx-community) organisation on Hugging Face and skip the conversion step.
+
 ## Generate
 
 To generate text with the default prompt: