# T5

The T5 models are encoder-decoder models pre-trained on a mixture of
unsupervised and supervised tasks.[^1] These models work well on a variety of
tasks by prepending task-specific prefixes to the input, e.g.:
`translate English to German: …`, `summarize: …`, etc.

This example also supports the FLAN-T5 model variants.[^2]

## Generate

Generate text with:

```sh
python t5.py --model t5-small --prompt "translate English to German: A tasty apple"
```

This should give the output: `Ein leckerer Apfel`
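
The other task prefixes work the same way. For instance, a summarization
prompt (the input text here is only illustrative):

```sh
python t5.py --model t5-small --prompt "summarize: The T5 models are encoder-decoder models pre-trained on a mixture of unsupervised and supervised tasks."
```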

To see a list of options run:

```sh
python t5.py --help
```

The `--model` argument can be any of the following:

| Model Name | Parameters  |
| ---------- | ----------- |
| t5-small   | 60 million  |
| t5-base    | 220 million |
| t5-large   | 770 million |
| t5-3b      | 3 billion   |
| t5-11b     | 11 billion  |

The FLAN variants can be specified with `google/flan-t5-small`,
`google/flan-t5-base`, etc. See the [Hugging Face
page](https://huggingface.co/docs/transformers/model_doc/flan-t5) for a
complete list of models.
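
For example, a FLAN variant can be substituted directly for the model name in
the command shown above:

```sh
python t5.py --model google/flan-t5-small --prompt "translate English to German: A tasty apple"
```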

[^1]: For more information on T5 see the [original paper](https://arxiv.org/abs/1910.10683)
  or the [Hugging Face page](https://huggingface.co/docs/transformers/model_doc/t5).

[^2]: For more information on FLAN-T5 see the [original paper](https://arxiv.org/abs/2210.11416).