Mirror of https://github.com/ml-explore/mlx-examples.git (synced 2025-06-24 17:31:18 +08:00)
Latest commit:

* feature: LoRA adapter for embeddings
* feature: wire LoRAEmbedding into the tuner; allow the embedding and non-`model.layers` Linear layers to be targeted for fine-tuning
* feature: DoRA adapter for embeddings
* feature: wire in DoRAEmbedding
* bugfix: ensure `self.m` is recalculated when the linear layer is changed in `DoRALinear.from_linear`
* refactor: prefer `from_base` over `from_linear` or `from_embedding`; prefer `fuse` over `to_linear` or `to_embedding`
* cleanup: remove unused imports in `test_dora.py`
* refactor: remove unnecessary `non_layer_modules`
* cleanup: remove wrong comments for LoRA embedding dropout; remove unnecessary parens in DoRA embedding dropout
* nits

Co-authored-by: Awni Hannun <awni@apple.com>
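The commit above adds LoRA adapters for embedding layers. As a rough illustration only (this is not the mlx-examples implementation; the shapes, the `alpha/r` scaling convention, and which factor is zero-initialized are assumptions that vary between implementations), a LoRA-adapted embedding lookup adds a trainable low-rank correction to a frozen embedding table:

```python
import numpy as np

def lora_embedding_lookup(tokens, W, A, B, alpha=16.0):
    # W: frozen embedding table, shape (vocab_size, dims)
    # A: trainable factor, shape (vocab_size, r)
    # B: trainable factor, shape (r, dims)
    r = A.shape[1]
    base = W[tokens]                       # ordinary embedding lookup
    delta = (alpha / r) * (A[tokens] @ B)  # low-rank correction
    return base + delta

rng = np.random.default_rng(0)
vocab, dims, r = 100, 8, 4
W = rng.normal(size=(vocab, dims))
A = np.zeros((vocab, r))  # zero-init one factor so the adapter starts as a no-op
B = rng.normal(size=(r, dims))
tokens = np.array([3, 7, 42])

out = lora_embedding_lookup(tokens, W, A, B)
assert np.allclose(out, W[tokens])  # untrained adapter leaves embeddings unchanged
```

Only `A` and `B` receive gradients during fine-tuning, so the number of trainable parameters is `r * (vocab + dims)` rather than `vocab * dims`.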
examples/
models/
tuner/
__init__.py
convert.py
fuse.py
generate.py
gguf.py
LORA.md
lora.py
MANAGE.md
manage.py
MERGE.md
merge.py
py.typed
README.md
requirements.txt
sample_utils.py
SERVER.md
server.py
tokenizer_utils.py
UPLOAD.md
utils.py
version.py
Generate Text with MLX and 🤗 Hugging Face
This is an example of large language model text generation that can pull models from the Hugging Face Hub.
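Independent of this package's actual API, autoregressive text generation boils down to repeatedly picking the next token from the model's output distribution and appending it to the context. A minimal greedy-decoding loop, shown here with a hypothetical stand-in "model" rather than a real LLM:

```python
import numpy as np

def generate(model, prompt_tokens, max_tokens, eos_id):
    # Greedy decoding: repeatedly append the most likely next token
    # until EOS is predicted or the token budget is exhausted.
    tokens = list(prompt_tokens)
    for _ in range(max_tokens):
        logits = model(tokens)  # model returns next-token logits
        next_id = int(np.argmax(logits))
        if next_id == eos_id:
            break
        tokens.append(next_id)
    return tokens

# Stand-in "model": always predicts (last token + 1) mod vocab size.
VOCAB = 10
def toy_model(tokens):
    logits = np.zeros(VOCAB)
    logits[(tokens[-1] + 1) % VOCAB] = 1.0
    return logits

out = generate(toy_model, [1, 2], max_tokens=8, eos_id=5)
# generation appends 3, then 4, then stops when 5 (EOS) is predicted
```

Real generators replace `argmax` with temperature or top-p sampling, but the loop structure is the same.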
For more information on this example, see the README in the parent directory.
This package also supports fine-tuning with LoRA or QLoRA. For more information, see the LoRA documentation.
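After fine-tuning, a LoRA adapter can be folded back into the base weights so that inference pays no extra cost, which is what a fuse step does conceptually. The sketch below is a generic illustration of that identity (not mlx_lm's code; the `alpha/r` scaling is an assumed convention):

```python
import numpy as np

def fuse(W, A, B, alpha=16.0):
    # Fold the low-rank update into the base weight:
    #   W_fused = W + (alpha / r) * A @ B
    r = A.shape[1]
    return W + (alpha / r) * (A @ B)

rng = np.random.default_rng(0)
din, dout, r = 16, 8, 4
W = rng.normal(size=(din, dout))
A = rng.normal(size=(din, r))
B = rng.normal(size=(r, dout))
x = rng.normal(size=(2, din))

adapted = x @ W + (16.0 / r) * (x @ A) @ B  # adapter applied at runtime
fused = x @ fuse(W, A, B)                   # adapter folded into the weight
assert np.allclose(adapted, fused)          # both paths produce the same output
```

Fusing trades the ability to swap adapters for a single plain weight matrix, which is why it is typically done only once fine-tuning is finished.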