mlx-examples/llms/mlx_lm/tuner

Latest commit: 01e330d6bb by Chime Ogbuji, 2024-11-10 09:54:32 -05:00
"Add input masking for fine-tuning in documentation"
Renamed the batch iteration function (iterate_delineated_batches -> iterate_completion_batches).
File        | Last commit message                                                  | Date
__init__.py | LoRA: Extract small function (#614)                                  | 2024-06-02 06:38:42 -07:00
datasets.py | Replace iterate_input_masked_batches with iterate_delineated_batches, an updated attempt to better sync with iterate_batches logic | 2024-11-05 15:17:23 -05:00
dora.py     | Feature: QDoRA (#891)                                                | 2024-09-30 08:01:11 -07:00
lora.py     | Allow the entire model to be targeted for LoRA and DoRA fine-tuning: LoRA and DoRA embeddings with small DoRALinear bug fix (#914) | 2024-08-16 07:38:36 -07:00
trainer.py  | Add input masking for fine-tuning in documentation                   | 2024-11-10 09:54:32 -05:00
utils.py    | Adding full finetuning (#903)                                        | 2024-09-29 17:12:47 -07:00