mlx-examples/llms/mlx_lm/tuner
Anupam Mediratta 607c300e18 Add Direct Preference Optimization (DPO) method
Fixes #513

Implement the Direct Preference Optimization (DPO) method as a Reinforcement Learning from Human Feedback (RLHF) example. DPO fine-tunes a policy directly on preference pairs, avoiding the separate reward model and RL loop of classic RLHF.

* **Add DPO Functions**: Add `get_batched_logps` and `dpo_loss` functions to `llms/mlx_lm/utils.py` for the DPO implementation (a minimal sketch of both follows this list).
* **Update Training Logic**: Update `llms/mlx_lm/tuner/trainer.py` with DPO-specific training logic, including a new `dpo_loss` function and a condition that selects the DPO loss in the training loop.
* **Add Configuration Options**: Add configuration options for DPO in `llms/mlx_lm/examples/lora_config.yaml`.
* **Update Documentation**: Update `llms/mlx_lm/README.md` to include instructions for using DPO.
* **Add Unit Tests**: Add `llms/tests/test_dpo.py` with unit tests for `get_batched_logps`, `dpo_loss`, and DPO-specific training logic.
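
For reference, DPO minimizes `-log sigmoid(beta * (policy log-prob margin - reference log-prob margin))` over chosen/rejected completion pairs. Below is a minimal sketch of the two helpers named above; the argument shapes, parameter names, and the `beta` default are assumptions for illustration and may differ from the actual signatures in this change.

```python
import mlx.core as mx


def get_batched_logps(logits, targets, mask):
    """Sum of per-token log-probabilities of `targets` under `logits`.

    logits:  (B, T, V) raw model outputs
    targets: (B, T) token ids
    mask:    (B, T) 1.0 for real tokens, 0.0 for padding
    """
    # Numerically stable log-softmax over the vocabulary axis.
    logps = logits - mx.logsumexp(logits, axis=-1, keepdims=True)
    # Gather the log-prob of each target token, then mask and sum per sequence.
    per_token = mx.take_along_axis(logps, targets[..., None], axis=-1).squeeze(-1)
    return (per_token * mask).sum(axis=-1)


def dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss: -log(sigmoid(beta * (policy margin - reference margin)))."""
    margins = (policy_chosen - ref_chosen) - (policy_rejected - ref_rejected)
    # log(sigmoid(x)) == -logaddexp(0, -x), which avoids overflow for large |x|.
    losses = mx.logaddexp(0.0, -beta * margins)
    return losses.mean()
```

In the training loop, the policy and a frozen reference model would each score the chosen and rejected completions with `get_batched_logps`, and the four resulting per-sequence log-prob sums feed `dpo_loss`.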

---

For more details, open the [Copilot Workspace session](https://copilot-workspace.githubnext.com/ml-explore/mlx-examples/issues/513?shareId=XXXX-XXXX-XXXX-XXXX).
2025-02-12 15:21:21 +05:30
__init__.py LoRA: Extract small function (#614) 2024-06-02 06:38:42 -07:00
datasets.py Completion-only fine-tuning of instruction models with collections of HF datasets (#1103) 2025-02-09 20:12:34 -08:00
dora.py Feature: QDoRA (#891) 2024-09-30 08:01:11 -07:00
lora.py Allow the entire model to be targeted for LoRA and DoRA fine-tuning: LoRA and DoRA embeddings with small DoRALinear bug fix (#914) 2024-08-16 07:38:36 -07:00
trainer.py Add Direct Preference Optimization (DPO) method 2025-02-12 15:21:21 +05:30
utils.py hunyuan finetune (#1270) 2025-02-11 16:49:35 -08:00