mlx-examples/llms/tests
Latest commit: 845efddc8c "Fix decoding manually added tokens (#1164)" by Billel Mokeddem, 2024-12-17 09:54:29 -08:00

Commit message:

* Fix decoding manually added tokens
* fix + test
* nit
* nit
* no lag bpe

Co-authored-by: Awni Hannun <awni@apple.com>
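For context on what this commit exercises: tokens added to a tokenizer after training (chat-control or tool-call markers, for example) are easy to mishandle when decoding a stream of token ids, since a naive per-token decode can disagree with decoding the full sequence at once. The sketch below is illustrative only and is not the mlx_lm test code; it uses the Hugging Face transformers API, and the checkpoint name and added token are assumptions.

```python
# Illustrative sketch, not the mlx_lm test harness. Assumes the Hugging Face
# `transformers` package; the checkpoint ("gpt2") and the added token are
# hypothetical choices for demonstration.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.add_tokens(["<my_added_token>"])  # a manually added token

text = "hello <my_added_token> world"
ids = tokenizer.encode(text)

# Decode the whole sequence at once, then token by token. A streaming
# detokenizer is expected to match the full decode; manually added tokens
# are a common place where naive incremental decoding drifts.
full = tokenizer.decode(ids)
incremental = "".join(tokenizer.decode([i]) for i in ids)
print(repr(full))
print(repr(incremental))
print("match:", full == incremental)
```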
File | Last commit | Date
test_datsets.py | Configuration-based use of HF hub-hosted datasets for training (#701) | 2024-06-26 10:20:50 -07:00
test_finetune.py | Enable distributed LoRA training (#821) | 2024-11-02 18:02:31 -07:00
test_generate.py | Generation refactor: part 2 (#1099) | 2024-11-23 11:47:06 -08:00
test_gguf.py | fix(mlx-lm): type hints in gguf.py (#621) | 2024-03-26 07:56:01 -07:00
test_models.py | Add support for cohere2 (#1157) | 2024-12-16 08:01:03 -08:00
test_prompt_cache.py | Allow prompt callback to generate_step (#1133) | 2024-12-03 16:17:14 -08:00
test_sample_utils.py | Generation refactor: part 2 (#1099) | 2024-11-23 11:47:06 -08:00
test_server.py | Prompt caching in mlx_lm.server (#1026) | 2024-10-14 10:57:22 -07:00
test_tokenizers.py | Fix decoding manually added tokens (#1164) | 2024-12-17 09:54:29 -08:00
test_tuner_utils.py | LoRA: Extract small function (#614) | 2024-06-02 06:38:42 -07:00
test_utils_load_model.py | Support for multiple EOS tokens (#1141) | 2024-12-09 08:53:58 -08:00
test_utils.py | Fix whipser conversion for safetensors models (#935) | 2024-08-14 10:22:04 -07:00
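The files above look like standard unittest modules, so a minimal sketch for running the whole suite locally, assuming they follow normal unittest discovery conventions and are run from the llms directory, is:

```python
# Minimal sketch: run the suite via unittest discovery. Assumes the tests
# follow standard unittest conventions and this is run from the llms/ folder.
import unittest

suite = unittest.defaultTestLoader.discover("tests", pattern="test_*.py")
unittest.TextTestRunner(verbosity=2).run(suite)
```

Equivalently, `python -m unittest discover -s tests` from the same directory should work under the same assumption.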