mlx-examples/llms/tests
Latest commit: 1963df8565 — Allow prompt callback to generate_step (#1133), by Awni Hannun, 2024-12-03 16:17:14 -08:00

Commit message body:
* allow prompt callback and use in cache_prompt
* nit
* comments
* bump version
File                     | Last commit                                                           | Date
test_datsets.py          | Configuration-based use of HF hub-hosted datasets for training (#701) | 2024-06-26 10:20:50 -07:00
test_finetune.py         | Enable distributed LoRA training (#821)                               | 2024-11-02 18:02:31 -07:00
test_generate.py         | Generation refactor: part 2 (#1099)                                   | 2024-11-23 11:47:06 -08:00
test_gguf.py             | fix(mlx-lm): type hints in gguf.py (#621)                             | 2024-03-26 07:56:01 -07:00
test_models.py           | Add olmo2 (#1128)                                                     | 2024-12-02 11:42:58 -08:00
test_prompt_cache.py     | Allow prompt callback to generate_step (#1133)                        | 2024-12-03 16:17:14 -08:00
test_sample_utils.py     | Generation refactor: part 2 (#1099)                                   | 2024-11-23 11:47:06 -08:00
test_server.py           | Prompt caching in mlx_lm.server (#1026)                               | 2024-10-14 10:57:22 -07:00
test_tokenizers.py       | Generation refactor: part 2 (#1099)                                   | 2024-11-23 11:47:06 -08:00
test_tuner_utils.py      | LoRA: Extract small function (#614)                                   | 2024-06-02 06:38:42 -07:00
test_utils_load_model.py | support load model by custom get_model_classes (#899)                 | 2024-07-25 11:01:17 -07:00
test_utils.py            | Fix whipser conversion for safetensors models (#935)                  | 2024-08-14 10:22:04 -07:00