mlx-examples/llms/tests
Latest commit 657b4cc0aa by Awni Hannun (2024-11-07 16:15:24 -08:00):
[MLX LM] Sampler refactor + a few improvements (#1094)
* starting
* refactor sampler/processor and a few improvements
* fix stream
* fix stream generate
* fix eos handling in stream generate
test_datsets.py Configuration-based use of HF hub-hosted datasets for training (#701) 2024-06-26 10:20:50 -07:00
test_finetune.py Enable distributed LoRA training (#821) 2024-11-02 18:02:31 -07:00
test_generate.py [MLX LM] Sampler refactor + a few improvements (#1094) 2024-11-07 16:15:24 -08:00
test_gguf.py fix(mlx-lm): type hints in gguf.py (#621) 2024-03-26 07:56:01 -07:00
test_models.py More cache improvements (#1015) 2024-10-07 20:45:51 -07:00
test_prompt_cache.py [MLX LM] Sampler refactor + a few improvements (#1094) 2024-11-07 16:15:24 -08:00
test_sample_utils.py Faster sampling with mx.compile (#937) 2024-08-15 11:29:09 -07:00
test_server.py Prompt caching in mlx_lm.server (#1026) 2024-10-14 10:57:22 -07:00
test_tokenizers.py fix spm decoder multi-byte (#1092) 2024-11-05 06:06:26 -08:00
test_tuner_utils.py LoRA: Extract small function (#614) 2024-06-02 06:38:42 -07:00
test_utils_load_model.py support load model by custom get_model_classes (#899) 2024-07-25 11:01:17 -07:00
test_utils.py Fix whisper conversion for safetensors models (#935) 2024-08-14 10:22:04 -07:00