mlx-examples/llms/tests
nathan ace2bb5890
Add logits_processor option to generate_step function (#983)
* Add a logits_processor option for generation, as in the Hugging Face transformers library

* concatenation correction

* Rename the tokens variable for clarity

* remove the logit_bias argument from the generate_step method

* fix the variable name

* nits + test

* test

* add back logit bias + test

---------

Co-authored-by: Awni Hannun <awni@apple.com>
2024-09-28 10:08:49 -07:00
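
PR #983 adds a logits_processor option to generate_step so callers can rewrite the logits before sampling, while the existing logit_bias argument is kept. Below is a minimal sketch of how such a processor might be wired up; the exact signature (a list of callables taking the generated tokens and the current logits and returning new logits), the model repo id, and the iteration/decoding details are assumptions based on the commit description, not confirmed by this listing.

```python
# Hedged sketch: pass a custom logits processor to mlx_lm's generate_step.
# Assumptions (not confirmed by this listing): logits_processor takes a list of
# callables (tokens, logits) -> logits applied before sampling; generate_step
# yields (token, logprobs) pairs with integer token ids; the model id below is
# a placeholder.
import mlx.core as mx
from mlx_lm import load
from mlx_lm.utils import generate_step


def ban_token(token_id: int):
    """Return a processor that assigns -inf to one vocabulary entry."""

    def processor(tokens: mx.array, logits: mx.array) -> mx.array:
        # Additive mask: -inf at the banned id, 0 everywhere else.
        mask = mx.arange(logits.shape[-1]) == token_id
        return logits + mx.where(mask, float("-inf"), 0.0)

    return processor


model, tokenizer = load("mlx-community/SomeModel-4bit")  # placeholder repo id
prompt = mx.array(tokenizer.encode("Hello"))

generated = []
for (token, _logprobs), _ in zip(
    generate_step(prompt, model, logits_processor=[ban_token(0)]),
    range(16),  # cap the number of generated tokens
):
    generated.append(token)

print(tokenizer.decode(generated))
```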
test_datsets.py Configuration-based use of HF hub-hosted datasets for training (#701) 2024-06-26 10:20:50 -07:00
test_finetune.py Allow the entire model to be targeted for LoRA and DoRA fine tuning: LoRA and DoRA embeddings with small DoRALinear bug fix (#914) 2024-08-16 07:38:36 -07:00
test_generate.py Add logits_processor option to generate_step function (#983) 2024-09-28 10:08:49 -07:00
test_gguf.py fix(mlx-lm): type hints in gguf.py (#621) 2024-03-26 07:56:01 -07:00
test_models.py Adding support for mamba (#940) 2024-09-28 07:02:53 -07:00
test_sample_utils.py Faster sampling with mx.compile (#937) 2024-08-15 11:29:09 -07:00
test_server.py Add /v1/models endpoint to mlx_lm.server (#984) 2024-09-28 07:21:11 -07:00
test_tuner_utils.py LoRA: Extract small function (#614) 2024-06-02 06:38:42 -07:00
test_utils_load_model.py support load model by custom get_model_classes (#899) 2024-07-25 11:01:17 -07:00
test_utils.py Fix whisper conversion for safetensors models (#935) 2024-08-14 10:22:04 -07:00
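
The suites above are assumed to be standard unittest modules; a hedged sketch of running them from the llms directory of mlx-examples (paths and conventions assumed, equivalent to `python -m unittest discover -s tests` on the command line):

```python
# Hedged sketch: discover and run the test files listed above with Python's
# built-in unittest runner, assuming they follow standard unittest conventions
# and this is executed from mlx-examples/llms.
import unittest

suite = unittest.defaultTestLoader.discover("tests", pattern="test_*.py")
unittest.TextTestRunner(verbosity=2).run(suite)
```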