* Add colorized output option to generate script
Two new functions colorize the script's output based on the T[0] probability. The `generate_step` function in utils.py was updated to support colorization, and a colorization argument was added to the command-line parser (see the sketch at the end of this block).
* Rename 'colorize' parameter to 'return_probability' in generate_step
* add an option to apply the tokenizer chat template
* fix the option to apply the tokenizer chat template
* better error messages for chat template issues
* apply the chat template by default when possible
* nit in comment
* rebase
---------
Co-authored-by: Awni Hannun <awni@apple.com>
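A rough illustration of the colorization described above, not the repo's actual helper: it assumes `generate_step` yields `(token, probability)` pairs when `return_probability=True`, and the `colorprint` functions and thresholds are made up for the sketch.

```python
def colorprint(color: str, text: str) -> None:
    # Print without a newline, wrapped in an ANSI color escape.
    codes = {"red": 31, "yellow": 33, "green": 32, "white": 39}
    print(f"\033[{codes[color]}m{text}\033[0m", end="", flush=True)

def colorprint_by_probability(prob: float, text: str) -> None:
    # Illustrative thresholds: greener means the model assigned the sampled
    # token a higher probability (its T[0] probability).
    if prob > 0.95:
        colorprint("white", text)
    elif prob > 0.70:
        colorprint("green", text)
    elif prob > 0.30:
        colorprint("yellow", text)
    else:
        colorprint("red", text)

# Usage inside the generation loop, assuming the renamed keyword:
# for token, prob in generate_step(prompt, model, temp, return_probability=True):
#     colorprint_by_probability(prob.item(), tokenizer.decode([token.item()]))
```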
* refactor(qwen): moving qwen into mlx-lm
* chore: update doc
* chore: fix type hint
* add qwen model support in convert
* chore: fix doc
* chore: only load model in quantize_model
* chore: make the convert script only copy tokenizer files instead of loading and saving them (see the sketch at the end of this block)
* chore: update docstring
* chore: remove unnecessary try catch
* chore: clean up tokenizer handling and update transformers to 4.37
* nits in README
---------
Co-authored-by: Awni Hannun <awni@apple.com>
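A rough sketch of the "only copy tokenizer files" change above, assuming the model repo has already been fetched to a local directory; the `copy_tokenizer_files` name and the file patterns are illustrative.

```python
import glob
import shutil
from pathlib import Path

def copy_tokenizer_files(model_path: Path, save_path: Path) -> None:
    """Copy tokenizer artifacts verbatim instead of round-tripping them
    through AutoTokenizer.from_pretrained(...).save_pretrained(...)."""
    save_path.mkdir(parents=True, exist_ok=True)
    # Illustrative patterns; the real file set depends on the tokenizer.
    patterns = ["tokenizer.json", "tokenizer_config.json", "*.model", "special_tokens_map.json"]
    for pattern in patterns:
        for file in glob.glob(str(model_path / pattern)):
            shutil.copy(file, save_path)
```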
* Implement normalizing flow Real NVP example (see the coupling-layer sketch at the end of this block)
* Add requirements and basic usage to normalizing flow example
* Minor changes to README in normalizing flow example
* Remove trailing commas in function arguments for unified formatting in flows example
* Fix minor typos, add some annotations
* format + nits in README
* readme fix
* move, minor changes in main, copyright
* remove debug
* fix
* Simplified class structure in distributions; better code re-use in bijectors
* Remove rogue space
* change name again
* nits
---------
Co-authored-by: Awni Hannun <awni@apple.com>
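For context on the Real NVP example, here is a minimal affine coupling layer sketched in MLX; it illustrates the technique (masked split, scale/translation network, log-determinant term) and is not the example's actual `bijectors`/`distributions` classes.

```python
import mlx.core as mx
import mlx.nn as nn

class AffineCoupling(nn.Module):
    """One Real NVP coupling layer: half the inputs pass through unchanged
    and parameterize an affine transform of the other half."""

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.d = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.d, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.d)),
        )

    def __call__(self, x: mx.array):
        x1, x2 = x[..., : self.d], x[..., self.d :]
        log_s, t = mx.split(self.net(x1), 2, axis=-1)
        log_s = mx.tanh(log_s)  # keep the scales bounded for stability
        y2 = x2 * mx.exp(log_s) + t
        # Second return value is this layer's contribution to log |det J|.
        return mx.concatenate([x1, y2], axis=-1), log_s.sum(axis=-1)

    def inverse(self, y: mx.array):
        y1, y2 = y[..., : self.d], y[..., self.d :]
        log_s, t = mx.split(self.net(y1), 2, axis=-1)
        log_s = mx.tanh(log_s)
        x2 = (y2 - t) * mx.exp(-log_s)
        return mx.concatenate([y1, x2], axis=-1), -log_s.sum(axis=-1)
```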
* Add missing keyword to the decoding options
* Reverting last commit
* Fixing transcribe keyword in benchmark.py
* Add argument name to load_model
This is intended to avoid confusion
* refactor: moving phi2 example into hf_llm
* chore: clean up
* chore: update phi2 model args so they can be loaded from the config (see the sketch at the end of this block)
* fix phi2 + nits + readme
* allow any HF repo, update README
* fix bug in llama
---------
Co-authored-by: Awni Hannun <awni@apple.com>
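The "load args from config" change above can be pictured with a sketch like the following, assuming a dataclass of model arguments and a Hugging Face style `config.json`; the field names are illustrative rather than phi2's exact configuration.

```python
import inspect
from dataclasses import dataclass

@dataclass
class ModelArgs:
    # Illustrative fields; a real dataclass mirrors the keys in config.json.
    model_type: str = "phi"
    vocab_size: int = 51200
    hidden_size: int = 2560
    num_hidden_layers: int = 32

    @classmethod
    def from_dict(cls, params: dict) -> "ModelArgs":
        # Keep only the keys the dataclass knows about so extra config
        # entries don't raise a TypeError in __init__.
        known = inspect.signature(cls).parameters
        return cls(**{k: v for k, v in params.items() if k in known})

# Usage:
# import json
# config = json.load(open("path/to/config.json"))
# args = ModelArgs.from_dict(config)
```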
* Add --local flag for reading models from the filesystem and related code for doing so (see the sketch at the end of this block)
* Disable uploading to huggingface if --local flag is set
* Remove code related to .bin files and merge fetch_from_local and fetch_from_hub into one function.
* Update llms/hf_llm/convert.py
Co-authored-by: Awni Hannun <awni.hannun@gmail.com>
* format / nits
---------
Co-authored-by: Awni Hannun <awni.hannun@gmail.com>
Co-authored-by: Awni Hannun <awni@apple.com>
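A condensed sketch of how a `--local` flag can gate the Hugging Face upload; apart from `--local` itself, the argument names and the placeholder helpers below are assumptions, not the script's real interface.

```python
import argparse
from pathlib import Path

def fetch_from_hub(hf_path: str) -> Path:
    # Placeholder: the real script downloads the repo from the Hugging Face Hub.
    raise NotImplementedError

def upload_to_hub(path: Path, repo_name: str) -> None:
    # Placeholder: the real script pushes the converted weights to the Hub.
    raise NotImplementedError

def main() -> None:
    parser = argparse.ArgumentParser(description="Convert a Hugging Face model to MLX format.")
    parser.add_argument("--hf-path", type=str, required=True,
                        help="Repo id on the Hub, or a filesystem path when --local is set.")
    parser.add_argument("--local", action="store_true",
                        help="Read the model from the local filesystem instead of the Hub.")
    parser.add_argument("--upload-name", type=str, default=None,
                        help="Name of the repo to upload the converted model to.")
    args = parser.parse_args()

    model_path = Path(args.hf_path) if args.local else fetch_from_hub(args.hf_path)
    # ... convert / quantize / save ...

    # Uploading to the Hub is disabled when --local is set.
    if args.upload_name is not None and not args.local:
        upload_to_hub(model_path, args.upload_name)

if __name__ == "__main__":
    main()
```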
* refactor: make the phi2 example load the model directly from Hugging Face without requiring conversion
* chore: add super().__init__() to all modules; omitting it causes an error in LoRA
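The `super().__init__()` point above matters because `mlx.nn.Module` does its own setup during initialization; skipping the call is what the commit notes breaks LoRA. A minimal sketch of the correct pattern (the layer itself is only an example):

```python
import mlx.core as mx
import mlx.nn as nn

class MLP(nn.Module):
    def __init__(self, dims: int, hidden: int):
        # Required: initializes the Module internals that track parameters
        # and submodules; omitting it surfaces as an error later (e.g. when
        # LoRA walks the module tree).
        super().__init__()
        self.up = nn.Linear(dims, hidden)
        self.down = nn.Linear(hidden, dims)

    def __call__(self, x: mx.array) -> mx.array:
        return self.down(nn.relu(self.up(x)))
```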
* Add word timestamps and confidence scores (see the sketch at the end of this block)
* Create a separate forward_with_cross_qk function
* Move multiple ops from np to mlx, clean comments
* Save alignment_heads
* Cast qk to fp32
* Add test for word-level timestamps and confidence scores
* format + readme
* nit
---------
Co-authored-by: Awni Hannun <awni@apple.com>
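As a rough illustration of the word-level confidence idea above (not Whisper's actual alignment, which uses the saved cross-attention QK weights), per-token probabilities grouped into a word might be combined like this; `WordTiming` and `word_confidence` are hypothetical names.

```python
import math
from dataclasses import dataclass

@dataclass
class WordTiming:
    # The kind of record a word-level result might carry.
    word: str
    start: float
    end: float
    probability: float

def word_confidence(token_probs: list) -> float:
    """Combine per-token probabilities into one word-level confidence.
    A geometric mean keeps a single unlikely sub-token from being
    averaged away."""
    log_p = sum(math.log(max(p, 1e-10)) for p in token_probs)
    return math.exp(log_p / len(token_probs))

# Example: a word the tokenizer split into three sub-tokens.
print(word_confidence([0.98, 0.91, 0.40]))  # ~0.71
```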