* Added support for the MiniCPM architecture
* Updated utils.py and LORA.md
* Update implementation details for MiniCPM architecture
* Cleaning up
* fixed the missing lm_head layer problem
* Refactor Model class to dynamically handle tied and untied word embeddings (see the sketch after this list)
* Quick update
* added a dynamic RoPE scaling base calculation
* quick fix and clean up
* clean up again
* removed the MiniCPMNorm class as it's not used
* forgot something, sorry
* format
* version bump
---------
Co-authored-by: Awni Hannun <awni@apple.com>
* wip
* feat: convert mlx model to gguf f16
* chore: convert norm layers to float32 to avoid overflow issue (see the sketch after this list)
* chore: add support for mixtral
* chore: clean up
* chore: remove unused import statement
* chore: clean up weight name mapping
* version and readme
* actual version bump
---------
Co-authored-by: Awni Hannun <awni@apple.com>
* Convert mlx_lm.lora to use YAML configuration
* pre-commit run fixes
* Fix loading of config file
* Remove invalid YAML from doc
* Update command-line options and YAML parameter overriding, per feedback in #503
* Minor wording change
* Positional argument
* Moved config to a (-c/--config) flag
* Removed CLI option defaults (since CLI options take precedence and their defaults are in CONFIG_DEFAULTS)
* pre-commit format updates
* Fix handling of CLI option defaults
* Prevent None values of unspecified CLI options from overwriting values from CONFIG_DEFAULTS (see the sketch after this list)
* nits
---------
Co-authored-by: Awni Hannun <awni@apple.com>