chenguangjian.jk | c656c927dc | 2024-07-24 14:44:39 +08:00
run:
	#mlx_lm.server --model mlx-community/Meta-Llama-3.1-8B-Instruct-8bit --trust-remote-code --port 8722
	mlx_lm.server --model mlx-community/Mistral-Nemo-Instruct-2407-8bit --trust-remote-code --port 8722
k:
	ps -ef | grep '[m]lx_lm.server' | awk '{print $2}' | xargs kill -9
w:
	curl -X GET "http://127.0.0.1:9000/api/ai/WriteBlogRandomlyWithLLM?model=MLXLMServer" -H "Request-Origion:SwaggerBootstrapUi" -H "accept:*/*"
c:
	conda activate m3mlx
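A pipeline like the `k` target's `ps -ef | grep NAME | awk '{print $2}' | xargs kill -9` can match the `grep` process itself unless the pattern uses the bracket idiom (e.g. `'[m]lx_lm.server'`), because a live `ps` listing shows the bracketed pattern on grep's own command line, which the regex no longer matches. A small testable sketch of that idiom; the `pids_of` helper and the canned `ps` lines are hypothetical, for illustration only:

```shell
# pids_of PATTERN: print the PID column of ps -ef-style lines on stdin
# whose command matches PATTERN. With a bracketed pattern, the grep
# process in a live pipeline never matches itself.
pids_of() {
    grep "$1" | awk '{print $2}'
}

# Canned ps -ef output: the server process plus the grep process itself.
printf '%s\n' \
  'user  4242     1  0 10:00 ?  00:00:01 python -m mlx_lm.server --port 8722' \
  'user  4300  4242  0 10:01 ?  00:00:00 grep [m]lx_lm[.]server' \
  | pids_of '[m]lx_lm[.]server'
# → 4242  (only the server PID; the grep line is not matched)
```

In a live shell the same idea is simply `ps -ef | grep '[m]lx_lm.server' | awk '{print $2}' | xargs kill -9`, or `pkill -f mlx_lm.server` where available.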
chenguangjian.jk | 8e3f04f66c | 2024-07-24 11:45:37 +08:00
mlx_lm.server --model mlx-community/Meta-Llama-3.1-8B-Instruct-8bit --trust-remote-code --port 8722
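Once `mlx_lm.server` is listening on the chosen port, it can be queried over HTTP; a hedged sketch of a chat request, assuming the server exposes an OpenAI-compatible `/v1/chat/completions` endpoint (verify the path and request shape against the mlx-lm documentation):

```shell
# Build an OpenAI-style chat payload for the model started above.
payload='{"model": "mlx-community/Meta-Llama-3.1-8B-Instruct-8bit", "messages": [{"role": "user", "content": "Hello"}], "max_tokens": 64}'

# With the server running on port 8722, the request would be:
# curl -s http://127.0.0.1:8722/v1/chat/completions \
#   -H 'Content-Type: application/json' -d "$payload"
echo "$payload"
```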
Anchen | 362e88a744 | 2024-01-23 08:44:37 -08:00
feat: move lora into mlx-lm (#337)
* feat: Add lora and qlora training to mlx-lm
Co-authored-by: Awni Hannun <awni@apple.com>
Awni Hannun | c6440416a2 | 2024-01-12 10:25:56 -08:00
Mlx llm package (#301)
* fix converter
* add recursive files
* remove gitignore
* remove gitignore
* add packages properly
* read me update
* remove dup readme
* relative
* fix convert
* fix community name
* fix url
* version