Author | Commit | Message | Date
Awni Hannun | 700b67fa3a | Merge pull request #90 from bofenghuang/fix-fp16: Fix whisper fp16 inference | 2023-12-13 07:29:10 -08:00
Awni Hannun | 3b7cfeb8ed | Merge pull request #88 from dastrobu/meta-form-url: fix "request access" form url for Llama models | 2023-12-13 07:20:51 -08:00
bofenghuang | 4b1a06c0cb | Fix fp16 | 2023-12-13 11:07:47 +01:00
Daniel Strobusch | 5515c2a75b | fix "request access" form url for Llama models | 2023-12-13 10:19:29 +01:00
Awni Hannun | 74c4ed40d2 | Merge pull request #76 from bofenghuang/add-whisper-large-v3: Add whisper-large-v3 | 2023-12-12 20:22:31 -08:00
Awni Hannun | a614e951c4 | Merge pull request #82 from ml-explore/llamav2: llama v2 with sharded weights | 2023-12-12 17:08:24 -08:00
Awni Hannun | a99e9d551e | hf correction | 2023-12-12 17:08:04 -08:00
Awni Hannun | d3bd2e5d68 | Merge pull request #79 from ml-explore/whisper_fp16: Enable FP16 for Whisper | 2023-12-12 17:05:21 -08:00
Awni Hannun | 66253a324c | Merge pull request #84 from iammerrick/patch-1: Update convert.py | 2023-12-12 17:02:21 -08:00
Awni Hannun | b7081feb62 | Merge pull request #86 from 1-ashraful-islam/patch-2: Update README.md with recently added examples | 2023-12-12 17:01:02 -08:00
Ashraful Islam | 2e6a6c32ae | Update README.md (updates readme with recently added examples) | 2023-12-12 18:26:13 -06:00
Merrick Christensen | 2206e8f7d9 | Update convert.py (Docs are right, however, the code has a typo.) | 2023-12-12 14:33:33 -07:00
Awni Hannun | e0a53edb46 | llama v1 request | 2023-12-12 13:32:05 -08:00
Awni Hannun | f0c57c1361 | llama v2 with sharded weights | 2023-12-12 12:48:15 -08:00
Awni Hannun | 9a02dce35c | Merge pull request #80 from 805karansaini/main: Typo Fix | 2023-12-12 12:20:13 -08:00
805karansaini | eae9431143 | Typo Fix | 2023-12-13 01:45:50 +05:30
Awni Hannun | 9b3f64d196 | Merge pull request #75 from ml-explore/mixtral: Mixtral | 2023-12-12 10:41:25 -08:00
Awni Hannun | 034d0cfc2e | nit | 2023-12-12 08:42:32 -08:00
Awni Hannun | 0f66a12721 | typos in readme | 2023-12-12 08:41:28 -08:00
Awni Hannun | 2ffd0da009 | mixtral runs a bit faster | 2023-12-12 08:36:40 -08:00
bofenghuang | 94705ed38b | Add large v3 | 2023-12-12 17:26:52 +01:00
Awni Hannun | e42682dced | initial mixtral | 2023-12-12 07:44:23 -08:00
Awni Hannun | 6e723a015a | whisper default in fp16 | 2023-12-12 07:37:35 -08:00
Awni Hannun | 13f1142eaa | Merge pull request #73 from jj701/mnist-requirements-txt: Adding Requirements.txt to the Mnist Example | 2023-12-11 19:37:53 -08:00
jj701 | bd742ec03c | Adding Requirements.txt | 2023-12-11 20:45:39 -06:00
Awni Hannun | 5be26ae91b | Merge pull request #69 from TristanBilot/main: Add Graph Convolutional Network example | 2023-12-11 14:22:47 -08:00
Tristan Bilot | b606bfa6a7 | fix comments before merge | 2023-12-11 23:10:46 +01:00
Tristan Bilot | b95e48e146 | use tree_flatten within L2 regularization | 2023-12-11 20:15:11 +01:00
Tristan Bilot | ed5a830626 | add GCN implementation | 2023-12-11 17:48:07 +01:00
Awni Hannun | ecd96acfe4 | Merge pull request #66 from Haixing-Hu/fix-issue-54: fix: fix issue #54, use CPU device to load the Torch model | 2023-12-10 18:57:51 -08:00
Haixing Hu | 5b62270556 | fix: fix issue #54, use CPU device to load the Torch model | 2023-12-11 10:54:55 +08:00
Awni Hannun | a4d932bf26 | fix conversion | 2023-12-10 16:56:41 -08:00
Awni Hannun | 2652b4f055 | Merge pull request #52 from ricardo-larosa/mistral_batch_size: Mistral: Pass argument --tokens_per_eval for token generation | 2023-12-10 11:25:23 -08:00
ricardo-larosa | ce9ba916a3 | Add arg tokens_per_eval for token generation | 2023-12-10 11:09:13 +01:00
Awni Hannun | 3a3ea3cfb0 | Merge pull request #53 from ml-explore/mistral_lora: Generalize lora finetuning to Mistral | 2023-12-09 15:05:29 -08:00
Awni Hannun | 5a5decf767 | revert accidental change | 2023-12-09 14:58:45 -08:00
Awni Hannun | 036090f508 | few more nits | 2023-12-09 14:20:19 -08:00
Awni Hannun | 98f4346c81 | black format | 2023-12-09 14:15:25 -08:00
Awni Hannun | b8332a1e66 | generalize lora finetuning for llama and mistral | 2023-12-09 14:13:55 -08:00
Awni Hannun | 46c6bbe0a1 | Merge pull request #43 from jbarrow/main: BERT implementation | 2023-12-09 09:03:49 -08:00
Joe Barrow | d873e10dfe | Updating README for current example, making python>=3.8 compatibile, and fixing code type | 2023-12-09 12:01:58 -05:00
Joe Barrow | 20d920a7eb | Requirements for running BERT | 2023-12-09 10:52:55 -05:00
Joe Barrow | 45ca4ed3f6 | Updating README | 2023-12-09 10:48:34 -05:00
Joe Barrow | 7320456226 | Cleaning implementation for merge | 2023-12-09 10:41:15 -05:00
Awni Hannun | 8b2a6fee33 | Merge pull request #50 from eltociear/patch-1: Update CONTRIBUTING.md | 2023-12-08 17:00:25 -08:00
Ikko Eltociear Ashimine | ba0bd65510 | Update CONTRIBUTING.md (formating -> formatting) | 2023-12-09 08:02:34 +09:00
Awni Hannun | 331690491f | Merge pull request #45 from bbelescot/clarify-llama-readme-instructions: 📝 Clarify python command for llama example | 2023-12-08 09:38:07 -08:00
bbelescot | 5bdd030387 | 📝 apply the path change to the convert cmd for consistency | 2023-12-08 17:11:50 +01:00
Joe Barrow | e05ee57bab | Update README for mlx-examples repo | 2023-12-08 10:20:50 -05:00
Awni Hannun | 6259c9a048 | Merge pull request #40 from Jacksonzhang0316/main: fix: Unsupported BFloat16 Data Type Issue with MPS Backend | 2023-12-08 06:48:21 -08:00