mirror of
https://github.com/ml-explore/mlx-examples.git
synced 2025-09-01 12:49:50 +08:00
QWEN: Fix unsupported ScalarType BFloat16
Env: Mac M1 Ultra
torch: torch-2.0.0, metal
Apple clang version 15.0.0 (clang-1500.0.40.1)
Target: arm64-apple-darwin23.1.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin
```
Traceback (most recent call last):
  File "/Volumes/v1/models/mlx-examples-main/llms/qwen/convert.py", line 110, in <module>
    convert(args)
  File "/Volumes/v1/models/mlx-examples-main/llms/qwen/convert.py", line 63, in convert
    weights = {replace_key(k): v.numpy() for k, v in state_dict.items()}
  File "/Volumes/v1/models/mlx-examples-main/llms/qwen/convert.py", line 63, in <dictcomp>
    weights = {replace_key(k): v.numpy() for k, v in state_dict.items()}
TypeError: Got unsupported ScalarType BFloat16
```
Fix: almost the same as [#10](429ddb30dc)
```diff
@@ -60,7 +60,7 @@ def convert(args):
         args.model, trust_remote_code=True, torch_dtype=torch.float16
     )
     state_dict = model.state_dict()
-    weights = {replace_key(k): v.numpy() for k, v in state_dict.items()}
+    weights = {replace_key(k): (v.numpy() if v.dtype != torch.bfloat16 else v.to(torch.float32).numpy()) for k, v in state_dict.items()}
     config = model.config.to_dict()
 
     if args.quantize:
```
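A minimal sketch of the underlying issue (assuming PyTorch is available): NumPy has no native bfloat16 dtype, so calling `.numpy()` on a bfloat16 tensor raises the TypeError seen in the traceback. Upcasting to float32 first, as the fix does, avoids it:

```python
import torch

# NumPy has no bfloat16 dtype, so this raises
# TypeError: Got unsupported ScalarType BFloat16
t = torch.zeros(4, dtype=torch.bfloat16)
try:
    t.numpy()
except TypeError as e:
    print(e)

# The workaround used in the fix: upcast to float32 before converting
arr = t.to(torch.float32).numpy()
print(arr.dtype)  # float32
```

The same guard (`v.dtype != torch.bfloat16`) keeps float16 weights on the fast path and only pays the upcast for bfloat16 tensors.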