PyTorch: build flash attention by default, except in CI (#48521)
* PyTorch: build flash attention by default, except in CI
* Variant is boolean, only available when +cuda/+rocm
* desc -> _desc
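The variant behavior described in the commit message can be pictured as a conditional boolean variant in the py-torch recipe. The snippet below is a minimal sketch, not the actual package.py changed by this commit: the simplified PyTorch class and the explicit cuda/rocm variants are stand-ins (the real recipe normally gets those from its CudaPackage/ROCmPackage base classes), and only the use of Spack's variant(..., when=...) directive is the point being illustrated.

from spack.package import *


class PyTorch(PythonPackage):
    """Minimal sketch only -- not the real py-torch recipe."""

    # Stand-ins for the +cuda / +rocm variants the real package inherits
    # from its CudaPackage / ROCmPackage base classes.
    variant("cuda", default=False, description="Build with CUDA support")
    variant("rocm", default=False, description="Build with ROCm support")

    # Boolean variant, enabled by default, but only defined on specs that
    # already have +cuda or +rocm (a Spack conditional variant).
    variant("flash_attention", default=True, when="+cuda",
            description="Build the flash attention kernels")
    variant("flash_attention", default=True, when="+rocm",
            description="Build the flash attention kernels")

Because the variant now defaults to on, the CI stack environments below opt out explicitly by adding ~flash_attention to the py-torch requirements in each spack.yaml.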
@@ -12,6 +12,13 @@ spack:
       require: ~cuda
     mpi:
       require: openmpi
+    py-torch:
+      require:
+        - target=aarch64
+        - ~rocm
+        - +cuda
+        - cuda_arch=80
+        - ~flash_attention

   specs:
   # Horovod
@@ -12,6 +12,13 @@ spack:
       require: ~cuda
     mpi:
       require: openmpi
+    py-torch:
+      require:
+        - target=x86_64_v3
+        - ~rocm
+        - +cuda
+        - cuda_arch=80
+        - ~flash_attention

   specs:
   # Horovod
@@ -11,6 +11,13 @@ spack:
       require: "osmesa"
     mpi:
       require: openmpi
+    py-torch:
+      require:
+        - target=x86_64_v3
+        - ~cuda
+        - +rocm
+        - amdgpu_target=gfx90a
+        - ~flash_attention

   specs:
   # Horovod