Compare commits


64 Commits

Author SHA1 Message Date
Wouter Deconinck
53a45c820c Update entrypoint.bash 2023-11-11 21:39:42 -06:00
Christian Glusa
15dcd3c65c py-pynucleus: Add variant, modify dependencies (#41006) 2023-11-11 16:24:12 -06:00
Matthew Archer
49c2894def update to latest version (#40905) 2023-11-11 16:16:45 -06:00
Stephen Hudson
1ae37f6720 libEnsemble: add v1.1.0 (#40969) 2023-11-11 16:15:43 -06:00
Adrien Berchet
15f6368c7f Add geomdl package (#40933) 2023-11-11 15:55:08 -06:00
Terry Cojean
57b63228ce Ginkgo: 1.7.0, change compatibility, update option oneapi->sycl (#40874)
Signed-off-by: Terry Cojean <terry.cojean@kit.edu>
2023-11-11 09:00:52 -06:00
Greg Becker
13abfb7013 spack deconcretize command (#38803)
We have two ways to concretize now:
* `spack concretize` concretizes only the root specs that are not concrete in the environment.
* `spack concretize -f` eliminates all cached concretization data and reconcretizes the *entire* environment.

This PR adds `spack deconcretize`, which eliminates cached concretization data for a spec.  This allows
users greater control over what is preserved from their `spack.lock` file and what is reused when not
using `spack concretize -f`.  If you want to update a spec installed in your environment, you can call
`spack deconcretize` on it, and that spec and any relevant dependents will be removed from the lock file.

`spack deconcretize` has two options:
* `--root`: limits deconcretized specs to *specific* roots in the environment. You can use this to
  deconcretize exactly one root in a `unify: false` environment. For example, if roots `foo` and `bar`
  are both in the environment and `foo` depends on `bar`, `spack deconcretize --root bar` will *not* deconcretize `foo`.
* `--all`: deconcretize *all* specs that match the input spec. By default `spack deconcretize`
  will complain about multiple matches, like `spack uninstall`.
2023-11-10 14:55:35 -08:00
Nils Lehmann
b41fc1ec79 new release (#41010) 2023-11-10 11:52:59 -07:00
Henri Menke
124e41da23 libpspio 0.3.0 (#40953)
Co-authored-by: Alec Scott <alec@bcs.sh>
2023-11-10 09:48:50 -08:00
Massimiliano Culpo
f6039d1d45 builtin.repo: fix ^mkl pattern in minor packages (#41003)
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2023-11-10 17:18:24 +01:00
Victoria Cherkas
8871bd5ba5 fdb: add dependency on eckit later release (#40737)
* depends_on("eckit@1.24.4:", when="@5.11.22:")

* Update var/spack/repos/builtin/packages/fdb/package.py

Co-authored-by: Alec Scott <alec@bcs.sh>

* make latest tagged release the default install

* revert f258f46660

---------

Co-authored-by: Alec Scott <alec@bcs.sh>
2023-11-10 07:54:25 -08:00
Cody Balos
efe85755d8 alquimia: apply patch for iso_c_binding to latest version (#40989) 2023-11-10 08:31:38 -06:00
Cody Balos
7aaa17856d pflotran: tweak for building with xsdk rocm/hip (#40990) 2023-11-10 08:30:35 -06:00
Massimiliano Culpo
fbf02b561a gromacs et al: fix ^mkl pattern (#41002)
The ^mkl pattern was used to refer to three packages
even though none of the software using it actually
depended on "mkl".

This pattern, which follows Hyrum's law, is now being
removed in favor of a more explicit one.

In this PR gromacs, abinit, lammps, and quantum-espresso
are modified.

Intel packages are also modified to provide "lapack"
and "blas" together.
2023-11-10 13:56:04 +00:00
Harmen Stoppels
4027a2139b env: compute env mods only for installed roots (#40997)
And improve the error message (load vs unload).

Of course you could have some uninstalled dependency too, but as long as
it doesn't implement `setup_run_environment` etc, I don't think it hurts
to attempt to load the root anyways, given that failure to do so is a
warning, not a fatal error.
2023-11-10 12:32:48 +01:00
Todd Gamblin
f0ced1af42 info: rework spack info command to display variants better (#40998)
This changes variant display to use a much more legible format, and to use screen space
much better (particularly on narrow terminals). It also adds color to the variant display
to match other parts of `spack info`.

Descriptions and variant value lists that were frequently squished into a tiny column
before now have closer to the full terminal width.

This change also preserves any whitespace formatting present in `package.py`, so package
maintainers can write easier-to-read descriptions of variant values if they want. For
example, `gasnet` has had a nice description of the `conduits` variant for a while, but
it was wrapped and made illegible by `spack info`. That is now fixed and the original
newlines are kept.

Conditional variants are grouped by their when clauses by default, but if you do not
like the grouping, you can display all the variants in order with `--variants-by-name`.
I'm not sure when people will prefer this, but it makes it easier to tell that a
particular variant is/isn't there. I do think grouping by `when` is the better default.
2023-11-10 12:31:28 +01:00
David Boehme
2e45edf4e3 Add adiak v0.4.0 (#40993)
* Add adiak v0.4.0
* Fix style checks
2023-11-09 21:23:00 -07:00
Adam J. Stewart
4bcfb01566 py-black: add v23.10: (#40959) 2023-11-10 00:10:28 +01:00
Adam J. Stewart
b8bb8a70ce PyTorch: specify CUDA root directory (#40855) 2023-11-09 14:25:54 -08:00
Hariharan Devarajan
dd2b436b5a new release cpp-logger v0.0.2 (#40972) 2023-11-09 13:08:04 -08:00
Hariharan Devarajan
da2cc2351c Release Gotcha v1.0.5 (#40973) 2023-11-09 13:06:56 -08:00
eugeneswalker
383ec19a0c Revert "Deactivate Cray sles, due to unavailable runner (#40291)" (#40910)
This reverts commit 4b06862a7f.
2023-11-09 12:24:18 -08:00
Todd Gamblin
45f8a0e42c docs: tweak formatting of +: and -: operators (#40988)
Just trying to make these stand out a bit more in the docs.
2023-11-09 19:55:29 +00:00
Dom Heinzeller
4636a7f14f Add symlinks for hdf5 library names when built in debug mode (#40965)
* Add symlinks for hdf5 library names when built in debug mode
* Only apply bug fix for debug libs when build type is Debug
2023-11-09 11:40:53 -08:00
Kelly (KT) Thompson
38f3f57a54 [lcov] Add build and runtime deps necessary for lcov@2.0.0: (#40974)
* [lcov] Add build and runtime deps necessary for lcov@2.0.0:
   + Many additional Perl package dependencies are required for the new version of lcov.
   + Some of the new dependencies were not known to spack until now.
* Style fix
2023-11-09 11:37:38 -08:00
Satish Balay
b17d7cd0e6 mfem: add hipblas dependency for superlu-dist (#40981) 2023-11-09 11:19:48 -08:00
Satish Balay
b5e2f23b6c hypre: add in hipblas dependency due to superlu-dist (#40980) 2023-11-09 11:03:03 -08:00
Brian Van Essen
7a4df732e1 DiHydrogen, Hydrogen, and Aluminum CachedCMakePackage (#39714) 2023-11-09 19:08:37 +01:00
George Young
7e6aaf9458 py-macs3: adding zlib dependency (#40979) 2023-11-09 16:44:24 +01:00
Cody Balos
2d35d29e0f sundials: add v6.6.2 (#40920) 2023-11-09 07:38:40 -06:00
Scott Wittenburg
1baed0d833 buildcache: skip unrecognized metadata files (#40941)
This commit improves forward compatibility of Spack with newer build cache metadata formats.

Before this commit, invalid or unrecognized metadata would be a fatal error; now it just causes
a mirror to be skipped.

Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
2023-11-09 13:30:41 +00:00
Harmen Stoppels
cadc2a1aa5 Set version to 0.22.0.dev0 (#40975) 2023-11-09 10:02:29 +01:00
Satish Balay
78449ba92b intel-oneapi-mkl: do not set __INTEL_POST_CFLAGS env variable (#40947)
This triggers warnings from the icx compiler, which breaks the petsc configure:

$ I_MPI_CC=icx /opt/intel/oneapi/mpi/2021.7.0/bin/mpiicc -E a.c > /dev/null
$ __INTEL_POST_CFLAGS=-Wl,-rpath,/opt/intel/oneapi/mkl/2022.2.0/lib/intel64 I_MPI_CC=icx /opt/intel/oneapi/mpi/2021.7.0/bin/mpiicc -E a.c > /dev/null
icx: warning: -Wl,-rpath,/opt/intel/oneapi/mkl/2022.2.0/lib/intel64: 'linker' input unused [-Wunused-command-line-argument]
2023-11-09 08:40:12 +01:00
Massimiliano Culpo
26d6bfbb7f modules: remove deprecated code and test data (#40966)
This removes a few deprecated attributes from the
schema of the "modules" section. Test data for
deprecated options is removed as well.
2023-11-09 08:15:46 +01:00
Sergio Sánchez Ramírez
3405fe60f1 libgit2: add python as test dependency (#40863)
Libgit2 requires python as a build dependency. I was getting an error because it was falling back to the system Python, which is compiled with Intel compilers; `libgit2` then failed because it couldn't find `libimf.so` (which doesn't make sense).

Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2023-11-08 23:20:55 +01:00
Harmen Stoppels
53c266b161 modules: restore exclude_implicits (#40958) 2023-11-08 22:56:55 +01:00
Thomas Madlener
ed8ecc469e podio: Add the latest tag (0.17.2) (#40956)
* podio: Add myself as maintainer
* podio: Add 0.17.2 tag
2023-11-08 14:53:23 -07:00
Harmen Stoppels
b2840acd52 Revert "defaults/modules.yaml: hide implicits (#40906)" (#40955)
This reverts commit a2f00886e9.
2023-11-08 14:33:50 -07:00
Tom Vander Aa
c35250b313 libevent: always autogen.sh (#40945)
The libevent release tarballs ship with a `configure` script generated by an old `libtool`. The `libtool` generated by `configure` is not compatible with `MACOSX_DEPLOYMENT_TARGET` > 10. Regenerating the `configure` script fixes the build on macOS.

Original configure contains:
```
    case $host_os in
    rhapsody* | darwin1.[012])
      _lt_dar_allow_undefined='$wl-undefined ${wl}suppress' ;;
    darwin1.*)
      _lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;;
    darwin*) # darwin 5.x on
      # if running on 10.5 or later, the deployment target defaults
      # to the OS version, if on x86, and 10.4, the deployment
      # target defaults to 10.4. Don't you love it?
      case ${MACOSX_DEPLOYMENT_TARGET-10.0},$host in
        10.0,*86*-darwin8*|10.0,*-darwin[91]*)
          _lt_dar_allow_undefined='$wl-undefined ${wl}dynamic_lookup' ;;
        10.[012][,.]*)
          _lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;;
        10.*)
          _lt_dar_allow_undefined='$wl-undefined ${wl}dynamic_lookup' ;;
      esac
```

After re-running `autogen.sh`:
```
    case $host_os in
    rhapsody* | darwin1.[012])
      _lt_dar_allow_undefined='$wl-undefined ${wl}suppress' ;;
    darwin1.*)
      _lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;;
    darwin*)
      case $MACOSX_DEPLOYMENT_TARGET,$host in
        10.[012],*|,*powerpc*-darwin[5-8]*)
          _lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;;
        *)
          _lt_dar_allow_undefined='$wl-undefined ${wl}dynamic_lookup' ;;
      esac
```
2023-11-08 22:33:09 +01:00
Adam J. Stewart
e114853115 py-lightning: add v2.1.1 (#40957) 2023-11-08 13:15:23 -08:00
Cameron Smith
89fc9a9d47 lcov: add version2, embed perl path in binaries (#39342)
* lcov: add version2, perl dep at build and runtime
* lcov: add runtime deps
* namespace-autoclean: new perl package
* datetime: dep on autoclean
* formatting
2023-11-08 11:23:23 -08:00
Massimiliano Culpo
afc693645a tcl: filter compiler wrappers to avoid pointing to Spack (#40946) 2023-11-08 19:38:41 +01:00
downloadico
4ac0e511ad abinit: add v9.10.3 (#40919)
* abinit: add v9.10.3
Changed configure arguments for specifying how to use Wannier90 for versions
after 9.8.
When the mpi variant is requested, set the F90 environment variable to point
to the MPI Fortran wrapper when building versions after 9.8, instead of FC.
---------

Co-authored-by: Alec Scott <hi@alecbcs.com>
2023-11-08 10:15:49 -08:00
Henri Menke
b0355d6cc0 ScaFaCoS 1.0.4 (#40948) 2023-11-08 10:17:58 -07:00
Konstantinos Parasyris
300d53d6f8 Add new tag on AMS (#40949) 2023-11-08 08:52:53 -08:00
Greg Becker
0b344e0fd3 tutorial stack: update for changes to the basics section for SC23 (#40942) 2023-11-07 23:46:57 -08:00
Peter Scheibel
15adb308bf RAJA package: find libs (#40885) 2023-11-08 08:33:04 +01:00
Michael Kuhn
050d565375 julia: constrain patchelf version (#40938)
* julia: constrain patchelf version

patchelf@0.18 breaks (at least) `libjulia-internal.so`, leading to
errors like:
```
$ julia --version
ERROR: Unable to load dependent library $SPACK/opt/spack/linux-centos8-x86_64_v3/gcc-12.3.0/julia-1.9.2-6hf5qx2q27jth2fkm6kgqmfdlhzzw6pl/bin/../lib/julia/libjulia-internal.so.1
Message:$SPACK/opt/spack/linux-centos8-x86_64_v3/gcc-12.3.0/julia-1.9.2-6hf5qx2q27jth2fkm6kgqmfdlhzzw6pl/bin/../lib/julia/libjulia-internal.so.1: ELF load command address/offset not properly aligned
```

* patchelf: prefer v0.17.x since v0.18 breaks libraries
2023-11-08 08:13:54 +01:00
Matthew Thompson
f6ef2c254e mapl: add v2.41 and v2.42 (#40870)
* mapl: add 2.41 and 2.42

* Conflict MPICH 3
2023-11-07 17:36:11 -08:00
Freifrau von Bleifrei
62c27b1924 discotec: add compression variant (#40925) 2023-11-07 14:58:48 -08:00
SWAT Team (JSC)
2ff0766aa4 adds cubew 4.8.1, cubelib 4.8.1 and cubegui 4.8.1, 4.8.2 (#40612)
* exago: fix v1.5.1 tag; only allow python up to 3.10 for @:1.5 (#40676)
* exago: fix v1.5.1 tag; only allow python up to 3.10 for @:1.5 due to pybind error with py 3.11
* hiop@:1.0 +cuda: constrain to cuda@:11.9
* fixes syntax of maintainers

---------

Co-authored-by: eugeneswalker <38933153+eugeneswalker@users.noreply.github.com>
2023-11-07 14:40:36 -08:00
Mark W. Krentel
dc245e87f9 intel-xed: fix git hash for mbuild, add version 2023.10.11 (#40922)
* intel-xed: fix git hash for mbuild, add version 2023.10.11
   Fixes #40912
* Fix the git commit hash for mbuild 2022.04.17.  This was broken in
   commit eef9939c21 by mixing up the hashes for xed versus mbuild.
* Add versions 2023.08.21 and 2023.10.11.
* fix style
2023-11-07 14:36:42 -08:00
Harmen Stoppels
c1f134e2a0 tutorial: use lmod@8.7.18 because @8.7.19: has bugs (#40939) 2023-11-07 23:04:45 +01:00
Mosè Giordano
391940d2eb julia: Add v1.9.3 (#40911) 2023-11-07 22:06:12 +01:00
Adam J. Stewart
8c061e51e3 sleef: build shared libs (#40893) 2023-11-07 21:48:59 +01:00
Richarda Butler
5774df6b7a Propagate variant across nodes that don't have that variant (#38512)
Before this PR, variants were not propagated to leaf nodes that could accept 
the propagated value, if some intermediate node couldn't accept it.

This PR fixes that issue by marking nodes as "candidate" for propagation
and by setting the variant only if it can be accepted by the node.

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2023-11-07 21:04:41 +01:00
Harmen Stoppels
3a5c1eb5f3 tutorial pipeline: force gcc@12.3.0 (#40937) 2023-11-07 20:53:44 +01:00
Harmen Stoppels
3a2ec729f7 Ensure global command line arguments end up in args like before (#40929) 2023-11-07 20:35:56 +01:00
Jacob King
a093f4a8ce superlu-dist: add +parmetis variant. (#40746)
* Expose the ability to make parmetis an optional superlu-dist dependency
through Spack package management.

* rename parmetis variant: Enable ParMETIS library

---------

Co-authored-by: eugeneswalker <eugenesunsetwalker@gmail.com>
2023-11-07 10:21:38 -08:00
Scott Wittenburg
b8302a8277 ci: do not retry timed out build jobs (#40936) 2023-11-07 17:44:28 +00:00
Massimiliano Culpo
32f319157d Update the branch for the tutorial command (#40934) 2023-11-07 16:59:48 +00:00
Harmen Stoppels
75dfad8788 catch exceptions in which_string (#40935) 2023-11-07 17:17:31 +01:00
Vanessasaurus
f3ba20db26 fix configure args for darshan-runtime (#40873)
Problem: the current configure arguments were being added as lists to a list,
when each item needs to be added as a string to the same list.
Solution: ensure we add each item (string) separately.

Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2023-11-07 07:00:28 -08:00
Rob Falgout
6301edbd5d Update package.py for new release 2.30.0 (#40907) 2023-11-07 07:58:00 -07:00
113 changed files with 2170 additions and 2350 deletions

View File

@@ -1,316 +1,3 @@
# v0.21.1 (2024-01-11)
## New features
- Add support for reading buildcaches created by Spack v0.22 (#41773)
## Bugfixes
- spack graph: fix coloring with environments (#41240)
- spack info: sort variants in --variants-by-name (#41389)
- Spec.format: error on old style format strings (#41934)
- ASP-based solver:
  - fix infinite recursion when computing concretization errors (#41061)
  - don't error for type mismatch on preferences (#41138)
  - don't emit spurious debug output (#41218)
- Improve the error message for deprecated preferences (#41075)
- Fix MSVC preview version breaking clingo build on Windows (#41185)
- Fix multi-word aliases (#41126)
- Add a warning for unconfigured compiler (#41213)
- environment: fix an issue with deconcretization/reconcretization of specs (#41294)
- buildcache: don't error if a patch is missing, when installing from binaries (#41986)
- Multiple improvements to unit-tests (#41215,#41369,#41495,#41359,#41361,#41345,#41342,#41308,#41226)
## Package updates
- root: add a webgui patch to address security issue (#41404)
- BerkeleyGW: update source urls (#38218)
# v0.21.0 (2023-11-11)
`v0.21.0` is a major feature release.
## Features in this release
1. **Better error messages with condition chaining**
In v0.18, we added better error messages that could tell you what problem happened,
but they couldn't tell you *why* it happened. `0.21` adds *condition chaining* to the
solver, and Spack can now trace back through the conditions that led to an error and
build a tree of potential causes and where they came from. For example:
```console
$ spack solve hdf5 ^cmake@3.0.1
==> Error: concretization failed for the following reasons:
1. Cannot satisfy 'cmake@3.0.1'
2. Cannot satisfy 'cmake@3.0.1'
     required because hdf5 ^cmake@3.0.1 requested from CLI
3. Cannot satisfy 'cmake@3.18:' and 'cmake@3.0.1'
     required because hdf5 ^cmake@3.0.1 requested from CLI
     required because hdf5 depends on cmake@3.18: when @1.13:
       required because hdf5 ^cmake@3.0.1 requested from CLI
4. Cannot satisfy 'cmake@3.12:' and 'cmake@3.0.1'
     required because hdf5 depends on cmake@3.12:
       required because hdf5 ^cmake@3.0.1 requested from CLI
     required because hdf5 ^cmake@3.0.1 requested from CLI
```
More details in #40173.
2. **OCI build caches**
You can now use an arbitrary [OCI](https://opencontainers.org) registry as a build
cache:
```console
$ spack mirror add my_registry oci://user/image # Dockerhub
$ spack mirror add my_registry oci://ghcr.io/haampie/spack-test # GHCR
$ spack mirror set --push --oci-username ... --oci-password ... my_registry # set login creds
$ spack buildcache push my_registry [specs...]
```
And you can optionally add a base image to get *runnable* images:
```console
$ spack buildcache push --base-image ubuntu:23.04 my_registry python
Pushed ... as [image]:python-3.11.2-65txfcpqbmpawclvtasuog4yzmxwaoia.spack
$ docker run --rm -it [image]:python-3.11.2-65txfcpqbmpawclvtasuog4yzmxwaoia.spack
```
This creates a container image from the Spack installations on the host system,
without the need to run `spack install` from a `Dockerfile` or `sif` file. It also
addresses the inconvenience of losing binaries of dependencies when `RUN spack
install` fails inside `docker build`.
Further, the container image layers and build cache tarballs are the same files. This
means that `spack install` and `docker pull` use the exact same underlying binaries.
If you previously used `spack install` inside of `docker build`, this feature helps
you save storage by a factor of two.
More details in #38358.
3. **Multiple versions of build dependencies**
Increasingly, complex package builds require multiple versions of some build
dependencies. For example, Python packages frequently require very specific versions
of `setuptools`, `cython`, and sometimes different physics packages require different
versions of Python to build. Previously, the concretizer enforced that every solve was *unified*,
i.e., that there only be one version of every package. The concretizer now supports
"duplicate" nodes for *build dependencies*, but enforces unification through
transitive link and run dependencies. This will allow it to better resolve complex
dependency graphs in ecosystems like Python, and it also gets us very close to
modeling compilers as proper dependencies.
This change required a major overhaul of the concretizer, as well as a number of
performance optimizations. See #38447, #39621.
4. **Cherry-picking virtual dependencies**
You can now select only a subset of virtual dependencies from a spec that may provide
more. For example, if you want `mpich` to be your `mpi` provider, you can be explicit
by writing:
```
hdf5 ^[virtuals=mpi] mpich
```
Or, if you want to use, e.g., `intel-parallel-studio` for `blas` along with an external
`lapack` like `openblas`, you could write:
```
strumpack ^[virtuals=mpi] intel-parallel-studio+mkl ^[virtuals=lapack] openblas
```
The `virtuals=mpi` is an edge attribute, and dependency edges in Spack graphs now
track which virtuals they satisfied. More details in #17229 and #35322.
Note for packaging: in Spack 0.21 `spec.satisfies("^virtual")` is true if and only if
the package specifies `depends_on("virtual")`. This is different from Spack 0.20,
where depending on a provider implied depending on the virtual provided. See #41002
for an example where `^mkl` was being used to test for several `mkl` providers in a
package that did not depend on `mkl`.
5. **License directive**
Spack packages can now have license metadata, with the new `license()` directive:
```python
license("Apache-2.0")
```
Licenses use [SPDX identifiers](https://spdx.org/licenses), and you can use SPDX
expressions to combine them:
```python
license("Apache-2.0 OR MIT")
```
Like other directives in Spack, it's conditional, so you can handle complex cases like
Spack itself:
```python
license("LGPL-2.1", when="@:0.11")
license("Apache-2.0 OR MIT", when="@0.12:")
```
More details in #39346, #40598.
6. **`spack deconcretize` command**
We are getting close to having a `spack update` command for environments, but we're
not quite there yet. This is the next best thing. `spack deconcretize` gives you
control over what you want to update in an already concrete environment. If you have
an environment built with, say, `meson`, and you want to update your `meson` version,
you can run:
```console
spack deconcretize meson
```
and have everything that depends on `meson` rebuilt the next time you run `spack
concretize`. In a future Spack version, we'll handle all of this in a single command,
but for now you can use this to drop bits of your lockfile and resolve your
dependencies again. More in #38803.
7. **UI Improvements**
The venerable `spack info` command was looking shabby compared to the rest of Spack's
UI, so we reworked it to have a bit more flair. `spack info` now makes much better
use of terminal space and shows variants, their values, and their descriptions much
more clearly. Conditional variants are grouped separately so you can more easily
understand how packages are structured. More in #40998.
`spack checksum` now allows you to filter versions from your editor, or by version
range. It also notifies you about potential download URL changes. See #40403.
8. **Environments can include definitions**
Spack did not previously support using `include:` with the
[definitions](https://spack.readthedocs.io/en/latest/environments.html#spec-list-references)
section of an environment, but now it does. You can use this to curate lists of specs
and more easily reuse them across environments. See #33960.
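As a rough sketch (the file name and list name here are illustrative, not from the PR), an
included file can carry the `definitions:` section, and the environment then references the
named list:
```yaml
# definitions.yaml: a hypothetical include file
definitions:
- my_packages: [zlib, hdf5, openmpi]
```
```yaml
# spack.yaml: an environment that includes it
spack:
  include:
  - definitions.yaml
  specs:
  - $my_packages
```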
9. **Aliases**
You can now add aliases to Spack commands in `config.yaml`, e.g. this might enshrine
your favorite args to `spack find` as `spack f`:
```yaml
config:
  aliases:
    f: find -lv
```
See #17229.
10. **Improved autoloading of modules**
Spack 0.20 was the first release to enable autoloading of direct dependencies in
module files.
The downside of this was that `module avail` and `module load` tab completion would
show users too many modules to choose from, and many users disabled generating
modules for dependencies through `exclude_implicits: true`. Further, it was
necessary to keep hashes in module names to avoid file name clashes.
In this release, you can start using `hide_implicits: true` instead, which exposes
only explicitly installed packages to the user, while still autoloading
dependencies. On top of that, you can safely use `hash_length: 0`, as this config
now only applies to the modules exposed to the user -- you don't have to worry about
file name clashes for hidden dependencies.
Note: for `tcl` this feature requires Modules 4.7 or higher
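A minimal sketch of that configuration (assuming the default `tcl` module set; adjust to
your own `modules.yaml` layout):
```yaml
modules:
  default:
    tcl:
      hide_implicits: true  # hide modules for implicitly installed dependencies
      hash_length: 0        # safe now, since only explicit installs get visible modules
```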
11. **Updated container labeling**
Nightly Docker images from the `develop` branch will now be tagged as `:develop` and
`:nightly`. The `:latest` tag is no longer associated with `:develop`, but with the
latest stable release. Releases will be tagged with `:{major}`, `:{major}.{minor}`
and `:{major}.{minor}.{patch}`. `ubuntu:18.04` has also been removed from the list of
generated Docker images, as it is no longer supported. See #40593.
## Other new commands and directives
* `spack env activate` without arguments now loads a `default` environment that you do
not have to create (#40756).
* `spack find -H` / `--hashes`: a new shortcut for piping `spack find` output to
other commands (#38663)
* Add `spack checksum --verify`, fix `--add` (#38458)
* New `default_args` context manager factors out common args for directives (#39964)
* `spack compiler find --[no]-mixed-toolchain` lets you easily mix `clang` and
`gfortran` on Linux (#40902)
## Performance improvements
* `spack external find` execution is now much faster (#39843)
* `spack location -i` now much faster on success (#40898)
* Drop redundant rpaths post install (#38976)
* ASP-based solver: avoid cycles in clingo using hidden directive (#40720)
* Fix multiple quadratic complexity issues in environments (#38771)
## Other new features of note
* archspec: update to v0.2.2, support for Sapphire Rapids, Power10, Neoverse V2 (#40917)
* Propagate variants across nodes that don't have that variant (#38512)
* Implement fish completion (#29549)
* Can now distinguish between source and binary mirrors; don't ping mirror.spack.io as much (#34523); a config sketch follows this list
* Improve status reporting on install (add [n/total] display) (#37903)
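For the source/binary mirror distinction, a sketch of what a `mirrors.yaml` entry can look
like (the mirror name and URL are illustrative):
```yaml
mirrors:
  my_mirror:
    url: https://example.com/my-mirror
    source: false  # use this mirror for binaries only
    binary: true
```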
## Windows
This release has the best Windows support of any Spack release yet, with numerous
improvements and much larger swaths of tests passing:
* MSVC and SDK improvements (#37711, #37930, #38500, #39823, #39180)
* Windows external finding: update default paths; treat .bat as executable on Windows (#39850)
* Windows decompression: fix removal of intermediate file (#38958)
* Windows: executable/path handling (#37762)
* Windows build systems: use ninja and enable tests (#33589)
* Windows testing (#36970, #36972, #36973, #36840, #36977, #36792, #36834, #34696, #36971)
* Windows PowerShell support (#39118, #37951)
* Windows symlinking and libraries (#39933, #38599, #34701, #38578, #34701)
## Notable refactors
* User-specified flags take precedence over others in Spack compiler wrappers (#37376)
* Improve setup of build, run, and test environments (#35737, #40916)
* `make` is no longer a required system dependency of Spack (#40380)
* Support Python 3.12 (#40404, #40155, #40153)
* docs: Replace package list with packages.spack.io (#40251)
* Drop Python 2 constructs in Spack (#38720, #38718, #38703)
## Binary cache and stack updates
* e4s arm stack: duplicate and target neoverse v1 (#40369)
* Add macOS ML CI stacks (#36586)
* E4S Cray CI Stack (#37837)
* e4s cray: expand spec list (#38947)
* e4s cray sles ci: expand spec list (#39081)
## Removals, deprecations, and syntax changes
* ASP: targets, compilers and providers soft-preferences are only global (#31261)
* Parser: fix ambiguity with whitespace in version ranges (#40344)
* Module file generation is disabled by default; you'll need to enable it to use it (#37258); a sketch of re-enabling it follows this list
* Remove deprecated "extra_instructions" option for containers (#40365)
* Stand-alone test feature deprecation postponed to v0.22 (#40600)
* buildcache push: make `--allow-root` the default and deprecate the option (#38878)
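For reference, a minimal sketch of turning `tcl` module file generation back on (one
plausible form; see the modules docs for the full schema):
```yaml
modules:
  default:
    enable:
    - tcl
```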
## Notable Bugfixes
* Bugfix: propagation of multivalued variants (#39833)
* Allow `/` in git versions (#39398)
* Fetch & patch: actually acquire stage lock, and many more issues (#38903)
* Environment/depfile: better escaping of targets with Git versions (#37560)
* Prevent "spack external find" from erroring out on wrong permissions (#38755)
* lmod: allow core compiler to be specified with a version range (#37789)
## Spack community stats
* 7,469 total packages, 303 new since `v0.20.0`
* 150 new Python packages
* 34 new R packages
* 353 people contributed to this release
* 336 committers to packages
* 65 committers to core
# v0.20.3 (2023-10-31)
## Bugfixes

View File

@@ -37,11 +37,7 @@ to enable reuse for a single installation, and you can use:
spack install --fresh <spec>
to do a fresh install if ``reuse`` is enabled by default.
``reuse: dependencies`` is the default.
.. seealso::
FAQ: :ref:`Why does Spack pick particular versions and variants? <faq-concretizer-precedence>`
``reuse: true`` is the default.
------------------------------------------
Selection of the target microarchitectures
@@ -103,3 +99,547 @@ while `py-numpy` still needs an older version:
Up to Spack v0.20 ``duplicates:strategy:none`` was the default (and only) behavior. From Spack v0.21 the
default behavior is ``duplicates:strategy:minimal``.
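A minimal ``concretizer.yaml`` sketch making the choice explicit:

.. code-block:: yaml

   concretizer:
     duplicates:
       # "minimal" allows duplicate build dependencies; "none" unifies the whole DAG
       strategy: minimal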
.. _build-settings:
================================
Package Settings (packages.yaml)
================================
Spack allows you to customize how your software is built through the
``packages.yaml`` file. Using it, you can make Spack prefer particular
implementations of virtual dependencies (e.g., MPI or BLAS/LAPACK),
or you can make it prefer to build with particular compilers. You can
also tell Spack to use *external* software installations already
present on your system.
At a high level, the ``packages.yaml`` file is structured like this:
.. code-block:: yaml

   packages:
     package1:
       # settings for package1
     package2:
       # settings for package2
     # ...
     all:
       # settings that apply to all packages.
So you can either set build preferences specifically for *one* package,
or you can specify that certain settings should apply to *all* packages.
The types of settings you can customize are described in detail below.
Spack's build defaults are in the default
``etc/spack/defaults/packages.yaml`` file. You can override them in
``~/.spack/packages.yaml`` or ``etc/spack/packages.yaml``. For more
details on how this works, see :ref:`configuration-scopes`.
.. _sec-external-packages:
-----------------
External Packages
-----------------
Spack can be configured to use externally-installed
packages rather than building its own packages. This may be desirable
if machines ship with system packages, such as a customized MPI
that should be used instead of Spack building its own MPI.
External packages are configured through the ``packages.yaml`` file.
Here's an example of an external configuration:
.. code-block:: yaml

   packages:
     openmpi:
       externals:
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.4.3
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
         prefix: /opt/openmpi-1.4.3-debug
       - spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.6.5-intel
This example lists three installations of OpenMPI, one built with GCC,
one built with GCC and debug information, and another built with Intel.
If Spack is asked to build a package that uses one of these MPIs as a
dependency, it will use the pre-installed OpenMPI in
the given directory. Note that the specified path is the top-level
install prefix, not the ``bin`` subdirectory.
``packages.yaml`` can also be used to specify modules to load instead
of the installation prefixes. The following example says that module
``CMake/3.7.2`` provides cmake version 3.7.2.
.. code-block:: yaml

   cmake:
     externals:
     - spec: cmake@3.7.2
       modules:
       - CMake/3.7.2
Each ``packages.yaml`` begins with a ``packages:`` attribute, followed
by a list of package names. To specify externals, add an ``externals:``
attribute under the package name, which lists externals.
Each external should specify a ``spec:`` string that should be as
well-defined as reasonably possible. If a
package lacks a spec component, such as missing a compiler or
package version, then Spack will guess the missing component based
on its most-favored packages, and it may guess incorrectly.
Each package version and compiler listed in an external should
have entries in Spack's packages and compiler configuration, even
though the package and compiler may not ever be built.
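For instance, a compiler entry matching the ``gcc@4.4.7`` used above might look like
this in ``compilers.yaml`` (the paths and operating system are illustrative):

.. code-block:: yaml

   compilers:
   - compiler:
       spec: gcc@4.4.7
       operating_system: debian7
       modules: []
       paths:
         cc: /usr/bin/gcc
         cxx: /usr/bin/g++
         f77: /usr/bin/gfortran
         fc: /usr/bin/gfortran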
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Prevent packages from being built from sources
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Adding an external spec in ``packages.yaml`` allows Spack to use an external location,
but it does not prevent Spack from building packages from sources. In the above example,
Spack might choose for many valid reasons to start building and linking with the
latest version of OpenMPI rather than continue using the pre-installed OpenMPI versions.
To prevent this, the ``packages.yaml`` configuration also allows packages
to be flagged as non-buildable. The previous example could be modified to
be:
.. code-block:: yaml

   packages:
     openmpi:
       externals:
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.4.3
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
         prefix: /opt/openmpi-1.4.3-debug
       - spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.6.5-intel
       buildable: False
The addition of the ``buildable`` flag tells Spack that it should never build
its own version of OpenMPI from sources, and it will instead always rely on a pre-built
OpenMPI.
.. note::
If ``concretizer:reuse`` is on (see :ref:`concretizer-options` for more information on that flag)
pre-built specs include specs already available from a local store, an upstream store, a registered
buildcache or specs marked as externals in ``packages.yaml``. If ``concretizer:reuse`` is off, only
external specs in ``packages.yaml`` are included in the list of pre-built specs.
If an external module is specified as not buildable, then Spack will load the
external module into the build environment, which can be used for linking.
The ``buildable`` flag does not need to be paired with external packages.
It can also be used alone to forbid packages that may be
buggy or otherwise undesirable.
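For example, a minimal sketch that forbids building a package outright (the package
name here is arbitrary):

.. code-block:: yaml

   packages:
     openssl:
       buildable: False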
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Non-buildable virtual packages
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Virtual packages in Spack can also be specified as not buildable, and
external implementations can be provided. In the example above,
OpenMPI is configured as not buildable, but Spack will often prefer
other MPI implementations over the externally available OpenMPI. Each MPI provider
could be marked not buildable individually, but it is more convenient to flag the
``mpi`` virtual package itself:
.. code-block:: yaml

   packages:
     mpi:
       buildable: False
     openmpi:
       externals:
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.4.3
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
         prefix: /opt/openmpi-1.4.3-debug
       - spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.6.5-intel
Spack can then use any of the listed external implementations of MPI
to satisfy a dependency, and will choose depending on the compiler and
architecture.
In cases where the concretizer is configured to reuse specs, and other ``mpi`` providers
(available via stores or buildcaches) are not wanted, Spack can be configured to require
specs matching only the available externals:
.. code-block:: yaml

   packages:
     mpi:
       buildable: False
       require:
       - one_of: [
           "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64",
           "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug",
           "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
         ]
     openmpi:
       externals:
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.4.3
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
         prefix: /opt/openmpi-1.4.3-debug
       - spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.6.5-intel
This configuration prevents any spec that uses MPI and originates from stores or buildcaches from being reused,
unless it matches the requirements under ``packages:mpi:require``. For more information on requirements see
:ref:`package-requirements`.
.. _cmd-spack-external-find:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Automatically Find External Packages
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can run the :ref:`spack external find <spack-external-find>` command
to search for system-provided packages and add them to ``packages.yaml``.
After running this command your ``packages.yaml`` may include new entries:
.. code-block:: yaml

   packages:
     cmake:
       externals:
       - spec: cmake@3.17.2
         prefix: /usr
Generally this is useful for detecting a small set of commonly-used packages;
for now it is limited to finding build-only dependencies.
Specific limitations include:
* Packages are not discoverable by default: For a package to be
discoverable with ``spack external find``, it needs to add special
logic. See :ref:`here <make-package-findable>` for more details.
* The logic does not search through module files; it can only detect
packages with executables defined in ``PATH``; you can help Spack locate
externals which use module files by loading any associated modules for
packages that you want Spack to know about before running
``spack external find``.
* Spack does not overwrite existing entries in the package configuration:
If there is an external defined for a spec at any configuration scope,
then Spack will not add a new external entry (``spack config blame packages``
can help locate all external entries).
.. _package-requirements:
--------------------
Package Requirements
--------------------
Spack can be configured to always use certain compilers, package
versions, and variants during concretization through package
requirements.
Package requirements are useful when you find yourself repeatedly
specifying the same constraints on the command line, and wish that
Spack respects these constraints whether you mention them explicitly
or not. Another use case is specifying constraints that should apply
to all root specs in an environment, without having to repeat the
constraint everywhere.
Apart from that, requirements config is more flexible than constraints
on the command line, because it can specify constraints on packages
*when they occur* as a dependency. In contrast, on the command line it
is not possible to specify constraints on dependencies while also keeping
those dependencies optional.
^^^^^^^^^^^^^^^^^^^
Requirements syntax
^^^^^^^^^^^^^^^^^^^
The package requirements configuration is specified in ``packages.yaml``,
keyed by package name and expressed using the Spec syntax. In the simplest
case you can specify attributes that you always want the package to have
by providing a single spec string to ``require``:
.. code-block:: yaml

   packages:
     libfabric:
       require: "@1.13.2"
In the above example, ``libfabric`` will always build with version 1.13.2. If you
need to compose multiple configuration scopes, ``require`` accepts a list of
strings:
.. code-block:: yaml

   packages:
     libfabric:
       require:
       - "@1.13.2"
       - "%gcc"
In this case ``libfabric`` will always build with version 1.13.2 **and** using GCC
as a compiler.
For more complex use cases, ``require`` also accepts a list of objects. These objects
must have either an ``any_of`` or a ``one_of`` field containing a list of spec strings,
and they can optionally have a ``when`` and a ``message`` attribute:
.. code-block:: yaml

   packages:
     openmpi:
       require:
       - any_of: ["@4.1.5", "%gcc"]
         message: "in this example only 4.1.5 can build with other compilers"
``any_of`` is a list of specs. At least one of those specs must be satisfied,
and the concretized spec is allowed to match more than one.
In the above example, that means you could build ``openmpi@4.1.5%gcc``,
``openmpi@4.1.5%clang`` or ``openmpi@3.9%gcc``, but
not ``openmpi@3.9%clang``.
If a custom message is provided, and the requirement is not satisfiable,
Spack will print the custom error message:
.. code-block:: console

   $ spack spec openmpi@3.9%clang
   ==> Error: in this example only 4.1.5 can build with other compilers
We could express a similar requirement using the ``when`` attribute:
.. code-block:: yaml

   packages:
     openmpi:
       require:
       - any_of: ["%gcc"]
         when: "@:4.1.4"
         message: "in this example only 4.1.5 can build with other compilers"
In the example above, if the version turns out to be 4.1.4 or less, we require the compiler to be GCC.
For readability, Spack also allows a ``spec`` key accepting a string when there is only a single
constraint:
.. code-block:: yaml

   packages:
     openmpi:
       require:
       - spec: "%gcc"
         when: "@:4.1.4"
         message: "in this example only 4.1.5 can build with other compilers"
This code snippet and the one before it are semantically equivalent.
Finally, instead of ``any_of`` you can use ``one_of`` which also takes a list of specs. The final
concretized spec must match one and only one of them:
.. code-block:: yaml

   packages:
     mpich:
       require:
       - one_of: ["+cuda", "+rocm"]
In the example above, that means you could build ``mpich+cuda`` or ``mpich+rocm`` but not ``mpich+cuda+rocm``.
.. note::
For ``any_of`` and ``one_of``, the order of specs indicates a
preference: items that appear earlier in the list are preferred
(note that these preferences can be ignored in favor of others).
.. note::
When using a conditional requirement, Spack is allowed to actively avoid the triggering
condition (the ``when=...`` spec) if that leads to a concrete spec with better scores in
the optimization criteria. To check the current optimization criteria and their
priorities you can run ``spack solve zlib``.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting default requirements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can also set default requirements for all packages under ``all``
like this:
.. code-block:: yaml

   packages:
     all:
       require: '%clang'
which means every spec will be required to use ``clang`` as a compiler.
Note that in this case ``all`` represents a *default set of requirements* -
if there are specific package requirements, then the default requirements
under ``all`` are disregarded. For example, with a configuration like this:
.. code-block:: yaml

   packages:
     all:
       require: '%clang'
     cmake:
       require: '%gcc'
Spack requires ``cmake`` to use ``gcc`` and all other nodes (including ``cmake``
dependencies) to use ``clang``.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting requirements on virtual specs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A requirement on a virtual spec applies whenever that virtual is present in the DAG.
This can be useful for fixing which virtual provider you want to use:
.. code-block:: yaml

   packages:
     mpi:
       require: 'mvapich2 %gcc'
With the configuration above the only allowed ``mpi`` provider is ``mvapich2 %gcc``.
Requirements on the virtual spec and on the specific provider are both applied, if
present. For instance with a configuration like:
.. code-block:: yaml

   packages:
     mpi:
       require: 'mvapich2 %gcc'
     mvapich2:
       require: '~cuda'
you will use ``mvapich2~cuda %gcc`` as an ``mpi`` provider.
.. _package-preferences:
-------------------
Package Preferences
-------------------
In some cases package requirements can be too strong, and package
preferences are the better option. Package preferences do not impose
constraints on packages for particular versions or variant values;
they only set defaults. The concretizer is free to change
them if it must, due to other constraints, and it also prefers reusing
installed packages over building new ones, even when the new ones would
better match the preferences.
Most package preferences (``compilers``, ``target`` and ``providers``)
can only be set globally under the ``all`` section of ``packages.yaml``:
.. code-block:: yaml

   packages:
     all:
       compiler: [gcc@12.2.0, clang@12:, oneapi@2023:]
       target: [x86_64_v3]
       providers:
         mpi: [mvapich2, mpich, openmpi]
These preferences override Spack's default and effectively reorder priorities
when looking for the best compiler, target or virtual package provider. Each
preference takes an ordered list of spec constraints, with earlier entries in
the list being preferred over later entries.
In the example above all packages prefer to be compiled with ``gcc@12.2.0``,
to target the ``x86_64_v3`` microarchitecture and to use ``mvapich2`` if they
depend on ``mpi``.
The ``variants`` and ``version`` preferences can be set under
package specific sections of the ``packages.yaml`` file:
.. code-block:: yaml

   packages:
     opencv:
       variants: +debug
     gperftools:
       version: [2.2, 2.4, 2.3]
In this case, the preference for ``opencv`` is to build with debug options, while
``gperftools`` prefers version 2.2 over 2.4.
Any preference can be overwritten on the command line if explicitly requested.
Preferences cannot overcome explicit constraints, as they only set a preferred
ordering among homogeneous attribute values. Going back to the example, if
``gperftools@2.3:`` was requested, then Spack will install version 2.4
since the most preferred version 2.2 is prohibited by the version constraint.
.. _package_permissions:
-------------------
Package Permissions
-------------------
Spack can be configured to assign permissions to the files installed
by a package.
In the ``packages.yaml`` file under ``permissions``, the attributes
``read``, ``write``, and ``group`` control the package
permissions. These attributes can be set per-package, or for all
packages under ``all``. If permissions are set under ``all`` and for a
specific package, the package-specific settings take precedence.
The ``read`` and ``write`` attributes take one of ``user``, ``group``,
and ``world``.
.. code-block:: yaml

   packages:
     all:
       permissions:
         write: group
         group: spack
     my_app:
       permissions:
         read: group
         group: my_team
The permissions settings describe the broadest level of access to
installations of the specified packages. The execute permissions of
the file are set to the same level as read permissions for those files
that are executable. The default setting for ``read`` is ``world``,
and for ``write`` is ``user``. In the example above, installations of
``my_app`` will be installed with user and group permissions but no
world permissions, and owned by the group ``my_team``. All other
packages will be installed with user and group write privileges, and
world read privileges. Those packages will be owned by the group
``spack``.
The ``group`` attribute assigns a Unix-style group to a package. All
files installed by the package will be owned by the assigned group,
and the sticky group bit will be set on the install prefix and all
directories inside the install prefix. This will ensure that even
manually placed files within the install prefix are owned by the
assigned group. If no group is assigned, Spack will defer to the
operating system's default behavior.
----------------------------
Assigning Package Attributes
----------------------------
You can assign class-level attributes in the configuration:
.. code-block:: yaml

   packages:
     mpileaks:
       # Override existing attributes
       url: http://www.somewhereelse.com/mpileaks-1.0.tar.gz
       # ... or add new ones
       x: 1
Attributes set this way will be accessible to any method executed
in the package.py file (e.g. the ``install()`` method). Values for these
attributes may be any value parseable by YAML.
These can only be applied to specific packages, not "all" or
virtual packages.

View File

@@ -392,7 +392,7 @@ See section
:ref:`Configuration Scopes <configuration-scopes>`
for an explanation about the different files
and section
:ref:`Build customization <packages-config>`
:ref:`Build customization <build-settings>`
for specifics and examples for ``packages.yaml`` files.
.. If your system administrator did not provide modules for pre-installed Intel

View File

@@ -17,7 +17,7 @@ case you want to skip directly to specific docs:
* :ref:`config.yaml <config-yaml>`
* :ref:`mirrors.yaml <mirrors>`
* :ref:`modules.yaml <modules>`
* :ref:`packages.yaml <packages-config>`
* :ref:`packages.yaml <build-settings>`
* :ref:`repos.yaml <repositories>`
You can also add any of these as inline configuration in the YAML
@@ -243,9 +243,11 @@ lower-precedence settings. Completely ignoring higher-level configuration
options is supported with the ``::`` notation for keys (see
:ref:`config-overrides` below).
There are also special notations for string concatenation and precedence override.
The ``+:`` notation can be used to force *prepending* strings or lists. For lists, this is identical
to the default behavior. The ``-:`` notation works similarly, but for *appending* values.
There are also special notations for string concatenation and precedence override:
* ``+:`` will force *prepending* strings or lists. For lists, this is the default behavior.
* ``-:`` works similarly, but for *appending* values.
:ref:`config-prepend-append`
^^^^^^^^^^^

View File

@@ -1,77 +0,0 @@
.. Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
==========================
Frequently Asked Questions
==========================
This page contains answers to frequently asked questions about Spack.
If you have questions that are not answered here, feel free to ask on
`Slack <https://slack.spack.io>`_ or `GitHub Discussions
<https://github.com/spack/spack/discussions>`_. If you've learned the
answer to a question that you think should be here, please consider
contributing to this page.
.. _faq-concretizer-precedence:
-----------------------------------------------------
Why does Spack pick particular versions and variants?
-----------------------------------------------------
This question comes up in a variety of forms:
1. Why does Spack seem to ignore my package preferences from ``packages.yaml`` config?
2. Why does Spack toggle a variant instead of using the default from the ``package.py`` file?
The short answer is that Spack always picks an optimal configuration
based on a complex set of criteria\ [#f1]_. These criteria are more nuanced
than always choosing the latest versions or default variants.
.. note::
As a rule of thumb: requirements + constraints > reuse > preferences > defaults.
The following set of criteria (from lowest to highest precedence) explains
common cases where concretization output may seem surprising at first.
1. :ref:`Package preferences <package-preferences>` configured in ``packages.yaml``
override variant defaults from ``package.py`` files, and influence the optimal
ordering of versions. Preferences are specified as follows:
.. code-block:: yaml

   packages:
     foo:
       version: [1.0, 1.1]
       variants: ~mpi
2. :ref:`Reuse concretization <concretizer-options>` configured in ``concretizer.yaml``
overrides preferences, since it's typically faster to reuse an existing spec than to
build a preferred one from sources. When build caches are enabled, specs may be reused
from a remote location too. Reuse concretization is configured as follows:
.. code-block:: yaml

   concretizer:
     reuse: dependencies  # other options are 'true' and 'false'
3. :ref:`Package requirements <package-requirements>` configured in ``packages.yaml``,
and constraints from the command line as well as ``package.py`` files override all
of the above. Requirements are specified as follows:
.. code-block:: yaml

   packages:
     foo:
       require:
       - "@1.2: +mpi"
Requirements and constraints restrict the set of possible solutions, while reuse
behavior and preferences influence what an optimal solution looks like.
.. rubric:: Footnotes
.. [#f1] The exact list of criteria can be retrieved with the ``spack solve`` command

View File

@@ -55,7 +55,6 @@ or refer to the full manual below.
getting_started
basic_usage
replace_conda_homebrew
frequently_asked_questions
.. toctree::
:maxdepth: 2
@@ -71,7 +70,7 @@ or refer to the full manual below.
configuration
config_yaml
packages_yaml
bootstrapping
build_settings
environments
containers
@@ -79,7 +78,6 @@ or refer to the full manual below.
module_file_support
repositories
binary_caches
bootstrapping
command_index
chain
extensions

View File

@@ -1,559 +0,0 @@
.. Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _packages-config:
================================
Package Settings (packages.yaml)
================================
Spack allows you to customize how your software is built through the
``packages.yaml`` file. Using it, you can make Spack prefer particular
implementations of virtual dependencies (e.g., MPI or BLAS/LAPACK),
or you can make it prefer to build with particular compilers. You can
also tell Spack to use *external* software installations already
present on your system.
At a high level, the ``packages.yaml`` file is structured like this:
.. code-block:: yaml
packages:
package1:
# settings for package1
package2:
# settings for package2
# ...
all:
# settings that apply to all packages.
So you can either set build preferences specifically for *one* package,
or you can specify that certain settings should apply to *all* packages.
The types of settings you can customize are described in detail below.
Spack's build defaults are in the default
``etc/spack/defaults/packages.yaml`` file. You can override them in
``~/.spack/packages.yaml`` or ``etc/spack/packages.yaml``. For more
details on how this works, see :ref:`configuration-scopes`.
.. _sec-external-packages:
-----------------
External Packages
-----------------
Spack can be configured to use externally-installed
packages rather than building its own packages. This may be desirable
if machines ship with system packages, such as a customized MPI
that should be used instead of Spack building its own MPI.
External packages are configured through the ``packages.yaml`` file.
Here's an example of an external configuration:
.. code-block:: yaml
packages:
openmpi:
externals:
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.4.3
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
prefix: /opt/openmpi-1.4.3-debug
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.6.5-intel
This example lists three installations of OpenMPI, one built with GCC,
one built with GCC and debug information, and another built with Intel.
If Spack is asked to build a package that uses one of these MPIs as a
dependency, it will use the pre-installed OpenMPI in
the given directory. Note that the specified path is the top-level
install prefix, not the ``bin`` subdirectory.
``packages.yaml`` can also be used to specify modules to load instead
of the installation prefixes. The following example says that module
``CMake/3.7.2`` provides cmake version 3.7.2.
.. code-block:: yaml
cmake:
externals:
- spec: cmake@3.7.2
modules:
- CMake/3.7.2
Each ``packages.yaml`` begins with a ``packages:`` attribute, followed
by a list of package names. To specify externals, add an ``externals:``
attribute under the package name, which lists externals.
Each external should specify a ``spec:`` string that should be as
well-defined as reasonably possible. If a
package lacks a spec component, such as missing a compiler or
package version, then Spack will guess the missing component based
on its most-favored packages, and it may guess incorrectly.
Each package version and compiler listed in an external should
have entries in Spack's packages and compiler configuration, even
though the package and compiler may not ever be built.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Prevent packages from being built from sources
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Adding an external spec in ``packages.yaml`` allows Spack to use an external location,
but it does not prevent Spack from building packages from sources. In the above example,
Spack might choose for many valid reasons to start building and linking with the
latest version of OpenMPI rather than continue using the pre-installed OpenMPI versions.
To prevent this, the ``packages.yaml`` configuration also allows packages
to be flagged as non-buildable. The previous example could be modified to
be:
.. code-block:: yaml
packages:
openmpi:
externals:
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.4.3
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
prefix: /opt/openmpi-1.4.3-debug
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.6.5-intel
buildable: False
The addition of the ``buildable`` flag tells Spack that it should never build
its own version of OpenMPI from sources, and it will instead always rely on a pre-built
OpenMPI.
.. note::

   If ``concretizer:reuse`` is on (see :ref:`concretizer-options` for more
   information on that flag), pre-built specs include specs already available
   from a local store, an upstream store, a registered buildcache, or specs
   marked as externals in ``packages.yaml``. If ``concretizer:reuse`` is off,
   only external specs in ``packages.yaml`` are included in the list of
   pre-built specs.

If an external module is specified as not buildable, then Spack will load the
external module into the build environment so it can be used for linking.

The ``buildable`` flag does not need to be paired with external packages.
It can also be used alone to forbid packages that may be
buggy or otherwise undesirable.
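
For example, a package can be marked non-buildable with no externals at all,
so that Spack refuses to concretize it from source (the package name here is
purely illustrative):

.. code-block:: yaml

   packages:
     fragile-legacy-app:
       # no externals are listed: this package can never be built or used
       buildable: False
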
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Non-buildable virtual packages
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Virtual packages in Spack can also be specified as not buildable, and
external implementations can be provided. In the example above,
OpenMPI is configured as not buildable, but Spack will often prefer
other MPI implementations over the externally available OpenMPI. Every
MPI provider could be marked as not buildable individually, but it is
more convenient to mark the virtual package itself:

.. code-block:: yaml

   packages:
     mpi:
       buildable: False
     openmpi:
       externals:
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.4.3
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
         prefix: /opt/openmpi-1.4.3-debug
       - spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.6.5-intel

Spack can then use any of the listed external implementations of MPI
to satisfy a dependency, and will choose depending on the compiler and
architecture.

In cases where the concretizer is configured to reuse specs, and other ``mpi`` providers
(available via stores or buildcaches) are not wanted, Spack can be configured to require
specs matching only the available externals:

.. code-block:: yaml

   packages:
     mpi:
       buildable: False
       require:
       - one_of: [
           "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64",
           "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug",
           "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
         ]
     openmpi:
       externals:
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.4.3
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
         prefix: /opt/openmpi-1.4.3-debug
       - spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.6.5-intel

This configuration prevents any spec that uses MPI and originates from stores or
buildcaches from being reused, unless it matches the requirements under
``packages:mpi:require``. For more information on requirements see
:ref:`package-requirements`.

.. _cmd-spack-external-find:

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Automatically Find External Packages
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can run the :ref:`spack external find <spack-external-find>` command
to search for system-provided packages and add them to ``packages.yaml``.
After running this command your ``packages.yaml`` may include new entries:

.. code-block:: yaml

   packages:
     cmake:
       externals:
       - spec: cmake@3.17.2
         prefix: /usr

Generally this is useful for detecting a small set of commonly-used packages;
for now it is mostly limited to finding build-only dependencies.
Specific limitations include:

* Packages are not discoverable by default: For a package to be
  discoverable with ``spack external find``, it needs to add special
  logic. See :ref:`here <make-package-findable>` for more details.

* The logic does not search through module files; it can only detect
  packages with executables defined in ``PATH``. You can help Spack locate
  externals that use module files by loading the associated modules before
  running ``spack external find``, as sketched after this list.

* Spack does not overwrite existing entries in the package configuration:
  If there is an external defined for a spec at any configuration scope,
  then Spack will not add a new external entry (``spack config blame packages``
  can help locate all external entries).
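
As an illustration of the module-loading workaround mentioned above (the
module name is hypothetical):

.. code-block:: console

   $ module load CMake/3.7.2   # hypothetical module providing cmake
   $ spack external find cmake
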
.. _package-requirements:

--------------------
Package Requirements
--------------------

Spack can be configured to always use certain compilers, package
versions, and variants during concretization through package
requirements.

Package requirements are useful when you find yourself repeatedly
specifying the same constraints on the command line, and wish that
Spack would respect these constraints whether you mention them explicitly
or not. Another use case is specifying constraints that should apply
to all root specs in an environment, without having to repeat the
constraint everywhere.

Apart from that, requirements config is more flexible than constraints
on the command line, because it can specify constraints on packages
*when they occur* as a dependency. In contrast, on the command line it
is not possible to specify constraints on dependencies while also keeping
those dependencies optional.
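
As a sketch of the dependency use case (the package and version are just
illustrative), a requirement like the following applies to ``zlib``
whenever it appears in a DAG, without forcing anything to depend on it:

.. code-block:: yaml

   packages:
     zlib:
       # illustrative version pin; applies to zlib as root or dependency
       require: "@1.2.13"

Specs that happen to depend on ``zlib`` will get ``zlib@1.2.13``, while
specs that do not depend on it are unaffected.
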
.. seealso::

   FAQ: :ref:`Why does Spack pick particular versions and variants? <faq-concretizer-precedence>`

^^^^^^^^^^^^^^^^^^^
Requirements syntax
^^^^^^^^^^^^^^^^^^^

The package requirements configuration is specified in ``packages.yaml``,
keyed by package name and expressed using the Spec syntax. In the simplest
case you can specify attributes that you always want the package to have
by providing a single spec string to ``require``:

.. code-block:: yaml

   packages:
     libfabric:
       require: "@1.13.2"

In the above example, ``libfabric`` will always build with version 1.13.2. If you
need to compose multiple configuration scopes, ``require`` accepts a list of
strings:

.. code-block:: yaml

   packages:
     libfabric:
       require:
       - "@1.13.2"
       - "%gcc"

In this case ``libfabric`` will always build with version 1.13.2 **and** using GCC
as a compiler.

For more complex use cases, ``require`` also accepts a list of objects. These
objects must have either an ``any_of`` or a ``one_of`` field containing a list
of spec strings, and they can optionally have ``when`` and ``message`` attributes:

.. code-block:: yaml

   packages:
     openmpi:
       require:
       - any_of: ["@4.1.5", "%gcc"]
         message: "in this example only 4.1.5 can build with other compilers"

``any_of`` is a list of specs: at least one of them must be satisfied,
and the concretized spec is also allowed to match more than one.
In the above example, that means you could build ``openmpi@4.1.5%gcc``,
``openmpi@4.1.5%clang`` or ``openmpi@3.9%gcc``, but
not ``openmpi@3.9%clang``.

If a custom message is provided, and the requirement is not satisfiable,
Spack will print the custom error message:

.. code-block:: console

   $ spack spec openmpi@3.9%clang
   ==> Error: in this example only 4.1.5 can build with other compilers

We could express a similar requirement using the ``when`` attribute:

.. code-block:: yaml

   packages:
     openmpi:
       require:
       - any_of: ["%gcc"]
         when: "@:4.1.4"
         message: "in this example only 4.1.5 can build with other compilers"

In the example above, if the version turns out to be 4.1.4 or less, we require the compiler to be GCC.
For readability, Spack also allows a ``spec`` key accepting a string when there is only a single
constraint:

.. code-block:: yaml

   packages:
     openmpi:
       require:
       - spec: "%gcc"
         when: "@:4.1.4"
         message: "in this example only 4.1.5 can build with other compilers"

This code snippet and the one before it are semantically equivalent.
Finally, instead of ``any_of`` you can use ``one_of`` which also takes a list of specs. The final
concretized spec must match one and only one of them:

.. code-block:: yaml

   packages:
     mpich:
       require:
       - one_of: ["+cuda", "+rocm"]

In the example above, that means you could build ``mpich+cuda`` or ``mpich+rocm`` but not ``mpich+cuda+rocm``.

.. note::

   For ``any_of`` and ``one_of``, the order of specs indicates a
   preference: items that appear earlier in the list are preferred
   (note that these preferences can be ignored in favor of others).

.. note::

   When using a conditional requirement, Spack is allowed to actively avoid the triggering
   condition (the ``when=...`` spec) if that leads to a concrete spec with better scores in
   the optimization criteria. To check the current optimization criteria and their
   priorities you can run ``spack solve zlib``.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting default requirements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also set default requirements for all packages under ``all``
like this:

.. code-block:: yaml

   packages:
     all:
       require: '%clang'

which means every spec will be required to use ``clang`` as a compiler.

Note that in this case ``all`` represents a *default set of requirements*:
if there are specific package requirements, then the default requirements
under ``all`` are disregarded. For example, with a configuration like this:

.. code-block:: yaml

   packages:
     all:
       require: '%clang'
     cmake:
       require: '%gcc'

Spack requires ``cmake`` to use ``gcc`` and all other nodes (including ``cmake``
dependencies) to use ``clang``.
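
Because package-specific requirements replace the defaults rather than
extend them, any constraint from ``all`` that should still apply to a
package must be restated in that package's own section. A sketch, where
the extra version constraint is hypothetical:

.. code-block:: yaml

   packages:
     all:
       require: '%clang'
     cmake:
       require:
       - '%gcc'    # replaces the '%clang' default for cmake
       - '@3.20:'  # hypothetical extra constraint; both must be listed here
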
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting requirements on virtual specs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A requirement on a virtual spec applies whenever that virtual is present in the DAG.
This can be useful for fixing which virtual provider you want to use:

.. code-block:: yaml

   packages:
     mpi:
       require: 'mvapich2 %gcc'

With the configuration above the only allowed ``mpi`` provider is ``mvapich2 %gcc``.
Requirements on the virtual spec and on the specific provider are both applied, if
present. For instance with a configuration like:

.. code-block:: yaml

   packages:
     mpi:
       require: 'mvapich2 %gcc'
     mvapich2:
       require: '~cuda'

you will use ``mvapich2~cuda %gcc`` as an ``mpi`` provider.

.. _package-preferences:

-------------------
Package Preferences
-------------------

In some cases package requirements can be too strong, and package
preferences are the better option. Package preferences do not impose
constraints on packages for particular versions or variant values;
they only set defaults. The concretizer is free to change
them if it must, due to other constraints, and it also prefers reusing
installed packages over building new ones that are a better match for
the preferences.

.. seealso::

   FAQ: :ref:`Why does Spack pick particular versions and variants? <faq-concretizer-precedence>`

Most package preferences (``compiler``, ``target`` and ``providers``)
can only be set globally under the ``all`` section of ``packages.yaml``:

.. code-block:: yaml

   packages:
     all:
       compiler: [gcc@12.2.0, clang@12:, oneapi@2023:]
       target: [x86_64_v3]
       providers:
         mpi: [mvapich2, mpich, openmpi]

These preferences override Spack's defaults and effectively reorder priorities
when looking for the best compiler, target or virtual package provider. Each
preference takes an ordered list of spec constraints, with earlier entries in
the list being preferred over later entries.

In the example above all packages prefer to be compiled with ``gcc@12.2.0``,
to target the ``x86_64_v3`` microarchitecture and to use ``mvapich2`` if they
depend on ``mpi``.

The ``variants`` and ``version`` preferences can be set under
package-specific sections of the ``packages.yaml`` file:

.. code-block:: yaml

   packages:
     opencv:
       variants: +debug
     gperftools:
       version: [2.2, 2.4, 2.3]

In this case, the preference for ``opencv`` is to build with debug options, while
``gperftools`` prefers version 2.2 over 2.4 and 2.3.

Any preference can be overridden on the command line if explicitly requested.
Preferences cannot overcome explicit constraints, as they only set a preferred
ordering among homogeneous attribute values. Going back to the example, if
``gperftools@2.3:`` were requested, then Spack would install version 2.4,
since the most preferred version, 2.2, is prohibited by the version constraint.
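
A quick sketch of this interplay, assuming the preferences above and no
other constraints:

.. code-block:: console

   $ spack spec gperftools        # the preferred version 2.2 is selected
   $ spack spec gperftools@2.3:   # the preference is overruled: 2.4 is selected
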
.. _package_permissions:

-------------------
Package Permissions
-------------------

Spack can be configured to assign permissions to the files installed
by a package.

In the ``packages.yaml`` file under ``permissions``, the attributes
``read``, ``write``, and ``group`` control the package
permissions. These attributes can be set per-package, or for all
packages under ``all``. If permissions are set under ``all`` and for a
specific package, the package-specific settings take precedence.

The ``read`` and ``write`` attributes take one of ``user``, ``group``,
or ``world``.

.. code-block:: yaml

   packages:
     all:
       permissions:
         write: group
         group: spack
     my_app:
       permissions:
         read: group
         group: my_team

The permissions settings describe the broadest level of access to
installations of the specified packages. The execute permissions of
the file are set to the same level as read permissions for those files
that are executable. The default setting for ``read`` is ``world``,
and for ``write`` is ``user``. In the example above, installations of
``my_app`` will be installed with user and group permissions but no
world permissions, and owned by the group ``my_team``. All other
packages will be installed with user and group write privileges, and
world read privileges. Those packages will be owned by the group
``spack``.

The ``group`` attribute assigns a Unix-style group to a package. All
files installed by the package will be owned by the assigned group,
and the sticky group bit will be set on the install prefix and all
directories inside the install prefix. This will ensure that even
manually placed files within the install prefix are owned by the
assigned group. If no group is assigned, Spack will leave the OS
default behavior in place.
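
As an illustration (the output is abridged and hypothetical), an installed
``my_app`` prefix from the example above would show group ownership by
``my_team``, the setgid bit on directories, and no world access:

.. code-block:: console

   $ ls -ld $(spack location -i my_app)
   drwxrws--- 5 user my_team 4096 ... /path/to/install/my_app-...
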
----------------------------
Assigning Package Attributes
----------------------------

You can assign class-level attributes in the configuration:

.. code-block:: yaml

   packages:
     mpileaks:
       # Override existing attributes
       url: http://www.somewhereelse.com/mpileaks-1.0.tar.gz
       # ... or add new ones
       x: 1

Attributes set this way will be accessible to any method executed
in the ``package.py`` file (e.g. the ``install()`` method). Values for these
attributes may be any value parseable by YAML.

These attributes can only be applied to specific packages, not to ``all`` or to
virtual packages.
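
As a sketch (the recipe body here is hypothetical), inside the package these
values are then visible as ordinary class attributes:

.. code-block:: python

   from spack.package import *

   class Mpileaks(Package):
       def install(self, spec, prefix):
           # Both values come from the packages.yaml snippet above:
           # self.url is the overridden download URL, and self.x == 1.
           print(self.url, self.x)
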

View File

@@ -4,7 +4,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
#: PEP440 canonical <major>.<minor>.<micro>.<devN> string
__version__ = "0.21.1"
__version__ = "0.22.0.dev0"
spack_version = __version__

View File

@@ -40,7 +40,6 @@ def _search_duplicate_compilers(error_cls):
import collections.abc
import glob
import inspect
import io
import itertools
import pathlib
import pickle
@@ -55,7 +54,6 @@ def _search_duplicate_compilers(error_cls):
import spack.repo
import spack.spec
import spack.util.crypto
import spack.util.spack_yaml as syaml
import spack.variant
#: Map an audit tag to a list of callables implementing checks
@@ -252,88 +250,6 @@ def _search_duplicate_specs_in_externals(error_cls):
return errors
@config_packages
def _deprecated_preferences(error_cls):
"""Search package preferences deprecated in v0.21 (and slated for removal in v0.22)"""
# TODO (v0.22): remove this audit as the attributes will not be allowed in config
errors = []
packages_yaml = spack.config.CONFIG.get_config("packages")
def make_error(attribute_name, config_data, summary):
s = io.StringIO()
s.write("Occurring in the following file:\n")
dict_view = syaml.syaml_dict((k, v) for k, v in config_data.items() if k == attribute_name)
syaml.dump_config(dict_view, stream=s, blame=True)
return error_cls(summary=summary, details=[s.getvalue()])
if "all" in packages_yaml and "version" in packages_yaml["all"]:
summary = "Using the deprecated 'version' attribute under 'packages:all'"
errors.append(make_error("version", packages_yaml["all"], summary))
for package_name in packages_yaml:
if package_name == "all":
continue
package_conf = packages_yaml[package_name]
for attribute in ("compiler", "providers", "target"):
if attribute not in package_conf:
continue
summary = (
f"Using the deprecated '{attribute}' attribute " f"under 'packages:{package_name}'"
)
errors.append(make_error(attribute, package_conf, summary))
return errors
@config_packages
def _avoid_mismatched_variants(error_cls):
"""Warns if variant preferences have mismatched types or names."""
errors = []
packages_yaml = spack.config.CONFIG.get_config("packages")
def make_error(config_data, summary):
s = io.StringIO()
s.write("Occurring in the following file:\n")
syaml.dump_config(config_data, stream=s, blame=True)
return error_cls(summary=summary, details=[s.getvalue()])
for pkg_name in packages_yaml:
# 'all:' must be more forgiving, since it is setting defaults for everything
if pkg_name == "all" or "variants" not in packages_yaml[pkg_name]:
continue
preferences = packages_yaml[pkg_name]["variants"]
if not isinstance(preferences, list):
preferences = [preferences]
for variants in preferences:
current_spec = spack.spec.Spec(variants)
pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)
for variant in current_spec.variants.values():
# Variant does not exist at all
if variant.name not in pkg_cls.variants:
summary = (
f"Setting a preference for the '{pkg_name}' package to the "
f"non-existing variant '{variant.name}'"
)
errors.append(make_error(preferences, summary))
continue
# Variant cannot accept this value
s = spack.spec.Spec(pkg_name)
try:
s.update_variant_validate(variant.name, variant.value)
except Exception:
summary = (
f"Setting the variant '{variant.name}' of the '{pkg_name}' package "
f"to the invalid value '{str(variant)}'"
)
errors.append(make_error(preferences, summary))
return errors
#: Sanity checks on package directives
package_directives = AuditClass(
group="packages",
@@ -860,7 +776,7 @@ def _version_constraints_are_satisfiable_by_some_version_in_repo(pkgs, error_cls
)
except Exception:
summary = (
"{0}: dependency on {1} cannot be satisfied by known versions of {1.name}"
"{0}: dependency on {1} cannot be satisfied " "by known versions of {1.name}"
).format(pkg_name, s)
details = ["happening in " + filename]
if dependency_pkg_cls is not None:
@@ -902,53 +818,6 @@ def _analyze_variants_in_directive(pkg, constraint, directive, error_cls):
return errors
@package_directives
def _named_specs_in_when_arguments(pkgs, error_cls):
"""Reports named specs in the 'when=' attribute of a directive.
Note that 'conflicts' is the only directive allowing that.
"""
errors = []
for pkg_name in pkgs:
pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)
def _extracts_errors(triggers, summary):
_errors = []
for trigger in list(triggers):
when_spec = spack.spec.Spec(trigger)
if when_spec.name is not None and when_spec.name != pkg_name:
details = [f"using '{trigger}', should be '^{trigger}'"]
_errors.append(error_cls(summary=summary, details=details))
return _errors
for dname, triggers in pkg_cls.dependencies.items():
summary = f"{pkg_name}: wrong 'when=' condition for the '{dname}' dependency"
errors.extend(_extracts_errors(triggers, summary))
for vname, (variant, triggers) in pkg_cls.variants.items():
summary = f"{pkg_name}: wrong 'when=' condition for the '{vname}' variant"
errors.extend(_extracts_errors(triggers, summary))
for provided, triggers in pkg_cls.provided.items():
summary = f"{pkg_name}: wrong 'when=' condition for the '{provided}' virtual"
errors.extend(_extracts_errors(triggers, summary))
for _, triggers in pkg_cls.requirements.items():
triggers = [when_spec for when_spec, _, _ in triggers]
summary = f"{pkg_name}: wrong 'when=' condition in 'requires' directive"
errors.extend(_extracts_errors(triggers, summary))
triggers = list(pkg_cls.patches)
summary = f"{pkg_name}: wrong 'when=' condition in 'patch' directives"
errors.extend(_extracts_errors(triggers, summary))
triggers = list(pkg_cls.resources)
summary = f"{pkg_name}: wrong 'when=' condition in 'resource' directives"
errors.extend(_extracts_errors(triggers, summary))
return llnl.util.lang.dedupe(errors)
#: Sanity checks on package directives
external_detection = AuditClass(
group="externals",

View File

@@ -69,7 +69,6 @@
BUILD_CACHE_RELATIVE_PATH = "build_cache"
BUILD_CACHE_KEYS_RELATIVE_PATH = "_pgp"
CURRENT_BUILD_CACHE_LAYOUT_VERSION = 1
FORWARD_COMPAT_BUILD_CACHE_LAYOUT_VERSION = 2
class BuildCacheDatabase(spack_db.Database):
@@ -1697,7 +1696,7 @@ def download_tarball(spec, unsigned=False, mirrors_for_spec=None):
try:
_get_valid_spec_file(
local_specfile_stage.save_filename,
FORWARD_COMPAT_BUILD_CACHE_LAYOUT_VERSION,
CURRENT_BUILD_CACHE_LAYOUT_VERSION,
)
except InvalidMetadataFile as e:
tty.warn(
@@ -1738,7 +1737,7 @@ def download_tarball(spec, unsigned=False, mirrors_for_spec=None):
try:
_get_valid_spec_file(
local_specfile_path, FORWARD_COMPAT_BUILD_CACHE_LAYOUT_VERSION
local_specfile_path, CURRENT_BUILD_CACHE_LAYOUT_VERSION
)
except InvalidMetadataFile as e:
tty.warn(
@@ -2027,12 +2026,11 @@ def _extract_inner_tarball(spec, filename, extract_to, unsigned, remote_checksum
def _tar_strip_component(tar: tarfile.TarFile, prefix: str):
"""Yield all members of tarfile that start with given prefix, and strip that prefix (including
symlinks)"""
"""Strip the top-level directory `prefix` from the member names in a tarfile."""
# Including trailing /, otherwise we end up with absolute paths.
regex = re.compile(re.escape(prefix) + "/*")
# Only yield members in the package prefix.
# Remove the top-level directory from the member (link)names.
# Note: when a tarfile is created, relative in-prefix symlinks are
# expanded to matching member names of tarfile entries. So, we have
# to ensure that those are updated too.
@@ -2040,14 +2038,12 @@ def _tar_strip_component(tar: tarfile.TarFile, prefix: str):
# them.
for m in tar.getmembers():
result = regex.match(m.name)
if not result:
continue
assert result is not None
m.name = m.name[result.end() :]
if m.linkname:
result = regex.match(m.linkname)
if result:
m.linkname = m.linkname[result.end() :]
yield m
def extract_tarball(spec, download_result, unsigned=False, force=False, timer=timer.NULL_TIMER):
@@ -2071,7 +2067,7 @@ def extract_tarball(spec, download_result, unsigned=False, force=False, timer=ti
specfile_path = download_result["specfile_stage"].save_filename
spec_dict, layout_version = _get_valid_spec_file(
specfile_path, FORWARD_COMPAT_BUILD_CACHE_LAYOUT_VERSION
specfile_path, CURRENT_BUILD_CACHE_LAYOUT_VERSION
)
bchecksum = spec_dict["binary_cache_checksum"]
@@ -2090,7 +2086,7 @@ def extract_tarball(spec, download_result, unsigned=False, force=False, timer=ti
_delete_staged_downloads(download_result)
shutil.rmtree(tmpdir)
raise e
elif 1 <= layout_version <= 2:
elif layout_version == 1:
# Newer buildcache layout: the .spack file contains just
# in the install tree, the signature, if it exists, is
# wrapped around the spec.json at the root. If sig verify
@@ -2117,10 +2113,8 @@ def extract_tarball(spec, download_result, unsigned=False, force=False, timer=ti
try:
with closing(tarfile.open(tarfile_path, "r")) as tar:
# Remove install prefix from tarfil to extract directly into spec.prefix
tar.extractall(
path=spec.prefix,
members=_tar_strip_component(tar, prefix=_ensure_common_prefix(tar)),
)
_tar_strip_component(tar, prefix=_ensure_common_prefix(tar))
tar.extractall(path=spec.prefix)
except Exception:
shutil.rmtree(spec.prefix, ignore_errors=True)
_delete_staged_downloads(download_result)
@@ -2155,47 +2149,20 @@ def extract_tarball(spec, download_result, unsigned=False, force=False, timer=ti
def _ensure_common_prefix(tar: tarfile.TarFile) -> str:
# Find the lowest `binary_distribution` file (hard-coded forward slash is on purpose).
binary_distribution = min(
(
e.name
for e in tar.getmembers()
if e.isfile() and e.name.endswith(".spack/binary_distribution")
),
key=len,
default=None,
)
# Get the shortest length directory.
common_prefix = min((e.name for e in tar.getmembers() if e.isdir()), key=len, default=None)
if binary_distribution is None:
raise ValueError("Tarball is not a Spack package, missing binary_distribution file")
if common_prefix is None:
raise ValueError("Tarball does not contain a common prefix")
pkg_path = pathlib.PurePosixPath(binary_distribution).parent.parent
# Even the most ancient Spack version has required to list the dir of the package itself, so
# guard against broken tarballs where `path.parent.parent` is empty.
if pkg_path == pathlib.PurePosixPath():
raise ValueError("Invalid tarball, missing package prefix dir")
pkg_prefix = str(pkg_path)
# Ensure all tar entries are in the pkg_prefix dir, and if they're not, they should be parent
# dirs of it.
has_prefix = False
# Validate that each file starts with the prefix
for member in tar.getmembers():
stripped = member.name.rstrip("/")
if not (
stripped.startswith(pkg_prefix) or member.isdir() and pkg_prefix.startswith(stripped)
):
raise ValueError(f"Tarball contains file {stripped} outside of prefix {pkg_prefix}")
if member.isdir() and stripped == pkg_prefix:
has_prefix = True
if not member.name.startswith(common_prefix):
raise ValueError(
f"Tarball contains file {member.name} outside of prefix {common_prefix}"
)
# This is technically not required, but let's be defensive about the existence of the package
# prefix dir.
if not has_prefix:
raise ValueError(f"Tarball does not contain a common prefix {pkg_prefix}")
return pkg_prefix
return common_prefix
def install_root_node(spec, unsigned=False, force=False, sha256=None):

View File

@@ -213,8 +213,7 @@ def _root_spec(spec_str: str) -> str:
if str(spack.platforms.host()) == "darwin":
spec_str += " %apple-clang"
elif str(spack.platforms.host()) == "windows":
# TODO (johnwparent): Remove version constraint when clingo patch is up
spec_str += " %msvc@:19.37"
spec_str += " %msvc"
else:
spec_str += " %gcc"

View File

@@ -324,29 +324,19 @@ def set_compiler_environment_variables(pkg, env):
# ttyout, ttyerr, etc.
link_dir = spack.paths.build_env_path
# Set SPACK compiler variables so that our wrapper knows what to
# call. If there is no compiler configured then use a default
# wrapper which will emit an error if it is used.
# Set SPACK compiler variables so that our wrapper knows what to call
if compiler.cc:
env.set("SPACK_CC", compiler.cc)
env.set("CC", os.path.join(link_dir, compiler.link_paths["cc"]))
else:
env.set("CC", os.path.join(link_dir, "cc"))
if compiler.cxx:
env.set("SPACK_CXX", compiler.cxx)
env.set("CXX", os.path.join(link_dir, compiler.link_paths["cxx"]))
else:
env.set("CC", os.path.join(link_dir, "c++"))
if compiler.f77:
env.set("SPACK_F77", compiler.f77)
env.set("F77", os.path.join(link_dir, compiler.link_paths["f77"]))
else:
env.set("F77", os.path.join(link_dir, "f77"))
if compiler.fc:
env.set("SPACK_FC", compiler.fc)
env.set("FC", os.path.join(link_dir, compiler.link_paths["fc"]))
else:
env.set("FC", os.path.join(link_dir, "fc"))
# Set SPACK compiler rpath flags so that our wrapper knows what to use
env.set("SPACK_CC_RPATH_ARG", compiler.cc_rpath_arg)
@@ -753,16 +743,15 @@ def setup_package(pkg, dirty, context: Context = Context.BUILD):
set_compiler_environment_variables(pkg, env_mods)
set_wrapper_variables(pkg, env_mods)
# Platform specific setup goes before package specific setup. This is for setting
# defaults like MACOSX_DEPLOYMENT_TARGET on macOS.
platform = spack.platforms.by_name(pkg.spec.architecture.platform)
target = platform.target(pkg.spec.architecture.target)
platform.setup_platform_environment(pkg, env_mods)
tty.debug("setup_package: grabbing modifications from dependencies")
env_mods.extend(setup_context.get_env_modifications())
tty.debug("setup_package: collected all modifications from dependencies")
# architecture specific setup
platform = spack.platforms.by_name(pkg.spec.architecture.platform)
target = platform.target(pkg.spec.architecture.target)
platform.setup_platform_environment(pkg, env_mods)
if context == Context.TEST:
env_mods.prepend_path("PATH", ".")
elif context == Context.BUILD and not dirty and not env_mods.is_unset("CPATH"):
@@ -1333,7 +1322,7 @@ def make_stack(tb, stack=None):
# don't provide context if the code is actually in the base classes.
obj = frame.f_locals["self"]
func = getattr(obj, tb.tb_frame.f_code.co_name, "")
if func and hasattr(func, "__qualname__"):
if func:
typename, *_ = func.__qualname__.partition(".")
if isinstance(obj, CONTEXT_BASES) and typename not in basenames:
break

View File

@@ -34,6 +34,11 @@ def cmake_cache_option(name, boolean_value, comment="", force=False):
return 'set({0} {1} CACHE BOOL "{2}"{3})\n'.format(name, value, comment, force_str)
def cmake_cache_filepath(name, value, comment=""):
"""Generate a string for a cmake cache variable of type FILEPATH"""
return 'set({0} "{1}" CACHE FILEPATH "{2}")\n'.format(name, value, comment)
class CachedCMakeBuilder(CMakeBuilder):
#: Phases of a Cached CMake package
#: Note: the initconfig phase is used for developer builds as a final phase to stop on
@@ -257,6 +262,15 @@ def initconfig_hardware_entries(self):
entries.append(
cmake_cache_path("HIP_CXX_COMPILER", "{0}".format(self.spec["hip"].hipcc))
)
llvm_bin = spec["llvm-amdgpu"].prefix.bin
llvm_prefix = spec["llvm-amdgpu"].prefix
# Some ROCm systems seem to point to /<path>/rocm-<ver>/ and
# others point to /<path>/rocm-<ver>/llvm
if os.path.basename(os.path.normpath(llvm_prefix)) != "llvm":
llvm_bin = os.path.join(llvm_prefix, "llvm/bin/")
entries.append(
cmake_cache_filepath("CMAKE_HIP_COMPILER", os.path.join(llvm_bin, "clang++"))
)
archs = self.spec.variants["amdgpu_target"].value
if archs[0] != "none":
arch_str = ";".join(archs)
@@ -277,7 +291,7 @@ def std_initconfig_entries(self):
"#------------------{0}".format("-" * 60),
"# CMake executable path: {0}".format(self.pkg.spec["cmake"].command.path),
"#------------------{0}\n".format("-" * 60),
cmake_cache_path("CMAKE_PREFIX_PATH", cmake_prefix_path),
cmake_cache_string("CMAKE_PREFIX_PATH", cmake_prefix_path),
self.define_cmake_cache_from_variant("CMAKE_BUILD_TYPE", "build_type"),
]

View File

@@ -46,7 +46,22 @@
from spack.reporters import CDash, CDashConfiguration
from spack.reporters.cdash import build_stamp as cdash_build_stamp
JOB_RETRY_CONDITIONS = ["always"]
# See https://docs.gitlab.com/ee/ci/yaml/#retry for descriptions of conditions
JOB_RETRY_CONDITIONS = [
# "always",
"unknown_failure",
"script_failure",
"api_failure",
"stuck_or_timeout_failure",
"runner_system_failure",
"runner_unsupported",
"stale_schedule",
# "job_execution_timeout",
"archived_failure",
"unmet_prerequisites",
"scheduler_failure",
"data_integrity_failure",
]
TEMP_STORAGE_MIRROR_NAME = "ci_temporary_mirror"
SPACK_RESERVED_TAGS = ["public", "protected", "notary"]

View File

@@ -2,8 +2,6 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import warnings
import llnl.util.tty as tty
import llnl.util.tty.colify
import llnl.util.tty.color as cl
@@ -54,10 +52,8 @@ def setup_parser(subparser):
def configs(parser, args):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
reports = spack.audit.run_group(args.subcommand)
_process_reports(reports)
reports = spack.audit.run_group(args.subcommand)
_process_reports(reports)
def packages(parser, args):

View File

@@ -61,7 +61,7 @@ def graph(parser, args):
args.dot = True
env = ev.active_environment()
if env:
specs = env.concrete_roots()
specs = env.all_specs()
else:
specs = spack.store.STORE.db.query()

View File

@@ -327,7 +327,7 @@ def _variants_by_name_when(pkg):
"""Adaptor to get variants keyed by { name: { when: { [Variant...] } }."""
# TODO: replace with pkg.variants_by_name(when=True) when unified directive dicts are merged.
variants = {}
for name, (variant, whens) in sorted(pkg.variants.items()):
for name, (variant, whens) in pkg.variants.items():
for when in whens:
variants.setdefault(name, {}).setdefault(when, []).append(variant)
return variants

View File

@@ -154,14 +154,6 @@ def add_compilers_to_config(compilers, scope=None, init_config=True):
"""
compiler_config = get_compiler_config(scope, init_config)
for compiler in compilers:
if not compiler.cc:
tty.debug(f"{compiler.spec} does not have a C compiler")
if not compiler.cxx:
tty.debug(f"{compiler.spec} does not have a C++ compiler")
if not compiler.f77:
tty.debug(f"{compiler.spec} does not have a Fortran77 compiler")
if not compiler.fc:
tty.debug(f"{compiler.spec} does not have a Fortran compiler")
compiler_config.append(_to_dict(compiler))
spack.config.set("compilers", compiler_config, scope=scope)

View File

@@ -380,13 +380,14 @@ def _print_timer(pre: str, pkg_id: str, timer: timer.BaseTimer) -> None:
def _install_from_cache(
pkg: "spack.package_base.PackageBase", explicit: bool, unsigned: bool = False
pkg: "spack.package_base.PackageBase", cache_only: bool, explicit: bool, unsigned: bool = False
) -> bool:
"""
Install the package from binary cache
Extract the package from binary cache
Args:
pkg: package to install from the binary cache
cache_only: only extract from binary cache
explicit: ``True`` if installing the package was explicitly
requested by the user, otherwise, ``False``
unsigned: ``True`` if binary package signatures to be checked,
@@ -398,11 +399,15 @@ def _install_from_cache(
installed_from_cache = _try_install_from_binary_cache(
pkg, explicit, unsigned=unsigned, timer=t
)
pkg_id = package_id(pkg)
if not installed_from_cache:
pre = f"No binary for {pkg_id} found"
if cache_only:
tty.die(f"{pre} when cache-only specified")
tty.msg(f"{pre}: installing from source")
return False
t.stop()
pkg_id = package_id(pkg)
tty.debug(f"Successfully extracted {pkg_id} from binary cache")
_write_timer_json(pkg, t, True)
@@ -1330,6 +1335,7 @@ def _prepare_for_install(self, task: BuildTask) -> None:
"""
install_args = task.request.install_args
keep_prefix = install_args.get("keep_prefix")
restage = install_args.get("restage")
# Make sure the package is ready to be locally installed.
self._ensure_install_ready(task.pkg)
@@ -1361,6 +1367,10 @@ def _prepare_for_install(self, task: BuildTask) -> None:
else:
tty.debug(f"{task.pkg_id} is partially installed")
# Destroy the stage for a locally installed, non-DIYStage, package
if restage and task.pkg.stage.managed_by_spack:
task.pkg.stage.destroy()
if (
rec
and installed_in_db
@@ -1661,16 +1671,11 @@ def _install_task(self, task: BuildTask, install_status: InstallStatus) -> None:
task.status = STATUS_INSTALLING
# Use the binary cache if requested
if use_cache:
if _install_from_cache(pkg, explicit, unsigned):
self._update_installed(task)
if task.compiler:
self._add_compiler_package_to_config(pkg)
return
elif cache_only:
raise InstallError("No binary found when cache-only was specified", pkg=pkg)
else:
tty.msg(f"No binary for {pkg_id} found: installing from source")
if use_cache and _install_from_cache(pkg, cache_only, explicit, unsigned):
self._update_installed(task)
if task.compiler:
self._add_compiler_package_to_config(pkg)
return
pkg.run_tests = tests if isinstance(tests, bool) else pkg.name in tests
@@ -1686,10 +1691,6 @@ def _install_task(self, task: BuildTask, install_status: InstallStatus) -> None:
try:
self._setup_install_dir(pkg)
# Create stage object now and let it be serialized for the child process. That
# way monkeypatch in tests works correctly.
pkg.stage
# Create a child process to do the actual installation.
# Preserve verbosity settings across installs.
spack.package_base.PackageBase._verbose = spack.build_environment.start_build_process(
@@ -2222,6 +2223,11 @@ def install(self) -> None:
if not keep_prefix and not action == InstallAction.OVERWRITE:
pkg.remove_prefix()
# The subprocess *may* have removed the build stage. Mark it
# not created so that the next time pkg.stage is invoked, we
# check the filesystem for it.
pkg.stage.created = False
# Perform basic task cleanup for the installed spec to
# include downgrading the write to a read lock
self._cleanup_task(pkg)
@@ -2291,9 +2297,6 @@ def __init__(self, pkg: "spack.package_base.PackageBase", install_args: dict):
# whether to keep the build stage after installation
self.keep_stage = install_args.get("keep_stage", False)
# whether to restage
self.restage = install_args.get("restage", False)
# whether to skip the patch phase
self.skip_patch = install_args.get("skip_patch", False)
@@ -2324,13 +2327,9 @@ def __init__(self, pkg: "spack.package_base.PackageBase", install_args: dict):
def run(self) -> bool:
"""Main entry point from ``build_process`` to kick off install in child."""
stage = self.pkg.stage
stage.keep = self.keep_stage
self.pkg.stage.keep = self.keep_stage
if self.restage:
stage.destroy()
with stage:
with self.pkg.stage:
self.timer.start("stage")
if not self.fake:

View File

@@ -1016,16 +1016,14 @@ def _main(argv=None):
bootstrap_context = bootstrap.ensure_bootstrap_configuration()
with bootstrap_context:
return finish_parse_and_run(parser, cmd_name, args, env_format_error)
return finish_parse_and_run(parser, cmd_name, args.command, env_format_error)
def finish_parse_and_run(parser, cmd_name, main_args, env_format_error):
def finish_parse_and_run(parser, cmd_name, cmd, env_format_error):
"""Finish parsing after we know the command to run."""
# add the found command to the parser and re-run then re-parse
command = parser.add_command(cmd_name)
args, unknown = parser.parse_known_args(main_args.command)
# we need to inherit verbose since the install command checks for it
args.verbose = main_args.verbose
args, unknown = parser.parse_known_args()
# Now that we know what command this is and what its args are, determine
# whether we can continue with a bad environment and raise if not.

View File

@@ -93,7 +93,7 @@ def _filter_compiler_wrappers_impl(pkg_or_builder):
replacements = []
for idx, (env_var, compiler_path) in enumerate(compiler_vars):
if env_var in os.environ and compiler_path is not None:
if env_var in os.environ:
# filter spack wrapper and links to spack wrapper in case
# build system runs realpath
wrapper = os.environ[env_var]

View File

@@ -32,6 +32,7 @@
from spack.build_systems.bundle import BundlePackage
from spack.build_systems.cached_cmake import (
CachedCMakePackage,
cmake_cache_filepath,
cmake_cache_option,
cmake_cache_path,
cmake_cache_string,

View File

@@ -69,8 +69,6 @@
"patternProperties": {r"\w+": {}},
}
REQUIREMENT_URL = "https://spack.readthedocs.io/en/latest/packages_yaml.html#package-requirements"
#: Properties for inclusion in other schemas
properties = {
"packages": {
@@ -119,7 +117,7 @@
"properties": ["version"],
"message": "setting version preferences in the 'all' section of packages.yaml "
"is deprecated and will be removed in v0.22\n\n\tThese preferences "
"will be ignored by Spack. You can set them only in package-specific sections "
"will be ignored by Spack. You can set them only in package specific sections "
"of the same file.\n",
"error": False,
},
@@ -164,14 +162,10 @@
},
"deprecatedProperties": {
"properties": ["target", "compiler", "providers"],
"message": "setting 'compiler:', 'target:' or 'provider:' preferences in "
"a package-specific section of packages.yaml is deprecated, and will be "
"removed in v0.22.\n\n\tThese preferences will be ignored by Spack, and "
"can be set only in the 'all' section of the same file. "
"You can run:\n\n\t\t$ spack audit configs\n\n\tto get better diagnostics, "
"including files:lines where the deprecated attributes are used.\n\n"
"\tUse requirements to enforce conditions on specific packages: "
f"{REQUIREMENT_URL}\n",
"message": "setting compiler, target or provider preferences in a package "
"specific section of packages.yaml is deprecated, and will be removed in "
"v0.22.\n\n\tThese preferences will be ignored by Spack. You "
"can set them only in the 'all' section of the same file.\n",
"error": False,
},
}

View File

@@ -713,7 +713,7 @@ def _get_cause_tree(
(condition_id, set_id) in which the latter idea means that the condition represented by
the former held in the condition set represented by the latter.
"""
seen.add(cause)
seen = set(seen) | set(cause)
parents = [c for e, c in condition_causes if e == cause and c not in seen]
local = "required because %s " % conditions[cause[0]]
@@ -812,14 +812,7 @@ def on_model(model):
errors = sorted(
[(int(priority), msg, args) for priority, msg, *args in error_args], reverse=True
)
try:
msg = self.message(errors)
except Exception as e:
msg = (
f"unexpected error during concretization [{str(e)}]. "
f"Please report a bug at https://github.com/spack/spack/issues"
)
raise spack.error.SpackError(msg)
msg = self.message(errors)
raise UnsatisfiableSpecError(msg)
@@ -1013,6 +1006,14 @@ def on_model(model):
# record the possible dependencies in the solve
result.possible_dependencies = setup.pkgs
# print any unknown functions in the model
for sym in best_model:
if sym.name not in ("attr", "error", "opt_criterion"):
tty.debug(
"UNKNOWN SYMBOL: %s(%s)"
% (sym.name, ", ".join([str(s) for s in intermediate_repr(sym.arguments)]))
)
elif cores:
result.control = self.control
result.cores.extend(cores)
@@ -1117,8 +1118,11 @@ def __init__(self, tests=False):
self.reusable_and_possible = ConcreteSpecsByHash()
self._id_counter = itertools.count()
# id for dummy variables
self._condition_id_counter = itertools.count()
self._trigger_id_counter = itertools.count()
self._trigger_cache = collections.defaultdict(dict)
self._effect_id_counter = itertools.count()
self._effect_cache = collections.defaultdict(dict)
# Caches to optimize the setup phase of the solver
@@ -1514,7 +1518,7 @@ def condition(
# In this way, if a condition can't be emitted but the exception is handled in the caller,
# we won't emit partial facts.
condition_id = next(self._id_counter)
condition_id = next(self._condition_id_counter)
self.gen.fact(fn.pkg_fact(named_cond.name, fn.condition(condition_id)))
self.gen.fact(fn.condition_reason(condition_id, msg))
@@ -1522,7 +1526,7 @@ def condition(
named_cond_key = (str(named_cond), transform_required)
if named_cond_key not in cache:
trigger_id = next(self._id_counter)
trigger_id = next(self._trigger_id_counter)
requirements = self.spec_clauses(named_cond, body=True, required_from=name)
if transform_required:
@@ -1538,7 +1542,7 @@ def condition(
cache = self._effect_cache[named_cond.name]
imposed_spec_key = (str(imposed_spec), transform_imposed)
if imposed_spec_key not in cache:
effect_id = next(self._id_counter)
effect_id = next(self._effect_id_counter)
requirements = self.spec_clauses(imposed_spec, body=False, required_from=name)
if transform_imposed:
@@ -1828,13 +1832,7 @@ def preferred_variants(self, pkg_name):
# perform validation of the variant and values
spec = spack.spec.Spec(pkg_name)
try:
spec.update_variant_validate(variant_name, values)
except (spack.variant.InvalidVariantValueError, KeyError, ValueError) as e:
tty.debug(
f"[SETUP]: rejected {str(variant)} as a preference for {pkg_name}: {str(e)}"
)
continue
spec.update_variant_validate(variant_name, values)
for value in values:
self.variant_values_from_specs.add((pkg_name, variant.name, value))
@@ -2552,8 +2550,12 @@ def setup(
reuse: list of concrete specs that can be reused
allow_deprecated: if True adds deprecated versions into the solve
"""
self._condition_id_counter = itertools.count()
# preliminary checks
check_packages_exist(specs)
# get list of all possible dependencies
self.possible_virtuals = set(x.name for x in specs if x.virtual)
node_counter = _create_counter(specs, tests=self.tests)
@@ -2673,21 +2675,23 @@ def setup(
def literal_specs(self, specs):
for spec in specs:
self.gen.h2("Spec: %s" % str(spec))
condition_id = next(self._id_counter)
trigger_id = next(self._id_counter)
condition_id = next(self._condition_id_counter)
trigger_id = next(self._trigger_id_counter)
# Special condition triggered by "literal_solved"
self.gen.fact(fn.literal(trigger_id))
self.gen.fact(fn.pkg_fact(spec.name, fn.condition_trigger(condition_id, trigger_id)))
self.gen.fact(fn.condition_reason(condition_id, f"{spec} requested explicitly"))
self.gen.fact(fn.condition_reason(condition_id, f"{spec} requested from CLI"))
# Effect imposes the spec
imposed_spec_key = str(spec), None
cache = self._effect_cache[spec.name]
if imposed_spec_key in cache:
effect_id, requirements = cache[imposed_spec_key]
else:
effect_id = next(self._id_counter)
requirements = self.spec_clauses(spec)
msg = (
"literal specs have different requirements. clear cache before computing literals"
)
assert imposed_spec_key not in cache, msg
effect_id = next(self._effect_id_counter)
requirements = self.spec_clauses(spec)
root_name = spec.name
for clause in requirements:
clause_name = clause.args[0]
@@ -2788,11 +2792,9 @@ class SpecBuilder:
r"^.*_propagate$",
r"^.*_satisfies$",
r"^.*_set$",
r"^dependency_holds$",
r"^node_compiler$",
r"^package_hash$",
r"^root$",
r"^track_dependencies$",
r"^variant_default_value_from_cli$",
r"^virtual_node$",
r"^virtual_root$",

View File

@@ -213,19 +213,6 @@ def __call__(self, match):
return clr.colorize(re.sub(_SEPARATORS, insert_color(), str(spec)) + "@.")
OLD_STYLE_FMT_RE = re.compile(r"\${[A-Z]+}")
def ensure_modern_format_string(fmt: str) -> None:
"""Ensure that the format string does not contain old ${...} syntax."""
result = OLD_STYLE_FMT_RE.search(fmt)
if result:
raise SpecFormatStringError(
f"Format string `{fmt}` contains old syntax `{result.group(0)}`. "
"This is no longer supported."
)
@lang.lazy_lexicographic_ordering
class ArchSpec:
"""Aggregate the target platform, the operating system and the target microarchitecture."""
@@ -4373,7 +4360,6 @@ def format(self, format_string=DEFAULT_FORMAT, **kwargs):
that accepts a string and returns another one
"""
ensure_modern_format_string(format_string)
color = kwargs.get("color", False)
transform = kwargs.get("transform", {})

View File

@@ -201,9 +201,6 @@ def dummy_prefix(tmpdir):
with open(data, "w") as f:
f.write("hello world")
with open(p.join(".spack", "binary_distribution"), "w") as f:
f.write("{}")
os.symlink("app", relative_app_link)
os.symlink(app, absolute_app_link)
@@ -1027,9 +1024,7 @@ def test_tarball_common_prefix(dummy_prefix, tmpdir):
bindist._tar_strip_component(tar, common_prefix)
# Extract into prefix2
tar.extractall(
path="prefix2", members=bindist._tar_strip_component(tar, common_prefix)
)
tar.extractall(path="prefix2")
# Verify files are all there at the correct level.
assert set(os.listdir("prefix2")) == {"bin", "share", ".spack"}
@@ -1049,30 +1044,13 @@ def test_tarball_common_prefix(dummy_prefix, tmpdir):
)
def test_tarfile_missing_binary_distribution_file(tmpdir):
"""A tarfile that does not contain a .spack/binary_distribution file cannot be
used to install."""
with tmpdir.as_cwd():
# An empty .spack dir.
with tarfile.open("empty.tar", mode="w") as tar:
tarinfo = tarfile.TarInfo(name="example/.spack")
tarinfo.type = tarfile.DIRTYPE
tar.addfile(tarinfo)
with pytest.raises(ValueError, match="missing binary_distribution file"):
bindist._ensure_common_prefix(tarfile.open("empty.tar", mode="r"))
def test_tarfile_without_common_directory_prefix_fails(tmpdir):
"""A tarfile that only contains files without a common package directory
should fail to extract, as we won't know where to put the files."""
with tmpdir.as_cwd():
# Create a broken tarball with just a file, no directories.
with tarfile.open("empty.tar", mode="w") as tar:
tar.addfile(
tarfile.TarInfo(name="example/.spack/binary_distribution"),
fileobj=io.BytesIO(b"hello"),
)
tar.addfile(tarfile.TarInfo(name="example/file"), fileobj=io.BytesIO(b"hello"))
with pytest.raises(ValueError, match="Tarball does not contain a common prefix"):
bindist._ensure_common_prefix(tarfile.open("empty.tar", mode="r"))
@@ -1188,7 +1166,7 @@ def test_get_valid_spec_file_no_json(tmp_path, filename):
def test_download_tarball_with_unsupported_layout_fails(tmp_path, mutable_config, capsys):
layout_version = bindist.FORWARD_COMPAT_BUILD_CACHE_LAYOUT_VERSION + 1
layout_version = bindist.CURRENT_BUILD_CACHE_LAYOUT_VERSION + 1
spec = Spec("gmake@4.4.1%gcc@13.1.0 arch=linux-ubuntu23.04-zen2")
spec._mark_concrete()
spec_dict = spec.to_dict()

View File

@@ -25,7 +25,7 @@ def test_error_when_multiple_specs_are_given():
assert "only takes one spec" in output
@pytest.mark.parametrize("args", [("--", "/bin/sh", "-c", "echo test"), ("--",), ()])
@pytest.mark.parametrize("args", [("--", "/bin/bash", "-c", "echo test"), ("--",), ()])
@pytest.mark.usefixtures("config", "mock_packages", "working_env")
def test_build_env_requires_a_spec(args):
output = build_env(*args, fail_on_error=False)
@@ -35,7 +35,7 @@ def test_build_env_requires_a_spec(args):
_out_file = "env.out"
@pytest.mark.parametrize("shell", ["pwsh", "bat"] if sys.platform == "win32" else ["sh"])
@pytest.mark.parametrize("shell", ["pwsh", "bat"] if sys.platform == "win32" else ["bash"])
@pytest.mark.usefixtures("config", "mock_packages", "working_env")
def test_dump(shell_as, shell, tmpdir):
with tmpdir.as_cwd():

View File

@@ -2000,7 +2000,7 @@ def test_ci_reproduce(
install_script = os.path.join(working_dir.strpath, "install.sh")
with open(install_script, "w") as fd:
fd.write("#!/bin/sh\n\n#fake install\nspack install blah\n")
fd.write("#!/bin/bash\n\n#fake install\nspack install blah\n")
spack_info_file = os.path.join(working_dir.strpath, "spack_info.txt")
with open(spack_info_file, "w") as fd:

View File

@@ -3538,7 +3538,7 @@ def test_environment_created_in_users_location(mutable_mock_env_path, tmp_path):
assert os.path.isdir(os.path.join(env_dir, dir_name))
def test_environment_created_from_lockfile_has_view(mock_packages, temporary_store, tmpdir):
def test_environment_created_from_lockfile_has_view(mock_packages, tmpdir):
"""When an env is created from a lockfile, a view should be generated for it"""
env_a = str(tmpdir.join("a"))
env_b = str(tmpdir.join("b"))

View File

@@ -43,7 +43,7 @@ def test_find_gpg(cmd_name, version, tmpdir, mock_gnupghome, monkeypatch):
f.write(TEMPLATE.format(version=version))
fs.set_executable(fname)
monkeypatch.setenv("PATH", str(tmpdir))
monkeypatch.setitem(os.environ, "PATH", str(tmpdir))
if version == "undetectable" or version.endswith("1.3.4"):
with pytest.raises(spack.util.gpg.SpackGPGError):
spack.util.gpg.init(force=True)
@@ -54,7 +54,7 @@ def test_find_gpg(cmd_name, version, tmpdir, mock_gnupghome, monkeypatch):
def test_no_gpg_in_path(tmpdir, mock_gnupghome, monkeypatch, mutable_config):
monkeypatch.setenv("PATH", str(tmpdir))
monkeypatch.setitem(os.environ, "PATH", str(tmpdir))
bootstrap("disable")
with pytest.raises(RuntimeError):
spack.util.gpg.init(force=True)

View File

@@ -904,12 +904,13 @@ def test_install_help_cdash():
@pytest.mark.disable_clean_stage_check
def test_cdash_auth_token(tmpdir, mock_fetch, install_mockery, monkeypatch, capfd):
def test_cdash_auth_token(tmpdir, mock_fetch, install_mockery, capfd):
# capfd interferes with Spack's capturing
with tmpdir.as_cwd(), capfd.disabled():
monkeypatch.setenv("SPACK_CDASH_AUTH_TOKEN", "asdf")
out = install("-v", "--log-file=cdash_reports", "--log-format=cdash", "a")
assert "Using CDash auth token from environment" in out
with tmpdir.as_cwd():
with capfd.disabled():
os.environ["SPACK_CDASH_AUTH_TOKEN"] = "asdf"
out = install("-v", "--log-file=cdash_reports", "--log-format=cdash", "a")
assert "Using CDash auth token from environment" in out
@pytest.mark.not_on_windows("Windows log_output logs phase header out of order")

View File

@@ -253,8 +253,8 @@ def test_get_compiler_link_paths_load_env(working_env, monkeypatch, tmpdir):
gcc = str(tmpdir.join("gcc"))
with open(gcc, "w") as f:
f.write(
"""#!/bin/sh
if [ "$ENV_SET" = "1" ] && [ "$MODULE_LOADED" = "1" ]; then
"""#!/bin/bash
if [[ $ENV_SET == "1" && $MODULE_LOADED == "1" ]]; then
echo '"""
+ no_flag_output
+ """'
@@ -701,8 +701,8 @@ def test_compiler_get_real_version(working_env, monkeypatch, tmpdir):
gcc = str(tmpdir.join("gcc"))
with open(gcc, "w") as f:
f.write(
"""#!/bin/sh
if [ "$CMP_ON" = "1" ]; then
"""#!/bin/bash
if [[ $CMP_ON == "1" ]]; then
echo "$CMP_VER"
fi
"""
@@ -747,8 +747,8 @@ def test_compiler_get_real_version_fails(working_env, monkeypatch, tmpdir):
gcc = str(tmpdir.join("gcc"))
with open(gcc, "w") as f:
f.write(
"""#!/bin/sh
if [ "$CMP_ON" = "1" ]; then
"""#!/bin/bash
if [[ $CMP_ON == "1" ]]; then
echo "$CMP_VER"
fi
"""
@@ -801,7 +801,7 @@ def test_compiler_flags_use_real_version(working_env, monkeypatch, tmpdir):
gcc = str(tmpdir.join("gcc"))
with open(gcc, "w") as f:
f.write(
"""#!/bin/sh
"""#!/bin/bash
echo "4.4.4"
"""
) # Version for which c++11 flag is -std=c++0x

View File

@@ -203,9 +203,7 @@ def change(self, changes=None):
# TODO: in case tests using this fixture start failing.
if sys.modules.get("spack.pkg.changing.changing"):
del sys.modules["spack.pkg.changing.changing"]
if sys.modules.get("spack.pkg.changing.root"):
del sys.modules["spack.pkg.changing.root"]
if sys.modules.get("spack.pkg.changing"):
del sys.modules["spack.pkg.changing"]
# Change the recipe
@@ -1606,9 +1604,7 @@ def test_installed_version_is_selected_only_for_reuse(
assert not new_root["changing"].satisfies("@1.0")
@pytest.mark.regression("28259")
def test_reuse_with_unknown_namespace_dont_raise(
self, temporary_store, mock_custom_repository
):
def test_reuse_with_unknown_namespace_dont_raise(self, mock_custom_repository):
with spack.repo.use_repositories(mock_custom_repository, override=False):
s = Spec("c").concretized()
assert s.namespace != "builtin.mock"
@@ -1619,8 +1615,8 @@ def test_reuse_with_unknown_namespace_dont_raise(
assert s.namespace == "builtin.mock"
@pytest.mark.regression("28259")
def test_reuse_with_unknown_package_dont_raise(self, tmpdir, temporary_store, monkeypatch):
builder = spack.repo.MockRepositoryBuilder(tmpdir.mkdir("mock.repo"), namespace="myrepo")
def test_reuse_with_unknown_package_dont_raise(self, tmpdir, monkeypatch):
builder = spack.repo.MockRepositoryBuilder(tmpdir, namespace="myrepo")
builder.add_package("c")
with spack.repo.use_repositories(builder.root, override=False):
s = Spec("c").concretized()

View File

@@ -16,8 +16,8 @@
version_error_messages = [
"Cannot satisfy 'fftw@:1.0' and 'fftw@1.1:",
" required because quantum-espresso depends on fftw@:1.0",
" required because quantum-espresso ^fftw@1.1: requested explicitly",
" required because quantum-espresso ^fftw@1.1: requested explicitly",
" required because quantum-espresso ^fftw@1.1: requested from CLI",
" required because quantum-espresso ^fftw@1.1: requested from CLI",
]
external_error_messages = [
@@ -30,15 +30,15 @@
" which was not satisfied"
),
" 'quantum-espresso+veritas' required",
" required because quantum-espresso+veritas requested explicitly",
" required because quantum-espresso+veritas requested from CLI",
]
variant_error_messages = [
"'fftw' required multiple values for single-valued variant 'mpi'",
" Requested '~mpi' and '+mpi'",
" required because quantum-espresso depends on fftw+mpi when +invino",
" required because quantum-espresso+invino ^fftw~mpi requested explicitly",
" required because quantum-espresso+invino ^fftw~mpi requested explicitly",
" required because quantum-espresso+invino ^fftw~mpi requested from CLI",
" required because quantum-espresso+invino ^fftw~mpi requested from CLI",
]
external_config = {

View File

@@ -504,13 +504,3 @@ def test_sticky_variant_accounts_for_packages_yaml(self):
with spack.config.override("packages:sticky-variant", {"variants": "+allow-gcc"}):
s = Spec("sticky-variant %gcc").concretized()
assert s.satisfies("%gcc") and s.satisfies("+allow-gcc")
@pytest.mark.regression("41134")
@pytest.mark.only_clingo("Not backporting the fix to the old concretizer")
def test_default_preference_variant_different_type_does_not_error(self):
"""Tests that a different type for an existing variant in the 'all:' section of
packages.yaml doesn't fail with an error.
"""
with spack.config.override("packages:all", {"variants": "+foo"}):
s = Spec("a").concretized()
assert s.satisfies("foo=bar")

View File

@@ -1239,11 +1239,11 @@ def test_user_config_path_is_default_when_env_var_is_empty(working_env):
assert os.path.expanduser("~%s.spack" % os.sep) == spack.paths._get_user_config_path()
def test_default_install_tree(monkeypatch, default_config):
def test_default_install_tree(monkeypatch):
s = spack.spec.Spec("nonexistent@x.y.z %none@a.b.c arch=foo-bar-baz")
monkeypatch.setattr(s, "dag_hash", lambda: "abc123")
_, _, projections = spack.store.parse_install_tree(spack.config.get("config"))
assert s.format(projections["all"]) == "foo-bar-baz/none-a.b.c/nonexistent-x.y.z-abc123"
projection = spack.config.get("config:install_tree:projections:all", scope="defaults")
assert s.format(projection) == "foo-bar-baz/none-a.b.c/nonexistent-x.y.z-abc123"
def test_local_config_can_be_disabled(working_env):

View File

@@ -629,7 +629,7 @@ def platform_config():
spack.config.add_default_platform_scope(spack.platforms.real_host().name)
@pytest.fixture
@pytest.fixture(scope="session")
def default_config():
"""Isolates the default configuration from the user configs.
@@ -713,6 +713,9 @@ def configuration_dir(tmpdir_factory, linux_os):
t.write(content)
yield tmpdir
# Once done, cleanup the directory
shutil.rmtree(str(tmpdir))
def _create_mock_configuration_scopes(configuration_dir):
"""Create the configuration scopes used in `config` and `mutable_config`."""

View File

@@ -695,7 +695,7 @@ def test_removing_spec_from_manifest_with_exact_duplicates(
@pytest.mark.regression("35298")
@pytest.mark.only_clingo("Propagation not supported in the original concretizer")
def test_variant_propagation_with_unify_false(tmp_path, mock_packages, config):
def test_variant_propagation_with_unify_false(tmp_path, mock_packages):
"""Spack distributes concretizations to different processes, when unify:false is selected and
the number of roots is 2 or more. When that happens, the specs to be concretized need to be
properly reconstructed on the worker process, if variant propagation was requested.
@@ -778,32 +778,3 @@ def test_env_with_include_def_missing(mutable_mock_env_path, mock_packages):
with e:
with pytest.raises(UndefinedReferenceError, match=r"which does not appear"):
e.concretize()
@pytest.mark.regression("41292")
def test_deconcretize_then_concretize_does_not_error(mutable_mock_env_path, mock_packages):
"""Tests that, after having deconcretized a spec, we can reconcretize an environment which
has 2 or more user specs mapping to the same concrete spec.
"""
mutable_mock_env_path.mkdir()
spack_yaml = mutable_mock_env_path / ev.manifest_name
spack_yaml.write_text(
"""spack:
specs:
# These two specs concretize to the same hash
- c
- c@1.0
# Spec used to trigger the bug
- a
concretizer:
unify: true
"""
)
e = ev.Environment(mutable_mock_env_path)
with e:
e.concretize()
e.deconcretize(spack.spec.Spec("a"), concrete=False)
e.concretize()
assert len(e.concrete_roots()) == 3
all_root_hashes = set(x.dag_hash() for x in e.concrete_roots())
assert len(all_root_hashes) == 2

View File

@@ -12,12 +12,10 @@
import llnl.util.filesystem as fs
import spack.error
import spack.mirror
import spack.patch
import spack.repo
import spack.store
import spack.util.spack_json as sjson
from spack import binary_distribution
from spack.package_base import (
InstallError,
PackageBase,
@@ -120,25 +118,59 @@ def remove_prefix(self):
self.wrapped_rm_prefix()
class MockStage:
def __init__(self, wrapped_stage):
self.wrapped_stage = wrapped_stage
self.test_destroyed = False
def __enter__(self):
self.create()
return self
def __exit__(self, exc_type, exc_val, exc_tb):
if exc_type is None:
self.destroy()
def destroy(self):
self.test_destroyed = True
self.wrapped_stage.destroy()
def create(self):
self.wrapped_stage.create()
def __getattr__(self, attr):
if attr == "wrapped_stage":
# This attribute may not be defined at some point during unpickling
raise AttributeError()
return getattr(self.wrapped_stage, attr)
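A minimal sketch (hypothetical classes, not part of the diff) of why that AttributeError guard matters: pickle restores objects without running __init__, so __getattr__ can be invoked before wrapped_stage exists, and unconditional delegation then recurses forever.

class BadWrapper:
    def __getattr__(self, attr):
        # self.wrapped_stage is missing during unpickling, so this lookup
        # re-enters __getattr__ and ends in RecursionError
        return getattr(self.wrapped_stage, attr)

class GoodWrapper:
    def __getattr__(self, attr):
        if attr == "wrapped_stage":
            raise AttributeError()  # lets pickle see the attribute as absent
        return getattr(self.wrapped_stage, attr)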
def test_partial_install_delete_prefix_and_stage(install_mockery, mock_fetch, working_env):
s = Spec("canfail").concretized()
instance_rm_prefix = s.package.remove_prefix
s.package.remove_prefix = mock_remove_prefix
with pytest.raises(MockInstallError):
s.package.do_install()
assert os.path.isdir(s.package.prefix)
rm_prefix_checker = RemovePrefixChecker(instance_rm_prefix)
s.package.remove_prefix = rm_prefix_checker.remove_prefix
try:
s.package.remove_prefix = mock_remove_prefix
with pytest.raises(MockInstallError):
s.package.do_install()
assert os.path.isdir(s.package.prefix)
rm_prefix_checker = RemovePrefixChecker(instance_rm_prefix)
s.package.remove_prefix = rm_prefix_checker.remove_prefix
# must clear failure markings for the package before re-installing it
spack.store.STORE.failure_tracker.clear(s, True)
# must clear failure markings for the package before re-installing it
spack.store.STORE.failure_tracker.clear(s, True)
s.package.set_install_succeed()
s.package.do_install(restage=True)
assert rm_prefix_checker.removed
assert s.package.spec.installed
s.package.set_install_succeed()
s.package.stage = MockStage(s.package.stage)
s.package.do_install(restage=True)
assert rm_prefix_checker.removed
assert s.package.stage.test_destroyed
assert s.package.spec.installed
finally:
s.package.remove_prefix = instance_rm_prefix
@pytest.mark.disable_clean_stage_check
@@ -325,8 +357,10 @@ def test_partial_install_keep_prefix(install_mockery, mock_fetch, monkeypatch, w
spack.store.STORE.failure_tracker.clear(s, True)
s.package.set_install_succeed()
s.package.stage = MockStage(s.package.stage)
s.package.do_install(keep_prefix=True)
assert s.package.spec.installed
assert not s.package.stage.test_destroyed
def test_second_install_no_overwrite_first(install_mockery, mock_fetch, monkeypatch):
@@ -610,48 +644,3 @@ def test_empty_install_sanity_check_prefix(
spec = Spec("failing-empty-install").concretized()
with pytest.raises(spack.build_environment.ChildError, match="Nothing was installed"):
spec.package.do_install()
def test_install_from_binary_with_missing_patch_succeeds(
temporary_store: spack.store.Store, mutable_config, tmp_path, mock_packages
):
"""If a patch is missing in the local package repository, but was present when building and
pushing the package to a binary cache, installation from that binary cache shouldn't error out
because of the missing patch."""
# Create a spec s with non-existing patches
s = Spec("trivial-install-test-package").concretized()
patches = ["a" * 64]
s_dict = s.to_dict()
s_dict["spec"]["nodes"][0]["patches"] = patches
s_dict["spec"]["nodes"][0]["parameters"]["patches"] = patches
s = Spec.from_dict(s_dict)
# Create an install dir for it
os.makedirs(os.path.join(s.prefix, ".spack"))
with open(os.path.join(s.prefix, ".spack", "spec.json"), "w") as f:
s.to_json(f)
# And register it in the database
temporary_store.db.add(s, directory_layout=temporary_store.layout, explicit=True)
# Push it to a binary cache
build_cache = tmp_path / "my_build_cache"
binary_distribution.push_or_raise(
s,
build_cache.as_uri(),
binary_distribution.PushOptions(unsigned=True, regenerate_index=True),
)
# Now re-install it.
s.package.do_uninstall()
assert not temporary_store.db.query_local_by_spec_hash(s.dag_hash())
# Source install: fails, we don't have the patch.
with pytest.raises(spack.error.SpecError, match="Couldn't find patch for package"):
s.package.do_install()
# Binary install: succeeds, we don't need the patch.
spack.mirror.add(spack.mirror.Mirror.from_local_path(str(build_cache)))
s.package.do_install(package_cache_only=True, dependencies_cache_only=True, unsigned=True)
assert temporary_store.db.query_local_by_spec_hash(s.dag_hash())

View File

@@ -165,19 +165,23 @@ def test_install_msg(monkeypatch):
assert inst.install_msg(name, pid, None) == expected
def test_install_from_cache_errors(install_mockery):
"""Test to ensure cover install from cache errors."""
def test_install_from_cache_errors(install_mockery, capsys):
"""Test to ensure cover _install_from_cache errors."""
spec = spack.spec.Spec("trivial-install-test-package")
spec.concretize()
assert spec.concrete
# Check with cache-only
with pytest.raises(inst.InstallError, match="No binary found when cache-only was specified"):
spec.package.do_install(package_cache_only=True, dependencies_cache_only=True)
with pytest.raises(SystemExit):
inst._install_from_cache(spec.package, True, True, False)
captured = str(capsys.readouterr())
assert "No binary" in captured
assert "found when cache-only specified" in captured
assert not spec.package.installed_from_binary_cache
# Check when don't expect to install only from binary cache
assert not inst._install_from_cache(spec.package, explicit=True, unsigned=False)
assert not inst._install_from_cache(spec.package, False, True, False)
assert not spec.package.installed_from_binary_cache
@@ -188,7 +192,7 @@ def test_install_from_cache_ok(install_mockery, monkeypatch):
monkeypatch.setattr(inst, "_try_install_from_binary_cache", _true)
monkeypatch.setattr(spack.hooks, "post_install", _noop)
assert inst._install_from_cache(spec.package, explicit=True, unsigned=False)
assert inst._install_from_cache(spec.package, True, True, False)
def test_process_external_package_module(install_mockery, monkeypatch, capfd):

View File

@@ -9,7 +9,10 @@
This just tests whether the right args are getting passed to make.
"""
import os
import shutil
import sys
import tempfile
import unittest
import pytest
@@ -17,104 +20,110 @@
from spack.util.environment import path_put_first
pytestmark = pytest.mark.skipif(
sys.platform == "win32", reason="MakeExecutable not supported on Windows"
sys.platform == "win32",
reason="MakeExecutable \
not supported on Windows",
)
@pytest.fixture(autouse=True)
def make_executable(tmp_path, working_env):
make_exe = tmp_path / "make"
with open(make_exe, "w") as f:
f.write("#!/bin/sh\n")
f.write('echo "$@"')
os.chmod(make_exe, 0o700)
class MakeExecutableTest(unittest.TestCase):
def setUp(self):
self.tmpdir = tempfile.mkdtemp()
path_put_first("PATH", [tmp_path])
make_exe = os.path.join(self.tmpdir, "make")
with open(make_exe, "w") as f:
f.write("#!/bin/sh\n")
f.write('echo "$@"')
os.chmod(make_exe, 0o700)
path_put_first("PATH", [self.tmpdir])
def test_make_normal():
make = MakeExecutable("make", 8)
assert make(output=str).strip() == "-j8"
assert make("install", output=str).strip() == "-j8 install"
def tearDown(self):
shutil.rmtree(self.tmpdir)
def test_make_normal(self):
make = MakeExecutable("make", 8)
self.assertEqual(make(output=str).strip(), "-j8")
self.assertEqual(make("install", output=str).strip(), "-j8 install")
def test_make_explicit():
make = MakeExecutable("make", 8)
assert make(parallel=True, output=str).strip() == "-j8"
assert make("install", parallel=True, output=str).strip() == "-j8 install"
def test_make_explicit(self):
make = MakeExecutable("make", 8)
self.assertEqual(make(parallel=True, output=str).strip(), "-j8")
self.assertEqual(make("install", parallel=True, output=str).strip(), "-j8 install")
def test_make_one_job(self):
make = MakeExecutable("make", 1)
self.assertEqual(make(output=str).strip(), "-j1")
self.assertEqual(make("install", output=str).strip(), "-j1 install")
def test_make_one_job():
make = MakeExecutable("make", 1)
assert make(output=str).strip() == "-j1"
assert make("install", output=str).strip() == "-j1 install"
def test_make_parallel_false(self):
make = MakeExecutable("make", 8)
self.assertEqual(make(parallel=False, output=str).strip(), "-j1")
self.assertEqual(make("install", parallel=False, output=str).strip(), "-j1 install")
def test_make_parallel_disabled(self):
make = MakeExecutable("make", 8)
def test_make_parallel_false():
make = MakeExecutable("make", 8)
assert make(parallel=False, output=str).strip() == "-j1"
assert make("install", parallel=False, output=str).strip() == "-j1 install"
os.environ["SPACK_NO_PARALLEL_MAKE"] = "true"
self.assertEqual(make(output=str).strip(), "-j1")
self.assertEqual(make("install", output=str).strip(), "-j1 install")
os.environ["SPACK_NO_PARALLEL_MAKE"] = "1"
self.assertEqual(make(output=str).strip(), "-j1")
self.assertEqual(make("install", output=str).strip(), "-j1 install")
def test_make_parallel_disabled(monkeypatch):
make = MakeExecutable("make", 8)
# These don't disable (false and random string)
os.environ["SPACK_NO_PARALLEL_MAKE"] = "false"
self.assertEqual(make(output=str).strip(), "-j8")
self.assertEqual(make("install", output=str).strip(), "-j8 install")
monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "true")
assert make(output=str).strip() == "-j1"
assert make("install", output=str).strip() == "-j1 install"
os.environ["SPACK_NO_PARALLEL_MAKE"] = "foobar"
self.assertEqual(make(output=str).strip(), "-j8")
self.assertEqual(make("install", output=str).strip(), "-j8 install")
monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "1")
assert make(output=str).strip() == "-j1"
assert make("install", output=str).strip() == "-j1 install"
del os.environ["SPACK_NO_PARALLEL_MAKE"]
# These don't disable (false and random string)
monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "false")
assert make(output=str).strip() == "-j8"
assert make("install", output=str).strip() == "-j8 install"
def test_make_parallel_precedence(self):
make = MakeExecutable("make", 8)
monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "foobar")
assert make(output=str).strip() == "-j8"
assert make("install", output=str).strip() == "-j8 install"
# These should work
os.environ["SPACK_NO_PARALLEL_MAKE"] = "true"
self.assertEqual(make(parallel=True, output=str).strip(), "-j1")
self.assertEqual(make("install", parallel=True, output=str).strip(), "-j1 install")
os.environ["SPACK_NO_PARALLEL_MAKE"] = "1"
self.assertEqual(make(parallel=True, output=str).strip(), "-j1")
self.assertEqual(make("install", parallel=True, output=str).strip(), "-j1 install")
def test_make_parallel_precedence(monkeypatch):
make = MakeExecutable("make", 8)
# These don't disable (false and random string)
os.environ["SPACK_NO_PARALLEL_MAKE"] = "false"
self.assertEqual(make(parallel=True, output=str).strip(), "-j8")
self.assertEqual(make("install", parallel=True, output=str).strip(), "-j8 install")
# These should work
monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "true")
assert make(parallel=True, output=str).strip() == "-j1"
assert make("install", parallel=True, output=str).strip() == "-j1 install"
os.environ["SPACK_NO_PARALLEL_MAKE"] = "foobar"
self.assertEqual(make(parallel=True, output=str).strip(), "-j8")
self.assertEqual(make("install", parallel=True, output=str).strip(), "-j8 install")
monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "1")
assert make(parallel=True, output=str).strip() == "-j1"
assert make("install", parallel=True, output=str).strip() == "-j1 install"
del os.environ["SPACK_NO_PARALLEL_MAKE"]
# These don't disable (false and random string)
monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "false")
assert make(parallel=True, output=str).strip() == "-j8"
assert make("install", parallel=True, output=str).strip() == "-j8 install"
def test_make_jobs_env(self):
make = MakeExecutable("make", 8)
dump_env = {}
self.assertEqual(
make(output=str, jobs_env="MAKE_PARALLELISM", _dump_env=dump_env).strip(), "-j8"
)
self.assertEqual(dump_env["MAKE_PARALLELISM"], "8")
monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "foobar")
assert make(parallel=True, output=str).strip() == "-j8"
assert make("install", parallel=True, output=str).strip() == "-j8 install"
def test_make_jobserver(self):
make = MakeExecutable("make", 8)
os.environ["MAKEFLAGS"] = "--jobserver-auth=X,Y"
self.assertEqual(make(output=str).strip(), "")
self.assertEqual(make(parallel=False, output=str).strip(), "-j1")
del os.environ["MAKEFLAGS"]
def test_make_jobs_env():
make = MakeExecutable("make", 8)
dump_env = {}
assert make(output=str, jobs_env="MAKE_PARALLELISM", _dump_env=dump_env).strip() == "-j8"
assert dump_env["MAKE_PARALLELISM"] == "8"
def test_make_jobserver(monkeypatch):
make = MakeExecutable("make", 8)
monkeypatch.setenv("MAKEFLAGS", "--jobserver-auth=X,Y")
assert make(output=str).strip() == ""
assert make(parallel=False, output=str).strip() == "-j1"
def test_make_jobserver_not_supported(monkeypatch):
make = MakeExecutable("make", 8, supports_jobserver=False)
monkeypatch.setenv("MAKEFLAGS", "--jobserver-auth=X,Y")
# Currently falls back on the default job count; maybe it should force -j1?
assert make(output=str).strip() == "-j8"
def test_make_jobserver_not_supported(self):
make = MakeExecutable("make", 8, supports_jobserver=False)
os.environ["MAKEFLAGS"] = "--jobserver-auth=X,Y"
# Currently falls back on the default job count; maybe it should force -j1?
self.assertEqual(make(output=str).strip(), "-j8")
del os.environ["MAKEFLAGS"]
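An illustrative sketch (not from the diff) of the pytest idiom the conversion above relies on: monkeypatch.setenv registers an automatic undo at teardown, which is why the rewritten tests can drop the manual del os.environ[...] cleanup of the unittest versions.

import os

def test_monkeypatch_restores_env(monkeypatch):
    monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "true")
    assert os.environ["SPACK_NO_PARALLEL_MAKE"] == "true"
    # no explicit cleanup: pytest reverts the variable when the test ends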

View File

@@ -27,13 +27,16 @@
]
def test_module_function_change_env(tmp_path):
environb = {b"TEST_MODULE_ENV_VAR": b"TEST_FAIL", b"NOT_AFFECTED": b"NOT_AFFECTED"}
src_file = tmp_path / "src_me"
src_file.write_text("export TEST_MODULE_ENV_VAR=TEST_SUCCESS\n")
module("load", str(src_file), module_template=f". {src_file} 2>&1", environb=environb)
assert environb[b"TEST_MODULE_ENV_VAR"] == b"TEST_SUCCESS"
assert environb[b"NOT_AFFECTED"] == b"NOT_AFFECTED"
def test_module_function_change_env(tmpdir, working_env):
src_file = str(tmpdir.join("src_me"))
with open(src_file, "w") as f:
f.write("export TEST_MODULE_ENV_VAR=TEST_SUCCESS\n")
os.environ["NOT_AFFECTED"] = "NOT_AFFECTED"
module("load", src_file, module_template=". {0} 2>&1".format(src_file))
assert os.environ["TEST_MODULE_ENV_VAR"] == "TEST_SUCCESS"
assert os.environ["NOT_AFFECTED"] == "NOT_AFFECTED"
def test_module_function_no_change(tmpdir):

View File

@@ -1517,9 +1517,3 @@ def test_edge_equality_does_not_depend_on_virtual_order():
assert edge1 == edge2
assert tuple(sorted(edge1.virtuals)) == edge1.virtuals
assert tuple(sorted(edge2.virtuals)) == edge1.virtuals
def test_old_format_strings_trigger_error(default_mock_concretization):
s = Spec("a").concretized()
with pytest.raises(SpecFormatStringError):
s.format("${PACKAGE}-${VERSION}-${HASH}")

View File

@@ -147,8 +147,7 @@ def test_reverse_environment_modifications(working_env):
reversal = to_reverse.reversed()
os.environ.clear()
os.environ.update(start_env)
os.environ = start_env.copy()
print(os.environ)
to_reverse.apply_modifications()

View File

@@ -89,8 +89,8 @@ def test_which_with_slash_ignores_path(tmpdir, working_env):
assert exe.path == path
def test_which(tmpdir, monkeypatch):
monkeypatch.setenv("PATH", str(tmpdir))
def test_which(tmpdir):
os.environ["PATH"] = str(tmpdir)
assert ex.which("spack-test-exe") is None
with pytest.raises(ex.CommandNotFoundError):

View File

@@ -10,7 +10,6 @@
import os
import re
import subprocess
from typing import MutableMapping, Optional
import llnl.util.tty as tty
@@ -22,13 +21,8 @@
awk_cmd = r"""awk 'BEGIN{for(name in ENVIRON)""" r"""printf("%s=%s%c", name, ENVIRON[name], 0)}'"""
def module(
*args,
module_template: Optional[str] = None,
environb: Optional[MutableMapping[bytes, bytes]] = None,
):
module_cmd = module_template or ("module " + " ".join(args))
environb = environb or os.environb
def module(*args, **kwargs):
module_cmd = kwargs.get("module_template", "module " + " ".join(args))
if args[0] in module_change_commands:
# Suppress module output
@@ -39,10 +33,10 @@ def module(
stderr=subprocess.STDOUT,
shell=True,
executable="/bin/bash",
env=environb,
)
new_environb = {}
# In Python 3, keys and values of `environ` are byte strings.
environ = {}
output = module_p.communicate()[0]
# Loop over each environment variable key=value byte string
@@ -51,11 +45,11 @@ def module(
parts = entry.split(b"=", 1)
if len(parts) != 2:
continue
new_environb[parts[0]] = parts[1]
environ[parts[0]] = parts[1]
# Update os.environ with new dict
environb.clear()
environb.update(new_environb) # novermin
os.environ.clear()
os.environb.update(environ) # novermin
else:
# Simply execute commands that don't change state and return output
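A self-contained sketch of the parsing loop above, with a hypothetical captured byte string: the awk command prints each ENVIRON entry as a NUL-terminated key=value record, and splitting on the first '=' only preserves values that themselves contain '='.

output = b"PATH=/usr/bin\x00TEST_MODULE_ENV_VAR=TEST_SUCCESS\x00OPTS=a=b\x00"
new_environb = {}
for entry in output.split(b"\x00"):
    parts = entry.split(b"=", 1)  # split on the first '=' only
    if len(parts) != 2:
        continue  # skips the empty trailing record
    new_environb[parts[0]] = parts[1]
assert new_environb[b"OPTS"] == b"a=b"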

View File

@@ -51,8 +51,12 @@ case "$mode" in
# COPY spack.yaml .
# RUN spack install # <- Spack is loaded and ready to use.
# # No manual initialization necessary.
# RUN <<EOT # Supports multi-line scripts.
# spack install # Will fail on first error (set -e).
# spack buildcache push
# EOT
. $SPACK_ROOT/share/spack/setup-env.sh
exec bash -c "$*"
exec bash -e -c "$*"
;;
"interactiveshell")

View File

@@ -370,25 +370,25 @@ e4s-rocm-external-build:
########################################
# GPU Testing Pipeline
########################################
# .gpu-tests:
# extends: [ ".linux_x86_64_v3" ]
# variables:
# SPACK_CI_STACK_NAME: gpu-tests
.gpu-tests:
extends: [ ".linux_x86_64_v3" ]
variables:
SPACK_CI_STACK_NAME: gpu-tests
# gpu-tests-generate:
# extends: [ ".gpu-tests", ".generate-x86_64"]
# image: ghcr.io/spack/ubuntu20.04-runner-x86_64:2023-01-01
gpu-tests-generate:
extends: [ ".gpu-tests", ".generate-x86_64"]
image: ghcr.io/spack/ubuntu20.04-runner-x86_64:2023-01-01
# gpu-tests-build:
# extends: [ ".gpu-tests", ".build" ]
# trigger:
# include:
# - artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
# job: gpu-tests-generate
# strategy: depend
# needs:
# - artifacts: True
# job: gpu-tests-generate
gpu-tests-build:
extends: [ ".gpu-tests", ".build" ]
trigger:
include:
- artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
job: gpu-tests-generate
strategy: depend
needs:
- artifacts: True
job: gpu-tests-generate
########################################
# E4S OneAPI Pipeline
@@ -894,16 +894,16 @@ e4s-cray-rhel-build:
variables:
SPACK_CI_STACK_NAME: e4s-cray-sles
# e4s-cray-sles-generate:
# extends: [ ".generate-cray-sles", ".e4s-cray-sles" ]
e4s-cray-sles-generate:
extends: [ ".generate-cray-sles", ".e4s-cray-sles" ]
# e4s-cray-sles-build:
# extends: [ ".build", ".e4s-cray-sles" ]
# trigger:
# include:
# - artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
# job: e4s-cray-sles-generate
# strategy: depend
# needs:
# - artifacts: True
# job: e4s-cray-sles-generate
e4s-cray-sles-build:
extends: [ ".build", ".e4s-cray-sles" ]
trigger:
include:
- artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
job: e4s-cray-sles-generate
strategy: depend
needs:
- artifacts: True
job: e4s-cray-sles-generate

View File

@@ -27,6 +27,8 @@ class Abinit(AutotoolsPackage):
homepage = "https://www.abinit.org/"
url = "https://www.abinit.org/sites/default/files/packages/abinit-8.6.3.tar.gz"
maintainers("downloadico")
version("9.10.3", sha256="3f2a9aebbf1fee9855a09dd687f88d2317b8b8e04f97b2628ab96fb898dce49b")
version("9.8.4", sha256="a086d5045f0093b432e6a044d5f71f7edf5a41a62d67b3677cb0751d330c564a")
version("9.8.3", sha256="de823878aea2c20098f177524fbb4b60de9b1b5971b2e835ec244dfa3724589b")
version("9.6.1", sha256="b6a12760fd728eb4aacca431ae12150609565bedbaa89763f219fcd869f79ac6")
@@ -143,19 +145,27 @@ def configure_args(self):
oapp(f"--with-optim-flavor={self.spec.variants['optimization-flavor'].value}")
if "+wannier90" in spec:
if "@:8" in spec:
if spec.satisfies("@:8"):
oapp(f"--with-wannier90-libs=-L{spec['wannier90'].prefix.lib} -lwannier -lm")
oapp(f"--with-wannier90-incs=-I{spec['wannier90'].prefix.modules}")
oapp(f"--with-wannier90-bins={spec['wannier90'].prefix.bin}")
oapp("--enable-connectors")
oapp("--with-dft-flavor=atompaw+libxc+wannier90")
else:
elif spec.satisfies("@:9.8"):
options.extend(
[
f"WANNIER90_CPPFLAGS=-I{spec['wannier90'].prefix.modules}",
f"WANNIER90_LIBS=-L{spec['wannier90'].prefix.lib} -lwannier",
]
)
else:
options.extend(
[
f"WANNIER90_CPPFLAGS=-I{spec['wannier90'].prefix.modules}",
f"WANNIER90_LIBS=-L{spec['wannier90'].prefix.lib}"
"WANNIER90_LDFLAGS=-lwannier",
]
)
else:
if "@:9.8" in spec:
oapp(f"--with-fftw={spec['fftw-api'].prefix}")
@@ -169,7 +179,10 @@ def configure_args(self):
if "+mpi" in spec:
oapp(f"CC={spec['mpi'].mpicc}")
oapp(f"CXX={spec['mpi'].mpicxx}")
oapp(f"FC={spec['mpi'].mpifc}")
if spec.satisfies("@9.8:"):
oapp(f"F90={spec['mpi'].mpifc}")
else:
oapp(f"FC={spec['mpi'].mpifc}")
# MPI version:
# let the configure script auto-detect MPI support from mpi_prefix

View File

@@ -20,8 +20,9 @@ class Adiak(CMakePackage):
variant("shared", default=True, description="Build dynamic libraries")
version(
"0.2.2", commit="3aedd494c81c01df1183af28bc09bade2fabfcd3", submodules=True, preferred=True
"0.4.0", commit="7e8b7233f8a148b402128ed46b2f0c643e3b397e", submodules=True, preferred=True
)
version("0.2.2", commit="3aedd494c81c01df1183af28bc09bade2fabfcd3", submodules=True)
version(
"0.3.0-alpha",
commit="054d2693a977ed0e1f16c665b4966bb90924779e",

View File

@@ -34,7 +34,7 @@ class Alquimia(CMakePackage):
depends_on("pflotran@develop", when="@develop")
depends_on("petsc@3.10:", when="@develop")
@when("@1.0.10")
@when("@1.0.10:1.1.0")
def patch(self):
filter_file(
"use iso_[cC]_binding",

View File

@@ -9,7 +9,7 @@
from spack.package import *
class Aluminum(CMakePackage, CudaPackage, ROCmPackage):
class Aluminum(CachedCMakePackage, CudaPackage, ROCmPackage):
"""Aluminum provides a generic interface to high-performance
communication libraries, with a focus on allreduce
algorithms. Blocking and non-blocking algorithms and GPU-aware
@@ -22,208 +22,207 @@ class Aluminum(CMakePackage, CudaPackage, ROCmPackage):
git = "https://github.com/LLNL/Aluminum.git"
tags = ["ecp", "radiuss"]
maintainers("bvanessen")
maintainers("benson31", "bvanessen")
version("master", branch="master")
version("1.4.1", sha256="d130a67fef1cb7a9cb3bbec1d0de426f020fe68c9df6e172c83ba42281cd90e3")
version("1.4.0", sha256="ac54de058f38cead895ec8163f7b1fa7674e4dc5aacba683a660a61babbfe0c6")
version("1.3.1", sha256="28ce0af6c6f29f97b7f19c5e45184bd2f8a0b1428f1e898b027d96d47cb74b0b")
version("1.3.0", sha256="d0442efbebfdfb89eec793ae65eceb8f1ba65afa9f2e48df009f81985a4c27e3")
version("1.2.3", sha256="9b214bdf30f9b7e8e017f83e6615db6be2631f5be3dd186205dbe3aa62f4018a")
version(
"1.2.2",
sha256="c01d9dd98be4cab9b944bae99b403abe76d65e9e1750e7f23bf0105636ad5485",
deprecated=True,
)
version(
"1.2.1",
sha256="869402708c8a102a67667b83527b4057644a32b8cdf4990bcd1a5c4e5f0e30af",
deprecated=True,
)
version(
"1.2.0",
sha256="2f3725147f4dbc045b945af68d3d747f5dffbe2b8e928deed64136785210bc9a",
deprecated=True,
)
version(
"1.1.0",
sha256="78b03e36e5422e8651f400feb4d8a527f87302db025d77aa37e223be6b9bdfc9",
deprecated=True,
)
version("1.0.0-lbann", tag="v1.0.0-lbann", commit="40a062b1f63e84e074489c0f926f36b806c6b8f3")
version("1.0.0", sha256="028d12e271817214db5c07c77b0528f88862139c3e442e1b12f58717290f414a")
version(
"0.7.0",
sha256="bbb73d2847c56efbe6f99e46b41d837763938483f2e2d1982ccf8350d1148caa",
deprecated=True,
)
version(
"0.6.0",
sha256="6ca329951f4c7ea52670e46e5020e7e7879d9b56fed5ff8c5df6e624b313e925",
deprecated=True,
)
version(
"0.5.0",
sha256="dc365a5849eaba925355a8efb27005c5f22bcd1dca94aaed8d0d29c265c064c1",
deprecated=True,
)
version(
"0.4.0",
sha256="4d6fab5481cc7c994b32fb23a37e9ee44041a9f91acf78f981a97cb8ef57bb7d",
deprecated=True,
)
version(
"0.3.3",
sha256="26e7f263f53c6c6ee0fe216e981a558dfdd7ec997d0dd2a24285a609a6c68f3b",
deprecated=True,
)
version(
"0.3.2",
sha256="09b6d1bcc02ac54ba269b1123eee7be20f0104b93596956c014b794ba96b037f",
deprecated=True,
)
version(
"0.2.1-1",
sha256="066b750e9d1134871709a3e2414b96b166e0e24773efc7d512df2f1d96ee8eef",
deprecated=True,
)
version(
"0.2.1",
sha256="3d5d15853cccc718f60df68205e56a2831de65be4d96e7f7e8497097e7905f89",
deprecated=True,
)
version(
"0.2",
sha256="fc8f06c6d8faab17a2aedd408d3fe924043bf857da1094d5553f35c4d2af893b",
deprecated=True,
)
version(
"0.1",
sha256="3880b736866e439dd94e6a61eeeb5bb2abccebbac82b82d52033bc6c94950bdb",
deprecated=True,
)
variant("nccl", default=False, description="Builds with support for NCCL communication lib")
# Library capabilities
variant(
"cuda_rma",
default=False,
when="+cuda",
description="Builds with support for CUDA intra-node "
" Put/Get and IPC RMA functionality",
)
variant(
"ht",
default=False,
description="Builds with support for host-enabled MPI"
" communication of accelerator data",
)
variant("nccl", default=False, description="Builds with support for NCCL communication lib")
variant("shared", default=True, description="Build Aluminum as a shared library")
# Debugging features
variant("hang_check", default=False, description="Enable hang checking")
variant("trace", default=False, description="Enable runtime tracing")
# Profiler support
variant("nvtx", default=False, when="+cuda", description="Enable profiling via nvprof/NVTX")
variant(
"cuda_rma",
"roctracer", default=False, when="+rocm", description="Enable profiling via rocprof/roctx"
)
# Advanced options
variant("mpi_serialize", default=False, description="Serialize MPI operations")
variant("stream_mem_ops", default=False, description="Enable stream memory operations")
variant(
"thread_multiple",
default=False,
description="Builds with support for CUDA intra-node "
" Put/Get and IPC RMA functionality",
)
variant("rccl", default=False, description="Builds with support for RCCL communication lib")
variant(
"ofi_libfabric_plugin",
default=spack.platforms.cray.slingshot_network(),
when="+rccl",
sticky=True,
description="Builds with support for OFI libfabric enhanced RCCL/NCCL communication lib",
)
variant(
"ofi_libfabric_plugin",
default=spack.platforms.cray.slingshot_network(),
when="+nccl",
sticky=True,
description="Builds with support for OFI libfabric enhanced RCCL/NCCL communication lib",
description="Allow multiple threads to call Aluminum concurrently",
)
depends_on("cmake@3.21.0:", type="build", when="@1.0.1:")
depends_on("cmake@3.17.0:", type="build", when="@:1.0.0")
depends_on("mpi")
depends_on("nccl@2.7.0-0:", when="+nccl")
depends_on("hwloc@1.11:")
depends_on("hwloc +cuda +nvml", when="+cuda")
depends_on("hwloc@2.3.0:", when="+rocm")
depends_on("cub", when="@:0.1,0.6.0: +cuda ^cuda@:10")
depends_on("hipcub", when="@:0.1,0.6.0: +rocm")
# Benchmark/testing support
variant(
"benchmarks",
default=False,
description="Build the Aluminum benchmarking drivers "
"(warning: may significantly increase build time!)",
)
variant(
"tests",
default=False,
description="Build the Aluminum test drivers "
"(warning: may moderately increase build time!)",
)
depends_on("rccl", when="+rccl")
depends_on("aws-ofi-rccl", when="+rccl +ofi_libfabric_plugin")
depends_on("aws-ofi-nccl", when="+nccl +ofi_libfabric_plugin")
# FIXME: Do we want to expose tuning parameters to the Spack
# recipe? Some are numeric values, some are on/off switches.
conflicts("~cuda", when="+cuda_rma", msg="CUDA RMA support requires CUDA")
conflicts("+cuda", when="+rocm", msg="CUDA and ROCm support are mutually exclusive")
conflicts("+nccl", when="+rccl", msg="NCCL and RCCL support are mutually exclusive")
generator("ninja")
depends_on("mpi")
depends_on("cmake@3.21.0:", type="build", when="@1.0.1:")
depends_on("hwloc@1.11:")
with when("+cuda"):
depends_on("cub", when="^cuda@:10")
depends_on("hwloc +cuda +nvml")
with when("+nccl"):
depends_on("nccl@2.7.0-0:")
for arch in CudaPackage.cuda_arch_values:
depends_on(
"nccl +cuda cuda_arch={0}".format(arch),
when="+cuda cuda_arch={0}".format(arch),
)
if spack.platforms.cray.slingshot_network():
depends_on("aws-ofi-nccl") # Note: NOT a CudaPackage
with when("+rocm"):
for val in ROCmPackage.amdgpu_targets:
depends_on(
"hipcub +rocm amdgpu_target={0}".format(val), when="amdgpu_target={0}".format(val)
)
depends_on(
"hwloc@2.3.0: +rocm amdgpu_target={0}".format(val),
when="amdgpu_target={0}".format(val),
)
# RCCL is *NOT* implemented as a ROCmPackage
depends_on(
"rccl amdgpu_target={0}".format(val), when="+nccl amdgpu_target={0}".format(val)
)
depends_on(
"roctracer-dev +rocm amdgpu_target={0}".format(val),
when="+roctracer amdgpu_target={0}".format(val),
)
if spack.platforms.cray.slingshot_network():
depends_on("aws-ofi-rccl", when="+nccl")
def cmake_args(self):
spec = self.spec
args = [
"-DCMAKE_CXX_STANDARD:STRING=17",
"-DALUMINUM_ENABLE_CUDA:BOOL=%s" % ("+cuda" in spec),
"-DALUMINUM_ENABLE_NCCL:BOOL=%s" % ("+nccl" in spec or "+rccl" in spec),
"-DALUMINUM_ENABLE_ROCM:BOOL=%s" % ("+rocm" in spec),
]
if not spec.satisfies("^cmake@3.23.0"):
# There is a bug with using Ninja generator in this version
# of CMake
args.append("-DCMAKE_EXPORT_COMPILE_COMMANDS=ON")
if "+cuda" in spec:
if self.spec.satisfies("%clang"):
for flag in self.spec.compiler_flags["cxxflags"]:
if "gcc-toolchain" in flag:
args.append("-DCMAKE_CUDA_FLAGS=-Xcompiler={0}".format(flag))
if spec.satisfies("^cuda@11.0:"):
args.append("-DCMAKE_CUDA_STANDARD=17")
else:
args.append("-DCMAKE_CUDA_STANDARD=14")
archs = spec.variants["cuda_arch"].value
if archs != "none":
arch_str = ";".join(archs)
args.append("-DCMAKE_CUDA_ARCHITECTURES=%s" % arch_str)
if spec.satisfies("%cce") and spec.satisfies("^cuda+allow-unsupported-compilers"):
args.append("-DCMAKE_CUDA_FLAGS=-allow-unsupported-compiler")
if spec.satisfies("@0.5:"):
args.extend(
[
"-DALUMINUM_ENABLE_HOST_TRANSFER:BOOL=%s" % ("+ht" in spec),
"-DALUMINUM_ENABLE_MPI_CUDA:BOOL=%s" % ("+cuda_rma" in spec),
"-DALUMINUM_ENABLE_MPI_CUDA_RMA:BOOL=%s" % ("+cuda_rma" in spec),
]
)
else:
args.append("-DALUMINUM_ENABLE_MPI_CUDA:BOOL=%s" % ("+ht" in spec))
if spec.satisfies("@:0.1,0.6.0: +cuda ^cuda@:10"):
args.append("-DCUB_DIR:FILEPATH=%s" % spec["cub"].prefix)
# Add support for OS X to find OpenMP (LLVM installed via brew)
if self.spec.satisfies("%clang platform=darwin"):
clang = self.compiler.cc
clang_bin = os.path.dirname(clang)
clang_root = os.path.dirname(clang_bin)
args.extend(["-DOpenMP_DIR={0}".format(clang_root)])
if "+rocm" in spec:
args.extend(
[
"-DHIP_ROOT_DIR={0}".format(spec["hip"].prefix),
"-DHIP_CXX_COMPILER={0}".format(self.spec["hip"].hipcc),
"-DCMAKE_CXX_FLAGS=-std=c++17",
]
)
archs = self.spec.variants["amdgpu_target"].value
if archs != "none":
arch_str = ",".join(archs)
if spec.satisfies("%rocmcc@:5"):
args.append(
"-DHIP_HIPCC_FLAGS=--amdgpu-target={0}"
" -g -fsized-deallocation -fPIC -std=c++17".format(arch_str)
)
args.extend(
[
"-DCMAKE_HIP_ARCHITECTURES=%s" % arch_str,
"-DAMDGPU_TARGETS=%s" % arch_str,
"-DGPU_TARGETS=%s" % arch_str,
]
)
args = []
return args
def get_cuda_flags(self):
spec = self.spec
args = []
if spec.satisfies("^cuda+allow-unsupported-compilers"):
args.append("-allow-unsupported-compiler")
if spec.satisfies("%clang"):
for flag in spec.compiler_flags["cxxflags"]:
if "gcc-toolchain" in flag:
args.append("-Xcompiler={0}".format(flag))
return args
def std_initconfig_entries(self):
entries = super(Aluminum, self).std_initconfig_entries()
# CMAKE_PREFIX_PATH, in CMake types, is a "STRING", not a "PATH". :/
entries = [x for x in entries if "CMAKE_PREFIX_PATH" not in x]
cmake_prefix_path = os.environ["CMAKE_PREFIX_PATH"].replace(":", ";")
entries.append(cmake_cache_string("CMAKE_PREFIX_PATH", cmake_prefix_path))
return entries
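A one-line sketch of the conversion above, with a hypothetical CMAKE_PREFIX_PATH value: CMake list-valued STRING cache entries are semicolon-separated, so the colon-separated POSIX form must be rewritten.

assert "/opt/view:/usr/local".replace(":", ";") == "/opt/view;/usr/local"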
def initconfig_compiler_entries(self):
spec = self.spec
entries = super(Aluminum, self).initconfig_compiler_entries()
# FIXME: Enforce this better in the actual CMake.
entries.append(cmake_cache_string("CMAKE_CXX_STANDARD", "17"))
entries.append(cmake_cache_option("BUILD_SHARED_LIBS", "+shared" in spec))
entries.append(cmake_cache_option("CMAKE_EXPORT_COMPILE_COMMANDS", True))
entries.append(cmake_cache_option("MPI_ASSUME_NO_BUILTIN_MPI", True))
return entries
def initconfig_hardware_entries(self):
spec = self.spec
entries = super(Aluminum, self).initconfig_hardware_entries()
entries.append(cmake_cache_option("ALUMINUM_ENABLE_CUDA", "+cuda" in spec))
if spec.satisfies("+cuda"):
entries.append(cmake_cache_string("CMAKE_CUDA_STANDARD", "17"))
if not spec.satisfies("cuda_arch=none"):
archs = spec.variants["cuda_arch"].value
arch_str = ";".join(archs)
entries.append(cmake_cache_string("CMAKE_CUDA_ARCHITECTURES", arch_str))
# FIXME: Should this use the "cuda_flags" function of the
# CudaPackage class or something? There might be other
# flags in play, and we need to be sure to get them all.
cuda_flags = self.get_cuda_flags()
if len(cuda_flags) > 0:
entries.append(cmake_cache_string("CMAKE_CUDA_FLAGS", " ".join(cuda_flags)))
entries.append(cmake_cache_option("ALUMINUM_ENABLE_ROCM", "+rocm" in spec))
if spec.satisfies("+rocm"):
entries.append(cmake_cache_string("CMAKE_HIP_STANDARD", "17"))
if not spec.satisfies("amdgpu_target=none"):
archs = self.spec.variants["amdgpu_target"].value
arch_str = ";".join(archs)
entries.append(cmake_cache_string("CMAKE_HIP_ARCHITECTURES", arch_str))
entries.append(cmake_cache_string("AMDGPU_TARGETS", arch_str))
entries.append(cmake_cache_string("GPU_TARGETS", arch_str))
entries.append(cmake_cache_path("HIP_ROOT_DIR", spec["hip"].prefix))
return entries
def initconfig_package_entries(self):
spec = self.spec
entries = super(Aluminum, self).initconfig_package_entries()
# Library capabilities
entries.append(cmake_cache_option("ALUMINUM_ENABLE_MPI_CUDA", "+cuda_rma" in spec))
entries.append(cmake_cache_option("ALUMINUM_ENABLE_MPI_CUDA_RMA", "+cuda_rma" in spec))
entries.append(cmake_cache_option("ALUMINUM_ENABLE_HOST_TRANSFER", "+ht" in spec))
entries.append(cmake_cache_option("ALUMINUM_ENABLE_NCCL", "+nccl" in spec))
# Debugging features
entries.append(cmake_cache_option("ALUMINUM_DEBUG_HANG_CHECK", "+hang_check" in spec))
entries.append(cmake_cache_option("ALUMINUM_ENABLE_TRACE", "+trace" in spec))
# Profiler support
entries.append(cmake_cache_option("ALUMINUM_ENABLE_NVPROF", "+nvtx" in spec))
entries.append(cmake_cache_option("ALUMINUM_ENABLE_ROCTRACER", "+roctracer" in spec))
# Advanced options
entries.append(cmake_cache_option("ALUMINUM_MPI_SERIALIZE", "+mpi_serialize" in spec))
entries.append(
cmake_cache_option("ALUMINUM_ENABLE_STREAM_MEM_OPS", "+stream_mem_ops" in spec)
)
entries.append(
cmake_cache_option("ALUMINUM_ENABLE_THREAD_MULTIPLE", "+thread_multiple" in spec)
)
# Benchmark/testing support
entries.append(cmake_cache_option("ALUMINUM_ENABLE_BENCHMARKS", "+benchmarks" in spec))
entries.append(cmake_cache_option("ALUMINUM_ENABLE_TESTS", "+tests" in spec))
return entries

View File

@@ -15,6 +15,12 @@ class Ams(CMakePackage, CudaPackage):
maintainers("koparasy", "lpottier")
version("develop", branch="develop", submodules=False)
version(
"11.08.23.alpha",
tag="11.08.23.alpha",
commit="1a42b29268bb916dae301654ca0b92fdfe288732",
submodules=False,
)
version(
"07.25.23-alpha",
tag="07.25.23-alpha",

View File

@@ -16,28 +16,22 @@ class Berkeleygw(MakefilePackage):
maintainers("migueldiascosta")
version(
"3.1.0",
sha256="7e890a5faa5a6bb601aa665c73903b3af30df7bdd13ee09362b69793bbefa6d2",
url="https://app.box.com/shared/static/2bik75lrs85zt281ydbup2xa7i5594gy.gz",
expand=False,
)
version(
"3.0.1",
sha256="7d8c2cc1ee679afb48efbdd676689d4d537226b50e13a049dbcb052aaaf3654f",
url="https://app.box.com/shared/static/m1dgnhiemo47lhxczrn6si71bwxoxor8.gz",
url="https://berkeley.box.com/shared/static/m1dgnhiemo47lhxczrn6si71bwxoxor8.gz",
expand=False,
)
version(
"3.0",
sha256="ab411acead5e979fd42b8d298dbb0a12ce152e7be9eee0bb87e9e5a06a638e2a",
url="https://app.box.com/shared/static/lp6hj4kxr459l5a6t05qfuzl2ucyo03q.gz",
url="https://berkeley.box.com/shared/static/lp6hj4kxr459l5a6t05qfuzl2ucyo03q.gz",
expand=False,
)
version(
"2.1",
sha256="31f3b643dd937350c3866338321d675d4a1b1f54c730b43ad74ae67e75a9e6f2",
url="https://app.box.com/shared/static/ze3azi5vlyw7hpwvl9i5f82kaiid6g0x.gz",
url="https://berkeley.box.com/shared/static/ze3azi5vlyw7hpwvl9i5f82kaiid6g0x.gz",
expand=False,
)

View File

@@ -122,9 +122,8 @@ def pgo_train(self):
# Run spack solve --fresh hdf5 with instrumented clingo.
python_runtime_env = EnvironmentModifications()
python_runtime_env.extend(
spack.user_environment.environment_modifications_for_specs(self.spec)
)
for s in self.spec.traverse(deptype=("run", "link"), order="post"):
python_runtime_env.extend(spack.user_environment.environment_modifications_for_spec(s))
python_runtime_env.unset("SPACK_ENV")
python_runtime_env.unset("SPACK_PYTHON")
self.spec["python"].command(

View File

@@ -16,3 +16,4 @@ class CppLogger(CMakePackage):
version("develop", branch="develop")
version("master", branch="master")
version("0.0.1", tag="v0.0.1", commit="d48b38ab14477bb7c53f8189b8b4be2ea214c28a")
version("0.0.2", tag="v0.0.2", commit="329a48401033d2d2a1f1196141763cab029220ae")

View File

@@ -18,7 +18,7 @@ class Cpr(CMakePackage):
version("1.9.2", sha256="3bfbffb22c51f322780d10d3ca8f79424190d7ac4b5ad6ad896de08dbd06bf31")
depends_on("curl")
depends_on("git", type="build")
depends_on("git", when="build")
def cmake_args(self):
_force = "_FORCE" if self.spec.satisfies("@:1.9") else ""

View File

@@ -16,7 +16,10 @@ class Cube(AutotoolsPackage):
homepage = "https://www.scalasca.org/software/cube-4.x/download.html"
url = "https://apps.fz-juelich.de/scalasca/releases/cube/4.4/dist/cubegui-4.4.2.tar.gz"
maintainers("swat-jsc")
version("4.8.2", sha256="bf2e02002bb2e5c4f61832ce37b62a440675c6453463014b33b2474aac78f86d")
version("4.8.1", sha256="a8a2a62b4e587c012d3d32385bed7c500db14232419795e0f4272d1dcefc55bc")
version("4.8", sha256="1df8fcaea95323e7eaf0cc010784a41243532c2123a27ce93cb7e3241557ff76")
version("4.7.1", sha256="7c96bf9ffb8cc132945f706657756fe6f88b7f7a5243ecd3741f599c2006d428")
version("4.7", sha256="103fe00fa9846685746ce56231f64d850764a87737dc0407c9d0a24037590f68")

View File

@@ -14,6 +14,7 @@ class Cubelib(AutotoolsPackage):
maintainers = ("swat-jsc", "wrwilliams")
version("4.8.2", sha256="d6fdef57b1bc9594f1450ba46cf08f431dd0d4ae595c47e2f3454e17e4ae74f4")
version("4.8.1", sha256="e4d974248963edab48c5d0fc5831146d391b0ae4632cccafe840bf5f12cd80a9")
version("4.8", sha256="171c93ac5afd6bc74c50a9a58efdaf8589ff5cc1e5bd773ebdfb2347b77e2f68")
version("4.7.1", sha256="62cf33a51acd9a723fff9a4a5411cd74203e24e0c4ffc5b9e82e011778ed4f2f")
version("4.7", sha256="e44352c80a25a49b0fa0748792ccc9f1be31300a96c32de982b92477a8740938")

View File

@@ -14,6 +14,7 @@ class Cubew(AutotoolsPackage):
maintainers = ("swat-jsc", "wrwilliams")
version("4.8.2", sha256="4f3bcf0622c2429b8972b5eb3f14d79ec89b8161e3c1cc5862ceda417d7975d2")
version("4.8.1", sha256="42cbd743d87c16e805c8e28e79292ab33de259f2cfba46f2682cb35c1bc032d6")
version("4.8", sha256="73c7f9e9681ee45d71943b66c01cfe675b426e4816e751ed2e0b670563ca4cf3")
version("4.7.1", sha256="0d364a4930ca876aa887ec40d12399d61a225dbab69e57379b293516d7b6db8d")
version("4.7", sha256="a7c7fca13e6cb252f08d4380223d7c56a8e86a67de147bcc0279ebb849c884a5")

View File

@@ -115,9 +115,9 @@ def configure_args(self):
if "+apmpi" in spec:
extra_args.append("--enable-apmpi-mod")
if "+apmpi_sync" in spec:
extra_args.append(["--enable-apmpi-mod", "--enable-apmpi-coll-sync"])
extra_args.extend(["--enable-apmpi-mod", "--enable-apmpi-coll-sync"])
if "+apxc" in spec:
extra_args.append(["--enable-apxc-mod"])
extra_args.append("--enable-apxc-mod")
extra_args.append("--with-mem-align=8")
extra_args.append("--with-log-path-by-env=DARSHAN_LOG_DIR_PATH")

View File

@@ -8,7 +8,39 @@
from spack.package import *
class Dihydrogen(CMakePackage, CudaPackage, ROCmPackage):
# This is a hack to get around some deficiencies in Hydrogen.
def get_blas_entries(inspec):
entries = []
spec = inspec["hydrogen"]
if "blas=openblas" in spec:
entries.append(cmake_cache_option("DiHydrogen_USE_OpenBLAS", True))
elif "blas=mkl" in spec or spec.satisfies("^intel-mkl"):
entries.append(cmake_cache_option("DiHydrogen_USE_MKL", True))
elif "blas=essl" in spec or spec.satisfies("^essl"):
entries.append(cmake_cache_string("BLA_VENDOR", "IBMESSL"))
# If IBM ESSL is used, it needs help finding the proper LAPACK libraries
entries.append(
cmake_cache_string(
"LAPACK_LIBRARIES",
"%s;-llapack;-lblas"
% ";".join("-l{0}".format(lib) for lib in spec["essl"].libs.names),
)
)
entries.append(
cmake_cache_string(
"BLAS_LIBRARIES",
"%s;-lblas"
% ";".join("-l{0}".format(lib) for lib in spec["essl"].libs.names),
)
)
elif "blas=accelerate" in spec:
entries.append(cmake_cache_option("DiHydrogen_USE_ACCELERATE", True))
elif spec.satisfies("^netlib-lapack"):
entries.append(cmake_cache_string("BLA_VENDOR", "Generic"))
return entries
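A quick sketch of the string the ESSL branch above constructs, with a hypothetical value standing in for spec["essl"].libs.names:

names = ["esslsmp"]  # hypothetical library name list
lapack = "%s;-llapack;-lblas" % ";".join("-l{0}".format(lib) for lib in names)
assert lapack == "-lesslsmp;-llapack;-lblas"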
class Dihydrogen(CachedCMakePackage, CudaPackage, ROCmPackage):
"""DiHydrogen is the second version of the Hydrogen fork of the
well-known distributed linear algebra library,
Elemental. DiHydrogen aims to be a basic distributed
@@ -20,117 +52,179 @@ class Dihydrogen(CMakePackage, CudaPackage, ROCmPackage):
git = "https://github.com/LLNL/DiHydrogen.git"
tags = ["ecp", "radiuss"]
maintainers("bvanessen")
maintainers("benson31", "bvanessen")
version("develop", branch="develop")
version("master", branch="master")
version("0.2.1", sha256="11e2c0f8a94ffa22e816deff0357dde6f82cc8eac21b587c800a346afb5c49ac")
version("0.2.0", sha256="e1f597e80f93cf49a0cb2dbc079a1f348641178c49558b28438963bd4a0bdaa4")
version("0.1", sha256="171d4b8adda1e501c38177ec966e6f11f8980bf71345e5f6d87d0a988fef4c4e")
version("0.3.0", sha256="8dd143441a28e0c7662cd92694e9a4894b61fd48508ac1d77435f342bc226dcf")
# Primary features
variant("dace", default=False, sticky=True, description="Enable DaCe backend.")
variant(
"distconv",
default=False,
sticky=True,
description="Enable (legacy) Distributed Convolution support.",
)
variant(
"nvshmem",
default=False,
sticky=True,
description="Enable support for NVSHMEM-based halo exchanges.",
when="+distconv",
)
variant(
"shared", default=True, sticky=True, description="Enables the build of shared libraries"
)
# Some features of developer interest
variant("al", default=True, description="Builds with Aluminum communication library")
variant(
"developer",
default=False,
description="Enable extra warnings and force tests to be enabled.",
)
variant("half", default=False, description="Enable FP16 support on the CPU.")
variant("ci", default=False, description="Use default options for CI builds")
variant(
"distconv",
"coverage",
default=False,
description="Support distributed convolutions: spatial, channel, " "filter.",
description="Decorate build with code coverage instrumentation options",
when="%gcc",
)
variant("nvshmem", default=False, description="Builds with support for NVSHMEM")
variant("openmp", default=False, description="Enable CPU acceleration with OpenMP threads.")
variant("rocm", default=False, description="Enable ROCm/HIP language features.")
variant("shared", default=True, description="Enables the build of shared libraries")
# Variants related to BLAS
variant(
"openmp_blas", default=False, description="Use OpenMP for threading in the BLAS library"
"coverage",
default=False,
description="Decorate build with code coverage instrumentation options",
when="%clang",
)
variant("int64_blas", default=False, description="Use 64bit integers for BLAS.")
variant(
"blas",
default="openblas",
values=("openblas", "mkl", "accelerate", "essl", "libsci"),
description="Enable the use of OpenBlas/MKL/Accelerate/ESSL/LibSci",
"coverage",
default=False,
description="Decorate build with code coverage instrumentation options",
when="%rocmcc",
)
conflicts("~cuda", when="+nvshmem")
# Package conflicts and requirements
depends_on("mpi")
depends_on("catch2", type="test")
conflicts("+nvshmem", when="~cuda", msg="NVSHMEM requires CUDA support.")
# Specify the correct version of Aluminum
depends_on("aluminum@0.4.0:0.4", when="@0.1 +al")
depends_on("aluminum@0.5.0:0.5", when="@0.2.0 +al")
depends_on("aluminum@0.7.0:0.7", when="@0.2.1 +al")
depends_on("aluminum@0.7.0:", when="@:0.0,0.2.1: +al")
conflicts("+cuda", when="+rocm", msg="CUDA and ROCm are mutually exclusive.")
# Add Aluminum variants
depends_on("aluminum +cuda +nccl +cuda_rma", when="+al +cuda")
depends_on("aluminum +rocm +rccl", when="+al +rocm")
depends_on("aluminum +ht", when="+al +distconv")
requires(
"+cuda",
"+rocm",
when="+distconv",
policy="any_of",
msg="DistConv support requires CUDA or ROCm.",
)
for arch in CudaPackage.cuda_arch_values:
depends_on("aluminum cuda_arch=%s" % arch, when="+al +cuda cuda_arch=%s" % arch)
depends_on("nvshmem cuda_arch=%s" % arch, when="+nvshmem +cuda cuda_arch=%s" % arch)
# Dependencies
# variants +rocm and amdgpu_targets are not automatically passed to
# dependencies, so do it manually.
for val in ROCmPackage.amdgpu_targets:
depends_on("aluminum amdgpu_target=%s" % val, when="amdgpu_target=%s" % val)
depends_on("catch2@3.0.1:", type=("build", "test"), when="+developer")
depends_on("cmake@3.21.0:", type="build")
depends_on("cuda@11.0:", when="+cuda")
depends_on("spdlog@1.11.0", when="@:0.1,0.2:")
depends_on("roctracer-dev", when="+rocm +distconv")
with when("@0.3.0:"):
depends_on("hydrogen +al")
for arch in CudaPackage.cuda_arch_values:
depends_on(
"hydrogen +cuda cuda_arch={0}".format(arch),
when="+cuda cuda_arch={0}".format(arch),
)
depends_on("cudnn", when="+cuda")
depends_on("cub", when="^cuda@:10")
for val in ROCmPackage.amdgpu_targets:
depends_on(
"hydrogen amdgpu_target={0}".format(val),
when="+rocm amdgpu_target={0}".format(val),
)
# Note that #1712 forces us to enumerate the different blas variants
depends_on("openblas", when="blas=openblas")
depends_on("openblas +ilp64", when="blas=openblas +int64_blas")
depends_on("openblas threads=openmp", when="blas=openblas +openmp_blas")
with when("+distconv"):
depends_on("mpi")
depends_on("intel-mkl", when="blas=mkl")
depends_on("intel-mkl +ilp64", when="blas=mkl +int64_blas")
depends_on("intel-mkl threads=openmp", when="blas=mkl +openmp_blas")
# All this nonsense for one silly little package.
depends_on("aluminum@1.4.1:")
depends_on("veclibfort", when="blas=accelerate")
conflicts("blas=accelerate +openmp_blas")
# Add Aluminum variants
depends_on("aluminum +cuda +nccl", when="+distconv +cuda")
depends_on("aluminum +rocm +nccl", when="+distconv +rocm")
depends_on("essl", when="blas=essl")
depends_on("essl +ilp64", when="blas=essl +int64_blas")
depends_on("essl threads=openmp", when="blas=essl +openmp_blas")
depends_on("netlib-lapack +external-blas", when="blas=essl")
# TODO: Debug linker errors when NVSHMEM is built with UCX
depends_on("nvshmem +nccl~ucx", when="+nvshmem")
depends_on("cray-libsci", when="blas=libsci")
depends_on("cray-libsci +openmp", when="blas=libsci +openmp_blas")
# OMP support is only used in DistConv, and only Apple needs
# hand-holding with it.
depends_on("llvm-openmp", when="%apple-clang")
# FIXME: when="platform=darwin"??
# Distconv builds require cuda or rocm
conflicts("+distconv", when="~cuda ~rocm")
# CUDA/ROCm arch forwarding
conflicts("+distconv", when="+half")
conflicts("+rocm", when="+half")
for arch in CudaPackage.cuda_arch_values:
depends_on(
"aluminum +cuda cuda_arch={0}".format(arch),
when="+cuda cuda_arch={0}".format(arch),
)
depends_on("half", when="+half")
# This is a workaround for a bug in the Aluminum package,
# as it should be responsible for its own NCCL dependency.
# Rather than failing to concretize, we help it along.
depends_on(
"nccl cuda_arch={0}".format(arch),
when="+distconv +cuda cuda_arch={0}".format(arch),
)
generator("ninja")
depends_on("cmake@3.17.0:", type="build")
# NVSHMEM also needs arch forwarding
depends_on(
"nvshmem +cuda cuda_arch={0}".format(arch),
when="+nvshmem +cuda cuda_arch={0}".format(arch),
)
depends_on("spdlog", when="@:0.1,0.2:")
# Identify versions of cuda_arch that are too old from
# lib/spack/spack/build_systems/cuda.py. We require >=60.
illegal_cuda_arch_values = [
"10",
"11",
"12",
"13",
"20",
"21",
"30",
"32",
"35",
"37",
"50",
"52",
"53",
]
for value in illegal_cuda_arch_values:
conflicts("cuda_arch=" + value)
depends_on("llvm-openmp", when="%apple-clang +openmp")
for val in ROCmPackage.amdgpu_targets:
depends_on(
"aluminum amdgpu_target={0}".format(val),
when="+rocm amdgpu_target={0}".format(val),
)
# TODO: Debug linker errors when NVSHMEM is built with UCX
depends_on("nvshmem +nccl~ucx", when="+nvshmem")
# CUDA-specific distconv dependencies
depends_on("cudnn", when="+cuda")
# Identify versions of cuda_arch that are too old
# from lib/spack/spack/build_systems/cuda.py
illegal_cuda_arch_values = ["10", "11", "12", "13", "20", "21"]
for value in illegal_cuda_arch_values:
conflicts("cuda_arch=" + value)
# ROCm-specific distconv dependencies
depends_on("hipcub", when="+rocm")
depends_on("miopen-hip", when="+rocm")
depends_on("roctracer-dev", when="+rocm")
with when("+ci+coverage"):
depends_on("lcov", type=("build", "run"))
depends_on("py-gcovr", type=("build", "run"))
# Technically it's not used in the build, but CMake sets up a
# target, so it needs to be found.
@property
def libs(self):
@@ -138,104 +232,127 @@ def libs(self):
return find_libraries("libH2Core", root=self.prefix, shared=shared, recursive=True)
def cmake_args(self):
args = []
return args
def get_cuda_flags(self):
spec = self.spec
args = []
if spec.satisfies("^cuda+allow-unsupported-compilers"):
args.append("-allow-unsupported-compiler")
args = [
"-DCMAKE_CXX_STANDARD=17",
"-DCMAKE_INSTALL_MESSAGE:STRING=LAZY",
"-DBUILD_SHARED_LIBS:BOOL=%s" % ("+shared" in spec),
"-DH2_ENABLE_ALUMINUM=%s" % ("+al" in spec),
"-DH2_ENABLE_CUDA=%s" % ("+cuda" in spec),
"-DH2_ENABLE_DISTCONV_LEGACY=%s" % ("+distconv" in spec),
"-DH2_ENABLE_OPENMP=%s" % ("+openmp" in spec),
"-DH2_ENABLE_FP16=%s" % ("+half" in spec),
"-DH2_DEVELOPER_BUILD=%s" % ("+developer" in spec),
]
if spec.satisfies("%clang"):
for flag in spec.compiler_flags["cxxflags"]:
if "gcc-toolchain" in flag:
args.append("-Xcompiler={0}".format(flag))
return args
if spec.version < Version("0.3"):
args.append("-DH2_ENABLE_HIP_ROCM=%s" % ("+rocm" in spec))
else:
args.append("-DH2_ENABLE_ROCM=%s" % ("+rocm" in spec))
def initconfig_compiler_entries(self):
spec = self.spec
entries = super(Dihydrogen, self).initconfig_compiler_entries()
if not spec.satisfies("^cmake@3.23.0"):
# There is a bug with using Ninja generator in this version
# of CMake
args.append("-DCMAKE_EXPORT_COMPILE_COMMANDS=ON")
# FIXME: Enforce this better in the actual CMake.
entries.append(cmake_cache_string("CMAKE_CXX_STANDARD", "17"))
entries.append(cmake_cache_option("BUILD_SHARED_LIBS", "+shared" in spec))
entries.append(cmake_cache_option("CMAKE_EXPORT_COMPILE_COMMANDS", True))
if "+cuda" in spec:
if self.spec.satisfies("%clang"):
for flag in self.spec.compiler_flags["cxxflags"]:
if "gcc-toolchain" in flag:
args.append("-DCMAKE_CUDA_FLAGS=-Xcompiler={0}".format(flag))
if spec.satisfies("^cuda@11.0:"):
args.append("-DCMAKE_CUDA_STANDARD=17")
else:
args.append("-DCMAKE_CUDA_STANDARD=14")
archs = spec.variants["cuda_arch"].value
if archs != "none":
arch_str = ";".join(archs)
args.append("-DCMAKE_CUDA_ARCHITECTURES=%s" % arch_str)
# It's possible this should have a `if "platform=cray" in
# spec:` guard in front of it, but it's not clear when that is
# set. In particular, this blurb does not actually show up on
# Tioga builds, which causes the very problem it was added to
# solve in the first place.
entries.append(cmake_cache_option("MPI_ASSUME_NO_BUILTIN_MPI", True))
if spec.satisfies("%cce") and spec.satisfies("^cuda+allow-unsupported-compilers"):
args.append("-DCMAKE_CUDA_FLAGS=-allow-unsupported-compiler")
if "+cuda" in spec:
args.append("-DcuDNN_DIR={0}".format(spec["cudnn"].prefix))
if spec.satisfies("^cuda@:10"):
if "+cuda" in spec or "+distconv" in spec:
args.append("-DCUB_DIR={0}".format(spec["cub"].prefix))
# Add support for OpenMP with external (Brew) clang
if spec.satisfies("%clang +openmp platform=darwin"):
if spec.satisfies("%clang +distconv platform=darwin"):
clang = self.compiler.cc
clang_bin = os.path.dirname(clang)
clang_root = os.path.dirname(clang_bin)
args.extend(
[
"-DOpenMP_CXX_FLAGS=-fopenmp=libomp",
"-DOpenMP_CXX_LIB_NAMES=libomp",
"-DOpenMP_libomp_LIBRARY={0}/lib/libomp.dylib".format(clang_root),
]
)
if "+rocm" in spec:
args.extend(
[
"-DCMAKE_CXX_FLAGS=-std=c++17",
"-DHIP_ROOT_DIR={0}".format(spec["hip"].prefix),
"-DHIP_CXX_COMPILER={0}".format(self.spec["hip"].hipcc),
]
)
if "platform=cray" in spec:
args.extend(["-DMPI_ASSUME_NO_BUILTIN_MPI=ON"])
archs = self.spec.variants["amdgpu_target"].value
if archs != "none":
arch_str = ",".join(archs)
args.append(
"-DHIP_HIPCC_FLAGS=--amdgpu-target={0}"
" -g -fsized-deallocation -fPIC -std=c++17".format(arch_str)
entries.append(cmake_cache_string("OpenMP_CXX_FLAGS", "-fopenmp=libomp"))
entries.append(cmake_cache_string("OpenMP_CXX_LIB_NAMES", "libomp"))
entries.append(
cmake_cache_string(
"OpenMP_libomp_LIBRARY", "{0}/lib/libomp.dylib".format(clang_root)
)
args.extend(
[
"-DCMAKE_HIP_ARCHITECTURES=%s" % arch_str,
"-DAMDGPU_TARGETS=%s" % arch_str,
"-DGPU_TARGETS=%s" % arch_str,
]
)
if self.spec.satisfies("^essl"):
# IF IBM ESSL is used it needs help finding the proper LAPACK libraries
args.extend(
[
"-DLAPACK_LIBRARIES=%s;-llapack;-lblas"
% ";".join("-l{0}".format(lib) for lib in self.spec["essl"].libs.names),
"-DBLAS_LIBRARIES=%s;-lblas"
% ";".join("-l{0}".format(lib) for lib in self.spec["essl"].libs.names),
]
)
return args
return entries
def initconfig_hardware_entries(self):
spec = self.spec
entries = super(Dihydrogen, self).initconfig_hardware_entries()
entries.append(cmake_cache_option("H2_ENABLE_CUDA", "+cuda" in spec))
if spec.satisfies("+cuda"):
entries.append(cmake_cache_string("CMAKE_CUDA_STANDARD", "17"))
if not spec.satisfies("cuda_arch=none"):
archs = spec.variants["cuda_arch"].value
arch_str = ";".join(archs)
entries.append(cmake_cache_string("CMAKE_CUDA_ARCHITECTURES", arch_str))
# FIXME: Should this use the "cuda_flags" function of the
# CudaPackage class or something? There might be other
# flags in play, and we need to be sure to get them all.
cuda_flags = self.get_cuda_flags()
if len(cuda_flags) > 0:
entries.append(cmake_cache_string("CMAKE_CUDA_FLAGS", " ".join(cuda_flags)))
enable_rocm_var = (
"H2_ENABLE_HIP_ROCM" if spec.version < Version("0.3") else "H2_ENABLE_ROCM"
)
entries.append(cmake_cache_option(enable_rocm_var, "+rocm" in spec))
if spec.satisfies("+rocm"):
entries.append(cmake_cache_string("CMAKE_HIP_STANDARD", "17"))
if not spec.satisfies("amdgpu_target=none"):
archs = self.spec.variants["amdgpu_target"].value
arch_str = ";".join(archs)
entries.append(cmake_cache_string("CMAKE_HIP_ARCHITECTURES", arch_str))
entries.append(cmake_cache_string("AMDGPU_TARGETS", arch_str))
entries.append(cmake_cache_string("GPU_TARGETS", arch_str))
entries.append(cmake_cache_path("HIP_ROOT_DIR", spec["hip"].prefix))
return entries
def initconfig_package_entries(self):
spec = self.spec
entries = super(Dihydrogen, self).initconfig_package_entries()
# Basic H2 options
entries.append(cmake_cache_option("H2_DEVELOPER_BUILD", "+developer" in spec))
entries.append(cmake_cache_option("H2_ENABLE_TESTS", "+developer" in spec))
entries.append(cmake_cache_option("H2_ENABLE_CODE_COVERAGE", "+coverage" in spec))
entries.append(cmake_cache_option("H2_CI_BUILD", "+ci" in spec))
entries.append(cmake_cache_option("H2_ENABLE_DACE", "+dace" in spec))
# DistConv options
entries.append(cmake_cache_option("H2_ENABLE_ALUMINUM", "+distconv" in spec))
entries.append(cmake_cache_option("H2_ENABLE_DISTCONV_LEGACY", "+distconv" in spec))
entries.append(cmake_cache_option("H2_ENABLE_OPENMP", "+distconv" in spec))
# Paths to stuff, just in case. CMAKE_PREFIX_PATH should catch
# all of this, but it shouldn't hurt to have these here.
entries.append(cmake_cache_path("spdlog_ROOT", spec["spdlog"].prefix))
if "+developer" in spec:
entries.append(cmake_cache_path("Catch2_ROOT", spec["catch2"].prefix))
if "+coverage" in spec:
entries.append(cmake_cache_path("lcov_ROOT", spec["lcov"].prefix))
entries.append(cmake_cache_path("genhtml_ROOT", spec["lcov"].prefix))
if "+ci" in spec:
entries.append(cmake_cache_path("gcovr_ROOT", spec["py-gcovr"].prefix))
if "+distconv" in spec:
entries.append(cmake_cache_path("Aluminum_ROOT", spec["aluminum"].prefix))
if "+cuda" in spec:
entries.append(cmake_cache_path("cuDNN_ROOT", spec["cudnn"].prefix))
# Currently this is a hack for all Hydrogen versions. Work is in
# progress to fix this on develop.
entries.extend(get_blas_entries(spec))
return entries
def setup_build_environment(self, env):
if self.spec.satisfies("%apple-clang +openmp"):

View File

@@ -18,6 +18,7 @@ class Discotec(CMakePackage):
version("main", branch="main")
variant("compression", default=False, description="Write sparse grid files compressed")
variant("ft", default=False, description="DisCoTec with algorithm-based fault tolerance")
variant("gene", default=False, description="Build for GENE (as task library)")
variant("hdf5", default=True, description="Interpolation output with HDF5")
@@ -31,6 +32,7 @@ class Discotec(CMakePackage):
depends_on("cmake@3.24.2:", type="build")
depends_on("glpk")
depends_on("highfive+mpi+boost+ipo", when="+hdf5")
depends_on("lz4", when="+compression")
depends_on("mpi")
depends_on("selalib", when="+selalib")
depends_on("vtk", when="+vtk")
@@ -38,6 +40,7 @@ class Discotec(CMakePackage):
def cmake_args(self):
args = [
self.define("DISCOTEC_BUILD_MISSING_DEPS", False),
self.define_from_variant("DISCOTEC_WITH_COMPRESSION", "compression"),
self.define_from_variant("DISCOTEC_ENABLEFT", "ft"),
self.define_from_variant("DISCOTEC_GENE", "gene"),
self.define_from_variant("DISCOTEC_OPENMP", "openmp"),

View File

@@ -16,7 +16,6 @@ class Fdb(CMakePackage):
maintainers("skosukhin")
# master version of fdb is subject to frequent changes and is to be used experimentally.
version("master", branch="master")
version("5.11.23", sha256="09b1d93f2b71d70c7b69472dfbd45a7da0257211f5505b5fcaf55bfc28ca6c65")
version("5.11.17", sha256="375c6893c7c60f6fdd666d2abaccb2558667bd450100817c0e1072708ad5591e")
@@ -44,6 +43,7 @@ class Fdb(CMakePackage):
depends_on("ecbuild@3.7:", type="build", when="@5.11.6:")
depends_on("eckit@1.16:")
depends_on("eckit@1.24.4:", when="@5.11.22:")
depends_on("eckit+admin", when="+tools")
depends_on("eccodes@2.10:")

View File

@@ -24,7 +24,8 @@ class Ginkgo(CMakePackage, CudaPackage, ROCmPackage):
version("develop", branch="develop")
version("master", branch="master")
version("1.6.0", commit="1f1ed46e724334626f016f105213c047e16bc1ae", preferred=True) # v1.6.0
version("1.7.0", commit="49242ff89af1e695d7794f6d50ed9933024b66fe") # v1.7.0
version("1.6.0", commit="1f1ed46e724334626f016f105213c047e16bc1ae") # v1.6.0
version("1.5.0", commit="234594c92b58e2384dfb43c2d08e7f43e2b58e7a") # v1.5.0
version("1.5.0.glu_experimental", branch="glu_experimental")
version("1.4.0", commit="f811917c1def4d0fcd8db3fe5c948ce13409e28e") # v1.4.0
@@ -37,13 +38,18 @@ class Ginkgo(CMakePackage, CudaPackage, ROCmPackage):
variant("shared", default=True, description="Build shared libraries")
variant("full_optimizations", default=False, description="Compile with all optimizations")
variant("openmp", default=sys.platform != "darwin", description="Build with OpenMP")
variant("oneapi", default=False, description="Build with oneAPI support")
variant("sycl", default=False, description="Enable SYCL backend")
variant("develtools", default=False, description="Compile with develtools enabled")
variant("hwloc", default=False, description="Enable HWLOC support")
variant("mpi", default=False, description="Enable MPI support")
depends_on("cmake@3.9:", type="build")
depends_on("cuda@9:", when="+cuda")
depends_on("cmake@3.9:", type="build", when="@:1.3.0")
depends_on("cmake@3.13:", type="build", when="@1.4.0:1.6.0")
depends_on("cmake@3.16:", type="build", when="@1.7.0:")
depends_on("cmake@3.18:", type="build", when="+cuda@1.7.0:")
depends_on("cuda@9:", when="+cuda @:1.4.0")
depends_on("cuda@9.2:", when="+cuda @1.5.0:")
depends_on("cuda@10.1:", when="+cuda @1.7.0:")
depends_on("mpi", when="+mpi")
depends_on("rocthrust", when="+rocm")
@@ -60,14 +66,13 @@ class Ginkgo(CMakePackage, CudaPackage, ROCmPackage):
depends_on("googletest", type="test")
depends_on("numactl", type="test", when="+hwloc")
depends_on("intel-oneapi-mkl", when="+oneapi")
depends_on("intel-oneapi-dpl", when="+oneapi")
depends_on("intel-oneapi-mkl", when="+sycl")
depends_on("intel-oneapi-dpl", when="+sycl")
depends_on("intel-oneapi-tbb", when="+sycl")
conflicts("%gcc@:5.2.9")
conflicts("+rocm", when="@:1.1.1")
conflicts("+mpi", when="@:1.4.0")
conflicts("+cuda", when="+rocm")
conflicts("+openmp", when="+oneapi")
# ROCm 4.1.0 breaks platform settings which breaks Ginkgo's HIP support.
conflicts("^hip@4.1.0:", when="@:1.3.0")
@@ -76,22 +81,35 @@ class Ginkgo(CMakePackage, CudaPackage, ROCmPackage):
conflicts("^rocthrust@4.1.0:", when="@:1.3.0")
conflicts("^rocprim@4.1.0:", when="@:1.3.0")
# Ginkgo 1.6.0 started relying on ROCm 4.5.0
conflicts("^hip@:4.3.1", when="@1.6.0:")
conflicts("^hipblas@:4.3.1", when="@1.6.0:")
conflicts("^hipsparse@:4.3.1", when="@1.6.0:")
conflicts("^rocthrust@:4.3.1", when="@1.6.0:")
conflicts("^rocprim@:4.3.1", when="@1.6.0:")
conflicts(
"+sycl", when="@:1.4.0", msg="For SYCL support, please use Ginkgo version 1.4.0 and newer."
)
# Skip smoke tests if compatible hardware isn't found
patch("1.4.0_skip_invalid_smoke_tests.patch", when="@1.4.0")
# Newer DPC++ compilers use the updated SYCL 2020 standard, which changes
# kernel attribute propagation rules. This doesn't work well with the
# initial Ginkgo oneAPI support.
patch("1.4.0_dpcpp_use_old_standard.patch", when="+oneapi @1.4.0")
# Add missing include statement
patch("thrust-count-header.patch", when="+rocm @1.5.0")
def setup_build_environment(self, env):
spec = self.spec
if "+oneapi" in spec:
if "+sycl" in spec:
env.set("MKLROOT", join_path(spec["intel-oneapi-mkl"].prefix, "mkl", "latest"))
env.set("DPL_ROOT", join_path(spec["intel-oneapi-dpl"].prefix, "dpl", "latest"))
# The `IntelSYCLConfig.cmake` is broken with Spack. By default, it
# relies on CMAKE_CXX_COMPILER being the real icpx/dpcpp
# compiler. If it is not, the SYCL_COMPILER variable of that script is
# wrong, and the whole SYCL detection mechanism breaks. We fix this
# by providing hint environment variables.
env.set("SYCL_LIBRARY_DIR_HINT", os.path.dirname(os.path.dirname(self.compiler.cxx)))
env.set("SYCL_INCLUDE_DIR_HINT", os.path.dirname(os.path.dirname(self.compiler.cxx)))
def cmake_args(self):
# Check that the correct C++ standard is available
@@ -106,18 +124,19 @@ def cmake_args(self):
except UnsupportedCompilerFlag:
raise InstallError("Ginkgo requires a C++14-compliant C++ compiler")
cxx_is_dpcpp = os.path.basename(self.compiler.cxx) == "dpcpp"
if self.spec.satisfies("+oneapi") and not cxx_is_dpcpp:
raise InstallError(
"Ginkgo's oneAPI backend requires the" + "DPC++ compiler as main CXX compiler."
)
if self.spec.satisfies("@1.4.0:1.6.0 +sycl") and not self.spec.satisfies(
"%oneapi@2021.3.0:"
):
raise InstallError("ginkgo +sycl requires %oneapi@2021.3.0:")
elif self.spec.satisfies("@1.7.0: +sycl") and not self.spec.satisfies("%oneapi@2022.1.0:"):
raise InstallError("ginkgo +sycl requires %oneapi@2022.1.0:")
spec = self.spec
from_variant = self.define_from_variant
args = [
from_variant("GINKGO_BUILD_CUDA", "cuda"),
from_variant("GINKGO_BUILD_HIP", "rocm"),
from_variant("GINKGO_BUILD_DPCPP", "oneapi"),
from_variant("GINKGO_BUILD_SYCL", "sycl"),
from_variant("GINKGO_BUILD_OMP", "openmp"),
from_variant("GINKGO_BUILD_MPI", "mpi"),
from_variant("BUILD_SHARED_LIBS", "shared"),
@@ -161,6 +180,11 @@ def cmake_args(self):
args.append(
self.define("CMAKE_MODULE_PATH", self.spec["hip"].prefix.lib.cmake.hip)
)
if "+sycl" in self.spec:
sycl_compatible_compilers = ["dpcpp", "icpx"]
if os.path.basename(self.compiler.cxx) not in sycl_compatible_compilers:
raise InstallError("ginkgo +sycl requires DPC++ (dpcpp) or icpx compiler.")
return args
@property

View File

@@ -17,6 +17,7 @@ class Gotcha(CMakePackage):
version("develop", branch="develop")
version("master", branch="master")
version("1.0.5", tag="1.0.5", commit="e28f10c45a0cda0e1ec225eaea6abfe72c8353aa")
version("1.0.4", tag="1.0.4", commit="46f2aaedc885f140a3f31a17b9b9a9d171f3d6f0")
version("1.0.3", tag="1.0.3", commit="1aafd1e30d46ce4e6555c8a4ea5f5edf6a5eade5")
version("1.0.2", tag="1.0.2", commit="bed1b7c716ebb0604b3e063121649b5611640f25")

View File

@@ -711,6 +711,17 @@ def fix_package_config(self):
if not os.path.exists(tgt_filename):
symlink(src_filename, tgt_filename)
@run_after("install")
def link_debug_libs(self):
# When build_type is Debug, the hdf5 build appends _debug to all library names.
# Dependents of hdf5 (netcdf-c etc.) can't handle those, thus make symlinks.
if "build_type=Debug" in self.spec:
libs = find(self.prefix.lib, "libhdf5*_debug.*", recursive=False)
with working_dir(self.prefix.lib):
for lib in libs:
libname = os.path.split(lib)[1]
os.symlink(libname, libname.replace("_debug", ""))
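# Intended result, for illustration (assuming a shared Debug build):
#   libhdf5_debug.so                  <- built by hdf5
#   libhdf5.so -> libhdf5_debug.so    <- symlink created above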
@property
@llnl.util.lang.memoized
def _output_version(self):

View File

@@ -7,254 +7,268 @@
from spack.package import *
# This limits the versions of lots of things pretty severely.
#
# - Only v1.5.2 and newer are buildable.
# - CMake must be v3.22 or newer.
# - CUDA must be v11.0.0 or newer.
class Hydrogen(CMakePackage, CudaPackage, ROCmPackage):
class Hydrogen(CachedCMakePackage, CudaPackage, ROCmPackage):
"""Hydrogen: Distributed-memory dense and sparse-direct linear algebra
and optimization library. Based on the Elemental library."""
homepage = "https://libelemental.org"
url = "https://github.com/LLNL/Elemental/archive/v1.0.1.tar.gz"
url = "https://github.com/LLNL/Elemental/archive/v1.5.1.tar.gz"
git = "https://github.com/LLNL/Elemental.git"
tags = ["ecp", "radiuss"]
maintainers("bvanessen")
version("develop", branch="hydrogen")
version("1.5.3", sha256="faefbe738bd364d0e26ce9ad079a11c93a18c6f075719a365fd4fa5f1f7a989a")
version("1.5.2", sha256="a902cad3962471216cfa278ba0561c18751d415cd4d6b2417c02a43b0ab2ea33")
version("1.5.1", sha256="447da564278f98366906d561d9c8bc4d31678c56d761679c2ff3e59ee7a2895c")
version("1.5.0", sha256="03dd487fb23b9fdbc715554a8ea48c3196a1021502e61b0172ef3fdfbee75180")
version("1.4.0", sha256="c13374ff4a6c4d1076e47ba8c8d91a7082588b9958d1ed89cffb12f1d2e1452e")
version("1.3.4", sha256="7979f6656f698f0bbad6798b39d4b569835b3013ff548d98089fce7c283c6741")
version("1.3.3", sha256="a51a1cfd40ac74d10923dfce35c2c04a3082477683f6b35e7b558ea9f4bb6d51")
version("1.3.2", sha256="50bc5e87955f8130003d04dfd9dcad63107e92b82704f8107baf95b0ccf98ed6")
version("1.3.1", sha256="a8b8521458e9e747f2b24af87c4c2749a06e500019c383e0cefb33e5df6aaa1d")
version("1.3.0", sha256="0f3006aa1d8235ecdd621e7344c99f56651c6836c2e1bc0cf006331b70126b36")
version("1.2.0", sha256="8545975139582ee7bfe5d00f8d83a8697afc285bf7026b0761e9943355974806")
version("1.1.0-1", sha256="73ce05e4166853a186469269cb00a454de71e126b2019f95bbae703b65606808")
version("1.1.0", sha256="b4c12913acd01c72d31f4522266bfeb8df1d4d3b4aef02e07ccbc9a477894e71")
version("1.0.1", sha256="27cf76e1ef1d58bd8f9b1e34081a14a682b7ff082fb5d1da56713e5e0040e528")
version("1.0", sha256="d8a97de3133f2c6b6bb4b80d32b4a4cc25eb25e0df4f0cec0f8cb19bf34ece98")
# Older versions are no longer supported.
variant("shared", default=True, description="Enables the build of shared libraries")
variant("openmp", default=True, description="Make use of OpenMP within CPU-kernels")
variant(
"openmp_blas", default=False, description="Use OpenMP for threading in the BLAS library"
)
variant("quad", default=False, description="Enable quad precision")
variant("int64", default=False, description="Use 64bit integers")
variant("int64_blas", default=False, description="Use 64bit integers for BLAS.")
variant("scalapack", default=False, description="Build with ScaLAPACK library")
variant("shared", default=True, description="Enables the build of shared libraries.")
variant(
"build_type",
default="Release",
description="The build type to build",
values=("Debug", "Release"),
)
variant("int64", default=False, description="Use 64-bit integers")
variant("al", default=False, description="Use Aluminum communication library")
variant(
"cub", default=True, when="+cuda", description="Use CUB/hipCUB for GPU memory management"
)
variant(
"cub", default=True, when="+rocm", description="Use CUB/hipCUB for GPU memory management"
)
variant("half", default=False, description="Support for FP16 precision data types")
# TODO: Add netlib-lapack. For GPU-enabled builds, typical
# workflows don't touch host BLAS/LAPACK all that often, and even
# less frequently in performance-critical regions.
variant(
"blas",
default="openblas",
values=("openblas", "mkl", "accelerate", "essl", "libsci"),
description="Enable the use of OpenBlas/MKL/Accelerate/ESSL/LibSci",
default="any",
values=("any", "openblas", "mkl", "accelerate", "essl", "libsci"),
description="Specify a host BLAS library preference",
)
variant(
"mpfr",
default=False,
description="Support GNU MPFR's" "arbitrary-precision floating-point arithmetic",
)
variant("test", default=False, description="Builds test suite")
variant("al", default=False, description="Builds with Aluminum communication library")
variant("int64_blas", default=False, description="Use 64-bit integers for (host) BLAS.")
variant("openmp", default=True, description="Make use of OpenMP within CPU kernels")
variant(
"omp_taskloops",
when="+openmp",
default=False,
description="Use OpenMP taskloops instead of parallel for loops.",
description="Use OpenMP taskloops instead of parallel for loops",
)
variant("half", default=False, description="Builds with support for FP16 precision data types")
conflicts("~openmp", when="+omp_taskloops")
# Users should spec this on their own on the command line, no?
# This doesn't affect Hydrogen itself at all. Not one bit.
# variant(
# "openmp_blas",
# default=False,
# description="Use OpenMP for threading in the BLAS library")
variant("test", default=False, description="Builds test suite")
conflicts("+cuda", when="+rocm", msg="CUDA and ROCm support are mutually exclusive")
conflicts("+half", when="+rocm", msg="FP16 support not implemented for ROCm.")
depends_on("cmake@3.21.0:", type="build", when="@1.5.2:")
depends_on("cmake@3.17.0:", type="build", when="@:1.5.1")
depends_on("cmake@3.22.0:", type="build", when="%cce")
depends_on("cmake@3.22.0:", type="build", when="@1.5.2:")
depends_on("cmake@3.17.0:", type="build", when="@1.5.1")
depends_on("mpi")
depends_on("hwloc@1.11:")
depends_on("hwloc +cuda +nvml", when="+cuda")
depends_on("hwloc@2.3.0:", when="+rocm")
depends_on("blas")
depends_on("lapack")
# Note that #1712 forces us to enumerate the different blas variants
# Note that this forces us to use OpenBLAS until #1712 is fixed
depends_on("openblas", when="blas=openblas")
depends_on("openblas +ilp64", when="blas=openblas +int64_blas")
depends_on("openblas threads=openmp", when="blas=openblas +openmp_blas")
depends_on("intel-mkl", when="blas=mkl")
depends_on("intel-mkl +ilp64", when="blas=mkl +int64_blas")
depends_on("intel-mkl threads=openmp", when="blas=mkl +openmp_blas")
# I don't think this is true...
depends_on("veclibfort", when="blas=accelerate")
conflicts("blas=accelerate +openmp_blas")
depends_on("essl", when="blas=essl")
depends_on("essl +ilp64", when="blas=essl +int64_blas")
depends_on("essl threads=openmp", when="blas=essl +openmp_blas")
depends_on("netlib-lapack +external-blas", when="blas=essl")
depends_on("cray-libsci", when="blas=libsci")
depends_on("cray-libsci +openmp", when="blas=libsci +openmp_blas")
# Specify the correct version of Aluminum
depends_on("aluminum@:0.3", when="@:1.3 +al")
depends_on("aluminum@0.4.0:0.4", when="@1.4.0:1.4 +al")
depends_on("aluminum@0.6.0:0.6", when="@1.5.0:1.5.1 +al")
depends_on("aluminum@0.7.0:", when="@:1.0,1.5.2: +al")
depends_on("aluminum@0.7.0:", when="@1.5.2: +al")
# Add Aluminum variants
depends_on("aluminum +cuda +nccl +cuda_rma", when="+al +cuda")
depends_on("aluminum +rocm +rccl", when="+al +rocm")
depends_on("aluminum +cuda +ht", when="+al +cuda")
depends_on("aluminum +rocm +ht", when="+al +rocm")
for arch in CudaPackage.cuda_arch_values:
depends_on("aluminum cuda_arch=%s" % arch, when="+al +cuda cuda_arch=%s" % arch)
depends_on("aluminum +cuda cuda_arch=%s" % arch, when="+al +cuda cuda_arch=%s" % arch)
# variants +rocm and amdgpu_targets are not automatically passed to
# dependencies, so do it manually.
for val in ROCmPackage.amdgpu_targets:
depends_on("aluminum amdgpu_target=%s" % val, when="+al +rocm amdgpu_target=%s" % val)
depends_on(
"aluminum +rocm amdgpu_target=%s" % val, when="+al +rocm amdgpu_target=%s" % val
)
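# For illustration, each loop iteration above emits a directive like
#   depends_on("aluminum +rocm amdgpu_target=gfx90a",
#              when="+al +rocm amdgpu_target=gfx90a")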
# Note that this forces us to use OpenBLAS until #1712 is fixed
depends_on("lapack", when="blas=openblas ~openmp_blas")
depends_on("scalapack", when="+scalapack")
depends_on("gmp", when="+mpfr")
depends_on("mpc", when="+mpfr")
depends_on("mpfr", when="+mpfr")
depends_on("cuda", when="+cuda")
depends_on("cub", when="^cuda@:10")
depends_on("hipcub", when="+rocm")
depends_on("cuda@11.0.0:", when="+cuda")
depends_on("hipcub +rocm", when="+rocm +cub")
depends_on("half", when="+half")
depends_on("llvm-openmp", when="%apple-clang +openmp")
conflicts(
"@0:0.98",
msg="Hydrogen did not exist before v0.99. " + "Did you mean to use Elemental instead?",
)
generator("ninja")
@property
def libs(self):
shared = True if "+shared" in self.spec else False
return find_libraries("libEl", root=self.prefix, shared=shared, recursive=True)
return find_libraries("libHydrogen", root=self.prefix, shared=shared, recursive=True)
def cmake_args(self):
args = []
return args
def get_cuda_flags(self):
spec = self.spec
args = []
if spec.satisfies("^cuda+allow-unsupported-compilers"):
args.append("-allow-unsupported-compiler")
enable_gpu_fp16 = "+cuda" in spec and "+half" in spec
if spec.satisfies("%clang"):
for flag in spec.compiler_flags["cxxflags"]:
if "gcc-toolchain" in flag:
args.append("-Xcompiler={0}".format(flag))
return args
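# Illustrative result: for a spec such as
#   hydrogen +cuda %clang cxxflags="--gcc-toolchain=/opt/gcc" ^cuda+allow-unsupported-compilers
# this returns ["-allow-unsupported-compiler", "-Xcompiler=--gcc-toolchain=/opt/gcc"]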
args = [
"-DCMAKE_CXX_STANDARD=17",
"-DCMAKE_INSTALL_MESSAGE:STRING=LAZY",
"-DBUILD_SHARED_LIBS:BOOL=%s" % ("+shared" in spec),
"-DHydrogen_ENABLE_OPENMP:BOOL=%s" % ("+openmp" in spec),
"-DHydrogen_ENABLE_QUADMATH:BOOL=%s" % ("+quad" in spec),
"-DHydrogen_USE_64BIT_INTS:BOOL=%s" % ("+int64" in spec),
"-DHydrogen_USE_64BIT_BLAS_INTS:BOOL=%s" % ("+int64_blas" in spec),
"-DHydrogen_ENABLE_MPC:BOOL=%s" % ("+mpfr" in spec),
"-DHydrogen_GENERAL_LAPACK_FALLBACK=ON",
"-DHydrogen_ENABLE_ALUMINUM=%s" % ("+al" in spec),
"-DHydrogen_ENABLE_CUB=%s" % ("+cuda" in spec or "+rocm" in spec),
"-DHydrogen_ENABLE_CUDA=%s" % ("+cuda" in spec),
"-DHydrogen_ENABLE_ROCM=%s" % ("+rocm" in spec),
"-DHydrogen_ENABLE_TESTING=%s" % ("+test" in spec),
"-DHydrogen_ENABLE_HALF=%s" % ("+half" in spec),
"-DHydrogen_ENABLE_GPU_FP16=%s" % enable_gpu_fp16,
]
def std_initconfig_entries(self):
entries = super(Hydrogen, self).std_initconfig_entries()
if not spec.satisfies("^cmake@3.23.0"):
# There is a bug with using the Ninja generator in this
# version of CMake
args.append("-DCMAKE_EXPORT_COMPILE_COMMANDS=ON")
# CMAKE_PREFIX_PATH, in CMake types, is a "STRING", not a "PATH". :/
entries = [x for x in entries if "CMAKE_PREFIX_PATH" not in x]
cmake_prefix_path = os.environ["CMAKE_PREFIX_PATH"].replace(":", ";")
entries.append(cmake_cache_string("CMAKE_PREFIX_PATH", cmake_prefix_path))
# IDK why this is here, but it was in the original recipe. So, yeah.
entries.append(cmake_cache_string("CMAKE_INSTALL_MESSAGE", "LAZY"))
return entries
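# Illustrative: a build-time CMAKE_PREFIX_PATH of "/deps/a:/deps/b"
# becomes the cache entry
#   set(CMAKE_PREFIX_PATH "/deps/a;/deps/b" CACHE STRING "")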
if "+cuda" in spec:
if self.spec.satisfies("%clang"):
for flag in self.spec.compiler_flags["cxxflags"]:
if "gcc-toolchain" in flag:
args.append("-DCMAKE_CUDA_FLAGS=-Xcompiler={0}".format(flag))
args.append("-DCMAKE_CUDA_STANDARD=14")
archs = spec.variants["cuda_arch"].value
if archs != "none":
arch_str = ";".join(archs)
args.append("-DCMAKE_CUDA_ARCHITECTURES=%s" % arch_str)
def initconfig_compiler_entries(self):
spec = self.spec
entries = super(Hydrogen, self).initconfig_compiler_entries()
if spec.satisfies("%cce") and spec.satisfies("^cuda+allow-unsupported-compilers"):
args.append("-DCMAKE_CUDA_FLAGS=-allow-unsupported-compiler")
# FIXME: Enforce this better in the actual CMake.
entries.append(cmake_cache_string("CMAKE_CXX_STANDARD", "17"))
entries.append(cmake_cache_option("BUILD_SHARED_LIBS", "+shared" in spec))
entries.append(cmake_cache_option("CMAKE_EXPORT_COMPILE_COMMANDS", True))
if "+rocm" in spec:
args.extend(
[
"-DCMAKE_CXX_FLAGS=-std=c++17",
"-DHIP_ROOT_DIR={0}".format(spec["hip"].prefix),
"-DHIP_CXX_COMPILER={0}".format(self.spec["hip"].hipcc),
]
)
archs = self.spec.variants["amdgpu_target"].value
if archs != "none":
arch_str = ",".join(archs)
cxxflags_str = " ".join(self.spec.compiler_flags["cxxflags"])
args.append(
"-DHIP_HIPCC_FLAGS=--amdgpu-target={0}"
" -g -fsized-deallocation -fPIC {1}"
" -std=c++17".format(arch_str, cxxflags_str)
)
args.extend(
[
"-DCMAKE_HIP_ARCHITECTURES=%s" % arch_str,
"-DAMDGPU_TARGETS=%s" % arch_str,
"-DGPU_TARGETS=%s" % arch_str,
]
)
entries.append(cmake_cache_option("MPI_ASSUME_NO_BUILTIN_MPI", True))
# Add support for OS X to find OpenMP (LLVM installed via brew)
if self.spec.satisfies("%clang +openmp platform=darwin"):
if spec.satisfies("%clang +openmp platform=darwin") or spec.satisfies(
"%clang +omp_taskloops platform=darwin"
):
clang = self.compiler.cc
clang_bin = os.path.dirname(clang)
clang_root = os.path.dirname(clang_bin)
args.extend(["-DOpenMP_DIR={0}".format(clang_root)])
entries.append(cmake_cache_string("OpenMP_CXX_FLAGS", "-fopenmp=libomp"))
entries.append(cmake_cache_string("OpenMP_CXX_LIB_NAMES", "libomp"))
entries.append(
cmake_cache_string(
"OpenMP_libomp_LIBRARY", "{0}/lib/libomp.dylib".format(clang_root)
)
)
return entries
def initconfig_hardware_entries(self):
spec = self.spec
entries = super(Hydrogen, self).initconfig_hardware_entries()
entries.append(cmake_cache_option("Hydrogen_ENABLE_CUDA", "+cuda" in spec))
if spec.satisfies("+cuda"):
entries.append(cmake_cache_string("CMAKE_CUDA_STANDARD", "17"))
if not spec.satisfies("cuda_arch=none"):
archs = spec.variants["cuda_arch"].value
arch_str = ";".join(archs)
entries.append(cmake_cache_string("CMAKE_CUDA_ARCHITECTURES", arch_str))
# FIXME: Should this use the "cuda_flags" function of the
# CudaPackage class or something? There might be other
# flags in play, and we need to be sure to get them all.
cuda_flags = self.get_cuda_flags()
if len(cuda_flags) > 0:
entries.append(cmake_cache_string("CMAKE_CUDA_FLAGS", " ".join(cuda_flags)))
entries.append(cmake_cache_option("Hydrogen_ENABLE_ROCM", "+rocm" in spec))
if spec.satisfies("+rocm"):
entries.append(cmake_cache_string("CMAKE_HIP_STANDARD", "17"))
if not spec.satisfies("amdgpu_target=none"):
archs = self.spec.variants["amdgpu_target"].value
arch_str = ";".join(archs)
entries.append(cmake_cache_string("CMAKE_HIP_ARCHITECTURES", arch_str))
entries.append(cmake_cache_string("AMDGPU_TARGETS", arch_str))
entries.append(cmake_cache_string("GPU_TARGETS", arch_str))
entries.append(cmake_cache_path("HIP_ROOT_DIR", spec["hip"].prefix))
return entries
def initconfig_package_entries(self):
spec = self.spec
entries = super(Hydrogen, self).initconfig_package_entries()
# Basic Hydrogen options
entries.append(cmake_cache_option("Hydrogen_ENABLE_TESTING", "+test" in spec))
entries.append(cmake_cache_option("Hydrogen_GENERAL_LAPACK_FALLBACK", True))
entries.append(cmake_cache_option("Hydrogen_USE_64BIT_INTS", "+int64" in spec))
entries.append(cmake_cache_option("Hydrogen_USE_64BIT_BLAS_INTS", "+int64_blas" in spec))
# Advanced dependency options
entries.append(cmake_cache_option("Hydrogen_ENABLE_ALUMINUM", "+al" in spec))
entries.append(cmake_cache_option("Hydrogen_ENABLE_CUB", "+cub" in spec))
entries.append(cmake_cache_option("Hydrogen_ENABLE_GPU_FP16", "+cuda +half" in spec))
entries.append(cmake_cache_option("Hydrogen_ENABLE_HALF", "+half" in spec))
entries.append(cmake_cache_option("Hydrogen_ENABLE_OPENMP", "+openmp" in spec))
entries.append(
cmake_cache_option("Hydrogen_ENABLE_OMP_TASKLOOP", "+omp_taskloops" in spec)
)
# Note that CUDA/ROCm are handled above.
if "blas=openblas" in spec:
args.extend(
[
"-DHydrogen_USE_OpenBLAS:BOOL=%s" % ("blas=openblas" in spec),
"-DOpenBLAS_DIR:STRING={0}".format(spec["openblas"].prefix),
]
)
elif "blas=mkl" in spec:
args.extend(["-DHydrogen_USE_MKL:BOOL=%s" % ("blas=mkl" in spec)])
elif "blas=accelerate" in spec:
args.extend(["-DHydrogen_USE_ACCELERATE:BOOL=TRUE"])
elif "blas=essl" in spec:
entries.append(cmake_cache_option("Hydrogen_USE_OpenBLAS", "blas=openblas" in spec))
# CMAKE_PREFIX_PATH should handle this
entries.append(cmake_cache_string("OpenBLAS_DIR", spec["openblas"].prefix))
elif "blas=mkl" in spec or spec.satisfies("^intel-mkl"):
entries.append(cmake_cache_option("Hydrogen_USE_MKL", True))
elif "blas=essl" in spec or spec.satisfies("^essl"):
entries.append(cmake_cache_string("BLA_VENDOR", "IBMESSL"))
# If IBM ESSL is used, it needs help finding the proper LAPACK libraries
args.extend(
[
"-DLAPACK_LIBRARIES=%s;-llapack;-lblas"
entries.append(
cmake_cache_string(
"LAPACK_LIBRARIES",
"%s;-llapack;-lblas"
% ";".join("-l{0}".format(lib) for lib in self.spec["essl"].libs.names),
"-DBLAS_LIBRARIES=%s;-lblas"
)
)
entries.append(
cmake_cache_string(
"BLAS_LIBRARIES",
"%s;-lblas"
% ";".join("-l{0}".format(lib) for lib in self.spec["essl"].libs.names),
]
)
)
elif "blas=accelerate" in spec:
entries.append(cmake_cache_option("Hydrogen_USE_ACCELERATE", True))
elif spec.satisfies("^netlib-lapack"):
entries.append(cmake_cache_string("BLA_VENDOR", "Generic"))
if "+omp_taskloops" in spec:
args.extend(["-DHydrogen_ENABLE_OMP_TASKLOOP:BOOL=%s" % ("+omp_taskloops" in spec)])
if "+al" in spec:
args.extend(
[
"-DHydrogen_ENABLE_ALUMINUM:BOOL=%s" % ("+al" in spec),
"-DALUMINUM_DIR={0}".format(spec["aluminum"].prefix),
]
)
return args
return entries
def setup_build_environment(self, env):
if self.spec.satisfies("%apple-clang +openmp"):

View File

@@ -24,6 +24,7 @@ class Hypre(AutotoolsPackage, CudaPackage, ROCmPackage):
test_requires_compiler = True
version("develop", branch="master")
version("2.30.0", sha256="8e2af97d9a25bf44801c6427779f823ebc6f306438066bba7fcbc2a5f9b78421")
version("2.29.0", sha256="98b72115407a0e24dbaac70eccae0da3465f8f999318b2c9241631133f42d511")
version("2.28.0", sha256="2eea68740cdbc0b49a5e428f06ad7af861d1e169ce6a12d2cf0aa2fc28c4a2ae")
version("2.27.0", sha256="507a3d036bb1ac21a55685ae417d769dd02009bde7e09785d0ae7446b4ae1f98")
@@ -107,6 +108,7 @@ def patch(self): # fix sequential compilation in 'src/seq_mv'
depends_on("rocthrust", when="+rocm")
depends_on("rocrand", when="+rocm")
depends_on("rocprim", when="+rocm")
depends_on("hipblas", when="+rocm +superlu-dist")
depends_on("umpire", when="+umpire")
depends_on("caliper", when="+caliper")
@@ -258,6 +260,8 @@ def configure_args(self):
if "+rocm" in spec:
rocm_pkgs = ["rocsparse", "rocthrust", "rocprim", "rocrand"]
if "+superlu-dist" in spec:
rocm_pkgs.append("hipblas")
rocm_inc = ""
for pkg in rocm_pkgs:
if "^" + pkg in spec:

View File

@@ -21,6 +21,8 @@ class IntelXed(Package):
# Current versions now have actual releases and tags.
version("main", branch="main")
version("2023.10.11", tag="v2023.10.11", commit="d7d46c73fb04a1742e99c9382a4acb4ed07ae272")
version("2023.08.21", tag="v2023.08.21", commit="01a6da8090af84cd52f6c1070377ae6e885b078f")
version("2023.07.09", tag="v2023.07.09", commit="539a6a349cf7538a182ed3ee1f48bb9317eb185f")
version("2023.06.07", tag="v2023.06.07", commit="4dc77137f651def2ece4ac0416607b215c18e6e4")
version("2023.04.16", tag="v2023.04.16", commit="a3055cd0209f5c63c88e280bbff9579b1e2942e2")
@@ -40,7 +42,12 @@ class IntelXed(Package):
# Match xed more closely with the version of mbuild at the time.
resource(
name="mbuild", placement=mdir, git=mbuild_git, tag="v2022.07.28", when="@2022.07:9999"
name="mbuild",
placement=mdir,
git=mbuild_git,
tag="v2022.07.28",
commit="75cb46e6536758f1a3cdb3d6bd83a4a9fd0338bb",
when="@2022.07:9999",
)
resource(
@@ -48,7 +55,7 @@ class IntelXed(Package):
placement=mdir,
git=mbuild_git,
tag="v2022.04.17",
commit="ef19f00de14a9c2c253c1c9b1119e1617280e3f2",
commit="b41485956bf65d51b8c2379768de7eaaa7a4245b",
when="@:2022.06",
)
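# Note: pinning the exact commit alongside the tag makes the resource
# fetch reproducible even if the upstream tag is later moved.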

View File

@@ -45,21 +45,21 @@ class Interproscan(Package):
)
resource(
when="@5.56-89.0 +databases",
when="5.56-89.0 +databases",
name="databases",
url="https://ftp.ebi.ac.uk/pub/databases/interpro/iprscan/5/5.56-89.0/alt/interproscan-data-5.56-89.0.tar.gz",
sha256="49cd0c69711f9469f3b68857f4581b23ff12765ca2b12893d18e5a9a5cd8032d",
)
resource(
when="@5.38-76.0 +databases",
when="5.38-76.0 +databases",
name="databases",
url="https://ftp.ebi.ac.uk/pub/databases/interpro/iprscan/5/5.38-76.0/alt/interproscan-data-5.38-76.0.tar.gz",
sha256="e05e15d701037504f92ecf849c20317e70df28e78ff1945826b3c1e16d9b9cce",
)
resource(
when="@5.36-75.0 +databases",
when="5.36-75.0 +databases",
name="databases",
url="https://ftp.ebi.ac.uk/pub/databases/interpro/iprscan/5/5.36-75.0/alt/interproscan-data-5.36-75.0.tar.gz",
sha256="e9b1e6f2d1c20d06661a31a08c973bc8ddf039a4cf1e45ec4443200375e5d6a4",

View File

@@ -26,6 +26,7 @@ class Julia(MakefilePackage):
maintainers("vchuravy", "haampie", "giordano")
version("master", branch="master")
version("1.9.3", sha256="8d7dbd8c90e71179e53838cdbe24ff40779a90d7360e29766609ed90d982081d")
version("1.9.2", sha256="015438875d591372b80b09d01ba899657a6517b7c72ed41222298fef9d4ad86b")
version("1.9.0", sha256="48f4c8a7d5f33d0bc6ce24226df20ab49e385c2d0c3767ec8dfdb449602095b2")
version("1.8.5", sha256="d31026cc6b275d14abce26fd9fd5b4552ac9d2ce8bde4291e494468af5743031")
@@ -163,7 +164,8 @@ class Julia(MakefilePackage):
)
# patchelf 0.13 is required because the rpath patch uses --add-rpath
depends_on("patchelf@0.13:", type="build")
# patchelf 0.18 breaks (at least) libjulia-internal.so
depends_on("patchelf@0.13:0.17", type="build")
depends_on("perl", type="build")
depends_on("libwhich", type="build")
depends_on("python", type="build")

View File

@@ -0,0 +1,39 @@
diff --git a/src/callbacks/memory_profiler.cpp b/src/callbacks/memory_profiler.cpp
index 0d5cec5d2..6f40705af 100644
--- a/src/callbacks/memory_profiler.cpp
+++ b/src/callbacks/memory_profiler.cpp
@@ -158,7 +158,10 @@ struct MemUsage
size_t total_mem;
MemUsage(const std::string& r, size_t m) : report(r), total_mem(m) {}
- bool operator<(const MemUsage& other) { return total_mem < other.total_mem; }
+ bool operator<(const MemUsage& other) const
+ {
+ return total_mem < other.total_mem;
+ }
};
} // namespace
diff --git a/src/optimizers/adam.cpp b/src/optimizers/adam.cpp
index d00dfbe7c..1d9ad3949 100644
--- a/src/optimizers/adam.cpp
+++ b/src/optimizers/adam.cpp
@@ -34,14 +34,12 @@
namespace lbann {
-#if defined (LBANN_HAS_ROCM) && defined (LBANN_HAS_GPU_FP16)
+#if defined(LBANN_HAS_ROCM) && defined(LBANN_HAS_GPU_FP16)
namespace {
-bool isfinite(fp16 const& x)
-{
- return std::isfinite(float(x));
-}
-}
+bool isfinite(fp16 const& x) { return std::isfinite(float(x)); }
+} // namespace
#endif
+using std::isfinite;
template <typename TensorDataType>
adam<TensorDataType>::adam(TensorDataType learning_rate,

View File

@@ -5,7 +5,6 @@
import os
import socket
import sys
from spack.package import *
@@ -24,109 +23,42 @@ class Lbann(CachedCMakePackage, CudaPackage, ROCmPackage):
maintainers("bvanessen")
version("develop", branch="develop")
version("0.102", sha256="3734a76794991207e2dd2221f05f0e63a86ddafa777515d93d99d48629140f1a")
version("benchmarking", branch="benchmarking")
version("0.104", sha256="a847c7789082ab623ed5922ab1248dd95f5f89d93eed44ac3d6a474703bbc0bf")
version("0.103", sha256="9da1bf308f38323e30cb07f8ecf8efa05c7f50560e8683b9cd961102b1b3e25a")
version(
"0.101",
sha256="69d3fe000a88a448dc4f7e263bcb342c34a177bd9744153654528cd86335a1f7",
deprecated=True,
)
version(
"0.100",
sha256="d1bab4fb6f1b80ae83a7286cc536a32830890f6e5b0c3107a17c2600d0796912",
deprecated=True,
)
version(
"0.99",
sha256="3358d44f1bc894321ce07d733afdf6cb7de39c33e3852d73c9f31f530175b7cd",
deprecated=True,
)
version(
"0.98.1",
sha256="9a2da8f41cd8bf17d1845edf9de6d60f781204ebd37bffba96d8872036c10c66",
deprecated=True,
)
version(
"0.98",
sha256="8d64b9ac0f1d60db553efa4e657f5ea87e790afe65336117267e9c7ae6f68239",
deprecated=True,
)
version(
"0.97.1",
sha256="2f2756126ac8bb993202cf532d72c4d4044e877f4d52de9fdf70d0babd500ce4",
deprecated=True,
)
version(
"0.97",
sha256="9794a706fc7ac151926231efdf74564c39fbaa99edca4acb745ee7d20c32dae7",
deprecated=True,
)
version(
"0.96",
sha256="97af78e9d3c405e963361d0db96ee5425ee0766fa52b43c75b8a5670d48e4b4a",
deprecated=True,
)
version(
"0.95",
sha256="d310b986948b5ee2bedec36383a7fe79403721c8dc2663a280676b4e431f83c2",
deprecated=True,
)
version(
"0.94",
sha256="567e99b488ebe6294933c98a212281bffd5220fc13a0a5cd8441f9a3761ceccf",
deprecated=True,
)
version(
"0.93",
sha256="77bfd7fe52ee7495050f49bcdd0e353ba1730e3ad15042c678faa5eeed55fb8c",
deprecated=True,
)
version(
"0.92",
sha256="9187c5bcbc562c2828fe619d53884ab80afb1bcd627a817edb935b80affe7b84",
deprecated=True,
)
version(
"0.91",
sha256="b69f470829f434f266119a33695592f74802cff4b76b37022db00ab32de322f5",
"0.102",
sha256="3734a76794991207e2dd2221f05f0e63a86ddafa777515d93d99d48629140f1a",
deprecated=True,
)
variant("al", default=True, description="Builds with support for Aluminum Library")
variant(
"build_type",
default="Release",
description="The build type to build",
values=("Debug", "Release"),
)
variant(
"conduit",
default=True,
description="Builds with support for Conduit Library "
"(note that for v0.99 conduit is required)",
)
variant(
"deterministic",
default=False,
description="Builds with support for deterministic execution",
)
variant(
"dihydrogen", default=True, description="Builds with support for DiHydrogen Tensor Library"
)
variant(
"distconv",
default=False,
sticky=True,
description="Builds with support for spatial, filter, or channel "
"distributed convolutions",
)
variant(
"dtype",
default="float",
sticky=True,
description="Type for floating point representation of weights",
values=("float", "double"),
)
variant("fft", default=False, description="Support for FFT operations")
variant("half", default=False, description="Builds with support for FP16 precision data types")
variant("hwloc", default=True, description="Add support for topology aware algorithms")
variant("nvprof", default=False, description="Build with region annotations for NVPROF")
variant(
"numpy", default=False, description="Builds with support for processing NumPy data files"
@@ -139,7 +71,7 @@ class Lbann(CachedCMakePackage, CudaPackage, ROCmPackage):
variant("vtune", default=False, description="Builds with support for Intel VTune")
variant("onednn", default=False, description="Support for OneDNN")
variant("onnx", default=False, description="Support for exporting models into ONNX format")
variant("nvshmem", default=False, description="Support for NVSHMEM")
variant("nvshmem", default=False, description="Support for NVSHMEM", when="+distconv")
variant(
"python",
default=True,
@@ -168,20 +100,13 @@ class Lbann(CachedCMakePackage, CudaPackage, ROCmPackage):
# Don't expose this as a dependency until Spack can find the external properly
# depends_on('binutils+gold', type='build', when='+gold')
patch("lbann_v0.104_build_cleanup.patch", when="@0.104:")
# Variant Conflicts
conflicts("@:0.90,0.99:", when="~conduit")
conflicts("@0.90:0.101", when="+fft")
conflicts("@:0.90,0.102:", when="~dihydrogen")
conflicts("~cuda", when="+nvprof")
conflicts("~hwloc", when="+al")
conflicts("~cuda", when="+nvshmem")
conflicts("+cuda", when="+rocm", msg="CUDA and ROCm support are mutually exclusive")
conflicts("~vision", when="@0.91:0.101")
conflicts("~numpy", when="@0.91:0.101")
conflicts("~python", when="@0.91:0.101")
conflicts("~pfe", when="@0.91:0.101")
requires("%clang", when="+lld")
conflicts("+lld", when="+gold")
@@ -191,84 +116,56 @@ class Lbann(CachedCMakePackage, CudaPackage, ROCmPackage):
depends_on("cmake@3.17.0:", type="build")
depends_on("cmake@3.21.0:", type="build", when="@0.103:")
# Specify the correct versions of Hydrogen
depends_on("hydrogen@:1.3.4", when="@0.95:0.100")
depends_on("hydrogen@1.4.0:1.4", when="@0.101:0.101.99")
depends_on("hydrogen@1.5.0:", when="@:0.90,0.102:")
# Specify the core libraries: Hydrogen, DiHydrogen, Aluminum
depends_on("hydrogen@1.5.3:")
depends_on("aluminum@1.4.1:")
depends_on("dihydrogen@0.2.0:")
# Align the following variants across Hydrogen and DiHydrogen
forwarded_variants = ["cuda", "rocm", "half", "nvshmem"]
for v in forwarded_variants:
if v != "nvshmem":
depends_on("hydrogen +{0}".format(v), when="+{0}".format(v))
depends_on("hydrogen ~{0}".format(v), when="~{0}".format(v))
if v != "al" and v != "half":
depends_on("dihydrogen +{0}".format(v), when="+{0}".format(v))
depends_on("dihydrogen ~{0}".format(v), when="~{0}".format(v))
if v == "cuda" or v == "rocm":
depends_on("aluminum +{0} +nccl".format(v), when="+{0}".format(v))
# Add Hydrogen variants
depends_on("hydrogen +openmp +shared +int64")
depends_on("hydrogen +openmp_blas", when=sys.platform != "darwin")
depends_on("hydrogen ~al", when="~al")
depends_on("hydrogen +al", when="+al")
depends_on("hydrogen ~cuda", when="~cuda")
depends_on("hydrogen +cuda", when="+cuda")
depends_on("hydrogen ~half", when="~half")
depends_on("hydrogen +half", when="+half")
depends_on("hydrogen ~rocm", when="~rocm")
depends_on("hydrogen +rocm", when="+rocm")
depends_on("hydrogen build_type=Debug", when="build_type=Debug")
# Older versions depended on Elemental not Hydrogen
depends_on("elemental +openmp_blas +shared +int64", when="@0.91:0.94")
depends_on(
"elemental +openmp_blas +shared +int64 build_type=Debug",
when="build_type=Debug @0.91:0.94",
)
# Specify the correct version of Aluminum
depends_on("aluminum@:0.3", when="@0.95:0.100 +al")
depends_on("aluminum@0.4.0:0.4", when="@0.101:0.101.99 +al")
depends_on("aluminum@0.5.0:", when="@:0.90,0.102: +al")
# Add DiHydrogen variants
depends_on("dihydrogen +distconv", when="+distconv")
depends_on("dihydrogen@develop", when="@develop")
# Add Aluminum variants
depends_on("aluminum +cuda +nccl", when="+al +cuda")
depends_on("aluminum +rocm +rccl", when="+al +rocm")
depends_on("dihydrogen@0.2.0:", when="@:0.90,0.102:")
depends_on("dihydrogen +openmp", when="+dihydrogen")
depends_on("dihydrogen +openmp_blas", when=sys.platform != "darwin")
depends_on("dihydrogen ~cuda", when="+dihydrogen ~cuda")
depends_on("dihydrogen +cuda", when="+dihydrogen +cuda")
depends_on("dihydrogen ~al", when="+dihydrogen ~al")
depends_on("dihydrogen +al", when="+dihydrogen +al")
depends_on("dihydrogen +distconv +cuda", when="+distconv +cuda")
depends_on("dihydrogen +distconv +rocm", when="+distconv +rocm")
depends_on("dihydrogen ~half", when="+dihydrogen ~half")
depends_on("dihydrogen +half", when="+dihydrogen +half")
depends_on("dihydrogen ~nvshmem", when="+dihydrogen ~nvshmem")
depends_on("dihydrogen +nvshmem", when="+dihydrogen +nvshmem")
depends_on("dihydrogen ~rocm", when="+dihydrogen ~rocm")
depends_on("dihydrogen +rocm", when="+dihydrogen +rocm")
depends_on("dihydrogen@0.1", when="@0.101:0.101.99 +dihydrogen")
depends_on("dihydrogen@:0.0,0.2:", when="@:0.90,0.102: +dihydrogen")
conflicts("~dihydrogen", when="+distconv")
depends_on("aluminum@master", when="@develop")
depends_on("hdf5+mpi", when="+distconv")
for arch in CudaPackage.cuda_arch_values:
depends_on("hydrogen cuda_arch=%s" % arch, when="+cuda cuda_arch=%s" % arch)
depends_on("aluminum cuda_arch=%s" % arch, when="+al +cuda cuda_arch=%s" % arch)
depends_on("dihydrogen cuda_arch=%s" % arch, when="+dihydrogen +cuda cuda_arch=%s" % arch)
depends_on("aluminum cuda_arch=%s" % arch, when="+cuda cuda_arch=%s" % arch)
depends_on("dihydrogen cuda_arch=%s" % arch, when="+cuda cuda_arch=%s" % arch)
depends_on("nccl cuda_arch=%s" % arch, when="+cuda cuda_arch=%s" % arch)
# variants +rocm and amdgpu_targets are not automatically passed to
# dependencies, so do it manually.
for val in ROCmPackage.amdgpu_targets:
depends_on("hydrogen amdgpu_target=%s" % val, when="amdgpu_target=%s" % val)
depends_on("aluminum amdgpu_target=%s" % val, when="+al amdgpu_target=%s" % val)
depends_on("dihydrogen amdgpu_target=%s" % val, when="+dihydrogen amdgpu_target=%s" % val)
depends_on("aluminum amdgpu_target=%s" % val, when="amdgpu_target=%s" % val)
depends_on("dihydrogen amdgpu_target=%s" % val, when="amdgpu_target=%s" % val)
depends_on("roctracer-dev", when="+rocm +distconv")
depends_on("cudnn", when="@0.90:0.100 +cuda")
depends_on("cudnn@8.0.2:", when="@:0.90,0.101: +cuda")
depends_on("cub", when="@0.94:0.98.2 +cuda ^cuda@:10")
depends_on("cutensor", when="@:0.90,0.102: +cuda")
depends_on("cudnn@8.0.2:", when="+cuda")
depends_on("cutensor", when="+cuda")
depends_on("hipcub", when="+rocm")
depends_on("mpi")
depends_on("hwloc@1.11:", when="@:0.90,0.102: +hwloc")
depends_on("hwloc@1.11.0:1.11", when="@0.95:0.101 +hwloc")
depends_on("hwloc@1.11:")
depends_on("hwloc +cuda +nvml", when="+cuda")
depends_on("hwloc@2.3.0:", when="+rocm")
depends_on("hiptt", when="+rocm")
@@ -296,9 +193,7 @@ class Lbann(CachedCMakePackage, CudaPackage, ROCmPackage):
# Note that conduit defaults to +fortran +parmetis +python, none of which is
# needed by LBANN: you may want to disable those options in your
# packages.yaml
depends_on("conduit@0.4.0: +hdf5", when="@0.94:0 +conduit")
depends_on("conduit@0.5.0:0.6 +hdf5", when="@0.100:0.101 +conduit")
depends_on("conduit@0.6.0: +hdf5", when="@:0.90,0.99:")
depends_on("conduit@0.6.0: +hdf5")
# LBANN can use Python in two modes: 1) as part of an extensible framework
# and 2) to drive the front-end model creation and launch
@@ -308,13 +203,13 @@ class Lbann(CachedCMakePackage, CudaPackage, ROCmPackage):
extends("python", when="+python")
# Python front end and possible extra packages
depends_on("python@3: +shared", type=("build", "run"), when="@:0.90,0.99: +pfe")
depends_on("python@3: +shared", type=("build", "run"), when="+pfe")
extends("python", when="+pfe")
depends_on("py-setuptools", type="build", when="+pfe")
depends_on("py-protobuf+cpp@3.10.0:", type=("build", "run"), when="@:0.90,0.99: +pfe")
depends_on("py-protobuf+cpp@3.10.0:4.21.12", type=("build", "run"), when="+pfe")
depends_on("protobuf+shared@3.10.0:", when="@:0.90,0.99:")
depends_on("zlib-api", when="^protobuf@3.11.0:")
depends_on("protobuf+shared@3.10.0:3.21.12")
depends_on("zlib-api", when="protobuf@3.11.0:")
# using cereal@1.3.1 and above requires changing the
# find_package call to lowercase, so stick with :1.3.0
@@ -328,7 +223,7 @@ class Lbann(CachedCMakePackage, CudaPackage, ROCmPackage):
depends_on("onnx", when="+onnx")
depends_on("nvshmem", when="+nvshmem")
depends_on("spdlog", when="@:0.90,0.102:")
depends_on("spdlog@1.11.0")
depends_on("zstr")
depends_on("caliper+adiak+mpi", when="+caliper")
@@ -336,6 +231,7 @@ class Lbann(CachedCMakePackage, CudaPackage, ROCmPackage):
generator("ninja")
def setup_build_environment(self, env):
env.append_flags("CXXFLAGS", "-fno-omit-frame-pointer")
if self.spec.satisfies("%apple-clang"):
env.append_flags("CPPFLAGS", self.compiler.openmp_flag)
env.append_flags("CFLAGS", self.spec["llvm-openmp"].headers.include_flags)
@@ -357,7 +253,7 @@ def libs(self):
def cache_name(self):
hostname = socket.gethostname()
# Get a hostname that has no node identifier
hostname = hostname.rstrip("1234567890")
hostname = hostname.rstrip("1234567890-")
return "LBANN_{0}_{1}-{2}-{3}@{4}.cmake".format(
hostname,
self.spec.version,
@@ -440,12 +336,9 @@ def initconfig_package_entries(self):
cmake_variant_fields = [
("LBANN_WITH_CNPY", "numpy"),
("LBANN_DETERMINISTIC", "deterministic"),
("LBANN_WITH_HWLOC", "hwloc"),
("LBANN_WITH_ALUMINUM", "al"),
("LBANN_WITH_ADDRESS_SANITIZER", "asan"),
("LBANN_WITH_BOOST", "boost"),
("LBANN_WITH_CALIPER", "caliper"),
("LBANN_WITH_CONDUIT", "conduit"),
("LBANN_WITH_NVSHMEM", "nvshmem"),
("LBANN_WITH_FFT", "fft"),
("LBANN_WITH_ONEDNN", "onednn"),
@@ -460,6 +353,9 @@ def initconfig_package_entries(self):
for opt, val in cmake_variant_fields:
entries.append(self.define_cmake_cache_from_variant(opt, val))
entries.append(cmake_cache_option("LBANN_WITH_ALUMINUM", True))
entries.append(cmake_cache_option("LBANN_WITH_CONDUIT", True))
entries.append(cmake_cache_option("LBANN_WITH_HWLOC", True))
entries.append(cmake_cache_option("LBANN_WITH_ROCTRACER", "+rocm +distconv" in spec))
entries.append(cmake_cache_option("LBANN_WITH_TBINF", False))
entries.append(
@@ -492,7 +388,7 @@ def initconfig_package_entries(self):
)
)
entries.append(self.define_cmake_cache_from_variant("LBANN_WITH_DIHYDROGEN", "dihydrogen"))
entries.append(cmake_cache_option("LBANN_WITH_DIHYDROGEN", True))
entries.append(self.define_cmake_cache_from_variant("LBANN_WITH_DISTCONV", "distconv"))
# If IBM ESSL is used, it needs help finding the proper LAPACK libraries

View File

@@ -13,14 +13,42 @@ class Lcov(MakefilePackage):
supports statement, function and branch coverage measurement."""
homepage = "http://ltp.sourceforge.net/coverage/lcov.php"
url = "https://github.com/linux-test-project/lcov/releases/download/v1.14/lcov-1.14.tar.gz"
url = "https://github.com/linux-test-project/lcov/releases/download/v2.0/lcov-2.0.tar.gz"
maintainers("KineticTheory")
version("2.0", sha256="1857bb18e27abe8bcec701a907d5c47e01db4d4c512fc098d1a6acd29267bf46")
version("1.16", sha256="987031ad5528c8a746d4b52b380bc1bffe412de1f2b9c2ba5224995668e3240b")
version("1.15", sha256="c1cda2fa33bec9aa2c2c73c87226cfe97de0831887176b45ee523c5e30f8053a")
version("1.14", sha256="14995699187440e0ae4da57fe3a64adc0a3c5cf14feab971f8db38fb7d8f071a")
depends_on("perl")
# dependencies from
# https://github.com/linux-test-project/lcov/blob/02ece21d54ccd16255d74f8b00f8875b6c15653a/README#L91-L111
depends_on("perl", type=("build", "run"))
depends_on("perl-b-hooks-endofscope", type=("run"))
depends_on("perl-capture-tiny", type=("run"))
depends_on("perl-class-inspector", type=("run"))
depends_on("perl-class-singleton", type=("run"))
depends_on("perl-datetime", type=("run"))
depends_on("perl-datetime-locale", type=("run"))
depends_on("perl-datetime-timezone", type=("run"))
depends_on("perl-devel-cover", type=("run"))
depends_on("perl-devel-stacktrace", type=("run"))
depends_on("perl-digest-md5", type=("run"))
depends_on("perl-eval-closure", type=("run"))
depends_on("perl-exception-class", type=("run"))
depends_on("perl-file-sharedir", type=("run"))
depends_on("perl-file-spec", type=("run"))
depends_on("perl-json", type=("run"))
depends_on("perl-memory-process", type=("run"))
depends_on("perl-module-implementation", type=("run"))
depends_on("perl-mro-compat", type=("run"))
depends_on("perl-namespace-clean", type=("run"))
depends_on("perl-package-stash", type=("run"))
depends_on("perl-params-validationcompiler", type=("run"))
depends_on("perl-role-tiny", type=("run"))
depends_on("perl-specio", type=("run"))
depends_on("perl-sub-identify", type=("run"))
depends_on("perl-time-hires", type=("run"))
def install(self, spec, prefix):
make(

View File

@@ -83,6 +83,7 @@ class Libgit2(CMakePackage):
depends_on("cmake@2.8:", type="build", when="@:0.28")
depends_on("cmake@3.5:", type="build", when="@0.99:")
depends_on("pkgconfig", type="build")
depends_on("python", type="test")
# Runtime Dependencies
depends_on("libssh2", when="+ssh")
@@ -123,5 +124,6 @@ def cmake_args(self):
# Control tests
args.append(self.define("BUILD_CLAR", self.run_tests))
args.append(self.define("BUILD_TESTS", self.run_tests))
return args

View File

@@ -36,6 +36,11 @@ class Mapl(CMakePackage):
version("develop", branch="develop")
version("main", branch="main")
version("2.42.0", sha256="9b6c3434919c14ef79004db5f76cb3dd8ef375584227101c230a372bb0470fdd")
version("2.41.2", sha256="73e1f0961f1b70e8159c0a2ce3499eb5158f3ca6d081f4c7826af7854ebfb44d")
version("2.41.1", sha256="2b384bd4fbaac1bff4ef009922c436c4ab54832172a5cd4d312ea44e32c1ae7c")
version("2.41.0", sha256="1142f9395e161174e3ec1654fba8bda1d0bd93edc7438b1927d8f5d7b42a0a86")
version("2.40.4", sha256="fb843b118d6e56cd4fc4b114c4d6f91956d5c8b3d9389ada56da1dfdbc58904f")
version("2.40.3", sha256="4b82a314c88a035fc2b91395750aa7950d6bee838786178ed16a3f39a1e45519")
version("2.40.2", sha256="7327f6f5bce6e09e7f7b930013fba86ee7cbfe8ed4c7c087fc9ab5acbf6640fd")
version("2.40.1", sha256="6f40f946fabea6ba73b0764092e495505d220455b191b4e454736a0a25ee058c")
@@ -116,6 +121,12 @@ class Mapl(CMakePackage):
# Versions later than 3.14 remove FindESMF.cmake
# from ESMA_CMake.
resource(
name="esma_cmake",
git="https://github.com/GEOS-ESM/ESMA_cmake.git",
tag="v3.36.0",
when="@2.42.0:",
)
resource(
name="esma_cmake",
git="https://github.com/GEOS-ESM/ESMA_cmake.git",
@@ -159,6 +170,12 @@ class Mapl(CMakePackage):
# Patch to add missing MPI Fortran target to top-level CMakeLists.txt
patch("mapl-2.12.3-mpi-fortran.patch", when="@:2.12.3")
# MAPL compiles with MPICH only from MAPL 2.42.0 onwards, so we conflict
# with older MAPL versions. Also, it has only been tested with MPICH 4,
# so we don't allow older MPICH
conflicts("mpich@:3")
conflicts("mpich@4", when="@:2.41")
variant("flap", default=False, description="Build with FLAP support", when="@:2.39")
variant("pflogger", default=True, description="Build with pFlogger support")
variant("fargparse", default=True, description="Build with fArgParse support")

View File

@@ -967,6 +967,9 @@ def find_optional_library(name, prefix):
if "^rocthrust" in spec and not spec["hip"].external:
# petsc+rocm needs the rocthrust header path
hip_headers += spec["rocthrust"].headers
if "^hipblas" in spec and not spec["hip"].external:
# superlu-dist+rocm needs the hipblas header path
hip_headers += spec["hipblas"].headers
if "%cce" in spec:
# We assume the proper Cray CCE module (cce) is loaded:
craylibs_path = env["CRAYLIBS_" + machine().upper()]

View File

@@ -19,7 +19,14 @@ class Patchelf(AutotoolsPackage):
maintainers("haampie")
version("0.18.0", sha256="64de10e4c6b8b8379db7e87f58030f336ea747c0515f381132e810dbf84a86e7")
version("0.17.2", sha256="20427b718dd130e4b66d95072c2a2bd5e17232e20dad58c1bea9da81fae330e0")
# patchelf 0.18 breaks libraries:
# https://github.com/spack/spack/issues/39252
# https://github.com/spack/spack/pull/40938
version(
"0.17.2",
sha256="20427b718dd130e4b66d95072c2a2bd5e17232e20dad58c1bea9da81fae330e0",
preferred=True,
)
version("0.16.1", sha256="1a562ed28b16f8a00456b5f9ee573bb1af7c39c1beea01d94fc0c7b3256b0406")
version("0.15.0", sha256="53a8d58ed4e060412b8fdcb6489562b3c62be6f65cee5af30eba60f4423bfa0f")
version("0.14.5", sha256="113ada3f1ace08f0a7224aa8500f1fa6b08320d8f7df05ff58585286ec5faa6f")

View File

@@ -0,0 +1,15 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class PerlClassSingleton(PerlPackage):
"""Class::Singleton - Implementation of a "Singleton" class"""
homepage = "https://metacpan.org/pod/Class::Singleton"
url = "https://cpan.metacpan.org/authors/id/S/SH/SHAY/Class-Singleton-1.6.tar.gz"
version("1.6", sha256="27ba13f0d9512929166bbd8c9ef95d90d630fc80f0c9a1b7458891055e9282a4")

View File

@@ -0,0 +1,17 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class PerlDatetimeLocale(PerlPackage):
"""DateTime::Locale - Localization support for DateTime.pm"""
homepage = "https://metacpan.org/pod/DateTime::Locale"
url = "https://cpan.metacpan.org/authors/id/D/DR/DROLSKY/DateTime-Locale-1.40.tar.gz"
version("1.40", sha256="7490b4194b5d23a4e144976dedb3bdbcc6d3364b5d139cc922a86d41fdb87afb")
depends_on("perl-file-sharedir-install", type=("build", "run"))

View File

@@ -0,0 +1,15 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class PerlDatetimeTimezone(PerlPackage):
"""DateTime::TimeZone - Time zone object base class and factory"""
homepage = "https://metacpan.org/pod/DateTime::TimeZone"
url = "https://cpan.metacpan.org/authors/id/D/DR/DROLSKY/DateTime-TimeZone-2.60.tar.gz"
version("2.60", sha256="f0460d379323905b579bed44e141237a337dc25dd26b6ab0c60ac2b80629323d")

View File

@@ -0,0 +1,17 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class PerlDatetime(PerlPackage):
"""DateTime - A date and time object for Perl"""
homepage = "https://metacpan.org/pod/DateTime"
url = "https://cpan.metacpan.org/authors/id/D/DR/DROLSKY/DateTime-1.63.tar.gz"
version("1.63", sha256="1b11e49ec6e184ae2a10eccd05eda9534f32458fc644c12ab710c29a3a816f6f")
depends_on("perl-namespace-autoclean", type=("run"))

View File

@@ -0,0 +1,15 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class PerlDevelCover(PerlPackage):
"""Devel::Cover - Perl extension for code coverage metrics"""
homepage = "https://metacpan.org/pod/Devel::Cover"
url = "https://cpan.metacpan.org/authors/id/P/PJ/PJCJ/Devel-Cover-1.40.tar.gz"
version("1.40", sha256="26e2f431fbcf7bff3851f352f83b84067c09ff206f40ab975cad8d2bafe711a8")

View File

@@ -0,0 +1,17 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class PerlFileSharedir(PerlPackage):
"""File::ShareDir - Locate per-dist and per-module shared files"""
homepage = "https://metacpan.org/pod/File::ShareDir"
url = "https://cpan.metacpan.org/authors/id/R/RE/REHSACK/File-ShareDir-1.118.tar.gz"
version("1.118", sha256="3bb2a20ba35df958dc0a4f2306fc05d903d8b8c4de3c8beefce17739d281c958")
# depends_on("perl-module-build", type="build")

View File

@@ -0,0 +1,15 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class PerlFileSpec(PerlPackage):
"""File::Spec - Perl extension for portably performing operations on file names"""
homepage = "https://metacpan.org/pod/File::Spec"
url = "https://cpan.metacpan.org/authors/id/K/KW/KWILLIAMS/File-Spec-0.90.tar.gz"
version("0.90", sha256="695a34604e1b6a98327fe2b374504329735b07c2c45db9f55df1636e4c29bf79")

View File

@@ -0,0 +1,15 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class PerlMemoryProcess(PerlPackage):
"""Memory::Process - Perl class to determine actual memory usage"""
homepage = "https://metacpan.org/pod/Memory::Process"
url = "https://cpan.metacpan.org/authors/id/S/SK/SKIM/Memory-Process-0.06.tar.gz"
version("0.06", sha256="35814488ffd29c97621625ea3b3d700afbfa60ed055bd759d4e58d9c8fd44e4e")

View File

@@ -0,0 +1,15 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class PerlNamespaceAutoclean(PerlPackage):
"""Namespace::Autoclean - Keep imports out of your namespace"""
homepage = "https://metacpan.org/pod/namespace::autoclean"
url = "https://cpan.metacpan.org/authors/id/E/ET/ETHER/namespace-autoclean-0.29.tar.gz"
version("0.29", sha256="45ebd8e64a54a86f88d8e01ae55212967c8aa8fed57e814085def7608ac65804")

View File

@@ -0,0 +1,16 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

from spack.package import *


class PerlParamsValidationcompiler(PerlPackage):
    """Params::ValidationCompiler - Build an optimized subroutine parameter validator once,
    use it forever"""

    homepage = "https://metacpan.org/pod/Params::ValidationCompiler"
    url = "https://cpan.metacpan.org/authors/id/D/DR/DROLSKY/Params-ValidationCompiler-0.31.tar.gz"

    version("0.31", sha256="7b6497173f1b6adb29f5d51d8cf9ec36d2f1219412b4b2410e9d77a901e84a6d")


@@ -0,0 +1,15 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

from spack.package import *


class PerlSpecio(PerlPackage):
    """Type constraints and coercions for Perl."""

    homepage = "https://metacpan.org/dist/Specio"
    url = "https://cpan.metacpan.org/authors/id/D/DR/DROLSKY/Specio-0.48.tar.gz"

    version("0.48", sha256="0c85793580f1274ef08173079131d101f77b22accea7afa8255202f0811682b2")


@@ -57,3 +57,9 @@ def flag_handler(self, name, flags):
        if "%gcc@10:" in self.spec and name == "fflags":
            flags.append("-fallow-argument-mismatch")
        return flags, None, None

    @when("@5.0.0")
    def patch(self):
        filter_file(
            "use iso_[cC]_binding", "use, intrinsic :: iso_c_binding", "src/pflotran/hdf5_aux.F90"
        )
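For readers unfamiliar with these directives: `@when("@5.0.0")` restricts this `patch()` implementation to that version of the package, and `filter_file` rewrites matches of a regex in place in the named file. A minimal, hypothetical sketch of the same pattern (the class, URLs, and file path are illustrative, not the actual pflotran recipe):

    from spack.package import *


    class Example(MakefilePackage):
        """Hypothetical package demonstrating a version-gated source fix."""

        homepage = "https://example.com"
        url = "https://example.com/example-1.0.tar.gz"

        @when("@:1.9")  # this override applies only when the spec's version is <= 1.9
        def patch(self):
            # Replace matches of the regex with the literal replacement text.
            filter_file(
                r"use iso_[cC]_binding",
                "use, intrinsic :: iso_c_binding",
                "src/legacy.F90",
            )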


@@ -15,11 +15,12 @@ class Podio(CMakePackage):
    url = "https://github.com/AIDASoft/podio/archive/v00-09-02.tar.gz"
    git = "https://github.com/AIDASoft/podio.git"

    maintainers("vvolkl", "drbenmorgan", "jmcarcell")
    maintainers("vvolkl", "drbenmorgan", "jmcarcell", "tmadlener")

    tags = ["hep", "key4hep"]

    version("master", branch="master")
    version("0.17.2", sha256="5b519335c4e1708f71ed85b3cac8ca81e544cc4572a5c37019ce9fc414c5e74d")
    version("0.17.1", sha256="97d6c5f81d50ee42bf7c01f041af2fd333c806f1bbf0a4828ca961a24cea6bb2")
    version("0.17", sha256="0c19f69970a891459cab227ab009514f1c1ce102b70e8c4b7d204eb6a0c643c1")
    version("0.16.7", sha256="8af7c947e2637f508b7af053412bacd9218d41a455d69addd7492f05b7a4338d")


@@ -17,7 +17,7 @@ class PyAbipy(PythonPackage):
    version("0.2.0", sha256="c72b796ba0f9ea4299eac3085bede092d2652e9e5e8074d3badd19ef7b600792")

    variant("gui", default=False, description="Build the GUI")
    variant("ipython", default=False, when="@0.2.0", description="Build IPython support")
    variant("ipython", default=False, when="0.2.0", description="Build IPython support")

    depends_on("py-setuptools", type="build")
    # in newer pip versions --install-option does not exist
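A subtlety worth noting: the string given to `when=` is parsed as a spec, so a leading `@` constrains this package's own version, `+name` tests a variant, and `^pkg` reaches into a dependency. A hedged sketch of the common shapes (directives illustrative, not taken from py-abipy):

    variant("gui", default=False, when="@0.2.0")              # the variant exists only at 0.2.0
    depends_on("py-wxpython", when="+gui")                    # dependency gated on a variant
    depends_on("py-typing-extensions", when="^python@:3.7")   # gated on a dependency's version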


@@ -0,0 +1,26 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

from spack.package import *


class PyGeomdl(PythonPackage):
    """Object-oriented pure Python B-Spline and NURBS library."""

    homepage = "https://pypi.org/project/geomdl"
    pypi = "geomdl/geomdl-5.3.1.tar.gz"

    version("5.3.1", sha256="e81a31b4d5f111267b16045ba1d9539235a98b2cff5e4bad18f7ddcd4cb804c8")

    depends_on("py-setuptools@40.6.3:", type="build")
    # For compiling the geomdl.core module
    depends_on("py-cython@:2", type="build")

    variant("viz", default=False, description="Add viz dependencies")
    depends_on("py-numpy@1.15.4:", type="run", when="+viz")
    depends_on("py-matplotlib@2.2.3:", type="run", when="+viz")
    depends_on("py-plotly", type="run", when="+viz")
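The `+viz` variant above gates the whole plotting stack, so a default install stays lean and a user opts in with `spack install py-geomdl +viz`. The same pattern works for any optional feature; a hedged sketch (variant and package are illustrative):

    variant("docs", default=False, description="Build the documentation")  # hypothetical variant
    depends_on("py-sphinx", type="build", when="+docs")  # pulled in only when +docs is requested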


@@ -32,7 +32,7 @@ class PyKombu(PythonPackage):
    depends_on("py-amqp@5.0.0:5", when="@5.0.0:5.0.2", type=("build", "run"))
    depends_on("py-amqp@5.0.9:5.0", when="@5.2.3", type=("build", "run"))
    depends_on("py-vine", when="@5.1.0:", type=("build", "run"))
    depends_on("py-importlib-metadata@0.18:", type=("build", "run"), when="^python@:3.7")
    depends_on("py-cached-property", type=("build", "run"), when="^python@:3.7")
    depends_on("py-importlib-metadata@0.18:", type=("build", "run"), when="python@:3.7")
    depends_on("py-cached-property", type=("build", "run"), when="python@:3.7")

    depends_on("py-redis@3.4.1:3,4.0.2:", when="+redis", type=("build", "run"))
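The version ranges above read as follows: `@0.18:` means 0.18 or newer, `@:3.7` means anything up to and including 3.7, and a comma joins disjoint ranges. A hedged sketch against a hypothetical package:

    depends_on("py-example@1.2:1.9")         # any version from 1.2 through 1.9, inclusive
    depends_on("py-example@:2")              # anything up to and including the 2.x series
    depends_on("py-example@3.4.1:3,4.0.2:")  # 3.4.1 through 3.x, or 4.0.2 and newer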


@@ -12,13 +12,14 @@ class PyLibensemble(PythonPackage):
    """Library for managing ensemble-like collections of computations."""

    homepage = "https://libensemble.readthedocs.io"
    pypi = "libensemble/libensemble-1.0.0.tar.gz"
    pypi = "libensemble/libensemble-1.1.0.tar.gz"
    git = "https://github.com/Libensemble/libensemble.git"

    maintainers("shuds13", "jlnav")
    tags = ["e4s"]

    version("develop", branch="develop")
    version("1.1.0", sha256="3e3ddc4233272d3651e9d62c7bf420018930a4b9b135ef9ede01d5356235c1c6")
    version("1.0.0", sha256="b164e044f16f15b68fd565684ad8ce876c93aaeb84e5078f4ea2a29684b110ca")
    version("0.10.2", sha256="ef8dfe5d233dcae2636a3d6aa38f3c2ad0f42c65bd38f664e99b3e63b9f86622")
    version("0.10.1", sha256="56ae42ec9a28d3df8f46bdf7d016db9526200e9df2a28d849902e3c44fe5c1ba")
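Note that `pypi` acts as a URL template, not just a pointer at one tarball: Spack substitutes each declared version into the filename, so bumping it to the 1.1.0 artifact does not break fetching the older releases listed below it. Roughly, and hedged as an approximation of the derived URL:

    pypi = "libensemble/libensemble-1.1.0.tar.gz"
    # fetching 1.0.0 rewrites the version in the template, yielding something like
    # https://pypi.io/packages/source/l/libensemble/libensemble-1.0.0.tar.gz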


@@ -15,6 +15,7 @@ class PyLightning(PythonPackage):
    maintainers("adamjstewart")

    version("2.1.1", sha256="865491940d20a9754eac7494aa18cab893e0c2b31e83743349eeeaf31dfb52db")
    version("2.1.0", sha256="1f78f5995ae7dcffa1edf34320db136902b73a0d1b304404c48ec8be165b3a93")
    version("2.0.9", sha256="2395ece6e29e12064718ff16b8edec5685df7f7095d4fee78edb0a654f5cd7eb")
    version("2.0.8", sha256="db914e211b5c3b079a821be6e4344e72d0a729163676a65c4e00aae98390ae7b")


@@ -24,3 +24,5 @@ class PyMacs3(PythonPackage):
    depends_on("py-numpy@1.19:", type=("build", "run"))
    depends_on("py-cykhash@2", type=("build", "run"))
    depends_on("py-hmmlearn@0.3:", type=("build", "run"))

    depends_on("zlib-api")


@@ -23,6 +23,9 @@ class PyNanobind(PythonPackage):
    maintainers("chrisrichardson", "garth-wells", "ma595")

    version("master", branch="master", submodules=True)
    version(
        "1.8.0", tag="v1.8.0", commit="1a309ba444a47e081dc6213d72345a2fbbd20795", submodules=True
    )
    version(
        "1.7.0", tag="v1.7.0", commit="555ec7595c89c60ce7cf53e803bc226dc4899abb", submodules=True
    )
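Pinning each `tag=` to its full `commit=` hash guards against tags being moved or deleted upstream, and `submodules=True` also checks out the repository's submodules. A hedged sketch of the pattern (version and hash are made up):

    version(
        "2.0.0",                                            # hypothetical release
        tag="v2.0.0",
        commit="0123456789abcdef0123456789abcdef01234567",  # full 40-character SHA
        submodules=True,
    )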


@@ -170,20 +170,20 @@ class PyNvidiaDali(PythonPackage):
    )

    cuda120_versions = (
        "@1.27.0-cuda120",
        "@1.26.0-cuda120",
        "@1.25.0-cuda120",
        "@1.24.0-cuda120",
        "@1.23.0-cuda120",
        "@1.22.0-cuda120",
        "1.27.0-cuda120",
        "1.26.0-cuda120",
        "1.25.0-cuda120",
        "1.24.0-cuda120",
        "1.23.0-cuda120",
        "1.22.0-cuda120",
    )
    cuda110_versions = (
        "@1.27.0-cuda110",
        "@1.26.0-cuda110",
        "@1.25.0-cuda110",
        "@1.24.0-cuda110",
        "@1.23.0-cuda110",
        "@1.22.0-cuda110",
        "1.27.0-cuda110",
        "1.26.0-cuda110",
        "1.25.0-cuda110",
        "1.24.0-cuda110",
        "1.23.0-cuda110",
        "1.22.0-cuda110",
    )

    for v in cuda120_versions:
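The `@` sigils were dropped because these tuples hold bare version strings, not specs; the loop body, which is not shown in this truncated diff, presumably prepends `@` wherever a spec string is needed. A hedged sketch of how such tuples might be consumed, assuming a loop body that is not part of this diff:

    for v in cuda120_versions:
        depends_on("cuda@12.0:12", when="@" + v)  # the '@' belongs to the when= spec, not the version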

Some files were not shown because too many files have changed in this diff.