Compare commits
91 commits: v0.21.2 ... package/py

Commit SHA1s:

08aaecd1e2, b237d303df, c3015d2a1b, ab60bfe36a, 8bcc3e2820, 388f141a92, f74b083a15, 5b9d260054, 4bd47d89db, 96f3c76052,
9c74eda61f, d9de93a0fc, 3892fadbf6, 62b32080a8, 09d66168c4, d7869da36b, b6864fb1c3, e6125061e1, 491bd48897, ad4878f770,
420eff11b7, 15e7aaf94d, bd6c5ec82d, 4e171453c0, 420bce5cd2, b4f6c49bc0, da4f2776d2, e2f274a634, 15dcd3c65c, 49c2894def,
1ae37f6720, 15f6368c7f, 57b63228ce, 13abfb7013, b41fc1ec79, 124e41da23, f6039d1d45, 8871bd5ba5, efe85755d8, 7aaa17856d,
fbf02b561a, 4027a2139b, f0ced1af42, 2e45edf4e3, 4bcfb01566, b8bb8a70ce, dd2b436b5a, da2cc2351c, 383ec19a0c, 45f8a0e42c,
4636a7f14f, 38f3f57a54, b17d7cd0e6, b5e2f23b6c, 7a4df732e1, 7e6aaf9458, 2d35d29e0f, 1baed0d833, cadc2a1aa5, 78449ba92b,
26d6bfbb7f, 3405fe60f1, 53c266b161, ed8ecc469e, b2840acd52, c35250b313, e114853115, 89fc9a9d47, afc693645a, 4ac0e511ad,
b0355d6cc0, 300d53d6f8, 0b344e0fd3, 15adb308bf, 050d565375, f6ef2c254e, 62c27b1924, 2ff0766aa4, dc245e87f9, c1f134e2a0,
391940d2eb, 8c061e51e3, 5774df6b7a, 3a5c1eb5f3, 3a2ec729f7, a093f4a8ce, b8302a8277, 32f319157d, 75dfad8788, f3ba20db26,
6301edbd5d

.github/workflows/style/requirements.txt (vendored, 2 changes)

@@ -1,4 +1,4 @@
-black==23.10.1
+black==23.11.0
 clingo==5.6.2
 flake8==6.1.0
 isort==5.12.0

CHANGELOG.md (332 changes)

@@ -1,335 +1,3 @@

# v0.21.2 (2024-03-01)

## Bugfixes

- Containerize: accommodate nested or pre-existing spack-env paths (#41558)
- Fix setup-env script, when going back and forth between instances (#40924)
- Fix using fully-qualified namespaces from root specs (#41957)
- Fix a bug when a required provider is requested for multiple virtuals (#42088)
- OCI buildcaches:
  - only push in parallel when forking (#42143)
  - use pickleable errors (#42160)
- Fix using sticky variants in externals (#42253)
- Fix a rare issue with conditional requirements and multi-valued variants (#42566)

## Package updates

- rust: add v1.75, rework a few variants (#41161, #41903)
- py-transformers: add v4.35.2 (#41266)
- mgard: fix OpenMP on AppleClang (#42933)

# v0.21.1 (2024-01-11)

## New features

- Add support for reading buildcaches created by Spack v0.22 (#41773)

## Bugfixes

- spack graph: fix coloring with environments (#41240)
- spack info: sort variants in --variants-by-name (#41389)
- Spec.format: error on old style format strings (#41934)
- ASP-based solver:
  - fix infinite recursion when computing concretization errors (#41061)
  - don't error for type mismatch on preferences (#41138)
  - don't emit spurious debug output (#41218)
- Improve the error message for deprecated preferences (#41075)
- Fix MSVC preview version breaking clingo build on Windows (#41185)
- Fix multi-word aliases (#41126)
- Add a warning for unconfigured compiler (#41213)
- environment: fix an issue with deconcretization/reconcretization of specs (#41294)
- buildcache: don't error if a patch is missing, when installing from binaries (#41986)
- Multiple improvements to unit-tests (#41215, #41369, #41495, #41359, #41361, #41345, #41342, #41308, #41226)

## Package updates

- root: add a webgui patch to address security issue (#41404)
- BerkeleyGW: update source urls (#38218)

# v0.21.0 (2023-11-11)

`v0.21.0` is a major feature release.

## Features in this release

1. **Better error messages with condition chaining**

   In v0.18, we added better error messages that could tell you what problem happened, but they couldn't tell you *why* it happened. `0.21` adds *condition chaining* to the solver, and Spack can now trace back through the conditions that led to an error and build a tree of potential causes and where they came from. For example:

   ```console
   $ spack solve hdf5 ^cmake@3.0.1
   ==> Error: concretization failed for the following reasons:

      1. Cannot satisfy 'cmake@3.0.1'
      2. Cannot satisfy 'cmake@3.0.1'
           required because hdf5 ^cmake@3.0.1 requested from CLI
      3. Cannot satisfy 'cmake@3.18:' and 'cmake@3.0.1'
           required because hdf5 ^cmake@3.0.1 requested from CLI
           required because hdf5 depends on cmake@3.18: when @1.13:
             required because hdf5 ^cmake@3.0.1 requested from CLI
      4. Cannot satisfy 'cmake@3.12:' and 'cmake@3.0.1'
           required because hdf5 depends on cmake@3.12:
             required because hdf5 ^cmake@3.0.1 requested from CLI
           required because hdf5 ^cmake@3.0.1 requested from CLI
   ```

   More details in #40173.

2. **OCI build caches**

   You can now use an arbitrary [OCI](https://opencontainers.org) registry as a build cache:

   ```console
   $ spack mirror add my_registry oci://user/image # Dockerhub
   $ spack mirror add my_registry oci://ghcr.io/haampie/spack-test # GHCR
   $ spack mirror set --push --oci-username ... --oci-password ... my_registry # set login creds
   $ spack buildcache push my_registry [specs...]
   ```

   And you can optionally add a base image to get *runnable* images:

   ```console
   $ spack buildcache push --base-image ubuntu:23.04 my_registry python
   Pushed ... as [image]:python-3.11.2-65txfcpqbmpawclvtasuog4yzmxwaoia.spack

   $ docker run --rm -it [image]:python-3.11.2-65txfcpqbmpawclvtasuog4yzmxwaoia.spack
   ```

   This creates a container image from the Spack installations on the host system, without the need to run `spack install` from a `Dockerfile` or `sif` file. It also addresses the inconvenience of losing binaries of dependencies when `RUN spack install` fails inside `docker build`.

   Further, the container image layers and build cache tarballs are the same files. This means that `spack install` and `docker pull` use the exact same underlying binaries. If you previously used `spack install` inside of `docker build`, this feature helps you save storage by a factor of two.

   More details in #38358.

3. **Multiple versions of build dependencies**

   Increasingly, complex package builds require multiple versions of some build dependencies. For example, Python packages frequently require very specific versions of `setuptools`, `cython`, and sometimes different physics packages require different versions of Python to build. The concretizer previously enforced that every solve was *unified*, i.e., that there only be one version of every package. The concretizer now supports "duplicate" nodes for *build dependencies*, but enforces unification through transitive link and run dependencies. This will allow it to better resolve complex dependency graphs in ecosystems like Python, and it also gets us very close to modeling compilers as proper dependencies.

   This change required a major overhaul of the concretizer, as well as a number of performance optimizations. See #38447, #39621.
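
   The behavior can be tuned through `concretizer.yaml`; a minimal sketch using the `duplicates:strategy` values referenced in the documentation:

   ```yaml
   concretizer:
     duplicates:
       # "minimal" (the new default) allows duplicate build-only dependencies where needed;
       # "none" restores the fully unified behavior of previous releases.
       strategy: minimal
   ```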

4. **Cherry-picking virtual dependencies**

   You can now select only a subset of virtual dependencies from a spec that may provide more. For example, if you want `mpich` to be your `mpi` provider, you can be explicit by writing:

   ```
   hdf5 ^[virtuals=mpi] mpich
   ```

   Or, if you want to use, e.g., `intel-parallel-studio` for `blas` along with an external `lapack` like `openblas`, you could write:

   ```
   strumpack ^[virtuals=blas] intel-parallel-studio+mkl ^[virtuals=lapack] openblas
   ```

   The `virtuals=mpi` is an edge attribute, and dependency edges in Spack graphs now track which virtuals they satisfied. More details in #17229 and #35322.

   Note for packaging: in Spack 0.21 `spec.satisfies("^virtual")` is true if and only if the package specifies `depends_on("virtual")`. This is different from Spack 0.20, where depending on a provider implied depending on the virtual provided. See #41002 for an example where `^mkl` was being used to test for several `mkl` providers in a package that did not depend on `mkl`.
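
   To illustrate the packaging note, here is a hypothetical `package.py` excerpt (the package name and CMake flag are made up):

   ```python
   from spack.package import *

   class MyApp(CMakePackage):  # hypothetical recipe
       depends_on("lapack")  # depend on the virtual explicitly

       def cmake_args(self):
           args = []
           # In Spack 0.21 this test is true only because depends_on("lapack") is declared;
           # in 0.20 it would also have been true merely because some provider was in the DAG.
           if self.spec.satisfies("^lapack"):
               args.append("-DUSE_LAPACK=ON")
           return args
   ```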

5. **License directive**

   Spack packages can now have license metadata, with the new `license()` directive:

   ```python
   license("Apache-2.0")
   ```

   Licenses use [SPDX identifiers](https://spdx.org/licenses), and you can use SPDX expressions to combine them:

   ```python
   license("Apache-2.0 OR MIT")
   ```

   Like other directives in Spack, it's conditional, so you can handle complex cases like Spack itself:

   ```python
   license("LGPL-2.1", when="@:0.11")
   license("Apache-2.0 OR MIT", when="@0.12:")
   ```

   More details in #39346, #40598.

6. **`spack deconcretize` command**

   We are getting close to having a `spack update` command for environments, but we're not quite there yet. This is the next best thing. `spack deconcretize` gives you control over what you want to update in an already concrete environment. If you have an environment built with, say, `meson`, and you want to update your `meson` version, you can run:

   ```console
   spack deconcretize meson
   ```

   and have everything that depends on `meson` rebuilt the next time you run `spack concretize`. In a future Spack version, we'll handle all of this in a single command, but for now you can use this to drop bits of your lockfile and resolve your dependencies again. More in #38803.

7. **UI Improvements**

   The venerable `spack info` command was looking shabby compared to the rest of Spack's UI, so we reworked it to have a bit more flair. `spack info` now makes much better use of terminal space and shows variants, their values, and their descriptions much more clearly. Conditional variants are grouped separately so you can more easily understand how packages are structured. More in #40998.

   `spack checksum` now allows you to filter versions from your editor, or by version range. It also notifies you about potential download URL changes. See #40403.

8. **Environments can include definitions**

   Spack did not previously support using `include:` with the [definitions](https://spack.readthedocs.io/en/latest/environments.html#spec-list-references) section of an environment, but now it does. You can use this to curate lists of specs and more easily reuse them across environments. See #33960.
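
   A minimal sketch of the idea, assuming a separate `definitions.yaml` file alongside the environment (the file name and spec lists are illustrative):

   ```yaml
   # definitions.yaml -- hypothetical included file with reusable spec lists
   definitions:
   - compilers: ["%gcc", "%clang"]
   - mypackages: [hdf5, zlib]

   # spack.yaml -- an environment that includes and expands those lists
   spack:
     include:
     - definitions.yaml
     specs:
     - matrix:
       - [$mypackages]
       - [$compilers]
   ```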

9. **Aliases**

   You can now add aliases to Spack commands in `config.yaml`, e.g. this might enshrine your favorite args to `spack find` as `spack f`:

   ```yaml
   config:
     aliases:
       f: find -lv
   ```

   See #17229.

10. **Improved autoloading of modules**

    Spack 0.20 was the first release to enable autoloading of direct dependencies in module files.

    The downside of this was that `module avail` and `module load` tab completion would show users too many modules to choose from, and many users disabled generating modules for dependencies through `exclude_implicits: true`. Further, it was necessary to keep hashes in module names to avoid file name clashes.

    In this release, you can start using `hide_implicits: true` instead, which exposes only explicitly installed packages to the user, while still autoloading dependencies. On top of that, you can safely use `hash_length: 0`, as this config now only applies to the modules exposed to the user -- you don't have to worry about file name clashes for hidden dependencies.

    Note: for `tcl` this feature requires Modules 4.7 or higher.
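
    A minimal `modules.yaml` sketch of the options described above (the module set name and autoload value are illustrative):

    ```yaml
    modules:
      default:
        tcl:
          hash_length: 0          # safe now: only applies to user-visible modules
          hide_implicits: true    # hide dependency modules, but keep autoloading them
          all:
            autoload: direct
    ```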

11. **Updated container labeling**

    Nightly Docker images from the `develop` branch will now be tagged as `:develop` and `:nightly`. The `:latest` tag is no longer associated with `:develop`, but with the latest stable release. Releases will be tagged with `:{major}`, `:{major}.{minor}` and `:{major}.{minor}.{patch}`. `ubuntu:18.04` has also been removed from the list of generated Docker images, as it is no longer supported. See #40593.

## Other new commands and directives

* `spack env activate` without arguments now loads a `default` environment that you do not have to create (#40756).
* `spack find -H` / `--hashes`: a new shortcut for piping `spack find` output to other commands (#38663)
* Add `spack checksum --verify`, fix `--add` (#38458)
* New `default_args` context manager factors out common args for directives (#39964); a sketch follows this list
* `spack compiler find --[no]-mixed-toolchain` lets you easily mix `clang` and `gfortran` on Linux (#40902)
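
A sketch of how `default_args` can be used in a `package.py` recipe (the package and dependencies below are placeholders):

```python
from spack.package import *

class MyPackage(Package):  # hypothetical recipe
    # Factor out the argument shared by several directives:
    with default_args(type="build"):
        depends_on("cmake@3.18:")
        depends_on("ninja")
        depends_on("pkgconfig")
```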

## Performance improvements

* `spack external find` execution is now much faster (#39843)
* `spack location -i` now much faster on success (#40898)
* Drop redundant rpaths post install (#38976)
* ASP-based solver: avoid cycles in clingo using hidden directive (#40720)
* Fix multiple quadratic complexity issues in environments (#38771)

## Other new features of note

* archspec: update to v0.2.2, support for Sapphire Rapids, Power10, Neoverse V2 (#40917)
* Propagate variants across nodes that don't have that variant (#38512)
* Implement fish completion (#29549)
* Can now distinguish between source/binary mirror; don't ping mirror.spack.io as much (#34523)
* Improve status reporting on install (add [n/total] display) (#37903)

## Windows

This release has the best Windows support of any Spack release yet, with numerous improvements and much larger swaths of tests passing:

* MSVC and SDK improvements (#37711, #37930, #38500, #39823, #39180)
* Windows external finding: update default paths; treat .bat as executable on Windows (#39850)
* Windows decompression: fix removal of intermediate file (#38958)
* Windows: executable/path handling (#37762)
* Windows build systems: use ninja and enable tests (#33589)
* Windows testing (#36970, #36972, #36973, #36840, #36977, #36792, #36834, #34696, #36971)
* Windows PowerShell support (#39118, #37951)
* Windows symlinking and libraries (#39933, #38599, #34701, #38578, #34701)

## Notable refactors

* User-specified flags take precedence over others in Spack compiler wrappers (#37376)
* Improve setup of build, run, and test environments (#35737, #40916)
* `make` is no longer a required system dependency of Spack (#40380)
* Support Python 3.12 (#40404, #40155, #40153)
* docs: Replace package list with packages.spack.io (#40251)
* Drop Python 2 constructs in Spack (#38720, #38718, #38703)

## Binary cache and stack updates

* e4s arm stack: duplicate and target neoverse v1 (#40369)
* Add macOS ML CI stacks (#36586)
* E4S Cray CI Stack (#37837)
* e4s cray: expand spec list (#38947)
* e4s cray sles ci: expand spec list (#39081)

## Removals, deprecations, and syntax changes

* ASP: targets, compilers and providers soft-preferences are only global (#31261)
* Parser: fix ambiguity with whitespace in version ranges (#40344)
* Module file generation is disabled by default; you'll need to enable it to use it (#37258)
* Remove deprecated "extra_instructions" option for containers (#40365)
* Stand-alone test feature deprecation postponed to v0.22 (#40600)
* buildcache push: make `--allow-root` the default and deprecate the option (#38878)

## Notable Bugfixes

* Bugfix: propagation of multivalued variants (#39833)
* Allow `/` in git versions (#39398)
* Fetch & patch: actually acquire stage lock, and many more issues (#38903)
* Environment/depfile: better escaping of targets with Git versions (#37560)
* Prevent "spack external find" from erroring out on wrong permissions (#38755)
* lmod: allow core compiler to be specified with a version range (#37789)

## Spack community stats

* 7,469 total packages, 303 new since `v0.20.0`
* 150 new Python packages
* 34 new R packages
* 353 people contributed to this release
* 336 committers to packages
* 65 committers to core

# v0.20.3 (2023-10-31)

## Bugfixes

@@ -37,11 +37,7 @@ to enable reuse for a single installation, and you can use:

   spack install --fresh <spec>

to do a fresh install if ``reuse`` is enabled by default.
``reuse: dependencies`` is the default.

.. seealso::

   FAQ: :ref:`Why does Spack pick particular versions and variants? <faq-concretizer-precedence>`

``reuse: true`` is the default.

------------------------------------------
Selection of the target microarchitectures

@@ -103,3 +99,547 @@ while `py-numpy` still needs an older version:

Up to Spack v0.20 ``duplicates:strategy:none`` was the default (and only) behavior. From Spack v0.21 the default behavior is ``duplicates:strategy:minimal``.

.. _build-settings:

================================
Package Settings (packages.yaml)
================================

Spack allows you to customize how your software is built through the ``packages.yaml`` file. Using it, you can make Spack prefer particular implementations of virtual dependencies (e.g., MPI or BLAS/LAPACK), or you can make it prefer to build with particular compilers. You can also tell Spack to use *external* software installations already present on your system.

At a high level, the ``packages.yaml`` file is structured like this:

.. code-block:: yaml

   packages:
     package1:
       # settings for package1
     package2:
       # settings for package2
     # ...
     all:
       # settings that apply to all packages.

So you can either set build preferences specifically for *one* package, or you can specify that certain settings should apply to *all* packages. The types of settings you can customize are described in detail below.

Spack's build defaults are in the default ``etc/spack/defaults/packages.yaml`` file. You can override them in ``~/.spack/packages.yaml`` or ``etc/spack/packages.yaml``. For more details on how this works, see :ref:`configuration-scopes`.

.. _sec-external-packages:

-----------------
External Packages
-----------------

Spack can be configured to use externally-installed packages rather than building its own packages. This may be desirable if machines ship with system packages, such as a customized MPI that should be used instead of Spack building its own MPI.

External packages are configured through the ``packages.yaml`` file. Here's an example of an external configuration:

.. code-block:: yaml

   packages:
     openmpi:
       externals:
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.4.3
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
         prefix: /opt/openmpi-1.4.3-debug
       - spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.6.5-intel

This example lists three installations of OpenMPI, one built with GCC, one built with GCC and debug information, and another built with Intel. If Spack is asked to build a package that uses one of these MPIs as a dependency, it will use the pre-installed OpenMPI in the given directory. Note that the specified path is the top-level install prefix, not the ``bin`` subdirectory.

``packages.yaml`` can also be used to specify modules to load instead of the installation prefixes. The following example says that module ``CMake/3.7.2`` provides cmake version 3.7.2.

.. code-block:: yaml

   cmake:
     externals:
     - spec: cmake@3.7.2
       modules:
       - CMake/3.7.2

Each ``packages.yaml`` begins with a ``packages:`` attribute, followed by a list of package names. To specify externals, add an ``externals:`` attribute under the package name, which lists externals. Each external should specify a ``spec:`` string that should be as well-defined as reasonably possible. If a package lacks a spec component, such as missing a compiler or package version, then Spack will guess the missing component based on its most-favored packages, and it may guess incorrectly.

Each package version and compiler listed in an external should have entries in Spack's packages and compiler configuration, even though the package and compiler may not ever be built.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Prevent packages from being built from sources
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Adding an external spec in ``packages.yaml`` allows Spack to use an external location, but it does not prevent Spack from building packages from sources. In the above example, Spack might choose for many valid reasons to start building and linking with the latest version of OpenMPI rather than continue using the pre-installed OpenMPI versions.

To prevent this, the ``packages.yaml`` configuration also allows packages to be flagged as non-buildable. The previous example could be modified to be:

.. code-block:: yaml

   packages:
     openmpi:
       externals:
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.4.3
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
         prefix: /opt/openmpi-1.4.3-debug
       - spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.6.5-intel
       buildable: False

The addition of the ``buildable`` flag tells Spack that it should never build its own version of OpenMPI from sources, and it will instead always rely on a pre-built OpenMPI.

.. note::

   If ``concretizer:reuse`` is on (see :ref:`concretizer-options` for more information on that flag), pre-built specs include specs already available from a local store, an upstream store, a registered buildcache, or specs marked as externals in ``packages.yaml``. If ``concretizer:reuse`` is off, only external specs in ``packages.yaml`` are included in the list of pre-built specs.

If an external module is specified as not buildable, then Spack will load the external module into the build environment, which can be used for linking.

The ``buildable`` flag does not need to be paired with external packages. It could also be used alone to forbid packages that may be buggy or otherwise undesirable.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Non-buildable virtual packages
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Virtual packages in Spack can also be specified as not buildable, and external implementations can be provided. In the example above, OpenMPI is configured as not buildable, but Spack will often prefer other MPI implementations over the externally available OpenMPI. Every MPI provider could be marked as not buildable individually, but it is more convenient to do so for the virtual package:

.. code-block:: yaml

   packages:
     mpi:
       buildable: False
     openmpi:
       externals:
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.4.3
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
         prefix: /opt/openmpi-1.4.3-debug
       - spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.6.5-intel

Spack can then use any of the listed external implementations of MPI to satisfy a dependency, and will choose depending on the compiler and architecture.

In cases where the concretizer is configured to reuse specs, and other ``mpi`` providers (available via stores or buildcaches) are not wanted, Spack can be configured to require specs matching only the available externals:

.. code-block:: yaml

   packages:
     mpi:
       buildable: False
       require:
       - one_of: [
           "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64",
           "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug",
           "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
         ]
     openmpi:
       externals:
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.4.3
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
         prefix: /opt/openmpi-1.4.3-debug
       - spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.6.5-intel

This configuration prevents any spec that uses MPI and originates from stores or buildcaches from being reused, unless it matches the requirements under ``packages:mpi:require``. For more information on requirements see :ref:`package-requirements`.

.. _cmd-spack-external-find:

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Automatically Find External Packages
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can run the :ref:`spack external find <spack-external-find>` command to search for system-provided packages and add them to ``packages.yaml``. After running this command your ``packages.yaml`` may include new entries:

.. code-block:: yaml

   packages:
     cmake:
       externals:
       - spec: cmake@3.17.2
         prefix: /usr

Generally this is useful for detecting a small set of commonly-used packages; for now this is generally limited to finding build-only dependencies. Specific limitations include:

* Packages are not discoverable by default: For a package to be discoverable with ``spack external find``, it needs to add special logic. See :ref:`here <make-package-findable>` for more details.
* The logic does not search through module files, it can only detect packages with executables defined in ``PATH``; you can help Spack locate externals which use module files by loading any associated modules for packages that you want Spack to know about before running ``spack external find``.
* Spack does not overwrite existing entries in the package configuration: If there is an external defined for a spec at any configuration scope, then Spack will not add a new external entry (``spack config blame packages`` can help locate all external entries).

.. _package-requirements:

--------------------
Package Requirements
--------------------

Spack can be configured to always use certain compilers, package versions, and variants during concretization through package requirements.

Package requirements are useful when you find yourself repeatedly specifying the same constraints on the command line, and wish that Spack respects these constraints whether you mention them explicitly or not. Another use case is specifying constraints that should apply to all root specs in an environment, without having to repeat the constraint everywhere.

Apart from that, requirements config is more flexible than constraints on the command line, because it can specify constraints on packages *when they occur* as a dependency. In contrast, on the command line it is not possible to specify constraints on dependencies while also keeping those dependencies optional.

^^^^^^^^^^^^^^^^^^^
Requirements syntax
^^^^^^^^^^^^^^^^^^^

The package requirements configuration is specified in ``packages.yaml``, keyed by package name and expressed using the Spec syntax. In the simplest case you can specify attributes that you always want the package to have by providing a single spec string to ``require``:

.. code-block:: yaml

   packages:
     libfabric:
       require: "@1.13.2"

In the above example, ``libfabric`` will always build with version 1.13.2. If you need to compose multiple configuration scopes, ``require`` accepts a list of strings:

.. code-block:: yaml

   packages:
     libfabric:
       require:
       - "@1.13.2"
       - "%gcc"

In this case ``libfabric`` will always build with version 1.13.2 **and** using GCC as a compiler.

For more complex use cases, ``require`` also accepts a list of objects. These objects must have either an ``any_of`` or a ``one_of`` field containing a list of spec strings, and they can optionally have a ``when`` and a ``message`` attribute:

.. code-block:: yaml

   packages:
     openmpi:
       require:
       - any_of: ["@4.1.5", "%gcc"]
         message: "in this example only 4.1.5 can build with other compilers"

``any_of`` is a list of specs. One of those specs must be satisfied, and it is also allowed for the concretized spec to match more than one. In the above example, that means you could build ``openmpi@4.1.5%gcc``, ``openmpi@4.1.5%clang`` or ``openmpi@3.9%gcc``, but not ``openmpi@3.9%clang``.

If a custom message is provided, and the requirement is not satisfiable, Spack will print the custom error message:

.. code-block:: console

   $ spack spec openmpi@3.9%clang
   ==> Error: in this example only 4.1.5 can build with other compilers

We could express a similar requirement using the ``when`` attribute:

.. code-block:: yaml

   packages:
     openmpi:
       require:
       - any_of: ["%gcc"]
         when: "@:4.1.4"
         message: "in this example only 4.1.5 can build with other compilers"

In the example above, if the version turns out to be 4.1.4 or less, we require the compiler to be GCC. For readability, Spack also allows a ``spec`` key accepting a string when there is only a single constraint:

.. code-block:: yaml

   packages:
     openmpi:
       require:
       - spec: "%gcc"
         when: "@:4.1.4"
         message: "in this example only 4.1.5 can build with other compilers"

This code snippet and the one before it are semantically equivalent.

Finally, instead of ``any_of`` you can use ``one_of``, which also takes a list of specs. The final concretized spec must match one and only one of them:

.. code-block:: yaml

   packages:
     mpich:
       require:
       - one_of: ["+cuda", "+rocm"]

In the example above, that means you could build ``mpich+cuda`` or ``mpich+rocm`` but not ``mpich+cuda+rocm``.

.. note::

   For ``any_of`` and ``one_of``, the order of specs indicates a preference: items that appear earlier in the list are preferred (note that these preferences can be ignored in favor of others).

.. note::

   When using a conditional requirement, Spack is allowed to actively avoid the triggering condition (the ``when=...`` spec) if that leads to a concrete spec with better scores in the optimization criteria. To check the current optimization criteria and their priorities you can run ``spack solve zlib``.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting default requirements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also set default requirements for all packages under ``all`` like this:

.. code-block:: yaml

   packages:
     all:
       require: '%clang'

which means every spec will be required to use ``clang`` as a compiler.

Note that in this case ``all`` represents a *default set of requirements* - if there are specific package requirements, then the default requirements under ``all`` are disregarded. For example, with a configuration like this:

.. code-block:: yaml

   packages:
     all:
       require: '%clang'
     cmake:
       require: '%gcc'

Spack requires ``cmake`` to use ``gcc`` and all other nodes (including ``cmake`` dependencies) to use ``clang``.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting requirements on virtual specs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A requirement on a virtual spec applies whenever that virtual is present in the DAG. This can be useful for fixing which virtual provider you want to use:

.. code-block:: yaml

   packages:
     mpi:
       require: 'mvapich2 %gcc'

With the configuration above the only allowed ``mpi`` provider is ``mvapich2 %gcc``.

Requirements on the virtual spec and on the specific provider are both applied, if present. For instance with a configuration like:

.. code-block:: yaml

   packages:
     mpi:
       require: 'mvapich2 %gcc'
     mvapich2:
       require: '~cuda'

you will use ``mvapich2~cuda %gcc`` as an ``mpi`` provider.

.. _package-preferences:

-------------------
Package Preferences
-------------------

In some cases package requirements can be too strong, and package preferences are the better option. Package preferences do not impose constraints on packages for particular versions or variant values; rather, they only set defaults. The concretizer is free to change them if it must, due to other constraints, and it also prefers reusing installed packages over building new ones that are a better match for preferences.

Most package preferences (``compilers``, ``target`` and ``providers``) can only be set globally under the ``all`` section of ``packages.yaml``:

.. code-block:: yaml

   packages:
     all:
       compiler: [gcc@12.2.0, clang@12:, oneapi@2023:]
       target: [x86_64_v3]
       providers:
         mpi: [mvapich2, mpich, openmpi]

These preferences override Spack's defaults and effectively reorder priorities when looking for the best compiler, target or virtual package provider. Each preference takes an ordered list of spec constraints, with earlier entries in the list being preferred over later entries.

In the example above all packages prefer to be compiled with ``gcc@12.2.0``, to target the ``x86_64_v3`` microarchitecture and to use ``mvapich2`` if they depend on ``mpi``.

The ``variants`` and ``version`` preferences can be set under package-specific sections of the ``packages.yaml`` file:

.. code-block:: yaml

   packages:
     opencv:
       variants: +debug
     gperftools:
       version: [2.2, 2.4, 2.3]

In this case, the preference for ``opencv`` is to build with debug options, while ``gperftools`` prefers version 2.2 over 2.4.

Any preference can be overwritten on the command line if explicitly requested.

Preferences cannot overcome explicit constraints, as they only set a preferred ordering among homogeneous attribute values. Going back to the example, if ``gperftools@2.3:`` was requested, then Spack will install version 2.4, since the most preferred version 2.2 is prohibited by the version constraint.

.. _package_permissions:

-------------------
Package Permissions
-------------------

Spack can be configured to assign permissions to the files installed by a package.

In the ``packages.yaml`` file under ``permissions``, the attributes ``read``, ``write``, and ``group`` control the package permissions. These attributes can be set per-package, or for all packages under ``all``. If permissions are set under ``all`` and for a specific package, the package-specific settings take precedence.

The ``read`` and ``write`` attributes take one of ``user``, ``group``, and ``world``.

.. code-block:: yaml

   packages:
     all:
       permissions:
         write: group
         group: spack
     my_app:
       permissions:
         read: group
         group: my_team

The permissions settings describe the broadest level of access to installations of the specified packages. The execute permissions of the file are set to the same level as read permissions for those files that are executable. The default setting for ``read`` is ``world``, and for ``write`` is ``user``. In the example above, installations of ``my_app`` will be installed with user and group permissions but no world permissions, and owned by the group ``my_team``. All other packages will be installed with user and group write privileges, and world read privileges. Those packages will be owned by the group ``spack``.

The ``group`` attribute assigns a Unix-style group to a package. All files installed by the package will be owned by the assigned group, and the sticky group bit will be set on the install prefix and all directories inside the install prefix. This will ensure that even manually placed files within the install prefix are owned by the assigned group. If no group is assigned, Spack will allow the OS default behavior to go as expected.

----------------------------
Assigning Package Attributes
----------------------------

You can assign class-level attributes in the configuration:

.. code-block:: yaml

   packages:
     mpileaks:
       # Override existing attributes
       url: http://www.somewhereelse.com/mpileaks-1.0.tar.gz
       # ... or add new ones
       x: 1

Attributes set this way will be accessible to any method executed in the package.py file (e.g. the ``install()`` method). Values for these attributes may be any value parseable by YAML.

These can only be applied to specific packages, not "all" or virtual packages.
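
As a sketch of how such an attribute might be consumed later, assuming the ``x: 1`` assignment above (the recipe body below is hypothetical):

.. code-block:: python

   # Hypothetical excerpt from mpileaks' package.py; "x" comes from packages.yaml above.
   def install(self, spec, prefix):
       if self.x == 1:  # class-level attribute assigned via packages.yaml
           configure("--prefix={0}".format(prefix), "--enable-feature-x")
       else:
           configure("--prefix={0}".format(prefix))
       make()
       make("install")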

@@ -392,7 +392,7 @@ See section
:ref:`Configuration Scopes <configuration-scopes>`
for an explanation about the different files
and section
-:ref:`Build customization <packages-config>`
+:ref:`Build customization <build-settings>`
for specifics and examples for ``packages.yaml`` files.

.. If your system administrator did not provide modules for pre-installed Intel

@@ -17,7 +17,7 @@ case you want to skip directly to specific docs:
* :ref:`config.yaml <config-yaml>`
* :ref:`mirrors.yaml <mirrors>`
* :ref:`modules.yaml <modules>`
-* :ref:`packages.yaml <packages-config>`
+* :ref:`packages.yaml <build-settings>`
* :ref:`repos.yaml <repositories>`

You can also add any of these as inline configuration in the YAML

@@ -243,9 +243,11 @@ lower-precedence settings. Completely ignoring higher-level configuration
options is supported with the ``::`` notation for keys (see
:ref:`config-overrides` below).

-There are also special notations for string concatenation and precedence override. The ``+:`` notation can be used to force *prepending* strings or lists. For lists, this is identical to the default behavior. The ``-:`` notation works similarly, but for *appending* values.
+There are also special notations for string concatenation and precedence override:
+
+* ``+:`` will force *prepending* strings or lists. For lists, this is the default behavior.
+* ``-:`` works similarly, but for *appending* values.

:ref:`config-prepend-append`
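
A sketch of the prepend notation, assuming the marker is appended to the option name (``template_dirs`` is just an illustrative option):

.. code-block:: yaml

   config:
     # "+:" asks for the value below to be prepended to lower-precedence settings
     template_dirs+: [~/.spack/templates]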

@@ -1,77 +0,0 @@
.. Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
   Spack Project Developers. See the top-level COPYRIGHT file for details.

   SPDX-License-Identifier: (Apache-2.0 OR MIT)

==========================
Frequently Asked Questions
==========================

This page contains answers to frequently asked questions about Spack. If you have questions that are not answered here, feel free to ask on `Slack <https://slack.spack.io>`_ or `GitHub Discussions <https://github.com/spack/spack/discussions>`_. If you've learned the answer to a question that you think should be here, please consider contributing to this page.

.. _faq-concretizer-precedence:

-----------------------------------------------------
Why does Spack pick particular versions and variants?
-----------------------------------------------------

This question comes up in a variety of forms:

1. Why does Spack seem to ignore my package preferences from ``packages.yaml`` config?
2. Why does Spack toggle a variant instead of using the default from the ``package.py`` file?

The short answer is that Spack always picks an optimal configuration based on a complex set of criteria\ [#f1]_. These criteria are more nuanced than always choosing the latest versions or default variants.

.. note::

   As a rule of thumb: requirements + constraints > reuse > preferences > defaults.

The following set of criteria (from lowest to highest precedence) explain common cases where concretization output may seem surprising at first.

1. :ref:`Package preferences <package-preferences>` configured in ``packages.yaml`` override variant defaults from ``package.py`` files, and influence the optimal ordering of versions. Preferences are specified as follows:

   .. code-block:: yaml

      packages:
        foo:
          version: [1.0, 1.1]
          variants: ~mpi

2. :ref:`Reuse concretization <concretizer-options>` configured in ``concretizer.yaml`` overrides preferences, since it's typically faster to reuse an existing spec than to build a preferred one from sources. When build caches are enabled, specs may be reused from a remote location too. Reuse concretization is configured as follows:

   .. code-block:: yaml

      concretizer:
        reuse: dependencies # other options are 'true' and 'false'

3. :ref:`Package requirements <package-requirements>` configured in ``packages.yaml``, and constraints from the command line as well as ``package.py`` files, override all of the above. Requirements are specified as follows:

   .. code-block:: yaml

      packages:
        foo:
          require:
          - "@1.2: +mpi"

Requirements and constraints restrict the set of possible solutions, while reuse behavior and preferences influence what an optimal solution looks like.

.. rubric:: Footnotes

.. [#f1] The exact list of criteria can be retrieved with the ``spack solve`` command

@@ -55,7 +55,6 @@ or refer to the full manual below.
   getting_started
   basic_usage
   replace_conda_homebrew
   frequently_asked_questions

.. toctree::
   :maxdepth: 2

@@ -71,7 +70,7 @@ or refer to the full manual below.

   configuration
   config_yaml
   packages_yaml
   bootstrapping
   build_settings
   environments
   containers

@@ -79,7 +78,6 @@ or refer to the full manual below.
   module_file_support
   repositories
   binary_caches
   bootstrapping
   command_index
   chain
   extensions
@@ -1,560 +0,0 @@
|
||||
.. Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
|
||||
Spack Project Developers. See the top-level COPYRIGHT file for details.
|
||||
|
||||
SPDX-License-Identifier: (Apache-2.0 OR MIT)
|
||||
|
||||
|
||||
.. _packages-config:
|
||||
|
||||
================================
|
||||
Package Settings (packages.yaml)
|
||||
================================
|
||||
|
||||
Spack allows you to customize how your software is built through the
|
||||
``packages.yaml`` file. Using it, you can make Spack prefer particular
|
||||
implementations of virtual dependencies (e.g., MPI or BLAS/LAPACK),
|
||||
or you can make it prefer to build with particular compilers. You can
|
||||
also tell Spack to use *external* software installations already
|
||||
present on your system.
|
||||
|
||||
At a high level, the ``packages.yaml`` file is structured like this:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
packages:
|
||||
package1:
|
||||
# settings for package1
|
||||
package2:
|
||||
# settings for package2
|
||||
# ...
|
||||
all:
|
||||
# settings that apply to all packages.
|
||||
|
||||
So you can either set build preferences specifically for *one* package,
|
||||
or you can specify that certain settings should apply to *all* packages.
|
||||
The types of settings you can customize are described in detail below.
|
||||
|
||||
Spack's build defaults are in the default
|
||||
``etc/spack/defaults/packages.yaml`` file. You can override them in
|
||||
``~/.spack/packages.yaml`` or ``etc/spack/packages.yaml``. For more
|
||||
details on how this works, see :ref:`configuration-scopes`.
|
||||
|
||||
.. _sec-external-packages:
|
||||
|
||||
-----------------
|
||||
External Packages
|
||||
-----------------
|
||||
|
||||
Spack can be configured to use externally-installed
|
||||
packages rather than building its own packages. This may be desirable
|
||||
if machines ship with system packages, such as a customized MPI
|
||||
that should be used instead of Spack building its own MPI.
|
||||
|
||||
External packages are configured through the ``packages.yaml`` file.
|
||||
Here's an example of an external configuration:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
packages:
|
||||
openmpi:
|
||||
externals:
|
||||
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
|
||||
prefix: /opt/openmpi-1.4.3
|
||||
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
|
||||
prefix: /opt/openmpi-1.4.3-debug
|
||||
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
|
||||
prefix: /opt/openmpi-1.6.5-intel
|
||||
|
||||
This example lists three installations of OpenMPI, one built with GCC,
|
||||
one built with GCC and debug information, and another built with Intel.
|
||||
If Spack is asked to build a package that uses one of these MPIs as a
|
||||
dependency, it will use the pre-installed OpenMPI in
|
||||
the given directory. Note that the specified path is the top-level
|
||||
install prefix, not the ``bin`` subdirectory.
|
||||
|
||||
``packages.yaml`` can also be used to specify modules to load instead
|
||||
of the installation prefixes. The following example says that module
|
||||
``CMake/3.7.2`` provides cmake version 3.7.2.
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
cmake:
|
||||
externals:
|
||||
- spec: cmake@3.7.2
|
||||
modules:
|
||||
- CMake/3.7.2
|
||||
|
||||
Each ``packages.yaml`` begins with a ``packages:`` attribute, followed
|
||||
by a list of package names. To specify externals, add an ``externals:``
|
||||
attribute under the package name, which lists externals.
|
||||
Each external should specify a ``spec:`` string that should be as
|
||||
well-defined as reasonably possible. If a
|
||||
package lacks a spec component, such as missing a compiler or
|
||||
package version, then Spack will guess the missing component based
|
||||
on its most-favored packages, and it may guess incorrectly.
|
||||
|
||||
Each package version and compiler listed in an external should
|
||||
have entries in Spack's packages and compiler configuration, even
|
||||
though the package and compiler may not ever be built.
|
||||
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
Prevent packages from being built from sources
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Adding an external spec in ``packages.yaml`` allows Spack to use an external location,
|
||||
but it does not prevent Spack from building packages from sources. In the above example,
|
||||
Spack might choose for many valid reasons to start building and linking with the
|
||||
latest version of OpenMPI rather than continue using the pre-installed OpenMPI versions.
|
||||
|
||||
To prevent this, the ``packages.yaml`` configuration also allows packages
|
||||
to be flagged as non-buildable. The previous example could be modified to
|
||||
be:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
packages:
|
||||
openmpi:
|
||||
externals:
|
||||
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
|
||||
prefix: /opt/openmpi-1.4.3
|
||||
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
|
||||
prefix: /opt/openmpi-1.4.3-debug
|
||||
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
|
||||
prefix: /opt/openmpi-1.6.5-intel
|
||||
buildable: False
|
||||
|
||||
The addition of the ``buildable`` flag tells Spack that it should never build
|
||||
its own version of OpenMPI from sources, and it will instead always rely on a pre-built
|
||||
OpenMPI.
|
||||
|
||||
.. note::
|
||||
|
||||
If ``concretizer:reuse`` is on (see :ref:`concretizer-options` for more information on that flag)
|
||||
pre-built specs include specs already available from a local store, an upstream store, a registered
|
||||
buildcache or specs marked as externals in ``packages.yaml``. If ``concretizer:reuse`` is off, only
|
||||
external specs in ``packages.yaml`` are included in the list of pre-built specs.
|
||||
|
||||
If an external module is specified as not buildable, then Spack will load the
|
||||
external module into the build environment which can be used for linking.
|
||||
|
||||
The ``buildable`` does not need to be paired with external packages.
|
||||
It could also be used alone to forbid packages that may be
|
||||
buggy or otherwise undesirable.
|
||||
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
Non-buildable virtual packages
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Virtual packages in Spack can also be specified as not buildable, and
|
||||
external implementations can be provided. In the example above,
|
||||
OpenMPI is configured as not buildable, but Spack will often prefer
|
||||
other MPI implementations over the externally available OpenMPI. Spack
|
||||
can be configured with every MPI provider not buildable individually,
|
||||
but more conveniently:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
packages:
|
||||
mpi:
|
||||
buildable: False
|
||||
openmpi:
|
||||
externals:
|
||||
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
|
||||
prefix: /opt/openmpi-1.4.3
|
||||
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
|
||||
prefix: /opt/openmpi-1.4.3-debug
|
||||
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
|
||||
prefix: /opt/openmpi-1.6.5-intel
|
||||
|
||||
Spack can then use any of the listed external implementations of MPI
|
||||
to satisfy a dependency, and will choose depending on the compiler and
|
||||
architecture.
|
||||
|
||||
In cases where the concretizer is configured to reuse specs, and other ``mpi`` providers
|
||||
(available via stores or buildcaches) are not wanted, Spack can be configured to require
|
||||
specs matching only the available externals:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
packages:
|
||||
mpi:
|
||||
buildable: False
|
||||
require:
|
||||
- one_of: [
|
||||
"openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64",
|
||||
"openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug",
|
||||
"openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
|
||||
]
|
||||
openmpi:
|
||||
externals:
|
||||
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
|
||||
prefix: /opt/openmpi-1.4.3
|
||||
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
|
||||
prefix: /opt/openmpi-1.4.3-debug
|
||||
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
|
||||
prefix: /opt/openmpi-1.6.5-intel
|
||||
|
||||
This configuration prevents any spec using MPI and originating from stores or buildcaches to be reused,
|
||||
unless it matches the requirements under ``packages:mpi:require``. For more information on requirements see
|
||||
:ref:`package-requirements`.
|
||||
|
||||
.. _cmd-spack-external-find:
|
||||
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
Automatically Find External Packages
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
You can run the :ref:`spack external find <spack-external-find>` command
|
||||
to search for system-provided packages and add them to ``packages.yaml``.
|
||||
After running this command your ``packages.yaml`` may include new entries:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
packages:
|
||||
cmake:
|
||||
externals:
|
||||
- spec: cmake@3.17.2
|
||||
prefix: /usr
|
||||
|
||||
Generally this is useful for detecting a small set of commonly-used packages;
|
||||
for now this is generally limited to finding build-only dependencies.
|
||||
Specific limitations include:
|
||||
|
||||
* Packages are not discoverable by default: For a package to be
|
||||
discoverable with ``spack external find``, it needs to add special
|
||||
logic. See :ref:`here <make-package-findable>` for more details.
|
||||
* The logic does not search through module files, it can only detect
|
||||
packages with executables defined in ``PATH``; you can help Spack locate
|
||||
externals which use module files by loading any associated modules for
|
||||
packages that you want Spack to know about before running
|
||||
``spack external find``.
|
||||
* Spack does not overwrite existing entries in the package configuration:
|
||||
If there is an external defined for a spec at any configuration scope,
|
||||
then Spack will not add a new external entry (``spack config blame packages``
|
||||
can help locate all external entries).

.. _package-requirements:

--------------------
Package Requirements
--------------------

Spack can be configured to always use certain compilers, package
versions, and variants during concretization through package
requirements.

Package requirements are useful when you find yourself repeatedly
specifying the same constraints on the command line, and wish that
Spack respected these constraints whether or not you mention them
explicitly. Another use case is specifying constraints that should
apply to all root specs in an environment, without having to repeat
the constraint everywhere.

Apart from that, the requirements configuration is more flexible than
constraints on the command line, because it can specify constraints on
packages *when they occur* as a dependency. In contrast, on the command
line it is not possible to specify constraints on dependencies while also
keeping those dependencies optional.

.. seealso::

   FAQ: :ref:`Why does Spack pick particular versions and variants? <faq-concretizer-precedence>`

^^^^^^^^^^^^^^^^^^^
Requirements syntax
^^^^^^^^^^^^^^^^^^^

The package requirements configuration is specified in ``packages.yaml``,
keyed by package name and expressed using the Spec syntax. In the simplest
case you can specify attributes that you always want the package to have
by providing a single spec string to ``require``:

.. code-block:: yaml

   packages:
     libfabric:
       require: "@1.13.2"

In the above example, ``libfabric`` will always build with version 1.13.2. If you
need to compose multiple configuration scopes, ``require`` accepts a list of
strings:

.. code-block:: yaml

   packages:
     libfabric:
       require:
       - "@1.13.2"
       - "%gcc"

In this case ``libfabric`` will always build with version 1.13.2 **and** with GCC
as the compiler.

For more complex use cases, ``require`` also accepts a list of objects. These objects
must have either an ``any_of`` or a ``one_of`` field containing a list of spec strings,
and they can optionally have a ``when`` and a ``message`` attribute:

.. code-block:: yaml

   packages:
     openmpi:
       require:
       - any_of: ["@4.1.5", "%gcc"]
         message: "in this example only 4.1.5 can build with other compilers"

``any_of`` is a list of specs. One of those specs must be satisfied,
and it is also allowed for the concretized spec to match more than one.
In the above example, that means you could build ``openmpi@4.1.5%gcc``,
``openmpi@4.1.5%clang`` or ``openmpi@3.9%gcc``, but
not ``openmpi@3.9%clang``.

If a custom message is provided, and the requirement is not satisfiable,
Spack will print the custom error message:

.. code-block:: console

   $ spack spec openmpi@3.9%clang
   ==> Error: in this example only 4.1.5 can build with other compilers

We could express a similar requirement using the ``when`` attribute:

.. code-block:: yaml

   packages:
     openmpi:
       require:
       - any_of: ["%gcc"]
         when: "@:4.1.4"
         message: "in this example only 4.1.5 can build with other compilers"

In the example above, if the version turns out to be 4.1.4 or less, we require the compiler to be GCC.
For readability, Spack also allows a ``spec`` key accepting a string when there is only a single
constraint:

.. code-block:: yaml

   packages:
     openmpi:
       require:
       - spec: "%gcc"
         when: "@:4.1.4"
         message: "in this example only 4.1.5 can build with other compilers"

This code snippet and the one before it are semantically equivalent.

Finally, instead of ``any_of`` you can use ``one_of`` which also takes a list of specs. The final
concretized spec must match one and only one of them:

.. code-block:: yaml

   packages:
     mpich:
       require:
       - one_of: ["+cuda", "+rocm"]

In the example above, that means you could build ``mpich+cuda`` or ``mpich+rocm`` but not ``mpich+cuda+rocm``.

.. note::

   For ``any_of`` and ``one_of``, the order of specs indicates a
   preference: items that appear earlier in the list are preferred
   (note that these preferences can be ignored in favor of others).

.. note::

   When using a conditional requirement, Spack is allowed to actively avoid the triggering
   condition (the ``when=...`` spec) if that leads to a concrete spec with better scores in
   the optimization criteria. To check the current optimization criteria and their
   priorities you can run ``spack solve zlib``.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting default requirements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also set default requirements for all packages under ``all``
like this:

.. code-block:: yaml

   packages:
     all:
       require: '%clang'

which means every spec will be required to use ``clang`` as a compiler.

Note that in this case ``all`` represents a *default set of requirements*:
if there are specific package requirements, then the default requirements
under ``all`` are disregarded. For example, with a configuration like this:

.. code-block:: yaml

   packages:
     all:
       require: '%clang'
     cmake:
       require: '%gcc'

Spack requires ``cmake`` to use ``gcc`` and all other nodes (including ``cmake``
dependencies) to use ``clang``.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting requirements on virtual specs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A requirement on a virtual spec applies whenever that virtual is present in the DAG.
This can be useful for fixing which virtual provider you want to use:

.. code-block:: yaml

   packages:
     mpi:
       require: 'mvapich2 %gcc'

With the configuration above, the only allowed ``mpi`` provider is ``mvapich2 %gcc``.

Requirements on the virtual spec and on the specific provider are both applied, if
present. For instance, with a configuration like:

.. code-block:: yaml

   packages:
     mpi:
       require: 'mvapich2 %gcc'
     mvapich2:
       require: '~cuda'

you will use ``mvapich2~cuda %gcc`` as an ``mpi`` provider.

.. _package-preferences:

-------------------
Package Preferences
-------------------

In some cases package requirements can be too strong, and package
preferences are the better option. Package preferences do not impose
constraints on packages for particular versions or variant values;
they only set defaults. The concretizer is free to change
them if it must, due to other constraints, and it also prefers reusing
installed packages over building new ones that would better match the
preferences.

.. seealso::

   FAQ: :ref:`Why does Spack pick particular versions and variants? <faq-concretizer-precedence>`


Most package preferences (``compilers``, ``target`` and ``providers``)
can only be set globally under the ``all`` section of ``packages.yaml``:

.. code-block:: yaml

   packages:
     all:
       compiler: [gcc@12.2.0, clang@12:, oneapi@2023:]
       target: [x86_64_v3]
       providers:
         mpi: [mvapich2, mpich, openmpi]

These preferences override Spack's defaults and effectively reorder priorities
when looking for the best compiler, target or virtual package provider. Each
preference takes an ordered list of spec constraints, with earlier entries in
the list being preferred over later entries.

In the example above, all packages prefer to be compiled with ``gcc@12.2.0``,
to target the ``x86_64_v3`` microarchitecture and to use ``mvapich2`` if they
depend on ``mpi``.

The ``variants`` and ``version`` preferences can be set under
package-specific sections of the ``packages.yaml`` file:

.. code-block:: yaml

   packages:
     opencv:
       variants: +debug
     gperftools:
       version: [2.2, 2.4, 2.3]

In this case, the preference for ``opencv`` is to build with debug options, while
``gperftools`` prefers version 2.2 over 2.4.

Any preference can be overridden on the command line if explicitly requested.
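
For instance, sticking with the example above, an explicit spec on the command
line wins over the configured defaults (an illustrative sketch; no output shown):

.. code-block:: console

   $ spack install opencv~debug      # overrides the '+debug' variant preference
   $ spack install gperftools@2.4    # overrides the preferred version 2.2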

Preferences cannot overcome explicit constraints, as they only set a preferred
ordering among homogeneous attribute values. Going back to the example, if
``gperftools@2.3:`` is requested, then Spack will install version 2.4,
since the most preferred version, 2.2, is prohibited by the version constraint.

.. _package_permissions:

-------------------
Package Permissions
-------------------

Spack can be configured to assign permissions to the files installed
by a package.

In the ``packages.yaml`` file under ``permissions``, the attributes
``read``, ``write``, and ``group`` control the package
permissions. These attributes can be set per-package, or for all
packages under ``all``. If permissions are set under ``all`` and for a
specific package, the package-specific settings take precedence.

The ``read`` and ``write`` attributes take one of ``user``, ``group``,
and ``world``.

.. code-block:: yaml

   packages:
     all:
       permissions:
         write: group
         group: spack
     my_app:
       permissions:
         read: group
         group: my_team

The permissions settings describe the broadest level of access to
installations of the specified packages. The execute permissions of
a file are set to the same level as its read permissions for files
that are executable. The default setting for ``read`` is ``world``,
and for ``write`` is ``user``. In the example above, ``my_app`` will
be installed with user and group permissions but no world permissions,
and owned by the group ``my_team``. All other packages will be
installed with user and group write privileges, and world read
privileges. Those packages will be owned by the group ``spack``.

The ``group`` attribute assigns a Unix-style group to a package. All
files installed by the package will be owned by the assigned group,
and the sticky group bit will be set on the install prefix and all
directories inside the install prefix. This ensures that even
manually placed files within the install prefix are owned by the
assigned group. If no group is assigned, Spack defers to the default
OS behavior.

----------------------------
Assigning Package Attributes
----------------------------

You can assign class-level attributes in the configuration:

.. code-block:: yaml

   packages:
     mpileaks:
       package_attributes:
         # Override existing attributes
         url: http://www.somewhereelse.com/mpileaks-1.0.tar.gz
         # ... or add new ones
         x: 1

Attributes set this way will be accessible to any method executed
in the package.py file (e.g. the ``install()`` method). Values for these
attributes may be any value parseable by YAML.

These can only be applied to specific packages, not to ``all`` or to
virtual packages.
@@ -8,6 +8,6 @@ pygments==2.16.1
urllib3==2.0.7
pytest==7.4.3
isort==5.12.0
black==23.10.1
black==23.11.0
flake8==6.1.0
mypy==1.6.1
mypy==1.7.0

@@ -4,7 +4,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

#: PEP440 canonical <major>.<minor>.<micro>.<devN> string
__version__ = "0.21.2"
__version__ = "0.22.0.dev0"
spack_version = __version__

@@ -40,7 +40,6 @@ def _search_duplicate_compilers(error_cls):
|
||||
import collections.abc
|
||||
import glob
|
||||
import inspect
|
||||
import io
|
||||
import itertools
|
||||
import pathlib
|
||||
import pickle
|
||||
@@ -55,7 +54,6 @@ def _search_duplicate_compilers(error_cls):
|
||||
import spack.repo
|
||||
import spack.spec
|
||||
import spack.util.crypto
|
||||
import spack.util.spack_yaml as syaml
|
||||
import spack.variant
|
||||
|
||||
#: Map an audit tag to a list of callables implementing checks
|
||||
@@ -252,88 +250,6 @@ def _search_duplicate_specs_in_externals(error_cls):
|
||||
return errors
|
||||
|
||||
|
||||
@config_packages
|
||||
def _deprecated_preferences(error_cls):
|
||||
"""Search package preferences deprecated in v0.21 (and slated for removal in v0.22)"""
|
||||
# TODO (v0.22): remove this audit as the attributes will not be allowed in config
|
||||
errors = []
|
||||
packages_yaml = spack.config.CONFIG.get_config("packages")
|
||||
|
||||
def make_error(attribute_name, config_data, summary):
|
||||
s = io.StringIO()
|
||||
s.write("Occurring in the following file:\n")
|
||||
dict_view = syaml.syaml_dict((k, v) for k, v in config_data.items() if k == attribute_name)
|
||||
syaml.dump_config(dict_view, stream=s, blame=True)
|
||||
return error_cls(summary=summary, details=[s.getvalue()])
|
||||
|
||||
if "all" in packages_yaml and "version" in packages_yaml["all"]:
|
||||
summary = "Using the deprecated 'version' attribute under 'packages:all'"
|
||||
errors.append(make_error("version", packages_yaml["all"], summary))
|
||||
|
||||
for package_name in packages_yaml:
|
||||
if package_name == "all":
|
||||
continue
|
||||
|
||||
package_conf = packages_yaml[package_name]
|
||||
for attribute in ("compiler", "providers", "target"):
|
||||
if attribute not in package_conf:
|
||||
continue
|
||||
summary = (
|
||||
f"Using the deprecated '{attribute}' attribute " f"under 'packages:{package_name}'"
|
||||
)
|
||||
errors.append(make_error(attribute, package_conf, summary))
|
||||
|
||||
return errors
|
||||
|
||||
|
||||
@config_packages
|
||||
def _avoid_mismatched_variants(error_cls):
|
||||
"""Warns if variant preferences have mismatched types or names."""
|
||||
errors = []
|
||||
packages_yaml = spack.config.CONFIG.get_config("packages")
|
||||
|
||||
def make_error(config_data, summary):
|
||||
s = io.StringIO()
|
||||
s.write("Occurring in the following file:\n")
|
||||
syaml.dump_config(config_data, stream=s, blame=True)
|
||||
return error_cls(summary=summary, details=[s.getvalue()])
|
||||
|
||||
for pkg_name in packages_yaml:
|
||||
# 'all:' must be more forgiving, since it is setting defaults for everything
|
||||
if pkg_name == "all" or "variants" not in packages_yaml[pkg_name]:
|
||||
continue
|
||||
|
||||
preferences = packages_yaml[pkg_name]["variants"]
|
||||
if not isinstance(preferences, list):
|
||||
preferences = [preferences]
|
||||
|
||||
for variants in preferences:
|
||||
current_spec = spack.spec.Spec(variants)
|
||||
pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)
|
||||
for variant in current_spec.variants.values():
|
||||
# Variant does not exist at all
|
||||
if variant.name not in pkg_cls.variants:
|
||||
summary = (
|
||||
f"Setting a preference for the '{pkg_name}' package to the "
|
||||
f"non-existing variant '{variant.name}'"
|
||||
)
|
||||
errors.append(make_error(preferences, summary))
|
||||
continue
|
||||
|
||||
# Variant cannot accept this value
|
||||
s = spack.spec.Spec(pkg_name)
|
||||
try:
|
||||
s.update_variant_validate(variant.name, variant.value)
|
||||
except Exception:
|
||||
summary = (
|
||||
f"Setting the variant '{variant.name}' of the '{pkg_name}' package "
|
||||
f"to the invalid value '{str(variant)}'"
|
||||
)
|
||||
errors.append(make_error(preferences, summary))
|
||||
|
||||
return errors
|
||||
|
||||
|
||||
#: Sanity checks on package directives
|
||||
package_directives = AuditClass(
|
||||
group="packages",
|
||||
@@ -860,7 +776,7 @@ def _version_constraints_are_satisfiable_by_some_version_in_repo(pkgs, error_cls
|
||||
)
|
||||
except Exception:
|
||||
summary = (
|
||||
"{0}: dependency on {1} cannot be satisfied by known versions of {1.name}"
|
||||
"{0}: dependency on {1} cannot be satisfied " "by known versions of {1.name}"
|
||||
).format(pkg_name, s)
|
||||
details = ["happening in " + filename]
|
||||
if dependency_pkg_cls is not None:
|
||||
@@ -902,53 +818,6 @@ def _analyze_variants_in_directive(pkg, constraint, directive, error_cls):
|
||||
return errors
|
||||
|
||||
|
||||
@package_directives
|
||||
def _named_specs_in_when_arguments(pkgs, error_cls):
|
||||
"""Reports named specs in the 'when=' attribute of a directive.
|
||||
|
||||
Note that 'conflicts' is the only directive allowing that.
|
||||
"""
|
||||
errors = []
|
||||
for pkg_name in pkgs:
|
||||
pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)
|
||||
|
||||
def _extracts_errors(triggers, summary):
|
||||
_errors = []
|
||||
for trigger in list(triggers):
|
||||
when_spec = spack.spec.Spec(trigger)
|
||||
if when_spec.name is not None and when_spec.name != pkg_name:
|
||||
details = [f"using '{trigger}', should be '^{trigger}'"]
|
||||
_errors.append(error_cls(summary=summary, details=details))
|
||||
return _errors
|
||||
|
||||
for dname, triggers in pkg_cls.dependencies.items():
|
||||
summary = f"{pkg_name}: wrong 'when=' condition for the '{dname}' dependency"
|
||||
errors.extend(_extracts_errors(triggers, summary))
|
||||
|
||||
for vname, (variant, triggers) in pkg_cls.variants.items():
|
||||
summary = f"{pkg_name}: wrong 'when=' condition for the '{vname}' variant"
|
||||
errors.extend(_extracts_errors(triggers, summary))
|
||||
|
||||
for provided, triggers in pkg_cls.provided.items():
|
||||
summary = f"{pkg_name}: wrong 'when=' condition for the '{provided}' virtual"
|
||||
errors.extend(_extracts_errors(triggers, summary))
|
||||
|
||||
for _, triggers in pkg_cls.requirements.items():
|
||||
triggers = [when_spec for when_spec, _, _ in triggers]
|
||||
summary = f"{pkg_name}: wrong 'when=' condition in 'requires' directive"
|
||||
errors.extend(_extracts_errors(triggers, summary))
|
||||
|
||||
triggers = list(pkg_cls.patches)
|
||||
summary = f"{pkg_name}: wrong 'when=' condition in 'patch' directives"
|
||||
errors.extend(_extracts_errors(triggers, summary))
|
||||
|
||||
triggers = list(pkg_cls.resources)
|
||||
summary = f"{pkg_name}: wrong 'when=' condition in 'resource' directives"
|
||||
errors.extend(_extracts_errors(triggers, summary))
|
||||
|
||||
return llnl.util.lang.dedupe(errors)
|
||||
|
||||
|
||||
#: Sanity checks on package directives
|
||||
external_detection = AuditClass(
|
||||
group="externals",
|
||||
|
||||
@@ -69,7 +69,6 @@
|
||||
BUILD_CACHE_RELATIVE_PATH = "build_cache"
|
||||
BUILD_CACHE_KEYS_RELATIVE_PATH = "_pgp"
|
||||
CURRENT_BUILD_CACHE_LAYOUT_VERSION = 1
|
||||
FORWARD_COMPAT_BUILD_CACHE_LAYOUT_VERSION = 2
|
||||
|
||||
|
||||
class BuildCacheDatabase(spack_db.Database):
|
||||
@@ -1697,7 +1696,7 @@ def download_tarball(spec, unsigned=False, mirrors_for_spec=None):
|
||||
try:
|
||||
_get_valid_spec_file(
|
||||
local_specfile_stage.save_filename,
|
||||
FORWARD_COMPAT_BUILD_CACHE_LAYOUT_VERSION,
|
||||
CURRENT_BUILD_CACHE_LAYOUT_VERSION,
|
||||
)
|
||||
except InvalidMetadataFile as e:
|
||||
tty.warn(
|
||||
@@ -1738,7 +1737,7 @@ def download_tarball(spec, unsigned=False, mirrors_for_spec=None):
|
||||
|
||||
try:
|
||||
_get_valid_spec_file(
|
||||
local_specfile_path, FORWARD_COMPAT_BUILD_CACHE_LAYOUT_VERSION
|
||||
local_specfile_path, CURRENT_BUILD_CACHE_LAYOUT_VERSION
|
||||
)
|
||||
except InvalidMetadataFile as e:
|
||||
tty.warn(
|
||||
@@ -2027,12 +2026,11 @@ def _extract_inner_tarball(spec, filename, extract_to, unsigned, remote_checksum
|
||||
|
||||
|
||||
def _tar_strip_component(tar: tarfile.TarFile, prefix: str):
|
||||
"""Yield all members of tarfile that start with given prefix, and strip that prefix (including
|
||||
symlinks)"""
|
||||
"""Strip the top-level directory `prefix` from the member names in a tarfile."""
|
||||
# Including trailing /, otherwise we end up with absolute paths.
|
||||
regex = re.compile(re.escape(prefix) + "/*")
|
||||
|
||||
# Only yield members in the package prefix.
|
||||
# Remove the top-level directory from the member (link)names.
|
||||
# Note: when a tarfile is created, relative in-prefix symlinks are
|
||||
# expanded to matching member names of tarfile entries. So, we have
|
||||
# to ensure that those are updated too.
|
||||
@@ -2040,14 +2038,12 @@ def _tar_strip_component(tar: tarfile.TarFile, prefix: str):
|
||||
# them.
|
||||
for m in tar.getmembers():
|
||||
result = regex.match(m.name)
|
||||
if not result:
|
||||
continue
|
||||
assert result is not None
|
||||
m.name = m.name[result.end() :]
|
||||
if m.linkname:
|
||||
result = regex.match(m.linkname)
|
||||
if result:
|
||||
m.linkname = m.linkname[result.end() :]
|
||||
yield m
|
||||
|
||||
|
||||
def extract_tarball(spec, download_result, unsigned=False, force=False, timer=timer.NULL_TIMER):
|
||||
@@ -2071,7 +2067,7 @@ def extract_tarball(spec, download_result, unsigned=False, force=False, timer=ti
|
||||
|
||||
specfile_path = download_result["specfile_stage"].save_filename
|
||||
spec_dict, layout_version = _get_valid_spec_file(
|
||||
specfile_path, FORWARD_COMPAT_BUILD_CACHE_LAYOUT_VERSION
|
||||
specfile_path, CURRENT_BUILD_CACHE_LAYOUT_VERSION
|
||||
)
|
||||
bchecksum = spec_dict["binary_cache_checksum"]
|
||||
|
||||
@@ -2090,7 +2086,7 @@ def extract_tarball(spec, download_result, unsigned=False, force=False, timer=ti
|
||||
_delete_staged_downloads(download_result)
|
||||
shutil.rmtree(tmpdir)
|
||||
raise e
|
||||
elif 1 <= layout_version <= 2:
|
||||
elif layout_version == 1:
|
||||
# Newer buildcache layout: the .spack file contains just
|
||||
# in the install tree, the signature, if it exists, is
|
||||
# wrapped around the spec.json at the root. If sig verify
|
||||
@@ -2117,10 +2113,8 @@ def extract_tarball(spec, download_result, unsigned=False, force=False, timer=ti
|
||||
try:
|
||||
with closing(tarfile.open(tarfile_path, "r")) as tar:
|
||||
# Remove install prefix from tarfil to extract directly into spec.prefix
|
||||
tar.extractall(
|
||||
path=spec.prefix,
|
||||
members=_tar_strip_component(tar, prefix=_ensure_common_prefix(tar)),
|
||||
)
|
||||
_tar_strip_component(tar, prefix=_ensure_common_prefix(tar))
|
||||
tar.extractall(path=spec.prefix)
|
||||
except Exception:
|
||||
shutil.rmtree(spec.prefix, ignore_errors=True)
|
||||
_delete_staged_downloads(download_result)
|
||||
@@ -2155,47 +2149,20 @@ def extract_tarball(spec, download_result, unsigned=False, force=False, timer=ti
|
||||
|
||||
|
||||
def _ensure_common_prefix(tar: tarfile.TarFile) -> str:
|
||||
# Find the lowest `binary_distribution` file (hard-coded forward slash is on purpose).
|
||||
binary_distribution = min(
|
||||
(
|
||||
e.name
|
||||
for e in tar.getmembers()
|
||||
if e.isfile() and e.name.endswith(".spack/binary_distribution")
|
||||
),
|
||||
key=len,
|
||||
default=None,
|
||||
)
|
||||
# Get the shortest length directory.
|
||||
common_prefix = min((e.name for e in tar.getmembers() if e.isdir()), key=len, default=None)
|
||||
|
||||
if binary_distribution is None:
|
||||
raise ValueError("Tarball is not a Spack package, missing binary_distribution file")
|
||||
if common_prefix is None:
|
||||
raise ValueError("Tarball does not contain a common prefix")
|
||||
|
||||
pkg_path = pathlib.PurePosixPath(binary_distribution).parent.parent
|
||||
|
||||
# Even the most ancient Spack version has required to list the dir of the package itself, so
|
||||
# guard against broken tarballs where `path.parent.parent` is empty.
|
||||
if pkg_path == pathlib.PurePosixPath():
|
||||
raise ValueError("Invalid tarball, missing package prefix dir")
|
||||
|
||||
pkg_prefix = str(pkg_path)
|
||||
|
||||
# Ensure all tar entries are in the pkg_prefix dir, and if they're not, they should be parent
|
||||
# dirs of it.
|
||||
has_prefix = False
|
||||
# Validate that each file starts with the prefix
|
||||
for member in tar.getmembers():
|
||||
stripped = member.name.rstrip("/")
|
||||
if not (
|
||||
stripped.startswith(pkg_prefix) or member.isdir() and pkg_prefix.startswith(stripped)
|
||||
):
|
||||
raise ValueError(f"Tarball contains file {stripped} outside of prefix {pkg_prefix}")
|
||||
if member.isdir() and stripped == pkg_prefix:
|
||||
has_prefix = True
|
||||
if not member.name.startswith(common_prefix):
|
||||
raise ValueError(
|
||||
f"Tarball contains file {member.name} outside of prefix {common_prefix}"
|
||||
)
|
||||
|
||||
# This is technically not required, but let's be defensive about the existence of the package
|
||||
# prefix dir.
|
||||
if not has_prefix:
|
||||
raise ValueError(f"Tarball does not contain a common prefix {pkg_prefix}")
|
||||
|
||||
return pkg_prefix
|
||||
return common_prefix
|
||||
|
||||
|
||||
def install_root_node(spec, unsigned=False, force=False, sha256=None):
|
||||
|
||||
@@ -213,8 +213,7 @@ def _root_spec(spec_str: str) -> str:
|
||||
if str(spack.platforms.host()) == "darwin":
|
||||
spec_str += " %apple-clang"
|
||||
elif str(spack.platforms.host()) == "windows":
|
||||
# TODO (johnwparent): Remove version constraint when clingo patch is up
|
||||
spec_str += " %msvc@:19.37"
|
||||
spec_str += " %msvc"
|
||||
else:
|
||||
spec_str += " %gcc"
|
||||
|
||||
|
||||
@@ -324,29 +324,19 @@ def set_compiler_environment_variables(pkg, env):
|
||||
# ttyout, ttyerr, etc.
|
||||
link_dir = spack.paths.build_env_path
|
||||
|
||||
# Set SPACK compiler variables so that our wrapper knows what to
|
||||
# call. If there is no compiler configured then use a default
|
||||
# wrapper which will emit an error if it is used.
|
||||
# Set SPACK compiler variables so that our wrapper knows what to call
|
||||
if compiler.cc:
|
||||
env.set("SPACK_CC", compiler.cc)
|
||||
env.set("CC", os.path.join(link_dir, compiler.link_paths["cc"]))
|
||||
else:
|
||||
env.set("CC", os.path.join(link_dir, "cc"))
|
||||
if compiler.cxx:
|
||||
env.set("SPACK_CXX", compiler.cxx)
|
||||
env.set("CXX", os.path.join(link_dir, compiler.link_paths["cxx"]))
|
||||
else:
|
||||
env.set("CC", os.path.join(link_dir, "c++"))
|
||||
if compiler.f77:
|
||||
env.set("SPACK_F77", compiler.f77)
|
||||
env.set("F77", os.path.join(link_dir, compiler.link_paths["f77"]))
|
||||
else:
|
||||
env.set("F77", os.path.join(link_dir, "f77"))
|
||||
if compiler.fc:
|
||||
env.set("SPACK_FC", compiler.fc)
|
||||
env.set("FC", os.path.join(link_dir, compiler.link_paths["fc"]))
|
||||
else:
|
||||
env.set("FC", os.path.join(link_dir, "fc"))
|
||||
|
||||
# Set SPACK compiler rpath flags so that our wrapper knows what to use
|
||||
env.set("SPACK_CC_RPATH_ARG", compiler.cc_rpath_arg)
|
||||
@@ -753,16 +743,15 @@ def setup_package(pkg, dirty, context: Context = Context.BUILD):
|
||||
set_compiler_environment_variables(pkg, env_mods)
|
||||
set_wrapper_variables(pkg, env_mods)
|
||||
|
||||
# Platform specific setup goes before package specific setup. This is for setting
|
||||
# defaults like MACOSX_DEPLOYMENT_TARGET on macOS.
|
||||
platform = spack.platforms.by_name(pkg.spec.architecture.platform)
|
||||
target = platform.target(pkg.spec.architecture.target)
|
||||
platform.setup_platform_environment(pkg, env_mods)
|
||||
|
||||
tty.debug("setup_package: grabbing modifications from dependencies")
|
||||
env_mods.extend(setup_context.get_env_modifications())
|
||||
tty.debug("setup_package: collected all modifications from dependencies")
|
||||
|
||||
# architecture specific setup
|
||||
platform = spack.platforms.by_name(pkg.spec.architecture.platform)
|
||||
target = platform.target(pkg.spec.architecture.target)
|
||||
platform.setup_platform_environment(pkg, env_mods)
|
||||
|
||||
if context == Context.TEST:
|
||||
env_mods.prepend_path("PATH", ".")
|
||||
elif context == Context.BUILD and not dirty and not env_mods.is_unset("CPATH"):
|
||||
@@ -1333,7 +1322,7 @@ def make_stack(tb, stack=None):
|
||||
# don't provide context if the code is actually in the base classes.
|
||||
obj = frame.f_locals["self"]
|
||||
func = getattr(obj, tb.tb_frame.f_code.co_name, "")
|
||||
if func and hasattr(func, "__qualname__"):
|
||||
if func:
|
||||
typename, *_ = func.__qualname__.partition(".")
|
||||
if isinstance(obj, CONTEXT_BASES) and typename not in basenames:
|
||||
break
|
||||
|
||||
@@ -34,6 +34,11 @@ def cmake_cache_option(name, boolean_value, comment="", force=False):
|
||||
return 'set({0} {1} CACHE BOOL "{2}"{3})\n'.format(name, value, comment, force_str)
|
||||
|
||||
|
||||
def cmake_cache_filepath(name, value, comment=""):
|
||||
"""Generate a string for a cmake cache variable of type FILEPATH"""
|
||||
return 'set({0} "{1}" CACHE FILEPATH "{2}")\n'.format(name, value, comment)
|
||||
|
||||
|
||||
class CachedCMakeBuilder(CMakeBuilder):
|
||||
#: Phases of a Cached CMake package
|
||||
#: Note: the initconfig phase is used for developer builds as a final phase to stop on
|
||||
@@ -257,6 +262,15 @@ def initconfig_hardware_entries(self):
|
||||
entries.append(
|
||||
cmake_cache_path("HIP_CXX_COMPILER", "{0}".format(self.spec["hip"].hipcc))
|
||||
)
|
||||
llvm_bin = spec["llvm-amdgpu"].prefix.bin
|
||||
llvm_prefix = spec["llvm-amdgpu"].prefix
|
||||
# Some ROCm systems seem to point to /<path>/rocm-<ver>/ and
|
||||
# others point to /<path>/rocm-<ver>/llvm
|
||||
if os.path.basename(os.path.normpath(llvm_prefix)) != "llvm":
|
||||
llvm_bin = os.path.join(llvm_prefix, "llvm/bin/")
|
||||
entries.append(
|
||||
cmake_cache_filepath("CMAKE_HIP_COMPILER", os.path.join(llvm_bin, "clang++"))
|
||||
)
|
||||
archs = self.spec.variants["amdgpu_target"].value
|
||||
if archs[0] != "none":
|
||||
arch_str = ";".join(archs)
|
||||
@@ -277,7 +291,7 @@ def std_initconfig_entries(self):
|
||||
"#------------------{0}".format("-" * 60),
|
||||
"# CMake executable path: {0}".format(self.pkg.spec["cmake"].command.path),
|
||||
"#------------------{0}\n".format("-" * 60),
|
||||
cmake_cache_path("CMAKE_PREFIX_PATH", cmake_prefix_path),
|
||||
cmake_cache_string("CMAKE_PREFIX_PATH", cmake_prefix_path),
|
||||
self.define_cmake_cache_from_variant("CMAKE_BUILD_TYPE", "build_type"),
|
||||
]
|
||||
|
||||
|
||||
@@ -46,7 +46,22 @@
|
||||
from spack.reporters import CDash, CDashConfiguration
|
||||
from spack.reporters.cdash import build_stamp as cdash_build_stamp
|
||||
|
||||
JOB_RETRY_CONDITIONS = ["always"]
|
||||
# See https://docs.gitlab.com/ee/ci/yaml/#retry for descriptions of conditions
|
||||
JOB_RETRY_CONDITIONS = [
|
||||
# "always",
|
||||
"unknown_failure",
|
||||
"script_failure",
|
||||
"api_failure",
|
||||
"stuck_or_timeout_failure",
|
||||
"runner_system_failure",
|
||||
"runner_unsupported",
|
||||
"stale_schedule",
|
||||
# "job_execution_timeout",
|
||||
"archived_failure",
|
||||
"unmet_prerequisites",
|
||||
"scheduler_failure",
|
||||
"data_integrity_failure",
|
||||
]
|
||||
|
||||
TEMP_STORAGE_MIRROR_NAME = "ci_temporary_mirror"
|
||||
SPACK_RESERVED_TAGS = ["public", "protected", "notary"]
|
||||
|
||||
@@ -2,8 +2,6 @@
|
||||
# Spack Project Developers. See the top-level COPYRIGHT file for details.
|
||||
#
|
||||
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
|
||||
import warnings
|
||||
|
||||
import llnl.util.tty as tty
|
||||
import llnl.util.tty.colify
|
||||
import llnl.util.tty.color as cl
|
||||
@@ -54,10 +52,8 @@ def setup_parser(subparser):
|
||||
|
||||
|
||||
def configs(parser, args):
|
||||
with warnings.catch_warnings():
|
||||
warnings.simplefilter("ignore")
|
||||
reports = spack.audit.run_group(args.subcommand)
|
||||
_process_reports(reports)
|
||||
reports = spack.audit.run_group(args.subcommand)
|
||||
_process_reports(reports)
|
||||
|
||||
|
||||
def packages(parser, args):
|
||||
|
||||
@@ -7,14 +7,13 @@
|
||||
import glob
|
||||
import hashlib
|
||||
import json
|
||||
import multiprocessing
|
||||
import multiprocessing.pool
|
||||
import os
|
||||
import shutil
|
||||
import sys
|
||||
import tempfile
|
||||
import urllib.request
|
||||
from typing import Dict, List, Optional, Tuple, Union
|
||||
from typing import Dict, List, Optional, Tuple
|
||||
|
||||
import llnl.util.tty as tty
|
||||
from llnl.string import plural
|
||||
@@ -308,30 +307,8 @@ def _progress(i: int, total: int):
|
||||
return ""
|
||||
|
||||
|
||||
class NoPool:
|
||||
def map(self, func, args):
|
||||
return [func(a) for a in args]
|
||||
|
||||
def starmap(self, func, args):
|
||||
return [func(*a) for a in args]
|
||||
|
||||
def __enter__(self):
|
||||
return self
|
||||
|
||||
def __exit__(self, *args):
|
||||
pass
|
||||
|
||||
|
||||
MaybePool = Union[multiprocessing.pool.Pool, NoPool]
|
||||
|
||||
|
||||
def _make_pool() -> MaybePool:
|
||||
"""Can't use threading because it's unsafe, and can't use spawned processes because of globals.
|
||||
That leaves only forking"""
|
||||
if multiprocessing.get_start_method() == "fork":
|
||||
return multiprocessing.pool.Pool(determine_number_of_jobs(parallel=True))
|
||||
else:
|
||||
return NoPool()
|
||||
def _make_pool():
|
||||
return multiprocessing.pool.Pool(determine_number_of_jobs(parallel=True))
|
||||
|
||||
|
||||
def push_fn(args):
|
||||
@@ -614,7 +591,7 @@ def _push_oci(
|
||||
image_ref: ImageReference,
|
||||
installed_specs_with_deps: List[Spec],
|
||||
tmpdir: str,
|
||||
pool: MaybePool,
|
||||
pool: multiprocessing.pool.Pool,
|
||||
) -> List[str]:
|
||||
"""Push specs to an OCI registry
|
||||
|
||||
@@ -715,10 +692,11 @@ def _config_from_tag(image_ref: ImageReference, tag: str) -> Optional[dict]:
|
||||
return config if "spec" in config else None
|
||||
|
||||
|
||||
def _update_index_oci(image_ref: ImageReference, tmpdir: str, pool: MaybePool) -> None:
|
||||
request = urllib.request.Request(url=image_ref.tags_url())
|
||||
response = spack.oci.opener.urlopen(request)
|
||||
spack.oci.opener.ensure_status(request, response, 200)
|
||||
def _update_index_oci(
|
||||
image_ref: ImageReference, tmpdir: str, pool: multiprocessing.pool.Pool
|
||||
) -> None:
|
||||
response = spack.oci.opener.urlopen(urllib.request.Request(url=image_ref.tags_url()))
|
||||
spack.oci.opener.ensure_status(response, 200)
|
||||
tags = json.load(response)["tags"]
|
||||
|
||||
# Fetch all image config files in parallel
|
||||
|
||||
@@ -200,6 +200,8 @@ def diff(parser, args):
|
||||
|
||||
specs = []
|
||||
for spec in spack.cmd.parse_specs(args.specs):
|
||||
# If the spec has a hash, check it before disambiguating
|
||||
spec.replace_hash()
|
||||
if spec.concrete:
|
||||
specs.append(spec)
|
||||
else:
|
||||
|
||||
@@ -61,7 +61,7 @@ def graph(parser, args):
|
||||
args.dot = True
|
||||
env = ev.active_environment()
|
||||
if env:
|
||||
specs = env.concrete_roots()
|
||||
specs = env.all_specs()
|
||||
else:
|
||||
specs = spack.store.STORE.db.query()
|
||||
|
||||
|
||||
@@ -139,7 +139,7 @@ def lines(self):
|
||||
yield " " + self.fmt % t
|
||||
|
||||
|
||||
def print_dependencies(pkg):
|
||||
def print_dependencies(pkg, args):
|
||||
"""output build, link, and run package dependencies"""
|
||||
|
||||
for deptype in ("build", "link", "run"):
|
||||
@@ -152,7 +152,7 @@ def print_dependencies(pkg):
|
||||
color.cprint(" None")
|
||||
|
||||
|
||||
def print_detectable(pkg):
|
||||
def print_detectable(pkg, args):
|
||||
"""output information on external detection"""
|
||||
|
||||
color.cprint("")
|
||||
@@ -180,7 +180,7 @@ def print_detectable(pkg):
|
||||
color.cprint(" False")
|
||||
|
||||
|
||||
def print_maintainers(pkg):
|
||||
def print_maintainers(pkg, args):
|
||||
"""output package maintainers"""
|
||||
|
||||
if len(pkg.maintainers) > 0:
|
||||
@@ -189,7 +189,7 @@ def print_maintainers(pkg):
|
||||
color.cprint(section_title("Maintainers: ") + mnt)
|
||||
|
||||
|
||||
def print_phases(pkg):
|
||||
def print_phases(pkg, args):
|
||||
"""output installation phases"""
|
||||
|
||||
if hasattr(pkg.builder, "phases") and pkg.builder.phases:
|
||||
@@ -201,7 +201,7 @@ def print_phases(pkg):
|
||||
color.cprint(phase_str)
|
||||
|
||||
|
||||
def print_tags(pkg):
|
||||
def print_tags(pkg, args):
|
||||
"""output package tags"""
|
||||
|
||||
color.cprint("")
|
||||
@@ -213,7 +213,7 @@ def print_tags(pkg):
|
||||
color.cprint(" None")
|
||||
|
||||
|
||||
def print_tests(pkg):
|
||||
def print_tests(pkg, args):
|
||||
"""output relevant build-time and stand-alone tests"""
|
||||
|
||||
# Some built-in base packages (e.g., Autotools) define callback (e.g.,
|
||||
@@ -327,7 +327,7 @@ def _variants_by_name_when(pkg):
|
||||
"""Adaptor to get variants keyed by { name: { when: { [Variant...] } }."""
|
||||
# TODO: replace with pkg.variants_by_name(when=True) when unified directive dicts are merged.
|
||||
variants = {}
|
||||
for name, (variant, whens) in sorted(pkg.variants.items()):
|
||||
for name, (variant, whens) in pkg.variants.items():
|
||||
for when in whens:
|
||||
variants.setdefault(name, {}).setdefault(when, []).append(variant)
|
||||
return variants
|
||||
@@ -407,12 +407,15 @@ def print_variants_by_name(pkg):
|
||||
sys.stdout.write("\n")
|
||||
|
||||
|
||||
def print_variants(pkg):
|
||||
def print_variants(pkg, args):
|
||||
"""output variants"""
|
||||
print_variants_grouped_by_when(pkg)
|
||||
if args.variants_by_name:
|
||||
print_variants_by_name(pkg)
|
||||
else:
|
||||
print_variants_grouped_by_when(pkg)
|
||||
|
||||
|
||||
def print_versions(pkg):
|
||||
def print_versions(pkg, args):
|
||||
"""output versions"""
|
||||
|
||||
color.cprint("")
|
||||
@@ -465,7 +468,7 @@ def get_url(version):
|
||||
color.cprint(line)
|
||||
|
||||
|
||||
def print_virtuals(pkg):
|
||||
def print_virtuals(pkg, args):
|
||||
"""output virtual packages"""
|
||||
|
||||
color.cprint("")
|
||||
@@ -488,7 +491,7 @@ def print_virtuals(pkg):
|
||||
color.cprint(" None")
|
||||
|
||||
|
||||
def print_licenses(pkg):
|
||||
def print_licenses(pkg, args):
|
||||
"""Output the licenses of the project."""
|
||||
|
||||
color.cprint("")
|
||||
@@ -523,17 +526,13 @@ def info(parser, args):
|
||||
if getattr(pkg, "homepage"):
|
||||
color.cprint(section_title("Homepage: ") + pkg.homepage)
|
||||
|
||||
_print_variants = (
|
||||
print_variants_by_name if args.variants_by_name else print_variants_grouped_by_when
|
||||
)
|
||||
|
||||
# Now output optional information in expected order
|
||||
sections = [
|
||||
(args.all or args.maintainers, print_maintainers),
|
||||
(args.all or args.detectable, print_detectable),
|
||||
(args.all or args.tags, print_tags),
|
||||
(args.all or not args.no_versions, print_versions),
|
||||
(args.all or not args.no_variants, _print_variants),
|
||||
(args.all or not args.no_variants, print_variants),
|
||||
(args.all or args.phases, print_phases),
|
||||
(args.all or not args.no_dependencies, print_dependencies),
|
||||
(args.all or args.virtuals, print_virtuals),
|
||||
@@ -542,6 +541,6 @@ def info(parser, args):
|
||||
]
|
||||
for print_it, func in sections:
|
||||
if print_it:
|
||||
func(pkg)
|
||||
func(pkg, args)
|
||||
|
||||
color.cprint("")
|
||||
|
||||
@@ -154,14 +154,6 @@ def add_compilers_to_config(compilers, scope=None, init_config=True):
|
||||
"""
|
||||
compiler_config = get_compiler_config(scope, init_config)
|
||||
for compiler in compilers:
|
||||
if not compiler.cc:
|
||||
tty.debug(f"{compiler.spec} does not have a C compiler")
|
||||
if not compiler.cxx:
|
||||
tty.debug(f"{compiler.spec} does not have a C++ compiler")
|
||||
if not compiler.f77:
|
||||
tty.debug(f"{compiler.spec} does not have a Fortran77 compiler")
|
||||
if not compiler.fc:
|
||||
tty.debug(f"{compiler.spec} does not have a Fortran compiler")
|
||||
compiler_config.append(_to_dict(compiler))
|
||||
spack.config.set("compilers", compiler_config, scope=scope)
|
||||
|
||||
@@ -514,10 +506,9 @@ def get_compilers(config, cspec=None, arch_spec=None):
|
||||
for items in config:
|
||||
items = items["compiler"]
|
||||
|
||||
# We might use equality here.
|
||||
if cspec and not spack.spec.parse_with_version_concrete(
|
||||
items["spec"], compiler=True
|
||||
).satisfies(cspec):
|
||||
# NOTE: in principle this should be equality not satisfies, but config can still
|
||||
# be written in old format gcc@10.1.0 instead of gcc@=10.1.0.
|
||||
if cspec and not cspec.satisfies(items["spec"]):
|
||||
continue
|
||||
|
||||
# If an arch spec is given, confirm that this compiler
|
||||
|
||||
@@ -40,7 +40,6 @@ def debug_flags(self):
|
||||
"-gdwarf-5",
|
||||
"-gline-tables-only",
|
||||
"-gmodules",
|
||||
"-gz",
|
||||
"-g",
|
||||
]
|
||||
|
||||
|
||||
@@ -55,7 +55,6 @@ def debug_flags(self):
|
||||
"-gdwarf-5",
|
||||
"-gline-tables-only",
|
||||
"-gmodules",
|
||||
"-gz",
|
||||
"-g",
|
||||
]
|
||||
|
||||
|
||||
@@ -380,13 +380,14 @@ def _print_timer(pre: str, pkg_id: str, timer: timer.BaseTimer) -> None:
|
||||
|
||||
|
||||
def _install_from_cache(
|
||||
pkg: "spack.package_base.PackageBase", explicit: bool, unsigned: bool = False
|
||||
pkg: "spack.package_base.PackageBase", cache_only: bool, explicit: bool, unsigned: bool = False
|
||||
) -> bool:
|
||||
"""
|
||||
Install the package from binary cache
|
||||
Extract the package from binary cache
|
||||
|
||||
Args:
|
||||
pkg: package to install from the binary cache
|
||||
cache_only: only extract from binary cache
|
||||
explicit: ``True`` if installing the package was explicitly
|
||||
requested by the user, otherwise, ``False``
|
||||
unsigned: ``True`` if binary package signatures to be checked,
|
||||
@@ -398,11 +399,15 @@ def _install_from_cache(
|
||||
installed_from_cache = _try_install_from_binary_cache(
|
||||
pkg, explicit, unsigned=unsigned, timer=t
|
||||
)
|
||||
pkg_id = package_id(pkg)
|
||||
if not installed_from_cache:
|
||||
pre = f"No binary for {pkg_id} found"
|
||||
if cache_only:
|
||||
tty.die(f"{pre} when cache-only specified")
|
||||
|
||||
tty.msg(f"{pre}: installing from source")
|
||||
return False
|
||||
t.stop()
|
||||
|
||||
pkg_id = package_id(pkg)
|
||||
tty.debug(f"Successfully extracted {pkg_id} from binary cache")
|
||||
|
||||
_write_timer_json(pkg, t, True)
|
||||
@@ -1330,6 +1335,7 @@ def _prepare_for_install(self, task: BuildTask) -> None:
|
||||
"""
|
||||
install_args = task.request.install_args
|
||||
keep_prefix = install_args.get("keep_prefix")
|
||||
restage = install_args.get("restage")
|
||||
|
||||
# Make sure the package is ready to be locally installed.
|
||||
self._ensure_install_ready(task.pkg)
|
||||
@@ -1361,6 +1367,10 @@ def _prepare_for_install(self, task: BuildTask) -> None:
|
||||
else:
|
||||
tty.debug(f"{task.pkg_id} is partially installed")
|
||||
|
||||
# Destroy the stage for a locally installed, non-DIYStage, package
|
||||
if restage and task.pkg.stage.managed_by_spack:
|
||||
task.pkg.stage.destroy()
|
||||
|
||||
if (
|
||||
rec
|
||||
and installed_in_db
|
||||
@@ -1661,16 +1671,11 @@ def _install_task(self, task: BuildTask, install_status: InstallStatus) -> None:
|
||||
task.status = STATUS_INSTALLING
|
||||
|
||||
# Use the binary cache if requested
|
||||
if use_cache:
|
||||
if _install_from_cache(pkg, explicit, unsigned):
|
||||
self._update_installed(task)
|
||||
if task.compiler:
|
||||
self._add_compiler_package_to_config(pkg)
|
||||
return
|
||||
elif cache_only:
|
||||
raise InstallError("No binary found when cache-only was specified", pkg=pkg)
|
||||
else:
|
||||
tty.msg(f"No binary for {pkg_id} found: installing from source")
|
||||
if use_cache and _install_from_cache(pkg, cache_only, explicit, unsigned):
|
||||
self._update_installed(task)
|
||||
if task.compiler:
|
||||
self._add_compiler_package_to_config(pkg)
|
||||
return
|
||||
|
||||
pkg.run_tests = tests if isinstance(tests, bool) else pkg.name in tests
|
||||
|
||||
@@ -1686,10 +1691,6 @@ def _install_task(self, task: BuildTask, install_status: InstallStatus) -> None:
|
||||
try:
|
||||
self._setup_install_dir(pkg)
|
||||
|
||||
# Create stage object now and let it be serialized for the child process. That
|
||||
# way monkeypatch in tests works correctly.
|
||||
pkg.stage
|
||||
|
||||
# Create a child process to do the actual installation.
|
||||
# Preserve verbosity settings across installs.
|
||||
spack.package_base.PackageBase._verbose = spack.build_environment.start_build_process(
|
||||
@@ -2222,6 +2223,11 @@ def install(self) -> None:
|
||||
if not keep_prefix and not action == InstallAction.OVERWRITE:
|
||||
pkg.remove_prefix()
|
||||
|
||||
# The subprocess *may* have removed the build stage. Mark it
|
||||
# not created so that the next time pkg.stage is invoked, we
|
||||
# check the filesystem for it.
|
||||
pkg.stage.created = False
|
||||
|
||||
# Perform basic task cleanup for the installed spec to
|
||||
# include downgrading the write to a read lock
|
||||
self._cleanup_task(pkg)
|
||||
@@ -2291,9 +2297,6 @@ def __init__(self, pkg: "spack.package_base.PackageBase", install_args: dict):
|
||||
# whether to keep the build stage after installation
|
||||
self.keep_stage = install_args.get("keep_stage", False)
|
||||
|
||||
# whether to restage
|
||||
self.restage = install_args.get("restage", False)
|
||||
|
||||
# whether to skip the patch phase
|
||||
self.skip_patch = install_args.get("skip_patch", False)
|
||||
|
||||
@@ -2324,13 +2327,9 @@ def __init__(self, pkg: "spack.package_base.PackageBase", install_args: dict):
|
||||
def run(self) -> bool:
|
||||
"""Main entry point from ``build_process`` to kick off install in child."""
|
||||
|
||||
stage = self.pkg.stage
|
||||
stage.keep = self.keep_stage
|
||||
self.pkg.stage.keep = self.keep_stage
|
||||
|
||||
if self.restage:
|
||||
stage.destroy()
|
||||
|
||||
with stage:
|
||||
with self.pkg.stage:
|
||||
self.timer.start("stage")
|
||||
|
||||
if not self.fake:
|
||||
|
||||
@@ -1016,16 +1016,14 @@ def _main(argv=None):
|
||||
bootstrap_context = bootstrap.ensure_bootstrap_configuration()
|
||||
|
||||
with bootstrap_context:
|
||||
return finish_parse_and_run(parser, cmd_name, args, env_format_error)
|
||||
return finish_parse_and_run(parser, cmd_name, args.command, env_format_error)
|
||||
|
||||
|
||||
def finish_parse_and_run(parser, cmd_name, main_args, env_format_error):
|
||||
def finish_parse_and_run(parser, cmd_name, cmd, env_format_error):
|
||||
"""Finish parsing after we know the command to run."""
|
||||
# add the found command to the parser and re-run then re-parse
|
||||
command = parser.add_command(cmd_name)
|
||||
args, unknown = parser.parse_known_args(main_args.command)
|
||||
# we need to inherit verbose since the install command checks for it
|
||||
args.verbose = main_args.verbose
|
||||
args, unknown = parser.parse_known_args()
|
||||
|
||||
# Now that we know what command this is and what its args are, determine
|
||||
# whether we can continue with a bad environment and raise if not.
|
||||
|
||||
@@ -93,7 +93,7 @@ def _filter_compiler_wrappers_impl(pkg_or_builder):
|
||||
replacements = []
|
||||
|
||||
for idx, (env_var, compiler_path) in enumerate(compiler_vars):
|
||||
if env_var in os.environ and compiler_path is not None:
|
||||
if env_var in os.environ:
|
||||
# filter spack wrapper and links to spack wrapper in case
|
||||
# build system runs realpath
|
||||
wrapper = os.environ[env_var]
|
||||
|
||||
@@ -134,7 +134,7 @@ def upload_blob(
|
||||
return True
|
||||
|
||||
# Otherwise, do another PUT request.
|
||||
spack.oci.opener.ensure_status(request, response, 202)
|
||||
spack.oci.opener.ensure_status(response, 202)
|
||||
assert "Location" in response.headers
|
||||
|
||||
# Can be absolute or relative, joining handles both
|
||||
@@ -143,16 +143,19 @@ def upload_blob(
|
||||
)
|
||||
f.seek(0)
|
||||
|
||||
request = Request(
|
||||
url=upload_url,
|
||||
method="PUT",
|
||||
data=f,
|
||||
headers={"Content-Type": "application/octet-stream", "Content-Length": str(file_size)},
|
||||
response = _urlopen(
|
||||
Request(
|
||||
url=upload_url,
|
||||
method="PUT",
|
||||
data=f,
|
||||
headers={
|
||||
"Content-Type": "application/octet-stream",
|
||||
"Content-Length": str(file_size),
|
||||
},
|
||||
)
|
||||
)
|
||||
|
||||
response = _urlopen(request)
|
||||
|
||||
spack.oci.opener.ensure_status(request, response, 201)
|
||||
spack.oci.opener.ensure_status(response, 201)
|
||||
|
||||
# print elapsed time and # MB/s
|
||||
_log_upload_progress(digest, file_size, time.time() - start)
|
||||
@@ -186,16 +189,16 @@ def upload_manifest(
|
||||
if not tag:
|
||||
ref = ref.with_digest(digest)
|
||||
|
||||
request = Request(
|
||||
url=ref.manifest_url(),
|
||||
method="PUT",
|
||||
data=data,
|
||||
headers={"Content-Type": oci_manifest["mediaType"]},
|
||||
response = _urlopen(
|
||||
Request(
|
||||
url=ref.manifest_url(),
|
||||
method="PUT",
|
||||
data=data,
|
||||
headers={"Content-Type": oci_manifest["mediaType"]},
|
||||
)
|
||||
)
|
||||
|
||||
response = _urlopen(request)
|
||||
|
||||
spack.oci.opener.ensure_status(request, response, 201)
|
||||
spack.oci.opener.ensure_status(response, 201)
|
||||
return digest, size
|
||||
|
||||
|
||||
|
||||
@@ -310,15 +310,19 @@ def http_error_401(self, req: Request, fp, code, msg, headers):
|
||||
# Login failed, avoid infinite recursion where we go back and
|
||||
# forth between auth server and registry
|
||||
if hasattr(req, "login_attempted"):
|
||||
raise spack.util.web.DetailedHTTPError(
|
||||
req, code, f"Failed to login: {msg}", headers, fp
|
||||
raise urllib.error.HTTPError(
|
||||
req.full_url, code, f"Failed to login to {req.full_url}: {msg}", headers, fp
|
||||
)
|
||||
|
||||
# On 401 Unauthorized, parse the WWW-Authenticate header
|
||||
# to determine what authentication is required
|
||||
if "WWW-Authenticate" not in headers:
|
||||
raise spack.util.web.DetailedHTTPError(
|
||||
req, code, "Cannot login to registry, missing WWW-Authenticate header", headers, fp
|
||||
raise urllib.error.HTTPError(
|
||||
req.full_url,
|
||||
code,
|
||||
"Cannot login to registry, missing WWW-Authenticate header",
|
||||
headers,
|
||||
fp,
|
||||
)
|
||||
|
||||
header_value = headers["WWW-Authenticate"]
|
||||
@@ -326,8 +330,8 @@ def http_error_401(self, req: Request, fp, code, msg, headers):
|
||||
try:
|
||||
challenge = get_bearer_challenge(parse_www_authenticate(header_value))
|
||||
except ValueError as e:
|
||||
raise spack.util.web.DetailedHTTPError(
|
||||
req,
|
||||
raise urllib.error.HTTPError(
|
||||
req.full_url,
|
||||
code,
|
||||
f"Cannot login to registry, malformed WWW-Authenticate header: {header_value}",
|
||||
headers,
|
||||
@@ -336,8 +340,8 @@ def http_error_401(self, req: Request, fp, code, msg, headers):
|
||||
|
||||
# If there is no bearer challenge, we can't handle it
|
||||
if not challenge:
|
||||
raise spack.util.web.DetailedHTTPError(
|
||||
req,
|
||||
raise urllib.error.HTTPError(
|
||||
req.full_url,
|
||||
code,
|
||||
f"Cannot login to registry, unsupported authentication scheme: {header_value}",
|
||||
headers,
|
||||
@@ -352,8 +356,8 @@ def http_error_401(self, req: Request, fp, code, msg, headers):
|
||||
timeout=req.timeout,
|
||||
)
|
||||
except ValueError as e:
|
||||
raise spack.util.web.DetailedHTTPError(
|
||||
req,
|
||||
raise urllib.error.HTTPError(
|
||||
req.full_url,
|
||||
code,
|
||||
f"Cannot login to registry, failed to obtain bearer token: {e}",
|
||||
headers,
|
||||
@@ -408,13 +412,13 @@ def create_opener():
|
||||
return opener
|
||||
|
||||
|
||||
def ensure_status(request: urllib.request.Request, response: HTTPResponse, status: int):
|
||||
def ensure_status(response: HTTPResponse, status: int):
|
||||
"""Raise an error if the response status is not the expected one."""
|
||||
if response.status == status:
|
||||
return
|
||||
|
||||
raise spack.util.web.DetailedHTTPError(
|
||||
request, response.status, response.reason, response.info(), None
|
||||
raise urllib.error.HTTPError(
|
||||
response.geturl(), response.status, response.reason, response.info(), None
|
||||
)
|
||||
|
||||
|
||||
|
||||
@@ -32,6 +32,7 @@
|
||||
from spack.build_systems.bundle import BundlePackage
|
||||
from spack.build_systems.cached_cmake import (
|
||||
CachedCMakePackage,
|
||||
cmake_cache_filepath,
|
||||
cmake_cache_option,
|
||||
cmake_cache_path,
|
||||
cmake_cache_string,
|
||||
|
||||
@@ -24,9 +24,8 @@
|
||||
import textwrap
|
||||
import time
|
||||
import traceback
|
||||
import typing
|
||||
import warnings
|
||||
from typing import Any, Callable, Dict, Iterable, List, Optional, Set, Tuple, Type, TypeVar, Union
|
||||
from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Type, TypeVar
|
||||
|
||||
import llnl.util.filesystem as fsys
|
||||
import llnl.util.tty as tty
|
||||
@@ -683,13 +682,13 @@ def __init__(self, spec):
|
||||
@classmethod
|
||||
def possible_dependencies(
|
||||
cls,
|
||||
transitive: bool = True,
|
||||
expand_virtuals: bool = True,
|
||||
transitive=True,
|
||||
expand_virtuals=True,
|
||||
depflag: dt.DepFlag = dt.ALL,
|
||||
visited: Optional[dict] = None,
|
||||
missing: Optional[dict] = None,
|
||||
virtuals: Optional[set] = None,
|
||||
) -> Dict[str, Set[str]]:
|
||||
visited=None,
|
||||
missing=None,
|
||||
virtuals=None,
|
||||
):
|
||||
"""Return dict of possible dependencies of this package.
|
||||
|
||||
Args:
|
||||
@@ -2450,21 +2449,14 @@ def flatten_dependencies(spec, flat_dir):
|
||||
dep_files.merge(flat_dir + "/" + name)
|
||||
|
||||
|
||||
def possible_dependencies(
|
||||
*pkg_or_spec: Union[str, spack.spec.Spec, typing.Type[PackageBase]],
|
||||
transitive: bool = True,
|
||||
expand_virtuals: bool = True,
|
||||
depflag: dt.DepFlag = dt.ALL,
|
||||
missing: Optional[dict] = None,
|
||||
virtuals: Optional[set] = None,
|
||||
) -> Dict[str, Set[str]]:
|
||||
def possible_dependencies(*pkg_or_spec, **kwargs):
|
||||
"""Get the possible dependencies of a number of packages.
|
||||
|
||||
See ``PackageBase.possible_dependencies`` for details.
|
||||
"""
|
||||
packages = []
|
||||
for pos in pkg_or_spec:
|
||||
if isinstance(pos, PackageMeta) and issubclass(pos, PackageBase):
|
||||
if isinstance(pos, PackageMeta):
|
||||
packages.append(pos)
|
||||
continue
|
||||
|
||||
@@ -2477,16 +2469,9 @@ def possible_dependencies(
|
||||
else:
|
||||
packages.append(pos.package_class)
|
||||
|
||||
visited: Dict[str, Set[str]] = {}
|
||||
visited = {}
|
||||
for pkg in packages:
|
||||
pkg.possible_dependencies(
|
||||
visited=visited,
|
||||
transitive=transitive,
|
||||
expand_virtuals=expand_virtuals,
|
||||
depflag=depflag,
|
||||
missing=missing,
|
||||
virtuals=virtuals,
|
||||
)
|
||||
pkg.possible_dependencies(visited=visited, **kwargs)
|
||||
|
||||
return visited
|
||||
|
||||
|
||||
@@ -490,7 +490,7 @@ def read(self, stream):
|
||||
self.index = spack.tag.TagIndex.from_json(stream, self.repository)
|
||||
|
||||
def update(self, pkg_fullname):
|
||||
self.index.update_package(pkg_fullname.split(".")[-1])
|
||||
self.index.update_package(pkg_fullname)
|
||||
|
||||
def write(self, stream):
|
||||
self.index.to_json(stream)
|
||||
|
||||
@@ -69,8 +69,6 @@
    "patternProperties": {r"\w+": {}},
}

REQUIREMENT_URL = "https://spack.readthedocs.io/en/latest/packages_yaml.html#package-requirements"

#: Properties for inclusion in other schemas
properties = {
    "packages": {

@@ -119,7 +117,7 @@
            "properties": ["version"],
            "message": "setting version preferences in the 'all' section of packages.yaml "
            "is deprecated and will be removed in v0.22\n\n\tThese preferences "
            "will be ignored by Spack. You can set them only in package-specific sections "
            "will be ignored by Spack. You can set them only in package specific sections "
            "of the same file.\n",
            "error": False,
        },

@@ -164,14 +162,10 @@
        },
        "deprecatedProperties": {
            "properties": ["target", "compiler", "providers"],
            "message": "setting 'compiler:', 'target:' or 'provider:' preferences in "
            "a package-specific section of packages.yaml is deprecated, and will be "
            "removed in v0.22.\n\n\tThese preferences will be ignored by Spack, and "
            "can be set only in the 'all' section of the same file. "
            "You can run:\n\n\t\t$ spack audit configs\n\n\tto get better diagnostics, "
            "including files:lines where the deprecated attributes are used.\n\n"
            "\tUse requirements to enforce conditions on specific packages: "
            f"{REQUIREMENT_URL}\n",
            "message": "setting compiler, target or provider preferences in a package "
            "specific section of packages.yaml is deprecated, and will be removed in "
            "v0.22.\n\n\tThese preferences will be ignored by Spack. You "
            "can set them only in the 'all' section of the same file.\n",
            "error": False,
        },
    }
@@ -12,7 +12,6 @@
import pprint
import re
import types
import typing
import warnings
from typing import Callable, Dict, List, NamedTuple, Optional, Sequence, Set, Tuple, Union
@@ -380,7 +379,7 @@ def check_packages_exist(specs):
    for spec in specs:
        for s in spec.traverse():
            try:
                check_passed = repo.repo_for_pkg(s).exists(s.name) or repo.is_virtual(s.name)
                check_passed = repo.exists(s.name) or repo.is_virtual(s.name)
            except Exception as e:
                msg = "Cannot find package: {0}".format(str(e))
                check_passed = False

@@ -714,7 +713,7 @@ def _get_cause_tree(
    (condition_id, set_id) in which the latter idea means that the condition represented by
    the former held in the condition set represented by the latter.
    """
    seen.add(cause)
    seen = set(seen) | set(cause)
    parents = [c for e, c in condition_causes if e == cause and c not in seen]
    local = "required because %s " % conditions[cause[0]]
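One side of the `_get_cause_tree` hunk records each explained cause in the shared `seen` set before collecting its not-yet-visited parents, which keeps the recursion from looping on cyclic cause graphs. A generic sketch of that visited-set pattern, with made-up edge data (the spec names are illustrative only):

```python
# Generic sketch of the "seen set" pattern used above: walk a parent relation
# without revisiting (or looping on) nodes that were already explained.
from typing import Dict, List, Set

def walk_causes(cause: str, parents_of: Dict[str, List[str]], seen: Set[str]) -> List[str]:
    seen.add(cause)
    chain = [cause]
    for parent in parents_of.get(cause, []):
        if parent not in seen:
            chain.extend(walk_causes(parent, parents_of, seen))
    return chain

edges = {"fftw@:1.0": ["quantum-espresso"], "quantum-espresso": ["fftw@:1.0"]}  # a cycle
print(walk_causes("fftw@:1.0", edges, set()))  # each node is visited exactly once
```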
@@ -813,14 +812,7 @@ def on_model(model):
        errors = sorted(
            [(int(priority), msg, args) for priority, msg, *args in error_args], reverse=True
        )
        try:
            msg = self.message(errors)
        except Exception as e:
            msg = (
                f"unexpected error during concretization [{str(e)}]. "
                f"Please report a bug at https://github.com/spack/spack/issues"
            )
            raise spack.error.SpackError(msg)
        msg = self.message(errors)
        raise UnsatisfiableSpecError(msg)
@@ -1014,6 +1006,14 @@ def on_model(model):
        # record the possible dependencies in the solve
        result.possible_dependencies = setup.pkgs

        # print any unknown functions in the model
        for sym in best_model:
            if sym.name not in ("attr", "error", "opt_criterion"):
                tty.debug(
                    "UNKNOWN SYMBOL: %s(%s)"
                    % (sym.name, ", ".join([str(s) for s in intermediate_repr(sym.arguments)]))
                )

    elif cores:
        result.control = self.control
        result.cores.extend(cores)
@@ -1118,8 +1118,11 @@ def __init__(self, tests=False):

        self.reusable_and_possible = ConcreteSpecsByHash()

        self._id_counter = itertools.count()
        # id for dummy variables
        self._condition_id_counter = itertools.count()
        self._trigger_id_counter = itertools.count()
        self._trigger_cache = collections.defaultdict(dict)
        self._effect_id_counter = itertools.count()
        self._effect_cache = collections.defaultdict(dict)

        # Caches to optimize the setup phase of the solver

@@ -1133,7 +1136,6 @@ def __init__(self, tests=False):

        # Set during the call to setup
        self.pkgs = None
        self.explicitly_required_namespaces = {}

    def pkg_version_rules(self, pkg):
        """Output declared versions of a package.

@@ -1146,9 +1148,7 @@ def key_fn(version):
            # Origins are sorted by "provenance" first, see the Provenance enumeration above
            return version.origin, version.idx

        if isinstance(pkg, str):
            pkg = self.pkg_class(pkg)

        pkg = packagize(pkg)
        declared_versions = self.declared_versions[pkg.name]
        partially_sorted_versions = sorted(set(declared_versions), key=key_fn)
@@ -1340,10 +1340,7 @@ def _rule_from_str(
|
||||
)
|
||||
|
||||
def pkg_rules(self, pkg, tests):
|
||||
pkg = self.pkg_class(pkg)
|
||||
|
||||
# Namespace of the package
|
||||
self.gen.fact(fn.pkg_fact(pkg.name, fn.namespace(pkg.namespace)))
|
||||
pkg = packagize(pkg)
|
||||
|
||||
# versions
|
||||
self.pkg_version_rules(pkg)
|
||||
@@ -1521,7 +1518,7 @@ def condition(
|
||||
# In this way, if a condition can't be emitted but the exception is handled in the caller,
|
||||
# we won't emit partial facts.
|
||||
|
||||
condition_id = next(self._id_counter)
|
||||
condition_id = next(self._condition_id_counter)
|
||||
self.gen.fact(fn.pkg_fact(named_cond.name, fn.condition(condition_id)))
|
||||
self.gen.fact(fn.condition_reason(condition_id, msg))
|
||||
|
||||
@@ -1529,7 +1526,7 @@ def condition(
|
||||
|
||||
named_cond_key = (str(named_cond), transform_required)
|
||||
if named_cond_key not in cache:
|
||||
trigger_id = next(self._id_counter)
|
||||
trigger_id = next(self._trigger_id_counter)
|
||||
requirements = self.spec_clauses(named_cond, body=True, required_from=name)
|
||||
|
||||
if transform_required:
|
||||
@@ -1545,7 +1542,7 @@ def condition(
|
||||
cache = self._effect_cache[named_cond.name]
|
||||
imposed_spec_key = (str(imposed_spec), transform_imposed)
|
||||
if imposed_spec_key not in cache:
|
||||
effect_id = next(self._id_counter)
|
||||
effect_id = next(self._effect_id_counter)
|
||||
requirements = self.spec_clauses(imposed_spec, body=False, required_from=name)
|
||||
|
||||
if transform_imposed:
|
||||
@@ -1676,10 +1673,9 @@ def provider_requirements(self):
|
||||
rules = self._rules_from_requirements(
|
||||
virtual_str, requirements, kind=RequirementKind.VIRTUAL
|
||||
)
|
||||
if rules:
|
||||
self.emit_facts_from_requirement_rules(rules)
|
||||
self.trigger_rules()
|
||||
self.effect_rules()
|
||||
self.emit_facts_from_requirement_rules(rules)
|
||||
self.trigger_rules()
|
||||
self.effect_rules()
|
||||
|
||||
def emit_facts_from_requirement_rules(self, rules: List[RequirementRule]):
|
||||
"""Generate facts to enforce requirements.
|
||||
@@ -1806,12 +1802,15 @@ def external_packages(self):
|
||||
for local_idx, spec in enumerate(external_specs):
|
||||
msg = "%s available as external when satisfying %s" % (spec.name, spec)
|
||||
|
||||
def external_imposition(input_spec, requirements):
|
||||
return requirements + [
|
||||
fn.attr("external_conditions_hold", input_spec.name, local_idx)
|
||||
]
|
||||
def external_imposition(input_spec, _):
|
||||
return [fn.attr("external_conditions_hold", input_spec.name, local_idx)]
|
||||
|
||||
self.condition(spec, spec, msg=msg, transform_imposed=external_imposition)
|
||||
self.condition(
|
||||
spec,
|
||||
spack.spec.Spec(spec.name),
|
||||
msg=msg,
|
||||
transform_imposed=external_imposition,
|
||||
)
|
||||
self.possible_versions[spec.name].add(spec.version)
|
||||
self.gen.newline()
|
||||
|
||||
@@ -1833,13 +1832,7 @@ def preferred_variants(self, pkg_name):
|
||||
|
||||
# perform validation of the variant and values
|
||||
spec = spack.spec.Spec(pkg_name)
|
||||
try:
|
||||
spec.update_variant_validate(variant_name, values)
|
||||
except (spack.variant.InvalidVariantValueError, KeyError, ValueError) as e:
|
||||
tty.debug(
|
||||
f"[SETUP]: rejected {str(variant)} as a preference for {pkg_name}: {str(e)}"
|
||||
)
|
||||
continue
|
||||
spec.update_variant_validate(variant_name, values)
|
||||
|
||||
for value in values:
|
||||
self.variant_values_from_specs.add((pkg_name, variant.name, value))
|
||||
@@ -1978,7 +1971,7 @@ class Body:
|
||||
if not spec.concrete:
|
||||
reserved_names = spack.directives.reserved_names
|
||||
if not spec.virtual and vname not in reserved_names:
|
||||
pkg_cls = self.pkg_class(spec.name)
|
||||
pkg_cls = spack.repo.PATH.get_pkg_class(spec.name)
|
||||
try:
|
||||
variant_def, _ = pkg_cls.variants[vname]
|
||||
except KeyError:
|
||||
@@ -2097,7 +2090,7 @@ def define_package_versions_and_validate_preferences(
|
||||
"""Declare any versions in specs not declared in packages."""
|
||||
packages_yaml = spack.config.get("packages")
|
||||
for pkg_name in possible_pkgs:
|
||||
pkg_cls = self.pkg_class(pkg_name)
|
||||
pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)
|
||||
|
||||
# All the versions from the corresponding package.py file. Since concepts
|
||||
# like being a "develop" version or being preferred exist only at a
|
||||
@@ -2557,8 +2550,14 @@ def setup(
|
||||
reuse: list of concrete specs that can be reused
|
||||
allow_deprecated: if True adds deprecated versions into the solve
|
||||
"""
|
||||
self._condition_id_counter = itertools.count()
|
||||
|
||||
# preliminary checks
|
||||
check_packages_exist(specs)
|
||||
|
||||
# get list of all possible dependencies
|
||||
self.possible_virtuals = set(x.name for x in specs if x.virtual)
|
||||
|
||||
node_counter = _create_counter(specs, tests=self.tests)
|
||||
self.possible_virtuals = node_counter.possible_virtuals()
|
||||
self.pkgs = node_counter.possible_dependencies()
|
||||
@@ -2571,10 +2570,6 @@ def setup(
|
||||
if missing_deps:
|
||||
raise spack.spec.InvalidDependencyError(spec.name, missing_deps)
|
||||
|
||||
for node in spack.traverse.traverse_nodes(specs):
|
||||
if node.namespace is not None:
|
||||
self.explicitly_required_namespaces[node.name] = node.namespace
|
||||
|
||||
# driver is used by all the functions below to add facts and
|
||||
# rules to generate an ASP program.
|
||||
self.gen = driver
|
||||
@@ -2680,21 +2675,23 @@ def setup(
|
||||
def literal_specs(self, specs):
|
||||
for spec in specs:
|
||||
self.gen.h2("Spec: %s" % str(spec))
|
||||
condition_id = next(self._id_counter)
|
||||
trigger_id = next(self._id_counter)
|
||||
condition_id = next(self._condition_id_counter)
|
||||
trigger_id = next(self._trigger_id_counter)
|
||||
|
||||
# Special condition triggered by "literal_solved"
|
||||
self.gen.fact(fn.literal(trigger_id))
|
||||
self.gen.fact(fn.pkg_fact(spec.name, fn.condition_trigger(condition_id, trigger_id)))
|
||||
self.gen.fact(fn.condition_reason(condition_id, f"{spec} requested explicitly"))
|
||||
self.gen.fact(fn.condition_reason(condition_id, f"{spec} requested from CLI"))
|
||||
|
||||
# Effect imposes the spec
|
||||
imposed_spec_key = str(spec), None
|
||||
cache = self._effect_cache[spec.name]
|
||||
if imposed_spec_key in cache:
|
||||
effect_id, requirements = cache[imposed_spec_key]
|
||||
else:
|
||||
effect_id = next(self._id_counter)
|
||||
requirements = self.spec_clauses(spec)
|
||||
msg = (
|
||||
"literal specs have different requirements. clear cache before computing literals"
|
||||
)
|
||||
assert imposed_spec_key not in cache, msg
|
||||
effect_id = next(self._effect_id_counter)
|
||||
requirements = self.spec_clauses(spec)
|
||||
root_name = spec.name
|
||||
for clause in requirements:
|
||||
clause_name = clause.args[0]
|
||||
@@ -2784,13 +2781,6 @@ def _specs_from_requires(self, pkg_name, section):
|
||||
for s in spec_group[key]:
|
||||
yield _spec_with_default_name(s, pkg_name)
|
||||
|
||||
def pkg_class(self, pkg_name: str) -> typing.Type["spack.package_base.PackageBase"]:
|
||||
request = pkg_name
|
||||
if pkg_name in self.explicitly_required_namespaces:
|
||||
namespace = self.explicitly_required_namespaces[pkg_name]
|
||||
request = f"{namespace}.{pkg_name}"
|
||||
return spack.repo.PATH.get_pkg_class(request)
|
||||
|
||||
|
||||
class SpecBuilder:
|
||||
"""Class with actions to rebuild a spec from ASP results."""
|
||||
@@ -2802,11 +2792,9 @@ class SpecBuilder:
|
||||
r"^.*_propagate$",
|
||||
r"^.*_satisfies$",
|
||||
r"^.*_set$",
|
||||
r"^dependency_holds$",
|
||||
r"^node_compiler$",
|
||||
r"^package_hash$",
|
||||
r"^root$",
|
||||
r"^track_dependencies$",
|
||||
r"^variant_default_value_from_cli$",
|
||||
r"^virtual_node$",
|
||||
r"^virtual_root$",
|
||||
@@ -2850,9 +2838,6 @@ def _arch(self, node):
|
||||
self._specs[node].architecture = arch
|
||||
return arch
|
||||
|
||||
def namespace(self, node, namespace):
|
||||
self._specs[node].namespace = namespace
|
||||
|
||||
def node_platform(self, node, platform):
|
||||
self._arch(node).platform = platform
|
||||
|
||||
@@ -3067,6 +3052,14 @@ def build_specs(self, function_tuples):
|
||||
|
||||
action(*args)
|
||||
|
||||
# namespace assignment is done after the fact, as it is not
|
||||
# currently part of the solve
|
||||
for spec in self._specs.values():
|
||||
if spec.namespace:
|
||||
continue
|
||||
repo = spack.repo.PATH.repo_for_pkg(spec)
|
||||
spec.namespace = repo.namespace
|
||||
|
||||
# fix flags after all specs are constructed
|
||||
self.reorder_flags()
|
||||
|
||||
|
||||
@@ -45,9 +45,6 @@
|
||||
:- attr("depends_on", node(min_dupe_id, Package), node(ID, _), "link"), ID != min_dupe_id, unification_set("root", node(min_dupe_id, Package)), internal_error("link dependency out of the root unification set").
|
||||
:- attr("depends_on", node(min_dupe_id, Package), node(ID, _), "run"), ID != min_dupe_id, unification_set("root", node(min_dupe_id, Package)), internal_error("run dependency out of the root unification set").
|
||||
|
||||
% Namespaces are statically assigned by a package fact
|
||||
attr("namespace", node(ID, Package), Namespace) :- attr("node", node(ID, Package)), pkg_fact(Package, namespace(Namespace)).
|
||||
|
||||
% Rules on "unification sets", i.e. on sets of nodes allowing a single configuration of any given package
|
||||
unify(SetID, PackageName) :- unification_set(SetID, node(_, PackageName)).
|
||||
:- 2 { unification_set(SetID, node(_, PackageName)) }, unify(SetID, PackageName).
|
||||
@@ -698,26 +695,6 @@ requirement_group_satisfied(node(ID, Package), X) :-
|
||||
activate_requirement(node(ID, Package), X),
|
||||
requirement_group(Package, X).
|
||||
|
||||
% Do not impose requirements, if the conditional requirement is not active
|
||||
do_not_impose(EffectID, node(ID, Package)) :-
|
||||
trigger_condition_holds(TriggerID, node(ID, Package)),
|
||||
pkg_fact(Package, condition_trigger(ConditionID, TriggerID)),
|
||||
pkg_fact(Package, condition_effect(ConditionID, EffectID)),
|
||||
requirement_group_member(ConditionID , Package, RequirementID),
|
||||
not activate_requirement(node(ID, Package), RequirementID).
|
||||
|
||||
% When we have a required provider, we need to ensure that the provider/2 facts respect
|
||||
% the requirement. This is particularly important for packages that could provide multiple
|
||||
% virtuals independently
|
||||
required_provider(Provider, Virtual)
|
||||
:- requirement_group_member(ConditionID, Virtual, RequirementID),
|
||||
condition_holds(ConditionID, _),
|
||||
virtual(Virtual),
|
||||
pkg_fact(Virtual, condition_effect(ConditionID, EffectID)),
|
||||
imposed_constraint(EffectID, "node", Provider).
|
||||
|
||||
:- provider(node(Y, Package), node(X, Virtual)), required_provider(Provider, Virtual), Package != Provider.
|
||||
|
||||
% TODO: the following two choice rules allow the solver to add compiler
|
||||
% flags if their only source is from a requirement. This is overly-specific
|
||||
% and should use a more-generic approach like in https://github.com/spack/spack/pull/37180
|
||||
|
||||
@@ -213,19 +213,6 @@ def __call__(self, match):
    return clr.colorize(re.sub(_SEPARATORS, insert_color(), str(spec)) + "@.")


OLD_STYLE_FMT_RE = re.compile(r"\${[A-Z]+}")


def ensure_modern_format_string(fmt: str) -> None:
    """Ensure that the format string does not contain old ${...} syntax."""
    result = OLD_STYLE_FMT_RE.search(fmt)
    if result:
        raise SpecFormatStringError(
            f"Format string `{fmt}` contains old syntax `{result.group(0)}`. "
            "This is no longer supported."
        )


@lang.lazy_lexicographic_ordering
class ArchSpec:
    """Aggregate the target platform, the operating system and the target microarchitecture."""

@@ -4373,7 +4360,6 @@ def format(self, format_string=DEFAULT_FORMAT, **kwargs):
        that accepts a string and returns another one

        """
        ensure_modern_format_string(format_string)
        color = kwargs.get("color", False)
        transform = kwargs.get("transform", {})
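The hunk above shows `Spec.format` rejecting the retired `${NAME}` placeholder syntax by searching the format string with `OLD_STYLE_FMT_RE`. A self-contained sketch of the same check, using a plain `ValueError` in place of Spack's `SpecFormatStringError` and made-up format strings:

```python
# Standalone sketch of the old-syntax guard shown above. The regex and the
# error message mirror the hunk; the example format strings are made up, and
# ValueError stands in for SpecFormatStringError to keep the snippet runnable.
import re

OLD_STYLE_FMT_RE = re.compile(r"\${[A-Z]+}")

def ensure_modern_format_string(fmt: str) -> None:
    match = OLD_STYLE_FMT_RE.search(fmt)
    if match:
        raise ValueError(
            f"Format string `{fmt}` contains old syntax `{match.group(0)}`. "
            "This is no longer supported."
        )

ensure_modern_format_string("{name}-{version}")       # fine: modern {...} syntax
ensure_modern_format_string("${PACKAGE}-${VERSION}")  # raises ValueError
```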
@@ -201,9 +201,6 @@ def dummy_prefix(tmpdir):
|
||||
with open(data, "w") as f:
|
||||
f.write("hello world")
|
||||
|
||||
with open(p.join(".spack", "binary_distribution"), "w") as f:
|
||||
f.write("{}")
|
||||
|
||||
os.symlink("app", relative_app_link)
|
||||
os.symlink(app, absolute_app_link)
|
||||
|
||||
@@ -1027,9 +1024,7 @@ def test_tarball_common_prefix(dummy_prefix, tmpdir):
|
||||
bindist._tar_strip_component(tar, common_prefix)
|
||||
|
||||
# Extract into prefix2
|
||||
tar.extractall(
|
||||
path="prefix2", members=bindist._tar_strip_component(tar, common_prefix)
|
||||
)
|
||||
tar.extractall(path="prefix2")
|
||||
|
||||
# Verify files are all there at the correct level.
|
||||
assert set(os.listdir("prefix2")) == {"bin", "share", ".spack"}
|
||||
@@ -1049,30 +1044,13 @@ def test_tarball_common_prefix(dummy_prefix, tmpdir):
|
||||
)
|
||||
|
||||
|
||||
def test_tarfile_missing_binary_distribution_file(tmpdir):
|
||||
"""A tarfile that does not contain a .spack/binary_distribution file cannot be
|
||||
used to install."""
|
||||
with tmpdir.as_cwd():
|
||||
# An empty .spack dir.
|
||||
with tarfile.open("empty.tar", mode="w") as tar:
|
||||
tarinfo = tarfile.TarInfo(name="example/.spack")
|
||||
tarinfo.type = tarfile.DIRTYPE
|
||||
tar.addfile(tarinfo)
|
||||
|
||||
with pytest.raises(ValueError, match="missing binary_distribution file"):
|
||||
bindist._ensure_common_prefix(tarfile.open("empty.tar", mode="r"))
|
||||
|
||||
|
||||
def test_tarfile_without_common_directory_prefix_fails(tmpdir):
|
||||
"""A tarfile that only contains files without a common package directory
|
||||
should fail to extract, as we won't know where to put the files."""
|
||||
with tmpdir.as_cwd():
|
||||
# Create a broken tarball with just a file, no directories.
|
||||
with tarfile.open("empty.tar", mode="w") as tar:
|
||||
tar.addfile(
|
||||
tarfile.TarInfo(name="example/.spack/binary_distribution"),
|
||||
fileobj=io.BytesIO(b"hello"),
|
||||
)
|
||||
tar.addfile(tarfile.TarInfo(name="example/file"), fileobj=io.BytesIO(b"hello"))
|
||||
|
||||
with pytest.raises(ValueError, match="Tarball does not contain a common prefix"):
|
||||
bindist._ensure_common_prefix(tarfile.open("empty.tar", mode="r"))
|
||||
@@ -1188,7 +1166,7 @@ def test_get_valid_spec_file_no_json(tmp_path, filename):
|
||||
|
||||
|
||||
def test_download_tarball_with_unsupported_layout_fails(tmp_path, mutable_config, capsys):
|
||||
layout_version = bindist.FORWARD_COMPAT_BUILD_CACHE_LAYOUT_VERSION + 1
|
||||
layout_version = bindist.CURRENT_BUILD_CACHE_LAYOUT_VERSION + 1
|
||||
spec = Spec("gmake@4.4.1%gcc@13.1.0 arch=linux-ubuntu23.04-zen2")
|
||||
spec._mark_concrete()
|
||||
spec_dict = spec.to_dict()
|
||||
|
||||
@@ -25,7 +25,7 @@ def test_error_when_multiple_specs_are_given():
|
||||
assert "only takes one spec" in output
|
||||
|
||||
|
||||
@pytest.mark.parametrize("args", [("--", "/bin/sh", "-c", "echo test"), ("--",), ()])
|
||||
@pytest.mark.parametrize("args", [("--", "/bin/bash", "-c", "echo test"), ("--",), ()])
|
||||
@pytest.mark.usefixtures("config", "mock_packages", "working_env")
|
||||
def test_build_env_requires_a_spec(args):
|
||||
output = build_env(*args, fail_on_error=False)
|
||||
@@ -35,7 +35,7 @@ def test_build_env_requires_a_spec(args):
|
||||
_out_file = "env.out"
|
||||
|
||||
|
||||
@pytest.mark.parametrize("shell", ["pwsh", "bat"] if sys.platform == "win32" else ["sh"])
|
||||
@pytest.mark.parametrize("shell", ["pwsh", "bat"] if sys.platform == "win32" else ["bash"])
|
||||
@pytest.mark.usefixtures("config", "mock_packages", "working_env")
|
||||
def test_dump(shell_as, shell, tmpdir):
|
||||
with tmpdir.as_cwd():
|
||||
|
||||
@@ -2000,7 +2000,7 @@ def test_ci_reproduce(
|
||||
|
||||
install_script = os.path.join(working_dir.strpath, "install.sh")
|
||||
with open(install_script, "w") as fd:
|
||||
fd.write("#!/bin/sh\n\n#fake install\nspack install blah\n")
|
||||
fd.write("#!/bin/bash\n\n#fake install\nspack install blah\n")
|
||||
|
||||
spack_info_file = os.path.join(working_dir.strpath, "spack_info.txt")
|
||||
with open(spack_info_file, "w") as fd:
|
||||
|
||||
@@ -3538,7 +3538,7 @@ def test_environment_created_in_users_location(mutable_mock_env_path, tmp_path):
|
||||
assert os.path.isdir(os.path.join(env_dir, dir_name))
|
||||
|
||||
|
||||
def test_environment_created_from_lockfile_has_view(mock_packages, temporary_store, tmpdir):
|
||||
def test_environment_created_from_lockfile_has_view(mock_packages, tmpdir):
|
||||
"""When an env is created from a lockfile, a view should be generated for it"""
|
||||
env_a = str(tmpdir.join("a"))
|
||||
env_b = str(tmpdir.join("b"))
|
||||
|
||||
@@ -43,7 +43,7 @@ def test_find_gpg(cmd_name, version, tmpdir, mock_gnupghome, monkeypatch):
|
||||
f.write(TEMPLATE.format(version=version))
|
||||
fs.set_executable(fname)
|
||||
|
||||
monkeypatch.setenv("PATH", str(tmpdir))
|
||||
monkeypatch.setitem(os.environ, "PATH", str(tmpdir))
|
||||
if version == "undetectable" or version.endswith("1.3.4"):
|
||||
with pytest.raises(spack.util.gpg.SpackGPGError):
|
||||
spack.util.gpg.init(force=True)
|
||||
@@ -54,7 +54,7 @@ def test_find_gpg(cmd_name, version, tmpdir, mock_gnupghome, monkeypatch):
|
||||
|
||||
|
||||
def test_no_gpg_in_path(tmpdir, mock_gnupghome, monkeypatch, mutable_config):
|
||||
monkeypatch.setenv("PATH", str(tmpdir))
|
||||
monkeypatch.setitem(os.environ, "PATH", str(tmpdir))
|
||||
bootstrap("disable")
|
||||
with pytest.raises(RuntimeError):
|
||||
spack.util.gpg.init(force=True)
|
||||
|
||||
@@ -33,10 +33,11 @@ def _print(*args, **kwargs):
|
||||
|
||||
|
||||
@pytest.mark.parametrize(
|
||||
"pkg", ["openmpi", "trilinos", "boost", "python", "dealii", "xsdk"] # a BundlePackage
|
||||
"pkg", ["openmpi", "trilinos", "boost", "python", "dealii", "xsdk", "gasnet", "warpx"]
|
||||
)
|
||||
def test_it_just_runs(pkg):
|
||||
info(pkg)
|
||||
@pytest.mark.parametrize("extra_args", [[], ["--variants-by-name"]])
|
||||
def test_it_just_runs(pkg, extra_args):
|
||||
info(pkg, *extra_args)
|
||||
|
||||
|
||||
def test_info_noversion(mock_packages, print_buffer):
|
||||
@@ -78,7 +79,8 @@ def test_is_externally_detectable(pkg_query, expected, parser, print_buffer):
|
||||
"gcc", # This should ensure --test's c_names processing loop covered
|
||||
],
|
||||
)
|
||||
def test_info_fields(pkg_query, parser, print_buffer):
|
||||
@pytest.mark.parametrize("extra_args", [[], ["--variants-by-name"]])
|
||||
def test_info_fields(pkg_query, extra_args, parser, print_buffer):
|
||||
expected_fields = (
|
||||
"Description:",
|
||||
"Homepage:",
|
||||
@@ -91,7 +93,7 @@ def test_info_fields(pkg_query, parser, print_buffer):
|
||||
"Licenses:",
|
||||
)
|
||||
|
||||
args = parser.parse_args(["--all", pkg_query])
|
||||
args = parser.parse_args(["--all", pkg_query] + extra_args)
|
||||
spack.cmd.info.info(parser, args)
|
||||
|
||||
for text in expected_fields:
|
||||
|
||||
@@ -904,12 +904,13 @@ def test_install_help_cdash():
|
||||
|
||||
|
||||
@pytest.mark.disable_clean_stage_check
|
||||
def test_cdash_auth_token(tmpdir, mock_fetch, install_mockery, monkeypatch, capfd):
|
||||
def test_cdash_auth_token(tmpdir, mock_fetch, install_mockery, capfd):
|
||||
# capfd interferes with Spack's capturing
|
||||
with tmpdir.as_cwd(), capfd.disabled():
|
||||
monkeypatch.setenv("SPACK_CDASH_AUTH_TOKEN", "asdf")
|
||||
out = install("-v", "--log-file=cdash_reports", "--log-format=cdash", "a")
|
||||
assert "Using CDash auth token from environment" in out
|
||||
with tmpdir.as_cwd():
|
||||
with capfd.disabled():
|
||||
os.environ["SPACK_CDASH_AUTH_TOKEN"] = "asdf"
|
||||
out = install("-v", "--log-file=cdash_reports", "--log-format=cdash", "a")
|
||||
assert "Using CDash auth token from environment" in out
|
||||
|
||||
|
||||
@pytest.mark.not_on_windows("Windows log_output logs phase header out of order")
|
||||
|
||||
@@ -11,7 +11,7 @@
|
||||
import llnl.util.filesystem as fs
|
||||
|
||||
import spack.compiler
|
||||
import spack.compilers
|
||||
import spack.compilers as compilers
|
||||
import spack.spec
|
||||
import spack.util.environment
|
||||
from spack.compiler import Compiler
|
||||
@@ -25,14 +25,12 @@ class MockOs:
|
||||
pass
|
||||
|
||||
compiler_name = "gcc"
|
||||
compiler_cls = spack.compilers.class_for_compiler_name(compiler_name)
|
||||
compiler_cls = compilers.class_for_compiler_name(compiler_name)
|
||||
monkeypatch.setattr(compiler_cls, "cc_version", lambda x: version)
|
||||
|
||||
compiler_id = spack.compilers.CompilerID(
|
||||
os=MockOs, compiler_name=compiler_name, version=None
|
||||
)
|
||||
variation = spack.compilers.NameVariation(prefix="", suffix="")
|
||||
return spack.compilers.DetectVersionArgs(
|
||||
compiler_id = compilers.CompilerID(os=MockOs, compiler_name=compiler_name, version=None)
|
||||
variation = compilers.NameVariation(prefix="", suffix="")
|
||||
return compilers.DetectVersionArgs(
|
||||
id=compiler_id, variation=variation, language="cc", path=path
|
||||
)
|
||||
|
||||
@@ -58,7 +56,7 @@ def test_multiple_conflicting_compiler_definitions(mutable_config):
|
||||
mutable_config.update_config("compilers", compiler_config)
|
||||
|
||||
arch_spec = spack.spec.ArchSpec(("test", "test", "test"))
|
||||
cmp = spack.compilers.compiler_for_spec("clang@=0.0.0", arch_spec)
|
||||
cmp = compilers.compiler_for_spec("clang@=0.0.0", arch_spec)
|
||||
assert cmp.f77 == "f77"
|
||||
|
||||
|
||||
@@ -66,7 +64,7 @@ def test_get_compiler_duplicates(config):
|
||||
# In this case there is only one instance of the specified compiler in
|
||||
# the test configuration (so it is not actually a duplicate), but the
|
||||
# method behaves the same.
|
||||
cfg_file_to_duplicates = spack.compilers.get_compiler_duplicates(
|
||||
cfg_file_to_duplicates = compilers.get_compiler_duplicates(
|
||||
"gcc@4.5.0", spack.spec.ArchSpec("cray-CNL-xeon")
|
||||
)
|
||||
|
||||
@@ -76,7 +74,7 @@ def test_get_compiler_duplicates(config):
|
||||
|
||||
|
||||
def test_all_compilers(config):
|
||||
all_compilers = spack.compilers.all_compilers()
|
||||
all_compilers = compilers.all_compilers()
|
||||
filtered = [x for x in all_compilers if str(x.spec) == "clang@=3.3"]
|
||||
filtered = [x for x in filtered if x.operating_system == "SuSE11"]
|
||||
assert len(filtered) == 1
|
||||
@@ -90,7 +88,7 @@ def test_version_detection_is_empty(
|
||||
make_args_for_version, input_version, expected_version, expected_error
|
||||
):
|
||||
args = make_args_for_version(version=input_version)
|
||||
result, error = spack.compilers.detect_version(args)
|
||||
result, error = compilers.detect_version(args)
|
||||
if not error:
|
||||
assert result.id.version == expected_version
|
||||
|
||||
@@ -106,7 +104,7 @@ def test_compiler_flags_from_config_are_grouped():
|
||||
"modules": None,
|
||||
}
|
||||
|
||||
compiler = spack.compilers.compiler_from_dict(compiler_entry)
|
||||
compiler = compilers.compiler_from_dict(compiler_entry)
|
||||
assert any(x == "-foo-flag foo-val" for x in compiler.flags["cflags"])
|
||||
|
||||
|
||||
@@ -255,8 +253,8 @@ def test_get_compiler_link_paths_load_env(working_env, monkeypatch, tmpdir):
|
||||
gcc = str(tmpdir.join("gcc"))
|
||||
with open(gcc, "w") as f:
|
||||
f.write(
|
||||
"""#!/bin/sh
|
||||
if [ "$ENV_SET" = "1" ] && [ "$MODULE_LOADED" = "1" ]; then
|
||||
"""#!/bin/bash
|
||||
if [[ $ENV_SET == "1" && $MODULE_LOADED == "1" ]]; then
|
||||
echo '"""
|
||||
+ no_flag_output
|
||||
+ """'
|
||||
@@ -290,7 +288,7 @@ def flag_value(flag, spec):
|
||||
else:
|
||||
compiler_entry = copy(default_compiler_entry)
|
||||
compiler_entry["spec"] = spec
|
||||
compiler = spack.compilers.compiler_from_dict(compiler_entry)
|
||||
compiler = compilers.compiler_from_dict(compiler_entry)
|
||||
|
||||
return getattr(compiler, flag)
|
||||
|
||||
@@ -424,7 +422,6 @@ def test_clang_flags():
|
||||
"-gdwarf-5",
|
||||
"-gline-tables-only",
|
||||
"-gmodules",
|
||||
"-gz",
|
||||
"-g",
|
||||
],
|
||||
"clang@3.3",
|
||||
@@ -447,7 +444,6 @@ def test_aocc_flags():
|
||||
"-gdwarf-5",
|
||||
"-gline-tables-only",
|
||||
"-gmodules",
|
||||
"-gz",
|
||||
"-g",
|
||||
],
|
||||
"aocc@2.2.0",
|
||||
@@ -661,8 +657,8 @@ def test_xl_r_flags():
|
||||
[("gcc@4.7.2", False), ("clang@3.3", False), ("clang@8.0.0", True)],
|
||||
)
|
||||
def test_detecting_mixed_toolchains(compiler_spec, expected_result, config):
|
||||
compiler = spack.compilers.compilers_for_spec(compiler_spec).pop()
|
||||
assert spack.compilers.is_mixed_toolchain(compiler) is expected_result
|
||||
compiler = compilers.compilers_for_spec(compiler_spec).pop()
|
||||
assert compilers.is_mixed_toolchain(compiler) is expected_result
|
||||
|
||||
|
||||
@pytest.mark.regression("14798,13733")
|
||||
@@ -703,8 +699,8 @@ def test_compiler_get_real_version(working_env, monkeypatch, tmpdir):
|
||||
gcc = str(tmpdir.join("gcc"))
|
||||
with open(gcc, "w") as f:
|
||||
f.write(
|
||||
"""#!/bin/sh
|
||||
if [ "$CMP_ON" = "1" ]; then
|
||||
"""#!/bin/bash
|
||||
if [[ $CMP_ON == "1" ]]; then
|
||||
echo "$CMP_VER"
|
||||
fi
|
||||
"""
|
||||
@@ -741,49 +737,6 @@ def module(*args):
|
||||
assert version == test_version
|
||||
|
||||
|
||||
@pytest.mark.regression("42679")
|
||||
def test_get_compilers(config):
|
||||
"""Tests that we can select compilers whose versions differ only for a suffix."""
|
||||
common = {
|
||||
"flags": {},
|
||||
"operating_system": "ubuntu23.10",
|
||||
"target": "x86_64",
|
||||
"modules": [],
|
||||
"environment": {},
|
||||
"extra_rpaths": [],
|
||||
}
|
||||
with_suffix = {
|
||||
"spec": "gcc@13.2.0-suffix",
|
||||
"paths": {
|
||||
"cc": "/usr/bin/gcc-13.2.0-suffix",
|
||||
"cxx": "/usr/bin/g++-13.2.0-suffix",
|
||||
"f77": "/usr/bin/gfortran-13.2.0-suffix",
|
||||
"fc": "/usr/bin/gfortran-13.2.0-suffix",
|
||||
},
|
||||
**common,
|
||||
}
|
||||
without_suffix = {
|
||||
"spec": "gcc@13.2.0",
|
||||
"paths": {
|
||||
"cc": "/usr/bin/gcc-13.2.0",
|
||||
"cxx": "/usr/bin/g++-13.2.0",
|
||||
"f77": "/usr/bin/gfortran-13.2.0",
|
||||
"fc": "/usr/bin/gfortran-13.2.0",
|
||||
},
|
||||
**common,
|
||||
}
|
||||
|
||||
compilers = [{"compiler": without_suffix}, {"compiler": with_suffix}]
|
||||
|
||||
assert spack.compilers.get_compilers(
|
||||
compilers, cspec=spack.spec.CompilerSpec("gcc@=13.2.0-suffix")
|
||||
) == [spack.compilers._compiler_from_config_entry(with_suffix)]
|
||||
|
||||
assert spack.compilers.get_compilers(
|
||||
compilers, cspec=spack.spec.CompilerSpec("gcc@=13.2.0")
|
||||
) == [spack.compilers._compiler_from_config_entry(without_suffix)]
|
||||
|
||||
|
||||
def test_compiler_get_real_version_fails(working_env, monkeypatch, tmpdir):
|
||||
# Test variables
|
||||
test_version = "2.2.2"
|
||||
@@ -792,8 +745,8 @@ def test_compiler_get_real_version_fails(working_env, monkeypatch, tmpdir):
|
||||
gcc = str(tmpdir.join("gcc"))
|
||||
with open(gcc, "w") as f:
|
||||
f.write(
|
||||
"""#!/bin/sh
|
||||
if [ "$CMP_ON" = "1" ]; then
|
||||
"""#!/bin/bash
|
||||
if [[ $CMP_ON == "1" ]]; then
|
||||
echo "$CMP_VER"
|
||||
fi
|
||||
"""
|
||||
@@ -846,7 +799,7 @@ def test_compiler_flags_use_real_version(working_env, monkeypatch, tmpdir):
|
||||
gcc = str(tmpdir.join("gcc"))
|
||||
with open(gcc, "w") as f:
|
||||
f.write(
|
||||
"""#!/bin/sh
|
||||
"""#!/bin/bash
|
||||
echo "4.4.4"
|
||||
"""
|
||||
) # Version for which c++11 flag is -std=c++0x
|
||||
|
||||
@@ -203,9 +203,7 @@ def change(self, changes=None):
|
||||
# TODO: in case tests using this fixture start failing.
|
||||
if sys.modules.get("spack.pkg.changing.changing"):
|
||||
del sys.modules["spack.pkg.changing.changing"]
|
||||
if sys.modules.get("spack.pkg.changing.root"):
|
||||
del sys.modules["spack.pkg.changing.root"]
|
||||
if sys.modules.get("spack.pkg.changing"):
|
||||
del sys.modules["spack.pkg.changing"]
|
||||
|
||||
# Change the recipe
|
||||
@@ -1501,30 +1499,6 @@ def test_sticky_variant_in_package(self):
|
||||
s = Spec("sticky-variant %clang").concretized()
|
||||
assert s.satisfies("%clang") and s.satisfies("~allow-gcc")
|
||||
|
||||
@pytest.mark.regression("42172")
|
||||
@pytest.mark.only_clingo("Original concretizer cannot use sticky variants")
|
||||
@pytest.mark.parametrize(
|
||||
"spec,allow_gcc",
|
||||
[
|
||||
("sticky-variant@1.0+allow-gcc", True),
|
||||
("sticky-variant@1.0~allow-gcc", False),
|
||||
("sticky-variant@1.0", False),
|
||||
],
|
||||
)
|
||||
def test_sticky_variant_in_external(self, spec, allow_gcc):
|
||||
# setup external for sticky-variant+allow-gcc
|
||||
config = {"externals": [{"spec": spec, "prefix": "/fake/path"}], "buildable": False}
|
||||
spack.config.set("packages:sticky-variant", config)
|
||||
|
||||
maybe = llnl.util.lang.nullcontext if allow_gcc else pytest.raises
|
||||
with maybe(spack.error.SpackError):
|
||||
s = Spec("sticky-variant-dependent%gcc").concretized()
|
||||
|
||||
if allow_gcc:
|
||||
assert s.satisfies("%gcc")
|
||||
assert s["sticky-variant"].satisfies("+allow-gcc")
|
||||
assert s["sticky-variant"].external
|
||||
|
||||
@pytest.mark.only_clingo("Use case not supported by the original concretizer")
|
||||
def test_do_not_invent_new_concrete_versions_unless_necessary(self):
|
||||
# ensure we select a known satisfying version rather than creating
|
||||
@@ -1630,9 +1604,7 @@ def test_installed_version_is_selected_only_for_reuse(
|
||||
assert not new_root["changing"].satisfies("@1.0")
|
||||
|
||||
@pytest.mark.regression("28259")
|
||||
def test_reuse_with_unknown_namespace_dont_raise(
|
||||
self, temporary_store, mock_custom_repository
|
||||
):
|
||||
def test_reuse_with_unknown_namespace_dont_raise(self, mock_custom_repository):
|
||||
with spack.repo.use_repositories(mock_custom_repository, override=False):
|
||||
s = Spec("c").concretized()
|
||||
assert s.namespace != "builtin.mock"
|
||||
@@ -1643,8 +1615,8 @@ def test_reuse_with_unknown_namespace_dont_raise(
|
||||
assert s.namespace == "builtin.mock"
|
||||
|
||||
@pytest.mark.regression("28259")
|
||||
def test_reuse_with_unknown_package_dont_raise(self, tmpdir, temporary_store, monkeypatch):
|
||||
builder = spack.repo.MockRepositoryBuilder(tmpdir.mkdir("mock.repo"), namespace="myrepo")
|
||||
def test_reuse_with_unknown_package_dont_raise(self, tmpdir, monkeypatch):
|
||||
builder = spack.repo.MockRepositoryBuilder(tmpdir, namespace="myrepo")
|
||||
builder.add_package("c")
|
||||
with spack.repo.use_repositories(builder.root, override=False):
|
||||
s = Spec("c").concretized()
|
||||
@@ -2208,33 +2180,6 @@ def test_reuse_python_from_cli_and_extension_from_db(self, mutable_database):
|
||||
|
||||
assert with_reuse.dag_hash() == without_reuse.dag_hash()
|
||||
|
||||
@pytest.mark.regression("35536")
|
||||
@pytest.mark.parametrize(
|
||||
"spec_str,expected_namespaces",
|
||||
[
|
||||
# Single node with fully qualified namespace
|
||||
("builtin.mock.gmake", {"gmake": "builtin.mock"}),
|
||||
# Dependency with fully qualified namespace
|
||||
("hdf5 ^builtin.mock.gmake", {"gmake": "builtin.mock", "hdf5": "duplicates.test"}),
|
||||
("hdf5 ^gmake", {"gmake": "duplicates.test", "hdf5": "duplicates.test"}),
|
||||
],
|
||||
)
|
||||
@pytest.mark.only_clingo("Uses specs requiring multiple gmake specs")
|
||||
def test_select_lower_priority_package_from_repository_stack(
|
||||
self, spec_str, expected_namespaces
|
||||
):
|
||||
"""Tests that a user can explicitly select a lower priority, fully qualified dependency
|
||||
from cli.
|
||||
"""
|
||||
# 'builtin.mock" and "duplicates.test" share a 'gmake' package
|
||||
additional_repo = os.path.join(spack.paths.repos_path, "duplicates.test")
|
||||
with spack.repo.use_repositories(additional_repo, override=False):
|
||||
s = Spec(spec_str).concretized()
|
||||
|
||||
for name, namespace in expected_namespaces.items():
|
||||
assert s[name].concrete
|
||||
assert s[name].namespace == namespace
|
||||
|
||||
|
||||
@pytest.fixture()
|
||||
def duplicates_test_repository():
|
||||
|
||||
@@ -16,8 +16,8 @@
|
||||
version_error_messages = [
|
||||
"Cannot satisfy 'fftw@:1.0' and 'fftw@1.1:",
|
||||
" required because quantum-espresso depends on fftw@:1.0",
|
||||
" required because quantum-espresso ^fftw@1.1: requested explicitly",
|
||||
" required because quantum-espresso ^fftw@1.1: requested explicitly",
|
||||
" required because quantum-espresso ^fftw@1.1: requested from CLI",
|
||||
" required because quantum-espresso ^fftw@1.1: requested from CLI",
|
||||
]
|
||||
|
||||
external_error_messages = [
|
||||
@@ -30,15 +30,15 @@
|
||||
" which was not satisfied"
|
||||
),
|
||||
" 'quantum-espresso+veritas' required",
|
||||
" required because quantum-espresso+veritas requested explicitly",
|
||||
" required because quantum-espresso+veritas requested from CLI",
|
||||
]
|
||||
|
||||
variant_error_messages = [
|
||||
"'fftw' required multiple values for single-valued variant 'mpi'",
|
||||
" Requested '~mpi' and '+mpi'",
|
||||
" required because quantum-espresso depends on fftw+mpi when +invino",
|
||||
" required because quantum-espresso+invino ^fftw~mpi requested explicitly",
|
||||
" required because quantum-espresso+invino ^fftw~mpi requested explicitly",
|
||||
" required because quantum-espresso+invino ^fftw~mpi requested from CLI",
|
||||
" required because quantum-espresso+invino ^fftw~mpi requested from CLI",
|
||||
]
|
||||
|
||||
external_config = {
|
||||
|
||||
@@ -504,13 +504,3 @@ def test_sticky_variant_accounts_for_packages_yaml(self):
|
||||
with spack.config.override("packages:sticky-variant", {"variants": "+allow-gcc"}):
|
||||
s = Spec("sticky-variant %gcc").concretized()
|
||||
assert s.satisfies("%gcc") and s.satisfies("+allow-gcc")
|
||||
|
||||
@pytest.mark.regression("41134")
|
||||
@pytest.mark.only_clingo("Not backporting the fix to the old concretizer")
|
||||
def test_default_preference_variant_different_type_does_not_error(self):
|
||||
"""Tests that a different type for an existing variant in the 'all:' section of
|
||||
packages.yaml doesn't fail with an error.
|
||||
"""
|
||||
with spack.config.override("packages:all", {"variants": "+foo"}):
|
||||
s = Spec("a").concretized()
|
||||
assert s.satisfies("foo=bar")
|
||||
|
||||
@@ -896,69 +896,3 @@ def test_requires_directive(concretize_scope, mock_packages):
|
||||
# This package can only be compiled with clang
|
||||
with pytest.raises(spack.error.SpackError, match="can only be compiled with Clang"):
|
||||
Spec("requires_clang").concretized()
|
||||
|
||||
|
||||
@pytest.mark.regression("42084")
|
||||
def test_requiring_package_on_multiple_virtuals(concretize_scope, mock_packages):
|
||||
update_packages_config(
|
||||
"""
|
||||
packages:
|
||||
all:
|
||||
providers:
|
||||
scalapack: [netlib-scalapack]
|
||||
blas:
|
||||
require: intel-parallel-studio
|
||||
lapack:
|
||||
require: intel-parallel-studio
|
||||
scalapack:
|
||||
require: intel-parallel-studio
|
||||
"""
|
||||
)
|
||||
s = Spec("dla-future").concretized()
|
||||
|
||||
assert s["blas"].name == "intel-parallel-studio"
|
||||
assert s["lapack"].name == "intel-parallel-studio"
|
||||
assert s["scalapack"].name == "intel-parallel-studio"
|
||||
|
||||
|
||||
@pytest.mark.parametrize(
|
||||
"spec_str,expected,not_expected",
|
||||
[
|
||||
(
|
||||
"forward-multi-value +cuda cuda_arch=10 ^dependency-mv~cuda",
|
||||
["cuda_arch=10", "^dependency-mv~cuda"],
|
||||
["cuda_arch=11", "^dependency-mv cuda_arch=10", "^dependency-mv cuda_arch=11"],
|
||||
),
|
||||
(
|
||||
"forward-multi-value +cuda cuda_arch=10 ^dependency-mv+cuda",
|
||||
["cuda_arch=10", "^dependency-mv cuda_arch=10"],
|
||||
["cuda_arch=11", "^dependency-mv cuda_arch=11"],
|
||||
),
|
||||
(
|
||||
"forward-multi-value +cuda cuda_arch=11 ^dependency-mv+cuda",
|
||||
["cuda_arch=11", "^dependency-mv cuda_arch=11"],
|
||||
["cuda_arch=10", "^dependency-mv cuda_arch=10"],
|
||||
),
|
||||
(
|
||||
"forward-multi-value +cuda cuda_arch=10,11 ^dependency-mv+cuda",
|
||||
["cuda_arch=10,11", "^dependency-mv cuda_arch=10,11"],
|
||||
[],
|
||||
),
|
||||
],
|
||||
)
|
||||
def test_forward_multi_valued_variant_using_requires(
|
||||
spec_str, expected, not_expected, config, mock_packages
|
||||
):
|
||||
"""Tests that a package can forward multivalue variants to dependencies, using
|
||||
`requires` directives of the form:
|
||||
|
||||
for _val in ("shared", "static"):
|
||||
requires(f"^some-virtual-mv libs={_val}", when=f"libs={_val} ^some-virtual-mv")
|
||||
"""
|
||||
s = Spec(spec_str).concretized()
|
||||
|
||||
for constraint in expected:
|
||||
assert s.satisfies(constraint)
|
||||
|
||||
for constraint in not_expected:
|
||||
assert not s.satisfies(constraint)
|
||||
|
||||
@@ -1239,11 +1239,11 @@ def test_user_config_path_is_default_when_env_var_is_empty(working_env):
|
||||
assert os.path.expanduser("~%s.spack" % os.sep) == spack.paths._get_user_config_path()
|
||||
|
||||
|
||||
def test_default_install_tree(monkeypatch, default_config):
|
||||
def test_default_install_tree(monkeypatch):
|
||||
s = spack.spec.Spec("nonexistent@x.y.z %none@a.b.c arch=foo-bar-baz")
|
||||
monkeypatch.setattr(s, "dag_hash", lambda: "abc123")
|
||||
_, _, projections = spack.store.parse_install_tree(spack.config.get("config"))
|
||||
assert s.format(projections["all"]) == "foo-bar-baz/none-a.b.c/nonexistent-x.y.z-abc123"
|
||||
projection = spack.config.get("config:install_tree:projections:all", scope="defaults")
|
||||
assert s.format(projection) == "foo-bar-baz/none-a.b.c/nonexistent-x.y.z-abc123"
|
||||
|
||||
|
||||
def test_local_config_can_be_disabled(working_env):
|
||||
|
||||
@@ -629,7 +629,7 @@ def platform_config():
|
||||
spack.config.add_default_platform_scope(spack.platforms.real_host().name)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
@pytest.fixture(scope="session")
|
||||
def default_config():
|
||||
"""Isolates the default configuration from the user configs.
|
||||
|
||||
@@ -713,6 +713,9 @@ def configuration_dir(tmpdir_factory, linux_os):
|
||||
t.write(content)
|
||||
yield tmpdir
|
||||
|
||||
# Once done, cleanup the directory
|
||||
shutil.rmtree(str(tmpdir))
|
||||
|
||||
|
||||
def _create_mock_configuration_scopes(configuration_dir):
|
||||
"""Create the configuration scopes used in `config` and `mutable_config`."""
|
||||
@@ -1950,5 +1953,17 @@ def pytest_runtest_setup(item):

@pytest.fixture(scope="function")
def disable_parallel_buildcache_push(monkeypatch):
    """Disable process pools in tests."""
    monkeypatch.setattr(spack.cmd.buildcache, "_make_pool", spack.cmd.buildcache.NoPool)
    class MockPool:
        def map(self, func, args):
            return [func(a) for a in args]

        def starmap(self, func, args):
            return [func(*a) for a in args]

        def __enter__(self):
            return self

        def __exit__(self, *args):
            pass

    monkeypatch.setattr(spack.cmd.buildcache, "_make_pool", MockPool)
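The fixture above swaps the buildcache command's process pool for an in-process stand-in, so pushes run serially and deterministically under test. A toy demonstration, not from the diff, of why such a shim is a drop-in replacement: it exposes the same `map`, `starmap`, and context-manager surface as a real pool, just without forking (the `push` function below is a made-up task):

```python
# Toy demonstration of the in-process pool shim used by the fixture above.
class MockPool:
    def map(self, func, args):
        return [func(a) for a in args]

    def starmap(self, func, args):
        return [func(*a) for a in args]

    def __enter__(self):
        return self

    def __exit__(self, *args):
        pass

def push(name, mirror="default"):  # stand-in for a per-spec push task
    return f"pushed {name} to {mirror}"

with MockPool() as pool:
    print(pool.map(push, ["zlib", "hdf5"]))
    print(pool.starmap(push, [("zlib", "oci"), ("hdf5", "oci")]))
```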
@@ -695,7 +695,7 @@ def test_removing_spec_from_manifest_with_exact_duplicates(
|
||||
|
||||
@pytest.mark.regression("35298")
|
||||
@pytest.mark.only_clingo("Propagation not supported in the original concretizer")
|
||||
def test_variant_propagation_with_unify_false(tmp_path, mock_packages, config):
|
||||
def test_variant_propagation_with_unify_false(tmp_path, mock_packages):
|
||||
"""Spack distributes concretizations to different processes, when unify:false is selected and
|
||||
the number of roots is 2 or more. When that happens, the specs to be concretized need to be
|
||||
properly reconstructed on the worker process, if variant propagation was requested.
|
||||
@@ -778,32 +778,3 @@ def test_env_with_include_def_missing(mutable_mock_env_path, mock_packages):
|
||||
with e:
|
||||
with pytest.raises(UndefinedReferenceError, match=r"which does not appear"):
|
||||
e.concretize()
|
||||
|
||||
|
||||
@pytest.mark.regression("41292")
|
||||
def test_deconcretize_then_concretize_does_not_error(mutable_mock_env_path, mock_packages):
|
||||
"""Tests that, after having deconcretized a spec, we can reconcretize an environment which
|
||||
has 2 or more user specs mapping to the same concrete spec.
|
||||
"""
|
||||
mutable_mock_env_path.mkdir()
|
||||
spack_yaml = mutable_mock_env_path / ev.manifest_name
|
||||
spack_yaml.write_text(
|
||||
"""spack:
|
||||
specs:
|
||||
# These two specs concretize to the same hash
|
||||
- c
|
||||
- c@1.0
|
||||
# Spec used to trigger the bug
|
||||
- a
|
||||
concretizer:
|
||||
unify: true
|
||||
"""
|
||||
)
|
||||
e = ev.Environment(mutable_mock_env_path)
|
||||
with e:
|
||||
e.concretize()
|
||||
e.deconcretize(spack.spec.Spec("a"), concrete=False)
|
||||
e.concretize()
|
||||
assert len(e.concrete_roots()) == 3
|
||||
all_root_hashes = set(x.dag_hash() for x in e.concrete_roots())
|
||||
assert len(all_root_hashes) == 2
|
||||
|
||||
@@ -12,12 +12,10 @@
|
||||
import llnl.util.filesystem as fs
|
||||
|
||||
import spack.error
|
||||
import spack.mirror
|
||||
import spack.patch
|
||||
import spack.repo
|
||||
import spack.store
|
||||
import spack.util.spack_json as sjson
|
||||
from spack import binary_distribution
|
||||
from spack.package_base import (
|
||||
InstallError,
|
||||
PackageBase,
|
||||
@@ -120,25 +118,59 @@ def remove_prefix(self):
|
||||
self.wrapped_rm_prefix()
|
||||
|
||||
|
||||
class MockStage:
|
||||
def __init__(self, wrapped_stage):
|
||||
self.wrapped_stage = wrapped_stage
|
||||
self.test_destroyed = False
|
||||
|
||||
def __enter__(self):
|
||||
self.create()
|
||||
return self
|
||||
|
||||
def __exit__(self, exc_type, exc_val, exc_tb):
|
||||
if exc_type is None:
|
||||
self.destroy()
|
||||
|
||||
def destroy(self):
|
||||
self.test_destroyed = True
|
||||
self.wrapped_stage.destroy()
|
||||
|
||||
def create(self):
|
||||
self.wrapped_stage.create()
|
||||
|
||||
def __getattr__(self, attr):
|
||||
if attr == "wrapped_stage":
|
||||
# This attribute may not be defined at some point during unpickling
|
||||
raise AttributeError()
|
||||
return getattr(self.wrapped_stage, attr)
|
||||
|
||||
|
||||
def test_partial_install_delete_prefix_and_stage(install_mockery, mock_fetch, working_env):
|
||||
s = Spec("canfail").concretized()
|
||||
|
||||
instance_rm_prefix = s.package.remove_prefix
|
||||
|
||||
s.package.remove_prefix = mock_remove_prefix
|
||||
with pytest.raises(MockInstallError):
|
||||
s.package.do_install()
|
||||
assert os.path.isdir(s.package.prefix)
|
||||
rm_prefix_checker = RemovePrefixChecker(instance_rm_prefix)
|
||||
s.package.remove_prefix = rm_prefix_checker.remove_prefix
|
||||
try:
|
||||
s.package.remove_prefix = mock_remove_prefix
|
||||
with pytest.raises(MockInstallError):
|
||||
s.package.do_install()
|
||||
assert os.path.isdir(s.package.prefix)
|
||||
rm_prefix_checker = RemovePrefixChecker(instance_rm_prefix)
|
||||
s.package.remove_prefix = rm_prefix_checker.remove_prefix
|
||||
|
||||
# must clear failure markings for the package before re-installing it
|
||||
spack.store.STORE.failure_tracker.clear(s, True)
|
||||
# must clear failure markings for the package before re-installing it
|
||||
spack.store.STORE.failure_tracker.clear(s, True)
|
||||
|
||||
s.package.set_install_succeed()
|
||||
s.package.do_install(restage=True)
|
||||
assert rm_prefix_checker.removed
|
||||
assert s.package.spec.installed
|
||||
s.package.set_install_succeed()
|
||||
s.package.stage = MockStage(s.package.stage)
|
||||
|
||||
s.package.do_install(restage=True)
|
||||
assert rm_prefix_checker.removed
|
||||
assert s.package.stage.test_destroyed
|
||||
assert s.package.spec.installed
|
||||
|
||||
finally:
|
||||
s.package.remove_prefix = instance_rm_prefix
|
||||
|
||||
|
||||
@pytest.mark.disable_clean_stage_check
|
||||
@@ -325,8 +357,10 @@ def test_partial_install_keep_prefix(install_mockery, mock_fetch, monkeypatch, w
|
||||
spack.store.STORE.failure_tracker.clear(s, True)
|
||||
|
||||
s.package.set_install_succeed()
|
||||
s.package.stage = MockStage(s.package.stage)
|
||||
s.package.do_install(keep_prefix=True)
|
||||
assert s.package.spec.installed
|
||||
assert not s.package.stage.test_destroyed
|
||||
|
||||
|
||||
def test_second_install_no_overwrite_first(install_mockery, mock_fetch, monkeypatch):
|
||||
@@ -610,48 +644,3 @@ def test_empty_install_sanity_check_prefix(
|
||||
spec = Spec("failing-empty-install").concretized()
|
||||
with pytest.raises(spack.build_environment.ChildError, match="Nothing was installed"):
|
||||
spec.package.do_install()
|
||||
|
||||
|
||||
def test_install_from_binary_with_missing_patch_succeeds(
|
||||
temporary_store: spack.store.Store, mutable_config, tmp_path, mock_packages
|
||||
):
|
||||
"""If a patch is missing in the local package repository, but was present when building and
|
||||
pushing the package to a binary cache, installation from that binary cache shouldn't error out
|
||||
because of the missing patch."""
|
||||
# Create a spec s with non-existing patches
|
||||
s = Spec("trivial-install-test-package").concretized()
|
||||
patches = ["a" * 64]
|
||||
s_dict = s.to_dict()
|
||||
s_dict["spec"]["nodes"][0]["patches"] = patches
|
||||
s_dict["spec"]["nodes"][0]["parameters"]["patches"] = patches
|
||||
s = Spec.from_dict(s_dict)
|
||||
|
||||
# Create an install dir for it
|
||||
os.makedirs(os.path.join(s.prefix, ".spack"))
|
||||
with open(os.path.join(s.prefix, ".spack", "spec.json"), "w") as f:
|
||||
s.to_json(f)
|
||||
|
||||
# And register it in the database
|
||||
temporary_store.db.add(s, directory_layout=temporary_store.layout, explicit=True)
|
||||
|
||||
# Push it to a binary cache
|
||||
build_cache = tmp_path / "my_build_cache"
|
||||
binary_distribution.push_or_raise(
|
||||
s,
|
||||
build_cache.as_uri(),
|
||||
binary_distribution.PushOptions(unsigned=True, regenerate_index=True),
|
||||
)
|
||||
|
||||
# Now re-install it.
|
||||
s.package.do_uninstall()
|
||||
assert not temporary_store.db.query_local_by_spec_hash(s.dag_hash())
|
||||
|
||||
# Source install: fails, we don't have the patch.
|
||||
with pytest.raises(spack.error.SpecError, match="Couldn't find patch for package"):
|
||||
s.package.do_install()
|
||||
|
||||
# Binary install: succeeds, we don't need the patch.
|
||||
spack.mirror.add(spack.mirror.Mirror.from_local_path(str(build_cache)))
|
||||
s.package.do_install(package_cache_only=True, dependencies_cache_only=True, unsigned=True)
|
||||
|
||||
assert temporary_store.db.query_local_by_spec_hash(s.dag_hash())
|
||||
|
||||
@@ -165,19 +165,23 @@ def test_install_msg(monkeypatch):
|
||||
assert inst.install_msg(name, pid, None) == expected
|
||||
|
||||
|
||||
def test_install_from_cache_errors(install_mockery):
|
||||
"""Test to ensure cover install from cache errors."""
|
||||
def test_install_from_cache_errors(install_mockery, capsys):
|
||||
"""Test to ensure cover _install_from_cache errors."""
|
||||
spec = spack.spec.Spec("trivial-install-test-package")
|
||||
spec.concretize()
|
||||
assert spec.concrete
|
||||
|
||||
# Check with cache-only
|
||||
with pytest.raises(inst.InstallError, match="No binary found when cache-only was specified"):
|
||||
spec.package.do_install(package_cache_only=True, dependencies_cache_only=True)
|
||||
with pytest.raises(SystemExit):
|
||||
inst._install_from_cache(spec.package, True, True, False)
|
||||
|
||||
captured = str(capsys.readouterr())
|
||||
assert "No binary" in captured
|
||||
assert "found when cache-only specified" in captured
|
||||
assert not spec.package.installed_from_binary_cache
|
||||
|
||||
# Check when don't expect to install only from binary cache
|
||||
assert not inst._install_from_cache(spec.package, explicit=True, unsigned=False)
|
||||
assert not inst._install_from_cache(spec.package, False, True, False)
|
||||
assert not spec.package.installed_from_binary_cache
|
||||
|
||||
|
||||
@@ -188,7 +192,7 @@ def test_install_from_cache_ok(install_mockery, monkeypatch):
|
||||
monkeypatch.setattr(inst, "_try_install_from_binary_cache", _true)
|
||||
monkeypatch.setattr(spack.hooks, "post_install", _noop)
|
||||
|
||||
assert inst._install_from_cache(spec.package, explicit=True, unsigned=False)
|
||||
assert inst._install_from_cache(spec.package, True, True, False)
|
||||
|
||||
|
||||
def test_process_external_package_module(install_mockery, monkeypatch, capfd):
|
||||
|
||||
@@ -9,7 +9,10 @@
|
||||
This just tests whether the right args are getting passed to make.
|
||||
"""
|
||||
import os
|
||||
import shutil
|
||||
import sys
|
||||
import tempfile
|
||||
import unittest
|
||||
|
||||
import pytest
|
||||
|
||||
@@ -17,104 +20,110 @@
|
||||
from spack.util.environment import path_put_first
|
||||
|
||||
pytestmark = pytest.mark.skipif(
|
||||
sys.platform == "win32", reason="MakeExecutable not supported on Windows"
|
||||
sys.platform == "win32",
|
||||
reason="MakeExecutable \
|
||||
not supported on Windows",
|
||||
)
|
||||
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def make_executable(tmp_path, working_env):
|
||||
make_exe = tmp_path / "make"
|
||||
with open(make_exe, "w") as f:
|
||||
f.write("#!/bin/sh\n")
|
||||
f.write('echo "$@"')
|
||||
os.chmod(make_exe, 0o700)
|
||||
class MakeExecutableTest(unittest.TestCase):
|
||||
def setUp(self):
|
||||
self.tmpdir = tempfile.mkdtemp()
|
||||
|
||||
path_put_first("PATH", [tmp_path])
|
||||
make_exe = os.path.join(self.tmpdir, "make")
|
||||
with open(make_exe, "w") as f:
|
||||
f.write("#!/bin/sh\n")
|
||||
f.write('echo "$@"')
|
||||
os.chmod(make_exe, 0o700)
|
||||
|
||||
path_put_first("PATH", [self.tmpdir])
|
||||
|
||||
def test_make_normal():
|
||||
make = MakeExecutable("make", 8)
|
||||
assert make(output=str).strip() == "-j8"
|
||||
assert make("install", output=str).strip() == "-j8 install"
|
||||
def tearDown(self):
|
||||
shutil.rmtree(self.tmpdir)
|
||||
|
||||
def test_make_normal(self):
|
||||
make = MakeExecutable("make", 8)
|
||||
self.assertEqual(make(output=str).strip(), "-j8")
|
||||
self.assertEqual(make("install", output=str).strip(), "-j8 install")
|
||||
|
||||
def test_make_explicit():
|
||||
make = MakeExecutable("make", 8)
|
||||
assert make(parallel=True, output=str).strip() == "-j8"
|
||||
assert make("install", parallel=True, output=str).strip() == "-j8 install"
|
||||
def test_make_explicit(self):
|
||||
make = MakeExecutable("make", 8)
|
||||
self.assertEqual(make(parallel=True, output=str).strip(), "-j8")
|
||||
self.assertEqual(make("install", parallel=True, output=str).strip(), "-j8 install")
|
||||
|
||||
def test_make_one_job(self):
|
||||
make = MakeExecutable("make", 1)
|
||||
self.assertEqual(make(output=str).strip(), "-j1")
|
||||
self.assertEqual(make("install", output=str).strip(), "-j1 install")
|
||||
|
||||
def test_make_one_job():
|
||||
make = MakeExecutable("make", 1)
|
||||
assert make(output=str).strip() == "-j1"
|
||||
assert make("install", output=str).strip() == "-j1 install"
|
||||
def test_make_parallel_false(self):
|
||||
make = MakeExecutable("make", 8)
|
||||
self.assertEqual(make(parallel=False, output=str).strip(), "-j1")
|
||||
self.assertEqual(make("install", parallel=False, output=str).strip(), "-j1 install")
|
||||
|
||||
def test_make_parallel_disabled(self):
|
||||
make = MakeExecutable("make", 8)
|
||||
|
||||
def test_make_parallel_false():
|
||||
make = MakeExecutable("make", 8)
|
||||
assert make(parallel=False, output=str).strip() == "-j1"
|
||||
assert make("install", parallel=False, output=str).strip() == "-j1 install"
|
||||
os.environ["SPACK_NO_PARALLEL_MAKE"] = "true"
|
||||
self.assertEqual(make(output=str).strip(), "-j1")
|
||||
self.assertEqual(make("install", output=str).strip(), "-j1 install")
|
||||
|
||||
os.environ["SPACK_NO_PARALLEL_MAKE"] = "1"
|
||||
self.assertEqual(make(output=str).strip(), "-j1")
|
||||
self.assertEqual(make("install", output=str).strip(), "-j1 install")
|
||||
|
||||
def test_make_parallel_disabled(monkeypatch):
|
||||
make = MakeExecutable("make", 8)
|
||||
# These don't disable (false and random string)
|
||||
os.environ["SPACK_NO_PARALLEL_MAKE"] = "false"
|
||||
self.assertEqual(make(output=str).strip(), "-j8")
|
||||
self.assertEqual(make("install", output=str).strip(), "-j8 install")
|
||||
|
||||
monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "true")
|
||||
assert make(output=str).strip() == "-j1"
|
||||
assert make("install", output=str).strip() == "-j1 install"
|
||||
os.environ["SPACK_NO_PARALLEL_MAKE"] = "foobar"
|
||||
self.assertEqual(make(output=str).strip(), "-j8")
|
||||
self.assertEqual(make("install", output=str).strip(), "-j8 install")
|
||||
|
||||
monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "1")
|
||||
assert make(output=str).strip() == "-j1"
|
||||
assert make("install", output=str).strip() == "-j1 install"
|
||||
del os.environ["SPACK_NO_PARALLEL_MAKE"]
|
||||
|
||||
# These don't disable (false and random string)
|
||||
monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "false")
|
||||
assert make(output=str).strip() == "-j8"
|
||||
assert make("install", output=str).strip() == "-j8 install"
|
||||
def test_make_parallel_precedence(self):
|
||||
make = MakeExecutable("make", 8)
|
||||
|
||||
monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "foobar")
|
||||
assert make(output=str).strip() == "-j8"
|
||||
assert make("install", output=str).strip() == "-j8 install"
|
||||
# These should work
|
||||
os.environ["SPACK_NO_PARALLEL_MAKE"] = "true"
|
||||
self.assertEqual(make(parallel=True, output=str).strip(), "-j1")
|
||||
self.assertEqual(make("install", parallel=True, output=str).strip(), "-j1 install")
|
||||
|
||||
os.environ["SPACK_NO_PARALLEL_MAKE"] = "1"
|
||||
self.assertEqual(make(parallel=True, output=str).strip(), "-j1")
|
||||
self.assertEqual(make("install", parallel=True, output=str).strip(), "-j1 install")
|
||||
|
||||
def test_make_parallel_precedence(monkeypatch):
|
||||
make = MakeExecutable("make", 8)
|
||||
# These don't disable (false and random string)
|
||||
os.environ["SPACK_NO_PARALLEL_MAKE"] = "false"
|
||||
self.assertEqual(make(parallel=True, output=str).strip(), "-j8")
|
||||
self.assertEqual(make("install", parallel=True, output=str).strip(), "-j8 install")
|
||||
|
||||
# These should work
|
||||
monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "true")
|
||||
assert make(parallel=True, output=str).strip() == "-j1"
|
||||
assert make("install", parallel=True, output=str).strip() == "-j1 install"
|
||||
os.environ["SPACK_NO_PARALLEL_MAKE"] = "foobar"
|
||||
self.assertEqual(make(parallel=True, output=str).strip(), "-j8")
|
||||
self.assertEqual(make("install", parallel=True, output=str).strip(), "-j8 install")
|
||||
|
||||
monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "1")
|
||||
assert make(parallel=True, output=str).strip() == "-j1"
|
||||
assert make("install", parallel=True, output=str).strip() == "-j1 install"
|
||||
del os.environ["SPACK_NO_PARALLEL_MAKE"]
|
||||
|
||||
# These don't disable (false and random string)
|
||||
monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "false")
|
||||
assert make(parallel=True, output=str).strip() == "-j8"
|
||||
assert make("install", parallel=True, output=str).strip() == "-j8 install"
|
||||
def test_make_jobs_env(self):
|
||||
make = MakeExecutable("make", 8)
|
||||
dump_env = {}
|
||||
self.assertEqual(
|
||||
make(output=str, jobs_env="MAKE_PARALLELISM", _dump_env=dump_env).strip(), "-j8"
|
||||
)
|
||||
self.assertEqual(dump_env["MAKE_PARALLELISM"], "8")
|
||||
|
||||
monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "foobar")
|
||||
assert make(parallel=True, output=str).strip() == "-j8"
|
||||
assert make("install", parallel=True, output=str).strip() == "-j8 install"
|
||||
def test_make_jobserver(self):
|
||||
make = MakeExecutable("make", 8)
|
||||
os.environ["MAKEFLAGS"] = "--jobserver-auth=X,Y"
|
||||
self.assertEqual(make(output=str).strip(), "")
|
||||
self.assertEqual(make(parallel=False, output=str).strip(), "-j1")
|
||||
del os.environ["MAKEFLAGS"]
|
||||
|
||||
|
||||
def test_make_jobs_env():
|
||||
make = MakeExecutable("make", 8)
|
||||
dump_env = {}
|
||||
assert make(output=str, jobs_env="MAKE_PARALLELISM", _dump_env=dump_env).strip() == "-j8"
|
||||
assert dump_env["MAKE_PARALLELISM"] == "8"
|
||||
|
||||
|
||||
def test_make_jobserver(monkeypatch):
|
||||
make = MakeExecutable("make", 8)
|
||||
monkeypatch.setenv("MAKEFLAGS", "--jobserver-auth=X,Y")
|
||||
assert make(output=str).strip() == ""
|
||||
assert make(parallel=False, output=str).strip() == "-j1"
|
||||
|
||||
|
||||
def test_make_jobserver_not_supported(monkeypatch):
|
||||
make = MakeExecutable("make", 8, supports_jobserver=False)
|
||||
monkeypatch.setenv("MAKEFLAGS", "--jobserver-auth=X,Y")
|
||||
# Currently falls back to the default job count; maybe it should force -j1?
|
||||
assert make(output=str).strip() == "-j8"
|
||||
def test_make_jobserver_not_supported(self):
|
||||
make = MakeExecutable("make", 8, supports_jobserver=False)
|
||||
os.environ["MAKEFLAGS"] = "--jobserver-auth=X,Y"
|
||||
# Currently falls back to the default job count; maybe it should force -j1?
|
||||
self.assertEqual(make(output=str).strip(), "-j8")
|
||||
del os.environ["MAKEFLAGS"]
|
||||
|
||||
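The hunks above convert the make-wrapper tests from unittest to pytest while asserting the same `-j` selection rules. As a minimal standalone sketch (not Spack's actual `MakeExecutable`, and omitting the `jobs_env` dump), the decision logic those tests pin down looks roughly like this:

```python
import os


def make_jobs_flag(jobs, parallel=True, supports_jobserver=True):
    """Hypothetical helper mirroring the behavior the tests above assert."""
    # Only the literal strings "true" and "1" disable parallel builds.
    disabled = os.environ.get("SPACK_NO_PARALLEL_MAKE", "") in ("true", "1")
    if disabled or not parallel or jobs <= 1:
        return "-j1"
    # Defer to an active GNU make jobserver when we know how to use it.
    if supports_jobserver and "--jobserver-auth" in os.environ.get("MAKEFLAGS", ""):
        return ""
    return f"-j{jobs}"


assert make_jobs_flag(8) == "-j8"
os.environ["SPACK_NO_PARALLEL_MAKE"] = "true"
assert make_jobs_flag(8) == "-j1"  # the env var wins even with parallel=True
del os.environ["SPACK_NO_PARALLEL_MAKE"]
```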
@@ -27,13 +27,16 @@
|
||||
]
|
||||
|
||||
|
||||
def test_module_function_change_env(tmp_path):
    environb = {b"TEST_MODULE_ENV_VAR": b"TEST_FAIL", b"NOT_AFFECTED": b"NOT_AFFECTED"}
    src_file = tmp_path / "src_me"
    src_file.write_text("export TEST_MODULE_ENV_VAR=TEST_SUCCESS\n")
    module("load", str(src_file), module_template=f". {src_file} 2>&1", environb=environb)
    assert environb[b"TEST_MODULE_ENV_VAR"] == b"TEST_SUCCESS"
    assert environb[b"NOT_AFFECTED"] == b"NOT_AFFECTED"
|
||||
def test_module_function_change_env(tmpdir, working_env):
|
||||
src_file = str(tmpdir.join("src_me"))
|
||||
with open(src_file, "w") as f:
|
||||
f.write("export TEST_MODULE_ENV_VAR=TEST_SUCCESS\n")
|
||||
|
||||
os.environ["NOT_AFFECTED"] = "NOT_AFFECTED"
|
||||
module("load", src_file, module_template=". {0} 2>&1".format(src_file))
|
||||
|
||||
assert os.environ["TEST_MODULE_ENV_VAR"] == "TEST_SUCCESS"
|
||||
assert os.environ["NOT_AFFECTED"] == "NOT_AFFECTED"
|
||||
|
||||
|
||||
def test_module_function_no_change(tmpdir):
|
||||
|
||||
@@ -1517,9 +1517,3 @@ def test_edge_equality_does_not_depend_on_virtual_order():
|
||||
assert edge1 == edge2
|
||||
assert tuple(sorted(edge1.virtuals)) == edge1.virtuals
|
||||
assert tuple(sorted(edge2.virtuals)) == edge1.virtuals
|
||||
|
||||
|
||||
def test_old_format_strings_trigger_error(default_mock_concretization):
|
||||
s = Spec("a").concretized()
|
||||
with pytest.raises(SpecFormatStringError):
|
||||
s.format("${PACKAGE}-${VERSION}-${HASH}")
|
||||
|
||||
@@ -147,8 +147,7 @@ def test_reverse_environment_modifications(working_env):
|
||||
|
||||
reversal = to_reverse.reversed()
|
||||
|
||||
os.environ.clear()
|
||||
os.environ.update(start_env)
|
||||
os.environ = start_env.copy()
|
||||
|
||||
print(os.environ)
|
||||
to_reverse.apply_modifications()
|
||||
|
||||
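The change above swaps `os.environ = start_env.copy()` for `clear()` plus `update()`. A short sketch of why that matters: rebinding the name `os.environ` to a plain dict detaches it from the real process environment, while mutating the existing mapping keeps `putenv`/`unsetenv` in the loop (names here are illustrative):

```python
import os

snapshot = dict(os.environ)          # plain copy taken before the test mutates things
os.environ["DEMO_VAR"] = "changed"   # hypothetical modification made by the test

# Restore by mutating the live mapping rather than rebinding the name:
os.environ.clear()
os.environ.update(snapshot)
assert "DEMO_VAR" not in os.environ  # restored, and still synced with the process
```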
@@ -89,8 +89,8 @@ def test_which_with_slash_ignores_path(tmpdir, working_env):
|
||||
assert exe.path == path
|
||||
|
||||
|
||||
def test_which(tmpdir, monkeypatch):
|
||||
monkeypatch.setenv("PATH", str(tmpdir))
|
||||
def test_which(tmpdir):
|
||||
os.environ["PATH"] = str(tmpdir)
|
||||
assert ex.which("spack-test-exe") is None
|
||||
|
||||
with pytest.raises(ex.CommandNotFoundError):
|
||||
|
||||
@@ -10,7 +10,6 @@
|
||||
import os
|
||||
import re
|
||||
import subprocess
|
||||
from typing import MutableMapping, Optional
|
||||
|
||||
import llnl.util.tty as tty
|
||||
|
||||
@@ -22,13 +21,8 @@
|
||||
awk_cmd = r"""awk 'BEGIN{for(name in ENVIRON)""" r"""printf("%s=%s%c", name, ENVIRON[name], 0)}'"""
|
||||
|
||||
|
||||
def module(
    *args,
    module_template: Optional[str] = None,
    environb: Optional[MutableMapping[bytes, bytes]] = None,
):
    module_cmd = module_template or ("module " + " ".join(args))
    environb = environb or os.environb
|
||||
def module(*args, **kwargs):
|
||||
module_cmd = kwargs.get("module_template", "module " + " ".join(args))
|
||||
|
||||
if args[0] in module_change_commands:
|
||||
# Suppress module output
|
||||
@@ -39,10 +33,10 @@ def module(
|
||||
stderr=subprocess.STDOUT,
|
||||
shell=True,
|
||||
executable="/bin/bash",
|
||||
env=environb,
|
||||
)
|
||||
|
||||
new_environb = {}
|
||||
# In Python 3, keys and values of `environ` are byte strings.
|
||||
environ = {}
|
||||
output = module_p.communicate()[0]
|
||||
|
||||
# Loop over each environment variable key=value byte string
|
||||
@@ -51,11 +45,11 @@ def module(
|
||||
parts = entry.split(b"=", 1)
|
||||
if len(parts) != 2:
|
||||
continue
|
||||
new_environb[parts[0]] = parts[1]
|
||||
environ[parts[0]] = parts[1]
|
||||
|
||||
# Update os.environ with new dict
|
||||
environb.clear()
|
||||
environb.update(new_environb) # novermin
|
||||
os.environ.clear()
|
||||
os.environb.update(environ) # novermin
|
||||
|
||||
else:
|
||||
# Simply execute commands that don't change state and return output
|
||||
|
||||
@@ -25,8 +25,8 @@ if ($_pa_set == 1) then
|
||||
eval set _pa_old_value='$'$_pa_varname
|
||||
endif
|
||||
|
||||
# Do the actual prepending here, if it is a dir and not first in the path
|
||||
if ( -d $_pa_new_path && $_pa_old_value\: !~ $_pa_new_path\:* ) then
|
||||
# Do the actual prepending here, if it is a dir and not already in the path
|
||||
if ( -d $_pa_new_path && \:$_pa_old_value\: !~ *\:$_pa_new_path\:* ) then
|
||||
if ("x$_pa_old_value" == "x") then
|
||||
setenv $_pa_varname $_pa_new_path
|
||||
else
|
||||
|
||||
@@ -370,25 +370,25 @@ e4s-rocm-external-build:
|
||||
########################################
|
||||
# GPU Testing Pipeline
|
||||
########################################
|
||||
# .gpu-tests:
|
||||
# extends: [ ".linux_x86_64_v3" ]
|
||||
# variables:
|
||||
# SPACK_CI_STACK_NAME: gpu-tests
|
||||
.gpu-tests:
|
||||
extends: [ ".linux_x86_64_v3" ]
|
||||
variables:
|
||||
SPACK_CI_STACK_NAME: gpu-tests
|
||||
|
||||
# gpu-tests-generate:
|
||||
# extends: [ ".gpu-tests", ".generate-x86_64"]
|
||||
# image: ghcr.io/spack/ubuntu20.04-runner-x86_64:2023-01-01
|
||||
gpu-tests-generate:
|
||||
extends: [ ".gpu-tests", ".generate-x86_64"]
|
||||
image: ghcr.io/spack/ubuntu20.04-runner-x86_64:2023-01-01
|
||||
|
||||
# gpu-tests-build:
|
||||
# extends: [ ".gpu-tests", ".build" ]
|
||||
# trigger:
|
||||
# include:
|
||||
# - artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
|
||||
# job: gpu-tests-generate
|
||||
# strategy: depend
|
||||
# needs:
|
||||
# - artifacts: True
|
||||
# job: gpu-tests-generate
|
||||
gpu-tests-build:
|
||||
extends: [ ".gpu-tests", ".build" ]
|
||||
trigger:
|
||||
include:
|
||||
- artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
|
||||
job: gpu-tests-generate
|
||||
strategy: depend
|
||||
needs:
|
||||
- artifacts: True
|
||||
job: gpu-tests-generate
|
||||
|
||||
########################################
|
||||
# E4S OneAPI Pipeline
|
||||
@@ -894,16 +894,16 @@ e4s-cray-rhel-build:
|
||||
variables:
|
||||
SPACK_CI_STACK_NAME: e4s-cray-sles
|
||||
|
||||
# e4s-cray-sles-generate:
|
||||
# extends: [ ".generate-cray-sles", ".e4s-cray-sles" ]
|
||||
e4s-cray-sles-generate:
|
||||
extends: [ ".generate-cray-sles", ".e4s-cray-sles" ]
|
||||
|
||||
# e4s-cray-sles-build:
|
||||
# extends: [ ".build", ".e4s-cray-sles" ]
|
||||
# trigger:
|
||||
# include:
|
||||
# - artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
|
||||
# job: e4s-cray-sles-generate
|
||||
# strategy: depend
|
||||
# needs:
|
||||
# - artifacts: True
|
||||
# job: e4s-cray-sles-generate
|
||||
e4s-cray-sles-build:
|
||||
extends: [ ".build", ".e4s-cray-sles" ]
|
||||
trigger:
|
||||
include:
|
||||
- artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
|
||||
job: e4s-cray-sles-generate
|
||||
strategy: depend
|
||||
needs:
|
||||
- artifacts: True
|
||||
job: e4s-cray-sles-generate
|
||||
|
||||
@@ -17,37 +17,38 @@ if ($?_sp_initializing) then
|
||||
endif
|
||||
setenv _sp_initializing true
|
||||
|
||||
# find SPACK_ROOT.
|
||||
# If SPACK_ROOT is not set, we'll try to find it ourselves.
|
||||
# csh/tcsh don't have a built-in way to do this, but both keep files
|
||||
# they are sourcing open. We use /proc on linux and lsof on macs to
|
||||
# find this script's full path in the current process's open files.
|
||||
|
||||
# figure out a command to list open files
|
||||
if (-d /proc/$$/fd) then
|
||||
set _sp_lsof = "ls -l /proc/$$/fd"
|
||||
else
|
||||
which lsof > /dev/null
|
||||
if ($? == 0) then
|
||||
set _sp_lsof = "lsof -p $$"
|
||||
endif
|
||||
endif
|
||||
|
||||
# filter this script out of list of open files
|
||||
if ( $?_sp_lsof ) then
|
||||
set _sp_source_file = `$_sp_lsof | sed -e 's/^[^/]*//' | grep "/setup-env.csh"`
|
||||
endif
|
||||
|
||||
# This script is in $SPACK_ROOT/share/spack; get the root with dirname
|
||||
if ($?_sp_source_file) then
|
||||
set _sp_share_spack = `dirname "$_sp_source_file"`
|
||||
set _sp_share = `dirname "$_sp_share_spack"`
|
||||
setenv SPACK_ROOT `dirname "$_sp_share"`
|
||||
endif
|
||||
|
||||
if (! $?SPACK_ROOT) then
|
||||
echo "==> Error: setup-env.csh couldn't figure out where spack lives."
|
||||
echo " Set SPACK_ROOT to the root of your spack installation and try again."
|
||||
exit 1
|
||||
# figure out a command to list open files
|
||||
if (-d /proc/$$/fd) then
|
||||
set _sp_lsof = "ls -l /proc/$$/fd"
|
||||
else
|
||||
which lsof > /dev/null
|
||||
if ($? == 0) then
|
||||
set _sp_lsof = "lsof -p $$"
|
||||
endif
|
||||
endif
|
||||
|
||||
# filter this script out of list of open files
|
||||
if ( $?_sp_lsof ) then
|
||||
set _sp_source_file = `$_sp_lsof | sed -e 's/^[^/]*//' | grep "/setup-env.csh"`
|
||||
endif
|
||||
|
||||
# This script is in $SPACK_ROOT/share/spack; get the root with dirname
|
||||
if ($?_sp_source_file) then
|
||||
set _sp_share_spack = `dirname "$_sp_source_file"`
|
||||
set _sp_share = `dirname "$_sp_share_spack"`
|
||||
setenv SPACK_ROOT `dirname "$_sp_share"`
|
||||
endif
|
||||
|
||||
if (! $?SPACK_ROOT) then
|
||||
echo "==> Error: setup-env.csh couldn't figure out where spack lives."
|
||||
echo " Set SPACK_ROOT to the root of your spack installation and try again."
|
||||
exit 1
|
||||
endif
|
||||
endif
|
||||
|
||||
# Command aliases point at separate source files
|
||||
|
||||
@@ -648,10 +648,10 @@ function spack_pathadd -d "Add path to specified variable (defaults to PATH)"
|
||||
# passed to regular expression matching (`string match -r`)
|
||||
set -l _a "$pa_oldvalue"
|
||||
|
||||
# skip path if it is already the first in the variable
|
||||
# skip path if it is already contained in the variable
|
||||
# note spaces in regular expression: we're matching to a space delimited
|
||||
# list of paths
|
||||
if not echo $_a | string match -q -r "^$pa_new_path *"
|
||||
if not echo $_a | string match -q -r " *$pa_new_path *"
|
||||
if test -n "$pa_oldvalue"
|
||||
set $pa_varname $pa_new_path $pa_oldvalue
|
||||
else
|
||||
|
||||
@@ -214,9 +214,9 @@ _spack_pathadd() {
|
||||
# Do the actual prepending here.
|
||||
eval "_pa_oldvalue=\${${_pa_varname}:-}"
|
||||
|
||||
_pa_canonical="$_pa_oldvalue:"
|
||||
_pa_canonical=":$_pa_oldvalue:"
|
||||
if [ -d "$_pa_new_path" ] && \
|
||||
[ "${_pa_canonical#$_pa_new_path:}" = "$_pa_canonical" ];
|
||||
[ "${_pa_canonical#*:${_pa_new_path}:}" = "${_pa_canonical}" ];
|
||||
then
|
||||
if [ -n "$_pa_oldvalue" ]; then
|
||||
eval "export $_pa_varname=\"$_pa_new_path:$_pa_oldvalue\""
|
||||
|
||||
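The csh and sh `pathadd` hunks above replace a "not already first in the path" test with a "not already anywhere in the path" test by wrapping both sides in `:` delimiters (the fish change does the same for its space-delimited lists). A small Python rendering of that membership check, for illustration only:

```python
def already_in_path(value: str, new_path: str) -> bool:
    # Wrapping both sides in ":" finds entries in any position and avoids
    # false positives on partial matches, e.g. /usr/local vs /usr/local/bin.
    return f":{new_path}:" in f":{value}:"


assert already_in_path("/usr/bin:/usr/local/bin", "/usr/local/bin")       # not first, still found
assert not already_in_path("/usr/local/bin2:/usr/bin", "/usr/local/bin")  # partial match rejected
```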
@@ -16,8 +16,7 @@ RUN {% if os_package_update %}{{ os_packages_build.update }} \
|
||||
|
||||
# What we want to install and how we want to install it
|
||||
# is specified in a manifest file (spack.yaml)
|
||||
RUN mkdir -p {{ paths.environment }} && \
|
||||
set -o noclobber \
|
||||
RUN mkdir {{ paths.environment }} \
|
||||
{{ manifest }} > {{ paths.environment }}/spack.yaml
|
||||
|
||||
# Install the software, remove unnecessary deps
|
||||
|
||||
@@ -1,18 +0,0 @@
|
||||
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
|
||||
# Spack Project Developers. See the top-level COPYRIGHT file for details.
|
||||
#
|
||||
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
|
||||
|
||||
from spack.package import *
|
||||
|
||||
|
||||
class DependencyMv(Package):
|
||||
"""Package providing a virtual dependency and with a multivalued variant."""
|
||||
|
||||
homepage = "http://www.example.com"
|
||||
url = "http://www.example.com/foo-1.0.tar.gz"
|
||||
|
||||
version("1.0", md5="0123456789abcdef0123456789abcdef")
|
||||
|
||||
variant("cuda", default=False, description="Build with CUDA")
|
||||
variant("cuda_arch", values=any_combination_of("10", "11"), when="+cuda")
|
||||
@@ -1,21 +0,0 @@
|
||||
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
|
||||
# Spack Project Developers. See the top-level COPYRIGHT file for details.
|
||||
#
|
||||
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
|
||||
|
||||
from spack.package import *
|
||||
|
||||
|
||||
class DlaFuture(Package):
|
||||
"""A package that depends on 3 different virtuals, that might or might not be provided
|
||||
by the same node.
|
||||
"""
|
||||
|
||||
homepage = "http://www.example.com"
|
||||
url = "http://www.example.com/dla-1.0.tar.gz"
|
||||
|
||||
version("1.0", md5="0123456789abcdef0123456789abcdef")
|
||||
|
||||
depends_on("blas")
|
||||
depends_on("lapack")
|
||||
depends_on("scalapack")
|
||||
@@ -1,23 +0,0 @@
|
||||
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
|
||||
# Spack Project Developers. See the top-level COPYRIGHT file for details.
|
||||
#
|
||||
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
|
||||
|
||||
from spack.package import *
|
||||
|
||||
|
||||
class ForwardMultiValue(Package):
|
||||
"""A package that forwards the value of a multi-valued variant to a dependency"""
|
||||
|
||||
homepage = "http://www.llnl.gov"
|
||||
url = "http://www.llnl.gov/mpileaks-1.0.tar.gz"
|
||||
|
||||
version("1.0", md5="0123456789abcdef0123456789abcdef")
|
||||
|
||||
variant("cuda", default=False, description="Build with CUDA")
|
||||
variant("cuda_arch", values=any_combination_of("10", "11"), when="+cuda")
|
||||
|
||||
depends_on("dependency-mv")
|
||||
|
||||
requires("^dependency-mv cuda_arch=10", when="+cuda cuda_arch=10 ^dependency-mv+cuda")
|
||||
requires("^dependency-mv cuda_arch=11", when="+cuda cuda_arch=11 ^dependency-mv+cuda")
|
||||
@@ -13,7 +13,6 @@ class Gmake(Package):
|
||||
url = "https://ftpmirror.gnu.org/make/make-4.4.tar.gz"
|
||||
|
||||
version("4.4", sha256="ce35865411f0490368a8fc383f29071de6690cbadc27704734978221f25e2bed")
|
||||
version("3.0", sha256="ce35865411f0490368a8fc383f29071de6690cbadc27704734978221f25e2bed")
|
||||
|
||||
def do_stage(self):
|
||||
mkdirp(self.stage.source_path)
|
||||
|
||||
@@ -1,17 +0,0 @@
|
||||
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
|
||||
# Spack Project Developers. See the top-level COPYRIGHT file for details.
|
||||
#
|
||||
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
|
||||
from spack.package import *
|
||||
|
||||
|
||||
class StickyVariantDependent(AutotoolsPackage):
|
||||
"""Package with a sticky variant and a conflict"""
|
||||
|
||||
homepage = "http://www.example.com"
|
||||
url = "http://www.example.com/a-1.0.tar.gz"
|
||||
|
||||
version("1.0", md5="0123456789abcdef0123456789abcdef")
|
||||
|
||||
depends_on("sticky-variant")
|
||||
conflicts("%gcc", when="^sticky-variant~allow-gcc")
|
||||
@@ -27,6 +27,8 @@ class Abinit(AutotoolsPackage):
|
||||
homepage = "https://www.abinit.org/"
|
||||
url = "https://www.abinit.org/sites/default/files/packages/abinit-8.6.3.tar.gz"
|
||||
|
||||
maintainers("downloadico")
|
||||
version("9.10.3", sha256="3f2a9aebbf1fee9855a09dd687f88d2317b8b8e04f97b2628ab96fb898dce49b")
|
||||
version("9.8.4", sha256="a086d5045f0093b432e6a044d5f71f7edf5a41a62d67b3677cb0751d330c564a")
|
||||
version("9.8.3", sha256="de823878aea2c20098f177524fbb4b60de9b1b5971b2e835ec244dfa3724589b")
|
||||
version("9.6.1", sha256="b6a12760fd728eb4aacca431ae12150609565bedbaa89763f219fcd869f79ac6")
|
||||
@@ -143,19 +145,27 @@ def configure_args(self):
|
||||
oapp(f"--with-optim-flavor={self.spec.variants['optimization-flavor'].value}")
|
||||
|
||||
if "+wannier90" in spec:
|
||||
if "@:8" in spec:
|
||||
if spec.satisfies("@:8"):
|
||||
oapp(f"--with-wannier90-libs=-L{spec['wannier90'].prefix.lib} -lwannier -lm")
|
||||
oapp(f"--with-wannier90-incs=-I{spec['wannier90'].prefix.modules}")
|
||||
oapp(f"--with-wannier90-bins={spec['wannier90'].prefix.bin}")
|
||||
oapp("--enable-connectors")
|
||||
oapp("--with-dft-flavor=atompaw+libxc+wannier90")
|
||||
else:
|
||||
elif spec.satisfies("@:9.8"):
|
||||
options.extend(
|
||||
[
|
||||
f"WANNIER90_CPPFLAGS=-I{spec['wannier90'].prefix.modules}",
|
||||
f"WANNIER90_LIBS=-L{spec['wannier90'].prefix.lib} -lwannier",
|
||||
]
|
||||
)
|
||||
else:
|
||||
options.extend(
|
||||
[
|
||||
f"WANNIER90_CPPFLAGS=-I{spec['wannier90'].prefix.modules}",
|
||||
f"WANNIER90_LIBS=-L{spec['wannier90'].prefix.lib}"
|
||||
"WANNIER90_LDFLAGS=-lwannier",
|
||||
]
|
||||
)
|
||||
else:
|
||||
if "@:9.8" in spec:
|
||||
oapp(f"--with-fftw={spec['fftw-api'].prefix}")
|
||||
@@ -169,7 +179,10 @@ def configure_args(self):
|
||||
if "+mpi" in spec:
|
||||
oapp(f"CC={spec['mpi'].mpicc}")
|
||||
oapp(f"CXX={spec['mpi'].mpicxx}")
|
||||
oapp(f"FC={spec['mpi'].mpifc}")
|
||||
if spec.satisfies("@9.8:"):
|
||||
oapp(f"F90={spec['mpi'].mpifc}")
|
||||
else:
|
||||
oapp(f"FC={spec['mpi'].mpifc}")
|
||||
|
||||
# MPI version:
|
||||
# let the configure script auto-detect MPI support from mpi_prefix
|
||||
|
||||
@@ -20,8 +20,9 @@ class Adiak(CMakePackage):
|
||||
variant("shared", default=True, description="Build dynamic libraries")
|
||||
|
||||
version(
|
||||
"0.2.2", commit="3aedd494c81c01df1183af28bc09bade2fabfcd3", submodules=True, preferred=True
|
||||
"0.4.0", commit="7e8b7233f8a148b402128ed46b2f0c643e3b397e", submodules=True, preferred=True
|
||||
)
|
||||
version("0.2.2", commit="3aedd494c81c01df1183af28bc09bade2fabfcd3", submodules=True)
|
||||
version(
|
||||
"0.3.0-alpha",
|
||||
commit="054d2693a977ed0e1f16c665b4966bb90924779e",
|
||||
|
||||
@@ -34,7 +34,7 @@ class Alquimia(CMakePackage):
|
||||
depends_on("pflotran@develop", when="@develop")
|
||||
depends_on("petsc@3.10:", when="@develop")
|
||||
|
||||
@when("@1.0.10")
|
||||
@when("@1.0.10:1.1.0")
|
||||
def patch(self):
|
||||
filter_file(
|
||||
"use iso_[cC]_binding",
|
||||
|
||||
@@ -9,7 +9,7 @@
|
||||
from spack.package import *
|
||||
|
||||
|
||||
class Aluminum(CMakePackage, CudaPackage, ROCmPackage):
|
||||
class Aluminum(CachedCMakePackage, CudaPackage, ROCmPackage):
|
||||
"""Aluminum provides a generic interface to high-performance
|
||||
communication libraries, with a focus on allreduce
|
||||
algorithms. Blocking and non-blocking algorithms and GPU-aware
|
||||
@@ -22,208 +22,207 @@ class Aluminum(CMakePackage, CudaPackage, ROCmPackage):
|
||||
git = "https://github.com/LLNL/Aluminum.git"
|
||||
tags = ["ecp", "radiuss"]
|
||||
|
||||
maintainers("bvanessen")
|
||||
maintainers("benson31", "bvanessen")
|
||||
|
||||
version("master", branch="master")
|
||||
version("1.4.1", sha256="d130a67fef1cb7a9cb3bbec1d0de426f020fe68c9df6e172c83ba42281cd90e3")
|
||||
version("1.4.0", sha256="ac54de058f38cead895ec8163f7b1fa7674e4dc5aacba683a660a61babbfe0c6")
|
||||
version("1.3.1", sha256="28ce0af6c6f29f97b7f19c5e45184bd2f8a0b1428f1e898b027d96d47cb74b0b")
|
||||
version("1.3.0", sha256="d0442efbebfdfb89eec793ae65eceb8f1ba65afa9f2e48df009f81985a4c27e3")
|
||||
version("1.2.3", sha256="9b214bdf30f9b7e8e017f83e6615db6be2631f5be3dd186205dbe3aa62f4018a")
|
||||
version(
|
||||
"1.2.2",
|
||||
sha256="c01d9dd98be4cab9b944bae99b403abe76d65e9e1750e7f23bf0105636ad5485",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"1.2.1",
|
||||
sha256="869402708c8a102a67667b83527b4057644a32b8cdf4990bcd1a5c4e5f0e30af",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"1.2.0",
|
||||
sha256="2f3725147f4dbc045b945af68d3d747f5dffbe2b8e928deed64136785210bc9a",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"1.1.0",
|
||||
sha256="78b03e36e5422e8651f400feb4d8a527f87302db025d77aa37e223be6b9bdfc9",
|
||||
deprecated=True,
|
||||
)
|
||||
version("1.0.0-lbann", tag="v1.0.0-lbann", commit="40a062b1f63e84e074489c0f926f36b806c6b8f3")
|
||||
version("1.0.0", sha256="028d12e271817214db5c07c77b0528f88862139c3e442e1b12f58717290f414a")
|
||||
version(
|
||||
"0.7.0",
|
||||
sha256="bbb73d2847c56efbe6f99e46b41d837763938483f2e2d1982ccf8350d1148caa",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"0.6.0",
|
||||
sha256="6ca329951f4c7ea52670e46e5020e7e7879d9b56fed5ff8c5df6e624b313e925",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"0.5.0",
|
||||
sha256="dc365a5849eaba925355a8efb27005c5f22bcd1dca94aaed8d0d29c265c064c1",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"0.4.0",
|
||||
sha256="4d6fab5481cc7c994b32fb23a37e9ee44041a9f91acf78f981a97cb8ef57bb7d",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"0.3.3",
|
||||
sha256="26e7f263f53c6c6ee0fe216e981a558dfdd7ec997d0dd2a24285a609a6c68f3b",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"0.3.2",
|
||||
sha256="09b6d1bcc02ac54ba269b1123eee7be20f0104b93596956c014b794ba96b037f",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"0.2.1-1",
|
||||
sha256="066b750e9d1134871709a3e2414b96b166e0e24773efc7d512df2f1d96ee8eef",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"0.2.1",
|
||||
sha256="3d5d15853cccc718f60df68205e56a2831de65be4d96e7f7e8497097e7905f89",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"0.2",
|
||||
sha256="fc8f06c6d8faab17a2aedd408d3fe924043bf857da1094d5553f35c4d2af893b",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"0.1",
|
||||
sha256="3880b736866e439dd94e6a61eeeb5bb2abccebbac82b82d52033bc6c94950bdb",
|
||||
deprecated=True,
|
||||
)
|
||||
|
||||
variant("nccl", default=False, description="Builds with support for NCCL communication lib")
|
||||
# Library capabilities
|
||||
variant(
|
||||
"cuda_rma",
|
||||
default=False,
|
||||
when="+cuda",
|
||||
description="Builds with support for CUDA intra-node "
|
||||
" Put/Get and IPC RMA functionality",
|
||||
)
|
||||
variant(
|
||||
"ht",
|
||||
default=False,
|
||||
description="Builds with support for host-enabled MPI"
|
||||
" communication of accelerator data",
|
||||
)
|
||||
variant("nccl", default=False, description="Builds with support for NCCL communication lib")
|
||||
variant("shared", default=True, description="Build Aluminum as a shared library")
|
||||
|
||||
# Debugging features
|
||||
variant("hang_check", default=False, description="Enable hang checking")
|
||||
variant("trace", default=False, description="Enable runtime tracing")
|
||||
|
||||
# Profiler support
|
||||
variant("nvtx", default=False, when="+cuda", description="Enable profiling via nvprof/NVTX")
|
||||
variant(
|
||||
"cuda_rma",
|
||||
"roctracer", default=False, when="+rocm", description="Enable profiling via rocprof/roctx"
|
||||
)
|
||||
|
||||
# Advanced options
|
||||
variant("mpi_serialize", default=False, description="Serialize MPI operations")
|
||||
variant("stream_mem_ops", default=False, description="Enable stream memory operations")
|
||||
variant(
|
||||
"thread_multiple",
|
||||
default=False,
|
||||
description="Builds with support for CUDA intra-node "
|
||||
" Put/Get and IPC RMA functionality",
|
||||
)
|
||||
variant("rccl", default=False, description="Builds with support for RCCL communication lib")
|
||||
variant(
|
||||
"ofi_libfabric_plugin",
|
||||
default=spack.platforms.cray.slingshot_network(),
|
||||
when="+rccl",
|
||||
sticky=True,
|
||||
description="Builds with support for OFI libfabric enhanced RCCL/NCCL communication lib",
|
||||
)
|
||||
variant(
|
||||
"ofi_libfabric_plugin",
|
||||
default=spack.platforms.cray.slingshot_network(),
|
||||
when="+nccl",
|
||||
sticky=True,
|
||||
description="Builds with support for OFI libfabric enhanced RCCL/NCCL communication lib",
|
||||
description="Allow multiple threads to call Aluminum concurrently",
|
||||
)
|
||||
|
||||
depends_on("cmake@3.21.0:", type="build", when="@1.0.1:")
|
||||
depends_on("cmake@3.17.0:", type="build", when="@:1.0.0")
|
||||
depends_on("mpi")
|
||||
depends_on("nccl@2.7.0-0:", when="+nccl")
|
||||
depends_on("hwloc@1.11:")
|
||||
depends_on("hwloc +cuda +nvml", when="+cuda")
|
||||
depends_on("hwloc@2.3.0:", when="+rocm")
|
||||
depends_on("cub", when="@:0.1,0.6.0: +cuda ^cuda@:10")
|
||||
depends_on("hipcub", when="@:0.1,0.6.0: +rocm")
|
||||
# Benchmark/testing support
|
||||
variant(
|
||||
"benchmarks",
|
||||
default=False,
|
||||
description="Build the Aluminum benchmarking drivers "
|
||||
"(warning: may significantly increase build time!)",
|
||||
)
|
||||
variant(
|
||||
"tests",
|
||||
default=False,
|
||||
description="Build the Aluminum test drivers "
|
||||
"(warning: may moderately increase build time!)",
|
||||
)
|
||||
|
||||
depends_on("rccl", when="+rccl")
|
||||
depends_on("aws-ofi-rccl", when="+rccl +ofi_libfabric_plugin")
|
||||
depends_on("aws-ofi-nccl", when="+nccl +ofi_libfabric_plugin")
|
||||
# FIXME: Do we want to expose tuning parameters to the Spack
|
||||
# recipe? Some are numeric values, some are on/off switches.
|
||||
|
||||
conflicts("~cuda", when="+cuda_rma", msg="CUDA RMA support requires CUDA")
|
||||
conflicts("+cuda", when="+rocm", msg="CUDA and ROCm support are mutually exclusive")
|
||||
conflicts("+nccl", when="+rccl", msg="NCCL and RCCL support are mutually exclusive")
|
||||
|
||||
generator("ninja")
|
||||
depends_on("mpi")
|
||||
|
||||
depends_on("cmake@3.21.0:", type="build", when="@1.0.1:")
|
||||
depends_on("hwloc@1.11:")
|
||||
|
||||
with when("+cuda"):
|
||||
depends_on("cub", when="^cuda@:10")
|
||||
depends_on("hwloc +cuda +nvml")
|
||||
with when("+nccl"):
|
||||
depends_on("nccl@2.7.0-0:")
|
||||
for arch in CudaPackage.cuda_arch_values:
|
||||
depends_on(
|
||||
"nccl +cuda cuda_arch={0}".format(arch),
|
||||
when="+cuda cuda_arch={0}".format(arch),
|
||||
)
|
||||
if spack.platforms.cray.slingshot_network():
|
||||
depends_on("aws-ofi-nccl") # Note: NOT a CudaPackage
|
||||
|
||||
with when("+rocm"):
|
||||
for val in ROCmPackage.amdgpu_targets:
|
||||
depends_on(
|
||||
"hipcub +rocm amdgpu_target={0}".format(val), when="amdgpu_target={0}".format(val)
|
||||
)
|
||||
depends_on(
|
||||
"hwloc@2.3.0: +rocm amdgpu_target={0}".format(val),
|
||||
when="amdgpu_target={0}".format(val),
|
||||
)
|
||||
# RCCL is *NOT* implemented as a ROCmPackage
|
||||
depends_on(
|
||||
"rccl amdgpu_target={0}".format(val), when="+nccl amdgpu_target={0}".format(val)
|
||||
)
|
||||
depends_on(
|
||||
"roctracer-dev +rocm amdgpu_target={0}".format(val),
|
||||
when="+roctracer amdgpu_target={0}".format(val),
|
||||
)
|
||||
if spack.platforms.cray.slingshot_network():
|
||||
depends_on("aws-ofi-rccl", when="+nccl")
|
||||
|
||||
def cmake_args(self):
|
||||
spec = self.spec
|
||||
args = [
|
||||
"-DCMAKE_CXX_STANDARD:STRING=17",
|
||||
"-DALUMINUM_ENABLE_CUDA:BOOL=%s" % ("+cuda" in spec),
|
||||
"-DALUMINUM_ENABLE_NCCL:BOOL=%s" % ("+nccl" in spec or "+rccl" in spec),
|
||||
"-DALUMINUM_ENABLE_ROCM:BOOL=%s" % ("+rocm" in spec),
|
||||
]
|
||||
|
||||
if not spec.satisfies("^cmake@3.23.0"):
|
||||
# There is a bug with using Ninja generator in this version
|
||||
# of CMake
|
||||
args.append("-DCMAKE_EXPORT_COMPILE_COMMANDS=ON")
|
||||
|
||||
if "+cuda" in spec:
|
||||
if self.spec.satisfies("%clang"):
|
||||
for flag in self.spec.compiler_flags["cxxflags"]:
|
||||
if "gcc-toolchain" in flag:
|
||||
args.append("-DCMAKE_CUDA_FLAGS=-Xcompiler={0}".format(flag))
|
||||
if spec.satisfies("^cuda@11.0:"):
|
||||
args.append("-DCMAKE_CUDA_STANDARD=17")
|
||||
else:
|
||||
args.append("-DCMAKE_CUDA_STANDARD=14")
|
||||
archs = spec.variants["cuda_arch"].value
|
||||
if archs != "none":
|
||||
arch_str = ";".join(archs)
|
||||
args.append("-DCMAKE_CUDA_ARCHITECTURES=%s" % arch_str)
|
||||
|
||||
if spec.satisfies("%cce") and spec.satisfies("^cuda+allow-unsupported-compilers"):
|
||||
args.append("-DCMAKE_CUDA_FLAGS=-allow-unsupported-compiler")
|
||||
|
||||
if spec.satisfies("@0.5:"):
|
||||
args.extend(
|
||||
[
|
||||
"-DALUMINUM_ENABLE_HOST_TRANSFER:BOOL=%s" % ("+ht" in spec),
|
||||
"-DALUMINUM_ENABLE_MPI_CUDA:BOOL=%s" % ("+cuda_rma" in spec),
|
||||
"-DALUMINUM_ENABLE_MPI_CUDA_RMA:BOOL=%s" % ("+cuda_rma" in spec),
|
||||
]
|
||||
)
|
||||
else:
|
||||
args.append("-DALUMINUM_ENABLE_MPI_CUDA:BOOL=%s" % ("+ht" in spec))
|
||||
|
||||
if spec.satisfies("@:0.1,0.6.0: +cuda ^cuda@:10"):
|
||||
args.append("-DCUB_DIR:FILEPATH=%s" % spec["cub"].prefix)
|
||||
|
||||
# Add support for OS X to find OpenMP (LLVM installed via brew)
|
||||
if self.spec.satisfies("%clang platform=darwin"):
|
||||
clang = self.compiler.cc
|
||||
clang_bin = os.path.dirname(clang)
|
||||
clang_root = os.path.dirname(clang_bin)
|
||||
args.extend(["-DOpenMP_DIR={0}".format(clang_root)])
|
||||
|
||||
if "+rocm" in spec:
|
||||
args.extend(
|
||||
[
|
||||
"-DHIP_ROOT_DIR={0}".format(spec["hip"].prefix),
|
||||
"-DHIP_CXX_COMPILER={0}".format(self.spec["hip"].hipcc),
|
||||
"-DCMAKE_CXX_FLAGS=-std=c++17",
|
||||
]
|
||||
)
|
||||
archs = self.spec.variants["amdgpu_target"].value
|
||||
if archs != "none":
|
||||
arch_str = ",".join(archs)
|
||||
if spec.satisfies("%rocmcc@:5"):
|
||||
args.append(
|
||||
"-DHIP_HIPCC_FLAGS=--amdgpu-target={0}"
|
||||
" -g -fsized-deallocation -fPIC -std=c++17".format(arch_str)
|
||||
)
|
||||
args.extend(
|
||||
[
|
||||
"-DCMAKE_HIP_ARCHITECTURES=%s" % arch_str,
|
||||
"-DAMDGPU_TARGETS=%s" % arch_str,
|
||||
"-DGPU_TARGETS=%s" % arch_str,
|
||||
]
|
||||
)
|
||||
|
||||
args = []
|
||||
return args
|
||||
|
||||
def get_cuda_flags(self):
|
||||
spec = self.spec
|
||||
args = []
|
||||
if spec.satisfies("^cuda+allow-unsupported-compilers"):
|
||||
args.append("-allow-unsupported-compiler")
|
||||
|
||||
if spec.satisfies("%clang"):
|
||||
for flag in spec.compiler_flags["cxxflags"]:
|
||||
if "gcc-toolchain" in flag:
|
||||
args.append("-Xcompiler={0}".format(flag))
|
||||
return args
|
||||
|
||||
def std_initconfig_entries(self):
|
||||
entries = super(Aluminum, self).std_initconfig_entries()
|
||||
|
||||
# CMAKE_PREFIX_PATH, in CMake types, is a "STRING", not a "PATH". :/
|
||||
entries = [x for x in entries if "CMAKE_PREFIX_PATH" not in x]
|
||||
cmake_prefix_path = os.environ["CMAKE_PREFIX_PATH"].replace(":", ";")
|
||||
entries.append(cmake_cache_string("CMAKE_PREFIX_PATH", cmake_prefix_path))
|
||||
return entries
|
||||
|
||||
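`std_initconfig_entries` above rewrites `CMAKE_PREFIX_PATH` because CMake cache entries of type `STRING` use `;` as a list separator, whereas the environment variable is `:`-separated. A trivial sketch of that conversion (paths are made up):

```python
import os

os.environ.setdefault("CMAKE_PREFIX_PATH", "/opt/deps/a:/opt/deps/b")
cmake_prefix_path = os.environ["CMAKE_PREFIX_PATH"].replace(":", ";")
print(cmake_prefix_path)  # -> /opt/deps/a;/opt/deps/b
```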
def initconfig_compiler_entries(self):
|
||||
spec = self.spec
|
||||
entries = super(Aluminum, self).initconfig_compiler_entries()
|
||||
|
||||
# FIXME: Enforce this better in the actual CMake.
|
||||
entries.append(cmake_cache_string("CMAKE_CXX_STANDARD", "17"))
|
||||
entries.append(cmake_cache_option("BUILD_SHARED_LIBS", "+shared" in spec))
|
||||
entries.append(cmake_cache_option("CMAKE_EXPORT_COMPILE_COMMANDS", True))
|
||||
entries.append(cmake_cache_option("MPI_ASSUME_NO_BUILTIN_MPI", True))
|
||||
|
||||
return entries
|
||||
|
||||
def initconfig_hardware_entries(self):
|
||||
spec = self.spec
|
||||
entries = super(Aluminum, self).initconfig_hardware_entries()
|
||||
|
||||
entries.append(cmake_cache_option("ALUMINUM_ENABLE_CUDA", "+cuda" in spec))
|
||||
if spec.satisfies("+cuda"):
|
||||
entries.append(cmake_cache_string("CMAKE_CUDA_STANDARD", "17"))
|
||||
if not spec.satisfies("cuda_arch=none"):
|
||||
archs = spec.variants["cuda_arch"].value
|
||||
arch_str = ";".join(archs)
|
||||
entries.append(cmake_cache_string("CMAKE_CUDA_ARCHITECTURES", arch_str))
|
||||
|
||||
# FIXME: Should this use the "cuda_flags" function of the
|
||||
# CudaPackage class or something? There might be other
|
||||
# flags in play, and we need to be sure to get them all.
|
||||
cuda_flags = self.get_cuda_flags()
|
||||
if len(cuda_flags) > 0:
|
||||
entries.append(cmake_cache_string("CMAKE_CUDA_FLAGS", " ".join(cuda_flags)))
|
||||
|
||||
entries.append(cmake_cache_option("ALUMINUM_ENABLE_ROCM", "+rocm" in spec))
|
||||
if spec.satisfies("+rocm"):
|
||||
entries.append(cmake_cache_string("CMAKE_HIP_STANDARD", "17"))
|
||||
if not spec.satisfies("amdgpu_target=none"):
|
||||
archs = self.spec.variants["amdgpu_target"].value
|
||||
arch_str = ";".join(archs)
|
||||
entries.append(cmake_cache_string("CMAKE_HIP_ARCHITECTURES", arch_str))
|
||||
entries.append(cmake_cache_string("AMDGPU_TARGETS", arch_str))
|
||||
entries.append(cmake_cache_string("GPU_TARGETS", arch_str))
|
||||
entries.append(cmake_cache_path("HIP_ROOT_DIR", spec["hip"].prefix))
|
||||
|
||||
return entries
|
||||
|
||||
def initconfig_package_entries(self):
|
||||
spec = self.spec
|
||||
entries = super(Aluminum, self).initconfig_package_entries()
|
||||
|
||||
# Library capabilities
|
||||
entries.append(cmake_cache_option("ALUMINUM_ENABLE_MPI_CUDA", "+cuda_rma" in spec))
|
||||
entries.append(cmake_cache_option("ALUMINUM_ENABLE_MPI_CUDA_RMA", "+cuda_rma" in spec))
|
||||
entries.append(cmake_cache_option("ALUMINUM_ENABLE_HOST_TRANSFER", "+ht" in spec))
|
||||
entries.append(cmake_cache_option("ALUMINUM_ENABLE_NCCL", "+nccl" in spec))
|
||||
|
||||
# Debugging features
|
||||
entries.append(cmake_cache_option("ALUMINUM_DEBUG_HANG_CHECK", "+hang_check" in spec))
|
||||
entries.append(cmake_cache_option("ALUMINUM_ENABLE_TRACE", "+trace" in spec))
|
||||
|
||||
# Profiler support
|
||||
entries.append(cmake_cache_option("ALUMINUM_ENABLE_NVPROF", "+nvtx" in spec))
|
||||
entries.append(cmake_cache_option("ALUMINUM_ENABLE_ROCTRACER", "+roctracer" in spec))
|
||||
|
||||
# Advanced options
|
||||
entries.append(cmake_cache_option("ALUMINUM_MPI_SERIALIZE", "+mpi_serialize" in spec))
|
||||
entries.append(
|
||||
cmake_cache_option("ALUMINUM_ENABLE_STREAM_MEM_OPS", "+stream_mem_ops" in spec)
|
||||
)
|
||||
entries.append(
|
||||
cmake_cache_option("ALUMINUM_ENABLE_THREAD_MULTIPLE", "+thread_multiple" in spec)
|
||||
)
|
||||
|
||||
# Benchmark/testing support
|
||||
entries.append(cmake_cache_option("ALUMINUM_ENABLE_BENCHMARKS", "+benchmarks" in spec))
|
||||
entries.append(cmake_cache_option("ALUMINUM_ENABLE_TESTS", "+tests" in spec))
|
||||
|
||||
return entries
|
||||
|
||||
@@ -15,6 +15,12 @@ class Ams(CMakePackage, CudaPackage):
|
||||
maintainers("koparasy", "lpottier")
|
||||
|
||||
version("develop", branch="develop", submodules=False)
|
||||
version(
|
||||
"11.08.23.alpha",
|
||||
tag="11.08.23.alpha",
|
||||
commit="1a42b29268bb916dae301654ca0b92fdfe288732",
|
||||
submodules=False,
|
||||
)
|
||||
version(
|
||||
"07.25.23-alpha",
|
||||
tag="07.25.23-alpha",
|
||||
|
||||
@@ -16,6 +16,8 @@ class Asio(AutotoolsPackage):
|
||||
git = "https://github.com/chriskohlhoff/asio.git"
|
||||
maintainers("msimberg", "pauleonix")
|
||||
|
||||
license("BSL-1.0")
|
||||
|
||||
# As odd-numbered minor versions of asio are not considered stable, they won't be added anymore
|
||||
version("1.28.0", sha256="226438b0798099ad2a202563a83571ce06dd13b570d8fded4840dbc1f97fa328")
|
||||
version("1.26.0", sha256="935583f86825b7b212479277d03543e0f419a55677fa8cb73a79a927b858a72d")
|
||||
|
||||
@@ -16,28 +16,22 @@ class Berkeleygw(MakefilePackage):
|
||||
|
||||
maintainers("migueldiascosta")
|
||||
|
||||
version(
|
||||
"3.1.0",
|
||||
sha256="7e890a5faa5a6bb601aa665c73903b3af30df7bdd13ee09362b69793bbefa6d2",
|
||||
url="https://app.box.com/shared/static/2bik75lrs85zt281ydbup2xa7i5594gy.gz",
|
||||
expand=False,
|
||||
)
|
||||
version(
|
||||
"3.0.1",
|
||||
sha256="7d8c2cc1ee679afb48efbdd676689d4d537226b50e13a049dbcb052aaaf3654f",
|
||||
url="https://app.box.com/shared/static/m1dgnhiemo47lhxczrn6si71bwxoxor8.gz",
|
||||
url="https://berkeley.box.com/shared/static/m1dgnhiemo47lhxczrn6si71bwxoxor8.gz",
|
||||
expand=False,
|
||||
)
|
||||
version(
|
||||
"3.0",
|
||||
sha256="ab411acead5e979fd42b8d298dbb0a12ce152e7be9eee0bb87e9e5a06a638e2a",
|
||||
url="https://app.box.com/shared/static/lp6hj4kxr459l5a6t05qfuzl2ucyo03q.gz",
|
||||
url="https://berkeley.box.com/shared/static/lp6hj4kxr459l5a6t05qfuzl2ucyo03q.gz",
|
||||
expand=False,
|
||||
)
|
||||
version(
|
||||
"2.1",
|
||||
sha256="31f3b643dd937350c3866338321d675d4a1b1f54c730b43ad74ae67e75a9e6f2",
|
||||
url="https://app.box.com/shared/static/ze3azi5vlyw7hpwvl9i5f82kaiid6g0x.gz",
|
||||
url="https://berkeley.box.com/shared/static/ze3azi5vlyw7hpwvl9i5f82kaiid6g0x.gz",
|
||||
expand=False,
|
||||
)
|
||||
|
||||
|
||||
@@ -15,11 +15,14 @@ class Brahma(CMakePackage):
|
||||
|
||||
version("develop", branch="dev")
|
||||
version("master", branch="master")
|
||||
version("0.0.2", tag="v0.0.2", commit="bac58d5aa8962a5c902d401fbf8021aff9104d3c")
|
||||
version("0.0.1", tag="v0.0.1", commit="15156036f14e36511dfc3f3751dc953540526a2b")
|
||||
|
||||
variant("mpi", default=False, description="Enable MPI support")
|
||||
depends_on("cpp-logger@0.0.1")
|
||||
depends_on("gotcha@develop")
|
||||
depends_on("cpp-logger@0.0.1", when="@:0.0.1")
|
||||
depends_on("cpp-logger@0.0.2", when="@0.0.2:")
|
||||
depends_on("gotcha@1.0.4", when="@:0.0.1")
|
||||
depends_on("gotcha@1.0.5", when="@0.0.2:")
|
||||
depends_on("catch2@3.0.1")
|
||||
|
||||
depends_on("mpi", when="+mpi")
|
||||
|
||||
@@ -122,9 +122,8 @@ def pgo_train(self):
|
||||
|
||||
# Run spack solve --fresh hdf5 with instrumented clingo.
|
||||
python_runtime_env = EnvironmentModifications()
|
||||
python_runtime_env.extend(
|
||||
spack.user_environment.environment_modifications_for_specs(self.spec)
|
||||
)
|
||||
for s in self.spec.traverse(deptype=("run", "link"), order="post"):
|
||||
python_runtime_env.extend(spack.user_environment.environment_modifications_for_spec(s))
|
||||
python_runtime_env.unset("SPACK_ENV")
|
||||
python_runtime_env.unset("SPACK_PYTHON")
|
||||
self.spec["python"].command(
|
||||
|
||||
@@ -20,7 +20,7 @@ class Cmake(Package):
|
||||
url = "https://github.com/Kitware/CMake/releases/download/v3.19.0/cmake-3.19.0.tar.gz"
|
||||
git = "https://gitlab.kitware.com/cmake/cmake.git"
|
||||
|
||||
maintainers("alalazo")
|
||||
maintainers("alalazo", "johnwparent")
|
||||
|
||||
tags = ["build-tools", "windows"]
|
||||
|
||||
@@ -234,13 +234,15 @@ class Cmake(Package):
|
||||
with when("~ownlibs"):
|
||||
depends_on("expat")
|
||||
# expat/zlib are used in CMake/CTest, so why not require them in libarchive.
|
||||
depends_on("libarchive@3.1.0: xar=expat compression=zlib")
|
||||
depends_on("libarchive@3.3.3:", when="@3.15.0:")
|
||||
depends_on("libuv@1.0.0:1.10", when="@3.7.0:3.10.3")
|
||||
depends_on("libuv@1.10.0:1.10", when="@3.11.0:3.11")
|
||||
depends_on("libuv@1.10.0:", when="@3.12.0:")
|
||||
depends_on("rhash", when="@3.8.0:")
|
||||
depends_on("jsoncpp build_system=meson", when="@3.2:")
|
||||
for plat in ["darwin", "cray", "linux"]:
|
||||
with when("platform=%s" % plat):
|
||||
depends_on("libarchive@3.1.0: xar=expat compression=zlib")
|
||||
depends_on("libarchive@3.3.3:", when="@3.15.0:")
|
||||
depends_on("libuv@1.0.0:1.10", when="@3.7.0:3.10.3")
|
||||
depends_on("libuv@1.10.0:1.10", when="@3.11.0:3.11")
|
||||
depends_on("libuv@1.10.0:", when="@3.12.0:")
|
||||
depends_on("rhash", when="@3.8.0:")
|
||||
depends_on("jsoncpp build_system=meson", when="@3.2:")
|
||||
|
||||
depends_on("ncurses", when="+ncurses")
|
||||
|
||||
@@ -248,9 +250,6 @@ class Cmake(Package):
|
||||
depends_on("python@2.7.11:", type="build")
|
||||
depends_on("py-sphinx", type="build")
|
||||
|
||||
# TODO: update curl package to build with Windows SSL implementation
|
||||
# at which point we can build with +ownlibs on Windows
|
||||
conflicts("~ownlibs", when="platform=windows")
|
||||
# Cannot build with Intel, should be fixed in 3.6.2
|
||||
# https://gitlab.kitware.com/cmake/cmake/issues/16226
|
||||
patch("intel-c-gnu11.patch", when="@3.6.0:3.6.1")
|
||||
|
||||
@@ -49,6 +49,15 @@ class Conquest(MakefilePackage):
|
||||
|
||||
build_directory = "src"
|
||||
|
||||
# The SYSTEM variable is required above version 1.2.
|
||||
# Versions 1.2 and older should ignore it.
|
||||
@property
|
||||
def build_targets(self):
|
||||
if self.version > Version("1.2"):
|
||||
return ["SYSTEM = example", "Conquest"]
|
||||
else:
|
||||
return ["Conquest"]
|
||||
|
||||
def edit(self, spec, prefix):
|
||||
fflags = "-O3 -fallow-argument-mismatch"
|
||||
ldflags = ""
|
||||
@@ -63,12 +72,23 @@ def edit(self, spec, prefix):
|
||||
|
||||
lapack_ld = self.spec["lapack"].libs.ld_flags
|
||||
blas_ld = self.spec["blas"].libs.ld_flags
|
||||
fftw_ld = self.spec["fftw"].libs.ld_flags
|
||||
libxc_ld = self.spec["libxc"].libs.ld_flags
|
||||
|
||||
defs_file = FileFilter("./src/system.make")
|
||||
# Starting from 1.3 there's automated logic in the Makefile that picks
|
||||
# from a list of possible files for system/compiler-specific definitions.
|
||||
# This is useful for manual builds, but since Spack will do its own
|
||||
# automation of compiler-specific flags, we will override it.
|
||||
if self.version > Version("1.2"):
|
||||
defs_file = FileFilter("./src/system/system.example.make")
|
||||
else:
|
||||
defs_file = FileFilter("./src/system.make")
|
||||
|
||||
defs_file.filter("COMPFLAGS=.*", f"COMPFLAGS= {fflags}")
|
||||
defs_file.filter("LINKFLAGS=.*", f"LINKFLAGS= {ldflags}")
|
||||
defs_file.filter("# BLAS=.*", f"BLAS= {lapack_ld} -llapack {blas_ld} -lblas")
|
||||
defs_file.filter(".*COMPFLAGS=.*", f"COMPFLAGS= {fflags}")
|
||||
defs_file.filter(".*LINKFLAGS=.*", f"LINKFLAGS= {ldflags}")
|
||||
defs_file.filter(".*BLAS=.*", f"BLAS= {lapack_ld} {blas_ld}")
|
||||
defs_file.filter(".*FFT_LIB=.*", f"FFT_LIB={fftw_ld}")
|
||||
defs_file.filter(".*XC_LIB=.*", f"XC_LIB={libxc_ld} -lxcf90 -lxc")
|
||||
|
||||
if "+openmp" in self.spec:
|
||||
defs_file.filter("OMP_DUMMY = DUMMY", "OMP_DUMMY = ")
|
||||
@@ -81,3 +101,5 @@ def edit(self, spec, prefix):
|
||||
def install(self, spec, prefix):
|
||||
mkdirp(prefix.bin)
|
||||
install("./bin/Conquest", prefix.bin)
|
||||
if self.version > Version("1.2"):
|
||||
install_tree("./benchmarks/", join_path(prefix, "benchmarks"))
|
||||
|
||||
@@ -16,3 +16,4 @@ class CppLogger(CMakePackage):
|
||||
version("develop", branch="develop")
|
||||
version("master", branch="master")
|
||||
version("0.0.1", tag="v0.0.1", commit="d48b38ab14477bb7c53f8189b8b4be2ea214c28a")
|
||||
version("0.0.2", tag="v0.0.2", commit="329a48401033d2d2a1f1196141763cab029220ae")
|
||||
|
||||
@@ -18,7 +18,7 @@ class Cpr(CMakePackage):
|
||||
version("1.9.2", sha256="3bfbffb22c51f322780d10d3ca8f79424190d7ac4b5ad6ad896de08dbd06bf31")
|
||||
|
||||
depends_on("curl")
|
||||
depends_on("git", type="build")
|
||||
depends_on("git", when="build")
|
||||
|
||||
def cmake_args(self):
|
||||
_force = "_FORCE" if self.spec.satisfies("@:1.9") else ""
|
||||
|
||||
@@ -16,7 +16,10 @@ class Cube(AutotoolsPackage):
|
||||
|
||||
homepage = "https://www.scalasca.org/software/cube-4.x/download.html"
|
||||
url = "https://apps.fz-juelich.de/scalasca/releases/cube/4.4/dist/cubegui-4.4.2.tar.gz"
|
||||
maintainers("swat-jsc")
|
||||
|
||||
version("4.8.2", sha256="bf2e02002bb2e5c4f61832ce37b62a440675c6453463014b33b2474aac78f86d")
|
||||
version("4.8.1", sha256="a8a2a62b4e587c012d3d32385bed7c500db14232419795e0f4272d1dcefc55bc")
|
||||
version("4.8", sha256="1df8fcaea95323e7eaf0cc010784a41243532c2123a27ce93cb7e3241557ff76")
|
||||
version("4.7.1", sha256="7c96bf9ffb8cc132945f706657756fe6f88b7f7a5243ecd3741f599c2006d428")
|
||||
version("4.7", sha256="103fe00fa9846685746ce56231f64d850764a87737dc0407c9d0a24037590f68")
|
||||
|
||||
@@ -14,6 +14,7 @@ class Cubelib(AutotoolsPackage):
|
||||
maintainers = ("swat-jsc", "wrwilliams")
|
||||
|
||||
version("4.8.2", sha256="d6fdef57b1bc9594f1450ba46cf08f431dd0d4ae595c47e2f3454e17e4ae74f4")
|
||||
version("4.8.1", sha256="e4d974248963edab48c5d0fc5831146d391b0ae4632cccafe840bf5f12cd80a9")
|
||||
version("4.8", sha256="171c93ac5afd6bc74c50a9a58efdaf8589ff5cc1e5bd773ebdfb2347b77e2f68")
|
||||
version("4.7.1", sha256="62cf33a51acd9a723fff9a4a5411cd74203e24e0c4ffc5b9e82e011778ed4f2f")
|
||||
version("4.7", sha256="e44352c80a25a49b0fa0748792ccc9f1be31300a96c32de982b92477a8740938")
|
||||
|
||||
@@ -14,6 +14,7 @@ class Cubew(AutotoolsPackage):
|
||||
maintainers = ("swat-jsc", "wrwilliams")
|
||||
|
||||
version("4.8.2", sha256="4f3bcf0622c2429b8972b5eb3f14d79ec89b8161e3c1cc5862ceda417d7975d2")
|
||||
version("4.8.1", sha256="42cbd743d87c16e805c8e28e79292ab33de259f2cfba46f2682cb35c1bc032d6")
|
||||
version("4.8", sha256="73c7f9e9681ee45d71943b66c01cfe675b426e4816e751ed2e0b670563ca4cf3")
|
||||
version("4.7.1", sha256="0d364a4930ca876aa887ec40d12399d61a225dbab69e57379b293516d7b6db8d")
|
||||
version("4.7", sha256="a7c7fca13e6cb252f08d4380223d7c56a8e86a67de147bcc0279ebb849c884a5")
|
||||
|
||||
@@ -115,9 +115,9 @@ def configure_args(self):
|
||||
if "+apmpi" in spec:
|
||||
extra_args.append("--enable-apmpi-mod")
|
||||
if "+apmpi_sync" in spec:
|
||||
extra_args.append(["--enable-apmpi-mod", "--enable-apmpi-coll-sync"])
|
||||
extra_args.extend(["--enable-apmpi-mod", "--enable-apmpi-coll-sync"])
|
||||
if "+apxc" in spec:
|
||||
extra_args.append(["--enable-apxc-mod"])
|
||||
extra_args.append("--enable-apxc-mod")
|
||||
|
||||
extra_args.append("--with-mem-align=8")
|
||||
extra_args.append("--with-log-path-by-env=DARSHAN_LOG_DIR_PATH")
|
||||
|
||||
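The configure-args fix above replaces `append` with `extend` for the two-flag case. The difference, sketched with plain lists:

```python
extra_args = []
extra_args.append(["--enable-apmpi-mod", "--enable-apmpi-coll-sync"])
# -> [['--enable-apmpi-mod', '--enable-apmpi-coll-sync']]  (a nested list: broken argument)

extra_args = []
extra_args.extend(["--enable-apmpi-mod", "--enable-apmpi-coll-sync"])
# -> ['--enable-apmpi-mod', '--enable-apmpi-coll-sync']    (each flag its own string)
```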
@@ -23,9 +23,13 @@ class Dealii(CMakePackage, CudaPackage):
|
||||
# only add for immediate deps.
|
||||
transitive_rpaths = False
|
||||
|
||||
generator("ninja")
|
||||
# FIXME nvcc_wrapper (used for +clang) doesn't handle response files
|
||||
# correctly when ninja is used. Those are used automatically if paths get too long.
|
||||
generator("make")
|
||||
|
||||
version("master", branch="master")
|
||||
version("9.5.1", sha256="a818b535e6488d3aef7853311657c7b4fadc29a9abe91b7b202b131aad630f5e")
|
||||
version("9.5.0", sha256="a81f41565f0d3a22d491ee687957dd48053225da72e8d6d628d210358f4a0464")
|
||||
version("9.4.2", sha256="45a76cb400bfcff25cc2d9093d9a5c91545c8367985e6798811c5e9d2a6a6fd4")
|
||||
version("9.4.1", sha256="bfe5e4bf069159f93feb0f78529498bfee3da35baf5a9c6852aa59d7ea7c7a48")
|
||||
version("9.4.0", sha256="238677006cd9173658e5b69cdd1861f800556982db6005a3cc5eb8329cc1e36c")
|
||||
@@ -70,10 +74,11 @@ class Dealii(CMakePackage, CudaPackage):
|
||||
values=("default", "11", "14", "17"),
|
||||
)
|
||||
variant("doc", default=False, description="Compile with documentation")
|
||||
variant("examples", default=True, description="Compile tutorial programs")
|
||||
variant("examples", default=True, description="Compile and install tutorial programs")
|
||||
variant("int64", default=False, description="Compile with 64 bit indices support")
|
||||
variant("mpi", default=True, description="Compile with MPI")
|
||||
variant("optflags", default=False, description="Compile using additional optimization flags")
|
||||
variant("platform-introspection", default=True, description="Enable platform introspection")
|
||||
variant("python", default=False, description="Compile with Python bindings")
|
||||
|
||||
# Package variants
|
||||
@@ -81,11 +86,12 @@ class Dealii(CMakePackage, CudaPackage):
|
||||
variant("arborx", default=True, description="Compile with Arborx support")
|
||||
variant("arpack", default=True, description="Compile with Arpack and PArpack (only with MPI)")
|
||||
variant("adol-c", default=True, description="Compile with ADOL-C")
|
||||
variant("cgal", default=True, when="@9.4:", description="Compile with CGAL")
|
||||
variant("cgal", default=True, when="@9.4:~cuda", description="Compile with CGAL")
|
||||
variant("ginkgo", default=True, description="Compile with Ginkgo")
|
||||
variant("gmsh", default=True, description="Compile with GMSH")
|
||||
variant("gsl", default=True, description="Compile with GSL")
|
||||
variant("hdf5", default=True, description="Compile with HDF5 (only with MPI)")
|
||||
variant("kokkos", default=True, when="@9.5:", description="Compile with Kokkos")
|
||||
variant("metis", default=True, description="Compile with Metis")
|
||||
variant("muparser", default=True, description="Compile with muParser")
|
||||
variant("nanoflann", default=False, description="Compile with Nanoflann")
|
||||
@@ -98,14 +104,15 @@ class Dealii(CMakePackage, CudaPackage):
|
||||
variant("slepc", default=True, description="Compile with Slepc (only with Petsc and MPI)")
|
||||
variant("symengine", default=True, description="Compile with SymEngine")
|
||||
variant("simplex", default=True, description="Compile with Simplex support")
|
||||
# TODO @9.3: enable by default, when we know what to do
|
||||
# variant('taskflow', default=False,
|
||||
# description='Compile with multi-threading via Taskflow')
|
||||
# TODO @9.3: disable by default
|
||||
# (NB: only if tbb is removed in 9.3, as planned!!!)
|
||||
variant(
|
||||
"taskflow",
|
||||
default=True,
|
||||
when="@9.6:",
|
||||
description="Compile with multi-threading via Taskflow",
|
||||
)
|
||||
variant("threads", default=True, description="Compile with multi-threading via TBB")
|
||||
variant("trilinos", default=True, description="Compile with Trilinos (only with MPI)")
|
||||
variant("platform-introspection", default=True, description="Enable platform introspection")
|
||||
variant("vtk", default=True, when="@9.6:", description="Compile with VTK")
|
||||
|
||||
# Required dependencies: Light version
|
||||
depends_on("blas")
|
||||
@@ -179,6 +186,8 @@ class Dealii(CMakePackage, CudaPackage):
|
||||
# TODO: next line fixes concretization with petsc
|
||||
depends_on("hdf5+mpi+hl+fortran", when="+hdf5+mpi+petsc")
|
||||
depends_on("hdf5+mpi+hl", when="+hdf5+mpi~petsc")
|
||||
depends_on("kokkos@3.7:", when="@9.5:+kokkos~trilinos")
|
||||
depends_on("kokkos@3.7:+cuda+cuda_lambda+wrapper", when="@9.5:+kokkos~trilinos+cuda")
|
||||
# TODO: concretizer bug. The two lines mimic what comes from PETSc
|
||||
# but we should not need it
|
||||
depends_on("metis@5:+int64", when="+metis+int64")
|
||||
@@ -198,7 +207,7 @@ class Dealii(CMakePackage, CudaPackage):
|
||||
depends_on("sundials@:3~pthread", when="@9.0:9.2+sundials")
|
||||
depends_on("sundials@5:5.8", when="@9.3:9.3.3+sundials")
|
||||
depends_on("sundials@5:", when="@9.3.4:+sundials")
|
||||
# depends_on('taskflow', when='@9.3:+taskflow')
|
||||
depends_on("taskflow", when="@9.6:+taskflow")
|
||||
depends_on("trilinos gotype=int", when="+trilinos@12.18.1:")
|
||||
# TODO: next line fixes concretization with trilinos and adol-c
|
||||
depends_on("trilinos~exodus", when="@9.0:+adol-c+trilinos")
|
||||
@@ -222,12 +231,11 @@ class Dealii(CMakePackage, CudaPackage):
|
||||
# do not require +rol to make concretization of xsdk possible
|
||||
depends_on("trilinos+amesos+aztec+epetra+ifpack+ml+muelu+sacado", when="+trilinos")
|
||||
depends_on("trilinos~hypre", when="+trilinos+int64")
|
||||
# TODO: temporary disable Tpetra when using CUDA due to
|
||||
# namespace "Kokkos::Impl" has no member "cuda_abort"
|
||||
depends_on(
|
||||
"trilinos@master+rol~amesos2~ifpack2~intrepid2~kokkos~tpetra~zoltan2",
|
||||
when="+trilinos+cuda",
|
||||
)
|
||||
for _arch in CudaPackage.cuda_arch_values:
|
||||
arch_str = f"+cuda cuda_arch={_arch}"
|
||||
trilinos_spec = f"trilinos +wrapper {arch_str}"
|
||||
depends_on(trilinos_spec, when=f"@9.5:+trilinos {arch_str}")
|
||||
depends_on("vtk", when="@9.6:+vtk")
|
||||
|
||||
# Explicitly provide a destructor in BlockVector,
|
||||
# otherwise deal.II may fail to build with Intel compilers.
|
||||
@@ -296,44 +304,60 @@ class Dealii(CMakePackage, CudaPackage):
|
||||
msg="CGAL requires the C++ standard to be set explicitly to 17 or later.",
|
||||
)
|
||||
|
||||
conflicts(
|
||||
"cxxstd=14",
|
||||
when="@9.6:",
|
||||
msg="Deal.II 9.6 onwards requires the C++ standard to be set to 17 or later.",
|
||||
)
|
||||
|
||||
# Interfaces added in 8.5.0:
|
||||
for p in ["gsl", "python"]:
|
||||
for _package in ["gsl", "python"]:
|
||||
conflicts(
|
||||
"+{0}".format(p),
|
||||
"+{0}".format(_package),
|
||||
when="@:8.4.2",
|
||||
msg="The interface to {0} is supported from version 8.5.0 "
|
||||
"onwards. Please explicitly disable this variant "
|
||||
"via ~{0}".format(p),
|
||||
"via ~{0}".format(_package),
|
||||
)
|
||||
|
||||
# Interfaces added in 9.0.0:
|
||||
for p in ["assimp", "gmsh", "nanoflann", "scalapack", "sundials", "adol-c"]:
|
||||
for _package in ["assimp", "gmsh", "nanoflann", "scalapack", "sundials", "adol-c"]:
|
||||
conflicts(
|
||||
"+{0}".format(p),
|
||||
"+{0}".format(_package),
|
||||
when="@:8.5.1",
|
||||
msg="The interface to {0} is supported from version 9.0.0 "
|
||||
"onwards. Please explicitly disable this variant "
|
||||
"via ~{0}".format(p),
|
||||
"via ~{0}".format(_package),
|
||||
)
|
||||
|
||||
# interfaces added in 9.1.0:
|
||||
for p in ["ginkgo", "symengine"]:
|
||||
for _package in ["ginkgo", "symengine"]:
|
||||
conflicts(
|
||||
"+{0}".format(p),
|
||||
"+{0}".format(_package),
|
||||
when="@:9.0",
|
||||
msg="The interface to {0} is supported from version 9.1.0 "
|
||||
"onwards. Please explicitly disable this variant "
|
||||
"via ~{0}".format(p),
|
||||
"via ~{0}".format(_package),
|
||||
)
|
||||
|
||||
# interfaces added in 9.3.0:
|
||||
for p in ["simplex", "arborx"]: # , 'taskflow']:
|
||||
for _package in ["simplex", "arborx"]:
|
||||
conflicts(
|
||||
"+{0}".format(p),
|
||||
"+{0}".format(_package),
|
||||
when="@:9.2",
|
||||
msg="The interface to {0} is supported from version 9.3.0 "
|
||||
"onwards. Please explicitly disable this variant "
|
||||
"via ~{0}".format(p),
|
||||
"via ~{0}".format(_package),
|
||||
)
|
||||
|
||||
# interfaces added after 9.5.0:
|
||||
for _package in ["vtk", "taskflow"]:
|
||||
conflicts(
|
||||
"+{0}".format(_package),
|
||||
when="@:9.5",
|
||||
msg="The interface to {0} is supported from version 9.6.0 "
|
||||
"onwards. Please explicitly disable this variant "
|
||||
"via ~{0}".format(_package),
|
||||
)
|
||||
|
||||
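All of the loops above apply the same version-gating idiom: the variant is declared unconditionally, and a `conflicts` directive rejects it on releases that predate the corresponding interface. A minimal, self-contained sketch of the idiom, with a hypothetical package and variant name:

```python
from spack.package import *


class Demo(CMakePackage):
    """Hypothetical package illustrating a version-gated variant."""

    variant("foo", default=False, description="Enable the (hypothetical) foo interface")

    # The foo interface only exists from 2.0 onwards, so +foo on older
    # releases is rejected at concretization time with a clear message.
    conflicts(
        "+foo",
        when="@:1.9",
        msg="The interface to foo is supported from version 2.0 "
        "onwards. Please explicitly disable this variant via ~foo",
    )
```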
# Interfaces removed in 9.3.0:
|
||||
@@ -346,18 +370,29 @@ class Dealii(CMakePackage, CudaPackage):
|
||||
|
||||
# Check that the combination of variants makes sense
|
||||
# 64-bit BLAS:
|
||||
for p in ["openblas", "intel-mkl", "intel-parallel-studio+mkl"]:
|
||||
for _package in ["openblas", "intel-mkl", "intel-parallel-studio+mkl"]:
|
||||
conflicts(
|
||||
"^{0}+ilp64".format(p), when="@:8.5.1", msg="64bit BLAS is only supported from 9.0.0"
|
||||
"^{0}+ilp64".format(_package),
|
||||
when="@:8.5.1",
|
||||
msg="64bit BLAS is only supported from 9.0.0",
|
||||
)
|
||||
|
||||
# MPI requirements:
|
||||
for p in ["arpack", "hdf5", "netcdf", "p4est", "petsc", "scalapack", "slepc", "trilinos"]:
|
||||
for _package in [
|
||||
"arpack",
|
||||
"hdf5",
|
||||
"netcdf",
|
||||
"p4est",
|
||||
"petsc",
|
||||
"scalapack",
|
||||
"slepc",
|
||||
"trilinos",
|
||||
]:
|
||||
conflicts(
|
||||
"+{0}".format(p),
|
||||
"+{0}".format(_package),
|
||||
when="~mpi",
|
||||
msg="To enable {0} it is necessary to build deal.II with "
|
||||
"MPI support enabled.".format(p),
|
||||
"MPI support enabled.".format(_package),
|
||||
)
|
||||
|
||||
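Unrolled, each entry of the list above yields one directive; the petsc entry, for instance, expands to roughly the following:

```python
# Illustrative expansion of one loop iteration (the "petsc" entry):
conflicts(
    "+petsc",
    when="~mpi",
    msg="To enable petsc it is necessary to build deal.II with MPI support enabled.",
)
```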
# Optional dependencies:
|
||||
@@ -432,6 +467,7 @@ def cmake_args(self):
|
||||
|
||||
# Examples / tutorial programs
|
||||
options.append(self.define_from_variant("DEAL_II_COMPONENT_EXAMPLES", "examples"))
|
||||
options.append(self.define_from_variant("DEAL_II_COMPILE_EXAMPLES", "examples"))
|
||||
|
||||
# Enforce the specified C++ standard
|
||||
if spec.variants["cxxstd"].value != "default":
|
||||
@@ -478,9 +514,6 @@ def cmake_args(self):
|
||||
if "+mpi" in spec:
|
||||
options.extend(
|
||||
[
|
||||
self.define("CMAKE_C_COMPILER", spec["mpi"].mpicc),
|
||||
self.define("CMAKE_CXX_COMPILER", spec["mpi"].mpicxx),
|
||||
self.define("CMAKE_Fortran_COMPILER", spec["mpi"].mpifc),
|
||||
self.define("MPI_C_COMPILER", spec["mpi"].mpicc),
|
||||
self.define("MPI_CXX_COMPILER", spec["mpi"].mpicxx),
|
||||
self.define("MPI_Fortran_COMPILER", spec["mpi"].mpifc),
|
||||
@@ -499,6 +532,9 @@ def cmake_args(self):
|
||||
self.define("CUDA_HOST_COMPILER", spec["mpi"].mpicxx),
|
||||
]
|
||||
)
|
||||
# Make sure we use the same compiler that Trilinos uses
|
||||
if "+trilinos" in spec:
|
||||
options.extend([self.define("CMAKE_CXX_COMPILER", spec["trilinos"].kokkos_cxx)])
|
||||
|
||||
# Python bindings
|
||||
if spec.satisfies("@8.5.0:"):
|
||||
@@ -542,23 +578,25 @@ def cmake_args(self):
|
||||
# Optional dependencies for which library names are the same as CMake
|
||||
# variables:
|
||||
for library in (
|
||||
"arborx",
|
||||
"assimp",
|
||||
"cgal",
|
||||
"ginkgo",
|
||||
"gmsh",
|
||||
"gsl",
|
||||
"hdf5",
|
||||
"metis",
|
||||
"muparser",
|
||||
"nanoflann",
|
||||
"p4est",
|
||||
"petsc",
|
||||
"slepc",
|
||||
"trilinos",
|
||||
"metis",
|
||||
"sundials",
|
||||
"nanoflann",
|
||||
"assimp",
|
||||
"gmsh",
|
||||
"muparser",
|
||||
"symengine",
|
||||
"ginkgo",
|
||||
"arborx",
|
||||
"cgal",
|
||||
): # 'taskflow'):
|
||||
"taskflow",
|
||||
"trilinos",
|
||||
"vtk",
|
||||
):
|
||||
options.append(
|
||||
self.define_from_variant("DEAL_II_WITH_{0}".format(library.upper()), library)
|
||||
)
|
||||
|
||||
@@ -8,7 +8,39 @@
|
||||
from spack.package import *
|
||||
|
||||
|
||||
class Dihydrogen(CMakePackage, CudaPackage, ROCmPackage):
|
||||
# This is a hack to get around some deficiencies in Hydrogen.
|
||||
def get_blas_entries(inspec):
|
||||
entries = []
|
||||
spec = inspec["hydrogen"]
|
||||
if "blas=openblas" in spec:
|
||||
entries.append(cmake_cache_option("DiHydrogen_USE_OpenBLAS", True))
|
||||
elif "blas=mkl" in spec or spec.satisfies("^intel-mkl"):
|
||||
entries.append(cmake_cache_option("DiHydrogen_USE_MKL", True))
|
||||
elif "blas=essl" in spec or spec.satisfies("^essl"):
|
||||
entries.append(cmake_cache_string("BLA_VENDOR", "IBMESSL"))
|
||||
# IF IBM ESSL is used it needs help finding the proper LAPACK libraries
|
||||
entries.append(
|
||||
cmake_cache_string(
|
||||
"LAPACK_LIBRARIES",
|
||||
"%s;-llapack;-lblas"
|
||||
% ";".join("-l{0}".format(lib) for lib in self.spec["essl"].libs.names),
|
||||
)
|
||||
)
|
||||
entries.append(
|
||||
cmake_cache_string(
|
||||
"BLAS_LIBRARIES",
|
||||
"%s;-lblas"
|
||||
% ";".join("-l{0}".format(lib) for lib in self.spec["essl"].libs.names),
|
||||
)
|
||||
)
|
||||
elif "blas=accelerate" in spec:
|
||||
entries.append(cmake_cache_option("DiHydrogen_USE_ACCELERATE", True))
|
||||
elif spec.satisfies("^netlib-lapack"):
|
||||
entries.append(cmake_cache_string("BLA_VENDOR", "Generic"))
|
||||
return entries
|
||||
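For orientation, the `cmake_cache_option` and `cmake_cache_string` helpers used in this function come from Spack's CachedCMakePackage machinery and render as `set(... CACHE ...)` lines in the generated initial-config cache file. A rough sketch of the mapping (the exact formatting of the generated lines may differ):

```python
# Roughly how the cached-CMake helpers above translate into cache-file lines:
cmake_cache_option("DiHydrogen_USE_OpenBLAS", True)
#   -> set(DiHydrogen_USE_OpenBLAS ON CACHE BOOL "")
cmake_cache_string("BLA_VENDOR", "IBMESSL")
#   -> set(BLA_VENDOR "IBMESSL" CACHE STRING "")
```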
|
||||
|
||||
class Dihydrogen(CachedCMakePackage, CudaPackage, ROCmPackage):
|
||||
"""DiHydrogen is the second version of the Hydrogen fork of the
|
||||
well-known distributed linear algebra library,
|
||||
Elemental. DiHydrogen aims to be a basic distributed
|
||||
@@ -20,117 +52,179 @@ class Dihydrogen(CMakePackage, CudaPackage, ROCmPackage):
|
||||
git = "https://github.com/LLNL/DiHydrogen.git"
|
||||
tags = ["ecp", "radiuss"]
|
||||
|
||||
maintainers("bvanessen")
|
||||
maintainers("benson31", "bvanessen")
|
||||
|
||||
version("develop", branch="develop")
|
||||
version("master", branch="master")
|
||||
|
||||
version("0.2.1", sha256="11e2c0f8a94ffa22e816deff0357dde6f82cc8eac21b587c800a346afb5c49ac")
|
||||
version("0.2.0", sha256="e1f597e80f93cf49a0cb2dbc079a1f348641178c49558b28438963bd4a0bdaa4")
|
||||
version("0.1", sha256="171d4b8adda1e501c38177ec966e6f11f8980bf71345e5f6d87d0a988fef4c4e")
|
||||
version("0.3.0", sha256="8dd143441a28e0c7662cd92694e9a4894b61fd48508ac1d77435f342bc226dcf")
|
||||
|
||||
# Primary features
|
||||
|
||||
variant("dace", default=False, sticky=True, description="Enable DaCe backend.")
|
||||
|
||||
variant(
|
||||
"distconv",
|
||||
default=False,
|
||||
sticky=True,
|
||||
description="Enable (legacy) Distributed Convolution support.",
|
||||
)
|
||||
|
||||
variant(
|
||||
"nvshmem",
|
||||
default=False,
|
||||
sticky=True,
|
||||
description="Enable support for NVSHMEM-based halo exchanges.",
|
||||
when="+distconv",
|
||||
)
|
||||
|
||||
variant(
|
||||
"shared", default=True, sticky=True, description="Enables the build of shared libraries"
|
||||
)
|
||||
|
||||
# Some features of developer interest
|
||||
|
||||
variant("al", default=True, description="Builds with Aluminum communication library")
|
||||
variant(
|
||||
"developer",
|
||||
default=False,
|
||||
description="Enable extra warnings and force tests to be enabled.",
|
||||
)
|
||||
variant("half", default=False, description="Enable FP16 support on the CPU.")
|
||||
|
||||
variant("ci", default=False, description="Use default options for CI builds")
|
||||
|
||||
variant(
|
||||
"distconv",
|
||||
"coverage",
|
||||
default=False,
|
||||
description="Support distributed convolutions: spatial, channel, " "filter.",
|
||||
description="Decorate build with code coverage instrumentation options",
|
||||
when="%gcc",
|
||||
)
|
||||
variant("nvshmem", default=False, description="Builds with support for NVSHMEM")
|
||||
variant("openmp", default=False, description="Enable CPU acceleration with OpenMP threads.")
|
||||
variant("rocm", default=False, description="Enable ROCm/HIP language features.")
|
||||
variant("shared", default=True, description="Enables the build of shared libraries")
|
||||
|
||||
# Variants related to BLAS
|
||||
variant(
|
||||
"openmp_blas", default=False, description="Use OpenMP for threading in the BLAS library"
|
||||
"coverage",
|
||||
default=False,
|
||||
description="Decorate build with code coverage instrumentation options",
|
||||
when="%clang",
|
||||
)
|
||||
variant("int64_blas", default=False, description="Use 64bit integers for BLAS.")
|
||||
variant(
|
||||
"blas",
|
||||
default="openblas",
|
||||
values=("openblas", "mkl", "accelerate", "essl", "libsci"),
|
||||
description="Enable the use of OpenBlas/MKL/Accelerate/ESSL/LibSci",
|
||||
"coverage",
|
||||
default=False,
|
||||
description="Decorate build with code coverage instrumentation options",
|
||||
when="%rocmcc",
|
||||
)
|
||||
|
||||
conflicts("~cuda", when="+nvshmem")
|
||||
# Package conflicts and requirements
|
||||
|
||||
depends_on("mpi")
|
||||
depends_on("catch2", type="test")
|
||||
conflicts("+nvshmem", when="~cuda", msg="NVSHMEM requires CUDA support.")
|
||||
|
||||
# Specify the correct version of Aluminum
|
||||
depends_on("aluminum@0.4.0:0.4", when="@0.1 +al")
|
||||
depends_on("aluminum@0.5.0:0.5", when="@0.2.0 +al")
|
||||
depends_on("aluminum@0.7.0:0.7", when="@0.2.1 +al")
|
||||
depends_on("aluminum@0.7.0:", when="@:0.0,0.2.1: +al")
|
||||
conflicts("+cuda", when="+rocm", msg="CUDA and ROCm are mutually exclusive.")
|
||||
|
||||
# Add Aluminum variants
|
||||
depends_on("aluminum +cuda +nccl +cuda_rma", when="+al +cuda")
|
||||
depends_on("aluminum +rocm +rccl", when="+al +rocm")
|
||||
depends_on("aluminum +ht", when="+al +distconv")
|
||||
requires(
|
||||
"+cuda",
|
||||
"+rocm",
|
||||
when="+distconv",
|
||||
policy="any_of",
|
||||
msg="DistConv support requires CUDA or ROCm.",
|
||||
)
|
||||
|
||||
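The `requires(..., policy="any_of")` directive above takes over from the older `conflicts("+distconv", when="~cuda ~rocm")` formulation seen later in this diff: whenever the `when` condition holds, at least one of the listed specs must be satisfied. A small sketch of the same idea for a hypothetical variant:

```python
# Hypothetical example of the any_of policy: a feature that needs at least
# one GPU backend enabled.
requires(
    "+cuda",
    "+rocm",
    when="+gpu_kernels",
    policy="any_of",
    msg="gpu_kernels support requires CUDA or ROCm.",
)
```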
for arch in CudaPackage.cuda_arch_values:
|
||||
depends_on("aluminum cuda_arch=%s" % arch, when="+al +cuda cuda_arch=%s" % arch)
|
||||
depends_on("nvshmem cuda_arch=%s" % arch, when="+nvshmem +cuda cuda_arch=%s" % arch)
|
||||
# Dependencies
|
||||
|
||||
# variants +rocm and amdgpu_targets are not automatically passed to
|
||||
# dependencies, so do it manually.
|
||||
for val in ROCmPackage.amdgpu_targets:
|
||||
depends_on("aluminum amdgpu_target=%s" % val, when="amdgpu_target=%s" % val)
|
||||
depends_on("catch2@3.0.1:", type=("build", "test"), when="+developer")
|
||||
depends_on("cmake@3.21.0:", type="build")
|
||||
depends_on("cuda@11.0:", when="+cuda")
|
||||
depends_on("spdlog@1.11.0", when="@:0.1,0.2:")
|
||||
|
||||
depends_on("roctracer-dev", when="+rocm +distconv")
|
||||
with when("@0.3.0:"):
|
||||
depends_on("hydrogen +al")
|
||||
for arch in CudaPackage.cuda_arch_values:
|
||||
depends_on(
|
||||
"hydrogen +cuda cuda_arch={0}".format(arch),
|
||||
when="+cuda cuda_arch={0}".format(arch),
|
||||
)
|
||||
|
||||
depends_on("cudnn", when="+cuda")
|
||||
depends_on("cub", when="^cuda@:10")
|
||||
for val in ROCmPackage.amdgpu_targets:
|
||||
depends_on(
|
||||
"hydrogen amdgpu_target={0}".format(val),
|
||||
when="+rocm amdgpu_target={0}".format(val),
|
||||
)
|
||||
|
||||
# Note that #1712 forces us to enumerate the different blas variants
|
||||
depends_on("openblas", when="blas=openblas")
|
||||
depends_on("openblas +ilp64", when="blas=openblas +int64_blas")
|
||||
depends_on("openblas threads=openmp", when="blas=openblas +openmp_blas")
|
||||
with when("+distconv"):
|
||||
depends_on("mpi")
|
||||
|
||||
depends_on("intel-mkl", when="blas=mkl")
|
||||
depends_on("intel-mkl +ilp64", when="blas=mkl +int64_blas")
|
||||
depends_on("intel-mkl threads=openmp", when="blas=mkl +openmp_blas")
|
||||
# All this nonsense for one silly little package.
|
||||
depends_on("aluminum@1.4.1:")
|
||||
|
||||
depends_on("veclibfort", when="blas=accelerate")
|
||||
conflicts("blas=accelerate +openmp_blas")
|
||||
# Add Aluminum variants
|
||||
depends_on("aluminum +cuda +nccl", when="+distconv +cuda")
|
||||
depends_on("aluminum +rocm +nccl", when="+distconv +rocm")
|
||||
|
||||
depends_on("essl", when="blas=essl")
|
||||
depends_on("essl +ilp64", when="blas=essl +int64_blas")
|
||||
depends_on("essl threads=openmp", when="blas=essl +openmp_blas")
|
||||
depends_on("netlib-lapack +external-blas", when="blas=essl")
|
||||
# TODO: Debug linker errors when NVSHMEM is built with UCX
|
||||
depends_on("nvshmem +nccl~ucx", when="+nvshmem")
|
||||
|
||||
depends_on("cray-libsci", when="blas=libsci")
|
||||
depends_on("cray-libsci +openmp", when="blas=libsci +openmp_blas")
|
||||
# OMP support is only used in DistConv, and only Apple needs
|
||||
# hand-holding with it.
|
||||
depends_on("llvm-openmp", when="%apple-clang")
|
||||
# FIXME: when="platform=darwin"??
|
||||
|
||||
# Distconv builds require cuda or rocm
|
||||
conflicts("+distconv", when="~cuda ~rocm")
|
||||
# CUDA/ROCm arch forwarding
|
||||
|
||||
conflicts("+distconv", when="+half")
|
||||
conflicts("+rocm", when="+half")
|
||||
for arch in CudaPackage.cuda_arch_values:
|
||||
depends_on(
|
||||
"aluminum +cuda cuda_arch={0}".format(arch),
|
||||
when="+cuda cuda_arch={0}".format(arch),
|
||||
)
|
||||
|
||||
depends_on("half", when="+half")
|
||||
# This is a workaround for a bug in the Aluminum package,
|
||||
# as it should be responsible for its own NCCL dependency.
|
||||
# Rather than failing to concretize, we help it along.
|
||||
depends_on(
|
||||
"nccl cuda_arch={0}".format(arch),
|
||||
when="+distconv +cuda cuda_arch={0}".format(arch),
|
||||
)
|
||||
|
||||
generator("ninja")
|
||||
depends_on("cmake@3.17.0:", type="build")
|
||||
# NVSHMEM also needs arch forwarding
|
||||
depends_on(
|
||||
"nvshmem +cuda cuda_arch={0}".format(arch),
|
||||
when="+nvshmem +cuda cuda_arch={0}".format(arch),
|
||||
)
|
||||
|
||||
depends_on("spdlog", when="@:0.1,0.2:")
|
||||
# Identify versions of cuda_arch that are too old from
|
||||
# lib/spack/spack/build_systems/cuda.py. We require >=60.
|
||||
illegal_cuda_arch_values = [
|
||||
"10",
|
||||
"11",
|
||||
"12",
|
||||
"13",
|
||||
"20",
|
||||
"21",
|
||||
"30",
|
||||
"32",
|
||||
"35",
|
||||
"37",
|
||||
"50",
|
||||
"52",
|
||||
"53",
|
||||
]
|
||||
for value in illegal_cuda_arch_values:
|
||||
conflicts("cuda_arch=" + value)
|
||||
|
||||
depends_on("llvm-openmp", when="%apple-clang +openmp")
|
||||
for val in ROCmPackage.amdgpu_targets:
|
||||
depends_on(
|
||||
"aluminum amdgpu_target={0}".format(val),
|
||||
when="+rocm amdgpu_target={0}".format(val),
|
||||
)
|
||||
|
||||
# TODO: Debug linker errors when NVSHMEM is built with UCX
|
||||
depends_on("nvshmem +nccl~ucx", when="+nvshmem")
|
||||
# CUDA-specific distconv dependencies
|
||||
depends_on("cudnn", when="+cuda")
|
||||
|
||||
# Identify versions of cuda_arch that are too old
|
||||
# from lib/spack/spack/build_systems/cuda.py
|
||||
illegal_cuda_arch_values = ["10", "11", "12", "13", "20", "21"]
|
||||
for value in illegal_cuda_arch_values:
|
||||
conflicts("cuda_arch=" + value)
|
||||
# ROCm-specific distconv dependencies
|
||||
depends_on("hipcub", when="+rocm")
|
||||
depends_on("miopen-hip", when="+rocm")
|
||||
depends_on("roctracer-dev", when="+rocm")
|
||||
|
||||
with when("+ci+coverage"):
|
||||
depends_on("lcov", type=("build", "run"))
|
||||
depends_on("py-gcovr", type=("build", "run"))
|
||||
# Technically it's not used in the build, but CMake sets up a
|
||||
# target, so it needs to be found.
|
||||
|
||||
@property
|
||||
def libs(self):
|
||||
@@ -138,104 +232,127 @@ def libs(self):
|
||||
return find_libraries("libH2Core", root=self.prefix, shared=shared, recursive=True)
|
||||
|
||||
def cmake_args(self):
|
||||
args = []
|
||||
return args
|
||||
|
||||
def get_cuda_flags(self):
|
||||
spec = self.spec
|
||||
args = []
|
||||
if spec.satisfies("^cuda+allow-unsupported-compilers"):
|
||||
args.append("-allow-unsupported-compiler")
|
||||
|
||||
args = [
|
||||
"-DCMAKE_CXX_STANDARD=17",
|
||||
"-DCMAKE_INSTALL_MESSAGE:STRING=LAZY",
|
||||
"-DBUILD_SHARED_LIBS:BOOL=%s" % ("+shared" in spec),
|
||||
"-DH2_ENABLE_ALUMINUM=%s" % ("+al" in spec),
|
||||
"-DH2_ENABLE_CUDA=%s" % ("+cuda" in spec),
|
||||
"-DH2_ENABLE_DISTCONV_LEGACY=%s" % ("+distconv" in spec),
|
||||
"-DH2_ENABLE_OPENMP=%s" % ("+openmp" in spec),
|
||||
"-DH2_ENABLE_FP16=%s" % ("+half" in spec),
|
||||
"-DH2_DEVELOPER_BUILD=%s" % ("+developer" in spec),
|
||||
]
|
||||
if spec.satisfies("%clang"):
|
||||
for flag in spec.compiler_flags["cxxflags"]:
|
||||
if "gcc-toolchain" in flag:
|
||||
args.append("-Xcompiler={0}".format(flag))
|
||||
return args
|
||||
|
||||
if spec.version < Version("0.3"):
|
||||
args.append("-DH2_ENABLE_HIP_ROCM=%s" % ("+rocm" in spec))
|
||||
else:
|
||||
args.append("-DH2_ENABLE_ROCM=%s" % ("+rocm" in spec))
|
||||
def initconfig_compiler_entries(self):
|
||||
spec = self.spec
|
||||
entries = super(Dihydrogen, self).initconfig_compiler_entries()
|
||||
|
||||
if not spec.satisfies("^cmake@3.23.0"):
|
||||
# There is a bug with using Ninja generator in this version
|
||||
# of CMake
|
||||
args.append("-DCMAKE_EXPORT_COMPILE_COMMANDS=ON")
|
||||
# FIXME: Enforce this better in the actual CMake.
|
||||
entries.append(cmake_cache_string("CMAKE_CXX_STANDARD", "17"))
|
||||
entries.append(cmake_cache_option("BUILD_SHARED_LIBS", "+shared" in spec))
|
||||
entries.append(cmake_cache_option("CMAKE_EXPORT_COMPILE_COMMANDS", True))
|
||||
|
||||
if "+cuda" in spec:
|
||||
if self.spec.satisfies("%clang"):
|
||||
for flag in self.spec.compiler_flags["cxxflags"]:
|
||||
if "gcc-toolchain" in flag:
|
||||
args.append("-DCMAKE_CUDA_FLAGS=-Xcompiler={0}".format(flag))
|
||||
if spec.satisfies("^cuda@11.0:"):
|
||||
args.append("-DCMAKE_CUDA_STANDARD=17")
|
||||
else:
|
||||
args.append("-DCMAKE_CUDA_STANDARD=14")
|
||||
archs = spec.variants["cuda_arch"].value
|
||||
if archs != "none":
|
||||
arch_str = ";".join(archs)
|
||||
args.append("-DCMAKE_CUDA_ARCHITECTURES=%s" % arch_str)
|
||||
# It's possible this should have a `if "platform=cray" in
# spec:` in front of it, but it's not clear to me when this is
# set. In particular, I don't actually see this blurb showing
# up on Tioga builds, which causes the obvious problem
# (namely, the one this was added to supposedly solve in the
# first place).
|
||||
entries.append(cmake_cache_option("MPI_ASSUME_NO_BUILTIN_MPI", True))
|
||||
|
||||
if spec.satisfies("%cce") and spec.satisfies("^cuda+allow-unsupported-compilers"):
|
||||
args.append("-DCMAKE_CUDA_FLAGS=-allow-unsupported-compiler")
|
||||
|
||||
if "+cuda" in spec:
|
||||
args.append("-DcuDNN_DIR={0}".format(spec["cudnn"].prefix))
|
||||
|
||||
if spec.satisfies("^cuda@:10"):
|
||||
if "+cuda" in spec or "+distconv" in spec:
|
||||
args.append("-DCUB_DIR={0}".format(spec["cub"].prefix))
|
||||
|
||||
# Add support for OpenMP with external (Brew) clang
|
||||
if spec.satisfies("%clang +openmp platform=darwin"):
|
||||
if spec.satisfies("%clang +distconv platform=darwin"):
|
||||
clang = self.compiler.cc
|
||||
clang_bin = os.path.dirname(clang)
|
||||
clang_root = os.path.dirname(clang_bin)
|
||||
args.extend(
|
||||
[
|
||||
"-DOpenMP_CXX_FLAGS=-fopenmp=libomp",
|
||||
"-DOpenMP_CXX_LIB_NAMES=libomp",
|
||||
"-DOpenMP_libomp_LIBRARY={0}/lib/libomp.dylib".format(clang_root),
|
||||
]
|
||||
)
|
||||
|
||||
if "+rocm" in spec:
|
||||
args.extend(
|
||||
[
|
||||
"-DCMAKE_CXX_FLAGS=-std=c++17",
|
||||
"-DHIP_ROOT_DIR={0}".format(spec["hip"].prefix),
|
||||
"-DHIP_CXX_COMPILER={0}".format(self.spec["hip"].hipcc),
|
||||
]
|
||||
)
|
||||
if "platform=cray" in spec:
|
||||
args.extend(["-DMPI_ASSUME_NO_BUILTIN_MPI=ON"])
|
||||
archs = self.spec.variants["amdgpu_target"].value
|
||||
if archs != "none":
|
||||
arch_str = ",".join(archs)
|
||||
args.append(
|
||||
"-DHIP_HIPCC_FLAGS=--amdgpu-target={0}"
|
||||
" -g -fsized-deallocation -fPIC -std=c++17".format(arch_str)
|
||||
entries.append(cmake_cache_string("OpenMP_CXX_FLAGS", "-fopenmp=libomp"))
|
||||
entries.append(cmake_cache_string("OpenMP_CXX_LIB_NAMES", "libomp"))
|
||||
entries.append(
|
||||
cmake_cache_string(
|
||||
"OpenMP_libomp_LIBRARY", "{0}/lib/libomp.dylib".format(clang_root)
|
||||
)
|
||||
args.extend(
|
||||
[
|
||||
"-DCMAKE_HIP_ARCHITECTURES=%s" % arch_str,
|
||||
"-DAMDGPU_TARGETS=%s" % arch_str,
|
||||
"-DGPU_TARGETS=%s" % arch_str,
|
||||
]
|
||||
)
|
||||
|
||||
if self.spec.satisfies("^essl"):
|
||||
# IF IBM ESSL is used it needs help finding the proper LAPACK libraries
|
||||
args.extend(
|
||||
[
|
||||
"-DLAPACK_LIBRARIES=%s;-llapack;-lblas"
|
||||
% ";".join("-l{0}".format(lib) for lib in self.spec["essl"].libs.names),
|
||||
"-DBLAS_LIBRARIES=%s;-lblas"
|
||||
% ";".join("-l{0}".format(lib) for lib in self.spec["essl"].libs.names),
|
||||
]
|
||||
)
|
||||
|
||||
return args
|
||||
return entries
|
||||
|
||||
def initconfig_hardware_entries(self):
|
||||
spec = self.spec
|
||||
entries = super(Dihydrogen, self).initconfig_hardware_entries()
|
||||
|
||||
entries.append(cmake_cache_option("H2_ENABLE_CUDA", "+cuda" in spec))
|
||||
if spec.satisfies("+cuda"):
|
||||
entries.append(cmake_cache_string("CMAKE_CUDA_STANDARD", "17"))
|
||||
if not spec.satisfies("cuda_arch=none"):
|
||||
archs = spec.variants["cuda_arch"].value
|
||||
arch_str = ";".join(archs)
|
||||
entries.append(cmake_cache_string("CMAKE_CUDA_ARCHITECTURES", arch_str))
|
||||
|
||||
# FIXME: Should this use the "cuda_flags" function of the
|
||||
# CudaPackage class or something? There might be other
|
||||
# flags in play, and we need to be sure to get them all.
|
||||
cuda_flags = self.get_cuda_flags()
|
||||
if len(cuda_flags) > 0:
|
||||
entries.append(cmake_cache_string("CMAKE_CUDA_FLAGS", " ".join(cuda_flags)))
|
||||
|
||||
enable_rocm_var = (
|
||||
"H2_ENABLE_ROCM" if spec.version < Version("0.3") else "H2_ENABLE_HIP_ROCM"
|
||||
)
|
||||
entries.append(cmake_cache_option(enable_rocm_var, "+rocm" in spec))
|
||||
if spec.satisfies("+rocm"):
|
||||
entries.append(cmake_cache_string("CMAKE_HIP_STANDARD", "17"))
|
||||
if not spec.satisfies("amdgpu_target=none"):
|
||||
archs = self.spec.variants["amdgpu_target"].value
|
||||
arch_str = ";".join(archs)
|
||||
entries.append(cmake_cache_string("CMAKE_HIP_ARCHITECTURES", arch_str))
|
||||
entries.append(cmake_cache_string("AMDGPU_TARGETS", arch_str))
|
||||
entries.append(cmake_cache_string("GPU_TARGETS", arch_str))
|
||||
entries.append(cmake_cache_path("HIP_ROOT_DIR", spec["hip"].prefix))
|
||||
|
||||
return entries
|
||||
|
||||
def initconfig_package_entries(self):
|
||||
spec = self.spec
|
||||
entries = super(Dihydrogen, self).initconfig_package_entries()
|
||||
|
||||
# Basic H2 options
|
||||
entries.append(cmake_cache_option("H2_DEVELOPER_BUILD", "+developer" in spec))
|
||||
entries.append(cmake_cache_option("H2_ENABLE_TESTS", "+developer" in spec))
|
||||
|
||||
entries.append(cmake_cache_option("H2_ENABLE_CODE_COVERAGE", "+coverage" in spec))
|
||||
entries.append(cmake_cache_option("H2_CI_BUILD", "+ci" in spec))
|
||||
|
||||
entries.append(cmake_cache_option("H2_ENABLE_DACE", "+dace" in spec))
|
||||
|
||||
# DistConv options
|
||||
entries.append(cmake_cache_option("H2_ENABLE_ALUMINUM", "+distconv" in spec))
|
||||
entries.append(cmake_cache_option("H2_ENABLE_DISTCONV_LEGACY", "+distconv" in spec))
|
||||
entries.append(cmake_cache_option("H2_ENABLE_OPENMP", "+distconv" in spec))
|
||||
|
||||
# Paths to stuff, just in case. CMAKE_PREFIX_PATH should catch
|
||||
# all this, but this shouldn't hurt to have.
|
||||
entries.append(cmake_cache_path("spdlog_ROOT", spec["spdlog"].prefix))
|
||||
|
||||
if "+developer" in spec:
|
||||
entries.append(cmake_cache_path("Catch2_ROOT", spec["catch2"].prefix))
|
||||
|
||||
if "+coverage" in spec:
|
||||
entries.append(cmake_cache_path("lcov_ROOT", spec["lcov"].prefix))
|
||||
entries.append(cmake_cache_path("genhtml_ROOT", spec["lcov"].prefix))
|
||||
if "+ci" in spec:
|
||||
entries.append(cmake_cache_path("gcovr_ROOT", spec["py-gcovr"].prefix))
|
||||
|
||||
if "+distconv" in spec:
|
||||
entries.append(cmake_cache_path("Aluminum_ROOT", spec["aluminum"].prefix))
|
||||
if "+cuda" in spec:
|
||||
entries.append(cmake_cache_path("cuDNN_ROOT", spec["cudnn"].prefix))
|
||||
|
||||
# Currently this is a hack for all Hydrogen versions. WIP to
|
||||
# fix this at develop.
|
||||
entries.extend(get_blas_entries(spec))
|
||||
return entries
|
||||
|
||||
def setup_build_environment(self, env):
|
||||
if self.spec.satisfies("%apple-clang +openmp"):
|
||||
|
||||
@@ -18,6 +18,7 @@ class Discotec(CMakePackage):

version("main", branch="main")

variant("compression", default=False, description="Write sparse grid files compressed")
variant("ft", default=False, description="DisCoTec with algorithm-based fault tolerance")
variant("gene", default=False, description="Build for GENE (as task library)")
variant("hdf5", default=True, description="Interpolation output with HDF5")
@@ -31,6 +32,7 @@ class Discotec(CMakePackage):
depends_on("cmake@3.24.2:", type="build")
depends_on("glpk")
depends_on("highfive+mpi+boost+ipo", when="+hdf5")
depends_on("lz4", when="+compression")
depends_on("mpi")
depends_on("selalib", when="+selalib")
depends_on("vtk", when="+vtk")
@@ -38,6 +40,7 @@ class Discotec(CMakePackage):
def cmake_args(self):
args = [
self.define("DISCOTEC_BUILD_MISSING_DEPS", False),
self.define_from_variant("DISCOTEC_WITH_COMPRESSION", "compression"),
self.define_from_variant("DISCOTEC_ENABLEFT", "ft"),
self.define_from_variant("DISCOTEC_GENE", "gene"),
self.define_from_variant("DISCOTEC_OPENMP", "openmp"),
@@ -14,6 +14,8 @@ class DlaFuture(CMakePackage, CudaPackage, ROCmPackage):
git = "https://github.com/eth-cscs/DLA-Future.git"
maintainers = ["rasolca", "albestro", "msimberg", "aurianer"]

license("BSD-3-Clause")

version("0.2.1", sha256="4c2669d58f041304bd618a9d69d9879a42e6366612c2fc932df3894d0326b7fe")
version("0.2.0", sha256="da73cbd1b88287c86d84b1045a05406b742be924e65c52588bbff200abd81a10")
version("0.1.0", sha256="f7ffcde22edabb3dc24a624e2888f98829ee526da384cd752b2b271c731ca9b1")
@@ -26,6 +26,7 @@ class EpicsBase(MakefilePackage):
def patch(self):
filter_file(r"^\s*CC\s*=.*", "CC = " + spack_cc, "configure/CONFIG.gnuCommon")
filter_file(r"^\s*CCC\s*=.*", "CCC = " + spack_cxx, "configure/CONFIG.gnuCommon")
filter_file(r"\$\(PERL\)\s+\$\(XSUBPP\)", "$(XSUBPP)", "modules/ca/src/perl/Makefile")

@property
def install_targets(self):
@@ -16,7 +16,6 @@ class Fdb(CMakePackage):

maintainers("skosukhin")

# master version of fdb is subject to frequent changes and is to be used experimentally.
version("master", branch="master")
version("5.11.23", sha256="09b1d93f2b71d70c7b69472dfbd45a7da0257211f5505b5fcaf55bfc28ca6c65")
version("5.11.17", sha256="375c6893c7c60f6fdd666d2abaccb2558667bd450100817c0e1072708ad5591e")
@@ -44,6 +43,7 @@ class Fdb(CMakePackage):
depends_on("ecbuild@3.7:", type="build", when="@5.11.6:")

depends_on("eckit@1.16:")
depends_on("eckit@1.24.4:", when="@5.11.22:")
depends_on("eckit+admin", when="+tools")

depends_on("eccodes@2.10:")
@@ -15,6 +15,8 @@ class Fmt(CMakePackage):
url = "https://github.com/fmtlib/fmt/releases/download/7.1.3/fmt-7.1.3.zip"
maintainers("msimberg")

license("MIT")

version("10.1.1", sha256="b84e58a310c9b50196cda48d5678d5fa0849bca19e5fdba6b684f0ee93ed9d1b")
version("10.1.0", sha256="d725fa83a8b57a3cedf238828fa6b167f963041e8f9f7327649bddc68ae316f4")
version("10.0.0", sha256="4943cb165f3f587f26da834d3056ee8733c397e024145ca7d2a8a96bb71ac281")
@@ -21,20 +21,30 @@ class Geos(CMakePackage):
|
||||
|
||||
maintainers("adamjstewart")
|
||||
|
||||
version("3.12.1", sha256="d6ea7e492224b51193e8244fe3ec17c4d44d0777f3c32ca4fb171140549a0d03")
|
||||
version("3.12.0", sha256="d96db96011259178a35555a0f6d6e75a739e52a495a6b2aa5efb3d75390fbc39")
|
||||
version("3.11.3", sha256="80d60a2bbc0cde7745a3366b9eb8c0d65a142b03e063ea0a52c364758cd5ee89")
|
||||
version("3.11.2", sha256="b1f077669481c5a3e62affc49e96eb06f281987a5d36fdab225217e5b825e4cc")
|
||||
version("3.11.1", sha256="6d0eb3cfa9f92d947731cc75f1750356b3bdfc07ea020553daf6af1c768e0be2")
|
||||
version("3.11.0", sha256="79ab8cabf4aa8604d161557b52e3e4d84575acdc0d08cb09ab3f7aaefa4d858a")
|
||||
version("3.10.6", sha256="078403158da66cad8be39ad1ede5e2fe4b70dcf7bb292fb06a65bdfe8afa6daf")
|
||||
version("3.10.5", sha256="cc47d95e846e2745c493d8f9f3a9913b1c61f26717a1165898da64352aec4dde")
|
||||
version("3.10.4", sha256="d6fc11bcfd265cbf2714199174e4c3392d657551e5fd84c74c07c863b29357e3")
|
||||
version("3.10.3", sha256="3c141b07d61958a758345d5f54e3c735834b2f4303edb9f67fb26914f0d44770")
|
||||
version("3.10.2", sha256="50bbc599ac386b4c2b3962dcc411f0040a61f204aaef4eba7225ecdd0cf45715")
|
||||
version("3.10.1", sha256="a8148eec9636814c8ab0f8f5266ce6f9b914ed65b0d083fc43bb0bbb01f83648")
|
||||
version("3.10.0", sha256="097d70e3c8f688e59633ceb8d38ad5c9b0d7ead5729adeb925dbc489437abe13")
|
||||
version("3.9.5", sha256="c6c9aedfa8864fb44ba78911408442382bfd0690cf2d4091ae3805c863789036")
|
||||
version("3.9.4", sha256="70dff2530d8cd2dfaeeb91a5014bd17afb1baee8f0e3eb18e44d5b4dbea47b14")
|
||||
version("3.9.3", sha256="f8b2314e311456f7a449144efb5e3188c2a28774752bc50fc882a3cd5c89ee35")
|
||||
version("3.9.2", sha256="44a5a9be21d7d473436bf621c2ddcc3cf5a8bbe3c786e13229618a3b9d861297")
|
||||
version("3.9.1", sha256="7e630507dcac9dc07565d249a26f06a15c9f5b0c52dd29129a0e3d381d7e382a")
|
||||
version("3.9.0", sha256="bd8082cf12f45f27630193c78bdb5a3cba847b81e72b20268356c2a4fc065269")
|
||||
version("3.8.4", sha256="6de8c98c1ae7cb0cd2d726a8dc9b7467308c4b4e05f9df94742244e64e441499")
|
||||
version("3.8.3", sha256="f98315d1ba35c8d1a94a2947235f9e9dfb7057fdec343683f64ff9ad1061255c")
|
||||
version("3.8.2", sha256="5a102f4614b0c9291504bbefd847ebac18ea717843506bd251d015c7cf9726b4")
|
||||
version("3.8.1", sha256="4258af4308deb9dbb5047379026b4cd9838513627cb943a44e16c40e42ae17f7")
|
||||
version("3.8.0", sha256="99114c3dc95df31757f44d2afde73e61b9f742f0b683fd1894cbbee05dda62d5")
|
||||
version("3.7.2", sha256="2166e65be6d612317115bfec07827c11b403c3f303e0a7420a2106bc999d7707")
|
||||
version("3.6.2", sha256="045a13df84d605a866602f6020fc6cbf8bf4c42fb50de237a08926e1d7d7652a")
|
||||
version("3.6.1", sha256="4a2e4e3a7a09a7cfda3211d0f4a235d9fd3176ddf64bd8db14b4ead266189fc5")
|
||||
|
||||
@@ -24,7 +24,8 @@ class Ginkgo(CMakePackage, CudaPackage, ROCmPackage):
|
||||
|
||||
version("develop", branch="develop")
|
||||
version("master", branch="master")
|
||||
version("1.6.0", commit="1f1ed46e724334626f016f105213c047e16bc1ae", preferred=True) # v1.6.0
|
||||
version("1.7.0", commit="49242ff89af1e695d7794f6d50ed9933024b66fe") # v1.7.0
|
||||
version("1.6.0", commit="1f1ed46e724334626f016f105213c047e16bc1ae") # v1.6.0
|
||||
version("1.5.0", commit="234594c92b58e2384dfb43c2d08e7f43e2b58e7a") # v1.5.0
|
||||
version("1.5.0.glu_experimental", branch="glu_experimental")
|
||||
version("1.4.0", commit="f811917c1def4d0fcd8db3fe5c948ce13409e28e") # v1.4.0
|
||||
@@ -37,13 +38,18 @@ class Ginkgo(CMakePackage, CudaPackage, ROCmPackage):
|
||||
variant("shared", default=True, description="Build shared libraries")
|
||||
variant("full_optimizations", default=False, description="Compile with all optimizations")
|
||||
variant("openmp", default=sys.platform != "darwin", description="Build with OpenMP")
|
||||
variant("oneapi", default=False, description="Build with oneAPI support")
|
||||
variant("sycl", default=False, description="Enable SYCL backend")
|
||||
variant("develtools", default=False, description="Compile with develtools enabled")
|
||||
variant("hwloc", default=False, description="Enable HWLOC support")
|
||||
variant("mpi", default=False, description="Enable MPI support")
|
||||
|
||||
depends_on("cmake@3.9:", type="build")
|
||||
depends_on("cuda@9:", when="+cuda")
|
||||
depends_on("cmake@3.9:", type="build", when="@:1.3.0")
|
||||
depends_on("cmake@3.13:", type="build", when="@1.4.0:1.6.0")
|
||||
depends_on("cmake@3.16:", type="build", when="@1.7.0:")
|
||||
depends_on("cmake@3.18:", type="build", when="+cuda@1.7.0:")
|
||||
depends_on("cuda@9:", when="+cuda @:1.4.0")
|
||||
depends_on("cuda@9.2:", when="+cuda @1.5.0:")
|
||||
depends_on("cuda@10.1:", when="+cuda @1.7.0:")
|
||||
depends_on("mpi", when="+mpi")
|
||||
|
||||
depends_on("rocthrust", when="+rocm")
|
||||
@@ -60,14 +66,13 @@ class Ginkgo(CMakePackage, CudaPackage, ROCmPackage):
|
||||
depends_on("googletest", type="test")
|
||||
depends_on("numactl", type="test", when="+hwloc")
|
||||
|
||||
depends_on("intel-oneapi-mkl", when="+oneapi")
|
||||
depends_on("intel-oneapi-dpl", when="+oneapi")
|
||||
depends_on("intel-oneapi-mkl", when="+sycl")
|
||||
depends_on("intel-oneapi-dpl", when="+sycl")
|
||||
depends_on("intel-oneapi-tbb", when="+sycl")
|
||||
|
||||
conflicts("%gcc@:5.2.9")
|
||||
conflicts("+rocm", when="@:1.1.1")
|
||||
conflicts("+mpi", when="@:1.4.0")
|
||||
conflicts("+cuda", when="+rocm")
|
||||
conflicts("+openmp", when="+oneapi")
|
||||
|
||||
# ROCm 4.1.0 breaks platform settings which breaks Ginkgo's HIP support.
|
||||
conflicts("^hip@4.1.0:", when="@:1.3.0")
|
||||
@@ -76,22 +81,35 @@ class Ginkgo(CMakePackage, CudaPackage, ROCmPackage):
|
||||
conflicts("^rocthrust@4.1.0:", when="@:1.3.0")
|
||||
conflicts("^rocprim@4.1.0:", when="@:1.3.0")
|
||||
|
||||
# Ginkgo 1.6.0 started relying on ROCm 4.5.0
|
||||
conflicts("^hip@:4.3.1", when="@1.6.0:")
|
||||
conflicts("^hipblas@:4.3.1", when="@1.6.0:")
|
||||
conflicts("^hipsparse@:4.3.1", when="@1.6.0:")
|
||||
conflicts("^rocthrust@:4.3.1", when="@1.6.0:")
|
||||
conflicts("^rocprim@:4.3.1", when="@1.6.0:")
|
||||
|
||||
conflicts(
|
||||
"+sycl", when="@:1.4.0", msg="For SYCL support, please use Ginkgo version 1.4.0 and newer."
|
||||
)
|
||||
|
||||
# Skip smoke tests if compatible hardware isn't found
|
||||
patch("1.4.0_skip_invalid_smoke_tests.patch", when="@1.4.0")
|
||||
|
||||
# Newer DPC++ compilers use the updated SYCL 2020 standard which changes
|
||||
# kernel attribute propagation rules. This doesn't work well with the
|
||||
# initial Ginkgo oneAPI support.
|
||||
patch("1.4.0_dpcpp_use_old_standard.patch", when="+oneapi @1.4.0")
|
||||
|
||||
# Add missing include statement
|
||||
patch("thrust-count-header.patch", when="+rocm @1.5.0")
|
||||
|
||||
def setup_build_environment(self, env):
|
||||
spec = self.spec
|
||||
if "+oneapi" in spec:
|
||||
if "+sycl" in spec:
|
||||
env.set("MKLROOT", join_path(spec["intel-oneapi-mkl"].prefix, "mkl", "latest"))
|
||||
env.set("DPL_ROOT", join_path(spec["intel-oneapi-dpl"].prefix, "dpl", "latest"))
|
||||
# The `IntelSYCLConfig.cmake` is broken with spack. By default, it
|
||||
# relies on the CMAKE_CXX_COMPILER being the real icpx/dpcpp
|
||||
# compiler. If not, the variable SYCL_COMPILER of that script is
|
||||
# broken, and all the SYCL detection mechanism is wrong. We fix it
|
||||
# by giving hint environment variables.
|
||||
env.set("SYCL_LIBRARY_DIR_HINT", os.path.dirname(os.path.dirname(self.compiler.cxx)))
|
||||
env.set("SYCL_INCLUDE_DIR_HINT", os.path.dirname(os.path.dirname(self.compiler.cxx)))
|
||||
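As a standalone illustration of the hint computed above: both environment variables point two directory levels above the C++ compiler binary, i.e. the compiler's installation root. The path below is purely hypothetical:

```python
import os

# Hypothetical compiler location; the hint is its installation root.
cxx = "/opt/intel/oneapi/compiler/2023.2.0/linux/bin/icpx"
hint = os.path.dirname(os.path.dirname(cxx))
print(hint)  # /opt/intel/oneapi/compiler/2023.2.0/linux
```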
|
||||
def cmake_args(self):
|
||||
# Check that the correct C++ standard is available
|
||||
@@ -106,18 +124,19 @@ def cmake_args(self):
|
||||
except UnsupportedCompilerFlag:
|
||||
raise InstallError("Ginkgo requires a C++14-compliant C++ compiler")
|
||||
|
||||
cxx_is_dpcpp = os.path.basename(self.compiler.cxx) == "dpcpp"
|
||||
if self.spec.satisfies("+oneapi") and not cxx_is_dpcpp:
|
||||
raise InstallError(
|
||||
"Ginkgo's oneAPI backend requires the" + "DPC++ compiler as main CXX compiler."
|
||||
)
|
||||
if self.spec.satisfies("@1.4.0:1.6.0 +sycl") and not self.spec.satisfies(
|
||||
"%oneapi@2021.3.0:"
|
||||
):
|
||||
raise InstallError("ginkgo +sycl requires %oneapi@2021.3.0:")
|
||||
elif self.spec.satisfies("@1.7.0: +sycl") and not self.spec.satisfies("%oneapi@2022.1.0:"):
|
||||
raise InstallError("ginkgo +sycl requires %oneapi@2022.1.0:")
|
||||
|
||||
spec = self.spec
|
||||
from_variant = self.define_from_variant
|
||||
args = [
|
||||
from_variant("GINKGO_BUILD_CUDA", "cuda"),
|
||||
from_variant("GINKGO_BUILD_HIP", "rocm"),
|
||||
from_variant("GINKGO_BUILD_DPCPP", "oneapi"),
|
||||
from_variant("GINKGO_BUILD_SYCL", "sycl"),
|
||||
from_variant("GINKGO_BUILD_OMP", "openmp"),
|
||||
from_variant("GINKGO_BUILD_MPI", "mpi"),
|
||||
from_variant("BUILD_SHARED_LIBS", "shared"),
|
||||
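For readers new to the helper, `define_from_variant` maps a boolean variant straight onto the matching CMake flag; a rough sketch of what the first entry above contributes (the exact flag spelling may differ slightly between Spack versions):

```python
# Rough expansion of define_from_variant for a boolean variant:
from_variant("GINKGO_BUILD_CUDA", "cuda")
# spec has +cuda  ->  "-DGINKGO_BUILD_CUDA:BOOL=ON"
# spec has ~cuda  ->  "-DGINKGO_BUILD_CUDA:BOOL=OFF"
```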
@@ -161,6 +180,11 @@ def cmake_args(self):
|
||||
args.append(
|
||||
self.define("CMAKE_MODULE_PATH", self.spec["hip"].prefix.lib.cmake.hip)
|
||||
)
|
||||
|
||||
if "+sycl" in self.spec:
|
||||
sycl_compatible_compilers = ["dpcpp", "icpx"]
|
||||
if not (os.path.basename(self.compiler.cxx) in sycl_compatible_compilers):
|
||||
raise InstallError("ginkgo +sycl requires DPC++ (dpcpp) or icpx compiler.")
|
||||
return args
|
||||
|
||||
@property
|
||||
|
||||
@@ -17,6 +17,7 @@ class Gotcha(CMakePackage):

version("develop", branch="develop")
version("master", branch="master")
version("1.0.5", tag="1.0.5", commit="e28f10c45a0cda0e1ec225eaea6abfe72c8353aa")
version("1.0.4", tag="1.0.4", commit="46f2aaedc885f140a3f31a17b9b9a9d171f3d6f0")
version("1.0.3", tag="1.0.3", commit="1aafd1e30d46ce4e6555c8a4ea5f5edf6a5eade5")
version("1.0.2", tag="1.0.2", commit="bed1b7c716ebb0604b3e063121649b5611640f25")
@@ -17,6 +17,8 @@ class Gperftools(AutotoolsPackage):
url = "https://github.com/gperftools/gperftools/releases/download/gperftools-2.7/gperftools-2.7.tar.gz"
maintainers("albestro", "eschnett", "msimberg", "teonnik")

license("BSD-3-Clause")

version("2.13", sha256="4882c5ece69f8691e51ffd6486df7d79dbf43b0c909d84d3c0883e30d27323e7")
version("2.12", sha256="fb611b56871a3d9c92ab0cc41f9c807e8dfa81a54a4a9de7f30e838756b5c7c6")
version("2.11", sha256="8ffda10e7c500fea23df182d7adddbf378a203c681515ad913c28a64b87e24dc")
@@ -12,9 +12,22 @@ class Gzip(AutotoolsPackage):
homepage = "https://www.gnu.org/software/gzip/"
url = "https://ftp.gnu.org/gnu/gzip/gzip-1.10.tar.gz"

version("1.12", sha256="5b4fb14d38314e09f2fc8a1c510e7cd540a3ea0e3eb9b0420046b82c3bf41085")
version("1.11", sha256="3e8a0e0c45bad3009341dce17d71536c4c655d9313039021ce7554a26cd50ed9")
version("1.10", sha256="c91f74430bf7bc20402e1f657d0b252cb80aa66ba333a25704512af346633c68")
version("1.13", sha256="20fc818aeebae87cdbf209d35141ad9d3cf312b35a5e6be61bfcfbf9eddd212a")
version(
"1.12",
sha256="5b4fb14d38314e09f2fc8a1c510e7cd540a3ea0e3eb9b0420046b82c3bf41085",
deprecated=True,
)
version(
"1.11",
sha256="3e8a0e0c45bad3009341dce17d71536c4c655d9313039021ce7554a26cd50ed9",
deprecated=True,
)
version(
"1.10",
sha256="c91f74430bf7bc20402e1f657d0b252cb80aa66ba333a25704512af346633c68",
deprecated=True,
)

# Gzip makes a recursive symlink if built in-source
build_directory = "spack-build"
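A short aside on the pattern this hunk introduces: `deprecated=True` keeps an old release's checksum available but marks it as deprecated, so that by default Spack steers away from it and warns if it is requested. A minimal sketch with a hypothetical package and placeholder checksums:

```python
from spack.package import *


class Demo(AutotoolsPackage):
    """Hypothetical package showing the deprecated-version idiom."""

    homepage = "https://example.com/demo"
    url = "https://example.com/demo-2.0.tar.gz"

    version("2.0", sha256="0" * 64)  # placeholder checksum
    version(
        "1.0",
        sha256="1" * 64,  # placeholder checksum
        deprecated=True,  # still downloadable, but avoided by default
    )
```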
@@ -207,6 +207,7 @@ class Hdf5(CMakePackage):
variant("hl", default=False, description="Enable the high-level library")
variant("cxx", default=False, description="Enable C++ support")
variant("map", when="@1.14:", default=False, description="Enable MAP API support")
variant("subfiling", when="@1.14:", default=False, description="Enable Subfiling VFD support")
variant("fortran", default=False, description="Enable Fortran support")
variant("java", when="@1.10:", default=False, description="Enable Java support")
variant("threadsafe", default=False, description="Enable thread-safe capabilities")
@@ -329,7 +330,7 @@ class Hdf5(CMakePackage):

patch("fortran-kinds.patch", when="@1.10.7")

# This patch may only be needed with GCC11.2 on macOS, but it's valid for
# This patch may only be needed with GCC 11.2 on macOS, but it's valid for
# any of the head HDF5 versions as of 12/2021. Since it's impossible to
# tell what Fortran version is part of a mixed apple-clang toolchain on
# macOS (which is the norm), and this might be an issue for other compilers
@@ -607,6 +608,7 @@ def cmake_args(self):
# are enabled but the tests are disabled.
spec.satisfies("@1.8.22+shared+tools"),
),
self.define_from_variant("HDF5_ENABLE_SUBFILING_VFD", "subfiling"),
self.define_from_variant("HDF5_ENABLE_MAP_API", "map"),
self.define("HDF5_ENABLE_Z_LIB_SUPPORT", True),
self.define_from_variant("HDF5_ENABLE_SZIP_SUPPORT", "szip"),
@@ -711,6 +713,17 @@ def fix_package_config(self):
if not os.path.exists(tgt_filename):
symlink(src_filename, tgt_filename)

@run_after("install")
def link_debug_libs(self):
# When build_type is Debug, the hdf5 build appends _debug to all library names.
# Dependents of hdf5 (netcdf-c etc.) can't handle those, thus make symlinks.
if "build_type=Debug" in self.spec:
libs = find(self.prefix.lib, "libhdf5*_debug.*", recursive=False)
with working_dir(self.prefix.lib):
for lib in libs:
libname = os.path.split(lib)[1]
os.symlink(libname, libname.replace("_debug", ""))

@property
@llnl.util.lang.memoized
def _output_version(self):
Some files were not shown because too many files have changed in this diff.