Compare commits

...

104 Commits

Author SHA1 Message Date
Todd Gamblin
56c947722e not == to !=
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
2025-04-09 11:18:43 -07:00
Todd Gamblin
b14f172a61 check spec name
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
2025-04-09 09:49:26 -07:00
Todd Gamblin
795335d56e python: factor out find_python_in_prefix
Pull a free-standing `find_python_in_prefix` function out of `python`'s `command()`
property.

Originally done for #44382 but not ultimately used, I still think this is a good
refactor, so submitting as a separate pull request.

Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
2025-04-02 11:41:12 -07:00
Richard Berger
f4792c834e portage, tangram, wonton: update packages (#49829) 2025-04-02 19:53:14 +02:00
Satish Balay
98ca90aebc slepc, py-slepc4py, petsc, py-petsc4py add v3.23.0 (#49813) 2025-04-02 10:08:56 -07:00
Mikael Simberg
991f26d1ae pika: Add 0.33.0 (#49834) 2025-04-02 10:58:24 -06:00
eugeneswalker
973a7e6de8 e4s oneapi: upgrade to latest compilers oneapi@2025.1 (#47317)
* e4s oneapi: upgrade to latest compilers oneapi@2025.1

* update specs and package preferences

* enable some more dav packages

* enable additional specs

* e4s oneapi: packages: elfutils does not have bzip2 variant

* e4s oneapi: packages: elfutils does not have xz variant

* e4s oneapi: comment out heffte+sycl

* comment out e4s oneapi failures

* comment out more failures

* comment out failing spec
2025-04-02 09:21:49 -07:00
Cory Quammen
528ba74965 paraview: add v5.13.3 (#49818)
* Revert "paraview: add patch for Intel Classic compilers (#49116)"

This reverts commit 7a95e2beb5.

We'll mark Intel Classic compilers as conflicting with ParaView
versions 5.13.0-5.13.2 instead, since 5.13.3 is available and can be
built with those compilers.

* Add conflict for Intel Classic compilers and ParaView 5.13.0-5.13.2.

* paraview: add new v5.13.3 release
2025-04-02 10:53:19 -05:00
eugeneswalker
73034c163b rocm 6.3.3 updates (#49684) 2025-04-02 08:44:06 -07:00
Massimiliano Culpo
62ee56e8a3 docs: remove leftover references to compiler: entries (#49824)
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2025-04-02 09:33:23 +02:00
Massimiliano Culpo
01471aee6b solver: don't use tags to compute injected deps (#49723)
This commit reorders ASP setup, so that rules from
possible compilers are collected first.

This allows us to know the dependencies that may be
injected before counting the possible dependencies,
so we can account for them too.

Proceeding this way makes it easier to inject
complex runtimes, like hip.

Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2025-04-02 09:32:25 +02:00
Afzal Patel
c004c8b616 rocm-opencl: change homepage and git (#49832) 2025-04-02 09:19:40 +02:00
Harmen Stoppels
0facab231f spec.py: more virtuals=... type hints (#49753)
Deal with the "issue" that passing a str instance does not cause a
type check failure, because a str is itself a Sequence[str] and an
Iterable[str]. Fix it by special-casing the str instance.
2025-04-02 00:05:00 -07:00
Greg Becker
ca64050f6a config:url_fetch_method: allow curl args (#49712)
Signed-off-by: Gregory Becker <becker33@llnl.gov>
2025-04-01 15:23:28 -05:00
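As a sketch of what this change permits (the flag string below is taken from the updated ``config.yaml`` docs further down in this diff): anything appended after ``curl`` in ``config:url_fetch_method`` is forwarded to the ``curl`` subprocess as extra arguments:

```yaml
# config.yaml -- hypothetical minimal example; -k skips certificate
# verification and -q ignores any .curlrc
config:
  url_fetch_method: 'curl -k -q'
```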
Jonas Eschle
91b3afac88 Add py-tf-keras package, upgrade TFP (#43688)
* enh: add tf-keras package, upgrade TFP

* chore:  remove legacy deps

* chore:  fix style

* chore:  fix style

* fix: url

* fix: use jax, tensorflow instead of py-jax, py-tensorflow

* fix: remove typo

* Update var/spack/repos/builtin/packages/py-tensorflow-probability/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* fix: typos

* fix: swap version

* fix: typos

* fix: typos

* fix: typos

* chore: use f strings

* enh: move tf-keras to pypi

* [@spackbot] updating style on behalf of jonas-eschle

* fix: t

* enh: move back to releases to make it work, actually

* fix: change back to tar...

* Fix concretisation: py-tf-keras only has 2.17, not 2.16, fix checksum

* enh: add TFP 0.25

* enh: add tf-keras 2.18

* chore: fix style

* fix: remove patch

* maybe fix license

* Update var/spack/repos/builtin/packages/py-tf-keras/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* fix:  pipargs global?

* Update var/spack/repos/builtin/packages/py-tf-keras/package.py

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>

* chore: fix formatting

* chore: fix formatting again

* fix: paths in spack

* fix:  typo

* fix:  typo

* use github package

* use pip install

* fix typo

* fix typo

* comment 2.19 out

* fix typo

* fix typo

* fix typo

* chore: remove unused patch file

* chore: cleanup

* chore: add comment about TF version

* chore: remove unused Bazel, cleanup imports

* [@spackbot] updating style on behalf of jonas-eschle

* chore: add star import, degrading readability

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: jonas-eschle <jonas-eschle@users.noreply.github.com>
Co-authored-by: Bernhard Kaindl <contact@bernhard.kaindl.dev>
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
2025-04-01 20:12:57 +02:00
Alec Scott
0327ba1dfe py-python-lsp-ruff: new package (#49764)
* py-python-lsp-ruff: new package
* Add git and main versions to both for development
2025-04-01 10:32:49 -07:00
Vanessasaurus
67f091f0d9 flux-core: cffi needed for linking (#49656)
* Automated deployment to update package flux-core 2025-03-23

* cffi also needed for runtime

---------

Co-authored-by: github-actions <github-actions@users.noreply.github.com>
2025-04-01 10:24:57 -07:00
Robert Maaskant
c09759353f trivy: new package (#49786) 2025-04-01 09:46:04 -07:00
Wouter Deconinck
14d72d2703 treewide style: move depends_on(c,cxx,fortran) with other dependencies, after variants (#49769)
* fix: move depends_on(c,cxx,fortran) with other dependencies, after variants

* treewide style: move depends_on(c,cxx,fortran) with other dependencies, after variants

* treewide style: move depends_on(c,cxx,fortran) with other dependencies, after variant

---------

Co-authored-by: Alec Scott <hi@alecbcs.com>
2025-04-01 09:25:46 -07:00
Massimiliano Culpo
288298bd2c ci: don't run unit-test on ubuntu 20.04 (#49826)
Compatibility with Python 3.6 is still tested by the
rhel8-platform-python job, and Ubuntu 20.04 will be
removed soon from the list of runners:

https://github.com/actions/runner-images/issues/11101

Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2025-04-01 17:33:24 +02:00
Olivier Cessenat
964f81d3c2 scotch: remains buildable with make (#49812) 2025-04-01 07:56:48 -07:00
John W. Parent
93220f706e concretizer cache: disable due to broken cleanup (#49470)
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
2025-04-01 15:38:56 +02:00
Victor A. P. Magri
fe9275a5d4 New hypre variants + refactoring (#49217) 2025-04-01 08:36:17 -05:00
Massimiliano Culpo
0fa829ae77 cmake: set CMAKE_POLICY_VERSION_MINIMUM (#49819)
CMake 4.0.0 breaks compatibility with CMake projects
requiring CMake < 3.5. However, many projects that
specify a minimum requirement older than 3.5 are
actually compatible with newer CMake and do not use
CMake 3.4 or older features. This allows those
projects to use a newer CMake.

Co-authored-by: John W. Parent <45471568+johnwparent@users.noreply.github.com>
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2025-04-01 14:55:51 +02:00
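As a rough illustration of the variable involved (not Spack's exact implementation): a project pinned to an ancient minimum can still configure under CMake 4.x when ``CMAKE_POLICY_VERSION_MINIMUM`` is supplied:

```console
# Hypothetical project whose CMakeLists.txt declares
# cmake_minimum_required(VERSION 3.1); with CMake >= 4.0 this would
# otherwise fail at configure time.
cmake -S . -B build -DCMAKE_POLICY_VERSION_MINIMUM=3.5
```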
Paul Gessinger
451db85657 apple-clang: fix spec syntax error (#49802) 2025-04-01 03:53:56 -06:00
Massimiliano Culpo
ce1c2b0f05 chapel: remove requirements on compilers (#49783)
The requirements being removed are redundant, or 
even outdated (%cray-prgenv-* is not a compiler in v0.23).

When compilers turned into nodes, these constraints,
with the "one_of" policy, started being unsat under
certain conditions e.g. we can't compile anymore
with GCC and depend on LLVM as a library.

Remove the requirements to make the recipe solvable
again.

Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2025-04-01 09:30:43 +02:00
Adam J. Stewart
88c1eae5d4 py-webdataset: add v0.2.111 (#49780) 2025-04-01 09:15:18 +02:00
Adam J. Stewart
4af3bc47a2 py-wids: add new package (#49779) 2025-04-01 09:14:58 +02:00
Harmen Stoppels
a257747cba test/spec_semantics.py: more fixes (#49821) 2025-04-01 09:14:26 +02:00
Robert Maaskant
ed2ddec715 py-setuptools-scm: deprecate old versions (#49657) 2025-04-01 09:08:10 +02:00
Tamara Dahlgren
748c7e5420 Documentation: remote URL inclusion updates (#49669) 2025-04-01 09:02:50 +02:00
dependabot[bot]
cf70d71ba8 build(deps): bump flake8 in /.github/workflows/requirements/style (#49815)
Bumps [flake8](https://github.com/pycqa/flake8) from 7.1.2 to 7.2.0.
- [Commits](https://github.com/pycqa/flake8/compare/7.1.2...7.2.0)

---
updated-dependencies:
- dependency-name: flake8
  dependency-version: 7.2.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-01 08:34:26 +02:00
Stephen Nicholas Swatman
3913c24c19 acts dependencies: new versions as of 2025/03/31 (#49803)
This commit adds a new version of ACTS algebra plugins (v0.27.0) as
well as detray (v0.94.0).
2025-04-01 00:28:54 -06:00
Stephen Nicholas Swatman
032a0dba90 benchmark: add v1.9.2 (#49804)
This commit adds v1.9.2 of Google Benchmark.
2025-04-01 00:28:39 -06:00
Chris Marsh
d4a8602577 spdlog: Add v1.15.1, improve version ranges (#49745)
* Add v1.15.1, fix external tweakme.h, better constraint version ranges of external fmt

* style
2025-03-31 19:09:28 -06:00
Elliott Slaughter
1ce5ecfbd7 legion: add 25.03.0, deprecate versions 23.03.0 and prior. (#49770)
* legion: Add 25.03.0. Deprecate 23.03.0 and prior.

* legion: Add ucc dependency when using ucx network.

---------

Co-authored-by: Richard Berger <rberger@lanl.gov>
2025-03-31 18:54:00 -06:00
Satish Balay
bf6ea7b047 cuda@12.8 requires glibc@2.27 (#49561) 2025-03-31 13:02:09 -07:00
Chris Marsh
49a17de751 python 3.12 package fixes (#48830)
* update packages to work with python 3.12+

* partd updates

* py-nc-time-axis updates for 3.12

* Add new py-cftime as required for py-nc-time-axis

* fix dropped python 3.9

* switch from when blocks to flat

* remove redundant requires

* protect version range for python@:3.11

* add new c-blosc to support newly added python 3.12 py-blosc version

* add scikit-build 0.18.1 for python 3.12 required for this set of commits

* add complete optional variant for py-partd to match pyproject.toml. Deprecate super old versions

* only set system blosc for the required case

* style

* Remove incorrect python bound

* improve python version reqs and move to more canonical depends_on

* move to depends from req

* add new python range limit, update comment

* remove @coreyjadams as maintainer as per their request https://github.com/spack/spack/pull/48830#issuecomment-2705062587

* Fix python bounds

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>

---------

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
2025-03-31 12:09:56 -05:00
Harmen Stoppels
95927df455 test/spec_semantics.py: split a test into multiple ones, test only necessary bits (#49806) 2025-03-31 17:08:36 +00:00
Tara Drwenski
0f64f1baec CachedCMakePackage: Improve finding mpiexec for non-slurm machines (#49033)
* Check for LSF, Flux, and Slurm when determining MPI exec

* Make scheduler/MPI exec helper functions methods of CachedCMakeBuilder

* Remove axom workaround for running mpi on machines with flux
2025-03-31 09:11:09 -07:00
Harmen Stoppels
46f7737626 xfail -> skipif platform specific tests on other platforms (#49800) 2025-03-31 16:32:26 +02:00
Alec Scott
7256508983 fzf: add v0.61.0 (#49787) 2025-03-31 12:16:36 +02:00
John W. Parent
75a3d179b1 Windows: MSVC provides fortran, fix msmpi handling (#49734)
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2025-03-31 10:53:07 +02:00
Xylar Asay-Davis
72b14de89e Add nco v5.3.3 (#49776) 2025-03-30 10:12:12 -07:00
Christoph Junghans
de6eaa1b4e votca: add v2025 (#49768) 2025-03-30 10:10:23 -07:00
Paul Gessinger
0f4bfda2a1 apple-gl: remove compiler requirement after change in compiler model (#49760) 2025-03-30 11:12:42 +02:00
Ritwik Patil
6c78d9cab2 sendme: add v0.25.0, v0.24.0 (#49775)
* sendme: add v0.25.0, v0.24.0

* fix version orders

* add minimum rust version
2025-03-29 19:12:34 -06:00
Matthieu Dorier
7970a04025 unqlite: fix dependency on C++ (#49774) 2025-03-29 10:10:59 -05:00
Adrien Bernede
22c38e5975 Update hip support for Cached CMake Packages (#49635)
* Clearly split old and new hip settings requirements
* Apply generic rocm handling to every project
* make default logic for hip support more robust
* GPU_TARGET is only necessary under certain project specific conditions, it should not be necessary in general
* Update logic to find amdclang++
2025-03-28 18:38:45 -07:00
Wouter Deconinck
3c4cb0d4f3 gsoap: depends on c, cxx (#49728) 2025-03-28 13:58:31 -06:00
Wouter Deconinck
61b9e8779b mlpack: depends on c (#49767)
* mlpack: depends on c

* mlpack: mv depends_on after variants
2025-03-28 13:53:56 -06:00
Wouter Deconinck
2a25e2b572 qt-*: depends on c (#49766)
* qt-5compat: depends on c

* qt-*: ensure all depend on C

* qt-base: mv depends_on after variants
2025-03-28 13:53:28 -06:00
Sebastian Pipping
7db5b1d5d6 expat: Add 2.7.1 (#49741) 2025-03-28 13:32:55 -06:00
Massimiliano Culpo
10f309273a Do not error when trying to convert corrupted compiler entries (#49759)
fixes #49717

If no compiler is listed in the 'packages' section of
the configuration, Spack will currently try to:
1. Look for a legacy compilers.yaml to convert
2. Look for compilers in PATH

in that order. If an entry in compilers.yaml is
corrupted, that should not result in an obscure
error.

Instead, it should just be skipped.

Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2025-03-28 13:14:12 -06:00
Matthias Krack
3e99a12ea2 libsmeagol: add dependency fortran (#49754) 2025-03-28 13:09:31 -06:00
Greg Becker
d5f5d48bb3 oneapi-compilers: allow detection of executables symlinked to prefix.bin (#49742)
* intel-oneapi-compilers: detect compilers when symlinked to prefix.bin

* update detection tests

--

Signed-off-by: Gregory Becker <becker33@llnl.gov>
2025-03-28 12:28:47 -06:00
Wouter Deconinck
6f2019ece9 root: add v6.34.06 (#49748) 2025-03-28 11:09:47 -05:00
Harmen Stoppels
35a84f02fa variant.py: reserved names is a set (#49762) 2025-03-28 08:17:04 -07:00
Wouter Deconinck
f6123ee160 onnx: depends on c (#49739) 2025-03-28 08:31:21 -06:00
Harmen Stoppels
4b02ecddf4 variant.py: use spack.error the normal way (#49763)
This module references `spack.error.Something` in the same file, which happens to
work but is incorrect. Use `spack.error` to prevent that in the future.
2025-03-28 14:12:45 +00:00
Richard Berger
bd39598e61 libcxi: disable ze, cuda, rocm autodetection (#49736)
Only enable these if explicitly requested.
2025-03-28 08:06:29 -06:00
Wouter Deconinck
df9cac172e cryptopp: depends on cxx (#49730) 2025-03-28 08:06:00 -06:00
Wouter Deconinck
511c2750c7 cppzmq: depends on c (#49729) 2025-03-28 08:00:23 -06:00
Jim Edwards
b7a81426b0 parallelio: add v2.6.4, v2.6.5 (#49607)
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2025-03-28 04:31:50 -06:00
Jim Edwards
654b294641 mpi-serial: add v2.5.2, v2.5.3 (#49606)
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2025-03-28 04:31:25 -06:00
Cameron Rutherford
c19a90b74a py-notebook: Add 6.5.6 and 6.5.7, fix py-traitlets incompatibility for 6.5.5 (#49666) 2025-03-28 10:30:52 +01:00
Fernando Ayats
03d70feb18 Add py-equinox (#49342)
Adds Equinox, a JAX library. I've added the latest version 0.11.2, and also 0.11.0
which is compatible with older JAX versions.

I've built both versions and loading an example from their page seems to work OK.

* Add py-wadler-lindig
* Add py-equinox
2025-03-28 10:27:04 +01:00
Vinícius
bb1216432a simgrid: add v4.0 (#49675)
Co-authored-by: viniciusvgp <viniciusvgp@users.noreply.github.com>
2025-03-28 10:24:55 +01:00
Wouter Deconinck
d27aab721a erlang: add v26.2.5.2, v27.0.1 (#45725) 2025-03-28 10:23:09 +01:00
Todd Gamblin
3444d40ae2 bugfix: pure cxx and fortran deps should display with a compiler. (#49752)
I noticed that `abseil-cpp` was showing in `spack find` with "no compiler", and the only
difference between it and other nodes was that it *only* depends on `cxx` -- others
depend on `c` as well.

It turns out that the `select()` method on `EdgeMap` only takes `Sequence[str]` and doesn't
check whether they're actually just one `str`.  So asking for, e.g., `cxx` is like asking for
`c` or `x` or `x`, as the `str` is treated like a sequence. This causes Spack to miss `cxx`
and `fortran` language virtuals in `DeprecatedCompilerSpec`.

Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
2025-03-28 10:18:18 +01:00
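The pitfall is easy to reproduce outside Spack. Below is a hypothetical sketch (names invented, not the real ``EdgeMap`` API) of how a ``str`` argument slips through a ``Sequence[str]`` annotation and gets treated as its characters, plus the ``isinstance`` special case that fixes it:

```python
from typing import List, Sequence, Tuple, Union

# Invented stand-in for dependency edges: (child package, language virtual)
EDGES: List[Tuple[str, str]] = [
    ("pkg-a", "c"),
    ("pkg-b", "cxx"),
    ("pkg-c", "fortran"),
]


def select_buggy(virtuals: Sequence[str]) -> List[Tuple[str, str]]:
    # "cxx" type-checks as Sequence[str], but set("cxx") == {"c", "x"}:
    # the query silently becomes "c or x", missing cxx and fortran.
    wanted = set(virtuals)
    return [edge for edge in EDGES if edge[1] in wanted]


def select_fixed(virtuals: Union[str, Sequence[str]]) -> List[Tuple[str, str]]:
    # Special-case a lone str before treating it as a sequence.
    if isinstance(virtuals, str):
        virtuals = (virtuals,)
    wanted = set(virtuals)
    return [edge for edge in EDGES if edge[1] in wanted]
```

Asking the buggy version for ``cxx`` wrongly returns the ``c`` edge only, which matches the "no compiler" symptom described above.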
Carlos Bederián
37f2683d17 hcoll: add new versions (#47923) 2025-03-28 10:17:46 +01:00
Harmen Stoppels
3c4f23f64a Fix import urllib.request (#49699) 2025-03-28 10:15:33 +01:00
jmuddnv
db8e56b0a5 NVIDIA HPC SDK: add v25.3 (#49694) 2025-03-28 10:14:21 +01:00
Alec Scott
ff6dfea9b9 difftastic: new package (#49708) 2025-03-28 10:06:27 +01:00
Adam J. Stewart
2f3ef790e2 py-keras: add v3.9.1 (#49726) 2025-03-28 01:48:06 -07:00
Richard Berger
01db307f41 libfabric: add missing xpmem dependency (#49569) 2025-03-28 09:41:36 +01:00
dithwick
d715b725fa openmpi: add ipv6 variant (#49727) 2025-03-28 09:26:56 +01:00
Todd Gamblin
52a995a95c UI: Color external packages differently from installed packages (#49751)
Currently, externals show up in `spack find` and `spack spec` install status as a green
`[e]`, which is hard to distinguish from the green [+] used for installed packages.

- [x] Make externals magenta instead, so they stand out.

Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
2025-03-28 00:37:56 -07:00
John W. Parent
b87c025cd3 Remove spack sh setup (#49747) 2025-03-27 16:40:39 -07:00
v
360eb4278c set compiler version for xsd package (#49702)
* set compiler version for xsd package

---------

Co-authored-by: novasoft <novasoft@fnal.gov>
2025-03-27 14:29:05 -07:00
Vicente Bolea
7b3fc7dee3 adios2: fix smoke package (#49647) 2025-03-27 16:10:42 -05:00
Dave Keeshan
f0acbe4310 yosys: add v0.51 (#49639)
* add: yosys v0.51

* Move license to within 6 lines of start, but truncated description to do that

* Moving license back, and reinstating description
2025-03-27 12:28:44 -06:00
Adam J. Stewart
f0c676d14a py-contrib: add new package (#49634) 2025-03-27 19:07:33 +01:00
Howard Pritchard
13dd198a09 Open MPI: add 5.0.7 (#49661)
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
2025-03-27 10:33:56 -07:00
Teague Sterling
3f6c66d701 libwnck: add v43.2 (#49655)
* libwnck: new package

Signed-off-by: Teague Sterling <teaguesterling@gmail.com>

* Update package.py

* Update package.py

* Update package.py

* Update package.py

* Update package.py

* Update package.py

add gettext and xres (testing)

* Update package.py

* Update package.py

* Update package.py

* Update package.py

* [@spackbot] updating style on behalf of teaguesterling

* Update package.py

* [@spackbot] updating style on behalf of teaguesterling

* Adding Cairo variant

Signed-off-by: Teague Sterling <teaguesterling@gmail.com>

* libwnck: add v43.2

Signed-off-by: Teague Sterling <teaguesterling@gmail.com>

---------

Signed-off-by: Teague Sterling <teaguesterling@gmail.com>
2025-03-27 10:32:32 -07:00
gregorweiss
dcd6e61f34 bigwhoop: new package (#49677)
* add package for BigWhoop lossy data compression

Co-authored-by: Patrick Vogler <patrick.vogler@hlrs.de>

* add version 0.2.0

* update url

* corrected checksum

---------

Co-authored-by: Patrick Vogler <patrick.vogler@hlrs.de>
2025-03-27 10:30:20 -07:00
Richard Berger
ac7b467897 libfabric: add v2.0.0, v2.1.0, and lnx fabric (#49549) 2025-03-27 10:22:15 -07:00
Harmen Stoppels
c5adb934c2 rocminfo: extends python (#49532) 2025-03-27 18:05:56 +01:00
eugeneswalker
1e70a8d6ad amr-wind+rocm depends_on rocrand and rocprim (#49703)
* amr-wind+rocm depends_on rocrand

* amr-wind +rocm: depend_on rocprim
2025-03-27 09:30:29 -07:00
Buldram
755a4054b2 zuo: add v1.12 (#49678) 2025-03-27 08:45:35 -07:00
Seth R. Johnson
040d747a86 g4tendl: avoid closing braket that breaks lmod (#49721) 2025-03-27 09:44:16 -06:00
Jon Rood
9440894173 hdf5: add version 1.14.6. (#49704) 2025-03-27 08:40:10 -07:00
Alex Tyler Chapman
4e42e3c2ec update gslib package to include version 1.0.9 (#49710) 2025-03-27 08:38:44 -07:00
Wouter Deconinck
662bf113e2 ensmallen: depends on c (#49719) 2025-03-27 08:36:24 -07:00
Wouter Deconinck
7e11fd62e2 geant4: add v11.3.1 (#49700) 2025-03-27 16:08:57 +01:00
Massimiliano Culpo
a6c22f2690 Update documentation after compiler as deps (#49715) 2025-03-27 11:19:35 +01:00
Massimiliano Culpo
4894668ece Remove "classic" Intel packages (#45188) 2025-03-27 10:13:33 +01:00
Todd Gamblin
199133fca4 bugfix: concretization shouldn't require concrete packages to be known (#49706)
Concretization setup was checking whether any input spec has a dependency
that's *not* in the set of possible dependencies for all roots in the solve.
There are two reasons to check this:

1. The user could be asking for a dependency that none of the roots has, or
2. The user could be asking for a dependency that doesn't exist.

For abstract roots, (2) implies (1), and the check makes sense.  For concrete
roots, we don't care, because the spec has already been built. If a `package.py`
no longer depends on something it did before, it doesn't matter -- it's already
built. If the dependency no longer exists, we also do not care -- we already
built it and there's an installation for it somewhere. 

When you concretize an environment with a lockfile, *many* of the input specs
are concrete, and we don't need to build them. If a package changes its
dependencies, or if a `package.py` is removed for a concrete input spec, that
shouldn't cause an already-built environment to fail concretization.

A user reported that this was happening with an error like:

```console
spack concretize
==> Error: Package chapel does not depend on py-protobuf@5.28.2/a4rf4glr2tntfwsz6myzwmlk5iu25t74
```

Or, with traceback:
```console
  File "/apps/other/spack-devel/lib/spack/spack/solver/asp.py", line 3014, in setup
    raise spack.spec.InvalidDependencyError(spec.name, missing_deps)
spack.spec.InvalidDependencyError: Package chapel does not depend on py-protobuf@5.28.2/a4rf4glr2tntfwsz6myzwmlk5iu25t74
```

Fix this by skipping the check for concrete input specs. We already ignore conflicts,
etc. for concrete/external specs, and we do not need metadata in the solve for
concrete dependencies b/c they're imposed by hash constraints.

- [x] Ignore the package existence check for concrete input specs.

Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
2025-03-26 23:58:12 +00:00
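A minimal sketch of the fix's logic, using invented stand-in classes (not Spack's real ``Spec``): the unknown-dependency check applies only to abstract roots, while concrete roots from a lockfile are skipped because they are imposed purely by hash:

```python
from dataclasses import dataclass, field
from typing import List, Set


class InvalidDependencyError(Exception):
    pass


@dataclass
class FakeSpec:
    """Hypothetical stand-in for an input spec, abstract or concrete."""
    name: str
    concrete: bool
    dependencies: List[str] = field(default_factory=list)


def check_input_specs(specs: List[FakeSpec], possible_deps: Set[str]) -> None:
    # Only abstract roots must name dependencies that are still possible;
    # concrete roots are already built, so stale metadata is harmless.
    for spec in specs:
        if spec.concrete:
            continue
        missing = [d for d in spec.dependencies if d not in possible_deps]
        if missing:
            raise InvalidDependencyError(
                f"Package {spec.name} does not depend on {missing}"
            )
```

With this skip in place, a locked concrete ``chapel`` that lists a now-unknown dependency no longer aborts concretization, while the same request on an abstract root still errors.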
Alec Scott
ea3a3b51a0 ci: Skip removed packages during automatic checksum verification (#49681)
* Skip packages removed for automatic checksum verification

* Unify finding modified or added packages with spack.repo logic

* Remove unused imports

* Fix unit-tests using shared modified function

* Update last remaining unit test to new format
2025-03-26 16:47:11 -07:00
Robert Maaskant
23bd3e6104 glab: add v1.54.0 and v1.55.0 (#49689)
* glab: add v1.54.0 and v1.55.0

* glab: uniform go deps

Co-authored-by: Alec Scott <hi@alecbcs.com>

---------

Co-authored-by: Alec Scott <hi@alecbcs.com>
2025-03-26 14:48:18 -06:00
Robert Maaskant
c72477e67a gtk-doc: use download.gnome.org for downloads (#49660) 2025-03-26 12:03:41 -06:00
Todd Gamblin
2d2a4d1908 bugfix: display old-style compilers without @= in spack find output (#49693)
* bugfix: display old-style compilers without `@=` in `spack find` output

Fix `display_str` attribute of `DeprecatedCompiler` so that `spack find` displays
compilers without `@=` for the version (as expected).

- [x] Use `spec.format("{@version}")` to do this

before:
```
> spack find
-- darwin-sequoia-m1 / apple-clang@=16.0.0 -----------------------
abseil-cpp@20240722.0               py-graphviz@0.13.2
apple-libuuid@1353.100.2            py-hatch-fancy-pypi-readme@23.1.0
```

after:
```
> spack find
-- darwin-sequoia-m1 / apple-clang@16.0.0 -----------------------
abseil-cpp@20240722.0               py-graphviz@0.13.2
apple-libuuid@1353.100.2            py-hatch-fancy-pypi-readme@23.1.0
```

Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>

* [@spackbot] updating style on behalf of tgamblin

---------

Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
2025-03-26 10:34:05 -07:00
Afzal Patel
2cd773aea4 py-jaxlib: add spack-built ROCm support (#49611)
* py-jaxlib: add spack-built ROCm support

* fix style

* py-jaxlib 0.4.38 rocm support

* py-jaxlib 0.4.38 rocm support

* add comgr dependency

* changes for ROCm external and enable till 0.4.38

* enable version of py-jax

* add jax+rocm to ci

* add conflict for cuda and remove py-jaxlib from aarch64 pipeline

* Update var/spack/repos/builtin/packages/py-jaxlib/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* add conflict for aarch64

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2025-03-26 09:23:52 -06:00
Mikael Simberg
145b0667cc asio: add 1.34.0 (#49529) 2025-03-26 14:01:01 +01:00
1265 changed files with 5211 additions and 7075 deletions

@@ -59,7 +59,6 @@ jobs:
- name: Package audits (without coverage)
if: ${{ runner.os == 'Windows' }}
run: |
. share/spack/setup-env.sh
spack -d audit packages
./share/spack/qa/validate_last_exit.ps1
spack -d audit configs

@@ -1,6 +1,6 @@
black==25.1.0
clingo==5.7.1
flake8==7.1.2
flake8==7.2.0
isort==6.0.1
mypy==1.15.0
types-six==1.17.0.20250304

@@ -19,9 +19,6 @@ jobs:
on_develop:
- ${{ github.ref == 'refs/heads/develop' }}
include:
- python-version: '3.6'
os: ubuntu-20.04
on_develop: ${{ github.ref == 'refs/heads/develop' }}
- python-version: '3.7'
os: ubuntu-22.04
on_develop: ${{ github.ref == 'refs/heads/develop' }}

@@ -20,3 +20,6 @@ packages:
cxx: [msvc]
mpi: [msmpi]
gl: [wgl]
mpi:
require:
- one_of: [msmpi]

@@ -1409,27 +1409,29 @@ that executables will run without the need to set ``LD_LIBRARY_PATH``.
.. code-block:: yaml
compilers:
- compiler:
spec: gcc@4.9.3
paths:
cc: /opt/gcc/bin/gcc
c++: /opt/gcc/bin/g++
f77: /opt/gcc/bin/gfortran
fc: /opt/gcc/bin/gfortran
environment:
unset:
- BAD_VARIABLE
set:
GOOD_VARIABLE_NUM: 1
GOOD_VARIABLE_STR: good
prepend_path:
PATH: /path/to/binutils
append_path:
LD_LIBRARY_PATH: /opt/gcc/lib
extra_rpaths:
- /path/to/some/compiler/runtime/directory
- /path/to/some/other/compiler/runtime/directory
packages:
gcc:
externals:
- spec: gcc@4.9.3
prefix: /opt/gcc
extra_attributes:
compilers:
c: /opt/gcc/bin/gcc
cxx: /opt/gcc/bin/g++
fortran: /opt/gcc/bin/gfortran
environment:
unset:
- BAD_VARIABLE
set:
GOOD_VARIABLE_NUM: 1
GOOD_VARIABLE_STR: good
prepend_path:
PATH: /path/to/binutils
append_path:
LD_LIBRARY_PATH: /opt/gcc/lib
extra_rpaths:
- /path/to/some/compiler/runtime/directory
- /path/to/some/other/compiler/runtime/directory
^^^^^^^^^^^^^^^^^^^^^^^

@@ -63,7 +63,6 @@ on these ideas for each distinct build system that Spack supports:
build_systems/cudapackage
build_systems/custompackage
build_systems/inteloneapipackage
build_systems/intelpackage
build_systems/rocmpackage
build_systems/sourceforgepackage

@@ -33,9 +33,6 @@ For more information on a specific package, do::
spack info --all <package-name>
Intel no longer releases new versions of Parallel Studio, which can be
used in Spack via the :ref:`intelpackage`. All of its components can
now be found in oneAPI.
Examples
========
@@ -50,34 +47,8 @@ Install the oneAPI compilers::
spack install intel-oneapi-compilers
Add the compilers to your ``compilers.yaml`` so spack can use them::
spack compiler add `spack location -i intel-oneapi-compilers`/compiler/latest/bin
Verify that the compilers are available::
spack compiler list
Note that 2024 and later releases do not include ``icc``. Before 2024,
the package layout was different::
spack compiler add `spack location -i intel-oneapi-compilers`/compiler/latest/linux/bin/intel64
spack compiler add `spack location -i intel-oneapi-compilers`/compiler/latest/linux/bin
The ``intel-oneapi-compilers`` package includes 2 families of
compilers:
* ``intel``: ``icc``, ``icpc``, ``ifort``. Intel's *classic*
compilers. 2024 and later releases contain ``ifort``, but not
``icc`` and ``icpc``.
* ``oneapi``: ``icx``, ``icpx``, ``ifx``. Intel's new generation of
compilers based on LLVM.
To build the ``patchelf`` Spack package with ``icc``, do::
spack install patchelf%intel
To build the ``patchelf`` Spack package with ``icx``, do::
spack install patchelf%oneapi
@@ -92,15 +63,6 @@ Install the oneAPI compilers::
spack install intel-oneapi-compilers
Add the compilers to your ``compilers.yaml`` so Spack can use them::
spack compiler add `spack location -i intel-oneapi-compilers`/compiler/latest/bin
Verify that the compilers are available::
spack compiler list
Clone `spack-configs <https://github.com/spack/spack-configs>`_ repo and activate Intel oneAPI CPU environment::
git clone https://github.com/spack/spack-configs
@@ -149,7 +111,7 @@ Compilers
---------
To use the compilers, add some information about the installation to
``compilers.yaml``. For most users, it is sufficient to do::
``packages.yaml``. For most users, it is sufficient to do::
spack compiler add /opt/intel/oneapi/compiler/latest/bin
@@ -157,7 +119,7 @@ Adapt the paths above if you did not install the tools in the default
location. After adding the compilers, using them is the same
as if you had installed the ``intel-oneapi-compilers`` package.
Another option is to manually add the configuration to
``compilers.yaml`` as described in :ref:`Compiler configuration
``packages.yaml`` as described in :ref:`Compiler configuration
<compiler-config>`.
Before 2024, the directory structure was different::
@@ -200,15 +162,5 @@ You can also use Spack-installed libraries. For example::
Will update your environment CPATH, LIBRARY_PATH, and other
environment variables for building an application with oneMKL.
More information
================
This section describes basic use of oneAPI, especially if it has
changed compared to Parallel Studio. See :ref:`intelpackage` for more
information on :ref:`intel-virtual-packages`,
:ref:`intel-unrelated-packages`,
:ref:`intel-integrating-external-libraries`, and
:ref:`using-mkl-tips`.
.. _`Intel installers`: https://software.intel.com/content/www/us/en/develop/documentation/installation-guide-for-intel-oneapi-toolkits-linux/top.html

File diff suppressed because it is too large

@@ -12,8 +12,7 @@ The ``ROCmPackage`` is not a build system but a helper package. Like ``CudaPacka
it provides standard variants, dependencies, and conflicts to facilitate building
packages using GPUs, though for AMD in this case.
You can find the source for this package (and suggestions for setting up your
``compilers.yaml`` and ``packages.yaml`` files) at
You can find the source for this package (and suggestions for setting up your ``packages.yaml`` file) at
`<https://github.com/spack/spack/blob/develop/lib/spack/spack/build_systems/rocm.py>`__.
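As a sketch of how these helper variants are typically used on the command line (the package and GPU target below are only examples; substitute your own):

.. code-block:: console

   $ spack install hypre +rocm amdgpu_target=gfx90a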
^^^^^^^^


@@ -148,15 +148,16 @@ this can expose you to attacks. Use at your own risk.
``ssl_certs``
--------------------
Path to custom certificates for SSL verification. The value can be a
filesystem path, or an environment variable that expands to an absolute file path.
The default value is set to the environment variable ``SSL_CERT_FILE``
to use the same syntax used by many other applications that automatically
detect custom certificates.
When ``url_fetch_method:curl`` the ``config:ssl_certs`` should resolve to
a single file. Spack will then set the environment variable ``CURL_CA_BUNDLE``
in the subprocess calling ``curl``.
in the subprocess calling ``curl``. If additional ``curl`` arguments are required,
they can be set in the config, e.g. ``url_fetch_method:'curl -k -q'``.
If ``url_fetch_method:urllib`` then files and directories are supported i.e.
``config:ssl_certs:$SSL_CERT_FILE`` or ``config:ssl_certs:$SSL_CERT_DIR``
will work.
In all cases the expanded path must be absolute for Spack to use the certificates.
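Putting the pieces together, a ``config.yaml`` fragment using these settings could look like this sketch (the values shown are illustrative):

.. code-block:: yaml

   config:
     url_fetch_method: urllib
     ssl_certs: $SSL_CERT_DIR   # expands to an absolute directory path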


@@ -11,7 +11,7 @@ Configuration Files
Spack has many configuration files. Here is a quick list of them, in
case you want to skip directly to specific docs:
* :ref:`compilers.yaml <compiler-config>`
* :ref:`packages.yaml <compiler-config>`
* :ref:`concretizer.yaml <concretizer-options>`
* :ref:`config.yaml <config-yaml>`
* :ref:`include.yaml <include-yaml>`
@@ -95,7 +95,7 @@ are six configuration scopes. From lowest to highest:
precedence over all other scopes.
Each configuration directory may contain several configuration files,
such as ``config.yaml``, ``compilers.yaml``, or ``mirrors.yaml``. When
such as ``config.yaml``, ``packages.yaml``, or ``mirrors.yaml``. When
configurations conflict, settings from higher-precedence scopes override
lower-precedence settings.
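For example, a setting in the user scope overrides the same setting from a lower-precedence scope (the setting shown here is just an illustration):

.. code-block:: yaml

   # ~/.spack/config.yaml (user scope) overrides, e.g., /etc/spack/config.yaml
   config:
     build_jobs: 4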


@@ -667,11 +667,11 @@ a ``packages.yaml`` file) could contain:
   # ...
   packages:
     all:
       compiler: [intel]
       providers:
         mpi: [openmpi]
   # ...
This configuration sets the default compiler for all packages to
``intel``.
This configuration sets the default MPI provider to ``openmpi``.
^^^^^^^^^^^^^^^^^^^^^^^
Included configurations
@@ -686,7 +686,8 @@ the environment.
   spack:
     include:
     - environment/relative/path/to/config.yaml
     - https://github.com/path/to/raw/config/compilers.yaml
     - path: https://github.com/path/to/raw/config/compilers.yaml
       sha256: 26e871804a92cd07bb3d611b31b4156ae93d35b6a6d6e0ef3a67871fcb1d258b
     - /absolute/path/to/packages.yaml
     - path: /path/to/$os/$target/environment
       optional: true
@@ -700,11 +701,11 @@ with the ``optional`` clause and conditional with the ``when`` clause. (See
Files are listed using paths to individual files or directories containing them.
Path entries may be absolute or relative to the environment or specified as
URLs. URLs to individual files need link to the **raw** form of the file's
URLs. URLs to individual files must link to the **raw** form of the file's
contents (e.g., `GitHub
<https://docs.github.com/en/repositories/working-with-files/using-files/viewing-and-understanding-files#viewing-or-copying-the-raw-file-content>`_
or `GitLab
<https://docs.gitlab.com/ee/api/repository_files.html#get-raw-file-from-repository>`_).
<https://docs.gitlab.com/ee/api/repository_files.html#get-raw-file-from-repository>`_) **and** include a valid sha256 for the file.
Only the ``file``, ``ftp``, ``http`` and ``https`` protocols (or schemes) are
supported. Spack-specific, environment and user path variables can be used.
(See :ref:`config-file-variables` for more information.)
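For illustration, an ``include`` list combining a path variable, a remote URL, and an optional entry might look like the following sketch (the URL, hash, and paths are placeholders, not real locations):

.. code-block:: yaml

   spack:
     include:
     - $spack/../shared/packages.yaml           # placeholder path
     - path: https://example.com/raw/config.yaml  # placeholder URL
       sha256: 0000000000000000000000000000000000000000000000000000000000000000
     - path: /opt/site/$os/config               # placeholder path
       optional: true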


@@ -1,161 +0,0 @@
spack:
  definitions:
  - compiler-pkgs:
    - 'llvm+clang@6.0.1 os=centos7'
    - 'gcc@6.5.0 os=centos7'
    - 'llvm+clang@6.0.1 os=ubuntu18.04'
    - 'gcc@6.5.0 os=ubuntu18.04'
  - pkgs:
    - readline@7.0
    # - xsdk@0.4.0
  - compilers:
    - '%gcc@5.5.0'
    - '%gcc@6.5.0'
    - '%gcc@7.3.0'
    - '%clang@6.0.0'
    - '%clang@6.0.1'
  - oses:
    - os=ubuntu18.04
    - os=centos7
  specs:
  - matrix:
    - [$pkgs]
    - [$compilers]
    - [$oses]
    exclude:
    - '%gcc@7.3.0 os=centos7'
    - '%gcc@5.5.0 os=ubuntu18.04'
  mirrors:
    cloud_gitlab: https://mirror.spack.io
  compilers:
  # The .gitlab-ci.yml for this project picks a Docker container which does
  # not have any compilers pre-built and ready to use, so we need to fake the
  # existence of those here.
  - compiler:
      operating_system: centos7
      modules: []
      paths:
        cc: /not/used
        cxx: /not/used
        f77: /not/used
        fc: /not/used
      spec: gcc@5.5.0
      target: x86_64
  - compiler:
      operating_system: centos7
      modules: []
      paths:
        cc: /not/used
        cxx: /not/used
        f77: /not/used
        fc: /not/used
      spec: gcc@6.5.0
      target: x86_64
  - compiler:
      operating_system: centos7
      modules: []
      paths:
        cc: /not/used
        cxx: /not/used
        f77: /not/used
        fc: /not/used
      spec: clang@6.0.0
      target: x86_64
  - compiler:
      operating_system: centos7
      modules: []
      paths:
        cc: /not/used
        cxx: /not/used
        f77: /not/used
        fc: /not/used
      spec: clang@6.0.1
      target: x86_64
  - compiler:
      operating_system: ubuntu18.04
      modules: []
      paths:
        cc: /not/used
        cxx: /not/used
        f77: /not/used
        fc: /not/used
      spec: clang@6.0.0
      target: x86_64
  - compiler:
      operating_system: ubuntu18.04
      modules: []
      paths:
        cc: /not/used
        cxx: /not/used
        f77: /not/used
        fc: /not/used
      spec: clang@6.0.1
      target: x86_64
  - compiler:
      operating_system: ubuntu18.04
      modules: []
      paths:
        cc: /not/used
        cxx: /not/used
        f77: /not/used
        fc: /not/used
      spec: gcc@6.5.0
      target: x86_64
  - compiler:
      operating_system: ubuntu18.04
      modules: []
      paths:
        cc: /not/used
        cxx: /not/used
        f77: /not/used
        fc: /not/used
      spec: gcc@7.3.0
      target: x86_64
  gitlab-ci:
    bootstrap:
    - name: compiler-pkgs
      compiler-agnostic: true
    mappings:
    - # spack-cloud-ubuntu
      match:
      # these are specs, if *any* match the spec under consideration, this
      # 'mapping' will be used to generate the CI job
      - os=ubuntu18.04
      runner-attributes:
        # 'tags' and 'image' go directly onto the job, 'variables' will
        # be added to what we already necessarily create for the job as
        # a part of the CI workflow
        tags:
        - spack-k8s
        image:
          name: scottwittenburg/spack_builder_ubuntu_18.04
          entrypoint: [""]
    - # spack-cloud-centos
      match:
      # these are specs, if *any* match the spec under consideration, this
      # 'mapping' will be used to generate the CI job
      - 'os=centos7'
      runner-attributes:
        tags:
        - spack-k8s
        image:
          name: scottwittenburg/spack_builder_centos_7
          entrypoint: [""]
  cdash:
    build-group: Release Testing
    url: http://cdash
    project: Spack Testing
    site: Spack Docker-Compose Workflow
  repos: []
  upstreams: {}
  modules:
    enable: []
  packages: {}
  config: {}


@@ -254,12 +254,11 @@ directory.
Compiler configuration
----------------------
Spack has the ability to build packages with multiple compilers and
compiler versions. Compilers can be made available to Spack by
specifying them manually in ``compilers.yaml`` or ``packages.yaml``,
or automatically by running ``spack compiler find``, but for
convenience Spack will automatically detect compilers the first time
it needs them.
Spack has the ability to build packages with multiple compilers and compiler versions.
Compilers can be made available to Spack by specifying them manually in ``packages.yaml``,
or automatically by running ``spack compiler find``.
For convenience, Spack will automatically detect compilers the first time it needs them,
if none are available.
.. _cmd-spack-compilers:
@@ -274,16 +273,11 @@ compilers`` or ``spack compiler list``:
$ spack compilers
==> Available compilers
-- gcc ---------------------------------------------------------
gcc@4.9.0 gcc@4.8.0 gcc@4.7.0 gcc@4.6.2 gcc@4.4.7
gcc@4.8.2 gcc@4.7.1 gcc@4.6.3 gcc@4.6.1 gcc@4.1.2
-- intel -------------------------------------------------------
intel@15.0.0 intel@14.0.0 intel@13.0.0 intel@12.1.0 intel@10.0
intel@14.0.3 intel@13.1.1 intel@12.1.5 intel@12.0.4 intel@9.1
intel@14.0.2 intel@13.1.0 intel@12.1.3 intel@11.1
intel@14.0.1 intel@13.0.1 intel@12.1.2 intel@10.1
-- clang -------------------------------------------------------
clang@3.4 clang@3.3 clang@3.2 clang@3.1
-- gcc ubuntu20.04-x86_64 ---------------------------------------
gcc@9.4.0 gcc@8.4.0 gcc@10.5.0
-- llvm ubuntu20.04-x86_64 --------------------------------------
llvm@12.0.0 llvm@11.0.0 llvm@10.0.0
Any of these compilers can be used to build Spack packages. More on
how this is done is in :ref:`sec-specs`.
@@ -302,16 +296,22 @@ An alias for ``spack compiler find``.
``spack compiler find``
^^^^^^^^^^^^^^^^^^^^^^^
Lists the compilers currently available to Spack. If you do not see
a compiler in this list, but you want to use it with Spack, you can
simply run ``spack compiler find`` with the path to where the
compiler is installed. For example:
If you do not see a compiler in the list shown by:
.. code-block:: console
$ spack compiler find /usr/local/tools/ic-13.0.079
==> Added 1 new compiler to ~/.spack/linux/compilers.yaml
intel@13.0.079
$ spack compiler list
but you want to use it with Spack, you can simply run ``spack compiler find`` with the
path to where the compiler is installed. For example:
.. code-block:: console
$ spack compiler find /opt/intel/oneapi/compiler/2025.1/bin/
==> Added 1 new compiler to /home/user/.spack/packages.yaml
intel-oneapi-compilers@2025.1.0
==> Compilers are defined in the following files:
/home/user/.spack/packages.yaml
Or you can run ``spack compiler find`` with no arguments to force
auto-detection. This is useful if you do not know where compilers are
@@ -322,7 +322,7 @@ installed, but you know that new compilers have been added to your
$ module load gcc/4.9.0
$ spack compiler find
==> Added 1 new compiler to ~/.spack/linux/compilers.yaml
==> Added 1 new compiler to /home/user/.spack/packages.yaml
gcc@4.9.0
This loads the environment module for gcc-4.9.0 to add it to
@@ -331,7 +331,7 @@ This loads the environment module for gcc-4.9.0 to add it to
.. note::
By default, spack does not fill in the ``modules:`` field in the
``compilers.yaml`` file. If you are using a compiler from a
``packages.yaml`` file. If you are using a compiler from a
module, then you should add this field manually.
See the section on :ref:`compilers-requiring-modules`.
@@ -341,91 +341,82 @@ This loads the environment module for gcc-4.9.0 to add it to
``spack compiler info``
^^^^^^^^^^^^^^^^^^^^^^^
If you want to see specifics on a particular compiler, you can run
``spack compiler info`` on it:
If you want to see additional information about a specific compiler, you can run ``spack compiler info`` on it:
.. code-block:: console
   $ spack compiler info intel@15
   intel@15.0.0:
       paths:
           cc = /usr/local/bin/icc-15.0.090
           cxx = /usr/local/bin/icpc-15.0.090
           f77 = /usr/local/bin/ifort-15.0.090
           fc = /usr/local/bin/ifort-15.0.090
       modules = []
       operating_system = centos6
       ...
   $ spack compiler info gcc
   gcc@=8.4.0 languages='c,c++,fortran' arch=linux-ubuntu20.04-x86_64:
     prefix: /usr
     compilers:
       c: /usr/bin/gcc-8
       cxx: /usr/bin/g++-8
       fortran: /usr/bin/gfortran-8
This shows which C, C++, and Fortran compilers were detected by Spack.
Notice also that we didn't have to be too specific about the
version. We just said ``intel@15``, and information about the only
matching Intel compiler was displayed.
   gcc@=9.4.0 languages='c,c++,fortran' arch=linux-ubuntu20.04-x86_64:
     prefix: /usr
     compilers:
       c: /usr/bin/gcc
       cxx: /usr/bin/g++
       fortran: /usr/bin/gfortran

   gcc@=10.5.0 languages='c,c++,fortran' arch=linux-ubuntu20.04-x86_64:
     prefix: /usr
     compilers:
       c: /usr/bin/gcc-10
       cxx: /usr/bin/g++-10
       fortran: /usr/bin/gfortran-10
This shows the details of the compilers that were detected by Spack.
Notice also that we didn't have to be too specific about the version. We just said ``gcc``, and we got information
about all the matching compilers.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Manual compiler configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If auto-detection fails, you can manually configure a compiler by
editing your ``~/.spack/<platform>/compilers.yaml`` file. You can do this by running
``spack config edit compilers``, which will open the file in
If auto-detection fails, you can manually configure a compiler by editing your ``~/.spack/packages.yaml`` file.
You can do this by running ``spack config edit packages``, which will open the file in
:ref:`your favorite editor <controlling-the-editor>`.
Each compiler configuration in the file looks like this:
Each compiler has an "external" entry in the file with some ``extra_attributes``:
.. code-block:: yaml
   compilers:
   - compiler:
       modules: []
       operating_system: centos6
       paths:
         cc: /usr/local/bin/icc-15.0.024-beta
         cxx: /usr/local/bin/icpc-15.0.024-beta
         f77: /usr/local/bin/ifort-15.0.024-beta
         fc: /usr/local/bin/ifort-15.0.024-beta
       spec: intel@15.0.0

   packages:
     gcc:
       externals:
       - spec: gcc@10.5.0 languages='c,c++,fortran'
         prefix: /usr
         extra_attributes:
           compilers:
             c: /usr/bin/gcc-10
             cxx: /usr/bin/g++-10
             fortran: /usr/bin/gfortran-10
For compilers that do not support Fortran (like ``clang``), put
``None`` for ``f77`` and ``fc``:
The compiler executables are listed under ``extra_attributes:compilers``, and are keyed by language.
Once you save the file, the configured compilers will show up in the list displayed by ``spack compilers``.
.. code-block:: yaml
   compilers:
   - compiler:
       modules: []
       operating_system: centos6
       paths:
         cc: /usr/bin/clang
         cxx: /usr/bin/clang++
         f77: None
         fc: None
       spec: clang@3.3svn
Once you save the file, the configured compilers will show up in the
list displayed by ``spack compilers``.
You can also add compiler flags to manually configured compilers. These flags should be specified in the
``flags`` section of the compiler specification. The valid flags are ``cflags``, ``cxxflags``, ``fflags``,
``cppflags``, ``ldflags``, and ``ldlibs``. For example:
.. code-block:: yaml
   compilers:
   - compiler:
       modules: []
       operating_system: centos6
       paths:
         cc: /usr/bin/gcc
         cxx: /usr/bin/g++
         f77: /usr/bin/gfortran
         fc: /usr/bin/gfortran
       flags:
         cflags: -O3 -fPIC
         cxxflags: -O3 -fPIC
         cppflags: -O3 -fPIC
       spec: gcc@4.7.2
   packages:
     gcc:
       externals:
       - spec: gcc@10.5.0 languages='c,c++,fortran'
         prefix: /usr
         extra_attributes:
           compilers:
             c: /usr/bin/gcc-10
             cxx: /usr/bin/g++-10
             fortran: /usr/bin/gfortran-10
           flags:
             cflags: -O3 -fPIC
             cxxflags: -O3 -fPIC
             cppflags: -O3 -fPIC
These flags will be treated by spack as if they were entered from
the command line each time this compiler is used. The compiler wrappers
@@ -440,95 +431,44 @@ These variables should be specified in the ``environment`` section of the compil
specification. The operations available to modify the environment are ``set``, ``unset``,
``prepend_path``, ``append_path``, and ``remove_path``. For example:
.. code-block:: yaml
   compilers:
   - compiler:
       modules: []
       operating_system: centos6
       paths:
         cc: /opt/intel/oneapi/compiler/latest/linux/bin/icx
         cxx: /opt/intel/oneapi/compiler/latest/linux/bin/icpx
         f77: /opt/intel/oneapi/compiler/latest/linux/bin/ifx
         fc: /opt/intel/oneapi/compiler/latest/linux/bin/ifx
       spec: oneapi@latest
       environment:
         set:
           MKL_ROOT: "/path/to/mkl/root"
         unset: # A list of environment variables to unset
         - CC
         prepend_path: # Similar for append|remove_path
           LD_LIBRARY_PATH: /ld/paths/added/by/setvars/sh
.. note::
Spack is in the process of moving compilers from a separate
attribute to be handled like all other packages. As part of this
process, the ``compilers.yaml`` section will eventually be replaced
by configuration in the ``packages.yaml`` section. This new
configuration is now available, although it is not yet the default
behavior.
Compilers can also be configured as external packages in the
``packages.yaml`` config file. Any external package for a compiler
(e.g. ``gcc`` or ``llvm``) will be treated as a configured compiler
assuming the paths to the compiler executables are determinable from
the prefix.
If the paths to the compiler executable are not determinable from the
prefix, you can add them to the ``extra_attributes`` field. Similarly,
all other fields from the compilers config can be added to the
``extra_attributes`` field for an external representing a compiler.
Note that the format for the ``paths`` field in the
``extra_attributes`` section is different than in the ``compilers``
config. For compilers configured as external packages, the section is
named ``compilers`` and the dictionary maps language names (``c``,
``cxx``, ``fortran``) to paths, rather than using the names ``cc``,
``fc``, and ``f77``.
.. code-block:: yaml
   packages:
     gcc:
       external:
       - spec: gcc@12.2.0 arch=linux-rhel8-skylake
         prefix: /usr
         extra_attributes:
           environment:
             set:
               GCC_ROOT: /usr
     llvm:
       external:
       - spec: llvm+clang@15.0.0 arch=linux-rhel8-skylake
         prefix: /usr
         extra_attributes:
           compilers:
             c: /usr/bin/clang-with-suffix
             cxx: /usr/bin/clang++-with-extra-info
             fortran: /usr/bin/gfortran
           extra_rpaths:
           - /usr/lib/llvm/
     intel-oneapi-compilers:
       externals:
       - spec: intel-oneapi-compilers@2025.1.0
         prefix: /opt/intel/oneapi
         extra_attributes:
           compilers:
             c: /opt/intel/oneapi/compiler/2025.1/bin/icx
             cxx: /opt/intel/oneapi/compiler/2025.1/bin/icpx
             fortran: /opt/intel/oneapi/compiler/2025.1/bin/ifx
           environment:
             set:
               MKL_ROOT: "/path/to/mkl/root"
             unset: # A list of environment variables to unset
             - CC
             prepend_path: # Similar for append|remove_path
               LD_LIBRARY_PATH: /ld/paths/added/by/setvars/sh
^^^^^^^^^^^^^^^^^^^^^^^
Build Your Own Compiler
^^^^^^^^^^^^^^^^^^^^^^^
If you are particular about which compiler/version you use, you might
wish to have Spack build it for you. For example:
If you are particular about which compiler/version you use, you might wish to have Spack build it for you.
For example:
.. code-block:: console
$ spack install gcc@4.9.3
$ spack install gcc@14+binutils
Once that has finished, you will need to add it to your
``compilers.yaml`` file. You can then set Spack to use it by default
by adding the following to your ``packages.yaml`` file:
Once the compiler is installed, you can start using it without additional configuration:
.. code-block:: yaml
.. code-block:: console
   packages:
     all:
       compiler: [gcc@4.9.3]
$ spack install hdf5~mpi %gcc@14
The same holds true for compilers that are made available from buildcaches, when reusing them is allowed.
.. _compilers-requiring-modules:
@@ -536,30 +476,26 @@ by adding the following to your ``packages.yaml`` file:
Compilers Requiring Modules
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Many installed compilers will work regardless of the environment they
are called with. However, some installed compilers require
``$LD_LIBRARY_PATH`` or other environment variables to be set in order
to run; this is typical for Intel and other proprietary compilers.
Many installed compilers will work regardless of the environment they are called with.
However, some installed compilers require environment variables to be set in order to run;
this is typical for Intel and other proprietary compilers.
In such a case, you should tell Spack which module(s) to load in order
to run the chosen compiler (If the compiler does not come with a
module file, you might consider making one by hand). Spack will load
this module into the environment ONLY when the compiler is run, and
NOT in general for a package's ``install()`` method. See, for
example, this ``compilers.yaml`` file:
On typical HPC clusters, these environment modifications are usually delegated to some "module" system.
In such a case, you should tell Spack which module(s) to load in order to run the chosen compiler:
.. code-block:: yaml
   compilers:
   - compiler:
       modules: [other/comp/gcc-5.3-sp3]
       operating_system: SuSE11
       paths:
         cc: /usr/local/other/SLES11.3/gcc/5.3.0/bin/gcc
         cxx: /usr/local/other/SLES11.3/gcc/5.3.0/bin/g++
         f77: /usr/local/other/SLES11.3/gcc/5.3.0/bin/gfortran
         fc: /usr/local/other/SLES11.3/gcc/5.3.0/bin/gfortran
       spec: gcc@5.3.0

   packages:
     gcc:
       externals:
       - spec: gcc@10.5.0 languages='c,c++,fortran'
         prefix: /opt/compilers
         extra_attributes:
           compilers:
             c: /opt/compilers/bin/gcc-10
             cxx: /opt/compilers/bin/g++-10
             fortran: /opt/compilers/bin/gfortran-10
         modules: [gcc/10.5.0]
Some compilers require special environment settings to be loaded not just
to run, but also to execute the code they build, breaking packages that
@@ -580,7 +516,7 @@ Licensed Compilers
^^^^^^^^^^^^^^^^^^
Some proprietary compilers require licensing to use. If you need to
use a licensed compiler (eg, PGI), the process is similar to a mix of
use a licensed compiler, the process is similar to a mix of
build your own, plus modules:
#. Create a Spack package (if it doesn't exist already) to install
@@ -590,24 +526,21 @@ build your own, plus modules:
using Spack to load the module it just created, and running simple
builds (eg: ``cc helloWorld.c && ./a.out``)
#. Add the newly-installed compiler to ``compilers.yaml`` as shown
above.
#. Add the newly-installed compiler to ``packages.yaml`` as shown above.
.. _mixed-toolchains:
^^^^^^^^^^^^^^^^
Mixed Toolchains
^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^^^^
Fortran compilers on macOS
^^^^^^^^^^^^^^^^^^^^^^^^^^
Modern compilers typically come with related compilers for C, C++ and
Fortran bundled together. When possible, results are best if the same
compiler is used for all languages.
In some cases, this is not possible. For example, starting with macOS El
Capitan (10.11), many packages no longer build with GCC, but XCode
provides no Fortran compilers. The user is therefore forced to use a
mixed toolchain: XCode-provided Clang for C/C++ and GNU ``gfortran`` for
Fortran.
In some cases, this is not possible. For example, XCode on macOS provides no Fortran compilers.
The user is therefore forced to use a mixed toolchain: XCode-provided Clang for C/C++ and e.g.
GNU ``gfortran`` for Fortran.
#. You need to make sure that Xcode is installed. Run the following command:
@@ -660,45 +593,25 @@ Fortran.
Note: the flag is ``-license``, not ``--license``.
#. Run ``spack compiler find`` to locate Clang.
#. There are different ways to get ``gfortran`` on macOS. For example, you can
install GCC with Spack (``spack install gcc``), with Homebrew (``brew install
gcc``), or from a `DMG installer
<https://github.com/fxcoudert/gfortran-for-macOS/releases>`_.
#. The only thing left to do is to edit ``~/.spack/darwin/compilers.yaml`` to provide
the path to ``gfortran``:
#. Run ``spack compiler find`` to locate both Apple-Clang and GCC.
.. code-block:: yaml
   compilers:
   - compiler:
       # ...
       paths:
         cc: /usr/bin/clang
         cxx: /usr/bin/clang++
         f77: /path/to/bin/gfortran
         fc: /path/to/bin/gfortran
       spec: apple-clang@11.0.0
If you used Spack to install GCC, you can get the installation prefix by
``spack location -i gcc`` (this will only work if you have a single version
of GCC installed). Whereas for Homebrew, GCC is installed in
``/usr/local/Cellar/gcc/x.y.z``. With the DMG installer, the correct path
will be ``/usr/local/gfortran``.
Since languages in Spack are modeled as virtual packages, ``apple-clang`` will be used to provide
C and C++, while GCC will be used for Fortran.
^^^^^^^^^^^^^^^^^^^^^
Compiler Verification
^^^^^^^^^^^^^^^^^^^^^
You can verify that your compilers are configured properly by installing a
simple package. For example:
You can verify that your compilers are configured properly by installing a simple package. For example:
.. code-block:: console
$ spack install zlib%gcc@5.3.0
$ spack install zlib-ng%gcc@5.3.0
.. _vendor-specific-compiler-configuration:
@@ -707,9 +620,7 @@ simple package. For example:
Vendor-Specific Compiler Configuration
--------------------------------------
With Spack, things usually "just work" with GCC. Not so for other
compilers. This section provides details on how to get specific
compilers working.
This section provides details on how to get vendor-specific compilers working.
^^^^^^^^^^^^^^^
Intel Compilers
@@ -731,8 +642,8 @@ compilers:
you have installed from the ``PATH`` environment variable.
If you want use a version of ``gcc`` or ``g++`` other than the default
version on your system, you need to use either the ``-gcc-name``
or ``-gxx-name`` compiler option to specify the path to the version of
version on your system, you need to use either the ``--gcc-install-dir``
or ``--gcc-toolchain`` compiler option to specify the path to the version of
``gcc`` or ``g++`` that you want to use."
-- `Intel Reference Guide <https://software.intel.com/en-us/node/522750>`_
@@ -740,76 +651,12 @@ compilers:
Intel compilers may therefore be configured in one of two ways with
Spack: using modules, or using compiler flags.
""""""""""""""""""""""""""
Configuration with Modules
""""""""""""""""""""""""""
One can control which GCC is seen by the Intel compiler with modules.
A module must be loaded both for the Intel Compiler (so it will run)
and GCC (so the compiler can find the intended GCC). The following
configuration in ``compilers.yaml`` illustrates this technique:
.. code-block:: yaml
   compilers:
   - compiler:
       modules: [gcc-4.9.3, intel-15.0.24]
       operating_system: centos7
       paths:
         cc: /opt/intel-15.0.24/bin/icc-15.0.24-beta
         cxx: /opt/intel-15.0.24/bin/icpc-15.0.24-beta
         f77: /opt/intel-15.0.24/bin/ifort-15.0.24-beta
         fc: /opt/intel-15.0.24/bin/ifort-15.0.24-beta
       spec: intel@15.0.24.4.9.3
.. note::
The version number on the Intel compiler is a combination of
the "native" Intel version number and the GNU compiler it is
targeting.
""""""""""""""""""""""""""
Command Line Configuration
""""""""""""""""""""""""""
One can also control which GCC is seen by the Intel compiler by adding
flags to the ``icc`` command:
#. Identify the location of the compiler you just installed:
.. code-block:: console
$ spack location --install-dir gcc
~/spack/opt/spack/linux-centos7-x86_64/gcc-4.9.3-iy4rw...
#. Set up ``compilers.yaml``, for example:
.. code-block:: yaml
   compilers:
   - compiler:
       modules: [intel-15.0.24]
       operating_system: centos7
       paths:
         cc: /opt/intel-15.0.24/bin/icc-15.0.24-beta
         cxx: /opt/intel-15.0.24/bin/icpc-15.0.24-beta
         f77: /opt/intel-15.0.24/bin/ifort-15.0.24-beta
         fc: /opt/intel-15.0.24/bin/ifort-15.0.24-beta
       flags:
         cflags: -gcc-name ~/spack/opt/spack/linux-centos7-x86_64/gcc-4.9.3-iy4rw.../bin/gcc
         cxxflags: -gxx-name ~/spack/opt/spack/linux-centos7-x86_64/gcc-4.9.3-iy4rw.../bin/g++
         fflags: -gcc-name ~/spack/opt/spack/linux-centos7-x86_64/gcc-4.9.3-iy4rw.../bin/gcc
       spec: intel@15.0.24.4.9.3
^^^
NAG
^^^
The Numerical Algorithms Group provides a licensed Fortran compiler. Like Clang,
this requires you to set up a :ref:`mixed-toolchains`. It is recommended to use
GCC for your C/C++ compilers.
The Numerical Algorithms Group provides a licensed Fortran compiler.
It is recommended to use GCC for your C/C++ compilers.
The NAG Fortran compilers are a bit more strict than other compilers, and many
packages will fail to install with error messages like:
@@ -826,44 +673,40 @@ the command line:
$ spack install openmpi fflags="-mismatch"
Or it can be set permanently in your ``compilers.yaml``:
Or it can be set permanently in your ``packages.yaml``:
.. code-block:: yaml
   - compiler:
       modules: []
       operating_system: centos6
       paths:
         cc: /soft/spack/opt/spack/linux-x86_64/gcc-5.3.0/gcc-6.1.0-q2zosj3igepi3pjnqt74bwazmptr5gpj/bin/gcc
         cxx: /soft/spack/opt/spack/linux-x86_64/gcc-5.3.0/gcc-6.1.0-q2zosj3igepi3pjnqt74bwazmptr5gpj/bin/g++
         f77: /soft/spack/opt/spack/linux-x86_64/gcc-4.4.7/nag-6.1-jt3h5hwt5myezgqguhfsan52zcskqene/bin/nagfor
         fc: /soft/spack/opt/spack/linux-x86_64/gcc-4.4.7/nag-6.1-jt3h5hwt5myezgqguhfsan52zcskqene/bin/nagfor
       flags:
         fflags: -mismatch
       spec: nag@6.1

   packages:
     nag:
       externals:
       - spec: nag@6.1
         prefix: /opt/nag/bin
         extra_attributes:
           compilers:
             fortran: /opt/nag/bin/nagfor
           flags:
             fflags: -mismatch
---------------
System Packages
---------------
Once compilers are configured, one needs to determine which
pre-installed system packages, if any, to use in builds. This is
configured in the file ``~/.spack/packages.yaml``. For example, to use
an OpenMPI installed in /opt/local, one would use:
Once compilers are configured, one needs to determine which pre-installed system packages,
if any, to use in builds. These are also configured in the ``~/.spack/packages.yaml`` file.
For example, to use an OpenMPI installed in /opt/local, one would use:
.. code-block:: yaml
   packages:
     openmpi:
       buildable: False
       externals:
       - spec: openmpi@1.10.1
         prefix: /opt/local
In general, Spack is easier to use and more reliable if it builds all of
its own dependencies. However, there are several packages for which one
commonly needs to use system versions:
In general, *Spack is easier to use and more reliable if it builds all of its own dependencies*.
However, there are several packages for which one commonly needs to use system versions:
^^^
MPI
@@ -876,8 +719,7 @@ you are unlikely to get a working MPI from Spack. Instead, use an
appropriate pre-installed MPI.
If you choose a pre-installed MPI, you should consider using the
pre-installed compiler used to build that MPI; see above on
``compilers.yaml``.
pre-installed compiler used to build that MPI.
^^^^^^^
OpenSSL
@@ -1441,9 +1283,9 @@ To configure Spack, first run the following command inside the Spack console:
spack compiler find
This creates a ``.staging`` directory in our Spack prefix, along with a ``windows`` subdirectory
containing a ``compilers.yaml`` file. On a fresh Windows install with the above packages
containing a ``packages.yaml`` file. On a fresh Windows install with the above packages
installed, this command should only detect Microsoft Visual Studio and the Intel Fortran
compiler will be integrated within the first version of MSVC present in the ``compilers.yaml``
compiler will be integrated within the first version of MSVC present in the ``packages.yaml``
output.
Spack provides a default ``config.yaml`` file for Windows that it will use unless overridden.


@@ -23,7 +23,6 @@ components for use by dependent packages:
   packages:
     all:
       compiler: [rocmcc@=5.3.0]
       variants: amdgpu_target=gfx90a
     hip:
       buildable: false
@@ -70,16 +69,15 @@ This is in combination with the following compiler definition:
.. code-block:: yaml
   compilers:
   - compiler:
       spec: rocmcc@=5.3.0
       paths:
         cc: /opt/rocm-5.3.0/bin/amdclang
         cxx: /opt/rocm-5.3.0/bin/amdclang++
         f77: null
         fc: /opt/rocm-5.3.0/bin/amdflang
       operating_system: rhel8
       target: x86_64

   packages:
     llvm-amdgpu:
       externals:
       - spec: llvm-amdgpu@=5.3.0
         prefix: /opt/rocm-5.3.0
         compilers:
           c: /opt/rocm-5.3.0/bin/amdclang
           cxx: /opt/rocm-5.3.0/bin/amdclang++
           fortran: null
This includes the following considerations:


@@ -43,6 +43,20 @@ or specified as URLs. Only the ``file``, ``ftp``, ``http`` and ``https`` protoco
schemes) are supported. Spack-specific, environment and user path variables
can be used. (See :ref:`config-file-variables` for more information.)
A ``sha256`` is required for remote file URLs and must be specified as follows:
.. code-block:: yaml
   include:
   - path: https://github.com/path/to/raw/config/compilers.yaml
     sha256: 26e871804a92cd07bb3d611b31b4156ae93d35b6a6d6e0ef3a67871fcb1d258b
Additionally, remote file URLs must link to the **raw** form of the file's
contents (e.g., `GitHub
<https://docs.github.com/en/repositories/working-with-files/using-files/viewing-and-understanding-files#viewing-or-copying-the-raw-file-content>`_
or `GitLab
<https://docs.gitlab.com/ee/api/repository_files.html#get-raw-file-from-repository>`_).
.. warning::
Recursive includes are not currently processed in a breadth-first manner


@@ -557,14 +557,13 @@ preferences.
FAQ: :ref:`Why does Spack pick particular versions and variants? <faq-concretizer-precedence>`
Most package preferences (``compilers``, ``target`` and ``providers``)
The ``target`` and ``providers`` preferences
can only be set globally under the ``all`` section of ``packages.yaml``:
.. code-block:: yaml
   packages:
     all:
       compiler: [gcc@12.2.0, clang@12:, oneapi@2023:]
       target: [x86_64_v3]
       providers:
         mpi: [mvapich2, mpich, openmpi]

View File

@@ -2,9 +2,10 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import collections.abc
import enum
import os
import re
from typing import Tuple
from typing import Optional, Tuple
import llnl.util.filesystem as fs
import llnl.util.tty as tty
@@ -13,6 +14,7 @@
import spack.spec
import spack.util.prefix
from spack.directives import depends_on
from spack.util.executable import which_string
from .cmake import CMakeBuilder, CMakePackage
@@ -178,6 +180,64 @@ def initconfig_compiler_entries(self):
return entries
class Scheduler(enum.Enum):
LSF = enum.auto()
SLURM = enum.auto()
FLUX = enum.auto()
def get_scheduler(self) -> Optional[Scheduler]:
spec = self.pkg.spec
# Check for Spectrum-mpi, which always uses LSF or LSF MPI variant
if spec.satisfies("^spectrum-mpi") or spec["mpi"].satisfies("schedulers=lsf"):
return self.Scheduler.LSF
# Check for Slurm MPI variants
slurm_checks = ["+slurm", "schedulers=slurm", "process_managers=slurm"]
if any(spec["mpi"].satisfies(variant) for variant in slurm_checks):
return self.Scheduler.SLURM
# TODO improve this when MPI implementations support flux
# Do this check last to avoid using a flux wrapper present next to Slurm/LSF schedulers
if which_string("flux") is not None:
return self.Scheduler.FLUX
return None
def get_mpi_exec(self) -> Optional[str]:
spec = self.pkg.spec
scheduler = self.get_scheduler()
if scheduler == self.Scheduler.LSF:
return which_string("lrun")
elif scheduler == self.Scheduler.SLURM:
if spec["mpi"].external:
return which_string("srun")
else:
return os.path.join(spec["slurm"].prefix.bin, "srun")
elif scheduler == self.Scheduler.FLUX:
flux = which_string("flux")
return f"{flux};run" if flux else None
elif hasattr(spec["mpi"].package, "mpiexec"):
return spec["mpi"].package.mpiexec
else:
mpiexec = os.path.join(spec["mpi"].prefix.bin, "mpirun")
if not os.path.exists(mpiexec):
mpiexec = os.path.join(spec["mpi"].prefix.bin, "mpiexec")
return mpiexec
def get_mpi_exec_num_proc(self) -> str:
scheduler = self.get_scheduler()
if scheduler in [self.Scheduler.FLUX, self.Scheduler.LSF, self.Scheduler.SLURM]:
return "-n"
else:
return "-np"
def initconfig_mpi_entries(self):
spec = self.pkg.spec
@@ -197,27 +257,10 @@ def initconfig_mpi_entries(self):
if hasattr(spec["mpi"], "mpifc"):
entries.append(cmake_cache_path("MPI_Fortran_COMPILER", spec["mpi"].mpifc))
# Check for slurm
using_slurm = False
slurm_checks = ["+slurm", "schedulers=slurm", "process_managers=slurm"]
if any(spec["mpi"].satisfies(variant) for variant in slurm_checks):
using_slurm = True
# Determine MPIEXEC
if using_slurm:
if spec["mpi"].external:
# Heuristic until we have dependents on externals
mpiexec = "/usr/bin/srun"
else:
mpiexec = os.path.join(spec["slurm"].prefix.bin, "srun")
elif hasattr(spec["mpi"].package, "mpiexec"):
mpiexec = spec["mpi"].package.mpiexec
else:
mpiexec = os.path.join(spec["mpi"].prefix.bin, "mpirun")
if not os.path.exists(mpiexec):
mpiexec = os.path.join(spec["mpi"].prefix.bin, "mpiexec")
mpiexec = self.get_mpi_exec()
if not os.path.exists(mpiexec):
if mpiexec is None or not os.path.exists(mpiexec.split(";")[0]):
msg = "Unable to determine MPIEXEC, %s tests may fail" % self.pkg.name
entries.append("# {0}\n".format(msg))
tty.warn(msg)
@@ -230,10 +273,7 @@ def initconfig_mpi_entries(self):
entries.append(cmake_cache_path("MPIEXEC", mpiexec))
# Determine MPIEXEC_NUMPROC_FLAG
if using_slurm:
entries.append(cmake_cache_string("MPIEXEC_NUMPROC_FLAG", "-n"))
else:
entries.append(cmake_cache_string("MPIEXEC_NUMPROC_FLAG", "-np"))
entries.append(cmake_cache_string("MPIEXEC_NUMPROC_FLAG", self.get_mpi_exec_num_proc()))
return entries
@@ -276,30 +316,18 @@ def initconfig_hardware_entries(self):
entries.append("# ROCm")
entries.append("#------------------{0}\n".format("-" * 30))
if spec.satisfies("^blt@0.7:"):
rocm_root = os.path.dirname(spec["llvm-amdgpu"].prefix)
entries.append(cmake_cache_path("ROCM_PATH", rocm_root))
else:
# Explicitly setting HIP_ROOT_DIR may be a patch that is no longer necessary
entries.append(cmake_cache_path("HIP_ROOT_DIR", "{0}".format(spec["hip"].prefix)))
llvm_bin = spec["llvm-amdgpu"].prefix.bin
llvm_prefix = spec["llvm-amdgpu"].prefix
# Some ROCm systems seem to point to /<path>/rocm-<ver>/ and
# others point to /<path>/rocm-<ver>/llvm
if os.path.basename(os.path.normpath(llvm_prefix)) != "llvm":
llvm_bin = os.path.join(llvm_prefix, "llvm/bin/")
entries.append(
cmake_cache_filepath(
"CMAKE_HIP_COMPILER", os.path.join(llvm_bin, "amdclang++")
)
)
rocm_root = os.path.dirname(spec["llvm-amdgpu"].prefix)
entries.append(cmake_cache_path("ROCM_PATH", rocm_root))
archs = self.spec.variants["amdgpu_target"].value
if archs[0] != "none":
arch_str = ";".join(archs)
entries.append(cmake_cache_string("CMAKE_HIP_ARCHITECTURES", arch_str))
entries.append(cmake_cache_string("AMDGPU_TARGETS", arch_str))
entries.append(cmake_cache_string("GPU_TARGETS", arch_str))
llvm_bin = spec["llvm-amdgpu"].prefix.bin
entries.append(
cmake_cache_filepath("CMAKE_HIP_COMPILER", os.path.join(llvm_bin, "amdclang++"))
)
if spec.satisfies("%gcc"):
entries.append(
@@ -308,6 +336,15 @@ def initconfig_hardware_entries(self):
)
)
# Extra definitions that might be required in other cases
if not spec.satisfies("^blt"):
entries.append(cmake_cache_path("HIP_ROOT_DIR", "{0}".format(spec["hip"].prefix)))
if archs[0] != "none":
arch_str = ";".join(archs)
entries.append(cmake_cache_string("AMDGPU_TARGETS", arch_str))
entries.append(cmake_cache_string("GPU_TARGETS", arch_str))
return entries
def std_initconfig_entries(self):

View File

@@ -311,4 +311,4 @@ def ld_flags(self):
#: Tuple of Intel math libraries, exported to packages
INTEL_MATH_LIBRARIES = ("intel-mkl", "intel-oneapi-mkl", "intel-parallel-studio")
INTEL_MATH_LIBRARIES = ("intel-oneapi-mkl",)

View File

@@ -4,7 +4,6 @@
import json
import os
import re
import shutil
import sys
from typing import Dict
@@ -26,12 +25,10 @@
import spack.hash_types as ht
import spack.mirrors.mirror
import spack.package_base
import spack.paths
import spack.repo
import spack.spec
import spack.stage
import spack.util.executable
import spack.util.git
import spack.util.gpg as gpg_util
import spack.util.timer as timer
import spack.util.url as url_util
@@ -45,7 +42,6 @@
SPACK_COMMAND = "spack"
INSTALL_FAIL_CODE = 1
FAILED_CREATE_BUILDCACHE_CODE = 100
BUILTIN = re.compile(r"var\/spack\/repos\/builtin\/packages\/([^\/]+)\/package\.py")
def deindent(desc):
@@ -783,18 +779,15 @@ def ci_verify_versions(args):
then parses the git diff between the two to determine which packages
have been modified and verifies the new checksums inside of them.
"""
with fs.working_dir(spack.paths.prefix):
# We use HEAD^1 explicitly on the merge commit created by
# GitHub Actions. However HEAD~1 is a safer default for the helper function.
files = spack.util.git.get_modified_files(from_ref=args.from_ref, to_ref=args.to_ref)
# Get a list of package names from the modified files.
pkgs = [(m.group(1), p) for p in files for m in [BUILTIN.search(p)] if m]
# Get a list of all packages that have been changed or added
# between from_ref and to_ref
pkgs = spack.repo.get_all_package_diffs("AC", args.from_ref, args.to_ref)
failed_version = False
for pkg_name, path in pkgs:
for pkg_name in pkgs:
spec = spack.spec.Spec(pkg_name)
pkg = spack.repo.PATH.get_pkg_class(spec.name)(spec)
path = spack.repo.PATH.package_path(pkg_name)
# Skip checking manual download packages and trust the maintainers
if pkg.manual_download:
@@ -818,7 +811,7 @@ def ci_verify_versions(args):
# TODO: enforce that every version has a commit or a sha256 defined if not
# an infinite version (there are a lot of packages where this doesn't work yet.)
with fs.working_dir(spack.paths.prefix):
with fs.working_dir(os.path.dirname(path)):
added_checksums = spack_ci.get_added_versions(
checksums_version_dict, path, from_ref=args.from_ref, to_ref=args.to_ref
)
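The regex-based mapping that this change removes resolved package names by capturing them out of each modified file path; reconstructed as a standalone sketch:

```python
import re

# Matches builtin package recipes and captures the package name
BUILTIN = re.compile(r"var/spack/repos/builtin/packages/([^/]+)/package\.py")


def modified_packages(paths):
    """Return (name, path) pairs for paths that are package recipes."""
    return [(m.group(1), p) for p in paths for m in [BUILTIN.search(p)] if m]
```

The replacement, ``spack.repo.get_all_package_diffs``, asks the repo layer instead, so the command no longer hard-codes the builtin repository layout.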

View File

@@ -572,7 +572,7 @@ def edit(self, spec, prefix):
class IntelPackageTemplate(PackageTemplate):
"""Provides appropriate overrides for licensed Intel software"""
base_class_name = "IntelPackage"
base_class_name = "IntelOneApiPackage"
body_def = """\
# FIXME: Override `setup_environment` if necessary."""

View File

@@ -7,6 +7,7 @@
import os
import re
import sys
import warnings
from typing import Any, Dict, List, Optional, Tuple
import archspec.cpu
@@ -337,7 +338,15 @@ def from_legacy_yaml(compiler_dict: Dict[str, Any]) -> List[spack.spec.Spec]:
pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)
pattern = re.compile(r"|".join(finder.search_patterns(pkg=pkg_cls)))
filtered_paths = [x for x in candidate_paths if pattern.search(os.path.basename(x))]
detected = finder.detect_specs(pkg=pkg_cls, paths=filtered_paths)
try:
detected = finder.detect_specs(pkg=pkg_cls, paths=filtered_paths)
except Exception:
warnings.warn(
f"[{__name__}] cannot detect {pkg_name} from the "
f"following paths: {', '.join(filtered_paths)}"
)
continue
for s in detected:
for key in ("flags", "environment", "extra_rpaths"):
if key in compiler_dict:
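The try/except guard added here follows a common robust-detection pattern: a failure on one entry warns and skips rather than aborting the whole conversion. A minimal sketch with generic names (not Spack's API):

```python
import warnings


def detect_all(items, detect):
    """Run a per-item detector, warning and skipping on failures."""
    found = []
    for item in items:
        try:
            found.append(detect(item))
        except Exception:
            # A corrupted entry should not take down the other detections
            warnings.warn(f"cannot detect {item}; skipping")
            continue
    return found
```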

View File

@@ -654,7 +654,7 @@ def format_error(msg, pkg):
msg += " @*r{{[{0}, variant '{1}']}}"
return llnl.util.tty.color.colorize(msg.format(pkg.name, name))
if name in spack.variant.reserved_names:
if name in spack.variant.RESERVED_NAMES:
def _raise_reserved_name(pkg):
msg = "The name '%s' is reserved by Spack" % name

View File

@@ -295,8 +295,9 @@ def fetch(self):
)
def _fetch_from_url(self, url):
if spack.config.get("config:url_fetch_method") == "curl":
return self._fetch_curl(url)
fetch_method = spack.config.get("config:url_fetch_method", "urllib")
if fetch_method.startswith("curl"):
return self._fetch_curl(url, config_args=fetch_method.split()[1:])
else:
return self._fetch_urllib(url)
@@ -345,7 +346,7 @@ def _fetch_urllib(self, url):
self._check_headers(str(response.headers))
@_needs_stage
def _fetch_curl(self, url):
def _fetch_curl(self, url, config_args=[]):
save_file = None
partial_file = None
if self.stage.save_filename:
@@ -374,7 +375,7 @@ def _fetch_curl(self, url):
timeout = self.extra_options.get("timeout")
base_args = web_util.base_curl_fetch_args(url, timeout)
curl_args = save_args + base_args + cookie_args
curl_args = config_args + save_args + base_args + cookie_args
# Run curl but grab the mime type from the http headers
curl = self.curl
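With this change, ``url_fetch_method`` may carry extra curl arguments (e.g. ``curl -k``), and the split is simply whitespace-based. A sketch of that parsing (the helper name is illustrative):

```python
from typing import List, Tuple


def split_fetch_method(value: str) -> Tuple[str, List[str]]:
    """Split a fetch-method setting into the tool name and extra arguments."""
    parts = value.split()
    # parts[1:] is empty for plain "urllib" or "curl" with no arguments
    return parts[0], parts[1:]
```

The extra arguments are then prepended to the curl invocation, as in ``config_args + save_args + base_args + cookie_args`` above.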

View File

@@ -100,7 +100,7 @@
"allow_sgid": {"type": "boolean"},
"install_status": {"type": "boolean"},
"binary_index_root": {"type": "string"},
"url_fetch_method": {"type": "string", "enum": ["urllib", "curl"]},
"url_fetch_method": {"type": "string", "pattern": r"^urllib$|^curl( .*)*"},
"additional_external_search_paths": {"type": "array", "items": {"type": "string"}},
"binary_index_ttl": {"type": "integer", "minimum": 0},
"aliases": {"type": "object", "patternProperties": {r"\w[\w-]*": {"type": "string"}}},

View File

@@ -1190,7 +1190,7 @@ def solve(self, setup, specs, reuse=None, output=None, control=None, allow_depre
problem_repr += "\n" + f.read()
result = None
conc_cache_enabled = spack.config.get("config:concretization_cache:enable", True)
conc_cache_enabled = spack.config.get("config:concretization_cache:enable", False)
if conc_cache_enabled:
result, concretization_stats = CONC_CACHE.fetch(problem_repr)
@@ -2997,14 +2997,46 @@ def setup(
"""
reuse = reuse or []
check_packages_exist(specs)
self.gen = ProblemInstanceBuilder()
node_counter = create_counter(specs, tests=self.tests, possible_graph=self.possible_graph)
# Compute possible compilers first, so we can record which dependencies they might inject
_ = spack.compilers.config.all_compilers(init_config=True)
# Get compilers from buildcache only if injected through "reuse" specs
supported_compilers = spack.compilers.config.supported_compilers()
compilers_from_reuse = {
x for x in reuse if x.name in supported_compilers and not x.external
}
candidate_compilers, self.rejected_compilers = possible_compilers(
configuration=spack.config.CONFIG
)
for x in candidate_compilers:
if x.external or x in reuse:
continue
reuse.append(x)
for dep in x.traverse(root=False, deptype="run"):
reuse.extend(dep.traverse(deptype=("link", "run")))
candidate_compilers.update(compilers_from_reuse)
self.possible_compilers = list(candidate_compilers)
self.possible_compilers.sort() # type: ignore[call-overload]
self.gen.h1("Runtimes")
injected_dependencies = self.define_runtime_constraints()
node_counter = create_counter(
specs + injected_dependencies, tests=self.tests, possible_graph=self.possible_graph
)
self.possible_virtuals = node_counter.possible_virtuals()
self.pkgs = node_counter.possible_dependencies()
self.libcs = sorted(all_libcs()) # type: ignore[type-var]
# Fail if we already know an unreachable node is requested
for spec in specs:
# concrete roots don't need their dependencies verified
if spec.concrete:
continue
missing_deps = [
str(d)
for d in spec.traverse()
@@ -3017,7 +3049,6 @@ def setup(
if node.namespace is not None:
self.explicitly_required_namespaces[node.name] = node.namespace
self.gen = ProblemInstanceBuilder()
self.gen.h1("Generic information")
if using_libc_compatibility():
for libc in self.libcs:
@@ -3046,27 +3077,6 @@ def setup(
specs = tuple(specs) # ensure compatible types to add
_ = spack.compilers.config.all_compilers(init_config=True)
# Get compilers from buildcache only if injected through "reuse" specs
supported_compilers = spack.compilers.config.supported_compilers()
compilers_from_reuse = {
x for x in reuse if x.name in supported_compilers and not x.external
}
candidate_compilers, self.rejected_compilers = possible_compilers(
configuration=spack.config.CONFIG
)
for x in candidate_compilers:
if x.external or x in reuse:
continue
reuse.append(x)
for dep in x.traverse(root=False, deptype="run"):
reuse.extend(dep.traverse(deptype=("link", "run")))
candidate_compilers.update(compilers_from_reuse)
self.possible_compilers = list(candidate_compilers)
self.possible_compilers.sort() # type: ignore[call-overload]
self.gen.h1("Reusable concrete specs")
self.define_concrete_input_specs(specs, self.pkgs)
if reuse:
@@ -3138,9 +3148,6 @@ def setup(
self.gen.h1("Variant Values defined in specs")
self.define_variant_values()
self.gen.h1("Runtimes")
self.define_runtime_constraints()
self.gen.h1("Version Constraints")
self.collect_virtual_constraints()
self.define_version_constraints()
@@ -3174,8 +3181,10 @@ def visit(node):
path = os.path.join(parent_dir, "concretize.lp")
parse_files([path], visit)
def define_runtime_constraints(self):
"""Define the constraints to be imposed on the runtimes"""
def define_runtime_constraints(self) -> List[spack.spec.Spec]:
"""Define the constraints to be imposed on the runtimes, and returns a list of
injected packages.
"""
recorder = RuntimePropertyRecorder(self)
for compiler in self.possible_compilers:
@@ -3225,6 +3234,7 @@ def define_runtime_constraints(self):
)
recorder.consume_facts()
return sorted(recorder.injected_dependencies)
def literal_specs(self, specs):
for spec in sorted(specs):
@@ -3489,6 +3499,7 @@ def __init__(self, setup):
self._setup = setup
self.rules = []
self.runtime_conditions = set()
self.injected_dependencies = set()
# State of this object set in the __call__ method, and reset after
# each directive-like method
self.current_package = None
@@ -3527,6 +3538,7 @@ def depends_on(self, dependency_str: str, *, when: str, type: str, description:
if dependency_spec.versions != vn.any_version:
self._setup.version_constraints.add((dependency_spec.name, dependency_spec.versions))
self.injected_dependencies.add(dependency_spec)
body_str, node_variable = self.rule_body_from(when_spec)
head_clauses = self._setup.spec_clauses(dependency_spec, body=False)
@@ -3682,20 +3694,21 @@ def consume_facts(self):
"""Consume the facts collected by this object, and emits rules and
facts for the runtimes.
"""
self._setup.gen.h2("Runtimes: declarations")
runtime_pkgs = sorted(
{x.name for x in self.injected_dependencies if not spack.repo.PATH.is_virtual(x.name)}
)
for runtime_pkg in runtime_pkgs:
self._setup.gen.fact(fn.runtime(runtime_pkg))
self._setup.gen.newline()
self._setup.gen.h2("Runtimes: rules")
self._setup.gen.newline()
for rule in self.rules:
self._setup.gen.append(rule)
self._setup.gen.newline()
self._setup.gen.h2("Runtimes: conditions")
for runtime_pkg in spack.repo.PATH.packages_with_tags("runtime"):
self._setup.gen.fact(fn.runtime(runtime_pkg))
self._setup.gen.fact(fn.possible_in_link_run(runtime_pkg))
self._setup.gen.newline()
# Inject version rules for runtimes (versions are declared based
# on the available compilers)
self._setup.pkg_version_rules(runtime_pkg)
self._setup.gen.h2("Runtimes: requirements")
for imposed_spec, when_spec in sorted(self.runtime_conditions):
msg = f"{when_spec} requires {imposed_spec} at runtime"
_ = self._setup.condition(when_spec, imposed_spec=imposed_spec, msg=msg)
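The recorder now accumulates injected dependencies so the node counter can size the problem before fact generation. The collect-then-consume pattern, reduced to a toy sketch (this is not the real ``RuntimePropertyRecorder``):

```python
class RuntimeRecorder:
    """Collect dependencies injected by runtime rules, then emit them once."""

    def __init__(self):
        self.injected_dependencies = set()

    def depends_on(self, name: str) -> None:
        # A set deduplicates dependencies injected by multiple rules
        self.injected_dependencies.add(name)

    def consume_facts(self):
        # Sorted for a deterministic solver input
        return sorted(self.injected_dependencies)
```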
@@ -3830,7 +3843,7 @@ def virtual_on_edge(self, parent_node, provider_node, virtual):
provider_spec = self._specs[provider_node]
dependencies = [x for x in dependencies if id(x.spec) == id(provider_spec)]
assert len(dependencies) == 1, f"{virtual}: {provider_node.pkg}"
dependencies[0].update_virtuals((virtual,))
dependencies[0].update_virtuals(virtual)
def reorder_flags(self):
"""For each spec, determine the order of compiler flags applied to it.

View File

@@ -18,8 +18,6 @@
import spack.store
from spack.error import SpackError
RUNTIME_TAG = "runtime"
class PossibleGraph(NamedTuple):
real_pkgs: Set[str]
@@ -50,7 +48,8 @@ def possible_dependencies(
) -> PossibleGraph:
"""Returns the set of possible dependencies, and the set of possible virtuals.
Both sets always include runtime packages, which may be injected by compilers.
Runtime packages, which may be injected by compilers, need to be added to specs if
the dependency is not explicit in the package.py recipe.
Args:
transitive: return transitive dependencies if True, only direct dependencies if False
@@ -70,14 +69,9 @@ class NoStaticAnalysis(PossibleDependencyGraph):
def __init__(self, *, configuration: spack.config.Configuration, repo: spack.repo.RepoPath):
self.configuration = configuration
self.repo = repo
self.runtime_pkgs = set(self.repo.packages_with_tags(RUNTIME_TAG))
self.runtime_virtuals = set()
self._platform_condition = spack.spec.Spec(
f"platform={spack.platforms.host()} target={archspec.cpu.host().family}:"
)
for x in self.runtime_pkgs:
pkg_class = self.repo.get_pkg_class(x)
self.runtime_virtuals.update(pkg_class.provided_virtual_names())
try:
self.libc_pkgs = [x.name for x in self.providers_for("libc")]
@@ -214,8 +208,6 @@ def possible_dependencies(
for root, children in edges.items():
real_packages.update(x for x in children if self._is_possible(pkg_name=x))
virtuals.update(self.runtime_virtuals)
real_packages = real_packages | self.runtime_pkgs
return PossibleGraph(real_pkgs=real_packages, virtuals=virtuals, edges=edges)
def _package_list(self, specs: Tuple[Union[spack.spec.Spec, str], ...]) -> List[str]:
@@ -470,7 +462,7 @@ def possible_packages_facts(self, gen, fn):
gen.fact(fn.max_dupes(package_name, 1))
gen.newline()
gen.h2("Packages with at multiple possible nodes (build-tools)")
gen.h2("Packages with multiple possible nodes (build-tools)")
default = spack.config.CONFIG.get("concretizer:duplicates:max_dupes:default", 2)
for package_name in sorted(self.possible_dependencies() & build_tools):
max_dupes = spack.config.CONFIG.get(

View File

@@ -206,7 +206,7 @@ class InstallStatus(enum.Enum):
installed = "@g{[+]} "
upstream = "@g{[^]} "
external = "@g{[e]} "
external = "@M{[e]} "
absent = "@K{ - } "
missing = "@r{[-]} "
@@ -663,11 +663,9 @@ def versions(self):
def display_str(self):
"""Equivalent to {compiler.name}{@compiler.version} for Specs, without extra
@= for readability."""
if self.spec.concrete:
return f"{self.name}@{self.version}"
elif self.versions != vn.any_version:
return f"{self.name}@{self.versions}"
return self.name
if self.versions != vn.any_version:
return self.spec.format("{name}{@version}")
return self.spec.format("{name}")
def __lt__(self, other):
if not isinstance(other, CompilerSpec):
@@ -756,11 +754,17 @@ def update_deptypes(self, depflag: dt.DepFlag) -> bool:
self.depflag = new
return True
def update_virtuals(self, virtuals: Iterable[str]) -> bool:
def update_virtuals(self, virtuals: Union[str, Iterable[str]]) -> bool:
"""Update the list of provided virtuals"""
old = self.virtuals
self.virtuals = tuple(sorted(set(virtuals).union(self.virtuals)))
return old != self.virtuals
if isinstance(virtuals, str):
union = {virtuals, *self.virtuals}
else:
union = {*virtuals, *self.virtuals}
if len(union) == len(old):
return False
self.virtuals = tuple(sorted(union))
return True
def copy(self) -> "DependencySpec":
"""Return a copy of this edge"""
@@ -1024,7 +1028,7 @@ def select(
parent: Optional[str] = None,
child: Optional[str] = None,
depflag: dt.DepFlag = dt.ALL,
virtuals: Optional[Sequence[str]] = None,
virtuals: Optional[Union[str, Sequence[str]]] = None,
) -> List[DependencySpec]:
"""Selects a list of edges and returns them.
@@ -1043,7 +1047,7 @@ def select(
parent: name of the parent package
child: name of the child package
depflag: allowed dependency types in flag form
virtuals: list of virtuals on the edge
virtuals: list of virtuals or specific virtual on the edge
"""
if not depflag:
return []
@@ -1064,7 +1068,10 @@ def select(
# Filter by virtuals
if virtuals is not None:
selected = (dep for dep in selected if any(v in dep.virtuals for v in virtuals))
if isinstance(virtuals, str):
selected = (dep for dep in selected if virtuals in dep.virtuals)
else:
selected = (dep for dep in selected if any(v in dep.virtuals for v in virtuals))
return list(selected)
@@ -1589,7 +1596,11 @@ def _get_dependency(self, name):
return deps[0]
def edges_from_dependents(
self, name=None, depflag: dt.DepFlag = dt.ALL, *, virtuals: Optional[List[str]] = None
self,
name=None,
depflag: dt.DepFlag = dt.ALL,
*,
virtuals: Optional[Union[str, Sequence[str]]] = None,
) -> List[DependencySpec]:
"""Return a list of edges connecting this node in the DAG
to parents.
@@ -1604,7 +1615,11 @@ def edges_from_dependents(
]
def edges_to_dependencies(
self, name=None, depflag: dt.DepFlag = dt.ALL, *, virtuals: Optional[Sequence[str]] = None
self,
name=None,
depflag: dt.DepFlag = dt.ALL,
*,
virtuals: Optional[Union[str, Sequence[str]]] = None,
) -> List[DependencySpec]:
"""Returns a list of edges connecting this node in the DAG to children.
@@ -1646,7 +1661,7 @@ def dependencies(
name=None,
deptype: Union[dt.DepTypes, dt.DepFlag] = dt.ALL,
*,
virtuals: Optional[Sequence[str]] = None,
virtuals: Optional[Union[str, Sequence[str]]] = None,
) -> List["Spec"]:
"""Returns a list of direct dependencies (nodes in the DAG)
@@ -1696,7 +1711,7 @@ def _add_flag(self, name, value, propagate):
Known flags currently include "arch"
"""
if propagate and name in vt.reserved_names:
if propagate and name in vt.RESERVED_NAMES:
raise UnsupportedPropagationError(
f"Propagation with '==' is not supported for '{name}'."
)
@@ -3009,9 +3024,8 @@ def ensure_valid_variants(spec):
# but are not necessarily recorded by the package's class
propagate_variants = [name for name, variant in spec.variants.items() if variant.propagate]
not_existing = set(spec.variants) - (
set(pkg_variants) | set(vt.reserved_names) | set(propagate_variants)
)
not_existing = set(spec.variants)
not_existing.difference_update(pkg_variants, vt.RESERVED_NAMES, propagate_variants)
if not_existing:
raise vt.UnknownVariantError(
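The rewrite above replaces a chained set union with ``difference_update``, which subtracts several iterables in one pass without materializing their union first. Equivalent behavior, shown standalone:

```python
def unknown_variants(spec_variants, pkg_variants, reserved, propagated):
    """Variants on the spec that no known source accounts for."""
    unknown = set(spec_variants)
    # difference_update accepts multiple iterables in a single call
    unknown.difference_update(pkg_variants, reserved, propagated)
    return unknown
```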
@@ -4654,7 +4668,7 @@ def substitute_abstract_variants(spec: Spec):
if name == "dev_path":
spec.variants.substitute(vt.SingleValuedVariant(name, v._original_value))
continue
elif name in vt.reserved_names:
elif name in vt.RESERVED_NAMES:
continue
variant_defs = spack.repo.PATH.get_pkg_class(spec.fullname).variant_definitions(name)

View File

@@ -32,7 +32,7 @@ def repro_dir(tmp_path):
def test_get_added_versions_new_checksum(mock_git_package_changes):
repo_path, filename, commits = mock_git_package_changes
repo, filename, commits = mock_git_package_changes
checksum_versions = {
"3f6576971397b379d4205ae5451ff5a68edf6c103b2f03c4188ed7075fbb5f04": Version("2.1.5"),
@@ -41,7 +41,7 @@ def test_get_added_versions_new_checksum(mock_git_package_changes):
"86993903527d9b12fc543335c19c1d33a93797b3d4d37648b5addae83679ecd8": Version("2.0.0"),
}
with fs.working_dir(str(repo_path)):
with fs.working_dir(repo.packages_path):
added_versions = ci.get_added_versions(
checksum_versions, filename, from_ref=commits[-1], to_ref=commits[-2]
)
@@ -50,7 +50,7 @@ def test_get_added_versions_new_checksum(mock_git_package_changes):
def test_get_added_versions_new_commit(mock_git_package_changes):
repo_path, filename, commits = mock_git_package_changes
repo, filename, commits = mock_git_package_changes
checksum_versions = {
"74253725f884e2424a0dd8ae3f69896d5377f325": Version("2.1.6"),
@@ -60,9 +60,9 @@ def test_get_added_versions_new_commit(mock_git_package_changes):
"86993903527d9b12fc543335c19c1d33a93797b3d4d37648b5addae83679ecd8": Version("2.0.0"),
}
with fs.working_dir(str(repo_path)):
with fs.working_dir(repo.packages_path):
added_versions = ci.get_added_versions(
checksum_versions, filename, from_ref=commits[2], to_ref=commits[1]
checksum_versions, filename, from_ref=commits[-2], to_ref=commits[-3]
)
assert len(added_versions) == 1
assert added_versions[0] == Version("2.1.6")
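These index changes follow from the fixture's ordering: the commit list is reversed so the newest commit comes first, which makes ``commits[-1]`` the oldest. Illustrated with toy commit labels:

```python
commits = ["add v2.0.0", "add v2.1.5", "add v2.1.6"]  # oldest to newest, as committed
commits = list(reversed(commits))  # newest first, matching the fixture

assert commits[-1] == "add v2.0.0"  # oldest commit
assert commits[-2] == "add v2.1.5"
assert commits[0] == "add v2.1.6"   # HEAD
```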

View File

@@ -1978,6 +1978,13 @@ def test_ci_validate_git_versions_invalid(
assert f"Invalid commit for diff-test@{version}" in err
def mock_packages_path(path):
def packages_path():
return path
return packages_path
@pytest.fixture
def verify_standard_versions_valid(monkeypatch):
def validate_standard_versions(pkg, versions):
@@ -2024,9 +2031,12 @@ def test_ci_verify_versions_valid(
mock_git_package_changes,
verify_standard_versions_valid,
verify_git_versions_valid,
tmpdir,
):
repo_path, _, commits = mock_git_package_changes
monkeypatch.setattr(spack.paths, "prefix", repo_path)
repo, _, commits = mock_git_package_changes
spack.repo.PATH.put_first(repo)
monkeypatch.setattr(spack.repo, "packages_path", mock_packages_path(repo.packages_path))
out = ci_cmd("verify-versions", commits[-1], commits[-3])
assert "Validated diff-test@2.1.5" in out
@@ -2040,9 +2050,10 @@ def test_ci_verify_versions_standard_invalid(
verify_standard_versions_invalid,
verify_git_versions_invalid,
):
repo_path, _, commits = mock_git_package_changes
repo, _, commits = mock_git_package_changes
spack.repo.PATH.put_first(repo)
monkeypatch.setattr(spack.paths, "prefix", repo_path)
monkeypatch.setattr(spack.repo, "packages_path", mock_packages_path(repo.packages_path))
out = ci_cmd("verify-versions", commits[-1], commits[-3], fail_on_error=False)
assert "Invalid checksum found diff-test@2.1.5" in out
@@ -2050,8 +2061,10 @@ def test_ci_verify_versions_standard_invalid(
def test_ci_verify_versions_manual_package(monkeypatch, mock_packages, mock_git_package_changes):
repo_path, _, commits = mock_git_package_changes
monkeypatch.setattr(spack.paths, "prefix", repo_path)
repo, _, commits = mock_git_package_changes
spack.repo.PATH.put_first(repo)
monkeypatch.setattr(spack.repo, "packages_path", mock_packages_path(repo.packages_path))
pkg_class = spack.spec.Spec("diff-test").package_class
monkeypatch.setattr(pkg_class, "manual_download", True)

View File

@@ -62,7 +62,7 @@
(
["-t", "intel", "/test-intel"],
"test-intel",
[r"TestIntel(IntelPackage)", r"setup_environment"],
[r"TestIntel(IntelOneApiPackage)", r"setup_environment"],
),
(
["-t", "makefile", "/test-makefile"],

View File

@@ -38,10 +38,9 @@
(["--transitive", "--deptype=link", "dtbuild1"], {"dtlink2"}),
],
)
def test_direct_dependencies(cli_args, expected, mock_runtimes):
def test_direct_dependencies(cli_args, expected, mock_packages):
out = dependencies(*cli_args)
result = set(re.split(r"\s+", out.strip()))
expected.update(mock_runtimes)
assert expected == result

View File

@@ -24,8 +24,6 @@ def test_it_just_runs(pkg):
(
("mpi",),
[
"intel-mpi",
"intel-parallel-studio",
"mpich",
"mpilander",
"mvapich2",

View File

@@ -83,3 +83,12 @@ def tests_compiler_conversion_modules(mock_compiler):
compiler_spec = CompilerFactory.from_legacy_yaml(mock_compiler)[0]
assert compiler_spec.external
assert compiler_spec.external_modules == modules
@pytest.mark.regression("49717")
def tests_compiler_conversion_corrupted_paths(mock_compiler):
"""Tests that compiler entries with corrupted path do not raise"""
mock_compiler["paths"] = {"cc": "gcc", "cxx": "g++", "fc": "gfortran", "f77": "gfortran"}
# Test this call doesn't raise
compiler_spec = CompilerFactory.from_legacy_yaml(mock_compiler)
assert compiler_spec == []

View File

@@ -243,13 +243,11 @@ def latest_commit():
@pytest.fixture
def mock_git_package_changes(git, tmpdir, override_git_repos_cache_path):
def mock_git_package_changes(git, tmpdir, override_git_repos_cache_path, monkeypatch):
"""Create a mock git repo with known structure of package edits
The structure of commits in this repo is as follows::
o diff-test: modification to make manual download package
|
o diff-test: add v1.2 (from a git ref)
|
o diff-test: add v1.1 (from source tarball)
@@ -261,8 +259,12 @@ def mock_git_package_changes(git, tmpdir, override_git_repos_cache_path):
Important attributes of the repo for test coverage are: multiple package
versions are added with some coming from a tarball and some from git refs.
"""
repo_path = str(tmpdir.mkdir("git_package_changes_repo"))
filename = "var/spack/repos/builtin/packages/diff-test/package.py"
filename = "diff-test/package.py"
repo_path, _ = spack.repo.create_repo(str(tmpdir.mkdir("myrepo")))
repo_cache = spack.util.file_cache.FileCache(str(tmpdir.mkdir("cache")))
repo = spack.repo.Repo(repo_path, cache=repo_cache)
def commit(message):
global commit_counter
@@ -276,7 +278,7 @@ def commit(message):
)
commit_counter += 1
with working_dir(repo_path):
with working_dir(repo.packages_path):
git("init")
git("config", "user.name", "Spack")
@@ -307,17 +309,11 @@ def latest_commit():
commit("diff-test: add v2.1.6")
commits.append(latest_commit())
# convert pkg-a to a manual download package
shutil.copy2(f"{spack.paths.test_path}/data/conftest/diff-test/package-3.txt", filename)
git("add", filename)
commit("diff-test: modification to make manual download package")
commits.append(latest_commit())
# The commits are ordered with the last commit first in the list
commits = list(reversed(commits))
# Return the git directory to install, the filename used, and the commits
yield repo_path, filename, commits
yield repo, filename, commits
@pytest.fixture(autouse=True)

View File

@@ -1,23 +0,0 @@
# Copyright Spack Project Developers. See COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class DiffTest(AutotoolsPackage):
"""zlib replacement with optimizations for next generation systems."""
homepage = "https://github.com/zlib-ng/zlib-ng"
url = "https://github.com/zlib-ng/zlib-ng/archive/2.0.0.tar.gz"
git = "https://github.com/zlib-ng/zlib-ng.git"
license("Zlib")
manual_download = True
version("2.1.6", tag="2.1.6", commit="74253725f884e2424a0dd8ae3f69896d5377f325")
version("2.1.5", sha256="3f6576971397b379d4205ae5451ff5a68edf6c103b2f03c4188ed7075fbb5f04")
version("2.1.4", sha256="a0293475e6a44a3f6c045229fe50f69dc0eebc62a42405a51f19d46a5541e77a")
version("2.0.7", sha256="6c0853bb27738b811f2b4d4af095323c3d5ce36ceed6b50e5f773204fb8f7200")
version("2.0.0", sha256="86993903527d9b12fc543335c19c1d33a93797b3d4d37648b5addae83679ecd8")

View File

@@ -111,14 +111,13 @@ def mpi_names(mock_inspector):
("dtbuild1", {"allowed_deps": dt.LINK}, {"dtbuild1", "dtlink2"}),
],
)
def test_possible_dependencies(pkg_name, fn_kwargs, expected, mock_runtimes, mock_inspector):
def test_possible_dependencies(pkg_name, fn_kwargs, expected, mock_inspector):
"""Tests possible nodes of mpileaks, under different scenarios."""
expected.update(mock_runtimes)
result, *_ = mock_inspector.possible_dependencies(pkg_name, **fn_kwargs)
assert expected == result
def test_possible_dependencies_virtual(mock_inspector, mock_packages, mock_runtimes, mpi_names):
def test_possible_dependencies_virtual(mock_inspector, mock_packages, mpi_names):
expected = set(mpi_names)
for name in mpi_names:
expected.update(
@@ -126,7 +125,6 @@ def test_possible_dependencies_virtual(mock_inspector, mock_packages, mock_runti
for dep in mock_packages.get_pkg_class(name).dependencies_by_name()
if not mock_packages.is_virtual(dep)
)
expected.update(mock_runtimes)
expected.update(s.name for s in mock_packages.providers_for("c"))
real_pkgs, *_ = mock_inspector.possible_dependencies(
@@ -146,7 +144,6 @@ def test_possible_dependencies_with_multiple_classes(
pkgs = ["dt-diamond", "mpileaks"]
expected = set(mpileaks_possible_deps)
expected.update({"dt-diamond", "dt-diamond-left", "dt-diamond-right", "dt-diamond-bottom"})
expected.update(mock_packages.packages_with_tags("runtime"))
real_pkgs, *_ = mock_inspector.possible_dependencies(*pkgs, allowed_deps=dt.ALL)
assert set(expected) == real_pkgs


@@ -626,43 +626,32 @@ def test_propagate_reserved_variant_names(self, spec_string):
with pytest.raises(spack.spec_parser.SpecParsingError, match="Propagation"):
Spec(spec_string)
def test_unsatisfiable_multi_value_variant(self, default_mock_concretization):
def test_multivalued_variant_1(self, default_mock_concretization):
# Semantics for a multi-valued variant differ
# depending on whether the spec is concrete or not
a = default_mock_concretization('multivalue-variant foo="bar"')
spec_str = 'multivalue-variant foo="bar,baz"'
b = Spec(spec_str)
a = default_mock_concretization("multivalue-variant foo=bar")
b = Spec("multivalue-variant foo=bar,baz")
assert not a.satisfies(b)
assert not a.satisfies(spec_str)
# A concrete spec cannot be constrained further
with pytest.raises(UnsatisfiableSpecError):
a.constrain(b)
a = Spec('multivalue-variant foo="bar"')
spec_str = 'multivalue-variant foo="bar,baz"'
b = Spec(spec_str)
def test_multivalued_variant_2(self):
a = Spec("multivalue-variant foo=bar")
b = Spec("multivalue-variant foo=bar,baz")
# The specs are abstract and they **could** be constrained
assert a.satisfies(b)
assert a.satisfies(spec_str)
# An abstract spec can instead be constrained
assert a.constrain(b)
a = default_mock_concretization('multivalue-variant foo="bar,baz"')
spec_str = 'multivalue-variant foo="bar,baz,quux"'
b = Spec(spec_str)
def test_multivalued_variant_3(self, default_mock_concretization):
a = default_mock_concretization("multivalue-variant foo=bar,baz")
b = Spec("multivalue-variant foo=bar,baz,quux")
assert not a.satisfies(b)
assert not a.satisfies(spec_str)
# A concrete spec cannot be constrained further
with pytest.raises(UnsatisfiableSpecError):
a.constrain(b)
a = Spec('multivalue-variant foo="bar,baz"')
spec_str = 'multivalue-variant foo="bar,baz,quux"'
b = Spec(spec_str)
def test_multivalued_variant_4(self):
a = Spec("multivalue-variant foo=bar,baz")
b = Spec("multivalue-variant foo=bar,baz,quux")
# The specs are abstract and they **could** be constrained
assert a.intersects(b)
assert a.intersects(spec_str)
# An abstract spec can instead be constrained
assert a.constrain(b)
# ...but will fail during concretization if there are
@@ -670,15 +659,14 @@ def test_unsatisfiable_multi_value_variant(self, default_mock_concretization):
with pytest.raises(InvalidVariantValueError):
spack.concretize.concretize_one(a)
def test_multivalued_variant_5(self):
# This time we'll try to set a single-valued variant
a = Spec('multivalue-variant fee="bar"')
spec_str = 'multivalue-variant fee="baz"'
b = Spec(spec_str)
a = Spec("multivalue-variant fee=bar")
b = Spec("multivalue-variant fee=baz")
# The specs are abstract and they **could** be constrained,
# as before concretization I don't know which type of variant
# I have (if it is not a BV)
assert a.intersects(b)
assert a.intersects(spec_str)
# A variant cannot be parsed as single-valued until we try to
# concretize. This means that we can constrain the variant above
assert a.constrain(b)
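The concrete-vs-abstract satisfaction rules these tests exercise can be sketched as a toy model (a hypothetical helper with assumed semantics, not Spack's actual implementation):

```python
def variant_satisfies(a_values, b_values, a_is_concrete):
    """Toy model of multi-valued variant satisfaction.

    A concrete spec's variant values are final, so it satisfies a
    constraint only if it already carries every requested value.  An
    abstract spec may still be constrained later, so any overlap (or an
    empty constraint) is enough for a potential match.
    """
    a, b = set(a_values), set(b_values)
    if a_is_concrete:
        # Concrete: no further constraining allowed, b must already hold.
        return b <= a
    # Abstract: the two specs merely need to be compatible so far.
    return not b or bool(a & b)
```

Under this model a concrete `foo=bar` does not satisfy `foo=bar,baz`, while the abstract form still intersects it, mirroring the assertions above.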
@@ -1960,6 +1948,35 @@ def test_edge_equality_does_not_depend_on_virtual_order():
assert tuple(sorted(edge2.virtuals)) == edge1.virtuals
def test_update_virtuals():
parent, child = Spec("parent"), Spec("child")
edge = DependencySpec(parent, child, depflag=0, virtuals=("mpi", "lapack"))
assert edge.update_virtuals("blas")
assert edge.virtuals == ("blas", "lapack", "mpi")
assert edge.update_virtuals(("c", "fortran", "mpi", "lapack"))
assert edge.virtuals == ("blas", "c", "fortran", "lapack", "mpi")
assert not edge.update_virtuals("mpi")
assert not edge.update_virtuals(("c", "fortran", "mpi", "lapack"))
assert edge.virtuals == ("blas", "c", "fortran", "lapack", "mpi")
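The merge behavior asserted above (sorted, deduplicated, with a boolean reporting whether anything changed) can be sketched as a standalone function (an assumed model of `update_virtuals`, not its real code):

```python
def update_virtuals(current, new):
    """Merge virtual names into a sorted, deduplicated tuple.

    Accepts a single name or an iterable of names and returns
    (changed, merged), where ``changed`` is True only if at least one
    new name was actually added.
    """
    if isinstance(new, str):
        new = (new,)
    merged = tuple(sorted(set(current) | set(new)))
    return merged != tuple(sorted(current)), merged
```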
def test_virtual_queries_work_for_strings_and_lists():
"""Ensure that ``dependencies()`` works with both virtuals=str and virtuals=[str, ...]."""
parent, child = Spec("parent"), Spec("child")
parent._add_dependency(
child, depflag=dt.BUILD, virtuals=("cxx", "fortran") # multi-char dep names
)
assert not parent.dependencies(virtuals="c") # not in virtuals but shares a char with cxx
for lang in ["cxx", "fortran"]:
assert parent.dependencies(virtuals=lang) # string arg
assert parent.edges_to_dependencies(virtuals=lang) # string arg
assert parent.dependencies(virtuals=[lang]) # list arg
assert parent.edges_to_dependencies(virtuals=[lang])  # list arg
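Accepting either a single virtual name or a list, as these queries do, usually comes down to a small normalization step; a minimal sketch (hypothetical helper):

```python
def normalize_virtuals(virtuals):
    """Return a tuple of virtual names from either a single string or an
    iterable of strings, so downstream code only handles one shape."""
    if isinstance(virtuals, str):
        # A bare string must not be iterated per-character: otherwise
        # "cxx" would spuriously match a query for "c".
        return (virtuals,)
    return tuple(virtuals)
```

This avoids the classic substring/per-character pitfall the test guards against with its multi-character dependency names.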
def test_old_format_strings_trigger_error(default_mock_concretization):
s = spack.concretize.concretize_one("pkg-a")
with pytest.raises(SpecFormatStringError):


@@ -26,15 +26,9 @@
)
from spack.tokenize import Token
FAIL_ON_WINDOWS = pytest.mark.xfail(
sys.platform == "win32",
raises=(SpecTokenizationError, spack.spec.InvalidHashError),
reason="Unix style path on Windows",
)
SKIP_ON_WINDOWS = pytest.mark.skipif(sys.platform == "win32", reason="Unix style path on Windows")
FAIL_ON_UNIX = pytest.mark.xfail(
sys.platform != "win32", raises=SpecTokenizationError, reason="Windows style path on Unix"
)
SKIP_ON_UNIX = pytest.mark.skipif(sys.platform != "win32", reason="Windows style path on Unix")
def simple_package_name(name):
@@ -1060,56 +1054,56 @@ def test_error_conditions(text, match_string):
[
# Specfile related errors
pytest.param(
"/bogus/path/libdwarf.yaml", spack.spec.NoSuchSpecFileError, marks=FAIL_ON_WINDOWS
"/bogus/path/libdwarf.yaml", spack.spec.NoSuchSpecFileError, marks=SKIP_ON_WINDOWS
),
pytest.param("../../libdwarf.yaml", spack.spec.NoSuchSpecFileError, marks=FAIL_ON_WINDOWS),
pytest.param("./libdwarf.yaml", spack.spec.NoSuchSpecFileError, marks=FAIL_ON_WINDOWS),
pytest.param("../../libdwarf.yaml", spack.spec.NoSuchSpecFileError, marks=SKIP_ON_WINDOWS),
pytest.param("./libdwarf.yaml", spack.spec.NoSuchSpecFileError, marks=SKIP_ON_WINDOWS),
pytest.param(
"libfoo ^/bogus/path/libdwarf.yaml",
spack.spec.NoSuchSpecFileError,
marks=FAIL_ON_WINDOWS,
marks=SKIP_ON_WINDOWS,
),
pytest.param(
"libfoo ^../../libdwarf.yaml", spack.spec.NoSuchSpecFileError, marks=FAIL_ON_WINDOWS
"libfoo ^../../libdwarf.yaml", spack.spec.NoSuchSpecFileError, marks=SKIP_ON_WINDOWS
),
pytest.param(
"libfoo ^./libdwarf.yaml", spack.spec.NoSuchSpecFileError, marks=FAIL_ON_WINDOWS
"libfoo ^./libdwarf.yaml", spack.spec.NoSuchSpecFileError, marks=SKIP_ON_WINDOWS
),
pytest.param(
"/bogus/path/libdwarf.yamlfoobar",
spack.spec.NoSuchSpecFileError,
marks=FAIL_ON_WINDOWS,
marks=SKIP_ON_WINDOWS,
),
pytest.param(
"libdwarf^/bogus/path/libelf.yamlfoobar ^/path/to/bogus.yaml",
spack.spec.NoSuchSpecFileError,
marks=FAIL_ON_WINDOWS,
marks=SKIP_ON_WINDOWS,
),
pytest.param(
"c:\\bogus\\path\\libdwarf.yaml", spack.spec.NoSuchSpecFileError, marks=FAIL_ON_UNIX
"c:\\bogus\\path\\libdwarf.yaml", spack.spec.NoSuchSpecFileError, marks=SKIP_ON_UNIX
),
pytest.param("..\\..\\libdwarf.yaml", spack.spec.NoSuchSpecFileError, marks=FAIL_ON_UNIX),
pytest.param(".\\libdwarf.yaml", spack.spec.NoSuchSpecFileError, marks=FAIL_ON_UNIX),
pytest.param("..\\..\\libdwarf.yaml", spack.spec.NoSuchSpecFileError, marks=SKIP_ON_UNIX),
pytest.param(".\\libdwarf.yaml", spack.spec.NoSuchSpecFileError, marks=SKIP_ON_UNIX),
pytest.param(
"libfoo ^c:\\bogus\\path\\libdwarf.yaml",
spack.spec.NoSuchSpecFileError,
marks=FAIL_ON_UNIX,
marks=SKIP_ON_UNIX,
),
pytest.param(
"libfoo ^..\\..\\libdwarf.yaml", spack.spec.NoSuchSpecFileError, marks=FAIL_ON_UNIX
"libfoo ^..\\..\\libdwarf.yaml", spack.spec.NoSuchSpecFileError, marks=SKIP_ON_UNIX
),
pytest.param(
"libfoo ^.\\libdwarf.yaml", spack.spec.NoSuchSpecFileError, marks=FAIL_ON_UNIX
"libfoo ^.\\libdwarf.yaml", spack.spec.NoSuchSpecFileError, marks=SKIP_ON_UNIX
),
pytest.param(
"c:\\bogus\\path\\libdwarf.yamlfoobar",
spack.spec.SpecFilenameError,
marks=FAIL_ON_UNIX,
marks=SKIP_ON_UNIX,
),
pytest.param(
"libdwarf^c:\\bogus\\path\\libelf.yamlfoobar ^c:\\path\\to\\bogus.yaml",
spack.spec.SpecFilenameError,
marks=FAIL_ON_UNIX,
marks=SKIP_ON_UNIX,
),
],
)


@@ -109,6 +109,26 @@ def test_fetch_options(tmp_path, mock_archive):
assert filecmp.cmp(fetcher.archive_file, mock_archive.archive_file)
def test_fetch_curl_options(tmp_path, mock_archive, monkeypatch):
with spack.config.override("config:url_fetch_method", "curl -k -q"):
fetcher = fs.URLFetchStrategy(
url=mock_archive.url, fetch_options={"cookie": "True", "timeout": 10}
)
def check_args(*args, **kwargs):
# Raise StopIteration to avoid running the rest of the fetch method
# args[0] is `which curl`, next two are our config options
assert args[1:3] == ("-k", "-q")
raise StopIteration
monkeypatch.setattr(type(fetcher.curl), "__call__", check_args)
with Stage(fetcher, path=str(tmp_path)):
assert fetcher.archive_file is None
with pytest.raises(StopIteration):
fetcher.fetch()
@pytest.mark.parametrize("_fetch_method", ["curl", "urllib"])
def test_archive_file_errors(tmp_path, mutable_config, mock_archive, _fetch_method):
"""Ensure FetchStrategy commands may only be used as intended"""


@@ -7,9 +7,9 @@
def test_modified_files(mock_git_package_changes):
repo_path, filename, commits = mock_git_package_changes
repo, filename, commits = mock_git_package_changes
with working_dir(repo_path):
with working_dir(repo.packages_path):
files = get_modified_files(from_ref="HEAD~1", to_ref="HEAD")
assert len(files) == 1
assert files[0] == filename


@@ -18,7 +18,7 @@
from pathlib import Path, PurePosixPath
from typing import IO, Dict, Iterable, List, Optional, Set, Tuple, Union
from urllib.error import HTTPError, URLError
from urllib.request import HTTPSHandler, Request, build_opener
from urllib.request import HTTPDefaultErrorHandler, HTTPSHandler, Request, build_opener
import llnl.url
from llnl.util import lang, tty
@@ -57,7 +57,7 @@ def __reduce__(self):
return DetailedHTTPError, (self.req, self.code, self.msg, self.hdrs, None)
class SpackHTTPDefaultErrorHandler(urllib.request.HTTPDefaultErrorHandler):
class SpackHTTPDefaultErrorHandler(HTTPDefaultErrorHandler):
def http_error_default(self, req, fp, code, msg, hdrs):
raise DetailedHTTPError(req, code, msg, hdrs, fp)
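The handler above swaps urllib's default error behavior for a more detailed exception; a self-contained sketch of the same pattern (class and message are illustrative, not Spack's):

```python
import urllib.request
from types import SimpleNamespace


class VerboseErrorHandler(urllib.request.HTTPDefaultErrorHandler):
    """Raise an error that includes the URL and status code instead of
    urllib's generic HTTPError message."""

    def http_error_default(self, req, fp, code, msg, hdrs):
        raise RuntimeError(f"{req.get_full_url()} returned {code}: {msg}")


# An opener built with this handler applies it to every request it makes.
opener = urllib.request.build_opener(VerboseErrorHandler())
```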
@@ -388,9 +388,9 @@ def fetch_url_text(url, curl: Optional[Executable] = None, dest_dir="."):
fetch_method = spack.config.get("config:url_fetch_method")
tty.debug("Using '{0}' to fetch {1} into {2}".format(fetch_method, url, path))
if fetch_method == "curl":
if fetch_method.startswith("curl"):
curl_exe = curl or require_curl()
curl_args = ["-O"]
curl_args = fetch_method.split()[1:] + ["-O"]
curl_args.extend(base_curl_fetch_args(url))
# Curl automatically downloads file contents as filename
@@ -436,15 +436,14 @@ def url_exists(url, curl=None):
url_result = urllib.parse.urlparse(url)
# Use curl if configured to do so
use_curl = spack.config.get(
"config:url_fetch_method", "urllib"
) == "curl" and url_result.scheme not in ("gs", "s3")
fetch_method = spack.config.get("config:url_fetch_method", "urllib")
use_curl = fetch_method.startswith("curl") and url_result.scheme not in ("gs", "s3")
if use_curl:
curl_exe = curl or require_curl()
# Telling curl to fetch the first byte (-r 0-0) is supposed to be
# portable.
curl_args = ["--stderr", "-", "-s", "-f", "-r", "0-0", url]
curl_args = fetch_method.split()[1:] + ["--stderr", "-", "-s", "-f", "-r", "0-0", url]
if not spack.config.get("config:verify_ssl"):
curl_args.append("-k")
_ = curl_exe(*curl_args, fail_on_error=False, output=os.devnull)
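Both hunks rely on the same convention: a `url_fetch_method` value such as `curl -k -q` names the tool first, followed by arguments to prepend to every invocation. A minimal sketch of that split (hypothetical helper):

```python
def split_fetch_method(fetch_method):
    """Split a config value such as "curl -k -q" into the tool name and
    the extra arguments to prepend to every invocation."""
    parts = fetch_method.split()
    return parts[0], parts[1:]
```

With this convention, `fetch_method.startswith("curl")` selects curl even when extra flags are present, and `split()[1:]` recovers those flags.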


@@ -16,12 +16,12 @@
import llnl.util.lang as lang
import llnl.util.tty.color
import spack.error as error
import spack.error
import spack.spec
import spack.spec_parser
#: These are variant names used by Spack internally; packages can't use them
reserved_names = [
RESERVED_NAMES = {
"arch",
"architecture",
"dev_path",
@@ -31,7 +31,7 @@
"patches",
"platform",
"target",
]
}
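Changing `reserved_names` from a list to a set (with constant-style naming) gives O(1) membership tests for the lookup in `prevalidate_variant_value`; a minimal sketch using an illustrative subset of the names:

```python
# Illustrative subset of the reserved names; the real set has more entries.
RESERVED_NAMES = frozenset(
    {"arch", "architecture", "dev_path", "patches", "platform", "target"}
)


def is_reserved(variant_name):
    # Constant-time membership test; a list would scan every entry.
    return variant_name in RESERVED_NAMES
```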
special_variant_values = [None, "none", "*"]
@@ -251,7 +251,7 @@ def convert(self, other):
# We don't care if types are different as long as I can convert other to type(self)
try:
other = type(self)(other.name, other._original_value, propagate=other.propagate)
except (error.SpecError, ValueError):
except (spack.error.SpecError, ValueError):
return False
return method(self, other)
@@ -626,7 +626,7 @@ def __init__(self, *sets):
# 'none' is a special value and can appear only in a set of
# a single element
if any("none" in s and s != set(("none",)) for s in self.sets):
raise error.SpecError(
raise spack.error.SpecError(
"The value 'none' represents the empty set,"
" and must appear alone in a set. Use the "
"method 'allow_empty_set' to add it."
@@ -634,7 +634,7 @@ def __init__(self, *sets):
# Sets should not intersect with each other
if any(s1 & s2 for s1, s2 in itertools.combinations(self.sets, 2)):
raise error.SpecError("sets in input must be disjoint")
raise spack.error.SpecError("sets in input must be disjoint")
#: Attribute used to track values which correspond to
#: features which can be enabled or disabled as understood by the
@@ -704,7 +704,7 @@ def _disjoint_set_validator(pkg_name, variant_name, values):
format_args = {"variant": variant_name, "package": pkg_name, "values": values}
msg = self.error_fmt + " @*r{{[{package}, variant '{variant}']}}"
msg = llnl.util.tty.color.colorize(msg.format(**format_args))
raise error.SpecError(msg)
raise spack.error.SpecError(msg)
return _disjoint_set_validator
@@ -832,7 +832,7 @@ def prevalidate_variant_value(
only if the variant is a reserved variant.
"""
# don't validate wildcards or variants with reserved names
if variant.value == ("*",) or variant.name in reserved_names or variant.propagate:
if variant.value == ("*",) or variant.name in RESERVED_NAMES or variant.propagate:
return []
# raise if there is no definition at all
@@ -890,11 +890,11 @@ class ConditionalVariantValues(lang.TypedMutableSequence):
"""A list, just with a different type"""
class DuplicateVariantError(error.SpecError):
class DuplicateVariantError(spack.error.SpecError):
"""Raised when the same variant occurs in a spec twice."""
class UnknownVariantError(error.SpecError):
class UnknownVariantError(spack.error.SpecError):
"""Raised when an unknown variant occurs in a spec."""
def __init__(self, msg: str, unknown_variants: List[str]):
@@ -902,7 +902,7 @@ def __init__(self, msg: str, unknown_variants: List[str]):
self.unknown_variants = unknown_variants
class InconsistentValidationError(error.SpecError):
class InconsistentValidationError(spack.error.SpecError):
"""Raised if the wrong validator is used to validate a variant."""
def __init__(self, vspec, variant):
@@ -910,7 +910,7 @@ def __init__(self, vspec, variant):
super().__init__(msg.format(vspec, variant))
class MultipleValuesInExclusiveVariantError(error.SpecError, ValueError):
class MultipleValuesInExclusiveVariantError(spack.error.SpecError, ValueError):
"""Raised when multiple values are present in a variant that wants
only one.
"""
@@ -922,15 +922,15 @@ def __init__(self, variant: AbstractVariant, pkg_name: Optional[str] = None):
super().__init__(msg.format(variant, pkg_info))
class InvalidVariantValueCombinationError(error.SpecError):
class InvalidVariantValueCombinationError(spack.error.SpecError):
"""Raised when a variant has values '*' or 'none' with other values."""
class InvalidVariantValueError(error.SpecError):
class InvalidVariantValueError(spack.error.SpecError):
"""Raised when variants have invalid values."""
class UnsatisfiableVariantSpecError(error.UnsatisfiableSpecError):
class UnsatisfiableVariantSpecError(spack.error.UnsatisfiableSpecError):
"""Raised when a spec variant conflicts with package constraints."""
def __init__(self, provided, required):


@@ -334,7 +334,7 @@ e4s-rocm-external-build:
e4s-oneapi-generate:
extends: [ ".e4s-oneapi", ".generate-x86_64"]
image: ghcr.io/spack/spack/ubuntu22.04-runner-amd64-oneapi-2024.2:2024.09.06
image: ghcr.io/spack/spack/ubuntu22.04-runner-amd64-oneapi-2025.1:2025.03.24
e4s-oneapi-build:
extends: [ ".e4s-oneapi", ".build" ]


@@ -34,54 +34,28 @@ spack:
variants: +termlib
openblas:
variants: threads=openmp
require: 'cppflags="-O1" target=x86_64_v3 %oneapi'
trilinos:
variants: +amesos +amesos2 +anasazi +aztec +belos +boost +epetra +epetraext +ifpack +ifpack2 +intrepid +intrepid2 +isorropia +kokkos +ml +minitensor +muelu +nox +piro +phalanx +rol +rythmos +sacado +stk +shards +shylu +stokhos +stratimikos +teko +tempus +tpetra +trilinoscouplings +zoltan +zoltan2 +superlu-dist gotype=long_long
variants: +amesos +amesos2 +anasazi +aztec +belos +boost +epetra +epetraext
+ifpack +ifpack2 +intrepid +intrepid2 +isorropia +kokkos +ml +minitensor +muelu
+nox +piro +phalanx +rol +rythmos +sacado +stk +shards +shylu +stokhos +stratimikos
+teko +tempus +tpetra +trilinoscouplings +zoltan +zoltan2 +superlu-dist gotype=long_long
xz:
variants: +pic
gcc-runtime:
require: 'target=x86_64_v3 %gcc'
binutils:
variants: +ld +gold +headers +libiberty ~nls
rust:
require: 'target=x86_64_v3 %gcc'
bison:
require: 'target=x86_64_v3 %gcc'
raja:
variants: +plugins
mpi:
require: intel-oneapi-mpi
intel-oneapi-mpi:
buildable: false
externals:
- spec: intel-oneapi-mpi@2021.13.1
prefix: /opt/intel/oneapi
unzip:
require:
- 'target=x86_64_v3 %gcc'
py-maturin:
require:
- 'target=x86_64_v3 %gcc'
binutils:
require:
- 'target=x86_64_v3 %gcc'
variants: +ld +gold +headers +libiberty ~nls
llvm:
require:
- 'target=x86_64_v3 %gcc'
ruby:
require:
- 'target=x86_64_v3 %gcc'
rust:
require:
- 'target=x86_64_v3 %gcc'
krb5:
require:
- 'target=x86_64_v3 %gcc'
openssh:
require:
- 'target=x86_64_v3 %gcc'
dyninst:
require:
- 'target=x86_64_v3 %gcc'
bison:
require:
- 'target=x86_64_v3 %gcc'
paraview:
require:
- +examples target=x86_64_v3
specs:
# CPU
- adios
- alquimia
- aml
@@ -95,15 +69,12 @@ spack:
- butterflypack
- cabana
- caliper
- chai
- chai tests=none
- charliecloud
- conduit
- datatransferkit
- dealii ~vtk ~taskflow cflags=-O0 cxxflags=-O0 # +vtk +taskflow: ^taskflow: CMake Error at CMakeLists.txt:81 (message): Taskflow currently supports the following compilers: g++ v7.0 or above, clang++ v6.0 or above; use -O0 to work around compiler failure
- drishti
- dxt-explorer
- ecp-data-vis-sdk ~cuda ~rocm +adios2 ~ascent +cinema +darshan +faodel +hdf5 +paraview +pnetcdf +sz +unifyfs +veloc ~visit +vtkm +zfp # +ascent: fides: fides/xgc/XGCCommon.cxx:233:10: error: no member named 'iota' in namespace 'std'; +visit: visit_vtk/lightweight/vtkSkewLookupTable.C:32:10: error: cannot initialize return object of type 'unsigned char *' with an rvalue of type 'const unsigned char *'
- exaworks
- e4s-alc
- e4s-cl
- flecsi
- flit
- flux-core
@@ -111,26 +82,24 @@ spack:
- gasnet
- ginkgo
- globalarrays
- glvis ^llvm
- gmp
- gotcha
- gptune ~mpispawn
- gromacs
- h5bench
- hdf5-vol-async
- hdf5-vol-cache
- hdf5-vol-log
- heffte +fftw
- hpctoolkit
- hpx networking=mpi
- hypre
- kokkos +openmp
- kokkos-kernels +openmp
- laghos
- lammps
- laghos ^mfem~cuda
- lammps +amoeba +asphere +bocs +body +bpm +brownian +cg-dna +cg-spica +class2 +colloid +colvars +compress +coreshell +dielectric +diffraction +dipole +dpd-basic +dpd-meso +dpd-react +dpd-smooth +drude +eff +electrode +extra-compute +extra-dump +extra-fix +extra-molecule +extra-pair +fep +granular +interlayer +kspace +lepton +machdyn +manybody +mc +meam +mesont +misc +ml-iap +ml-pod +ml-snap +mofff +molecule +openmp-package +opt +orient +peri +phonon +plugin +poems +qeq +reaction +reaxff +replica +rigid +shock +sph +spin +srd +tally +uef +voronoi +yaff
- legion
- libceed
- libnrm
- libpressio +bitgrooming +bzip2 ~cuda ~cusz +fpzip +hdf5 +libdistributed +lua +openmp +python +sz +sz3 +unix +zfp
- libquo
- libunwind
- loki
@@ -142,14 +111,16 @@ spack:
- mpifileutils ~xattr
- nccmp
- nco
- nekbone +mpi
- netcdf-fortran
- netlib-scalapack
- nrm ^py-scipy cflags="-Wno-error=incompatible-function-pointer-types" # py-scipy@1.8.1 fails without cflags here
- nrm
- nwchem
- omega-h
- openfoam
- openmpi
- openpmd-api
- papi
- papi target=x86_64_v3
- papyrus
- parsec ~cuda
- pdt
@@ -159,12 +130,11 @@ spack:
- precice
- pruners-ninja
- pumi
- py-amrex ~ipo # oneAPI 2024.2.0 builds do not support IPO/LTO says CMake, even though pybind11 strongly encourages it
- py-h5py
- py-jupyterhub
- py-libensemble
- py-petsc4py
- qthreads scheduler=distrib
- quantum-espresso
- raja
- rempi
- scr
@@ -197,70 +167,73 @@ spack:
- hdf5
- libcatalyst
- parallel-netcdf
- paraview
- py-cinemasci
- sz
- unifyfs
- veloc
# - visit # visit: +visit: visit_vtk/lightweight/vtkSkewLookupTable.C:32:10: error: cannot initialize return object of type 'unsigned char *' with an rvalue of type 'const unsigned char *'
- vtk-m ~openmp
- warpx +python ~python_ipo ^py-amrex ~ipo # oneAPI 2024.2.0 builds do not support IPO/LTO says CMake, even though pybind11 strongly encourages it
- vtk-m ~openmp # +openmp: https://github.com/spack/spack/issues/31830
- zfp
# --
# - chapel ~cuda ~rocm # llvm: closures.c:(.text+0x305e): undefined reference to `_intel_fast_memset'
# - cp2k +mpi # dbcsr: dbcsr_api.F(973): #error: incomplete macro call DBCSR_ABORT.
# - fftx # fftx: https://github.com/spack/spack/issues/47048
# - geopm-runtime # libelf: configure: error: installation or configuration problem: C compiler cannot create executables.
# - hpctoolkit # dyninst@13.0.0%gcc: libiberty/./d-demangle.c:142: undefined reference to `_intel_fast_memcpy'
# - lbann # 2024.2 internal compiler error
# - plasma # 2024.2 internal compiler error
# - quantum-espresso # quantum-espresso: external/mbd/src/mbd_c_api.F90(392): error #6645: The name of the module procedure conflicts with a name in the encompassing scoping unit. [F_C_STRING]
# - wps # wps: InstallError: Compiler not recognized nor supported.
# - paraview +qt # llvm-17.0.6: https://github.com/spack/spack/issues/49625
# - py-cinemasci # llvm-14.0.6: https://github.com/spack/spack/issues/49625
# - visit # llvm-17.0.6: https://github.com/spack/spack/issues/49625
# --
# - chapel ~cuda ~rocm # llvm-19.1.7: https://github.com/spack/spack/issues/49625
# - cp2k +mpi # dbcsr-2.8.0: FAILED: src/CMakeFiles/dbcsr.dir/dbcsr_api.F-pp.f src/CMakeFiles/dbcsr.dir/dbcsr_api.F.o.ddi:
# - dealii # taskflow@3.7.0: cmake: Taskflow currently supports the following compilers: g++ v7.0 or above, clang++ v6.0 or above
# - exago +mpi ~ipopt +hiop ~python +raja ^hiop+raja~sparse # raja-0.14.0: RAJA/pattern/kernel/Tile.hpp:174:30: error: no member named 'block_id' in 'IterableTiler<Iterable>'
# - exaworks # py-maturin: rust-lld: error: undefined symbol: _intel_fast_memcpy
# - fftx # fftx-1.2.0: https://github.com/spack/spack/issues/49621
# - fpm # fpm-0.10.0: /tmp/ifx1305151083OkWTRB/ifxqBG60i.i90: error #6405: The same named entity from different modules and/or program units cannot be referenced. [TOML_TABLE]; fpm.F90(32048): error #7002: Error in opening the compiled module file. Check INCLUDE paths. [FPM_MANIFEST_PREPROCESS]
# - geopm-runtime # concretize: c-blosc2: conflicts with '%oneapi';
# - glvis # llvm-17.0.6: https://github.com/spack/spack/issues/49625
# - gptune ~mpispawn # llvm-14.0.6: https://github.com/spack/spack/issues/49625
# - lbann # lbann-0.104: https://github.com/spack/spack/issues/49619
# - libpressio +bitgrooming +bzip2 ~cuda ~cusz +fpzip +hdf5 +libdistributed +lua +openmp +python +sz +sz3 +unix +zfp # concretize: c-blosc2: conflicts with '%oneapi';
# - nek5000 +mpi ~visit # nek5000-19.0: RuntimeError: Cannot build example: short_tests/eddy.
# - plasma # concretizer: requires("%gcc@4.9:", when="@17.1:")
# - py-deephyper # py-numpy-1.25.2: numpy/distutils/checks/cpu_avx512_knl.c:21:37: error: call to undeclared function '_mm512_exp2a23_pd'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
# - py-jupyterhub # py-maturin: rust-lld: error: undefined symbol: _intel_fast_memcpy
# - warpx ~qed +python ~python_ipo compute=sycl ^py-amrex ~ipo # warpx-25.03: https://github.com/spack/spack/issues/49620
# - wps # wps-4.5: InstallError: Compiler not recognized nor supported.
# PYTHON PACKAGES
- opencv +python3
- py-jupyterlab
- py-mpi4py
- py-notebook
- py-numba
- py-numpy
- py-openai
- py-pandas
- py-plotly
- py-pooch
- py-pytest
- py-scikit-learn
- py-scipy
- py-seaborn
# - py-horovod # error
# - py-jax # error
# - py-matplotlib # error
# - py-tensorflow # error
# - py-torch # error
# --
# - py-jupyterlab # py-maturin: rust-lld: error: undefined symbol: _intel_fast_memcpy
# - py-notebook # py-maturin: rust-lld: error: undefined symbol: _intel_fast_memcpy
# - py-numba # llvm-14.0.6: https://github.com/spack/spack/issues/49625
# - py-pandas # llvm-14.0.6: https://github.com/spack/spack/issues/49625
# - py-plotly # py-maturin: rust-lld: error: undefined symbol: _intel_fast_memcpy
# GPU
- aml +level_zero
- amrex +sycl
- arborx +sycl ^kokkos +sycl +openmp cxxstd=17 +examples
- cabana +sycl ^kokkos +sycl +openmp cxxstd=17 +examples
- arborx +sycl ^kokkos +sycl +openmp cxxstd=17
- cabana +sycl ^kokkos +sycl +openmp cxxstd=17
- ginkgo +sycl
- heffte +sycl
- kokkos +sycl +openmp cxxstd=17 +examples
- kokkos-kernels build_type=Release %oneapi ^kokkos +sycl +openmp cxxstd=17 +examples
- petsc +sycl
- sundials +sycl cxxstd=17 +examples-install
- tau +mpi +opencl +level_zero ~pdt +syscall # requires libdrm.so to be installed
- kokkos +sycl +openmp cxxstd=17
- sundials +sycl +examples-install cxxstd=17
- tau +mpi +opencl +level_zero +syscall
- upcxx +level_zero
- warpx ~qed +python ~python_ipo compute=sycl ^py-amrex ~ipo # qed for https://github.com/ECP-WarpX/picsar/pull/53 prior to 24.09 release; ~ipo for oneAPI 2024.2.0 GPU builds do not support IPO/LTO says CMake, even though pybind11 strongly encourages it
# --
# - hpctoolkit +level_zero # dyninst@12.3.0%gcc: /usr/bin/ld: libiberty/./d-demangle.c:142: undefined reference to `_intel_fast_memcpy'; can't mix intel-tbb@%oneapi with dyninst%gcc
# - slate +sycl # slate: ifx: error #10426: option '-fopenmp-targets' requires '-fiopenmp'
# - heffte +sycl # heffte-2.4.1: intel-oneapi-mkl-2024.2.2-yevdna3rfdezkcm6vz4r3gtrz4grkts3/mkl/2024.2/lib//libmkl_sycl_rng.so: undefined reference to `__host_std::sycl_host_floor(sycl::_V1::vec<float, 4>)'
# - hpctoolkit +level_zero # hpctoolkit-2024.01.1: gpu/intel/level0/level0-command-process.c:94:61: error: illegal initializer type 'atomic_int' (aka '_Atomic(int)')
# - hypre +sycl # hypre-2.32.0: gpu/intel/level0/level0-command-process.c:94:61: error: illegal initializer type 'atomic_int' (aka '_Atomic(int)')
# - kokkos-kernels ^kokkos +sycl +openmp cxxstd=17 # kokkos-kernels-4.5.01: /opt/intel/oneapi/compiler/2025.1/bin/compiler/../../include/sycl/group_algorithm.hpp:598:31: error: SYCL kernel cannot call an undefined function without SYCL_EXTERNAL attribute
# - petsc +sycl # kokkos-kernels-4.4.01: /opt/intel/oneapi/compiler/2025.1/bin/compiler/../../include/sycl/group_algorithm.hpp:598:31: error: SYCL kernel cannot call an undefined function without SYCL_EXTERNAL attribute
# - slate +sycl # blaspp-2024.10.26: blas/device.hh:46:14: fatal error: 'sycl.hpp' file not found
ci:
pipeline-gen:
- build-job:
image: ghcr.io/spack/spack/ubuntu22.04-runner-amd64-oneapi-2024.2:2024.09.06
image: ghcr.io/spack/spack/ubuntu22.04-runner-amd64-oneapi-2025.1:2025.03.24
cdash:
build-group: E4S OneAPI


@@ -27,9 +27,8 @@ spack:
- py-transformers
# JAX
# Does not yet support Spack-installed ROCm
# - py-jax
# - py-jaxlib
- py-jax
- py-jaxlib
# Keras
- py-keras backend=tensorflow


@@ -110,9 +110,9 @@ class Corge
f.write(corge_h)
with open("%s/corge/corgegator.cc" % self.stage.source_path, "w", encoding="utf-8") as f:
f.write(corgegator_cc)
gpp = which("/usr/bin/g++")
gpp = which("g++")
if sys.platform == "darwin":
gpp = which("/usr/bin/clang++")
gpp = which("clang++")
gpp(
"-Dcorge_EXPORTS",
"-I%s" % self.stage.source_path,

View File

@@ -28,11 +28,11 @@ class Abacus(MakefilePackage):
version("2.2.1", sha256="14feca1d8d1ce025d3f263b85ebfbebc1a1efff704b6490e95b07603c55c1d63")
version("2.2.0", sha256="09d4a2508d903121d29813a85791eeb3a905acbe1c5664b8a88903f8eda64b8f")
variant("openmp", default=True, description="Enable OpenMP support")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
variant("openmp", default=True, description="Enable OpenMP support")
depends_on("elpa+openmp", when="+openmp")
depends_on("elpa~openmp", when="~openmp")
depends_on("cereal")


@@ -43,10 +43,6 @@ class Abinit(AutotoolsPackage):
version("8.6.3", sha256="82e8d071088ab8dc1b3a24380e30b68c544685678314df1213180b449c84ca65")
version("8.2.2", sha256="e43544a178d758b0deff3011c51ef7c957d7f2df2ce8543366d68016af9f3ea1")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
depends_on("fortran", type="build") # generated
variant("mpi", default=True, description="Builds with MPI support. Requires MPI2+")
variant("openmp", default=False, description="Enables OpenMP threads. Use threaded FFTW3")
variant("scalapack", default=False, description="Enables scalapack support. Requires MPI")
@@ -65,6 +61,10 @@ class Abinit(AutotoolsPackage):
variant("install-tests", default=False, description="Install test cases")
# Add dependencies
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
depends_on("fortran", type="build") # generated
depends_on("atompaw")
depends_on("blas")
depends_on("lapack")
@@ -209,7 +209,7 @@ def configure_args(self):
linalg = spec["lapack"].libs + spec["blas"].libs
# linalg_flavor is selected using the virtual lapack provider
is_using_intel_libraries = spec["lapack"].name in INTEL_MATH_LIBRARIES
is_using_intel_libraries = spec["lapack"].name == "intel-oneapi-mkl"
# These *must* be elifs, otherwise spack's lapack provider is ignored
# linalg_flavor ends up as "custom", which is not supported by abinit@9.10.3:


@@ -68,8 +68,6 @@ class AbseilCpp(CMakePackage):
version("20181200", sha256="e2b53bfb685f5d4130b84c4f3050c81bf48c497614dc85d91dbd3ed9129bce6d")
version("20180600", sha256="794d483dd9a19c43dc1fbbe284ce8956eb7f2600ef350dac4c602f9b4eb26e90")
depends_on("cxx", type="build") # generated
# Avoid export of testonly target absl::test_allocator in CMake builds
patch(
"https://github.com/abseil/abseil-cpp/commit/779a3565ac6c5b69dd1ab9183e500a27633117d5.patch?full_index=1",
@@ -88,6 +86,8 @@ class AbseilCpp(CMakePackage):
description="C++ standard used during compilation",
)
depends_on("cxx", type="build") # generated
depends_on("cmake@3.16:", when="@20240722:", type="build")
depends_on("cmake@3.10:", when="@20220907:", type="build")
depends_on("cmake@3.5:", when="@20190312:", type="build")


@@ -36,13 +36,13 @@ class Abyss(AutotoolsPackage):
version("2.0.2", sha256="d87b76edeac3a6fb48f24a1d63f243d8278a324c9a5eb29027b640f7089422df")
version("1.5.2", sha256="8a52387f963afb7b63db4c9b81c053ed83956ea0a3981edcad554a895adf84b1")
depends_on("c", type="build")
depends_on("cxx", type="build")
variant(
"maxk", default=128, values=is_multiple_32, description="set the maximum k-mer length."
)
depends_on("c", type="build")
depends_on("cxx", type="build")
depends_on("autoconf", type="build")
depends_on("automake", type="build")
depends_on("bwa", type="run")


@@ -17,11 +17,11 @@ class Accfft(CMakePackage, CudaPackage):
version("develop", branch="master")
depends_on("cxx", type="build") # generated
variant("pnetcdf", default=True, description="Add support for parallel NetCDF")
variant("shared", default=True, description="Enables the build of shared libraries")
depends_on("cxx", type="build") # generated
# See: http://accfft.org/articles/install/#installing-dependencies
depends_on("fftw precision=float,double ~mpi+openmp")

View File

@@ -16,14 +16,13 @@ class ActsAlgebraPlugins(CMakePackage):
license("MPL-2.0", checked_by="stephenswat")
version("0.27.0", sha256="c2081b399b7f4e004bebd5bf8250ed9596b113002fe445bca7fdac24d2c5932c")
version("0.26.2", sha256="0170f22e1a75493b86464f27991117bc2c5a9d52554c75786e321d4c591990e7")
version("0.26.1", sha256="8eb1e9e28ec2839d149b6a6bddd0f983b0cdf71c286c0aeb67ede31727c5b7d3")
version("0.26.0", sha256="301702e3d0a3d12e46ae6d949f3027ddebd0b1167cbb3004d9a4a5697d3adc7f")
version("0.25.0", sha256="bb0cba6e37558689d780a6de8f749abb3b96f8cd9e0c8851474eb4532e1e98b8")
version("0.24.0", sha256="f44753e62b1ba29c28ab86b282ab67ac6028a0f9fe41e599b7fc6fc50b586b62")
depends_on("cxx", type="build") # generated
_cxxstd_values = (
conditional("17", when="@:0.25"),
conditional("20", when="@0:"),
@@ -38,10 +37,14 @@ class ActsAlgebraPlugins(CMakePackage):
variant("vc", default=False, description="Enables the Vc plugin")
variant("fastor", default=False, description="Enables the Fastor plugin")
depends_on("cxx", type="build") # generated
depends_on("cmake@3.14:", type="build")
depends_on("vecmem@1.5.0:", when="+vecmem")
depends_on("vecmem@1.14.0:", when="+vecmem @0.27:")
depends_on("eigen@3.4.0:", when="+eigen")
depends_on("vc@1.4.3:", when="+vc")
depends_on("vc@1.4.5:", when="+vc @0.27:")
depends_on("root@6.18.0:", when="+smatrix")
depends_on("fastor@0.6.4:", when="+fastor")

View File

@@ -196,9 +196,6 @@ class Acts(CMakePackage, CudaPackage):
version("0.08.1", commit="289bdcc320f0b3ff1d792e29e462ec2d3ea15df6")
version("0.08.0", commit="99eedb38f305e3a1cd99d9b4473241b7cd641fa9")
depends_on("c", type="build", when="+dd4hep") # DD4hep requires C
depends_on("cxx", type="build")
# Variants that affect the core Acts library
variant(
"benchmarks", default=False, description="Build the performance benchmarks", when="@0.16:"
@@ -350,6 +347,8 @@ class Acts(CMakePackage, CudaPackage):
variant("analysis", default=False, description="Build analysis applications in the examples")
# Build dependencies
depends_on("c", type="build", when="+dd4hep") # DD4hep requires C
depends_on("cxx", type="build")
depends_on("acts-dd4hep", when="@19 +dd4hep")
with when("+svg"):
depends_on("actsvg@0.4.20:", when="@20.1:")

View File

@@ -27,10 +27,6 @@ class Adios(AutotoolsPackage):
version("1.10.0", sha256="6713069259ee7bfd4d03f47640bf841874e9114bab24e7b0c58e310c42a0ec48")
version("1.9.0", sha256="23b2bb70540d51ab0855af0b205ca484fd1bd963c39580c29e3133f9e6fffd46")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
depends_on("fortran", type="build") # generated
variant("shared", default=True, description="Builds a shared version of the library")
variant("fortran", default=False, description="Enable Fortran bindings support")
@@ -60,6 +56,10 @@ class Adios(AutotoolsPackage):
description="Enable dataspaces and/or flexpath staging transports",
)
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
depends_on("fortran", type="build") # generated
depends_on("autoconf", type="build")
depends_on("automake", type="build")
depends_on("m4", type="build")

View File

@@ -4,7 +4,6 @@
import os
import sys
import tempfile
from spack.build_systems.cmake import CMakeBuilder
from spack.package import *
@@ -48,10 +47,6 @@ class Adios2(CMakePackage, CudaPackage, ROCmPackage):
version("2.6.0", sha256="45b41889065f8b840725928db092848b8a8b8d1bfae1b92e72f8868d1c76216c")
version("2.5.0", sha256="7c8ff3bf5441dd662806df9650c56a669359cb0185ea232ecb3578de7b065329")
depends_on("c", type="build")
depends_on("cxx", type="build")
depends_on("fortran", type="build")
# There's not really any consistency about how static and shared libs are
# implemented across spack. What we're trying to support is specifically three
# library build types:
@@ -120,6 +115,10 @@ class Adios2(CMakePackage, CudaPackage, ROCmPackage):
# ifx does not support submodules in separate files
conflicts("%oneapi@:2022.1.0", when="+fortran")
depends_on("c", type="build")
depends_on("cxx", type="build")
depends_on("fortran", type="build")
depends_on("cmake@3.12.0:", type="build")
# Standalone CUDA support
@@ -390,35 +389,29 @@ def test_run_executables(self):
exe = which(join_path(self.prefix.bin, cmd))
exe(*opts)
def test_examples(self):
"""Build and run an example program"""
src_dir = self.test_suite.current_test_cache_dir.testing.install.C
test_stage_dir = self.test_suite.test_dir_for_spec(self.spec)
def test_install(self):
"""Build and run an install tests"""
srcdir = self.test_suite.current_test_cache_dir.testing.install.C
blddir = self.test_suite.current_test_cache_dir.build_dir
# Create the build tree within this spec's test stage dir so it gets
# cleaned up automatically
build_dir = tempfile.mkdtemp(dir=test_stage_dir)
cmake = Executable(spec["cmake"].prefix.bin.cmake)
std_cmake_args = []
cmake = Executable(self.spec["cmake"].prefix.bin.cmake)
cmake_args = []
if self.spec.satisfies("+mpi"):
mpi_exec = join_path(self.spec["mpi"].prefix, "bin", "mpiexec")
std_cmake_args.append(f"-DMPIEXEC_EXECUTABLE={mpi_exec}")
cmake_args.append(f"-DMPIEXEC_EXECUTABLE={mpi_exec}")
built_programs = ["adios_c_mpi_test", "adios_adios2c_test", "adios_c_test"]
with working_dir(build_dir):
with test_part(
self, "test_examples_build", purpose="build example against installed adios2"
):
cmake(src_dir, *std_cmake_args)
with working_dir(blddir, create=True):
with test_part(self, "test_install_build", purpose="ADIOS2 install test build app"):
cmake(srcdir, *cmake_args)
cmake(*(["--build", "."]))
for p in built_programs:
exe = which(join_path(".", p))
for binary in ["adios_c_mpi_test", "adios_adios2c_test", "adios_c_test"]:
exe = which(join_path(".", binary))
if exe:
with test_part(
self, f"test_examples_run_{p}", purpose=f"run built adios2 example {p}"
self,
f"test_install_run_{binary}",
purpose=f"ADIOS2 install test run {binary}",
):
exe()

View File

@@ -29,9 +29,6 @@ class AdolC(AutotoolsPackage):
version("2.5.1", sha256="dedb93c3bb291366d799014b04b6d1ec63ca4e7216edf16167776c07961e3b4a")
version("2.5.0", sha256="9d51c426d831884aac8f418be410c001eb62f3a11cb8f30c66af0b842edffb96")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
variant(
"advanced_branching",
default=False,
@@ -55,6 +52,9 @@ class AdolC(AutotoolsPackage):
variant("boost", default=False, description="Enable boost")
# Build dependencies
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
depends_on("automake", type="build", when="@develop")
depends_on("autoconf", type="build", when="@develop")
depends_on("libtool", type="build", when="@develop")

View File

@@ -25,9 +25,6 @@ class Akantu(CMakePackage):
version("master", branch="master")
version("3.0.0", sha256="7e8f64e25956eba44def1b2d891f6db8ba824e4a82ff0d51d6b585b60ab465db")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
variant(
"external_solvers",
values=any_combination_of("mumps", "petsc"),
@@ -36,6 +33,9 @@ class Akantu(CMakePackage):
variant("mpi", default=True, description="Activates parallel capabilities")
variant("python", default=False, description="Activates python bindings")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
depends_on("boost@:1.66", when="@:3.0")
depends_on(Boost.with_default_variants)
depends_on("lapack")

View File

@@ -20,8 +20,6 @@ class Albany(CMakePackage):
version("develop", branch="master")
depends_on("cxx", type="build") # generated
variant("lcm", default=True, description="Enable LCM")
variant("aeras", default=False, description="Enable AERAS")
variant("qcad", default=False, description="Enable QCAD")
@@ -39,6 +37,8 @@ class Albany(CMakePackage):
variant("64bit", default=True, description="Enable 64BIT")
# Add dependencies
depends_on("cxx", type="build") # generated
depends_on("mpi")
depends_on(
"trilinos"

View File

@@ -20,11 +20,11 @@ class Alembic(CMakePackage):
version("1.8.5", sha256="180a12f08d391cd89f021f279dbe3b5423b1db751a9898540c8059a45825c2e9")
version("1.7.16", sha256="2529586c89459af34d27a36ab114ad1d43dafd44061e65cfcfc73b7457379e7c")
depends_on("cxx", type="build") # generated
variant("python", default=False, description="Python support")
variant("hdf5", default=False, description="HDF5 support")
depends_on("cxx", type="build") # generated
depends_on("cmake@2.8.11:", type="build")
depends_on("openexr@2.2.0:")
depends_on("hdf5@1.8.9:", when="+hdf5")

View File

@@ -48,8 +48,6 @@ class Alpaka(CMakePackage, CudaPackage):
deprecated=True,
)
depends_on("cxx", type="build") # generated
variant(
"backend",
multi=True,
@@ -73,6 +71,8 @@ class Alpaka(CMakePackage, CudaPackage):
variant("examples", default=False, description="Build alpaka examples")
depends_on("cxx", type="build") # generated
depends_on("boost@1.65.1:", when="@0.4.0:0.8.0")
depends_on("boost@1.74:", when="@0.9.0:")

View File

@@ -21,12 +21,12 @@ class Alquimia(CMakePackage):
version("1.0.10", commit="b2c11b6cde321f4a495ef9fcf267cb4c7a9858a0") # tag v.1.0.10
version("1.0.9", commit="2ee3bcfacc63f685864bcac2b6868b48ad235225") # tag v.1.0.9
variant("shared", default=True, description="Enables the build of shared libraries")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
depends_on("fortran", type="build") # generated
variant("shared", default=True, description="Enables the build of shared libraries")
depends_on("mpi")
depends_on("hdf5")
depends_on("pflotran@5.0.0", when="@1.1.0")

View File

@@ -19,12 +19,12 @@ class AlsaLib(AutotoolsPackage):
version("1.2.2", sha256="d8e853d8805574777bbe40937812ad1419c9ea7210e176f0def3e6ed255ab3ec")
version("1.1.4.1", sha256="91bb870c14d1c7c269213285eeed874fa3d28112077db061a3af8010d0885b76")
depends_on("c", type="build") # generated
variant("python", default=False, description="enable python")
patch("python.patch", when="@1.1.4:1.1.5 +python")
depends_on("c", type="build") # generated
depends_on("python", type=("link", "run"), when="+python")
conflicts("platform=darwin", msg="ALSA only works for Linux")

View File

@@ -32,8 +32,6 @@ class Aluminum(CachedCMakePackage, CudaPackage, ROCmPackage):
version("1.3.0", sha256="d0442efbebfdfb89eec793ae65eceb8f1ba65afa9f2e48df009f81985a4c27e3")
version("1.2.3", sha256="9b214bdf30f9b7e8e017f83e6615db6be2631f5be3dd186205dbe3aa62f4018a")
depends_on("cxx", type="build") # generated
# Library capabilities
variant(
"cuda_rma",
@@ -89,6 +87,8 @@ class Aluminum(CachedCMakePackage, CudaPackage, ROCmPackage):
conflicts("+cuda", when="+rocm", msg="CUDA and ROCm support are mutually exclusive")
depends_on("cxx", type="build") # generated
depends_on("mpi")
depends_on("cmake@3.21.0:", type="build", when="@1.0.1:")

View File

@@ -52,9 +52,6 @@ class Amdfftw(FftwBase):
version("3.0", sha256="a69deaf45478a59a69f77c4f7e9872967f1cfe996592dd12beb6318f18ea0bcd")
version("2.2", sha256="de9d777236fb290c335860b458131678f75aa0799c641490c644c843f0e246f8")
depends_on("c", type="build") # generated
depends_on("fortran", type="build") # generated
variant("shared", default=True, description="Builds a shared version of the library")
variant("openmp", default=True, description="Enable OpenMP support")
variant("threads", default=False, description="Enable SMP threads support")
@@ -107,6 +104,9 @@ class Amdfftw(FftwBase):
" to execute on different x86 CPU architectures",
)
depends_on("c", type="build") # generated
depends_on("fortran", type="build") # generated
depends_on("texinfo")
provides("fftw-api@3")

View File

@@ -61,10 +61,6 @@ class Amdlibflame(CMakePackage, LibflameBase):
version("3.0", sha256="d94e08b688539748571e6d4c1ec1ce42732eac18bd75de989234983c33f01ced")
version("2.2", sha256="12b9c1f92d2c2fa637305aaa15cf706652406f210eaa5cbc17aaea9fcfa576dc")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
depends_on("fortran", type="build") # generated
variant("ilp64", default=False, when="@3.0.1: ", description="Build with ILP64 support")
variant(
"vectorization",
@@ -99,6 +95,10 @@ class Amdlibflame(CMakePackage, LibflameBase):
provides("flame@5.2", when="@2:")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
depends_on("fortran", type="build") # generated
depends_on("python+pythoncmd", type="build")
depends_on("gmake@4:", when="@3.0.1,3.1:", type="build")

View File

@@ -43,12 +43,12 @@ class Amdlibm(SConsPackage):
version("3.0", sha256="eb26b5e174f43ce083928d0d8748a6d6d74853333bba37d50057aac2bef7c7aa")
version("2.2", commit="4033e022da428125747e118ccd6fdd9cee21c470")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
variant("verbose", default=False, description="Building with verbosity", when="@:4.1")
# Mandatory dependencies
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
depends_on("python@3.6.1:", type=("build", "run"))
depends_on("scons@3.1.2:", type=("build"))
depends_on("mpfr", type=("link"))

View File

@@ -19,6 +19,7 @@ class Amdsmi(CMakePackage):
libraries = ["libamd_smi"]
license("MIT")
version("6.3.3", sha256="e23abc65a1cd75764d7da049b91cce2a095b287279efcd4f90b4b9b63b974dd5")
version("6.3.2", sha256="1ed452eedfe51ac6e615d7bfe0bd7a0614f21113874ae3cbea7df72343cc2d13")
version("6.3.1", sha256="a3a5a711052e813b9be9304d5e818351d3797f668ec2a455e61253a73429c355")
version("6.3.0", sha256="7234c46648938239385cd5db57516ed53985b8c09d2f0828ae8f446386d8bd1e")

View File

@@ -23,12 +23,12 @@ class Amg2013(MakefilePackage):
version("1.1", tag="1.1", commit="09fe8a78baf6ba5eaef7d2804f7b653885d60fee")
version("1.0", tag="1.0", commit="f5b864708ca3ef48a86e1e46fcb812cbbfa80c51")
depends_on("c", type="build") # generated
variant("openmp", default=True, description="Build with OpenMP support")
variant("optflags", default=False, description="Additional optimizations")
variant("int64", default=False, description="Use 64-bit integers for global variables")
depends_on("c", type="build") # generated
depends_on("mpi")
@property

View File

@@ -20,12 +20,12 @@ class Amg2023(CMakePackage, CudaPackage, ROCmPackage):
version("develop", branch="main")
depends_on("c", type="build") # generated
variant("mpi", default=True, description="Enable MPI support")
variant("openmp", default=False, description="Enable OpenMP support")
variant("caliper", default=False, description="Enable Caliper monitoring")
depends_on("c", type="build") # generated
depends_on("mpi", when="+mpi")
depends_on("hypre+mpi", when="+mpi")
requires("+mpi", when="^hypre+mpi")

View File

@@ -28,14 +28,14 @@ class Amgx(CMakePackage, CudaPackage):
version("2.0.1", sha256="6f9991f1836fbf4ba2114ce9f49febd0edc069a24f533bd94fd9aa9be72435a7")
version("2.0.0", sha256="8ec7ea8412be3de216fcf7243c4e2a8bcf76878e6865468e4238630a082a431b")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
variant("cuda", default=True, description="Build with CUDA")
variant("mpi", default=True, description="Enable MPI support")
variant("mkl", default=False, description="Enable MKL support")
variant("magma", default=False, description="Enable Magma support")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
depends_on("mpi", when="+mpi")
depends_on("mkl", when="+mkl")
depends_on("magma", when="+magma")

View File

@@ -40,8 +40,6 @@ class Aml(AutotoolsPackage):
deprecated=True,
)
depends_on("c", type="build") # generated
# Generate possible variants.
#############################
@@ -64,6 +62,7 @@ class Aml(AutotoolsPackage):
# Dependencies management
#########################
depends_on("c", type="build") # generated
# aml always depends on libnuma
depends_on("numactl")

View File

@@ -21,8 +21,6 @@ class AmqpCpp(CMakePackage):
version("4.3.24", sha256="c3312f8af813cacabf6c257dfaf41bf9e66606bbf7d62d085a9b7da695355245")
version("4.3.19", sha256="ca29bb349c498948576a4604bed5fd3c27d87240b271a4441ccf04ba3797b31d")
depends_on("cxx", type="build") # generated
variant(
"tcp",
default=False,
@@ -32,6 +30,8 @@ class AmqpCpp(CMakePackage):
conflicts("+tcp", when="platform=darwin", msg="TCP module requires Linux")
depends_on("cxx", type="build") # generated
depends_on("cmake@3.5:", type="build")
depends_on("openssl@1.1.1:", when="+tcp", type=("build", "link", "run"))

View File

@@ -257,6 +257,9 @@ class AmrWind(CMakePackage, CudaPackage, ROCmPackage):
depends_on("fftw", when="@2.1: +waves2amr")
depends_on("fftw", when="@3.3.1: +fft")
depends_on("rocrand", when="+rocm")
depends_on("rocprim", when="+rocm")
for arch in CudaPackage.cuda_arch_values:
depends_on("hypre+cuda cuda_arch=%s" % arch, when="+cuda+hypre cuda_arch=%s" % arch)
for arch in ROCmPackage.amdgpu_targets:

View File

@@ -94,10 +94,6 @@ class Amrex(CMakePackage, CudaPackage, ROCmPackage):
version("18.10", sha256="298eba03ef03d617c346079433af1089d38076d6fab2c34476c687740c1f4234")
version("18.09.1", sha256="a065ee4d1d98324b6c492ae20ea63ba12a4a4e23432bf5b3fe9788d44aa4398e")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
depends_on("fortran", type="build") # generated
# Config options
variant(
"dimensions",
@@ -154,6 +150,10 @@ class Amrex(CMakePackage, CudaPackage, ROCmPackage):
variant("sycl", default=False, description="Enable SYCL backend")
# Build dependencies
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
depends_on("fortran", type="build") # generated
depends_on("mpi", when="+mpi")
with when("+linear_solvers"):
depends_on("rocsparse", when="@25.01: +rocm")

View File

@@ -19,8 +19,6 @@ class Amrvis(MakefilePackage):
version("main", branch="main")
depends_on("cxx", type="build") # generated
variant(
"dims",
default="3",
@@ -39,6 +37,8 @@ class Amrvis(MakefilePackage):
variant("debug", default=False, description="Enable debugging features")
variant("profiling", default=False, description="Enable AMReX profiling features")
depends_on("cxx", type="build") # generated
depends_on("gmake", type="build")
depends_on("mpi", when="+mpi")
depends_on("libsm")

View File

@@ -27,9 +27,6 @@ class Ams(CMakePackage, CudaPackage):
submodules=False,
)
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
variant(
"faiss",
default=False,
@@ -50,6 +47,9 @@ class Ams(CMakePackage, CudaPackage):
description="Enable AMSLib verbose output (controlled by environment variable)",
)
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
depends_on("umpire")
depends_on("mpi", when="+mpi")

View File

@@ -21,11 +21,11 @@ class Angsd(MakefilePackage):
version("0.921", sha256="8892d279ce1804f9e17fe2fc65a47e5498e78fc1c1cb84d2ca2527fd5c198772")
version("0.919", sha256="c2ea718ca5a5427109f4c3415e963dcb4da9afa1b856034e25c59c003d21822a")
variant("r", default=True, description="Enable R dependency")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
variant("r", default=True, description="Enable R dependency")
depends_on("htslib")
conflicts("^htslib@1.6:", when="@0.919")

View File

@@ -19,9 +19,6 @@ class Antlr(AutotoolsPackage):
version("2.7.7", sha256="853aeb021aef7586bda29e74a6b03006bcb565a755c86b66032d8ec31b67dbb9")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
# Fixes build with recent versions of GCC
patch("gcc.patch")
@@ -30,6 +27,9 @@ class Antlr(AutotoolsPackage):
variant("python", default=False, description="Enable ANTLR for Python")
variant("pic", default=False, description="Enable fPIC")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
extends("python", when="+python")
depends_on("java", type=("build", "run"), when="+java")

View File

@@ -58,14 +58,14 @@ class Aocc(Package, LlvmDetection, CompilerPackage):
url="https://download.amd.com/developer/eula/aocc-compiler/aocc-compiler-3.2.0.tar",
)
depends_on("c", type="build") # generated
provides("c", "cxx")
provides("fortran")
# Licensing
license_url = "https://www.amd.com/en/developer/aocc/aocc-compiler/eula.html"
depends_on("c", type="build") # generated
depends_on("libxml2")
depends_on("zlib-api")
depends_on("ncurses")

View File

@@ -49,9 +49,6 @@ class AoclCompression(CMakePackage):
)
version("4.2", sha256="a18b3e7f64a8105c1500dda7b4c343e974b5e26bfe3dd838a1c1acf82a969c6f")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
variant("shared", default=True, description="Build shared library")
variant("zlib", default=True, description="Build zlib library")
variant("bzip2", default=True, description="Build bzip2 library")
@@ -74,6 +71,9 @@ class AoclCompression(CMakePackage):
)
variant("enable_fast_math", default=False, description="Enable fast-math optimizations")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
depends_on("cmake@3.22:", type="build")
def cmake_args(self):

View File

@@ -43,8 +43,6 @@ class AoclCrypto(CMakePackage):
)
version("4.2", sha256="2bdbedd8ab1b28632cadff237f4abd776e809940ad3633ad90fc52ce225911fe")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
variant("examples", default=False, description="Build examples")
variant("ipp", default=False, description="Build Intel IPP library")
@@ -55,6 +53,8 @@ class AoclCrypto(CMakePackage):
when="@5.0",
)
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
depends_on("cmake@3.22:", type="build")
depends_on("openssl@3.1.5:")
depends_on("intel-oneapi-ippcp@2021.12.0:", when="+ipp")

View File

@@ -39,9 +39,6 @@ class AoclLibmem(CMakePackage):
)
version("4.2", sha256="4ff5bd8002e94cc2029ef1aeda72e7cf944b797c7f07383656caa93bcb447569")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
variant("logging", default=False, description="Enable/Disable logger")
variant("tunables", default=False, description="Enable/Disable user input")
variant("shared", default=True, description="build shared library")
@@ -60,6 +57,9 @@ class AoclLibmem(CMakePackage):
when="@5.0",
)
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
depends_on("cmake@3.22:", type="build")
@property

View File

@@ -42,9 +42,6 @@ class AoclSparse(CMakePackage):
version("3.0", sha256="1d04ba16e04c065051af916b1ed9afce50296edfa9b1513211a7378e1d6b952e")
version("2.2", sha256="33c2ed6622cda61d2613ee63ff12c116a6cd209c62e54307b8fde986cd65f664")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
variant("shared", default=True, description="Build shared library")
variant("ilp64", default=False, description="Build with ILP64 support")
variant("examples", default=False, description="Build sparse examples")
@@ -58,6 +55,9 @@ class AoclSparse(CMakePackage):
)
variant("openmp", default=True, when="@4.2:", description="Enable OpenMP support")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
for vers in ["4.1", "4.2", "5.0"]:
with when(f"@={vers}"):
depends_on(f"amdblis@={vers}")

View File

@@ -42,13 +42,13 @@ class AoclUtils(CMakePackage):
version("4.2", sha256="1294cdf275de44d3a22fea6fc4cd5bf66260d0a19abb2e488b898aaf632486bd")
version("4.1", sha256="660746e7770dd195059ec25e124759b126ee9f060f43302d13354560ca76c02c")
depends_on("cxx", type="build") # generated
variant("doc", default=False, description="enable documentation")
variant("tests", default=False, description="enable testing")
variant("shared", default=True, when="@4.2:", description="build shared library")
variant("examples", default=False, description="enable examples")
depends_on("cxx", type="build") # generated
depends_on("cmake@3.22:", type="build")
depends_on("doxygen", when="+doc")

View File

@@ -41,15 +41,15 @@ class Apcomp(Package):
version("0.0.2", sha256="cb2e2c4524889408de2dd3d29665512c99763db13e6f5e35c3b55e52948c649c")
version("0.0.1", sha256="cbf85fe58d5d5bc2f468d081386cc8b79861046b3bb7e966edfa3f8e95b998b2")
depends_on("cxx", type="build") # generated
depends_on("fortran", type="build") # generated
variant("openmp", default=True, description="Build with openmp support")
variant("mpi", default=True, description="Build with MPI support")
variant("shared", default=True, description="Build Shared Library")
# set to false for systems that implicitly link mpi
variant("blt_find_mpi", default=True, description="Use BLT CMake Find MPI logic")
depends_on("cxx", type="build") # generated
depends_on("fortran", type="build") # generated
depends_on("cmake@3.9:", type="build")
depends_on("mpi", when="+mpi")
depends_on("llvm-openmp", when="+openmp %apple-clang")

View File

@@ -66,10 +66,6 @@ class Apex(CMakePackage):
deprecated=True,
)
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
depends_on("fortran", type="build") # generated
# Disable some default dependencies on Darwin/OSX
darwin_default = False
if sys.platform != "darwin":
@@ -102,6 +98,10 @@ class Apex(CMakePackage):
variant("examples", default=False, description="Build Examples")
# Dependencies
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
depends_on("fortran", type="build") # generated
depends_on("zlib-api")
depends_on("cmake@3.10.0:", type="build")
depends_on("cmake@3.20.1:", type="build", when="@2.6.2:")

View File

@@ -57,9 +57,9 @@ def _standard_flag(self, *, language, standard):
"cxx": {
"11": [("@4.0:", "-std=c++11")],
"14": [("@6.1:", "-std=c++14")],
"17": [("@6.1:10.0", "-std=c++1z"), ("10.1:", "-std=c++17")],
"20": [("@10.0:13.0", "-std=c++2a"), ("13.1:", "-std=c++20")],
"23": [("13.0:", "-std=c++2b")],
"17": [("@6.1:10.0", "-std=c++1z"), ("@10.1:", "-std=c++17")],
"20": [("@10.0:13.0", "-std=c++2a"), ("@13.1:", "-std=c++20")],
"23": [("@13.0:", "-std=c++2b")],
},
"c": {
"99": [("@4.0:", "-std=c99")],

View File
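Editor's note on the hunk above: the fix adds the missing `@` prefix to version-range strings such as `"10.1:"`, since Spack's spec syntax only treats a string starting with `@` as a version constraint. A small self-contained sketch (assumed table, not Spack's real one) of a sanity check for such entries:

```python
# Minimal sketch with assumed data, not Spack's real flag table: each entry
# pairs a version-range spec with a compiler flag. In Spack's spec syntax a
# version constraint must begin with "@"; "10.1:" without it is malformed.
FLAG_TABLE = {
    "17": [("@6.1:10.0", "-std=c++1z"), ("@10.1:", "-std=c++17")],
    "20": [("@10.0:13.0", "-std=c++2a"), ("@13.1:", "-std=c++20")],
    "23": [("@13.0:", "-std=c++2b")],
}


def malformed_ranges(table):
    """Return (standard, spec) pairs whose spec lacks the leading '@'."""
    return [
        (std, spec)
        for std, entries in table.items()
        for spec, _flag in entries
        if not spec.startswith("@")
    ]


print(malformed_ranges(FLAG_TABLE))  # empty once every entry carries the '@'
```

Against the pre-fix table, this check would have flagged the `"10.1:"`, `"13.1:"`, and `"13.0:"` entries.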

@@ -38,7 +38,4 @@ class AppleGl(AppleGlBase):
provides("gl@4.1")
requires(
"platform=darwin %apple-clang",
msg="Apple-GL is only available on Darwin, when using Apple Clang",
)
requires("platform=darwin", msg="Apple-GL is only available on Darwin")

View File

@@ -12,7 +12,4 @@ class AppleGlu(AppleGlBase):
provides("glu@1.3")
requires(
"platform=darwin %apple-clang",
msg="Apple-GLU is only available on Darwin, when using Apple Clang",
)
requires("platform=darwin", msg="Apple-GL is only available on Darwin")

View File

@@ -18,14 +18,14 @@ class AprUtil(AutotoolsPackage):
version("1.6.0", sha256="483ef4d59e6ac9a36c7d3fd87ad7b9db7ad8ae29c06b9dd8ff22dda1cc416389")
version("1.5.4", sha256="976a12a59bc286d634a21d7be0841cc74289ea9077aa1af46be19d1a6e844c19")
depends_on("c", type="build") # generated
variant("crypto", default=True, description="Enable crypto support")
variant("gdbm", default=False, description="Enable GDBM support")
variant("pgsql", default=False, description="Enable PostgreSQL support")
variant("sqlite", default=False, description="Enable sqlite DBD driver")
variant("odbc", default=False, description="Enalbe ODBC support")
depends_on("c", type="build") # generated
depends_on("apr")
depends_on("expat")
depends_on("iconv")

View File

@@ -8,6 +8,20 @@
from spack.package import *
_versions = {
"6.3.3": {
"apt": (
"5fe2b18e75e8c0a66069af8446399796818f7340a9ef5f2b52adaa79ee8e2a37",
"https://repo.radeon.com/rocm/apt/6.3.3/pool/main/h/hsa-amd-aqlprofile/hsa-amd-aqlprofile_1.0.0.60303-74~20.04_amd64.deb",
),
"yum": (
"bc501f9f4ec27a2fa402f4ad057f15ccb5a79bd25f44baa44f19e0427fb46a79",
"https://repo.radeon.com/rocm/rhel8/6.3.3/main/hsa-amd-aqlprofile-1.0.0.60303-74.el8.x86_64.rpm",
),
"zyp": (
"81e5e9c2b373973c675962eefb60311394b35f54e00872de6a646f180c32805f",
"https://repo.radeon.com/rocm/zyp/6.3.3/main/hsa-amd-aqlprofile-1.0.0.60303-sles155.74.x86_64.rpm",
),
},
"6.3.2": {
"apt": (
"bef302bf344c9297f9fb64a4a93f360721a467185bc4fefbeecb307dd956c504",
@@ -290,6 +304,7 @@ class Aqlprofile(Package):
"6.3.0",
"6.3.1",
"6.3.2",
"6.3.3",
]:
depends_on(f"hsa-rocr-dev@{ver}", when=f"@{ver}")

View File

@@ -49,9 +49,6 @@ class Arbor(CMakePackage, CudaPackage):
url="https://github.com/arbor-sim/arbor/releases/download/v0.5.2/arbor-v0.5.2-full.tar.gz",
)
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
variant("assertions", default=False, description="Enable arb_assert() assertions in code.")
variant("doc", default=False, description="Build documentation.")
variant("mpi", default=False, description="Enable MPI support")
@@ -73,6 +70,9 @@ class Arbor(CMakePackage, CudaPackage):
conflicts("%cce@:9.1")
conflicts("%intel")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
depends_on("cmake@3.19:", type="build")
# misc dependencies

View File

@@ -37,8 +37,6 @@ class Arborx(CMakePackage, CudaPackage, ROCmPackage):
deprecated=True,
)
depends_on("cxx", type="build")
# Allowed C++ standard
variant(
"cxxstd",
@@ -64,6 +62,8 @@ class Arborx(CMakePackage, CudaPackage, ROCmPackage):
variant(backend.lower(), default=deflt, description=descr)
variant("trilinos", default=False, when="@:1.5", description="use Kokkos from Trilinos")
depends_on("cxx", type="build")
depends_on("cmake@3.12:", type="build")
depends_on("cmake@3.16:", type="build", when="@1.0:")
depends_on("mpi", when="+mpi")

View File

@@ -27,8 +27,6 @@ class Argobots(AutotoolsPackage):
version("1.0.1", sha256="fa05a02d7f8f74d845647636609219ee02f6adf628ebcbf40393f829987d9036")
version("1.0", sha256="36a0815f7bf99900a9c9c1eef61ef9b3b76aa2cfc4594a304f6c8c3296da8def")
depends_on("c", type="build") # generated
variant("perf", default=True, description="Add performance optimization flags")
variant("valgrind", default=False, description="Enable Valgrind")
variant("debug", default=False, description="Compiled with debugging symbols")
@@ -43,6 +41,8 @@ class Argobots(AutotoolsPackage):
variant("tool", default=False, description="Enable ABT_tool interface")
variant("affinity", default=False, description="Enable affinity setting")
depends_on("c", type="build") # generated
depends_on("m4", type=("build"), when="@main")
depends_on("autoconf", type=("build"), when="@main")
depends_on("automake", type=("build"), when="@main")

View File

@@ -33,11 +33,11 @@ class Armadillo(CMakePackage):
version("8.100.1", sha256="54773f7d828bd3885c598f90122b530ded65d9b195c9034e082baea737cd138d")
version("7.950.1", sha256="a32da32a0ea420b8397a53e4b40ed279c1a5fc791dd492a2ced81ffb14ad0d1b")
variant("hdf5", default=False, description="Include HDF5 support", when="@:10")
depends_on("c", type="build")
depends_on("cxx", type="build")
variant("hdf5", default=False, description="Include HDF5 support", when="@:10")
depends_on("cmake@2.8.12:", type="build")
depends_on("cmake@3.5:", type="build", when="@14:")
depends_on("arpack-ng") # old arpack causes undefined symbols

View File

@@ -22,14 +22,14 @@ class Armcimpi(AutotoolsPackage):
"0.3.1-beta", sha256="f3eaa8f365fb55123ecd9ced401086b0732e37e4df592b27916d71a67ab34fe9"
)
depends_on("c", type="build") # generated
depends_on("fortran", type="build") # generated
variant("shared", default=True, description="Builds a shared version of the library")
variant("progress", default=False, description="Enable asynchronous progress")
provides("armci")
depends_on("c", type="build") # generated
depends_on("fortran", type="build") # generated
depends_on("autoconf", type="build")
depends_on("automake", type="build")
depends_on("libtool", type="build")

View File

@@ -51,10 +51,6 @@ class ArpackNg(CMakePackage, AutotoolsPackage):
 version("3.5.0", sha256="50f7a3e3aec2e08e732a487919262238f8504c3ef927246ec3495617dde81239")
 version("3.4.0", sha256="69e9fa08bacb2475e636da05a6c222b17c67f1ebeab3793762062248dd9d842f")
-depends_on("c", type="build") # generated
-depends_on("cxx", type="build") # generated
-depends_on("fortran", type="build") # generated
 variant("shared", default=True, description="Enables the build of shared libraries")
 variant("mpi", default=True, description="Activates MPI support")
 variant("icb", default=False, when="@3.6:", description="Activates iso_c_binding support")
@@ -71,6 +67,10 @@ class ArpackNg(CMakePackage, AutotoolsPackage):
 patch("xlf.patch", when="@3.7.0%xl", level=0)
 patch("xlf.patch", when="@3.7.0%xl_r", level=0)
+depends_on("c", type="build") # generated
+depends_on("cxx", type="build") # generated
+depends_on("fortran", type="build") # generated
 depends_on("blas")
 depends_on("lapack")
 depends_on("mpi", when="+mpi")

var/spack/repos/builtin/packages/arrayfire/package.py

@@ -28,12 +28,12 @@ class Arrayfire(CMakePackage, CudaPackage):
 "3.7.0", commit="fbea2aeb6f7f2d277dcb0ab425a77bb18ed22291", submodules=True, tag="v3.7.0"
 )
-depends_on("c", type="build") # generated
-depends_on("cxx", type="build") # generated
 variant("forge", default=False, description="Enable graphics library")
 variant("opencl", default=False, description="Enable OpenCL backend")
+depends_on("c", type="build") # generated
+depends_on("cxx", type="build") # generated
 depends_on("boost@1.70:")
 depends_on("fftw-api@3:")
 depends_on("blas")
@@ -86,7 +86,7 @@ def cmake_args(self):
 ]
 args.append(self.define("CUDA_architecture_build_targets", arch_list))
-if self.spec["blas"].name in INTEL_MATH_LIBRARIES:
+if self.spec.satisfies("^[virtuals=blas] intel-oneapi-mkl"):
 if self.version >= Version("3.8.0"):
 args.append(self.define("AF_COMPUTE_LIBRARY", "Intel-MKL"))
 else:

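For context on the arrayfire hunk above: the old check matched any provider named in `INTEL_MATH_LIBRARIES`, while the new `spec.satisfies("^[virtuals=blas] intel-oneapi-mkl")` form asks whether the blas virtual is specifically provided by `intel-oneapi-mkl`. A rough standalone sketch of that behavioral difference, without Spack itself (`FakeSpec`, `Provider`, and the tuple contents here are illustrative stand-ins, not Spack's actual implementation):

```python
# Illustrative sketch only -- this does NOT use Spack. FakeSpec and the
# INTEL_MATH_LIBRARIES contents mimic the shape of the old and new checks.
INTEL_MATH_LIBRARIES = ("intel-mkl", "intel-oneapi-mkl", "intel-parallel-studio")


class Provider:
    def __init__(self, name):
        self.name = name


class FakeSpec:
    """Stand-in for a concretized spec, just enough for the two checks."""

    def __init__(self, blas_provider):
        self.blas_provider = blas_provider

    def __getitem__(self, virtual):
        # spec["blas"] returns the spec of whatever provides the blas virtual
        if virtual == "blas":
            return Provider(self.blas_provider)
        raise KeyError(virtual)

    def satisfies(self, constraint):
        # Loose mock of Spec.satisfies("^[virtuals=blas] intel-oneapi-mkl"):
        # true only when blas is provided by intel-oneapi-mkl specifically.
        if constraint == "^[virtuals=blas] intel-oneapi-mkl":
            return self.blas_provider == "intel-oneapi-mkl"
        return False


spec = FakeSpec("intel-mkl")
# Old check matches any Intel math library, including classic intel-mkl:
print(spec["blas"].name in INTEL_MATH_LIBRARIES)            # True
# New check is narrower: only the oneAPI MKL provider passes:
print(spec.satisfies("^[virtuals=blas] intel-oneapi-mkl"))  # False
```

So beyond the syntax change, the rewrite also tightens the condition from "any Intel math library" to the oneAPI MKL provider only.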
var/spack/repos/builtin/packages/asagi/package.py

@@ -23,10 +23,6 @@ class Asagi(CMakePackage):
 # is preferred to satisfy internal-dependencies
 version("1.0", commit="f67250798b435c308b9a1e7516f916f7855534ec", submodules=True)
-depends_on("c", type="build") # generated
-depends_on("cxx", type="build") # generated
-depends_on("fortran", type="build") # generated
 variant(
 "link_type",
 default="shared",
@@ -46,6 +42,10 @@ class Asagi(CMakePackage):
 variant("tests", default=False, description="compile tests")
 variant("examples", default=False, description="compile examples")
+depends_on("c", type="build") # generated
+depends_on("cxx", type="build") # generated
+depends_on("fortran", type="build") # generated
 depends_on("mpi", when="+mpi")
 depends_on("mpi@3:", when="+mpi3")
 depends_on("netcdf-c +mpi", when="+mpi")

Some files were not shown because too many files have changed in this diff.