Compare commits


495 Commits

Author SHA1 Message Date
Harmen Stoppels
020c06c696 try something 2024-04-26 23:14:03 +02:00
Harmen Stoppels
30f811588d Revert "fix"
This reverts commit 896d7cf497.
2024-04-26 23:13:04 +02:00
Harmen Stoppels
896d7cf497 fix 2024-04-26 08:34:12 +02:00
Harmen Stoppels
75530be48a verbose output 2024-04-26 07:55:02 +02:00
Alex Richert
2bf900a893 Update package.py (#43836) 2024-04-25 16:00:43 -07:00
Andrey Perestoronin
99bba0b1ce Add intel-oneapi-mpi 2021.12.1 patch package (#43850)
* Add intel-oneapi-mpi package

* Fix style
2024-04-25 13:38:47 -06:00
Juan Miguel Carceller
a8506f9022 glew package: add ld flags when compiling with ^apple-gl (#43429) 2024-04-25 11:18:00 -07:00
Harmen Stoppels
4a40a76291 build_environment.py: expand SPACK_MANAGED_DIRS with realpath (#43844) 2024-04-25 13:33:50 +02:00
downloadico
fe9ddf22fc spatialdata: add spatialdata package to spack (#43500) 2024-04-25 04:18:08 -07:00
Adam J. Stewart
1cae1299eb CI: remove ML ROCm stack (#43825) 2024-04-25 12:56:59 +02:00
Adam J. Stewart
8b106416c0 py-lightning: add v2.2.3 (#43824) 2024-04-25 12:03:01 +02:00
jalcaraz
e2088b599e [TAU Package] Updates for rocm (#43790)
* Updates for rocm

Updated for rocm@6
Added conflict between rocprofiler and roctracer.
When +rocm is set, either +rocprofiler or +roctracer is requested; in this case it automatically builds with one of them instead of displaying a message.
When either +rocprofiler or +roctracer is set, +rocm is requested; in this case it automatically builds with +rocm instead of displaying a message.

Disabled the tests. Will update them with the new test method.

* [@spackbot] updating style on behalf of jordialcaraz

---------

Co-authored-by: jordialcaraz <jordialcaraz@users.noreply.github.com>
2024-04-24 16:35:41 -07:00
Ken Raffenetti
56446685ca mpich: add 4.2.0 release (#42687) 2024-04-24 12:33:59 -07:00
Teo
47a8d875c8 update halide versions; sync llvm (#43793) 2024-04-24 12:27:42 -07:00
Adam J. Stewart
56b2d250c1 py-keras: add v3.3.1-2 (#43798) 2024-04-24 12:23:34 -07:00
David Boehme
abbd09b4b2 Add Caliper v2.11 (#43802) 2024-04-24 12:21:56 -07:00
Rémi Lacroix
9e5fdc6614 dotnet-core-sdk: Add versions 7.0.18 and 8.0.4 (#43814) 2024-04-24 12:19:33 -07:00
Harmen Stoppels
1224a3e8cf clang.py: detect flang-new (#43815)
If a flang-new exists, which is rather unlikely, it probably means the
user wants it as a Fortran compiler.
2024-04-24 19:11:02 +02:00
Jake Koester
6c3218920f Trilinos: update kokkos dependency (#43785)
* fix so trilinos@master uses correct kokkos (@4.3.00)

* Update var/spack/repos/builtin/packages/trilinos/package.py
2024-04-24 10:42:08 -06:00
Peter Scheibel
02cc3ea005 Add new redistribute() directive (#20185)
Some packages can't be redistributed in source or binary form. We need an explicit way to say that in a package.

This adds a `redistribute()` directive so that package authors can write, e.g.:

```python
    redistribute(source=False, binary=False)
```

You can also do this conditionally with `when=`, as with other directives, e.g.:

```python
    # 12.0 and higher are proprietary
    redistribute(source=False, binary=False, when="@12.0:")

    # can't redistribute when we depend on some proprietary dependency
    redistribute(source=False, binary=False, when="^proprietary-dependency")
```


This prevents Spack from adding the package's sources or binaries to public mirrors and build caches. You can still unconditionally add things *if* you run either:
* `spack mirror create --private`
* `spack buildcache push --private`

But the default behavior for build caches is not to include non-redistributable packages in either mirrors or build caches.  We have previously done this manually for our public buildcache, but with this we can start maintaining redistributability directly in packages.

Caveats: currently the default for `redistribute()` is `True` for both `source` and `binary`, and you can only set either of them to `False` via this directive.

- [x] add `redistribute()` directive
- [x] add `redistribute_source` and `redistribute_binary` class methods to `PackageBase`
- [x] add `--private` option to `spack mirror`
- [x] add `--private` option to `spack buildcache push`
- [x] test exclusion of packages from source mirror (both as a root and as a dependency)
- [x] test exclusion of packages from binary mirror (both as a root and as a dependency)
2024-04-24 09:41:03 -07:00
John W. Parent
641ab95a31 Revert "Windows: add win-sdk/wgl externals during bootstrapping (#43459)" (#43819)
This reverts commit 9e2558bd56.
2024-04-24 18:24:28 +02:00
Adam J. Stewart
e8b76c27e4 py-matplotlib: drop test dep (#43765)
test dependencies constrain build/link-type deps, so avoid that
2024-04-24 18:03:13 +02:00
Massimiliano Culpo
0dbe4d54b6 Tune default compiler preferences (#43805)
Add `%oneapi`, remove compilers that have been discontinued upstream
2024-04-24 18:00:40 +02:00
Harmen Stoppels
1eb6977049 rust: use system libs (#43797) 2024-04-24 09:46:11 -06:00
Harmen Stoppels
3f1cfdb7d7 libc: from current python process (#43787)
If there's no compiler, we currently don't have any external libc for the solver.

This commit adds a fallback on libc from the current Python process, which works if it is dynamically linked.

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2024-04-24 05:10:48 -06:00
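
A minimal illustration of the fallback described above, using only the standard library (not Spack's actual implementation):

```python
import platform

# platform.libc_ver() inspects the ELF binary of the running Python
# interpreter, so it only yields a result when Python is dynamically
# linked -- the same caveat the commit mentions.
name, version = platform.libc_ver()
print(name, version)  # e.g. "glibc 2.35"
```
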
Massimiliano Culpo
d438d7993d nf-*-cli: fix typo in conditional (#43806)
The default runner changed on GitHub for macOS, and that
revealed a bug in a package when running audits
2024-04-24 11:30:35 +02:00
Todd Gamblin
aa0825d642 Refactor to improve spec format speed (#43712)
When looking at where we spend our time in solver setup, I noticed a fair bit of time is spent
in `Spec.format()`, and `Spec.format()` is a pretty old, slow, convoluted method.

This PR does a number of things:
- [x] Consolidate most of what was being done manually with a character loop and several
      regexes into a single regex.
- [x] Precompile regexes where we keep them 
- [x] Remove the `transform=` argument to `Spec.format()` which was only used in one 
      place in the code (modules) to uppercase env var names, but added a lot of complexity
- [x] Avoid escaping and colorizing specs unless necessary
- [x] Refactor a lot of the colorization logic to avoid unnecessary object construction
- [x] Add type hints and remove some spots in the code where we were using nonexistent
      arguments to `format()`.
- [x] Add trivial cases to `__str__` in `VariantMap` and `VersionList` to avoid sorting
- [x] Avoid calling `isinstance()` in the main loop of `Spec.format()`
- [x] Don't bother constructing a `string` representation for the result of `_prev_version`
      as it is only used for comparisons.

In my timings (on all the specs formatted in a solve of `hdf5`), this is over 2.67x faster than the 
original `format()`, and it seems to reduce setup time by around a second (for `hdf5`).
2024-04-23 10:52:15 -07:00
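
A toy sketch of the precompile-once pattern applied here (hypothetical field grammar; Spack's real format syntax is richer):

```python
import re

# Compiling the pattern once at module scope avoids paying the
# regex-compilation cost on every Spec.format() call.
_FORMAT_ATTR = re.compile(r"\{(\w+)\}")

def format_spec(template, attrs):
    """Substitute {name}-style fields from a dict of spec attributes."""
    return _FORMAT_ATTR.sub(lambda m: str(attrs[m.group(1)]), template)

print(format_spec("{name}@{version}", {"name": "hdf5", "version": "1.14.3"}))
# -> hdf5@1.14.3
```
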
Greg Becker
978c20f35a concretizer: update reuse: default to True (#41302) 2024-04-23 17:42:14 +02:00
Marc T. Henry de Frahan
d535124500 Update amr-wind package with versions (#43728) 2024-04-23 05:07:42 -06:00
Harmen Stoppels
01f61a2eba Remove import distro from packages and docs (#43772) 2024-04-23 12:47:33 +02:00
Massimiliano Culpo
7d5e27d5e8 Do not detect a compiler without a C compiler (#43778) 2024-04-23 12:20:33 +02:00
Harmen Stoppels
d210425eef nettle: remove openssl dep (#43770) 2024-04-23 07:44:15 +02:00
Nathalie Furmento
6be07da201 starpu: fix release 1.4.4 (#43730) 2024-04-22 15:56:23 -07:00
Filippo Spiga
02b38716bf Adding UCX 1.16.0 (#43743)
* Adding UCX 1.16.0
* Fixed hash
2024-04-22 15:52:20 -07:00
Adam J. Stewart
d7bc624c61 Tags: add more build tools (#43766)
* Tags: add more build tools
* py-pythran: add maintainer
2024-04-22 15:34:38 -07:00
Adam J. Stewart
b7cecc9726 py-scikit-image: add v0.23 (#43767) 2024-04-22 15:32:55 -07:00
Adam J. Stewart
393a2f562b py-keras: add v3.3.0 (#43783) 2024-04-22 15:04:06 -07:00
Massimiliano Culpo
682fcec0b2 zig: add v0.12.0 (#43774) 2024-04-22 15:02:45 -07:00
Harmen Stoppels
d6baae525f repo.py: drop deleted packages from provider cache (#43779)
The reverse provider lookup may have stale entries for deleted packages, which used to cause errors. It's hard to invalidate those cache entries, so this commit simply drops entries w/o invalidating the cache.

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2024-04-22 19:03:44 +02:00
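
Conceptually (a dict-based sketch, not the actual cache code):

```python
# Reverse provider lookup: virtual name -> packages that provide it.
providers = {"mpi": {"mpich", "openmpi", "deleted-pkg"}}
known_packages = {"mpich", "openmpi"}

# Drop stale entries at lookup time instead of invalidating the cache.
valid = {v: pkgs & known_packages for v, pkgs in providers.items()}
print(valid)  # {'mpi': {'mpich', 'openmpi'}}
```
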
Kyle Knoepfel
e1f2612581 Adjust severity of irreversible operations (#43721) 2024-04-22 16:41:53 +02:00
Harmen Stoppels
080fc875eb compiler.py: reduce verbosity of implicit link dirs parsing (#43777) 2024-04-22 16:07:14 +02:00
Harmen Stoppels
69f417b26a view: dont warn about externals (#43771)
since it's the status quo on Linux now that libc is an external by default
2024-04-22 16:05:32 +02:00
Harmen Stoppels
80b5106611 bootstrap: no need to add dummy compilers (#43775) 2024-04-22 16:01:41 +02:00
Massimiliano Culpo
34146c197a Add libc dependency to compiled packages and runtime deps
This commit differentiates Linux from other platforms by
using libc compatibility as a criterion for deciding
which buildcaches / binaries can be reused. Other
platforms still use OS compatibility.

On Linux a libc is injected by all compilers as an implicit
external, and the compatibility criterion is that a libc is
compatible with all other libcs with the same name and a
version that is less than or equal.

Some concretization unit tests use libc when run on linux.
2024-04-22 15:18:06 +02:00
Harmen Stoppels
209a3bf302 Compiler.default_libc
Some logic to detect what libc the C/C++ compilers use by default,
based on `-dynamic-linker`.

The function `compiler.default_libc()` returns a `Spec` of the form
`glibc@x.y` or `musl@x.y` with the `external_path` property set.

The idea is this can be injected as a dependency.

If we can't run the dynamic linker directly, fall back to `ldd` relative
to the prefix computed from `ld.so`.
2024-04-22 15:18:06 +02:00
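
A rough sketch of that detection strategy (hypothetical helper; the real logic lives in Spack's compiler classes):

```python
import re
import subprocess

def default_libc(cc="cc"):
    # -### makes the driver print the commands it would run, including
    # the link line with its -dynamic-linker argument.
    out = subprocess.run([cc, "-###", "-x", "c", "/dev/null"],
                         capture_output=True, text=True).stderr
    m = re.search(r'-dynamic-linker"?\s+"?([^"\s]+)', out)
    if not m:
        return None
    # glibc's ld.so can be executed directly and reports its version.
    ver = subprocess.run([m.group(1), "--version"],
                         capture_output=True, text=True).stdout
    vm = re.search(r"version (\d+\.\d+)", ver)
    return f"glibc@{vm.group(1)}" if vm else None
```
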
Harmen Stoppels
e8c41cdbcb database.py: stream of json objects forward compat (#43598)
In the future we may transform the database from a single JSON object to
a stream of JSON objects.

This paves the way for constant time writes and constant time rereads
when only O(1) changes are made. Currently both are linear time.

This commit gives just enough forward compat for Spack to produce a
friendly error when we would move to a stream of json objects, and a db
would look like this:

```json
{"database": {"version": "<something newer>"}}
```
2024-04-22 09:43:41 +02:00
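
For illustration, such a stream of concatenated JSON objects can be consumed with `json.JSONDecoder.raw_decode` (a sketch of the anticipated format, not Spack's parser):

```python
import json

def iter_json_objects(text):
    dec = json.JSONDecoder()
    idx, n = 0, len(text)
    while idx < n:
        while idx < n and text[idx].isspace():  # skip separators
            idx += 1
        if idx >= n:
            break
        obj, idx = dec.raw_decode(text, idx)
        yield obj

stream = '{"database": {"version": "7"}}\n{"op": "install", "hash": "abc"}'
for obj in iter_json_objects(stream):
    print(obj)
```
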
Massimiliano Culpo
a450dd31fa Fix a bug preventing to set platform= on externals (#43758)
closes #43406
2024-04-22 09:15:22 +02:00
Alex Richert
7c1a309453 perl: remove mkdirp from setup_dependent_package (#43733) 2024-04-20 21:34:07 +02:00
Massimiliano Culpo
78b6fa96e5 ci.py: visit all edges (#43761) 2024-04-20 21:29:32 +02:00
Jordan Ogas
1b315a9ede mpich: add v4.2.1 (#43753) 2024-04-20 19:45:25 +02:00
Harmen Stoppels
82df0e549d compiler wrapper: prioritize spack store paths in -L, -I, -rpath (#43593)
* compiler wrapper: prioritize spack managed paths in search order

This commit partitions search paths of -L, -I (and -rpath) into three
groups, from highest priority to lowest:

1. Spack managed directories: these include absolute paths such as
   stores and the stage dir, as well as all relative paths since they
   are relative to a Spack owned dir
2. Non-system dirs: these are for externals that live in non-system
   locations
3. System dirs: your typical `/usr/lib` etc.

It's very easy for Spack to know the prefixes it owns; it's much more
difficult to tell system dirs from non-system dirs. Before this commit
Spack tried to distinguish only system and non-system dirs, and failed
for very trivial cases like `/usr/lib/x/..` which comes up often, since
build systems sometimes copy search paths from `gcc -print-search-dirs`.

Potentially this implementation is even faster than the current state of
things, since a loop over paths is replaced with an eval'ed `case ...`.

* Trigger a pipeline

* Revert "Trigger a pipeline"

This reverts commit 5d7fa863de.

* remove redundant return statement
2024-04-20 13:23:37 -04:00
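
A simplified Python rendering of the three-way partition (the real wrapper does this in shell with a `case` statement; the prefixes here are hypothetical):

```python
import os

SPACK_PREFIXES = ("/opt/spack/opt", "/tmp/spack-stage")  # hypothetical
SYSTEM_PREFIXES = ("/lib", "/lib64", "/usr/lib", "/usr/lib64")

def priority(path):
    # realpath collapses cases like /usr/lib/x/.. that defeated the
    # old system-dir check.
    real = os.path.realpath(path)
    if not os.path.isabs(path) or real.startswith(SPACK_PREFIXES):
        return 0  # group 1: Spack-managed (incl. all relative paths)
    if real.startswith(SYSTEM_PREFIXES):
        return 2  # group 3: system dirs
    return 1      # group 2: non-system externals

# Stable sort preserves the original order within each group.
dirs = ["/usr/lib", "/opt/spack/opt/zlib/lib", "/opt/external/lib"]
print(sorted(dirs, key=priority))
```
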
Harmen Stoppels
f5591f9068 ci.py: simplify, and dont warn excessively about externals (#43759) 2024-04-20 15:09:54 +02:00
FrederickDeny
98c08d277d e4s-alc: add new package (#43750)
* Added e4s-cl@1.0.3

* add e4s-alc package

* removed trailing whitespace
2024-04-19 15:54:23 -06:00
Jordan Galby
facca4e2c8 ccache: 4.9.1 and 4.8.3 (#43748) 2024-04-19 23:46:38 +02:00
Adam J. Stewart
764029bcd1 py-ruff: add v0.4.0 (#43740) 2024-04-19 15:44:19 -06:00
Harmen Stoppels
44cb4eca93 environment.py: fix excessive re-reads (#43746) 2024-04-19 13:39:34 -06:00
Jack Morrison
39888d4df6 libfabric: Add version 1.21.0 (#43735) 2024-04-19 11:09:59 -07:00
Vincent Michaud-Rioux
f68ea49e54 Update py-pennylane + Lightning plugins + few deps (#43706)
* Update PennyLane packages to v0.32.
* Reformat.
* Couple small fixes.
* Fix Lightning cmake_args.
* Couple dep fixes in py-pennylane + plugins.
* Fix scipy condition.
* Add comment on conflicting requirement.
* Update py-pl versions
* Update lightning versions.
* Fix copyright.
* Fix license.
* Update pl-kokkos versions
* run black
* Fix L-Kokkos build and update autoray.
* build step only required for older versions. update autograd
* Fix LK@0.31 kokkos compat issue. Introduce url_for_version.
* Fix few more version bounds.
2024-04-19 11:05:09 -07:00
Olivier Cessenat
78b5e4cdfa perl-fth: new version 0.529 (#43727) 2024-04-19 10:58:34 -07:00
Micael Oliveira
26515b8871 fms: add two variants (#43734)
* fms: add two variants supporting existing build options.
* Style fixes.
2024-04-19 10:51:58 -07:00
Harmen Stoppels
74640987c7 ruamel yaml: fix quadratic complexity bug (#43745) 2024-04-19 14:33:42 +02:00
Harmen Stoppels
d6154645c7 chai / raja / umpire: compile entire project with hipcc again (#43738) 2024-04-19 11:18:12 +02:00
James Smillie
faed43704b py-numpy package: enable build on Windows (#43686)
* Add conflicts for some blas implementations that don't build on
  Windows (or with %msvc)
* Need to enclose CC/CXX variables in quotes in case those paths
  have spaces, otherwise Meson runs into errors
* On Windows, Python dependencies now add <prefix>/Scripts to the
  PATH (this is established as a standard in PEP 370)
2024-04-18 16:38:08 -06:00
John W. Parent
6fba31ce34 Windows: Update MSVC + oneAPI detection and integration (#43646)
* Later versions of oneAPI have moved, so update detection to find it
  in both old and new location
* Remove reliance on ONEAPI_ROOT env variable when determining Fortran
  compiler version for %msvc
* When finding a Fortran compiler for MSVC, there was logic enforcing
  a maximum MSVC version for a given oneAPI Fortran version. This
  mapping was out of date and excluding valid combinations, so has
  been removed (the logic now just picks the latest available
  oneAPI Fortran compiler for any given MSVC version).
2024-04-18 21:53:56 +00:00
Olivier Cessenat
112cead00b keepassxc: new version 2.7.7 (#43729) 2024-04-18 14:54:08 -06:00
John W. Parent
9e2558bd56 Windows: add win-sdk/wgl externals during bootstrapping (#43459)
On Windows, bootstrapping logic now searches for and adds the win-sdk
and wgl packages to the user's top scope as externals if they are not
present.

These packages are generally required to install most packages with
Spack on Windows, and are only available as externals, so it is
assumed that doing this automatically would be useful and avoid
a mandatory manual step for each new Spack instance.

Note this is the first case of bootstrapping logic modifying
configuration other than the bootstrap configuration.
2024-04-18 10:20:04 -07:00
eugeneswalker
019058226f py-netcdf4 %oneapi: cflags append -Wno-error=int-conversion (#43629) 2024-04-18 10:11:24 -07:00
George Young
ac0040f67d spaceranger: new manual download package @2.1.1 (#42391)
* spaceranger: new manual download package @2.1.1
* Adding license url

---------

Co-authored-by: LMS Bioinformatics <bioinformatics@lms.mrc.ac.uk>
2024-04-18 10:11:01 -07:00
wspear
38f341f12d Added tau@2.33.2 (#43682) 2024-04-18 10:10:18 -07:00
George Young
26ad22743f cellranger: new manual download package @7.1.0 (#38486)
* cellranger: new manual download package @7.1.0
* cellranger: updating to @7.2.0
* updating website
* Adding license url

---------

Co-authored-by: LMS Bioinformatics <bioinformatics@lms.mrc.ac.uk>
2024-04-18 10:09:00 -07:00
George Young
46c2b8a565 xeniumranger: new manual download package @1.7.1 (#42389)
* xeniumranger: new manual download package @1.7.1
* Adding license url

---------

Co-authored-by: LMS Bioinformatics <bioinformatics@lms.mrc.ac.uk>
2024-04-18 10:08:04 -07:00
James Taliaferro
5cbb59f2b8 New package: editorconfig (#43670)
* New package: editorconfig
* remove FIXMEs
* add description for editorconfig
2024-04-18 10:05:15 -07:00
Adam J. Stewart
f29fa1cfdf CI: remove MXNet (#43704) 2024-04-18 10:04:03 -07:00
Olivier Cessenat
c69951d6e1 gxsview: compiles against system qt and vtk on rhel8 (#43722)
* gxsview: compiles against system qt and vtk on rhel8
* Update gxsview/package.py for blanks around operator
* Update gxsview/package.py import blank line
* Update gxsview/package.py for style
* Update gxsview/package.py checking vtk version
2024-04-18 10:18:05 -06:00
Adam J. Stewart
f406f27d9c ML CI: remove extra xgboost (#43709) 2024-04-18 09:08:25 -07:00
Tamara Dahlgren
36ea208e12 Twitter->X: Reflect the name (only) change (#43690) 2024-04-18 08:52:54 -07:00
Kyle Knoepfel
17e0774189 Make sure variable is None if exception is raised. (#43707) 2024-04-18 08:50:15 -07:00
Sam Gillingham
3162c2459d update py-python-fmask to version 0.5.9 (#43698)
* update py-python-fmask to version 0.5.9
* add gillins and neilflood as maintainers
* remove spaces
* remove blank line
* put maintainers higher
2024-04-18 08:48:15 -07:00
Massimiliano Culpo
7cad6c62a3 Associate condition sets from cli to root node (#43710)
This PR prevents a condition_set from having nodes that are not associated with the corresponding root node through some (transitive) dependencies.
2024-04-18 17:27:12 +02:00
Harmen Stoppels
eb2ddf6fa2 asp.py: do not copy 2024-04-18 15:39:26 +02:00
Harmen Stoppels
2bc2902fed spec.py: early return in __str__ 2024-04-18 15:39:26 +02:00
Mikael Simberg
b362362291 cvise package: add version 2.10.0 and ncurses constraint (#43319) 2024-04-17 13:41:33 -07:00
Rocco Meli
32bb5c7523 mkl interface (#43673) 2024-04-17 16:19:35 -04:00
FrederickDeny
a2b76c68a0 Added e4s-cl@1.0.3 (#43693) 2024-04-17 13:45:36 -06:00
Wouter Deconinck
62132919e1 xrootd: new version 5.6.9 (#43684) 2024-04-17 12:26:52 -07:00
Wouter Deconinck
b06929f6df xerces-c: new version 3.2.5 (#43687) 2024-04-17 12:24:27 -07:00
Wouter Deconinck
0f33de157b assimp: new version 5.4.0 (#43689) 2024-04-17 10:21:15 -07:00
Sinan
03a074ebe7 package/npm update (#43692)
* package/npm update
* add conflicts to exclude certain version intervals

---------

Co-authored-by: sbulut <sbulut@3vgeomatics.com>
2024-04-17 10:06:09 -07:00
Adam J. Stewart
4d12b6a4fd py-shapely: add v2.0.4 (#43702) 2024-04-17 18:51:48 +02:00
Adam J. Stewart
26bb15e1fb py-sphinx: add v7.3 (#43703) 2024-04-17 18:51:31 +02:00
Bill Williams
1bf92c7881 [Score-P] Make local with-or-without not use "yes" (#43701)
Score-P does not accept "--with-foo=yes", but only "--with-foo" or "--with-foo=some-valid-specific-choice-or-path". This keeps Spack from generating config flags that will cause Score-P to barf.
2024-04-17 09:32:47 -07:00
Todd Gamblin
eefe0b2eec Improve spack find output in environments (#42334)
This adds some improvements to `spack find` output when in environments based
around some thoughts about what users want to know when they're in an env.

If you're working in an environment, you mostly care about:
* What are the roots
* Which ones are installed / not installed
* What's been added that still needs to be concretized

So, this PR adds a couple tweaks to display that information more clearly:

- [x] We now display install status next to every root. You can easily see
      which are installed and which aren't.

- [x] When you run `spack find -l` in an env, the roots now show their concrete
      hash (if they've been concretized). They previously would show `-------`
      (b/c the root spec itself is abstract), but showing the concretized root's
      hash is a lot more useful.

- [x] Newly added/unconcretized specs still show `-------`, which now makes more
      sense, b/c they are not concretized.

- [x] There is a new option, `-r` / `--only-roots` to *only* show env roots if
      you don't want to look at all the installed specs.

- [x] Roots in the installed spec list are now highlighted as bold. This is
actually an old feature from the first env implementation, but various
      refactors had disabled it inadvertently.
2024-04-17 16:22:05 +00:00
Wouter Deconinck
de6c6f0cd9 py-pyparsing: new version 3.1.2 (#43579) 2024-04-17 07:53:03 -06:00
James Smillie
309d3aa1ec Python package: fix install of static libs on Windows (#43564) 2024-04-16 16:40:15 -06:00
Andrey Perestoronin
feff11f914 intel-oneapi-dnn-2024.1.0: add DNN package version (#43679)
* add onednn package

* fix style

* Update var/spack/repos/builtin/packages/intel-oneapi-dnn/package.py

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>

---------

Co-authored-by: Robert Cohn <rscohn2@gmail.com>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2024-04-16 15:26:14 -04:00
John W. Parent
de3b324983 Windows filesystem utilities (bugfix): improve SFN usage (#43645)
Reduce incidence of spurious errors by:
* Ensuring we're passing the buffer by reference
* Get the correct short string size from Windows API instead of computing ourselves
* Ensure sufficient space for null terminator character

Add test for `windows_sfn`
2024-04-16 11:02:02 -07:00
kwryankrattiger
747cd374df Run after_script aggregator with spack python (#43669) 2024-04-16 19:03:44 +02:00
Wouter Deconinck
8b3ac40436 acts: new version 34.0.0 (#43680) 2024-04-16 11:03:31 -06:00
Wouter Deconinck
28e9be443c (py-)onnx: new version 1.16.0 (#43675) 2024-04-16 09:58:43 -07:00
John W. Parent
1381bede80 zstd: 1.5.6 does not build on Windows (#43677)
Conflict until a fix has been merged upstream
2024-04-16 09:36:27 -07:00
Wouter Deconinck
6502785908 podio: +rntuple requires root +root7 (#43672) 2024-04-16 09:27:07 -07:00
Erik Heeren
53257408a3 py-bluepyopt: 1.14.11 (#43678) 2024-04-16 09:21:24 -07:00
Wouter Deconinck
28d02dff60 pythia8: new version 8.311 (#43667) 2024-04-16 09:19:06 -07:00
Wouter Deconinck
9d60b42a97 jwt-cpp: new version 0.7.0, scitokens-cpp: new versions to 1.1.1 (#43657)
* jwt-cpp: new version 0.7.0, depends_on nlohmann-json
* scitokens-cpp: new versions to 1.1.1
* scitokens-cpp: conflicts ^jwt-cpp@0.7: when @:1.1
2024-04-16 09:17:18 -07:00
Harmen Stoppels
9ff5a30574 concretize.lp: fix issue with reuse of conditional variants (#43676)
Currently if you request pkg +example where example is a conditional
variant, and you have a pkg in the database for which the condition
did not hold (so no +example nor ~example), the solver would reuse it
regardless, not imposing +example.

The change rules out exactly one thing: variant_set without variant_value,
which in practice could only happen when not node_has_variant (i.e. when
under the current package.py rules the variant's when condition did not
trigger).
2024-04-16 16:09:32 +00:00
Wouter Deconinck
9a6c013365 wayland: +doc requires graphviz +expat for HTML tables (#43668) 2024-04-16 01:28:47 -06:00
Andrey Prokopenko
9f62a3e819 arborx: add v1.6 (#43623) 2024-04-16 09:27:14 +02:00
Maciej Wójcik
e380e9a0ab gromacs: prevent version conflict after enabling plumed (#43449) 2024-04-16 09:07:50 +02:00
Adam J. Stewart
8415ea9ada py-black: switch maintainer (#43652) 2024-04-16 08:51:12 +02:00
Josh Bowden
6960766e0c Damaris: add v1.10.0 (#43664)
Co-authored-by: Joshua Bowden <joshua-chales.bowden@inria.fr>
2024-04-15 23:43:39 -06:00
Michael Kuhn
0c2ca8c841 octave: add 8.4.0 and 9.1.0 (#43518) 2024-04-15 13:32:20 -07:00
Jen Herting
273960fdbb [py-pybedtools] added version 0.10.0 (#43625) 2024-04-15 13:27:22 -07:00
eugeneswalker
0cd2a1102c crtm: add noaa versions and package mods (#43635)
* crtm: add noaa versions and package mods
* crtm@v2.4.1-jedi: add missing depends_on netcdf-fortran, ecbuild from jcsda spack fork
2024-04-15 13:24:07 -07:00
Teague Sterling
e40676e901 Add ollama package (#43655)
* Added package to build Ollama
* Update package.py
  Add license and documentation
* [@spackbot] updating style on behalf of teaguesterling
2024-04-15 13:13:07 -07:00
Tristan Carel
4ddb07e94f py-morphio: add version 3.3.7, update license (#43420) 2024-04-15 14:06:48 -06:00
eugeneswalker
50585d55c5 hdf: %oneapi: -Wno-error=implicit-function-declaration (#43631)
* hdf: %oneapi: -Wno-error=implicit-function-declaration

* [@spackbot] updating style on behalf of eugeneswalker
2024-04-15 12:17:07 -07:00
Adam J. Stewart
5d6b5f3f6f GDAL: fix MDB build (#43665) 2024-04-15 10:38:56 -07:00
Juan Miguel Carceller
2351c19489 glew: remove a few unused options (#43399)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2024-04-15 18:49:51 +02:00
snehring
08d49361f0 iq-tree: add v2.3.1 (#43442) 2024-04-15 18:48:58 +02:00
Ken Raffenetti
c3c63e5ca4 mpich: Update PMI configure options (#43551)
Add a "default" option that passes no option to configure. Existing
options changed in the MPICH 4.2.0 release, so update the package to
reflect those changes.
2024-04-15 18:40:37 +02:00
Richard Berger
e72d4075bd LAMMPS: add v20240207.1 (#43538)
Add workaround for undefined HIP_PATH in older versions
2024-04-15 16:34:30 +00:00
Todd Gamblin
f9f97bf22b tests: Spec tests shouldn't fetch remote git repositories. (#43656)
Currently, some of the tests in `spec_format` and `spec_semantics` fetch
the actual zlib repository when run, because they call `str()` on specs
like `zlib@foo/bar`, which at least currently requires a remote git clone
to resolve.

This doesn't change the behavior of git versions, but it uses our mock git
repo infrastructure and clones the `git-test` package instead of the *real*
URL from the mock `zlib` package.

This should speed up tests.  We could probably refactor more so that the git
tests *all* use such a fixture, but the `checks` field that unfortunately
tightly couples the mock git repository and the `git_fetch` tests complicates
this. We could also consider *not* making `str()` resolve git versions, but
I did not dig into that here.

- [x] add a mock_git_test_package fixture that sets up a mock git repo *and*
      monkeypatches the `git-test` package (like our git test packages do)
- [x] use fixture in `test_spec_format_path`
- [x] use fixture in `test_spec_format_path_posix`
- [x] use fixture in `test_spec_format_path_windows`
- [x] use fixture in `test_parse_single_spec`
2024-04-15 09:20:23 -07:00
Massimiliano Culpo
8033455d5f hdf5: require mpich+fortran when hdf5+fortran (#43591) 2024-04-15 18:17:28 +02:00
Eric Berquist
50a5a6fea4 tree-sitter: add versions up to 0.22.2 (#43608) 2024-04-15 09:10:19 -07:00
eugeneswalker
0de8a0e3f3 wgrib2: oneapi -> comp_sys="intel_linux" (#43632) 2024-04-15 18:04:41 +02:00
SXS Bot
0a26e74cc8 spectre: add v2024.04.12 (#43641)
Co-authored-by: sxs-bot <sxs-bot@users.noreply.github.com>
2024-04-15 18:03:51 +02:00
dependabot[bot]
9dfd91efbb build(deps): bump docker/setup-buildx-action from 3.2.0 to 3.3.0 (#43542)
Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 3.2.0 to 3.3.0.
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](2b51285047...d70bba72b1)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-15 18:00:47 +02:00
dependabot[bot]
1a7baadbff build(deps): bump python-levenshtein in /lib/spack/docs (#43543)
Bumps [python-levenshtein](https://github.com/rapidfuzz/python-Levenshtein) from 0.25.0 to 0.25.1.
- [Release notes](https://github.com/rapidfuzz/python-Levenshtein/releases)
- [Changelog](https://github.com/rapidfuzz/python-Levenshtein/blob/main/HISTORY.md)
- [Commits](https://github.com/rapidfuzz/python-Levenshtein/compare/v0.25.0...v0.25.1)

---
updated-dependencies:
- dependency-name: python-levenshtein
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-15 18:00:24 +02:00
Marc Perache
afcfd56ae5 range-v3: add v0.12.0 (#40103) 2024-04-15 17:43:14 +02:00
dependabot[bot]
7eb2e704b6 build(deps): bump codecov/codecov-action from 4.1.1 to 4.3.0 (#43562)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 4.1.1 to 4.3.0.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](c16abc29c9...84508663e9)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-15 17:34:31 +02:00
Josh Milthorpe
564b4fa263 hipcub: depend on matching version of hip+cuda when +cuda (#42970) 2024-04-15 17:33:26 +02:00
Adam J. Stewart
0a941b43ca PyTorch: build with external cpuinfo (#40758) 2024-04-15 17:26:52 +02:00
one
35ff24ddea googletest: fix reversed pthreads variant logic (#43649) 2024-04-15 11:18:27 -04:00
Rocco Meli
7019e4e3cb openbabel: add CMake patch for 3.1.1 (#43612) 2024-04-15 17:07:54 +02:00
Weston Ortiz
cb16b8a047 goma: new version 7.6.1 (#43617) 2024-04-15 17:04:34 +02:00
Adam J. Stewart
381acb3726 Build systems: fix docstrings (#43618) 2024-04-15 17:01:52 +02:00
Adam J. Stewart
d87ea0b256 py-maturin: add v1.5.1 (#43619) 2024-04-15 16:55:17 +02:00
Adam J. Stewart
1a757e7f70 py-lightning: add v2.2.2 (#43621) 2024-04-15 16:54:34 +02:00
eugeneswalker
704e2c53a8 py-eccodes: add v1.4.2 (#43633) 2024-04-15 16:44:57 +02:00
renjithravindrankannath
478d8a668c rocm-opencl: add dependency on aqlprofile (#43637) 2024-04-15 16:44:16 +02:00
dependabot[bot]
7903f9fcfd build(deps): bump black from 24.3.0 to 24.4.0 in /lib/spack/docs (#43642)
Bumps [black](https://github.com/psf/black) from 24.3.0 to 24.4.0.
- [Release notes](https://github.com/psf/black/releases)
- [Changelog](https://github.com/psf/black/blob/main/CHANGES.md)
- [Commits](https://github.com/psf/black/compare/24.3.0...24.4.0)

---
updated-dependencies:
- dependency-name: black
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-15 16:40:37 +02:00
dependabot[bot]
670d3d3fdc build(deps): bump black in /.github/workflows/style (#43643)
Bumps [black](https://github.com/psf/black) from 24.3.0 to 24.4.0.
- [Release notes](https://github.com/psf/black/releases)
- [Changelog](https://github.com/psf/black/blob/main/CHANGES.md)
- [Commits](https://github.com/psf/black/compare/24.3.0...24.4.0)

---
updated-dependencies:
- dependency-name: black
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-15 16:40:14 +02:00
Vanessasaurus
e8aab6b31c flux-core: add v0.61.2 (#43648)
Co-authored-by: github-actions <github-actions@users.noreply.github.com>
2024-04-15 16:34:58 +02:00
Adam J. Stewart
1ce408ecc5 py-ruff: add v0.3.7 (#43620) 2024-04-15 16:33:11 +02:00
Adam J. Stewart
dc81a2dcdb py-rasterio: add v1.3.10 (#43653) 2024-04-15 16:32:34 +02:00
Rocco Meli
b10f51f020 charmpp: add archs including Cray shasta with ARM (#43191)
Co-authored-by: RMeli <RMeli@users.noreply.github.com>
2024-04-15 16:28:31 +02:00
Hariharan Devarajan
4f4e3f5607 dlio-profiler: add releases up to v0.0.5 (#43530) 2024-04-15 16:21:52 +02:00
Wouter Deconinck
00fb80e766 util-linux: new version 2.40 (#43661) 2024-04-15 16:16:52 +02:00
Michael Kuhn
057603cad8 Fix pkgconfig dependencies (#43651)
pkgconfig is the virtual package, pkg-config and pkgconf are
implementations.
2024-04-15 16:05:11 +02:00
kjrstory
5b8b6e492d su2: add v8.0.0, v8.0.1 (#43662) 2024-04-15 15:25:16 +02:00
Wouter Deconinck
763279cd61 spdlog: new version 1.13.0 (#43658) 2024-04-15 12:36:10 +02:00
Wouter Deconinck
e4237b9153 zlib: new version 1.3.1 (#43660) 2024-04-15 10:59:06 +02:00
Wouter Deconinck
d288658cf0 zstd: new version 1.5.6 (#43659) 2024-04-15 09:04:42 +02:00
Kacper Kornet
2c22ae0576 intel-oneapi-compiler: Fix generation of config files (#43654)
Commit 330a9a7c9a aimed at preventing
generation of .cfg files when a given compiler does not exist
in the particular release. However, the check does not
use the full paths, so it always fails, resulting in empty
.cfg files. This commit fixes it.
2024-04-13 19:34:58 -04:00
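
The shape of the bug, sketched with a hypothetical layout and a stand-in `write_cfg` helper:

```python
import os

def write_cfg(name):  # hypothetical stand-in for the real generator
    print(f"would write {name}.cfg")

bindir = "/opt/intel/oneapi/compiler/2024.1/bin"  # hypothetical layout
for compiler in ("icx", "icpx", "ifx"):
    # Buggy check: a bare name is resolved against the current working
    # directory, so it always failed and empty .cfg files were written:
    #     if os.path.exists(compiler): ...
    # Fixed check: test the full path inside this release's bin dir.
    if os.path.exists(os.path.join(bindir, compiler)):
        write_cfg(compiler)
```
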
eugeneswalker
fc3fc94689 gsibec: properly reference self.spec (#43627) 2024-04-13 14:21:03 -07:00
eugeneswalker
b5013c1372 dealii@9.5 +cgal requires ^cgal@5: (#43639) 2024-04-13 14:20:40 -07:00
Juan Miguel Carceller
e220674c4d sherpa: remove paths to compiler wrappers and use the provided libtool (#43611)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2024-04-13 15:24:26 -05:00
Harmen Stoppels
7f13518225 gettext: unvendor libxml (#43622) 2024-04-13 12:01:38 +02:00
Wouter Deconinck
96a13a97e6 freetype: new version 2.12 and 2.13 (#43571) 2024-04-13 11:48:48 +02:00
Chris White
6d244b3f67 remove hardcoded hipcc (#43644) 2024-04-12 19:38:15 -06:00
Chris White
6bc66db141 Axom: add new versions and bring in new changes (#43590)
* add new axom releases, sync changes between repos

* add new version of axom and add fallback depends for umpire/raja

* remove now redundant flags, add flags to cuda flags to flags output by cachedcmakepackage

Co-authored-by: white238 <white238@users.noreply.github.com>
2024-04-12 17:34:32 -06:00
James Shen
acfb2b9270 Geant4: extend patch to fix compile for 10.0-10.4 as well (#43640) 2024-04-12 18:52:22 -04:00
Frédéric Simonis
d92a2c31fb precice: Add version 3.1.1 (#43616) 2024-04-12 15:24:28 -07:00
eugeneswalker
e32561aff6 odc: add v1.4.6 (#43634) 2024-04-12 15:19:32 -07:00
Auriane R
4b0479159f pika: add pika 0.24.0 (#43624) 2024-04-12 15:18:27 -07:00
eugeneswalker
03bfd36926 w3nco %oneapi cflags: append -Wno-error=implicit-function-declaration (#43628)
* w3nco %oneapi cflags: append -Wno-error=implicit-function-declaration

* update flag for %apple-clang, %clang

* [@spackbot] updating style on behalf of eugeneswalker
2024-04-12 20:54:18 +02:00
eugeneswalker
4d30c8dce4 gsibec: add v1.2.1 (#43630) 2024-04-12 12:42:46 -06:00
Eric Berquist
49d4104f22 emacs: add 29.3 (#43626) 2024-04-12 11:10:37 -07:00
Massimiliano Culpo
07fb83b493 gcc: add more detection tests (#43613) 2024-04-12 04:03:11 -06:00
Massimiliano Culpo
263007ba81 solver: add an integrity constraint for virtual nodes (#43582)
Upon close inspection of clingo answer sets, in some cases we have "equivalent" (i.e. same hash for the concrete spec) duplicates that differ only because of virtual nodes that are added to the answer set, without any edge using them.
2024-04-12 09:31:44 +02:00
Sajid Ali
3b6e99381f add py-h5py@3.11 (#43605)
* add py-h5py@3.11
* incorporate reviewer feedback
* incorporate reviewer feedback
2024-04-11 23:23:20 -06:00
Howard Pritchard
a30af1ac54 OpenMPI: add version 5.0.3 (#43609)
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
2024-04-11 22:48:15 -06:00
Matthew Thompson
294742ab7b openmpi: add MPIFC environment variable (#36669) 2024-04-11 21:50:59 -06:00
eugeneswalker
6391559fb6 e4s ci: add: netcdf-fortran, fpm, e4s-cl (#43601) 2024-04-11 21:01:38 -06:00
Dave Keeshan
d4d4f813a9 verible: Add version v0.0-3624-gd256d779 (#43604) 2024-04-11 20:56:00 -06:00
Dave Keeshan
4667163dc4 Add version 5.024 (#43603) 2024-04-11 20:45:06 -06:00
Dave Keeshan
439f105285 Add versions 0.39 and 0.40 (#43600) 2024-04-11 20:44:51 -06:00
eugeneswalker
f65b1fd7b6 dealii +cuda: conflicts with ^cuda@12: (#43599) 2024-04-11 19:43:32 -06:00
Radim Janalík
d23e06c27e Allow packages to be pushed to build cache after install from source (#42423)
This commit adds a property `autopush` to mirrors. When true, every source build is immediately followed by a push to the build cache. This is useful in ephemeral environments such as CI / containers.

To enable autopush on existing build caches, use `spack mirror set --autopush <name>`. The same flag can be used in `spack mirror add`.
2024-04-11 19:43:13 -06:00
Alberto Sartori
b76e9a887b justbuild: bump version 1.2.5 (#43592) 2024-04-11 18:51:03 -06:00
Filippo Spiga
55ffd439ce JUBE: add v2.6.x (#41272)
* Adding JUBE 2.6.0
* They quickly released a bugfix package
* Correct the version sha256 for v2.6.1

Co-authored-by: Rocco Meli <r.meli@bluemail.ch>

---------

Co-authored-by: fspiga <fspiga@fc01-gg01.cm.cluster>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: Rocco Meli <r.meli@bluemail.ch>
2024-04-11 16:01:43 -06:00
Thomas Green
d8a7b88e7b Create Castep package (#41230)
* Create package.py
* Update package.py
  Post review fixes.
* Style fixes.
2024-04-11 14:49:03 -07:00
Paul Kuberry
aaa1bb1d98 suite-sparse: refactor cmake args (#43386)
* Simplify config command and add BLAS/LAPACK location
* Use BLAS_ROOT and LAPACK_ROOT and disable use of system
  package registry
* Adds location of BLAS_LIBRARIES and LAPACK_LIBRARIES to CMake
* Adds CMake variables to prevent picking up system installations
  of BLAS/LAPACK. Fixes previous PR #43328 that was picking up
  incorrect installations
* Adds versions 7.2.1
2024-04-11 14:06:57 -07:00
Martin Aumüller
0d94b8044b libzip: add up to v1.10.1 (#43560)
* libzip: add up to v1.10.1
  - update homepage and change download url to GitHub
  - change build system to CMake for releases starting with 1.4
* [@spackbot] updating style on behalf of aumuell
* libzip: fix urls
* [@spackbot] updating style on behalf of aumuell
* libzip: do not add versions from libzip.org
  these are old, and urllib refuses to fetch them
* libzip: deprecate versions from libzip.org
  urllib refuses to fetch them, only curl would work

---------

Co-authored-by: aumuell <aumuell@users.noreply.github.com>
2024-04-11 14:51:02 -06:00
Wouter Deconinck
5a52780f7c acts: new version 33.1.0 (#43581)
* acts: new version 33.1.0
* actsvg: new version 0.4.41
* geomodel: new package
* [@spackbot] updating style on behalf of wdconinc
* acts: plugin_cmake_variant(geomodel)
* geomodel: with when(+visualization)

---------

Co-authored-by: wdconinc <wdconinc@users.noreply.github.com>
2024-04-11 13:32:10 -06:00
Tristan Carel
dd0a8452ee py-simpleeval: add version 0.9.13 (#43568) 2024-04-11 12:19:30 -07:00
jalcaraz
c467bba73e TAU package: Include recent change for Ubuntu (#43572)
* Include recent change for Ubuntu
  Select option -disable-no-pie-on-ubuntu for some Ubuntu systems
  823971df01
* Added conflict for new variant
* Updated conflict version
* Added mention of Ubuntu to variant description
2024-04-11 12:06:34 -07:00
James Taliaferro
d680a0cb99 New package: timew (#43585)
* add timewarrior
* fix style checks
* fix style checks
2024-04-11 11:58:08 -07:00
Luc Berger
efadee26ef Kokkos Ecosystem: 4.3.00 (#43607)
* Kokkos Kernels: adding missing TPLs and pre-conditions
  Adding variants and dependencies for rocBLAS and rocSPARSE.
  Also adding a "when=" close to the TPL variants that prevents
  enabling the TPLs in versions of the library when it was not
  yet available.
* Kokkos Kernels: remove comment for better format
* Kokkos Kernels: adding cusolver and rocsolver at version 4.3.00
* Kokkos Ecosystem: updating packages for release 4.3.00
* Kokkos: adding arch for SG2042
* Removing sg2042 from spack_micro_arch_map
  Removing it here and will work to add it in the proper generic spack location, likely:   `spack/lib/spack/external/archspec/json/cpu/microarchitectures.json` ?
2024-04-11 12:41:28 -06:00
Greg Becker
2077b3a006 invalid compiler: warn instead of error (#43491) 2024-04-11 20:39:27 +02:00
Adam J. Stewart
8e0c659b51 py-cartopy: add v0.23.0 (#43583)
* py-cartopy: add v0.23.0
* numpy 2 support added
2024-04-11 11:11:43 -07:00
Derek Ryan Strong
863ab5a597 opus: update package (#43587) 2024-04-11 10:58:27 -07:00
Derek Ryan Strong
db4e76ab27 cpio: add 2.15 (#43589) 2024-04-11 10:55:08 -07:00
Adam J. Stewart
6728a46a84 py-keras: add v3.2.1 (#43594) 2024-04-11 10:37:48 -07:00
Adam J. Stewart
5a09459dd5 py-pandas: add v2.2.2 (#43596) 2024-04-11 10:29:43 -07:00
James Taliaferro
7e14ff806a add taskwarrior 3.0.0 (#43580)
* add taskwarrior 3.0.0

* blacken
2024-04-11 04:59:25 -06:00
Wouter Deconinck
7e88cf795c wayland-protocols: new versions 1.34 (#43577) 2024-04-11 04:29:41 -06:00
Wouter Deconinck
1536e3d422 qt-base: depends_on cmake@3.21: when ~shared or platform=darwin (#43576) 2024-04-11 04:29:15 -06:00
Massimiliano Culpo
1fe8e63481 Reuse specs built with compilers not in config (#43539)
Allow reuse of specs that were built with compilers not in the current configuration. This means that specs from build caches don't need to have a matching compiler locally to be reused. Similarly when updating a distro. If a node needs to be built, only available compilers will be considered as candidates.
2024-04-11 09:13:24 +02:00
YI Zeping
dfca2c285e Packages: llvm cxx flag remove, new versions, and binutils build issue fix (#43567)
* add c++11 header to gold for compiler not defaulting to c++11

* glibc: add 2.39

* llvm: add 18.1.3

* fix #42314, remove cxx11 flag for llvm; should be controlled by cmake.

* modify patch

* llvm version

* add gmake version request
2024-04-10 23:04:58 -04:00
Martin Aumüller
2686f778fa qt-*: add v6.6.2, v6.6.3, and v6.7.0 (#43559)
* qt-base: add v6.6.2

* qt-declarative: add v6.6.2

* qt-quick3d: add v6.6.2

* qt-quicktimeline: add v6.6.2

* qt-shadertools: add v6.6.2

* qt-svg: add v6.6.2

* qt-base: add v6.7.0

* qt-svg: add v6.7.0

* qt-declarative: add v6.7.0

* qt-quick3d: add v6.7.0

* qt-quicktimeline: add v6.7.0

* qt-shadertools: add v6.7.0

* qt-*: add v6.6.3

---------

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
2024-04-10 16:30:09 -05:00
Wouter Deconinck
925e9c73b1 libpthread-stubs: new version 0.5 (#43574) 2024-04-10 13:29:45 -06:00
one
aba447e885 Add new versions for toml11 (#43469)
* Add new versions for toml11
  Added 3.8.0 and 3.8.1
* Update package.py, add cxx_std
* [@spackbot] updating style on behalf of alephpiece

---------

Co-authored-by: alephpiece <alephpiece@users.noreply.github.com>
2024-04-10 12:29:11 -07:00
Richard Berger
1113de0dad flecsi+legion: add cr versions FleCSI depended on in past releases (#43499)
* flecsi+legion: add cr versions FleCSI depended on in past releases
* flecsi: deprecate develop version
2024-04-10 12:26:17 -07:00
Wouter Deconinck
4110225166 libdrm: new version 2.4.120 (#43573) 2024-04-10 13:24:45 -06:00
Adam J. Stewart
24c839c837 py-scikit-learn: add v1.4.2 (#43557) 2024-04-10 12:20:18 -07:00
Martin Aumüller
42c6a6b189 ospray: add v3.1.0 and dependencies (#43558)
* rkcommon: add v1.13.0
* openvkl: add v2.0.1
* openimagedenoise: add v2.2.2
* ospray: add v3.1.0
2024-04-10 12:13:23 -07:00
Martin Aumüller
b0ea1c6f24 botan: add v3.4.0 (#43561) 2024-04-10 11:55:52 -07:00
Vanessasaurus
735102eb2b Automated deployment to update package flux-sched 2024-04-10 (#43566)
Co-authored-by: github-actions <github-actions@users.noreply.github.com>
2024-04-10 11:54:49 -07:00
Vanessasaurus
2e3cdb349b Automated deployment to update package flux-core 2024-04-10 (#43565)
Co-authored-by: github-actions <github-actions@users.noreply.github.com>
2024-04-10 11:54:20 -07:00
Seth R. Johnson
05c8030119 swig: update symlink alias to appease cmake (#43271) 2024-04-10 11:50:16 -07:00
Adam J. Stewart
bbcd4224fa py-matplotlib: add v3.8.4 (#43487)
* py-matplotlib: add v3.8.4

* matplotlib requires exact version of freetype for tests to pass

* Add version constraints to fontconfig deps

* Fix freetype build

* Freetype cmake build is useless

* Typo

* Fix download

* Fix build of older freetype

* cmake is useless
2024-04-10 18:20:42 +02:00
Adam J. Stewart
4c0cdb99b3 py-scipy: add v1.12 and v1.13 (#42213) 2024-04-10 17:56:21 +02:00
Axel Huebl
f22d009c6d pyAMReX: No CCache (#43570)
This interferes with Spack compiler wrappers.
2024-04-10 09:02:37 -06:00
Axel Huebl
c5a3e36ad0 Update pyAMReX: 24.03, 24.04 (#42858)
Latest version of pyAMReX.
2024-04-10 06:33:32 -06:00
Hariharan Devarajan
1c76ba1c3e Release for brahma 0.0.3 (#43525)
* Release for brahma 0.0.3
* switch the version directive order
2024-04-09 12:33:29 -06:00
Hariharan Devarajan
b969f739bd cpp-logger: add v0.0.3 (#43524)
* added cpp-logger 0.0.3
* switched version directive order
2024-04-09 12:27:41 -06:00
one
4788c4774c Add a new version for cxxopts (#43470)
Added 3.2.0
2024-04-09 10:36:05 -07:00
Wouter Deconinck
34de028dbc cppzmq: new version 4.10.0 (#43526) 2024-04-09 10:14:13 -07:00
Hariharan Devarajan
a69254fd79 release for gotcha 1.0.6 (#43531) 2024-04-09 10:10:32 -07:00
Adam J. Stewart
af5f205759 py-keras: add v3.2.0 (#43540) 2024-04-09 10:09:15 -07:00
HELICS-bot
77f9100a59 helics: Add version 3.5.2 (#43541)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2024-04-09 10:08:19 -07:00
Derek Ryan Strong
386bb71392 flac: update versions (#43544) 2024-04-09 10:07:15 -07:00
Wouter Deconinck
0676d6457f yoda: new version 2.0.0 (#43534)
* yoda: new versions 1.9.9, 1.9.10, and 2.0.0
* [@spackbot] updating style on behalf of wdconinc

---------

Co-authored-by: wdconinc <wdconinc@users.noreply.github.com>
2024-04-09 10:04:20 -07:00
Martin Lang
0b80e36867 [libxc] new homepage (#43546)
The old page on tddft.org is no longer reachable.
2024-04-09 09:59:47 -07:00
Stephen Sachs
4c9816f10c [mpas-model] Only add options for double precision when requested (#43547)
As in the original makefile, "FFLAGS_PROMOTION = -fdefault-real-8
-fdefault-double-8" is only used when `precision=double`. This is the default
for the Spack package, so no change if `precision` is left unset.
2024-04-09 09:56:48 -07:00
psakievich
fb6741cf85 Trilinos: More accurate stk boost dependency (#43550)
Boost was not required as of `@13.4.0`
2024-04-09 06:49:06 -06:00
Kensuke WATANABE
3f2fa256fc LLVM: avoid Fujitsu compiler build fail in llvm17-18 (#43387)
* Avoid Fujitsu compiler Clang Mode options when building LLVM

* LLVM: avoid Fujitsu compiler build fail in llvm17-18

* address review comments
2024-04-08 19:56:30 -04:00
John W. Parent
d5c8864942 Windows bugfix: safe rename if renaming file onto itself (#43456)
* Generally use os.replace on Windows and Linux
* Windows behavior for os.replace differs when the destination exists
  and is a symlink to a directory: on Linux the dst is replaced and
  on Windows this fails - this PR makes Windows behave like Linux
  (by deleting the dst before doing the rename unless src and dst
  are the same)
2024-04-08 14:10:02 -07:00
Luc Berger
b3cef1072d Nalu: updating Trilinos recipe a bit (#43471)
* Nalu: updating Trilinos recipe a bit

Basic changes to build/install nalu properly using Spack.
Some more changes would be nice, for instance adding an
option to build against Trilinos master or develop. Adding
a dependency on googletest to avoid the annoying build
failures in the unit-tests.

* Nalu: adding release 1.6.0

Nalu v1.6.0 can build cleanly against Trilinos 14.0.0 with the
proposed changes. The only other combo is master / master, but that
one is "floating" as these branches evolve over time. When a
new Nalu comes out we might want to add another fixed version to
keep this recipe up to date!
2024-04-08 10:39:51 -06:00
Wouter Deconinck
e8ae9a403c acts: depends_on py-onnxruntime when +onnx for @23.3: (#43529) 2024-04-08 14:13:17 +02:00
Wouter Deconinck
1a8ef161c8 fastjet: new multi-valued variant plugins (#43523)
* fastjet: new multi-valued variant `plugins`

* rivet: depends_on fastjet plugins=cxx
2024-04-08 14:12:12 +02:00
Harmen Stoppels
d3913938bc py-tatsu: add upperbound on python (#43510) 2024-04-08 11:26:46 +02:00
Harmen Stoppels
4179880fe6 py-pymatgen: add forward compat bound for cython (#43511) 2024-04-08 11:26:09 +02:00
Harmen Stoppels
125dd0368e py-triton: add zlib (#43512) 2024-04-08 11:25:33 +02:00
Harmen Stoppels
fd68f8916c gperftools: add cmake build system (#43506)
The autotools build system does something funky which causes a link line
where gcc's default link dirs are explicitly added and end up before the
-L from Spack's libunwind, so that ultimately it links against the system
libunwind.

The cmake build system does better.
2024-04-08 10:00:05 +02:00
Jonas Eschle
93e6f5fa4e Update jax & jaxlib versions (#42863)
* upgrade new versions

* style fix

* update jaxlib deps (not cuda and bazel yet)

* update jaxlib cuda versions

* update jaxlib cuda versions

* update jaxlib cuda versions

* chore: style fix

* Update package.py

* Update package.py

* fix:  typo

* docs: add source for cuda version

* py-jaxlib 0.4.14 also doesn't build on ppc64le

* Add 0.4.26

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2024-04-07 12:04:23 +02:00
Robert Cohn
54acda3f11 oneapi licenses (#43451) 2024-04-06 08:04:04 -04:00
kwryankrattiger
663e20fcc4 ParaView: add v5.12.0 (#42943)
* ParaView: Update version 5.12.0

Add 5.12.0 release
Update default to 5.12.0

* Add patch for building ParaView 5.12 with kits

* Drop VTKm from neoverse
2024-04-06 04:12:48 +00:00
eugeneswalker
6428132ebb e4s ci: enable lammps variants from presets/most.cmake (#43522) 2024-04-05 20:56:18 -07:00
eugeneswalker
171958cf09 py-deephyper: add v0.6.0 (#43492)
* py-deephyper: add latest version: v0.6.0

* e4s: add py-deephyper

* v0.6.0: depend on python@3.7:3.11

* add py-packaging constraint so arm64 builds work

* [@spackbot] updating style on behalf of eugeneswalker
2024-04-06 00:28:37 +00:00
Frédéric Simonis
0d0f7ab030 Add release 3.1.0 (#43508) 2024-04-05 16:58:57 -07:00
eugeneswalker
35f8b43a54 e4s ci: add nekbone (#43515)
* e4s ci: add nekbone, nek5000

* remove nek5000
2024-04-05 16:36:13 -07:00
one
6f7eb3750c Add new versions of tinyxml2 (#43467)
* Add new versions of tinyxml2
  Added 7.0.0 to 10.0.0
* Add the variant "shared"
2024-04-05 13:36:45 -07:00
renjithravindrankannath
2121eb31ba Patch to set PARAMETERS_MIN_ALIGNMENT to the native alignment for rocm-opencl (#43444)
* For avx build, the start address of values_ buffer in KernelParameters
   is not correct, as it is computed based on 16-byte alignment.
* Style check error fix
2024-04-05 12:02:32 -07:00
kwryankrattiger
c68d739825 CI: Add debug to the log aggregation script (#42562)
* CI: Add debug to the log aggregation script
2024-04-05 14:00:27 -05:00
John W. Parent
c468697b35 Use correct method "append" instead of extend (#43514) 2024-04-05 18:46:47 +00:00
G-Ragghianti
c4094cf051 slate: Adding comgr as dependency (#43448)
* Adding comgr as dependency

* adding more smoke test deps
2024-04-05 11:32:16 -07:00
LinaMuryanto
9ff9ca61e6 py-amq, py-celery, py-kombu: New versions, fix build (#43295)
* updating package.py for py-celery, py-kombu, py-amq
* added more py-kombu package versions
* fix copyrights and stype on py-kombu/package.py
* removed extra spaces
* added py-billiard 4.2.0 and added back the license('BSD-3-Clause')
* removed extra spaces in py-celery/package.py
* fixed py-amqp 2.4.0 sha; fixed py-celery's dependency of py-click (when version restrictions)
* more clean up on specifying version bounds
2024-04-05 11:14:18 -07:00
Massimiliano Culpo
826e0c0405 Improve hit-rate on buildcaches (#43272)
* Relax compiler and target mismatches

The mismatch occurs on an edge. Previously it was assigned
the parent priority, now it is assigned the child priority.

This should make reuse from buildcaches or store more likely,
since most mismatches will be counted with "reused" priority.

* Optimize version badness for runtimes at very low priority

We don't want to e.g. switch other attributes because we
cannot reuse an old installed runtime.

* Optimize runtime attributes at very low priority

This is such that the version of the runtime would
not influence whether we should reuse a spec.

Compiler mismatches are considered for runtimes,
to avoid situations where compiling foo%gcc@9
brings in gcc-runtime%gcc@13 if gcc@13 is among
the available compilers

* Exclude specs without runtimes from reuse

This should ensure that we do not reuse specs that
could be broken, as they expect the compiler to be
installed in a specific place.
2024-04-05 20:10:28 +02:00
Andrey Perestoronin
1b86a842ea add itac and inspector packages (#43507) 2024-04-05 09:30:36 -04:00
Harmen Stoppels
558a28bf52 bazel: conflict with gcc 13 (#43504) 2024-04-05 14:24:06 +02:00
Harmen Stoppels
411576e1fa Do not acquire a write lock on the env post install if no views (#43505) 2024-04-05 12:31:21 +02:00
eugeneswalker
cab4f92960 datatransferkit: needs trilinos@14.2: for @3.1.0: (#43496) 2024-04-05 03:03:15 -06:00
Adam J. Stewart
c6c13f6782 GDAL: add v3.8.5 (#43493) 2024-04-04 21:29:28 -06:00
Daniele Cesarini
cf11fab5ad Added Libfort library (#43490) 2024-04-04 21:14:27 -06:00
Harmen Stoppels
1d8b35c840 installer.py: compute package_id from spec (#43485)
The installer runs `get_dependent_ids`, which follows edges outside the
subdag that's being installed, so it returns a superset of the actual
dependents.

That's generally fine, except that it calls `s.package` on every
dependent, which triggers a package class to be instantiated, which is a
lot of work.

Instead, compute the package id from the spec, since that's all that's
used anyways and does not trigger *lots* of slow and redundant
instantiations of package objects.
2024-04-04 20:39:30 -06:00
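
A sketch of the optimization (the id format is illustrative; the point is that it needs only fields already on the spec):

```python
from collections import namedtuple

# Stand-in for a concrete Spec: only the fields the id needs.
FakeSpec = namedtuple("FakeSpec", ["name", "version", "hash"])

def package_id(spec):
    # Uses only data already on the spec -- unlike calling spec.package,
    # this never instantiates a package class per dependent.
    return f"{spec.name}-{spec.version}-{spec.hash}"

print(package_id(FakeSpec("zlib", "1.3.1", "abcdef")))  # zlib-1.3.1-abcdef
```
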
Weiqun Zhang
5dc46a976d amrex: add v24.04 (#43443) 2024-04-04 19:00:44 -07:00
Alex Richert
05f5596cdd Update grib-util recipe (#43484) 2024-04-04 15:07:05 -07:00
psakievich
6942c7f35b Update exawind packages (#40793) 2024-04-04 12:54:20 -06:00
Alex Richert
18f0ac0f94 Add g2@3.4.9 (#43481) 2024-04-04 11:50:08 -07:00
Vicente Bolea
d9196ee3f8 adios2: bump version 2.10.0 (#43479) 2024-04-04 13:46:40 -05:00
John W. Parent
ef0bb6fe6b Msvc: Determine OneAPI_ROOT from fc compiler path (#43131)
If ONEAPI_ROOT is not set as an environment variable, the current approach will raise an error.
Instead we can compute OneAPI_ROOT from the compiler paths, like we do with vcvarsall.
2024-04-04 11:14:44 -07:00
Alex Richert
3fed320013 Add MPI and arch bugfixes to SCOTCH (#39264)
* Add MPI and arch bugfixes to SCOTCH

* Update scotch/package.py
2024-04-04 12:48:39 -05:00
Chris Marsh
1aa77e695d Trilinos: add threadsafe variant (#43480)
* Fixes #43454 by exposing a threadsafe variant

* [@spackbot] updating style on behalf of Chrismarsh

* fix style

---------

Co-authored-by: Chrismarsh <Chrismarsh@users.noreply.github.com>
2024-04-04 10:00:53 -07:00
Alex Richert
3a0efeecf1 add g2c@1.9.0 (#43482) 2024-04-04 09:56:19 -07:00
Alex Richert
5ffb5657c9 update g2tmpl recipe (#43483) 2024-04-04 09:56:09 -07:00
Alex Richert
2b3e7fd10a Add shared variant for fftw to allow static-only builds (#37897)
Co-authored-by: alexrichert <alexrichert@gmail.com>
2024-04-04 03:47:46 -06:00
James Smillie
cb315e18f0 py-pip package: enable install on Windows (#43203)
The installation mechanism used on Linux to install py-pip (using pip
from the downloaded wheel to install the wheel) does not work on Windows.

This updates the installation of py-pip on Windows to download and
use a zipapp of a specific pip version in order to install the wheel
pip version that is requested.
2024-04-03 23:03:56 -06:00
Jonas Eschle
10c637aca0 py-zfit: add v0.18.* (#43200)
Co-authored-by: Valentin Volkl <valentin.volkl@cern.ch>
2024-04-04 02:55:00 +02:00
Greg Becker
fb4e1cad45 remove dpcpp compiler and package (#43418)
`dpcpp` is deprecated by Intel and has been superseded by `oneapi` compilers for a very long time.

---------

Co-authored-by: becker33 <becker33@users.noreply.github.com>
2024-04-03 15:34:23 -07:00
Adam J. Stewart
3054b71e2e py-rarfile: add v4.2 (#43477) 2024-04-03 15:08:36 -07:00
downloadico
47163f7435 py-cig-pythia: add missing py-setuptools dependency (#43473) 2024-04-03 15:02:32 -07:00
Dom Heinzeller
e322a8382f py-torch: Add variant 'internal-protobuf' to build with the internal protobuf (#43056) 2024-04-03 15:57:54 -06:00
Greg Sjaardema
53fb4795ca Seacas exodusii 04 2024 (#43468)
* SEACAS: Update package.py to handle new SEACAS project name
  The base project name for the SEACAS project has changed from
  "SEACASProj" to "SEACAS" as of @2022-10-14, so the package
  needed to be updated to use the new project name when needed.

  The refactor also changes several:
      "-DSome_CMAKE_Option:BOOL=ON"
  to
     define("Some_CMAKE_Option", True)
* SEACAS, EXODUSII: New version; deprecate older versions; better variant descriptions
* [@spackbot] updating style on behalf of gsjaardema
* Fix long lines reported by flake8

---------

Co-authored-by: gsjaardema <gsjaardema@users.noreply.github.com>
2024-04-03 15:46:57 -06:00
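The `define` helper referenced above is part of Spack's CMake build system support; a minimal sketch of the pattern:

```python
def cmake_args(self):
    return [
        # equivalent to "-DSome_CMAKE_Option:BOOL=ON"
        self.define("Some_CMAKE_Option", True),
    ]
```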
eugeneswalker
4517c7fa9b ginkgo@1.7 %oneapi@2024.1: icpx 2024.1 no longer accepts sycl::ext::intel::ctz (#43476) 2024-04-03 15:46:42 -06:00
Wouter Deconinck
efaed17f91 root: new version 6.30.06 (#43472) 2024-04-03 15:41:41 -06:00
Thomas Madlener
2c17cd365d Make it possible to build whizard from a git checkout (#43447) 2024-04-03 21:55:57 +02:00
psakievich
dfe537f688 Convert curl env mod method to a side effect (#43474) 2024-04-03 12:02:48 -07:00
G-Ragghianti
be0002b460 slate: Removing scalapack as test dependency, adding python (#43452)
* removing scalapack as test dependency, adding python
* fixing style
* style
2024-04-03 11:20:03 -07:00
Daryl W. Grunau
743ee5f3de eospac: expose versions 6.5.8 and 6.5.9 (#43450)
Co-authored-by: Daryl W. Grunau <dwg@lanl.gov>
2024-04-03 11:17:15 -07:00
Cameron Smith
b6caf0156f simmetrix-simmodsuite: support RHEL8, fix module paths (#43455) 2024-04-03 11:07:43 -07:00
Christoph Junghans
ec00ffc244 byfl: fix llvm dep (#43460) 2024-04-03 10:39:06 -07:00
G-Ragghianti
f020256b9f magma add version 2.8.0 (#43417)
* Add release 2.8.0
* Changing C compiler to hipcc
* Final release version
* Adding new cmake definition for rocm support
* Enabling rocm version support
* Update sha256
* Updating website URL
* Removing unnecessary C compiler spec
* Adding rocm-core dependency
* fixing rocm-core version
* fixing rocm-core version
* fixing style
* bugfix
2024-04-03 11:25:46 -06:00
Peter Brady
04377e39e0 New package: Parthenon (#43426) 2024-04-03 10:05:02 -07:00
Vanessasaurus
ba2703fea6 flux-core: add v0.61.0 (#43465)
Co-authored-by: github-actions <github-actions@users.noreply.github.com>
2024-04-03 12:07:32 +02:00
Adrien Bernede
92b1c8f763 RADIUSS packages update (Starting over #39613) (#41375) 2024-04-02 15:03:07 -07:00
Adam J. Stewart
2b29ecd9b6 py-pillow: add v10+ (#43441) 2024-04-02 14:46:21 -07:00
Adam J. Stewart
5b43bf1b58 py-scikit-image: add v0.21 and v0.22 (#43440)
* py-scikit-image: add v0.21 and v0.22
* Add maintainer and license
* Style fix
2024-04-02 14:41:29 -07:00
Juan Miguel Carceller
37d9770e02 gdb: add a dependency on pkgconfig (#43439)
* gdb: add a dependency on pkgconfig
* Apply dependency for 13.1 and onwards

---------

Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2024-04-02 14:39:03 -07:00
Brad Geltz
0e016ba6f5 geopm-runtime: New package (#42737)
* Add systemd

Signed-off-by: Brad Geltz <brad.geltz@intel.com>

* gobject-introspection: Correct glib versions

- The meson.build requirement that the glib version
  is >= the gobject-introspective version is not in place
  until v1.76.1.
- Prior to that, the requirement was glib >= 2.58.0.
- Bug introduced in acbf0d99c4, PR #42222.

Signed-off-by: Brad Geltz <brad.geltz@intel.com>

* util-linux: add v2.39.3

Signed-off-by: Brad Geltz <brad.geltz@intel.com>

* py-natsort: add new versions

Signed-off-by: Brad Geltz <brad.geltz@intel.com>

* geopm-service: default systemd support to true

- Make the dependency sticky to force a failure
  if systemd compilation fails, or force
  the user to disable the option.

Signed-off-by: Brad Geltz <brad.geltz@intel.com>

* geopm-service: Add initial multi-architecture support

- Restrict arch conflicts to 3.0.1
- Disable cpuid at configure time on non-x86_64 platforms.

Signed-off-by: Brad Geltz <brad.geltz@intel.com>

* geopm-service: update docstrings

Signed-off-by: Brad Geltz <brad.geltz@intel.com>

* Add py-geopmdpy

Signed-off-by: Brad Geltz <brad.geltz@intel.com>

* Add geopm-runtime recipe

Signed-off-by: Brad Geltz <brad.geltz@intel.com>

---------

Signed-off-by: Brad Geltz <brad.geltz@intel.com>
2024-04-02 17:27:36 +02:00
psakievich
7afa949da1 Add handling of custom ssl certs in urllib ops (#42953)
This PR allows the user to specify a path to a custom cert file (or directory) in
Spack's config:

```yaml
  # This is where custom certs for proxy/firewall are stored.
  # It can be a path or environment variable. To match ssl env configuration
  # the default is the environment variable SSL_CERT_FILE
  ssl_certs: $SSL_CERT_FILE
```

`config:ssl_certs` can be a path to a file or a directory, or it can be an environment
variable that resolves to one of those. When it points to something valid, Spack will
update the ssl context to include custom certs, and fetching via `urllib` and `curl`
will trust the provided certs.

This should resolve many issues with fetching behind corporate firewalls.


---------

Co-authored-by: psakievich <psakievich@users.noreply.github.com>
Co-authored-by: Alec Scott <alec@bcs.sh>
2024-04-01 11:11:13 -07:00
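A minimal sketch of the described behavior using the stdlib `ssl` module; the function name and env-var expansion step are illustrative assumptions:

```python
import os
import ssl

def ssl_context_from_config(ssl_certs: str) -> ssl.SSLContext:
    path = os.path.expandvars(ssl_certs)  # e.g. "$SSL_CERT_FILE" -> a real path
    if os.path.isfile(path):
        return ssl.create_default_context(cafile=path)
    if os.path.isdir(path):
        return ssl.create_default_context(capath=path)
    return ssl.create_default_context()  # fall back to system defaults
```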
Jose E. Roman
b81d7d0aac slepc, py-slepc4py, petsc, py-petsc4py add v3.21.0 (#43435)
* New release SLEPc 3.21

* petsc, py-petsc4py  add v3.21.0

* [@spackbot] updating style on behalf of joseeroman

---------

Co-authored-by: Satish Balay <balay@mcs.anl.gov>
Co-authored-by: joseeroman <joseeroman@users.noreply.github.com>
2024-03-31 08:57:44 -07:00
Peter Scheibel
e78484f501 Concretize when_possible: add failure detection and explicit message (#43202)
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2024-03-31 14:02:09 +02:00
Adam J. Stewart
6fd43b4e75 PyTorch: update ecosystem (#43423) 2024-03-30 06:47:22 -06:00
G-Ragghianti
14edb55288 SLATE package: fix smoke test (#43425) 2024-03-29 17:17:48 -07:00
Juan Miguel Carceller
f062f1c5b3 ROOT package: add patch for builds with libAfterImage for MacOS (#43428) 2024-03-29 16:48:59 -07:00
eugeneswalker
7756c8f4fc ci devtools manylinux2014: update ci image with compatible gpg (#43421) 2024-03-29 16:12:55 -06:00
eugeneswalker
69c8a9e4ba veloc@1.7: depend on er@0.4: (#43433) 2024-03-29 12:54:27 -07:00
Todd Gamblin
47c0736952 xz: add comment to avoid 5.6 pending CVE resolution (#43432)
XZ is compromised; add a note for maintainers to avoid updating until we
have a release without the CVE.
2024-03-29 12:03:13 -06:00
kwryankrattiger
8b89287084 CI Reproducer on Metal (#43411)
* MacOS image remove requires override syntax

* Metal reproducer auto start and cross-platform
2024-03-29 12:32:54 -05:00
Julien Cortial
8bd6283b52 med: add v4.1.1, and v5.0.0, update package recipe (#43409) 2024-03-29 18:27:33 +01:00
Peter Scheibel
179e4f3ad1 Don't delete "spack develop" build artifacts after install (#43424)
After #41373, where we stopped considering the source directory to be the stage for develop builds,
we resumed *deleting* the stage even after a successful build.

We don't want this for develop builds because developers need to iterate; we should keep the artifacts
unless they explicitly run `spack clean`.  

Now:
- [x] Build artifacts for develop packages are not removed after a successful install
- [x] They are also not removed before an install starts, i.e. develop packages always 
      reuse prior artifacts, if available.
- [x] They can be deleted in any other context, e.g. by running  `spack clean --stage`
2024-03-29 09:36:31 -07:00
kwryankrattiger
e97787691b force oneapi compiler unless specified otherwise (#43419) 2024-03-29 09:20:26 -07:00
kjrstory
5932ee901c openradioss: add DEXEC_NAME to cmake variable and change style of cmake_args (#43365) 2024-03-29 10:45:54 +01:00
afzpatel
3bdebeba3c UCX: Add patch to set HIP_PLATFORM_AMD (#43403) 2024-03-29 10:41:45 +01:00
Massimiliano Culpo
d390ee1902 spack load: remove --only argument (#42120)
The argument was deprecated in v0.21 and slated
for removal in v0.22.
2024-03-29 10:19:10 +01:00
AMD Toolchain Support
4f9fe6f9bf Adding 'logging' and 'tracing' variants to enable AOCL DTL trace and logging capabilities (#43414) 2024-03-29 00:58:31 -06:00
Richard Berger
df6d6d9b5c flecsi: depend on legion@24.03.0: moving forward (#43410)
Prior FleCSI releases relied on commits on the control-replication branch (cr)
of Legion. That branch has now been merged into master and is part of the
24.03.0 Legion release.
2024-03-29 00:53:36 -06:00
Luc Berger
e57d33b29f Trilinos: update SuperLU dependency (#43346) 2024-03-29 00:48:27 -06:00
Wouter Deconinck
85c6d6dbab (py-)onnx: new version 1.14.{0,1}, 1.15.0 (#41877)
* (py-)onnx: new version 1.14.{0,1}, 1.15.0

Notes on `onnx`:
- The C++ standard was changed to 14 in 1.15, so filter_file is no longer needed. (The C++ standard has since changed to 17 in master.)

Notes on `py-onnx`:
- `py-pybind11` was an unlisted requirement in CMakeLists.txt since 1.3 or so (before earliest spack package).

* py-onnx: depends_on pybind11 with type link, not run

* py-onnx: depends_on py-setuptools@64:
2024-03-29 00:33:27 -06:00
Kyle Knoepfel
5f9228746e Add ability to rename environments (#43296) 2024-03-28 15:15:04 -06:00
Adam J. Stewart
9f2451ddff py-jaxlib: ppc64le support has been fixed (#43422) 2024-03-28 21:15:17 +01:00
Tristan Carel
a05eb11b7b steps: add version 5.0.1 (#43360) 2024-03-28 17:43:10 +01:00
kwryankrattiger
ae2d0ff1cd CI: fail the rebuild command if buildcache push failed (#40045) 2024-03-28 17:02:41 +01:00
Greg Becker
7e906ced75 spack find: add options for local/upstream only (#42999)
Users requested an option to filter between local/upstream results in `spack find` output.

```
# default behavior, same as without --install-tree argument
$ spack find --install-tree all

# show only local results
$ spack find --install-tree local  

# show results from all upstreams
$ spack find --install-tree upstream 

# show results from a particular upstream or the local install_tree
$ spack find --install-tree /path/to/install/tree/root
```

---------

Co-authored-by: becker33 <becker33@users.noreply.github.com>
2024-03-28 10:00:55 -05:00
Andrey Perestoronin
647e89f6bc intel-oneapi 2024.1.0: added new version to packages (#43375)
* added new package versions

* add itac and inspector

* Remove ITAC and Inspector

* Set older version for MKL as a workaround to pass CI issue
2024-03-28 08:09:37 -06:00
Anderson Chauphan
3239c29fb0 trilinos: add v15.1.1 (#42996) 2024-03-28 10:08:28 -04:00
Martin Aumüller
abced0e87d openscenegraph: patch for compatibility with ffmpeg@5: (#43051)
Co-authored-by: aumuell <aumuell@users.noreply.github.com>
2024-03-28 11:21:47 +01:00
liam-o-marsh
300fc2ee42 scipy: register conflict with too-recent openblas (#43301) 2024-03-28 11:03:45 +01:00
Christopher Christofi
13c4258e54 py-folium: add new package with version 0.16.0 (#43352) 2024-03-28 10:50:37 +01:00
Sergey Kosukhin
f29cb7f953 zlib-ng: %nvhpc: Fix build with %nvhpc (similar to zlib) (#43095)
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
2024-03-28 10:41:10 +01:00
Ryan Honeyager
826b8f25c5 hdf5: fixes floating point exceptions generated when fpe trapping is enabled (#42880) 2024-03-28 10:34:10 +01:00
Cody Balos
ebaeea7820 sundials: add new version (#43008)
* sundials: add new version
* note previous default
* update when clause for removed options

---------

Co-authored-by: David J. Gardner <gardner48@llnl.gov>
2024-03-28 10:12:49 +01:00
Kensuke WATANABE
f76eb993aa pixman: avoid assembler macros error with Fujitsu compiler (#43362) 2024-03-28 10:10:59 +01:00
Tristan Carel
0b2c370a83 py-pytest-mpi: new package starting at 0.6 (#43368) 2024-03-28 10:01:01 +01:00
Tristan Carel
6a9ee480bf py-vascpy: new package starting at 0.1.1 (#43370) 2024-03-28 09:58:43 +01:00
potter-s
cc80d52b62 aws-cli-2: restrict supported Python versions to @:3.11 (#43390)
Co-authored-by: Simon Potter <sp39@sanger.ac.uk>
2024-03-28 08:39:51 +01:00
Paul R. C. Kent
b9c7d3b89b llvm: add 18.1.2 (#43401) 2024-03-28 08:24:05 +01:00
AMD Toolchain Support
c1be6a5483 QE: backport 7.3.1 elpa build fix to 7.3 (#43394)
Co-authored-by: Ning Li <ning.li@amd.com>
2024-03-28 08:12:23 +01:00
Juan Miguel Carceller
42550208c3 gaudi: add version 38 and a gaudialg variant (#42856)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2024-03-28 08:10:00 +01:00
Ken Raffenetti
be231face6 mpich: fixup removal of pmi=off option (#43377) 2024-03-28 08:08:37 +01:00
Chris White
89ac747a76 add lua 5.4.6 (#43407) 2024-03-27 22:37:29 -06:00
Richard Berger
5d8f36d667 Legion: add Rust-based profiler to install (#43408)
* legion: add missing license
* legion: add rust-based profiler to install
2024-03-27 22:25:54 -06:00
Sreenivasa Murthy Kolam
6c3fed351f Fix build failure when run-tests is enabled for rocfft (#42424)
* initial commit to fix the error when running run tests for rocfft

* apply patch from 5.7.0 onwards

* add description for the patch that is added
2024-03-27 21:09:29 -05:00
afzpatel
b9cbd15674 hipcc: new package for HIP (#43245)
* initial commit to add hipcc package
* restore setup_dependent_package
2024-03-27 21:05:50 -05:00
Alex Richert
b8f633246a Add aocc support to qt (#43400) 2024-03-27 17:23:45 -06:00
Massimiliano Culpo
a2f3e98ab9 pinentry: add v1.3.0 (#43395) 2024-03-27 22:28:34 +01:00
Paul R. C. Kent
acffe37313 libffi: add 3.4.6 and 3.4.5 (#43404) 2024-03-27 22:27:19 +01:00
Greg Sjaardema
249e5415e8 Update exodusii package (#43379)
* SEACAS: Update package.py to handle new SEACAS project name
  The base project name for the SEACAS project has changed from
  "SEACASProj" to "SEACAS" as of @2022-10-14, so the package
  needed to be updated to use the new project name when needed.
  The refactor also changes several:
      "-DSome_CMAKE_Option:BOOL=ON"
  to
     define("Some_CMAKE_Option", True)
* exodusii -- refactor and bring up-to-date
* Add missed patch file
* [@spackbot] updating style on behalf of gsjaardema
* Apply seacas windows patch here also.
* Update url so old checksums valid; redo new checksums

---------

Co-authored-by: gsjaardema <gsjaardema@users.noreply.github.com>
2024-03-27 14:14:49 -07:00
Elliott Slaughter
e2a942d07e legion: Add 24.03.0, update HIP dependency (#43398)
* legion: Add 24.03.0, update HIP dependency.
* legion: Remove CUDA upper bound, update HIP bounds, use f-strings.
* legion: Fix format.
* legion: Fix format again.
2024-03-27 14:38:28 -06:00
afzpatel
32deca2a4c initial commit to add check function to rocthrust (#43405) 2024-03-27 13:06:55 -07:00
Yanfei Guo
e4c64865f1 argobots: update to v1.2 (#43381)
* argobots: update to v1.2
* Replace prior maintainer with current one.

---------

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2024-03-27 13:58:37 -06:00
Adam J. Stewart
1175f37577 py-kornia: add v0.7.2 (#43184)
* py-kornia: add v0.7.2
* Add maintainers to recipe
* https not supported
* Debug build failure
* More detailed debug info
* Try rust+dev
* Fix pyo3 build
* Fix turbojpeg-sys build
* Fix dlpack-rs build
* Get rid of debug env vars
2024-03-27 10:58:06 -07:00
James Edgeley
faa183331f Update Nektar++ package (#43397)
* Update Nektar++ package
* shorten line
* correct to 5.5.0
* use cmake helpers
* style

---------

Co-authored-by: JamesEdgeley <JamesEdgeley@users.noreply.github.com>
2024-03-27 11:55:14 -06:00
afzpatel
bbac33871c bump rocprofiler-dev to 6.0 and add aqlprofile (#42459)
* Initial commit to bump rocprofiler-dev to 6.0 and add aqlprofile recipe
* bump to 6.0.2 and extracting binaries from deb pkg
* fixes for hpctoolkit build errors
* add yum and zyp aqlprofile packages
* fix style issues
2024-03-27 10:36:10 -07:00
afzpatel
6d4dd33c46 Enable ASAN in ROCm packages (#42704)
* Initial commit to enable ASAN
* fix styling
* fix styling
* add asan option for hip-tensor and roctracer-dev
2024-03-27 09:40:21 -07:00
Adrien Cotte
579bad05a8 Add py-modules-gui (mogui) (#43396) 2024-03-27 09:35:17 -07:00
psakievich
27a8eb0f68 Add config option and compiler support to reuse across OS's (#42693)
* Allow compilers to function across compatible OS's
* Add documentation in the default yaml

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
2024-03-27 15:39:07 +00:00
Tristan Carel
4cd993070f py-libsonata: new package starting at 0.1.25 (#43372)
* py-libsonata: new package starting at 0.1.25
* align spack version with the git branch
* Fix min required versions / use spack packages instead of submodules
2024-03-27 08:20:13 -07:00
renjithravindrankannath
4c55c6a268 rvs binary path updated for 6.0 (#43359) 2024-03-27 07:30:30 -07:00
Wouter Deconinck
a4a27fb1e4 root: new version 6.30.04 (#43378)
New version of ROOT; no changes to the build system that aren't already accounted for (e.g. require http for webgui): https://github.com/root-project/root/compare/v6-30-02...v6-30-04
2024-03-27 07:28:34 -07:00
Massimiliano Culpo
66345e7185 Improve fixup macos rpath unit test (#43392)
Starting from XCode version 15 the linker ignores
duplicate rpaths, so the libraries don't need fixing
in those cases
2024-03-27 12:01:12 +01:00
dependabot[bot]
8f76f1b0d8 build(deps): bump actions/setup-python from 5.0.0 to 5.1.0 (#43384) 2024-03-27 10:49:45 +01:00
dependabot[bot]
4cab6f3af5 build(deps): bump codecov/codecov-action from 4.1.0 to 4.1.1 (#43385) 2024-03-27 10:49:33 +01:00
Harmen Stoppels
0d4665583b ci: inherit secrets in local workflows (#43391) 2024-03-27 09:48:14 +01:00
Harmen Stoppels
5d0ef9e4f4 ci: remove outdated version comments (#43389) 2024-03-27 09:26:12 +01:00
Harmen Stoppels
e145baf619 codecov: verbose: true (#43388) 2024-03-27 09:15:34 +01:00
Wouter Deconinck
6c912b30a2 Codespaces support for rapid PR evaluation (#41901)
* Create devcontainer.json

* Ensure codespace can be setup for current branch

* fix: find compilers in site scope

* fix: use cloud_pipelines ubuntu20.04 image

* fix: spack config --scope site add

* fix: use develop, not develop-root mirror
2024-03-26 21:13:32 -05:00
Jordan Ogas
f4da453f6b Charliecloud package: add 0.36 and 0.37; update dependencies. (#42590)
This adds a dependency on pkg-config, which in turn builds pkg-config
on pipelines using %oneapi/%cce; the pkg-config build is updated to disable
specific warnings-as-errors from these compilers.

Co-authored-by: Reid Priedhorsky <1682574+reidpr@users.noreply.github.com>
2024-03-26 16:13:29 -07:00
Harmen Stoppels
7e9caed8c2 ci: fix codecov upload (#43382) 2024-03-26 22:49:26 +01:00
Matthew Thompson
69509a6d9a pflogger: add version 1.14.0 (#43373) 2024-03-26 14:08:24 -06:00
Massimiliano Culpo
0841050d20 Add macos-14 as a runner (Apple M1) (#42728)
* Add macos-14 as a runner (Apple M1)

* Mark a test xfail

We need to check later if this test needs modifications
on Apple Silicon chips.

---------

Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
Co-authored-by: alalazo <alalazo@users.noreply.github.com>
2024-03-26 12:36:21 -07:00
Cody Balos
321ffd732b axom package: add tests (#43312) 2024-03-26 11:21:28 -07:00
Simon Frasch
22922323e3 spfft package: add version 1.1.0 (#43318) 2024-03-26 10:19:06 -07:00
eugeneswalker
0b5b192c18 glvis: fix spack issue #42839 (#43369)
* glvis: fix spack issue #42839

* add issue link
2024-03-26 11:13:54 -06:00
Andrey Alekseenko
1275c57d88 gromacs: add new versions 2024, 2024.1; fix license (#43366) 2024-03-26 10:19:58 -06:00
Juan Miguel Carceller
29a39ac6a0 xrootd: add a dependency on pkgconfig when building with +davix (#43339)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2024-03-26 09:01:47 -07:00
kwryankrattiger
ae9c86a930 buildcache sync: manifest-glob with arbitrary destination (#41284)
* buildcache sync: manifest-glob with arbitrary destination

The current implementation of the --manifest-glob is a bit restrictive
requiring the destination to be known by the generation stage of CI.
This allows specifying an arbitrary destination mirror URL.

* Add unit test for buildcache sync with manifest

* Fix test and arguments for manifest-glob with override destination

* Add testing path for unused mirror argument
2024-03-26 08:47:45 -07:00
Massimiliano Culpo
83199a981d Allow unit test to work on Apple M1/M2 (#43363)
* Remove a few compilers from static test data

These compilers were used only in a bunch of tests, so
they are added only there.

* Remove clang@3.3 from unit test configuration

* Parametrize compilers.yaml

* Remove specially named gcc from static data

The compilers are used in two tests

* Remove apple-clang and macOS compilers from static data

The compiler was used only in multimethod tests

* Remove clang@3.5 (compiler seems to be unused)

* Remove gcc@4.4.0 (compiler seems to be unused)

* Exclude x86_64 tests on other architectures

* Mark two tests as for clingo only

* Update version syntax in compilers.yaml

* Parametrize tcl tests on architectures

* Parametrize lmod tests on architectures

* Substitute gcc@4.5.0 with gcc@4.8.0 so it can be used on aarch64

* Fix a few issues with aarch64 and unit-tests
2024-03-26 16:20:42 +01:00
eugeneswalker
ed40c3210e ci: add developer-tools-manylinux2014 stack (#43128)
* ci: add developer-tools-manylinux2014 stack

* add libtree, patchelf
2024-03-26 08:02:16 -07:00
Christopher Christofi
be96460ab2 py-nibabel: add new version 5.2.1 (#43335) 2024-03-26 12:46:48 +01:00
pauleonix
95caf55fe7 cuda: add 12.4.0, 12.3.2, 12.3.1 and 12.2.2 (#42748)
Add conflicts to ginkgo and petsc to avoid build failures with cuda@12.4

Co-authored-by: pauleonix <pauleonix@users.noreply.github.com>
2024-03-26 07:10:48 +01:00
Mark W. Krentel
960af24270 hpctoolkit: add version 2024.01.1 (#43353)
Add version 2024.01.1. Adjust the dependencies for develop, which no
longer uses libmonitor.
2024-03-25 17:57:45 -06:00
Christopher Christofi
899bef2aa8 py-nilearn: add new version (#43332)
* py-nilearn: add new version
* add maintainers and license
2024-03-25 16:45:13 -07:00
Adam J. Stewart
f0f092d9f1 py-keras: add v3.1.1 (#43283) 2024-03-26 00:38:01 +01:00
Christopher Christofi
6eaac2270d py-trx-python: add new package with version 0.2.9 (#43333) 2024-03-25 16:11:28 -07:00
John W. Parent
a9f3f6c007 seacas: fix linking on Windows (#43356) 2024-03-25 16:28:03 -06:00
Rocco Meli
08a04ebd46 spglib: add version 2.3.1 (#43345) 2024-03-25 14:27:32 -06:00
Matthew Thompson
d8e642ecb7 Update gftl, pflogger to match GFE 1.14 (#43291) 2024-03-25 13:22:50 -07:00
Christopher Christofi
669ed69d8e py-mne: add new version (#43334) 2024-03-25 13:21:07 -07:00
potter-s
7ebb21a0da Updated version (#43343)
Co-authored-by: Simon Potter <sp39@sanger.ac.uk>
2024-03-25 13:12:58 -07:00
Alec Scott
93ffa9ba5d go: add v1.22.1 (#43337) 2024-03-25 09:32:58 -07:00
liam-o-marsh
e5fdb90496 pyscf: new dependency bounds for pyscf, new version (#43300) 2024-03-25 10:24:01 -05:00
Danny McClanahan
303a0b3653 add command_line scope to help metavar (#42890)
It's now possible to add config on the command line with `spack -c <CONFIG_VARS> ...`, but the new `command_line` scope isn't reflected in the help output for `--scope`:

```bash
> spack help config
...
  --scope {defaults,system,site,user}[/PLATFORM] or env:ENVIRONMENT
                        configuration scope to read/modify
...
```
2024-03-25 07:13:43 -07:00
Juan Miguel Carceller
9f07544bde r-rcpp: add version 1.0.11 and 1.0.12 (#43330) 2024-03-25 13:53:12 +01:00
Harmen Stoppels
9b046a39a8 strip url: fix whl suffix, remove exe (#43344) 2024-03-25 13:46:09 +01:00
Massimiliano Culpo
0c9a53ba3a Add intel-oneapi-runtime, allow injecting virtual dependencies (#42062)
This PR adds:
- A new runtime for `%oneapi` compilers, called `intel-oneapi-runtime`
- Information to both `gcc-runtime`  and `intel-oneapi-runtime`, to ensure
  that we don't mix compilers using different soname for either `libgfortran`
  or `libifcore`

To do so, the following internal mechanisms have been implemented:
- Possibility to inject virtual dependencies from the `runtime_constraints`
  callback on packages

Information has been added to `gcc-runtime` to provide the correct soname
under different conditions on its `%gcc`.

Rules injected into the solver looks like:

```prolog
% Add a dependency on 'gfortran@5' for nodes compiled with gcc@=13.2.0 and using the 'fortran' language
attr("dependency_holds", node(ID, Package), "gfortran", "link") :-
  attr("node", node(ID, Package)),
  attr("node_compiler", node(ID, Package), "gcc"),
  attr("node_compiler_version", node(ID, Package), "gcc", "13.2.0"),
  not external(node(ID, Package)),
  not runtime(Package),
  attr("language", node(ID, Package), "fortran").

attr("virtual_node", node(RuntimeID, "gfortran")) :-
  attr("depends_on", node(ID, Package), ProviderNode, "link"),
  provider(ProviderNode, node(RuntimeID, "gfortran")),
  attr("node", node(ID, Package)),
  attr("node_compiler", node(ID, Package), "gcc"),
  attr("node_compiler_version", node(ID, Package), "gcc", "13.2.0"),
  not external(node(ID, Package)),
  not runtime(Package),
  attr("language", node(ID, Package), "fortran").

attr("node_version_satisfies", node(RuntimeID, "gfortran"), "5") :-
  attr("depends_on", node(ID, Package), ProviderNode, "link"),
  provider(ProviderNode, node(RuntimeID, "gfortran")),
  attr("node", node(ID, Package)),
  attr("node_compiler", node(ID, Package), "gcc"),
  attr("node_compiler_version", node(ID, Package), "gcc", "13.2.0"),
  not external(node(ID, Package)),
  not runtime(Package),
  attr("language", node(ID, Package), "fortran").
```
2024-03-24 22:59:21 -07:00
Loic Hausammann
1fd4353289 openmpi: Add new variant: romio-filesystem=string (#43265)
Co-authored-by: loikki <loic.hausammann@id.ethz.ch>
2024-03-24 01:00:38 +01:00
potter-s
fcb8ed6409 py-plotly: Add versions up to 5.20 (#43284) 2024-03-24 00:55:42 +01:00
Marie Houillon
2f11862832 openCARP: Add v15.0 packages (#43299)
Co-authored-by: openCARP consortium <info@opencarp.org>
2024-03-24 00:35:32 +01:00
Gavin John
bff11ce8e7 py-kaleido: Add MacOS build, fix checksums (#43309)
* py-kaleido: Fix completely borked package.py

* Readd stuff I forgot to add

* And one last missing thing

* Remove python restriction

* [@spackbot] updating style on behalf of Pandapip1

* Add MacOS build

* Fix checksum

* Handle all supported OSes

* Split imports

* Remove extra version stuff
2024-03-24 00:31:38 +01:00
fgava90
218693431c paraview: fix range of exodusII-netcdf4.9.0.patch (#42926)
Co-authored-by: Gava, Francesco <francesco.gava@mclaren.com>
2024-03-23 20:33:20 +01:00
Alex Richert
e036cd9ef6 zlib-ng: New variants: +shared and +pic (#42796) 2024-03-23 19:32:42 +01:00
Sinan
cd5bef6780 py-line-profiler: Add 4.1.2 and 3.5.1 with their deps (#43156)
Co-authored-by: sbulut <sbulut@3vgeomatics.com>
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
2024-03-23 11:16:48 -06:00
Davide
159e9a20d1 suite-sparse: Add version 7.3.1 (#43328) 2024-03-23 10:00:28 -06:00
Auriane R
99bb288db7 py-torch-nvidia-apex: @3.11: Add config_settings(PEP517), add missing py-packaging (#43306) 2024-03-23 16:04:10 +01:00
Chris White
99744a766b BLT: add new version 0.6.2 (#43257) 2024-03-23 15:25:29 +01:00
Juan Miguel Carceller
ddd8be51a0 re2: use the same C++ std used by abseil-cpp (#43288)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2024-03-23 14:48:38 +01:00
吴坎
bba66b1063 py-vl-convert-python: Add 1.3.0 (#43297) 2024-03-23 14:44:23 +01:00
Howard Pritchard
1c3c21d9c7 mpich: add variant +xpmem to specify use of xpmem (#43293)
Signed-off-by: Howard Pritchard <hppritcha@gmail.com>
2024-03-23 14:36:06 +01:00
Christopher Christofi
cbe9b3d01c bgen: add new package with version 1.1.7 (#43327) 2024-03-23 13:49:41 +01:00
Richard Berger
0abf5ba43c hip: don't set HIP_PATH in ROCm 5.5+ (#42882) 2024-03-23 13:22:06 +01:00
Jonas Eschle
9ab3c1332b py-tensorflow-probability: Re-activate +py-jax variant for @0.20:, Add 0.23.0 (#43249) 2024-03-23 13:15:26 +01:00
Christopher Christofi
b6425da50f New package: qctool (#43326) 2024-03-23 05:46:03 -06:00
YI Zeping
937a4dbf69 NWChem package: expand fftw patch application range to 7.2.2 (#43321)
* expand fftw patch application range to include 7.2.2
* fix formatting bug
2024-03-23 05:41:34 -06:00
Gavin John
cd779ee54d octave package: correction to jdk variant description (#43325) 2024-03-23 05:41:11 -06:00
Gavin John
7ddcb13325 openjdk package: use correct homepage (#43324) 2024-03-23 05:40:58 -06:00
Gavin John
7666046ce3 icedtea: change jdk dependency to java (#43323) 2024-03-23 05:40:43 -06:00
Miguel Dias Costa
8e89e61402 BerkeleyGW package: add version 4.0 (#43316) 2024-03-23 04:02:44 -06:00
Auriane R
d0dbfaa5d6 aws-ofi-nccl package: add versions including 1.8.1 (#43305)
The default url couldn't be the one with v0.0.0-aws, since Spack was
replacing v0.0.0-aws with v<version_number>, thereby deleting the
-aws suffix. The url_for_version method is used to specify this suffix.
2024-03-22 17:49:00 -07:00
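A hypothetical sketch of the `url_for_version` override; the exact URL template is an assumption:

```python
def url_for_version(self, version):
    # Keep the "-aws" suffix that Spack's URL extrapolation would drop.
    return f"https://github.com/aws/aws-ofi-nccl/archive/refs/tags/v{version}-aws.tar.gz"
```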
Gavin John
26f562b5a7 New package: py-kneaddata (#43310) 2024-03-22 17:20:52 -07:00
Chris Marsh
2967804da1 netcdf-cxx4: fix reference error (#43322) 2024-03-22 16:05:17 -07:00
Harmen Stoppels
c3eaf4d6cf Support for prereleases (#43140)
This adds support for prereleases. Alpha, beta and release candidate
suffixes are ordered in the intuitive way:

```
1.2.0-alpha < 1.2.0-alpha.1 < 1.2.0-beta.2 < 1.2.0-rc.3 < 1.2.0 < 1.2.0-xyz
```

Alpha, beta and rc prereleases are defined as follows: split the version
string into components like before (on delimiters and string boundaries).
If there's a string component `alpha`, `beta` or `rc` followed by an optional
numeric component at the end, then the version is prerelease.

So `1.2.0-alpha.1 == 1.2.0alpha1 == 1.2.0.alpha1` are all the same, as usual.

The strings `alpha`, `beta` and `rc` are chosen because they match semver,
they are sufficiently long to be unambiguous, and all contain at least
one non-hex character to distinguish them from shasum/digest type suffixes.

The comparison key is now stored as `(release_tuple, prerelease_tuple)`, so in
the above example:

```
((1,2,0),(ALPHA,)) < ((1,2,0),(ALPHA,1)) < ((1,2,0),(BETA,2)) < ((1,2,0),(RC,3)) < ((1,2,0),(FINAL,)) < ((1,2,0,"xyz"), (FINAL,))
```

The version ranges `@1.2.0:` and `@:1.1` do *not* include prereleases of
`1.2.0`.

So for packaging, if the `1.2.0alpha` and `1.2.0` versions have the same constraints on
dependencies, it's best to write

```python
depends_on("x@1:", when="@1.2.0alpha:")
```

However, `@1.2:` does include `1.2.0alpha`. This is because Spack considers
`1.2 < 1.2.0` as distinct versions, with `1.2 < 1.2.0alpha < 1.2.0` as a consequence.

Alternatively, the above `depends_on` statement can thus be written

```python
depends_on("x@1:", when="@1.2:")
```

which can be useful too: a short-hand to include prereleases, while you
can still exclude them explicitly by specifying the patch version
number.

### Concretization

Concretization uses a different version order than `<`. Prereleases are ordered
between final releases and develop versions. That way, users should not
have to set `preferred=True` on every final release if they add just one
prerelease to a package. The concretizer is unlikely to pick a prerelease when
final releases are possible.

### Limitations

1. You can't express a range that includes all alpha releases but excludes all beta
   releases. The only alternative is good old repeated nines: `@:1.2.0alpha99`.

2. The Python ecosystem defaults to `a`, `b`, `rc` strings, so translation of Python versions to
   Spack versions requires expansion to `alpha`, `beta`, `rc`. It's mildly annoying, because
   this means we may need to compute URLs differently (not done in this commit).

### Hash

Care is taken not to break hashes of versions that do not have a prerelease
suffix.
2024-03-22 23:30:32 +01:00
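A minimal sketch of the resulting ordering, assuming `spack.version.Version` implements the comparisons described above:

```python
from spack.version import Version

assert Version("1.2.0-alpha") < Version("1.2.0-alpha.1")
assert Version("1.2.0-alpha.1") < Version("1.2.0-beta.2")
assert Version("1.2.0-beta.2") < Version("1.2.0-rc.3")
assert Version("1.2.0-rc.3") < Version("1.2.0")
assert Version("1.2.0") < Version("1.2.0-xyz")  # non-prerelease suffix sorts after
assert Version("1.2.0alpha1") == Version("1.2.0.alpha1")  # delimiters don't matter
```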
John W. Parent
397334a4be Spack CI: Refactor process_command for Cross Platform support (#39739)
Generate CI scripts as powershell on Windows. This is intended to
output exactly the same bash scripts as before on Linux.

Co-authored-by: Ryan Krattiger <ryan.krattiger@kitware.com>
2024-03-22 14:06:29 -07:00
Harmen Stoppels
434836be81 python wheels: do not "expand" (#43317) 2024-03-22 16:57:46 +01:00
Thomas-Ulrich
7b9b976f40 openssh: add 9.7p1 and 9.6p1; update workaround for clang (#40857)
Signed-off-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
2024-03-22 08:24:32 -06:00
Mikael Simberg
4746e8a048 apex: Set APEX_WITH_KOKKOS CMake option in apex package (#43243)
* Make sure APEX_WITH_KOKKOS CMake option is set in apex

* Add conflict for apex with ~kokkos
2024-03-22 08:50:29 +01:00
Rocco Meli
69c684fef9 ELPA: enable GPU streams and update deprecated variables (#43145)
* elpa streams

* [@spackbot] updating style on behalf of RMeli

* Apply suggestions from @albestro

* Update var/spack/repos/builtin/packages/elpa/package.py

Co-authored-by: Mikael Simberg <mikael.simberg@iki.fi>

---------

Co-authored-by: RMeli <RMeli@users.noreply.github.com>
Co-authored-by: Mikael Simberg <mikael.simberg@iki.fi>
2024-03-22 08:42:24 +01:00
Valentin Volkl
2314aeb884 py-cmake: only run test suite when run_tests (#43246)
As the cmake build is triggered by scikit build, the usual spack option
for enabling tests had no effect and the heavy test suite ran all the time.

Used https://github.com/scikit-build/cmake-python-distributions/issues/172#issuecomment-890322263
to implement how to pass options to the actual `cmake` build.

I also excluded some tests that failed for me on alma9 (gcc 11.4.1),
so the rest of the test suite can be run.
2024-03-22 04:34:43 +01:00
Martin Aumüller
d33e10a695 ffmpeg: add v6.1.1 and older patch release updates (#43050) 2024-03-22 03:39:53 +01:00
Henning Glawe
7668a0889a ncurses: Add terminfo for rxvt-unicode{,-256color} (#42721)
taken from the debian ncurses source package.
2024-03-22 02:50:24 +01:00
Thomas-Ulrich
d7a74bde9f easi: add v1.3.0, python bindings and master (#42784) 2024-03-22 02:44:20 +01:00
AMD Toolchain Support
fedf8128ae openblas: Add variant dynamic_dispatch: select best kernel at runtime (#42746)
Enable OpenBLAS's built-in CPU capability detection and kernel selection. 

This allows run-time selection of the "best" kernels for the running CPU, rather
than what is specified at build time.  For example, it allows OpenBLAS  to use
AVX512 kernels when running on ZEN4, and built targeting the "ZEN" architecture.

Co-authored-by: Branden Moore <branden.moore@amd.com>
2024-03-22 02:27:21 +01:00
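A minimal sketch of how the variant plausibly maps onto OpenBLAS's standard `DYNAMIC_ARCH` make flag; the surrounding argument list is illustrative:

```python
# inside the package's make-argument construction
if spec.satisfies("+dynamic_dispatch"):
    make_defs.append("DYNAMIC_ARCH=1")  # select the best kernels at runtime
```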
Richard Berger
f70af2cc57 libristra: depends_on() for lua: allow newer lua versions (#42810) 2024-03-22 01:53:33 +01:00
Alex Leute
50562e6a0e py-neptune-client and missing deps: new package (#43059)
Co-authored-by: Cecilia Lau <chlits@rit.edu>
Co-authored-by: Jen Herting <jen@herting.cc>
2024-03-22 01:51:13 +01:00
Sergey Kosukhin
4ac51b2127 libiconv: fix building with nvhpc (#43033) 2024-03-22 01:43:30 +01:00
HELICS-bot
81c9e346dc helics: Add version 3.5.1 (#43314)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2024-03-21 18:08:36 -06:00
Christopher Christofi
73e16a7881 py-mrcfile: add new version (#43125) 2024-03-22 00:00:15 +01:00
Alex Richert
af8868fa47 xv (image viewer): Add missing depends_on(libxt) (#43277) 2024-03-21 23:24:29 +01:00
Rémi Lacroix
cfd4e356f8 x264: Tag a recent commit: 20240314 (#43304)
x264 does not publish releases, so tag a newer commit in Spack in order to be able to use an up-to-date version.
2024-03-21 23:20:00 +01:00
Adam J. Stewart
fc87dcad4c py-torchmetrics: add v1.3.2 (#43244) 2024-03-21 23:00:32 +01:00
snehring
65472159c7 dorado: adding version 0.5.3 (#43313) 2024-03-21 22:59:20 +01:00
Stephen Hudson
d1f9d8f06d libEnsemble: add v1.2.2 (#43308) 2024-03-21 22:57:19 +01:00
Rocco Meli
67ac9c46a8 namd: disable parallel build for 3.0b3 (#43215) 2024-03-21 22:26:36 +01:00
Stephen Sachs
aa39465188 Re enable aws pcluster buildcache stack (#38931)
* Changes to re-enable aws-pcluster pipelines

- Use compilers from pre-installed spack store such that compiler path relocation works when downloading from buildcache.
- Install gcc from hash so there is no risk of building gcc from source in the pipeline.
- `packages.yaml` files are now part of the pipelines.
- No more external `postinstall.sh`. The necessary steps are in `setup-pcluster.sh` and will be version controlled within this repo.
- Re-enable pipelines.

* Add  and

* Debugging output & mv skylake -> skylake_avx512

* Explicitly check for packages

* Handle case with no intel compiler

* compatibility when using setup-pcluster.sh on a pre-installed cluster.

* Disable palace as parser cannot read require clause at the moment

* ifort cannot build superlu in buildcache

`ifort` is unable to handle file names as long as those used when cmake compiles
test programs inside the build cache.

* Fix spack commit for intel compiler installation

* Need to fetch other commits before using them

* fix style

* Add TODO

* Update packages.yaml to not use 'compiler:', 'target:' or 'provider:'

Synchronize with changes in https://github.com/spack/spack-configs/blob/main/AWS/parallelcluster/

* Use Intel compiler from later version (orig commit no longer found)

* Use envsubst to deal with quoted newlines

This is cleaner than the `eval` command used.

* Need to fetch tags for checkout on version number

* Intel compiler needs to be from version that has compatible DB

* Install intel compiler with commit that has DB ver 7

* Decouple the intel compiler installation from current commit

- Use a completely different spack installation such that this current pipeline
commit remains untouched.
- Make the script succeed even if the compiler installation fails (e.g. because
the Database version has been updated)
- Make the install targets fall back to gcc in case the compiler did not install
correctly.

* Use generic target for x86_64_vX

There is no way to provision a skylake/icelake/zen runner. They are all in the
same pools under x86_64_v3 and x86_64_v4.

* Find the intel compiler in the current spack installation

* Remove SPACK_TARGET_ARCH

* Fix virtual package index & use package.yaml for intel compiler

* Use only one stack & pipeline per generic architecture

* Fix yaml format

* Cleanup typos

* Include fix for ifx.cfg to get the right gcc toolchain when linking

* [removeme] Adding timeout to debug hang in make (palace)

* Revert "[removeme] Adding timeout to debug hang in make (palace)"

This reverts commit fee8a01580489a4ea364368459e9353b46d0d7e2.

* palace x86_64_v4 gets stuck when compiling; try newer oneapi

* Update comment

* Use the latest container image

* Update gcc_hashes to match new container

* Use only one tag providing tags per extends call

Also removed an unnecessary tag.

* Move generic setup script out of individual stack

* Cleanup from last commit

* Enable checking signature for packages available on the container

* Remove commented packages / Add comment for palace

* Enable openmpi@5 which needs pmix>3

* don't look for intel compiler on aarch64
2024-03-21 14:45:05 -05:00
downloadico
09810a5e7c py-cig-pythia: add py-cig-pythia package to spack (#43294) 2024-03-21 11:11:14 -07:00
Alec Scott
446c0f2325 Disable interactive editor when --batch if passed to checksum (#43102) 2024-03-21 18:15:09 +01:00
Auriane R
c4ce51c9be Add flash-attn package (#42939) 2024-03-21 07:23:07 -06:00
Harmen Stoppels
1f63a764ac jdk: new versions (#43264) 2024-03-21 12:49:29 +01:00
Alec Scott
384e198304 py-python-lsp-server: add v1.10.0 (#42799)
* py-python-lsp-server: add v1.10.0

* Apply suggestions from code review

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Remove py-wheel from package

* Apply suggestions from code review

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2024-03-21 10:36:15 +01:00
Auriane R
2303332415 Update aws-ofi-nccl to use the hwloc option (#43287) 2024-03-21 09:38:40 +01:00
Tom Scogland
0eb1957999 cmd/python: use runpy to allow multiprocessing in scripts (#41789)
Running a `spack-python` script like this:

```python

import spack
import multiprocessing

def echo(args):
    print(args)

if __name__ == "__main__":
    pool = multiprocessing.Pool(2)
    pool.map(echo, range(10))
```

will fail in `develop` with an error like this:

```console
_pickle.PicklingError: Can't pickle <function echo at 0x104865820>: attribute lookup echo on __main__ failed
```

Python expects to be able to look up the method `echo` in `sys.modules["__main__"]` in
subprocesses spawned by `multiprocessing`, but because we use `InteractiveConsole` to
run `spack python`, the executed file isn't considered to be the `__main__` module, and
lookups in subprocesses fail. We tried to fake this by setting `__name__` to `__main__`
in the `spack python` command, but that doesn't fix the fact that no `__main__` module
exists.

Another annoyance with `InteractiveConsole` is that `__file__` is not defined in the
main script scope, so you can't use it in your scripts.

We can use the [runpy.run_path()](https://docs.python.org/3/library/runpy.html#runpy.run_path) function,
which has been around since Python 3.2, to fix this.

- [x] Use `runpy` module to launch non-interactive `spack python` invocations
- [x] Only use `InteractiveConsole` for interactive `spack python`
2024-03-21 01:32:28 -07:00
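A minimal sketch of the non-interactive path, assuming a script path taken from the command line; names are illustrative, not Spack's actual code:

```python
import runpy
import sys

def run_script(path: str) -> None:
    # Execute the file as a real __main__ module so that functions
    # defined in it can be pickled by multiprocessing.
    runpy.run_path(path, run_name="__main__")

if __name__ == "__main__":
    run_script(sys.argv[1])
```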
Greg Becker
de1f9593c6 cray: return false more readily in detection logic (#43150)
Often in containers, the files we use to detect whether a cray system supports new features are not available.

Given that the cray containers only support the newer versions, and that these versions have been
around for a while at this point and few sites don't support them, this PR changes the logic for
detecting cray systems so that:

1. Don't even consider whether something is the `cray` platform if `opt/cray` is not in `MODULEPATH`
2. Only use the `cray` platform if we can read files in /opt/cray/pe and positively detect an older version
3. Otherwise, assume we're *not* on a cray (includes newer Cray PE's, which we treat as Linux)
2024-03-20 15:43:45 -07:00
Christopher Christofi
65fa71c1b4 py-pycm: new package (#43251)
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Signed-off-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
2024-03-20 19:02:36 +01:00
Martin Lang
9802649716 berkeleygw: update FCPP flags for gcc (#42848)
Compilation with the old flags fails on PowerPC (power8le) due to syntax
errors in the output from the preprocessor. Compilation with the
extended set of flags works both on PowerPC and x86_64.

The correct set of flags was suggested by the berkeleygw developers:
https://groups.google.com/a/berkeleygw.org/g/help/c/ewi3RZgOyeE/m/jSIoe45PAgAJ

Uses spack-provided compiler prefix to call cpp when compiling berkeleygw with gcc.
2024-03-20 18:55:54 +01:00
Alex Leute
8d9d721f07 py-postcactus: new package (#42907)
Co-authored-by: Sid Pendelberry <sid@rit.edu>
2024-03-20 18:50:52 +01:00
Greg Becker
ecef72c471 Target.optimization_flags converts non-numeric versions to numeric (#43179) 2024-03-20 09:39:26 -07:00
Robert Cohn
485b6e2170 Update Intel download URLs (#43286) 2024-03-20 12:34:03 -04:00
mvlopri
ba02c6b70f seacas: update the variants and tpls (#43195)
Add variants to turn off tests
Add variants for some missing TPL options
Add the variables required to build in ~shared

* Add pamgen to Trilinos as a variant to support SEACAS

This adds the ability to turn off and on pamgen as needed
through the variant interface for the Trilinos package.py.
Add changes for seacas package.py to build the appropriate
Trilinos variants.
Add zlib-api as depends_on instead of zlib directly for SEACAS
package.py

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2024-03-20 10:14:24 -06:00
Juan Miguel Carceller
7028669d50 fastjet: add version 3.4.2 (#43285)
* fastjet: add version 3.4.2

* Change the range of the ATLAS patch

---------

Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2024-03-20 16:46:09 +01:00
Dom Heinzeller
2f0a73f7ef Define variant typescript for py-jupyter-server with explicit dependency on npm (#43279)
* Add variant typescript for py-jupyter-server@:1, which then requires npm/node. Patch the build system for ~typescript so that it doesn't find any npm/node installations and doesn't attempt to build the typescript extension.

* Fix formatting in var/spack/repos/builtin/packages/py-jupyter-server/package.py

* Constrain typescript variant to py-jupyter-server versions 1.10.2:1

* with when not needed if variant doesn't exist for other versions
2024-03-20 08:03:18 -06:00
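A minimal sketch of the directives involved, using the version bounds quoted above (not the package's verbatim recipe):

```python
variant(
    "typescript",
    default=False,
    description="Build the typescript extension",
    when="@1.10.2:1",
)
depends_on("npm", type="build", when="+typescript")
```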
Massimiliano Culpo
7cb0dbf77a Remove optimization criterion on OS mismatches (#43282) 2024-03-20 04:57:56 -06:00
G-Ragghianti
ac8800ffc7 heffte: Update MKL dependency to intel-oneapi-mkl (#43273) 2024-03-20 03:28:21 -06:00
Richard Berger
eb11fa7d18 lua-sol2: merge duplicate sol2 package into it (#43155) 2024-03-20 03:22:58 -06:00
Mikael Simberg
4d8381a775 fmt: Add master branch as version (#43239) 2024-03-20 03:18:06 -06:00
jmlapre
de5e20fc21 py-snoop: new package (#42945) 2024-03-20 01:18:24 +01:00
Adam J. Stewart
c33af49ed5 py-keras: add v3.1.0 (#43268) 2024-03-20 01:10:14 +01:00
Martin Aumüller
3addda6c4d mgard: disable C++11 warning also for apple-clang@15 (#43170) 2024-03-19 17:27:42 -06:00
AMD Toolchain Support
33f6f55d6b aocl-sparse: fix inconsistency in dependency logic (#43259) 2024-03-19 16:45:39 -06:00
AMD Toolchain Support
41d20d3731 amdlibm: add support for parallel build (#43258) 2024-03-19 16:45:09 -06:00
Bill Williams
dde8fa5561 scorep: add conflict for ROCm6 (#43240)
Co-authored-by: William Williams <william.williams@tu-dresden.de>
Co-authored-by: wrwilliams <wrwilliams@users.noreply.github.com>
2024-03-19 16:40:08 -06:00
Mikael Simberg
588a94bc8c apex: add v2.6.5 (#43242) 2024-03-19 16:02:58 -06:00
Alec Scott
06392f2c01 gnutls: add v3.8.3 (#43229)
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2024-03-19 22:23:19 +01:00
Tom Payerle
f16e29559e n2p2: add --no-print-directory flag to calls to "make" (#43196)
This should fix issue #43192

Basically, there was an issue where a make variable was set to the output
of a shell function which included cd commands, and the value of that
variable was then used as a makefile target.

The cd commands in the shell function caused assorted informational
messages (e.g. "Entering directory ...") which got included in the
return of the shell function, corrupting the value of the variable.
The presence of colons in the corrupted value caused make to issue
"multiple target" erros.

This fix adds --no-print-directory flags to the calls to the
make function in the package's build method, which resolves the
issue above.

It is admittedly a crude fix, and will remove *all* informational
messages re directory changes, thereby potentially making it more
difficult to diagnose/debug future issues building this package.
However, I do not see a way to turn off these messages in a
more surgical manner.
2024-03-19 20:04:15 +01:00
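A minimal sketch of the fix in Spack package style; the directory and targets shown are illustrative:

```python
def build(self, spec, prefix):
    with working_dir("src"):
        # Suppress "Entering/Leaving directory" messages so shell-function
        # output captured into make variables stays clean.
        make("--no-print-directory", "libnnpif", "libnnp")
```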
Rémi Lacroix
ea96403157 ffmpeg: Fix patch hash (#43269) 2024-03-19 19:46:02 +01:00
G-Ragghianti
b659eac453 slate: add v2023.11.05 (#42913)
* Updated version of slate

* Added rocm version conflict

* Added patch to fix openMP problem
2024-03-19 08:58:34 -07:00
John W. Parent
ab590cc03a WGL: Update libs for new archspec on Win (#43253) 2024-03-19 09:43:29 -06:00
Harmen Stoppels
1a007a842b cmake: deprecate old patch releases and add missing gmake dep (#43261) 2024-03-19 15:41:50 +01:00
Martin Aumüller
9756354998 mgard: don't restrict protobuf version more than necessary (#43172)
* mgard: don't restrict protobuf version more than necessary

successfully built:
mgard@2022-11-18 ^protobuf@3.{4,21,25}
mgard@2023-01-10 ^protobuf@3.{4,25}
mgard@2023-03-31 ^protobuf@3.{4,25}

compile failures:
mgard@2022-11-18 ^protobuf@3.3
mgard@2023-01-10 ^protobuf@3.3
mgard@2023-03-31 ^protobuf@3.3

* mgard: add conflicts to address CI errors

* mgard: conflict between cuda and abseil@20240116.1

compiling mgard+cuda with gcc@12.3.0 and nvcc from cuda@12.3.0 against
protobuf pulling in abseil-cpp@20240116.1 results in the errors reported
here: https://github.com/abseil/abseil-cpp/issues/1629
2024-03-19 09:29:44 +01:00
Sinan
3984dd750c package/py-setuptools_add_new_versions (#43180)
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2024-03-18 12:33:39 -06:00
Gregor Daiß
d5c1e16e43 sgpp: update dependency versions (#43178)
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2024-03-18 15:52:16 +01:00
Sinan
56ace9a087 py-wand: add v0.6.13 (#42972) 2024-03-18 15:13:50 +01:00
jdomke
6e0bab1706 fujitsu-mpi: add gcc and clang support (#43053)
Co-authored-by: domke <673751-domke@users.noreply.gitlab.com>
2024-03-18 15:12:44 +01:00
John W. Parent
193386f6ac netcdfc: consider static build in pkgconf filtering (#43084) 2024-03-18 14:40:32 +01:00
Christopher Christofi
755131fcdf py-optax: add new version (#43169) 2024-03-18 14:19:53 +01:00
dependabot[bot]
9a71733adb build(deps): bump docker/setup-buildx-action from 3.1.0 to 3.2.0 (#43204)
Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 3.1.0 to 3.2.0.
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](0d103c3126...2b51285047)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-18 14:10:11 +01:00
dependabot[bot]
cd919d51ea build(deps): bump docker/build-push-action from 5.2.0 to 5.3.0 (#43205)
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 5.2.0 to 5.3.0.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](af5a7ed5ba...2cdde995de)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-18 14:09:29 +01:00
George Young
12adf66d07 telocal: add new package (#43241)
Co-authored-by: LMS Bioinformatics <bioinformatics@lms.mrc.ac.uk>
2024-03-18 07:08:32 -06:00
afzpatel
c02f58da8f llvm-amdgpu: add rpath to HIP rt (#42876) 2024-03-18 13:37:27 +01:00
Harmen Stoppels
9662d181a0 use directives in some packages (#43238) 2024-03-18 12:53:53 +01:00
Alec Scott
282df7aecc fzf: add v0.48.0 (#43230) 2024-03-18 12:53:37 +01:00
wspear
b4c0e6f03b scorep: add v8.4 (#43225) 2024-03-18 11:23:28 +01:00
Wileam Y. Phan
4cd8488139 intel-gtpin: add version 4.0 (#43216) 2024-03-18 11:09:29 +01:00
George Young
69a052841c py-cutadapt: updating to @4.7 (#43214)
Co-authored-by: LMS Bioinformatics <bioinformatics@lms.mrc.ac.uk>
2024-03-18 11:08:14 +01:00
Matthias Wolf
a3f39890c2 py-shacl: new version, update dependencies (#42905)
* py-shacl: new version, update dependencies

Also updates the dependencies py-prettytable and py-rdflib.

* review comments

* Update var/spack/repos/builtin/packages/py-pyshacl/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* py-poetry-core: add required 1.8.1

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2024-03-18 10:56:40 +01:00
Christopher Christofi
02d126ce2b py-geopandas: add new version 0.14.3 (#43235)
* py-geopandas: add new version

* add when specification on setuptools requirement
2024-03-18 10:52:19 +01:00
Christopher Christofi
339a63370f py-pyshp: add new version (#43234) 2024-03-18 10:23:11 +01:00
Christopher Christofi
fef6aed627 py-branca: add new package (#43236) 2024-03-18 10:16:58 +01:00
afzpatel
3445da807e rocm-smi-lib: remove standalone test and add build time test (#43129) 2024-03-18 10:13:08 +01:00
Pieter P
429c3598af Fix CMake generator documentation (#43232) 2024-03-18 10:02:33 +01:00
Todd Gamblin
3d8136493a performance: avoid jinja2 import at startup unless needed (#43237)
`jinja2` can be a costly import, and right now it happens at startup every time we run
Spack. This slows down `spack --print-shell-vars` a bit, which is needed by `setup-env.*sh`.
2024-03-18 10:00:37 +01:00
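A minimal sketch of the deferred-import pattern; the wrapper function is illustrative:

```python
def render(template_source: str, **context) -> str:
    import jinja2  # deferred: pay the import cost only when templating is needed

    return jinja2.Template(template_source).render(**context)
```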
Sergey Kosukhin
8cd160db85 zlib-ng: add variant new_strategies (#43219) 2024-03-18 09:42:43 +01:00
Simon Pintarelli
a7dd756b34 gcc 12.3 ICE patch for aarch64 (#43093)
* gcc12.3 patch for ICE on aarch64

* aarch64 ICE patch for gcc@13.2
2024-03-18 09:12:49 +01:00
Thomas Padioleau
53be280681 Remove bundled fmt (#43210) 2024-03-17 21:43:40 -07:00
713 changed files with 14077 additions and 6349 deletions

View File

@@ -0,0 +1,4 @@
{
  "image": "ghcr.io/spack/ubuntu20.04-runner-amd64-gcc-11.4:2023.08.01",
  "postCreateCommand": "./.devcontainer/postCreateCommand.sh"
}

View File

@@ -0,0 +1,20 @@
#!/bin/bash
# Load spack environment at terminal startup
cat <<EOF >> /root/.bashrc
. /workspaces/spack/share/spack/setup-env.sh
EOF
# Load spack environment in this script
. /workspaces/spack/share/spack/setup-env.sh
# Ensure generic targets for maximum matching with buildcaches
spack config --scope site add "packages:all:require:[target=x86_64_v3]"
spack config --scope site add "concretizer:targets:granularity:generic"
# Find compiler and install gcc-runtime
spack compiler find --scope site
# Setup buildcaches
spack mirror add --scope site develop https://binaries.spack.io/develop
spack buildcache keys --install --trust

View File

@@ -22,8 +22,8 @@ jobs:
       matrix:
         operating_system: ["ubuntu-latest", "macos-latest"]
     steps:
-      - uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633 # @v2
-      - uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # @v2
+      - uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
+      - uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
         with:
           python-version: ${{inputs.python_version}}
       - name: Install Python packages
@@ -43,7 +43,9 @@ jobs:
           . share/spack/setup-env.sh
           $(which spack) audit packages
           $(which spack) audit externals
-      - uses: codecov/codecov-action@54bcd8715eee62d40e33596ef5e8f0f48dbbccab # @v2.1.0
+      - uses: codecov/codecov-action@84508663e988701840491b86de86b666e8a86bed
         if: ${{ inputs.with_coverage == 'true' }}
         with:
           flags: unittests,audits
+          token: ${{ secrets.CODECOV_TOKEN }}
+          verbose: true

View File

@@ -159,7 +159,7 @@ jobs:
         brew install cmake bison@2.7 tree
       - name: Checkout
         uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
-      - uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # @v2
+      - uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
         with:
           python-version: "3.12"
       - name: Bootstrap clingo

View File

@@ -55,7 +55,7 @@ jobs:
     if: github.repository == 'spack/spack'
     steps:
       - name: Checkout
-        uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633 # @v2
+        uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
       - uses: docker/metadata-action@8e5442c4ef9f78752691e2d8f8d19755c6f78e81
         id: docker_meta
@@ -96,7 +96,7 @@ jobs:
         uses: docker/setup-qemu-action@68827325e0b33c7199eb31dd4e31fbe9023e06e3
       - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@0d103c3126aa41d772a8362f6aa67afac040f80c
+        uses: docker/setup-buildx-action@d70bba72b1f3fd22344832f00baa16ece964efeb
       - name: Log in to GitHub Container Registry
         uses: docker/login-action@e92390c5fb421da1463c202d546fed0ec5c39f20
@@ -113,7 +113,7 @@ jobs:
           password: ${{ secrets.DOCKERHUB_TOKEN }}
       - name: Build & Deploy ${{ matrix.dockerfile[0] }}
-        uses: docker/build-push-action@af5a7ed5ba88268d5278f7203fb52cd833f66d6e
+        uses: docker/build-push-action@2cdde995de11925a030ce8070c3d77a52ffcf1c0
         with:
           context: dockerfiles/${{ matrix.dockerfile[0] }}
           platforms: ${{ matrix.dockerfile[1] }}

View File

@@ -18,6 +18,7 @@ jobs:
   prechecks:
     needs: [ changes ]
     uses: ./.github/workflows/valid-style.yml
+    secrets: inherit
     with:
       with_coverage: ${{ needs.changes.outputs.core }}
   all-prechecks:
@@ -35,7 +36,7 @@ jobs:
       core: ${{ steps.filter.outputs.core }}
       packages: ${{ steps.filter.outputs.packages }}
     steps:
-      - uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633 # @v2
+      - uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
        if: ${{ github.event_name == 'push' }}
        with:
          fetch-depth: 0
@@ -70,14 +71,17 @@ jobs:
     if: ${{ github.repository == 'spack/spack' && needs.changes.outputs.bootstrap == 'true' }}
     needs: [ prechecks, changes ]
     uses: ./.github/workflows/bootstrap.yml
+    secrets: inherit
   unit-tests:
     if: ${{ github.repository == 'spack/spack' && needs.changes.outputs.core == 'true' }}
     needs: [ prechecks, changes ]
     uses: ./.github/workflows/unit_tests.yaml
+    secrets: inherit
   windows:
     if: ${{ github.repository == 'spack/spack' && needs.changes.outputs.core == 'true' }}
     needs: [ prechecks ]
     uses: ./.github/workflows/windows_python.yml
+    secrets: inherit
   all:
     needs: [ windows, unit-tests, bootstrap ]
     runs-on: ubuntu-latest


@@ -17,7 +17,7 @@ jobs:
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
with:
fetch-depth: 0
- uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: 3.9
- name: Install Python packages


@@ -1,4 +1,4 @@
black==24.3.0
black==24.4.0
clingo==5.7.1
flake8==7.0.0
isort==5.13.2


@@ -51,10 +51,10 @@ jobs:
on_develop: false
steps:
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633 # @v2
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
with:
fetch-depth: 0
- uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # @v2
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: ${{ matrix.python-version }}
- name: Install System packages
@@ -91,17 +91,19 @@ jobs:
UNIT_TEST_COVERAGE: ${{ matrix.python-version == '3.11' }}
run: |
share/spack/qa/run-unit-tests
- uses: codecov/codecov-action@54bcd8715eee62d40e33596ef5e8f0f48dbbccab
- uses: codecov/codecov-action@84508663e988701840491b86de86b666e8a86bed
with:
flags: unittests,linux,${{ matrix.concretizer }}
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true
# Test shell integration
shell:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633 # @v2
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
with:
fetch-depth: 0
- uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # @v2
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: '3.11'
- name: Install System packages
@@ -122,9 +124,11 @@ jobs:
COVERAGE: true
run: |
share/spack/qa/run-shell-tests
- uses: codecov/codecov-action@54bcd8715eee62d40e33596ef5e8f0f48dbbccab
- uses: codecov/codecov-action@84508663e988701840491b86de86b666e8a86bed
with:
flags: shelltests,linux
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true
# Test RHEL8 UBI with platform Python. This job is run
# only on PRs modifying core Spack
@@ -137,7 +141,7 @@ jobs:
dnf install -y \
bzip2 curl file gcc-c++ gcc gcc-gfortran git gnupg2 gzip \
make patch tcl unzip which xz
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633 # @v2
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
- name: Setup repo and non-root user
run: |
git --version
@@ -156,10 +160,10 @@ jobs:
clingo-cffi:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633 # @v2
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
with:
fetch-depth: 0
- uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # @v2
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: '3.11'
- name: Install System packages
@@ -181,20 +185,23 @@ jobs:
SPACK_TEST_SOLVER: clingo
run: |
share/spack/qa/run-unit-tests
- uses: codecov/codecov-action@54bcd8715eee62d40e33596ef5e8f0f48dbbccab # @v2.1.0
- uses: codecov/codecov-action@84508663e988701840491b86de86b666e8a86bed
with:
flags: unittests,linux,clingo
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true
# Run unit tests on MacOS
macos:
runs-on: macos-latest
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [macos-latest, macos-14]
python-version: ["3.11"]
steps:
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633 # @v2
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
with:
fetch-depth: 0
- uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # @v2
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: ${{ matrix.python-version }}
- name: Install Python packages
@@ -216,6 +223,8 @@ jobs:
$(which spack) solve zlib
common_args=(--dist loadfile --tx '4*popen//python=./bin/spack-tmpconfig python -u ./bin/spack python' -x)
$(which spack) unit-test --verbose --cov --cov-config=pyproject.toml --cov-report=xml:coverage.xml "${common_args[@]}"
- uses: codecov/codecov-action@54bcd8715eee62d40e33596ef5e8f0f48dbbccab
- uses: codecov/codecov-action@84508663e988701840491b86de86b666e8a86bed
with:
flags: unittests,macos
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true


@@ -19,7 +19,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
- uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: '3.11'
cache: 'pip'
@@ -38,7 +38,7 @@ jobs:
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
with:
fetch-depth: 0
- uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: '3.11'
cache: 'pip'
@@ -56,6 +56,7 @@ jobs:
share/spack/qa/run-style-tests
audit:
uses: ./.github/workflows/audit.yaml
secrets: inherit
with:
with_coverage: ${{ inputs.with_coverage }}
python_version: '3.11'
@@ -69,7 +70,7 @@ jobs:
dnf install -y \
bzip2 curl file gcc-c++ gcc gcc-gfortran git gnupg2 gzip \
make patch tcl unzip which xz
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633 # @v2
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
- name: Setup repo and non-root user
run: |
git --version


@@ -18,7 +18,7 @@ jobs:
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
with:
fetch-depth: 0
- uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: 3.9
- name: Install Python packages
@@ -33,16 +33,18 @@ jobs:
./share/spack/qa/validate_last_exit.ps1
coverage combine -a
coverage xml
- uses: codecov/codecov-action@54bcd8715eee62d40e33596ef5e8f0f48dbbccab
- uses: codecov/codecov-action@84508663e988701840491b86de86b666e8a86bed
with:
flags: unittests,windows
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true
unit-tests-cmd:
runs-on: windows-latest
steps:
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
with:
fetch-depth: 0
- uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: 3.9
- name: Install Python packages
@@ -57,16 +59,18 @@ jobs:
./share/spack/qa/validate_last_exit.ps1
coverage combine -a
coverage xml
- uses: codecov/codecov-action@54bcd8715eee62d40e33596ef5e8f0f48dbbccab
- uses: codecov/codecov-action@84508663e988701840491b86de86b666e8a86bed
with:
flags: unittests,windows
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true
build-abseil:
runs-on: windows-latest
steps:
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
with:
fetch-depth: 0
- uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: 3.9
- name: Install Python packages


@@ -88,7 +88,7 @@ Resources:
[bridged](https://github.com/matrix-org/matrix-appservice-slack#matrix-appservice-slack) to Slack.
* [**Github Discussions**](https://github.com/spack/spack/discussions):
for Q&A and discussions. Note the pinned discussions for announcements.
* **Twitter**: [@spackpm](https://twitter.com/spackpm). Be sure to
* **X**: [@spackpm](https://twitter.com/spackpm). Be sure to
`@mention` us!
* **Mailing list**: [groups.google.com/d/forum/spack](https://groups.google.com/d/forum/spack):
only for announcements. Please use other venues for discussions.


@@ -15,7 +15,7 @@ concretizer:
# as possible, rather than building. If `false`, we'll always give you a fresh
# concretization. If `dependencies`, we'll only reuse dependencies but
# give you a fresh concretization for your root specs.
reuse: dependencies
reuse: true
# Options that tune which targets are considered for concretization. The
# concretization process is very sensitive to the number targets, and the time
# needed to reach a solution increases noticeably with the number of targets
@@ -42,3 +42,8 @@ concretizer:
# "minimal": allows the duplication of 'build-tools' nodes only (e.g. py-setuptools, cmake etc.)
# "full" (experimental): allows separation of the entire build-tool stack (e.g. the entire "cmake" subDAG)
strategy: minimal
# Option to specify compatibility between operating systems for reuse of compilers and packages
# Specified as a key: [list] pair, where the key is the OS being targeted and the list contains
# the OSes it can reuse. Note this is a directional compatibility, so mutual compatibility
# between two OSes requires two entries, i.e. os_compatible: {sonoma: [monterey], monterey: [sonoma]}
os_compatible: {}


@@ -101,6 +101,12 @@ config:
verify_ssl: true
# This is where custom certs for proxy/firewall are stored.
# It can be a path or environment variable. To match ssl env configuration
# the default is the environment variable SSL_CERT_FILE
ssl_certs: $SSL_CERT_FILE
# Suppress gpg warnings from binary package verification
# Only suppresses warnings, gpg failure will still fail the install
# Potential rationale to set True: users have already explicitly trusted the


@@ -19,7 +19,6 @@ packages:
- apple-clang
- clang
- gcc
- intel
providers:
elf: [libelf]
fuse: [macfuse]


@@ -15,7 +15,7 @@
# -------------------------------------------------------------------------
packages:
all:
compiler: [gcc, intel, pgi, clang, xl, nag, fj, aocc]
compiler: [gcc, clang, oneapi, xl, nag, fj, aocc]
providers:
awk: [gawk]
blas: [openblas, amdblis]
@@ -24,6 +24,7 @@ packages:
elf: [elfutils]
fftw-api: [fftw, amdfftw]
flame: [libflame, amdlibflame]
fortran-rt: [gcc-runtime, intel-oneapi-runtime]
fuse: [libfuse]
gl: [glx, osmesa]
glu: [mesa-glu, openglu]
@@ -34,7 +35,10 @@ packages:
java: [openjdk, jdk, ibm-java]
jpeg: [libjpeg-turbo, libjpeg]
lapack: [openblas, amdlibflame]
libc: [glibc, musl]
libgfortran: [ gcc-runtime ]
libglx: [mesa+glx, mesa18+glx]
libifcore: [ intel-oneapi-runtime ]
libllvm: [llvm]
libosmesa: [mesa+osmesa, mesa18+osmesa]
lua-lang: [lua, lua-luajit-openresty, lua-luajit]


@@ -1119,6 +1119,9 @@ and ``3.4.2``. Similarly, ``@4.2:`` means any version above and including
``4.2``. As a short-hand, ``@3`` is equivalent to the range ``@3:3`` and
includes any version with major version ``3``.
Versions are ordered lexicographically by their components. For more details
on the order, see :ref:`the packaging guide <version-comparison>`.
Notice that you can distinguish between the specific version ``@=3.2`` and
the range ``@3.2``. This is useful for packages that follow a versioning
scheme that omits the zero patch version number: ``3.2``, ``3.2.1``,


@@ -220,6 +220,40 @@ section of the configuration:
.. _binary_caches_oci:
---------------------------------
Automatic push to a build cache
---------------------------------
Sometimes it is convenient to push packages to a build cache as soon as they are installed. Spack can do this by setting the ``--autopush`` flag when adding a mirror:
.. code-block:: console
$ spack mirror add --autopush <name> <url or path>
The ``--autopush`` flag can also be set for an existing mirror:
.. code-block:: console
$ spack mirror set --autopush <name> # enable automatic push for an existing mirror
$ spack mirror set --no-autopush <name> # disable automatic push for an existing mirror
Then after installing a package it is automatically pushed to all mirrors with ``autopush: true``. The command
.. code-block:: console
$ spack install <package>
will have the same effect as
.. code-block:: console
$ spack install <package>
$ spack buildcache push <cache> <package> # for all caches with autopush: true
.. note::
Packages are automatically pushed to a build cache only if they are built from source.
-----------------------------------------
OCI / Docker V2 registries as build cache
-----------------------------------------


@@ -250,7 +250,7 @@ generator is Ninja. To switch to the Ninja generator, simply add:
.. code-block:: python
generator = "Ninja"
generator("ninja")
``CMakePackage`` defaults to "Unix Makefiles". If you switch to the


@@ -145,6 +145,22 @@ hosts when making ``ssl`` connections. Set to ``false`` to disable, and
tools like ``curl`` will use their ``--insecure`` options. Disabling
this can expose you to attacks. Use at your own risk.
--------------------
``ssl_certs``
--------------------
Path to custom certificates for SSL verification. The value can be a
filesystem path, or an environment variable that expands to a file path.
The default value is set to the environment variable ``SSL_CERT_FILE``
to use the same syntax used by many other applications that automatically
detect custom certificates.
When ``url_fetch_method:curl``, ``config:ssl_certs`` should resolve to
a single file. Spack will then set the environment variable ``CURL_CA_BUNDLE``
in the subprocess calling ``curl``.
If ``url_fetch_method:urllib``, then both files and directories are supported,
i.e. ``config:ssl_certs:$SSL_CERT_FILE`` or ``config:ssl_certs:$SSL_CERT_DIR``
will work.
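For example, to point Spack at a corporate proxy's CA bundle (a sketch; the
certificate path here is hypothetical):
.. code-block:: console
   $ spack config add "config:ssl_certs:/etc/ssl/certs/my-proxy-ca.pem"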
--------------------
``checksum``
--------------------


@@ -552,11 +552,11 @@ With either interpreter you can run a single command:
.. code-block:: console
$ spack python -c 'import distro; distro.linux_distribution()'
('Ubuntu', '18.04', 'Bionic Beaver')
$ spack python -c 'from spack.spec import Spec; Spec("python").concretized()'
...
$ spack python -i ipython -c 'import distro; distro.linux_distribution()'
Out[1]: ('Ubuntu', '18.04', 'Bionic Beaver')
$ spack python -i ipython -c 'from spack.spec import Spec; Spec("python").concretized()'
Out[1]: ...
or a file:
@@ -1071,9 +1071,9 @@ Announcing a release
We announce releases in all of the major Spack communication channels.
Publishing the release takes care of GitHub. The remaining channels are
Twitter, Slack, and the mailing list. Here are the steps:
X, Slack, and the mailing list. Here are the steps:
#. Announce the release on Twitter.
#. Announce the release on X.
* Compose the tweet on the ``@spackpm`` account per the
``spack-twitter`` slack channel.


@@ -893,26 +893,50 @@ as an option to the ``version()`` directive. Example situations would be a
"snapshot"-like Version Control System (VCS) tag, a VCS branch such as
``v6-16-00-patches``, or a URL specifying a regularly updated snapshot tarball.
.. _version-comparison:
^^^^^^^^^^^^^^^^^^
Version comparison
^^^^^^^^^^^^^^^^^^
Spack imposes a generic total ordering on the set of versions,
independently of the package they are associated with.
Most Spack versions are numeric, a tuple of integers; for example,
``0.1``, ``6.96`` or ``1.2.3.1``. Spack knows how to compare and sort
numeric versions.
``0.1``, ``6.96`` or ``1.2.3.1``. In this very basic case, version
comparison is lexicographical on the numeric components:
``1.2 < 1.2.1 < 1.2.2 < 1.10``.
Some Spack versions involve slight extensions of numeric syntax; for
example, ``py-sphinx-rtd-theme@=0.1.10a0``. In this case, numbers are
always considered to be "newer" than letters. This is for consistency
with `RPM <https://bugzilla.redhat.com/show_bug.cgi?id=50977>`_.
Spack also supports string components such as ``1.1.1a`` and
``1.y.0``. String components are considered less than numeric
components, so ``1.y.0 < 1.0``. This is for consistency with
`RPM <https://bugzilla.redhat.com/show_bug.cgi?id=50977>`_. String
components do not have to be separated by dots or any other delimiter.
So, the contrived version ``1y0`` is identical to ``1.y.0``.
Spack versions may also be arbitrary non-numeric strings, for example
``develop``, ``master``, ``local``.
Pre-release suffixes also contain string parts, but they are handled
in a special way. For example ``1.2.3alpha1`` is parsed as a pre-release
of the version ``1.2.3``. This allows Spack to order it before the
actual release: ``1.2.3alpha1 < 1.2.3``. Spack supports alpha, beta and
release candidate suffixes: ``1.2alpha1 < 1.2beta1 < 1.2rc1 < 1.2``. Any
suffix not recognized as a pre-release is treated as an ordinary
string component, so ``1.2 < 1.2-mysuffix``.
The order on versions is defined as follows. A version string is split
into a list of components based on delimiters such as ``.``, ``-`` etc.
Lists are then ordered lexicographically, where components are ordered
as follows:
Finally, there are a few special string components that are considered
"infinity versions". They include ``develop``, ``main``, ``master``,
``head``, ``trunk``, and ``stable``. For example: ``1.2 < develop``.
These are useful for specifying the most recent development version of
a package (often a moving target like a git branch), without assigning
a specific version number. Infinity versions are not automatically used when determining the latest version of a package unless explicitly required by another package or user.
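These rules can be checked interactively with ``spack python`` (a sketch
using Spack's internal ``spack.version.Version`` type):
.. code-block:: console
   $ spack python -c 'from spack.version import Version; print(Version("1.2") < Version("1.10"))'
   True
   $ spack python -c 'from spack.version import Version; print(Version("1.2alpha1") < Version("1.2") < Version("develop"))'
   True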
More formally, the order on versions is defined as follows. A version
string is split into a list of components based on delimiters such as
``.`` and ``-`` and string boundaries. The components are split into
the **release** and a possible **pre-release** (if the last component
is numeric and the second to last is a string ``alpha``, ``beta`` or ``rc``).
The release components are ordered lexicographically, with comparison
between different types of components as follows:
#. The following special strings are considered larger than any other
numeric or non-numeric version component, and satisfy the following
@@ -925,6 +949,9 @@ as follows:
#. All other non-numeric components are less than numeric components,
and are ordered alphabetically.
Finally, if the release components are equal, the pre-release components
are used to break the tie, in the obvious way.
The logic behind this sort order is two-fold:
#. Non-numeric versions are usually used for special cases while


@@ -2,12 +2,12 @@ sphinx==7.2.6
sphinxcontrib-programoutput==0.17
sphinx_design==0.5.0
sphinx-rtd-theme==2.0.0
python-levenshtein==0.25.0
python-levenshtein==0.25.1
docutils==0.20.1
pygments==2.17.2
urllib3==2.2.1
pytest==8.1.1
isort==5.13.2
black==24.3.0
black==24.4.0
flake8==7.0.0
mypy==1.9.0

lib/spack/env/cc (vendored)

@@ -47,7 +47,8 @@ SPACK_F77_RPATH_ARG
SPACK_FC_RPATH_ARG
SPACK_LINKER_ARG
SPACK_SHORT_SPEC
SPACK_SYSTEM_DIRS"
SPACK_SYSTEM_DIRS
SPACK_MANAGED_DIRS"
# Optional parameters that aren't required to be set
@@ -173,22 +174,6 @@ preextend() {
unset IFS
}
# system_dir PATH
# test whether a path is a system directory
system_dir() {
IFS=':' # SPACK_SYSTEM_DIRS is colon-separated
path="$1"
for sd in $SPACK_SYSTEM_DIRS; do
if [ "${path}" = "${sd}" ] || [ "${path}" = "${sd}/" ]; then
# success if path starts with a system prefix
unset IFS
return 0
fi
done
unset IFS
return 1 # fail if path starts no system prefix
}
# Fail with a clear message if the input contains any bell characters.
if eval "[ \"\${*#*${lsep}}\" != \"\$*\" ]"; then
die "Compiler command line contains our separator ('${lsep}'). Cannot parse."
@@ -201,6 +186,18 @@ for param in $params; do
fi
done
# eval this because SPACK_MANAGED_DIRS and SPACK_SYSTEM_DIRS are inputs we don't wanna loop over.
# moving the eval inside the function would eval it every call.
eval "\
path_order() {
case \"\$1\" in
$SPACK_MANAGED_DIRS) return 0 ;;
$SPACK_SYSTEM_DIRS) return 2 ;;
/*) return 1 ;;
esac
}
"
# Check if optional parameters are defined
# If we aren't asking for debug flags, don't add them
if [ -z "${SPACK_ADD_DEBUG_FLAGS:-}" ]; then
@@ -248,7 +245,7 @@ case "$command" in
lang_flags=C
debug_flags="-g"
;;
c++|CC|g++|clang++|armclang++|icpc|icpx|dpcpp|pgc++|nvc++|xlc++|xlc++_r|FCC|amdclang++|crayCC)
c++|CC|g++|clang++|armclang++|icpc|icpx|pgc++|nvc++|xlc++|xlc++_r|FCC|amdclang++|crayCC)
command="$SPACK_CXX"
language="C++"
comp="CXX"
@@ -420,11 +417,12 @@ input_command="$*"
parse_Wl() {
while [ $# -ne 0 ]; do
if [ "$wl_expect_rpath" = yes ]; then
if system_dir "$1"; then
append return_system_rpath_dirs_list "$1"
else
append return_rpath_dirs_list "$1"
fi
path_order "$1"
case $? in
0) append return_spack_store_rpath_dirs_list "$1" ;;
1) append return_rpath_dirs_list "$1" ;;
2) append return_system_rpath_dirs_list "$1" ;;
esac
wl_expect_rpath=no
else
case "$1" in
@@ -432,21 +430,25 @@ parse_Wl() {
arg="${1#-rpath=}"
if [ -z "$arg" ]; then
shift; continue
elif system_dir "$arg"; then
append return_system_rpath_dirs_list "$arg"
else
append return_rpath_dirs_list "$arg"
fi
path_order "$arg"
case $? in
0) append return_spack_store_rpath_dirs_list "$arg" ;;
1) append return_rpath_dirs_list "$arg" ;;
2) append return_system_rpath_dirs_list "$arg" ;;
esac
;;
--rpath=*)
arg="${1#--rpath=}"
if [ -z "$arg" ]; then
shift; continue
elif system_dir "$arg"; then
append return_system_rpath_dirs_list "$arg"
else
append return_rpath_dirs_list "$arg"
fi
path_order "$arg"
case $? in
0) append return_spack_store_rpath_dirs_list "$arg" ;;
1) append return_rpath_dirs_list "$arg" ;;
2) append return_system_rpath_dirs_list "$arg" ;;
esac
;;
-rpath|--rpath)
wl_expect_rpath=yes
@@ -473,12 +475,20 @@ categorize_arguments() {
return_other_args_list=""
return_isystem_was_used=""
return_isystem_spack_store_include_dirs_list=""
return_isystem_system_include_dirs_list=""
return_isystem_include_dirs_list=""
return_spack_store_include_dirs_list=""
return_system_include_dirs_list=""
return_include_dirs_list=""
return_spack_store_lib_dirs_list=""
return_system_lib_dirs_list=""
return_lib_dirs_list=""
return_spack_store_rpath_dirs_list=""
return_system_rpath_dirs_list=""
return_rpath_dirs_list=""
@@ -526,7 +536,7 @@ categorize_arguments() {
continue
fi
replaced="$after$stripped"
replaced="$after$stripped"
# it matched, remove it
shift
@@ -546,29 +556,32 @@ categorize_arguments() {
arg="${1#-isystem}"
return_isystem_was_used=true
if [ -z "$arg" ]; then shift; arg="$1"; fi
if system_dir "$arg"; then
append return_isystem_system_include_dirs_list "$arg"
else
append return_isystem_include_dirs_list "$arg"
fi
path_order "$arg"
case $? in
0) append return_isystem_spack_store_include_dirs_list "$arg" ;;
1) append return_isystem_include_dirs_list "$arg" ;;
2) append return_isystem_system_include_dirs_list "$arg" ;;
esac
;;
-I*)
arg="${1#-I}"
if [ -z "$arg" ]; then shift; arg="$1"; fi
if system_dir "$arg"; then
append return_system_include_dirs_list "$arg"
else
append return_include_dirs_list "$arg"
fi
path_order "$arg"
case $? in
0) append return_spack_store_include_dirs_list "$arg" ;;
1) append return_include_dirs_list "$arg" ;;
2) append return_system_include_dirs_list "$arg" ;;
esac
;;
-L*)
arg="${1#-L}"
if [ -z "$arg" ]; then shift; arg="$1"; fi
if system_dir "$arg"; then
append return_system_lib_dirs_list "$arg"
else
append return_lib_dirs_list "$arg"
fi
path_order "$arg"
case $? in
0) append return_spack_store_lib_dirs_list "$arg" ;;
1) append return_lib_dirs_list "$arg" ;;
2) append return_system_lib_dirs_list "$arg" ;;
esac
;;
-l*)
# -loopopt=0 is generated erroneously in autoconf <= 2.69,
@@ -601,29 +614,32 @@ categorize_arguments() {
break
elif [ "$xlinker_expect_rpath" = yes ]; then
# Register the path of -Xlinker -rpath <other args> -Xlinker <path>
if system_dir "$1"; then
append return_system_rpath_dirs_list "$1"
else
append return_rpath_dirs_list "$1"
fi
path_order "$1"
case $? in
0) append return_spack_store_rpath_dirs_list "$1" ;;
1) append return_rpath_dirs_list "$1" ;;
2) append return_system_rpath_dirs_list "$1" ;;
esac
xlinker_expect_rpath=no
else
case "$1" in
-rpath=*)
arg="${1#-rpath=}"
if system_dir "$arg"; then
append return_system_rpath_dirs_list "$arg"
else
append return_rpath_dirs_list "$arg"
fi
path_order "$arg"
case $? in
0) append return_spack_store_rpath_dirs_list "$arg" ;;
1) append return_rpath_dirs_list "$arg" ;;
2) append return_system_rpath_dirs_list "$arg" ;;
esac
;;
--rpath=*)
arg="${1#--rpath=}"
if system_dir "$arg"; then
append return_system_rpath_dirs_list "$arg"
else
append return_rpath_dirs_list "$arg"
fi
path_order "$arg"
case $? in
0) append return_spack_store_rpath_dirs_list "$arg" ;;
1) append return_rpath_dirs_list "$arg" ;;
2) append return_system_rpath_dirs_list "$arg" ;;
esac
;;
-rpath|--rpath)
xlinker_expect_rpath=yes
@@ -661,16 +677,25 @@ categorize_arguments() {
}
categorize_arguments "$@"
include_dirs_list="$return_include_dirs_list"
lib_dirs_list="$return_lib_dirs_list"
rpath_dirs_list="$return_rpath_dirs_list"
system_include_dirs_list="$return_system_include_dirs_list"
system_lib_dirs_list="$return_system_lib_dirs_list"
system_rpath_dirs_list="$return_system_rpath_dirs_list"
isystem_was_used="$return_isystem_was_used"
isystem_system_include_dirs_list="$return_isystem_system_include_dirs_list"
isystem_include_dirs_list="$return_isystem_include_dirs_list"
other_args_list="$return_other_args_list"
spack_store_include_dirs_list="$return_spack_store_include_dirs_list"
system_include_dirs_list="$return_system_include_dirs_list"
include_dirs_list="$return_include_dirs_list"
spack_store_lib_dirs_list="$return_spack_store_lib_dirs_list"
system_lib_dirs_list="$return_system_lib_dirs_list"
lib_dirs_list="$return_lib_dirs_list"
spack_store_rpath_dirs_list="$return_spack_store_rpath_dirs_list"
system_rpath_dirs_list="$return_system_rpath_dirs_list"
rpath_dirs_list="$return_rpath_dirs_list"
isystem_spack_store_include_dirs_list="$return_isystem_spack_store_include_dirs_list"
isystem_system_include_dirs_list="$return_isystem_system_include_dirs_list"
isystem_include_dirs_list="$return_isystem_include_dirs_list"
isystem_was_used="$return_isystem_was_used"
other_args_list="$return_other_args_list"
#
# Add flags from Spack's cppflags, cflags, cxxflags, fcflags, fflags, and
@@ -738,16 +763,25 @@ esac
IFS="$lsep"
categorize_arguments $spack_flags_list
unset IFS
spack_flags_include_dirs_list="$return_include_dirs_list"
spack_flags_lib_dirs_list="$return_lib_dirs_list"
spack_flags_rpath_dirs_list="$return_rpath_dirs_list"
spack_flags_system_include_dirs_list="$return_system_include_dirs_list"
spack_flags_system_lib_dirs_list="$return_system_lib_dirs_list"
spack_flags_system_rpath_dirs_list="$return_system_rpath_dirs_list"
spack_flags_isystem_was_used="$return_isystem_was_used"
spack_flags_isystem_system_include_dirs_list="$return_isystem_system_include_dirs_list"
spack_flags_isystem_include_dirs_list="$return_isystem_include_dirs_list"
spack_flags_other_args_list="$return_other_args_list"
spack_flags_isystem_spack_store_include_dirs_list="$return_isystem_spack_store_include_dirs_list"
spack_flags_isystem_system_include_dirs_list="$return_isystem_system_include_dirs_list"
spack_flags_isystem_include_dirs_list="$return_isystem_include_dirs_list"
spack_flags_spack_store_include_dirs_list="$return_spack_store_include_dirs_list"
spack_flags_system_include_dirs_list="$return_system_include_dirs_list"
spack_flags_include_dirs_list="$return_include_dirs_list"
spack_flags_spack_store_lib_dirs_list="$return_spack_store_lib_dirs_list"
spack_flags_system_lib_dirs_list="$return_system_lib_dirs_list"
spack_flags_lib_dirs_list="$return_lib_dirs_list"
spack_flags_spack_store_rpath_dirs_list="$return_spack_store_rpath_dirs_list"
spack_flags_system_rpath_dirs_list="$return_system_rpath_dirs_list"
spack_flags_rpath_dirs_list="$return_rpath_dirs_list"
spack_flags_isystem_was_used="$return_isystem_was_used"
spack_flags_other_args_list="$return_other_args_list"
# On macOS insert headerpad_max_install_names linker flag
@@ -767,11 +801,13 @@ if [ "$mode" = ccld ] || [ "$mode" = ld ]; then
# Append RPATH directories. Note that in the case of the
# top-level package these directories may not exist yet. For dependencies
# it is assumed that paths have already been confirmed.
extend spack_store_rpath_dirs_list SPACK_STORE_RPATH_DIRS
extend rpath_dirs_list SPACK_RPATH_DIRS
fi
fi
if [ "$mode" = ccld ] || [ "$mode" = ld ]; then
extend spack_store_lib_dirs_list SPACK_STORE_LINK_DIRS
extend lib_dirs_list SPACK_LINK_DIRS
fi
@@ -798,38 +834,50 @@ case "$mode" in
;;
esac
case "$mode" in
cpp|cc|as|ccld)
if [ "$spack_flags_isystem_was_used" = "true" ] || [ "$isystem_was_used" = "true" ]; then
extend isystem_spack_store_include_dirs_list SPACK_STORE_INCLUDE_DIRS
extend isystem_include_dirs_list SPACK_INCLUDE_DIRS
else
extend spack_store_include_dirs_list SPACK_STORE_INCLUDE_DIRS
extend include_dirs_list SPACK_INCLUDE_DIRS
fi
;;
esac
#
# Finally, reassemble the command line.
#
args_list="$flags_list"
# Insert include directories just prior to any system include directories
# Include search paths partitioned by (in store, non-system, system)
# NOTE: adding ${lsep} to the prefix here turns every added element into two
extend args_list spack_flags_include_dirs_list "-I"
extend args_list include_dirs_list "-I"
extend args_list spack_flags_spack_store_include_dirs_list -I
extend args_list spack_store_include_dirs_list -I
extend args_list spack_flags_include_dirs_list -I
extend args_list include_dirs_list -I
extend args_list spack_flags_isystem_spack_store_include_dirs_list "-isystem${lsep}"
extend args_list isystem_spack_store_include_dirs_list "-isystem${lsep}"
extend args_list spack_flags_isystem_include_dirs_list "-isystem${lsep}"
extend args_list isystem_include_dirs_list "-isystem${lsep}"
case "$mode" in
cpp|cc|as|ccld)
if [ "$spack_flags_isystem_was_used" = "true" ]; then
extend args_list SPACK_INCLUDE_DIRS "-isystem${lsep}"
elif [ "$isystem_was_used" = "true" ]; then
extend args_list SPACK_INCLUDE_DIRS "-isystem${lsep}"
else
extend args_list SPACK_INCLUDE_DIRS "-I"
fi
;;
esac
extend args_list spack_flags_system_include_dirs_list -I
extend args_list system_include_dirs_list -I
extend args_list spack_flags_isystem_system_include_dirs_list "-isystem${lsep}"
extend args_list isystem_system_include_dirs_list "-isystem${lsep}"
# Library search paths
# Library search paths partitioned by (in store, non-system, system)
extend args_list spack_flags_spack_store_lib_dirs_list "-L"
extend args_list spack_store_lib_dirs_list "-L"
extend args_list spack_flags_lib_dirs_list "-L"
extend args_list lib_dirs_list "-L"
extend args_list spack_flags_system_lib_dirs_list "-L"
extend args_list system_lib_dirs_list "-L"
@@ -839,8 +887,12 @@ case "$mode" in
if [ -n "$dtags_to_add" ] ; then
append args_list "$linker_arg$dtags_to_add"
fi
extend args_list spack_flags_spack_store_rpath_dirs_list "$rpath"
extend args_list spack_store_rpath_dirs_list "$rpath"
extend args_list spack_flags_rpath_dirs_list "$rpath"
extend args_list rpath_dirs_list "$rpath"
extend args_list spack_flags_system_rpath_dirs_list "$rpath"
extend args_list system_rpath_dirs_list "$rpath"
;;
@@ -848,8 +900,12 @@ case "$mode" in
if [ -n "$dtags_to_add" ] ; then
append args_list "$dtags_to_add"
fi
extend args_list spack_flags_spack_store_rpath_dirs_list "-rpath${lsep}"
extend args_list spack_store_rpath_dirs_list "-rpath${lsep}"
extend args_list spack_flags_rpath_dirs_list "-rpath${lsep}"
extend args_list rpath_dirs_list "-rpath${lsep}"
extend args_list spack_flags_system_rpath_dirs_list "-rpath${lsep}"
extend args_list system_rpath_dirs_list "-rpath${lsep}"
;;
@@ -913,4 +969,3 @@ fi
# Execute the full command, preserving spaces with IFS set
# to the alarm bell separator.
IFS="$lsep"; exec $full_command_list


@@ -497,7 +497,7 @@ def copy_attributes(self, t, memo=None):
Tag.attrib, merge_attrib]:
if hasattr(self, a):
if memo is not None:
setattr(t, a, copy.deepcopy(getattr(self, a, memo)))
setattr(t, a, copy.deepcopy(getattr(self, a), memo))
else:
setattr(t, a, getattr(self, a))
# fmt: on


@@ -0,0 +1,13 @@
diff --git a/lib/spack/external/_vendoring/ruamel/yaml/comments.py b/lib/spack/external/_vendoring/ruamel/yaml/comments.py
index 1badeda585..892c868af3 100644
--- a/lib/spack/external/_vendoring/ruamel/yaml/comments.py
+++ b/lib/spack/external/_vendoring/ruamel/yaml/comments.py
@@ -497,7 +497,7 @@ def copy_attributes(self, t, memo=None):
Tag.attrib, merge_attrib]:
if hasattr(self, a):
if memo is not None:
- setattr(t, a, copy.deepcopy(getattr(self, a, memo)))
+ setattr(t, a, copy.deepcopy(getattr(self, a), memo))
else:
setattr(t, a, getattr(self, a))
# fmt: on


@@ -12,7 +12,7 @@
# Archive extensions allowed in Spack
PREFIX_EXTENSIONS = ("tar", "TAR")
EXTENSIONS = ("gz", "bz2", "xz", "Z")
NO_TAR_EXTENSIONS = ("zip", "tgz", "tbz2", "tbz", "txz")
NO_TAR_EXTENSIONS = ("zip", "tgz", "tbz2", "tbz", "txz", "whl")
# Add PREFIX_EXTENSIONS and EXTENSIONS last so that .tar.gz is matched *before* .tar or .gz
ALLOWED_ARCHIVE_TYPES = (
@@ -357,10 +357,8 @@ def strip_version_suffixes(path_or_url: str) -> str:
r"i[36]86",
r"ppc64(le)?",
r"armv?(7l|6l|64)?",
# PyPI
r"[._-]py[23].*\.whl",
r"[._-]cp[23].*\.whl",
r"[._-]win.*\.exe",
# PyPI wheels
r"-(?:py|cp)[23].*",
]
for regex in suffix_regexes:
@@ -403,7 +401,7 @@ def expand_contracted_extension_in_path(
def compression_ext_from_compressed_archive(extension: str) -> Optional[str]:
"""Returns compression extension for a compressed archive"""
extension = expand_contracted_extension(extension)
for ext in [*EXTENSIONS]:
for ext in EXTENSIONS:
if ext in extension:
return ext
return None


@@ -198,15 +198,32 @@ def getuid():
return os.getuid()
def _win_rename(src, dst):
# os.replace will still fail on Windows (but not on POSIX) if dst
# is a symlink to a directory (all other cases have parity Windows <-> POSIX)
if os.path.islink(dst) and os.path.isdir(os.path.realpath(dst)):
if os.path.samefile(src, dst):
# src and dst are the same
# do nothing and exit early
return
# If dst exists and is a symlink to a directory
# we need to remove dst and then perform rename/replace
# this is safe to do as there's no chance src == dst now
os.remove(dst)
os.replace(src, dst)
@system_path_filter
def rename(src, dst):
# On Windows, os.rename will fail if the destination file already exists
# os.replace is the same as os.rename on POSIX and is MoveFileExW w/
# the MOVEFILE_REPLACE_EXISTING flag on Windows
# Windows invocation is abstracted behind additional logic handling
# remaining cases of divergent behavior across platforms
if sys.platform == "win32":
# Windows path existence checks will sometimes fail on junctions/links/symlinks
# so check for that case
if os.path.exists(dst) or islink(dst):
os.remove(dst)
os.rename(src, dst)
_win_rename(src, dst)
else:
os.replace(src, dst)
@system_path_filter
@@ -1217,10 +1234,12 @@ def windows_sfn(path: os.PathLike):
import ctypes
k32 = ctypes.WinDLL("kernel32", use_last_error=True)
# Method with null values returns size of short path name
sz = k32.GetShortPathNameW(path, None, 0)
# stub Windows types TCHAR[LENGTH]
TCHAR_arr = ctypes.c_wchar * len(path)
TCHAR_arr = ctypes.c_wchar * sz
ret_str = TCHAR_arr()
k32.GetShortPathNameW(path, ret_str, len(path))
k32.GetShortPathNameW(path, ctypes.byref(ret_str), sz)
return ret_str.value


@@ -12,7 +12,7 @@
import traceback
from datetime import datetime
from sys import platform as _platform
from typing import NoReturn
from typing import Any, NoReturn
if _platform != "win32":
import fcntl
@@ -158,21 +158,22 @@ def get_timestamp(force=False):
return ""
def msg(message, *args, **kwargs):
def msg(message: Any, *args: Any, newline: bool = True) -> None:
if not msg_enabled():
return
if isinstance(message, Exception):
message = "%s: %s" % (message.__class__.__name__, str(message))
message = f"{message.__class__.__name__}: {message}"
else:
message = str(message)
newline = kwargs.get("newline", True)
st_text = ""
if _stacktrace:
st_text = process_stacktrace(2)
if newline:
cprint("@*b{%s==>} %s%s" % (st_text, get_timestamp(), cescape(_output_filter(message))))
else:
cwrite("@*b{%s==>} %s%s" % (st_text, get_timestamp(), cescape(_output_filter(message))))
nl = "\n" if newline else ""
cwrite(f"@*b{{{st_text}==>}} {get_timestamp()}{cescape(_output_filter(message))}{nl}")
for arg in args:
print(indent + _output_filter(str(arg)))
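# Example (hypothetical call): tty.msg("Installed zlib", "prefix: /opt/spack/zlib")
# prints a bold-blue "==>" header line, then each extra argument on its own
# indented line.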


@@ -237,7 +237,6 @@ def transpose():
def colified(
elts: List[Any],
cols: int = 0,
output: Optional[IO] = None,
indent: int = 0,
padding: int = 2,
tty: Optional[bool] = None,


@@ -62,6 +62,7 @@
import re
import sys
from contextlib import contextmanager
from typing import Optional
class ColorParseError(Exception):
@@ -95,7 +96,7 @@ def __init__(self, message):
} # white
# Regex to be used for color formatting
color_re = r"@(?:@|\.|([*_])?([a-zA-Z])?(?:{((?:[^}]|}})*)})?)"
COLOR_RE = re.compile(r"@(?:(@)|(\.)|([*_])?([a-zA-Z])?(?:{((?:[^}]|}})*)})?)")
# Mapping from color arguments to values for tty.set_color
color_when_values = {"always": True, "auto": None, "never": False}
@@ -203,77 +204,64 @@ def color_when(value):
set_color_when(old_value)
class match_to_ansi:
def __init__(self, color=True, enclose=False, zsh=False):
self.color = _color_when_value(color)
self.enclose = enclose
self.zsh = zsh
def escape(self, s):
"""Returns a TTY escape sequence for a color"""
if self.color:
if self.zsh:
result = rf"\e[0;{s}m"
else:
result = f"\033[{s}m"
if self.enclose:
result = rf"\[{result}\]"
return result
def _escape(s: str, color: bool, enclose: bool, zsh: bool) -> str:
"""Returns a TTY escape sequence for a color"""
if color:
if zsh:
result = rf"\e[0;{s}m"
else:
return ""
result = f"\033[{s}m"
def __call__(self, match):
"""Convert a match object generated by ``color_re`` into an ansi
color code. This can be used as a handler in ``re.sub``.
"""
style, color, text = match.groups()
m = match.group(0)
if enclose:
result = rf"\[{result}\]"
if m == "@@":
return "@"
elif m == "@.":
return self.escape(0)
elif m == "@":
raise ColorParseError("Incomplete color format: '%s' in %s" % (m, match.string))
string = styles[style]
if color:
if color not in colors:
raise ColorParseError(
"Invalid color specifier: '%s' in '%s'" % (color, match.string)
)
string += ";" + str(colors[color])
colored_text = ""
if text:
colored_text = text + self.escape(0)
return self.escape(string) + colored_text
return result
else:
return ""
def colorize(string, **kwargs):
def colorize(
string: str, color: Optional[bool] = None, enclose: bool = False, zsh: bool = False
) -> str:
"""Replace all color expressions in a string with ANSI control codes.
Args:
string (str): The string to replace
string: The string to replace
Returns:
str: The filtered string
The filtered string
Keyword Arguments:
color (bool): If False, output will be plain text without control
codes, for output to non-console devices.
enclose (bool): If True, enclose ansi color sequences with
color: If False, output will be plain text without control codes, for output to
non-console devices (default: automatically choose color or not)
enclose: If True, enclose ansi color sequences with
square brackets to prevent misestimation of terminal width.
zsh (bool): If True, use zsh ansi codes instead of bash ones (for variables like PS1)
zsh: If True, use zsh ansi codes instead of bash ones (for variables like PS1)
"""
color = _color_when_value(kwargs.get("color", get_color_when()))
zsh = kwargs.get("zsh", False)
string = re.sub(color_re, match_to_ansi(color, kwargs.get("enclose")), string, zsh)
string = string.replace("}}", "}")
return string
color = color if color is not None else get_color_when()
def match_to_ansi(match):
"""Convert a match object generated by ``COLOR_RE`` into an ansi
color code. This can be used as a handler in ``re.sub``.
"""
escaped_at, dot, style, color_code, text = match.groups()
if escaped_at:
return "@"
elif dot:
return _escape(0, color, enclose, zsh)
elif not (style or color_code):
raise ColorParseError(
f"Incomplete color format: '{match.group(0)}' in '{match.string}'"
)
ansi_code = _escape(f"{styles[style]};{colors.get(color_code, '')}", color, enclose, zsh)
if text:
return f"{ansi_code}{text}{_escape(0, color, enclose, zsh)}"
else:
return ansi_code
return COLOR_RE.sub(match_to_ansi, string).replace("}}", "}")
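# Quick sanity check (hypothetical usage):
#   colorize("@*g{OK} done", color=False) == "OK done"
# because color=False makes _escape() return "" for every escape sequence.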
def clen(string):
@@ -305,7 +293,7 @@ def cprint(string, stream=None, color=None):
cwrite(string + "\n", stream, color)
def cescape(string):
def cescape(string: str) -> str:
"""Escapes special characters needed for color codes.
Replaces the following symbols with their equivalent literal forms:
@@ -321,10 +309,7 @@ def cescape(string):
Returns:
(str): the string with color codes escaped
"""
string = str(string)
string = string.replace("@", "@@")
string = string.replace("}", "}}")
return string
return string.replace("@", "@@").replace("}", "}}")
class ColorStream:


@@ -17,7 +17,6 @@
import tarfile
import tempfile
import time
import traceback
import urllib.error
import urllib.parse
import urllib.request
@@ -111,10 +110,6 @@ def __init__(self, errors):
super().__init__(self.message)
class ListMirrorSpecsError(spack.error.SpackError):
"""Raised when unable to retrieve list of specs from the mirror"""
class BinaryCacheIndex:
"""
The BinaryCacheIndex tracks what specs are available on (usually remote)
@@ -541,83 +536,6 @@ def binary_index_location():
BINARY_INDEX: BinaryCacheIndex = llnl.util.lang.Singleton(BinaryCacheIndex) # type: ignore
class NoOverwriteException(spack.error.SpackError):
"""Raised when a file would be overwritten"""
def __init__(self, file_path):
super().__init__(f"Refusing to overwrite the following file: {file_path}")
class NoGpgException(spack.error.SpackError):
"""
Raised when gpg2 is not in PATH
"""
def __init__(self, msg):
super().__init__(msg)
class NoKeyException(spack.error.SpackError):
"""
Raised when gpg has no default key added.
"""
def __init__(self, msg):
super().__init__(msg)
class PickKeyException(spack.error.SpackError):
"""
Raised when multiple keys can be used to sign.
"""
def __init__(self, keys):
err_msg = "Multiple keys available for signing\n%s\n" % keys
err_msg += "Use spack buildcache create -k <key hash> to pick a key."
super().__init__(err_msg)
class NoVerifyException(spack.error.SpackError):
"""
Raised if file fails signature verification.
"""
pass
class NoChecksumException(spack.error.SpackError):
"""
Raised if file fails checksum verification.
"""
def __init__(self, path, size, contents, algorithm, expected, computed):
super().__init__(
f"{algorithm} checksum failed for {path}",
f"Expected {expected} but got {computed}. "
f"File size = {size} bytes. Contents = {contents!r}",
)
class NewLayoutException(spack.error.SpackError):
"""
Raised if directory layout is different from buildcache.
"""
def __init__(self, msg):
super().__init__(msg)
class InvalidMetadataFile(spack.error.SpackError):
pass
class UnsignedPackageException(spack.error.SpackError):
"""
Raised if installation of unsigned package is attempted without
the use of ``--no-check-signature``.
"""
def compute_hash(data):
if isinstance(data, str):
data = data.encode("utf-8")
@@ -992,15 +910,10 @@ def url_read_method(url):
if entry.endswith("spec.json") or entry.endswith("spec.json.sig")
]
read_fn = url_read_method
except KeyError as inst:
msg = "No packages at {0}: {1}".format(cache_prefix, inst)
tty.warn(msg)
except Exception as err:
# If we got some kind of S3 (access denied or other connection
# error), the first non boto-specific class in the exception
# hierarchy is Exception. Just print a warning and return
msg = "Encountered problem listing packages at {0}: {1}".format(cache_prefix, err)
tty.warn(msg)
# If we got some kind of S3 error (access denied or other connection error), the first
# non-boto-specific class in the exception hierarchy is Exception. Just print a warning and return
tty.warn(f"Encountered problem listing packages at {cache_prefix}: {err}")
return file_list, read_fn
@@ -1047,11 +960,10 @@ def generate_package_index(cache_prefix, concurrency=32):
"""
try:
file_list, read_fn = _spec_files_from_cache(cache_prefix)
except ListMirrorSpecsError as err:
tty.error("Unable to generate package index, {0}".format(err))
return
except ListMirrorSpecsError as e:
raise GenerateIndexError(f"Unable to generate package index: {e}") from e
tty.debug("Retrieving spec descriptor files from {0} to build index".format(cache_prefix))
tty.debug(f"Retrieving spec descriptor files from {cache_prefix} to build index")
tmpdir = tempfile.mkdtemp()
@@ -1061,27 +973,22 @@ def generate_package_index(cache_prefix, concurrency=32):
try:
_read_specs_and_push_index(file_list, read_fn, cache_prefix, db, db_root_dir, concurrency)
except Exception as err:
msg = "Encountered problem pushing package index to {0}: {1}".format(cache_prefix, err)
tty.warn(msg)
tty.debug("\n" + traceback.format_exc())
except Exception as e:
raise GenerateIndexError(
f"Encountered problem pushing package index to {cache_prefix}: {e}"
) from e
finally:
shutil.rmtree(tmpdir)
shutil.rmtree(tmpdir, ignore_errors=True)
def generate_key_index(key_prefix, tmpdir=None):
"""Create the key index page.
Creates (or replaces) the "index.json" page at the location given in
key_prefix. This page contains an entry for each key (.pub) under
key_prefix.
Creates (or replaces) the "index.json" page at the location given in key_prefix. This page
contains an entry for each key (.pub) under key_prefix.
"""
tty.debug(
" ".join(
("Retrieving key.pub files from", url_util.format(key_prefix), "to build key index")
)
)
tty.debug(f"Retrieving key.pub files from {url_util.format(key_prefix)} to build key index")
try:
fingerprints = (
@@ -1089,17 +996,8 @@ def generate_key_index(key_prefix, tmpdir=None):
for entry in web_util.list_url(key_prefix, recursive=False)
if entry.endswith(".pub")
)
except KeyError as inst:
msg = "No keys at {0}: {1}".format(key_prefix, inst)
tty.warn(msg)
return
except Exception as err:
# If we got some kind of S3 (access denied or other connection
# error), the first non boto-specific class in the exception
# hierarchy is Exception. Just print a warning and return
msg = "Encountered problem listing keys at {0}: {1}".format(key_prefix, err)
tty.warn(msg)
return
except Exception as e:
raise CannotListKeys(f"Encountered problem listing keys at {key_prefix}: {e}") from e
remove_tmpdir = False
@@ -1124,12 +1022,13 @@ def generate_key_index(key_prefix, tmpdir=None):
keep_original=False,
extra_args={"ContentType": "application/json"},
)
except Exception as err:
msg = "Encountered problem pushing key index to {0}: {1}".format(key_prefix, err)
tty.warn(msg)
except Exception as e:
raise GenerateIndexError(
f"Encountered problem pushing key index to {key_prefix}: {e}"
) from e
finally:
if remove_tmpdir:
shutil.rmtree(tmpdir)
shutil.rmtree(tmpdir, ignore_errors=True)
def tarfile_of_spec_prefix(tar: tarfile.TarFile, prefix: str) -> None:
@@ -1200,7 +1099,8 @@ def push_or_raise(spec: Spec, out_url: str, options: PushOptions):
used at the mirror (following <tarball_directory_name>).
This method raises :py:class:`NoOverwriteException` when ``force=False`` and the tarball or
spec.json file already exist in the buildcache.
spec.json file already exists in the buildcache. It raises :py:class:`PushToBuildCacheError`
when the tarball or spec.json file cannot be pushed to the buildcache.
"""
if not spec.concrete:
raise ValueError("spec must be concrete to build tarball")
@@ -1278,13 +1178,18 @@ def _build_tarball_in_stage_dir(spec: Spec, out_url: str, stage_dir: str, option
key = select_signing_key(options.key)
sign_specfile(key, options.force, specfile_path)
# push tarball and signed spec json to remote mirror
web_util.push_to_url(spackfile_path, remote_spackfile_path, keep_original=False)
web_util.push_to_url(
signed_specfile_path if not options.unsigned else specfile_path,
remote_signed_specfile_path if not options.unsigned else remote_specfile_path,
keep_original=False,
)
try:
# push tarball and signed spec json to remote mirror
web_util.push_to_url(spackfile_path, remote_spackfile_path, keep_original=False)
web_util.push_to_url(
signed_specfile_path if not options.unsigned else specfile_path,
remote_signed_specfile_path if not options.unsigned else remote_specfile_path,
keep_original=False,
)
except Exception as e:
raise PushToBuildCacheError(
f"Encountered problem pushing binary {remote_spackfile_path}: {e}"
) from e
# push the key to the build cache's _pgp directory so it can be
# imported
@@ -1296,8 +1201,6 @@ def _build_tarball_in_stage_dir(spec: Spec, out_url: str, stage_dir: str, option
if options.regenerate_index:
generate_package_index(url_util.join(out_url, os.path.relpath(cache_prefix, stage_dir)))
return None
class NotInstalledError(spack.error.SpackError):
"""Raised when a spec is not installed but picked to be packaged."""
@@ -1352,28 +1255,6 @@ def specs_to_be_packaged(
return [s for s in itertools.chain(roots, deps) if not s.external]
def push(spec: Spec, mirror_url: str, options: PushOptions):
"""Create and push binary package for a single spec to the specified
mirror url.
Args:
spec: Spec to package and push
mirror_url: Desired destination url for binary package
options:
Returns:
True if package was pushed, False otherwise.
"""
try:
push_or_raise(spec, mirror_url, options)
except NoOverwriteException as e:
warnings.warn(str(e))
return False
return True
def try_verify(specfile_path):
"""Utility function to attempt to verify a local file. Assumes the
file is a clearsigned signature file.
@@ -2706,3 +2587,96 @@ def conditional_fetch(self) -> FetchIndexResult:
raise FetchIndexError(f"Remote index {url_manifest} is invalid")
return FetchIndexResult(etag=None, hash=index_digest.digest, data=result, fresh=False)
class NoOverwriteException(spack.error.SpackError):
"""Raised when a file would be overwritten"""
def __init__(self, file_path):
super().__init__(f"Refusing to overwrite the following file: {file_path}")
class NoGpgException(spack.error.SpackError):
"""
Raised when gpg2 is not in PATH
"""
def __init__(self, msg):
super().__init__(msg)
class NoKeyException(spack.error.SpackError):
"""
Raised when gpg has no default key added.
"""
def __init__(self, msg):
super().__init__(msg)
class PickKeyException(spack.error.SpackError):
"""
Raised when multiple keys can be used to sign.
"""
def __init__(self, keys):
err_msg = "Multiple keys available for signing\n%s\n" % keys
err_msg += "Use spack buildcache create -k <key hash> to pick a key."
super().__init__(err_msg)
class NoVerifyException(spack.error.SpackError):
"""
Raised if file fails signature verification.
"""
pass
class NoChecksumException(spack.error.SpackError):
"""
Raised if file fails checksum verification.
"""
def __init__(self, path, size, contents, algorithm, expected, computed):
super().__init__(
f"{algorithm} checksum failed for {path}",
f"Expected {expected} but got {computed}. "
f"File size = {size} bytes. Contents = {contents!r}",
)
class NewLayoutException(spack.error.SpackError):
"""
Raised if directory layout is different from buildcache.
"""
def __init__(self, msg):
super().__init__(msg)
class InvalidMetadataFile(spack.error.SpackError):
pass
class UnsignedPackageException(spack.error.SpackError):
"""
Raised if installation of unsigned package is attempted without
the use of ``--no-check-signature``.
"""
class ListMirrorSpecsError(spack.error.SpackError):
"""Raised when unable to retrieve list of specs from the mirror"""
class GenerateIndexError(spack.error.SpackError):
"""Raised when unable to generate key or package index for mirror"""
class CannotListKeys(GenerateIndexError):
"""Raised when unable to list keys when generating key index"""
class PushToBuildCacheError(spack.error.SpackError):
"""Raised when unable to push objects to binary mirror"""


@@ -173,35 +173,14 @@ def _read_metadata(self, package_name: str) -> Any:
return data
def _install_by_hash(
self,
pkg_hash: str,
pkg_sha256: str,
index: List[spack.spec.Spec],
bincache_platform: spack.platforms.Platform,
self, pkg_hash: str, pkg_sha256: str, bincache_platform: spack.platforms.Platform
) -> None:
index_spec = next(x for x in index if x.dag_hash() == pkg_hash)
# Reconstruct the compiler that we need to use for bootstrapping
compiler_entry = {
"modules": [],
"operating_system": str(index_spec.os),
"paths": {
"cc": "/dev/null",
"cxx": "/dev/null",
"f77": "/dev/null",
"fc": "/dev/null",
},
"spec": str(index_spec.compiler),
"target": str(index_spec.target.family),
}
with spack.platforms.use_platform(bincache_platform):
with spack.config.override("compilers", [{"compiler": compiler_entry}]):
spec_str = "/" + pkg_hash
query = spack.binary_distribution.BinaryCacheQuery(all_architectures=True)
matches = spack.store.find([spec_str], multiple=False, query_fn=query)
for match in matches:
spack.binary_distribution.install_root_node(
match, unsigned=True, force=True, sha256=pkg_sha256
)
query = spack.binary_distribution.BinaryCacheQuery(all_architectures=True)
for match in spack.store.find([f"/{pkg_hash}"], multiple=False, query_fn=query):
spack.binary_distribution.install_root_node(
match, unsigned=True, force=True, sha256=pkg_sha256
)
def _install_and_test(
self,
@@ -232,7 +211,7 @@ def _install_and_test(
continue
for _, pkg_hash, pkg_sha256 in item["binaries"]:
self._install_by_hash(pkg_hash, pkg_sha256, index, bincache_platform)
self._install_by_hash(pkg_hash, pkg_sha256, bincache_platform)
info: ConfigDictionary = {}
if test_fn(query_spec=abstract_spec, query_info=info):


@@ -43,7 +43,7 @@
from collections import defaultdict
from enum import Flag, auto
from itertools import chain
from typing import List, Tuple
from typing import List, Set, Tuple
import llnl.util.tty as tty
from llnl.string import plural
@@ -57,8 +57,10 @@
import spack.build_systems.meson
import spack.build_systems.python
import spack.builder
import spack.compilers
import spack.config
import spack.deptypes as dt
import spack.error
import spack.main
import spack.package_base
import spack.paths
@@ -66,6 +68,7 @@
import spack.repo
import spack.schema.environment
import spack.spec
import spack.stage
import spack.store
import spack.subprocess_context
import spack.user_environment
@@ -78,7 +81,7 @@
from spack.installer import InstallError
from spack.util.cpus import determine_number_of_jobs
from spack.util.environment import (
SYSTEM_DIRS,
SYSTEM_DIR_CASE_ENTRY,
EnvironmentModifications,
env_flag,
filter_system_paths,
@@ -101,9 +104,13 @@
# Spack's compiler wrappers.
#
SPACK_ENV_PATH = "SPACK_ENV_PATH"
SPACK_MANAGED_DIRS = "SPACK_MANAGED_DIRS"
SPACK_INCLUDE_DIRS = "SPACK_INCLUDE_DIRS"
SPACK_LINK_DIRS = "SPACK_LINK_DIRS"
SPACK_RPATH_DIRS = "SPACK_RPATH_DIRS"
SPACK_STORE_INCLUDE_DIRS = "SPACK_STORE_INCLUDE_DIRS"
SPACK_STORE_LINK_DIRS = "SPACK_STORE_LINK_DIRS"
SPACK_STORE_RPATH_DIRS = "SPACK_STORE_RPATH_DIRS"
SPACK_RPATH_DEPS = "SPACK_RPATH_DEPS"
SPACK_LINK_DEPS = "SPACK_LINK_DEPS"
SPACK_PREFIX = "SPACK_PREFIX"
@@ -416,7 +423,7 @@ def set_compiler_environment_variables(pkg, env):
env.set("SPACK_COMPILER_SPEC", str(spec.compiler))
env.set("SPACK_SYSTEM_DIRS", ":".join(SYSTEM_DIRS))
env.set("SPACK_SYSTEM_DIRS", SYSTEM_DIR_CASE_ENTRY)
compiler.setup_custom_environment(pkg, env)
@@ -544,9 +551,26 @@ def update_compiler_args_for_dep(dep):
include_dirs = list(dedupe(filter_system_paths(include_dirs)))
rpath_dirs = list(dedupe(filter_system_paths(rpath_dirs)))
env.set(SPACK_LINK_DIRS, ":".join(link_dirs))
env.set(SPACK_INCLUDE_DIRS, ":".join(include_dirs))
env.set(SPACK_RPATH_DIRS, ":".join(rpath_dirs))
# Spack managed directories include the stage, store and upstream stores. We extend this with
# their real paths to make it more robust (e.g. /tmp vs /private/tmp on macOS).
spack_managed_dirs: Set[str] = {
spack.stage.get_stage_root(),
spack.store.STORE.db.root,
*(db.root for db in spack.store.STORE.db.upstream_dbs),
}
spack_managed_dirs.update([os.path.realpath(p) for p in spack_managed_dirs])
env.set(SPACK_MANAGED_DIRS, "|".join(f'"{p}/"*' for p in sorted(spack_managed_dirs)))
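# For illustration (hypothetical paths), the resulting value looks like:
#   SPACK_MANAGED_DIRS='"/opt/spack/opt/spack/"*|"/tmp/spack-stage/"*'
# i.e. a "|"-separated list of quoted glob patterns, ready to be spliced into
# the shell `case` statement built by the compiler wrapper.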
is_spack_managed = lambda p: any(p.startswith(store) for store in spack_managed_dirs)
link_dirs_spack, link_dirs_system = stable_partition(link_dirs, is_spack_managed)
include_dirs_spack, include_dirs_system = stable_partition(include_dirs, is_spack_managed)
rpath_dirs_spack, rpath_dirs_system = stable_partition(rpath_dirs, is_spack_managed)
env.set(SPACK_LINK_DIRS, ":".join(link_dirs_system))
env.set(SPACK_INCLUDE_DIRS, ":".join(include_dirs_system))
env.set(SPACK_RPATH_DIRS, ":".join(rpath_dirs_system))
env.set(SPACK_STORE_LINK_DIRS, ":".join(link_dirs_spack))
env.set(SPACK_STORE_INCLUDE_DIRS, ":".join(include_dirs_spack))
env.set(SPACK_STORE_RPATH_DIRS, ":".join(rpath_dirs_spack))
def set_package_py_globals(pkg, context: Context = Context.BUILD):
@@ -583,10 +607,22 @@ def set_package_py_globals(pkg, context: Context = Context.BUILD):
# Put spack compiler paths in module scope. (Some packages use it
# in setup_run_environment etc, so don't put it context == build)
link_dir = spack.paths.build_env_path
module.spack_cc = os.path.join(link_dir, pkg.compiler.link_paths["cc"])
module.spack_cxx = os.path.join(link_dir, pkg.compiler.link_paths["cxx"])
module.spack_f77 = os.path.join(link_dir, pkg.compiler.link_paths["f77"])
module.spack_fc = os.path.join(link_dir, pkg.compiler.link_paths["fc"])
pkg_compiler = None
try:
pkg_compiler = pkg.compiler
except spack.compilers.NoCompilerForSpecError as e:
tty.debug(f"cannot set 'spack_cc': {str(e)}")
if pkg_compiler is not None:
module.spack_cc = os.path.join(link_dir, pkg_compiler.link_paths["cc"])
module.spack_cxx = os.path.join(link_dir, pkg_compiler.link_paths["cxx"])
module.spack_f77 = os.path.join(link_dir, pkg_compiler.link_paths["f77"])
module.spack_fc = os.path.join(link_dir, pkg_compiler.link_paths["fc"])
else:
module.spack_cc = None
module.spack_cxx = None
module.spack_f77 = None
module.spack_fc = None
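Since the `spack_cc` family of globals can now be `None`, package code that consumes them may want a guard; a hypothetical fragment (method body illustrative, not from this diff):

```python
# Hypothetical package.py fragment: spack_cc is a module-level global that
# may now be None when no compiler can be associated with the spec.
def setup_run_environment(self, env):
    if spack_cc is not None:
        env.set("CC", spack_cc)
```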
# Useful directories within the prefix are encapsulated in
# a Prefix object.
@@ -789,7 +825,7 @@ def setup_package(pkg, dirty, context: Context = Context.BUILD):
for mod in ["cray-mpich", "cray-libsci"]:
module("unload", mod)
if target.module_name:
if target and target.module_name:
load_module(target.module_name)
load_external_modules(pkg)

View File

@@ -434,11 +434,6 @@ def _do_patch_libtool(self):
r"crtendS\.o",
]:
x.filter(regex=(rehead + o), repl="")
elif self.pkg.compiler.name == "dpcpp":
# Hack to filter out spurious predep_objects when building with Intel dpcpp
# (see https://github.com/spack/spack/issues/32863):
x.filter(regex=r"^(predep_objects=.*)/tmp/conftest-[0-9A-Fa-f]+\.o", repl=r"\1")
x.filter(regex=r"^(predep_objects=.*)/tmp/a-[0-9A-Fa-f]+\.o", repl=r"\1")
elif self.pkg.compiler.name == "nag":
for tag in ["fc", "f77"]:
marker = markers[tag]

View File

@@ -4,6 +4,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import collections.abc
import os
import re
from typing import Tuple
import llnl.util.filesystem as fs
@@ -15,6 +16,12 @@
from .cmake import CMakeBuilder, CMakePackage
def spec_uses_toolchain(spec):
gcc_toolchain_regex = re.compile(".*gcc-toolchain.*")
using_toolchain = list(filter(gcc_toolchain_regex.match, spec.compiler_flags["cxxflags"]))
return using_toolchain
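For reference, `spec_uses_toolchain` returns the matching `cxxflags` entries, so a non-empty (truthy) result both gates and supplies the flag reused in the CUDA section below; a small illustration with an invented flag value:

```python
# Illustration only; the toolchain flag value is invented.
import re

gcc_toolchain_regex = re.compile(".*gcc-toolchain.*")
cxxflags = ["--gcc-toolchain=/opt/gcc-12", "-O2"]
print(list(filter(gcc_toolchain_regex.match, cxxflags)))
# ['--gcc-toolchain=/opt/gcc-12']
```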
def cmake_cache_path(name, value, comment="", force=False):
"""Generate a string for a cmake cache variable"""
force_str = " FORCE" if force else ""
@@ -213,7 +220,7 @@ def initconfig_mpi_entries(self):
else:
# starting with cmake 3.10, FindMPI expects MPIEXEC_EXECUTABLE
# vs the older versions which expect MPIEXEC
if self.pkg.spec["cmake"].satisfies("@3.10:"):
if spec["cmake"].satisfies("@3.10:"):
entries.append(cmake_cache_path("MPIEXEC_EXECUTABLE", mpiexec))
else:
entries.append(cmake_cache_path("MPIEXEC", mpiexec))
@@ -248,12 +255,17 @@ def initconfig_hardware_entries(self):
# Include the deprecated CUDA_TOOLKIT_ROOT_DIR for supporting BLT packages
entries.append(cmake_cache_path("CUDA_TOOLKIT_ROOT_DIR", cudatoolkitdir))
archs = spec.variants["cuda_arch"].value
if archs[0] != "none":
arch_str = ";".join(archs)
entries.append(
cmake_cache_string("CMAKE_CUDA_ARCHITECTURES", "{0}".format(arch_str))
)
# CUDA_FLAGS
cuda_flags = []
if not spec.satisfies("cuda_arch=none"):
cuda_archs = ";".join(spec.variants["cuda_arch"].value)
entries.append(cmake_cache_string("CMAKE_CUDA_ARCHITECTURES", cuda_archs))
if spec_uses_toolchain(spec):
cuda_flags.append("-Xcompiler {}".format(spec_uses_toolchain(spec)[0]))
entries.append(cmake_cache_string("CMAKE_CUDA_FLAGS", " ".join(cuda_flags)))
if "+rocm" in spec:
entries.append("#------------------{0}".format("-" * 30))
@@ -262,9 +274,6 @@ def initconfig_hardware_entries(self):
# Explicitly setting HIP_ROOT_DIR may be a patch that is no longer necessary
entries.append(cmake_cache_path("HIP_ROOT_DIR", "{0}".format(spec["hip"].prefix)))
entries.append(
cmake_cache_path("HIP_CXX_COMPILER", "{0}".format(self.spec["hip"].hipcc))
)
llvm_bin = spec["llvm-amdgpu"].prefix.bin
llvm_prefix = spec["llvm-amdgpu"].prefix
# Some ROCm systems seem to point to /<path>/rocm-<ver>/ and
@@ -277,11 +286,9 @@ def initconfig_hardware_entries(self):
archs = self.spec.variants["amdgpu_target"].value
if archs[0] != "none":
arch_str = ";".join(archs)
entries.append(
cmake_cache_string("CMAKE_HIP_ARCHITECTURES", "{0}".format(arch_str))
)
entries.append(cmake_cache_string("AMDGPU_TARGETS", "{0}".format(arch_str)))
entries.append(cmake_cache_string("GPU_TARGETS", "{0}".format(arch_str)))
entries.append(cmake_cache_string("CMAKE_HIP_ARCHITECTURES", arch_str))
entries.append(cmake_cache_string("AMDGPU_TARGETS", arch_str))
entries.append(cmake_cache_string("GPU_TARGETS", arch_str))
return entries

View File

@@ -16,7 +16,7 @@
class CargoPackage(spack.package_base.PackageBase):
"""Specialized class for packages built using a Makefiles."""
"""Specialized class for packages built using cargo."""
#: This attribute is used in UI queries that need to know the build
#: system base class

View File

@@ -21,7 +21,7 @@
class MakefilePackage(spack.package_base.PackageBase):
"""Specialized class for packages built using a Makefiles."""
"""Specialized class for packages built using Makefiles."""
#: This attribute is used in UI queries that need to know the build
#: system base class

View File

@@ -14,7 +14,7 @@
from llnl.util.link_tree import LinkTree
from spack.build_environment import dso_suffix
from spack.directives import conflicts, variant
from spack.directives import conflicts, license, variant
from spack.package_base import InstallError
from spack.util.environment import EnvironmentModifications
from spack.util.executable import Executable
@@ -26,6 +26,7 @@ class IntelOneApiPackage(Package):
"""Base class for Intel oneAPI packages."""
homepage = "https://software.intel.com/oneapi"
license("https://intel.ly/393CijO")
# oneAPI license does not allow mirroring outside of the
# organization (e.g. University/Company).

View File

@@ -75,9 +75,12 @@
# does not like its directory structure.
#
import os
import spack.variant
from spack.directives import conflicts, depends_on, variant
from spack.package_base import PackageBase
from spack.util.environment import EnvironmentModifications
class ROCmPackage(PackageBase):
@@ -154,6 +157,25 @@ def hip_flags(amdgpu_target):
archs = ",".join(amdgpu_target)
return "--amdgpu-target={0}".format(archs)
def asan_on(self, env: EnvironmentModifications):
llvm_path = self.spec["llvm-amdgpu"].prefix
env.set("CC", llvm_path + "/bin/clang")
env.set("CXX", llvm_path + "/bin/clang++")
env.set("ASAN_OPTIONS", "detect_leaks=0")
for root, _, files in os.walk(llvm_path):
if "libclang_rt.asan-x86_64.so" in files:
asan_lib_path = root
env.prepend_path("LD_LIBRARY_PATH", asan_lib_path)
if "rhel" in self.spec.os or "sles" in self.spec.os:
SET_DWARF_VERSION_4 = "-gdwarf-5"
else:
SET_DWARF_VERSION_4 = ""
env.set("CFLAGS", f"-fsanitize=address -shared-libasan -g {SET_DWARF_VERSION_4}")
env.set("CXXFLAGS", f"-fsanitize=address -shared-libasan -g {SET_DWARF_VERSION_4}")
env.set("LDFLAGS", "-Wl,--enable-new-dtags -fuse-ld=lld -fsanitize=address -g -Wl,")
# HIP version vs Architecture
# TODO: add a bunch of lines like:

File diff suppressed because it is too large

View File

@@ -334,8 +334,7 @@ def display_specs(specs, args=None, **kwargs):
variants (bool): Show variants with specs
indent (int): indent each line this much
groups (bool): display specs grouped by arch/compiler (default True)
decorators (dict): dictionary mappng specs to decorators
header_callback (typing.Callable): called at start of arch/compiler groups
decorator (typing.Callable): function to call to decorate specs
all_headers (bool): show headers even when arch/compiler aren't defined
output (typing.IO): A file object to write to. Default is ``sys.stdout``
@@ -384,15 +383,13 @@ def get_arg(name, default=None):
vfmt = "{variants}" if variants else ""
format_string = nfmt + "{@version}" + ffmt + vfmt
transform = {"package": decorator, "fullpackage": decorator}
def fmt(s, depth=0):
"""Formatter function for all output specs"""
string = ""
if hashes:
string += gray_hash(s, hlen) + " "
string += depth * " "
string += s.cformat(format_string, transform=transform)
string += decorator(s, s.cformat(format_string))
return string
def format_list(specs):
@@ -451,7 +448,7 @@ def filter_loaded_specs(specs):
return [x for x in specs if x.dag_hash() in hashes]
def print_how_many_pkgs(specs, pkg_type=""):
def print_how_many_pkgs(specs, pkg_type="", suffix=""):
"""Given a list of specs, this will print a message about how many
specs are in that list.
@@ -462,7 +459,7 @@ def print_how_many_pkgs(specs, pkg_type=""):
category, e.g. if pkg_type is "installed" then the message
would be "3 installed packages"
"""
tty.msg("%s" % llnl.string.plural(len(specs), pkg_type + " package"))
tty.msg("%s" % llnl.string.plural(len(specs), pkg_type + " package") + suffix)
def spack_is_git_repo():

View File

@@ -133,6 +133,11 @@ def setup_parser(subparser: argparse.ArgumentParser):
help="when pushing to an OCI registry, tag an image containing all root specs and their "
"runtime dependencies",
)
push.add_argument(
"--private",
action="store_true",
help="for a private mirror, include non-redistributable packages",
)
arguments.add_common_arguments(push, ["specs", "jobs"])
push.set_defaults(func=push_fn)
@@ -275,23 +280,37 @@ def setup_parser(subparser: argparse.ArgumentParser):
# Sync buildcache entries from one mirror to another
sync = subparsers.add_parser("sync", help=sync_fn.__doc__)
sync.add_argument(
"--manifest-glob", help="a quoted glob pattern identifying copy manifest files"
sync_manifest_source = sync.add_argument_group(
"Manifest Source",
"Specify a list of build cache objects to sync using manifest file(s)."
'This option takes the place of the "source mirror" for synchronization'
'and optionally takes a "destination mirror" ',
)
sync.add_argument(
sync_manifest_source.add_argument(
"--manifest-glob", help="a quoted glob pattern identifying CI rebuild manifest files"
)
sync_source_mirror = sync.add_argument_group(
"Named Source",
"Specify a single registered source mirror to synchronize from. This option requires"
"the specification of a destination mirror.",
)
sync_source_mirror.add_argument(
"src_mirror",
metavar="source mirror",
type=arguments.mirror_name_or_url,
nargs="?",
type=arguments.mirror_name_or_url,
help="source mirror name, path, or URL",
)
sync.add_argument(
"dest_mirror",
metavar="destination mirror",
type=arguments.mirror_name_or_url,
nargs="?",
type=arguments.mirror_name_or_url,
help="destination mirror name, path, or URL",
)
sync.set_defaults(func=sync_fn)
# Update buildcache index without copying any additional packages
@@ -353,6 +372,25 @@ def _make_pool() -> MaybePool:
return NoPool()
def _skip_no_redistribute_for_public(specs):
remaining_specs = list()
removed_specs = list()
for spec in specs:
if spec.package.redistribute_binary:
remaining_specs.append(spec)
else:
removed_specs.append(spec)
if removed_specs:
colified_output = tty.colify.colified(list(s.name for s in removed_specs), indent=4)
tty.debug(
"The following specs will not be added to the binary cache"
" because they cannot be redistributed:\n"
f"{colified_output}\n"
"You can use `--private` to include them."
)
return remaining_specs
def push_fn(args):
"""create a binary package and push it to a mirror"""
if args.spec_file:
@@ -403,6 +441,8 @@ def push_fn(args):
root="package" in args.things_to_install,
dependencies="dependencies" in args.things_to_install,
)
if not args.private:
specs = _skip_no_redistribute_for_public(specs)
# When pushing multiple specs, print the url once ahead of time, as well as how
# many specs are being pushed.
@@ -1070,7 +1110,17 @@ def sync_fn(args):
requires an active environment in order to know which specs to sync
"""
if args.manifest_glob:
manifest_copy(glob.glob(args.manifest_glob))
# args.src_mirror is passed as the destination here because argparse cannot
# make the destination required when a named source mirror is given, yet
# optional for --manifest-glob. For a manifest-glob sync, the source mirror
# positional argument, if present, is treated as the destination override;
# if two mirrors are specified, the second is ignored.
if args.dest_mirror:
tty.warn(f"Ignoring unused arguemnt: {args.dest_mirror.name}")
manifest_copy(glob.glob(args.manifest_glob), args.src_mirror)
return 0
if args.src_mirror is None or args.dest_mirror is None:
@@ -1121,7 +1171,7 @@ def sync_fn(args):
shutil.rmtree(tmpdir)
def manifest_copy(manifest_file_list):
def manifest_copy(manifest_file_list, dest_mirror=None):
"""Read manifest files containing information about specific specs to copy
from source to destination, remove duplicates since any binary package for
a given hash should be the same as any other, and copy all files specified
@@ -1135,10 +1185,17 @@ def manifest_copy(manifest_file_list):
# Last duplicate hash wins
deduped_manifest[spec_hash] = copy_list
build_cache_dir = bindist.build_cache_relative_path()
for spec_hash, copy_list in deduped_manifest.items():
for copy_file in copy_list:
tty.debug("copying {0} to {1}".format(copy_file["src"], copy_file["dest"]))
copy_buildcache_file(copy_file["src"], copy_file["dest"])
dest = copy_file["dest"]
if dest_mirror:
src_relative_path = os.path.join(
build_cache_dir, copy_file["src"].rsplit(build_cache_dir, 1)[1].lstrip("/")
)
dest = url_util.join(dest_mirror.push_url, src_relative_path)
tty.debug("copying {0} to {1}".format(copy_file["src"], dest))
copy_buildcache_file(copy_file["src"], dest)
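A hedged sketch of the `dest` rewrite above (manifest keys inferred from the loop; paths invented, and the real manifest layout may differ):

```python
# Sketch only: recompute the destination from the src path's build_cache
# suffix, as manifest_copy() does when a dest_mirror override is given.
import os

build_cache_dir = "build_cache"  # bindist.build_cache_relative_path()
copy_file = {
    "src": "/old/build_cache/linux-x86_64/zlib-1.3-abcdefg.spack",
    "dest": "file:///from-manifest",  # ignored when dest_mirror is set
}
src_relative_path = os.path.join(
    build_cache_dir, copy_file["src"].rsplit(build_cache_dir, 1)[1].lstrip("/")
)
# src_relative_path == 'build_cache/linux-x86_64/zlib-1.3-abcdefg.spack'
# dest = url_util.join(dest_mirror.push_url, src_relative_path)
```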
def update_index(mirror: spack.mirror.Mirror, update_keys=False):
@@ -1165,14 +1222,18 @@ def update_index(mirror: spack.mirror.Mirror, update_keys=False):
url, bindist.build_cache_relative_path(), bindist.build_cache_keys_relative_path()
)
bindist.generate_key_index(keys_url)
try:
bindist.generate_key_index(keys_url)
except bindist.CannotListKeys as e:
# Do not error out if listing keys went wrong. This usually means that the _gpg path
# does not exist. TODO: distinguish between this and other errors.
tty.warn(f"did not update the key index: {e}")
def update_index_fn(args):
"""update a buildcache index"""
update_index(args.mirror, update_keys=args.keys)
return update_index(args.mirror, update_keys=args.keys)
def buildcache(parser, args):
if args.func:
args.func(args)
return args.func(args)

View File

@@ -183,7 +183,7 @@ def checksum(parser, args):
print()
if args.add_to_package:
add_versions_to_package(pkg, version_lines)
add_versions_to_package(pkg, version_lines, args.batch)
def print_checksum_status(pkg: PackageBase, version_hashes: dict):
@@ -229,7 +229,7 @@ def print_checksum_status(pkg: PackageBase, version_hashes: dict):
tty.die("Invalid checksums found.")
def add_versions_to_package(pkg: PackageBase, version_lines: str):
def add_versions_to_package(pkg: PackageBase, version_lines: str, is_batch: bool):
"""
Add checksummed versions to a package's instructions and open a user's
editor so they may double check the work of the function.
@@ -282,5 +282,5 @@ def add_versions_to_package(pkg: PackageBase, version_lines: str):
tty.msg(f"Added {num_versions_added} new versions to {pkg.name}")
tty.msg(f"Open {filename} to review the additions.")
if sys.stdout.isatty():
if sys.stdout.isatty() and not is_batch:
editor(filename)

View File

@@ -14,6 +14,7 @@
import spack.binary_distribution as bindist
import spack.ci as spack_ci
import spack.cmd
import spack.cmd.buildcache as buildcache
import spack.config as cfg
import spack.environment as ev
@@ -32,6 +33,7 @@
SPACK_COMMAND = "spack"
MAKE_COMMAND = "make"
INSTALL_FAIL_CODE = 1
FAILED_CREATE_BUILDCACHE_CODE = 100
def deindent(desc):
@@ -705,11 +707,9 @@ def ci_rebuild(args):
cdash_handler.report_skipped(job_spec, reports_dir, reason=msg)
cdash_handler.copy_test_results(reports_dir, job_test_dir)
# If the install succeeded, create a buildcache entry for this job spec
# and push it to one or more mirrors. If the install did not succeed,
# print out some instructions on how to reproduce this build failure
# outside of the pipeline environment.
if install_exit_code == 0:
# If the install succeeded, push it to one or more mirrors. Pushing is
# best-effort: a failed push to any mirror yields a non-zero exit code
# but does not stop the remaining pushes.
mirror_urls = [buildcache_mirror_url]
# TODO: Remove this block in Spack 0.23
@@ -721,13 +721,12 @@ def ci_rebuild(args):
destination_mirror_urls=mirror_urls,
sign_binaries=spack_ci.can_sign_binaries(),
):
msg = tty.msg if result.success else tty.warn
msg(
"{} {} to {}".format(
"Pushed" if result.success else "Failed to push",
job_spec.format("{name}{@version}{/hash:7}", color=clr.get_color_when()),
result.url,
)
if not result.success:
install_exit_code = FAILED_CREATE_BUILDCACHE_CODE
(tty.msg if result.success else tty.error)(
f'{"Pushed" if result.success else "Failed to push"} '
f'{job_spec.format("{name}{@version}{/hash:7}", color=clr.get_color_when())} '
f"to {result.url}"
)
# If this is a develop pipeline, check if the spec that we just built is
@@ -748,22 +747,22 @@ def ci_rebuild(args):
tty.warn(msg.format(broken_spec_path, err))
else:
# If the install did not succeed, print out some instructions on how to reproduce this
# build failure outside of the pipeline environment.
tty.debug("spack install exited non-zero, will not create buildcache")
api_root_url = os.environ.get("CI_API_V4_URL")
ci_project_id = os.environ.get("CI_PROJECT_ID")
ci_job_id = os.environ.get("CI_JOB_ID")
repro_job_url = "{0}/projects/{1}/jobs/{2}/artifacts".format(
api_root_url, ci_project_id, ci_job_id
)
repro_job_url = f"{api_root_url}/projects/{ci_project_id}/jobs/{ci_job_id}/artifacts"
# Control characters cause this to be printed in blue so it stands out
reproduce_msg = """
print(
f"""
\033[34mTo reproduce this build locally, run:
spack ci reproduce-build {0} [--working-dir <dir>] [--autostart]
spack ci reproduce-build {repro_job_url} [--working-dir <dir>] [--autostart]
If this project does not have public pipelines, you will need to first:
@@ -771,12 +770,9 @@ def ci_rebuild(args):
... then follow the printed instructions.\033[0;0m
""".format(
repro_job_url
"""
)
print(reproduce_msg)
rebuild_timer.stop()
try:
with open("install_timers.json", "w") as timelog:

View File

@@ -404,17 +404,6 @@ def no_install_status():
)
@arg
def debug_nondefaults():
return Args(
"-d",
"--debug-nondefaults",
action="store_true",
default=False,
help="show non-default decisions in red",
)
@arg
def no_checksum():
return Args(

View File

@@ -9,6 +9,7 @@
import shutil
import sys
import tempfile
from pathlib import Path
from typing import Optional
import llnl.string as string
@@ -44,6 +45,7 @@
"deactivate",
"create",
["remove", "rm"],
["rename", "mv"],
["list", "ls"],
["status", "st"],
"loads",
@@ -472,11 +474,82 @@ def env_remove(args):
tty.msg(f"Successfully removed environment '{bad_env_name}'")
#
# env rename
#
def env_rename_setup_parser(subparser):
"""rename an existing environment"""
subparser.add_argument(
"mv_from", metavar="from", help="name (or path) of existing environment"
)
subparser.add_argument(
"mv_to", metavar="to", help="new name (or path) for existing environment"
)
subparser.add_argument(
"-d",
"--dir",
action="store_true",
help="the specified arguments correspond to directory paths",
)
subparser.add_argument(
"-f", "--force", action="store_true", help="allow overwriting of an existing environment"
)
def env_rename(args):
"""Rename an environment.
This renames a managed environment or moves an anonymous environment.
"""
# Directory option has been specified
if args.dir:
if not ev.is_env_dir(args.mv_from):
tty.die("The specified path does not correspond to a valid spack environment")
from_path = Path(args.mv_from)
if not args.force:
if ev.is_env_dir(args.mv_to):
tty.die(
"The new path corresponds to an existing environment;"
" specify the --force flag to overwrite it."
)
if Path(args.mv_to).exists():
tty.die("The new path already exists; specify the --force flag to overwrite it.")
to_path = Path(args.mv_to)
# Name option being used
elif ev.exists(args.mv_from):
from_path = ev.environment.environment_dir_from_name(args.mv_from)
if not args.force and ev.exists(args.mv_to):
tty.die(
"The new name corresponds to an existing environment;"
" specify the --force flag to overwrite it."
)
to_path = ev.environment.root(args.mv_to)
# Neither
else:
tty.die("The specified name does not correspond to a managed spack environment")
# Guard against renaming from or to an active environment
active_env = ev.active_environment()
if active_env:
from_env = ev.Environment(from_path)
if from_env.path == active_env.path:
tty.die("Cannot rename active environment")
if to_path == active_env.path:
tty.die(f"{args.mv_to} is an active environment")
shutil.rmtree(to_path, ignore_errors=True)
fs.rename(from_path, to_path)
tty.msg(f"Successfully renamed environment {args.mv_from} to {args.mv_to}")
#
# env list
#
def env_list_setup_parser(subparser):
"""list available environments"""
"""list managed environments"""
def env_list(args):

View File

@@ -3,7 +3,6 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import copy
import sys
import llnl.util.lang
@@ -14,6 +13,7 @@
import spack.cmd as cmd
import spack.environment as ev
import spack.repo
import spack.store
from spack.cmd.common import arguments
from spack.database import InstallStatuses
@@ -69,6 +69,12 @@ def setup_parser(subparser):
arguments.add_common_arguments(subparser, ["long", "very_long", "tags", "namespaces"])
subparser.add_argument(
"-r",
"--only-roots",
action="store_true",
help="don't show full list of installed specs in an environment",
)
subparser.add_argument(
"-c",
"--show-concretized",
@@ -140,6 +146,12 @@ def setup_parser(subparser):
subparser.add_argument(
"--only-deprecated", action="store_true", help="show only deprecated packages"
)
subparser.add_argument(
"--install-tree",
action="store",
default="all",
help="Install trees to query: 'all' (default), 'local', 'upstream', upstream name or path",
)
subparser.add_argument("--start-date", help="earliest date of installation [YYYY-MM-DD]")
subparser.add_argument("--end-date", help="latest date of installation [YYYY-MM-DD]")
@@ -168,6 +180,12 @@ def query_arguments(args):
q_args = {"installed": installed, "known": known, "explicit": explicit}
install_tree = args.install_tree
upstreams = spack.config.get("upstreams", {})
if install_tree in upstreams.keys():
install_tree = upstreams[install_tree]["install_tree"]
q_args["install_tree"] = install_tree
# Time window of installation
for attribute in ("start_date", "end_date"):
date = getattr(args, attribute)
@@ -177,26 +195,22 @@ def query_arguments(args):
return q_args
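Based on the new option above, usage would look like the following (the upstream name is invented):

```console
$ spack find --install-tree local        # only the local store
$ spack find --install-tree upstream     # upstream stores
$ spack find --install-tree my-upstream  # a named upstream from upstreams.yaml
```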
def setup_env(env):
def make_env_decorator(env):
"""Create a function for decorating specs when in an environment."""
def strip_build(seq):
return set(s.copy(deps=("link", "run")) for s in seq)
added = set(strip_build(env.added_specs()))
roots = set(strip_build(env.roots()))
removed = set(strip_build(env.removed_specs()))
roots = set(env.roots())
removed = set(env.removed_specs())
def decorator(spec, fmt):
# add +/-/* to show added/removed/root specs
if any(spec.dag_hash() == r.dag_hash() for r in roots):
return color.colorize("@*{%s}" % fmt)
return color.colorize(f"@*{{{fmt}}}")
elif spec in removed:
return color.colorize("@K{%s}" % fmt)
return color.colorize(f"@K{{{fmt}}}")
else:
return "%s" % fmt
return fmt
return decorator, added, roots, removed
return decorator
def display_env(env, args, decorator, results):
@@ -211,28 +225,51 @@ def display_env(env, args, decorator, results):
"""
tty.msg("In environment %s" % env.name)
if not env.user_specs:
tty.msg("No root specs")
else:
tty.msg("Root specs")
num_roots = len(env.user_specs) or "No"
tty.msg(f"{num_roots} root specs")
# Root specs cannot be displayed with prefixes, since those are not
# set for abstract specs. Same for hashes
root_args = copy.copy(args)
root_args.paths = False
concrete_specs = {
root: concrete_root
for root, concrete_root in zip(env.concretized_user_specs, env.concrete_roots())
}
# Roots are displayed with variants, etc. so that we can see
# specifically what the user asked for.
def root_decorator(spec, string):
"""Decorate root specs with their install status if needed"""
concrete = concrete_specs.get(spec)
if concrete:
status = color.colorize(concrete.install_status().value)
hash = concrete.dag_hash()
else:
status = color.colorize(spack.spec.InstallStatus.absent.value)
hash = "-" * 32
# TODO: status has two extra spaces on the end of it, but fixing this and other spec
# TODO: space format idiosyncrasies is complicated. Fix this eventually
status = status[:-2]
if args.long or args.very_long:
hash = color.colorize(f"@K{{{hash[: 7 if args.long else None]}}}")
return f"{status} {hash} {string}"
else:
return f"{status} {string}"
with spack.store.STORE.db.read_transaction():
cmd.display_specs(
env.user_specs,
root_args,
decorator=lambda s, f: color.colorize("@*{%s}" % f),
args,
# these are overrides of CLI args
paths=False,
long=False,
very_long=False,
# these enforce details in the root specs to show what the user asked for
namespaces=True,
show_flags=True,
show_full_compiler=True,
decorator=root_decorator,
variants=True,
)
print()
print()
if args.show_concretized:
tty.msg("Concretized roots")
@@ -242,7 +279,7 @@ def display_env(env, args, decorator, results):
# Display a header for the installed packages section IF there are installed
# packages. If there aren't any, we'll just end up printing "0 installed packages"
# later.
if results:
if results and not args.only_roots:
tty.msg("Installed packages")
@@ -251,9 +288,10 @@ def find(parser, args):
results = args.specs(**q_args)
env = ev.active_environment()
decorator = lambda s, f: f
if env:
decorator, _, roots, _ = setup_env(env)
if not env and args.only_roots:
tty.die("-r / --only-roots requires an active environment")
decorator = make_env_decorator(env) if env else lambda s, f: f
# use groups by default except with format.
if args.groups is None:
@@ -280,9 +318,12 @@ def find(parser, args):
if env:
display_env(env, args, decorator, results)
cmd.display_specs(results, args, decorator=decorator, all_headers=True)
count_suffix = " (not shown)"
if not args.only_roots:
cmd.display_specs(results, args, decorator=decorator, all_headers=True)
count_suffix = ""
# print number of installed packages last (as the list may be long)
if sys.stdout.isatty() and args.groups:
pkg_type = "loaded" if args.loaded else "installed"
spack.cmd.print_how_many_pkgs(results, pkg_type)
spack.cmd.print_how_many_pkgs(results, pkg_type, suffix=count_suffix)
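With an active environment, the new flag suppresses the installed-spec listing and annotates the count; an invented session sketching the expected output:

```console
$ spack env activate myenv
$ spack find --only-roots
==> In environment myenv
==> 2 root specs
==> 15 installed packages (not shown)
```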

View File

@@ -263,8 +263,8 @@ def _fmt_name_and_default(variant):
return color.colorize(f"@c{{{variant.name}}} @C{{[{_fmt_value(variant.default)}]}}")
def _fmt_when(when, indent):
return color.colorize(f"{indent * ' '}@B{{when}} {color.cescape(when)}")
def _fmt_when(when: "spack.spec.Spec", indent: int):
return color.colorize(f"{indent * ' '}@B{{when}} {color.cescape(str(when))}")
def _fmt_variant_description(variant, width, indent):
@@ -441,7 +441,7 @@ def get_url(version):
return "No URL"
url = get_url(preferred) if pkg.has_code else ""
line = version(" {0}".format(pad(preferred))) + color.cescape(url)
line = version(" {0}".format(pad(preferred))) + color.cescape(str(url))
color.cwrite(line)
print()
@@ -464,7 +464,7 @@ def get_url(version):
continue
for v, url in vers:
line = version(" {0}".format(pad(v))) + color.cescape(url)
line = version(" {0}".format(pad(v))) + color.cescape(str(url))
color.cprint(line)
@@ -494,7 +494,9 @@ def print_licenses(pkg, args):
pad = padder(pkg.licenses, 4)
for when_spec in pkg.licenses:
license_identifier = pkg.licenses[when_spec]
line = license(" {0}".format(pad(license_identifier))) + color.cescape(when_spec)
line = license(" {0}".format(pad(license_identifier))) + color.cescape(
str(when_spec)
)
color.cprint(line)

View File

@@ -420,10 +420,9 @@ def install_with_active_env(env: ev.Environment, args, install_kwargs, reporter_
with reporter_factory(specs_to_install):
env.install_specs(specs_to_install, **install_kwargs)
finally:
# TODO: this is doing way too much to trigger
# views and modules to be generated.
with env.write_transaction():
env.write(regenerate=True)
if env.views:
with env.write_transaction():
env.write(regenerate=True)
def concrete_specs_from_cli(args, install_kwargs):

View File

@@ -5,8 +5,6 @@
import sys
import llnl.util.tty as tty
import spack.cmd
import spack.cmd.find
import spack.environment as ev
@@ -70,16 +68,6 @@ def setup_parser(subparser):
help="load the first match if multiple packages match the spec",
)
subparser.add_argument(
"--only",
default="package,dependencies",
dest="things_to_load",
choices=["package", "dependencies"],
help="select whether to load the package and its dependencies\n\n"
"the default is to load the package and all dependencies. alternatively, "
"one can decide to load only the package or only the dependencies",
)
subparser.add_argument(
"--list",
action="store_true",
@@ -110,11 +98,6 @@ def load(parser, args):
)
return 1
if args.things_to_load != "package,dependencies":
tty.warn(
"The `--only` flag in spack load is deprecated and will be removed in Spack v0.22"
)
with spack.store.STORE.db.read_transaction():
env_mod = uenv.environment_modifications_for_specs(*specs)
for spec in specs:

View File

@@ -71,6 +71,11 @@ def setup_parser(subparser):
help="the number of versions to fetch for each spec, choose 'all' to"
" retrieve all versions of each package",
)
create_parser.add_argument(
"--private",
action="store_true",
help="for a private mirror, include non-redistributable packages",
)
arguments.add_common_arguments(create_parser, ["specs"])
arguments.add_concretizer_args(create_parser)
@@ -108,6 +113,11 @@ def setup_parser(subparser):
"and source use `--type binary --type source` (default)"
),
)
add_parser.add_argument(
"--autopush",
action="store_true",
help=("set mirror to push automatically after installation"),
)
add_parser_signed = add_parser.add_mutually_exclusive_group(required=False)
add_parser_signed.add_argument(
"--unsigned",
@@ -175,6 +185,21 @@ def setup_parser(subparser):
),
)
set_parser.add_argument("--url", help="url of mirror directory from 'spack mirror create'")
set_parser_autopush = set_parser.add_mutually_exclusive_group(required=False)
set_parser_autopush.add_argument(
"--autopush",
help="set mirror to push automatically after installation",
action="store_true",
default=None,
dest="autopush",
)
set_parser_autopush.add_argument(
"--no-autopush",
help="set mirror to not push automatically after installation",
action="store_false",
default=None,
dest="autopush",
)
set_parser_unsigned = set_parser.add_mutually_exclusive_group(required=False)
set_parser_unsigned.add_argument(
"--unsigned",
@@ -218,6 +243,7 @@ def mirror_add(args):
or args.type
or args.oci_username
or args.oci_password
or args.autopush
or args.signed is not None
):
connection = {"url": args.url}
@@ -234,6 +260,8 @@ def mirror_add(args):
if args.type:
connection["binary"] = "binary" in args.type
connection["source"] = "source" in args.type
if args.autopush:
connection["autopush"] = args.autopush
if args.signed is not None:
connection["signed"] = args.signed
mirror = spack.mirror.Mirror(connection, name=args.name)
@@ -270,6 +298,8 @@ def _configure_mirror(args):
changes["access_pair"] = [args.oci_username, args.oci_password]
if getattr(args, "signed", None) is not None:
changes["signed"] = args.signed
if getattr(args, "autopush", None) is not None:
changes["autopush"] = args.autopush
# argparse cannot distinguish between --binary and --no-binary when same dest :(
# notice that set-url does not have these args, so getattr
@@ -334,7 +364,6 @@ def concrete_specs_from_user(args):
specs = filter_externals(specs)
specs = list(set(specs))
specs.sort(key=lambda s: (s.name, s.version))
specs, _ = lang.stable_partition(specs, predicate_fn=not_excluded_fn(args))
return specs
@@ -379,36 +408,50 @@ def concrete_specs_from_cli_or_file(args):
return specs
def not_excluded_fn(args):
"""Return a predicate that evaluate to True if a spec was not explicitly
excluded by the user.
"""
exclude_specs = []
if args.exclude_file:
exclude_specs.extend(specs_from_text_file(args.exclude_file, concretize=False))
if args.exclude_specs:
exclude_specs.extend(spack.cmd.parse_specs(str(args.exclude_specs).split()))
class IncludeFilter:
def __init__(self, args):
self.exclude_specs = []
if args.exclude_file:
self.exclude_specs.extend(specs_from_text_file(args.exclude_file, concretize=False))
if args.exclude_specs:
self.exclude_specs.extend(spack.cmd.parse_specs(str(args.exclude_specs).split()))
self.private = args.private
def not_excluded(x):
return not any(x.satisfies(y) for y in exclude_specs)
def __call__(self, x):
return all([self._not_license_excluded(x), self._not_cmdline_excluded(x)])
return not_excluded
def _not_license_excluded(self, x):
"""True if the spec is for a private mirror, or as long as the
package does not explicitly forbid redistributing source."""
if self.private:
return True
elif x.package_class.redistribute_source(x):
return True
else:
tty.debug(
"Skip adding {0} to mirror: the package.py file"
" indicates that a public mirror should not contain"
" it.".format(x.name)
)
return False
def _not_cmdline_excluded(self, x):
"""True if a spec was not explicitly excluded by the user."""
return not any(x.satisfies(y) for y in self.exclude_specs)
def concrete_specs_from_environment(selection_fn):
def concrete_specs_from_environment():
env = ev.active_environment()
assert env, "an active environment is required"
mirror_specs = env.all_specs()
mirror_specs = filter_externals(mirror_specs)
mirror_specs, _ = lang.stable_partition(mirror_specs, predicate_fn=selection_fn)
return mirror_specs
def all_specs_with_all_versions(selection_fn):
def all_specs_with_all_versions():
specs = [spack.spec.Spec(n) for n in spack.repo.all_package_names()]
mirror_specs = spack.mirror.get_all_versions(specs)
mirror_specs.sort(key=lambda s: (s.name, s.version))
mirror_specs, _ = lang.stable_partition(mirror_specs, predicate_fn=selection_fn)
return mirror_specs
@@ -429,12 +472,6 @@ def versions_per_spec(args):
return num_versions
def create_mirror_for_individual_specs(mirror_specs, path, skip_unstable_versions):
present, mirrored, error = spack.mirror.create(path, mirror_specs, skip_unstable_versions)
tty.msg("Summary for mirror in {}".format(path))
process_mirror_stats(present, mirrored, error)
def process_mirror_stats(present, mirrored, error):
p, m, e = len(present), len(mirrored), len(error)
tty.msg(
@@ -480,30 +517,28 @@ def mirror_create(args):
# When no directory is provided, the source dir is used
path = args.directory or spack.caches.fetch_cache_location()
mirror_specs, mirror_fn = _specs_and_action(args)
mirror_fn(mirror_specs, path=path, skip_unstable_versions=args.skip_unstable_versions)
def _specs_and_action(args):
include_fn = IncludeFilter(args)
if args.all and not ev.active_environment():
create_mirror_for_all_specs(
path=path,
skip_unstable_versions=args.skip_unstable_versions,
selection_fn=not_excluded_fn(args),
)
return
mirror_specs = all_specs_with_all_versions()
mirror_fn = create_mirror_for_all_specs
elif args.all and ev.active_environment():
mirror_specs = concrete_specs_from_environment()
mirror_fn = create_mirror_for_individual_specs
else:
mirror_specs = concrete_specs_from_user(args)
mirror_fn = create_mirror_for_individual_specs
if args.all and ev.active_environment():
create_mirror_for_all_specs_inside_environment(
path=path,
skip_unstable_versions=args.skip_unstable_versions,
selection_fn=not_excluded_fn(args),
)
return
mirror_specs = concrete_specs_from_user(args)
create_mirror_for_individual_specs(
mirror_specs, path=path, skip_unstable_versions=args.skip_unstable_versions
)
mirror_specs, _ = lang.stable_partition(mirror_specs, predicate_fn=include_fn)
return mirror_specs, mirror_fn
def create_mirror_for_all_specs(path, skip_unstable_versions, selection_fn):
mirror_specs = all_specs_with_all_versions(selection_fn=selection_fn)
def create_mirror_for_all_specs(mirror_specs, path, skip_unstable_versions):
mirror_cache, mirror_stats = spack.mirror.mirror_cache_and_stats(
path, skip_unstable_versions=skip_unstable_versions
)
@@ -515,11 +550,10 @@ def create_mirror_for_all_specs(path, skip_unstable_versions, selection_fn):
process_mirror_stats(*mirror_stats.stats())
def create_mirror_for_all_specs_inside_environment(path, skip_unstable_versions, selection_fn):
mirror_specs = concrete_specs_from_environment(selection_fn=selection_fn)
create_mirror_for_individual_specs(
mirror_specs, path=path, skip_unstable_versions=skip_unstable_versions
)
def create_mirror_for_individual_specs(mirror_specs, path, skip_unstable_versions):
present, mirrored, error = spack.mirror.create(path, mirror_specs, skip_unstable_versions)
tty.msg("Summary for mirror in {}".format(path))
process_mirror_stats(present, mirrored, error)
def mirror_destroy(args):

View File

@@ -116,39 +116,38 @@ def ipython_interpreter(args):
def python_interpreter(args):
"""A python interpreter is the default interpreter"""
# Fake a main python shell by setting __name__ to __main__.
console = code.InteractiveConsole({"__name__": "__main__", "spack": spack})
if "PYTHONSTARTUP" in os.environ:
startup_file = os.environ["PYTHONSTARTUP"]
if os.path.isfile(startup_file):
with open(startup_file) as startup:
console.runsource(startup.read(), startup_file, "exec")
if args.python_command:
propagate_exceptions_from(console)
console.runsource(args.python_command)
elif args.python_args:
propagate_exceptions_from(console)
if args.python_args and not args.python_command:
sys.argv = args.python_args
with open(args.python_args[0]) as file:
console.runsource(file.read(), args.python_args[0], "exec")
runpy.run_path(args.python_args[0], run_name="__main__")
else:
# Provides readline support, allowing user to use arrow keys
console.push("import readline")
# Provide tabcompletion
console.push("from rlcompleter import Completer")
console.push("readline.set_completer(Completer(locals()).complete)")
console.push('readline.parse_and_bind("tab: complete")')
# Fake a main python shell by setting __name__ to __main__.
console = code.InteractiveConsole({"__name__": "__main__", "spack": spack})
if "PYTHONSTARTUP" in os.environ:
startup_file = os.environ["PYTHONSTARTUP"]
if os.path.isfile(startup_file):
with open(startup_file) as startup:
console.runsource(startup.read(), startup_file, "exec")
if args.python_command:
propagate_exceptions_from(console)
console.runsource(args.python_command)
else:
# Provides readline support, allowing user to use arrow keys
console.push("import readline")
# Provide tabcompletion
console.push("from rlcompleter import Completer")
console.push("readline.set_completer(Completer(locals()).complete)")
console.push('readline.parse_and_bind("tab: complete")')
console.interact(
"Spack version %s\nPython %s, %s %s"
% (
spack.spack_version,
platform.python_version(),
platform.system(),
platform.machine(),
console.interact(
"Spack version %s\nPython %s, %s %s"
% (
spack.spack_version,
platform.python_version(),
platform.system(),
platform.machine(),
)
)
)
def propagate_exceptions_from(console):

View File

@@ -45,9 +45,7 @@ def setup_parser(subparser):
arguments.add_common_arguments(subparser, ["long", "very_long", "namespaces"])
install_status_group = subparser.add_mutually_exclusive_group()
arguments.add_common_arguments(
install_status_group, ["install_status", "no_install_status", "debug_nondefaults"]
)
arguments.add_common_arguments(install_status_group, ["install_status", "no_install_status"])
subparser.add_argument(
"-y",
@@ -93,7 +91,6 @@ def setup_parser(subparser):
def _process_result(result, show, required_format, kwargs):
result.raise_if_unsat()
opt, _, _ = min(result.answers)
if ("opt" in show) and (not required_format):
tty.msg("Best of %d considered solutions." % result.nmodels)
@@ -147,7 +144,6 @@ def solve(parser, args):
"show_types": args.types,
"status_fn": install_status_fn if args.install_status else None,
"hashes": args.long or args.very_long,
"nondefaults": args.debug_nondefaults,
}
# process output options

View File

@@ -32,9 +32,7 @@ def setup_parser(subparser):
arguments.add_common_arguments(subparser, ["long", "very_long", "namespaces"])
install_status_group = subparser.add_mutually_exclusive_group()
arguments.add_common_arguments(
install_status_group, ["install_status", "no_install_status", "debug_nondefaults"]
)
arguments.add_common_arguments(install_status_group, ["install_status", "no_install_status"])
format_group = subparser.add_mutually_exclusive_group()
format_group.add_argument(
@@ -86,8 +84,6 @@ def spec(parser, args):
tree_kwargs = {
"cover": args.cover,
"format": fmt,
"hashes": args.long or args.very_long,
"nondefaults": args.debug_nondefaults,
"hashlen": None if args.very_long else 7,
"show_types": args.types,
"status_fn": install_status_fn if args.install_status else None,
@@ -129,14 +125,12 @@ def spec(parser, args):
# repeated output. This happens because parse_specs outputs concrete
# specs for `/hash` inputs.
if not input.concrete:
# NOTE: can use overrides | tree_kwargs in python 3.9+
overrides = {"hashes": False}
overriden_kwargs = {**overrides, **tree_kwargs}
tree_kwargs["hashes"] = False # Always False for input spec
print("Input spec")
print("--------------------------------")
print(input.tree(**overriden_kwargs))
print(input.tree(**tree_kwargs))
print("Concretized")
print("--------------------------------")
tree_kwargs["hashes"] = args.long or args.very_long
print(output.tree(**tree_kwargs))

View File

@@ -34,6 +34,13 @@ def setup_parser(subparser):
default=False,
help="show full pytest help, with advanced options",
)
subparser.add_argument(
"-n",
"--numprocesses",
type=int,
default=1,
help="run tests in parallel up to this wide, default 1 for sequential",
)
# extra spack arguments to list tests
list_group = subparser.add_argument_group("listing tests")
@@ -229,6 +236,16 @@ def unit_test(parser, args, unknown_args):
if args.extension:
pytest_root = spack.extensions.load_extension(args.extension)
if args.numprocesses is not None and args.numprocesses > 1:
pytest_args.extend(
[
"--dist",
"loadfile",
"--tx",
f"{args.numprocesses}*popen//python=spack-tmpconfig spack python",
]
)
# pytest.ini lives in the root of the spack repository.
with llnl.util.filesystem.working_dir(pytest_root):
if args.list:
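The `--dist`/`--tx` options handed to pytest above come from the pytest-xdist plugin; the intended usage is presumably:

```console
$ spack unit-test -n 4   # distribute test files across 4 workers (pytest-xdist)
```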

View File

@@ -8,6 +8,7 @@
import os
import platform
import re
import shlex
import shutil
import sys
import tempfile
@@ -22,6 +23,7 @@
import spack.error
import spack.spec
import spack.util.executable
import spack.util.libc
import spack.util.module_cmd
import spack.version
from spack.util.environment import filter_system_paths
@@ -107,7 +109,6 @@ def _parse_link_paths(string):
"""
lib_search_paths = False
raw_link_dirs = []
tty.debug("parsing implicit link info")
for line in string.splitlines():
if lib_search_paths:
if line.startswith("\t"):
@@ -122,7 +123,7 @@ def _parse_link_paths(string):
continue
if _LINKER_LINE_IGNORE.match(line):
continue
tty.debug("linker line: %s" % line)
tty.debug(f"implicit link dirs: link line: {line}")
next_arg = False
for arg in line.split():
@@ -138,15 +139,12 @@ def _parse_link_paths(string):
link_dir_arg = _LINK_DIR_ARG.match(arg)
if link_dir_arg:
link_dir = link_dir_arg.group("dir")
tty.debug("linkdir: %s" % link_dir)
raw_link_dirs.append(link_dir)
link_dir_arg = _LIBPATH_ARG.match(arg)
if link_dir_arg:
link_dir = link_dir_arg.group("dir")
tty.debug("libpath: %s", link_dir)
raw_link_dirs.append(link_dir)
tty.debug("found raw link dirs: %s" % ", ".join(raw_link_dirs))
implicit_link_dirs = list()
visited = set()
@@ -156,7 +154,7 @@ def _parse_link_paths(string):
implicit_link_dirs.append(normalized_path)
visited.add(normalized_path)
tty.debug("found link dirs: %s" % ", ".join(implicit_link_dirs))
tty.debug(f"implicit link dirs: result: {', '.join(implicit_link_dirs)}")
return implicit_link_dirs
@@ -184,6 +182,21 @@ def _parse_non_system_link_dirs(string: str) -> List[str]:
return list(p for p in link_dirs if not in_system_subdirectory(p))
def _parse_dynamic_linker(output: str):
"""Parse -dynamic-linker /path/to/ld.so from compiler output"""
for line in reversed(output.splitlines()):
if "-dynamic-linker" not in line:
continue
args = shlex.split(line)
for idx in reversed(range(1, len(args))):
arg = args[idx]
if arg == "-dynamic-linker" or args == "--dynamic-linker":
return args[idx + 1]
elif arg.startswith("--dynamic-linker=") or arg.startswith("-dynamic-linker="):
return arg.split("=", 1)[1]
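For context, a hedged illustration of the line this parses: `gcc -v` link output includes a collect2 invocation carrying `-dynamic-linker` (paths invented and abbreviated):

```python
# Sketch of what _parse_dynamic_linker() extracts; the collect2 line is
# invented but follows the usual `gcc -v` shape.
import shlex

line = "/usr/libexec/collect2 -dynamic-linker /lib64/ld-linux-x86-64.so.2 main.o -lc"
args = shlex.split(line)
print(args[args.index("-dynamic-linker") + 1])  # /lib64/ld-linux-x86-64.so.2
```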
def in_system_subdirectory(path):
system_dirs = [
"/lib/",
@@ -417,17 +430,35 @@ def real_version(self):
self._real_version = self.version
return self._real_version
def implicit_rpaths(self):
def implicit_rpaths(self) -> List[str]:
if self.enable_implicit_rpaths is False:
return []
# Put CXX first since it has the most linking issues
# And because it has flags that affect linking
link_dirs = self._get_compiler_link_paths()
output = self.compiler_verbose_output
if not output:
return []
link_dirs = _parse_non_system_link_dirs(output)
all_required_libs = list(self.required_libs) + Compiler._all_compiler_rpath_libraries
return list(paths_containing_libs(link_dirs, all_required_libs))
@property
def default_libc(self) -> Optional["spack.spec.Spec"]:
"""Determine libc targeted by the compiler from link line"""
output = self.compiler_verbose_output
if not output:
return None
dynamic_linker = _parse_dynamic_linker(output)
if not dynamic_linker:
return None
return spack.util.libc.libc_from_dynamic_linker(dynamic_linker)
@property
def required_libs(self):
"""For executables created with this compiler, the compiler libraries
@@ -436,17 +467,18 @@ def required_libs(self):
# By default every compiler returns the empty list
return []
def _get_compiler_link_paths(self):
@property
def compiler_verbose_output(self) -> Optional[str]:
"""Verbose output from compiling a dummy C source file. Output is cached."""
if not hasattr(self, "_compile_c_source_output"):
self._compile_c_source_output = self._compile_dummy_c_source()
print(self._compile_c_source_output)
return self._compile_c_source_output
def _compile_dummy_c_source(self) -> Optional[str]:
cc = self.cc if self.cc else self.cxx
if not cc or not self.verbose_flag:
# Cannot determine implicit link paths without a compiler / verbose flag
return []
# What flag types apply to first_compiler, in what order
if cc == self.cc:
flags = ["cflags", "cppflags", "ldflags"]
else:
flags = ["cxxflags", "cppflags", "ldflags"]
return None
try:
tmpdir = tempfile.mkdtemp(prefix="spack-implicit-link-info")
@@ -458,20 +490,16 @@ def _get_compiler_link_paths(self):
"int main(int argc, char* argv[]) { (void)argc; (void)argv; return 0; }\n"
)
cc_exe = spack.util.executable.Executable(cc)
for flag_type in flags:
for flag_type in ["cflags" if cc == self.cc else "cxxflags", "cppflags", "ldflags"]:
cc_exe.add_default_arg(*self.flags.get(flag_type, []))
with self.compiler_environment():
output = cc_exe(self.verbose_flag, fin, "-o", fout, output=str, error=str)
return _parse_non_system_link_dirs(output)
except spack.util.executable.ProcessError as pe:
tty.debug("ProcessError: Command exited with non-zero status: " + pe.long_message)
return []
return cc_exe(self.verbose_flag, fin, "-o", fout, output=str, error=str)
finally:
shutil.rmtree(tmpdir, ignore_errors=True)
@property
def verbose_flag(self):
def verbose_flag(self) -> Optional[str]:
"""
This property should be overridden in the compiler subclass if a
verbose flag is available.

View File

@@ -10,6 +10,7 @@
import itertools
import multiprocessing.pool
import os
import warnings
from typing import Dict, List, Optional, Tuple
import archspec.cpu
@@ -109,27 +110,33 @@ def _to_dict(compiler):
return {"compiler": d}
def get_compiler_config(scope=None, init_config=False):
def get_compiler_config(
configuration: "spack.config.Configuration",
*,
scope: Optional[str] = None,
init_config: bool = False,
) -> List[Dict]:
"""Return the compiler configuration for the specified architecture."""
config = spack.config.get("compilers", scope=scope) or []
config = configuration.get("compilers", scope=scope) or []
if config or not init_config:
return config
merged_config = spack.config.get("compilers")
merged_config = configuration.get("compilers")
if merged_config:
# Config is empty for this scope
# Do not init config because there is a non-empty scope
return config
_init_compiler_config(scope=scope)
config = spack.config.get("compilers", scope=scope)
_init_compiler_config(configuration, scope=scope)
config = configuration.get("compilers", scope=scope)
return config
def get_compiler_config_from_packages(scope=None):
def get_compiler_config_from_packages(
configuration: "spack.config.Configuration", *, scope: Optional[str] = None
) -> List[Dict]:
"""Return the compiler configuration from packages.yaml"""
config = spack.config.get("packages", scope=scope)
config = configuration.get("packages", scope=scope)
if not config:
return []
@@ -216,13 +223,15 @@ def _compiler_config_from_external(config):
return compiler_entry
def _init_compiler_config(*, scope):
def _init_compiler_config(
configuration: "spack.config.Configuration", *, scope: Optional[str]
) -> None:
"""Compiler search used when Spack has no compilers."""
compilers = find_compilers()
compilers_dict = []
for compiler in compilers:
compilers_dict.append(_to_dict(compiler))
spack.config.set("compilers", compilers_dict, scope=scope)
configuration.set("compilers", compilers_dict, scope=scope)
def compiler_config_files():
@@ -233,7 +242,7 @@ def compiler_config_files():
compiler_config = config.get("compilers", scope=name)
if compiler_config:
config_files.append(config.get_config_filename(name, "compilers"))
compiler_config_from_packages = get_compiler_config_from_packages(scope=name)
compiler_config_from_packages = get_compiler_config_from_packages(config, scope=name)
if compiler_config_from_packages:
config_files.append(config.get_config_filename(name, "packages"))
return config_files
@@ -246,7 +255,9 @@ def add_compilers_to_config(compilers, scope=None):
compilers: a list of Compiler objects.
scope: configuration scope to modify.
"""
compiler_config = get_compiler_config(scope, init_config=False)
compiler_config = get_compiler_config(
configuration=spack.config.CONFIG, scope=scope, init_config=False
)
for compiler in compilers:
if not compiler.cc:
tty.debug(f"{compiler.spec} does not have a C compiler")
@@ -295,7 +306,9 @@ def _remove_compiler_from_scope(compiler_spec, scope):
True if one or more compiler entries were actually removed, False otherwise
"""
assert scope is not None, "a specific scope is needed when calling this function"
compiler_config = get_compiler_config(scope, init_config=False)
compiler_config = get_compiler_config(
configuration=spack.config.CONFIG, scope=scope, init_config=False
)
filtered_compiler_config = [
compiler_entry
for compiler_entry in compiler_config
@@ -310,21 +323,28 @@ def _remove_compiler_from_scope(compiler_spec, scope):
# We need to preserve the YAML type for comments, hence we are copying the
# items in the list that has just been retrieved
compiler_config[:] = filtered_compiler_config
spack.config.set("compilers", compiler_config, scope=scope)
spack.config.CONFIG.set("compilers", compiler_config, scope=scope)
return True
def all_compilers_config(scope=None, init_config=True):
def all_compilers_config(
configuration: "spack.config.Configuration",
*,
scope: Optional[str] = None,
init_config: bool = True,
) -> List["spack.compiler.Compiler"]:
"""Return a set of specs for all the compiler versions currently
available to build with. These are instances of CompilerSpec.
"""
from_packages_yaml = get_compiler_config_from_packages(scope)
from_packages_yaml = get_compiler_config_from_packages(configuration, scope=scope)
if from_packages_yaml:
init_config = False
from_compilers_yaml = get_compiler_config(scope, init_config)
from_compilers_yaml = get_compiler_config(configuration, scope=scope, init_config=init_config)
result = from_compilers_yaml + from_packages_yaml
key = lambda c: _compiler_from_config_entry(c["compiler"])
# Dedupe entries by the compiler they represent
# If the entry is invalid, treat it as unique for deduplication
key = lambda c: _compiler_from_config_entry(c["compiler"]) or id(c)
return list(llnl.util.lang.dedupe(result, key=key))
@@ -332,7 +352,7 @@ def all_compiler_specs(scope=None, init_config=True):
# Return compiler specs from the merged config.
return [
spack.spec.parse_with_version_concrete(s["compiler"]["spec"], compiler=True)
for s in all_compilers_config(scope, init_config)
for s in all_compilers_config(spack.config.CONFIG, scope=scope, init_config=init_config)
]
@@ -492,11 +512,20 @@ def find_specs_by_arch(compiler_spec, arch_spec, scope=None, init_config=True):
def all_compilers(scope=None, init_config=True):
config = all_compilers_config(scope, init_config=init_config)
compilers = list()
for items in config:
return all_compilers_from(
configuration=spack.config.CONFIG, scope=scope, init_config=init_config
)
def all_compilers_from(configuration, scope=None, init_config=True):
compilers = []
for items in all_compilers_config(
configuration=configuration, scope=scope, init_config=init_config
):
items = items["compiler"]
compilers.append(_compiler_from_config_entry(items))
compiler = _compiler_from_config_entry(items) # can be None in error case
if compiler:
compilers.append(compiler)
return compilers
@@ -507,7 +536,7 @@ def compilers_for_spec(
"""This gets all compilers that satisfy the supplied CompilerSpec.
Returns an empty list if none are found.
"""
config = all_compilers_config(scope, init_config)
config = all_compilers_config(spack.config.CONFIG, scope=scope, init_config=init_config)
matches = set(find(compiler_spec, scope, init_config))
compilers = []
@@ -517,7 +546,7 @@ def compilers_for_spec(
def compilers_for_arch(arch_spec, scope=None):
config = all_compilers_config(scope)
config = all_compilers_config(spack.config.CONFIG, scope=scope)
return list(get_compilers(config, arch_spec=arch_spec))
@@ -603,7 +632,10 @@ def _compiler_from_config_entry(items):
compiler = _compiler_cache.get(config_id, None)
if compiler is None:
compiler = compiler_from_dict(items)
try:
compiler = compiler_from_dict(items)
except UnknownCompilerError as e:
warnings.warn(e.message)
_compiler_cache[config_id] = compiler
return compiler
@@ -656,7 +688,9 @@ def get_compilers(config, cspec=None, arch_spec=None):
raise ValueError(msg)
continue
compilers.append(_compiler_from_config_entry(items))
compiler = _compiler_from_config_entry(items)
if compiler:
compilers.append(compiler)
return compilers
@@ -933,10 +967,11 @@ def _default_make_compilers(cmp_id, paths):
make_mixed_toolchain(flat_compilers)
# Finally, create the compiler list
compilers = []
compilers: List["spack.compiler.Compiler"] = []
for compiler_id, _, compiler in flat_compilers:
make_compilers = getattr(compiler_id.os, "make_compilers", _default_make_compilers)
compilers.extend(make_compilers(compiler_id, compiler))
candidates = make_compilers(compiler_id, compiler)
compilers.extend(x for x in candidates if x.cc is not None)
return compilers

View File

@@ -38,10 +38,10 @@ class Clang(Compiler):
cxx_names = ["clang++"]
# Subclasses use possible names of Fortran 77 compiler
f77_names = ["flang"]
f77_names = ["flang-new", "flang"]
# Subclasses use possible names of Fortran 90 compiler
fc_names = ["flang"]
fc_names = ["flang-new", "flang"]
version_argument = "--version"
@@ -171,10 +171,11 @@ def extract_version_from_output(cls, output):
match = re.search(
# Normal clang compiler versions are left as-is
r"clang version ([^ )\n]+)-svn[~.\w\d-]*|"
r"(?:clang|flang-new) version ([^ )\n]+)-svn[~.\w\d-]*|"
# Don't include hyphenated patch numbers in the version
# (see https://github.com/spack/spack/pull/14365 for details)
r"clang version ([^ )\n]+?)-[~.\w\d-]*|" r"clang version ([^ )\n]+)",
r"(?:clang|flang-new) version ([^ )\n]+?)-[~.\w\d-]*|"
r"(?:clang|flang-new) version ([^ )\n]+)",
output,
)
if match:
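A quick check of the widened regex against a hypothetical flang-new banner (the exact output format may differ):

```python
import re

output = "flang-new version 18.1.3"  # hypothetical banner
match = re.search(
    r"(?:clang|flang-new) version ([^ )\n]+)-svn[~.\w\d-]*|"
    r"(?:clang|flang-new) version ([^ )\n]+?)-[~.\w\d-]*|"
    r"(?:clang|flang-new) version ([^ )\n]+)",
    output,
)
print(next(g for g in match.groups() if g))  # 18.1.3
```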

View File

@@ -1,34 +0,0 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import spack.compilers.oneapi
class Dpcpp(spack.compilers.oneapi.Oneapi):
"""This is the same as the oneAPI compiler but uses dpcpp instead of
icpx (for DPC++ source files). It explicitly refers to dpcpp, so that
CMake test files which check the compiler name (e.g. CMAKE_CXX_COMPILER)
detect it as dpcpp.
Ideally we could switch out icpx for dpcpp where needed in the oneAPI
compiler definition, but two things are needed for that: (a) a way to
tell the compiler that it should be using dpcpp and (b) a way to
customize the link_paths
See also: https://www.intel.com/content/www/us/en/develop/documentation/oneapi-dpcpp-cpp-compiler-dev-guide-and-reference/top/compiler-setup/using-the-command-line/invoking-the-compiler.html
"""
# Subclasses use possible names of C++ compiler
cxx_names = ["dpcpp"]
# Named wrapper links within build_env_path
link_paths = {
"cc": os.path.join("oneapi", "icx"),
"cxx": os.path.join("oneapi", "dpcpp"),
"f77": os.path.join("oneapi", "ifx"),
"fc": os.path.join("oneapi", "ifx"),
}

View File

@@ -8,7 +8,7 @@
import subprocess
import sys
import tempfile
from typing import Dict, List, Set
from typing import Dict, List
import archspec.cpu
@@ -20,15 +20,7 @@
from spack.error import SpackError
from spack.version import Version, VersionRange
avail_fc_version: Set[str] = set()
fc_path: Dict[str, str] = dict()
fortran_mapping = {
"2021.3.0": "19.29.30133",
"2021.2.1": "19.28.29913",
"2021.2.0": "19.28.29334",
"2021.1.0": "19.28.29333",
}
FC_PATH: Dict[str, str] = dict()
class CmdCall:
@@ -115,15 +107,13 @@ def command_str(self):
return f"{script} {self.arch} {self.sdk_ver} {self.vcvars_ver}"
def get_valid_fortran_pth(comp_ver):
cl_ver = str(comp_ver)
def get_valid_fortran_pth():
"""Assign maximum available fortran compiler version"""
# TODO (johnwparent): validate compatibility w/ try compiler
# functionality when added
sort_fn = lambda fc_ver: Version(fc_ver)
sort_fc_ver = sorted(list(avail_fc_version), key=sort_fn)
for ver in sort_fc_ver:
if ver in fortran_mapping:
if Version(cl_ver) <= Version(fortran_mapping[ver]):
return fc_path[ver]
return None
sort_fc_ver = sorted(list(FC_PATH.keys()), key=sort_fn)
return FC_PATH[sort_fc_ver[-1]] if sort_fc_ver else None
class Msvc(Compiler):
@@ -167,11 +157,9 @@ def __init__(self, *args, **kwargs):
# This positional argument "paths" is later parsed and process by the base class
# via the call to `super` later in this method
paths = args[3]
# This positional argument "cspec" is also parsed and handled by the base class
# constructor
cspec = args[0]
new_pth = [pth if pth else get_valid_fortran_pth(cspec.version) for pth in paths]
paths[:] = new_pth
latest_fc = get_valid_fortran_pth()
new_pth = [pth if pth else latest_fc for pth in paths[2:]]
paths[2:] = new_pth
# Initialize, deferring to base class but then adding the vcvarsallfile
# file based on compiler executable path.
super().__init__(*args, **kwargs)
@@ -183,7 +171,7 @@ def __init__(self, *args, **kwargs):
# and stores their path, but their respective VCVARS
# file must be invoked before usage.
env_cmds = []
compiler_root = os.path.join(self.cc, "../../../../../../..")
compiler_root = os.path.join(os.path.dirname(self.cc), "../../../../../..")
vcvars_script_path = os.path.join(compiler_root, "Auxiliary", "Build", "vcvars64.bat")
# get current platform architecture and format for vcvars argument
arch = spack.platforms.real_host().default.lower()
@@ -198,11 +186,34 @@ def __init__(self, *args, **kwargs):
# paths[2] refers to the fc path and is a generic check
# for a fortran compiler
if paths[2]:
def get_oneapi_root(pth: str):
"""From within a prefix known to be a oneAPI path
determine the oneAPI root path from arbitrary point
under root
Args:
pth: path prefixed within oneAPI root
"""
if not pth:
return ""
while os.path.basename(pth) and os.path.basename(pth) != "oneAPI":
pth = os.path.dirname(pth)
return pth
# If this is found, it sets all the vars
oneapi_root = os.getenv("ONEAPI_ROOT")
oneapi_root = get_oneapi_root(self.fc)
if not oneapi_root:
raise RuntimeError(f"Non-oneAPI Fortran compiler {self.fc} assigned to MSVC")
oneapi_root_setvars = os.path.join(oneapi_root, "setvars.bat")
# some oneAPI exes return a version more precise than their
# install paths specify, so we determine path from
# the install path rather than the fc executable itself
numver = r"\d+\.\d+(?:\.\d+)?"
pattern = f"((?:{numver})|(?:latest))"
version_from_path = re.search(pattern, self.fc).group(1)
oneapi_version_setvars = os.path.join(
oneapi_root, "compiler", str(self.ifx_version), "env", "vars.bat"
oneapi_root, "compiler", version_from_path, "env", "vars.bat"
)
# order matters here, the specific version env must be invoked first,
# otherwise it will be ignored if the root setvars sets up the oneapi
@@ -314,23 +325,19 @@ def setup_custom_environment(self, pkg, env):
@classmethod
def fc_version(cls, fc):
# We're using intel for the Fortran compilers, which exist if
# ONEAPI_ROOT is a meaningful variable
if not sys.platform == "win32":
return "unknown"
fc_ver = cls.default_version(fc)
avail_fc_version.add(fc_ver)
fc_path[fc_ver] = fc
if os.getenv("ONEAPI_ROOT"):
try:
sps = spack.operating_systems.windows_os.WindowsOs().compiler_search_paths
except AttributeError:
raise SpackError("Windows compiler search paths not established")
clp = spack.util.executable.which_string("cl", path=sps)
ver = cls.default_version(clp)
else:
ver = fc_ver
return ver
FC_PATH[fc_ver] = fc
try:
sps = spack.operating_systems.windows_os.WindowsOs().compiler_search_paths
except AttributeError:
raise SpackError(
"Windows compiler search paths not established, "
"please report this behavior to github.com/spack/spack"
)
clp = spack.util.executable.which_string("cl", path=sps)
return cls.default_version(clp) if clp else fc_ver
@classmethod
def f77_version(cls, f77):
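A runnable sketch of the walk performed by get_oneapi_root; the install prefix below is an assumed layout, and posixpath stands in for os.path so the example runs on any platform:

import posixpath

pth = "/opt/intel/oneAPI/compiler/2024.1/bin/ifx.exe"  # assumed layout
while posixpath.basename(pth) and posixpath.basename(pth) != "oneAPI":
    pth = posixpath.dirname(pth)
print(pth)  # /opt/intel/oneAPI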

View File

@@ -64,7 +64,7 @@ def verbose_flag(self):
#
# This way, we at least enable the implicit rpath detection, which is
# based on compilation of a C file (see method
# spack.compiler._get_compiler_link_paths): in the case of a mixed
# spack.compiler._compile_dummy_c_source): in the case of a mixed
# NAG/GCC toolchain, the flag will be passed to g++ (e.g.
# 'g++ -Wl,-v ./main.c'), otherwise, the flag will be passed to nagfor
# (e.g. 'nagfor -Wl,-v ./main.c' - note that nagfor recognizes '.c'

View File

@@ -749,7 +749,6 @@ def _concretize_specs_together_new(*abstract_specs, **kwargs):
result = solver.solve(
abstract_specs, tests=kwargs.get("tests", False), allow_deprecated=allow_deprecated
)
result.raise_if_unsat()
return [s.copy() for s in result.specs]

View File

@@ -107,7 +107,7 @@
#: metavar to use for commands that accept scopes
#: this is shorter and more readable than listing all choices
SCOPES_METAVAR = "{defaults,system,site,user}[/PLATFORM] or env:ENVIRONMENT"
SCOPES_METAVAR = "{defaults,system,site,user,command_line}[/PLATFORM] or env:ENVIRONMENT"
#: Base name for the (internal) overrides scope.
_OVERRIDES_BASE_NAME = "overrides-"
@@ -1562,8 +1562,9 @@ def ensure_latest_format_fn(section: str) -> Callable[[YamlConfigDict], bool]:
def use_configuration(
*scopes_or_paths: Union[ConfigScope, str]
) -> Generator[Configuration, None, None]:
"""Use the configuration scopes passed as arguments within the
context manager.
"""Use the configuration scopes passed as arguments within the context manager.
This function invalidates caches, and is therefore very slow.
Args:
*scopes_or_paths: scope objects or paths to be used
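A usage sketch with a hypothetical extra scope directory; since entering the context invalidates caches, it is best reserved for tests and one-off reads:

import spack.config

with spack.config.use_configuration("/tmp/extra-scope") as cfg:
    # cfg is a Configuration built only from the scopes passed above
    print(cfg.get("config:verify_ssl", default=True))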

View File

@@ -25,6 +25,7 @@
import socket
import sys
import time
from json import JSONDecoder
from typing import (
Any,
Callable,
@@ -818,7 +819,8 @@ def _read_from_file(self, filename):
"""
try:
with open(filename, "r") as f:
fdata = sjson.load(f)
# In the future we may use a stream of JSON objects, hence `raw_decode` for compat.
fdata, _ = JSONDecoder().raw_decode(f.read())
except Exception as e:
raise CorruptDatabaseError("error parsing database:", str(e)) from e
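A stdlib-only sketch of why raw_decode keeps old files readable: it parses the first JSON object and reports where it stopped, so any trailing objects appended later are simply ignored (the strings are illustrative):

from json import JSONDecoder

text = '{"database": {"version": "7"}}\n{"appended": "later"}'
fdata, end = JSONDecoder().raw_decode(text)
print(fdata)       # {'database': {'version': '7'}}
print(text[end:])  # unparsed tail: '\n{"appended": "later"}'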
@@ -833,27 +835,24 @@ def check(cond, msg):
# High-level file checks
db = fdata["database"]
check("installs" in db, "no 'installs' in JSON DB.")
check("version" in db, "no 'version' in JSON DB.")
installs = db["installs"]
# TODO: better version checking semantics.
version = vn.Version(db["version"])
if version > _DB_VERSION:
raise InvalidDatabaseVersionError(self, _DB_VERSION, version)
elif version < _DB_VERSION:
if not any(old == version and new == _DB_VERSION for old, new in _SKIP_REINDEX):
tty.warn(
"Spack database version changed from %s to %s. Upgrading."
% (version, _DB_VERSION)
)
elif version < _DB_VERSION and not any(
old == version and new == _DB_VERSION for old, new in _SKIP_REINDEX
):
tty.warn(f"Spack database version changed from {version} to {_DB_VERSION}. Upgrading.")
self.reindex(spack.store.STORE.layout)
installs = dict(
(k, v.to_dict(include_fields=self._record_fields))
for k, v in self._data.items()
)
self.reindex(spack.store.STORE.layout)
installs = dict(
(k, v.to_dict(include_fields=self._record_fields)) for k, v in self._data.items()
)
else:
check("installs" in db, "no 'installs' in JSON DB.")
installs = db["installs"]
spec_reader = reader(version)
@@ -1621,15 +1620,32 @@ def query_local(self, *args, **kwargs):
query_local.__doc__ += _QUERY_DOCSTRING
def query(self, *args, **kwargs):
"""Query the Spack database including all upstream databases."""
"""Query the Spack database including all upstream databases.
Additional Arguments:
install_tree (str): query 'all' (default), 'local', 'upstream', or upstream path
"""
install_tree = kwargs.pop("install_tree", "all")
valid_trees = ["all", "upstream", "local", self.root] + [u.root for u in self.upstream_dbs]
if install_tree not in valid_trees:
msg = "Invalid install_tree argument to Database.query()\n"
msg += f"Try one of {', '.join(valid_trees)}"
tty.error(msg)
return []
upstream_results = []
for upstream_db in self.upstream_dbs:
upstreams = self.upstream_dbs
if install_tree not in ("all", "upstream"):
upstreams = [u for u in self.upstream_dbs if u.root == install_tree]
for upstream_db in upstreams:
# queries for upstream DBs need to *not* lock - we may not
# have permissions to do this and the upstream DBs won't know about
# us anyway (so e.g. they should never uninstall specs)
upstream_results.extend(upstream_db._query(*args, **kwargs) or [])
local_results = set(self.query_local(*args, **kwargs))
local_results = []
if install_tree in ("all", "local") or self.root == install_tree:
local_results = set(self.query_local(*args, **kwargs))
results = list(local_results) + list(x for x in upstream_results if x not in local_results)
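A usage sketch for the new argument; the spec string and store access are illustrative:

import spack.store

db = spack.store.STORE.db
local_only = db.query("zlib", install_tree="local")
upstream_only = db.query("zlib", install_tree="upstream")
everything = db.query("zlib")  # install_tree defaults to "all"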

View File

@@ -9,8 +9,6 @@
import tempfile
from typing import Any, Deque, Dict, Generator, List, NamedTuple, Tuple
import jinja2
from llnl.util import filesystem
import spack.repo
@@ -85,6 +83,8 @@ def _mock_layout(self) -> Generator[List[str], None, None]:
self.tmpdir.cleanup()
def _create_executable_scripts(self, mock_executables: MockExecutables) -> List[pathlib.Path]:
import jinja2
relative_paths = mock_executables.executables
script = mock_executables.script
script_template = jinja2.Template("#!/bin/bash\n{{ script }}\n")

View File

@@ -27,6 +27,7 @@ class OpenMpi(Package):
* ``variant``
* ``version``
* ``requires``
* ``redistribute``
"""
import collections
@@ -63,6 +64,7 @@ class OpenMpi(Package):
__all__ = [
"DirectiveError",
"DirectiveMeta",
"DisableRedistribute",
"version",
"conflicts",
"depends_on",
@@ -75,6 +77,7 @@ class OpenMpi(Package):
"resource",
"build_system",
"requires",
"redistribute",
]
#: These are variant names used by Spack internally; packages can't use them
@@ -94,6 +97,9 @@ class OpenMpi(Package):
PatchesType = Optional[Union[Patcher, str, List[Union[Patcher, str]]]]
SUPPORTED_LANGUAGES = ("fortran", "cxx")
def _make_when_spec(value: WhenType) -> Optional["spack.spec.Spec"]:
"""Create a ``Spec`` that indicates when a directive should be applied.
@@ -585,6 +591,9 @@ def depends_on(
@see The section "Dependency specs" in the Spack Packaging Guide.
"""
if spack.spec.Spec(spec).name in SUPPORTED_LANGUAGES:
assert type == "build", "languages must be of 'build' type"
return _language(lang_spec_str=spec, when=when)
def _execute_depends_on(pkg: "spack.package_base.PackageBase"):
_depends_on(pkg, spec, when=when, type=type, patches=patches)
@@ -592,6 +601,64 @@ def _execute_depends_on(pkg: "spack.package_base.PackageBase"):
return _execute_depends_on
#: Store whether a given Spec source/binary should not be redistributed.
class DisableRedistribute:
def __init__(self, source, binary):
self.source = source
self.binary = binary
@directive("disable_redistribute")
def redistribute(source=None, binary=None, when: WhenType = None):
"""Can be used inside a Package definition to declare that
the package source and/or compiled binaries should not be
redistributed.
By default, Packages allow source/binary distribution (i.e. in
mirrors). Because of this, and because overlapping enable/
disable specs are not allowed, this directive only allows users
to explicitly disable redistribution for specs.
"""
return lambda pkg: _execute_redistribute(pkg, source, binary, when)
def _execute_redistribute(
pkg: "spack.package_base.PackageBase", source=None, binary=None, when: WhenType = None
):
if source is None and binary is None:
return
elif (source is True) or (binary is True):
raise DirectiveError(
"Source/binary distribution are true by default, they can only "
"be explicitly disabled."
)
if source is None:
source = True
if binary is None:
binary = True
when_spec = _make_when_spec(when)
if not when_spec:
return
if source is False:
max_constraint = spack.spec.Spec(f"{pkg.name}@{when_spec.versions}")
if not max_constraint.satisfies(when_spec):
raise DirectiveError("Source distribution can only be disabled for versions")
if when_spec in pkg.disable_redistribute:
disable = pkg.disable_redistribute[when_spec]
if not source:
disable.source = True
if not binary:
disable.binary = True
else:
pkg.disable_redistribute[when_spec] = DisableRedistribute(
source=not source, binary=not binary
)
@directive(("extendees", "dependencies"))
def extends(spec, when=None, type=("build", "run"), patches=None):
"""Same as depends_on, but also adds this package to the extendee list.
@@ -921,9 +988,9 @@ def maintainers(*names: str):
"""
def _execute_maintainer(pkg):
maintainers_from_base = getattr(pkg, "maintainers", [])
# Here it is essential to copy, otherwise we might add to an empty list in the parent
pkg.maintainers = list(sorted(set(maintainers_from_base + list(names))))
maintainers = set(getattr(pkg, "maintainers", []))
maintainers.update(names)
pkg.maintainers = sorted(maintainers)
return _execute_maintainer
@@ -967,7 +1034,6 @@ def license(
checked_by: string or list of strings indicating which github user checked the
license (if any).
when: A spec specifying when the license applies.
when: A spec specifying when the license applies.
"""
return lambda pkg: _execute_license(pkg, license_identifier, when)
@@ -1014,6 +1080,21 @@ def _execute_requires(pkg: "spack.package_base.PackageBase"):
return _execute_requires
@directive("languages")
def _language(lang_spec_str: str, *, when: Optional[Union[str, bool]] = None):
"""Temporary implementation of language virtuals, until compilers are proper dependencies."""
def _execute_languages(pkg: "spack.package_base.PackageBase"):
when_spec = _make_when_spec(when)
if not when_spec:
return
languages = pkg.languages.setdefault(when_spec, set())
languages.add(lang_spec_str)
return _execute_languages
class DirectiveError(spack.error.SpackError):
"""This is raised when something is wrong with a package directive."""

View File

@@ -106,17 +106,16 @@ def environment_name(path: Union[str, pathlib.Path]) -> str:
return path_str
def check_disallowed_env_config_mods(scopes):
def ensure_no_disallowed_env_config_mods(scopes: List[spack.config.ConfigScope]) -> None:
for scope in scopes:
with spack.config.use_configuration(scope):
if spack.config.get("config:environments_root"):
raise SpackEnvironmentError(
"Spack environments are prohibited from modifying 'config:environments_root' "
"because it can make the definition of the environment ill-posed. Please "
"remove from your environment and place it in a permanent scope such as "
"defaults, system, site, etc."
)
return scopes
config = scope.get_section("config")
if config and "environments_root" in config["config"]:
raise SpackEnvironmentError(
"Spack environments are prohibited from modifying 'config:environments_root' "
"because it can make the definition of the environment ill-posed. Please "
"remove from your environment and place it in a permanent scope such as "
"defaults, system, site, etc."
)
def default_manifest_yaml():
@@ -1427,7 +1426,7 @@ def _concretize_separately(self, tests=False):
# Ensure we have compilers in compilers.yaml to avoid that
# processes try to write the config file in parallel
_ = spack.compilers.get_compiler_config(init_config=True)
_ = spack.compilers.get_compiler_config(spack.config.CONFIG, init_config=True)
# Early return if there is nothing to do
if len(args) == 0:
@@ -2463,6 +2462,10 @@ def __init__(self, manifest_dir: Union[pathlib.Path, str]) -> None:
self.scope_name = f"env:{environment_name(self.manifest_dir)}"
self.config_stage_dir = os.path.join(env_subdir_path(manifest_dir), "config")
#: Configuration scopes associated with this environment. Note that these are not
#: invalidated by a re-read of the manifest file.
self._config_scopes: Optional[List[spack.config.ConfigScope]] = None
if not self.manifest_file.exists():
msg = f"cannot find '{manifest_name}' in {self.manifest_dir}"
raise SpackEnvironmentError(msg)
@@ -2808,16 +2811,19 @@ def included_config_scopes(self) -> List[spack.config.ConfigScope]:
@property
def env_config_scopes(self) -> List[spack.config.ConfigScope]:
"""A list of all configuration scopes for the environment manifest.
Returns: All configuration scopes associated with the environment
"""
config_name = self.scope_name
env_scope = spack.config.SingleFileScope(
config_name, str(self.manifest_file), spack.schema.env.schema, [TOP_LEVEL_KEY]
)
return check_disallowed_env_config_mods(self.included_config_scopes + [env_scope])
"""A list of all configuration scopes for the environment manifest. On the first call this
instantiates all the scopes, on subsequent calls it returns the cached list."""
if self._config_scopes is not None:
return self._config_scopes
scopes: List[spack.config.ConfigScope] = [
*self.included_config_scopes,
spack.config.SingleFileScope(
self.scope_name, str(self.manifest_file), spack.schema.env.schema, [TOP_LEVEL_KEY]
),
]
ensure_no_disallowed_env_config_mods(scopes)
self._config_scopes = scopes
return scopes
def prepare_config_scope(self) -> None:
"""Add the manifest's scopes to the global configuration search path."""

View File

@@ -662,9 +662,6 @@ def add_specs(self, *specs: spack.spec.Spec) -> None:
return
# Drop externals
for s in specs:
if s.external:
tty.warn("Skipping external package: " + s.short_spec)
specs = [s for s in specs if not s.external]
self._sanity_check_view_projection(specs)

View File

@@ -0,0 +1,27 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import llnl.util.tty as tty
import spack.binary_distribution as bindist
import spack.mirror
def post_install(spec, explicit):
# Push package to all buildcaches with autopush==True
# Do nothing if package was not installed from source
pkg = spec.package
if pkg.installed_from_binary_cache:
return
# Push the package to all autopush mirrors
for mirror in spack.mirror.MirrorCollection(binary=True, autopush=True).values():
bindist.push_or_raise(
spec,
mirror.push_url,
bindist.PushOptions(force=True, regenerate_index=False, unsigned=not mirror.signed),
)
tty.msg(f"{spec.name}: Pushed to build cache: '{mirror.name}'")

View File

@@ -119,7 +119,7 @@ def __init__(self, pkg_count: int):
self.pkg_ids: Set[str] = set()
def next_pkg(self, pkg: "spack.package_base.PackageBase"):
pkg_id = package_id(pkg)
pkg_id = package_id(pkg.spec)
if pkg_id not in self.pkg_ids:
self.pkg_num += 1
@@ -221,12 +221,12 @@ def _handle_external_and_upstream(pkg: "spack.package_base.PackageBase", explici
# consists in module file generation and registration in the DB.
if pkg.spec.external:
_process_external_package(pkg, explicit)
_print_installed_pkg(f"{pkg.prefix} (external {package_id(pkg)})")
_print_installed_pkg(f"{pkg.prefix} (external {package_id(pkg.spec)})")
return True
if pkg.spec.installed_upstream:
tty.verbose(
f"{package_id(pkg)} is installed in an upstream Spack instance at "
f"{package_id(pkg.spec)} is installed in an upstream Spack instance at "
f"{pkg.spec.prefix}"
)
_print_installed_pkg(pkg.prefix)
@@ -403,7 +403,7 @@ def _install_from_cache(
return False
t.stop()
pkg_id = package_id(pkg)
pkg_id = package_id(pkg.spec)
tty.debug(f"Successfully extracted {pkg_id} from binary cache")
_write_timer_json(pkg, t, True)
@@ -484,7 +484,7 @@ def _process_binary_cache_tarball(
if download_result is None:
return False
tty.msg(f"Extracting {package_id(pkg)} from binary cache")
tty.msg(f"Extracting {package_id(pkg.spec)} from binary cache")
with timer.measure("install"), spack.util.path.filter_padding():
binary_distribution.extract_tarball(pkg.spec, download_result, force=False, timer=timer)
@@ -513,7 +513,7 @@ def _try_install_from_binary_cache(
if not spack.mirror.MirrorCollection(binary=True):
return False
tty.debug(f"Searching for binary cache of {package_id(pkg)}")
tty.debug(f"Searching for binary cache of {package_id(pkg.spec)}")
with timer.measure("search"):
matches = binary_distribution.get_mirrors_for_spec(pkg.spec, index_only=True)
@@ -610,7 +610,7 @@ def get_dependent_ids(spec: "spack.spec.Spec") -> List[str]:
Returns: list of package ids
"""
return [package_id(d.package) for d in spec.dependents()]
return [package_id(d) for d in spec.dependents()]
def install_msg(name: str, pid: int, install_status: InstallStatus) -> str:
@@ -720,7 +720,7 @@ def log(pkg: "spack.package_base.PackageBase") -> None:
dump_packages(pkg.spec, packages_dir)
def package_id(pkg: "spack.package_base.PackageBase") -> str:
def package_id(spec: "spack.spec.Spec") -> str:
"""A "unique" package identifier for installation purposes
The identifier is used to track build tasks, locks, install, and
@@ -732,10 +732,10 @@ def package_id(pkg: "spack.package_base.PackageBase") -> str:
Args:
spec: the spec from which the identifier is derived
"""
if not pkg.spec.concrete:
if not spec.concrete:
raise ValueError("Cannot provide a unique, readable id when the spec is not concretized.")
return f"{pkg.name}-{pkg.version}-{pkg.spec.dag_hash()}"
return f"{spec.name}-{spec.version}-{spec.dag_hash()}"
class BuildRequest:
@@ -765,7 +765,7 @@ def __init__(self, pkg: "spack.package_base.PackageBase", install_args: dict):
self.pkg.last_phase = install_args.pop("stop_at", None) # type: ignore[attr-defined]
# Cache the package id for convenience
self.pkg_id = package_id(pkg)
self.pkg_id = package_id(pkg.spec)
# Save off the original install arguments plus standard defaults
# since they apply to the requested package *and* dependencies.
@@ -780,9 +780,9 @@ def __init__(self, pkg: "spack.package_base.PackageBase", install_args: dict):
# are not able to return full dependents for all packages across
# environment specs.
self.dependencies = set(
package_id(d.package)
package_id(d)
for d in self.pkg.spec.dependencies(deptype=self.get_depflags(self.pkg))
if package_id(d.package) != self.pkg_id
if package_id(d) != self.pkg_id
)
def __repr__(self) -> str:
@@ -832,7 +832,7 @@ def get_depflags(self, pkg: "spack.package_base.PackageBase") -> int:
depflag = dt.LINK | dt.RUN
include_build_deps = self.install_args.get("include_build_deps")
if self.pkg_id == package_id(pkg):
if self.pkg_id == package_id(pkg.spec):
cache_only = self.install_args.get("package_cache_only")
else:
cache_only = self.install_args.get("dependencies_cache_only")
@@ -927,7 +927,7 @@ def __init__(
raise ValueError(f"{self.pkg.name} must have a concrete spec")
# The "unique" identifier for the task's package
self.pkg_id = package_id(self.pkg)
self.pkg_id = package_id(self.pkg.spec)
# The explicit build request associated with the package
if not isinstance(request, BuildRequest):
@@ -965,9 +965,9 @@ def __init__(
# if use traverse for transitive dependencies, then must remove
# transitive dependents on failure.
self.dependencies = set(
package_id(d.package)
package_id(d)
for d in self.pkg.spec.dependencies(deptype=self.request.get_depflags(self.pkg))
if package_id(d.package) != self.pkg_id
if package_id(d) != self.pkg_id
)
# Handle bootstrapped compiler
@@ -976,14 +976,18 @@ def __init__(
# a dependency of the build task. Here we add it to self.dependencies
compiler_spec = self.pkg.spec.compiler
arch_spec = self.pkg.spec.architecture
if not spack.compilers.compilers_for_spec(compiler_spec, arch_spec=arch_spec):
strict = spack.concretize.Concretizer().check_for_compiler_existence
if (
not spack.compilers.compilers_for_spec(compiler_spec, arch_spec=arch_spec)
and not strict
):
# The compiler is in the queue, identify it as dependency
dep = spack.compilers.pkg_spec_for_compiler(compiler_spec)
dep.constrain(f"platform={str(arch_spec.platform)}")
dep.constrain(f"os={str(arch_spec.os)}")
dep.constrain(f"target={arch_spec.target.microarchitecture.family.name}:")
dep.concretize()
dep_id = package_id(dep.package)
dep_id = package_id(dep)
self.dependencies.add(dep_id)
# List of uninstalled dependencies, which is used to establish
@@ -1194,7 +1198,7 @@ def _add_bootstrap_compilers(
"""
packages = _packages_needed_to_bootstrap_compiler(compiler, architecture, pkgs)
for comp_pkg, is_compiler in packages:
pkgid = package_id(comp_pkg)
pkgid = package_id(comp_pkg.spec)
if pkgid not in self.build_tasks:
self._add_init_task(comp_pkg, request, is_compiler, all_deps)
elif is_compiler:
@@ -1241,7 +1245,7 @@ def _add_init_task(
"""
task = BuildTask(pkg, request, is_compiler, 0, 0, STATUS_ADDED, self.installed)
for dep_id in task.dependencies:
all_deps[dep_id].add(package_id(pkg))
all_deps[dep_id].add(package_id(pkg.spec))
self._push_task(task)
@@ -1276,7 +1280,7 @@ def _check_deps_status(self, request: BuildRequest) -> None:
err = "Cannot proceed with {0}: {1}"
for dep in request.traverse_dependencies():
dep_pkg = dep.package
dep_id = package_id(dep_pkg)
dep_id = package_id(dep)
# Check for failure since a prefix lock is not required
if spack.store.STORE.failure_tracker.has_failed(dep):
@@ -1409,7 +1413,7 @@ def _cleanup_task(self, pkg: "spack.package_base.PackageBase") -> None:
Args:
pkg: the package being installed
"""
self._remove_task(package_id(pkg))
self._remove_task(package_id(pkg.spec))
# Ensure we have a read lock to prevent others from uninstalling the
# spec during our installation.
@@ -1423,7 +1427,7 @@ def _ensure_install_ready(self, pkg: "spack.package_base.PackageBase") -> None:
Args:
pkg: the package being locally installed
"""
pkg_id = package_id(pkg)
pkg_id = package_id(pkg.spec)
pre = f"{pkg_id} cannot be installed locally:"
# External packages cannot be installed locally.
@@ -1465,7 +1469,7 @@ def _ensure_locked(
"write",
], f'"{lock_type}" is not a supported package management lock type'
pkg_id = package_id(pkg)
pkg_id = package_id(pkg.spec)
ltype, lock = self.locks.get(pkg_id, (lock_type, None))
if lock and ltype == lock_type:
return ltype, lock
@@ -1601,7 +1605,7 @@ def _add_tasks(self, request: BuildRequest, all_deps):
for dep in request.traverse_dependencies():
dep_pkg = dep.package
dep_id = package_id(dep_pkg)
dep_id = package_id(dep)
if dep_id not in self.build_tasks:
self._add_init_task(dep_pkg, request, False, all_deps)
@@ -1913,7 +1917,7 @@ def _flag_installed(
dependent_ids: set of the package's dependent ids, or None if the dependent ids are
limited to those maintained in the package (dependency DAG)
"""
pkg_id = package_id(pkg)
pkg_id = package_id(pkg.spec)
if pkg_id in self.installed:
# Already determined the package has been installed
@@ -2274,11 +2278,15 @@ def __init__(self, pkg: "spack.package_base.PackageBase", install_args: dict):
# whether to install source code with the package
self.install_source = install_args.get("install_source", False)
is_develop = pkg.spec.is_develop
# whether to keep the build stage after installation
self.keep_stage = install_args.get("keep_stage", False)
# Note: user commands do not have an explicit choice to disable
# keeping stages (i.e., we have a --keep-stage option, but not
# a --destroy-stage option), so we can override a default choice
# to destroy
self.keep_stage = is_develop or install_args.get("keep_stage", False)
# whether to restage
self.restage = install_args.get("restage", False)
self.restage = (not is_develop) and install_args.get("restage", False)
# whether to skip the patch phase
self.skip_patch = install_args.get("skip_patch", False)
@@ -2305,7 +2313,7 @@ def __init__(self, pkg: "spack.package_base.PackageBase", install_args: dict):
# info/debug information
self.pre = _log_prefix(pkg.name)
self.pkg_id = package_id(pkg)
self.pkg_id = package_id(pkg.spec)
def run(self) -> bool:
"""Main entry point from ``build_process`` to kick off install in child."""

View File

@@ -137,6 +137,12 @@ def source(self):
def signed(self) -> bool:
return isinstance(self._data, str) or self._data.get("signed", True)
@property
def autopush(self) -> bool:
if isinstance(self._data, str):
return False
return self._data.get("autopush", False)
@property
def fetch_url(self):
"""Get the valid, canonicalized fetch URL"""
@@ -150,7 +156,7 @@ def push_url(self):
def _update_connection_dict(self, current_data: dict, new_data: dict, top_level: bool):
keys = ["url", "access_pair", "access_token", "profile", "endpoint_url"]
if top_level:
keys += ["binary", "source", "signed"]
keys += ["binary", "source", "signed", "autopush"]
changed = False
for key in keys:
if key in new_data and current_data.get(key) != new_data[key]:
@@ -286,6 +292,7 @@ def __init__(
scope=None,
binary: Optional[bool] = None,
source: Optional[bool] = None,
autopush: Optional[bool] = None,
):
"""Initialize a mirror collection.
@@ -297,21 +304,27 @@ def __init__(
If None, do not filter on binary mirrors.
source: If True, only include source mirrors.
If False, omit source mirrors.
If None, do not filter on source mirrors."""
self._mirrors = {
name: Mirror(data=mirror, name=name)
for name, mirror in (
mirrors.items()
if mirrors is not None
else spack.config.get("mirrors", scope=scope).items()
)
}
If None, do not filter on source mirrors.
autopush: If True, only include mirrors that have autopush enabled.
If False, omit mirrors that have autopush enabled.
If None, do not filter on autopush."""
mirrors_data = (
mirrors.items()
if mirrors is not None
else spack.config.get("mirrors", scope=scope).items()
)
mirrors = (Mirror(data=mirror, name=name) for name, mirror in mirrors_data)
if source is not None:
self._mirrors = {k: v for k, v in self._mirrors.items() if v.source == source}
def _filter(m: Mirror):
if source is not None and m.source != source:
return False
if binary is not None and m.binary != binary:
return False
if autopush is not None and m.autopush != autopush:
return False
return True
if binary is not None:
self._mirrors = {k: v for k, v in self._mirrors.items() if v.binary == binary}
self._mirrors = {m.name: m for m in mirrors if _filter(m)}
def __eq__(self, other):
return self._mirrors == other._mirrors
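A sketch of how the new autopush filter composes with the existing binary/source filters; it assumes at least one mirrors.yaml entry that sets autopush: true, and mirrors the loop in the post-install hook above:

import spack.mirror

# Only binary mirrors whose configuration enables autopush:
for mirror in spack.mirror.MirrorCollection(binary=True, autopush=True).values():
    print(f"{mirror.name}: new installs are pushed to {mirror.push_url}")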

View File

@@ -83,6 +83,17 @@ def configuration(module_set_name):
)
_FORMAT_STRING_RE = re.compile(r"({[^}]*})")
def _format_env_var_name(spec, var_name_fmt):
"""Format the variable name, but uppercase any formatted fields."""
fmt_parts = _FORMAT_STRING_RE.split(var_name_fmt)
return "".join(
spec.format(part).upper() if _FORMAT_STRING_RE.match(part) else part for part in fmt_parts
)
def _check_tokens_are_valid(format_string, message):
"""Checks that the tokens used in the format string are valid in
the context of module file and environment variable naming.
@@ -737,20 +748,12 @@ def environment_modifications(self):
exclude = self.conf.exclude_env_vars
# We may have tokens to substitute in environment commands
# Prepare a suitable transformation dictionary for the names
# of the environment variables. This means turn the valid
# tokens uppercase.
transform = {}
for token in _valid_tokens:
transform[token] = lambda s, string: str.upper(string)
for x in env:
# Ensure all the tokens are valid in this context
msg = "some tokens cannot be expanded in an environment variable name"
_check_tokens_are_valid(x.name, message=msg)
# Transform them
x.name = self.spec.format(x.name, transform=transform)
x.name = _format_env_var_name(self.spec, x.name)
if self.modification_needs_formatting(x):
try:
# Not every command has a value
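A self-contained sketch of the new name formatting, with FakeSpec as an illustration-only stand-in for spack.spec.Spec; note that only the formatted fields are uppercased, while literal text is preserved:

import re

_FORMAT_STRING_RE = re.compile(r"({[^}]*})")

class FakeSpec:
    # illustration-only stand-in for spack.spec.Spec
    def format(self, part):
        return {"{name}": "openmpi", "{version}": "4.1.6"}.get(part, part)

def _format_env_var_name(spec, var_name_fmt):
    fmt_parts = _FORMAT_STRING_RE.split(var_name_fmt)
    return "".join(
        spec.format(part).upper() if _FORMAT_STRING_RE.match(part) else part
        for part in fmt_parts
    )

print(_format_env_var_name(FakeSpec(), "{name}_{version}_root"))
# OPENMPI_4.1.6_root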

View File

@@ -73,17 +73,24 @@ def vs_install_paths(self):
def msvc_paths(self):
return [os.path.join(path, "VC", "Tools", "MSVC") for path in self.vs_install_paths]
@property
def oneapi_root(self):
root = os.environ.get("ONEAPI_ROOT", "") or os.path.join(
os.environ.get("ProgramFiles(x86)", ""), "Intel", "oneAPI"
)
if os.path.exists(root):
return root
@property
def compiler_search_paths(self):
# First Strategy: Find MSVC directories using vswhere
_compiler_search_paths = []
for p in self.msvc_paths:
_compiler_search_paths.extend(glob.glob(os.path.join(p, "*", "bin", "Hostx64", "x64")))
if os.getenv("ONEAPI_ROOT"):
oneapi_root = self.oneapi_root
if oneapi_root:
_compiler_search_paths.extend(
glob.glob(
os.path.join(str(os.getenv("ONEAPI_ROOT")), "compiler", "*", "windows", "bin")
)
glob.glob(os.path.join(oneapi_root, "compiler", "**", "bin"), recursive=True)
)
# Second strategy: Find MSVC via the registry

View File

@@ -468,7 +468,41 @@ def _names(when_indexed_dictionary):
return sorted(all_names)
class PackageBase(WindowsRPath, PackageViewMixin, metaclass=PackageMeta):
class RedistributionMixin:
"""Logic for determining whether a Package is source/binary
redistributable.
"""
#: Store whether a given Spec source/binary should not be
#: redistributed.
disable_redistribute: Dict["spack.spec.Spec", "spack.directives.DisableRedistribute"]
# Source redistribution must be determined before concretization
# (because source mirrors work with un-concretized Specs).
@classmethod
def redistribute_source(cls, spec):
"""Whether it should be possible to add the source of this
package to a Spack mirror.
"""
for when_spec, disable_redistribute in cls.disable_redistribute.items():
if disable_redistribute.source and spec.satisfies(when_spec):
return False
return True
@property
def redistribute_binary(self):
"""Whether it should be possible to create a binary out of an
installed instance of this package.
"""
for when_spec, disable_redistribute in self.__class__.disable_redistribute.items():
if disable_redistribute.binary and self.spec.satisfies(when_spec):
return False
return True
class PackageBase(WindowsRPath, PackageViewMixin, RedistributionMixin, metaclass=PackageMeta):
"""This is the superclass for all spack packages.
***The Package class***
@@ -567,6 +601,7 @@ class PackageBase(WindowsRPath, PackageViewMixin, metaclass=PackageMeta):
provided_together: Dict["spack.spec.Spec", List[Set[str]]]
patches: Dict["spack.spec.Spec", List["spack.patch.Patch"]]
variants: Dict[str, Tuple["spack.variant.Variant", "spack.spec.Spec"]]
languages: Dict["spack.spec.Spec", Set[str]]
#: By default, packages are not virtual
#: Virtual packages override this attribute
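A sketch of how the mixin is consulted from the outside; the package name and version are hypothetical:

import spack.repo
import spack.spec

spec = spack.spec.Spec("some-proprietary-pkg@12.1").concretized()
pkg_cls = spack.repo.PATH.get_pkg_class(spec.name)
if not pkg_cls.redistribute_source(spec):
    print("do not add this source to a public mirror")
if not spec.package.redistribute_binary:
    print("do not push this install to a public build cache")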

View File

@@ -160,10 +160,15 @@ def detect(cls):
system, as the Cray compiler wrappers and other components of the Cray
programming environment are irrelevant without module support.
"""
craype_type, craype_version = cls.craype_type_and_version()
if craype_type == "EX" and craype_version >= spack.version.Version("21.10"):
if "opt/cray" not in os.environ.get("MODULEPATH", ""):
return False
return "opt/cray" in os.environ.get("MODULEPATH", "")
craype_type, craype_version = cls.craype_type_and_version()
if craype_type == "XC":
return True
if craype_type == "EX" and craype_version < spack.version.Version("21.10"):
return True
return False
def _default_target_from_env(self):
"""Set and return the default CrayPE target loaded in a clean login

View File

@@ -726,14 +726,14 @@ def first_repo(self):
"""Get the first repo in precedence order."""
return self.repos[0] if self.repos else None
@llnl.util.lang.memoized
def _all_package_names_set(self, include_virtuals):
return {name for repo in self.repos for name in repo.all_package_names(include_virtuals)}
@llnl.util.lang.memoized
def _all_package_names(self, include_virtuals):
"""Return all unique package names in all repositories."""
all_pkgs = set()
for repo in self.repos:
for name in repo.all_package_names(include_virtuals):
all_pkgs.add(name)
return sorted(all_pkgs, key=lambda n: n.lower())
return sorted(self._all_package_names_set(include_virtuals), key=lambda n: n.lower())
def all_package_names(self, include_virtuals=False):
return self._all_package_names(include_virtuals)
@@ -794,7 +794,11 @@ def patch_index(self):
@autospec
def providers_for(self, vpkg_spec):
providers = self.provider_index.providers_for(vpkg_spec)
providers = [
spec
for spec in self.provider_index.providers_for(vpkg_spec)
if spec.name in self._all_package_names_set(include_virtuals=False)
]
if not providers:
raise UnknownPackageError(vpkg_spec.fullname)
return providers

View File

@@ -14,7 +14,7 @@
import xml.sax.saxutils
from typing import Dict, Optional
from urllib.parse import urlencode
from urllib.request import HTTPHandler, Request, build_opener
from urllib.request import HTTPSHandler, Request, build_opener
import llnl.util.tty as tty
from llnl.util.filesystem import working_dir
@@ -27,6 +27,7 @@
from spack.error import SpackError
from spack.util.crypto import checksum
from spack.util.log_parse import parse_log_events
from spack.util.web import urllib_ssl_cert_handler
from .base import Reporter
from .extract import extract_test_parts
@@ -427,7 +428,7 @@ def upload(self, filename):
# Compute md5 checksum for the contents of this file.
md5sum = checksum(hashlib.md5, filename, block_size=8192)
opener = build_opener(HTTPHandler)
opener = build_opener(HTTPSHandler(context=urllib_ssl_cert_handler()))
with open(filename, "rb") as f:
params_dict = {
"build": self.buildname,

View File

@@ -34,6 +34,7 @@
"strategy": {"type": "string", "enum": ["none", "minimal", "full"]}
},
},
"os_compatible": {"type": "object", "additionalProperties": {"type": "array"}},
},
}
}

View File

@@ -73,6 +73,7 @@
"environments_root": {"type": "string"},
"connect_timeout": {"type": "integer", "minimum": 0},
"verify_ssl": {"type": "boolean"},
"ssl_certs": {"type": "string"},
"suppress_gpg_warnings": {"type": "boolean"},
"install_missing_compilers": {"type": "boolean"},
"debug": {"type": "boolean"},

View File

@@ -46,6 +46,7 @@
"signed": {"type": "boolean"},
"fetch": fetch_and_push,
"push": fetch_and_push,
"autopush": {"type": "boolean"},
**connection, # type: ignore
},
}

View File

@@ -15,6 +15,7 @@
import types
import typing
import warnings
from contextlib import contextmanager
from typing import Callable, Dict, Iterator, List, NamedTuple, Optional, Set, Tuple, Type, Union
import archspec.cpu
@@ -40,6 +41,8 @@
import spack.spec
import spack.store
import spack.util.crypto
import spack.util.elf
import spack.util.libc
import spack.util.path
import spack.util.timer
import spack.variant
@@ -119,6 +122,17 @@ def __str__(self):
return f"{self._name_.lower()}"
@contextmanager
def spec_with_name(spec, name):
"""Context manager to temporarily set the name of a spec"""
old_name = spec.name
spec.name = name
try:
yield spec
finally:
spec.name = old_name
class RequirementKind(enum.Enum):
"""Purpose / provenance of a requirement"""
@@ -267,8 +281,36 @@ def _create_counter(specs: List[spack.spec.Spec], tests: bool):
return NoDuplicatesCounter(specs, tests=tests)
def all_compilers_in_config():
return spack.compilers.all_compilers()
def all_compilers_in_config(configuration):
return spack.compilers.all_compilers_from(configuration)
def all_libcs() -> Set[spack.spec.Spec]:
"""Return a set of all libc specs targeted by any configured compiler. If none, fall back to
libc determined from the current Python process if dynamically linked."""
libcs = {
c.default_libc for c in all_compilers_in_config(spack.config.CONFIG) if c.default_libc
}
if libcs:
return libcs
libc = spack.util.libc.libc_from_current_python_process()
return {libc} if libc else set()
def libc_is_compatible(lhs: spack.spec.Spec, rhs: spack.spec.Spec) -> bool:
return (
lhs.name == rhs.name
and lhs.external_path == rhs.external_path
and lhs.version >= rhs.version
)
def using_libc_compatibility() -> bool:
"""Returns True if we are currently using libc compatibility"""
return spack.platforms.host().name == "linux"
def extend_flag_list(flag_list, new_flags):
@@ -541,6 +583,7 @@ def _concretization_version_order(version_info: Tuple[GitOrStandardVersion, dict
info.get("preferred", False),
not info.get("deprecated", False),
not version.isdevelop(),
not version.is_prerelease(),
version,
)
@@ -553,6 +596,23 @@ def _spec_with_default_name(spec_str, name):
return spec
def _external_config_with_implicit_externals(configuration):
# Read packages.yaml and normalize it, so that it will not contain entries referring to
# virtual packages.
packages_yaml = _normalize_packages_yaml(configuration.get("packages"))
# Add externals for libc from compilers on Linux
if not using_libc_compatibility():
return packages_yaml
for compiler in all_compilers_in_config(configuration):
libc = compiler.default_libc
if libc:
entry = {"spec": f"{libc} %{compiler.spec}", "prefix": libc.external_path}
packages_yaml.setdefault(libc.name, {}).setdefault("externals", []).append(entry)
return packages_yaml
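A sketch of the entry this function injects for each compiler-detected libc; the compiler, libc version, and prefix are assumptions:

packages_yaml = {}
entry = {"spec": "glibc@2.31 %gcc@12.3.0", "prefix": "/usr"}
packages_yaml.setdefault("glibc", {}).setdefault("externals", []).append(entry)
print(packages_yaml["glibc"]["externals"])
# [{'spec': 'glibc@2.31 %gcc@12.3.0', 'prefix': '/usr'}]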
class ErrorHandler:
def __init__(self, model):
self.model = model
@@ -687,8 +747,9 @@ def on_model(model):
raise UnsatisfiableSpecError(msg)
#: Data class to collect information on a requirement
class RequirementRule(NamedTuple):
"""Data class to collect information on a requirement"""
pkg_name: str
policy: str
requirements: List["spack.spec.Spec"]
@@ -697,6 +758,27 @@ class RequirementRule(NamedTuple):
message: Optional[str]
class KnownCompiler(NamedTuple):
"""Data class to collect information on compilers"""
spec: "spack.spec.Spec"
os: str
target: str
available: bool
compiler_obj: Optional["spack.compiler.Compiler"]
def _key(self):
return self.spec, self.os, self.target
def __eq__(self, other: object):
if not isinstance(other, KnownCompiler):
return NotImplemented
return self._key() == other._key()
def __hash__(self):
return hash(self._key())
class PyclingoDriver:
def __init__(self, cores=True):
"""Driver for the Python clingo interface.
@@ -749,10 +831,16 @@ def solve(self, setup, specs, reuse=None, output=None, control=None, allow_depre
self.control.load(os.path.join(parent_dir, "heuristic.lp"))
if spack.config.CONFIG.get("concretizer:duplicates:strategy", "none") != "none":
self.control.load(os.path.join(parent_dir, "heuristic_separate.lp"))
self.control.load(os.path.join(parent_dir, "os_compatibility.lp"))
self.control.load(os.path.join(parent_dir, "display.lp"))
if not setup.concretize_everything:
self.control.load(os.path.join(parent_dir, "when_possible.lp"))
# Binary compatibility is based on libc on Linux, and on the os tag elsewhere
if using_libc_compatibility():
self.control.load(os.path.join(parent_dir, "libc_compatibility.lp"))
else:
self.control.load(os.path.join(parent_dir, "os_compatibility.lp"))
timer.stop("load")
# Grounding is the first step in the solve -- it turns our facts
@@ -762,7 +850,6 @@ def solve(self, setup, specs, reuse=None, output=None, control=None, allow_depre
timer.stop("ground")
# With a grounded program, we can run the solve.
result = Result(specs)
models = [] # stable models if things go well
cores = [] # unsatisfiable cores if they do not
@@ -783,6 +870,7 @@ def on_model(model):
timer.stop("solve")
# once done, construct the solve result
result = Result(specs)
result.satisfiable = solve_result.satisfiable
if result.satisfiable:
@@ -823,6 +911,8 @@ def on_model(model):
print("Statistics:")
pprint.pprint(self.control.statistics)
result.raise_if_unsat()
if result.satisfiable and result.unsolved_specs and setup.concretize_everything:
unsolved_str = Result.format_unsolved(result.unsolved_specs)
raise InternalConcretizerError(
@@ -947,6 +1037,9 @@ def __init__(self, tests: bool = False):
self.pkgs: Set[str] = set()
self.explicitly_required_namespaces: Dict[str, str] = {}
# list of unique libc specs targeted by compilers (or an educated guess if no compiler)
self.libcs: List[spack.spec.Spec] = []
def pkg_version_rules(self, pkg):
"""Output declared versions of a package.
@@ -1039,41 +1132,52 @@ def conflict_rules(self, pkg):
)
self.gen.newline()
def package_languages(self, pkg):
for when_spec, languages in pkg.languages.items():
condition_msg = f"{pkg.name} needs the {', '.join(sorted(languages))} language"
if when_spec != spack.spec.Spec():
condition_msg += f" when {when_spec}"
condition_id = self.condition(when_spec, name=pkg.name, msg=condition_msg)
for language in sorted(languages):
self.gen.fact(fn.pkg_fact(pkg.name, fn.language(condition_id, language)))
self.gen.newline()
def config_compatible_os(self):
"""Facts about compatible os's specified in configs"""
self.gen.h2("Compatible OS from concretizer config file")
os_data = spack.config.get("concretizer:os_compatible", {})
for recent, reusable in os_data.items():
for old in reusable:
self.gen.fact(fn.os_compatible(recent, old))
self.gen.newline()
def compiler_facts(self):
"""Facts about available compilers."""
self.gen.h2("Available compilers")
indexed_possible_compilers = list(enumerate(self.possible_compilers))
for compiler_id, compiler in indexed_possible_compilers:
for compiler_id, compiler in enumerate(self.possible_compilers):
self.gen.fact(fn.compiler_id(compiler_id))
self.gen.fact(fn.compiler_name(compiler_id, compiler.spec.name))
self.gen.fact(fn.compiler_version(compiler_id, compiler.spec.version))
if compiler.operating_system:
self.gen.fact(fn.compiler_os(compiler_id, compiler.operating_system))
if compiler.target == "any":
compiler.target = None
if compiler.os:
self.gen.fact(fn.compiler_os(compiler_id, compiler.os))
if compiler.target is not None:
self.gen.fact(fn.compiler_target(compiler_id, compiler.target))
for flag_type, flags in compiler.flags.items():
for flag in flags:
self.gen.fact(fn.compiler_flag(compiler_id, flag_type, flag))
if compiler.compiler_obj is not None:
c = compiler.compiler_obj
for flag_type, flags in c.flags.items():
for flag in flags:
self.gen.fact(fn.compiler_flag(compiler_id, flag_type, flag))
if compiler.available:
self.gen.fact(fn.compiler_available(compiler_id))
self.gen.fact(fn.compiler_weight(compiler_id, compiler_id))
self.gen.newline()
# Set compiler defaults, given a list of possible compilers
self.gen.h2("Default compiler preferences (CompilerID, Weight)")
ppk = spack.package_prefs.PackagePrefs("all", "compiler", all=False)
matches = sorted(indexed_possible_compilers, key=lambda x: ppk(x[1].spec))
for weight, (compiler_id, cspec) in enumerate(matches):
f = fn.compiler_weight(compiler_id, weight)
self.gen.fact(f)
def package_requirement_rules(self, pkg):
parser = RequirementParser(spack.config.CONFIG)
self.emit_facts_from_requirement_rules(parser.rules(pkg))
@@ -1088,6 +1192,9 @@ def pkg_rules(self, pkg, tests):
self.pkg_version_rules(pkg)
self.gen.newline()
# languages
self.package_languages(pkg)
# variants
self.variant_rules(pkg)
@@ -1284,34 +1391,39 @@ def condition(
Returns:
int: id of the condition created by this function
"""
named_cond = required_spec.copy()
named_cond.name = named_cond.name or name
if not named_cond.name:
raise ValueError(f"Must provide a name for anonymous condition: '{named_cond}'")
name = required_spec.name or name
if not name:
raise ValueError(f"Must provide a name for anonymous condition: '{required_spec}'")
# Check if we can emit the requirements before updating the condition ID counter.
# In this way, if a condition can't be emitted but the exception is handled in the caller,
# we won't emit partial facts.
with spec_with_name(required_spec, name):
condition_id = next(self._id_counter)
self.gen.fact(fn.pkg_fact(named_cond.name, fn.condition(condition_id)))
self.gen.fact(fn.condition_reason(condition_id, msg))
# Check if we can emit the requirements before updating the condition ID counter.
# In this way, if a condition can't be emitted but the exception is handled in the
# caller, we won't emit partial facts.
trigger_id = self._get_condition_id(
named_cond, cache=self._trigger_cache, body=True, transform=transform_required
)
self.gen.fact(fn.pkg_fact(named_cond.name, fn.condition_trigger(condition_id, trigger_id)))
condition_id = next(self._id_counter)
self.gen.fact(fn.pkg_fact(required_spec.name, fn.condition(condition_id)))
self.gen.fact(fn.condition_reason(condition_id, msg))
trigger_id = self._get_condition_id(
required_spec, cache=self._trigger_cache, body=True, transform=transform_required
)
self.gen.fact(
fn.pkg_fact(required_spec.name, fn.condition_trigger(condition_id, trigger_id))
)
if not imposed_spec:
return condition_id
effect_id = self._get_condition_id(
imposed_spec, cache=self._effect_cache, body=False, transform=transform_imposed
)
self.gen.fact(
fn.pkg_fact(required_spec.name, fn.condition_effect(condition_id, effect_id))
)
if not imposed_spec:
return condition_id
effect_id = self._get_condition_id(
imposed_spec, cache=self._effect_cache, body=False, transform=transform_imposed
)
self.gen.fact(fn.pkg_fact(named_cond.name, fn.condition_effect(condition_id, effect_id)))
return condition_id
def impose(self, condition_id, imposed_spec, node=True, name=None, body=False):
imposed_constraints = self.spec_clauses(imposed_spec, body=body, required_from=name)
for pred in imposed_constraints:
@@ -1498,12 +1610,8 @@ def emit_facts_from_requirement_rules(self, rules: List[RequirementRule]):
requirement_weight += 1
def external_packages(self):
"""Facts on external packages, as read from packages.yaml"""
# Read packages.yaml and normalize it, so that it
# will not contain entries referring to virtual
# packages.
packages_yaml = spack.config.get("packages")
packages_yaml = _normalize_packages_yaml(packages_yaml)
"""Facts on external packages, from packages.yaml and implicit externals."""
packages_yaml = _external_config_with_implicit_externals(spack.config.CONFIG)
self.gen.h1("External packages")
for pkg_name, data in packages_yaml.items():
@@ -1554,6 +1662,7 @@ def external_imposition(input_spec, requirements):
self.gen.newline()
self.trigger_rules()
self.effect_rules()
def preferred_variants(self, pkg_name):
"""Facts on concretization preferences, as read from packages.yaml"""
@@ -1599,23 +1708,6 @@ def target_preferences(self):
for i, preferred in enumerate(package_targets):
self.gen.fact(fn.target_weight(str(preferred.architecture.target), i))
def flag_defaults(self):
self.gen.h2("Compiler flag defaults")
# types of flags that can be on specs
for flag in spack.spec.FlagMap.valid_compiler_flags():
self.gen.fact(fn.flag_type(flag))
self.gen.newline()
# flags from compilers.yaml
compilers = all_compilers_in_config()
for compiler in compilers:
for name, flags in compiler.flags.items():
for flag in flags:
self.gen.fact(
fn.compiler_version_flag(compiler.name, compiler.version, name, flag)
)
def spec_clauses(
self,
spec: spack.spec.Spec,
@@ -1792,6 +1884,16 @@ def _spec_clauses(
if dep.name == "gcc-runtime":
continue
# libc is also solved again by clingo, but in this case the compatibility
# is not encoded in the parent node - so we need to emit explicit facts
if "libc" in dspec.virtuals:
for libc in self.libcs:
if libc_is_compatible(libc, dep):
clauses.append(
fn.attr("compatible_libc", spec.name, libc.name, libc.version)
)
continue
# We know dependencies are real for concrete specs. For abstract
# specs they just mean the dep is somehow in the DAG.
for dtype in dt.ALL_FLAGS:
@@ -2021,9 +2123,16 @@ def target_defaults(self, specs):
candidate_targets.append(ancestor)
best_targets = {uarch.family.name}
for compiler_id, compiler in enumerate(self.possible_compilers):
for compiler_id, known_compiler in enumerate(self.possible_compilers):
if not known_compiler.available:
continue
compiler = known_compiler.compiler_obj
# Stub support for cross-compilation, to be expanded later
if compiler.target is not None and compiler.target != str(uarch.family):
if known_compiler.target is not None and compiler.target not in (
str(uarch.family),
"any",
):
self.gen.fact(fn.compiler_supports_target(compiler_id, compiler.target))
self.gen.newline()
continue
@@ -2079,58 +2188,6 @@ def virtual_providers(self):
self.gen.fact(fn.virtual(vspec))
self.gen.newline()
def generate_possible_compilers(self, specs):
compilers = all_compilers_in_config()
# Search for compilers which differs only by aspects that are
# not selectable by users using the spec syntax
seen, sanitized_list = set(), []
for compiler in compilers:
key = compiler.spec, compiler.operating_system, compiler.target
if key in seen:
warnings.warn(
f"duplicate found for {compiler.spec} on "
f"{compiler.operating_system}/{compiler.target}. "
f"Edit your compilers.yaml configuration to remove it."
)
continue
sanitized_list.append(compiler)
seen.add(key)
cspecs = set([c.spec for c in compilers])
# add compiler specs from the input line to possibilities if we
# don't require compilers to exist.
strict = spack.concretize.Concretizer().check_for_compiler_existence
for s in traverse.traverse_nodes(specs):
# we don't need to validate compilers for already-built specs
if s.concrete or not s.compiler:
continue
version = s.compiler.versions.concrete
if not version or any(c.satisfies(s.compiler) for c in cspecs):
continue
# Error when a compiler is not found and strict mode is enabled
if strict:
raise spack.concretize.UnavailableCompilerVersionError(s.compiler)
# Make up a compiler matching the input spec. This is for bootstrapping.
compiler_cls = spack.compilers.class_for_compiler_name(s.compiler.name)
compilers.append(
compiler_cls(s.compiler, operating_system=None, target=None, paths=[None] * 4)
)
self.gen.fact(fn.allow_compiler(s.compiler.name, version))
return list(
sorted(
compilers,
key=lambda compiler: (compiler.spec.name, compiler.spec.version),
reverse=True,
)
)
def define_version_constraints(self):
"""Define what version_satisfies(...) means in ASP logic."""
for pkg_name, versions in sorted(self.version_constraints):
@@ -2158,7 +2215,7 @@ def versions_for(v):
if isinstance(v, vn.StandardVersion):
return [v]
elif isinstance(v, vn.ClosedOpenRange):
return [v.lo, vn.prev_version(v.hi)]
return [v.lo, vn._prev_version(v.hi)]
elif isinstance(v, vn.VersionList):
return sum((versions_for(e) for e in v), [])
else:
@@ -2292,8 +2349,7 @@ def setup(
node_counter = _create_counter(specs, tests=self.tests)
self.possible_virtuals = node_counter.possible_virtuals()
self.pkgs = node_counter.possible_dependencies()
self.pkgs.update(spack.repo.PATH.packages_with_tags("runtime"))
self.libcs = sorted(all_libcs()) # type: ignore[type-var]
# Fail if we already know an unreachable node is requested
for spec in specs:
@@ -2303,11 +2359,16 @@ def setup(
if missing_deps:
raise spack.spec.InvalidDependencyError(spec.name, missing_deps)
for node in spack.traverse.traverse_nodes(specs):
for node in traverse.traverse_nodes(specs):
if node.namespace is not None:
self.explicitly_required_namespaces[node.name] = node.namespace
self.gen = ProblemInstanceBuilder()
compiler_parser = CompilerParser(configuration=spack.config.CONFIG).with_input_specs(specs)
if using_libc_compatibility():
for libc in self.libcs:
self.gen.fact(fn.allowed_libc(libc.name, libc.version))
if not allow_deprecated:
self.gen.fact(fn.deprecated_versions_not_allowed())
@@ -2327,17 +2388,17 @@ def setup(
)
specs = tuple(specs) # ensure compatible types to add
# get possible compilers
self.possible_compilers = self.generate_possible_compilers(specs)
self.gen.h1("Reusable concrete specs")
self.define_concrete_input_specs(specs, self.pkgs)
if reuse:
self.gen.fact(fn.optimize_for_reuse())
for reusable_spec in reuse:
compiler_parser.add_compiler_from_concrete_spec(reusable_spec)
self.register_concrete_spec(reusable_spec, self.pkgs)
self.concrete_specs()
self.possible_compilers = compiler_parser.possible_compilers()
self.gen.h1("Generic statements on possible packages")
node_counter.possible_packages_facts(self.gen, fn)
@@ -2347,6 +2408,7 @@ def setup(
self.gen.newline()
self.gen.h1("General Constraints")
self.config_compatible_os()
self.compiler_facts()
# architecture defaults
@@ -2437,15 +2499,36 @@ def visit(node):
def define_runtime_constraints(self):
"""Define the constraints to be imposed on the runtimes"""
recorder = RuntimePropertyRecorder(self)
for compiler in self.possible_compilers:
if compiler.name != "gcc":
continue
compiler_with_different_cls_names = {
"oneapi": "intel-oneapi-compilers",
"clang": "llvm",
}
compiler_cls_name = compiler_with_different_cls_names.get(
compiler.spec.name, compiler.spec.name
)
try:
compiler_cls = spack.repo.PATH.get_pkg_class(compiler.name)
compiler_cls = spack.repo.PATH.get_pkg_class(compiler_cls_name)
if hasattr(compiler_cls, "runtime_constraints"):
compiler_cls.runtime_constraints(spec=compiler.spec, pkg=recorder)
except spack.repo.UnknownPackageError:
pass
# Inject libc from available compilers, on Linux
if not compiler.available:
continue
if hasattr(compiler_cls, "runtime_constraints"):
compiler_cls.runtime_constraints(compiler=compiler, pkg=recorder)
if using_libc_compatibility() and compiler.compiler_obj.default_libc:
recorder("*").depends_on(
"libc", when=f"%{compiler.spec}", type="link", description="Add libc"
)
recorder("*").depends_on(
str(compiler.compiler_obj.default_libc),
when=f"%{compiler.spec}",
type="link",
description="Add libc",
)
recorder.consume_facts()
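A minimal sketch of the `runtime_constraints` hook this loop looks up on compiler package classes; the class name and the exact constraint are illustrative, not the rule the real gcc package registers.

class Gcc:
    @classmethod
    def runtime_constraints(cls, *, compiler, pkg):
        # register a link dependency on a matching gcc-runtime for every
        # package compiled with this gcc (pkg is the RuntimePropertyRecorder)
        pkg("*").depends_on(
            f"gcc-runtime@{compiler.spec.versions}",
            when=f"%{compiler.spec}",
            type="link",
            description="every package built with %gcc links against gcc-runtime",
        )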
@@ -2817,6 +2900,97 @@ def reject_requirement_constraint(
return False
class CompilerParser:
"""Parses configuration files, and builds a list of possible compilers for the solve."""
def __init__(self, configuration) -> None:
self.compilers: Set[KnownCompiler] = set()
for c in all_compilers_in_config(configuration):
if using_libc_compatibility() and not c.default_libc:
warnings.warn(
f"cannot detect libc from {c.spec}. The compiler will not be used "
f"during concretization."
)
continue
target = c.target if c.target != "any" else None
candidate = KnownCompiler(
spec=c.spec, os=c.operating_system, target=target, available=True, compiler_obj=c
)
if candidate in self.compilers:
warnings.warn(
f"duplicate found for {c.spec} on {c.operating_system}/{c.target}. "
f"Edit your compilers.yaml configuration to remove it."
)
continue
self.compilers.add(candidate)
def with_input_specs(self, input_specs: List["spack.spec.Spec"]) -> "CompilerParser":
"""Accounts for input specs when building the list of possible compilers.
Args:
input_specs: specs to be concretized
"""
strict = spack.concretize.Concretizer().check_for_compiler_existence
default_os = str(spack.platforms.host().default_os)
default_target = str(archspec.cpu.host().family)
for s in traverse.traverse_nodes(input_specs):
# we don't need to validate compilers for already-built specs
if s.concrete or not s.compiler:
continue
version = s.compiler.versions.concrete
if not version or any(item.spec.satisfies(s.compiler) for item in self.compilers):
continue
# Error when a compiler is not found and strict mode is enabled
if strict:
raise spack.concretize.UnavailableCompilerVersionError(s.compiler)
# Make up a compiler matching the input spec. This is for bootstrapping.
compiler_cls = spack.compilers.class_for_compiler_name(s.compiler.name)
compiler_obj = compiler_cls(
s.compiler, operating_system=default_os, target=default_target, paths=[None] * 4
)
self.compilers.add(
KnownCompiler(
spec=s.compiler,
os=default_os,
target=default_target,
available=True,
compiler_obj=compiler_obj,
)
)
return self
def add_compiler_from_concrete_spec(self, spec: "spack.spec.Spec") -> None:
"""Account for compilers that are coming from concrete specs, through reuse.
Args:
spec: concrete spec to be reused
"""
assert spec.concrete, "the spec argument must be concrete"
candidate = KnownCompiler(
spec=spec.compiler,
os=str(spec.architecture.os),
target=str(spec.architecture.target.microarchitecture.family),
available=False,
compiler_obj=None,
)
self.compilers.add(candidate)
def possible_compilers(self) -> List[KnownCompiler]:
# Here we have to sort two times: first by name and descending version
result = sorted(self.compilers, key=lambda x: (x.spec.name, x.spec.version), reverse=True)
# Then stable sort to prefer available compilers and account for preferences
ppk = spack.package_prefs.PackagePrefs("all", "compiler", all=False)
result.sort(key=lambda x: (not x.available, ppk(x.spec)))
return result
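The two passes above rely on Python's sort stability: the second sort reorders only by availability and preference, keeping the name/version order among ties. A self-contained toy illustration with (name, version, available) tuples:

compilers = [("gcc", 12, False), ("gcc", 13, True), ("clang", 16, True)]
ordered = sorted(compilers, key=lambda c: (c[0], c[1]), reverse=True)
ordered.sort(key=lambda c: not c[2])  # stable: available compilers first
print(ordered)  # [('gcc', 13, True), ('clang', 16, True), ('gcc', 12, False)]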
class RuntimePropertyRecorder:
"""An object of this class is injected in callbacks to compilers, to let them declare
properties of the runtimes they support and of the runtimes they provide, and to add
@@ -2857,13 +3031,24 @@ def reset(self):
"""Resets the current state."""
self.current_package = None
def depends_on(self, dependency_str: str, *, when: str, type: str, description: str) -> None:
def depends_on(
self,
dependency_str: str,
*,
when: str,
type: str,
description: str,
languages: Optional[List[str]] = None,
) -> None:
"""Injects conditional dependencies on packages.
Conditional dependencies can be either "real" packages or virtual dependencies.
Args:
dependency_str: the dependency spec to inject
when: anonymous condition to be met on a package to have the dependency
type: dependency type
languages: languages needed by the package for the dependency to be considered
description: human-readable description of the rule for adding the dependency
"""
# TODO: The API for this function is not final, and is still subject to change. At
@@ -2889,26 +3074,45 @@ def depends_on(self, dependency_str: str, *, when: str, type: str, description:
f" not external({node_variable}),\n"
f" not runtime(Package)"
).replace(f'"{placeholder}"', f"{node_variable}")
if languages:
body_str += ",\n"
body_str += ",\n".join(
f' attr("language", {node_variable}, "{language}")' for language in languages
)
head_clauses = self._setup.spec_clauses(dependency_spec, body=False)
runtime_pkg = dependency_spec.name
is_virtual = head_clauses[0].args[0] == "virtual_node"
main_rule = (
f"% {description}\n"
f'1 {{ attr("depends_on", {node_variable}, node(0..X-1, "{runtime_pkg}"), "{type}") :'
f' max_dupes("gcc-runtime", X)}} 1:-\n'
f' max_dupes("{runtime_pkg}", X)}} 1:-\n'
f"{body_str}.\n\n"
)
if is_virtual:
main_rule = (
f"% {description}\n"
f'attr("dependency_holds", {node_variable}, "{runtime_pkg}", "{type}") :-\n'
f"{body_str}.\n\n"
)
self.rules.append(main_rule)
for clause in head_clauses:
if clause.args[0] == "node":
continue
runtime_node = f'node(RuntimeID, "{runtime_pkg}")'
head_str = str(clause).replace(f'"{runtime_pkg}"', runtime_node)
rule = (
f"{head_str} :-\n"
depends_on_constraint = (
f' attr("depends_on", {node_variable}, {runtime_node}, "{type}"),\n'
f"{body_str}.\n\n"
)
if is_virtual:
depends_on_constraint = (
f' attr("depends_on", {node_variable}, ProviderNode, "{type}"),\n'
f" provider(ProviderNode, {runtime_node}),\n"
)
rule = f"{head_str} :-\n" f"{depends_on_constraint}" f"{body_str}.\n\n"
self.rules.append(rule)
self.reset()
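A hypothetical call into the API above, assuming `recorder` is a RuntimePropertyRecorder instance; the compiler version and language list are made up for illustration:

recorder("*").depends_on(
    "gcc-runtime@13.1",
    when="%gcc@13.1",
    type="link",
    languages=["c", "cxx"],
    description="inject gcc-runtime for C/C++ packages built with gcc 13.1",
)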
@@ -2967,23 +3171,6 @@ def consume_facts(self):
self._setup.effect_rules()
def call_on_concrete(fun):
"""Annotation for SpecBuilder actions that indicates they should be called on concrete specs.
While building Specs, we set a lot of attributes on the Specs that are under
construction, but we may also just look up a concrete spec (e.g., if a specific hash
was reused). We don't need to set most attributes on these because we just look them up
in the DB.
Metadata attributes like concretizer weights aren't stored on the spec or in the DB
-- they're transient annotations for a single run. So we need their handlers to be
called on all specs, including concrete ones.
"""
fun._call_on_concrete = True
return fun
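The decorator only tags the handler function; dispatch code then checks the flag with getattr. A self-contained sketch of the same tag-and-dispatch pattern, with hypothetical handler names:

def tag(fun):
    fun._call_on_concrete = True  # the same flag the decorator above sets
    return fun

class Handlers:
    @tag
    def version_weight(self, node, weight):
        return ("weight", node, weight)

    def node_os(self, node, os_name):
        return ("os", node, os_name)

spec_is_concrete = True
for action in (Handlers.version_weight, Handlers.node_os):
    if spec_is_concrete and not getattr(action, "_call_on_concrete", False):
        continue  # skip handlers that only matter while a spec is being built
    print(action(Handlers(), "zlib", 3))  # only version_weight runs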
class SpecBuilder:
"""Class with actions to rebuild a spec from ASP results."""
@@ -3083,7 +3270,6 @@ def node_flag_compiler_default(self, node):
def node_flag(self, node, flag_type, flag):
self._specs[node].compiler_flags.add_flag(flag_type, flag, False)
@call_on_concrete
def node_flag_source(self, node, flag_type, source):
self._flag_sources[(node, flag_type)].add(source)
@@ -3091,12 +3277,8 @@ def no_flags(self, node, flag_type):
self._specs[node].compiler_flags[flag_type] = []
def external_spec_selected(self, node, idx):
"""This means that the external spec and index idx
has been selected for this package.
"""
packages_yaml = spack.config.get("packages")
packages_yaml = _normalize_packages_yaml(packages_yaml)
"""This means that the external spec and index idx has been selected for this package."""
packages_yaml = _external_config_with_implicit_externals(spack.config.CONFIG)
spec_info = packages_yaml[node.pkg]["externals"][int(idx)]
self._specs[node].external_path = spec_info.get("prefix", None)
self._specs[node].external_modules = spack.spec.Spec._format_module_list(
@@ -3144,7 +3326,9 @@ def reorder_flags(self):
imposes order afterwards.
"""
# reverse compilers so we get highest priority compilers that share a spec
compilers = dict((c.spec, c) for c in reversed(all_compilers_in_config()))
compilers = dict(
(c.spec, c) for c in reversed(all_compilers_in_config(spack.config.CONFIG))
)
cmd_specs = dict((s.name, s) for spec in self._command_line_specs for s in spec.traverse())
for spec in self._specs.values():
@@ -3192,31 +3376,6 @@ def reorder_flags(self):
def deprecated(self, node: NodeArgument, version: str) -> None:
tty.warn(f'using "{node.pkg}@{version}" which is a deprecated version')
@call_on_concrete
def version_weight(self, node: NodeArgument, weight: int):
self._specs[node]._weights["version_weight"] = int(weight)
@call_on_concrete
def provider_weight(self, node: NodeArgument, virtual: str, weight: int):
self._specs[node]._weights.setdefault("provider_weight", 0)
self._specs[node]._weights["provider_weight"] += int(weight)
@call_on_concrete
def variant_not_default(self, node: NodeArgument, variant: str, value: str):
self._specs[node]._weights[f"variant_not_default({variant})"] = bool(value)
@call_on_concrete
def variant_default_not_used(self, node: NodeArgument, variant: str, value: str):
self._specs[node]._weights[f"variant_default_not_used({variant})"] = bool(value)
@call_on_concrete
def node_compiler_weight(self, node: NodeArgument, weight: int):
self._specs[node]._weights["node_compiler_weight"] = int(weight)
@call_on_concrete
def node_target_weight(self, node: NodeArgument, weight: int):
self._specs[node]._weights["node_target_weight"] = int(weight)
@staticmethod
def sort_fn(function_tuple):
"""Ensure attributes are evaluated in the correct order.
@@ -3275,12 +3434,13 @@ def build_specs(self, function_tuples):
if spack.repo.PATH.is_virtual(pkg):
continue
# if we've already gotten a concrete spec for this pkg, do not bother
# calling actions on it except for certain metadata methods that add
# information not tracked on the spec itself.
# if we've already gotten a concrete spec for this pkg,
# do not bother calling actions on it except for node_flag_source,
# since node_flag_source is tracking information not in the spec itself
spec = self._specs.get(args[0])
if spec and spec.concrete and not getattr(action, "_call_on_concrete", None):
continue
if spec and spec.concrete:
if name != "node_flag_source":
continue
action(*args)
@@ -3360,7 +3520,7 @@ def _is_reusable(spec: spack.spec.Spec, packages, local: bool) -> bool:
return False
if not spec.external:
return True
return _has_runtime_dependencies(spec)
# Cray external manifest externals are always reusable
if local:
@@ -3385,6 +3545,19 @@ def _is_reusable(spec: spack.spec.Spec, packages, local: bool) -> bool:
return False
def _has_runtime_dependencies(spec: spack.spec.Spec) -> bool:
if not WITH_RUNTIME:
return True
if spec.compiler.name == "gcc" and not spec.dependencies("gcc-runtime"):
return False
if spec.compiler.name == "oneapi" and not spec.dependencies("intel-oneapi-runtime"):
return False
return True
class Solver:
"""This is the main external interface class for solving.
@@ -3403,7 +3576,7 @@ def __init__(self):
# These properties are settable via spack configuration, and overridable
# by setting them directly as properties.
self.reuse = spack.config.get("concretizer:reuse", False)
self.reuse = spack.config.get("concretizer:reuse", True)
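Per the comment above, the configured default can be overridden directly on the solver object; a small usage sketch:

solver = Solver()
solver.reuse = False  # ignore installed specs and concretize from scratch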
@staticmethod
def _check_input_and_extract_concrete_specs(specs):
@@ -3420,7 +3593,7 @@ def _check_input_and_extract_concrete_specs(specs):
def _reusable_specs(self, specs):
reusable_specs = []
if self.reuse:
packages = spack.config.get("packages")
packages = _external_config_with_implicit_externals(spack.config.CONFIG)
# Specs from the local Database
with spack.store.STORE.db.read_transaction():
reusable_specs.extend(
@@ -3526,9 +3699,14 @@ def solve_in_rounds(
if not result.unsolved_specs:
break
# This means we cannot progress with solving the input
if not result.satisfiable or not result.specs:
break
if not result.specs:
# This is also a problem: no specs were solved for, which
# means we would be in a loop if we tried again
unsolved_str = Result.format_unsolved(result.unsolved_specs)
raise InternalConcretizerError(
"Internal Spack error: a subset of input specs could not"
f" be solved for.\n\t{unsolved_str}"
)
input_specs = list(x for (x, y) in result.unsolved_specs)
for spec in result.specs:

View File

@@ -80,6 +80,7 @@ unification_set(SetID, VirtualNode)
#defined multiple_unification_sets/1.
#defined runtime/1.
%----
% Rules to break symmetry and speed up searches
@@ -126,10 +127,12 @@ trigger_node(TriggerID, Node, Node) :-
trigger_condition_holds(TriggerID, Node),
literal(TriggerID).
% Since we trigger the existence of literal nodes from a condition, we need to construct
% the condition_set/2 manually below
% Since we trigger the existence of literal nodes from a condition, we need to construct the condition_set/2
mentioned_in_literal(Root, Mentioned) :- mentioned_in_literal(TriggerID, Root, Mentioned), solve_literal(TriggerID).
condition_set(node(min_dupe_id, Root), node(min_dupe_id, Mentioned)) :- mentioned_in_literal(Root, Mentioned).
condition_set(node(min_dupe_id, Root), node(min_dupe_id, Root)) :- mentioned_in_literal(Root, Root).
1 { condition_set(node(min_dupe_id, Root), node(0..Y-1, Mentioned)) : max_dupes(Mentioned, Y) } 1 :-
mentioned_in_literal(Root, Mentioned), Mentioned != Root.
% Discriminate between "roots" that have been explicitly requested, and roots that are deduced from "virtual roots"
explicitly_requested_root(node(min_dupe_id, Package)) :-
@@ -137,6 +140,20 @@ explicitly_requested_root(node(min_dupe_id, Package)) :-
trigger_and_effect(Package, TriggerID, EffectID),
imposed_constraint(EffectID, "root", Package).
% Keep track of which nodes are associated with which root DAG
associated_with_root(RootNode, RootNode) :- attr("root", RootNode).
associated_with_root(RootNode, ChildNode) :-
depends_on(ParentNode, ChildNode),
associated_with_root(RootNode, ParentNode).
% We cannot have a node in the root condition set that is not associated with that root
:- attr("root", RootNode),
condition_set(RootNode, node(X, Package)),
not virtual(Package),
not associated_with_root(RootNode, node(X, Package)).
#defined concretize_everything/0.
#defined literal/1.
@@ -158,6 +175,14 @@ error(100, multiple_values_error, Attribute, Package)
attr_single_value(Attribute),
2 { attr(Attribute, node(ID, Package), Value) }.
%-----------------------------------------------------------------------------
% Languages used
%-----------------------------------------------------------------------------
attr("language", node(X, Package), Language) :-
condition_holds(ConditionID, node(X, Package)),
pkg_fact(Package,language(ConditionID, Language)).
%-----------------------------------------------------------------------------
% Version semantics
%-----------------------------------------------------------------------------
@@ -175,7 +200,7 @@ pkg_fact(Package, version_declared(Version, Weight)) :- pkg_fact(Package, versio
% We cannot use a version declared for an installed package if we end up building it
:- pkg_fact(Package, version_declared(Version, Weight, "installed")),
attr("version", node(ID, Package), Version),
attr("version_weight", node(ID, Package), Weight),
version_weight(node(ID, Package), Weight),
not attr("hash", node(ID, Package), _),
internal_error("Reuse version weight used for built package").
@@ -215,7 +240,7 @@ possible_version_weight(node(ID, Package), Weight)
% we can't use the weight for an external version if we don't use the
% corresponding external spec.
:- attr("version", node(ID, Package), Version),
attr("version_weight", node(ID, Package), Weight),
version_weight(node(ID, Package), Weight),
pkg_fact(Package, version_declared(Version, Weight, "external")),
not external(node(ID, Package)),
internal_error("External weight used for built package").
@@ -223,18 +248,18 @@ possible_version_weight(node(ID, Package), Weight)
% we can't use a weight from an installed spec if we are building it
% and vice-versa
:- attr("version", node(ID, Package), Version),
attr("version_weight", node(ID, Package), Weight),
version_weight(node(ID, Package), Weight),
pkg_fact(Package, version_declared(Version, Weight, "installed")),
build(node(ID, Package)),
internal_error("Reuse version weight used for build package").
:- attr("version", node(ID, Package), Version),
attr("version_weight", node(ID, Package), Weight),
version_weight(node(ID, Package), Weight),
not pkg_fact(Package, version_declared(Version, Weight, "installed")),
not build(node(ID, Package)),
internal_error("Build version weight used for reused package").
1 { attr("version_weight", node(ID, Package), Weight) : pkg_fact(Package, version_declared(Version, Weight)) } 1
1 { version_weight(node(ID, Package), Weight) : pkg_fact(Package, version_declared(Version, Weight)) } 1
:- attr("version", node(ID, Package), Version),
attr("node", node(ID, Package)),
internal_error("version weights must exist and be unique").
@@ -514,6 +539,12 @@ attr("virtual_on_edge", PackageNode, ProviderNode, Virtual)
provider(ProviderNode, node(_, Virtual)),
not external(PackageNode).
% If a virtual node is in the answer set, it must be either a virtual root,
% or used somewhere
:- attr("virtual_node", node(_, Virtual)),
not attr("virtual_on_incoming_edges", _, Virtual),
not attr("virtual_root", node(_, Virtual)).
attr("virtual_on_incoming_edges", ProviderNode, Virtual)
:- attr("virtual_on_edge", _, ProviderNode, Virtual).
@@ -583,7 +614,7 @@ do_not_impose(EffectID, node(X, Package))
% A provider may have different possible weights depending on whether it's an external
% or not, or on preferences expressed in packages.yaml etc. This rule ensures that
% we select the weight, among the possible ones, that minimizes the overall objective function.
1 { attr("provider_weight", DependencyNode, VirtualNode, Weight) :
1 { provider_weight(DependencyNode, VirtualNode, Weight) :
possible_provider_weight(DependencyNode, VirtualNode, Weight, _) } 1
:- provider(DependencyNode, VirtualNode), internal_error("Package provider weights must be unique").
@@ -626,7 +657,7 @@ error(100, "Attempted to use external for '{0}' which does not satisfy a unique
:- external(node(ID, Package)),
2 { external_version(node(ID, Package), Version, Weight) }.
attr("version_weight", PackageNode, Weight) :- external_version(PackageNode, Version, Weight).
version_weight(PackageNode, Weight) :- external_version(PackageNode, Version, Weight).
attr("version", PackageNode, Version) :- external_version(PackageNode, Version, Weight).
% if a package is not buildable, only externals or hashed specs are allowed
@@ -644,7 +675,7 @@ external(PackageNode) :- attr("external_spec_selected", PackageNode, _).
% we can't use the weight for an external version if we don't use the
% corresponding external spec.
:- attr("version", node(ID, Package), Version),
attr("version_weight", node(ID, Package), Weight),
version_weight(node(ID, Package), Weight),
pkg_fact(Package, version_declared(Version, Weight, "external")),
not external(node(ID, Package)),
internal_error("External weight used for internal spec").
@@ -876,17 +907,13 @@ error(100, "{0} variant '{1}' cannot have values '{2}' and '{3}' as they come fr
Set1 < Set2, % see[1]
build(node(ID, Package)).
% variant_set is an explicitly set variant value. If it's not 'set',
% we revert to the default value. If it is set, we force the set value
attr("variant_value", PackageNode, Variant, Value)
:- attr("node", PackageNode),
node_has_variant(PackageNode, Variant),
attr("variant_set", PackageNode, Variant, Value).
:- attr("variant_set", node(ID, Package), Variant, Value),
not attr("variant_value", node(ID, Package), Variant, Value).
% The rules below allow us to prefer default values for variants
% whenever possible. If a variant is set in a spec, or if it is
% specified in an external, we score it as if it were a default value.
attr("variant_not_default", node(ID, Package), Variant, Value)
variant_not_default(node(ID, Package), Variant, Value)
:- attr("variant_value", node(ID, Package), Variant, Value),
not variant_default_value(Package, Variant, Value),
% variants set explicitly on the CLI don't count as non-default
@@ -901,7 +928,7 @@ attr("variant_not_default", node(ID, Package), Variant, Value)
% A default variant value that is not used
attr("variant_default_not_used", node(ID, Package), Variant, Value)
variant_default_not_used(node(ID, Package), Variant, Value)
:- variant_default_value(Package, Variant, Value),
node_has_variant(node(ID, Package), Variant),
not attr("variant_value", node(ID, Package), Variant, Value),
@@ -966,14 +993,13 @@ pkg_fact(Package, variant_single_value("dev_path"))
% Platform semantics
%-----------------------------------------------------------------------------
% if no platform is set, fall back to the default
error(100, "platform '{0}' is not allowed on the current host", Platform)
:- attr("node_platform", _, Platform), not allowed_platform(Platform).
% NOTE: Currently we have a single allowed platform per DAG, therefore there is no
% need to have additional optimization criteria. If we ever add cross-platform DAGs,
% this needs to be changed.
:- 2 { allowed_platform(Platform) }, internal_error("More than one allowed platform detected").
attr("node_platform", PackageNode, Platform)
:- attr("node", PackageNode),
not attr("node_platform_set", PackageNode),
node_platform_default(Platform).
1 { attr("node_platform", PackageNode, Platform) : allowed_platform(Platform) } 1
:- attr("node", PackageNode).
% setting platform on a node is a hard constraint
attr("node_platform", PackageNode, Platform)
@@ -997,43 +1023,18 @@ error(100, "Cannot select '{0} os={1}' (operating system '{1}' is not buildable)
attr("node_os", node(X, Package), OS),
not buildable_os(OS).
% can't have dependencies on incompatible OS's
error(100, "{0} and dependency {1} have incompatible operating systems 'os={2}' and 'os={3}'", Package, Dependency, PackageNodeOS, DependencyOS)
:- depends_on(node(X, Package), node(Y, Dependency)),
attr("node_os", node(X, Package), PackageNodeOS),
attr("node_os", node(Y, Dependency), DependencyOS),
not os_compatible(PackageNodeOS, DependencyOS),
build(node(X, Package)).
% give OS choice weights according to os declarations
node_os_weight(PackageNode, Weight)
:- attr("node", PackageNode),
attr("node_os", PackageNode, OS),
os(OS, Weight).
% match semantics for OS's
node_os_match(PackageNode, DependencyNode) :-
depends_on(PackageNode, DependencyNode),
attr("node_os", PackageNode, OS),
attr("node_os", DependencyNode, OS).
node_os_mismatch(PackageNode, DependencyNode) :-
depends_on(PackageNode, DependencyNode),
not node_os_match(PackageNode, DependencyNode).
% every OS is compatible with itself. We can use `os_compatible` to declare
os_compatible(OS, OS) :- os(OS).
% Transitive compatibility among operating systems
os_compatible(OS1, OS3) :- os_compatible(OS1, OS2), os_compatible(OS2, OS3).
% We can select only operating systems compatible with the ones
% for which we can build software. We need a cardinality constraint
% since we might have more than one "buildable_os(OS)" fact.
:- not 1 { os_compatible(CurrentOS, ReusedOS) : buildable_os(CurrentOS) },
attr("node_os", Package, ReusedOS),
internal_error("Reused OS incompatible with build OS").
% If an OS is set explicitly respect the value
attr("node_os", PackageNode, OS) :- attr("node_os_set", PackageNode, OS), attr("node", PackageNode).
@@ -1081,12 +1082,15 @@ error(100, "{0} compiler '{2}@{3}' incompatible with 'target={1}'", Package, Tar
compiler_version(CompilerID, Version),
build(node(X, Package)).
#defined compiler_supports_target/2.
#defined compiler_available/1.
% if a target is set explicitly, respect it
attr("node_target", PackageNode, Target)
:- attr("node", PackageNode), attr("node_target_set", PackageNode, Target).
% each node has the weight of its assigned target
attr("node_target_weight", node(ID, Package), Weight)
node_target_weight(node(ID, Package), Weight)
:- attr("node", node(ID, Package)),
attr("node_target", node(ID, Package), Target),
target_weight(Target, Weight).
@@ -1111,7 +1115,7 @@ error(100, "'{0} target={1}' is not compatible with this machine", Package, Targ
% Compiler semantics
%-----------------------------------------------------------------------------
% There must be only one compiler set per built node.
{ node_compiler(PackageNode, CompilerID) : compiler_id(CompilerID) } :-
{ node_compiler(PackageNode, CompilerID) : compiler_id(CompilerID), compiler_available(CompilerID) } :-
attr("node", PackageNode),
build(PackageNode).
@@ -1128,6 +1132,7 @@ attr("node_compiler_version", PackageNode, CompilerName, CompilerVersion)
:- node_compiler(PackageNode, CompilerID),
compiler_name(CompilerID, CompilerName),
compiler_version(CompilerID, CompilerVersion),
compiler_available(CompilerID),
build(PackageNode).
attr("node_compiler", PackageNode, CompilerName)
@@ -1188,8 +1193,8 @@ error(100, "{0} compiler '%{1}@{2}' incompatible with 'os={3}'", Package, Compil
node_compiler(node(X, Package), CompilerID),
compiler_name(CompilerID, Compiler),
compiler_version(CompilerID, Version),
not compiler_os(CompilerID, OS),
not allow_compiler(Compiler, Version),
compiler_os(CompilerID, CompilerOS),
not os_compatible(CompilerOS, OS),
build(node(X, Package)).
% If a package and one of its dependencies don't have the
@@ -1210,15 +1215,18 @@ compiler_mismatch_required(PackageNode, DependencyNode)
not compiler_match(PackageNode, DependencyNode).
#defined compiler_os/3.
#defined allow_compiler/2.
% compilers weighted by preference according to packages.yaml
attr("node_compiler_weight", PackageNode, Weight)
:- node_compiler(PackageNode, CompilerID),
node_compiler_weight(node(ID, Package), Weight)
:- node_compiler(node(ID, Package), CompilerID),
compiler_name(CompilerID, Compiler),
compiler_version(CompilerID, V),
compiler_weight(CompilerID, Weight).
attr("node_compiler_weight", PackageNode, 100)
:- node_compiler(PackageNode, CompilerID),
node_compiler_weight(node(ID, Package), 100)
:- node_compiler(node(ID, Package), CompilerID),
compiler_name(CompilerID, Compiler),
compiler_version(CompilerID, V),
not compiler_weight(CompilerID, _).
% For the time being, be strict and reuse only if the compiler matches one we have on the system
@@ -1355,7 +1363,7 @@ build_priority(PackageNode, 0) :- not build(PackageNode), attr("node", Package
% will instead build the latest bar. When we actually include transitive
% build deps in the solve, consider using them as a preference to resolve this.
:- attr("version", node(ID, Package), Version),
attr("version_weight", node(ID, Package), Weight),
version_weight(node(ID, Package), Weight),
pkg_fact(Package, version_declared(Version, Weight, "installed")),
not optimize_for_reuse().
@@ -1429,7 +1437,7 @@ opt_criterion(70, "version weight").
#minimize {
Weight@70+Priority
: attr("root", PackageNode),
attr("version_weight", PackageNode, Weight),
version_weight(PackageNode, Weight),
build_priority(PackageNode, Priority)
}.
@@ -1438,7 +1446,7 @@ opt_criterion(65, "number of non-default variants (roots)").
#minimize{ 0@65: #true }.
#minimize {
1@65+Priority,PackageNode,Variant,Value
: attr("variant_not_default", PackageNode, Variant, Value),
: variant_not_default(PackageNode, Variant, Value),
attr("root", PackageNode),
build_priority(PackageNode, Priority)
}.
@@ -1448,7 +1456,7 @@ opt_criterion(60, "preferred providers for roots").
#minimize{ 0@60: #true }.
#minimize{
Weight@60+Priority,ProviderNode,Virtual
: attr("provider_weight", ProviderNode, Virtual, Weight),
: provider_weight(ProviderNode, Virtual, Weight),
attr("root", ProviderNode),
build_priority(ProviderNode, Priority)
}.
@@ -1458,7 +1466,7 @@ opt_criterion(55, "default values of variants not being used (roots)").
#minimize{ 0@55: #true }.
#minimize{
1@55+Priority,PackageNode,Variant,Value
: attr("variant_default_not_used", PackageNode, Variant, Value),
: variant_default_not_used(PackageNode, Variant, Value),
attr("root", PackageNode),
build_priority(PackageNode, Priority)
}.
@@ -1469,7 +1477,7 @@ opt_criterion(50, "number of non-default variants (non-roots)").
#minimize{ 0@50: #true }.
#minimize {
1@50+Priority,PackageNode,Variant,Value
: attr("variant_not_default", PackageNode, Variant, Value),
: variant_not_default(PackageNode, Variant, Value),
not attr("root", PackageNode),
build_priority(PackageNode, Priority)
}.
@@ -1481,7 +1489,7 @@ opt_criterion(45, "preferred providers (non-roots)").
#minimize{ 0@45: #true }.
#minimize{
Weight@45+Priority,ProviderNode,Virtual
: attr("provider_weight", ProviderNode, Virtual, Weight),
: provider_weight(ProviderNode, Virtual, Weight),
not attr("root", ProviderNode),
build_priority(ProviderNode, Priority)
}.
@@ -1491,28 +1499,20 @@ opt_criterion(40, "compiler mismatches that are not from CLI").
#minimize{ 0@240: #true }.
#minimize{ 0@40: #true }.
#minimize{
1@40+Priority,PackageNode,DependencyNode
: compiler_mismatch(PackageNode, DependencyNode),
build_priority(PackageNode, Priority)
1@40+Priority,PackageNode,node(ID, Dependency)
: compiler_mismatch(PackageNode, node(ID, Dependency)),
build_priority(node(ID, Dependency), Priority),
not runtime(Dependency)
}.
opt_criterion(39, "compiler mismatches that are required from the CLI").
#minimize{ 0@239: #true }.
#minimize{ 0@39: #true }.
#minimize{
1@39+Priority,PackageNode,DependencyNode
: compiler_mismatch_required(PackageNode, DependencyNode),
build_priority(PackageNode, Priority)
}.
% Try to minimize the number of compiler mismatches in the DAG.
opt_criterion(35, "OS mismatches").
#minimize{ 0@235: #true }.
#minimize{ 0@35: #true }.
#minimize{
1@35+Priority,PackageNode,DependencyNode
: node_os_mismatch(PackageNode, DependencyNode),
build_priority(PackageNode, Priority)
1@39+Priority,PackageNode,node(ID, Dependency)
: compiler_mismatch_required(PackageNode, node(ID, Dependency)),
build_priority(node(ID, Dependency), Priority),
not runtime(Dependency)
}.
opt_criterion(30, "non-preferred OS's").
@@ -1529,9 +1529,10 @@ opt_criterion(25, "version badness").
#minimize{ 0@225: #true }.
#minimize{ 0@25: #true }.
#minimize{
Weight@25+Priority,PackageNode
: attr("version_weight", PackageNode, Weight),
build_priority(PackageNode, Priority)
Weight@25+Priority,node(X, Package)
: version_weight(node(X, Package), Weight),
build_priority(node(X, Package), Priority),
not runtime(Package)
}.
% Try to use all the default values of variants
@@ -1540,7 +1541,7 @@ opt_criterion(20, "default values of variants not being used (non-roots)").
#minimize{ 0@20: #true }.
#minimize{
1@20+Priority,PackageNode,Variant,Value
: attr("variant_default_not_used", PackageNode, Variant, Value),
: variant_default_not_used(PackageNode, Variant, Value),
not attr("root", PackageNode),
build_priority(PackageNode, Priority)
}.
@@ -1550,9 +1551,10 @@ opt_criterion(15, "non-preferred compilers").
#minimize{ 0@215: #true }.
#minimize{ 0@15: #true }.
#minimize{
Weight@15+Priority,PackageNode
: attr("node_compiler_weight", PackageNode, Weight),
build_priority(PackageNode, Priority)
Weight@15+Priority,node(X, Package)
: node_compiler_weight(node(X, Package), Weight),
build_priority(node(X, Package), Priority),
not runtime(Package)
}.
% Minimize the number of mismatches for targets in the DAG, try
@@ -1561,18 +1563,55 @@ opt_criterion(10, "target mismatches").
#minimize{ 0@210: #true }.
#minimize{ 0@10: #true }.
#minimize{
1@10+Priority,PackageNode,Dependency
: node_target_mismatch(PackageNode, Dependency),
build_priority(PackageNode, Priority)
1@10+Priority,PackageNode,node(ID, Dependency)
: node_target_mismatch(PackageNode, node(ID, Dependency)),
build_priority(node(ID, Dependency), Priority),
not runtime(Dependency)
}.
opt_criterion(5, "non-preferred targets").
#minimize{ 0@205: #true }.
#minimize{ 0@5: #true }.
#minimize{
Weight@5+Priority,PackageNode
: attr("node_target_weight", PackageNode, Weight),
build_priority(PackageNode, Priority)
Weight@5+Priority,node(X, Package)
: node_target_weight(node(X, Package), Weight),
build_priority(node(X, Package), Priority),
not runtime(Package)
}.
% Minimize the number of compiler mismatches for runtimes
opt_criterion(4, "compiler mismatches (runtimes)").
#minimize{ 0@204: #true }.
#minimize{ 0@4: #true }.
#minimize{
1@4,PackageNode,node(ID, Dependency)
: compiler_mismatch(PackageNode, node(ID, Dependency)), runtime(Dependency)
}.
#minimize{
1@4,PackageNode,node(ID, Dependency)
: compiler_mismatch_required(PackageNode, node(ID, Dependency)), runtime(Dependency)
}.
% Choose more recent versions for runtimes
opt_criterion(3, "version badness (runtimes)").
#minimize{ 0@203: #true }.
#minimize{ 0@3: #true }.
#minimize{
Weight@3,node(X, Package)
: version_weight(node(X, Package), Weight),
runtime(Package)
}.
% Choose best target for runtimes
opt_criterion(2, "non-preferred targets (runtimes)").
#minimize{ 0@202: #true }.
#minimize{ 0@2: #true }.
#minimize{
Weight@2,node(X, Package)
: node_target_weight(node(X, Package), Weight),
runtime(Package)
}.
% Choose more recent versions for nodes
@@ -1581,7 +1620,7 @@ opt_criterion(1, "edge wiring").
#minimize{ 0@1: #true }.
#minimize{
Weight@1,ParentNode,PackageNode
: attr("version_weight", PackageNode, Weight),
: version_weight(PackageNode, Weight),
not attr("root", PackageNode),
depends_on(ParentNode, PackageNode)
}.

View File

@@ -10,6 +10,7 @@
import spack.deptypes as dt
import spack.package_base
import spack.repo
import spack.spec
PossibleDependencies = Set[str]
@@ -24,7 +25,13 @@ class Counter:
"""
def __init__(self, specs: List["spack.spec.Spec"], tests: bool) -> None:
self.specs = specs
runtime_pkgs = spack.repo.PATH.packages_with_tags("runtime")
runtime_virtuals = set()
for x in runtime_pkgs:
pkg_class = spack.repo.PATH.get_pkg_class(x)
runtime_virtuals.update(pkg_class.provided_virtual_names())
self.specs = specs + [spack.spec.Spec(x) for x in runtime_pkgs]
self.link_run_types: dt.DepFlag = dt.LINK | dt.RUN | dt.TEST
self.all_types: dt.DepFlag = dt.ALL
@@ -33,7 +40,9 @@ def __init__(self, specs: List["spack.spec.Spec"], tests: bool) -> None:
self.all_types = dt.LINK | dt.RUN | dt.BUILD
self._possible_dependencies: PossibleDependencies = set()
self._possible_virtuals: Set[str] = set(x.name for x in specs if x.virtual)
self._possible_virtuals: Set[str] = (
set(x.name for x in specs if x.virtual) | runtime_virtuals
)
def possible_dependencies(self) -> PossibleDependencies:
"""Returns the list of possible dependencies"""

View File

@@ -0,0 +1,37 @@
% Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
% Spack Project Developers. See the top-level COPYRIGHT file for details.
%
% SPDX-License-Identifier: (Apache-2.0 OR MIT)
%=============================================================================
% Libc compatibility rules for reusing solves.
%
% These rules are used on Linux
%=============================================================================
% A package cannot be reused if the libc is not compatible with it
:- provider(node(X, LibcPackage), node(0, "libc")),
attr("version", node(X, LibcPackage), LibcVersion),
attr("hash", node(R, ReusedPackage), Hash),
% Libc packages can be reused without the "compatible_libc" attribute
ReusedPackage != LibcPackage,
not attr("compatible_libc", node(R, ReusedPackage), LibcPackage, LibcVersion).
% Check whether the DAG has any built package
has_built_packages() :- build(X), not external(X).
% A libc is needed in the DAG
:- has_built_packages(), not provider(_, node(0, "libc")).
% The libc must be chosen among available ones
:- has_built_packages(),
provider(node(X, LibcPackage), node(0, "libc")),
attr("node", node(X, LibcPackage)),
attr("version", node(X, LibcPackage), LibcVersion),
not allowed_libc(LibcPackage, LibcVersion).
% A built node must depend on libc
:- build(PackageNode),
provider(LibcNode, node(0, "libc")),
not external(PackageNode),
not depends_on(PackageNode, LibcNode).

View File

@@ -7,21 +7,24 @@
% OS compatibility rules for reusing solves.
% os_compatible(RecentOS, OlderOS)
% OlderOS binaries can be used on RecentOS
%
% These rules are used on every platform but Linux
%=============================================================================
% macOS
os_compatible("monterey", "bigsur").
os_compatible("bigsur", "catalina").
% Ubuntu
os_compatible("ubuntu22.04", "ubuntu21.10").
os_compatible("ubuntu21.10", "ubuntu21.04").
os_compatible("ubuntu21.04", "ubuntu20.10").
os_compatible("ubuntu20.10", "ubuntu20.04").
os_compatible("ubuntu20.04", "ubuntu19.10").
os_compatible("ubuntu19.10", "ubuntu19.04").
os_compatible("ubuntu19.04", "ubuntu18.10").
os_compatible("ubuntu18.10", "ubuntu18.04").
% can't have dependencies on incompatible OS's
error(100, "{0} and dependency {1} have incompatible operating systems 'os={2}' and 'os={3}'", Package, Dependency, PackageNodeOS, DependencyOS)
:- depends_on(node(X, Package), node(Y, Dependency)),
attr("node_os", node(X, Package), PackageNodeOS),
attr("node_os", node(Y, Dependency), DependencyOS),
not os_compatible(PackageNodeOS, DependencyOS),
build(node(X, Package)).
%EL8
os_compatible("rhel8", "rocky8").
% We can select only operating systems compatible with the ones
% for which we can build software. We need a cardinality constraint
% since we might have more than one "buildable_os(OS)" fact.
:- not 1 { os_compatible(CurrentOS, ReusedOS) : buildable_os(CurrentOS) },
attr("node_os", Package, ReusedOS).

View File

@@ -51,7 +51,6 @@
import collections
import collections.abc
import enum
import io
import itertools
import os
import pathlib
@@ -59,7 +58,7 @@
import re
import socket
import warnings
from typing import Any, Callable, Dict, List, Optional, Set, Tuple, Union
from typing import Any, Callable, Dict, List, Match, Optional, Set, Tuple, Union
import llnl.path
import llnl.string
@@ -121,6 +120,31 @@
"SpecDeprecatedError",
]
SPEC_FORMAT_RE = re.compile(
r"(?:" # this is one big or, with matches ordered by priority
# OPTION 1: escaped character (needs to be first to catch opening \{)
# Note that an unterminated \ at the end of a string is left untouched
r"(?:\\(.))"
r"|" # or
# OPTION 2: an actual format string
r"{" # non-escaped open brace {
r"([%@/]|arch=)?" # optional sigil (to print sigil in color)
r"(?:\^([^}\.]+)\.)?" # optional ^depname. (to get attr from dependency)
# after the sigil or depname, we can have a hash expression or another attribute
r"(?:" # one of
r"(hash\b)(?:\:(\d+))?" # hash followed by :<optional length>
r"|" # or
r"([^}]*)" # another attribute to format
r")" # end one of
r"(})?" # finish format string with non-escaped close brace }, or missing if not present
r"|"
# OPTION 3: mismatched close brace (option 2 would consume a matched open brace)
r"(})" # brace
r")",
re.IGNORECASE,
)
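A quick illustration of the groups this regex captures, assuming the SPEC_FORMAT_RE defined above is in scope; the sample is a typical Spack format string:

for m in SPEC_FORMAT_RE.finditer("{name}{@version}{/hash:7}"):
    esc, sig, dep, hsh, hash_len, attribute, close, bad_close = m.groups()
    print(sig, attribute or hsh, hash_len)
# -> None name None
# -> @ version None
# -> / hash 7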
#: Valid pattern for an identifier in Spack
IDENTIFIER_RE = r"\w[\w-]*"
@@ -133,7 +157,6 @@
ARCHITECTURE_COLOR = "@m" #: color for highlighting architectures
VARIANT_COLOR = "@B" #: color for highlighting variants
HASH_COLOR = "@K" #: color for highlighting package hashes
NONDEFAULT_COLOR = "@_R" #: color for highlighting non-defaults in spec output
#: Default format for Spec.format(). This format can be round-tripped, so that:
#: Spec(Spec("string").format()) == Spec("string")
@@ -177,6 +200,7 @@ class InstallStatus(enum.Enum):
missing = "@r{[-]} "
# regexes used in spec formatting
OLD_STYLE_FMT_RE = re.compile(r"\${[A-Z]+}")
@@ -875,6 +899,9 @@ def flags():
yield flags
def __str__(self):
if not self:
return ""
sorted_items = sorted((k, v) for k, v in self.items() if v)
result = ""
@@ -1336,10 +1363,6 @@ def __init__(
# Build spec should be the actual build spec unless marked dirty.
self._build_spec = None
# Optimization weight information from the solver for coloring non-defaults in
# spec formatting output.
self._weights = {}
if isinstance(spec_like, str):
spack.parser.parse_one_or_raise(spec_like, self)
@@ -1376,6 +1399,13 @@ def external_path(self, ext_path):
def external(self):
return bool(self.external_path) or bool(self.external_modules)
@property
def is_develop(self):
"""Return whether the Spec represents a user-developed package
in a Spack ``Environment`` (i.e. using `spack develop`).
"""
return bool(self.variants.get("dev_path", False))
def clear_dependencies(self):
"""Trim the dependencies of this spec."""
self._dependencies.clear()
@@ -2929,7 +2959,6 @@ def _new_concretize(self, tests=False):
allow_deprecated = spack.config.get("config:deprecated", False)
solver = spack.solver.asp.Solver()
result = solver.solve([self], tests=tests, allow_deprecated=allow_deprecated)
result.raise_if_unsat()
# take the best answer
opt, i, answer = min(result.answers)
@@ -4001,7 +4030,6 @@ def _dup(self, other, deps: Union[bool, dt.DepTypes, dt.DepFlag] = True, clearde
self.compiler_flags.spec = self
self.variants = other.variants.copy()
self._build_spec = other._build_spec
self._weights = other._weights.copy()
# FIXME: we manage _patches_in_order_of_appearance specially here
# to keep it from leaking out of spec.py, but we should figure
@@ -4255,13 +4283,7 @@ def deps():
yield deps
def format(
self,
format_string: str = DEFAULT_FORMAT,
color: bool = False,
nondefaults: bool = False,
transform: Optional[Callable] = None,
):
def format(self, format_string: str = DEFAULT_FORMAT, color: Optional[bool] = False) -> str:
r"""Prints out particular pieces of a spec, depending on what is
in the format string.
@@ -4324,77 +4346,65 @@ def format(
literal ``\`` character.
Args:
format_string (str): string containing the format to be expanded
Keyword Args:
color (bool): True if returned string is colored
transform (dict): maps full-string formats to a callable \
that accepts a string and returns another one
format_string: string containing the format to be expanded
color: True for colorized result; False for no color; None for auto color.
"""
ensure_modern_format_string(format_string)
transform = transform if transform is not None else {}
out = io.StringIO()
def safe_color(sigil: str, string: str, color_fmt: Optional[str]) -> str:
# avoid colorizing if there is no color or the string is empty
if (color is False) or not color_fmt or not string:
return sigil + string
# escape and add the sigil here to avoid multiple concatenations
if sigil == "@":
sigil = "@@"
return clr.colorize(f"{color_fmt}{sigil}{clr.cescape(string)}@.", color=color)
def write(string, cformat=None):
escaped = clr.cescape(string)
if cformat is not None:
escaped = f"{cformat}{escaped}@."
clr.cwrite(escaped, stream=out, color=color)
def format_attribute(match_object: Match) -> str:
(esc, sig, dep, hash, hash_len, attribute, close_brace, unmatched_close_brace) = (
match_object.groups()
)
if esc:
return esc
elif unmatched_close_brace:
raise SpecFormatStringError(f"Unmatched close brace: '{format_string}'")
elif not close_brace:
raise SpecFormatStringError(f"Missing close brace: '{format_string}'")
def write_attribute(spec, attribute):
attribute = attribute.lower()
current = self if dep is None else self[dep]
sig = ""
if attribute.startswith(("@", "%", "/")):
# color sigils that are inside braces
sig = attribute[0]
attribute = attribute[1:]
elif attribute.startswith("arch="):
sig = " arch=" # include space as separator
attribute = attribute[5:]
current = spec
if attribute.startswith("^"):
attribute = attribute[1:]
dep, attribute = attribute.split(".", 1)
current = self[dep]
# Hash attributes can return early.
# NOTE: we currently treat abstract_hash like an attribute and ignore
# any length associated with it. We may want to change that.
if hash:
if sig and sig != "/":
raise SpecFormatSigilError(sig, "DAG hashes", hash)
try:
length = int(hash_len) if hash_len else None
except ValueError:
raise SpecFormatStringError(f"Invalid hash length: '{hash_len}'")
return safe_color(sig or "", current.dag_hash(length), HASH_COLOR)
if attribute == "":
raise SpecFormatStringError("Format string attributes must be non-empty")
attribute = attribute.lower()
parts = attribute.split(".")
assert parts
# check that the sigil is valid for the attribute.
if sig == "@" and parts[-1] not in ("versions", "version"):
if not sig:
sig = ""
elif sig == "@" and parts[-1] not in ("versions", "version"):
raise SpecFormatSigilError(sig, "versions", attribute)
elif sig == "%" and attribute not in ("compiler", "compiler.name"):
raise SpecFormatSigilError(sig, "compilers", attribute)
elif sig == "/" and not re.match(r"(abstract_)?hash(:\d+)?$", attribute):
elif sig == "/" and attribute != "abstract_hash":
raise SpecFormatSigilError(sig, "DAG hashes", attribute)
elif sig == " arch=" and attribute not in ("architecture", "arch"):
raise SpecFormatSigilError(sig, "the architecture", attribute)
# find the morph function for our attribute
morph = transform.get(attribute, lambda s, x: x)
# Special cases for non-spec attributes and hashes.
# These must be the only non-dep component of the format attribute
if attribute == "spack_root":
write(morph(spec, spack.paths.spack_root))
return
elif attribute == "spack_install":
write(morph(spec, spack.store.STORE.layout.root))
return
elif re.match(r"hash(:\d)?", attribute):
if ":" in attribute:
_, length = attribute.split(":")
write(sig + morph(spec, current.dag_hash(int(length))), HASH_COLOR)
else:
write(sig + morph(spec, current.dag_hash()), HASH_COLOR)
return
elif sig == "arch=":
if attribute not in ("architecture", "arch"):
raise SpecFormatSigilError(sig, "the architecture", attribute)
sig = " arch=" # include space as separator
# Iterate over components using getattr to get next element
for idx, part in enumerate(parts):
@@ -4403,7 +4413,7 @@ def write_attribute(spec, attribute):
if part.startswith("_"):
raise SpecFormatStringError("Attempted to format private attribute")
else:
if isinstance(current, vt.VariantMap):
if part == "variants" and isinstance(current, vt.VariantMap):
# subscript instead of getattr for variant names
current = current[part]
else:
@@ -4427,83 +4437,31 @@ def write_attribute(spec, attribute):
raise SpecFormatStringError(m)
if isinstance(current, vn.VersionList):
if current == vn.any_version:
# We don't print empty version lists
return
# don't print empty version lists
return ""
if callable(current):
raise SpecFormatStringError("Attempted to format callable object")
if current is None:
# We're not printing anything
return
# not printing anything
return ""
# Set color codes for various attributes
def nondefault_or_color(color, *weights):
"""Spec attributes are nondefault relevant weights are present and nonzero."""
if nondefaults and any(self._weights.get(w) for w in weights):
return NONDEFAULT_COLOR
else:
return color
attr_color = None
color = None
if "variants" in parts:
write(sig)
for variant, boolean in current.ordered_variants():
attr_color = nondefault_or_color(
VARIANT_COLOR,
f"variant_not_default({variant.name})",
f"variant_default_not_used({variant.name})",
)
if not boolean:
write(" ")
write(morph(spec, str(variant)), attr_color)
return
elif "name" == parts[0]:
attr_color = nondefault_or_color(None, "provider_weight")
color = VARIANT_COLOR
elif "architecture" in parts:
# TODO: handle arch parts independently
attr_color = nondefault_or_color(ARCHITECTURE_COLOR, "node_target_weight")
color = ARCHITECTURE_COLOR
elif "compiler" in parts or "compiler_flags" in parts:
attr_color = nondefault_or_color(COMPILER_COLOR, "node_compiler_weight")
color = COMPILER_COLOR
elif "version" in parts or "versions" in parts:
attr_color = nondefault_or_color(VERSION_COLOR, "version_weight")
color = VERSION_COLOR
# Finally, write the output
write(sig + morph(spec, str(current)), attr_color)
# return colored output
return safe_color(sig, str(current), color)
attribute = ""
in_attribute = False
escape = False
for c in format_string:
if escape:
out.write(c)
escape = False
elif c == "\\":
escape = True
elif in_attribute:
if c == "}":
write_attribute(self, attribute)
attribute = ""
in_attribute = False
else:
attribute += c
else:
if c == "}":
raise SpecFormatStringError(
"Encountered closing } before opening { in %s" % format_string
)
elif c == "{":
in_attribute = True
else:
out.write(c)
if in_attribute:
raise SpecFormatStringError(
"Format string terminated while reading attribute." "Missing terminating }."
)
formatted_spec = out.getvalue()
return formatted_spec.strip()
return SPEC_FORMAT_RE.sub(format_attribute, format_string).strip()
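A hedged usage sketch of the rewritten formatter; {name} and {@version} are standard Spack format attributes, and the escaped braces exercise the regex's OPTION 1 branch:

import spack.spec

s = spack.spec.Spec("zlib@1.2.13")
print(s.format("{name}{@version}"))     # zlib@1.2.13
print(s.format(r"\{literal braces\}"))  # {literal braces}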
def cformat(self, *args, **kwargs):
"""Same as format, but color defaults to auto instead of False."""
@@ -4511,6 +4469,16 @@ def cformat(self, *args, **kwargs):
kwargs.setdefault("color", None)
return self.format(*args, **kwargs)
@property
def spack_root(self):
"""Special field for using ``{spack_root}`` in Spec.format()."""
return spack.paths.spack_root
@property
def spack_install(self):
"""Special field for using ``{spack_install}`` in Spec.format()."""
return spack.store.STORE.layout.root
def format_path(
# self, format_string: str, _path_ctor: Optional[pathlib.PurePath] = None
self,
@@ -4536,18 +4504,27 @@ def format_path(
path_ctor = _path_ctor or pathlib.PurePath
format_string_as_path = path_ctor(format_string)
if format_string_as_path.is_absolute():
if format_string_as_path.is_absolute() or (
# Paths that begin with a single "\" on windows are relative, but we still
# want to preserve the initial "\\" to be consistent with PureWindowsPath.
# Ensure that this '\' is not passed to polite_filename() so it's not converted to '_'
(os.name == "nt" or path_ctor == pathlib.PureWindowsPath)
and format_string_as_path.parts[0] == "\\"
):
output_path_components = [format_string_as_path.parts[0]]
input_path_components = list(format_string_as_path.parts[1:])
else:
output_path_components = []
input_path_components = list(format_string_as_path.parts)
output_path_components += [
fs.polite_filename(self.format(x)) for x in input_path_components
fs.polite_filename(self.format(part)) for part in input_path_components
]
return str(path_ctor(*output_path_components))
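A small sketch of format_path, which formats each path component separately and sanitizes it with polite_filename; the attribute choice is illustrative:

import spack.spec

s = spack.spec.Spec("zlib@1.2.13")
print(s.format_path("{name}/{versions}"))  # e.g. zlib/1.2.13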
def __str__(self):
if not self._dependencies:
return self.format()
root_str = [self.format()]
sorted_dependencies = sorted(
self.traverse(root=False), key=lambda x: (x.name, x.abstract_hash)
@@ -4614,7 +4591,6 @@ def tree(
recurse_dependencies: bool = True,
status_fn: Optional[Callable[["Spec"], InstallStatus]] = None,
prefix: Optional[Callable[["Spec"], str]] = None,
nondefaults: bool = False,
) -> str:
"""Prints out this spec and its dependencies, tree-formatted
with indentation.
@@ -4638,7 +4614,6 @@ def tree(
installation status
prefix: optional callable that takes a node as an argument and return its
installation prefix
nondefaults: highlight non-default values in output
"""
out = ""
@@ -4687,7 +4662,7 @@ def tree(
out += " " * d
if d > 0:
out += "^"
out += node.format(format, color=color, nondefaults=nondefaults) + "\n"
out += node.format(format, color=color) + "\n"
# Check if we wanted just the first line
if not recurse_dependencies:

View File

@@ -927,6 +927,10 @@ def destroy(self):
shutil.rmtree(self.path)
except FileNotFoundError:
pass
try:
os.remove(self.reference_link)
except FileNotFoundError:
pass
self.created = False
def restage(self):

View File

@@ -142,7 +142,7 @@ def optimization_flags(self, compiler):
# custom spec.
compiler_version = compiler.version
version_number, suffix = archspec.cpu.version_components(compiler.version)
if not version_number or suffix not in ("", "apple"):
if not version_number or suffix:
# Try to deduce the underlying version of the compiler, regardless
# of its name in compilers.yaml. Depending on where this function
# is called we might get either a CompilerSpec or a fully fledged
@@ -155,4 +155,6 @@ def optimization_flags(self, compiler):
# log this and just return compiler.version instead
tty.debug(str(e))
return self.microarchitecture.optimization_flags(compiler.name, str(compiler_version))
return self.microarchitecture.optimization_flags(
compiler.name, compiler_version.dotted_numeric_string
)
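The underlying archspec calls, sketched standalone; version_components splits a custom version such as "4.4.0-special" into its numeric part and suffix, and the flags match the icelake test data later in this diff:

import archspec.cpu

print(archspec.cpu.version_components("4.4.0-special"))  # ('4.4.0', 'special')
target = archspec.cpu.TARGETS["icelake"]
print(target.optimization_flags("gcc", "9.2.0"))
# -march=icelake-client -mtune=icelake-client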

View File

@@ -8,13 +8,16 @@
import pytest
import archspec.cpu
import llnl.util.filesystem as fs
import spack.compilers
import spack.concretize
import spack.operating_systems
import spack.platforms
import spack.target
from spack.spec import ArchSpec, CompilerSpec, Spec
from spack.spec import ArchSpec, Spec
@pytest.fixture(scope="module")
@@ -123,52 +126,60 @@ def test_arch_spec_container_semantic(item, architecture_str):
@pytest.mark.parametrize(
"compiler_spec,target_name,expected_flags",
[
# Check compilers with version numbers from a single toolchain
# Homogeneous compilers
("gcc@4.7.2", "ivybridge", "-march=core-avx-i -mtune=core-avx-i"),
# Check mixed toolchains
("clang@8.0.0", "broadwell", ""),
("clang@3.5", "x86_64", "-march=x86-64 -mtune=generic"),
# Check Apple's Clang compilers
("apple-clang@9.1.0", "x86_64", "-march=x86-64"),
# Mixed toolchain
("clang@8.0.0", "broadwell", ""),
],
)
@pytest.mark.filterwarnings("ignore:microarchitecture specific")
def test_optimization_flags(compiler_spec, target_name, expected_flags, config):
def test_optimization_flags(compiler_spec, target_name, expected_flags, compiler_factory):
target = spack.target.Target(target_name)
compiler = spack.compilers.compilers_for_spec(compiler_spec).pop()
compiler_dict = compiler_factory(spec=compiler_spec, operating_system="")["compiler"]
if compiler_spec == "clang@8.0.0":
compiler_dict["paths"] = {
"cc": "/path/to/clang-8",
"cxx": "/path/to/clang++-8",
"f77": "/path/to/gfortran-9",
"fc": "/path/to/gfortran-9",
}
compiler = spack.compilers.compiler_from_dict(compiler_dict)
opt_flags = target.optimization_flags(compiler)
assert opt_flags == expected_flags
@pytest.mark.parametrize(
"compiler,real_version,target_str,expected_flags",
"compiler_str,real_version,target_str,expected_flags",
[
(CompilerSpec("gcc@=9.2.0"), None, "haswell", "-march=haswell -mtune=haswell"),
("gcc@=9.2.0", None, "haswell", "-march=haswell -mtune=haswell"),
# Check that custom string versions are accepted
(
CompilerSpec("gcc@=10foo"),
"9.2.0",
"icelake",
"-march=icelake-client -mtune=icelake-client",
),
("gcc@=10foo", "9.2.0", "icelake", "-march=icelake-client -mtune=icelake-client"),
# Check that we run version detection (4.4.0 doesn't support icelake)
(
CompilerSpec("gcc@=4.4.0-special"),
"9.2.0",
"icelake",
"-march=icelake-client -mtune=icelake-client",
),
("gcc@=4.4.0-special", "9.2.0", "icelake", "-march=icelake-client -mtune=icelake-client"),
# Check that the special case for Apple's clang is treated correctly
# i.e. it won't try to detect the version again
(CompilerSpec("apple-clang@=9.1.0"), None, "x86_64", "-march=x86-64"),
("apple-clang@=9.1.0", None, "x86_64", "-march=x86-64"),
],
)
def test_optimization_flags_with_custom_versions(
compiler, real_version, target_str, expected_flags, monkeypatch, config
compiler_str,
real_version,
target_str,
expected_flags,
monkeypatch,
mutable_config,
compiler_factory,
):
target = spack.target.Target(target_str)
compiler_dict = compiler_factory(spec=compiler_str, operating_system="redhat6")
mutable_config.set("compilers", [compiler_dict])
if real_version:
monkeypatch.setattr(spack.compiler.Compiler, "get_real_version", lambda x: real_version)
compiler = spack.compilers.compiler_from_dict(compiler_dict["compiler"])
opt_flags = target.optimization_flags(compiler)
assert opt_flags == expected_flags
@@ -203,9 +214,10 @@ def test_satisfy_strict_constraint_when_not_concrete(architecture_tuple, constra
)
@pytest.mark.usefixtures("mock_packages", "config")
@pytest.mark.only_clingo("Fixing the parser broke this test for the original concretizer.")
@pytest.mark.skipif(
str(archspec.cpu.host().family) != "x86_64", reason="tests are for x86_64 uarch ranges"
)
def test_concretize_target_ranges(root_target_range, dep_target_range, result, monkeypatch):
# Monkeypatch so that all concretization is done as if the machine is core2
monkeypatch.setattr(spack.platforms.test.Test, "default", "core2")
spec = Spec(f"a %gcc@10 foobar=bar target={root_target_range} ^b target={dep_target_range}")
with spack.concretize.disable_compiler_existence_check():
spec.concretize()

View File

@@ -19,6 +19,8 @@
import py
import pytest
import archspec.cpu
from llnl.util.filesystem import join_path, visit_directory_tree
import spack.binary_distribution as bindist
@@ -34,7 +36,7 @@
import spack.util.spack_yaml as syaml
import spack.util.url as url_util
import spack.util.web as web_util
from spack.binary_distribution import get_buildfile_manifest
from spack.binary_distribution import CannotListKeys, GenerateIndexError, get_buildfile_manifest
from spack.directory_layout import DirectoryLayout
from spack.paths import test_path
from spack.spec import Spec
@@ -388,11 +390,11 @@ def test_built_spec_cache(mirror_dir):
assert any([r["spec"] == s for r in results])
def fake_dag_hash(spec):
def fake_dag_hash(spec, length=None):
# Generate an arbitrary hash that is intended to be different than
# whatever a Spec reported before (to test actions that trigger when
# the hash changes)
return "tal4c7h4z0gqmixb1eqa92mjoybxn5l6"
return "tal4c7h4z0gqmixb1eqa92mjoybxn5l6"[:length]
@pytest.mark.usefixtures(
@@ -463,50 +465,57 @@ def test_generate_index_missing(monkeypatch, tmpdir, mutable_config):
assert "libelf" not in cache_list
-def test_generate_indices_key_error(monkeypatch, capfd):
+def test_generate_key_index_failure(monkeypatch):
+def list_url(url, recursive=False):
+if "fails-listing" in url:
+raise Exception("Couldn't list the directory")
+return ["first.pub", "second.pub"]
+def push_to_url(*args, **kwargs):
+raise Exception("Couldn't upload the file")
+monkeypatch.setattr(web_util, "list_url", list_url)
+monkeypatch.setattr(web_util, "push_to_url", push_to_url)
+with pytest.raises(CannotListKeys, match="Encountered problem listing keys"):
+bindist.generate_key_index("s3://non-existent/fails-listing")
+with pytest.raises(GenerateIndexError, match="problem pushing .* Couldn't upload"):
+bindist.generate_key_index("s3://non-existent/fails-uploading")
+def test_generate_package_index_failure(monkeypatch, capfd):
def mock_list_url(url, recursive=False):
-print("mocked list_url({0}, {1})".format(url, recursive))
-raise KeyError("Test KeyError handling")
+raise Exception("Some HTTP error")
monkeypatch.setattr(web_util, "list_url", mock_list_url)
test_url = "file:///fake/keys/dir"
-# Make sure generate_key_index handles the KeyError
-bindist.generate_key_index(test_url)
+with pytest.raises(GenerateIndexError, match="Unable to generate package index"):
+bindist.generate_package_index(test_url)
-err = capfd.readouterr()[1]
-assert "Warning: No keys at {0}".format(test_url) in err
-# Make sure generate_package_index handles the KeyError
-bindist.generate_package_index(test_url)
-err = capfd.readouterr()[1]
-assert "Warning: No packages at {0}".format(test_url) in err
+assert (
+f"Warning: Encountered problem listing packages at {test_url}: Some HTTP error"
+in capfd.readouterr().err
+)
def test_generate_indices_exception(monkeypatch, capfd):
def mock_list_url(url, recursive=False):
-print("mocked list_url({0}, {1})".format(url, recursive))
raise Exception("Test Exception handling")
monkeypatch.setattr(web_util, "list_url", mock_list_url)
-test_url = "file:///fake/keys/dir"
+url = "file:///fake/keys/dir"
-# Make sure generate_key_index handles the Exception
-bindist.generate_key_index(test_url)
+with pytest.raises(GenerateIndexError, match=f"Encountered problem listing keys at {url}"):
+bindist.generate_key_index(url)
-err = capfd.readouterr()[1]
-expect = "Encountered problem listing keys at {0}".format(test_url)
-assert expect in err
+with pytest.raises(GenerateIndexError, match="Unable to generate package index"):
+bindist.generate_package_index(url)
-# Make sure generate_package_index handles the Exception
-bindist.generate_package_index(test_url)
-err = capfd.readouterr()[1]
-expect = "Encountered problem listing packages at {0}".format(test_url)
-assert expect in err
+assert f"Encountered problem listing packages at {url}" in capfd.readouterr().err
@pytest.mark.usefixtures("mock_fetch", "install_mockery")
@@ -573,11 +582,20 @@ def test_update_sbang(tmpdir, test_mirror):
uninstall_cmd("-y", "/%s" % new_spec.dag_hash())
-def test_install_legacy_buildcache_layout(install_mockery_mutable_config):
+@pytest.mark.skipif(
+str(archspec.cpu.host().family) != "x86_64",
+reason="test data uses gcc 4.5.0 which does not support aarch64",
+)
+def test_install_legacy_buildcache_layout(
+mutable_config, compiler_factory, install_mockery_mutable_config
+):
"""Legacy buildcache layout involved a nested archive structure
where the .spack file contained a repeated spec.json and another
compressed archive file containing the install tree. This test
makes sure we can still read that layout."""
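# Sketch of the legacy layout the docstring describes (names illustrative):
#   <name>.spack              # outer archive fetched from the mirror
#   ├── <name>.spec.json      # repeated spec metadata
#   └── <name>.tar.gz         # compressed archive of the install tree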
+mutable_config.set(
+"compilers", [compiler_factory(spec="gcc@4.5.0", operating_system="debian6")]
+)
legacy_layout_dir = os.path.join(test_path, "data", "mirrors", "legacy_layout")
mirror_url = "file://{0}".format(legacy_layout_dir)
filename = (


@@ -63,7 +63,8 @@ def build_environment(working_env):
os.environ["SPACK_LINKER_ARG"] = "-Wl,"
os.environ["SPACK_DTAGS_TO_ADD"] = "--disable-new-dtags"
os.environ["SPACK_DTAGS_TO_STRIP"] = "--enable-new-dtags"
os.environ["SPACK_SYSTEM_DIRS"] = "/usr/include /usr/lib"
os.environ["SPACK_SYSTEM_DIRS"] = "/usr/include|/usr/lib"
os.environ["SPACK_MANAGED_DIRS"] = f"{prefix}/opt/spack"
os.environ["SPACK_TARGET_ARGS"] = ""
if "SPACK_DEPENDENCIES" in os.environ:


@@ -9,6 +9,8 @@
import py.path
import pytest
+import archspec.cpu
import llnl.util.filesystem as fs
import spack.build_systems.autotools
@@ -209,6 +211,9 @@ def test_autotools_gnuconfig_replacement_disabled(
assert "gnuconfig version of config.guess" not in f.read()
@pytest.mark.disable_clean_stage_check
+@pytest.mark.skipif(
+str(archspec.cpu.host().family) != "x86_64", reason="test data is specific for x86_64"
+)
def test_autotools_gnuconfig_replacement_no_gnuconfig(self, mutable_database, monkeypatch):
"""
Tests whether a useful error message is shown when patch_config_files is


@@ -15,7 +15,7 @@
import spack.config
import spack.spec
from spack.paths import build_env_path
-from spack.util.environment import SYSTEM_DIRS, set_env
+from spack.util.environment import SYSTEM_DIR_CASE_ENTRY, set_env
from spack.util.executable import Executable, ProcessError
#
@@ -159,7 +159,8 @@ def wrapper_environment(working_env):
SPACK_DEBUG_LOG_ID="foo-hashabc",
SPACK_COMPILER_SPEC="gcc@4.4.7",
SPACK_SHORT_SPEC="foo@1.2 arch=linux-rhel6-x86_64 /hashabc",
SPACK_SYSTEM_DIRS=":".join(SYSTEM_DIRS),
SPACK_SYSTEM_DIRS=SYSTEM_DIR_CASE_ENTRY,
SPACK_MANAGED_DIRS="/path/to/spack-1/opt/spack/*|/path/to/spack-2/opt/spack/*",
SPACK_CC_RPATH_ARG="-Wl,-rpath,",
SPACK_CXX_RPATH_ARG="-Wl,-rpath,",
SPACK_F77_RPATH_ARG="-Wl,-rpath,",
@@ -907,3 +908,108 @@ def test_linker_strips_loopopt(wrapper_environment, wrapper_flags):
result = cc(*(test_args + ["-loopopt=0", "-c", "x.c"]), output=str)
result = result.strip().split("\n")
assert "-loopopt=0" in result
def test_spack_managed_dirs_are_prioritized(wrapper_environment):
# We have two different stores with 5 packages divided over them
pkg1 = "/path/to/spack-1/opt/spack/linux-ubuntu22.04-zen2/gcc-13.2.0/pkg-1.0-abcdef"
pkg2 = "/path/to/spack-1/opt/spack/linux-ubuntu22.04-zen2/gcc-13.2.0/pkg-2.0-abcdef"
pkg3 = "/path/to/spack-2/opt/spack/linux-ubuntu22.04-zen2/gcc-13.2.0/pkg-3.0-abcdef"
pkg4 = "/path/to/spack-2/opt/spack/linux-ubuntu22.04-zen2/gcc-13.2.0/pkg-4.0-abcdef"
pkg5 = "/path/to/spack-2/opt/spack/linux-ubuntu22.04-zen2/gcc-13.2.0/pkg-5.0-abcdef"
variables = {
# cppflags, ldflags from the command line, config or package.py take highest priority
"SPACK_CPPFLAGS": f"-I/usr/local/include -I/external-1/include -I{pkg1}/include",
"SPACK_LDFLAGS": f"-L/usr/local/lib -L/external-1/lib -L{pkg1}/lib "
f"-Wl,-rpath,/usr/local/lib -Wl,-rpath,/external-1/lib -Wl,-rpath,{pkg1}/lib",
# automatic -L, -Wl,-rpath, -I flags from dependencies -- on the spack side they are
# already partitioned into "spack owned prefixes" and "non-spack owned prefixes"
"SPACK_STORE_LINK_DIRS": f"{pkg4}/lib:{pkg5}/lib",
"SPACK_STORE_RPATH_DIRS": f"{pkg4}/lib:{pkg5}/lib",
"SPACK_STORE_INCLUDE_DIRS": f"{pkg4}/include:{pkg5}/include",
"SPACK_LINK_DIRS": "/external-3/lib:/external-4/lib",
"SPACK_RPATH_DIRS": "/external-3/lib:/external-4/lib",
"SPACK_INCLUDE_DIRS": "/external-3/include:/external-4/include",
}
with set_env(SPACK_TEST_COMMAND="dump-args", **variables):
effective_call = (
cc(
# system paths
"-I/usr/include",
"-L/usr/lib",
"-Wl,-rpath,/usr/lib",
# some other externals
"-I/external-2/include",
"-L/external-2/lib",
"-Wl,-rpath,/external-2/lib",
# relative paths are considered "spack managed" since they are in the stage dir
"-I..",
"-L..",
"-Wl,-rpath,..", # pathological but simpler for the test.
# spack store paths
f"-I{pkg2}/include",
f"-I{pkg3}/include",
f"-L{pkg2}/lib",
f"-L{pkg3}/lib",
f"-Wl,-rpath,{pkg2}/lib",
f"-Wl,-rpath,{pkg3}/lib",
"hello.c",
"-o",
"hello",
output=str,
)
.strip()
.split("\n")
)
dash_I = [flag[2:] for flag in effective_call if flag.startswith("-I")]
dash_L = [flag[2:] for flag in effective_call if flag.startswith("-L")]
dash_Wl_rpath = [flag[11:] for flag in effective_call if flag.startswith("-Wl,-rpath")]
assert dash_I == [
# spack owned dirs from SPACK_*FLAGS
f"{pkg1}/include",
# spack owned dirs from command line & automatic flags for deps (in that order)
"..",
f"{pkg2}/include", # from command line
f"{pkg3}/include", # from command line
f"{pkg4}/include", # from SPACK_STORE_INCLUDE_DIRS
f"{pkg5}/include", # from SPACK_STORE_INCLUDE_DIRS
# non-system dirs from SPACK_*FLAGS
"/external-1/include",
# non-system dirs from command line & automatic flags for deps (in that order)
"/external-2/include", # from command line
"/external-3/include", # from SPACK_INCLUDE_DIRS
"/external-4/include", # from SPACK_INCLUDE_DIRS
# system dirs from SPACK_*FLAGS
"/usr/local/include",
# system dirs from command line
"/usr/include",
]
assert (
dash_L
== dash_Wl_rpath
== [
# spack owned dirs from SPACK_*FLAGS
f"{pkg1}/lib",
# spack owned dirs from command line & automatic flags for deps (in that order)
"..",
f"{pkg2}/lib", # from command line
f"{pkg3}/lib", # from command line
f"{pkg4}/lib", # from SPACK_STORE_LINK_DIRS
f"{pkg5}/lib", # from SPACK_STORE_LINK_DIRS
# non-system dirs from SPACK_*FLAGS
"/external-1/lib",
# non-system dirs from command line & automatic flags for deps (in that order)
"/external-2/lib", # from command line
"/external-3/lib", # from SPACK_LINK_DIRS
"/external-4/lib", # from SPACK_LINK_DIRS
# system dirs from SPACK_*FLAGS
"/usr/local/lib",
# system dirs from command line
"/usr/lib",
]
)
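Taken together, the expected lists encode a three-tier priority: Spack-owned store prefixes first, other non-system prefixes next, and system directories last; within each tier, SPACK_*FLAGS entries precede command-line flags, which precede the automatically injected dependency directories. Compressed:

    # tier 1: spack-owned prefixes   SPACK_*FLAGS > CLI > SPACK_STORE_*_DIRS
    # tier 2: other non-system dirs  SPACK_*FLAGS > CLI > SPACK_*_DIRS
    # tier 3: system dirs            SPACK_*FLAGS > CLI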


@@ -448,7 +448,7 @@ def _fail(self, args):
def test_ci_create_buildcache(tmpdir, working_env, config, mock_packages, monkeypatch):
"""Test that create_buildcache returns a list of objects with the correct
keys and types."""
monkeypatch.setattr(spack.ci, "push_mirror_contents", lambda a, b, c: True)
monkeypatch.setattr(spack.ci, "_push_to_build_cache", lambda a, b, c: True)
results = ci.create_buildcache(
None, destination_mirror_urls=["file:///fake-url-one", "file:///fake-url-two"]


@@ -4,6 +4,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import errno
+import json
import os
import shutil
@@ -168,6 +169,25 @@ def test_update_key_index(
assert "index.json" in key_dir_list
def test_buildcache_autopush(tmp_path, install_mockery, mock_fetch):
"""Test buildcache with autopush"""
mirror_dir = tmp_path / "mirror"
mirror_autopush_dir = tmp_path / "mirror_autopush"
mirror("add", "--unsigned", "mirror", mirror_dir.as_uri())
mirror("add", "--autopush", "--unsigned", "mirror-autopush", mirror_autopush_dir.as_uri())
s = Spec("libdwarf").concretized()
# Install and generate build cache index
s.package.do_install()
metadata_file = spack.binary_distribution.tarball_name(s, ".spec.json")
assert not (mirror_dir / "build_cache" / metadata_file).exists()
assert (mirror_autopush_dir / "build_cache" / metadata_file).exists()
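The test drives the Python API directly; the equivalent CLI workflow (paths illustrative) needs no explicit `spack buildcache push`:

    spack mirror add --autopush --unsigned mirror-autopush file:///path/to/mirror
    spack install libdwarf   # the freshly built binaries are pushed automatically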
def test_buildcache_sync(
mutable_mock_env_path,
install_mockery_mutable_config,
@@ -234,10 +254,71 @@ def verify_mirror_contents():
# Use mirror names to specify mirrors
mirror("add", "src", src_mirror_url)
mirror("add", "dest", dest_mirror_url)
mirror("add", "ignored", "file:///dummy/io")
buildcache("sync", "src", "dest")
verify_mirror_contents()
shutil.rmtree(dest_mirror_dir)
def manifest_insert(manifest, spec, dest_url):
manifest[spec.dag_hash()] = [
{
"src": spack.util.url.join(
src_mirror_url,
spack.binary_distribution.build_cache_relative_path(),
spack.binary_distribution.tarball_name(spec, ".spec.json"),
),
"dest": spack.util.url.join(
dest_url,
spack.binary_distribution.build_cache_relative_path(),
spack.binary_distribution.tarball_name(spec, ".spec.json"),
),
},
{
"src": spack.util.url.join(
src_mirror_url,
spack.binary_distribution.build_cache_relative_path(),
spack.binary_distribution.tarball_path_name(spec, ".spack"),
),
"dest": spack.util.url.join(
dest_url,
spack.binary_distribution.build_cache_relative_path(),
spack.binary_distribution.tarball_path_name(spec, ".spack"),
),
},
]
manifest_file = os.path.join(tmpdir.strpath, "manifest_dest.json")
with open(manifest_file, "w") as fd:
test_env = ev.active_environment()
manifest = {}
for spec in test_env.specs_by_hash.values():
manifest_insert(manifest, spec, dest_mirror_url)
json.dump(manifest, fd)
buildcache("sync", "--manifest-glob", manifest_file)
verify_mirror_contents()
shutil.rmtree(dest_mirror_dir)
manifest_file = os.path.join(tmpdir.strpath, "manifest_bad_dest.json")
with open(manifest_file, "w") as fd:
manifest = {}
for spec in test_env.specs_by_hash.values():
manifest_insert(
manifest, spec, spack.util.url.join(dest_mirror_url, "invalid_path")
)
json.dump(manifest, fd)
# Trigger the warning
output = buildcache("sync", "--manifest-glob", manifest_file, "dest", "ignored")
assert "Ignoring unused arguemnt: ignored" in output
verify_mirror_contents()
shutil.rmtree(dest_mirror_dir)
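For reference, each manifest written above maps a DAG hash to src/dest pairs for the two buildcache artifacts; a minimal sketch of the shape (hash and URLs illustrative):

    manifest = {
        "abcdef1234": [  # spec.dag_hash()
            {"src": "file:///src/build_cache/<name>.spec.json",
             "dest": "file:///dest/build_cache/<name>.spec.json"},
            {"src": "file:///src/build_cache/<name>.spack",
             "dest": "file:///dest/build_cache/<name>.spack"},
        ]
    }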
def test_buildcache_create_install(
@@ -365,3 +446,10 @@ def test_push_and_install_with_mirror_marked_unsigned_does_not_require_extra_fla
spec.package.do_uninstall(force=True)
spec.package.do_install(**kwargs)
def test_skip_no_redistribute(mock_packages, config):
specs = list(Spec("no-redistribute-dependent").concretized().traverse())
filtered = spack.cmd.buildcache._skip_no_redistribute_for_public(specs)
assert not any(s.name == "no-redistribute" for s in filtered)
assert any(s.name == "no-redistribute-dependent" for s in filtered)


@@ -26,6 +26,7 @@
import spack.util.gpg
import spack.util.spack_yaml as syaml
import spack.util.url as url_util
+from spack.cmd.ci import FAILED_CREATE_BUILDCACHE_CODE
from spack.schema.buildcache_spec import schema as specfile_schema
from spack.schema.ci import schema as ci_schema
from spack.schema.database_index import schema as db_idx_schema
@@ -47,6 +48,8 @@
@pytest.fixture()
def ci_base_environment(working_env, tmpdir):
os.environ["CI_PROJECT_DIR"] = tmpdir.strpath
os.environ["CI_PIPELINE_ID"] = "7192"
os.environ["CI_JOB_NAME"] = "mock"
@pytest.fixture(scope="function")
@@ -114,13 +117,13 @@ def test_specs_staging(config, tmpdir):
with repo.use_repositories(builder.root):
spec_a = Spec("a").concretized()
-spec_a_label = ci._spec_deps_key(spec_a)
-spec_b_label = ci._spec_deps_key(spec_a["b"])
-spec_c_label = ci._spec_deps_key(spec_a["c"])
-spec_d_label = ci._spec_deps_key(spec_a["d"])
-spec_e_label = ci._spec_deps_key(spec_a["e"])
-spec_f_label = ci._spec_deps_key(spec_a["f"])
-spec_g_label = ci._spec_deps_key(spec_a["g"])
+spec_a_label = ci._spec_ci_label(spec_a)
+spec_b_label = ci._spec_ci_label(spec_a["b"])
+spec_c_label = ci._spec_ci_label(spec_a["c"])
+spec_d_label = ci._spec_ci_label(spec_a["d"])
+spec_e_label = ci._spec_ci_label(spec_a["e"])
+spec_f_label = ci._spec_ci_label(spec_a["f"])
+spec_g_label = ci._spec_ci_label(spec_a["g"])
spec_labels, dependencies, stages = ci.stage_spec_jobs([spec_a])
@@ -776,6 +779,43 @@ def test_ci_rebuild_mock_success(
assert "Cannot copy test logs" in out
def test_ci_rebuild_mock_failure_to_push(
tmpdir,
working_env,
mutable_mock_env_path,
install_mockery_mutable_config,
mock_gnupghome,
mock_stage,
mock_fetch,
mock_binary_index,
ci_base_environment,
monkeypatch,
):
pkg_name = "trivial-install-test-package"
rebuild_env = create_rebuild_env(tmpdir, pkg_name)
# Mock the install script succeeding
def mock_success(*args, **kwargs):
return 0
monkeypatch.setattr(spack.ci, "process_command", mock_success)
# Mock failure to push to the build cache
def mock_push_or_raise(*args, **kwargs):
raise spack.binary_distribution.PushToBuildCacheError(
"Encountered problem pushing binary <url>: <expection>"
)
monkeypatch.setattr(spack.binary_distribution, "push_or_raise", mock_push_or_raise)
with rebuild_env.env_dir.as_cwd():
activate_rebuild_env(tmpdir, pkg_name, rebuild_env)
expect = f"Command exited with code {FAILED_CREATE_BUILDCACHE_CODE}"
with pytest.raises(spack.main.SpackCommandError, match=expect):
ci_cmd("rebuild", fail_on_error=True)
@pytest.mark.skip(reason="fails intermittently and covered by gitlab ci")
def test_ci_rebuild(
tmpdir,
@@ -1063,7 +1103,7 @@ def test_ci_generate_mirror_override(
@pytest.mark.disable_clean_stage_check
-def test_push_mirror_contents(
+def test_push_to_build_cache(
tmpdir,
mutable_mock_env_path,
install_mockery_mutable_config,
@@ -1124,7 +1164,7 @@ def test_push_mirror_contents(
install_cmd("--add", "--keep-stage", json_path)
for s in concrete_spec.traverse():
-ci.push_mirror_contents(s, mirror_url, True)
+ci.push_to_build_cache(s, mirror_url, True)
buildcache_path = os.path.join(mirror_dir.strpath, "build_cache")
@@ -1217,21 +1257,16 @@ def test_push_mirror_contents(
assert len(dl_dir_list) == 2
-def test_push_mirror_contents_exceptions(monkeypatch, capsys):
-def failing_access(*args, **kwargs):
+def test_push_to_build_cache_exceptions(monkeypatch, tmp_path, capsys):
+def _push_to_build_cache(spec, sign_binaries, mirror_url):
raise Exception("Error: Access Denied")
-monkeypatch.setattr(spack.ci, "_push_mirror_contents", failing_access)
+monkeypatch.setattr(spack.ci, "_push_to_build_cache", _push_to_build_cache)
-# Input doesn't matter, as wwe are faking exceptional output
-url = "fakejunk"
-ci.push_mirror_contents(None, url, None)
-captured = capsys.readouterr()
-std_out = captured[0]
-expect_msg = "Permission problem writing to {0}".format(url)
-assert expect_msg in std_out
+# Input doesn't matter, as we are faking exceptional output
+url = tmp_path.as_uri()
+ci.push_to_build_cache(None, url, None)
+assert f"Permission problem writing to {url}" in capsys.readouterr().err
@pytest.mark.parametrize("match_behavior", ["first", "merge"])
@@ -1461,26 +1496,24 @@ def test_ci_rebuild_index(
working_dir = tmpdir.join("working_dir")
mirror_dir = working_dir.join("mirror")
mirror_url = "file://{0}".format(mirror_dir.strpath)
mirror_url = url_util.path_to_file_url(str(mirror_dir))
spack_yaml_contents = """
spack_yaml_contents = f"""
spack:
specs:
- callpath
mirrors:
test-mirror: {0}
ci:
pipeline-gen:
- submapping:
- match:
- patchelf
build-job:
tags:
- donotcare
image: donotcare
""".format(
mirror_url
)
specs:
- callpath
mirrors:
test-mirror: {mirror_url}
ci:
pipeline-gen:
- submapping:
- match:
- patchelf
build-job:
tags:
- donotcare
image: donotcare
"""
filename = str(tmpdir.join("spack.yaml"))
with open(filename, "w") as f:


@@ -123,17 +123,18 @@ def test_root_and_dep_match_returns_root(mock_packages, mutable_mock_env_path):
@pytest.mark.parametrize(
"arg,config", [("--reuse", True), ("--fresh", False), ("--reuse-deps", "dependencies")]
"arg,conf", [("--reuse", True), ("--fresh", False), ("--reuse-deps", "dependencies")]
)
-def test_concretizer_arguments(mutable_config, mock_packages, arg, config):
+def test_concretizer_arguments(mutable_config, mock_packages, arg, conf):
"""Ensure that ConfigSetAction is doing the right thing."""
spec = spack.main.SpackCommand("spec")
assert spack.config.get("concretizer:reuse", None) is None
assert spack.config.get("concretizer:reuse", None, scope="command_line") is None
spec(arg, "zlib")
assert spack.config.get("concretizer:reuse", None) == config
assert spack.config.get("concretizer:reuse", None) == conf
assert spack.config.get("concretizer:reuse", None, scope="command_line") == conf
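The added scoped assertion pins down where `ConfigSetAction` writes: the `command_line` scope, which takes precedence over file-based scopes without mutating them. A minimal sketch of that semantics (assuming a writable config):

    import spack.config

    spack.config.set("concretizer:reuse", True, scope="command_line")
    assert spack.config.get("concretizer:reuse") is True                        # merged view
    assert spack.config.get("concretizer:reuse", scope="command_line") is True  # exact scope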
def test_use_buildcache_type():


@@ -112,10 +112,10 @@ def test_compiler_find_no_apple_gcc(no_compilers_yaml, working_env, mock_executa
@pytest.mark.regression("37996")
def test_compiler_remove(mutable_config, mock_packages):
"""Tests that we can remove a compiler from configuration."""
assert spack.spec.CompilerSpec("gcc@=4.5.0") in spack.compilers.all_compiler_specs()
args = spack.util.pattern.Bunch(all=True, compiler_spec="gcc@4.5.0", add_paths=[], scope=None)
assert spack.spec.CompilerSpec("gcc@=9.4.0") in spack.compilers.all_compiler_specs()
args = spack.util.pattern.Bunch(all=True, compiler_spec="gcc@9.4.0", add_paths=[], scope=None)
spack.cmd.compiler.compiler_remove(args)
assert spack.spec.CompilerSpec("gcc@=4.5.0") not in spack.compilers.all_compiler_specs()
assert spack.spec.CompilerSpec("gcc@=9.4.0") not in spack.compilers.all_compiler_specs()
@pytest.mark.regression("37996")
@@ -124,10 +124,10 @@ def test_removing_compilers_from_multiple_scopes(mutable_config, mock_packages):
site_config = spack.config.get("compilers", scope="site")
spack.config.set("compilers", site_config, scope="user")
assert spack.spec.CompilerSpec("gcc@=4.5.0") in spack.compilers.all_compiler_specs()
args = spack.util.pattern.Bunch(all=True, compiler_spec="gcc@4.5.0", add_paths=[], scope=None)
assert spack.spec.CompilerSpec("gcc@=9.4.0") in spack.compilers.all_compiler_specs()
args = spack.util.pattern.Bunch(all=True, compiler_spec="gcc@9.4.0", add_paths=[], scope=None)
spack.cmd.compiler.compiler_remove(args)
assert spack.spec.CompilerSpec("gcc@=4.5.0") not in spack.compilers.all_compiler_specs()
assert spack.spec.CompilerSpec("gcc@=9.4.0") not in spack.compilers.all_compiler_specs()
@pytest.mark.not_on_windows("Cannot execute bash script on Windows")
@@ -175,7 +175,9 @@ def test_compiler_find_mixed_suffixes(
assert "clang@11.0.0" in output
assert "gcc@8.4.0" in output
config = spack.compilers.get_compiler_config("site", False)
config = spack.compilers.get_compiler_config(
no_compilers_yaml, scope="site", init_config=False
)
clang = next(c["compiler"] for c in config if c["compiler"]["spec"] == "clang@=11.0.0")
gcc = next(c["compiler"] for c in config if c["compiler"]["spec"] == "gcc@=8.4.0")
@@ -210,7 +212,9 @@ def test_compiler_find_prefer_no_suffix(no_compilers_yaml, working_env, compiler
assert "clang@11.0.0" in output
assert "gcc@8.4.0" in output
config = spack.compilers.get_compiler_config("site", False)
config = spack.compilers.get_compiler_config(
no_compilers_yaml, scope="site", init_config=False
)
clang = next(c["compiler"] for c in config if c["compiler"]["spec"] == "clang@=11.0.0")
assert clang["paths"]["cc"] == str(compilers_dir / "clang")
@@ -229,7 +233,9 @@ def test_compiler_find_path_order(no_compilers_yaml, working_env, compilers_dir)
compiler("find", "--scope=site")
config = spack.compilers.get_compiler_config("site", False)
config = spack.compilers.get_compiler_config(
no_compilers_yaml, scope="site", init_config=False
)
gcc = next(c["compiler"] for c in config if c["compiler"]["spec"] == "gcc@=8.4.0")
assert gcc["paths"] == {
"cc": str(new_dir / "gcc-8"),


@@ -20,7 +20,10 @@
install = SpackCommand("install")
env = SpackCommand("env")
-pytestmark = pytest.mark.not_on_windows("does not run on windows")
+pytestmark = [
+pytest.mark.not_on_windows("does not run on windows"),
+pytest.mark.disable_clean_stage_check,
+]
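# pytest applies module-level ``pytestmark`` marks to every test in this file,
# which is why the per-test ``disable_clean_stage_check`` decorators below are dropped.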
def test_dev_build_basics(tmpdir, install_mockery):
@@ -41,7 +44,6 @@ def test_dev_build_basics(tmpdir, install_mockery):
assert os.path.exists(str(tmpdir))
-@pytest.mark.disable_clean_stage_check
def test_dev_build_before(tmpdir, install_mockery):
spec = spack.spec.Spec(f"dev-build-test-install@0.0.0 dev_path={tmpdir}").concretized()
@@ -58,7 +60,6 @@ def test_dev_build_before(tmpdir, install_mockery):
assert not os.path.exists(spec.prefix)
-@pytest.mark.disable_clean_stage_check
def test_dev_build_until(tmpdir, install_mockery):
spec = spack.spec.Spec(f"dev-build-test-install@0.0.0 dev_path={tmpdir}").concretized()
@@ -76,7 +77,6 @@ def test_dev_build_until(tmpdir, install_mockery):
assert not spack.store.STORE.db.query(spec, installed=True)
-@pytest.mark.disable_clean_stage_check
def test_dev_build_until_last_phase(tmpdir, install_mockery):
# Test that we ignore the last_phase argument if it is already last
spec = spack.spec.Spec(f"dev-build-test-install@0.0.0 dev_path={tmpdir}").concretized()
@@ -96,7 +96,6 @@ def test_dev_build_until_last_phase(tmpdir, install_mockery):
assert os.path.exists(str(tmpdir))
-@pytest.mark.disable_clean_stage_check
def test_dev_build_before_until(tmpdir, install_mockery, capsys):
spec = spack.spec.Spec(f"dev-build-test-install@0.0.0 dev_path={tmpdir}").concretized()
@@ -134,7 +133,6 @@ def mock_module_noop(*args):
pass
-@pytest.mark.disable_clean_stage_check
def test_dev_build_drop_in(tmpdir, mock_packages, monkeypatch, install_mockery, working_env):
monkeypatch.setattr(os, "execvp", print_spack_cc)
monkeypatch.setattr(spack.build_environment, "module", mock_module_noop)

Some files were not shown because too many files have changed in this diff.