Compare commits

...

195 Commits

Author SHA1 Message Date
Harmen Stoppels
6593d22c4e spack.modules.common: pass spec to SetupContext (#40886)
Currently module globals aren't set before running
`setup_[dependent_]run_environment` to compute environment modifications
for module files. This commit fixes that.
2023-11-04 20:42:47 +00:00
Massimiliano Culpo
f51dad976e hdf5-vol-async: better specify dependency condition (#40882) 2023-11-04 20:31:52 +01:00
Cameron Rutherford
ff8cd597e0 hiop: fix cuda constraints (#40875) 2023-11-04 13:09:59 -05:00
eugeneswalker
fd22d109a6 sundials +sycl: add cxxflags=-fsycl via flag_handler (#40845) 2023-11-04 08:55:19 -05:00
zv-io
88ee3a0fba linux-headers: support multiple versions (#40877)
The download URL for linux-headers was hardcoded to 4.x;
we need to derive the correct URL from the version number.
2023-11-04 12:21:12 +01:00
Massimiliano Culpo
f50377de7f environment: solve one spec per child process (#40876)
Looking at the memory profiles of concurrent solves
for environments with unify:false, it seems memory
only ramps up.

This exchange on the potassco mailing list:
 https://sourceforge.net/p/potassco/mailman/potassco-users/thread/b55b5b8c2e8945409abb3fa3c935c27e%40lohn.at/#msg36517698

seems to suggest that clingo doesn't release memory
until the end of the application.

Since with unify:false we distribute work across processes,
we pass maxtasksperchild=1 here, so memory is reclaimed
after each solve.
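A minimal sketch of the idea using only the standard library (`solve_one_spec` and the spec list are hypothetical stand-ins for Spack's actual worker):

```python
import multiprocessing

def solve_one_spec(spec_str):
    # stand-in for running the clingo-based concretizer on one spec
    return spec_str

if __name__ == "__main__":
    abstract_specs = ["hdf5+mpi", "zlib"]
    # maxtasksperchild=1: each solve gets a fresh child process, so the
    # memory clingo holds is returned to the OS when the child exits
    with multiprocessing.Pool(processes=2, maxtasksperchild=1) as pool:
        results = pool.map(solve_one_spec, abstract_specs)
```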
2023-11-03 23:10:42 +00:00
Adam J. Stewart
8e96d3a051 GDAL: add v3.7.3 (#40865) 2023-11-03 22:59:52 +01:00
Richarda Butler
8fc1ba2d7a Bugfix: propagation of multivalued variants (#39833)
Don't encourage use of the default value when propagating a multivalued variant.
2023-11-03 12:09:39 -07:00
Massimiliano Culpo
668a5b45e5 clingo-bootstrap: force setuptools through variant (#40866) 2023-11-03 16:53:45 +01:00
Andrew W Elble
70171d6caf squashfuse: remove url_for_version (#40862)
0.5.0 tarball now has the 'v' removed from the name
2023-11-03 10:34:25 -04:00
Thomas-Ulrich
0f1898c82a xdmf3: fix compilation with hdf5@1.10 and above (#37551) 2023-11-03 14:23:49 +01:00
Massimiliano Culpo
db16335aec ASP-based solver: fix for unsplittable providers (#40859)
Some providers must provide virtuals "together", i.e.
if they provide one virtual of a set, they must also be
the providers of the others.

There was a bug though, where we were not checking if
the other virtuals in the set were needed at all in
the DAG.

This commit fixes the bug.
2023-11-03 12:56:37 +01:00
Harmen Stoppels
3082ce6a22 oci parsing: make image name case insensitive (#40858) 2023-11-03 12:50:30 +01:00
George Young
fe0cf80e05 py-spython: updating to @0.3.1 (#40839)
* py-spython: updating to @0.3.1

* Adding `when=` for py-semver

---------

Co-authored-by: LMS Bioinformatics <bioinformatics@lms.mrc.ac.uk>
2023-11-03 05:07:58 -06:00
Thomas-Ulrich
a5e6097af7 fix typo in packaging guide (#40853) 2023-11-03 09:56:13 +01:00
eugeneswalker
d4a1618e07 tau: update 2.33 hash, add syscall variant (#40851)
Co-authored-by: wspear <wjspear@gmail.com>
2023-11-03 07:58:00 +01:00
Veselin Dobrev
48a21970d1 MFEM: add logic to find CUDA math-libs when using HPC SDK installation (#40815)
* mfem: add logic to find CUDA math-libs when using HPC SDK installation

* [@spackbot] updating style on behalf of v-dobrev
2023-11-02 20:19:11 -07:00
Martin Aumüller
864d47043c qt-svg: new package for Qt6 SVG module (#40834)
enables loading of SVG icons by providing the plugin used by qt-base
2023-11-02 17:05:54 -07:00
Martin Aumüller
c2af2bcac3 qt-*: add v6.5.3 & v6.6.0 (#40833) 2023-11-02 19:52:15 -04:00
Martin Aumüller
7c79c744b6 libtheora: fix build on macos (#40840)
* libtheora: regenerate Makefile.in during autoreconf

The patch to inhibit running of configure would exit autogen.sh so early
that it did not yet run autoconf/automake/....
Instead of patching autogen.sh, just pass -V as an argument: it is
passed on to configure and makes configure print its version instead of
configuring the build tree.

Also drop arguments from autogen.sh, as they are unused when configure
does not run.

* libtheora: fix build on macos

Apply upstream patches in order to avoid unresolved symbols during building of libtheoraenc.
These patches require re-running automake/autoconf/...

Error messages:
libtool: link: /Users/ma/git/spack/lib/spack/env/clang/clang -dynamiclib  -o .libs/libtheoraenc.1.dylib  .libs/apiwrapper.o .libs/fragment.o .libs/idct.o .libs/internal.o .libs/state.o .libs/quant.o .libs/analyze.o .libs/fdct.o .libs/encfrag.o .libs/encapiwrapper.o .libs/encinfo.o .libs/encode.o .libs/enquant.o .libs/huffenc.o .libs/mathops.o .libs/mcenc.o .libs/rate.o .libs/tokenize.o   -L/opt/spack/darwin-sonoma-m1/apple-clang-15.0.0/libtheora-1.1.1-uflq3jvysewnrmlj5x5tvltst65ho3v4/lib -logg -lm  -Wl,-exported_symbols_list -Wl,/var/folders/zv/qr55pmd9065glf0mcltpx5bm000102/T/ma/spack-stage/spack-stage-libtheora-1.1.1-uflq3jvysewnrmlj5x5tvltst65ho3v4/spack-src/lib/theoraenc.exp   -install_name  /opt/spack/darwin-sonoma-m1/apple-clang-15.0.0/libtheora-1.1.1-uflq3jvysewnrmlj5x5tvltst65ho3v4/lib/libtheoraenc.1.dylib -compatibility_version 3 -current_version 3.2
ld: warning: search path '/opt/spack/darwin-sonoma-m1/apple-clang-15.0.0/libtheora-1.1.1-uflq3jvysewnrmlj5x5tvltst65ho3v4/lib' not found
ld: Undefined symbols:
  _th_comment_add, referenced from:
      _theora_comment_add in apiwrapper.o
  _th_comment_add_tag, referenced from:
      _theora_comment_add_tag in apiwrapper.o
  _th_comment_clear, referenced from:
      _theora_comment_clear in apiwrapper.o
  _th_comment_init, referenced from:
      _theora_comment_init in apiwrapper.o
  _th_comment_query, referenced from:
      _theora_comment_query in apiwrapper.o
  _th_comment_query_count, referenced from:
      _theora_comment_query_count in apiwrapper.o

* libtheora: add git versions

the name "stable" was chosen for the theora-1.1 branch so that it sorts between 1.1.x and master

* libtheora: remove unused patch

thanks to @michaelkuhn for noticing
2023-11-03 00:08:22 +01:00
garylawson
94d143763e Update Anaconda3 -- add version 2023.09-0 for x86_64, aarch64, and ppc64le (#40622)
* Add 2023.09-0 for x86_64, aarch64, and ppc64le
   Extend the anaconda3 package.py to support aarch64 and ppc64le,
   and add the latest version of anaconda3 to each new platform, including the existing x86_64.
* formatting
2023-11-02 16:42:44 -06:00
Vanessasaurus
6f9425c593 Automated deployment to update package flux-sched 2023-10-18 (#40596)
Co-authored-by: github-actions <github-actions@users.noreply.github.com>
Co-authored-by: Tom Scogland <scogland1@llnl.gov>
2023-11-02 13:16:39 -07:00
Nicolas Cornu
05953e4491 highfive: 2.8.0 (#40837)
Co-authored-by: Nicolas Cornu <me@alkino.fr>
2023-11-02 14:03:44 -06:00
Sergey Kosukhin
6b236f130c eccodes: rename variant 'definitions' to 'extra_definitions' (#36186) 2023-11-02 13:28:31 -06:00
Greg Becker
fa08de669e bugfix: computing NodeID2 in requirement node_flag_source (#40846) 2023-11-02 20:17:54 +01:00
Seth R. Johnson
c2193b5470 py-pint: new versions 0.21, 0.22 (#40745)
* py-pint: new versions 0.21, 0.22

* Address feedback

* Fix dumb typo

* Add typing extension requirement
2023-11-02 14:13:19 -05:00
Chris Richardson
b5b94d89d3 Update to latest version (#40778) 2023-11-02 14:07:44 -05:00
vucoda
dd57b58c2f py-pyside2: fix to build with newer llvm and to use spack install headers (#40544)
* Fix py-pyside2 to build with newer llvm and to use spack libglx and libxcb headers where system headers are missing

pyside2 needs LLVM_INSTALL_DIR to be set when using llvm@11:, and expects system headers for libglx and libxcb; it won't build otherwise.

* Fix styling

* remove raw string type

* Update var/spack/repos/builtin/packages/py-pyside2/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-11-02 14:03:18 -05:00
Chris Richardson
29a30963b3 Fixes to ffcx @0.6.0 (#40787) 2023-11-02 14:02:07 -05:00
Jordan Ogas
3447e425f0 add charliecloud 0.35 (#40842)
* add charliecloud 0.35
* fix linter rage
* fix linter rage?
2023-11-02 11:23:49 -07:00
Juan Miguel Carceller
518da16833 Gaudi: Add a few versions and a dependency on tbb after 37.1 (#40802)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2023-11-02 11:15:27 -07:00
Paul R. C. Kent
4633327e60 llvm: add 17.0.2-4 (#40820) 2023-11-02 17:00:35 +01:00
Harmen Stoppels
6930176ac6 clingo ^python@3.12: revisit distutils fix (#40844) 2023-11-02 16:48:21 +01:00
Adam J. Stewart
bb64b22066 PyTorch: build with external sleef (#40763)
Co-authored-by: adamjstewart <adamjstewart@users.noreply.github.com>
2023-11-02 16:09:49 +01:00
Harmen Stoppels
8b0ab67de4 depfile: deal with empty / non-concrete env (#40816) 2023-11-02 16:04:35 +01:00
Satish Balay
dbf21bf843 exago: update petsc dependency (#40831) 2023-11-02 07:29:37 -07:00
Harmen Stoppels
af3a29596e go/rust bootstrap: no versions if unsupported arch (#40841)
The lookup in a dictionary caused a KeyError at package load time for
unsupported architectures such as i386 and big-endian ppc.
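A toy sketch of the failure mode and the guard (the table contents are made up):

```python
# made-up bootstrap table keyed by platform-arch
VERSIONS = {"linux-amd64": ["1.21.3"], "linux-arm64": ["1.21.3"]}

def versions_for(arch):
    # VERSIONS[arch] raised KeyError on e.g. "linux-386";
    # .get() with a default simply yields no versions instead
    return VERSIONS.get(arch, [])

print(versions_for("linux-386"))  # [] -> package loads, just with no versions
```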
2023-11-02 08:13:13 -06:00
Harmen Stoppels
80944d22f7 spack external find: fix multi-arch troubles (#33973) 2023-11-02 09:45:31 +01:00
Tamara Dahlgren
f56efaff3e env remove: add a unit test removing two environments (#40814) 2023-11-02 08:51:08 +01:00
Martin Aumüller
83bb2002b4 openscenegraph: support more file formats (#39897) 2023-11-02 08:41:03 +01:00
Massimiliano Culpo
16fa3b9f07 Cherry-picking virtual dependencies (#35322)
This PR makes it possible to select only a subset of virtual dependencies from a spec that _may_ provide more. To select providers, a syntax to specify edge attributes is introduced:
```
hdf5 ^[virtuals=mpi] mpich
```
With that syntax we can concretize specs like:
```console
$ spack spec strumpack ^[virtuals=mpi] intel-parallel-studio+mkl ^[virtuals=lapack] openblas
```

On `develop` this would currently fail with:
```console
$ spack spec strumpack ^intel-parallel-studio+mkl ^openblas
==> Error: Spec cannot include multiple providers for virtual 'blas'
    Requested 'intel-parallel-studio' and 'openblas'
```

In package recipes, virtual specs that are declared in the same `provides` directive need to be provided _together_. This means that e.g. `openblas`, which has:
```python
provides("blas", "lapack")
```
needs to provide both `lapack` and `blas` when requested to provide at least one of them.

## Additional notes

This capability is needed to model compilers. Assuming that languages are treated like virtual dependencies, we might want e.g. to use LLVM to compile C/C++ and GNU GCC to compile Fortran. This can be accomplished by the following[^1]:
```
hdf5 ^[virtuals=c,cxx] llvm ^[virtuals=fortran] gcc
```

[^1]: We plan to add some syntactic sugar around this syntax, and reuse the `%` sigil to avoid having a lot of boilerplate around compilers.

Modifications:
- [x] Add syntax to interact with edge attributes from spec literals
- [x] Add concretization logic to be able to cherry-pick virtual dependencies
- [x] Extend semantic of the `provides` directive to express when virtuals need to be provided together
- [x] Add unit-tests and documentation
2023-11-01 23:35:23 -07:00
Thomas Madlener
6cd2241e49 edm4hep: Add 0.10.1 tag and update maintainers (#40829)
* edm4hep: add latest tag
* edm4hep: Add myself as maintainer
2023-11-01 23:04:00 -06:00
snehring
6af45230b4 ceres-solver: adding version 2.2.0 (#40824)
* ceres-solver: adding version 2.2.0
* ceres-solver: adding suite-sparse dep
2023-11-01 17:47:55 -07:00
snehring
a8285f0eec vcftools: add v0.1.16 (#40805)
* vcftools: adding new version 0.1.16

* Update var/spack/repos/builtin/packages/vcftools/package.py

Co-authored-by: Alec Scott <alec@bcs.sh>

---------

Co-authored-by: Alec Scott <alec@bcs.sh>
2023-11-01 16:33:12 -07:00
Adam J. Stewart
e7456e1aab py-matplotlib: add v3.8.1 (#40819) 2023-11-01 16:33:00 -07:00
Jeremy L Thompson
dd636dd3fb libCEED v0.12.0, Ratel v0.3.0 (#40822)
* ratel - add v0.3.0
* libceed - add version 0.12.0
2023-11-01 16:29:18 -07:00
Mikael Simberg
a73c95b734 pika: Add 0.20.0 (#40817) 2023-11-01 17:19:56 -06:00
Miroslav Stoyanov
33b355a085 heffte: add v2.4.0 (#40741)
* update the heffte versions

* remove obsolete patch files

* update testing

* style

* restore version (unknown reason)

* restore old patch

* change the syntax

* [@spackbot] updating style on behalf of mkstoyanov

* missed one

* style
2023-11-01 16:54:11 -06:00
Satish Balay
f7630f265b pflotran: add version 5.0.0 (#40828)
alquimia: add version 1.1.0
And fix alquimia@master
2023-11-01 15:16:04 -07:00
dependabot[bot]
9744e86d02 build(deps): bump black in /.github/workflows/style (#40681)
Bumps [black](https://github.com/psf/black) from 23.9.1 to 23.10.1.
- [Release notes](https://github.com/psf/black/releases)
- [Changelog](https://github.com/psf/black/blob/main/CHANGES.md)
- [Commits](https://github.com/psf/black/compare/23.9.1...23.10.1)

---
updated-dependencies:
- dependency-name: black
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-01 14:20:29 -07:00
Harmen Stoppels
ff6bbf03a1 changelog: add 0.20.2 and 0.20.3 changes (#40818) 2023-11-01 22:09:11 +01:00
Cameron Rutherford
0767c8673e hiop: fix cuda constraints and add tag to versions (#40721)
* hiop: fix cuda constraints and add tag to versions

* hiop: fix styling
2023-11-01 13:21:14 -07:00
Satish Balay
9aa75eaf65 superlu-dist: -std=c99 prevents usage of putenv() (#40729) 2023-11-01 12:44:13 -07:00
Weiqun Zhang
73f012b999 amrex: add v23.11 (#40821) 2023-11-01 12:38:02 -07:00
Satish Balay
c7a8a83cbf petsc, py-petsc4py: add v3.20.1 (#40794) 2023-11-01 12:37:53 -07:00
Satish Balay
5f87db98ea butterflypack: add version 2.4.0 (#40826) 2023-11-01 12:20:13 -07:00
Brian Van Essen
d05dc8a468 LBANN: add explicit variant for shared builds (#40808) 2023-11-01 13:18:57 -06:00
wspear
afa2a2566e Add 2.33 to tau (#40810) 2023-11-01 12:10:35 -07:00
Thomas Madlener
581f45b639 podio: Add latest tags and variants and update dependencies accordingly (#40182)
* Make sure sio is in dependent build env for podio

* podio: Fix likely(?) typo in root dependency

* podio: Add latest tag and new variants + dependencies

* podio: Add v00-16-07 tag

* podio: Fix dependencies flagged by package audit

* podio: Simplify root dependency

* podio: Add 0.17.1 tag
2023-11-01 13:44:11 -05:00
Bilal Mirza
92780a9af6 fix: sentence framing (#40809) 2023-11-01 11:41:37 +01:00
Harmen Stoppels
2ea8e6c820 Executable.add_default_arg: multiple (#40801) 2023-11-01 09:14:37 +01:00
Harmen Stoppels
ac976a4bf4 Parser: fix ambiguity with whitespace in version ranges (#40344)
Allowing white space around `:` in version ranges introduces an ambiguity:

```
a@1: b
```

parses as `a@1:b` but should really be parsed as two separate specs `a@1:` and `b`.

With white space disallowed around `:` in ranges, the ambiguity is resolved.
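Not Spack's actual parser, just a toy regex to illustrate how disallowing whitespace around `:` resolves the ambiguity:

```python
import re

# a spec name optionally followed by @<ver>[:<ver>], with no
# whitespace permitted inside the version range
SPEC = re.compile(r"[A-Za-z_][\w.-]*(?:@[\w.]*:?[\w.]*)?")

print(SPEC.findall("a@1: b"))  # ['a@1:', 'b'] -> two specs
print(SPEC.findall("a@1:b"))   # ['a@1:b']     -> one spec with a range
```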
2023-11-01 09:08:57 +01:00
Harmen Stoppels
e5f3ffc04f SetupContext.get_env_modifications fixes and documentation (#40683)
Call setup_dependent_run_environment on both link and run edges,
instead of only run edges, which restores old behavior.

Move setup_build_environment into get_env_modifications

Also call setup_run_environment on direct build deps, since their run
environment has to be set up.
2023-11-01 08:47:15 +01:00
Harmen Stoppels
7aaed4d6f3 Revert python build isolation & setuptools source install (#40796)
* Revert "Improve build isolation in PythonPipBuilder (#40224)"

This reverts commit 0f43074f3e.

* Revert "py-setuptools: sdist + rpath patch backport (#40205)"

This reverts commit 512e41a84a.
2023-11-01 07:10:34 +01:00
Tamara Dahlgren
f5d717cd5a Fix env remove indentation (#40811) 2023-11-01 00:08:46 -06:00
Sreenivasa Murthy Kolam
cb018fd7eb Enable address sanitizer in rocm's llvm-amdgpu package. (#40570)
* enable address sanitizer in rocm's llvm-amdgpu package
* remove references to 5.7.0 for now
* fix style error
* address review comments
2023-10-31 19:09:40 -06:00
Luisa Burini
e5cebb6b6f fix create/remove env with invalid spack.yaml (#39898)
* fix create/remove env with invalid spack.yaml
* fix isort error
* fix env ident unittests
* Fix pull request points
2023-10-31 15:39:42 -07:00
Patrick Bridges
4738b45fb1 beatnik: small changes for v1.0 (#40726)
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2023-10-31 22:28:48 +01:00
Harmen Stoppels
343ed8a3fa force color in subshell if not SPACK_COLOR (#40782) 2023-10-31 22:27:00 +01:00
Adam J. Stewart
58e5315089 PyTorch: build with external gloo (#40759)
* PyTorch: build with external gloo

* Fix gloo compilation with GCC 11

* undeprecate

* py-torch+cuda+gloo requires gloo+cuda
2023-10-31 16:25:24 -05:00
Samuel Li
26649e71f9 Update sperr (#40626)
* update SPERR package
* remove blank line
* update SPERR to be version 0.7.1
* a little clean up
* bound versions that require zstd
* add USE_ZSTD
* add libpressio-sperr version upbound
* update libpressio-sperr
* address review comments
* improve format

---------

Co-authored-by: Samuel Li <Sam@Navada>
Co-authored-by: Samuel Li <sam@cisl-m121a>
2023-10-31 13:53:09 -07:00
Peter Scheibel
2f2d9ae30d Fix cflags requirements (#40639) 2023-10-31 21:19:12 +01:00
jalcaraz
f9c0a15ba0 TAU: Added dyninst variant (#40790)
* Added dyninst variant

* Added dyninst variant and fixed some issues

* Update package.py

* Removed whitespace

* Update package.py

* Update package.py

* Fixed conflicting version

---------

Co-authored-by: eugeneswalker <38933153+eugeneswalker@users.noreply.github.com>
2023-10-31 13:28:16 -06:00
Sreenivasa Murthy Kolam
14cb923dd8 add new recipe for rocm packages- amdsmi (#39270)
* add new recipe for rocm packages- amdsmilib
* update tags, maintainers list
2023-10-31 10:18:32 -07:00
Massimiliano Culpo
544a121248 Fix interaction of spec literals that propagate variants with unify:false (#40789)
* Add tests to ensure variant propagation syntax can round-trip to/from string

* Add a regression test for the bug in 35298

* Reconstruct the spec constraints in the worker process

Specs do not preserve any information on propagation of variants
when round-tripping to/from JSON (which we use to pickle), but
preserve it when round-tripping to/from strings.

Therefore, we pass a spec literal to the worker and reconstruct
the Spec objects there.
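A hedged sketch of the round-trip (the worker signature is assumed, not the actual one):

```python
from spack.spec import Spec

def _concretize_task(spec_str: str):
    # the string form keeps propagation markers such as "pkg ++shared",
    # which the JSON (pickle) form loses, so rebuild the Spec here
    spec = Spec(spec_str)
    return spec.concretized()
```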
2023-10-31 17:50:13 +01:00
Harmen Stoppels
cd6bb9e159 spack checksum: improve signature (#40800) 2023-10-31 16:52:53 +01:00
Greg Sjaardema
e420a685a9 Seacas: Update for latest seacas release version (#40698) 2023-10-31 09:38:20 -06:00
Harmen Stoppels
40a5c1ff2d spack checksum: fix error when initial filter yields empty list (#40799) 2023-10-31 15:08:41 +01:00
Harmen Stoppels
6933e1c3cb ci: bump tutorial image and toolchain (#40795) 2023-10-31 12:58:33 +01:00
Harmen Stoppels
160bfd881d tutorial: replace zlib -> gmake to avoid deprecated versions (#40769) 2023-10-31 10:04:53 +01:00
G-Ragghianti
81997ae6d6 Added NVML and cgroup support to the slurm package (#40638)
* Added NVML support to the slurm package
* dbus package is required for cgroup support
* Fixing formatting
* Style fix
* Added PAM support
* Added ROCm SMI support
2023-10-30 19:12:09 -07:00
Todd Gamblin
702a2250fa docs: update license() docs with examples and links (#40598)
- [x] Add links to information people are going to want to know when adding license
      information to their packages (namely OSI licenses and SPDX identifiers).
- [x] Update the packaging docs for `license()` with Spack as an example for `when=`.
      After all, it's a dual-licensed package that changed once in the past.
- [x] Add link to https://spdx.org/licenses/ in the `spack create` boilerplate as well.
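A sketch of what such a declaration looks like in a recipe (the package and the exact version split are illustrative):

```python
from spack.package import *

class Example(Package):
    """Hypothetical package that was relicensed at v0.12."""

    license("LGPL-2.1-only", when="@:0.11")
    license("Apache-2.0 OR MIT", when="@0.12:")
```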
2023-10-30 18:54:31 -07:00
Freifrau von Bleifrei
3a0f9ce226 selalib: add (sca)lapack dependency (#40667)
* selalib: add (sca)lapack dependency
* selalib: change when "-mpi" to "~mpi"
2023-10-30 18:28:52 -07:00
Thomas Madlener
a095c8113d dd4hep: Add tag for version 1.27 (#40776) 2023-10-30 17:55:33 -07:00
Larry Knox
4ef433b64d Add hdf5 version 1.14.3. (#40786)
Add hdf5 version 1.10.11.
Update version condition for adding h5pfc->h5fc symlink.  File h5pfc
exists in versions 1.10.10 and 1.10.22.
2023-10-30 17:22:55 -06:00
dependabot[bot]
f228c7cbcc build(deps): bump black from 23.9.1 to 23.10.1 in /lib/spack/docs (#40680)
Bumps [black](https://github.com/psf/black) from 23.9.1 to 23.10.1.
- [Release notes](https://github.com/psf/black/releases)
- [Changelog](https://github.com/psf/black/blob/main/CHANGES.md)
- [Commits](https://github.com/psf/black/compare/23.9.1...23.10.1)

---
updated-dependencies:
- dependency-name: black
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-31 00:11:53 +01:00
MatthewLieber
e9ca16ab07 adding sha for OMB 7.3 release (#40784)
Co-authored-by: Matt Lieber <lieber.31@osu.edu>
2023-10-30 16:01:48 -07:00
Andrew W Elble
47ac2b8d09 squashfuse: add version 0.5.0 (#40775) 2023-10-30 11:33:22 -06:00
Harmen Stoppels
b1b8500eba ci: print colored specs in concretization progress (#40711) 2023-10-30 15:29:27 +01:00
Harmen Stoppels
060a1ff2f3 tty: flush immediately (#40774) 2023-10-30 15:07:30 +01:00
marcost2
9ed9a541c9 freesurfer: fix support for linux (#39864)
* Load the script file during environment setup so that all the environment variables are set properly
* Patch csh/tcsh so that Spack's is used via env
* Update SHA for latest version
* Extend shebang to perl and fix up the regex
2023-10-30 14:19:42 +01:00
Alec Scott
1ebf1c8d1c must: remove release candidates (#40476) 2023-10-30 14:08:23 +01:00
SXS Bot
c2f3943e9e spectre: add v2023.10.11 (#40463)
Co-authored-by: nilsvu <nilsvu@users.noreply.github.com>
2023-10-30 13:56:05 +01:00
Brian Vanderwende
1ba530bff5 Get utilities necessary for successful PIO build (#40502) 2023-10-30 13:53:57 +01:00
RichardBuntLinaro
cc09e88a4a linaro-forge: add v23.0.4 (#40772) 2023-10-30 06:43:07 -06:00
Harmen Stoppels
2f3801196d binary_distribution.py: fix type annotation singleton (#40572)
Convince the language server it's really just a BinaryCacheIndex;
otherwise it defaults to thinking it's a Singleton, and can't
autocomplete, etc.
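A self-contained sketch of the pattern (Spack's real lazy proxy is `llnl.util.lang.Singleton`; this one is simplified):

```python
from typing import cast

class Singleton:
    """Lazy proxy: instantiate the wrapped factory on first attribute access."""
    def __init__(self, factory):
        self._factory = factory
        self._instance = None

    def __getattr__(self, name):
        if self._instance is None:
            self._instance = self._factory()
        return getattr(self._instance, name)

class BinaryCacheIndex:
    def update(self) -> None:
        print("updating index")

# annotate with the wrapped type so editors autocomplete BinaryCacheIndex
BINARY_INDEX: BinaryCacheIndex = cast(BinaryCacheIndex, Singleton(BinaryCacheIndex))
BINARY_INDEX.update()
```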
2023-10-30 12:52:47 +01:00
Juan Miguel Carceller
d03289c38b Fetch recola from gitlab and add a new version of collier (#40651)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2023-10-30 12:22:31 +01:00
kwryankrattiger
e720d8640a ISPC: Drop ncurses workaround in favor of patch (#39662)
ISPC had a bug in its lookup for ncurses; this was fixed upstream and
backported here.
2023-10-30 12:16:25 +01:00
Federico Ficarelli
00602cda4f pegtl: add v3.2.7 (#35687) 2023-10-30 12:12:20 +01:00
Alberto Sartori
35882130ce justbuild: add version 1.2.2 (#40701) 2023-10-30 12:09:42 +01:00
Brian Van Essen
1586c8c786 aluminum: make network variants "sticky" (#40715) 2023-10-30 11:26:24 +01:00
Wouter Deconinck
a9e78dc7d8 acts: new variant +binaries when +examples (#40738)
Co-authored-by: wdconinc <wdconinc@users.noreply.github.com>
2023-10-30 10:40:31 +01:00
wspear
b53b235cff RAJA: add "plugins" variant (#40750) 2023-10-30 09:40:08 +01:00
Veselin Dobrev
33cb8c988f Fix an issue with using the environment variable MACHTYPE which is not always defined (#40733)
* Fix an issue reported here:
   https://github.com/spack/spack/pull/36154#issuecomment-1781854894

* [@spackbot] updating style on behalf of v-dobrev
2023-10-30 09:36:02 +01:00
Adam J. Stewart
6511d3dfff py-pandas: add v2.1.2 (#40734) 2023-10-30 03:32:48 -05:00
Adam J. Stewart
272ca0fc24 PyTorch: build with external fp16 (#40760) 2023-10-30 09:28:52 +01:00
Martin Aumüller
a8f42b865f pcl: checksum new versions (#39039) 2023-10-30 08:54:36 +01:00
Cameron Rutherford
7739c54eb5 exago: fix exago missing on PYTHONPATH when +python (#40748) 2023-10-30 08:35:36 +01:00
Veselin Dobrev
bd1bb7d1ba mfem: support petsc+rocm with spack-installed rocm (#40768) 2023-10-30 01:17:51 -06:00
Massimiliano Culpo
6983db1392 ASP-based solver: avoid cycles in clingo using hidden directive (#40720)
The code should be functionally equivalent to what it was before,
but now, to avoid cycles by design, we are using a "hidden"
feature of clingo.
2023-10-30 07:38:53 +01:00
Wouter Deconinck
2a797f90b4 acts: add v28.1.0:30.3.2 (#40723)
* acts: new version from 28.1.0 to 30.3.1

* acts: new version 30.3.2

* acts: new variant +podio
2023-10-29 18:01:27 -07:00
Harmen Stoppels
2e097b4cbd py-numcodecs: fix broken sse / avx2 variables (#40754) 2023-10-29 13:45:23 -05:00
Aoba
a1282337c0 Add liggghts patched for newer compilers (#38685)
* Add liggghts patched for newer compilers

Add C++17 support
Add Clang and oneAPI support

* Add maintainers

* Fix format in liggghts

* Fix maintainers before versions

Co-authored-by: Alec Scott <alec@bcs.sh>

* Fix style and user to usr

* Update package.py

---------

Co-authored-by: Alec Scott <alec@bcs.sh>
2023-10-29 09:56:27 -07:00
Jerome Soumagne
361d973f97 mercury: add v2.3.1 (#40749) 2023-10-28 10:05:50 -07:00
Lydéric Debusschère
64ec6e7d8e py-moarchiving: new package (#40558)
* [add] py-moarchiving: new package

* py-moarchiving: update from review: description, variant default value is False, switch when and type

---------

Co-authored-by: LydDeb <lyderic.debusschere@eolen.com>
2023-10-28 08:06:48 -05:00
Lydéric Debusschère
9f95945cb5 py-generateds: new package (#40555)
* [add] py-generateds: new package

* py-generateds: Update from review

Co-authored-by: Manuela Kuhn <36827019+manuelakuhn@users.noreply.github.com>

* py-generateds: add versions 2.41.5, 2.42.1, 2.42.2, 2.43.1 and 2.43.2

---------

Co-authored-by: LydDeb <lyderic.debusschere@eolen.com>
Co-authored-by: Manuela Kuhn <36827019+manuelakuhn@users.noreply.github.com>
2023-10-28 08:05:37 -05:00
Rémi Lacroix
21f3240e08 NCCL: Add version 2.19.3-1 (#40704) 2023-10-28 08:03:02 -05:00
Jen Herting
28d617c1c8 New version of py-langsmith (#40674)
Co-authored-by: Benjamin Meyers <bsmits@rit.edu>
2023-10-28 08:02:19 -05:00
Erik Heeren
7da4b3569f py-bluepyemodel: opensourcing with dependencies (#40592)
* py-bluepyemodel: new package with dependencies

* py-morphio: add MPI as dependency to avoid failing builds

* Formatting

* py-bluepyefe: no need to set NEURON_INIT_MPI

* py-morphio: unifurcation branch is ancient history

* py-bluepyopt: only set NEURON_INIT_MPI with +neuron

* py-efel: get rid of old version

* py-morph{-tool,io}: rename develop to master to match branch

* py-bluepyefe: unset PMI_RANK is also neuron-related

* py-bluepyopt: PMI_RANK is also neuron-related

* Implement review remarks

* py-morph-tool, py-neurom: small fixes

* py-morphio: reword dependencies
2023-10-28 07:55:49 -05:00
Manuela Kuhn
f8aa66b62e py-comm: add 0.1.4 (#40669) 2023-10-28 07:51:55 -05:00
Adam J. Stewart
a1d3e0002c py-numpy: add v1.26 (#40057) 2023-10-28 13:17:32 +02:00
John W. Parent
148dce96ed MSVC: detection from registry (#38500)
Typically MSVC is detected via the vswhere program. However, vswhere may
not be available, or may be installed in an unpredictable location.
This PR adds an additional approach via Windows Registry queries to
determine the VS install root.

Additionally:

* Construct vs_install_paths after class-definition time (move it to
  variable-access time).
* Skip over keys for which a user does not have read permissions
  when performing searches (previously the presence of these keys
  would have caused an error, regardless of whether they were
  needed).
* Extend helper functionality with option for regex matching on
  registry keys vs. exact string matching.
* Some internal refactoring: remove boolean parameters in some cases
  where the function was always called with the same value
  (e.g. `find_subkey`)
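A hedged sketch of such a registry query (the key path and value name are illustrative, not necessarily what Spack reads):

```python
import winreg  # Windows-only standard library module

def vs_install_root(version: str = "17.0") -> str:
    # the SxS\VS7 key maps VS version numbers to install roots
    with winreg.OpenKey(
        winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\Microsoft\VisualStudio\SxS\VS7"
    ) as key:
        value, _type = winreg.QueryValueEx(key, version)
        return value
```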
2023-10-27 16:58:50 -07:00
Mosè Giordano
9e01199e13 hipsycl: restrict compatibility with llvm for v0.8.0 (#40736) 2023-10-27 21:33:48 +02:00
eugeneswalker
ed7274a4d0 e4s ci stacks: add exago specs (#40712)
* e4s ci: add exago +cuda, +rocm builds

* exago: rename 5-18-2022-snapshot to snapshot.5-18-2022

* disable exago +rocm for non-external rocm ci install

* note that hiop +rocm fails to find hip libraries when they are spack-installed
2023-10-27 11:15:11 -07:00
eugeneswalker
f2963e41ba mgard@2020-10-01 %oneapi@2023: turn off c++11-narrowing via cxxflags (#40743) 2023-10-27 12:08:33 -06:00
John W. Parent
069762cd37 External finding: update default paths; treat .bat as executable on Windows (#39850)
.bat or .exe files can be considered executable on Windows. This PR
expands the regex for detectable packages to allow for the detection
of packages that vendor .bat wrappers (Intel MPI, for example).

Additional changes:

* Outside of Windows, when searching for executables `path_hints=None`
  was used to indicate that default path hints should be provided,
  and `[]` was taken to mean that no defaults should be chosen
  (in that case, nothing is searched); behavior on Windows has
  now been updated to match.
* Above logic for handling of `path_hints=[]`  has also been extended
  to library search (for both Linux and Windows).
* All exceptions for external packages were documented as timeout
  errors: this commit adds a distinction for other types of errors
  in warning messages to the user.
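A sketch of the `path_hints` convention described above (the helper names are made up):

```python
import os

def default_path_hints():
    # stand-in for the platform-specific defaults
    return os.environ.get("PATH", "").split(os.pathsep)

def candidate_dirs(path_hints=None):
    # None -> use defaults; [] -> deliberately search nowhere
    if path_hints is None:
        path_hints = default_path_hints()
    return [p for p in path_hints if os.path.isdir(p)]
```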
2023-10-27 10:40:44 -07:00
Harmen Stoppels
195f965076 OCI buildcache (#38358)
Credits to @ChristianKniep for advocating the idea of OCI image layers
being identical to spack buildcache tarballs.

With this you can configure an OCI registry as a buildcache:

```console 
$ spack mirror add my_registry oci://user/image # Dockerhub

$ spack mirror add my_registry oci://ghcr.io/haampie/spack-test # GHCR

$ spack mirror set --push --oci-username ... --oci-password ... my_registry  # set login credentials
```

which should result in this config:

```yaml
mirrors:
  my_registry:
    url: oci://ghcr.io/haampie/spack-test
    push:
      access_pair: [<username>, <password>]
```

It can be used like any other registry

```
spack buildcache push my_registry [specs...]
```

It will upload the Spack tarballs in parallel, as well as the manifest and
config files, so that the binaries are compatible with `docker pull` or `skopeo copy`.

In fact, a base image can be added to get a _runnable_ image:

```console
$ spack buildcache push --base-image ubuntu:23.04 my_registry python
Pushed ... as [image]:python-3.11.2-65txfcpqbmpawclvtasuog4yzmxwaoia.spack

$ docker run --rm -it [image]:python-3.11.2-65txfcpqbmpawclvtasuog4yzmxwaoia.spack
```

which should really be a game changer for sharing binaries.

Further, all content-addressable blobs that are downloaded and verified
will be cached in Spack's download cache. This should make repeated
`push` commands faster, as well as `push` followed by a separate
`update-index` command.

An end to end example of how to use this in Github Actions is here:

**https://github.com/haampie/spack-oci-buildcache-example**


TODO:

- [x] Generate environment modifications in config so PATH is set up
- [x] Enrich config with Spack's `spec` json (this is allowed in the OCI specification)
- [x] When ^ is done, add logic to create an index in say `<image>:index` by fetching all config files (using OCI distribution discovery API)
- [x] Add logic to use object storage in an OCI registry in `spack install`.
- [x] Make the user pick the base image for generated OCI images.
- [x] Update buildcache install logic to deal with absolute paths in tarballs
- [x] Merge with `spack buildcache` command
- [x] Merge #37441 (included here)
- [x] Merge #39077 (included here)
- [x] #39187 + #39285
- [x] #39341
- [x] Not a blocker: #35737 fixes correctness run env for the generated container images

NOTE:

1. `oci://` is unfortunately taken, so it's being abused in this PR to mean "oci type mirror". `skopeo` uses `docker://` which I'd like to avoid, given that classical docker v1 registries are not supported.
2. this is currently `https`-only, given that basic auth is used to login. I _could_ be convinced to allow http, but I'd prefer not to, given that for a `spack buildcache push` command multiple domains can be involved (auth server, source of base image, destination registry). Right now, no urllib http handler is added, so redirects to https and auth servers with http urls will simply result in a hard failure.

CAVEATS:

1. Signing is not implemented in this PR. `gpg --clearsign` is not the nicest solution, since (a) the spec.json is merged into the image config, which must be valid json, and (b) it would be better to sign the manifest (referencing both config/spec file and tarball) using more conventional image signing tools
2. `spack.binary_distribution.push` is not yet implemented for the OCI buildcache, only `spack buildcache push` is. This is because I'd like to always push images + deps to the registry, so that it's `docker pull`-able, whereas in `spack ci` we really wanna push an individual package without its deps to say `pr-xyz`, while its deps reside in some `develop` buildcache.
3. The `push -j ...` flag only works for OCI buildcache, not for others
2023-10-27 15:30:04 +02:00
Ashwin Kumar Karnad
3fff8be929 octopus: split netcdf-c and netcdf-fortran dependency (#40685) 2023-10-27 14:24:44 +02:00
Satish Balay
1bf758a784 strumpack: add version 7.2.0 (#40732) 2023-10-27 04:29:15 -07:00
Harmen Stoppels
9b8fb413c3 gromacs: default to external blas & lapack (#40490)
* gromacs: default to external blas & lapack

* drop vendored lapack/blas altogether
2023-10-27 09:51:12 +02:00
Harmen Stoppels
51275df0b1 ci: spack compiler find should list extra config scopes (#40727)
otherwise it detected pre-configured compilers in a potentially different way.
2023-10-27 09:43:01 +02:00
dmt4
af13d16c2c Fixes and options for package spglib (#40684)
* Fix cmake_args for spglib v2.1.0+

* Add option to build fortran interface in package spglib

* fix style as suggested by ci/prechecks/style

* Enable fortran variant from v1.16.4 as suggested

Co-authored-by: Rocco Meli <r.meli@bluemail.ch>

---------

Co-authored-by: Rocco Meli <r.meli@bluemail.ch>
2023-10-27 08:55:57 +02:00
Harmen Stoppels
37f48aff8b gromacs: fix version branch in intel fftw (#40489) 2023-10-27 08:29:02 +02:00
Alec Scott
feda52f800 akantu: use f-strings (#40466)
Co-authored-by: Nicolas Richart <nrichart@users.noreply.github.com>
2023-10-27 08:12:20 +02:00
Satish Balay
8959d65577 plasma: add version 23.8.2 (#40728) 2023-10-26 16:48:20 -06:00
Carlos Bederián
546695f193 itk: misc fixes (#39832)
* itk: patch missing include for newer compilers

* itk: The package doesn't use MPI

* itk: package requires the high-level hdf5 api

* itk: patch url with ?full_index=1

* itk: point to 4041 commit in master

* itk: don't constrain hdf5 with ~mpi
2023-10-26 15:13:27 -07:00
snehring
c3f5ee54d4 ldak: add v5.2 & add maintainer (#40710)
* ldak: update to 5.2, add maintainer

* ldak: use compiler.openmp_flag
2023-10-26 15:12:10 -07:00
Daniel Arndt
d64f312726 dataTransferKit: add v3.1.1, v3.1.0 (#40556)
* Update DataTransferKit for 3.1.1 release

* Require Trilinos-14 for 3.1.0 and higher
2023-10-26 15:10:16 -07:00
Adam J. Stewart
b4b25dec64 PythonPackage: allow archive_files to be overridden (#40694) 2023-10-26 15:25:56 -05:00
Torbjörn Lönnemark
81172f9251 curl: Fix librtmp variant (#40713)
* rtmpdump: New package

* curl: Fix librtmp variant

Add the previously missing dependency required for rtmp support.

The variant has been broken since its addition in PR #25166.

Fixes one of the two issues reported in #26887.
2023-10-26 21:11:43 +02:00
Alec Scott
cbf9dd0aee unmaintained a* packages: update to use f-strings (#40467) 2023-10-26 21:08:55 +02:00
Ryan Danehy
7ecb9243c1 Update spack package for exago@1.6.0 release (#40614)
* Update spack package for exago:1.6.0

* update style

* Weird spack style env bug fixed

* Update spack package for exago:1.6.0

* update style

* Weird spack style env bug fixed

* changes to allow release 1.6.0

* fix depends, and versioning

* rm cmake variable

* add s

* style fix

---------

Co-authored-by: Ryan Danehy <dane678@deception04.pnl.gov>
Co-authored-by: Ryan Danehy <dane678@deception03.pnl.gov>
Co-authored-by: ryan.danehy@pnnl.gov <dane678@we45149.home>
2023-10-26 11:18:31 -07:00
Harmen Stoppels
e96f31c29d spack checksum pkg@1.2, use as version filter (#39694)
* spack checksum pkg@1.2, use as version filter

Currently pkg@1.2 splits on @ and looks for 1.2 specifically. With this
PR, pkg@1.2 acts as a filter, so any matching version (1.2, 1.2.1, ...,
1.2.10) is displayed.

* fix tests

* fix style
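A toy version of the filtering idea (not the actual implementation):

```python
def matching_versions(version_filter, known_versions):
    # "1.2" matches itself and anything under the 1.2 prefix
    return [
        v for v in known_versions
        if v == version_filter or v.startswith(version_filter + ".")
    ]

print(matching_versions("1.2", ["1.2", "1.2.1", "1.2.10", "1.3"]))
# ['1.2', '1.2.1', '1.2.10']
```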
2023-10-26 09:57:55 -07:00
Auriane R
53d5011192 Add conflict between cxxstd > 17 and cuda < 12 in pika (#40717)
* Add conflict with C++ standard > 17 and cuda < 12

* Removing map_cxxstd since boost supports C++20 flag
2023-10-26 16:08:21 +02:00
Xavier Delaruelle
751b64cbcd modules: no --delim option if separator is colon character (#39010)
Update the Tcl modulefile template to simplify generated `append-path`,
`prepend-path` and `remove-path` commands and improve their readability.

If the path element delimiter is the colon character, do not set the
`--delim` option, as colon is the default delimiter value.
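A sketch of the template logic (function and variable names are made up):

```python
def prepend_path_cmd(var, value, sep=":"):
    # emit --delim only for non-default separators
    opt = "" if sep == ":" else f"--delim {sep} "
    return f"prepend-path {opt}{var} {value}"

print(prepend_path_cmd("PATH", "/opt/foo/bin"))  # prepend-path PATH /opt/foo/bin
print(prepend_path_cmd("LICFILE", "/lic", ";"))  # prepend-path --delim ; LICFILE /lic
```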
2023-10-26 15:55:49 +02:00
Adam J. Stewart
f57c2501a3 PythonPackage: nested config_settings (#40693)
* PythonPackage: nested config_settings

* flake8
2023-10-26 08:18:02 -05:00
Harmen Stoppels
1c8073c21f spack checksum: show long flags in usage output (#40407) 2023-10-26 14:48:35 +02:00
Xavier Delaruelle
86520abb68 modules: hide implicit modulefiles (#36619)
Renames exclude_implicits to hide_implicits.

When the hide_implicits option is enabled, modulefiles of implicitly
installed software are generated and hidden. Even if implicit, those
modulefiles may be referred to as dependencies in other modulefiles,
so they must be generated for modules to properly load their dependent
modules.

A new hidden property is added to the BaseConfiguration class.

To hide modulefiles, modulercs are generated alongside modulefiles.
Such rc files contain the specific module command needed to indicate
that a module should be hidden (for instance when running "module
avail").

A modulerc property is added to the TclFileLayout and LmodFileLayout
classes to get the fully qualified path name of the modulerc associated
with a given modulefile.

Modulerc files are located in each module directory, next to the
version modulefiles. This scheme is supported by both module tool
implementations.

modulerc_header and hide_cmd_format attributes are added to
TclModulefileWriter and LmodModulefileWriter. They describe how to
generate a modulerc file with hide commands for each module tool.

The Tcl modulerc file requires a header. As we use a command introduced
in Modules 4.7 (module-hide --hidden-loaded), a version requirement is
added to the header string.

For lmod, modules that open up a hierarchy are never hidden, even if
they are implicitly installed.

The modulerc is created, updated, or removed when the associated
modulefile is written or removed. If an implicit modulefile becomes
explicit, the hide command for that modulefile is removed from the
modulerc. If the modulerc becomes empty, the file is removed.
The modulerc file is not rewritten when no content change is detected.
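A sketch of the modulerc content described above, rendered from Python (the module name and exact layout are assumed):

```python
# the header pins Modules 4.7 because module-hide --hidden-loaded
# was introduced in that version; "foo/1.2.3" is a made-up module
modulerc_header = "#%Module4.7"
hide_cmd = "module-hide --hidden-loaded foo/1.2.3"
print("\n".join([modulerc_header, hide_cmd]))
```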

Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
2023-10-26 11:49:13 +00:00
Alberto Invernizzi
bf88ed45da libluv: require CMake 3 and CMP0042 (#40716) 2023-10-26 03:33:27 -06:00
Harmen Stoppels
b4cf3d9f18 git versions: fix commit shas [automated] (#40703) 2023-10-26 11:26:47 +02:00
Ben Boeckel
8e19576ec5 Paraview 5.12 prep (#40527)
* paraview: rebase the adios2 patch for 5.12-to-be

* paraview: disable fastfloat and token for 5.12-to-be

* paraview: require older protobuf for 5.12 as well

* paraview: require C++11-supporting protobuf for `master` too
2023-10-25 16:26:49 -07:00
Victoria Cherkas
3c590ad071 fdb: add releases v5.11.23 and v5.11.17 (#40571) 2023-10-25 16:24:54 -07:00
afzpatel
3e47f3f05c initial commit to fix mivisionx build for 5.6 (#40579) 2023-10-25 16:24:31 -07:00
Dominic Hofer
d9edc92119 cuda: add NVHPC_CUDA_HOME. (#40507)
* [cuda] Add NVHPC_CUDA_HOME.

* Add CUDA_HOME and NVHPC_CUDA_HOME to cuda's dependent build env.

---------

Co-authored-by: Dominic Hofer <dominic.hofer@meteoswiss.ch>
2023-10-25 16:22:22 -07:00
Filippo Barbari
2a245fdd21 Added Highway versions up to 1.0.7 (#40691) 2023-10-25 15:49:46 -07:00
Adam J. Stewart
932d7a65e0 PyTorch: patch breakpad dependency (#40648) 2023-10-25 23:10:48 +02:00
dependabot[bot]
6bd2dd032b build(deps): bump pytest from 7.4.2 to 7.4.3 in /lib/spack/docs (#40697) 2023-10-25 20:58:53 +02:00
Harmen Stoppels
c0a4be156c ci: don't put compilers in config (#40700)
* ci: don't register detectable compilers

Because they go out of sync...

* remove intel compiler, it can be detected too

* Do not run spack compiler find since compilers are registered in concretize job already

* trilinos: work around +stokhos +cuda +superlu-dist bug due to EMPTY macro
2023-10-25 11:55:04 -07:00
Harmen Stoppels
0c30418732 ci: darwin aarch64 use apple-clang-15 tag (#40706) 2023-10-25 17:35:47 +02:00
Adam J. Stewart
3063093322 py-lightning: py-torch~distributed is broken again (#40696) 2023-10-25 13:06:35 +02:00
Rocco Meli
f4bbc0dbd2 Add dlaf variant to cp2k (#40702) 2023-10-25 04:13:32 -06:00
Taillefumier Mathieu
1ecb100e43 [cp2k] Use fftw3 MKL by default when cp2k is compiled with mkl (#40671) 2023-10-25 09:55:13 +02:00
John W. Parent
e1da9339d9 Windows: search PATH for patch utility (#40513)
Previously, we only searched for `patch` inside whatever Git
installation was available, because the most common Git installation on
Windows ships `patch`. That's not true for all possible installations
of Git, so this updates the search to also check PATH.
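A sketch of the lookup order (the Git fallback path is illustrative):

```python
import shutil

# check PATH first, then the location where Git for Windows bundles patch
patch_exe = shutil.which("patch") or shutil.which(
    "patch", path=r"C:\Program Files\Git\usr\bin"
)
```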
2023-10-24 16:37:26 -07:00
Alex Richert
2d203df075 Add ufs-utils@1.11.0 (#40695)
* Add ufs-utils@1.11.0
* Update package.py
2023-10-24 15:46:23 -07:00
renjithravindrankannath
50f25964cf Updating rvs binary path. (#40604)
* Updating rvs binary path
* Updating spec check as per the recommendation
2023-10-24 15:30:02 -07:00
AMD Toolchain Support
95558d67ae openmpi: fix pmi@4.2.3: compat (#40686) 2023-10-24 20:06:32 +02:00
Filippo Barbari
83532b5469 Added new benchmark version up to 1.8.3 (#40689) 2023-10-24 10:26:26 -07:00
Alberto Invernizzi
444c27ca53 neovim: conflict for libluv problem on macOS + add newer versions of neovim and libluv (#40690)
* add conflict with libluv version >=1.44 just on macOS
* minor change
* add libluv versions
* neovim: add newer releases
2023-10-24 10:21:58 -07:00
eugeneswalker
d075732cc5 hiop +cuda: fix issue 40678 (#40688) 2023-10-24 10:28:23 -06:00
eugeneswalker
cf9a32e6db exago: fix v1.5.1 tag; only allow python up to 3.10 for @:1.5 (#40676)
* exago: fix v1.5.1 tag; only allow python up to 3.10 for @:1.5 due to pybind error with py 3.11

* hiop@:1.0 +cuda: constrain to cuda@:11.9
2023-10-24 01:08:05 -06:00
Annop Wongwathanarat
bc54aa1e82 armpl-gcc: add version 23.10 and macOS support (#40511) 2023-10-24 00:58:04 -06:00
Nakano Masaki
88622d5129 fix installation error of bear (#40637)
Co-authored-by: Tom Scogland <scogland1@llnl.gov>
2023-10-23 13:02:15 -07:00
Vicente Bolea
d0982115b3 Adios2: add kokkos variant (#40623)
* adios2: update variants and dependencies

* adios2: add kokkos rocm|cuda|sycl variant

* e4s oneapi ci stack: add adios2 +sycl

* e4s ci stack: add adios2 +rocm

* [@spackbot] updating style on behalf of vicentebolea

* Apply suggestions from code review

* adios2: fixed cuda variant

* update ecp-data-vis-sdk

* Update share/spack/gitlab/cloud_pipelines/stacks/e4s-power/spack.yaml

---------

Co-authored-by: eugeneswalker <eugenesunsetwalker@gmail.com>
Co-authored-by: vicentebolea <vicentebolea@users.noreply.github.com>
2023-10-23 13:01:57 -07:00
Taillefumier Mathieu
1e4a5791b2 Add rccl and nccl variants to cp2k and cosma (#40451) 2023-10-23 12:37:42 -07:00
Jim Galarowicz
8def7f5583 Update survey package file for survey version 9 changes. (#40619)
* Update survey package file for survey version 9 changes.
* Fix single quote - make double.
* Small change to trigger spack tests
2023-10-23 12:31:20 -07:00
Adam J. Stewart
66f07088cb py-scikit-learn: add v1.3.2 (#40672) 2023-10-23 13:56:27 -05:00
Michael Kuhn
bf6d5df0ec audit: add check for GitLab patches (#40656)
GitLab's .patch URLs only provide abbreviated hashes, while .diff URLs
provide full hashes. There does not seem to be a parameter to force
.patch URLs to also return full hashes, so we should make sure to use
the .diff ones.
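A toy version of such a check (the URL pattern is assumed):

```python
import re

GITLAB_PATCH = re.compile(r"^https?://gitlab\.[^/]+/.*\.patch(\?.*)?$")

def suggest_diff_url(url):
    # .patch URLs embed abbreviated hashes; suggest the stable .diff form
    if GITLAB_PATCH.match(url):
        return re.sub(r"\.patch(\?.*)?$", ".diff", url)
    return url

print(suggest_diff_url("https://gitlab.com/group/proj/-/commit/abcdef12.patch"))
# https://gitlab.com/group/proj/-/commit/abcdef12.diff
```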
2023-10-23 20:22:39 +02:00
Olivier Cessenat
3eac79bba7 ngspice: new version 41 and option osdi (#40664) 2023-10-23 12:56:12 -04:00
Juan Miguel Carceller
47c9760492 geant4: add patch for when using the system expat library (#40650)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2023-10-23 16:11:51 +01:00
Harmen Stoppels
a452e8379e nghttp2: add v1.57.0 (#40652) 2023-10-23 16:22:41 +02:00
Aiden Grossman
a6466b9ddd 3proxy: respect compiler choice (#39240) 2023-10-23 03:43:54 -06:00
Harmen Stoppels
96548047f8 concretizer verbose: show progress in % too (#40654) 2023-10-23 10:26:20 +02:00
Harmen Stoppels
a675156c70 py-cython: new version, python 3.11 upperbound (#40343) 2023-10-23 09:37:20 +02:00
Tamara Dahlgren
cfc5363053 Docs: Update spec variant checks plus python quotes and string formatting (#40643) 2023-10-23 09:15:03 +02:00
Michael Kuhn
d9167834c4 libtheora: fix GitLab patch (#40657)
GitLab's .patch URLs do not provide stable/full hashes, while .diff URLs
do. See #40656 for more information.
2023-10-23 09:00:22 +02:00
Michael Kuhn
8a4860480a knem: fix GitLab patch (#40662)
GitLab's .patch URLs do not provide stable/full hashes, while .diff URLs
do. See #40656 for more information.
2023-10-23 08:59:58 +02:00
Michael Kuhn
f4c813f74a gobject-introspection: fix GitLab patch (#40661)
GitLab's .patch URLs do not provide stable/full hashes, while .diff URLs
do. See #40656 for more information.
2023-10-23 08:59:38 +02:00
Michael Kuhn
8b4e557fed garfieldpp: fix GitLab patch (#40660)
GitLab's .patch URLs do not provide stable/full hashes, while .diff URLs
do. See #40656 for more information.
2023-10-23 08:59:10 +02:00
Michael Kuhn
c5d0fd42e6 vtk: fix GitLab patch (#40659)
GitLab's .patch URLs do not provide stable/full hashes, while .diff URLs
do. See #40656 for more information.
2023-10-23 08:58:47 +02:00
Michael Kuhn
428202b246 libxml2: fix GitLab patch (#40658)
GitLab's .patch URLs do not provide stable/full hashes, while .diff URLs
do. See #40656 for more information.
2023-10-23 08:58:24 +02:00
Bill Williams
1c0d3bc071 Add Score-P 8.3 and dependencies (#40478)
Includes Score-P 8.3 and Cubew/cubelib 4.8.2.
2023-10-22 22:11:19 +02:00
Juan Miguel Carceller
eea3c07628 glib: add patch with a fix for PTRACE_0_EXITKILL (#40655)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2023-10-22 11:18:16 -06:00
Harmen Stoppels
7cd5fcb484 zlib-ng: add v2.1.4 (#40647) 2023-10-22 11:17:48 -06:00
Juan Miguel Carceller
bbb4c939da py-kiwisolver: add a new version (#40653)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2023-10-22 09:07:31 -05:00
Tamara Dahlgren
f915489c62 Docs: Add version range example to conditional dependencies (#40630)
* Docs: Add version range example to conditional dependencies

* Add when context manager example
2023-10-22 10:52:44 +02:00
357 changed files with 10064 additions and 2425 deletions

View File

@@ -159,6 +159,9 @@ jobs:
           brew install cmake bison@2.7 tree
       - name: Checkout
         uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11
+      - uses: actions/setup-python@65d7f2d534ac1bc67fcd62888c5f4f3d2cb2b236 # @v2
+        with:
+          python-version: "3.12"
       - name: Bootstrap clingo
         run: |
           source share/spack/setup-env.sh

View File

@@ -1,4 +1,4 @@
-black==23.9.1
+black==23.10.1
 clingo==5.6.2
 flake8==6.1.0
 isort==5.12.0

View File

@@ -1,3 +1,33 @@
+# v0.20.3 (2023-10-31)
+
+## Bugfixes
+
+- Fix a bug where `spack mirror set-url` would drop configured connection info (reverts #34210)
+- Fix a minor issue with package hash computation for Python 3.12 (#40328)
+
+# v0.20.2 (2023-10-03)
+
+## Features in this release
+
+Spack now supports Python 3.12 (#40155)
+
+## Bugfixes
+
+- Improve escaping in Tcl module files (#38375)
+- Make repo cache work on repositories with zero mtime (#39214)
+- Ignore errors for newer, incompatible buildcache version (#40279)
+- Print an error when git is required, but missing (#40254)
+- Ensure missing build dependencies get installed when using `spack install --overwrite` (#40252)
+- Fix an issue where Spack freezes when the build process unexpectedly exits (#39015)
+- Fix a bug where installation failures cause an unrelated `NameError` to be thrown (#39017)
+- Fix an issue where Spack package versions would be incorrectly derived from git tags (#39414)
+- Fix a bug triggered when file locking fails internally (#39188)
+- Prevent "spack external find" to error out when a directory cannot be accessed (#38755)
+- Fix multiple performance regressions in environments (#38771)
+- Add more ignored modules to `pyproject.toml` for `mypy` (#38769)
+
 # v0.20.1 (2023-07-10)
 
 ## Spack Bugfixes

View File

@@ -66,7 +66,7 @@ Resources:
 * **Matrix space**: [#spack-space:matrix.org](https://matrix.to/#/#spack-space:matrix.org):
   [bridged](https://github.com/matrix-org/matrix-appservice-slack#matrix-appservice-slack) to Slack.
 * [**Github Discussions**](https://github.com/spack/spack/discussions):
-  not just for discussions, also Q&A.
+  not just for discussions, but also Q&A.
 * **Mailing list**: [groups.google.com/d/forum/spack](https://groups.google.com/d/forum/spack)
 * **Twitter**: [@spackpm](https://twitter.com/spackpm). Be sure to
   `@mention` us!

View File

@@ -1526,6 +1526,30 @@ any MPI implementation will do. If another package depends on
 error. Likewise, if you try to plug in some package that doesn't
 provide MPI, Spack will raise an error.
+
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Explicit binding of virtual dependencies
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+There are packages that provide more than just one virtual dependency. When interacting with them, users
+might want to utilize just a subset of what they could provide, and use other providers for virtuals they
+need.
+
+It is possible to be more explicit and tell Spack which dependency should provide which virtual, using a
+special syntax:
+
+.. code-block:: console
+
+   $ spack spec strumpack ^[virtuals=mpi] intel-parallel-studio+mkl ^[virtuals=lapack] openblas
+
+Concretizing the spec above produces the following DAG:
+
+.. figure:: images/strumpack_virtuals.svg
+   :scale: 60 %
+   :align: center
+
+where ``intel-parallel-studio`` *could* provide ``mpi``, ``lapack``, and ``blas`` but is used only for the former. The ``lapack``
+and ``blas`` dependencies are satisfied by ``openblas``.
+
 ^^^^^^^^^^^^^^^^^^^^^^^^
 Specifying Specs by Hash
 ^^^^^^^^^^^^^^^^^^^^^^^^

View File

@@ -156,6 +156,131 @@ List of popular build caches
 * `Extreme-scale Scientific Software Stack (E4S) <https://e4s-project.github.io/>`_: `build cache <https://oaciss.uoregon.edu/e4s/inventory.html>`_
+
+-----------------------------------------
+OCI / Docker V2 registries as build cache
+-----------------------------------------
+
+Spack can also use OCI or Docker V2 registries such as Dockerhub, Quay.io,
+Github Packages, GitLab Container Registry, JFrog Artifactory, and others
+as build caches. This is a convenient way to share binaries using public
+infrastructure, or to cache Spack built binaries in Github Actions and
+GitLab CI.
+
+To get started, configure an OCI mirror using ``oci://`` as the scheme,
+and optionally specify a username and password (or personal access token):
+
+.. code-block:: console
+
+   $ spack mirror add --oci-username username --oci-password password my_registry oci://example.com/my_image
+
+Spack follows the naming conventions of Docker, with Dockerhub as the default
+registry. To use Dockerhub, you can omit the registry domain:
+
+.. code-block:: console
+
+   $ spack mirror add --oci-username username --oci-password password my_registry oci://username/my_image
+
+From here, you can use the mirror as any other build cache:
+
+.. code-block:: console
+
+   $ spack buildcache push my_registry <specs...>  # push to the registry
+   $ spack install <specs...>                      # install from the registry
+
+A unique feature of buildcaches on top of OCI registries is that it's incredibly
+easy to generate a runnable container image with the binaries installed. This
+is a great way to make applications available to users without requiring them to
+install Spack -- all you need is Docker, Podman or any other OCI-compatible container
+runtime.
+
+To produce container images, all you need to do is add the ``--base-image`` flag
+when pushing to the build cache:
+
+.. code-block:: console
+
+   $ spack buildcache push --base-image ubuntu:20.04 my_registry ninja
+   Pushed to example.com/my_image:ninja-1.11.1-yxferyhmrjkosgta5ei6b4lqf6bxbscz.spack
+
+   $ docker run -it example.com/my_image:ninja-1.11.1-yxferyhmrjkosgta5ei6b4lqf6bxbscz.spack
+   root@e4c2b6f6b3f4:/# ninja --version
+   1.11.1
+
+If ``--base-image`` is not specified, distroless images are produced. In practice,
+you won't be able to run these as containers, since they don't come with libc and
+other system dependencies. However, they are still compatible with tools like
+``skopeo``, ``podman``, and ``docker`` for pulling and pushing.
+
+.. note::
+
+   The docker ``overlayfs2`` storage driver is limited to 128 layers, above which a
+   ``max depth exceeded`` error may be produced when pulling the image. There
+   are `alternative drivers <https://docs.docker.com/storage/storagedriver/>`_.
+
+------------------------------------
+Using a buildcache in GitHub Actions
+------------------------------------
+
+GitHub Actions is a popular CI/CD platform for building and testing software,
+but each CI job has limited resources, making from-source builds too slow for
+many applications. Spack build caches can be used to share binaries between CI
+runs, speeding up CI significantly.
+
+A typical workflow is to include a ``spack.yaml`` environment in your repository
+that specifies the packages to install:
+
+.. code-block:: yaml
+
+   spack:
+     specs: [pkg-x, pkg-y]
+     packages:
+       all:
+         require: target=x86_64_v2
+     mirrors:
+       github_packages: oci://ghcr.io/<user>/<repo>
+
+And a GitHub action that sets up Spack, installs packages from the build cache
+or from sources, and pushes newly built binaries to the build cache:
+
+.. code-block:: yaml
+
+   name: Install Spack packages
+
+   on: push
+
+   env:
+     SPACK_COLOR: always
+
+   jobs:
+     example:
+       runs-on: ubuntu-22.04
+       steps:
+         - name: Checkout
+           uses: actions/checkout@v3
+
+         - name: Install Spack
+           run: |
+             git clone --depth=1 https://github.com/spack/spack.git
+             echo "$PWD/spack/bin/" >> "$GITHUB_PATH"
+
+         - name: Concretize
+           run: spack -e . concretize
+
+         - name: Install
+           run: spack -e . install --no-check-signature --fail-fast
+
+         - name: Push to buildcache
+           run: |
+             spack -e . mirror set --oci-username <user> --oci-password "${{ secrets.GITHUB_TOKEN }}" github_packages
+             spack -e . buildcache push --base-image ubuntu:22.04 --unsigned --update-index github_packages
+           if: always()
+
+The first time this action runs, it will build the packages from source and
+push them to the build cache. Subsequent runs will pull the binaries from the
+build cache. The concretizer will ensure that prebuilt binaries are favored
+over source builds.
+
+The build cache entries appear in the GitHub Packages section of your repository,
+and contain instructions for pulling and running them with ``docker`` or ``podman``.
+
 ----------
 Relocation
 ----------

View File

@@ -127,9 +127,9 @@ check out a commit from the ``master`` branch, you would want to add:
.. code-block:: python
depends_on('autoconf', type='build', when='@master')
depends_on('automake', type='build', when='@master')
depends_on('libtool', type='build', when='@master')
depends_on("autoconf", type="build", when="@master")
depends_on("automake", type="build", when="@master")
depends_on("libtool", type="build", when="@master")
It is typically redundant to list the ``m4`` macro processor package as a
dependency, since ``autoconf`` already depends on it.
@@ -145,7 +145,7 @@ example, the ``bash`` shell is used to run the ``autogen.sh`` script.
.. code-block:: python
def autoreconf(self, spec, prefix):
which('bash')('autogen.sh')
which("bash")("autogen.sh")
"""""""""""""""""""""""""""""""""""""""
patching configure or Makefile.in files
@@ -186,9 +186,9 @@ To opt out of this feature, use the following setting:
To enable it conditionally on different architectures, define a property and
make the package depend on ``gnuconfig`` as a build dependency:
.. code-block
.. code-block:: python
depends_on('gnuconfig', when='@1.0:')
depends_on("gnuconfig", when="@1.0:")
@property
def patch_config_files(self):
@@ -230,7 +230,7 @@ version, this can be done like so:
@property
def force_autoreconf(self):
return self.version == Version('1.2.3')
return self.version == Version("1.2.3")
^^^^^^^^^^^^^^^^^^^^^^^
Finding configure flags
@@ -278,13 +278,22 @@ function like so:
def configure_args(self):
args = []
if '+mpi' in self.spec:
args.append('--enable-mpi')
if self.spec.satisfies("+mpi"):
args.append("--enable-mpi")
else:
args.append('--disable-mpi')
args.append("--disable-mpi")
return args
Alternatively, you can use the :ref:`enable_or_disable <autotools_enable_or_disable>` helper:
.. code-block:: python
def configure_args(self):
return [self.enable_or_disable("mpi")]
Note that we are explicitly disabling MPI support if it is not
requested. This is important, as many Autotools packages will enable
options by default if the dependencies are found, and disable them
@@ -295,9 +304,11 @@ and `here <https://wiki.gentoo.org/wiki/Project:Quality_Assurance/Automagic_depe
for a rationale as to why these so-called "automagic" dependencies
are a problem.
By default, Autotools installs packages to ``/usr``. We don't want this,
so Spack automatically adds ``--prefix=/path/to/installation/prefix``
to your list of ``configure_args``. You don't need to add this yourself.
.. note::
By default, Autotools installs packages to ``/usr``. We don't want this,
so Spack automatically adds ``--prefix=/path/to/installation/prefix``
to your list of ``configure_args``. You don't need to add this yourself.
^^^^^^^^^^^^^^^^
Helper functions
@@ -308,6 +319,8 @@ You may have noticed that most of the Autotools flags are of the form
``--without-baz``. Since these flags are so common, Spack provides a
couple of helper functions to make your life easier.
.. _autotools_enable_or_disable:
"""""""""""""""""
enable_or_disable
"""""""""""""""""
@@ -319,11 +332,11 @@ typically used to enable or disable some feature within the package.
.. code-block:: python
variant(
'memchecker',
"memchecker",
default=False,
description='Memchecker support for debugging [degrades performance]'
description="Memchecker support for debugging [degrades performance]"
)
config_args.extend(self.enable_or_disable('memchecker'))
config_args.extend(self.enable_or_disable("memchecker"))
In this example, specifying the variant ``+memchecker`` will generate the
configuration option ``--enable-memchecker``, while ``~memchecker`` yields
``--disable-memchecker``; the sketch below illustrates the mapping.
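A minimal, runnable sketch of that mapping (plain Python, not Spack's actual
implementation):

.. code-block:: python

   # Hypothetical standalone helper mirroring enable_or_disable's
   # boolean-variant behavior.
   def enable_or_disable(name: str, enabled: bool) -> list:
       prefix = "enable" if enabled else "disable"
       return [f"--{prefix}-{name}"]

   assert enable_or_disable("memchecker", True) == ["--enable-memchecker"]
   assert enable_or_disable("memchecker", False) == ["--disable-memchecker"]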
@@ -343,15 +356,15 @@ the ``with_or_without`` method.
.. code-block:: python
variant(
'schedulers',
"schedulers",
values=disjoint_sets(
('auto',), ('alps', 'lsf', 'tm', 'slurm', 'sge', 'loadleveler')
).with_non_feature_values('auto', 'none'),
("auto",), ("alps", "lsf", "tm", "slurm", "sge", "loadleveler")
).with_non_feature_values("auto", "none"),
description="List of schedulers for which support is enabled; "
"'auto' lets openmpi determine",
)
if 'schedulers=auto' not in spec:
config_args.extend(self.with_or_without('schedulers'))
if not spec.satisfies("schedulers=auto"):
config_args.extend(self.with_or_without("schedulers"))
In this example, specifying the variant ``schedulers=slurm,sge`` will
generate ``--with-slurm`` and ``--with-sge``, plus a ``--without-<value>``
flag for each of the remaining feature values; see the sketch below.
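A runnable sketch of that expansion (standalone Python, mirroring the behavior
described above rather than Spack's code; non-feature values like ``auto`` and
``none`` never produce flags):

.. code-block:: python

   # Feature values of the variant and the subset the user activated,
   # e.g. schedulers=slurm,sge.
   feature_values = ["alps", "lsf", "tm", "slurm", "sge", "loadleveler"]
   activated = {"slurm", "sge"}

   config_args = [
       f"--with-{v}" if v in activated else f"--without-{v}"
       for v in feature_values
   ]
   # ['--without-alps', '--without-lsf', '--without-tm',
   #  '--with-slurm', '--with-sge', '--without-loadleveler']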
@@ -376,16 +389,16 @@ generated, using the ``activation_value`` argument to
.. code-block:: python
variant(
'fabrics',
"fabrics",
values=disjoint_sets(
('auto',), ('psm', 'psm2', 'verbs', 'mxm', 'ucx', 'libfabric')
).with_non_feature_values('auto', 'none'),
("auto",), ("psm", "psm2", "verbs", "mxm", "ucx", "libfabric")
).with_non_feature_values("auto", "none"),
description="List of fabrics that are enabled; "
"'auto' lets openmpi determine",
)
if 'fabrics=auto' not in spec:
config_args.extend(self.with_or_without('fabrics',
activation_value='prefix'))
if not spec.satisfies("fabrics=auto"):
config_args.extend(self.with_or_without("fabrics",
activation_value="prefix"))
``activation_value`` accepts a callable that generates the configure
parameter value given the variant value; but the special value
@@ -409,16 +422,16 @@ When Spack variants and configure flags do not correspond one-to-one, the
.. code-block:: python
variant('debug_tools', default=False)
config_args += self.enable_or_disable('debug-tools', variant='debug_tools')
variant("debug_tools", default=False)
config_args += self.enable_or_disable("debug-tools", variant="debug_tools")
Or when one variant controls multiple flags:
.. code-block:: python
variant('debug_tools', default=False)
config_args += self.with_or_without('memchecker', variant='debug_tools')
config_args += self.with_or_without('profiler', variant='debug_tools')
variant("debug_tools", default=False)
config_args += self.with_or_without("memchecker", variant="debug_tools")
config_args += self.with_or_without("profiler", variant="debug_tools")
""""""""""""""""""""
@@ -432,8 +445,8 @@ For example:
.. code-block:: python
variant('profiler', when='@2.0:')
config_args += self.with_or_without('profiler')
variant("profiler", when="@2.0:")
config_args += self.with_or_without("profiler")
will neither add ``--with-profiler`` nor ``--without-profiler`` when the version is
below ``2.0``.
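In other words, the helpers simply contribute nothing when the variant is
absent from the concretized spec; a tiny standalone sketch of that guard
(assumed semantics, not Spack's code):

.. code-block:: python

   def with_or_without(name: str, present: bool, enabled: bool) -> list:
       if not present:   # variant not defined for this version of the package
           return []     # -> neither --with-<name> nor --without-<name>
       return [f"--with-{name}" if enabled else f"--without-{name}"]

   assert with_or_without("profiler", present=False, enabled=True) == []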
@@ -452,10 +465,10 @@ the variant values require atypical behavior.
def with_or_without_verbs(self, activated):
# Up through version 1.6, this option was named --with-openib.
# In version 1.7, it was renamed to be --with-verbs.
opt = 'verbs' if self.spec.satisfies('@1.7:') else 'openib'
opt = "verbs" if self.spec.satisfies("@1.7:") else "openib"
if not activated:
return '--without-{0}'.format(opt)
return '--with-{0}={1}'.format(opt, self.spec['rdma-core'].prefix)
return f"--without-{opt}"
return f"--with-{opt}={self.spec['rdma-core'].prefix}"
Defining ``with_or_without_verbs`` overrides the behavior of a
``fabrics=verbs`` variant, changing the configure-time option to
@@ -479,7 +492,7 @@ do this like so:
.. code-block:: python
configure_directory = 'src'
configure_directory = "src"
^^^^^^^^^^^^^^^^^^^^^^
Building out of source
@@ -491,7 +504,7 @@ This can be done using the ``build_directory`` variable:
.. code-block:: python
build_directory = 'spack-build'
build_directory = "spack-build"
By default, Spack will build the package in the same directory that
contains the ``configure`` script
@@ -514,8 +527,8 @@ library or build the documentation, you can add these like so:
.. code-block:: python
build_targets = ['all', 'docs']
install_targets = ['install', 'docs']
build_targets = ["all", "docs"]
install_targets = ["install", "docs"]
^^^^^^^
Testing


@@ -87,7 +87,7 @@ A typical usage of these methods may look something like this:
.. code-block:: python
def initconfig_mpi_entries(self)
def initconfig_mpi_entries(self):
# Get existing MPI configurations
entries = super(Foo, self).initconfig_mpi_entries()
@@ -95,25 +95,25 @@ A typical usage of these methods may look something like this:
# This spec has an MPI variant, and we need to enable MPI when it is on.
# This hypothetical package controls MPI with the ``FOO_MPI`` option to
# cmake.
if '+mpi' in self.spec:
entries.append(cmake_cache_option('FOO_MPI', True, "enable mpi"))
if self.spec.satisfies("+mpi"):
entries.append(cmake_cache_option("FOO_MPI", True, "enable mpi"))
else:
entries.append(cmake_cache_option('FOO_MPI', False, "disable mpi"))
entries.append(cmake_cache_option("FOO_MPI", False, "disable mpi"))
def initconfig_package_entries(self):
# Package specific options
entries = []
entries.append('#Entries for build options')
entries.append("#Entries for build options")
bar_on = '+bar' in self.spec
entries.append(cmake_cache_option('FOO_BAR', bar_on, 'toggle bar'))
bar_on = self.spec.satisfies("+bar")
entries.append(cmake_cache_option("FOO_BAR", bar_on, "toggle bar"))
entries.append('#Entries for dependencies')
entries.append("#Entries for dependencies")
if self.spec['blas'].name == 'baz': # baz is our blas provider
entries.append(cmake_cache_string('FOO_BLAS', 'baz', 'Use baz'))
entries.append(cmake_cache_path('BAZ_PREFIX', self.spec['baz'].prefix))
if self.spec["blas"].name == "baz": # baz is our blas provider
entries.append(cmake_cache_string("FOO_BLAS", "baz", "Use baz"))
entries.append(cmake_cache_path("BAZ_PREFIX", self.spec["baz"].prefix))
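Each ``cmake_cache_*`` helper renders one line of the initial-cache file that
is later passed to ``cmake -C``. A sketch of the shape of that output (the
exact formatting here is an assumption, not Spack's implementation):

.. code-block:: python

   # Hypothetical re-implementation showing the emitted text's shape.
   def cmake_cache_option(name: str, value: bool, comment: str = "") -> str:
       return f'set({name} {"ON" if value else "OFF"} CACHE BOOL "{comment}")'

   assert (cmake_cache_option("FOO_MPI", True, "enable mpi")
           == 'set(FOO_MPI ON CACHE BOOL "enable mpi")')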
^^^^^^^^^^^^^^^^^^^^^^
External documentation


@@ -54,8 +54,8 @@ to terminate such build attempts with a suitable message:
.. code-block:: python
conflicts('cuda_arch=none', when='+cuda',
msg='CUDA architecture is required')
conflicts("cuda_arch=none", when="+cuda",
msg="CUDA architecture is required")
Similarly, if your software does not support all versions of the property,
you could add ``conflicts`` to your package for those versions. For example,
@@ -66,13 +66,13 @@ custom message should a user attempt such a build:
.. code-block:: python
unsupported_cuda_archs = [
'10', '11', '12', '13',
'20', '21',
'30', '32', '35', '37'
"10", "11", "12", "13",
"20", "21",
"30", "32", "35", "37"
]
for value in unsupported_cuda_archs:
conflicts('cuda_arch={0}'.format(value), when='+cuda',
msg='CUDA architecture {0} is not supported'.format(value))
conflicts(f"cuda_arch={value}", when="+cuda",
msg=f"CUDA architecture {value} is not supported")
^^^^^^^
Methods
@@ -107,16 +107,16 @@ class of your package. For example, you can add it to your
spec = self.spec
args = []
...
if '+cuda' in spec:
if spec.satisfies("+cuda"):
# Set up the cuda macros needed by the build
args.append('-DWITH_CUDA=ON')
cuda_arch_list = spec.variants['cuda_arch'].value
args.append("-DWITH_CUDA=ON")
cuda_arch_list = spec.variants["cuda_arch"].value
cuda_arch = cuda_arch_list[0]
if cuda_arch != 'none':
args.append('-DCUDA_FLAGS=-arch=sm_{0}'.format(cuda_arch))
if cuda_arch != "none":
args.append(f"-DCUDA_FLAGS=-arch=sm_{cuda_arch}")
else:
# Ensure build with cuda is disabled
args.append('-DWITH_CUDA=OFF')
args.append("-DWITH_CUDA=OFF")
...
return args
@@ -125,7 +125,7 @@ You will need to customize options as needed for your build.
This example also illustrates how to check for the ``cuda`` variant using
``self.spec`` and how to retrieve the ``cuda_arch`` variant's value, which
is a list, using ``self.spec.variants['cuda_arch'].value``.
is a list, using ``self.spec.variants["cuda_arch"].value``.
With over 70 packages using ``CudaPackage`` as of January 2021, there are
lots of examples to choose from to get more ideas for using this package.


@@ -57,13 +57,13 @@ If you look at the ``perl`` package, you'll see:
.. code-block:: python
phases = ['configure', 'build', 'install']
phases = ["configure", "build", "install"]
Similarly, ``cmake`` defines:
.. code-block:: python
phases = ['bootstrap', 'build', 'install']
phases = ["bootstrap", "build", "install"]
If we look at the ``cmake`` example, this tells Spack's ``PackageBase``
class to run the ``bootstrap``, ``build``, and ``install`` functions
@@ -78,7 +78,7 @@ If we look at ``perl``, we see that it defines a ``configure`` method:
.. code-block:: python
def configure(self, spec, prefix):
configure = Executable('./Configure')
configure = Executable("./Configure")
configure(*self.configure_args())
There is also a corresponding ``configure_args`` function that handles
@@ -92,7 +92,7 @@ phases are pretty simple:
make()
def install(self, spec, prefix):
make('install')
make("install")
The ``cmake`` package looks very similar, but with a ``bootstrap``
function instead of ``configure``:
@@ -100,14 +100,14 @@ function instead of ``configure``:
.. code-block:: python
def bootstrap(self, spec, prefix):
bootstrap = Executable('./bootstrap')
bootstrap = Executable("./bootstrap")
bootstrap(*self.bootstrap_args())
def build(self, spec, prefix):
make()
def install(self, spec, prefix):
make('install')
make("install")
Again, there is a ``bootstrap_args`` function that determines the
correct bootstrap flags to use.
@@ -128,16 +128,16 @@ before or after a particular phase. For example, in ``perl``, we see:
.. code-block:: python
@run_after('install')
@run_after("install")
def install_cpanm(self):
spec = self.spec
if '+cpanm' in spec:
with working_dir(join_path('cpanm', 'cpanm')):
perl = spec['perl'].command
perl('Makefile.PL')
if spec.satisfies("+cpanm"):
with working_dir(join_path("cpanm", "cpanm")):
perl = spec["perl"].command
perl("Makefile.PL")
make()
make('install')
make("install")
This extra step automatically installs ``cpanm`` in addition to the
base Perl installation.
@@ -174,10 +174,10 @@ In the ``perl`` package, we can see:
.. code-block:: python
@run_after('build')
@run_after("build")
@on_package_attributes(run_tests=True)
def test(self):
make('test')
make("test")
As you can guess, this runs ``make test`` *after* building the package,
if and only if testing is requested. Again, this is not specific to
@@ -189,7 +189,7 @@ custom build systems, it can be added to existing build systems as well.
.. code-block:: python
@run_after('install')
@run_after("install")
@on_package_attributes(run_tests=True)
works as expected. However, if you reverse the ordering:
@@ -197,7 +197,7 @@ custom build systems, it can be added to existing build systems as well.
.. code-block:: python
@on_package_attributes(run_tests=True)
@run_after('install')
@run_after("install")
the tests will always be run regardless of whether or not
``--test=root`` is requested. See https://github.com/spack/spack/issues/3833
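The ordering matters because decorators apply bottom-up: only the function
object that ``run_after`` actually sees gets registered as a phase callback.
A self-contained sketch (plain Python, not Spack's implementation) that
reproduces the pitfall:

.. code-block:: python

   registry = []

   def run_after(phase):
       def deco(fn):
           registry.append(fn)   # registers whatever function it sees
           return fn
       return deco

   def on_package_attributes(run_tests=False):
       def deco(fn):
           def guarded(pkg):
               if pkg.run_tests == run_tests:
                   return fn(pkg)
           return guarded
       return deco

   class Pkg:
       run_tests = False

       @run_after("install")                    # applied second: registers
       @on_package_attributes(run_tests=True)   # the already-guarded function
       def test(self):
           print("tests ran")

   for fn in registry:
       fn(Pkg())   # prints nothing: the guard is honored

   # With the decorators reversed, run_after would register the *raw*
   # function before the guard wraps it, so the guard is bypassed.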


@@ -59,7 +59,7 @@ using GNU Make, you should add a dependency on ``gmake``:
.. code-block:: python
depends_on('gmake', type='build')
depends_on("gmake", type="build")
^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -93,8 +93,8 @@ there are any other variables you need to set, you can do this in the
.. code-block:: python
def edit(self, spec, prefix):
env['PREFIX'] = prefix
env['BLASLIB'] = spec['blas'].libs.ld_flags
env["PREFIX"] = prefix
env["BLASLIB"] = spec["blas"].libs.ld_flags
`cbench <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cbench/package.py>`_
@@ -113,7 +113,7 @@ you can do this like so:
.. code-block:: python
build_targets = ['CC=cc']
build_targets = ["CC=cc"]
If you do need access to the spec, you can create a property like so:
@@ -125,8 +125,8 @@ If you do need access to the spec, you can create a property like so:
spec = self.spec
return [
'CC=cc',
'BLASLIB={0}'.format(spec['blas'].libs.ld_flags),
"CC=cc",
f"BLASLIB={spec['blas'].libs.ld_flags}",
]
@@ -145,12 +145,12 @@ and a ``filter_file`` method to help with this. For example:
.. code-block:: python
def edit(self, spec, prefix):
makefile = FileFilter('Makefile')
makefile = FileFilter("Makefile")
makefile.filter(r'^\s*CC\s*=.*', 'CC = ' + spack_cc)
makefile.filter(r'^\s*CXX\s*=.*', 'CXX = ' + spack_cxx)
makefile.filter(r'^\s*F77\s*=.*', 'F77 = ' + spack_f77)
makefile.filter(r'^\s*FC\s*=.*', 'FC = ' + spack_fc)
makefile.filter(r"^\s*CC\s*=.*", f"CC = {spack_cc}")
makefile.filter(r"^\s*CXX\s*=.*", f"CXX = {spack_cxx}")
makefile.filter(r"^\s*F77\s*=.*", f"F77 = {spack_f77}")
makefile.filter(r"^\s*FC\s*=.*", f"FC = {spack_fc}")
`stream <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/stream/package.py>`_
@@ -181,16 +181,16 @@ well for storing variables:
def edit(self, spec, prefix):
config = {
'CC': 'cc',
'MAKE': 'make',
"CC": "cc",
"MAKE": "make",
}
if '+blas' in spec:
config['BLAS_LIBS'] = spec['blas'].libs.joined()
if spec.satisfies("+blas"):
config["BLAS_LIBS"] = spec["blas"].libs.joined()
with open('make.inc', 'w') as inc:
with open("make.inc", "w") as inc:
for key in config:
inc.write('{0} = {1}\n'.format(key, config[key]))
inc.write(f"{key} = {config[key]}\n")
`elk <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/elk/package.py>`_
@@ -204,14 +204,14 @@ them in a list:
def edit(self, spec, prefix):
config = [
'INSTALL_DIR = {0}'.format(prefix),
'INCLUDE_DIR = $(INSTALL_DIR)/include',
'LIBRARY_DIR = $(INSTALL_DIR)/lib',
f"INSTALL_DIR = {prefix}",
"INCLUDE_DIR = $(INSTALL_DIR)/include",
"LIBRARY_DIR = $(INSTALL_DIR)/lib",
]
with open('make.inc', 'w') as inc:
with open("make.inc", "w") as inc:
for var in config:
inc.write('{0}\n'.format(var))
inc.write(f"{var}\n")
`hpl <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hpl/package.py>`_
@@ -284,7 +284,7 @@ can tell Spack where to locate it like so:
.. code-block:: python
build_directory = 'src'
build_directory = "src"
^^^^^^^^^^^^^^^^^^^
@@ -299,8 +299,8 @@ install the package:
def install(self, spec, prefix):
mkdir(prefix.bin)
install('foo', prefix.bin)
install_tree('lib', prefix.lib)
install("foo", prefix.bin)
install_tree("lib", prefix.lib)
^^^^^^^^^^^^^^^^^^^^^^


@@ -152,16 +152,16 @@ set. Once set, ``pypi`` will be used to define the ``homepage``,
.. code-block:: python
homepage = 'https://pypi.org/project/setuptools/'
url = 'https://pypi.org/packages/source/s/setuptools/setuptools-49.2.0.zip'
list_url = 'https://pypi.org/simple/setuptools/'
homepage = "https://pypi.org/project/setuptools/"
url = "https://pypi.org/packages/source/s/setuptools/setuptools-49.2.0.zip"
list_url = "https://pypi.org/simple/setuptools/"
is equivalent to:
.. code-block:: python
pypi = 'setuptools/setuptools-49.2.0.zip'
pypi = "setuptools/setuptools-49.2.0.zip"
If a package has a different homepage listed on PyPI, you can
@@ -208,7 +208,7 @@ dependencies to your package:
.. code-block:: python
depends_on('py-setuptools@42:', type='build')
depends_on("py-setuptools@42:", type="build")
Note that ``py-wheel`` is already listed as a build dependency in the
@@ -232,7 +232,7 @@ Look for dependencies under the following keys:
* ``dependencies`` under ``[project]``
These packages are required for building and installation. You can
add them with ``type=('build', 'run')``.
add them with ``type=("build", "run")``.
* ``[project.optional-dependencies]``
@@ -279,12 +279,12 @@ distutils library, and has almost the exact same API. In addition to
* ``setup_requires``
These packages are usually only needed at build-time, so you can
add them with ``type='build'``.
add them with ``type="build"``.
* ``install_requires``
These packages are required for building and installation. You can
add them with ``type=('build', 'run')``.
add them with ``type=("build", "run")``.
* ``extras_require``
@@ -296,7 +296,7 @@ distutils library, and has almost the exact same API. In addition to
These are packages that are required to run the unit tests for the
package. These dependencies can be specified using the
``type='test'`` dependency type. However, the PyPI tarballs rarely
``type="test"`` dependency type. However, the PyPI tarballs rarely
contain unit tests, so there is usually no reason to add these.
See https://setuptools.pypa.io/en/latest/userguide/dependency_management.html
@@ -321,7 +321,7 @@ older versions of flit may use the following keys:
* ``requires`` under ``[tool.flit.metadata]``
These packages are required for building and installation. You can
add them with ``type=('build', 'run')``.
add them with ``type=("build", "run")``.
* ``[tool.flit.metadata.requires-extra]``
@@ -434,12 +434,12 @@ the BLAS/LAPACK library you want pkg-config to search for:
.. code-block:: python
depends_on('py-pip@22.1:', type='build')
depends_on("py-pip@22.1:", type="build")
def config_settings(self, spec, prefix):
return {
'blas': spec['blas'].libs.names[0],
'lapack': spec['lapack'].libs.names[0],
"blas": spec["blas"].libs.names[0],
"lapack": spec["lapack"].libs.names[0],
}
@@ -463,10 +463,10 @@ has an optional dependency on ``libyaml`` that can be enabled like so:
def global_options(self, spec, prefix):
options = []
if '+libyaml' in spec:
options.append('--with-libyaml')
if spec.satisfies("+libyaml"):
options.append("--with-libyaml")
else:
options.append('--without-libyaml')
options.append("--without-libyaml")
return options
@@ -492,10 +492,10 @@ allows you to specify the directories to search for ``libyaml``:
def install_options(self, spec, prefix):
options = []
if '+libyaml' in spec:
if spec.satisfies("+libyaml"):
options.extend([
spec['libyaml'].libs.search_flags,
spec['libyaml'].headers.include_flags,
spec["libyaml"].libs.search_flags,
spec["libyaml"].headers.include_flags,
])
return options
@@ -556,7 +556,7 @@ detected are wrong, you can provide the names yourself by overriding
.. code-block:: python
import_modules = ['six']
import_modules = ["six"]
Sometimes the list of module names to import depends on how the
@@ -571,9 +571,9 @@ This can be expressed like so:
@property
def import_modules(self):
modules = ['yaml']
if '+libyaml' in self.spec:
modules.append('yaml.cyaml')
modules = ["yaml"]
if self.spec.satisfies("+libyaml"):
modules.append("yaml.cyaml")
return modules
@@ -586,14 +586,14 @@ Instead of defining the ``import_modules`` explicitly, only the subset
of module names to be skipped can be defined by using ``skip_modules``.
If a defined module has submodules, they are skipped as well, e.g.,
in case the ``plotting`` modules should be excluded from the
automatically detected ``import_modules`` ``['nilearn', 'nilearn.surface',
'nilearn.plotting', 'nilearn.plotting.data']`` set:
automatically detected ``import_modules`` ``["nilearn", "nilearn.surface",
"nilearn.plotting", "nilearn.plotting.data"]`` set:
.. code-block:: python
skip_modules = ['nilearn.plotting']
skip_modules = ["nilearn.plotting"]
This will set ``import_modules`` to ``['nilearn', 'nilearn.surface']``
This will set ``import_modules`` to ``["nilearn", "nilearn.surface"]``
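The pruning rule is prefix-based: a skipped module removes itself and all of
its submodules from the detected list. A runnable sketch of that rule (plain
Python, assumed semantics):

.. code-block:: python

   detected = ["nilearn", "nilearn.surface",
               "nilearn.plotting", "nilearn.plotting.data"]
   skip_modules = ["nilearn.plotting"]

   import_modules = [
       m for m in detected
       if not any(m == s or m.startswith(s + ".") for s in skip_modules)
   ]
   assert import_modules == ["nilearn", "nilearn.surface"]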
Import tests can be run during the installation using ``spack install
--test=root`` or at any time after the installation using
@@ -612,11 +612,11 @@ after the ``install`` phase:
.. code-block:: python
@run_after('install')
@run_after("install")
@on_package_attributes(run_tests=True)
def install_test(self):
with working_dir('spack-test', create=True):
python('-c', 'import numpy; numpy.test("full", verbose=2)')
with working_dir("spack-test", create=True):
python("-c", "import numpy; numpy.test('full', verbose=2)")
when testing is enabled during the installation (i.e., ``spack install
@@ -638,7 +638,7 @@ provides Python bindings in a ``python`` directory, you can use:
.. code-block:: python
build_directory = 'python'
build_directory = "python"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


@@ -81,28 +81,27 @@ class of your package. For example, you can add it to your
class MyRocmPackage(CMakePackage, ROCmPackage):
...
# Ensure +rocm and amdgpu_targets are passed to dependencies
depends_on('mydeppackage', when='+rocm')
depends_on("mydeppackage", when="+rocm")
for val in ROCmPackage.amdgpu_targets:
depends_on('mydeppackage amdgpu_target={0}'.format(val),
when='amdgpu_target={0}'.format(val))
depends_on(f"mydeppackage amdgpu_target={val}",
when=f"amdgpu_target={val}")
...
def cmake_args(self):
spec = self.spec
args = []
...
if '+rocm' in spec:
if spec.satisfies("+rocm"):
# Set up the hip macros needed by the build
args.extend([
'-DENABLE_HIP=ON',
'-DHIP_ROOT_DIR={0}'.format(spec['hip'].prefix)])
rocm_archs = spec.variants['amdgpu_target'].value
if 'none' not in rocm_archs:
args.append('-DHIP_HIPCC_FLAGS=--amdgpu-target={0}'
.format(",".join(rocm_archs)))
"-DENABLE_HIP=ON",
f"-DHIP_ROOT_DIR={spec['hip'].prefix}"])
rocm_archs = spec.variants["amdgpu_target"].value
if "none" not in rocm_archs:
args.append(f"-DHIP_HIPCC_FLAGS=--amdgpu-target={','.join(rocm_archs}")
else:
# Ensure build with hip is disabled
args.append('-DENABLE_HIP=OFF')
args.append("-DENABLE_HIP=OFF")
...
return args
...
@@ -114,7 +113,7 @@ build.
This example also illustrates how to check for the ``rocm`` variant using
``self.spec`` and how to retrieve the ``amdgpu_target`` variant's value
using ``self.spec.variants['amdgpu_target'].value``.
using ``self.spec.variants["amdgpu_target"].value``.
All five packages using ``ROCmPackage`` as of January 2021 also use the
:ref:`CudaPackage <cudapackage>`. So it is worth looking at those packages


@@ -57,7 +57,7 @@ overridden like so:
.. code-block:: python
def test(self):
scons('check')
scons("check")
^^^^^^^^^^^^^^^
@@ -88,7 +88,7 @@ base class already contains:
.. code-block:: python
depends_on('scons', type='build')
depends_on("scons", type="build")
If you want to specify a particular version requirement, you can override
@@ -96,7 +96,7 @@ this in your package:
.. code-block:: python
depends_on('scons@2.3.0:', type='build')
depends_on("scons@2.3.0:", type="build")
^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -238,14 +238,14 @@ the package build phase. This is done by overriding ``build_args`` like so:
def build_args(self, spec, prefix):
args = [
'PREFIX={0}'.format(prefix),
'ZLIB={0}'.format(spec['zlib'].prefix),
f"PREFIX={prefix}",
f"ZLIB={spec['zlib'].prefix}",
]
if '+debug' in spec:
args.append('DEBUG=yes')
if spec.satisfies("+debug"):
args.append("DEBUG=yes")
else:
args.append('DEBUG=no')
args.append("DEBUG=no")
return args
@@ -275,8 +275,8 @@ environment variables. For example, cantera has the following option:
* env_vars: [ string ]
Environment variables to propagate through to SCons. Either the
string "all" or a comma separated list of variable names, e.g.
'LD_LIBRARY_PATH,HOME'.
- default: 'LD_LIBRARY_PATH,PYTHONPATH'
"LD_LIBRARY_PATH,HOME".
- default: "LD_LIBRARY_PATH,PYTHONPATH"
In the case of cantera, using ``env_vars=all`` allows us to use


@@ -204,6 +204,7 @@ def setup(sphinx):
("py:class", "clingo.Control"),
("py:class", "six.moves.urllib.parse.ParseResult"),
("py:class", "TextIO"),
("py:class", "hashlib._Hash"),
# Spack classes that are private and we don't want to expose
("py:class", "spack.provider_index._IndexBase"),
("py:class", "spack.repo._PrependFileLoader"),


@@ -0,0 +1,534 @@
[SVG image, 534 lines: a Graphviz-generated dependency graph of strumpack@7.0.1%gcc@9.4.0. Nodes include netlib-scalapack@2.2.0, openblas@0.3.21, intel-parallel-studio@cluster.2020.4, cmake@3.25.1, parmetis@4.0.3, metis@5.1.0, butterflypack@2.2.2, slate@2022.07.00, zfp@0.5.5, blaspp@2022.07.00, lapackpp@2022.07.00, perl@5.36.0, bzip2@1.0.8, gdbm@1.23, berkeley-db@18.1.40, zlib@1.2.13, ncurses@6.4, pkgconf@1.8.0, patchelf@0.16.1, diffutils@3.8, and ca-certificates-mozilla@2023-01-10, with edges labeled by the virtuals they satisfy (e.g. "virtuals=blas,lapack", "virtuals=mpi", "virtuals=scalapack").]
<polygon fill="#1e90ff" stroke="#1e90ff" stroke-width="2" points="1349.5697,-96.7403 1346.0696,-86.7403 1342.5697,-96.7404 1349.5697,-96.7403"/>
<text text-anchor="middle" x="1292.7436" y="-127.0482" font-family="Times,serif" font-size="14.00" fill="#000000">virtuals=pkgconfig</text>
</g>
<!-- imopnxjmv7cwzyiecdw2saq42qvpnauh&#45;&gt;ern66gyp6qmhmpod4jaynxx4weoberfm -->
<g id="edge19" class="edge">
<title>imopnxjmv7cwzyiecdw2saq42qvpnauh-&gt;ern66gyp6qmhmpod4jaynxx4weoberfm</title>
<path fill="none" stroke="#1e90ff" stroke-width="2" d="M2788.0102,-1270.7555C2780.8234,-1251.412 2772.2926,-1228.4513 2764.7402,-1208.1236"/>
<path fill="none" stroke="#dc143c" stroke-width="2" d="M2789.885,-1270.0589C2782.6982,-1250.7155 2774.1674,-1227.7547 2766.615,-1207.4271"/>
<polygon fill="#1e90ff" stroke="#1e90ff" stroke-width="2" points="2768.9358,-1206.4953 2762.1721,-1198.3403 2762.3741,-1208.9332 2768.9358,-1206.4953"/>
</g>
<!-- imopnxjmv7cwzyiecdw2saq42qvpnauh&#45;&gt;2w3nq3n3hcj2tqlvcpewsryamltlu5tw -->
<g id="edge12" class="edge">
<title>imopnxjmv7cwzyiecdw2saq42qvpnauh-&gt;2w3nq3n3hcj2tqlvcpewsryamltlu5tw</title>
<path fill="none" stroke="#1e90ff" stroke-width="2" d="M2907.2846,-1269.5018C2936.475,-1251.8137 2964.9158,-1228.1116 2981.1904,-1197.9236 2999.477,-1164.2363 3005.2125,-1141.4693 2981.289,-1112.225 2954.5472,-1078.5579 2876.5297,-1053.8974 2789.2983,-1036.3535"/>
<path fill="none" stroke="#dc143c" stroke-width="2" d="M2908.3216,-1271.2119C2937.7554,-1253.3501 2966.1962,-1229.648 2982.9488,-1198.8764 3001.4164,-1164.7249 3007.1519,-1141.9579 2982.8502,-1110.975 2955.15,-1076.6509 2877.1325,-1051.9904 2789.6927,-1034.3928"/>
<polygon fill="#1e90ff" stroke="#1e90ff" stroke-width="2" points="2790.125,-1031.93 2779.6364,-1033.4269 2788.7692,-1038.7974 2790.125,-1031.93"/>
<text text-anchor="middle" x="2836.0561" y="-1059.5023" font-family="Times,serif" font-size="14.00" fill="#000000">virtuals=mpi</text>
</g>
<!-- imopnxjmv7cwzyiecdw2saq42qvpnauh&#45;&gt;gguve5icmo5e4cw5o3hvvfsxremc46if -->
<g id="edge49" class="edge">
<title>imopnxjmv7cwzyiecdw2saq42qvpnauh-&gt;gguve5icmo5e4cw5o3hvvfsxremc46if</title>
<path fill="none" stroke="#1e90ff" stroke-width="2" d="M2883.731,-1270.4691C2909.4451,-1251.9243 2934.9956,-1227.7144 2949.0696,-1198.4 2965.7663,-1163.6227 2975.3506,-1139.841 2949.0696,-1111.6 2925.7161,-1086.5049 1993.0368,-1031.9055 1561.3071,-1007.9103"/>
<polygon fill="#1e90ff" stroke="#1e90ff" stroke-width="2" points="1561.3813,-1004.4092 1551.2026,-1007.3492 1560.9931,-1011.3984 1561.3813,-1004.4092"/>
</g>
<!-- ern66gyp6qmhmpod4jaynxx4weoberfm&#45;&gt;gguve5icmo5e4cw5o3hvvfsxremc46if -->
<g id="edge50" class="edge">
<title>ern66gyp6qmhmpod4jaynxx4weoberfm-&gt;gguve5icmo5e4cw5o3hvvfsxremc46if</title>
<path fill="none" stroke="#1e90ff" stroke-width="2" d="M2551.6031,-1113.7387C2547.0531,-1112.9948 2542.537,-1112.2802 2538.0696,-1111.6 2198.5338,-1059.8997 1800.8632,-1026.8711 1561.4583,-1009.9443"/>
<polygon fill="#1e90ff" stroke="#1e90ff" stroke-width="2" points="1561.4619,-1006.436 1551.2407,-1009.2249 1560.9702,-1013.4187 1561.4619,-1006.436"/>
</g>
<!-- nqiyrxlid6tikfpvoqdpvsjt5drs2obf&#45;&gt;hkcrbrtf2qex6rvzuok5tzdrbam55pdn -->
<g id="edge34" class="edge">
<title>nqiyrxlid6tikfpvoqdpvsjt5drs2obf-&gt;hkcrbrtf2qex6rvzuok5tzdrbam55pdn</title>
<path fill="none" stroke="#1e90ff" stroke-width="2" d="M1865.2226,-1269.4691C1922.6966,-1248.2438 1991.964,-1222.6632 2050.6644,-1200.985"/>
<path fill="none" stroke="#dc143c" stroke-width="2" d="M1865.9154,-1271.3453C1923.3894,-1250.12 1992.6569,-1224.5394 2051.3572,-1202.8612"/>
<polygon fill="#1e90ff" stroke="#1e90ff" stroke-width="2" points="2052.5441,-1205.088 2060.7123,-1198.3403 2050.119,-1198.5215 2052.5441,-1205.088"/>
<text text-anchor="middle" x="1910.9073" y="-1238.6056" font-family="Times,serif" font-size="14.00" fill="#000000">virtuals=scalapack</text>
</g>
<!-- nqiyrxlid6tikfpvoqdpvsjt5drs2obf&#45;&gt;o524gebsxavobkte3k5fglgwnedfkadf -->
<g id="edge52" class="edge">
<title>nqiyrxlid6tikfpvoqdpvsjt5drs2obf-&gt;o524gebsxavobkte3k5fglgwnedfkadf</title>
<path fill="none" stroke="#1e90ff" stroke-width="2" d="M1519.9696,-1290.6844C1394.6018,-1273.3057 1237.6631,-1244.7294 1102.7507,-1199.3478 1021.8138,-1171.8729 1008.1992,-1149.8608 932.6248,-1112.4956 887.1715,-1089.9216 836.578,-1065.4054 793.6914,-1044.8018"/>
<path fill="none" stroke="#dc143c" stroke-width="2" d="M1520.2442,-1288.7034C1394.9601,-1271.3381 1238.0214,-1242.7618 1103.3885,-1197.4522 1023.5148,-1170.8208 1009.9002,-1148.8087 933.5144,-1110.7044 888.0436,-1088.1218 837.4502,-1063.6056 794.5574,-1042.999"/>
<polygon fill="#1e90ff" stroke="#1e90ff" stroke-width="2" points="795.6235,-1040.7377 785.0938,-1039.565 792.5939,-1047.0482 795.6235,-1040.7377"/>
<text text-anchor="middle" x="1046.8307" y="-1202.5988" font-family="Times,serif" font-size="14.00" fill="#000000">virtuals=blas,lapack</text>
</g>
<!-- lfh3aovn65e66cs24qiehq3nd2ddojef -->
<g id="node21" class="node">
<title>lfh3aovn65e66cs24qiehq3nd2ddojef</title>
<path fill="#add8e6" stroke="#000000" stroke-width="4" d="M1547.9922,-1198.3002C1547.9922,-1198.3002 1144.147,-1198.3002 1144.147,-1198.3002 1138.147,-1198.3002 1132.147,-1192.3002 1132.147,-1186.3002 1132.147,-1186.3002 1132.147,-1123.6998 1132.147,-1123.6998 1132.147,-1117.6998 1138.147,-1111.6998 1144.147,-1111.6998 1144.147,-1111.6998 1547.9922,-1111.6998 1547.9922,-1111.6998 1553.9922,-1111.6998 1559.9922,-1117.6998 1559.9922,-1123.6998 1559.9922,-1123.6998 1559.9922,-1186.3002 1559.9922,-1186.3002 1559.9922,-1192.3002 1553.9922,-1198.3002 1547.9922,-1198.3002"/>
<text text-anchor="middle" x="1346.0696" y="-1147.8" font-family="Monaco" font-size="24.00" fill="#000000">arpack-ng@3.8.0%gcc@9.4.0/lfh3aov</text>
</g>
<!-- nqiyrxlid6tikfpvoqdpvsjt5drs2obf&#45;&gt;lfh3aovn65e66cs24qiehq3nd2ddojef -->
<g id="edge46" class="edge">
<title>nqiyrxlid6tikfpvoqdpvsjt5drs2obf-&gt;lfh3aovn65e66cs24qiehq3nd2ddojef</title>
<path fill="none" stroke="#1e90ff" stroke-width="2" d="M1637.8539,-1271.3373C1584.2332,-1250.1557 1519.6324,-1224.6368 1464.827,-1202.9873"/>
<path fill="none" stroke="#dc143c" stroke-width="2" d="M1638.5887,-1269.4771C1584.968,-1248.2956 1520.3672,-1222.7767 1465.5618,-1201.1272"/>
<polygon fill="#1e90ff" stroke="#1e90ff" stroke-width="2" points="1466.3716,-1198.7592 1455.785,-1198.3403 1463.7998,-1205.2696 1466.3716,-1198.7592"/>
</g>
<!-- 57joith2sqq6sehge54vlloyolm36mdu -->
<g id="node22" class="node">
<title>57joith2sqq6sehge54vlloyolm36mdu</title>
<path fill="#ff7f50" stroke="#000000" stroke-width="4" d="M1906.2352,-1198.3002C1906.2352,-1198.3002 1589.904,-1198.3002 1589.904,-1198.3002 1583.904,-1198.3002 1577.904,-1192.3002 1577.904,-1186.3002 1577.904,-1186.3002 1577.904,-1123.6998 1577.904,-1123.6998 1577.904,-1117.6998 1583.904,-1111.6998 1589.904,-1111.6998 1589.904,-1111.6998 1906.2352,-1111.6998 1906.2352,-1111.6998 1912.2352,-1111.6998 1918.2352,-1117.6998 1918.2352,-1123.6998 1918.2352,-1123.6998 1918.2352,-1186.3002 1918.2352,-1186.3002 1918.2352,-1192.3002 1912.2352,-1198.3002 1906.2352,-1198.3002"/>
<text text-anchor="middle" x="1748.0696" y="-1147.8" font-family="Monaco" font-size="24.00" fill="#000000">sed@4.8%gcc@9.4.0/57joith</text>
</g>
<!-- nqiyrxlid6tikfpvoqdpvsjt5drs2obf&#45;&gt;57joith2sqq6sehge54vlloyolm36mdu -->
<g id="edge27" class="edge">
<title>nqiyrxlid6tikfpvoqdpvsjt5drs2obf-&gt;57joith2sqq6sehge54vlloyolm36mdu</title>
<path fill="none" stroke="#1e90ff" stroke-width="2" d="M1748.0696,-1270.4072C1748.0696,-1251.3263 1748.0696,-1228.7257 1748.0696,-1208.6046"/>
<polygon fill="#1e90ff" stroke="#1e90ff" stroke-width="2" points="1751.5697,-1208.3403 1748.0696,-1198.3403 1744.5697,-1208.3404 1751.5697,-1208.3403"/>
</g>
<!-- nqiyrxlid6tikfpvoqdpvsjt5drs2obf&#45;&gt;2w3nq3n3hcj2tqlvcpewsryamltlu5tw -->
<g id="edge24" class="edge">
<title>nqiyrxlid6tikfpvoqdpvsjt5drs2obf-&gt;2w3nq3n3hcj2tqlvcpewsryamltlu5tw</title>
<path fill="none" stroke="#1e90ff" stroke-width="2" d="M1975.9734,-1301.684C2148.2819,-1288.3961 2365.6859,-1259.5384 2428.3689,-1197.6866 2466.9261,-1160.1438 2472.9783,-1095.7153 2471.5152,-1049.9701"/>
<path fill="none" stroke="#dc143c" stroke-width="2" d="M1976.1272,-1303.678C2148.5451,-1290.3788 2365.949,-1261.521 2429.7703,-1199.1134 2468.9173,-1160.3309 2474.9695,-1095.9024 2473.5142,-1049.9065"/>
<polygon fill="#1e90ff" stroke="#1e90ff" stroke-width="2" points="2476.0078,-1049.7027 2472.0657,-1039.8686 2469.0147,-1050.0146 2476.0078,-1049.7027"/>
<text text-anchor="middle" x="2207.8884" y="-1273.0053" font-family="Times,serif" font-size="14.00" fill="#000000">virtuals=mpi</text>
</g>
<!-- nqiyrxlid6tikfpvoqdpvsjt5drs2obf&#45;&gt;gguve5icmo5e4cw5o3hvvfsxremc46if -->
<g id="edge6" class="edge">
<title>nqiyrxlid6tikfpvoqdpvsjt5drs2obf-&gt;gguve5icmo5e4cw5o3hvvfsxremc46if</title>
<path fill="none" stroke="#1e90ff" stroke-width="2" d="M1520.1614,-1301.6771C1362.9712,-1287.992 1173.582,-1259.0928 1123.0696,-1198.4 1098.3914,-1168.7481 1103.0165,-1144.5563 1123.0696,-1111.6 1140.5998,-1082.79 1167.9002,-1060.8539 1197.4647,-1044.2681"/>
<polygon fill="#1e90ff" stroke="#1e90ff" stroke-width="2" points="1199.1408,-1047.3408 1206.2789,-1039.5114 1195.8163,-1041.1806 1199.1408,-1047.3408"/>
</g>
<!-- ogcucq2eod3xusvvied5ol2iobui4nsb -->
<g id="node18" class="node">
<title>ogcucq2eod3xusvvied5ol2iobui4nsb</title>
<path fill="#ff7f50" stroke="#000000" stroke-width="4" d="M400.2088,-245.5002C400.2088,-245.5002 11.9304,-245.5002 11.9304,-245.5002 5.9304,-245.5002 -.0696,-239.5002 -.0696,-233.5002 -.0696,-233.5002 -.0696,-170.8998 -.0696,-170.8998 -.0696,-164.8998 5.9304,-158.8998 11.9304,-158.8998 11.9304,-158.8998 400.2088,-158.8998 400.2088,-158.8998 406.2088,-158.8998 412.2088,-164.8998 412.2088,-170.8998 412.2088,-170.8998 412.2088,-233.5002 412.2088,-233.5002 412.2088,-239.5002 406.2088,-245.5002 400.2088,-245.5002"/>
<text text-anchor="middle" x="206.0696" y="-195" font-family="Monaco" font-size="24.00" fill="#000000">libiconv@1.17%gcc@9.4.0/ogcucq2</text>
</g>
<!-- xm3ldz3y3msfdc3hzshvxpbpg5hnt6o6&#45;&gt;ogcucq2eod3xusvvied5ol2iobui4nsb -->
<g id="edge47" class="edge">
<title>xm3ldz3y3msfdc3hzshvxpbpg5hnt6o6-&gt;ogcucq2eod3xusvvied5ol2iobui4nsb</title>
<path fill="none" stroke="#1e90ff" stroke-width="2" d="M205.0696,-317.6072C205.0696,-298.5263 205.0696,-275.9257 205.0696,-255.8046"/>
<path fill="none" stroke="#dc143c" stroke-width="2" d="M207.0696,-317.6072C207.0696,-298.5263 207.0696,-275.9257 207.0696,-255.8046"/>
<polygon fill="#1e90ff" stroke="#1e90ff" stroke-width="2" points="209.5697,-255.5403 206.0696,-245.5403 202.5697,-255.5404 209.5697,-255.5403"/>
<text text-anchor="middle" x="165.5739" y="-285.8482" font-family="Times,serif" font-size="14.00" fill="#000000">virtuals=iconv</text>
</g>
<!-- 4bu62kyfuh4ikdkuyxfxjxanf7e7qopu&#45;&gt;mujlx42xgttdc6u6rmiftsktpsrcmpbs -->
<g id="edge42" class="edge">
<title>4bu62kyfuh4ikdkuyxfxjxanf7e7qopu-&gt;mujlx42xgttdc6u6rmiftsktpsrcmpbs</title>
<path fill="none" stroke="#1e90ff" stroke-width="2" d="M672.6614,-1430.2151C600.7916,-1411.3548 534.1254,-1386.9583 512.2667,-1357.7962 489.0909,-1326.029 493.54,-1304.0273 512.1928,-1269.9192 527.5256,-1242.0821 552.3382,-1220.1508 578.9347,-1203.0434"/>
<path fill="none" stroke="#dc143c" stroke-width="2" d="M673.169,-1428.2806C601.4789,-1409.4766 534.8127,-1385.0802 513.8725,-1356.6038 491.0512,-1326.4254 495.5003,-1304.4237 513.9464,-1270.8808 528.8502,-1243.5806 553.6627,-1221.6493 580.016,-1204.7259"/>
<polygon fill="#1e90ff" stroke="#1e90ff" stroke-width="2" points="581.46,-1206.7724 588.1193,-1198.532 577.7747,-1200.8211 581.46,-1206.7724"/>
</g>
<!-- 4bu62kyfuh4ikdkuyxfxjxanf7e7qopu&#45;&gt;o524gebsxavobkte3k5fglgwnedfkadf -->
<g id="edge43" class="edge">
<title>4bu62kyfuh4ikdkuyxfxjxanf7e7qopu-&gt;o524gebsxavobkte3k5fglgwnedfkadf</title>
<path fill="none" stroke="#1e90ff" stroke-width="2" d="M680.4783,-1430.2246C600.8632,-1410.3933 522.8724,-1385.2921 493.3877,-1357.9314 411.1392,-1281.1573 374.1678,-1206.1582 435.2305,-1111.0561 454.3431,-1081.6726 482.5021,-1059.8261 513.5088,-1043.3725"/>
<path fill="none" stroke="#dc143c" stroke-width="2" d="M680.9617,-1428.2839C601.476,-1408.4895 523.4851,-1383.3883 494.7515,-1356.4686 412.9331,-1280.273 375.9616,-1205.2739 436.9087,-1112.1439 455.569,-1083.2528 483.728,-1061.4063 514.4455,-1045.1396"/>
<polygon fill="#1e90ff" stroke="#1e90ff" stroke-width="2" points="515.8631,-1047.2236 523.1893,-1039.5699 512.6893,-1040.9844 515.8631,-1047.2236"/>
<text text-anchor="middle" x="453.0969" y="-1356.92" font-family="Times,serif" font-size="14.00" fill="#000000">virtuals=blas</text>
</g>
<!-- 4bu62kyfuh4ikdkuyxfxjxanf7e7qopu&#45;&gt;xiro2z6na56qdd4czjhj54eag3ekbiow -->
<g id="edge38" class="edge">
<title>4bu62kyfuh4ikdkuyxfxjxanf7e7qopu-&gt;xiro2z6na56qdd4czjhj54eag3ekbiow</title>
<path fill="none" stroke="#1e90ff" stroke-width="2" d="M857.6892,-1429.8521C840.9235,-1409.9835 820.9375,-1386.2985 803.4466,-1365.5705"/>
<path fill="none" stroke="#dc143c" stroke-width="2" d="M859.2178,-1428.5623C842.4521,-1408.6937 822.466,-1385.0087 804.9751,-1364.2807"/>
<polygon fill="#1e90ff" stroke="#1e90ff" stroke-width="2" points="806.7654,-1362.5258 797.6414,-1357.1403 801.4156,-1367.0402 806.7654,-1362.5258"/>
</g>
<!-- 4bu62kyfuh4ikdkuyxfxjxanf7e7qopu&#45;&gt;2w3nq3n3hcj2tqlvcpewsryamltlu5tw -->
<g id="edge13" class="edge">
<title>4bu62kyfuh4ikdkuyxfxjxanf7e7qopu-&gt;2w3nq3n3hcj2tqlvcpewsryamltlu5tw</title>
<path fill="none" stroke="#1e90ff" stroke-width="2" d="M1118.1783,-1450.5735C1412.4221,-1422.447 1902.6188,-1374.0528 1984.8578,-1356.2227 2203.916,-1308.9943 2329.6342,-1377.1305 2461.2658,-1197.8052 2492.3675,-1156.1664 2488.743,-1094.1171 2480.3694,-1050.0521"/>
<path fill="none" stroke="#dc143c" stroke-width="2" d="M1118.3686,-1452.5644C1412.6186,-1424.4374 1902.8153,-1376.0432 1985.2814,-1358.1773 2202.963,-1310.7526 2328.6812,-1378.8889 2462.8734,-1198.9948 2494.3641,-1156.0498 2490.7395,-1094.0005 2482.3343,-1049.6791"/>
<polygon fill="#1e90ff" stroke="#1e90ff" stroke-width="2" points="2484.7438,-1048.9818 2479.3189,-1039.8812 2477.8845,-1050.3784 2484.7438,-1048.9818"/>
<text text-anchor="middle" x="1820.4407" y="-1379.7188" font-family="Times,serif" font-size="14.00" fill="#000000">virtuals=mpi</text>
</g>
<!-- 4bu62kyfuh4ikdkuyxfxjxanf7e7qopu&#45;&gt;gguve5icmo5e4cw5o3hvvfsxremc46if -->
<g id="edge32" class="edge">
<title>4bu62kyfuh4ikdkuyxfxjxanf7e7qopu-&gt;gguve5icmo5e4cw5o3hvvfsxremc46if</title>
<path fill="none" stroke="#1e90ff" stroke-width="2" d="M947.2173,-1428.5496C968.7089,-1408.5917 992.2747,-1383.3345 1008.2117,-1356.6861 1067.0588,-1259.8646 1008.3745,-1197.6371 1084.3226,-1110.9351 1110.3076,-1081.7965 1144.7149,-1059.7578 1180.1804,-1043.0531"/>
<path fill="none" stroke="#daa520" stroke-width="2" d="M948.5783,-1430.0151C970.1712,-1409.9561 993.737,-1384.6989 1009.9275,-1357.7139 1068.5139,-1258.4924 1009.8295,-1196.2649 1085.8166,-1112.2649 1111.3864,-1083.4807 1145.7936,-1061.442 1181.0322,-1044.8626"/>
<polygon fill="#1e90ff" stroke="#1e90ff" stroke-width="2" points="1182.4567,-1046.9607 1190.1008,-1039.6246 1179.5503,-1040.5926 1182.4567,-1046.9607"/>
</g>
<!-- 5xerf6imlgo4xlubacr4mljacc3edexo -->
<g id="node17" class="node">
<title>5xerf6imlgo4xlubacr4mljacc3edexo</title>
<path fill="#add8e6" stroke="#000000" stroke-width="4" d="M1822.3657,-880.7002C1822.3657,-880.7002 1437.7735,-880.7002 1437.7735,-880.7002 1431.7735,-880.7002 1425.7735,-874.7002 1425.7735,-868.7002 1425.7735,-868.7002 1425.7735,-806.0998 1425.7735,-806.0998 1425.7735,-800.0998 1431.7735,-794.0998 1437.7735,-794.0998 1437.7735,-794.0998 1822.3657,-794.0998 1822.3657,-794.0998 1828.3657,-794.0998 1834.3657,-800.0998 1834.3657,-806.0998 1834.3657,-806.0998 1834.3657,-868.7002 1834.3657,-868.7002 1834.3657,-874.7002 1828.3657,-880.7002 1822.3657,-880.7002"/>
<text text-anchor="middle" x="1630.0696" y="-830.2" font-family="Monaco" font-size="24.00" fill="#000000">openssl@1.1.1s%gcc@9.4.0/5xerf6i</text>
</g>
<!-- 5xerf6imlgo4xlubacr4mljacc3edexo&#45;&gt;ywrpvv2hgooeepdke33exkqrtdpd5gkl -->
<g id="edge22" class="edge">
<title>5xerf6imlgo4xlubacr4mljacc3edexo-&gt;ywrpvv2hgooeepdke33exkqrtdpd5gkl</title>
<path fill="none" stroke="#1e90ff" stroke-width="2" d="M1425.7129,-803.7711C1262.7545,-776.9548 1035.5151,-739.5603 871.9084,-712.6373"/>
<polygon fill="#1e90ff" stroke="#1e90ff" stroke-width="2" points="872.1525,-709.1305 861.7169,-710.9602 871.0158,-716.0376 872.1525,-709.1305"/>
</g>
<!-- 5xerf6imlgo4xlubacr4mljacc3edexo&#45;&gt;4vsmjofkhntilgzh4zebluqak5mdsu3x -->
<g id="edge48" class="edge">
<title>5xerf6imlgo4xlubacr4mljacc3edexo-&gt;4vsmjofkhntilgzh4zebluqak5mdsu3x</title>
<path fill="none" stroke="#1e90ff" stroke-width="2" d="M1644.2788,-794.0072C1650.5843,-774.7513 1658.0636,-751.9107 1664.6976,-731.6514"/>
<polygon fill="#1e90ff" stroke="#1e90ff" stroke-width="2" points="1668.0917,-732.533 1667.8776,-721.9403 1661.4393,-730.3546 1668.0917,-732.533"/>
</g>
<!-- 5xerf6imlgo4xlubacr4mljacc3edexo&#45;&gt;nizxi5u5bbrzhzwfy2qb7hatlhuswlrz -->
<g id="edge41" class="edge">
<title>5xerf6imlgo4xlubacr4mljacc3edexo-&gt;nizxi5u5bbrzhzwfy2qb7hatlhuswlrz</title>
<path fill="none" stroke="#1e90ff" stroke-width="2" d="M1834.3289,-793.5645C1906.6817,-774.1673 1975.9199,-749.2273 1998.2925,-721.3707 2031.5218,-680.681 2032.1636,-617.9031 2027.044,-573.3921"/>
<path fill="none" stroke="#dc143c" stroke-width="2" d="M1834.8468,-795.4962C1907.3595,-776.0489 1976.5977,-751.1089 1999.8467,-722.6293 2033.5217,-680.7015 2034.1635,-617.9235 2029.0309,-573.1639"/>
<polygon fill="#1e90ff" stroke="#1e90ff" stroke-width="2" points="2031.4885,-572.6712 2026.7474,-563.1964 2024.5451,-573.5598 2031.4885,-572.6712"/>
</g>
<!-- v32wejd4d5lc6uka4qlrogwh5xae2h3r -->
<g id="node26" class="node">
<title>v32wejd4d5lc6uka4qlrogwh5xae2h3r</title>
<path fill="#ff7f50" stroke="#000000" stroke-width="4" d="M1306.1776,-404.3002C1306.1776,-404.3002 929.9616,-404.3002 929.9616,-404.3002 923.9616,-404.3002 917.9616,-398.3002 917.9616,-392.3002 917.9616,-392.3002 917.9616,-329.6998 917.9616,-329.6998 917.9616,-323.6998 923.9616,-317.6998 929.9616,-317.6998 929.9616,-317.6998 1306.1776,-317.6998 1306.1776,-317.6998 1312.1776,-317.6998 1318.1776,-323.6998 1318.1776,-329.6998 1318.1776,-329.6998 1318.1776,-392.3002 1318.1776,-392.3002 1318.1776,-398.3002 1312.1776,-404.3002 1306.1776,-404.3002"/>
<text text-anchor="middle" x="1118.0696" y="-353.8" font-family="Monaco" font-size="24.00" fill="#000000">readline@8.2%gcc@9.4.0/v32wejd</text>
</g>
<!-- uabgssx6lsgrevwbttslldnr5nzguprj&#45;&gt;v32wejd4d5lc6uka4qlrogwh5xae2h3r -->
<g id="edge7" class="edge">
<title>uabgssx6lsgrevwbttslldnr5nzguprj-&gt;v32wejd4d5lc6uka4qlrogwh5xae2h3r</title>
<path fill="none" stroke="#1e90ff" stroke-width="2" d="M1117.0696,-476.4072C1117.0696,-457.3263 1117.0696,-434.7257 1117.0696,-414.6046"/>
<path fill="none" stroke="#dc143c" stroke-width="2" d="M1119.0696,-476.4072C1119.0696,-457.3263 1119.0696,-434.7257 1119.0696,-414.6046"/>
<polygon fill="#1e90ff" stroke="#1e90ff" stroke-width="2" points="1121.5697,-414.3403 1118.0696,-404.3403 1114.5697,-414.3404 1121.5697,-414.3403"/>
</g>
<!-- lfh3aovn65e66cs24qiehq3nd2ddojef&#45;&gt;o524gebsxavobkte3k5fglgwnedfkadf -->
<g id="edge14" class="edge">
<title>lfh3aovn65e66cs24qiehq3nd2ddojef-&gt;o524gebsxavobkte3k5fglgwnedfkadf</title>
<path fill="none" stroke="#1e90ff" stroke-width="2" d="M1167.6711,-1112.5788C1078.9073,-1090.9596 971.5916,-1064.822 881.5513,-1042.892"/>
<path fill="none" stroke="#dc143c" stroke-width="2" d="M1168.1444,-1110.6356C1079.3806,-1089.0165 972.0649,-1062.8788 882.0246,-1040.9488"/>
<polygon fill="#1e90ff" stroke="#1e90ff" stroke-width="2" points="882.5603,-1038.5062 872.016,-1039.5403 880.9038,-1045.3074 882.5603,-1038.5062"/>
<text text-anchor="middle" x="963.904" y="-1079.817" font-family="Times,serif" font-size="14.00" fill="#000000">virtuals=blas,lapack</text>
</g>
<!-- lfh3aovn65e66cs24qiehq3nd2ddojef&#45;&gt;2w3nq3n3hcj2tqlvcpewsryamltlu5tw -->
<g id="edge31" class="edge">
<title>lfh3aovn65e66cs24qiehq3nd2ddojef-&gt;2w3nq3n3hcj2tqlvcpewsryamltlu5tw</title>
<path fill="none" stroke="#1e90ff" stroke-width="2" d="M1559.7922,-1112.1043C1562.8511,-1111.5975 1565.8904,-1111.1002 1568.9103,-1110.6128 1759.2182,-1079.8992 1973.2397,-1052.1328 2144.6143,-1031.5343"/>
<path fill="none" stroke="#dc143c" stroke-width="2" d="M1560.1191,-1114.0774C1563.1741,-1113.5712 1566.2134,-1113.0739 1569.2289,-1112.5872 1759.4755,-1081.8826 1973.497,-1054.1161 2144.8529,-1033.52"/>
<polygon fill="#1e90ff" stroke="#1e90ff" stroke-width="2" points="2145.1529,-1036.002 2154.6648,-1031.3357 2144.3191,-1029.0518 2145.1529,-1036.002"/>
<text text-anchor="middle" x="1828.178" y="-1072.4692" font-family="Times,serif" font-size="14.00" fill="#000000">virtuals=mpi</text>
</g>
<!-- lfh3aovn65e66cs24qiehq3nd2ddojef&#45;&gt;gguve5icmo5e4cw5o3hvvfsxremc46if -->
<g id="edge21" class="edge">
<title>lfh3aovn65e66cs24qiehq3nd2ddojef-&gt;gguve5icmo5e4cw5o3hvvfsxremc46if</title>
<path fill="none" stroke="#1e90ff" stroke-width="2" d="M1346.0696,-1111.6072C1346.0696,-1092.5263 1346.0696,-1069.9257 1346.0696,-1049.8046"/>
<polygon fill="#1e90ff" stroke="#1e90ff" stroke-width="2" points="1349.5697,-1049.5403 1346.0696,-1039.5403 1342.5697,-1049.5404 1349.5697,-1049.5403"/>
</g>
<!-- 2w3nq3n3hcj2tqlvcpewsryamltlu5tw&#45;&gt;htzjns66gmq6pjofohp26djmjnpbegho -->
<g id="edge30" class="edge">
<title>2w3nq3n3hcj2tqlvcpewsryamltlu5tw-&gt;htzjns66gmq6pjofohp26djmjnpbegho</title>
<path fill="none" stroke="#1e90ff" stroke-width="2" d="M2467.0696,-952.8072C2467.0696,-933.7263 2467.0696,-911.1257 2467.0696,-891.0046"/>
<polygon fill="#1e90ff" stroke="#1e90ff" stroke-width="2" points="2470.5697,-890.7403 2467.0696,-880.7403 2463.5697,-890.7404 2470.5697,-890.7403"/>
</g>
<!-- 7rzbmgoxhmm2jhellkgcjmn62uklf22x&#45;&gt;gguve5icmo5e4cw5o3hvvfsxremc46if -->
<g id="edge2" class="edge">
<title>7rzbmgoxhmm2jhellkgcjmn62uklf22x-&gt;gguve5icmo5e4cw5o3hvvfsxremc46if</title>
<path fill="none" stroke="#1e90ff" stroke-width="2" d="M1422.351,-1429.2133C1312.2528,-1388.8872 1171.1589,-1316.8265 1103.0696,-1198.4 1083.8409,-1164.956 1082.4563,-1144.2088 1103.0696,-1111.6 1121.4102,-1082.5864 1149.2483,-1060.7204 1179.6189,-1044.2895"/>
<polygon fill="#1e90ff" stroke="#1e90ff" stroke-width="2" points="1181.4205,-1047.2977 1188.6801,-1039.5809 1178.1927,-1041.0863 1181.4205,-1047.2977"/>
</g>
<!-- v32wejd4d5lc6uka4qlrogwh5xae2h3r&#45;&gt;j5rupoqliu7kasm6xndl7ui32wgawkru -->
<g id="edge39" class="edge">
<title>v32wejd4d5lc6uka4qlrogwh5xae2h3r-&gt;j5rupoqliu7kasm6xndl7ui32wgawkru</title>
<path fill="none" stroke="#1e90ff" stroke-width="2" d="M1179.8001,-316.7866C1209.2065,-296.3053 1244.4355,-271.7686 1274.8343,-250.5961"/>
<path fill="none" stroke="#dc143c" stroke-width="2" d="M1180.9431,-318.4278C1210.3495,-297.9465 1245.5785,-273.4098 1275.9774,-252.2373"/>
<polygon fill="#1e90ff" stroke="#1e90ff" stroke-width="2" points="1277.6375,-254.1277 1283.8429,-245.5403 1273.6367,-248.3836 1277.6375,-254.1277"/>
</g>
<!-- gguve5icmo5e4cw5o3hvvfsxremc46if&#45;&gt;j5rupoqliu7kasm6xndl7ui32wgawkru -->
<g id="edge18" class="edge">
<title>gguve5icmo5e4cw5o3hvvfsxremc46if-&gt;j5rupoqliu7kasm6xndl7ui32wgawkru</title>
<path fill="none" stroke="#1e90ff" stroke-width="2" d="M1345.0696,-952.7909C1345.0696,-891.6316 1345.0696,-776.6094 1345.0696,-678.6 1345.0696,-678.6 1345.0696,-678.6 1345.0696,-519.8 1345.0696,-426.9591 1345.0696,-318.8523 1345.0696,-255.7237"/>
<path fill="none" stroke="#dc143c" stroke-width="2" d="M1347.0696,-952.7909C1347.0696,-891.6316 1347.0696,-776.6094 1347.0696,-678.6 1347.0696,-678.6 1347.0696,-678.6 1347.0696,-519.8 1347.0696,-426.9591 1347.0696,-318.8523 1347.0696,-255.7237"/>
<polygon fill="#1e90ff" stroke="#1e90ff" stroke-width="2" points="1349.5697,-255.6091 1346.0696,-245.6091 1342.5697,-255.6092 1349.5697,-255.6091"/>
</g>
<!-- gguve5icmo5e4cw5o3hvvfsxremc46if&#45;&gt;5xerf6imlgo4xlubacr4mljacc3edexo -->
<g id="edge40" class="edge">
<title>gguve5icmo5e4cw5o3hvvfsxremc46if-&gt;5xerf6imlgo4xlubacr4mljacc3edexo</title>
<path fill="none" stroke="#1e90ff" stroke-width="2" d="M1423.1858,-951.9344C1460.2844,-931.1905 1504.8229,-906.2866 1543.0151,-884.9312"/>
<path fill="none" stroke="#dc143c" stroke-width="2" d="M1424.1619,-953.68C1461.2605,-932.9361 1505.799,-908.0322 1543.9912,-886.6769"/>
<polygon fill="#1e90ff" stroke="#1e90ff" stroke-width="2" points="1545.5391,-888.6757 1552.5592,-880.7403 1542.1228,-882.5659 1545.5391,-888.6757"/>
</g>
</g>
</svg>

After

Width:  |  Height:  |  Size: 58 KiB

View File

@@ -1549,7 +1549,7 @@ its value:
def configure_args(self):
...
if "+shared" in self.spec:
if self.spec.satisfies("+shared"):
extra_args.append("--enable-shared")
else:
extra_args.append("--disable-shared")
@@ -1636,7 +1636,7 @@ Within a package recipe a multi-valued variant is tested using a ``key=value`` s
.. code-block:: python
if "languages=jit" in spec:
if spec.satisfies("languages=jit"):
options.append("--enable-host-shared")
"""""""""""""""""""""""""""""""""""""""""""
@@ -2352,7 +2352,7 @@ the following at the command line of a bash shell:
.. code-block:: console
$ for i in {1..12}; do nohup spack install -j 4 mpich@3.3.2 >> mpich_install.txt 2>&1 &; done
$ for i in {1..12}; do nohup spack install -j 4 mpich@3.3.2 >> mpich_install.txt 2>&1 & done
.. note::
@@ -2557,9 +2557,10 @@ Conditional dependencies
^^^^^^^^^^^^^^^^^^^^^^^^
You may have a package that only requires a dependency under certain
conditions. For example, you may have a package that has optional MPI support,
MPI is only a dependency when you want to enable MPI support for the
package. In that case, you could say something like:
conditions. For example, you may have a package with optional MPI support.
You would then provide a variant to reflect that the feature is optional
and specify the MPI dependency only applies when MPI support is enabled.
In that case, you could say something like:
.. code-block:: python
@@ -2567,13 +2568,39 @@ package. In that case, you could say something like:
depends_on("mpi", when="+mpi")
``when`` can include constraints on the variant, version, compiler, etc. and
the :mod:`syntax<spack.spec>` is the same as for Specs written on the command
line.
If a dependency/feature of a package isn't typically used, you can save time
by making it conditional (since Spack will not build the dependency unless it
is required for the Spec).
Suppose the above package also has, since version 3, optional `Trilinos`
support and you want them both to build either with or without MPI. Further
suppose you require a version of `Trilinos` no older than 12.6. In that case,
the `trilinos` variant and dependency directives would be:
.. code-block:: python
variant("trilinos", default=False, description="Enable Trilinos support")
depends_on("trilinos@12.6:", when="@3: +trilinos")
depends_on("trilinos@12.6: +mpi", when="@3: +trilinos +mpi")
Alternatively, you could use the `when` context manager to equivalently specify
the `trilinos` variant dependencies as follows:
.. code-block:: python
with when("@3: +trilinos"):
depends_on("trilinos@12.6:")
depends_on("trilinos +mpi", when="+mpi")
The argument to ``when`` in either case can include any Spec constraints that
are supported on the command line using the same :ref:`syntax <sec-specs>`.
.. note::
If a dependency isn't typically used, you can save time by making it
conditional since Spack will not build the dependency unless it is
required for the Spec.
.. _dependency_dependency_patching:
@@ -2661,60 +2688,6 @@ appear in the package file (or in this case, in the list).
right version. If two packages depend on ``binutils`` patched *the
same* way, they can both use a single installation of ``binutils``.
.. _setup-dependent-environment:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Influence how dependents are built or run
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Spack provides a mechanism for dependencies to influence the
environment of their dependents by overriding the
:meth:`setup_dependent_run_environment <spack.package_base.PackageBase.setup_dependent_run_environment>`
or the
:meth:`setup_dependent_build_environment <spack.builder.Builder.setup_dependent_build_environment>`
methods.
The Qt package, for instance, uses this call:
.. literalinclude:: _spack_root/var/spack/repos/builtin/packages/qt/package.py
:pyobject: Qt.setup_dependent_build_environment
:linenos:
to set the ``QTDIR`` environment variable so that packages
that depend on a particular Qt installation will find it.
Another good example of how a dependency can influence
the build environment of dependents is the Python package:
.. literalinclude:: _spack_root/var/spack/repos/builtin/packages/python/package.py
:pyobject: Python.setup_dependent_build_environment
:linenos:
In the method above it is ensured that any package that depends on Python
will have the ``PYTHONPATH``, ``PYTHONHOME`` and ``PATH`` environment
variables set appropriately before starting the installation. To make things
even simpler the ``python setup.py`` command is also inserted into the module
scope of dependents by overriding a third method called
:meth:`setup_dependent_package <spack.package_base.PackageBase.setup_dependent_package>`
:
.. literalinclude:: _spack_root/var/spack/repos/builtin/packages/python/package.py
:pyobject: Python.setup_dependent_package
:linenos:
This allows most python packages to have a very simple install procedure,
like the following:
.. code-block:: python
def install(self, spec, prefix):
setup_py("install", "--prefix={0}".format(prefix))
Finally, the Python package also takes care of the modifications to ``PYTHONPATH``
to allow dependencies to run correctly:
.. literalinclude:: _spack_root/var/spack/repos/builtin/packages/python/package.py
:pyobject: Python.setup_dependent_run_environment
:linenos:
.. _packaging_conflicts:
@@ -2859,6 +2832,70 @@ variant(s) are selected. This may be accomplished with conditional
extends("python", when="+python")
...
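A slightly fuller sketch of that pattern, assuming a hypothetical package whose Python bindings are optional:

.. code-block:: python

   variant("python", default=False, description="Build Python bindings")

   # Register as a Python extension only when the bindings are enabled;
   # the extra dependency is likewise conditional.
   extends("python", when="+python")
   depends_on("py-numpy", when="+python", type=("build", "run"))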
.. _setup-environment:
--------------------------------------------
Runtime and build time environment variables
--------------------------------------------
Spack provides a few methods to help package authors set up the required environment variables for
their package. Environment variables typically depend on how the package is used: variables that
make sense during the build phase may not be needed at runtime, and vice versa. Further, sometimes
it makes sense to let a dependency set the environment variables for its dependents. To allow all
this, Spack provides four different methods that can be overridden in a package:
1. :meth:`setup_build_environment <spack.builder.Builder.setup_build_environment>`
2. :meth:`setup_run_environment <spack.package_base.PackageBase.setup_run_environment>`
3. :meth:`setup_dependent_build_environment <spack.builder.Builder.setup_dependent_build_environment>`
4. :meth:`setup_dependent_run_environment <spack.package_base.PackageBase.setup_dependent_run_environment>`
The Qt package, for instance, uses this call:
.. literalinclude:: _spack_root/var/spack/repos/builtin/packages/qt/package.py
:pyobject: Qt.setup_dependent_build_environment
:linenos:
to set the ``QTDIR`` environment variable so that packages that depend on a particular Qt
installation will find it.
The following diagram will give you an idea when each of these methods is called in a build
context:
.. image:: images/setup_env.png
:align: center
Notice that ``setup_dependent_run_environment`` can be called multiple times, once for each
dependent package, whereas ``setup_run_environment`` is called only once for the package itself.
This means that the former should only be used if the environment variables depend on the dependent
package, whereas the latter should be used if the environment variables depend only on the package
itself.
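As a concrete illustration, here is a hedged sketch of a dependency overriding both run-environment hooks (the package and variable names are invented):

.. code-block:: python

   from spack.package import *


   class Mylib(Package):
       """Hypothetical package showing the two run-environment hooks."""

       def setup_run_environment(self, env):
           # Called once for this package; depends only on the package itself.
           env.set("MYLIB_ROOT", self.prefix)

       def setup_dependent_run_environment(self, env, dependent_spec):
           # Called once per dependent; may use the dependent's spec.
           env.prepend_path("MYLIB_PLUGIN_PATH", dependent_spec.prefix.lib)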
--------------------------------
Setting package module variables
--------------------------------
Apart from modifying environment variables of the dependent package, you can also define Python
variables to be used by the dependent. This is done by implementing
:meth:`setup_dependent_package <spack.package_base.PackageBase.setup_dependent_package>`. An
example of this can be found in the ``Python`` package:
.. literalinclude:: _spack_root/var/spack/repos/builtin/packages/python/package.py
:pyobject: Python.setup_dependent_package
:linenos:
This allows Python packages to directly use these variables:
.. code-block:: python
def install(self, spec, prefix):
...
install("script.py", python_platlib)
.. note::
We recommend using ``setup_dependent_package`` sparingly, as it is not always clear where
global variables are coming from when editing a ``package.py`` file.
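For completeness, a minimal sketch of such an override (the ``mylib_include`` name is invented for illustration):

.. code-block:: python

   def setup_dependent_package(self, module, dependent_spec):
       # Expose a module-level variable to the dependent's package.py.
       module.mylib_include = self.prefix.include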
-----
Views
-----
@@ -2937,6 +2974,33 @@ The ``provides("mpi")`` call tells Spack that the ``mpich`` package
can be used to satisfy the dependency of any package that
``depends_on("mpi")``.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Providing multiple virtuals simultaneously
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Packages can provide more than one virtual dependency. Sometimes, due to implementation details,
there are subsets of those virtuals that need to be provided together by the same package.
A well-known example is ``openblas``, which provides both the ``lapack`` and ``blas`` API in a single ``libopenblas``
library. A package that needs ``lapack`` and ``blas`` must either use ``openblas`` to provide both, or not use
``openblas`` at all. It cannot pick one or the other.
To express this constraint in a package, the two virtual dependencies must be listed in the same ``provides`` directive:
.. code-block:: python
provides('blas', 'lapack')
This makes it impossible to select ``openblas`` as a provider for one of the two
virtual dependencies and not for the other. If you try to, Spack will report an error:
.. code-block:: console
$ spack spec netlib-scalapack ^[virtuals=lapack] openblas ^[virtuals=blas] atlas
==> Error: concretization failed for the following reasons:
1. Package 'openblas' needs to provide both 'lapack' and 'blas' together, but provides only 'lapack'
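In package terms, the relevant ``openblas`` excerpt would look roughly like the following sketch (not the real recipe):

.. code-block:: python

   class Openblas(MakefilePackage):
       # blas and lapack must be satisfied by this same node, or not at all.
       provides("blas", "lapack")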
^^^^^^^^^^^^^^^^^^^^
Versioned Interfaces
^^^^^^^^^^^^^^^^^^^^
@@ -3501,7 +3565,7 @@ need to override methods like ``configure_args``:
def configure_args(self):
args = ["--enable-cxx"] + self.enable_or_disable("libs")
if "libs=static" in self.spec:
if self.spec.satisfies("libs=static"):
args.append("--with-pic")
return args
@@ -3738,7 +3802,7 @@ Similarly, ``spack install example +feature build_system=autotools`` will pick
the ``AutotoolsBuilder`` and invoke ``./configure --with-my-feature``.
Dependencies are always specified in the package class. When some dependencies
depend on the choice of the build system, it is possible to use when conditions as
usual:
.. code-block:: python
@@ -3756,7 +3820,7 @@ usual:
depends_on("cmake@3.18:", when="@2.0:", type="build")
depends_on("cmake@3:", type="build")
# Specify extra build dependencies used only in the configure script
with when("build_system=autotools"):
depends_on("perl", type="build")
depends_on("pkgconfig", type="build")
@@ -4364,7 +4428,7 @@ for supported features, for instance:
.. code-block:: python
if "avx512" in spec.target:
if spec.satisfies("target=avx512"):
args.append("--with-avx512")
The snippet above will append the ``--with-avx512`` item to a list of arguments only if the corresponding
@@ -6804,25 +6868,58 @@ the adapter role is to "emulate" a method resolution order like the one represen
Specifying License Information
------------------------------
A significant portion of software that Spack packages is open source. Most open
source software is released under one or more common open source licenses.
Specifying the specific license that a package is released under in a project's
`package.py` is good practice. To specify a license, find the SPDX identifier for
a project and then add it using the license directive:
Most of the software in Spack is open source, and most open source software is released
under one or more `common open source licenses <https://opensource.org/licenses/>`_.
Specifying the license that a package is released under in a project's
`package.py` is good practice. To specify a license, find the `SPDX identifier
<https://spdx.org/licenses/>`_ for a project and then add it using the license
directive:
.. code-block:: python
license("<SPDX Identifier HERE>")
For example, the SPDX ID for the Apache Software License, version 2.0 is ``Apache-2.0``,
so you'd write:
.. code-block:: python
license("Apache-2.0")
Or, for a dual-licensed package like Spack, you would use an `SPDX Expression
<https://spdx.github.io/spdx-spec/v2-draft/SPDX-license-expressions/>`_ with both of its
licenses:
.. code-block:: python
license("Apache-2.0 OR MIT")
Note that specifying a license without a when clause makes it apply to all
versions and variants of the package, which might not actually be the case.
For example, a project might have switched licenses at some point or have
certain build configurations that include files that are licensed differently.
To account for this, you can specify when licenses should be applied. For
example, to specify that a specific license identifier should only apply
to versions up to and including 1.5, you could write the following directive:
Spack itself used to be under the ``LGPL-2.1`` license, until it was relicensed
in version ``0.12`` in 2018.
You can specify when a ``license()`` directive applies using a ``when=``
clause, just like other directives. For example, to specify that a specific
license identifier should only apply to versions up to ``0.11``, but another
license should apply for later versions, you could write:
.. code-block:: python
license("...", when="@:1.5")
license("LGPL-2.1", when="@:0.11")
license("Apache-2.0 OR MIT", when="@0.12:")
Note that unlike for most other directives, the ``when=`` constraints in the
``license()`` directive can't intersect. Spack needs to be able to resolve
exactly one license identifier expression for any given version. To specify
*multiple* licenses, use SPDX expressions and operators as above. The operators
you probably care most about are:
* ``OR``: user chooses one license to adhere to; and
* ``AND``: user has to adhere to all the licenses.
You may also care about `license exceptions
<https://spdx.org/licenses/exceptions-index.html>`_ that use the ``WITH`` operator,
e.g. ``Apache-2.0 WITH LLVM-exception``.
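Putting the operators together, a single directive can combine an exception with a choice of licenses (illustrative only):

.. code-block:: python

   # User may follow Apache-2.0 (with the LLVM exception) or MIT.
   license("Apache-2.0 WITH LLVM-exception OR MIT")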

View File

@@ -6,8 +6,8 @@ python-levenshtein==0.23.0
docutils==0.18.1
pygments==2.16.1
urllib3==2.0.7
pytest==7.4.2
pytest==7.4.3
isort==5.12.0
black==23.9.1
black==23.10.1
flake8==6.1.0
mypy==1.6.1

View File

@@ -211,6 +211,7 @@ def info(message, *args, **kwargs):
stream.write(line + "\n")
else:
stream.write(indent + _output_filter(str(arg)) + "\n")
stream.flush()
def verbose(message, *args, **kwargs):

View File

@@ -307,10 +307,17 @@ def _check_build_test_callbacks(pkgs, error_cls):
@package_directives
def _check_patch_urls(pkgs, error_cls):
"""Ensure that patches fetched from GitHub have stable sha256 hashes."""
"""Ensure that patches fetched from GitHub and GitLab have stable sha256
hashes."""
github_patch_url_re = (
r"^https?://(?:patch-diff\.)?github(?:usercontent)?\.com/"
".+/.+/(?:commit|pull)/[a-fA-F0-9]*.(?:patch|diff)"
r".+/.+/(?:commit|pull)/[a-fA-F0-9]+\.(?:patch|diff)"
)
# Only .diff URLs have stable/full hashes:
# https://forum.gitlab.com/t/patches-with-full-index/29313
gitlab_patch_url_re = (
r"^https?://(?:.+)?gitlab(?:.+)/"
r".+/.+/-/(?:commit|merge_requests)/[a-fA-F0-9]+\.(?:patch|diff)"
)
errors = []
@@ -321,19 +328,27 @@ def _check_patch_urls(pkgs, error_cls):
if not isinstance(patch, spack.patch.UrlPatch):
continue
if not re.match(github_patch_url_re, patch.url):
continue
full_index_arg = "?full_index=1"
if not patch.url.endswith(full_index_arg):
errors.append(
error_cls(
"patch URL in package {0} must end with {1}".format(
pkg_cls.name, full_index_arg
),
[patch.url],
if re.match(github_patch_url_re, patch.url):
full_index_arg = "?full_index=1"
if not patch.url.endswith(full_index_arg):
errors.append(
error_cls(
"patch URL in package {0} must end with {1}".format(
pkg_cls.name, full_index_arg
),
[patch.url],
)
)
elif re.match(gitlab_patch_url_re, patch.url):
if not patch.url.endswith(".diff"):
errors.append(
error_cls(
"patch URL in package {0} must end with .diff".format(
pkg_cls.name
),
[patch.url],
)
)
)
return errors
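Sketched as hypothetical ``patch`` directives, the URLs this audit now accepts look like the following (organization, repository, commit, and hashes are placeholders):

.. code-block:: python

   # GitHub: the query string requests full-index diffs, whose sha256 is stable.
   patch(
       "https://github.com/org/repo/commit/abc123.patch?full_index=1",
       sha256="0000000000000000000000000000000000000000000000000000000000000000",
   )

   # GitLab: only .diff URLs have stable full-index hashes.
   patch(
       "https://gitlab.com/org/repo/-/commit/abc123.diff",
       sha256="0000000000000000000000000000000000000000000000000000000000000000",
   )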

View File

@@ -5,11 +5,13 @@
import codecs
import collections
import errno
import hashlib
import io
import itertools
import json
import os
import pathlib
import re
import shutil
import sys
@@ -23,7 +25,7 @@
import warnings
from contextlib import closing, contextmanager
from gzip import GzipFile
from typing import Dict, List, NamedTuple, Optional, Tuple, Union
from typing import Dict, List, NamedTuple, Optional, Set, Tuple
from urllib.error import HTTPError, URLError
import llnl.util.filesystem as fsys
@@ -31,6 +33,7 @@
import llnl.util.tty as tty
from llnl.util.filesystem import BaseDirectoryVisitor, mkdirp, visit_directory_tree
import spack.caches
import spack.cmd
import spack.config as config
import spack.database as spack_db
@@ -38,6 +41,9 @@
import spack.hooks
import spack.hooks.sbang
import spack.mirror
import spack.oci.image
import spack.oci.oci
import spack.oci.opener
import spack.platforms
import spack.relocate as relocate
import spack.repo
@@ -47,6 +53,7 @@
import spack.util.crypto
import spack.util.file_cache as file_cache
import spack.util.gpg
import spack.util.path
import spack.util.spack_json as sjson
import spack.util.spack_yaml as syaml
import spack.util.timer as timer
@@ -124,25 +131,25 @@ class BinaryCacheIndex:
mean we should have paid the price to update the cache earlier?
"""
def __init__(self, cache_root):
self._index_cache_root = cache_root
def __init__(self, cache_root: Optional[str] = None):
self._index_cache_root: str = cache_root or binary_index_location()
# the key associated with the serialized _local_index_cache
self._index_contents_key = "contents.json"
# a FileCache instance storing copies of remote binary cache indices
self._index_file_cache = None
self._index_file_cache: Optional[file_cache.FileCache] = None
# stores a map of mirror URL to index hash and cache key (index path)
self._local_index_cache = None
self._local_index_cache: Optional[dict] = None
# hashes of remote indices already ingested into the concrete spec
# cache (_mirrors_for_spec)
self._specs_already_associated = set()
self._specs_already_associated: Set[str] = set()
# mapping from mirror urls to the time.time() of the last index fetch and a bool indicating
# whether the fetch succeeded or not.
self._last_fetch_times = {}
self._last_fetch_times: Dict[str, float] = {}
# _mirrors_for_spec is a dictionary mapping DAG hashes to lists of
# entries indicating mirrors where that concrete spec can be found.
@@ -152,7 +159,7 @@ def __init__(self, cache_root):
# - the concrete spec itself, keyed by ``spec`` (including the
# full hash, since the dag hash may match but we want to
# use the updated source if available)
self._mirrors_for_spec = {}
self._mirrors_for_spec: Dict[str, dict] = {}
def _init_local_index_cache(self):
if not self._index_file_cache:
@@ -471,14 +478,18 @@ def _fetch_and_cache_index(self, mirror_url, cache_entry={}):
FetchIndexError
"""
# TODO: get rid of this request, handle 404 better
if not web_util.url_exists(
scheme = urllib.parse.urlparse(mirror_url).scheme
if scheme != "oci" and not web_util.url_exists(
url_util.join(mirror_url, _build_cache_relative_path, "index.json")
):
return False
etag = cache_entry.get("etag", None)
if etag:
fetcher = EtagIndexFetcher(mirror_url, etag)
if scheme == "oci":
# TODO: Actually etag and OCI are not mutually exclusive...
fetcher = OCIIndexFetcher(mirror_url, cache_entry.get("index_hash", None))
elif cache_entry.get("etag"):
fetcher = EtagIndexFetcher(mirror_url, cache_entry["etag"])
else:
fetcher = DefaultIndexFetcher(
mirror_url, local_hash=cache_entry.get("index_hash", None)
@@ -519,15 +530,8 @@ def binary_index_location():
return spack.util.path.canonicalize_path(cache_root)
def _binary_index():
"""Get the singleton store instance."""
return BinaryCacheIndex(binary_index_location())
#: Singleton binary_index instance
binary_index: Union[BinaryCacheIndex, llnl.util.lang.Singleton] = llnl.util.lang.Singleton(
_binary_index
)
#: Default binary cache index instance
BINARY_INDEX: BinaryCacheIndex = llnl.util.lang.Singleton(BinaryCacheIndex) # type: ignore
class NoOverwriteException(spack.error.SpackError):
@@ -622,21 +626,14 @@ def build_cache_prefix(prefix):
def buildinfo_file_name(prefix):
"""
Filename of the binary package meta-data file
"""
return os.path.join(prefix, ".spack/binary_distribution")
"""Filename of the binary package meta-data file"""
return os.path.join(prefix, ".spack", "binary_distribution")
def read_buildinfo_file(prefix):
"""
Read buildinfo file
"""
filename = buildinfo_file_name(prefix)
with open(filename, "r") as inputfile:
content = inputfile.read()
buildinfo = syaml.load(content)
return buildinfo
"""Read buildinfo file"""
with open(buildinfo_file_name(prefix), "r") as f:
return syaml.load(f)
class BuildManifestVisitor(BaseDirectoryVisitor):
@@ -819,18 +816,6 @@ def tarball_path_name(spec, ext):
return os.path.join(tarball_directory_name(spec), tarball_name(spec, ext))
def checksum_tarball(file):
# calculate sha256 hash of tar file
block_size = 65536
hasher = hashlib.sha256()
with open(file, "rb") as tfile:
buf = tfile.read(block_size)
while len(buf) > 0:
hasher.update(buf)
buf = tfile.read(block_size)
return hasher.hexdigest()
def select_signing_key(key=None):
if key is None:
keys = spack.util.gpg.signing_keys()
@@ -1147,14 +1132,17 @@ def gzip_compressed_tarfile(path):
# compresslevel=6 gzip default: llvm takes 4mins, roughly 2.1GB
# compresslevel=9 python default: llvm takes 12mins, roughly 2.1GB
# So we follow gzip.
with open(path, "wb") as fileobj, closing(
GzipFile(filename="", mode="wb", compresslevel=6, mtime=0, fileobj=fileobj)
) as gzip_file, tarfile.TarFile(name="", mode="w", fileobj=gzip_file) as tar:
yield tar
with open(path, "wb") as f, ChecksumWriter(f) as inner_checksum, closing(
GzipFile(filename="", mode="wb", compresslevel=6, mtime=0, fileobj=inner_checksum)
) as gzip_file, ChecksumWriter(gzip_file) as outer_checksum, tarfile.TarFile(
name="", mode="w", fileobj=outer_checksum
) as tar:
yield tar, inner_checksum, outer_checksum
def _tarinfo_name(p: str):
return p.lstrip("/")
def _tarinfo_name(absolute_path: str, *, _path=pathlib.PurePath) -> str:
"""Compute tarfile entry name as the relative path from the (system) root."""
return _path(*_path(absolute_path).parts[1:]).as_posix()
def tarfile_of_spec_prefix(tar: tarfile.TarFile, prefix: str) -> None:
@@ -1234,8 +1222,88 @@ def tarfile_of_spec_prefix(tar: tarfile.TarFile, prefix: str) -> None:
dir_stack.extend(reversed(new_dirs)) # we pop, so reverse to stay alphabetical
class ChecksumWriter(io.BufferedIOBase):
"""Checksum writer computes a checksum while writing to a file."""
myfileobj = None
def __init__(self, fileobj, algorithm=hashlib.sha256):
self.fileobj = fileobj
self.hasher = algorithm()
self.length = 0
def hexdigest(self):
return self.hasher.hexdigest()
def write(self, data):
if isinstance(data, (bytes, bytearray)):
length = len(data)
else:
data = memoryview(data)
length = data.nbytes
if length > 0:
self.fileobj.write(data)
self.hasher.update(data)
self.length += length
return length
def read(self, size=-1):
raise OSError(errno.EBADF, "read() on write-only object")
def read1(self, size=-1):
raise OSError(errno.EBADF, "read1() on write-only object")
def peek(self, n):
raise OSError(errno.EBADF, "peek() on write-only object")
@property
def closed(self):
return self.fileobj is None
def close(self):
fileobj = self.fileobj
if fileobj is None:
return
self.fileobj.close()
self.fileobj = None
def flush(self):
self.fileobj.flush()
def fileno(self):
return self.fileobj.fileno()
def rewind(self):
raise OSError("Can't rewind while computing checksum")
def readable(self):
return False
def writable(self):
return True
def seekable(self):
return True
def tell(self):
return self.fileobj.tell()
def seek(self, offset, whence=io.SEEK_SET):
# In principle forward seek is possible with b"0" padding,
# but this is not implemented.
if offset == 0 and whence == io.SEEK_CUR:
return
raise OSError("Can't seek while computing checksum")
def readline(self, size=-1):
raise OSError(errno.EBADF, "readline() on write-only object")
def _do_create_tarball(tarfile_path: str, binaries_dir: str, buildinfo: dict):
with gzip_compressed_tarfile(tarfile_path) as tar:
with gzip_compressed_tarfile(tarfile_path) as (tar, inner_checksum, outer_checksum):
# Tarball the install prefix
tarfile_of_spec_prefix(tar, binaries_dir)
@@ -1247,6 +1315,8 @@ def _do_create_tarball(tarfile_path: str, binaries_dir: str, buildinfo: dict):
tarinfo.mode = 0o644
tar.addfile(tarinfo, io.BytesIO(bstring))
return inner_checksum.hexdigest(), outer_checksum.hexdigest()
class PushOptions(NamedTuple):
#: Overwrite existing tarball/metadata files in buildcache
@@ -1322,13 +1392,9 @@ def _build_tarball_in_stage_dir(spec: Spec, out_url: str, stage_dir: str, option
# create info for later relocation and create tar
buildinfo = get_buildinfo_dict(spec)
_do_create_tarball(tarfile_path, binaries_dir, buildinfo)
# get the sha256 checksum of the tarball
checksum = checksum_tarball(tarfile_path)
checksum, _ = _do_create_tarball(tarfile_path, binaries_dir, buildinfo)
# add sha256 checksum to spec.json
with open(spec_file, "r") as inputfile:
content = inputfile.read()
if spec_file.endswith(".json"):
@@ -1371,10 +1437,21 @@ def _build_tarball_in_stage_dir(spec: Spec, out_url: str, stage_dir: str, option
return None
class NotInstalledError(spack.error.SpackError):
"""Raised when a spec is not installed but picked to be packaged."""
def __init__(self, specs: List[Spec]):
super().__init__(
"Cannot push non-installed packages",
", ".join(s.cformat("{name}{@version}{/hash:7}") for s in specs),
)
def specs_to_be_packaged(
specs: List[Spec], root: bool = True, dependencies: bool = True
) -> List[Spec]:
"""Return the list of nodes to be packaged, given a list of specs.
Raises NotInstalledError if a spec is not installed but picked to be packaged.
Args:
specs: list of root specs to be processed
@@ -1382,19 +1459,35 @@ def specs_to_be_packaged(
dependencies: include the dependencies of each
spec in the nodes
"""
if not root and not dependencies:
return []
elif dependencies:
nodes = traverse.traverse_nodes(specs, root=root, deptype="all")
else:
nodes = set(specs)
# Limit to installed non-externals.
packageable = lambda n: not n.external and n.installed
# Mass install check
# Filter packageable roots
with spack.store.STORE.db.read_transaction():
return list(filter(packageable, nodes))
if root:
# Error on uninstalled roots, when roots are requested
uninstalled_roots = list(s for s in specs if not s.installed)
if uninstalled_roots:
raise NotInstalledError(uninstalled_roots)
roots = specs
else:
roots = []
if dependencies:
# Error on uninstalled deps, when deps are requested
deps = list(
traverse.traverse_nodes(
specs, deptype="all", order="breadth", root=False, key=traverse.by_dag_hash
)
)
uninstalled_deps = list(s for s in deps if not s.installed)
if uninstalled_deps:
raise NotInstalledError(uninstalled_deps)
else:
deps = []
return [s for s in itertools.chain(roots, deps) if not s.external]
def push(spec: Spec, mirror_url: str, options: PushOptions):
@@ -1502,8 +1595,6 @@ def download_tarball(spec, unsigned=False, mirrors_for_spec=None):
tarball = tarball_path_name(spec, ".spack")
specfile_prefix = tarball_name(spec, ".spec")
mirrors_to_try = []
# Note on try_first and try_next:
# mirrors_for_spec most likely came from spack caching remote
# mirror indices locally and adding their specs to a local data
@@ -1516,63 +1607,116 @@ def download_tarball(spec, unsigned=False, mirrors_for_spec=None):
try_first = [i["mirror_url"] for i in mirrors_for_spec] if mirrors_for_spec else []
try_next = [i.fetch_url for i in configured_mirrors if i.fetch_url not in try_first]
for url in try_first + try_next:
mirrors_to_try.append(
{
"specfile": url_util.join(url, _build_cache_relative_path, specfile_prefix),
"spackfile": url_util.join(url, _build_cache_relative_path, tarball),
}
)
mirrors = try_first + try_next
tried_to_verify_sigs = []
# Assumes we care more about finding a spec file by preferred ext
# than by mirror priority. This can be made less complicated as
# we remove support for deprecated spec formats and buildcache layouts.
for ext in ["json.sig", "json"]:
for mirror_to_try in mirrors_to_try:
specfile_url = "{0}.{1}".format(mirror_to_try["specfile"], ext)
spackfile_url = mirror_to_try["spackfile"]
local_specfile_stage = try_fetch(specfile_url)
if local_specfile_stage:
local_specfile_path = local_specfile_stage.save_filename
signature_verified = False
for try_signed in (True, False):
for mirror in mirrors:
# If it's an OCI index, do things differently, since we cannot compose URLs.
parsed = urllib.parse.urlparse(mirror)
if ext.endswith(".sig") and not unsigned:
# If we found a signed specfile at the root, try to verify
# the signature immediately. We will not download the
# tarball if we could not verify the signature.
tried_to_verify_sigs.append(specfile_url)
signature_verified = try_verify(local_specfile_path)
if not signature_verified:
tty.warn("Failed to verify: {0}".format(specfile_url))
# TODO: refactor this to some "nice" place.
if parsed.scheme == "oci":
ref = spack.oci.image.ImageReference.from_string(mirror[len("oci://") :]).with_tag(
spack.oci.image.default_tag(spec)
)
if unsigned or signature_verified or not ext.endswith(".sig"):
# We will download the tarball in one of three cases:
# 1. user asked for --no-check-signature
# 2. user didn't ask for --no-check-signature, but we
# found a spec.json.sig and verified the signature already
# 3. neither of the first two cases are true, but this file
# is *not* a signed json (not a spec.json.sig file). That
# means we already looked at all the mirrors and either didn't
# find any .sig files or couldn't verify any of them. But it
# is still possible to find an old style binary package where
# the signature is a detached .asc file in the outer archive
# of the tarball, and in that case, the only way to know is to
# download the tarball. This is a deprecated use case, so if
# something goes wrong during the extraction process (can't
# verify signature, checksum doesn't match) we will fail at
# that point instead of trying to download more tarballs from
# the remaining mirrors, looking for one we can use.
tarball_stage = try_fetch(spackfile_url)
if tarball_stage:
return {
"tarball_stage": tarball_stage,
"specfile_stage": local_specfile_stage,
"signature_verified": signature_verified,
}
# Fetch the manifest
try:
response = spack.oci.opener.urlopen(
urllib.request.Request(
url=ref.manifest_url(),
headers={"Accept": "application/vnd.oci.image.manifest.v1+json"},
)
)
except Exception:
continue
local_specfile_stage.destroy()
# Download the config (= spec.json) and the relevant tarball
try:
manifest = json.loads(response.read())
spec_digest = spack.oci.image.Digest.from_string(manifest["config"]["digest"])
tarball_digest = spack.oci.image.Digest.from_string(
manifest["layers"][-1]["digest"]
)
except Exception:
continue
with spack.oci.oci.make_stage(
ref.blob_url(spec_digest), spec_digest, keep=True
) as local_specfile_stage:
try:
local_specfile_stage.fetch()
local_specfile_stage.check()
except Exception:
continue
local_specfile_stage.cache_local()
with spack.oci.oci.make_stage(
ref.blob_url(tarball_digest), tarball_digest, keep=True
) as tarball_stage:
try:
tarball_stage.fetch()
tarball_stage.check()
except Exception:
continue
tarball_stage.cache_local()
return {
"tarball_stage": tarball_stage,
"specfile_stage": local_specfile_stage,
"signature_verified": False,
}
else:
ext = "json.sig" if try_signed else "json"
specfile_path = url_util.join(mirror, _build_cache_relative_path, specfile_prefix)
specfile_url = f"{specfile_path}.{ext}"
spackfile_url = url_util.join(mirror, _build_cache_relative_path, tarball)
local_specfile_stage = try_fetch(specfile_url)
if local_specfile_stage:
local_specfile_path = local_specfile_stage.save_filename
signature_verified = False
if try_signed and not unsigned:
# If we found a signed specfile at the root, try to verify
# the signature immediately. We will not download the
# tarball if we could not verify the signature.
tried_to_verify_sigs.append(specfile_url)
signature_verified = try_verify(local_specfile_path)
if not signature_verified:
tty.warn("Failed to verify: {0}".format(specfile_url))
if unsigned or signature_verified or not try_signed:
# We will download the tarball in one of three cases:
# 1. user asked for --no-check-signature
# 2. user didn't ask for --no-check-signature, but we
# found a spec.json.sig and verified the signature already
# 3. neither of the first two cases are true, but this file
# is *not* a signed json (not a spec.json.sig file). That
# means we already looked at all the mirrors and either didn't
# find any .sig files or couldn't verify any of them. But it
# is still possible to find an old style binary package where
# the signature is a detached .asc file in the outer archive
# of the tarball, and in that case, the only way to know is to
# download the tarball. This is a deprecated use case, so if
# something goes wrong during the extraction process (can't
# verify signature, checksum doesn't match) we will fail at
# that point instead of trying to download more tarballs from
# the remaining mirrors, looking for one we can use.
tarball_stage = try_fetch(spackfile_url)
if tarball_stage:
return {
"tarball_stage": tarball_stage,
"specfile_stage": local_specfile_stage,
"signature_verified": signature_verified,
}
local_specfile_stage.destroy()
# Falling through the nested loops means we exhaustively searched
# for all known kinds of spec files on all mirrors and did not find
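To make the new loop order concrete, a small standalone sketch of the candidates it tries (URLs are illustrative; signed spec files on every mirror come before unsigned ones):
mirrors = ["https://a.example", "oci://ghcr.io/org/cache"]
for try_signed in (True, False):
    for mirror in mirrors:
        if mirror.startswith("oci://"):
            # OCI registries are queried via manifest tags, not composed URLs.
            print(f"{mirror}: manifest lookup by tag")
        else:
            ext = "json.sig" if try_signed else "json"
            print(f"{mirror}/build_cache/<specfile>.{ext}")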
@@ -1805,7 +1949,7 @@ def _extract_inner_tarball(spec, filename, extract_to, unsigned, remote_checksum
)
# compute the sha256 checksum of the tarball
local_checksum = checksum_tarball(tarfile_path)
local_checksum = spack.util.crypto.checksum(hashlib.sha256, tarfile_path)
expected = remote_checksum["hash"]
# if the checksums don't match don't install
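The removed checksum_tarball helper gives way to spack.util.crypto.checksum(hashlib.sha256, path); a minimal stand-in with the assumed semantics (chunked reads, hex digest) would be:
import hashlib

def file_checksum(hash_factory, path, blocksize=2**20):
    # Hash the file in fixed-size chunks to avoid loading it into memory at once.
    hasher = hash_factory()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(blocksize), b""):
            hasher.update(chunk)
    return hasher.hexdigest()

# e.g. file_checksum(hashlib.sha256, tarfile_path) == expected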
@@ -1866,6 +2010,7 @@ def extract_tarball(spec, download_result, unsigned=False, force=False, timer=ti
spec_dict = sjson.load(content)
bchecksum = spec_dict["binary_cache_checksum"]
filename = download_result["tarball_stage"].save_filename
signature_verified = download_result["signature_verified"]
tmpdir = None
@@ -1898,7 +2043,7 @@ def extract_tarball(spec, download_result, unsigned=False, force=False, timer=ti
)
# compute the sha256 checksum of the tarball
local_checksum = checksum_tarball(tarfile_path)
local_checksum = spack.util.crypto.checksum(hashlib.sha256, tarfile_path)
expected = bchecksum["hash"]
# if the checksums don't match don't install
@@ -2104,7 +2249,7 @@ def get_mirrors_for_spec(spec=None, mirrors_to_check=None, index_only=False):
tty.debug("No Spack mirrors are currently configured")
return {}
results = binary_index.find_built_spec(spec, mirrors_to_check=mirrors_to_check)
results = BINARY_INDEX.find_built_spec(spec, mirrors_to_check=mirrors_to_check)
# The index may be out-of-date. If we aren't only considering indices, try
# to fetch directly since we know where the file should be.
@@ -2113,7 +2258,7 @@ def get_mirrors_for_spec(spec=None, mirrors_to_check=None, index_only=False):
# We found a spec by the direct fetch approach, we might as well
# add it to our mapping.
if results:
binary_index.update_spec(spec, results)
BINARY_INDEX.update_spec(spec, results)
return results
@@ -2129,12 +2274,12 @@ def update_cache_and_get_specs():
Throws:
FetchCacheError
"""
binary_index.update()
return binary_index.get_all_built_specs()
BINARY_INDEX.update()
return BINARY_INDEX.get_all_built_specs()
def clear_spec_cache():
binary_index.clear()
BINARY_INDEX.clear()
def get_keys(install=False, trust=False, force=False, mirrors=None):
@@ -2457,7 +2602,7 @@ def get_remote_hash(self):
return None
return remote_hash.decode("utf-8")
def conditional_fetch(self):
def conditional_fetch(self) -> FetchIndexResult:
# Do an intermediate fetch for the hash
# and a conditional fetch for the contents
@@ -2471,12 +2616,12 @@ def conditional_fetch(self):
try:
response = self.urlopen(urllib.request.Request(url_index, headers=self.headers))
except urllib.error.URLError as e:
raise FetchIndexError("Could not fetch index from {}".format(url_index), e)
raise FetchIndexError("Could not fetch index from {}".format(url_index), e) from e
try:
result = codecs.getreader("utf-8")(response).read()
except ValueError as e:
return FetchCacheError("Remote index {} is invalid".format(url_index), e)
raise FetchIndexError("Remote index {} is invalid".format(url_index), e) from e
computed_hash = compute_hash(result)
@@ -2508,7 +2653,7 @@ def __init__(self, url, etag, urlopen=web_util.urlopen):
self.etag = etag
self.urlopen = urlopen
def conditional_fetch(self):
def conditional_fetch(self) -> FetchIndexResult:
# Just do a conditional fetch immediately
url = url_util.join(self.url, _build_cache_relative_path, "index.json")
headers = {
@@ -2539,3 +2684,59 @@ def conditional_fetch(self):
data=result,
fresh=False,
)
class OCIIndexFetcher:
def __init__(self, url: str, local_hash, urlopen=None) -> None:
self.local_hash = local_hash
# Remove oci:// prefix
assert url.startswith("oci://")
self.ref = spack.oci.image.ImageReference.from_string(url[6:])
self.urlopen = urlopen or spack.oci.opener.urlopen
def conditional_fetch(self) -> FetchIndexResult:
"""Download an index from an OCI registry type mirror."""
url_manifest = self.ref.with_tag(spack.oci.image.default_index_tag).manifest_url()
try:
response = self.urlopen(
urllib.request.Request(
url=url_manifest,
headers={"Accept": "application/vnd.oci.image.manifest.v1+json"},
)
)
except urllib.error.URLError as e:
raise FetchIndexError(
"Could not fetch manifest from {}".format(url_manifest), e
) from e
try:
manifest = json.loads(response.read())
except Exception as e:
raise FetchIndexError("Remote index {} is invalid".format(url_manifest), e) from e
# Get first blob hash, which should be the index.json
try:
index_digest = spack.oci.image.Digest.from_string(manifest["layers"][0]["digest"])
except Exception as e:
raise FetchIndexError("Remote index {} is invalid".format(url_manifest), e) from e
# Fresh?
if index_digest.digest == self.local_hash:
return FetchIndexResult(etag=None, hash=None, data=None, fresh=True)
# Otherwise fetch the blob / index.json
response = self.urlopen(
urllib.request.Request(
url=self.ref.blob_url(index_digest),
headers={"Accept": "application/vnd.oci.image.layer.v1.tar+gzip"},
)
)
result = codecs.getreader("utf-8")(response).read()
# Make sure the blob we download has the advertised hash
if compute_hash(result) != index_digest.digest:
raise FetchIndexError(f"Remote index {url_manifest} is invalid")
return FetchIndexResult(etag=None, hash=index_digest.digest, data=result, fresh=False)
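Since OCI blobs are content-addressed, the index needs no ETag: digest equality is the freshness check. A compact sketch of the same idea (compute_hash is assumed to produce a sha256 hex digest):
import hashlib

def is_fresh(local_hash: str, remote_digest: str) -> bool:
    # Equal digests mean the cached index.json is already current.
    return local_hash == remote_digest

def blob_matches(data: bytes, remote_digest: str) -> bool:
    # The downloaded blob must match its advertised digest.
    return hashlib.sha256(data).hexdigest() == remote_digest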

View File

@@ -214,7 +214,7 @@ def _install_and_test(
with spack.config.override(self.mirror_scope):
# This index is currently needed to get the compiler used to build some
# specs that we know by dag hash.
spack.binary_distribution.binary_index.regenerate_spec_cache()
spack.binary_distribution.BINARY_INDEX.regenerate_spec_cache()
index = spack.binary_distribution.update_cache_and_get_specs()
if not index:
@@ -291,6 +291,10 @@ def try_import(self, module: str, abstract_spec_str: str) -> bool:
with spack_python_interpreter():
# Add hint to use frontend operating system on Cray
concrete_spec = spack.spec.Spec(abstract_spec_str + " ^" + spec_for_current_python())
# This is needed to help the old concretizer take the `setuptools` dependency
# only when bootstrapping from sources on Python 3.12
if spec_for_current_python() == "python@3.12":
concrete_spec.constrain("+force_setuptools")
if module == "clingo":
# TODO: remove when the old concretizer is deprecated # pylint: disable=fixme

View File

@@ -752,19 +752,13 @@ def setup_package(pkg, dirty, context: Context = Context.BUILD):
target = platform.target(pkg.spec.architecture.target)
platform.setup_platform_environment(pkg, env_mods)
if context == Context.BUILD:
tty.debug("setup_package: setup build environment for root")
builder = spack.builder.create(pkg)
builder.setup_build_environment(env_mods)
if (not dirty) and (not env_mods.is_unset("CPATH")):
tty.debug(
"A dependency has updated CPATH, this may lead pkg-"
"config to assume that the package is part of the system"
" includes and omit it when invoked with '--cflags'."
)
elif context == Context.TEST:
if context == Context.TEST:
env_mods.prepend_path("PATH", ".")
elif context == Context.BUILD and not dirty and not env_mods.is_unset("CPATH"):
tty.debug(
"A dependency has updated CPATH, this may lead pkg-config to assume that the package "
"is part of the system includes and omit it when invoked with '--cflags'."
)
# First apply the clean environment changes
env_base.apply_modifications()
@@ -953,8 +947,11 @@ def __init__(self, *specs: spack.spec.Spec, context: Context) -> None:
reversed(specs_with_type), lambda t: t[0].external
)
self.should_be_runnable = UseMode.BUILDTIME_DIRECT | UseMode.RUNTIME_EXECUTABLE
self.should_setup_run_env = UseMode.RUNTIME | UseMode.RUNTIME_EXECUTABLE
self.should_setup_run_env = (
UseMode.BUILDTIME_DIRECT | UseMode.RUNTIME | UseMode.RUNTIME_EXECUTABLE
)
self.should_setup_dependent_build_env = UseMode.BUILDTIME | UseMode.BUILDTIME_DIRECT
self.should_setup_build_env = UseMode.ROOT if context == Context.BUILD else UseMode(0)
if context == Context.RUN or context == Context.TEST:
self.should_be_runnable |= UseMode.ROOT
@@ -994,8 +991,9 @@ def get_env_modifications(self) -> EnvironmentModifications:
- Updating PATH for packages that are required at runtime
- Updating CMAKE_PREFIX_PATH and PKG_CONFIG_PATH so that their respective
tools can find Spack-built dependencies (when context=build)
- Running custom package environment modifications (setup_run_environment,
setup_dependent_build_environment, setup_dependent_run_environment)
- Running custom package environment modifications: setup_run_environment,
setup_dependent_run_environment, setup_build_environment,
setup_dependent_build_environment.
The (partial) order imposed on the specs is externals first, then topological
from leaf to root. That way externals cannot contribute search paths that would shadow
@@ -1008,16 +1006,17 @@ def get_env_modifications(self) -> EnvironmentModifications:
if self.should_setup_dependent_build_env & flag:
self._make_buildtime_detectable(dspec, env)
for spec in self.specs:
builder = spack.builder.create(pkg)
builder.setup_dependent_build_environment(env, spec)
for root in self.specs: # there is only one root in build context
spack.builder.create(pkg).setup_dependent_build_environment(env, root)
if self.should_setup_build_env & flag:
spack.builder.create(pkg).setup_build_environment(env)
if self.should_be_runnable & flag:
self._make_runnable(dspec, env)
if self.should_setup_run_env & flag:
# TODO: remove setup_dependent_run_environment...
for spec in dspec.dependents(deptype=dt.RUN):
for spec in dspec.dependents(deptype=dt.LINK | dt.RUN):
if id(spec) in self.nodes_in_subdag:
pkg.setup_dependent_run_environment(env, spec)
pkg.setup_run_environment(env)
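The should_setup_* members are bitmasks over per-spec UseMode flags, so each hook fires only for specs whose computed flag intersects the mask. A toy illustration (a simplified stand-in for Spack's UseMode):
import enum

class UseMode(enum.Flag):
    ROOT = enum.auto()
    BUILDTIME_DIRECT = enum.auto()
    BUILDTIME = enum.auto()
    RUNTIME = enum.auto()
    RUNTIME_EXECUTABLE = enum.auto()

should_setup_run_env = UseMode.BUILDTIME_DIRECT | UseMode.RUNTIME | UseMode.RUNTIME_EXECUTABLE
flag = UseMode.BUILDTIME_DIRECT  # computed per spec in the real code
if should_setup_run_env & flag:
    print("setup_run_environment would run for this spec")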

View File

@@ -6,7 +6,6 @@
import os
import re
import shutil
import stat
from typing import Optional
import archspec
@@ -24,14 +23,29 @@
import spack.spec
import spack.store
from spack.directives import build_system, depends_on, extends, maintainers
from spack.error import NoHeadersError, NoLibrariesError, SpecError
from spack.error import NoHeadersError, NoLibrariesError
from spack.install_test import test_part
from spack.util.executable import Executable
from spack.version import Version
from ._checks import BaseBuilder, execute_install_time_tests
def _flatten_dict(dictionary):
"""Iterable that yields KEY=VALUE paths through a dictionary.
Args:
dictionary: Possibly nested dictionary of arbitrary keys and values.
Yields:
A single path through the dictionary.
"""
for key, item in dictionary.items():
if isinstance(item, dict):
# Recursive case
for value in _flatten_dict(item):
yield f"{key}={value}"
else:
# Base case
yield f"{key}={item}"
class PythonExtension(spack.package_base.PackageBase):
maintainers("adamjstewart")
@@ -353,51 +367,6 @@ def libs(self):
raise NoLibrariesError(msg.format(self.spec.name, root))
def fixup_shebangs(path: str, old_interpreter: bytes, new_interpreter: bytes):
# Recurse into the install prefix and fixup shebangs
exe = stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH
dirs = [path]
hardlinks = set()
while dirs:
with os.scandir(dirs.pop()) as entries:
for entry in entries:
if entry.is_dir(follow_symlinks=False):
dirs.append(entry.path)
continue
# Only consider files, not symlinks
if not entry.is_file(follow_symlinks=False):
continue
lstat = entry.stat(follow_symlinks=False)
# Skip over files that are not executable
if not (lstat.st_mode & exe):
continue
# Don't modify hardlinks more than once
if lstat.st_nlink > 1:
key = (lstat.st_ino, lstat.st_dev)
if key in hardlinks:
continue
hardlinks.add(key)
# Finally replace shebangs if any.
with open(entry.path, "rb+") as f:
contents = f.read(2)
if contents != b"#!":
continue
contents += f.read()
if old_interpreter not in contents:
continue
f.seek(0)
f.write(contents.replace(old_interpreter, new_interpreter))
f.truncate()
@spack.builder.builder("python_pip")
class PythonPipBuilder(BaseBuilder):
phases = ("install",)
@@ -409,7 +378,7 @@ class PythonPipBuilder(BaseBuilder):
legacy_long_methods = ("install_options", "global_options", "config_settings")
#: Names associated with package attributes in the old build-system format
legacy_attributes = ("build_directory", "install_time_test_callbacks")
legacy_attributes = ("archive_files", "build_directory", "install_time_test_callbacks")
#: Callback names for install-time test
install_time_test_callbacks = ["test"]
@@ -454,14 +423,15 @@ def build_directory(self):
def config_settings(self, spec, prefix):
"""Configuration settings to be passed to the PEP 517 build backend.
Requires pip 22.1 or newer.
Requires pip 22.1 or newer for keys that appear only a single time,
or pip 23.1 or newer if the same key appears multiple times.
Args:
spec (spack.spec.Spec): build spec
prefix (spack.util.prefix.Prefix): installation prefix
Returns:
dict: dictionary of KEY, VALUE settings
dict: Possibly nested dictionary of KEY, VALUE settings
"""
return {}
@@ -494,84 +464,32 @@ def global_options(self, spec, prefix):
"""
return []
@property
def _build_venv_path(self):
"""Return the path to the virtual environment used for building when
python is external."""
return os.path.join(self.spec.package.stage.path, "build_env")
@property
def _build_venv_python(self) -> Executable:
"""Return the Python executable in the build virtual environment when
python is external."""
return Executable(os.path.join(self._build_venv_path, "bin", "python"))
def install(self, pkg, spec, prefix):
"""Install everything from build directory."""
python: Executable = spec["python"].command
# Since we invoke pip with --no-build-isolation, we have to make sure that pip cannot
# execute hooks from user and system site-packages.
if spec["python"].external:
# There are no environment variables to disable the system site-packages, so we use a
# virtual environment instead. The downside of this approach is that pip produces
# incorrect shebangs that refer to the virtual environment, which we have to fix up.
python("-m", "venv", "--without-pip", self._build_venv_path)
pip = self._build_venv_python
else:
# For a Spack managed Python, system site-packages is empty/unused by design, so it
# suffices to disable user site-packages, for which there is an environment variable.
pip = python
pip.add_default_env("PYTHONNOUSERSITE", "1")
pip.add_default_arg("-m")
pip.add_default_arg("pip")
args = PythonPipBuilder.std_args(pkg) + ["--prefix=" + prefix]
for key, value in self.config_settings(spec, prefix).items():
if spec["py-pip"].version < Version("22.1"):
raise SpecError(
"'{}' package uses 'config_settings' which is only supported by "
"pip 22.1+. Add the following line to the package to fix this:\n\n"
' depends_on("py-pip@22.1:", type="build")'.format(spec.name)
)
args.append("--config-settings={}={}".format(key, value))
args = PythonPipBuilder.std_args(pkg) + [f"--prefix={prefix}"]
for setting in _flatten_dict(self.config_settings(spec, prefix)):
args.append(f"--config-settings={setting}")
for option in self.install_options(spec, prefix):
args.append("--install-option=" + option)
args.append(f"--install-option={option}")
for option in self.global_options(spec, prefix):
args.append("--global-option=" + option)
args.append(f"--global-option={option}")
if pkg.stage.archive_file and pkg.stage.archive_file.endswith(".whl"):
args.append(pkg.stage.archive_file)
else:
args.append(".")
pip = spec["python"].command
# Hide user packages, since we don't have build isolation. This is
# necessary because pip / setuptools may run hooks from arbitrary
# packages during the build. There is no equivalent variable to hide
# system packages, so this is not reliable for external Python.
pip.add_default_env("PYTHONNOUSERSITE", "1")
pip.add_default_arg("-m")
pip.add_default_arg("pip")
with fs.working_dir(self.build_directory):
pip(*args)
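Putting it together, a sketch of how the --config-settings arguments end up on the pip command line (the literal flags from PythonPipBuilder.std_args are an assumption here; the settings dict is made up):
args = ["install", "--no-build-isolation", "--prefix=/tmp/prefix"]  # assumed std_args flags
for setting in _flatten_dict({"build-args": {"jobs": 4}}):
    args.append(f"--config-settings={setting}")
args.append(".")
print(args)  # [..., '--config-settings=build-args=jobs=4', '.']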
@spack.builder.run_after("install")
def fixup_shebangs_pointing_to_build(self):
"""When installing a package using an external python, we use a temporary virtual
environment which improves build isolation. The downside is that pip produces shebangs
that point to the temporary virtual environment. This method fixes them up to point to the
underlying Python."""
# No need to fix up shebangs if no build venv was used. (This post-install function
# also runs when install was overridden in another package, so check the existence
# of the venv path.)
if not os.path.exists(self._build_venv_path):
return
# Use sys.executable, since that's what pip uses.
interpreter = (
lambda python: python("-c", "import sys; print(sys.executable)", output=str)
.strip()
.encode("utf-8")
)
fixup_shebangs(
path=self.spec.prefix,
old_interpreter=interpreter(self._build_venv_python),
new_interpreter=interpreter(self.spec["python"].command),
)
spack.builder.run_after("install")(execute_install_time_tests)

View File

@@ -25,6 +25,7 @@
import llnl.util.filesystem as fs
import llnl.util.tty as tty
from llnl.util.lang import memoized
from llnl.util.tty.color import cescape, colorize
import spack
import spack.binary_distribution as bindist
@@ -97,15 +98,6 @@ def _remove_reserved_tags(tags):
return [tag for tag in tags if tag not in SPACK_RESERVED_TAGS]
def _get_spec_string(spec):
format_elements = ["{name}{@version}", "{%compiler}"]
if spec.architecture:
format_elements.append(" {arch=architecture}")
return spec.format("".join(format_elements))
def _spec_deps_key(s):
return "{0}/{1}".format(s.name, s.dag_hash(7))
@@ -210,22 +202,22 @@ def _print_staging_summary(spec_labels, stages, mirrors_to_check, rebuild_decisi
tty.msg("Staging summary ([x] means a job needs rebuilding):")
for stage_index, stage in enumerate(stages):
tty.msg(" stage {0} ({1} jobs):".format(stage_index, len(stage)))
tty.msg(f" stage {stage_index} ({len(stage)} jobs):")
for job in sorted(stage):
for job in sorted(stage, key=lambda j: (not rebuild_decisions[j].rebuild, j)):
s = spec_labels[job]
rebuild = rebuild_decisions[job].rebuild
reason = rebuild_decisions[job].reason
reason_msg = " ({0})".format(reason) if reason else ""
tty.msg(
" [{1}] {0} -> {2}{3}".format(
job, "x" if rebuild else " ", _get_spec_string(s), reason_msg
)
)
if rebuild_decisions[job].mirrors:
tty.msg(" found on the following mirrors:")
for murl in rebuild_decisions[job].mirrors:
tty.msg(" {0}".format(murl))
reason_msg = f" ({reason})" if reason else ""
spec_fmt = "{name}{@version}{%compiler}{/hash:7}"
if rebuild_decisions[job].rebuild:
status = colorize("@*g{[x]} ")
msg = f" {status}{s.cformat(spec_fmt)}{reason_msg}"
else:
msg = f"{s.format(spec_fmt)}{reason_msg}"
if rebuild_decisions[job].mirrors:
msg += f" [{', '.join(rebuild_decisions[job].mirrors)}]"
msg = colorize(f" @K - {cescape(msg)}@.")
tty.msg(msg)
def _compute_spec_deps(spec_list):
@@ -932,7 +924,7 @@ def generate_gitlab_ci_yaml(
# Speed up staging by first fetching binary indices from all mirrors
try:
bindist.binary_index.update()
bindist.BINARY_INDEX.update()
except bindist.FetchCacheError as e:
tty.warn(e)
@@ -2258,13 +2250,13 @@ def build_name(self):
spec.architecture,
self.build_group,
)
tty.verbose(
tty.debug(
"Generated CDash build name ({0}) from the {1}".format(build_name, spec.name)
)
return build_name
build_name = os.environ.get("SPACK_CDASH_BUILD_NAME")
tty.verbose("Using CDash build name ({0}) from the environment".format(build_name))
tty.debug("Using CDash build name ({0}) from the environment".format(build_name))
return build_name
@property # type: ignore
@@ -2278,11 +2270,11 @@ def build_stamp(self):
Returns: (str) current CDash build stamp"""
build_stamp = os.environ.get("SPACK_CDASH_BUILD_STAMP")
if build_stamp:
tty.verbose("Using build stamp ({0}) from the environment".format(build_stamp))
tty.debug("Using build stamp ({0}) from the environment".format(build_stamp))
return build_stamp
build_stamp = cdash_build_stamp(self.build_group, time.time())
tty.verbose("Generated new build stamp ({0})".format(build_stamp))
tty.debug("Generated new build stamp ({0})".format(build_stamp))
return build_stamp
@property # type: ignore

View File

@@ -3,16 +3,19 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import argparse
import copy
import glob
import hashlib
import json
import multiprocessing.pool
import os
import shutil
import sys
import tempfile
from typing import List
import urllib.request
from typing import Dict, List, Optional, Tuple
import llnl.util.tty as tty
import llnl.util.tty.color as clr
from llnl.string import plural
from llnl.util.lang import elide_list
@@ -22,17 +25,37 @@
import spack.config
import spack.environment as ev
import spack.error
import spack.hash_types as ht
import spack.mirror
import spack.oci.oci
import spack.oci.opener
import spack.relocate
import spack.repo
import spack.spec
import spack.stage
import spack.store
import spack.user_environment
import spack.util.crypto
import spack.util.url as url_util
import spack.util.web as web_util
from spack.build_environment import determine_number_of_jobs
from spack.cmd import display_specs
from spack.oci.image import (
Digest,
ImageReference,
default_config,
default_index_tag,
default_manifest,
default_tag,
tag_is_spec,
)
from spack.oci.oci import (
copy_missing_layers_with_retry,
get_manifest_and_config_with_retry,
upload_blob_with_retry,
upload_manifest_with_retry,
)
from spack.spec import Spec, save_dependency_specfiles
from spack.stage import Stage
description = "create, download and install binary packages"
section = "packaging"
@@ -58,7 +81,9 @@ def setup_parser(subparser: argparse.ArgumentParser):
push_sign.add_argument(
"--key", "-k", metavar="key", type=str, default=None, help="key for signing"
)
push.add_argument("mirror", type=str, help="mirror name, path, or URL")
push.add_argument(
"mirror", type=arguments.mirror_name_or_url, help="mirror name, path, or URL"
)
push.add_argument(
"--update-index",
"--rebuild-index",
@@ -84,7 +109,10 @@ def setup_parser(subparser: argparse.ArgumentParser):
action="store_true",
help="stop pushing on first failure (default is best effort)",
)
arguments.add_common_arguments(push, ["specs"])
push.add_argument(
"--base-image", default=None, help="specify the base image for the buildcache. "
)
arguments.add_common_arguments(push, ["specs", "jobs"])
push.set_defaults(func=push_fn)
install = subparsers.add_parser("install", help=install_fn.__doc__)
@@ -268,7 +296,22 @@ def _matching_specs(specs: List[Spec]) -> List[Spec]:
return [spack.cmd.disambiguate_spec(s, ev.active_environment(), installed=any) for s in specs]
def push_fn(args: argparse.Namespace):
def _format_spec(spec: Spec) -> str:
return spec.cformat("{name}{@version}{/hash:7}")
def _progress(i: int, total: int):
if total > 1:
digits = len(str(total))
return f"[{i+1:{digits}}/{total}] "
return ""
def _make_pool():
return multiprocessing.pool.Pool(determine_number_of_jobs(parallel=True))
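The _progress helper pads the counter so multi-spec output lines up; for example:
for i in range(3):
    print(f"{_progress(i, 3)}Pushed <spec>")
# [1/3] Pushed <spec>
# [2/3] Pushed <spec>
# [3/3] Pushed <spec>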
def push_fn(args):
"""create a binary package and push it to a mirror"""
if args.spec_file:
tty.warn(
@@ -281,63 +324,80 @@ def push_fn(args: argparse.Namespace):
else:
specs = spack.cmd.require_active_env("buildcache push").all_specs()
mirror = arguments.mirror_name_or_url(args.mirror)
if args.allow_root:
tty.warn(
"The flag `--allow-root` is the default in Spack 0.21, will be removed in Spack 0.22"
)
url = mirror.push_url
# Check if this is an OCI image.
try:
image_ref = spack.oci.oci.image_from_mirror(args.mirror)
except ValueError:
image_ref = None
# For OCI images, we require dependencies to be pushed for now.
if image_ref:
if "dependencies" not in args.things_to_install:
tty.die("Dependencies must be pushed for OCI images.")
if not args.unsigned:
tty.warn(
"Code signing is currently not supported for OCI images. "
"Use --unsigned to silence this warning."
)
# This is a list of installed, non-external specs.
specs = bindist.specs_to_be_packaged(
specs,
root="package" in args.things_to_install,
dependencies="dependencies" in args.things_to_install,
)
url = args.mirror.push_url
# When pushing multiple specs, print the url once ahead of time, as well as how
# many specs are being pushed.
if len(specs) > 1:
tty.info(f"Selected {len(specs)} specs to push to {url}")
skipped = []
failed = []
# tty printing
color = clr.get_color_when()
format_spec = lambda s: s.format("{name}{@version}{/hash:7}", color=color)
total_specs = len(specs)
digits = len(str(total_specs))
# TODO: unify this logic in the future.
if image_ref:
with tempfile.TemporaryDirectory(
dir=spack.stage.get_stage_root()
) as tmpdir, _make_pool() as pool:
skipped = _push_oci(args, image_ref, specs, tmpdir, pool)
else:
skipped = []
for i, spec in enumerate(specs):
try:
bindist.push_or_raise(
spec,
url,
bindist.PushOptions(
force=args.force,
unsigned=args.unsigned,
key=args.key,
regenerate_index=args.update_index,
),
)
for i, spec in enumerate(specs):
try:
bindist.push_or_raise(
spec,
url,
bindist.PushOptions(
force=args.force,
unsigned=args.unsigned,
key=args.key,
regenerate_index=args.update_index,
),
)
if total_specs > 1:
msg = f"[{i+1:{digits}}/{total_specs}] Pushed {format_spec(spec)}"
else:
msg = f"Pushed {format_spec(spec)} to {url}"
msg = f"{_progress(i, len(specs))}Pushed {_format_spec(spec)}"
if len(specs) == 1:
msg += f" to {url}"
tty.info(msg)
tty.info(msg)
except bindist.NoOverwriteException:
skipped.append(_format_spec(spec))
except bindist.NoOverwriteException:
skipped.append(format_spec(spec))
# Catch any other exception unless the fail fast option is set
except Exception as e:
if args.fail_fast or isinstance(e, (bindist.PickKeyException, bindist.NoKeyException)):
raise
failed.append((format_spec(spec), e))
# Catch any other exception unless the fail fast option is set
except Exception as e:
if args.fail_fast or isinstance(
e, (bindist.PickKeyException, bindist.NoKeyException)
):
raise
failed.append((_format_spec(spec), e))
if skipped:
if len(specs) == 1:
@@ -364,6 +424,341 @@ def push_fn(args: argparse.Namespace):
),
)
# Update the index if requested
# TODO: move the update-index logic out of bindist; it should run once after all
# specs are pushed, not once per spec.
if image_ref and len(skipped) < len(specs) and args.update_index:
with tempfile.TemporaryDirectory(
dir=spack.stage.get_stage_root()
) as tmpdir, _make_pool() as pool:
_update_index_oci(image_ref, tmpdir, pool)
def _get_spack_binary_blob(image_ref: ImageReference) -> Optional[spack.oci.oci.Blob]:
"""Get the spack tarball layer digests and size if it exists"""
try:
manifest, config = get_manifest_and_config_with_retry(image_ref)
return spack.oci.oci.Blob(
compressed_digest=Digest.from_string(manifest["layers"][-1]["digest"]),
uncompressed_digest=Digest.from_string(config["rootfs"]["diff_ids"][-1]),
size=manifest["layers"][-1]["size"],
)
except Exception:
return None
def _push_single_spack_binary_blob(image_ref: ImageReference, spec: spack.spec.Spec, tmpdir: str):
filename = os.path.join(tmpdir, f"{spec.dag_hash()}.tar.gz")
# Create an oci.image.layer aka tarball of the package
compressed_tarfile_checksum, tarfile_checksum = spack.oci.oci.create_tarball(spec, filename)
blob = spack.oci.oci.Blob(
Digest.from_sha256(compressed_tarfile_checksum),
Digest.from_sha256(tarfile_checksum),
os.path.getsize(filename),
)
# Upload the blob
upload_blob_with_retry(image_ref, file=filename, digest=blob.compressed_digest)
# delete the file
os.unlink(filename)
return blob
def _retrieve_env_dict_from_config(config: dict) -> dict:
"""Retrieve the environment variables from the image config file.
Sets a default value for PATH if it is not present.
Args:
config (dict): The image config file.
Returns:
dict: The environment variables.
"""
env = {"PATH": "/bin:/usr/bin"}
if "Env" in config.get("config", {}):
for entry in config["config"]["Env"]:
key, value = entry.split("=", 1)
env[key] = value
return env
def _archspec_to_gooarch(spec: spack.spec.Spec) -> str:
name = spec.target.family.name
name_map = {"aarch64": "arm64", "x86_64": "amd64"}
return name_map.get(name, name)
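Two of the helpers above are pure functions and easy to sanity-check (inputs are illustrative):
config = {"config": {"Env": ["PATH=/custom/bin", "LANG=C.UTF-8"]}}
print(_retrieve_env_dict_from_config(config))
# -> {'PATH': '/custom/bin', 'LANG': 'C.UTF-8'}  (the default PATH is overridden)

# _archspec_to_gooarch maps archspec family names onto GOARCH-style names:
# aarch64 -> arm64, x86_64 -> amd64; any other family name passes through unchanged.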
def _put_manifest(
base_images: Dict[str, Tuple[dict, dict]],
checksums: Dict[str, spack.oci.oci.Blob],
spec: spack.spec.Spec,
image_ref: ImageReference,
tmpdir: str,
):
architecture = _archspec_to_gooarch(spec)
dependencies = list(
reversed(
list(
s
for s in spec.traverse(order="topo", deptype=("link", "run"), root=True)
if not s.external
)
)
)
base_manifest, base_config = base_images[architecture]
env = _retrieve_env_dict_from_config(base_config)
spack.user_environment.environment_modifications_for_specs(spec).apply_modifications(env)
# Create an oci.image.config file
config = copy.deepcopy(base_config)
# Add the diff ids of the dependencies
for s in dependencies:
config["rootfs"]["diff_ids"].append(str(checksums[s.dag_hash()].uncompressed_digest))
# Set the environment variables
config["config"]["Env"] = [f"{k}={v}" for k, v in env.items()]
# From the OCI v1.0 spec:
# > Any extra fields in the Image JSON struct are considered implementation
# > specific and MUST be ignored by any implementations which are unable to
# > interpret them.
# We use this to store the Spack spec, so we can use it to create an index.
spec_dict = spec.to_dict(hash=ht.dag_hash)
spec_dict["buildcache_layout_version"] = 1
spec_dict["binary_cache_checksum"] = {
"hash_algorithm": "sha256",
"hash": checksums[spec.dag_hash()].compressed_digest.digest,
}
config.update(spec_dict)
config_file = os.path.join(tmpdir, f"{spec.dag_hash()}.config.json")
with open(config_file, "w") as f:
json.dump(config, f, separators=(",", ":"))
config_file_checksum = Digest.from_sha256(
spack.util.crypto.checksum(hashlib.sha256, config_file)
)
# Upload the config file
upload_blob_with_retry(image_ref, file=config_file, digest=config_file_checksum)
oci_manifest = {
"mediaType": "application/vnd.oci.image.manifest.v1+json",
"schemaVersion": 2,
"config": {
"mediaType": base_manifest["config"]["mediaType"],
"digest": str(config_file_checksum),
"size": os.path.getsize(config_file),
},
"layers": [
*(layer for layer in base_manifest["layers"]),
*(
{
"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
"digest": str(checksums[s.dag_hash()].compressed_digest),
"size": checksums[s.dag_hash()].size,
}
for s in dependencies
),
],
"annotations": {"org.opencontainers.image.description": spec.format()},
}
image_ref_for_spec = image_ref.with_tag(default_tag(spec))
# Finally upload the manifest
upload_manifest_with_retry(image_ref_for_spec, oci_manifest=oci_manifest)
# delete the config file
os.unlink(config_file)
return image_ref_for_spec
def _push_oci(
args,
image_ref: ImageReference,
installed_specs_with_deps: List[Spec],
tmpdir: str,
pool: multiprocessing.pool.Pool,
) -> List[str]:
"""Push specs to an OCI registry
Args:
args: The command line arguments.
image_ref: The image reference.
installed_specs_with_deps: The installed specs to push, excluding externals,
including deps, ordered from roots to leaves.
Returns:
List[str]: The list of skipped specs (already in the buildcache).
"""
# Reverse the order
installed_specs_with_deps = list(reversed(installed_specs_with_deps))
# The base image to use for the package. When not set, we use
# the OCI registry only for storage, and do not use any base image.
base_image_ref: Optional[ImageReference] = (
ImageReference.from_string(args.base_image) if args.base_image else None
)
# Spec dag hash -> blob
checksums: Dict[str, spack.oci.oci.Blob] = {}
# arch -> (manifest, config)
base_images: Dict[str, Tuple[dict, dict]] = {}
# Specs not uploaded because they already exist
skipped = []
if not args.force:
tty.info("Checking for existing specs in the buildcache")
to_be_uploaded = []
tags_to_check = (image_ref.with_tag(default_tag(s)) for s in installed_specs_with_deps)
available_blobs = pool.map(_get_spack_binary_blob, tags_to_check)
for spec, maybe_blob in zip(installed_specs_with_deps, available_blobs):
if maybe_blob is not None:
checksums[spec.dag_hash()] = maybe_blob
skipped.append(_format_spec(spec))
else:
to_be_uploaded.append(spec)
else:
to_be_uploaded = installed_specs_with_deps
if not to_be_uploaded:
return skipped
tty.info(
f"{len(to_be_uploaded)} specs need to be pushed to {image_ref.domain}/{image_ref.name}"
)
# Upload blobs
new_blobs = pool.starmap(
_push_single_spack_binary_blob, ((image_ref, spec, tmpdir) for spec in to_be_uploaded)
)
# And update the spec to blob mapping
for spec, blob in zip(to_be_uploaded, new_blobs):
checksums[spec.dag_hash()] = blob
# Copy base image layers, probably fine to do sequentially.
for spec in to_be_uploaded:
architecture = _archspec_to_gooarch(spec)
# Get base image details, if we don't have them yet
if architecture in base_images:
continue
if base_image_ref is None:
base_images[architecture] = (default_manifest(), default_config(architecture, "linux"))
else:
base_images[architecture] = copy_missing_layers_with_retry(
base_image_ref, image_ref, architecture
)
# Upload manifests
tty.info("Uploading manifests")
pushed_image_ref = pool.starmap(
_put_manifest,
((base_images, checksums, spec, image_ref, tmpdir) for spec in to_be_uploaded),
)
# Print the image names of the top-level specs
for spec, ref in zip(to_be_uploaded, pushed_image_ref):
tty.info(f"Pushed {_format_spec(spec)} to {ref}")
return skipped
def _config_from_tag(image_ref: ImageReference, tag: str) -> Optional[dict]:
# Don't allow recursion here, since Spack itself always uploads
# vnd.oci.image.manifest.v1+json, not vnd.oci.image.index.v1+json
_, config = get_manifest_and_config_with_retry(image_ref.with_tag(tag), tag, recurse=0)
# Do very basic validation: if "spec" is a key in the config, it
# must be a Spec object too.
return config if "spec" in config else None
def _update_index_oci(
image_ref: ImageReference, tmpdir: str, pool: multiprocessing.pool.Pool
) -> None:
response = spack.oci.opener.urlopen(urllib.request.Request(url=image_ref.tags_url()))
spack.oci.opener.ensure_status(response, 200)
tags = json.load(response)["tags"]
# Fetch all image config files in parallel
spec_dicts = pool.starmap(
_config_from_tag, ((image_ref, tag) for tag in tags if tag_is_spec(tag))
)
# Populate the database
db_root_dir = os.path.join(tmpdir, "db_root")
db = bindist.BuildCacheDatabase(db_root_dir)
for spec_dict in spec_dicts:
spec = Spec.from_dict(spec_dict)
db.add(spec, directory_layout=None)
db.mark(spec, "in_buildcache", True)
# Create the index.json file
index_json_path = os.path.join(tmpdir, "index.json")
with open(index_json_path, "w") as f:
db._write_to_file(f)
# Create an empty config.json file
empty_config_json_path = os.path.join(tmpdir, "config.json")
with open(empty_config_json_path, "wb") as f:
f.write(b"{}")
# Upload the index.json file
index_shasum = Digest.from_sha256(spack.util.crypto.checksum(hashlib.sha256, index_json_path))
upload_blob_with_retry(image_ref, file=index_json_path, digest=index_shasum)
# Upload the config.json file
empty_config_digest = Digest.from_sha256(
spack.util.crypto.checksum(hashlib.sha256, empty_config_json_path)
)
upload_blob_with_retry(image_ref, file=empty_config_json_path, digest=empty_config_digest)
# Push a manifest file that references the index.json file as a layer
# Notice that we push this as if it is an image, which it of course is not.
# When the ORAS spec becomes official, we can use that instead of a fake image.
# For now we just use the OCI image spec, so that we don't run into issues with
# automatic garbage collection of blobs that are not referenced by any image manifest.
oci_manifest = {
"mediaType": "application/vnd.oci.image.manifest.v1+json",
"schemaVersion": 2,
# Config is just an empty {} file for now, and irrelevant
"config": {
"mediaType": "application/vnd.oci.image.config.v1+json",
"digest": str(empty_config_digest),
"size": os.path.getsize(empty_config_json_path),
},
# The buildcache index is the only layer; it is not a tarball, so we lie here.
"layers": [
{
"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
"digest": str(index_shasum),
"size": os.path.getsize(index_json_path),
}
],
}
upload_manifest_with_retry(image_ref.with_tag(default_index_tag), oci_manifest)
def install_fn(args):
"""install from a binary package"""
@@ -522,7 +917,7 @@ def copy_buildcache_file(src_url, dest_url, local_path=None):
local_path = os.path.join(tmpdir, os.path.basename(src_url))
try:
temp_stage = Stage(src_url, path=os.path.dirname(local_path))
temp_stage = spack.stage.Stage(src_url, path=os.path.dirname(local_path))
try:
temp_stage.create()
temp_stage.fetch()
@@ -616,6 +1011,20 @@ def manifest_copy(manifest_file_list):
def update_index(mirror: spack.mirror.Mirror, update_keys=False):
# Special case OCI images for now.
try:
image_ref = spack.oci.oci.image_from_mirror(mirror)
except ValueError:
image_ref = None
if image_ref:
with tempfile.TemporaryDirectory(
dir=spack.stage.get_stage_root()
) as tmpdir, _make_pool() as pool:
_update_index_oci(image_ref, tmpdir, pool)
return
# Otherwise, assume a normal mirror.
url = mirror.push_url
bindist.generate_package_index(url_util.join(url, bindist.build_cache_relative_path()))

View File

@@ -3,7 +3,6 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import argparse
import re
import sys
@@ -21,7 +20,6 @@
from spack.package_base import PackageBase, deprecated_version, preferred_version
from spack.util.editor import editor
from spack.util.format import get_version_lines
from spack.util.naming import valid_fully_qualified_module_name
from spack.version import Version
description = "checksum available versions of a package"
@@ -37,30 +35,30 @@ def setup_parser(subparser):
help="don't clean up staging area when command completes",
)
subparser.add_argument(
"-b",
"--batch",
"-b",
action="store_true",
default=False,
help="don't ask which versions to checksum",
)
subparser.add_argument(
"-l",
"--latest",
"-l",
action="store_true",
default=False,
help="checksum the latest available version",
)
subparser.add_argument(
"-p",
"--preferred",
"-p",
action="store_true",
default=False,
help="checksum the known Spack preferred version",
)
modes_parser = subparser.add_mutually_exclusive_group()
modes_parser.add_argument(
"-a",
"--add-to-package",
"-a",
action="store_true",
default=False,
help="add new versions to package",
@@ -68,27 +66,26 @@ def setup_parser(subparser):
modes_parser.add_argument(
"--verify", action="store_true", default=False, help="verify known package checksums"
)
arguments.add_common_arguments(subparser, ["package", "jobs"])
subparser.add_argument("package", help="name or spec (e.g. `cmake` or `cmake@3.18`)")
subparser.add_argument(
"versions", nargs=argparse.REMAINDER, help="versions to generate checksums for"
"versions",
nargs="*",
help="checksum these specific versions (if omitted, Spack searches for remote versions)",
)
arguments.add_common_arguments(subparser, ["jobs"])
subparser.epilog = (
"examples:\n"
" `spack checksum zlib@1.2` autodetects versions 1.2.0 to 1.2.13 from the remote\n"
" `spack checksum zlib 1.2.13` checksums exact version 1.2.13 directly without search\n"
)
def checksum(parser, args):
# Did the user pass a 'package@version' string?
if len(args.versions) == 0 and "@" in args.package:
args.versions = [args.package.split("@")[1]]
args.package = args.package.split("@")[0]
# Make sure the user provided a package and not a URL
if not valid_fully_qualified_module_name(args.package):
tty.die("`spack checksum` accepts package names, not URLs.")
spec = spack.spec.Spec(args.package)
# Get the package we're going to generate checksums for
pkg_cls = spack.repo.PATH.get_pkg_class(args.package)
pkg = pkg_cls(spack.spec.Spec(args.package))
pkg = spack.repo.PATH.get_pkg_class(spec.name)(spec)
# Build a list of versions to checksum
versions = [Version(v) for v in args.versions]
# Define placeholder for remote versions.
@@ -152,7 +149,10 @@ def checksum(parser, args):
tty.die(f"Could not find any remote versions for {pkg.name}")
elif len(url_dict) > 1 and not args.batch and sys.stdin.isatty():
filtered_url_dict = spack.stage.interactive_version_filter(
url_dict, pkg.versions, url_changes=url_changed_for_version
url_dict,
pkg.versions,
url_changes=url_changed_for_version,
initial_verion_filter=spec.versions,
)
if not filtered_url_dict:
exit(0)

View File

@@ -543,7 +543,7 @@ def add_concretizer_args(subparser):
)
def add_s3_connection_args(subparser, add_help):
def add_connection_args(subparser, add_help):
subparser.add_argument(
"--s3-access-key-id", help="ID string to use to connect to this S3 mirror"
)
@@ -559,6 +559,8 @@ def add_s3_connection_args(subparser, add_help):
subparser.add_argument(
"--s3-endpoint-url", help="endpoint URL to use to connect to this S3 mirror"
)
subparser.add_argument("--oci-username", help="username to use to connect to this OCI mirror")
subparser.add_argument("--oci-password", help="password to use to connect to this OCI mirror")
def use_buildcache(cli_arg_value):

View File

@@ -64,6 +64,7 @@ class {class_name}({base_class_name}):
# maintainers("github_user1", "github_user2")
# FIXME: Add the SPDX identifier of the project's license below.
# See https://spdx.org/licenses/ for a list.
license("UNKNOWN")
{versions}

View File

@@ -380,28 +380,33 @@ def env_remove(args):
and manifests embedded in repositories should be removed manually.
"""
read_envs = []
bad_envs = []
for env_name in args.rm_env:
env = ev.read(env_name)
read_envs.append(env)
try:
env = ev.read(env_name)
read_envs.append(env)
except spack.config.ConfigFormatError:
bad_envs.append(env_name)
if not args.yes_to_all:
answer = tty.get_yes_or_no(
"Really remove %s %s?"
% (
string.plural(len(args.rm_env), "environment", show_n=False),
string.comma_and(args.rm_env),
),
default=False,
)
environments = string.plural(len(args.rm_env), "environment", show_n=False)
envs = string.comma_and(args.rm_env)
answer = tty.get_yes_or_no(f"Really remove {environments} {envs}?", default=False)
if not answer:
tty.die("Will not remove any environments")
for env in read_envs:
name = env.name
if env.active:
tty.die("Environment %s can't be removed while activated." % env.name)
tty.die(f"Environment {name} can't be removed while activated.")
env.destroy()
tty.msg("Successfully removed environment '%s'" % env.name)
tty.msg(f"Successfully removed environment '{name}'")
for bad_env_name in bad_envs:
shutil.rmtree(
spack.environment.environment.environment_dir_from_name(bad_env_name, exists_ok=True)
)
tty.msg(f"Successfully removed environment '{bad_env_name}'")
#
@@ -667,18 +672,31 @@ def env_depfile(args):
# Currently only make is supported.
spack.cmd.require_active_env(cmd_name="env depfile")
env = ev.active_environment()
# What things do we build when running make? By default, we build the
# root specs. If specific specs are provided as input, we build those.
filter_specs = spack.cmd.parse_specs(args.specs) if args.specs else None
template = spack.tengine.make_environment().get_template(os.path.join("depfile", "Makefile"))
model = depfile.MakefileModel.from_env(
ev.active_environment(),
env,
filter_specs=filter_specs,
pkg_buildcache=depfile.UseBuildCache.from_string(args.use_buildcache[0]),
dep_buildcache=depfile.UseBuildCache.from_string(args.use_buildcache[1]),
make_prefix=args.make_prefix,
jobserver=args.jobserver,
)
# Warn in case we're generating a depfile for an empty environment. We don't automatically
# concretize; the user should do that explicitly. Could be changed in the future if requested.
if model.empty:
if not env.user_specs:
tty.warn("no specs in the environment")
elif filter_specs is not None:
tty.warn("no concrete matching specs found in environment")
else:
tty.warn("environment is not concretized. Run `spack concretize` first")
makefile = template.render(model.to_dict())
# Finally write to stdout/file.

View File

@@ -111,7 +111,7 @@ def setup_parser(subparser):
"and source use `--type binary --type source` (default)"
),
)
arguments.add_s3_connection_args(add_parser, False)
arguments.add_connection_args(add_parser, False)
# Remove
remove_parser = sp.add_parser("remove", aliases=["rm"], help=mirror_remove.__doc__)
remove_parser.add_argument("name", help="mnemonic name for mirror", metavar="mirror")
@@ -141,7 +141,7 @@ def setup_parser(subparser):
default=spack.config.default_modify_scope(),
help="configuration scope to modify",
)
arguments.add_s3_connection_args(set_url_parser, False)
arguments.add_connection_args(set_url_parser, False)
# Set
set_parser = sp.add_parser("set", help=mirror_set.__doc__)
@@ -170,7 +170,7 @@ def setup_parser(subparser):
default=spack.config.default_modify_scope(),
help="configuration scope to modify",
)
arguments.add_s3_connection_args(set_parser, False)
arguments.add_connection_args(set_parser, False)
# List
list_parser = sp.add_parser("list", help=mirror_list.__doc__)
@@ -192,6 +192,8 @@ def mirror_add(args):
or args.s3_profile
or args.s3_endpoint_url
or args.type
or args.oci_username
or args.oci_password
):
connection = {"url": args.url}
if args.s3_access_key_id and args.s3_access_key_secret:
@@ -202,6 +204,8 @@ def mirror_add(args):
connection["profile"] = args.s3_profile
if args.s3_endpoint_url:
connection["endpoint_url"] = args.s3_endpoint_url
if args.oci_username and args.oci_password:
connection["access_pair"] = [args.oci_username, args.oci_password]
if args.type:
connection["binary"] = "binary" in args.type
connection["source"] = "source" in args.type
@@ -235,6 +239,8 @@ def _configure_mirror(args):
changes["profile"] = args.s3_profile
if args.s3_endpoint_url:
changes["endpoint_url"] = args.s3_endpoint_url
if args.oci_username and args.oci_password:
changes["access_pair"] = [args.oci_username, args.oci_password]
# argparse cannot distinguish between --binary and --no-binary when same dest :(
# notice that set-url does not have these args, so getattr

View File

@@ -269,7 +269,7 @@ def find_windows_compiler_root_paths() -> List[str]:
At the moment simply returns location of VS install paths from VSWhere
But should be extended to include more information as relevant"""
return list(winOs.WindowsOs.vs_install_paths)
return list(winOs.WindowsOs().vs_install_paths)
@staticmethod
def find_windows_compiler_cmake_paths() -> List[str]:

View File

@@ -15,9 +15,12 @@
from typing import Dict, List, Optional, Set, Tuple
import llnl.util.filesystem
import llnl.util.lang
import llnl.util.tty
import spack.util.elf as elf_utils
import spack.util.environment
import spack.util.environment as environment
import spack.util.ld_so_conf
from .common import (
@@ -39,15 +42,29 @@
DETECTION_TIMEOUT = 120
def common_windows_package_paths() -> List[str]:
def common_windows_package_paths(pkg_cls=None) -> List[str]:
"""Get the paths for common package installation location on Windows
that are outside the PATH
Returns [] on unix
"""
if sys.platform != "win32":
return []
paths = WindowsCompilerExternalPaths.find_windows_compiler_bundled_packages()
paths.extend(find_win32_additional_install_paths())
paths.extend(WindowsKitExternalPaths.find_windows_kit_bin_paths())
paths.extend(WindowsKitExternalPaths.find_windows_kit_reg_installed_roots_paths())
paths.extend(WindowsKitExternalPaths.find_windows_kit_reg_sdk_paths())
if pkg_cls:
paths.extend(compute_windows_user_path_for_package(pkg_cls))
paths.extend(compute_windows_program_path_for_package(pkg_cls))
return paths
def file_identifier(path):
s = os.stat(path)
return (s.st_dev, s.st_ino)
def executables_in_path(path_hints: List[str]) -> Dict[str, str]:
"""Get the paths of all executables available from the current PATH.
@@ -62,18 +79,44 @@ def executables_in_path(path_hints: List[str]) -> Dict[str, str]:
path_hints: list of paths to be searched. If None the list will be
constructed based on the PATH environment variable.
"""
if sys.platform == "win32":
path_hints.extend(common_windows_package_paths())
search_paths = llnl.util.filesystem.search_paths_for_executables(*path_hints)
return path_to_dict(search_paths)
def get_elf_compat(path):
"""For ELF files, get a triplet (EI_CLASS, EI_DATA, e_machine) and see if
it is host-compatible."""
# On platforms that support ELF, we try to be a bit smarter about shared
# libraries by dropping those that are not host-compatible.
with open(path, "rb") as f:
elf = elf_utils.parse_elf(f, only_header=True)
return (elf.is_64_bit, elf.is_little_endian, elf.elf_hdr.e_machine)
def accept_elf(path, host_compat):
"""Accept an ELF file if the header matches the given compat triplet,
obtained with :py:func:`get_elf_compat`. In case it's not an ELF (e.g.
static library, or some arbitrary file, fall back to is_readable_file)."""
# Fast path: assume libraries at least have .so in their basename.
# Note: don't replace with splitext, because of libsmth.so.1.2.3 file names.
if ".so" not in os.path.basename(path):
return llnl.util.filesystem.is_readable_file(path)
try:
return host_compat == get_elf_compat(path)
except (OSError, elf_utils.ElfParsingError):
return llnl.util.filesystem.is_readable_file(path)
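A sketch of how the ELF filter is applied when scanning a directory (helper names as in the diff; the path is illustrative):
import os
import sys

try:
    host_compat = get_elf_compat(sys.executable)  # triplet of the running interpreter
    accept = lambda p: accept_elf(p, host_compat)
except Exception:
    # Non-ELF hosts (e.g. macOS) fall back to a plain readability check.
    import llnl.util.filesystem
    accept = llnl.util.filesystem.is_readable_file

libdir = "/usr/lib"  # illustrative
candidates = (os.path.join(libdir, name) for name in os.listdir(libdir))
libs = {path: os.path.basename(path) for path in candidates if accept(path)}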
def libraries_in_ld_and_system_library_path(
path_hints: Optional[List[str]] = None,
) -> Dict[str, str]:
"""Get the paths of all libraries available from LD_LIBRARY_PATH,
LIBRARY_PATH, DYLD_LIBRARY_PATH, DYLD_FALLBACK_LIBRARY_PATH, and
standard system library paths.
"""Get the paths of all libraries available from ``path_hints`` or the
following defaults:
- Environment variables (Linux: ``LD_LIBRARY_PATH``, Darwin: ``DYLD_LIBRARY_PATH``,
and ``DYLD_FALLBACK_LIBRARY_PATH``)
- Dynamic linker default paths (glibc: ld.so.conf, musl: ld-musl-<arch>.path)
- Default system library paths.
For convenience, this is constructed as a dictionary where the keys are
the library paths and the values are the names of the libraries
@@ -87,31 +130,71 @@ def libraries_in_ld_and_system_library_path(
constructed based on the set of LD_LIBRARY_PATH, LIBRARY_PATH,
DYLD_LIBRARY_PATH, and DYLD_FALLBACK_LIBRARY_PATH environment
variables as well as the standard system library paths.
path_hints (list): list of paths to be searched. If ``None``, the default
system paths are used.
"""
path_hints = (
path_hints
or spack.util.environment.get_path("LD_LIBRARY_PATH")
+ spack.util.environment.get_path("DYLD_LIBRARY_PATH")
+ spack.util.environment.get_path("DYLD_FALLBACK_LIBRARY_PATH")
+ spack.util.ld_so_conf.host_dynamic_linker_search_paths()
if path_hints:
search_paths = llnl.util.filesystem.search_paths_for_libraries(*path_hints)
else:
search_paths = []
# Environment variables
if sys.platform == "darwin":
search_paths.extend(environment.get_path("DYLD_LIBRARY_PATH"))
search_paths.extend(environment.get_path("DYLD_FALLBACK_LIBRARY_PATH"))
elif sys.platform.startswith("linux"):
search_paths.extend(environment.get_path("LD_LIBRARY_PATH"))
# Dynamic linker paths
search_paths.extend(spack.util.ld_so_conf.host_dynamic_linker_search_paths())
# Drop redundant paths
search_paths = list(filter(os.path.isdir, search_paths))
# Make sure we don't list /usr/lib and /lib etc. twice
search_paths = list(llnl.util.lang.dedupe(search_paths, key=file_identifier))
try:
host_compat = get_elf_compat(sys.executable)
accept = lambda path: accept_elf(path, host_compat)
except (OSError, elf_utils.ElfParsingError):
accept = llnl.util.filesystem.is_readable_file
path_to_lib = {}
# Reverse order of search directories so that a lib in the first
# search path entry overrides later entries
for search_path in reversed(search_paths):
for lib in os.listdir(search_path):
lib_path = os.path.join(search_path, lib)
if accept(lib_path):
path_to_lib[lib_path] = lib
return path_to_lib
def libraries_in_windows_paths(path_hints: Optional[List[str]] = None) -> Dict[str, str]:
    """Get the paths of all libraries available from the system PATH paths.

    For more details, see `libraries_in_ld_and_system_library_path` regarding
    return type and contents.

    Args:
        path_hints: list of paths to be searched. If None, the list will be
            constructed based on the set of PATH environment
            variables as well as the standard system library paths.
    """
    search_hints = (
        path_hints if path_hints is not None else spack.util.environment.get_path("PATH")
    )
    search_paths = llnl.util.filesystem.search_paths_for_libraries(*search_hints)
    # on Windows, some libraries (.dlls) are found in the bin directory or sometimes
    # at the search root. Add both of those options to the search scheme
    search_paths.extend(llnl.util.filesystem.search_paths_for_executables(*search_hints))
    if path_hints is None:
        # if no user provided path was given, add defaults to the search
        search_paths.extend(WindowsKitExternalPaths.find_windows_kit_lib_paths())
        search_paths.extend(WindowsKitExternalPaths.find_windows_kit_bin_paths())
        search_paths.extend(WindowsKitExternalPaths.find_windows_kit_reg_installed_roots_paths())
        search_paths.extend(WindowsKitExternalPaths.find_windows_kit_reg_sdk_paths())
        # SDK and WGL should be handled by above, however on occasion the WDK is in an atypical
        # location, so we handle that case specifically.
        search_paths.extend(WindowsKitExternalPaths.find_windows_driver_development_kit_paths())
    return path_to_dict(search_paths)
@@ -125,19 +208,8 @@ def _group_by_prefix(paths: Set[str]) -> Dict[str, Set[str]]:
class Finder:
"""Inspects the file-system looking for packages. Guesses places where to look using PATH."""
def default_path_hints(self) -> List[str]:
return []
def search_patterns(self, *, pkg: "spack.package_base.PackageBase") -> List[str]:
"""Returns the list of patterns used to match candidate files.
@@ -245,6 +317,8 @@ def find(
Args:
pkg_name: package being detected
            initial_guess: initial list of paths to search from the caller;
                if None, default paths are searched. If this is an empty list,
                nothing will be searched.
"""
import spack.repo
@@ -252,13 +326,18 @@ def find(
patterns = self.search_patterns(pkg=pkg_cls)
if not patterns:
return []
if initial_guess is None:
initial_guess = self.default_path_hints()
initial_guess.extend(common_windows_package_paths(pkg_cls))
candidates = self.candidate_files(patterns=patterns, paths=initial_guess)
result = self.detect_specs(pkg=pkg_cls, paths=candidates)
return result
class ExecutablesFinder(Finder):
def default_path_hints(self) -> List[str]:
return spack.util.environment.get_path("PATH")
def search_patterns(self, *, pkg: "spack.package_base.PackageBase") -> List[str]:
result = []
if hasattr(pkg, "executables") and hasattr(pkg, "platform_executables"):
@@ -298,7 +377,7 @@ def candidate_files(self, *, patterns: List[str], paths: List[str]) -> List[str]
libraries_by_path = (
libraries_in_ld_and_system_library_path(path_hints=paths)
if sys.platform != "win32"
else libraries_in_windows_paths(path_hints=paths)
)
patterns = [re.compile(x) for x in patterns]
result = []
@@ -334,21 +413,16 @@ def by_path(
# TODO: Packages should be able to define both .libraries and .executables in the future
# TODO: determine_spec_details should get all relevant libraries and executables in one call
executables_finder, libraries_finder = ExecutablesFinder(), LibrariesFinder()
detected_specs_by_package: Dict[str, Tuple[concurrent.futures.Future, ...]] = {}
result = collections.defaultdict(list)
with concurrent.futures.ProcessPoolExecutor(max_workers=max_workers) as executor:
for pkg in packages_to_search:
executable_future = executor.submit(
executables_finder.find, pkg_name=pkg, initial_guess=path_hints
)
library_future = executor.submit(
libraries_finder.find, pkg_name=pkg, initial_guess=path_hints
)
detected_specs_by_package[pkg] = executable_future, library_future
@@ -359,9 +433,13 @@ def by_path(
if detected:
_, unqualified_name = spack.repo.partition_package_name(pkg_name)
result[unqualified_name].extend(detected)
except concurrent.futures.TimeoutError:
llnl.util.tty.debug(
f"[EXTERNAL DETECTION] Skipping {pkg_name}: timeout reached"
)
except Exception as e:
llnl.util.tty.debug(
f"[EXTERNAL DETECTION] Skipping {pkg_name}: exception occured {e}"
)
return result

View File

@@ -573,17 +573,21 @@ def _execute_extends(pkg):
return _execute_extends
@directive("provided")
def provides(*specs, **kwargs):
"""Allows packages to provide a virtual dependency. If a package provides
'mpi', other packages can declare that they depend on "mpi", and spack
can use the providing package to satisfy the dependency.
@directive(dicts=("provided", "provided_together"))
def provides(*specs, when: Optional[str] = None):
"""Allows packages to provide a virtual dependency.
If a package provides "mpi", other packages can declare that they depend on "mpi",
and spack can use the providing package to satisfy the dependency.
Args:
*specs: virtual specs provided by this package
when: condition when this provides clause needs to be considered
"""
def _execute_provides(pkg):
when_spec = make_when_spec(when)
if not when_spec:
return
@@ -591,15 +595,18 @@ def _execute_provides(pkg):
# ``when`` specs for ``provides()`` need a name, as they are used
# to build the ProviderIndex.
when_spec.name = pkg.name
spec_objs = [spack.spec.Spec(x) for x in specs]
spec_names = [x.name for x in spec_objs]
if len(spec_names) > 1:
pkg.provided_together.setdefault(when_spec, []).append(set(spec_names))
for provided_spec in spec_objs:
if pkg.name == provided_spec.name:
raise CircularReferenceError("Package '%s' cannot provide itself." % pkg.name)
if provided_spec not in pkg.provided:
pkg.provided[provided_spec] = set()
pkg.provided[provided_spec].add(when_spec)
return _execute_provides
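As an illustration of the reworked directive, a minimal, hypothetical package.py sketch (names and versions are made up): listing several virtuals in one call records them in ``provided_together``, so the solver must use this package for all of them or for none.

    from spack.package import *

    class MyLapack(Package):
        """Hypothetical package demonstrating conditional provides()."""

        homepage = "https://example.com"

        # blas and lapack are provided together: a solve that picks this
        # package for one of them must also pick it for the other.
        provides("blas", "lapack", when="@2.0:")

        # older releases only provide blas
        provides("blas", when="@:1.9")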

View File

@@ -232,6 +232,10 @@ def to_dict(self):
"pkg_ids": " ".join(self.all_pkg_identifiers),
}
@property
def empty(self):
return len(self.roots) == 0
@staticmethod
def from_env(
env: ev.Environment,
@@ -254,15 +258,10 @@ def from_env(
jobserver: when enabled, make will invoke Spack with jobserver support. For
dry-run this should be disabled.
"""
        # If no specs are provided as a filter, build all the specs in the environment.
        roots = env.all_matching_specs(*filter_specs) if filter_specs else env.concrete_roots()

        visitor = DepfileSpecVisitor(pkg_buildcache, dep_buildcache)
        traverse.traverse_breadth_first_with_visitor(
            roots, traverse.CoverNodesVisitor(visitor, key=lambda s: s.dag_hash())
        )

        return MakefileModel(env, roots, visitor.adjacency_list, make_prefix, jobserver)

View File

@@ -330,16 +330,21 @@ def create_in_dir(
if with_view is None and keep_relative:
return Environment(manifest_dir)
    try:
        manifest = EnvironmentManifestFile(manifest_dir)

        if with_view is not None:
            manifest.set_default_view(with_view)

        if not keep_relative and init_file is not None and str(init_file).endswith(manifest_name):
            init_file = pathlib.Path(init_file)
            manifest.absolutify_dev_paths(init_file.parent)

        manifest.flush()
except spack.config.ConfigFormatError as e:
shutil.rmtree(manifest_dir)
raise e
return Environment(manifest_dir)
@@ -1484,7 +1489,7 @@ def _concretize_separately(self, tests=False):
for uspec, uspec_constraints in zip(self.user_specs, self.user_specs.specs_as_constraints):
if uspec not in old_concretized_user_specs:
root_specs.append(uspec)
args.append((i, [str(x) for x in uspec_constraints], tests))
i += 1
# Ensure we don't try to bootstrap clingo in parallel
@@ -1518,11 +1523,21 @@ def _concretize_separately(self, tests=False):
tty.msg(msg)
batch = []
for j, (i, concrete, duration) in enumerate(
spack.util.parallel.imap_unordered(
_concretize_task,
args,
processes=num_procs,
debug=tty.is_debug(),
maxtaskperchild=1,
)
):
batch.append((i, concrete))
tty.verbose(f"[{duration:7.2f}s] {root_specs[i]}")
percentage = (j + 1) / len(args) * 100
tty.verbose(
f"{duration:6.1f}s [{percentage:3.0f}%] {concrete.cformat('{hash:7}')} "
f"{root_specs[i].colored_str}"
)
sys.stdout.flush()
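Passing ``maxtaskperchild=1`` makes each worker process exit after a single solve, so memory the solver retains is reclaimed by the OS. A minimal sketch of the same idea with the standard library (assuming ``spack.util.parallel.imap_unordered`` forwards the option to ``multiprocessing``, which spells it ``maxtasksperchild``):

    import multiprocessing

    def solve_one(n):
        # stand-in for a single clingo solve that holds on to memory
        return n * n

    if __name__ == "__main__":
        # maxtasksperchild=1 recycles the worker after each task, so memory
        # retained by the task is freed when the child process exits.
        with multiprocessing.Pool(processes=4, maxtasksperchild=1) as pool:
            for result in pool.imap_unordered(solve_one, range(8)):
                print(result)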
# Add specs in original order
@@ -2397,6 +2412,7 @@ def _concretize_from_constraints(spec_constraints, tests=False):
def _concretize_task(packed_arguments) -> Tuple[int, Spec, float]:
index, spec_constraints, tests = packed_arguments
spec_constraints = [Spec(x) for x in spec_constraints]
with tty.SuppressOutput(msg_enabled=False):
start = time.time()
spec = _concretize_from_constraints(spec_constraints, tests)

View File

@@ -28,6 +28,7 @@
import os.path
import re
import shutil
import urllib.error
import urllib.parse
from typing import List, Optional
@@ -41,6 +42,7 @@
import spack.config
import spack.error
import spack.oci.opener
import spack.url
import spack.util.crypto as crypto
import spack.util.git
@@ -537,6 +539,34 @@ def fetch(self):
tty.msg("Using cached archive: {0}".format(path))
class OCIRegistryFetchStrategy(URLFetchStrategy):
def __init__(self, url=None, checksum=None, **kwargs):
super().__init__(url, checksum, **kwargs)
self._urlopen = kwargs.get("_urlopen", spack.oci.opener.urlopen)
@_needs_stage
def fetch(self):
file = self.stage.save_filename
tty.msg(f"Fetching {self.url}")
try:
response = self._urlopen(self.url)
except urllib.error.URLError as e:
# clean up archive on failure.
if self.archive_file:
os.remove(self.archive_file)
if os.path.lexists(file):
os.remove(file)
raise FailedDownloadError(self.url, f"Failed to fetch {self.url}: {e}") from e
if os.path.lexists(file):
os.remove(file)
with open(file, "wb") as f:
shutil.copyfileobj(response, f)
class VCSFetchStrategy(FetchStrategy):
"""Superclass for version control system fetch strategies.
@@ -743,8 +773,7 @@ def git(self):
# Disable advice for a quieter fetch
# https://github.com/git/git/blob/master/Documentation/RelNotes/1.7.2.txt
if self.git_version >= spack.version.Version("1.7.2"):
self._git.add_default_arg("-c")
self._git.add_default_arg("advice.detachedHead=false")
self._git.add_default_arg("-c", "advice.detachedHead=false")
# If the user asked for insecure fetching, make that work
# with git as well.

View File

@@ -528,10 +528,15 @@ def node_entry(self, node):
def edge_entry(self, edge):
colormap = {"build": "dodgerblue", "link": "crimson", "run": "goldenrod"}
label = ""
if edge.virtuals:
label = f" xlabel=\"virtuals={','.join(edge.virtuals)}\""
return (
edge.parent.dag_hash(),
edge.spec.dag_hash(),
f"[color=\"{':'.join(colormap[x] for x in dt.flag_to_tuple(edge.depflag))}\"]",
f"[color=\"{':'.join(colormap[x] for x in dt.flag_to_tuple(edge.depflag))}\""
+ label
+ "]",
)
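With this change, an edge that carries virtuals renders roughly as follows in the DOT output (hashes and virtual name are illustrative):

    "abcd123" -> "efgh456" [color="dodgerblue" xlabel="virtuals=mpi"]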

View File

@@ -18,7 +18,7 @@
import sys
import traceback
import urllib.parse
from typing import Optional, Union
from typing import List, Optional, Union
import llnl.url
import llnl.util.tty as tty
@@ -27,18 +27,18 @@
import spack.caches
import spack.config
import spack.error
import spack.fetch_strategy
import spack.mirror
import spack.oci.image
import spack.spec
import spack.util.path
import spack.util.spack_json as sjson
import spack.util.spack_yaml as syaml
import spack.util.url as url_util
import spack.version
#: What schemes do we support
supported_url_schemes = ("file", "http", "https", "sftp", "ftp", "s3", "gs")
supported_url_schemes = ("file", "http", "https", "sftp", "ftp", "s3", "gs", "oci")
def _url_or_path_to_url(url_or_path: str) -> str:
@@ -230,12 +230,12 @@ def _get_value(self, attribute: str, direction: str):
value = self._data.get(direction, {})
# Return top-level entry if only a URL was set.
if isinstance(value, str) or attribute not in value:
return self._data.get(attribute)
return value[attribute]
def get_url(self, direction: str) -> str:
if direction not in ("fetch", "push"):
raise ValueError(f"direction must be either 'fetch' or 'push', not {direction}")
@@ -255,18 +255,21 @@ def get_url(self, direction: str):
elif "url" in info:
url = info["url"]
        if not url:
            raise ValueError(f"Mirror {self.name} has no URL configured")

        return _url_or_path_to_url(url)

    def get_access_token(self, direction: str) -> Optional[str]:
return self._get_value("access_token", direction)
def get_access_pair(self, direction: str) -> Optional[List]:
return self._get_value("access_pair", direction)
def get_profile(self, direction: str) -> Optional[str]:
return self._get_value("profile", direction)
def get_endpoint_url(self, direction: str) -> Optional[str]:
return self._get_value("endpoint_url", direction)
@@ -330,7 +333,7 @@ def from_json(stream, name=None):
raise sjson.SpackJSONError("error parsing JSON mirror collection:", str(e)) from e
def to_dict(self, recursive=False):
return syaml.syaml_dict(
sorted(
((k, (v.to_dict() if recursive else v)) for (k, v) in self._mirrors.items()),
key=operator.itemgetter(0),
@@ -372,7 +375,7 @@ def __len__(self):
def _determine_extension(fetcher):
if isinstance(fetcher, spack.fetch_strategy.URLFetchStrategy):
if fetcher.expand_archive:
# If we fetch with a URLFetchStrategy, use URL's archive type
ext = llnl.url.determine_url_file_extension(fetcher.url)
@@ -437,6 +440,19 @@ def __iter__(self):
yield self.cosmetic_path
class OCIImageLayout:
"""Follow the OCI Image Layout Specification to archive blobs
Paths are of the form `blobs/<algorithm>/<digest>`
"""
def __init__(self, digest: spack.oci.image.Digest) -> None:
self.storage_path = os.path.join("blobs", digest.algorithm, digest.digest)
def __iter__(self):
yield self.storage_path
def mirror_archive_paths(fetcher, per_package_ref, spec=None):
"""Returns a ``MirrorReference`` object which keeps track of the relative
storage path of the resource associated with the specified ``fetcher``."""
@@ -482,7 +498,7 @@ def get_all_versions(specs):
for version in pkg_cls.versions:
version_spec = spack.spec.Spec(pkg_cls.name)
version_spec.versions = spack.version.VersionList([version])
version_specs.append(version_spec)
return version_specs
@@ -521,7 +537,7 @@ def get_matching_versions(specs, num_versions=1):
# Generate only versions that satisfy the spec.
if spec.concrete or v.intersects(spec.versions):
s = spack.spec.Spec(pkg.name)
s.versions = spack.version.VersionList([v])
s.variants = spec.variants.copy()
# This is needed to avoid hanging references during the
# concretization phase
@@ -591,14 +607,14 @@ def add(mirror: Mirror, scope=None):
"""Add a named mirror in the given scope"""
mirrors = spack.config.get("mirrors", scope=scope)
if not mirrors:
mirrors = syaml.syaml_dict()
if mirror.name in mirrors:
tty.die("Mirror with name {} already exists.".format(mirror.name))
items = [(n, u) for n, u in mirrors.items()]
items.insert(0, (mirror.name, mirror.to_dict()))
mirrors = syaml.syaml_dict(items)
spack.config.set("mirrors", mirrors, scope=scope)
@@ -606,7 +622,7 @@ def remove(name, scope):
"""Remove the named mirror in the given scope"""
mirrors = spack.config.get("mirrors", scope=scope)
if not mirrors:
mirrors = syaml.syaml_dict()
if name not in mirrors:
tty.die("No mirror with name %s" % name)

View File

@@ -491,10 +491,6 @@ def excluded(self):
exclude_rules = conf.get("exclude", [])
exclude_matches = [x for x in exclude_rules if spec.satisfies(x)]
def debug_info(line_header, match_list):
if match_list:
msg = "\t{0} : {1}".format(line_header, spec.cshort_spec)
@@ -505,16 +501,28 @@ def debug_info(line_header, match_list):
debug_info("INCLUDE", include_matches)
debug_info("EXCLUDE", exclude_matches)
        if not include_matches and exclude_matches:
return True
return False
@property
def hidden(self):
"""Returns True if the module has been hidden, False otherwise."""
# A few variables for convenience of writing the method
spec = self.spec
conf = self.module.configuration(self.name)
hidden_as_implicit = not self.explicit and conf.get(
"hide_implicits", conf.get("exclude_implicits", False)
)
if hidden_as_implicit:
tty.debug(f"\tHIDDEN_AS_IMPLICIT : {spec.cshort_spec}")
return hidden_as_implicit
@property
def context(self):
return self.conf.get("context", {})
@@ -723,7 +731,9 @@ def environment_modifications(self):
# for that to work, globals have to be set on the package modules, and the
# whole chain of setup_dependent_package has to be followed from leaf to spec.
# So: just run it here, but don't collect env mods.
spack.build_environment.SetupContext(
spec, context=Context.RUN
).set_all_package_py_globals()
# Then run setup_dependent_run_environment before setup_run_environment.
for dep in spec.dependencies(deptype=("link", "run")):
@@ -849,6 +859,26 @@ def __init__(self, spec, module_set_name, explicit=None):
name = type(self).__name__
raise DefaultTemplateNotDefined(msg.format(name))
# Check if format for module hide command has been defined,
# throw if not found
try:
self.hide_cmd_format
except AttributeError:
msg = "'{0}' object has no attribute 'hide_cmd_format'\n"
msg += "Did you forget to define it in the class?"
name = type(self).__name__
raise HideCmdFormatNotDefined(msg.format(name))
# Check if modulerc header content has been defined,
# throw if not found
try:
self.modulerc_header
except AttributeError:
msg = "'{0}' object has no attribute 'modulerc_header'\n"
msg += "Did you forget to define it in the class?"
name = type(self).__name__
raise ModulercHeaderNotDefined(msg.format(name))
def _get_template(self):
"""Gets the template that will be rendered for this spec."""
# Get templates and put them in the order of importance:
@@ -943,6 +973,9 @@ def write(self, overwrite=False):
# Symlink defaults if needed
self.update_module_defaults()
# record module hiddenness if implicit
self.update_module_hiddenness()
def update_module_defaults(self):
if any(self.spec.satisfies(default) for default in self.conf.defaults):
# This spec matches a default, it needs to be symlinked to default
@@ -953,6 +986,60 @@ def update_module_defaults(self):
os.symlink(self.layout.filename, default_tmp)
os.rename(default_tmp, default_path)
def update_module_hiddenness(self, remove=False):
"""Update modulerc file corresponding to module to add or remove
command that hides module depending on its hidden state.
Args:
remove (bool): if True, hiddenness information for module is
removed from modulerc.
"""
modulerc_path = self.layout.modulerc
hide_module_cmd = self.hide_cmd_format % self.layout.use_name
hidden = self.conf.hidden and not remove
modulerc_exists = os.path.exists(modulerc_path)
updated = False
if modulerc_exists:
# retrieve modulerc content
with open(modulerc_path, "r") as f:
content = f.readlines()
content = "".join(content).split("\n")
# remove last empty item if any
if len(content[-1]) == 0:
del content[-1]
already_hidden = hide_module_cmd in content
# remove hide command if module not hidden
if already_hidden and not hidden:
content.remove(hide_module_cmd)
updated = True
# add hide command if module is hidden
elif not already_hidden and hidden:
if len(content) == 0:
content = self.modulerc_header.copy()
content.append(hide_module_cmd)
updated = True
else:
content = self.modulerc_header.copy()
if hidden:
content.append(hide_module_cmd)
updated = True
# no modulerc file change if no content update
if updated:
is_empty = content == self.modulerc_header or len(content) == 0
# remove existing modulerc if empty
if modulerc_exists and is_empty:
os.remove(modulerc_path)
# create or update modulerc
elif content != self.modulerc_header:
# ensure file ends with a newline character
content.append("")
with open(modulerc_path, "w") as f:
f.write("\n".join(content))
def remove(self):
"""Deletes the module file."""
mod_file = self.layout.filename
@@ -960,6 +1047,7 @@ def remove(self):
try:
os.remove(mod_file) # Remove the module file
self.remove_module_defaults() # Remove default targeting module file
self.update_module_hiddenness(remove=True) # Remove hide cmd in modulerc
os.removedirs(
os.path.dirname(mod_file)
) # Remove all the empty directories from the leaf up
@@ -1003,5 +1091,17 @@ class DefaultTemplateNotDefined(AttributeError, ModulesError):
"""
class HideCmdFormatNotDefined(AttributeError, ModulesError):
"""Raised if the attribute 'hide_cmd_format' has not been specified
in the derived classes.
"""
class ModulercHeaderNotDefined(AttributeError, ModulesError):
"""Raised if the attribute 'modulerc_header' has not been specified
in the derived classes.
"""
class ModulesTemplateNotFoundError(ModulesError, RuntimeError):
"""Raised if the template for a module file was not found."""

View File

@@ -232,6 +232,13 @@ def missing(self):
"""Returns the list of tokens that are not available."""
return [x for x in self.hierarchy_tokens if x not in self.available]
@property
def hidden(self):
# Never hide a module that opens a hierarchy
if any(self.spec.package.provides(x) for x in self.hierarchy_tokens):
return False
return super().hidden
class LmodFileLayout(BaseFileLayout):
"""File layout for lmod module files."""
@@ -274,6 +281,13 @@ def filename(self):
)
return fullname
@property
def modulerc(self):
"""Returns the modulerc file associated with current module file"""
return os.path.join(
os.path.dirname(self.filename), ".".join([".modulerc", self.extension])
)
def token_to_path(self, name, value):
"""Transforms a hierarchy token into the corresponding path part.
@@ -470,6 +484,10 @@ class LmodModulefileWriter(BaseModuleFileWriter):
default_template = posixpath.join("modules", "modulefile.lua")
modulerc_header: list = []
hide_cmd_format = 'hide_version("%s")'
class CoreCompilersNotFoundError(spack.error.SpackError, KeyError):
"""Error raised if the key 'core_compilers' has not been specified

View File

@@ -6,6 +6,7 @@
"""This module implements the classes necessary to generate Tcl
non-hierarchical modules.
"""
import os.path
import posixpath
from typing import Any, Dict
@@ -56,6 +57,11 @@ class TclConfiguration(BaseConfiguration):
class TclFileLayout(BaseFileLayout):
"""File layout for tcl module files."""
@property
def modulerc(self):
"""Returns the modulerc file associated with current module file"""
return os.path.join(os.path.dirname(self.filename), ".modulerc")
class TclContext(BaseContext):
"""Context class for tcl module files."""
@@ -73,3 +79,7 @@ class TclModulefileWriter(BaseModuleFileWriter):
# os.path.join due to spack.spec.Spec.format
    # requiring forward slash path separators at this stage
default_template = posixpath.join("modules", "modulefile.tcl")
modulerc_header = ["#%Module4.7"]
hide_cmd_format = "module-hide --soft --hidden-loaded %s"

View File

@@ -0,0 +1,4 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -0,0 +1,235 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import re
import urllib.parse
from typing import Optional, Union
import spack.spec
# notice: Docker is more strict (no uppercase allowed). We parse image names *with* uppercase
# and normalize, so: example.com/Organization/Name -> example.com/organization/name. Tags are
# case sensitive though.
alphanumeric_with_uppercase = r"[a-zA-Z0-9]+"
separator = r"(?:[._]|__|[-]+)"
localhost = r"localhost"
domainNameComponent = r"(?:[a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9])"
optionalPort = r"(?::[0-9]+)?"
tag = r"[\w][\w.-]{0,127}"
digestPat = r"[A-Za-z][A-Za-z0-9]*(?:[-_+.][A-Za-z][A-Za-z0-9]*)*[:][0-9a-fA-F]{32,}"
ipv6address = r"\[(?:[a-fA-F0-9:]+)\]"
# domain name
domainName = rf"{domainNameComponent}(?:\.{domainNameComponent})*"
host = rf"(?:{domainName}|{ipv6address})"
domainAndPort = rf"{host}{optionalPort}"
# image name
pathComponent = rf"{alphanumeric_with_uppercase}(?:{separator}{alphanumeric_with_uppercase})*"
remoteName = rf"{pathComponent}(?:\/{pathComponent})*"
namePat = rf"(?:{domainAndPort}\/)?{remoteName}"
# Regex for a full image reference, with 3 groups: name, tag, digest
referencePat = re.compile(rf"^({namePat})(?::({tag}))?(?:@({digestPat}))?$")
# Regex for splitting the name into domain and path components
anchoredNameRegexp = re.compile(rf"^(?:({domainAndPort})\/)?({remoteName})$")
def ensure_sha256_checksum(oci_blob: str):
"""Validate that the reference is of the format sha256:<checksum>
Return the checksum if valid, raise ValueError otherwise."""
if ":" not in oci_blob:
raise ValueError(f"Invalid OCI blob format: {oci_blob}")
alg, checksum = oci_blob.split(":", 1)
if alg != "sha256":
raise ValueError(f"Unsupported OCI blob checksum algorithm: {alg}")
if len(checksum) != 64:
raise ValueError(f"Invalid OCI blob checksum length: {len(checksum)}")
return checksum
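A few hypothetical inputs to illustrate the accepted format:

    ensure_sha256_checksum("sha256:" + "0" * 64)  # returns the 64-char checksum
    ensure_sha256_checksum("md5:abc")             # ValueError: unsupported algorithm
    ensure_sha256_checksum("0" * 64)              # ValueError: missing "<alg>:" prefix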
class Digest:
"""Represents a digest in the format <algorithm>:<digest>.
Currently only supports sha256 digests."""
__slots__ = ["algorithm", "digest"]
def __init__(self, *, algorithm: str, digest: str) -> None:
self.algorithm = algorithm
self.digest = digest
def __eq__(self, __value: object) -> bool:
if not isinstance(__value, Digest):
return NotImplemented
return self.algorithm == __value.algorithm and self.digest == __value.digest
@classmethod
def from_string(cls, string: str) -> "Digest":
return cls(algorithm="sha256", digest=ensure_sha256_checksum(string))
@classmethod
def from_sha256(cls, digest: str) -> "Digest":
return cls(algorithm="sha256", digest=digest)
def __str__(self) -> str:
return f"{self.algorithm}:{self.digest}"
class ImageReference:
"""A parsed image of the form domain/name:tag[@digest].
The digest is optional, and domain and tag are automatically
filled out with defaults when parsed from string."""
__slots__ = ["domain", "name", "tag", "digest"]
def __init__(
self, *, domain: str, name: str, tag: str = "latest", digest: Optional[Digest] = None
):
self.domain = domain
self.name = name
self.tag = tag
self.digest = digest
@classmethod
def from_string(cls, string) -> "ImageReference":
match = referencePat.match(string)
if not match:
raise ValueError(f"Invalid image reference: {string}")
image, tag, digest = match.groups()
assert isinstance(image, str)
assert isinstance(tag, (str, type(None)))
assert isinstance(digest, (str, type(None)))
match = anchoredNameRegexp.match(image)
        # This can never fail, since anchoredNameRegexp is implied by
        # referencePat above. The assert is just here to make mypy happy.
assert match, f"Invalid image reference: {string}"
domain, name = match.groups()
assert isinstance(domain, (str, type(None)))
assert isinstance(name, str)
# Fill out defaults like docker would do...
# Based on github.com/distribution/distribution: allow short names like "ubuntu"
# and "user/repo" to be interpreted as "library/ubuntu" and "user/repo:latest
# Not sure if Spack should follow Docker, but it's what people expect...
if not domain:
domain = "index.docker.io"
name = f"library/{name}"
elif (
"." not in domain
and ":" not in domain
and domain != "localhost"
and domain == domain.lower()
):
name = f"{domain}/{name}"
domain = "index.docker.io"
        # Lowercase the image name. This is enforced by Docker, although the OCI spec isn't
        # explicit about it. We do this anyway, because for example in GitHub Actions the
        # <organization>/<repository> part can have uppercase, and may be interpolated when
        # specifying the relevant OCI image.
name = name.lower()
if not tag:
tag = "latest"
# sha256 is currently the only algorithm that
# we implement, even though the spec allows for more
if isinstance(digest, str):
digest = Digest.from_string(digest)
return cls(domain=domain, name=name, tag=tag, digest=digest)
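A few hypothetical references showing the Docker-style defaults and the normalization applied by ``from_string``:

    ref = ImageReference.from_string("ubuntu")
    assert str(ref) == "index.docker.io/library/ubuntu:latest"

    ref = ImageReference.from_string("ghcr.io/MyOrg/My-Cache:develop")
    assert ref.domain == "ghcr.io"
    assert ref.name == "myorg/my-cache"  # image name is lowercased
    assert ref.tag == "develop"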
def manifest_url(self) -> str:
digest_or_tag = self.digest or self.tag
return f"https://{self.domain}/v2/{self.name}/manifests/{digest_or_tag}"
def blob_url(self, digest: Union[str, Digest]) -> str:
if isinstance(digest, str):
digest = Digest.from_string(digest)
return f"https://{self.domain}/v2/{self.name}/blobs/{digest}"
def with_digest(self, digest: Union[str, Digest]) -> "ImageReference":
if isinstance(digest, str):
digest = Digest.from_string(digest)
return ImageReference(domain=self.domain, name=self.name, tag=self.tag, digest=digest)
def with_tag(self, tag: str) -> "ImageReference":
return ImageReference(domain=self.domain, name=self.name, tag=tag, digest=self.digest)
def uploads_url(self, digest: Optional[Digest] = None) -> str:
url = f"https://{self.domain}/v2/{self.name}/blobs/uploads/"
if digest:
url += f"?digest={digest}"
return url
def tags_url(self) -> str:
return f"https://{self.domain}/v2/{self.name}/tags/list"
def endpoint(self, path: str = "") -> str:
return urllib.parse.urljoin(f"https://{self.domain}/v2/", path)
def __str__(self) -> str:
s = f"{self.domain}/{self.name}"
if self.tag:
s += f":{self.tag}"
if self.digest:
s += f"@{self.digest}"
return s
def __eq__(self, __value: object) -> bool:
if not isinstance(__value, ImageReference):
return NotImplemented
return (
self.domain == __value.domain
and self.name == __value.name
and self.tag == __value.tag
and self.digest == __value.digest
)
def _ensure_valid_tag(tag: str) -> str:
"""Ensure a tag is valid for an OCI registry."""
sanitized = re.sub(r"[^\w.-]", "_", tag)
if len(sanitized) > 128:
return sanitized[:64] + sanitized[-64:]
return sanitized
def default_tag(spec: "spack.spec.Spec") -> str:
"""Return a valid, default image tag for a spec."""
return _ensure_valid_tag(f"{spec.name}-{spec.version}-{spec.dag_hash()}.spack")
#: Default OCI index tag
default_index_tag = "index.spack"
def tag_is_spec(tag: str) -> bool:
"""Check if a tag is likely a Spec"""
return tag.endswith(".spack") and tag != default_index_tag
def default_config(architecture: str, os: str):
return {
"architecture": architecture,
"os": os,
"rootfs": {"type": "layers", "diff_ids": []},
"config": {"Env": []},
}
def default_manifest():
return {
"mediaType": "application/vnd.oci.image.manifest.v1+json",
"schemaVersion": 2,
"config": {"mediaType": "application/vnd.oci.image.config.v1+json"},
"layers": [],
}

lib/spack/spack/oci/oci.py
View File

@@ -0,0 +1,381 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import hashlib
import json
import os
import time
import urllib.error
import urllib.parse
import urllib.request
from http.client import HTTPResponse
from typing import NamedTuple, Tuple
from urllib.request import Request
import llnl.util.tty as tty
import spack.binary_distribution
import spack.config
import spack.error
import spack.fetch_strategy
import spack.mirror
import spack.oci.opener
import spack.repo
import spack.spec
import spack.stage
import spack.traverse
import spack.util.crypto
from .image import Digest, ImageReference
class Blob(NamedTuple):
compressed_digest: Digest
uncompressed_digest: Digest
size: int
def create_tarball(spec: spack.spec.Spec, tarfile_path):
buildinfo = spack.binary_distribution.get_buildinfo_dict(spec)
return spack.binary_distribution._do_create_tarball(tarfile_path, spec.prefix, buildinfo)
def _log_upload_progress(digest: Digest, size: int, elapsed: float):
elapsed = max(elapsed, 0.001) # guard against division by zero
tty.info(f"Uploaded {digest} ({elapsed:.2f}s, {size / elapsed / 1024 / 1024:.2f} MB/s)")
def with_query_param(url: str, param: str, value: str) -> str:
"""Add a query parameter to a URL
Args:
url: The URL to add the parameter to.
param: The parameter name.
value: The parameter value.
Returns:
The URL with the parameter added.
"""
parsed = urllib.parse.urlparse(url)
query = urllib.parse.parse_qs(parsed.query)
if param in query:
query[param].append(value)
else:
query[param] = [value]
return urllib.parse.urlunparse(
parsed._replace(query=urllib.parse.urlencode(query, doseq=True))
)
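For instance (URL and digest are illustrative; note the value is percent-encoded):

    with_query_param("https://registry.example/v2/img/blobs/uploads/", "digest", "sha256:abc")
    # -> 'https://registry.example/v2/img/blobs/uploads/?digest=sha256%3Aabc'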
def upload_blob(
ref: ImageReference,
file: str,
digest: Digest,
force: bool = False,
small_file_size: int = 0,
_urlopen: spack.oci.opener.MaybeOpen = None,
) -> bool:
"""Uploads a blob to an OCI registry
We only do monolithic uploads, even though it's very simple to do chunked.
Observed problems with chunked uploads:
(1) it's slow, many sequential requests, (2) some registries set an *unknown*
max chunk size, and the spec doesn't say how to obtain it
Args:
ref: The image reference.
file: The file to upload.
digest: The digest of the file.
force: Whether to force upload the blob, even if it already exists.
small_file_size: For files at most this size, attempt
to do a single POST request instead of POST + PUT.
            Some registries do not support single requests, and others
do not specify what size they support in single POST.
For now this feature is disabled by default (0KB)
Returns:
True if the blob was uploaded, False if it already existed.
"""
_urlopen = _urlopen or spack.oci.opener.urlopen
# Test if the blob already exists, if so, early exit.
if not force and blob_exists(ref, digest, _urlopen):
return False
start = time.time()
with open(file, "rb") as f:
file_size = os.fstat(f.fileno()).st_size
# For small blobs, do a single POST request.
# The spec says that registries MAY support this
if file_size <= small_file_size:
request = Request(
url=ref.uploads_url(digest),
method="POST",
data=f,
headers={
"Content-Type": "application/octet-stream",
"Content-Length": str(file_size),
},
)
else:
request = Request(
url=ref.uploads_url(), method="POST", headers={"Content-Length": "0"}
)
response = _urlopen(request)
# Created the blob in one go.
if response.status == 201:
_log_upload_progress(digest, file_size, time.time() - start)
return True
# Otherwise, do another PUT request.
spack.oci.opener.ensure_status(response, 202)
assert "Location" in response.headers
# Can be absolute or relative, joining handles both
upload_url = with_query_param(
ref.endpoint(response.headers["Location"]), "digest", str(digest)
)
f.seek(0)
response = _urlopen(
Request(
url=upload_url,
method="PUT",
data=f,
headers={
"Content-Type": "application/octet-stream",
"Content-Length": str(file_size),
},
)
)
spack.oci.opener.ensure_status(response, 201)
# print elapsed time and # MB/s
_log_upload_progress(digest, file_size, time.time() - start)
return True
def upload_manifest(
ref: ImageReference,
oci_manifest: dict,
tag: bool = True,
_urlopen: spack.oci.opener.MaybeOpen = None,
):
"""Uploads a manifest/index to a registry
Args:
ref: The image reference.
oci_manifest: The OCI manifest or index.
tag: When true, use the tag, otherwise use the digest,
this is relevant for multi-arch images, where the
tag is an index, referencing the manifests by digest.
Returns:
The digest and size of the uploaded manifest.
"""
_urlopen = _urlopen or spack.oci.opener.urlopen
data = json.dumps(oci_manifest, separators=(",", ":")).encode()
digest = Digest.from_sha256(hashlib.sha256(data).hexdigest())
size = len(data)
if not tag:
ref = ref.with_digest(digest)
response = _urlopen(
Request(
url=ref.manifest_url(),
method="PUT",
data=data,
headers={"Content-Type": oci_manifest["mediaType"]},
)
)
spack.oci.opener.ensure_status(response, 201)
return digest, size
def image_from_mirror(mirror: spack.mirror.Mirror) -> ImageReference:
"""Given an OCI based mirror, extract the URL and image name from it"""
url = mirror.push_url
if not url.startswith("oci://"):
raise ValueError(f"Mirror {mirror} is not an OCI mirror")
return ImageReference.from_string(url[6:])
def blob_exists(
ref: ImageReference, digest: Digest, _urlopen: spack.oci.opener.MaybeOpen = None
) -> bool:
"""Checks if a blob exists in an OCI registry"""
try:
_urlopen = _urlopen or spack.oci.opener.urlopen
response = _urlopen(Request(url=ref.blob_url(digest), method="HEAD"))
return response.status == 200
except urllib.error.HTTPError as e:
if e.getcode() == 404:
return False
raise
def copy_missing_layers(
src: ImageReference,
dst: ImageReference,
architecture: str,
_urlopen: spack.oci.opener.MaybeOpen = None,
) -> Tuple[dict, dict]:
"""Copy image layers from src to dst for given architecture.
Args:
src: The source image reference.
dst: The destination image reference.
architecture: The architecture (when referencing an index)
Returns:
Tuple of manifest and config of the base image.
"""
_urlopen = _urlopen or spack.oci.opener.urlopen
manifest, config = get_manifest_and_config(src, architecture, _urlopen=_urlopen)
# Get layer digests
digests = [Digest.from_string(layer["digest"]) for layer in manifest["layers"]]
    # Keep only the digests that don't exist in the destination registry
missing_digests = [
digest for digest in digests if not blob_exists(dst, digest, _urlopen=_urlopen)
]
if not missing_digests:
return manifest, config
# Pull missing blobs, push them to the registry
with spack.stage.StageComposite.from_iterable(
make_stage(url=src.blob_url(digest), digest=digest, _urlopen=_urlopen)
for digest in missing_digests
) as stages:
stages.fetch()
stages.check()
stages.cache_local()
for stage, digest in zip(stages, missing_digests):
            # No need to check existence again, force=True.
upload_blob(
dst, file=stage.save_filename, force=True, digest=digest, _urlopen=_urlopen
)
return manifest, config
#: OCI manifest content types (including docker type)
manifest_content_type = [
"application/vnd.oci.image.manifest.v1+json",
"application/vnd.docker.distribution.manifest.v2+json",
]
#: OCI index content types (including docker type)
index_content_type = [
"application/vnd.oci.image.index.v1+json",
"application/vnd.docker.distribution.manifest.list.v2+json",
]
#: All OCI manifest / index content types
all_content_type = manifest_content_type + index_content_type
def get_manifest_and_config(
ref: ImageReference,
architecture="amd64",
recurse=3,
_urlopen: spack.oci.opener.MaybeOpen = None,
) -> Tuple[dict, dict]:
"""Recursively fetch manifest and config for a given image reference
with a given architecture.
Args:
ref: The image reference.
architecture: The architecture (when referencing an index)
recurse: How many levels of index to recurse into.
Returns:
A tuple of (manifest, config)"""
_urlopen = _urlopen or spack.oci.opener.urlopen
# Get manifest
response: HTTPResponse = _urlopen(
Request(url=ref.manifest_url(), headers={"Accept": ", ".join(all_content_type)})
)
# Recurse when we find an index
if response.headers["Content-Type"] in index_content_type:
if recurse == 0:
raise Exception("Maximum recursion depth reached while fetching OCI manifest")
index = json.load(response)
manifest_meta = next(
manifest
for manifest in index["manifests"]
if manifest["platform"]["architecture"] == architecture
)
return get_manifest_and_config(
ref.with_digest(manifest_meta["digest"]),
architecture=architecture,
recurse=recurse - 1,
_urlopen=_urlopen,
)
# Otherwise, require a manifest
if response.headers["Content-Type"] not in manifest_content_type:
raise Exception(f"Unknown content type {response.headers['Content-Type']}")
manifest = json.load(response)
# Download, verify and cache config file
config_digest = Digest.from_string(manifest["config"]["digest"])
with make_stage(ref.blob_url(config_digest), config_digest, _urlopen=_urlopen) as stage:
stage.fetch()
stage.check()
stage.cache_local()
with open(stage.save_filename, "rb") as f:
config = json.load(f)
return manifest, config
#: Same as upload_manifest, but with retry wrapper
upload_manifest_with_retry = spack.oci.opener.default_retry(upload_manifest)
#: Same as upload_blob, but with retry wrapper
upload_blob_with_retry = spack.oci.opener.default_retry(upload_blob)
#: Same as get_manifest_and_config, but with retry wrapper
get_manifest_and_config_with_retry = spack.oci.opener.default_retry(get_manifest_and_config)
#: Same as copy_missing_layers, but with retry wrapper
copy_missing_layers_with_retry = spack.oci.opener.default_retry(copy_missing_layers)
def make_stage(
url: str, digest: Digest, keep: bool = False, _urlopen: spack.oci.opener.MaybeOpen = None
) -> spack.stage.Stage:
_urlopen = _urlopen or spack.oci.opener.urlopen
fetch_strategy = spack.fetch_strategy.OCIRegistryFetchStrategy(
url, checksum=digest.digest, _urlopen=_urlopen
)
# Use blobs/<alg>/<encoded> as the cache path, which follows
# the OCI Image Layout Specification. What's missing though,
# is the `oci-layout` and `index.json` files, which are
# required by the spec.
return spack.stage.Stage(
fetch_strategy,
mirror_paths=spack.mirror.OCIImageLayout(digest),
name=digest.digest,
keep=keep,
)

View File

@@ -0,0 +1,442 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""All the logic for OCI fetching and authentication"""
import base64
import json
import re
import time
import urllib.error
import urllib.parse
import urllib.request
from enum import Enum, auto
from http.client import HTTPResponse
from typing import Callable, Dict, Iterable, List, NamedTuple, Optional, Tuple
from urllib.request import Request
import llnl.util.lang
import spack.config
import spack.mirror
import spack.parser
import spack.repo
import spack.util.web
from .image import ImageReference
def _urlopen():
opener = create_opener()
def dispatch_open(fullurl, data=None, timeout=None):
timeout = timeout or spack.config.get("config:connect_timeout", 10)
return opener.open(fullurl, data, timeout)
return dispatch_open
OpenType = Callable[..., HTTPResponse]
MaybeOpen = Optional[OpenType]
#: Opener that automatically uses OCI authentication based on mirror config
urlopen: OpenType = llnl.util.lang.Singleton(_urlopen)
SP = r" "
OWS = r"[ \t]*"
BWS = OWS
HTAB = r"\t"
VCHAR = r"\x21-\x7E"
tchar = r"[!#$%&'*+\-.^_`|~0-9A-Za-z]"
token = rf"{tchar}+"
obs_text = r"\x80-\xFF"
qdtext = rf"[{HTAB}{SP}\x21\x23-\x5B\x5D-\x7E{obs_text}]"
quoted_pair = rf"\\([{HTAB}{SP}{VCHAR}{obs_text}])"
quoted_string = rf'"(?:({qdtext}*)|{quoted_pair})*"'
class TokenType(spack.parser.TokenBase):
AUTH_PARAM = rf"({token}){BWS}={BWS}({token}|{quoted_string})"
# TOKEN68 = r"([A-Za-z0-9\-._~+/]+=*)" # todo... support this?
TOKEN = rf"{tchar}+"
EQUALS = rf"{BWS}={BWS}"
COMMA = rf"{OWS},{OWS}"
SPACE = r" +"
EOF = r"$"
ANY = r"."
TOKEN_REGEXES = [rf"(?P<{token}>{token.regex})" for token in TokenType]
ALL_TOKENS = re.compile("|".join(TOKEN_REGEXES))
class State(Enum):
CHALLENGE = auto()
AUTH_PARAM_LIST_START = auto()
AUTH_PARAM = auto()
NEXT_IN_LIST = auto()
AUTH_PARAM_OR_SCHEME = auto()
def tokenize(input: str):
scanner = ALL_TOKENS.scanner(input) # type: ignore[attr-defined]
for match in iter(scanner.match, None): # type: ignore[var-annotated]
yield spack.parser.Token(
TokenType.__members__[match.lastgroup], # type: ignore[attr-defined]
match.group(), # type: ignore[attr-defined]
match.start(), # type: ignore[attr-defined]
match.end(), # type: ignore[attr-defined]
)
class Challenge:
__slots__ = ["scheme", "params"]
def __init__(
self, scheme: Optional[str] = None, params: Optional[List[Tuple[str, str]]] = None
) -> None:
self.scheme = scheme or ""
self.params = params or []
def __repr__(self) -> str:
return f"Challenge({self.scheme}, {self.params})"
def __eq__(self, other: object) -> bool:
return (
isinstance(other, Challenge)
and self.scheme == other.scheme
and self.params == other.params
)
def parse_www_authenticate(input: str):
"""Very basic parsing of www-authenticate parsing (RFC7235 section 4.1)
Notice: this omits token68 support."""
# auth-scheme = token
# auth-param = token BWS "=" BWS ( token / quoted-string )
# challenge = auth-scheme [ 1*SP ( token68 / #auth-param ) ]
# WWW-Authenticate = 1#challenge
challenges: List[Challenge] = []
_unquote = re.compile(quoted_pair).sub
unquote = lambda s: _unquote(r"\1", s[1:-1])
mode: State = State.CHALLENGE
tokens = tokenize(input)
current_challenge = Challenge()
def extract_auth_param(input: str) -> Tuple[str, str]:
key, value = input.split("=", 1)
key = key.rstrip()
value = value.lstrip()
if value.startswith('"'):
value = unquote(value)
return key, value
while True:
token: spack.parser.Token = next(tokens)
if mode == State.CHALLENGE:
if token.kind == TokenType.EOF:
raise ValueError(token)
elif token.kind == TokenType.TOKEN:
current_challenge.scheme = token.value
mode = State.AUTH_PARAM_LIST_START
else:
raise ValueError(token)
elif mode == State.AUTH_PARAM_LIST_START:
if token.kind == TokenType.EOF:
challenges.append(current_challenge)
break
elif token.kind == TokenType.COMMA:
# Challenge without param list, followed by another challenge.
challenges.append(current_challenge)
current_challenge = Challenge()
mode = State.CHALLENGE
elif token.kind == TokenType.SPACE:
# A space means it must be followed by param list
mode = State.AUTH_PARAM
else:
raise ValueError(token)
elif mode == State.AUTH_PARAM:
if token.kind == TokenType.EOF:
raise ValueError(token)
elif token.kind == TokenType.AUTH_PARAM:
key, value = extract_auth_param(token.value)
current_challenge.params.append((key, value))
mode = State.NEXT_IN_LIST
else:
raise ValueError(token)
elif mode == State.NEXT_IN_LIST:
if token.kind == TokenType.EOF:
challenges.append(current_challenge)
break
elif token.kind == TokenType.COMMA:
mode = State.AUTH_PARAM_OR_SCHEME
else:
raise ValueError(token)
elif mode == State.AUTH_PARAM_OR_SCHEME:
if token.kind == TokenType.EOF:
raise ValueError(token)
elif token.kind == TokenType.TOKEN:
challenges.append(current_challenge)
current_challenge = Challenge(token.value)
mode = State.AUTH_PARAM_LIST_START
elif token.kind == TokenType.AUTH_PARAM:
key, value = extract_auth_param(token.value)
current_challenge.params.append((key, value))
mode = State.NEXT_IN_LIST
return challenges
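A typical registry challenge and its parse (values illustrative):

    header = 'Bearer realm="https://auth.example.com/token",service="registry.example.com"'
    challenges = parse_www_authenticate(header)
    assert challenges[0].scheme == "Bearer"
    assert ("realm", "https://auth.example.com/token") in challenges[0].params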
class RealmServiceScope(NamedTuple):
realm: str
service: str
scope: str
class UsernamePassword(NamedTuple):
username: str
password: str
def get_bearer_challenge(challenges: List[Challenge]) -> Optional[RealmServiceScope]:
# Find a challenge that we can handle (currently only Bearer)
challenge = next((c for c in challenges if c.scheme == "Bearer"), None)
if challenge is None:
return None
# Get realm / service / scope from challenge
realm = next((v for k, v in challenge.params if k == "realm"), None)
service = next((v for k, v in challenge.params if k == "service"), None)
scope = next((v for k, v in challenge.params if k == "scope"), None)
if realm is None or service is None or scope is None:
return None
return RealmServiceScope(realm, service, scope)
class OCIAuthHandler(urllib.request.BaseHandler):
def __init__(self, credentials_provider: Callable[[str], Optional[UsernamePassword]]):
"""
Args:
credentials_provider: A function that takes a domain and may return a UsernamePassword.
"""
self.credentials_provider = credentials_provider
# Cached bearer tokens for a given domain.
self.cached_tokens: Dict[str, str] = {}
def obtain_bearer_token(self, registry: str, challenge: RealmServiceScope, timeout) -> str:
# See https://docs.docker.com/registry/spec/auth/token/
query = urllib.parse.urlencode(
{"service": challenge.service, "scope": challenge.scope, "client_id": "spack"}
)
parsed = urllib.parse.urlparse(challenge.realm)._replace(
query=query, fragment="", params=""
)
# Don't send credentials over insecure transport.
if parsed.scheme != "https":
raise ValueError(
f"Cannot login to {registry} over insecure {parsed.scheme} connection"
)
request = Request(urllib.parse.urlunparse(parsed))
# I guess we shouldn't cache this, since we don't know
# the context in which it's used (may depend on config)
pair = self.credentials_provider(registry)
if pair is not None:
encoded = base64.b64encode(f"{pair.username}:{pair.password}".encode("utf-8")).decode(
"utf-8"
)
request.add_unredirected_header("Authorization", f"Basic {encoded}")
# Do a GET request.
response = self.parent.open(request, timeout=timeout)
# Read the response and parse the JSON
response_json = json.load(response)
# Get the token from the response
token = response_json["token"]
# Remember the last obtained token for this registry
# Note: we should probably take into account realm, service and scope
# so we can store multiple tokens for the same registry.
self.cached_tokens[registry] = token
return token
def https_request(self, req: Request):
# Eagerly add the bearer token to the request if no
# auth header is set yet, to avoid 401s in multiple
# requests to the same registry.
# Use has_header, not .headers, since there are two
# types of headers (redirected and unredirected)
if req.has_header("Authorization"):
return req
parsed = urllib.parse.urlparse(req.full_url)
token = self.cached_tokens.get(parsed.netloc)
if not token:
return req
req.add_unredirected_header("Authorization", f"Bearer {token}")
return req
def http_error_401(self, req: Request, fp, code, msg, headers):
# Login failed, avoid infinite recursion where we go back and
# forth between auth server and registry
if hasattr(req, "login_attempted"):
raise urllib.error.HTTPError(
req.full_url, code, f"Failed to login to {req.full_url}: {msg}", headers, fp
)
# On 401 Unauthorized, parse the WWW-Authenticate header
# to determine what authentication is required
if "WWW-Authenticate" not in headers:
raise urllib.error.HTTPError(
req.full_url,
code,
"Cannot login to registry, missing WWW-Authenticate header",
headers,
fp,
)
header_value = headers["WWW-Authenticate"]
try:
challenge = get_bearer_challenge(parse_www_authenticate(header_value))
except ValueError as e:
raise urllib.error.HTTPError(
req.full_url,
code,
f"Cannot login to registry, malformed WWW-Authenticate header: {header_value}",
headers,
fp,
) from e
# If there is no bearer challenge, we can't handle it
if not challenge:
raise urllib.error.HTTPError(
req.full_url,
code,
f"Cannot login to registry, unsupported authentication scheme: {header_value}",
headers,
fp,
)
# Get the token from the auth handler
try:
token = self.obtain_bearer_token(
registry=urllib.parse.urlparse(req.get_full_url()).netloc,
challenge=challenge,
timeout=req.timeout,
)
except ValueError as e:
raise urllib.error.HTTPError(
req.full_url,
code,
f"Cannot login to registry, failed to obtain bearer token: {e}",
headers,
fp,
) from e
# Add the token to the request
req.add_unredirected_header("Authorization", f"Bearer {token}")
setattr(req, "login_attempted", True)
return self.parent.open(req, timeout=req.timeout)
def credentials_from_mirrors(
domain: str, *, mirrors: Optional[Iterable[spack.mirror.Mirror]] = None
) -> Optional[UsernamePassword]:
"""Filter out OCI registry credentials from a list of mirrors."""
mirrors = mirrors or spack.mirror.MirrorCollection().values()
for mirror in mirrors:
# Prefer push credentials over fetch. Unlikely that those are different
# but our config format allows it.
for direction in ("push", "fetch"):
pair = mirror.get_access_pair(direction)
if pair is None:
continue
url = mirror.get_url(direction)
if not url.startswith("oci://"):
continue
try:
parsed = ImageReference.from_string(url[6:])
except ValueError:
continue
if parsed.domain == domain:
return UsernamePassword(*pair)
return None
def create_opener():
"""Create an opener that can handle OCI authentication."""
opener = urllib.request.OpenerDirector()
for handler in [
urllib.request.UnknownHandler(),
urllib.request.HTTPSHandler(),
spack.util.web.SpackHTTPDefaultErrorHandler(),
urllib.request.HTTPRedirectHandler(),
urllib.request.HTTPErrorProcessor(),
OCIAuthHandler(credentials_from_mirrors),
]:
opener.add_handler(handler)
return opener
def ensure_status(response: HTTPResponse, status: int):
"""Raise an error if the response status is not the expected one."""
if response.status == status:
return
raise urllib.error.HTTPError(
response.geturl(), response.status, response.reason, response.info(), None
)
def default_retry(f, retries: int = 3, sleep=None):
sleep = sleep or time.sleep
def wrapper(*args, **kwargs):
for i in range(retries):
try:
return f(*args, **kwargs)
except urllib.error.HTTPError as e:
# Retry on internal server errors, and rate limit errors
# Potentially this could take into account the Retry-After header
# if registries support it
if i + 1 != retries and (500 <= e.code < 600 or e.code == 429):
# Exponential backoff
sleep(2**i)
continue
raise
return wrapper

View File

@@ -5,10 +5,12 @@
import glob
import os
import pathlib
import platform
import subprocess
from spack.error import SpackError
from spack.util import windows_registry as winreg
from spack.version import Version
from ._operating_system import OperatingSystem
@@ -31,43 +33,6 @@ class WindowsOs(OperatingSystem):
10.
"""
def __init__(self):
plat_ver = windows_version()
if plat_ver < Version("10"):
@@ -76,3 +41,71 @@ def __init__(self):
def __str__(self):
return self.name
@property
def vs_install_paths(self):
vs_install_paths = []
root = os.environ.get("ProgramFiles(x86)") or os.environ.get("ProgramFiles")
if root:
try:
extra_args = {"encoding": "mbcs", "errors": "strict"}
paths = subprocess.check_output( # type: ignore[call-overload] # novermin
[
os.path.join(root, "Microsoft Visual Studio", "Installer", "vswhere.exe"),
"-prerelease",
"-requires",
"Microsoft.VisualStudio.Component.VC.Tools.x86.x64",
"-property",
"installationPath",
"-products",
"*",
],
**extra_args,
).strip()
vs_install_paths = paths.split("\n")
except (subprocess.CalledProcessError, OSError, UnicodeDecodeError):
pass
return vs_install_paths
@property
def msvc_paths(self):
return [os.path.join(path, "VC", "Tools", "MSVC") for path in self.vs_install_paths]
@property
def compiler_search_paths(self):
# First strategy: Find MSVC directories using vswhere
_compiler_search_paths = []
for p in self.msvc_paths:
_compiler_search_paths.extend(glob.glob(os.path.join(p, "*", "bin", "Hostx64", "x64")))
if os.getenv("ONEAPI_ROOT"):
_compiler_search_paths.extend(
glob.glob(
os.path.join(str(os.getenv("ONEAPI_ROOT")), "compiler", "*", "windows", "bin")
)
)
# Second strategy: Find MSVC via the registry
msft = winreg.WindowsRegistryView(
"SOFTWARE\\WOW6432Node\\Microsoft", winreg.HKEY.HKEY_LOCAL_MACHINE
)
vs_entries = msft.find_subkeys(r"VisualStudio_.*")
vs_paths = []
def clean_vs_path(path):
path = path.split(",")[0].lstrip("@")
return str((pathlib.Path(path).parent / "..\\..").resolve())
for entry in vs_entries:
try:
val = entry.get_subkey("Capabilities").get_value("ApplicationDescription").value
vs_paths.append(clean_vs_path(val))
except FileNotFoundError as e:
if hasattr(e, "winerror"):
if e.winerror == 2:
pass
else:
raise
else:
raise
_compiler_search_paths.extend(vs_paths)
return _compiler_search_paths
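
As a hedged illustration of clean_vs_path above, registry values are assumed to look roughly like the following (the concrete install path and value format are hypothetical):

# Hypothetical input/output for clean_vs_path (exact registry value format may vary):
#   raw value:          @C:\Program Files\Microsoft Visual Studio\2022\Community\Common7\IDE\devenv.exe,-101
#   after split/lstrip: C:\...\2022\Community\Common7\IDE\devenv.exe
#   parent / "..\.."    resolves to the install root: C:\...\2022\Community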

View File

@@ -6,7 +6,7 @@
Here is the EBNF grammar for a spec::
spec = [name] [node_options] { ^ node } |
spec = [name] [node_options] { ^[edge_properties] node } |
[name] [node_options] hash |
filename
@@ -14,7 +14,8 @@
[name] [node_options] hash |
filename
node_options = [@(version_list|version_pair)] [%compiler] { variant }
node_options = [@(version_list|version_pair)] [%compiler] { variant }
edge_properties = [ { bool_variant | key_value } ]
hash = / id
filename = (.|/|[a-zA-Z0-9-_]*/)([a-zA-Z0-9-_./]*)(.json|.yaml)
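For example, spec literals like the following would be accepted by this grammar (package and virtual names are illustrative):
mpileaks @2.3 %gcc@9.4.0 +debug ^mpich
mpileaks ^[deptypes=build,link virtuals=mpi] mpich
mpileaks/abc123        (node restricted by hash)
./my-spec.json         (spec read from a file)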
@@ -64,9 +65,9 @@
from llnl.util.tty import color
import spack.deptypes
import spack.error
import spack.spec
import spack.variant
import spack.version
IS_WINDOWS = sys.platform == "win32"
@@ -97,9 +98,9 @@
VALUE = r"(?:[a-zA-Z_0-9\-+\*.,:=\~\/\\]+)"
QUOTED_VALUE = r"[\"']+(?:[a-zA-Z_0-9\-+\*.,:=\~\/\\\s]+)[\"']+"
VERSION = r"=?([a-zA-Z0-9_][a-zA-Z_0-9\-\.]*\b)"
VERSION_RANGE = rf"({VERSION}\s*:\s*{VERSION}(?!\s*=)|:\s*{VERSION}(?!\s*=)|{VERSION}\s*:|:)"
VERSION_LIST = rf"({VERSION_RANGE}|{VERSION})(\s*[,]\s*({VERSION_RANGE}|{VERSION}))*"
VERSION = r"=?(?:[a-zA-Z0-9_][a-zA-Z_0-9\-\.]*\b)"
VERSION_RANGE = rf"(?:(?:{VERSION})?:(?:{VERSION}(?!\s*=))?)"
VERSION_LIST = rf"(?:{VERSION_RANGE}|{VERSION})(?:\s*,\s*(?:{VERSION_RANGE}|{VERSION}))*"
class TokenBase(enum.Enum):
@@ -127,6 +128,8 @@ class TokenType(TokenBase):
"""
# Dependency
START_EDGE_PROPERTIES = r"(?:\^\[)"
END_EDGE_PROPERTIES = r"(?:\])"
DEPENDENCY = r"(?:\^)"
# Version
VERSION_HASH_PAIR = rf"(?:@(?:{GIT_VERSION_PATTERN})=(?:{VERSION}))"
@@ -164,7 +167,7 @@ class Token:
__slots__ = "kind", "value", "start", "end"
def __init__(
self, kind: TokenType, value: str, start: Optional[int] = None, end: Optional[int] = None
self, kind: TokenBase, value: str, start: Optional[int] = None, end: Optional[int] = None
):
self.kind = kind
self.value = value
@@ -264,8 +267,8 @@ def tokens(self) -> List[Token]:
return list(filter(lambda x: x.kind != TokenType.WS, tokenize(self.literal_str)))
def next_spec(
self, initial_spec: Optional[spack.spec.Spec] = None
) -> Optional[spack.spec.Spec]:
self, initial_spec: Optional["spack.spec.Spec"] = None
) -> Optional["spack.spec.Spec"]:
"""Return the next spec parsed from text.
Args:
@@ -281,16 +284,15 @@ def next_spec(
initial_spec = initial_spec or spack.spec.Spec()
root_spec = SpecNodeParser(self.ctx).parse(initial_spec)
while True:
if self.ctx.accept(TokenType.DEPENDENCY):
dependency = SpecNodeParser(self.ctx).parse()
if dependency is None:
msg = (
"this dependency sigil needs to be followed by a package name "
"or a node attribute (version, variant, etc.)"
)
raise SpecParsingError(msg, self.ctx.current_token, self.literal_str)
if self.ctx.accept(TokenType.START_EDGE_PROPERTIES):
edge_properties = EdgeAttributeParser(self.ctx, self.literal_str).parse()
edge_properties.setdefault("depflag", 0)
edge_properties.setdefault("virtuals", ())
dependency = self._parse_node(root_spec)
root_spec._add_dependency(dependency, **edge_properties)
elif self.ctx.accept(TokenType.DEPENDENCY):
dependency = self._parse_node(root_spec)
root_spec._add_dependency(dependency, depflag=0, virtuals=())
else:
@@ -298,7 +300,19 @@ def next_spec(
return root_spec
def all_specs(self) -> List[spack.spec.Spec]:
def _parse_node(self, root_spec):
dependency = SpecNodeParser(self.ctx).parse()
if dependency is None:
msg = (
"the dependency sigil and any optional edge attributes must be followed by a "
"package name or a node attribute (version, variant, etc.)"
)
raise SpecParsingError(msg, self.ctx.current_token, self.literal_str)
if root_spec.concrete:
raise spack.spec.RedundantSpecError(root_spec, "^" + str(dependency))
return dependency
def all_specs(self) -> List["spack.spec.Spec"]:
"""Return all the specs that remain to be parsed"""
return list(iter(self.next_spec, None))
@@ -313,7 +327,9 @@ def __init__(self, ctx):
self.has_compiler = False
self.has_version = False
def parse(self, initial_spec: Optional[spack.spec.Spec] = None) -> Optional[spack.spec.Spec]:
def parse(
self, initial_spec: Optional["spack.spec.Spec"] = None
) -> Optional["spack.spec.Spec"]:
"""Parse a single spec node from a stream of tokens
Args:
@@ -414,7 +430,7 @@ class FileParser:
def __init__(self, ctx):
self.ctx = ctx
def parse(self, initial_spec: spack.spec.Spec) -> spack.spec.Spec:
def parse(self, initial_spec: "spack.spec.Spec") -> "spack.spec.Spec":
"""Parse a spec tree from a specfile.
Args:
@@ -437,7 +453,42 @@ def parse(self, initial_spec: spack.spec.Spec) -> spack.spec.Spec:
return initial_spec
def parse(text: str) -> List[spack.spec.Spec]:
class EdgeAttributeParser:
__slots__ = "ctx", "literal_str"
def __init__(self, ctx, literal_str):
self.ctx = ctx
self.literal_str = literal_str
def parse(self):
attributes = {}
while True:
if self.ctx.accept(TokenType.KEY_VALUE_PAIR):
name, value = self.ctx.current_token.value.split("=", maxsplit=1)
name = name.strip("'\" ")
value = value.strip("'\" ").split(",")
attributes[name] = value
if name not in ("deptypes", "virtuals"):
msg = (
"the only edge attributes that are currently accepted "
'are "deptypes" and "virtuals"'
)
raise SpecParsingError(msg, self.ctx.current_token, self.literal_str)
# TODO: Add code to accept bool variants here as soon as use variants are implemented
elif self.ctx.accept(TokenType.END_EDGE_PROPERTIES):
break
else:
msg = "unexpected token in edge attributes"
raise SpecParsingError(msg, self.ctx.next_token, self.literal_str)
# Turn deptypes=... to depflag representation
if "deptypes" in attributes:
deptype_string = attributes.pop("deptypes")
attributes["depflag"] = spack.deptypes.canonicalize(deptype_string)
return attributes
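A hedged sketch of the end-to-end effect (the spec string is illustrative; parse_one_or_raise is defined later in this file):

# Illustrative only: edge properties become a depflag and a virtuals tuple on the edge.
root = parse_one_or_raise("mpileaks ^[deptypes=build,link virtuals=mpi] mpich")
edge = root.edges_to_dependencies(name="mpich")[0]
assert "mpi" in edge.virtuals  # "deptypes=build,link" was canonicalized into the edge depflag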
def parse(text: str) -> List["spack.spec.Spec"]:
"""Parse text into a list of strings
Args:
@@ -450,8 +501,8 @@ def parse(text: str) -> List[spack.spec.Spec]:
def parse_one_or_raise(
text: str, initial_spec: Optional[spack.spec.Spec] = None
) -> spack.spec.Spec:
text: str, initial_spec: Optional["spack.spec.Spec"] = None
) -> "spack.spec.Spec":
"""Parse exactly one spec from text and return it, or raise
Args:

View File

@@ -7,6 +7,7 @@
import inspect
import os
import os.path
import pathlib
import sys
import llnl.util.filesystem
@@ -36,10 +37,12 @@ def apply_patch(stage, patch_path, level=1, working_dir="."):
"""
git_utils_path = os.environ.get("PATH", "")
if sys.platform == "win32":
git = which_string("git", required=True)
git_root = git.split("\\")[:-2]
git_root.extend(["usr", "bin"])
git_utils_path = os.sep.join(git_root)
git = which_string("git")
if git:
git = pathlib.Path(git)
git_root = git.parent.parent
git_root = git_root / "usr" / "bin"
git_utils_path = os.pathsep.join([str(git_root), git_utils_path])
# TODO: Decouple Spack's patch support on Windows from Git
# for Windows, and instead have Spack directly fetch, install, and
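
As a hedged illustration of the path derivation above (the Git install location is hypothetical):

# Hypothetical example of the derivation:
#   which_string("git")       -> C:\Program Files\Git\cmd\git.exe
#   git.parent.parent         -> C:\Program Files\Git
#   git_root / "usr" / "bin"  -> C:\Program Files\Git\usr\bin   (Git-for-Windows GNU utilities)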

View File

@@ -3,7 +3,6 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Classes and functions to manage providers of virtual dependencies"""
import itertools
from typing import Dict, List, Optional, Set
import spack.error
@@ -11,33 +10,6 @@
import spack.util.spack_json as sjson
def _cross_provider_maps(lmap, rmap):
"""Return a dictionary that combines constraint requests from both input.
Args:
lmap: main provider map
rmap: provider map with additional constraints
"""
# TODO: this is pretty darned nasty, and inefficient, but there
# TODO: are not that many vdeps in most specs.
result = {}
for lspec, rspec in itertools.product(lmap, rmap):
try:
constrained = lspec.constrained(rspec)
except spack.error.UnsatisfiableSpecError:
continue
# lp and rp are left and right provider specs.
for lp_spec, rp_spec in itertools.product(lmap[lspec], rmap[rspec]):
if lp_spec.name == rp_spec.name:
try:
const = lp_spec.constrained(rp_spec, deps=False)
result.setdefault(constrained, set()).add(const)
except spack.error.UnsatisfiableSpecError:
continue
return result
class _IndexBase:
#: This is a dict of dicts used for finding providers of particular
#: virtual dependencies. The dict of dicts looks like:
@@ -81,29 +53,6 @@ def providers_for(self, virtual_spec):
def __contains__(self, name):
return name in self.providers
def satisfies(self, other):
"""Determine if the providers of virtual specs are compatible.
Args:
other: another provider index
Returns:
True if the providers are compatible, False otherwise.
"""
common = set(self.providers) & set(other.providers)
if not common:
return True
# This ensures that some provider in other COULD satisfy the
# vpkg constraints on self.
result = {}
for name in common:
crossed = _cross_provider_maps(self.providers[name], other.providers[name])
if crossed:
result[name] = crossed
return all(c in result for c in common)
def __eq__(self, other):
return self.providers == other.providers

View File

@@ -17,7 +17,7 @@
#: THIS NEEDS TO BE UPDATED FOR EVERY NEW KEYWORD THAT
#: IS ADDED IMMEDIATELY BELOW THE MODULE TYPE ATTRIBUTE
spec_regex = (
r"(?!hierarchy|core_specs|verbose|hash_length|defaults|filter_hierarchy_specs|"
r"(?!hierarchy|core_specs|verbose|hash_length|defaults|filter_hierarchy_specs|hide|"
r"whitelist|blacklist|" # DEPRECATED: remove in 0.20.
r"include|exclude|" # use these more inclusive/consistent options
r"projections|naming_scheme|core_compilers|all)(^\w[\w-]*)"
@@ -89,6 +89,7 @@
"exclude": array_of_strings,
"exclude_implicits": {"type": "boolean", "default": False},
"defaults": array_of_strings,
"hide_implicits": {"type": "boolean", "default": False},
"naming_scheme": {"type": "string"}, # Can we be more specific here?
"projections": projections_scheme,
"all": module_file_configuration,
@@ -187,3 +188,52 @@
"additionalProperties": False,
"properties": properties,
}
# deprecated keys and their replacements
old_to_new_key = {"exclude_implicits": "hide_implicits"}
def update_keys(data, key_translations):
"""Change blacklist/whitelist to exclude/include.
Arguments:
data (dict): data from a valid modules configuration.
key_translations (dict): A dictionary of keys to translate to
their respective values.
Return:
(bool) whether anything was changed in data
"""
changed = False
if isinstance(data, dict):
keys = list(data.keys())
for key in keys:
value = data[key]
translation = key_translations.get(key)
if translation:
data[translation] = data.pop(key)
changed = True
changed |= update_keys(value, key_translations)
elif isinstance(data, list):
for elt in data:
changed |= update_keys(elt, key_translations)
return changed
def update(data):
"""Update the data in place to remove deprecated properties.
Args:
data (dict): dictionary to be updated
Returns:
True if data was changed, False otherwise
"""
# translate deprecated keys (e.g. exclude_implicits -> hide_implicits)
return update_keys(data, old_to_new_key)
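
A minimal usage sketch, assuming a configuration fragment that still uses the deprecated key:

# Illustrative only: translate the deprecated key in place.
data = {"tcl": {"exclude_implicits": True}}
assert update_keys(data, old_to_new_key)          # something changed
assert data == {"tcl": {"hide_implicits": True}}  # key was renamed in place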

View File

@@ -8,7 +8,6 @@
import enum
import itertools
import os
import pathlib
import pprint
import re
import types
@@ -889,14 +888,6 @@ def on_model(model):
timer.start("solve")
solve_result = self.control.solve(**solve_kwargs)
if solve_result.satisfiable and self._model_has_cycles(models):
tty.debug(f"cycles detected, falling back to slower algorithm [specs={specs}]")
self.control.load(os.path.join(parent_dir, "cycle_detection.lp"))
self.control.ground([("no_cycle", [])])
models.clear()
solve_result = self.control.solve(**solve_kwargs)
timer.stop("solve")
# once done, construct the solve result
@@ -950,26 +941,6 @@ def on_model(model):
return result, timer, self.control.statistics
def _model_has_cycles(self, models):
"""Returns true if the best model has cycles in it"""
cycle_detection = clingo.Control()
parent_dir = pathlib.Path(__file__).parent
lp_file = parent_dir / "cycle_detection.lp"
min_cost, best_model = min(models)
with cycle_detection.backend() as backend:
for atom in best_model:
if atom.name == "attr" and str(atom.arguments[0]) == '"depends_on"':
symbol = fn.depends_on(atom.arguments[1], atom.arguments[2])
atom_id = backend.add_atom(symbol.symbol())
backend.add_rule([atom_id], [], choice=False)
cycle_detection.load(str(lp_file))
cycle_detection.ground([("base", []), ("no_cycle", [])])
cycle_result = cycle_detection.solve()
return cycle_result.unsatisfiable
class ConcreteSpecsByHash(collections.abc.Mapping):
"""Mapping containing concrete specs keyed by DAG hash.
@@ -1530,6 +1501,17 @@ def package_provider_rules(self, pkg):
)
self.gen.newline()
for when, sets_of_virtuals in pkg.provided_together.items():
condition_id = self.condition(
when, name=pkg.name, msg="Virtuals are provided together"
)
for set_id, virtuals_together in enumerate(sets_of_virtuals):
for name in virtuals_together:
self.gen.fact(
fn.pkg_fact(pkg.name, fn.provided_together(condition_id, set_id, name))
)
self.gen.newline()
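For context, these facts originate from packages that declare several virtuals in a single provides() directive; a hedged mock-package sketch (the package and virtuals are illustrative):

# Hypothetical mock package: blas and lapack must be provided together,
# which emits provided_together facts for the corresponding condition.
class Openblas(Package):
    provides("blas", "lapack")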
def package_dependencies_rules(self, pkg):
"""Translate 'depends_on' directives into ASP logic."""
for _, conditions in sorted(pkg.dependencies.items()):
@@ -1931,6 +1913,15 @@ class Body:
clauses.append(fn.attr("package_hash", spec.name, spec._package_hash))
clauses.append(fn.attr("hash", spec.name, spec.dag_hash()))
edges = spec.edges_from_dependents()
virtuals = [x for x in itertools.chain.from_iterable([edge.virtuals for edge in edges])]
if not body:
for virtual in virtuals:
clauses.append(fn.attr("provider_set", spec.name, virtual))
else:
for virtual in virtuals:
clauses.append(fn.attr("virtual_on_incoming_edges", spec.name, virtual))
# add all clauses from dependencies
if transitive:
# TODO: Eventually distinguish 2 deps on the same pkg (build and link)
@@ -3153,10 +3144,11 @@ def __init__(self, provided, conflicts):
msg = (
"Spack concretizer internal error. Please submit a bug report and include the "
"command, environment if applicable and the following error message."
f"\n {provided} is unsatisfiable, errors are:"
f"\n {provided} is unsatisfiable"
)
msg += "".join([f"\n {conflict}" for conflict in conflicts])
if conflicts:
msg += ", errors are:" + "".join([f"\n {conflict}" for conflict in conflicts])
super(spack.error.UnsatisfiableSpecError, self).__init__(msg)

View File

@@ -113,10 +113,11 @@ unification_set(SetID, VirtualNode)
multiple_nodes_attribute("node_flag_source").
multiple_nodes_attribute("depends_on").
multiple_nodes_attribute("virtual_on_edge").
multiple_nodes_attribute("provider_set").
% Map constraint on the literal ID to facts on the node
attr(Name, node(min_dupe_id, A1)) :- literal(LiteralID, Name, A1), solve_literal(LiteralID).
attr(Name, node(min_dupe_id, A1), A2) :- literal(LiteralID, Name, A1, A2), solve_literal(LiteralID).
attr(Name, node(min_dupe_id, A1), A2) :- literal(LiteralID, Name, A1, A2), solve_literal(LiteralID), not multiple_nodes_attribute(Name).
attr(Name, node(min_dupe_id, A1), A2, A3) :- literal(LiteralID, Name, A1, A2, A3), solve_literal(LiteralID), not multiple_nodes_attribute(Name).
attr(Name, node(min_dupe_id, A1), A2, A3, A4) :- literal(LiteralID, Name, A1, A2, A3, A4), solve_literal(LiteralID).
@@ -124,6 +125,10 @@ attr(Name, node(min_dupe_id, A1), A2, A3, A4) :- literal(LiteralID, Name, A1, A2
attr("node_flag_source", node(min_dupe_id, A1), A2, node(min_dupe_id, A3)) :- literal(LiteralID, "node_flag_source", A1, A2, A3), solve_literal(LiteralID).
attr("depends_on", node(min_dupe_id, A1), node(min_dupe_id, A2), A3) :- literal(LiteralID, "depends_on", A1, A2, A3), solve_literal(LiteralID).
attr("virtual_node", node(min_dupe_id, Virtual)) :- literal(LiteralID, "provider_set", _, Virtual), solve_literal(LiteralID).
attr("provider_set", node(min_dupe_id, Provider), node(min_dupe_id, Virtual)) :- literal(LiteralID, "provider_set", Provider, Virtual), solve_literal(LiteralID).
provider(node(min_dupe_id, Provider), node(min_dupe_id, Virtual)) :- literal(LiteralID, "provider_set", Provider, Virtual), solve_literal(LiteralID).
% Discriminate between "roots" that have been explicitly requested, and roots that are deduced from "virtual roots"
explicitly_requested_root(node(min_dupe_id, A1)) :- literal(LiteralID, "root", A1), solve_literal(LiteralID).
@@ -476,6 +481,21 @@ error(1, Msg)
% Virtual dependencies
%-----------------------------------------------------------------------------
% If the provider is set from the command line, its weight is 0
possible_provider_weight(ProviderNode, VirtualNode, 0, "Set on the command line")
:- attr("provider_set", ProviderNode, VirtualNode).
% Enforces all virtuals to be provided, if multiple of them are provided together
error(100, "Package '{0}' needs to provide both '{1}' and '{2}' together, but provides only '{1}'", Package, Virtual1, Virtual2)
:- condition_holds(ID, node(X, Package)),
pkg_fact(Package, provided_together(ID, SetID, Virtual1)),
pkg_fact(Package, provided_together(ID, SetID, Virtual2)),
Virtual1 != Virtual2,
attr("virtual_on_incoming_edges", node(X, Package), Virtual1),
not attr("virtual_on_incoming_edges", node(X, Package), Virtual2),
attr("virtual_node", node(_, Virtual1)),
attr("virtual_node", node(_, Virtual2)).
% if a package depends on a virtual, it's not external and we have a
% provider for that virtual then it depends on the provider
node_depends_on_virtual(PackageNode, Virtual, Type)
@@ -494,6 +514,9 @@ attr("virtual_on_edge", PackageNode, ProviderNode, Virtual)
provider(ProviderNode, node(_, Virtual)),
not external(PackageNode).
attr("virtual_on_incoming_edges", ProviderNode, Virtual)
:- attr("virtual_on_edge", _, ProviderNode, Virtual).
% dependencies on virtuals also imply that the virtual is a virtual node
1 { attr("virtual_node", node(0..X-1, Virtual)) : max_dupes(Virtual, X) }
:- node_depends_on_virtual(PackageNode, Virtual).
@@ -501,6 +524,10 @@ attr("virtual_on_edge", PackageNode, ProviderNode, Virtual)
% If there's a virtual node, we must select one and only one provider.
% The provider must be selected among the possible providers.
error(100, "'{0}' cannot be a provider for the '{1}' virtual", Package, Virtual)
:- attr("provider_set", node(min_dupe_id, Package), node(min_dupe_id, Virtual)),
not virtual_condition_holds( node(min_dupe_id, Package), Virtual).
error(100, "Cannot find valid provider for virtual {0}", Virtual)
:- attr("virtual_node", node(X, Virtual)),
not provider(_, node(X, Virtual)).
@@ -521,20 +548,6 @@ attr("root", PackageNode) :- attr("virtual_root", VirtualNode), provider(Package
attr("node", PackageNode), virtual_condition_holds(PackageNode, Virtual) } 1
:- attr("virtual_node", node(X, Virtual)).
% If a spec is selected as a provider, it is the provider of all the virtuals it could provide
:- provider(PackageNode, node(X, Virtual1)),
virtual_condition_holds(PackageNode, Virtual2),
Virtual2 != Virtual1,
unification_set(SetID, PackageNode),
unification_set(SetID, node(X, Virtual2)),
not provider(PackageNode, node(X, Virtual2)).
% If a spec is a dependency, and could provide a needed virtual, it must be a provider
:- node_depends_on_virtual(PackageNode, Virtual),
depends_on(PackageNode, PossibleProviderNode),
virtual_condition_holds(PossibleProviderNode, Virtual),
not attr("virtual_on_edge", PackageNode, PossibleProviderNode, Virtual).
% The provider provides the virtual if some provider condition holds.
virtual_condition_holds(node(ProviderID, Provider), Virtual) :- virtual_condition_holds(ID, node(ProviderID, Provider), Virtual).
virtual_condition_holds(ID, node(ProviderID, Provider), Virtual) :-
@@ -561,6 +574,8 @@ do_not_impose(EffectID, node(X, Package))
not virtual_condition_holds(PackageNode, Virtual),
internal_error("Virtual when provides not respected").
#defined provided_together/4.
%-----------------------------------------------------------------------------
% Virtual dependency weights
%-----------------------------------------------------------------------------
@@ -696,15 +711,18 @@ requirement_group_satisfied(node(ID, Package), X) :-
% flags if their only source is from a requirement. This is overly-specific
% and should use a more-generic approach like in https://github.com/spack/spack/pull/37180
{ attr("node_flag", node(ID, A1), A2, A3) } :-
requirement_group_member(Y, Package, X),
activate_requirement(node(ID, Package), X),
imposed_constraint(Y,"node_flag_set", A1, A2, A3).
{ attr("node_flag", node(ID, Package), FlagType, FlagValue) } :-
requirement_group_member(ConditionID, Package, RequirementID),
activate_requirement(node(ID, Package), RequirementID),
pkg_fact(Package, condition_effect(ConditionID, EffectID)),
imposed_constraint(EffectID, "node_flag_set", Package, FlagType, FlagValue).
{ attr("node_flag_source", node(ID, A1), A2, node(ID, A3)) } :-
requirement_group_member(Y, Package, X),
activate_requirement(node(ID, Package), X),
imposed_constraint(Y,"node_flag_source", A1, A2, A3).
{ attr("node_flag_source", node(NodeID1, Package1), FlagType, node(NodeID2, Package2)) } :-
requirement_group_member(ConditionID, Package1, RequirementID),
activate_requirement(node(NodeID1, Package1), RequirementID),
pkg_fact(Package1, condition_effect(ConditionID, EffectID)),
imposed_constraint(EffectID, "node_flag_source", Package1, FlagType, Package2),
imposed_nodes(EffectID, node(NodeID2, Package2), node(NodeID1, Package1)).
requirement_weight(node(ID, Package), Group, W) :-
W = #min {
@@ -863,6 +881,7 @@ variant_default_not_used(node(ID, Package), Variant, Value)
:- variant_default_value(Package, Variant, Value),
node_has_variant(node(ID, Package), Variant),
not attr("variant_value", node(ID, Package), Variant, Value),
not attr("variant_propagate", node(ID, Package), Variant, _, _),
attr("node", node(ID, Package)).
% The variant is set in an external spec
@@ -1325,6 +1344,10 @@ build_priority(PackageNode, 0) :- not build(PackageNode), attr("node", Package
#defined installed_hash/2.
% This statement, which is a hidden feature of clingo, let us avoid cycles in the DAG
#edge (A, B) : depends_on(A, B).
%-----------------------------------------------------------------
% Optimization to avoid errors
%-----------------------------------------------------------------

View File

@@ -1,21 +0,0 @@
% Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
% Spack Project Developers. See the top-level COPYRIGHT file for details.
%
% SPDX-License-Identifier: (Apache-2.0 OR MIT)
%=============================================================================
% Avoid cycles in the DAG
%
% Some combinations of conditional dependencies can result in cycles;
% this ensures that we solve around them. Note that these rules are quite
% demanding on both grounding and solving, since they need to compute and
% consider all possible paths between pairs of nodes.
%=============================================================================
#program no_cycle.
path(Parent, Child) :- depends_on(Parent, Child).
path(Parent, Descendant) :- path(Parent, A), depends_on(A, Descendant).
:- path(A, A).
#defined depends_on/2.

View File

@@ -59,7 +59,7 @@
import re
import socket
import warnings
from typing import Any, Callable, Dict, List, Optional, Tuple, Union
from typing import Any, Callable, Dict, List, Optional, Set, Tuple, Union
import llnl.path
import llnl.string
@@ -75,6 +75,7 @@
import spack.deptypes as dt
import spack.error
import spack.hash_types as ht
import spack.parser
import spack.patch
import spack.paths
import spack.platforms
@@ -1318,8 +1319,6 @@ def __init__(
self.external_path = external_path
self.external_module = external_module
"""
import spack.parser
# Copy if spec_like is a Spec.
if isinstance(spec_like, Spec):
self._dup(spec_like)
@@ -1465,6 +1464,26 @@ def edges_to_dependencies(self, name=None, depflag: dt.DepFlag = dt.ALL):
"""
return [d for d in self._dependencies.select(child=name, depflag=depflag)]
@property
def edge_attributes(self) -> str:
"""Helper method to print edge attributes in spec literals"""
edges = self.edges_from_dependents()
if not edges:
return ""
union = DependencySpec(parent=Spec(), spec=self, depflag=0, virtuals=())
for edge in edges:
union.update_deptypes(edge.depflag)
union.update_virtuals(edge.virtuals)
deptypes_str = (
f"deptypes={','.join(dt.flag_to_tuple(union.depflag))}" if union.depflag else ""
)
virtuals_str = f"virtuals={','.join(union.virtuals)}" if union.virtuals else ""
if not deptypes_str and not virtuals_str:
return ""
result = f"{deptypes_str} {virtuals_str}".strip()
return f"[{result}]"
def dependencies(self, name=None, deptype: Union[dt.DepTypes, dt.DepFlag] = dt.ALL):
"""Return a list of direct dependencies (nodes in the DAG).
@@ -3689,8 +3708,15 @@ def intersects(self, other: Union[str, "Spec"], deps: bool = True) -> bool:
if other.concrete and self.concrete:
return self.dag_hash() == other.dag_hash()
self_hash = self.dag_hash() if self.concrete else self.abstract_hash
other_hash = other.dag_hash() if other.concrete else other.abstract_hash
elif self.concrete:
return self.satisfies(other)
elif other.concrete:
return other.satisfies(self)
# From here we know both self and other are not concrete
self_hash = self.abstract_hash
other_hash = other.abstract_hash
if (
self_hash
@@ -3779,10 +3805,6 @@ def _intersects_dependencies(self, other):
repository=spack.repo.PATH, specs=other.traverse(), restrict=True
)
# This handles cases where there are already providers for both vpkgs
if not self_index.satisfies(other_index):
return False
# These two loops handle cases where there is an overly restrictive
# vpkg in one spec for a provider in the other (e.g., mpi@3: is not
# compatible with mpich2)
@@ -3880,7 +3902,46 @@ def satisfies(self, other: Union[str, "Spec"], deps: bool = True) -> bool:
return False
# If we arrived here, then rhs is abstract. At the moment we don't care about the edge
# structure of an abstract DAG - hence the deps=False parameter.
# structure of an abstract DAG, so we check if any edge could satisfy the properties
# we ask for.
lhs_edges: Dict[str, Set[DependencySpec]] = collections.defaultdict(set)
for rhs_edge in other.traverse_edges(root=False, cover="edges"):
# If we are checking for ^mpi we need to verify if there is any edge
if rhs_edge.spec.virtual:
rhs_edge.update_virtuals(virtuals=(rhs_edge.spec.name,))
if not rhs_edge.virtuals:
continue
if not lhs_edges:
# Construct a map of the link/run subDAG + direct "build" edges,
# keyed by dependency name
for lhs_edge in self.traverse_edges(
root=False, cover="edges", deptype=("link", "run")
):
lhs_edges[lhs_edge.spec.name].add(lhs_edge)
for virtual_name in lhs_edge.virtuals:
lhs_edges[virtual_name].add(lhs_edge)
build_edges = self.edges_to_dependencies(depflag=dt.BUILD)
for lhs_edge in build_edges:
lhs_edges[lhs_edge.spec.name].add(lhs_edge)
for virtual_name in lhs_edge.virtuals:
lhs_edges[virtual_name].add(lhs_edge)
# We don't have edges to this dependency
current_dependency_name = rhs_edge.spec.name
if current_dependency_name not in lhs_edges:
return False
for virtual in rhs_edge.virtuals:
has_virtual = any(
virtual in edge.virtuals for edge in lhs_edges[current_dependency_name]
)
if not has_virtual:
return False
# Edges have been checked above already, hence deps=False
return all(
any(lhs.satisfies(rhs, deps=False) for lhs in self.traverse(root=False))
for rhs in other.traverse(root=False)
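A hedged example of the new check, mirroring the edges.test cases added later in this diff:

# Illustrative, using the mock packages from the edges.test repository:
s = Spec("conditional-edge+foo").concretized()
assert s.satisfies("^[virtuals=zlib-api] zlib")
assert not s.satisfies("^[virtuals=mpi] zlib")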
@@ -4082,9 +4143,7 @@ def __getitem__(self, name):
"""
query_parameters = name.split(":")
if len(query_parameters) > 2:
msg = "key has more than one ':' symbol."
msg += " At most one is admitted."
raise KeyError(msg)
raise KeyError("key has more than one ':' symbol. At most one is admitted.")
name, query_parameters = query_parameters[0], query_parameters[1:]
if query_parameters:
@@ -4109,11 +4168,17 @@ def __getitem__(self, name):
itertools.chain(
# Regular specs
(x for x in order() if x.name == name),
(
x
for x in order()
if (not x.virtual)
and any(name in edge.virtuals for edge in x.edges_from_dependents())
),
(x for x in order() if (not x.virtual) and x.package.provides(name)),
)
)
except StopIteration:
raise KeyError("No spec with name %s in %s" % (name, self))
raise KeyError(f"No spec with name {name} in {self}")
if self._concrete:
return SpecBuildInterface(value, name, query_parameters)
@@ -4491,10 +4556,26 @@ def format_path(
return str(path_ctor(*output_path_components))
def __str__(self):
sorted_nodes = [self] + sorted(
self.traverse(root=False), key=lambda x: x.name or x.abstract_hash
root_str = [self.format()]
sorted_dependencies = sorted(
self.traverse(root=False), key=lambda x: (x.name, x.abstract_hash)
)
spec_str = " ^".join(d.format() for d in sorted_nodes)
sorted_dependencies = [
d.format("{edge_attributes} " + DEFAULT_FORMAT) for d in sorted_dependencies
]
spec_str = " ^".join(root_str + sorted_dependencies)
return spec_str.strip()
@property
def colored_str(self):
root_str = [self.cformat()]
sorted_dependencies = sorted(
self.traverse(root=False), key=lambda x: (x.name, x.abstract_hash)
)
sorted_dependencies = [
d.cformat("{edge_attributes} " + DISPLAY_FORMAT) for d in sorted_dependencies
]
spec_str = " ^".join(root_str + sorted_dependencies)
return spec_str.strip()
def install_status(self):

View File

@@ -37,6 +37,7 @@
import spack.fetch_strategy as fs
import spack.mirror
import spack.paths
import spack.resource
import spack.spec
import spack.stage
import spack.util.lock
@@ -455,6 +456,7 @@ def fetch(self, mirror_only=False, err_msg=None):
mirror_urls = [
url_util.join(mirror.fetch_url, rel_path)
for mirror in spack.mirror.MirrorCollection(source=True).values()
if not mirror.fetch_url.startswith("oci://")
for rel_path in self.mirror_paths
]
@@ -658,8 +660,14 @@ def destroy(self):
class ResourceStage(Stage):
def __init__(self, url_or_fetch_strategy, root, resource, **kwargs):
super().__init__(url_or_fetch_strategy, **kwargs)
def __init__(
self,
fetch_strategy: fs.FetchStrategy,
root: Stage,
resource: spack.resource.Resource,
**kwargs,
):
super().__init__(fetch_strategy, **kwargs)
self.root_stage = root
self.resource = resource
@@ -870,6 +878,7 @@ def interactive_version_filter(
url_dict: Dict[StandardVersion, str],
known_versions: Iterable[StandardVersion] = (),
*,
initial_version_filter: Optional[VersionList] = None,
url_changes: Set[StandardVersion] = set(),
input: Callable[..., str] = input,
) -> Optional[Dict[StandardVersion, str]]:
@@ -883,9 +892,10 @@ def interactive_version_filter(
Filtered dictionary of versions to URLs or None if the user wants to quit
"""
# Find length of longest string in the list for padding
sorted_and_filtered = sorted(url_dict.keys(), reverse=True)
version_filter = VersionList([":"])
max_len = max(len(str(v)) for v in sorted_and_filtered)
version_filter = initial_version_filter or VersionList([":"])
max_len = max(len(str(v)) for v in url_dict) if url_dict else 0
sorted_and_filtered = [v for v in url_dict if v.satisfies(version_filter)]
sorted_and_filtered.sort(reverse=True)
orig_url_dict = url_dict # only copy when using editor to modify
print_header = True
VERSION_COLOR = spack.spec.VERSION_COLOR
@@ -893,21 +903,20 @@ def interactive_version_filter(
if print_header:
has_filter = version_filter != VersionList([":"])
header = []
if not sorted_and_filtered:
header.append("No versions selected")
elif len(sorted_and_filtered) == len(orig_url_dict):
if len(orig_url_dict) > 0 and len(sorted_and_filtered) == len(orig_url_dict):
header.append(
f"Selected {llnl.string.plural(len(sorted_and_filtered), 'version')}"
)
else:
header.append(
f"Selected {len(sorted_and_filtered)} of {len(orig_url_dict)} versions"
f"Selected {len(sorted_and_filtered)} of "
f"{llnl.string.plural(len(orig_url_dict), 'version')}"
)
if sorted_and_filtered and known_versions:
num_new = sum(1 for v in sorted_and_filtered if v not in known_versions)
header.append(f"{llnl.string.plural(num_new, 'new version')}")
if has_filter:
header.append(colorize(f"Filtered by {VERSION_COLOR}{version_filter}@."))
header.append(colorize(f"Filtered by {VERSION_COLOR}@@{version_filter}@."))
version_with_url = [
colorize(

View File

@@ -21,6 +21,10 @@
(["wrong-variant-in-depends-on"], ["PKG-DIRECTIVES", "PKG-PROPERTIES"]),
# This package has a GitHub patch URL without full_index=1
(["invalid-github-patch-url"], ["PKG-DIRECTIVES", "PKG-PROPERTIES"]),
# This package has invalid GitLab patch URLs
(["invalid-gitlab-patch-url"], ["PKG-DIRECTIVES", "PKG-PROPERTIES"]),
# This package has invalid GitLab patch URLs
(["invalid-selfhosted-gitlab-patch-url"], ["PKG-DIRECTIVES", "PKG-PROPERTIES"]),
# This package has a stand-alone 'test*' method in build-time callbacks
(["fail-test-audit"], ["PKG-DIRECTIVES", "PKG-PROPERTIES"]),
# This package has no issues

View File

@@ -642,3 +642,13 @@ def test_effective_deptype_run_environment(default_mock_concretization):
for spec, effective_type in spack.build_environment.effective_deptypes(s, context=Context.RUN):
assert effective_type & expected_flags.pop(spec.name) == effective_type
assert not expected_flags, f"Missing {expected_flags.keys()} from effective_deptypes"
def test_monkey_patching_works_across_virtual(default_mock_concretization):
"""Assert that a monkeypatched attribute is found regardless we access through the
real name or the virtual name.
"""
s = default_mock_concretization("mpileaks ^mpich")
s["mpich"].foo = "foo"
assert s["mpich"].foo == "foo"
assert s["mpi"].foo == "foo"

View File

@@ -326,4 +326,8 @@ def fake_push(node, push_url, options):
buildcache(*buildcache_create_args)
assert packages_to_push == expected
# Order is not guaranteed, so we can't just compare lists
assert set(packages_to_push) == set(expected)
# Ensure no duplicates
assert len(set(packages_to_push)) == len(packages_to_push)

View File

@@ -8,6 +8,7 @@
import pytest
import spack.cmd.checksum
import spack.parser
import spack.repo
import spack.spec
from spack.main import SpackCommand
@@ -254,17 +255,10 @@ def test_checksum_deprecated_version(mock_packages, mock_clone_repo, mock_fetch,
assert "Added 0 new versions to" not in output
def test_checksum_at(mock_packages):
pkg_cls = spack.repo.PATH.get_pkg_class("zlib")
versions = [str(v) for v in pkg_cls.versions]
output = spack_checksum(f"zlib@{versions[0]}")
assert "Found 1 version" in output
def test_checksum_url(mock_packages):
pkg_cls = spack.repo.PATH.get_pkg_class("zlib")
output = spack_checksum(f"{pkg_cls.url}", fail_on_error=False)
assert "accepts package names" in output
with pytest.raises(spack.parser.SpecSyntaxError):
spack_checksum(f"{pkg_cls.url}")
def test_checksum_verification_fails(install_mockery, capsys):

View File

@@ -14,7 +14,14 @@
dependencies = SpackCommand("dependencies")
mpis = ["low-priority-provider", "mpich", "mpich2", "multi-provider-mpi", "zmpi"]
mpis = [
"intel-parallel-studio",
"low-priority-provider",
"mpich",
"mpich2",
"multi-provider-mpi",
"zmpi",
]
mpi_deps = ["fake"]

View File

@@ -14,6 +14,7 @@
import llnl.util.filesystem as fs
import llnl.util.link_tree
import llnl.util.tty as tty
import spack.cmd.env
import spack.config
@@ -977,10 +978,9 @@ def test_included_config_precedence(environment_from_manifest):
assert any([x.satisfies("libelf@0.8.10") for x in e._get_environment_specs()])
def test_bad_env_yaml_format(tmpdir):
filename = str(tmpdir.join("spack.yaml"))
with open(filename, "w") as f:
f.write(
def test_bad_env_yaml_format(environment_from_manifest):
with pytest.raises(spack.config.ConfigFormatError) as e:
environment_from_manifest(
"""\
spack:
spacks:
@@ -988,12 +988,59 @@ def test_bad_env_yaml_format(tmpdir):
"""
)
with tmpdir.as_cwd():
with pytest.raises(spack.config.ConfigFormatError) as e:
env("create", "test", "./spack.yaml")
assert "spack.yaml:2" in str(e)
assert "'spacks' was unexpected" in str(e)
assert "test" not in env("list")
def test_bad_env_yaml_format_remove(mutable_mock_env_path):
badenv = "badenv"
env("create", badenv)
filename = mutable_mock_env_path / "spack.yaml"
with open(filename, "w") as f:
f.write(
"""\
- mpileaks
"""
)
assert badenv in env("list")
env("remove", "-y", badenv)
assert badenv not in env("list")
@pytest.mark.parametrize("answer", ["-y", ""])
def test_multi_env_remove(mutable_mock_env_path, monkeypatch, answer):
"""Test removal (or not) of a valid and invalid environment"""
remove_environment = answer == "-y"
monkeypatch.setattr(tty, "get_yes_or_no", lambda prompt, default: remove_environment)
environments = ["goodenv", "badenv"]
for e in environments:
env("create", e)
# Ensure the bad environment contains invalid yaml
filename = mutable_mock_env_path / environments[1] / "spack.yaml"
filename.write_text(
"""\
- libdwarf
"""
)
assert all(e in env("list") for e in environments)
args = [answer] if answer else []
args.extend(environments)
output = env("remove", *args, fail_on_error=False)
if remove_environment is True:
# Successfully removed (and reported removal) of *both* environments
assert not all(e in env("list") for e in environments)
assert output.count("Successfully removed") == 2
else:
# Not removing any of the environments
assert all(e in env("list") for e in environments)
def test_env_loads(install_mockery, mock_fetch):
env("create", "test")
@@ -2443,8 +2490,12 @@ def test_concretize_user_specs_together():
e.remove("mpich")
e.add("mpich2")
exc_cls = spack.error.SpackError
if spack.config.get("config:concretizer") == "clingo":
exc_cls = spack.error.UnsatisfiableSpecError
# Concretizing without invalidating the concrete spec for mpileaks fails
with pytest.raises(spack.error.UnsatisfiableSpecError):
with pytest.raises(exc_cls):
e.concretize()
e.concretize(force=True)
@@ -2476,9 +2527,12 @@ def test_duplicate_packages_raise_when_concretizing_together():
e.add("mpileaks~opt")
e.add("mpich")
with pytest.raises(
spack.error.UnsatisfiableSpecError, match=r"You could consider setting `concretizer:unify`"
):
exc_cls, match = spack.error.SpackError, None
if spack.config.get("config:concretizer") == "clingo":
exc_cls = spack.error.UnsatisfiableSpecError
match = r"You could consider setting `concretizer:unify`"
with pytest.raises(exc_cls, match=match):
e.concretize()
@@ -3328,6 +3382,20 @@ def test_spack_package_ids_variable(tmpdir, mock_packages):
assert "post-install: {}".format(s.dag_hash()) in out
def test_depfile_empty_does_not_error(tmp_path):
# For empty environments Spack should create a depfile that does nothing
make = Executable("make")
makefile = str(tmp_path / "Makefile")
env("create", "test")
with ev.read("test"):
env("depfile", "-o", makefile)
make("-f", makefile)
assert make.returncode == 0
def test_unify_when_possible_works_around_conflicts():
e = ev.create("coconcretization")
e.unify = "when_possible"

View File

@@ -28,21 +28,12 @@ def _mock_search(path_hints=None):
return _factory
@pytest.fixture
def _platform_executables(monkeypatch):
def _win_exe_ext():
return ".bat"
monkeypatch.setattr(spack.util.path, "win_exe_ext", _win_exe_ext)
def define_plat_exe(exe):
if sys.platform == "win32":
exe += ".bat"
return exe
@pytest.mark.xfail(sys.platform == "win32", reason="https://github.com/spack/spack/pull/39850")
def test_find_external_single_package(mock_executable):
cmake_path = mock_executable("cmake", output="echo cmake version 1.foo")
search_dir = cmake_path.parent.parent
@@ -54,7 +45,7 @@ def test_find_external_single_package(mock_executable):
assert len(detected_spec) == 1 and detected_spec[0].spec == Spec("cmake@1.foo")
def test_find_external_two_instances_same_package(mock_executable, _platform_executables):
def test_find_external_two_instances_same_package(mock_executable):
# Each of these cmake instances is created in a different prefix
# On Windows, quoted strings are echoed with the quotes included;
# we need to avoid that for proper regex matching.
@@ -236,32 +227,7 @@ def test_list_detectable_packages(mutable_config, mutable_mock_repo):
assert external.returncode == 0
@pytest.mark.xfail(sys.platform == "win32", reason="https://github.com/spack/spack/pull/39850")
def test_packages_yaml_format(mock_executable, mutable_config, monkeypatch, _platform_executables):
# Prepare an environment to detect a fake gcc
gcc_exe = mock_executable("gcc", output="echo 4.2.1")
prefix = os.path.dirname(gcc_exe)
monkeypatch.setenv("PATH", prefix)
# Find the external spec
external("find", "gcc")
# Check entries in 'packages.yaml'
packages_yaml = spack.config.get("packages")
assert "gcc" in packages_yaml
assert "externals" in packages_yaml["gcc"]
externals = packages_yaml["gcc"]["externals"]
assert len(externals) == 1
external_gcc = externals[0]
assert external_gcc["spec"] == "gcc@4.2.1 languages=c"
assert external_gcc["prefix"] == os.path.dirname(prefix)
assert "extra_attributes" in external_gcc
extra_attributes = external_gcc["extra_attributes"]
assert "prefix" not in extra_attributes
assert extra_attributes["compilers"]["c"] == str(gcc_exe)
def test_overriding_prefix(mock_executable, mutable_config, monkeypatch, _platform_executables):
def test_overriding_prefix(mock_executable, mutable_config, monkeypatch):
gcc_exe = mock_executable("gcc", output="echo 4.2.1")
search_dir = gcc_exe.parent
@@ -282,10 +248,7 @@ def _determine_variants(cls, exes, version_str):
assert gcc.external_path == os.path.sep + os.path.join("opt", "gcc", "bin")
@pytest.mark.xfail(sys.platform == "win32", reason="https://github.com/spack/spack/pull/39850")
def test_new_entries_are_reported_correctly(
mock_executable, mutable_config, monkeypatch, _platform_executables
):
def test_new_entries_are_reported_correctly(mock_executable, mutable_config, monkeypatch):
# Prepare an environment to detect a fake gcc
gcc_exe = mock_executable("gcc", output="echo 4.2.1")
prefix = os.path.dirname(gcc_exe)

View File

@@ -472,6 +472,18 @@ def test_concretize_propagated_variant_is_not_passed_to_dependent(self):
assert spec.satisfies("^openblas+shared")
@pytest.mark.only_clingo("Original concretizer is allowed to forego variant propagation")
def test_concretize_propagate_multivalue_variant(self):
"""Test that multivalue variants are propagating the specified value(s)
to their dependecies. The dependencies should not have the default value"""
spec = Spec("multivalue-variant foo==baz,fee")
spec.concretize()
assert spec.satisfies("^a foo=baz,fee")
assert spec.satisfies("^b foo=baz,fee")
assert not spec.satisfies("^a foo=bar")
assert not spec.satisfies("^b foo=bar")
def test_no_matching_compiler_specs(self, mock_low_high_config):
# only relevant when not building compilers as needed
with spack.concretize.enable_compiler_existence_check():
@@ -1838,7 +1850,8 @@ def test_installed_specs_disregard_conflicts(self, mutable_database, monkeypatch
# If we concretize with --reuse it is not, since "mpich~debug" was already installed
with spack.config.override("concretizer:reuse", True):
s = Spec("mpich").concretized()
assert s.satisfies("~debug")
assert s.installed
assert s.satisfies("~debug"), s
@pytest.mark.regression("32471")
@pytest.mark.only_clingo("Use case not supported by the original concretizer")
@@ -2132,14 +2145,16 @@ def test_reuse_python_from_cli_and_extension_from_db(self, mutable_database):
@pytest.fixture()
def duplicates_test_repository():
builder_test_path = os.path.join(spack.paths.repos_path, "duplicates.test")
with spack.repo.use_repositories(builder_test_path) as mock_repo:
repository_path = os.path.join(spack.paths.repos_path, "duplicates.test")
with spack.repo.use_repositories(repository_path) as mock_repo:
yield mock_repo
@pytest.mark.usefixtures("mutable_config", "duplicates_test_repository")
@pytest.mark.only_clingo("Not supported by the original concretizer")
class TestConcretizeSeparately:
"""Collects test on separate concretization"""
@pytest.mark.parametrize("strategy", ["minimal", "full"])
def test_two_gmake(self, strategy):
"""Tests that we can concretize a spec with nodes using the same build
@@ -2320,3 +2335,53 @@ def test_adding_specs(self, input_specs, default_mock_concretization):
assert node == container[node.dag_hash()]
assert node.dag_hash() in container
assert node is not container[node.dag_hash()]
@pytest.fixture()
def edges_test_repository():
repository_path = os.path.join(spack.paths.repos_path, "edges.test")
with spack.repo.use_repositories(repository_path) as mock_repo:
yield mock_repo
@pytest.mark.usefixtures("mutable_config", "edges_test_repository")
@pytest.mark.only_clingo("Edge properties not supported by the original concretizer")
class TestConcretizeEdges:
"""Collects tests on edge properties"""
@pytest.mark.parametrize(
"spec_str,expected_satisfies,expected_not_satisfies",
[
("conditional-edge", ["^zlib@2.0"], ["^zlib-api"]),
("conditional-edge~foo", ["^zlib@2.0"], ["^zlib-api"]),
(
"conditional-edge+foo",
["^zlib@1.0", "^zlib-api", "^[virtuals=zlib-api] zlib"],
["^[virtuals=mpi] zlib"],
),
],
)
def test_condition_triggered_by_edge_property(
self, spec_str, expected_satisfies, expected_not_satisfies
):
"""Tests that we can enforce constraints based on edge attributes"""
s = Spec(spec_str).concretized()
for expected in expected_satisfies:
assert s.satisfies(expected), str(expected)
for not_expected in expected_not_satisfies:
assert not s.satisfies(not_expected), str(not_expected)
def test_virtuals_provided_together_but_only_one_required_in_dag(self):
"""Tests that we can use a provider that provides more than one virtual together,
and is providing only one, iff the others are not needed in the DAG.
o blas-only-client
| [virtual=blas]
o openblas (provides blas and lapack together)
"""
s = Spec("blas-only-client ^openblas").concretized()
assert s.satisfies("^[virtuals=blas] openblas")
assert not s.satisfies("^[virtuals=blas,lapack] openblas")

View File

@@ -469,16 +469,22 @@ def test_one_package_multiple_oneof_groups(concretize_scope, test_repo):
@pytest.mark.regression("34241")
def test_require_cflags(concretize_scope, test_repo):
def test_require_cflags(concretize_scope, mock_packages):
"""Ensures that flags can be required from configuration."""
conf_str = """\
packages:
y:
mpich2:
require: cflags="-g"
mpi:
require: mpich cflags="-O1"
"""
update_packages_config(conf_str)
spec = Spec("y").concretized()
assert spec.satisfies("cflags=-g")
spec_mpich2 = Spec("mpich2").concretized()
assert spec_mpich2.satisfies("cflags=-g")
spec_mpi = Spec("mpi").concretized()
assert spec_mpi.satisfies("mpich cflags=-O1")
def test_requirements_for_package_that_is_not_needed(concretize_scope, test_repo):

View File

@@ -31,6 +31,7 @@
import spack.binary_distribution
import spack.caches
import spack.cmd.buildcache
import spack.compilers
import spack.config
import spack.database
@@ -494,7 +495,7 @@ def mock_binary_index(monkeypatch, tmpdir_factory):
tmpdir = tmpdir_factory.mktemp("mock_binary_index")
index_path = tmpdir.join("binary_index").strpath
mock_index = spack.binary_distribution.BinaryCacheIndex(index_path)
monkeypatch.setattr(spack.binary_distribution, "binary_index", mock_index)
monkeypatch.setattr(spack.binary_distribution, "BINARY_INDEX", mock_index)
yield
@@ -1709,8 +1710,8 @@ def inode_cache():
@pytest.fixture(autouse=True)
def brand_new_binary_cache():
yield
spack.binary_distribution.binary_index = llnl.util.lang.Singleton(
spack.binary_distribution._binary_index
spack.binary_distribution.BINARY_INDEX = llnl.util.lang.Singleton(
spack.binary_distribution.BinaryCacheIndex
)
@@ -1948,3 +1949,21 @@ def pytest_runtest_setup(item):
not_on_windows_marker = item.get_closest_marker(name="not_on_windows")
if not_on_windows_marker and sys.platform == "win32":
pytest.skip(*not_on_windows_marker.args)
@pytest.fixture(scope="function")
def disable_parallel_buildcache_push(monkeypatch):
class MockPool:
def map(self, func, args):
return [func(a) for a in args]
def starmap(self, func, args):
return [func(*a) for a in args]
def __enter__(self):
return self
def __exit__(self, *args):
pass
monkeypatch.setattr(spack.cmd.buildcache, "_make_pool", MockPool)

View File

@@ -0,0 +1,11 @@
enable:
- lmod
lmod:
hide_implicits: true
core_compilers:
- 'clang@3.3'
hierarchy:
- mpi
all:
autoload: direct

View File

@@ -1,3 +1,5 @@
# DEPRECATED: remove this in ?
# See `hide_implicits.yaml` for the new syntax
enable:
- tcl
tcl:

View File

@@ -0,0 +1,6 @@
enable:
- tcl
tcl:
hide_implicits: true
all:
autoload: direct

View File

@@ -0,0 +1,30 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import collections
import spack.detection
import spack.spec
def test_detection_update_config(mutable_config):
# mock detected package
detected_packages = collections.defaultdict(list)
detected_packages["cmake"] = [
spack.detection.common.DetectedPackage(
spec=spack.spec.Spec("cmake@3.27.5"), prefix="/usr/bin"
)
]
# update config for new package
spack.detection.common.update_configuration(detected_packages)
# Check entries in 'packages.yaml'
packages_yaml = spack.config.get("packages")
assert "cmake" in packages_yaml
assert "externals" in packages_yaml["cmake"]
externals = packages_yaml["cmake"]["externals"]
assert len(externals) == 1
external_cmake = externals[0]
assert external_cmake["spec"] == "cmake@3.27.5"
assert external_cmake["prefix"] == "/usr/bin"

View File

@@ -690,3 +690,29 @@ def test_removing_spec_from_manifest_with_exact_duplicates(
assert "zlib" in manifest.read_text()
with ev.Environment(tmp_path) as env:
assert len(env.user_specs) == 1
@pytest.mark.regression("35298")
@pytest.mark.only_clingo("Propagation not supported in the original concretizer")
def test_variant_propagation_with_unify_false(tmp_path, mock_packages):
"""Spack distributes concretizations to different processes, when unify:false is selected and
the number of roots is 2 or more. When that happens, the specs to be concretized need to be
properly reconstructed on the worker process, if variant propagation was requested.
"""
manifest = tmp_path / "spack.yaml"
manifest.write_text(
"""
spack:
specs:
- parent-foo ++foo
- c
concretizer:
unify: false
"""
)
with ev.Environment(tmp_path) as env:
env.concretize()
root = env.matching_spec("parent-foo")
for node in root.traverse():
assert node.satisfies("+foo")

View File

@@ -14,6 +14,7 @@
import spack.package_base
import spack.schema.modules
import spack.spec
import spack.util.spack_yaml as syaml
from spack.modules.common import UpstreamModuleIndex
from spack.spec import Spec
@@ -190,11 +191,30 @@ def find_nothing(*args):
spack.package_base.PackageBase.uninstall_by_spec(spec)
@pytest.mark.parametrize(
"module_type, old_config,new_config",
[("tcl", "exclude_implicits.yaml", "hide_implicits.yaml")],
)
def test_exclude_include_update(module_type, old_config, new_config):
module_test_data_root = os.path.join(spack.paths.test_path, "data", "modules", module_type)
with open(os.path.join(module_test_data_root, old_config)) as f:
old_yaml = syaml.load(f)
with open(os.path.join(module_test_data_root, new_config)) as f:
new_yaml = syaml.load(f)
# ensure a file that needs updating is translated to the right thing.
assert spack.schema.modules.update_keys(old_yaml, spack.schema.modules.old_to_new_key)
assert new_yaml == old_yaml
# ensure a file that doesn't need updates doesn't get updated
original_new_yaml = new_yaml.copy()
assert not spack.schema.modules.update_keys(new_yaml, spack.schema.modules.old_to_new_key)
assert original_new_yaml == new_yaml
@pytest.mark.regression("37649")
def test_check_module_set_name(mutable_config):
"""Tests that modules set name are validated correctly and an error is reported if the
name we require does not exist or is reserved by the configuration."""
# Minimal modules.yaml config.
spack.config.set(
"modules",

View File

@@ -3,6 +3,7 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import pytest
@@ -433,3 +434,87 @@ def test_modules_no_arch(self, factory, module_configuration):
path = module.layout.filename
assert str(spec.os) not in path
def test_hide_implicits(self, module_configuration):
"""Tests the addition and removal of hide command in modulerc."""
module_configuration("hide_implicits")
spec = spack.spec.Spec("mpileaks@2.3").concretized()
# mpileaks is defined as implicit, thus hide command should appear in modulerc
writer = writer_cls(spec, "default", False)
writer.write()
assert os.path.exists(writer.layout.modulerc)
with open(writer.layout.modulerc) as f:
content = f.readlines()
content = "".join(content).split("\n")
hide_cmd = 'hide_version("%s")' % writer.layout.use_name
assert len([x for x in content if hide_cmd == x]) == 1
# mpileaks becomes explicit, thus modulerc is removed
writer = writer_cls(spec, "default", True)
writer.write(overwrite=True)
assert not os.path.exists(writer.layout.modulerc)
# mpileaks is defined as explicit, no modulerc file should exist
writer = writer_cls(spec, "default", True)
writer.write()
assert not os.path.exists(writer.layout.modulerc)
# explicit module is removed
writer.remove()
assert not os.path.exists(writer.layout.modulerc)
assert not os.path.exists(writer.layout.filename)
# implicit module is removed
writer = writer_cls(spec, "default", False)
writer.write(overwrite=True)
assert os.path.exists(writer.layout.filename)
assert os.path.exists(writer.layout.modulerc)
writer.remove()
assert not os.path.exists(writer.layout.modulerc)
assert not os.path.exists(writer.layout.filename)
# three versions of mpileaks are implicit
writer = writer_cls(spec, "default", False)
writer.write(overwrite=True)
spec_alt1 = spack.spec.Spec("mpileaks@2.2").concretized()
spec_alt2 = spack.spec.Spec("mpileaks@2.1").concretized()
writer_alt1 = writer_cls(spec_alt1, "default", False)
writer_alt1.write(overwrite=True)
writer_alt2 = writer_cls(spec_alt2, "default", False)
writer_alt2.write(overwrite=True)
assert os.path.exists(writer.layout.modulerc)
with open(writer.layout.modulerc) as f:
content = f.readlines()
content = "".join(content).split("\n")
hide_cmd = 'hide_version("%s")' % writer.layout.use_name
hide_cmd_alt1 = 'hide_version("%s")' % writer_alt1.layout.use_name
hide_cmd_alt2 = 'hide_version("%s")' % writer_alt2.layout.use_name
assert len([x for x in content if hide_cmd == x]) == 1
assert len([x for x in content if hide_cmd_alt1 == x]) == 1
assert len([x for x in content if hide_cmd_alt2 == x]) == 1
# one version is removed, a second becomes explicit
writer_alt1.remove()
writer_alt2 = writer_cls(spec_alt2, "default", True)
writer_alt2.write(overwrite=True)
assert os.path.exists(writer.layout.modulerc)
with open(writer.layout.modulerc) as f:
content = f.readlines()
content = "".join(content).split("\n")
assert len([x for x in content if hide_cmd == x]) == 1
assert len([x for x in content if hide_cmd_alt1 == x]) == 0
assert len([x for x in content if hide_cmd_alt2 == x]) == 0
# disable hide_implicits configuration option
module_configuration("autoload_direct")
writer = writer_cls(spec, "default")
writer.write(overwrite=True)
assert not os.path.exists(writer.layout.modulerc)
# reenable hide_implicits configuration option
module_configuration("hide_implicits")
writer = writer_cls(spec, "default")
writer.write(overwrite=True)
assert os.path.exists(writer.layout.modulerc)

View File

@@ -3,6 +3,7 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import pytest
@@ -132,9 +133,9 @@ def test_prepend_path_separator(self, modulefile_content, module_configuration):
module_configuration("module_path_separator")
content = modulefile_content("module-path-separator")
assert len([x for x in content if "append-path --delim {:} COLON {foo}" in x]) == 1
assert len([x for x in content if "prepend-path --delim {:} COLON {foo}" in x]) == 1
assert len([x for x in content if "remove-path --delim {:} COLON {foo}" in x]) == 1
assert len([x for x in content if "append-path COLON {foo}" in x]) == 1
assert len([x for x in content if "prepend-path COLON {foo}" in x]) == 1
assert len([x for x in content if "remove-path COLON {foo}" in x]) == 1
assert len([x for x in content if "append-path --delim {;} SEMICOLON {bar}" in x]) == 1
assert len([x for x in content if "prepend-path --delim {;} SEMICOLON {bar}" in x]) == 1
assert len([x for x in content if "remove-path --delim {;} SEMICOLON {bar}" in x]) == 1
@@ -149,37 +150,23 @@ def test_manpath_setup(self, modulefile_content, module_configuration):
# no manpath set by module
content = modulefile_content("mpileaks")
assert len([x for x in content if "append-path --delim {:} MANPATH {}" in x]) == 0
assert len([x for x in content if "append-path MANPATH {}" in x]) == 0
# manpath set by module with prepend-path
content = modulefile_content("module-manpath-prepend")
assert (
len([x for x in content if "prepend-path --delim {:} MANPATH {/path/to/man}" in x])
== 1
)
assert (
len(
[
x
for x in content
if "prepend-path --delim {:} MANPATH {/path/to/share/man}" in x
]
)
== 1
)
assert len([x for x in content if "append-path --delim {:} MANPATH {}" in x]) == 1
assert len([x for x in content if "prepend-path MANPATH {/path/to/man}" in x]) == 1
assert len([x for x in content if "prepend-path MANPATH {/path/to/share/man}" in x]) == 1
assert len([x for x in content if "append-path MANPATH {}" in x]) == 1
# manpath set by module with append-path
content = modulefile_content("module-manpath-append")
assert (
len([x for x in content if "append-path --delim {:} MANPATH {/path/to/man}" in x]) == 1
)
assert len([x for x in content if "append-path --delim {:} MANPATH {}" in x]) == 1
assert len([x for x in content if "append-path MANPATH {/path/to/man}" in x]) == 1
assert len([x for x in content if "append-path MANPATH {}" in x]) == 1
# manpath set by module with setenv
content = modulefile_content("module-manpath-setenv")
assert len([x for x in content if "setenv MANPATH {/path/to/man}" in x]) == 1
assert len([x for x in content if "append-path --delim {:} MANPATH {}" in x]) == 0
assert len([x for x in content if "append-path MANPATH {}" in x]) == 0
@pytest.mark.regression("29578")
def test_setenv_raw_value(self, modulefile_content, module_configuration):
@@ -438,38 +425,40 @@ def test_extend_context(self, modulefile_content, module_configuration):
@pytest.mark.regression("4400")
@pytest.mark.db
def test_exclude_implicits(self, module_configuration, database):
module_configuration("exclude_implicits")
@pytest.mark.parametrize("config_name", ["hide_implicits", "exclude_implicits"])
def test_hide_implicits_no_arg(self, module_configuration, database, config_name):
module_configuration(config_name)
# mpileaks has been installed explicitly when setting up
# the test database
mpileaks_specs = database.query("mpileaks")
for item in mpileaks_specs:
writer = writer_cls(item, "default")
assert not writer.conf.excluded
assert not writer.conf.hidden
# callpath is a dependency of mpileaks, and has been pulled
# in implicitly
callpath_specs = database.query("callpath")
for item in callpath_specs:
writer = writer_cls(item, "default")
assert writer.conf.excluded
assert writer.conf.hidden
@pytest.mark.regression("12105")
def test_exclude_implicits_with_arg(self, module_configuration):
module_configuration("exclude_implicits")
@pytest.mark.parametrize("config_name", ["hide_implicits", "exclude_implicits"])
def test_hide_implicits_with_arg(self, module_configuration, config_name):
module_configuration(config_name)
# mpileaks is defined as explicit with explicit argument set on writer
mpileaks_spec = spack.spec.Spec("mpileaks")
mpileaks_spec.concretize()
writer = writer_cls(mpileaks_spec, "default", True)
assert not writer.conf.excluded
assert not writer.conf.hidden
# callpath is defined as implicit with explicit argument set on writer
callpath_spec = spack.spec.Spec("callpath")
callpath_spec.concretize()
writer = writer_cls(callpath_spec, "default", False)
assert writer.conf.excluded
assert writer.conf.hidden
@pytest.mark.regression("9624")
@pytest.mark.db
@@ -498,3 +487,87 @@ def test_modules_no_arch(self, factory, module_configuration):
path = module.layout.filename
assert str(spec.os) not in path
def test_hide_implicits(self, module_configuration):
"""Tests the addition and removal of hide command in modulerc."""
module_configuration("hide_implicits")
spec = spack.spec.Spec("mpileaks@2.3").concretized()
# mpileaks is defined as implicit, thus hide command should appear in modulerc
writer = writer_cls(spec, "default", False)
writer.write()
assert os.path.exists(writer.layout.modulerc)
with open(writer.layout.modulerc) as f:
content = f.readlines()
content = "".join(content).split("\n")
hide_cmd = "module-hide --soft --hidden-loaded %s" % writer.layout.use_name
assert len([x for x in content if hide_cmd == x]) == 1
# mpileaks becomes explicit, thus modulerc is removed
writer = writer_cls(spec, "default", True)
writer.write(overwrite=True)
assert not os.path.exists(writer.layout.modulerc)
# mpileaks is defined as explicit, no modulerc file should exist
writer = writer_cls(spec, "default", True)
writer.write()
assert not os.path.exists(writer.layout.modulerc)
# explicit module is removed
writer.remove()
assert not os.path.exists(writer.layout.modulerc)
assert not os.path.exists(writer.layout.filename)
# implicit module is removed
writer = writer_cls(spec, "default", False)
writer.write(overwrite=True)
assert os.path.exists(writer.layout.filename)
assert os.path.exists(writer.layout.modulerc)
writer.remove()
assert not os.path.exists(writer.layout.modulerc)
assert not os.path.exists(writer.layout.filename)
# three versions of mpileaks are implicit
writer = writer_cls(spec, "default", False)
writer.write(overwrite=True)
spec_alt1 = spack.spec.Spec("mpileaks@2.2").concretized()
spec_alt2 = spack.spec.Spec("mpileaks@2.1").concretized()
writer_alt1 = writer_cls(spec_alt1, "default", False)
writer_alt1.write(overwrite=True)
writer_alt2 = writer_cls(spec_alt2, "default", False)
writer_alt2.write(overwrite=True)
assert os.path.exists(writer.layout.modulerc)
with open(writer.layout.modulerc) as f:
content = f.readlines()
content = "".join(content).split("\n")
hide_cmd = "module-hide --soft --hidden-loaded %s" % writer.layout.use_name
hide_cmd_alt1 = "module-hide --soft --hidden-loaded %s" % writer_alt1.layout.use_name
hide_cmd_alt2 = "module-hide --soft --hidden-loaded %s" % writer_alt2.layout.use_name
assert len([x for x in content if hide_cmd == x]) == 1
assert len([x for x in content if hide_cmd_alt1 == x]) == 1
assert len([x for x in content if hide_cmd_alt2 == x]) == 1
# one version is removed, a second becomes explicit
writer_alt1.remove()
writer_alt2 = writer_cls(spec_alt2, "default", True)
writer_alt2.write(overwrite=True)
assert os.path.exists(writer.layout.modulerc)
with open(writer.layout.modulerc) as f:
content = f.readlines()
content = "".join(content).split("\n")
assert len([x for x in content if hide_cmd == x]) == 1
assert len([x for x in content if hide_cmd_alt1 == x]) == 0
assert len([x for x in content if hide_cmd_alt2 == x]) == 0
# disable hide_implicits configuration option
module_configuration("autoload_direct")
writer = writer_cls(spec, "default")
writer.write(overwrite=True)
assert not os.path.exists(writer.layout.modulerc)
# reenable hide_implicits configuration option
module_configuration("hide_implicits")
writer = writer_cls(spec, "default")
writer.write(overwrite=True)
assert os.path.exists(writer.layout.modulerc)


@@ -0,0 +1,105 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import re
import pytest
import spack.spec
from spack.oci.image import Digest, ImageReference, default_tag, tag
@pytest.mark.parametrize(
"image_ref, expected",
[
(
f"example.com:1234/a/b/c:tag@sha256:{'a'*64}",
("example.com:1234", "a/b/c", "tag", Digest.from_sha256("a" * 64)),
),
("example.com:1234/a/b/c:tag", ("example.com:1234", "a/b/c", "tag", None)),
("example.com:1234/a/b/c", ("example.com:1234", "a/b/c", "latest", None)),
(
f"example.com:1234/a/b/c@sha256:{'a'*64}",
("example.com:1234", "a/b/c", "latest", Digest.from_sha256("a" * 64)),
),
# ipv4
("1.2.3.4:1234/a/b/c:tag", ("1.2.3.4:1234", "a/b/c", "tag", None)),
# ipv6
("[2001:db8::1]:1234/a/b/c:tag", ("[2001:db8::1]:1234", "a/b/c", "tag", None)),
# Follow docker rules for parsing
("ubuntu:22.04", ("index.docker.io", "library/ubuntu", "22.04", None)),
("myname/myimage:abc", ("index.docker.io", "myname/myimage", "abc", None)),
("myname:1234/myimage:abc", ("myname:1234", "myimage", "abc", None)),
("localhost/myimage:abc", ("localhost", "myimage", "abc", None)),
("localhost:1234/myimage:abc", ("localhost:1234", "myimage", "abc", None)),
(
"example.com/UPPERCASE/lowercase:AbC",
("example.com", "uppercase/lowercase", "AbC", None),
),
],
)
def test_name_parsing(image_ref, expected):
x = ImageReference.from_string(image_ref)
assert (x.domain, x.name, x.tag, x.digest) == expected
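# The docker-style defaulting exercised above (no registry domain means
# index.docker.io, and bare names live under the library/ namespace) can be
# summarized with this illustrative helper; it is a sketch of the rules the
# table encodes, not Spack's ImageReference implementation, and it deliberately
# ignores tags and digests.
def _split_domain_and_name(string: str):
    parts = string.split("/", 1)
    # A first component without "." or ":" that isn't "localhost" is not a
    # registry domain; the whole string is then a path on Docker Hub.
    if len(parts) == 1 or (
        "." not in parts[0] and ":" not in parts[0] and parts[0] != "localhost"
    ):
        name = string if "/" in string else f"library/{string}"
        return "index.docker.io", name
    return parts[0], parts[1]

assert _split_domain_and_name("ubuntu") == ("index.docker.io", "library/ubuntu")
assert _split_domain_and_name("myname/myimage") == ("index.docker.io", "myname/myimage")
assert _split_domain_and_name("localhost/myimage") == ("localhost", "myimage")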
@pytest.mark.parametrize(
"image_ref",
[
# wrong order of tag and sha
f"example.com:1234/a/b/c@sha256:{'a'*64}:tag",
# double tag
"example.com:1234/a/b/c:tag:tag",
# empty tag
"example.com:1234/a/b/c:",
# empty digest
"example.com:1234/a/b/c@sha256:",
# unsupported digest algorithm
f"example.com:1234/a/b/c@sha512:{'a'*128}",
# invalid digest length
f"example.com:1234/a/b/c@sha256:{'a'*63}",
# whitespace
"example.com:1234/a/b/c :tag",
"example.com:1234/a/b/c: tag",
"example.com:1234/a/b/c:tag ",
" example.com:1234/a/b/c:tag",
# broken ipv4
"1.2..3:1234/a/b/c:tag",
],
)
def test_parsing_failure(image_ref):
with pytest.raises(ValueError):
ImageReference.from_string(image_ref)
def test_digest():
valid_digest = "1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef"
# Test string roundtrip
assert str(Digest.from_string(f"sha256:{valid_digest}")) == f"sha256:{valid_digest}"
# Invalid digest length
with pytest.raises(ValueError):
Digest.from_string("sha256:abcdef")
# Missing algorithm
with pytest.raises(ValueError):
Digest.from_string(valid_digest)
@pytest.mark.parametrize(
"spec",
[
# Standard case
"short-name@=1.2.3",
# Unsupported characters in git version
f"git-version@{1:040x}=develop",
# Too long of a name
f"{'too-long':x<256}@=1.2.3",
],
)
def test_default_tag(spec: str):
"""Make sure that computed image tags are valid."""
assert re.fullmatch(tag, default_tag(spack.spec.Spec(spec)))
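# For reference, the OCI distribution spec constrains tags to at most 128
# characters matching the pattern below; the `tag` regex imported above is
# Spack's encoding of the same rule (this standalone check is illustrative).
OCI_TAG_PATTERN = r"[a-zA-Z0-9_][a-zA-Z0-9._-]{0,127}"
assert re.fullmatch(OCI_TAG_PATTERN, "v1.2.3_build-42")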


@@ -0,0 +1,148 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
# These are slow integration tests that do concretization, install, tarballing
# and compression. They still use an in-memory OCI registry.
import hashlib
import json
import os
from contextlib import contextmanager
import spack.oci.opener
from spack.binary_distribution import gzip_compressed_tarfile
from spack.main import SpackCommand
from spack.oci.image import Digest, ImageReference, default_config, default_manifest
from spack.oci.oci import blob_exists, get_manifest_and_config, upload_blob, upload_manifest
from spack.test.oci.mock_registry import DummyServer, InMemoryOCIRegistry, create_opener
buildcache = SpackCommand("buildcache")
mirror = SpackCommand("mirror")
@contextmanager
def oci_servers(*servers: DummyServer):
    # Route spack.oci.opener.urlopen to the mock servers; restore it on exit,
    # even if the body raises.
    old_opener = spack.oci.opener.urlopen
    spack.oci.opener.urlopen = create_opener(*servers).open
    try:
        yield
    finally:
        spack.oci.opener.urlopen = old_opener
def test_buildcache_push_command(mutable_database, disable_parallel_buildcache_push):
with oci_servers(InMemoryOCIRegistry("example.com")):
mirror("add", "oci-test", "oci://example.com/image")
# Push the package(s) to the OCI registry
buildcache("push", "--update-index", "oci-test", "mpileaks^mpich")
# Remove mpileaks from the database
matches = mutable_database.query_local("mpileaks^mpich")
assert len(matches) == 1
spec = matches[0]
spec.package.do_uninstall()
# Reinstall mpileaks from the OCI registry
buildcache("install", "--unsigned", "mpileaks^mpich")
# Now it should be installed again
assert spec.installed
# And let's check that the bin/mpileaks executable is there
assert os.path.exists(os.path.join(spec.prefix, "bin", "mpileaks"))
def test_buildcache_push_with_base_image_command(
mutable_database, tmpdir, disable_parallel_buildcache_push
):
"""Test that we can push a package with a base image to an OCI registry.
This test is a bit involved, because we have to create a small base image."""
registry_src = InMemoryOCIRegistry("src.example.com")
registry_dst = InMemoryOCIRegistry("dst.example.com")
base_image = ImageReference.from_string("src.example.com/my-base-image:latest")
with oci_servers(registry_src, registry_dst):
mirror("add", "oci-test", "oci://dst.example.com/image")
# TODO: simplify creation of images...
# We create a rootfs.tar.gz, a config file and a manifest file,
# and upload those.
config, manifest = default_config(architecture="amd64", os="linux"), default_manifest()
# Create a small rootfs
rootfs = tmpdir.join("rootfs")
rootfs.ensure(dir=True)
rootfs.join("bin").ensure(dir=True)
rootfs.join("bin", "sh").ensure(file=True)
# Create a tarball of it.
tarball = tmpdir.join("base.tar.gz")
with gzip_compressed_tarfile(tarball) as (tar, tar_gz_checksum, tar_checksum):
tar.add(rootfs, arcname=".")
tar_gz_digest = Digest.from_sha256(tar_gz_checksum.hexdigest())
tar_digest = Digest.from_sha256(tar_checksum.hexdigest())
# Save the config file
config["rootfs"]["diff_ids"] = [str(tar_digest)]
config_file = tmpdir.join("config.json")
with open(config_file, "w") as f:
f.write(json.dumps(config))
config_digest = Digest.from_sha256(
hashlib.sha256(open(config_file, "rb").read()).hexdigest()
)
# Register the layer in the manifest
manifest["layers"].append(
{
"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
"digest": str(tar_gz_digest),
"size": tarball.size(),
}
)
manifest["config"]["digest"] = str(config_digest)
manifest["config"]["size"] = config_file.size()
# Upload the layer and config file
upload_blob(base_image, tarball, tar_gz_digest)
upload_blob(base_image, config_file, config_digest)
# Upload the manifest
upload_manifest(base_image, manifest)
# END TODO
# Finally... use it as a base image
buildcache("push", "--base-image", str(base_image), "oci-test", "mpileaks^mpich")
# Figure out what tag was produced
tag = next(tag for _, tag in registry_dst.manifests.keys() if tag.startswith("mpileaks-"))
assert tag is not None
# Fetch the manifest and config
dst_image = ImageReference.from_string(f"dst.example.com/image:{tag}")
retrieved_manifest, retrieved_config = get_manifest_and_config(dst_image)
# Check that the base image layer is first.
assert retrieved_manifest["layers"][0]["digest"] == str(tar_gz_digest)
assert retrieved_config["rootfs"]["diff_ids"][0] == str(tar_digest)
# And also check that we have layers for each link-run dependency
matches = mutable_database.query_local("mpileaks^mpich")
assert len(matches) == 1
spec = matches[0]
num_runtime_deps = len(list(spec.traverse(root=True, deptype=("link", "run"))))
# One base layer + num_runtime_deps
assert len(retrieved_manifest["layers"]) == 1 + num_runtime_deps
# And verify that all layers including the base layer are present
for layer in retrieved_manifest["layers"]:
assert blob_exists(dst_image, digest=Digest.from_string(layer["digest"]))


@@ -0,0 +1,410 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import base64
import email.message
import hashlib
import io
import json
import re
import urllib.error
import urllib.parse
import urllib.request
import uuid
from typing import Callable, Dict, List, Optional, Pattern, Tuple
from urllib.request import Request
from spack.oci.image import Digest
from spack.oci.opener import OCIAuthHandler
class MockHTTPResponse(io.IOBase):
"""This is a mock HTTP response, which implements part of http.client.HTTPResponse"""
def __init__(self, status, reason, headers=None, body=None):
self.msg = None
self.version = 11
self.url = None
self.headers = email.message.EmailMessage()
self.status = status
self.code = status
self.reason = reason
self.debuglevel = 0
self._body = body
if headers is not None:
for key, value in headers.items():
self.headers[key] = value
@classmethod
def with_json(cls, status, reason, headers=None, body=None):
"""Create a mock HTTP response with JSON string as body"""
body = io.BytesIO(json.dumps(body).encode("utf-8"))
return cls(status, reason, headers, body)
def read(self, *args, **kwargs):
return self._body.read(*args, **kwargs)
def getheader(self, name, default=None):
    return self.headers.get(name, default)
def getheaders(self):
return self.headers.items()
def fileno(self):
return 0
def getcode(self):
return self.status
def info(self):
return self.headers
class MiddlewareError(Exception):
"""Thrown in a handler to return a response early."""
def __init__(self, response: MockHTTPResponse):
self.response = response
class Router:
"""This class is a small router for requests to the OCI registry.
It is used to dispatch requests to a handler, and middleware can be
used to transform requests, as well as return responses early
(e.g. for authentication)."""
def __init__(self) -> None:
self.routes: List[Tuple[str, Pattern, Callable]] = []
self.middleware: List[Callable[[Request], Request]] = []
def handle(self, req: Request) -> MockHTTPResponse:
"""Dispatch a request to a handler."""
result = urllib.parse.urlparse(req.full_url)
# Apply middleware
try:
for handler in self.middleware:
req = handler(req)
except MiddlewareError as e:
return e.response
for method, path_regex, handler in self.routes:
if method != req.get_method():
continue
match = re.fullmatch(path_regex, result.path)
if not match:
continue
return handler(req, **match.groupdict())
return MockHTTPResponse(404, "Not found")
def register(self, method, path: str, handler: Callable):
self.routes.append((method, re.compile(path), handler))
def add_middleware(self, handler: Callable[[Request], Request]):
self.middleware.append(handler)
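# A minimal usage sketch of the Router above; the route, handler, and URL are
# invented for illustration and are not part of the registry API.
def _example_router_usage():
    def status_handler(req: Request, name: str):
        return MockHTTPResponse.with_json(200, "OK", body={"name": name})

    router = Router()
    router.register("GET", r"/v2/(?P<name>.+)/status", status_handler)
    response = router.handle(Request("https://registry.example.com/v2/foo/status"))
    assert response.status == 200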
class DummyServer:
def __init__(self, domain: str) -> None:
# The domain of the server, e.g. "registry.example.com"
self.domain = domain
# List of (method, url) tuples
self.requests: List[Tuple[str, str]] = []
# Dispatches requests to handlers
self.router = Router()
# Always install a request logger
self.router.add_middleware(self.log_request)
def handle(self, req: Request) -> MockHTTPResponse:
return self.router.handle(req)
def log_request(self, req: Request):
path = urllib.parse.urlparse(req.full_url).path
self.requests.append((req.get_method(), path))
return req
def clear_log(self):
self.requests = []
class InMemoryOCIRegistry(DummyServer):
"""This implements the basic OCI registry API, but in memory.
It supports two types of blob uploads:
1. POST + PUT: the client first starts a session with POST, then does a large PUT request
2. POST: the client does a single POST request with the whole blob
Option 2 is not supported by all registries, so it can be disabled with
allow_single_post=False.
A third option is chunked upload, which is not implemented here: it is
typically a major hit to upload speed, so Spack does not use it."""
def __init__(self, domain: str, allow_single_post: bool = True) -> None:
super().__init__(domain)
self.router.register("GET", r"/v2/", self.index)
self.router.register("HEAD", r"/v2/(?P<name>.+)/blobs/(?P<digest>.+)", self.head_blob)
self.router.register("POST", r"/v2/(?P<name>.+)/blobs/uploads/", self.start_session)
self.router.register("PUT", r"/upload", self.put_session)
self.router.register("PUT", r"/v2/(?P<name>.+)/manifests/(?P<ref>.+)", self.put_manifest)
self.router.register("GET", r"/v2/(?P<name>.+)/manifests/(?P<ref>.+)", self.get_manifest)
self.router.register("GET", r"/v2/(?P<name>.+)/blobs/(?P<digest>.+)", self.get_blob)
self.router.register("GET", r"/v2/(?P<name>.+)/tags/list", self.list_tags)
# If True, allow single POST upload, not all registries support this
self.allow_single_post = allow_single_post
# Used for POST + PUT upload. This is a map from session ID to image name
self.sessions: Dict[str, str] = {}
# Map from sha256:... digest to blob data known to the registry
self.blobs: Dict[str, bytes] = {}
# Map from (name, tag) to manifest
self.manifests: Dict[Tuple[str, str], Dict] = {}
def index(self, req: Request):
return MockHTTPResponse.with_json(200, "OK", body={})
def head_blob(self, req: Request, name: str, digest: str):
if digest in self.blobs:
return MockHTTPResponse(200, "OK", headers={"Content-Length": "1234"})
return MockHTTPResponse(404, "Not found")
def get_blob(self, req: Request, name: str, digest: str):
if digest in self.blobs:
return MockHTTPResponse(200, "OK", body=io.BytesIO(self.blobs[digest]))
return MockHTTPResponse(404, "Not found")
def start_session(self, req: Request, name: str):
id = str(uuid.uuid4())
self.sessions[id] = name
# Check if digest is present (single monolithic upload)
result = urllib.parse.urlparse(req.full_url)
query = urllib.parse.parse_qs(result.query)
if self.allow_single_post and "digest" in query:
return self.handle_upload(
req, name=name, digest=Digest.from_string(query["digest"][0])
)
return MockHTTPResponse(202, "Accepted", headers={"Location": f"/upload?uuid={id}"})
def put_session(self, req: Request):
# Do the upload.
result = urllib.parse.urlparse(req.full_url)
query = urllib.parse.parse_qs(result.query)
# uuid param should be preserved, and digest should be present
assert "uuid" in query and len(query["uuid"]) == 1
assert "digest" in query and len(query["digest"]) == 1
id = query["uuid"][0]
assert id in self.sessions
name, digest = self.sessions[id], Digest.from_string(query["digest"][0])
response = self.handle_upload(req, name=name, digest=digest)
# End the session
del self.sessions[id]
return response
def put_manifest(self, req: Request, name: str, ref: str):
# In requests, Python runs header.capitalize().
content_type = req.get_header("Content-type")
assert content_type in (
"application/vnd.oci.image.manifest.v1+json",
"application/vnd.oci.image.index.v1+json",
)
index_or_manifest = json.loads(self._require_data(req))
# Verify that we have all blobs (layers for manifest, manifests for index)
if content_type == "application/vnd.oci.image.manifest.v1+json":
for layer in index_or_manifest["layers"]:
assert layer["digest"] in self.blobs, "Missing blob while uploading manifest"
else:
for manifest in index_or_manifest["manifests"]:
assert (
name,
manifest["digest"],
) in self.manifests, "Missing manifest while uploading index"
self.manifests[(name, ref)] = index_or_manifest
return MockHTTPResponse(
201, "Created", headers={"Location": f"/v2/{name}/manifests/{ref}"}
)
def get_manifest(self, req: Request, name: str, ref: str):
if (name, ref) not in self.manifests:
return MockHTTPResponse(404, "Not found")
manifest_or_index = self.manifests[(name, ref)]
return MockHTTPResponse.with_json(
200,
"OK",
headers={"Content-type": manifest_or_index["mediaType"]},
body=manifest_or_index,
)
def _require_data(self, req: Request) -> bytes:
"""Extract request.data, it's type remains a mystery"""
assert req.data is not None
if hasattr(req.data, "read"):
return req.data.read()
elif isinstance(req.data, bytes):
return req.data
raise ValueError("req.data should be bytes or have a read() method")
def handle_upload(self, req: Request, name: str, digest: Digest):
"""Verify the digest, save the blob, return created status"""
data = self._require_data(req)
assert hashlib.sha256(data).hexdigest() == digest.digest
self.blobs[str(digest)] = data
return MockHTTPResponse(201, "Created", headers={"Location": f"/v2/{name}/blobs/{digest}"})
def list_tags(self, req: Request, name: str):
# List all tags, exclude digests.
tags = [_tag for _name, _tag in self.manifests.keys() if _name == name and ":" not in _tag]
tags.sort()
return MockHTTPResponse.with_json(200, "OK", body={"tags": tags})
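# A sketch of the two upload flows against this mock (illustrative only; real
# clients go through urllib and the handlers below rather than calling handle()):
def _example_blob_uploads():
    registry = InMemoryOCIRegistry("example.com")
    data = b"hello"
    digest = f"sha256:{hashlib.sha256(data).hexdigest()}"
    # 1. Single monolithic POST: blob body and digest in one request.
    req = Request(
        f"https://example.com/v2/img/blobs/uploads/?digest={digest}", data=data, method="POST"
    )
    assert registry.handle(req).status == 201
    # 2. POST + PUT: open a session, then PUT the blob to the returned location.
    start = Request("https://example.com/v2/img/blobs/uploads/", method="POST")
    location = registry.handle(start).headers["Location"]  # "/upload?uuid=..."
    put = Request(f"https://example.com{location}&digest={digest}", data=data, method="PUT")
    assert registry.handle(put).status == 201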
class DummyServerUrllibHandler(urllib.request.BaseHandler):
"""Glue between urllib and DummyServer, routing requests to
the correct mock server for a given domain."""
def __init__(self) -> None:
self.servers: Dict[str, DummyServer] = {}
def add_server(self, domain: str, api: DummyServer):
self.servers[domain] = api
return self
def https_open(self, req: Request):
domain = urllib.parse.urlparse(req.full_url).netloc
if domain not in self.servers:
return MockHTTPResponse(404, "Not found")
return self.servers[domain].handle(req)
class InMemoryOCIRegistryWithAuth(InMemoryOCIRegistry):
"""This is another in-memory OCI registry, but it requires authentication."""
def __init__(
self, domain, token: Optional[str], realm: str, allow_single_post: bool = True
) -> None:
super().__init__(domain, allow_single_post)
self.token = token # token to accept
self.realm = realm # url to the authorization server
self.router.add_middleware(self.authenticate)
def authenticate(self, req: Request):
# Any request needs an Authorization header
authorization = req.get_header("Authorization")
if authorization is None:
raise MiddlewareError(self.unauthorized())
# Ensure that the token is correct
assert authorization.startswith("Bearer ")
token = authorization[7:]
if token != self.token:
raise MiddlewareError(self.unauthorized())
return req
def unauthorized(self):
return MockHTTPResponse(
401,
"Unauthorized",
{
"www-authenticate": f'Bearer realm="{self.realm}",'
f'service="{self.domain}",'
'scope="repository:spack-registry:pull,push"'
},
)
class MockBearerTokenServer(DummyServer):
"""Simulates a basic server that hands out bearer tokens
at the /login endpoint for the following services:
public.example.com, which doesn't require Basic Auth
private.example.com, which requires Basic Auth, with user:pass
"""
def __init__(self, domain: str) -> None:
super().__init__(domain)
self.router.register("GET", "/login", self.login)
def login(self, req: Request):
url = urllib.parse.urlparse(req.full_url)
query_params = urllib.parse.parse_qs(url.query)
# Verify query params, from the www-authenticate header
assert query_params["client_id"] == ["spack"]
assert len(query_params["service"]) == 1
assert query_params["scope"] == ["repository:spack-registry:pull,push"]
service = query_params["service"][0]
if service == "public.example.com":
return self.public_auth(req)
elif service == "private.example.com":
return self.private_auth(req)
return MockHTTPResponse(404, "Not found")
def public_auth(self, req: Request):
# No need to login with username and password for the public registry
assert req.get_header("Authorization") is None
return MockHTTPResponse.with_json(200, "OK", body={"token": "public_token"})
def private_auth(self, req: Request):
# For the private registry we need to login with username and password
auth_value = req.get_header("Authorization")
if (
auth_value is None
or not auth_value.startswith("Basic ")
or base64.b64decode(auth_value[6:]) != b"user:pass"
):
return MockHTTPResponse(401, "Unauthorized")
return MockHTTPResponse.with_json(200, "OK", body={"token": "private_token"})
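# For reference, the Basic credentials the private service above expects follow
# standard HTTP Basic auth (RFC 7617): the header value is base64 of "user:pass".
def _example_basic_auth_header():
    encoded = base64.b64encode(b"user:pass").decode("ascii")
    return {"Authorization": f"Basic {encoded}"}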
def create_opener(*servers: DummyServer, credentials_provider=None):
"""Creates a mock opener, that can be used to fake requests to a list
of servers."""
opener = urllib.request.OpenerDirector()
handler = DummyServerUrllibHandler()
for server in servers:
handler.add_server(server.domain, server)
opener.add_handler(handler)
opener.add_handler(urllib.request.HTTPDefaultErrorHandler())
opener.add_handler(urllib.request.HTTPErrorProcessor())
if credentials_provider is not None:
opener.add_handler(OCIAuthHandler(credentials_provider))
return opener


@@ -0,0 +1,672 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import hashlib
import json
import urllib.error
import urllib.parse
import urllib.request
from urllib.request import Request
import pytest
import spack.mirror
from spack.oci.image import Digest, ImageReference, default_config, default_manifest
from spack.oci.oci import (
copy_missing_layers,
get_manifest_and_config,
image_from_mirror,
upload_blob,
upload_manifest,
)
from spack.oci.opener import (
Challenge,
RealmServiceScope,
UsernamePassword,
credentials_from_mirrors,
default_retry,
get_bearer_challenge,
parse_www_authenticate,
)
from spack.test.oci.mock_registry import (
DummyServer,
DummyServerUrllibHandler,
InMemoryOCIRegistry,
InMemoryOCIRegistryWithAuth,
MiddlewareError,
MockBearerTokenServer,
MockHTTPResponse,
create_opener,
)
def test_parse_www_authenticate():
"""Test parsing of valid WWW-Authenticate header, check whether it's
decomposed into a list of challenges with correct scheme and parameters
according to RFC 7235 section 4.1"""
www_authenticate = 'Bearer realm="https://spack.io/authenticate",service="spack-registry",scope="repository:spack-registry:pull,push"'
assert parse_www_authenticate(www_authenticate) == [
Challenge(
"Bearer",
[
("realm", "https://spack.io/authenticate"),
("service", "spack-registry"),
("scope", "repository:spack-registry:pull,push"),
],
)
]
assert parse_www_authenticate("Bearer") == [Challenge("Bearer")]
assert parse_www_authenticate("MethodA, MethodB,MethodC") == [
Challenge("MethodA"),
Challenge("MethodB"),
Challenge("MethodC"),
]
assert parse_www_authenticate(
'Digest realm="Digest Realm", nonce="1234567890", algorithm=MD5, qop="auth"'
) == [
Challenge(
"Digest",
[
("realm", "Digest Realm"),
("nonce", "1234567890"),
("algorithm", "MD5"),
("qop", "auth"),
],
)
]
assert parse_www_authenticate(
r'Newauth realm="apps", type=1, title="Login to \"apps\"", Basic realm="simple"'
) == [
Challenge("Newauth", [("realm", "apps"), ("type", "1"), ("title", 'Login to "apps"')]),
Challenge("Basic", [("realm", "simple")]),
]
@pytest.mark.parametrize(
"invalid_str",
[
# Not comma separated
"SchemeA SchemeB SchemeC",
# Unexpected eof
"SchemeA, SchemeB, SchemeC, ",
# Invalid auth param or scheme
r"Scheme x=y, ",
# Unexpected eof
"Scheme key=",
# Invalid token
r'"Bearer"',
# Invalid token
r'Scheme"xyz"',
# No auth param
r"Scheme ",
],
)
def test_invalid_www_authenticate(invalid_str):
with pytest.raises(ValueError):
parse_www_authenticate(invalid_str)
def test_get_bearer_challenge():
"""Test extracting Bearer challenge from a list of challenges"""
# Only an incomplete bearer challenge, missing service and scope, not usable.
assert (
get_bearer_challenge(
[
Challenge("Bearer", [("realm", "https://spack.io/authenticate")]),
Challenge("Basic", [("realm", "simple")]),
Challenge(
"Digest",
[
("realm", "Digest Realm"),
("nonce", "1234567890"),
("algorithm", "MD5"),
("qop", "auth"),
],
),
]
)
is None
)
# Multiple challenges, should pick the bearer one.
assert get_bearer_challenge(
[
Challenge(
"Dummy",
[("realm", "https://example.com/"), ("service", "service"), ("scope", "scope")],
),
Challenge(
"Bearer",
[
("realm", "https://spack.io/authenticate"),
("service", "spack-registry"),
("scope", "repository:spack-registry:pull,push"),
],
),
]
) == RealmServiceScope(
"https://spack.io/authenticate", "spack-registry", "repository:spack-registry:pull,push"
)
@pytest.mark.parametrize(
"image_ref,token",
[
("public.example.com/spack-registry:latest", "public_token"),
("private.example.com/spack-registry:latest", "private_token"),
],
)
def test_automatic_oci_authentication(image_ref, token):
image = ImageReference.from_string(image_ref)
def credentials_provider(domain: str):
return UsernamePassword("user", "pass") if domain == "private.example.com" else None
opener = create_opener(
InMemoryOCIRegistryWithAuth(
image.domain, token=token, realm="https://auth.example.com/login"
),
MockBearerTokenServer("auth.example.com"),
credentials_provider=credentials_provider,
)
# Run this twice, as it triggers a code path that caches the bearer token
assert opener.open(image.endpoint()).status == 200
assert opener.open(image.endpoint()).status == 200
def test_wrong_credentials():
"""Test that when wrong credentials are rejected by the auth server, we
get a 401 error."""
credentials_provider = lambda domain: UsernamePassword("wrong", "wrong")
image = ImageReference.from_string("private.example.com/image")
opener = create_opener(
InMemoryOCIRegistryWithAuth(
image.domain, token="something", realm="https://auth.example.com/login"
),
MockBearerTokenServer("auth.example.com"),
credentials_provider=credentials_provider,
)
with pytest.raises(urllib.error.HTTPError) as e:
opener.open(image.endpoint())
assert e.value.getcode() == 401
def test_wrong_bearer_token_returned_by_auth_server():
"""When the auth server returns a wrong bearer token, we should get a 401 error
when the request we attempt fails. We shouldn't go in circles getting a 401 from
the registry, then a non-working token from the auth server, then a 401 from the
registry, etc."""
image = ImageReference.from_string("private.example.com/image")
opener = create_opener(
InMemoryOCIRegistryWithAuth(
image.domain,
token="other_token_than_token_server_provides",
realm="https://auth.example.com/login",
),
MockBearerTokenServer("auth.example.com"),
credentials_provider=lambda domain: UsernamePassword("user", "pass"),
)
with pytest.raises(urllib.error.HTTPError) as e:
opener.open(image.endpoint())
assert e.value.getcode() == 401
class TrivialAuthServer(DummyServer):
"""A trivial auth server that hands out a bearer token at GET /login."""
def __init__(self, domain: str, token: str) -> None:
super().__init__(domain)
self.router.register("GET", "/login", self.login)
self.token = token
def login(self, req: Request):
return MockHTTPResponse.with_json(200, "OK", body={"token": self.token})
def test_registry_with_short_lived_bearer_tokens():
"""An issued bearer token is mostly opaque to the client, but typically
it embeds a short-lived expiration date. To speed up requests to a registry,
it's good not to authenticate on every request but to cache the bearer token;
however, we then have to deal with the case of an expired bearer token.
Here we test that when the bearer token expires, we authenticate again, and
when the token is still valid, we don't re-authenticate."""
image = ImageReference.from_string("private.example.com/image")
credentials_provider = lambda domain: UsernamePassword("user", "pass")
auth_server = TrivialAuthServer("auth.example.com", token="token")
registry_server = InMemoryOCIRegistryWithAuth(
image.domain, token="token", realm="https://auth.example.com/login"
)
urlopen = create_opener(
registry_server, auth_server, credentials_provider=credentials_provider
).open
# First request, should work with token "token"
assert urlopen(image.endpoint()).status == 200
# Invalidate the token on the registry
registry_server.token = "new_token"
auth_server.token = "new_token"
# Second request: reusing the cached token should fail
# but in the background we will get a new token from the auth server
assert urlopen(image.endpoint()).status == 200
# Subsequent requests should work with the same token, let's do two more
assert urlopen(image.endpoint()).status == 200
assert urlopen(image.endpoint()).status == 200
# And finally, we should see that we've issued exactly two requests to the auth server
assert auth_server.requests == [("GET", "/login"), ("GET", "/login")]
# Whereas we've done more requests to the registry
assert registry_server.requests == [
("GET", "/v2/"), # 1: without bearer token
("GET", "/v2/"), # 2: retry with bearer token
("GET", "/v2/"), # 3: with incorrect bearer token
("GET", "/v2/"), # 4: retry with new bearer token
("GET", "/v2/"), # 5: with recyled correct bearer token
("GET", "/v2/"), # 6: with recyled correct bearer token
]
class InMemoryRegistryWithUnsupportedAuth(InMemoryOCIRegistry):
"""A registry that does set a WWW-Authenticate header, but
with a challenge we don't support."""
def __init__(self, domain: str, allow_single_post: bool = True, www_authenticate=None) -> None:
self.www_authenticate = www_authenticate
super().__init__(domain, allow_single_post)
self.router.add_middleware(self.unsupported_auth_method)
def unsupported_auth_method(self, req: Request):
headers = {}
if self.www_authenticate:
headers["WWW-Authenticate"] = self.www_authenticate
raise MiddlewareError(MockHTTPResponse(401, "Unauthorized", headers=headers))
@pytest.mark.parametrize(
"www_authenticate,error_message",
[
# missing service and scope
('Bearer realm="https://auth.example.com/login"', "unsupported authentication scheme"),
# we don't do basic auth
('Basic realm="https://auth.example.com/login"', "unsupported authentication scheme"),
# multiple unsupported challenges
(
"CustomChallenge method=unsupported, OtherChallenge method=x,param=y",
"unsupported authentication scheme",
),
# no challenge
(None, "missing WWW-Authenticate header"),
# malformed challenge, missing quotes
("Bearer realm=https://auth.example.com", "malformed WWW-Authenticate header"),
# http instead of https
('Bearer realm="http://auth.example.com",scope=x,service=y', "insecure http connection"),
],
)
def test_auth_method_we_cannot_handle_is_error(www_authenticate, error_message):
# We can only handle WWW-Authenticate with a Bearer challenge
image = ImageReference.from_string("private.example.com/image")
urlopen = create_opener(
InMemoryRegistryWithUnsupportedAuth(image.domain, www_authenticate=www_authenticate),
TrivialAuthServer("auth.example.com", token="token"),
credentials_provider=lambda domain: UsernamePassword("user", "pass"),
).open
with pytest.raises(urllib.error.HTTPError, match=error_message) as e:
urlopen(image.endpoint())
assert e.value.getcode() == 401
# Parametrize over single POST vs POST + PUT.
@pytest.mark.parametrize("client_single_request", [True, False])
@pytest.mark.parametrize("server_single_request", [True, False])
def test_oci_registry_upload(tmpdir, client_single_request, server_single_request):
opener = urllib.request.OpenerDirector()
opener.add_handler(
DummyServerUrllibHandler().add_server(
"example.com", InMemoryOCIRegistry(server_single_request)
)
)
opener.add_handler(urllib.request.HTTPDefaultErrorHandler())
opener.add_handler(urllib.request.HTTPErrorProcessor())
# Create a small blob
blob = tmpdir.join("blob")
blob.write("Hello world!")
image = ImageReference.from_string("example.com/image:latest")
digest = Digest.from_sha256(hashlib.sha256(blob.read_binary()).hexdigest())
# Set small file size larger than the blob iff we're doing a single request
small_file_size = 1024 if client_single_request else 0
# Upload once, should actually upload
assert upload_blob(
ref=image,
file=blob.strpath,
digest=digest,
small_file_size=small_file_size,
_urlopen=opener.open,
)
# Second upload should be a no-op, since the blob already exists
assert not upload_blob(
ref=image,
file=blob.strpath,
digest=digest,
small_file_size=small_file_size,
_urlopen=opener.open,
)
# Force upload should upload again
assert upload_blob(
ref=image,
file=blob.strpath,
digest=digest,
force=True,
small_file_size=small_file_size,
_urlopen=opener.open,
)
def test_copy_missing_layers(tmpdir, config):
"""Test copying layers from one registry to another.
Creates 3 blobs, 1 config and 1 manifest in registry A
and copies layers to registry B. Then checks that all
layers are present in registry B. Finally it runs the copy
again and checks that no new layers are uploaded."""
# NOTE: config fixture is used to disable default source mirrors
# which are used in Stage(...). Otherwise this test doesn't really
# rely on globals.
src = ImageReference.from_string("a.example.com/image:x")
dst = ImageReference.from_string("b.example.com/image:y")
src_registry = InMemoryOCIRegistry(src.domain)
dst_registry = InMemoryOCIRegistry(dst.domain)
urlopen = create_opener(src_registry, dst_registry).open
# TODO: make it a bit easier to create a bunch of blobs + config + manifest?
# Create a few blobs and a config file
blobs = [tmpdir.join(f"blob{i}") for i in range(3)]
for i, blob in enumerate(blobs):
blob.write(f"Blob {i}")
digests = [
Digest.from_sha256(hashlib.sha256(blob.read_binary()).hexdigest()) for blob in blobs
]
config = default_config(architecture="amd64", os="linux")
configfile = tmpdir.join("config.json")
configfile.write(json.dumps(config))
config_digest = Digest.from_sha256(hashlib.sha256(configfile.read_binary()).hexdigest())
for blob, digest in zip(blobs, digests):
upload_blob(src, blob.strpath, digest, _urlopen=urlopen)
upload_blob(src, configfile.strpath, config_digest, _urlopen=urlopen)
# Then create a manifest referencing them
manifest = default_manifest()
for blob, digest in zip(blobs, digests):
manifest["layers"].append(
{
"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
"digest": str(digest),
"size": blob.size(),
}
)
manifest["config"] = {
"mediaType": "application/vnd.oci.image.config.v1+json",
"digest": str(config_digest),
"size": configfile.size(),
}
upload_manifest(src, manifest, _urlopen=urlopen)
# Finally, copy the image from src to dst
copy_missing_layers(src, dst, architecture="amd64", _urlopen=urlopen)
# Check that all layers (not config) were copied and identical
assert len(dst_registry.blobs) == len(blobs)
for blob, digest in zip(blobs, digests):
assert dst_registry.blobs.get(str(digest)) == blob.read_binary()
is_upload = lambda method, path: method == "POST" and path == "/v2/image/blobs/uploads/"
is_exists = lambda method, path: method == "HEAD" and path.startswith("/v2/image/blobs/")
# Check that exactly 3 uploads were initiated, and that we don't do
# double existence checks when uploading.
assert sum(is_upload(method, path) for method, path in dst_registry.requests) == 3
assert sum(is_exists(method, path) for method, path in dst_registry.requests) == 3
# Check that re-uploading skips existing layers.
dst_registry.clear_log()
copy_missing_layers(src, dst, architecture="amd64", _urlopen=urlopen)
# Check that no uploads were initiated, only existence checks were done.
assert sum(is_upload(method, path) for method, path in dst_registry.requests) == 0
assert sum(is_exists(method, path) for method, path in dst_registry.requests) == 3
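# The dedup behavior verified above reduces to: check each layer digest against
# the destination and upload only the missing ones. A sketch over the mock's
# internal blob map (illustrative; copy_missing_layers uses HEAD requests):
def _example_missing_digests(manifest, dst_blobs):
    return [layer["digest"] for layer in manifest["layers"] if layer["digest"] not in dst_blobs]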
def test_image_from_mirror():
mirror = spack.mirror.Mirror("oci://example.com/image")
assert image_from_mirror(mirror) == ImageReference.from_string("example.com/image")
def test_image_reference_str():
"""Test that with_digest() works with Digest and str."""
digest_str = f"sha256:{1234:064x}"
digest = Digest.from_string(digest_str)
img = ImageReference.from_string("example.com/image")
assert str(img.with_digest(digest)) == f"example.com/image:latest@{digest}"
assert str(img.with_digest(digest_str)) == f"example.com/image:latest@{digest}"
assert str(img.with_tag("hello")) == "example.com/image:hello"
assert str(img.with_tag("hello").with_digest(digest)) == f"example.com/image:hello@{digest}"
@pytest.mark.parametrize(
"image",
[
# white space issue
" example.com/image",
# not alpha-numeric
"hello#world:latest",
],
)
def test_image_reference_invalid(image):
with pytest.raises(ValueError, match="Invalid image reference"):
ImageReference.from_string(image)
def test_default_credentials_provider():
"""The default credentials provider uses a collection of configured
mirrors."""
mirrors = [
# OCI mirror with push credentials
spack.mirror.Mirror(
{"url": "oci://a.example.com/image", "push": {"access_pair": ["user.a", "pass.a"]}}
),
# Not an OCI mirror
spack.mirror.Mirror(
{"url": "https://b.example.com/image", "access_pair": ["user.b", "pass.b"]}
),
# No credentials
spack.mirror.Mirror("oci://c.example.com/image"),
# Top-level credentials
spack.mirror.Mirror(
{"url": "oci://d.example.com/image", "access_pair": ["user.d", "pass.d"]}
),
# Dockerhub short reference
spack.mirror.Mirror(
{"url": "oci://user/image", "access_pair": ["dockerhub_user", "dockerhub_pass"]}
),
# Localhost (not a dockerhub short reference)
spack.mirror.Mirror(
{"url": "oci://localhost/image", "access_pair": ["user.localhost", "pass.localhost"]}
),
]
assert credentials_from_mirrors("a.example.com", mirrors=mirrors) == UsernamePassword(
"user.a", "pass.a"
)
assert credentials_from_mirrors("b.example.com", mirrors=mirrors) is None
assert credentials_from_mirrors("c.example.com", mirrors=mirrors) is None
assert credentials_from_mirrors("d.example.com", mirrors=mirrors) == UsernamePassword(
"user.d", "pass.d"
)
assert credentials_from_mirrors("index.docker.io", mirrors=mirrors) == UsernamePassword(
"dockerhub_user", "dockerhub_pass"
)
assert credentials_from_mirrors("localhost", mirrors=mirrors) == UsernamePassword(
"user.localhost", "pass.localhost"
)
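# A sketch of the lookup rules the assertions above encode: only oci:// mirrors
# are considered, the mirror's domain must match, and per-direction "push"
# credentials take precedence over top-level ones. This illustrative helper
# works on plain dicts rather than Spack's Mirror objects.
def _example_lookup_credentials(domain, mirror_dicts):
    for m in mirror_dicts:
        url = m.get("url", "")
        if not url.startswith("oci://"):
            continue
        ref = ImageReference.from_string(url[len("oci://"):])
        if ref.domain != domain:
            continue
        pair = m.get("push", {}).get("access_pair") or m.get("access_pair")
        if pair:
            return UsernamePassword(*pair)
    return None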
def test_manifest_index(tmpdir):
"""Test obtaining manifest + config from a registry
that has an index"""
urlopen = create_opener(InMemoryOCIRegistry("registry.example.com")).open
img = ImageReference.from_string("registry.example.com/image")
# Create two config files and manifests, for different architectures
manifest_descriptors = []
manifest_and_config = {}
for arch in ("amd64", "arm64"):
file = tmpdir.join(f"config_{arch}.json")
config = default_config(architecture=arch, os="linux")
file.write(json.dumps(config))
config_digest = Digest.from_sha256(hashlib.sha256(file.read_binary()).hexdigest())
assert upload_blob(img, file, config_digest, _urlopen=urlopen)
manifest = {
"schemaVersion": 2,
"mediaType": "application/vnd.oci.image.manifest.v1+json",
"config": {
"mediaType": "application/vnd.oci.image.config.v1+json",
"digest": str(config_digest),
"size": file.size(),
},
"layers": [],
}
manifest_digest, manifest_size = upload_manifest(
img, manifest, tag=False, _urlopen=urlopen
)
manifest_descriptors.append(
{
"mediaType": "application/vnd.oci.image.manifest.v1+json",
"platform": {"architecture": arch, "os": "linux"},
"digest": str(manifest_digest),
"size": manifest_size,
}
)
manifest_and_config[arch] = (manifest, config)
# And a single index.
index = {
"schemaVersion": 2,
"mediaType": "application/vnd.oci.image.index.v1+json",
"manifests": manifest_descriptors,
}
upload_manifest(img, index, tag=True, _urlopen=urlopen)
# Check that we fetch the correct manifest and config for each architecture
for arch in ("amd64", "arm64"):
assert (
get_manifest_and_config(img, architecture=arch, _urlopen=urlopen)
== manifest_and_config[arch]
)
# Also test max recursion
with pytest.raises(Exception, match="Maximum recursion depth reached"):
get_manifest_and_config(img, architecture="amd64", recurse=0, _urlopen=urlopen)
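# Resolution sketch for multi-arch images: pick the manifest descriptor whose
# platform matches the requested architecture, then fetch that manifest by its
# digest (illustrative; get_manifest_and_config does this with bounded recursion).
def _example_select_manifest(index, architecture):
    return next(d for d in index["manifests"] if d["platform"]["architecture"] == architecture)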
class BrokenServer(DummyServer):
"""Dummy server that returns 500 and 429 errors twice before succeeding"""
def __init__(self, domain: str) -> None:
super().__init__(domain)
self.router.register("GET", r"/internal-server-error/", self.internal_server_error_twice)
self.router.register("GET", r"/rate-limit/", self.rate_limit_twice)
self.router.register("GET", r"/not-found/", self.not_found)
self.count_500 = 0
self.count_429 = 0
def internal_server_error_twice(self, request: Request):
self.count_500 += 1
if self.count_500 < 3:
return MockHTTPResponse(500, "Internal Server Error")
else:
return MockHTTPResponse(200, "OK")
def rate_limit_twice(self, request: Request):
self.count_429 += 1
if self.count_429 < 3:
return MockHTTPResponse(429, "Rate Limit Exceeded")
else:
return MockHTTPResponse(200, "OK")
def not_found(self, request: Request):
return MockHTTPResponse(404, "Not Found")
@pytest.mark.parametrize(
"url,max_retries,expect_failure,expect_requests",
[
# 500s should be retried
("https://example.com/internal-server-error/", 2, True, 2),
("https://example.com/internal-server-error/", 5, False, 3),
# 429s should be retried
("https://example.com/rate-limit/", 2, True, 2),
("https://example.com/rate-limit/", 5, False, 3),
# 404s shouldn't be retried
("https://example.com/not-found/", 3, True, 1),
],
)
def test_retry(url, max_retries, expect_failure, expect_requests):
server = BrokenServer("example.com")
urlopen = create_opener(server).open
sleep_time = []
dont_sleep = lambda t: sleep_time.append(t) # keep track of sleep times
try:
response = default_retry(urlopen, retries=max_retries, sleep=dont_sleep)(url)
except urllib.error.HTTPError as e:
if not expect_failure:
assert False, f"Unexpected HTTPError: {e}"
else:
if expect_failure:
assert False, "Expected HTTPError, but none was raised"
assert response.status == 200
assert len(server.requests) == expect_requests
assert sleep_time == [2**i for i in range(expect_requests - 1)]
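# A hypothetical sketch of the behavior the parametrization above encodes (this
# is illustrative, not Spack's default_retry): retry 429 and 500 responses up to
# `retries` attempts, sleeping 2**attempt in between; other errors propagate.
def _example_retrying(urlopen, retries=5, sleep=None):
    import time

    sleep = sleep or time.sleep

    def wrapper(url):
        for attempt in range(retries):
            try:
                return urlopen(url)
            except urllib.error.HTTPError as e:
                if e.code not in (429, 500) or attempt == retries - 1:
                    raise
                sleep(2**attempt)

    return wrapper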


@@ -37,6 +37,7 @@ def mpileaks_possible_deps(mock_packages, mpi_names):
"low-priority-provider": set(),
"dyninst": set(["libdwarf", "libelf"]),
"fake": set(),
"intel-parallel-studio": set(),
"libdwarf": set(["libelf"]),
"libelf": set(),
"mpich": set(),


@@ -532,6 +532,7 @@ def test_normalize_mpileaks(self):
assert not spec.eq_dag(expected_normalized, deptypes=True)
assert not spec.eq_dag(non_unique_nodes, deptypes=True)
@pytest.mark.xfail(reason="String representation changed")
def test_normalize_with_virtual_package(self):
spec = Spec("mpileaks ^mpi ^libelf@1.8.11 ^libdwarf")
spec.normalize()


@@ -294,13 +294,10 @@ def test_concrete_specs_which_satisfies_abstract(self, lhs, rhs, default_mock_co
("foo@4.0%pgi@4.5", "@1:3%pgi@4.4:4.6"),
("builtin.mock.mpich", "builtin.mpich"),
("mpileaks ^builtin.mock.mpich", "^builtin.mpich"),
("mpileaks^mpich", "^zmpi"),
("mpileaks^zmpi", "^mpich"),
("mpileaks^mpich@1.2", "^mpich@2.0"),
("mpileaks^mpich@4.0^callpath@1.5", "^mpich@1:3^callpath@1.4:1.6"),
("mpileaks^mpich@2.0^callpath@1.7", "^mpich@1:3^callpath@1.4:1.6"),
("mpileaks^mpich@4.0^callpath@1.7", "^mpich@1:3^callpath@1.4:1.6"),
("mpileaks^mpich", "^zmpi"),
("mpileaks^mpi@3", "^mpi@1.2:1.6"),
("mpileaks^mpi@3:", "^mpich2@1.4"),
("mpileaks^mpi@3:", "^mpich2"),
@@ -338,30 +335,30 @@ def test_constraining_abstract_specs_with_empty_intersection(self, lhs, rhs):
rhs.constrain(lhs)
@pytest.mark.parametrize(
"lhs,rhs,intersection_expected",
"lhs,rhs",
[
("mpich", "mpich +foo", True),
("mpich", "mpich~foo", True),
("mpich", "mpich foo=1", True),
("mpich", "mpich++foo", True),
("mpich", "mpich~~foo", True),
("mpich", "mpich foo==1", True),
("mpich", "mpich +foo"),
("mpich", "mpich~foo"),
("mpich", "mpich foo=1"),
("mpich", "mpich++foo"),
("mpich", "mpich~~foo"),
("mpich", "mpich foo==1"),
# Flag semantics are currently different from other variants
("mpich", 'mpich cflags="-O3"', True),
("mpich cflags=-O3", 'mpich cflags="-O3 -Ofast"', False),
("mpich cflags=-O2", 'mpich cflags="-O3"', False),
("multivalue-variant foo=bar", "multivalue-variant +foo", False),
("multivalue-variant foo=bar", "multivalue-variant ~foo", False),
("multivalue-variant fee=bar", "multivalue-variant fee=baz", False),
("mpich", 'mpich cflags="-O3"'),
("mpich cflags=-O3", 'mpich cflags="-O3 -Ofast"'),
("mpich cflags=-O2", 'mpich cflags="-O3"'),
("multivalue-variant foo=bar", "multivalue-variant +foo"),
("multivalue-variant foo=bar", "multivalue-variant ~foo"),
("multivalue-variant fee=bar", "multivalue-variant fee=baz"),
],
)
def test_concrete_specs_which_do_not_satisfy_abstract(
self, lhs, rhs, intersection_expected, default_mock_concretization
self, lhs, rhs, default_mock_concretization
):
lhs, rhs = default_mock_concretization(lhs), Spec(rhs)
assert lhs.intersects(rhs) is intersection_expected
assert rhs.intersects(lhs) is intersection_expected
assert lhs.intersects(rhs) is False
assert rhs.intersects(lhs) is False
assert not lhs.satisfies(rhs)
assert not rhs.satisfies(lhs)
@@ -483,10 +480,14 @@ def test_intersects_virtual(self):
assert Spec("mpich2").intersects(Spec("mpi"))
assert Spec("zmpi").intersects(Spec("mpi"))
def test_intersects_virtual_dep_with_virtual_constraint(self):
def test_intersects_virtual_providers(self):
"""Tests that we can always intersect virtual providers from abstract specs.
Concretization will give meaning to virtuals, and eventually forbid certain
configurations.
"""
assert Spec("netlib-lapack ^openblas").intersects("netlib-lapack ^openblas")
assert not Spec("netlib-lapack ^netlib-blas").intersects("netlib-lapack ^openblas")
assert not Spec("netlib-lapack ^openblas").intersects("netlib-lapack ^netlib-blas")
assert Spec("netlib-lapack ^netlib-blas").intersects("netlib-lapack ^openblas")
assert Spec("netlib-lapack ^openblas").intersects("netlib-lapack ^netlib-blas")
assert Spec("netlib-lapack ^netlib-blas").intersects("netlib-lapack ^netlib-blas")
def test_intersectable_concrete_specs_must_have_the_same_hash(self):
@@ -1006,6 +1007,103 @@ def test_spec_override(self):
assert new_spec.compiler_flags["cflags"] == ["-O2"]
assert new_spec.compiler_flags["cxxflags"] == ["-O1"]
@pytest.mark.parametrize(
"spec_str,specs_in_dag",
[
("hdf5 ^[virtuals=mpi] mpich", [("mpich", "mpich"), ("mpi", "mpich")]),
# Try different combinations with packages that provides a
# disjoint set of virtual dependencies
(
"netlib-scalapack ^mpich ^openblas-with-lapack",
[
("mpi", "mpich"),
("lapack", "openblas-with-lapack"),
("blas", "openblas-with-lapack"),
],
),
(
"netlib-scalapack ^[virtuals=mpi] mpich ^openblas-with-lapack",
[
("mpi", "mpich"),
("lapack", "openblas-with-lapack"),
("blas", "openblas-with-lapack"),
],
),
(
"netlib-scalapack ^mpich ^[virtuals=lapack] openblas-with-lapack",
[
("mpi", "mpich"),
("lapack", "openblas-with-lapack"),
("blas", "openblas-with-lapack"),
],
),
(
"netlib-scalapack ^[virtuals=mpi] mpich ^[virtuals=lapack] openblas-with-lapack",
[
("mpi", "mpich"),
("lapack", "openblas-with-lapack"),
("blas", "openblas-with-lapack"),
],
),
# Test that we can mix dependencies that provide overlapping
# sets of virtual dependencies
(
"netlib-scalapack ^[virtuals=mpi] intel-parallel-studio "
"^[virtuals=lapack] openblas-with-lapack",
[
("mpi", "intel-parallel-studio"),
("lapack", "openblas-with-lapack"),
("blas", "openblas-with-lapack"),
],
),
(
"netlib-scalapack ^[virtuals=mpi] intel-parallel-studio ^openblas-with-lapack",
[
("mpi", "intel-parallel-studio"),
("lapack", "openblas-with-lapack"),
("blas", "openblas-with-lapack"),
],
),
(
"netlib-scalapack ^intel-parallel-studio ^[virtuals=lapack] openblas-with-lapack",
[
("mpi", "intel-parallel-studio"),
("lapack", "openblas-with-lapack"),
("blas", "openblas-with-lapack"),
],
),
# Test that we can bind more than one virtual to the same provider
(
"netlib-scalapack ^[virtuals=lapack,blas] openblas-with-lapack",
[("lapack", "openblas-with-lapack"), ("blas", "openblas-with-lapack")],
),
],
)
def test_virtual_deps_bindings(self, default_mock_concretization, spec_str, specs_in_dag):
if spack.config.get("config:concretizer") == "original":
pytest.skip("Use case not supported by the original concretizer")
s = default_mock_concretization(spec_str)
for label, expected in specs_in_dag:
assert label in s
assert s[label].satisfies(expected), label
@pytest.mark.parametrize(
"spec_str",
[
# openblas-with-lapack needs to provide blas and lapack together
"netlib-scalapack ^[virtuals=blas] intel-parallel-studio ^openblas-with-lapack",
# intel-* provides blas and lapack together, openblas can provide blas only
"netlib-scalapack ^[virtuals=lapack] intel-parallel-studio ^openblas",
],
)
def test_unsatisfiable_virtual_deps_bindings(self, spec_str):
if spack.config.get("config:concretizer") == "original":
pytest.skip("Use case not supported by the original concretizer")
with pytest.raises(spack.solver.asp.UnsatisfiableSpecError):
Spec(spec_str).concretized()
@pytest.mark.parametrize(
"spec_str,format_str,expected",


@@ -472,33 +472,46 @@ def _specfile_for(spec_str, filename):
[Token(TokenType.PROPAGATED_KEY_VALUE_PAIR, value='cflags=="-O3 -g"')],
'cflags=="-O3 -g"',
),
# Way too many spaces
# Whitespace is allowed in version lists
("@1.2:1.4 , 1.6 ", [Token(TokenType.VERSION, value="@1.2:1.4 , 1.6")], "@1.2:1.4,1.6"),
# But not in ranges. `a@1:` and `b` are separate specs, not a single `a@1:b`.
(
"@1.2 : 1.4 , 1.6 ",
[Token(TokenType.VERSION, value="@1.2 : 1.4 , 1.6")],
"@1.2:1.4,1.6",
),
("@1.2 : develop", [Token(TokenType.VERSION, value="@1.2 : develop")], "@1.2:develop"),
(
"@1.2 : develop = foo",
"a@1: b",
[
Token(TokenType.VERSION, value="@1.2 :"),
Token(TokenType.UNQUALIFIED_PACKAGE_NAME, value="a"),
Token(TokenType.VERSION, value="@1:"),
Token(TokenType.UNQUALIFIED_PACKAGE_NAME, value="b"),
],
"a@1:",
),
(
"@1.2: develop = foo",
[
Token(TokenType.VERSION, value="@1.2:"),
Token(TokenType.KEY_VALUE_PAIR, value="develop = foo"),
],
"@1.2: develop=foo",
),
(
"% intel @ 12.1 : 12.6 + debug",
"@1.2:develop = foo",
[
Token(TokenType.COMPILER_AND_VERSION, value="% intel @ 12.1 : 12.6"),
Token(TokenType.VERSION, value="@1.2:"),
Token(TokenType.KEY_VALUE_PAIR, value="develop = foo"),
],
"@1.2: develop=foo",
),
(
"% intel @ 12.1:12.6 + debug",
[
Token(TokenType.COMPILER_AND_VERSION, value="% intel @ 12.1:12.6"),
Token(TokenType.BOOL_VARIANT, value="+ debug"),
],
"%intel@12.1:12.6+debug",
),
(
"@ 12.1 : 12.6 + debug - qt_4",
"@ 12.1:12.6 + debug - qt_4",
[
Token(TokenType.VERSION, value="@ 12.1 : 12.6"),
Token(TokenType.VERSION, value="@ 12.1:12.6"),
Token(TokenType.BOOL_VARIANT, value="+ debug"),
Token(TokenType.BOOL_VARIANT, value="- qt_4"),
],
@@ -517,6 +530,26 @@ def _specfile_for(spec_str, filename):
[Token(TokenType.VERSION, value="@:0.4"), Token(TokenType.COMPILER, value="% nvhpc")],
"@:0.4%nvhpc",
),
(
"^[virtuals=mpi] openmpi",
[
Token(TokenType.START_EDGE_PROPERTIES, value="^["),
Token(TokenType.KEY_VALUE_PAIR, value="virtuals=mpi"),
Token(TokenType.END_EDGE_PROPERTIES, value="]"),
Token(TokenType.UNQUALIFIED_PACKAGE_NAME, value="openmpi"),
],
"^[virtuals=mpi] openmpi",
),
(
"^[deptypes=link,build] zlib",
[
Token(TokenType.START_EDGE_PROPERTIES, value="^["),
Token(TokenType.KEY_VALUE_PAIR, value="deptypes=link,build"),
Token(TokenType.END_EDGE_PROPERTIES, value="]"),
Token(TokenType.UNQUALIFIED_PACKAGE_NAME, value="zlib"),
],
"^[deptypes=build,link] zlib",
),
(
"zlib@git.foo/bar",
[
@@ -525,6 +558,31 @@ def _specfile_for(spec_str, filename):
],
"zlib@git.foo/bar",
),
# Variant propagation
(
"zlib ++foo",
[
Token(TokenType.UNQUALIFIED_PACKAGE_NAME, "zlib"),
Token(TokenType.PROPAGATED_BOOL_VARIANT, "++foo"),
],
"zlib++foo",
),
(
"zlib ~~foo",
[
Token(TokenType.UNQUALIFIED_PACKAGE_NAME, "zlib"),
Token(TokenType.PROPAGATED_BOOL_VARIANT, "~~foo"),
],
"zlib~~foo",
),
(
"zlib foo==bar",
[
Token(TokenType.UNQUALIFIED_PACKAGE_NAME, "zlib"),
Token(TokenType.PROPAGATED_KEY_VALUE_PAIR, "foo==bar"),
],
"zlib foo==bar",
),
],
)
def test_parse_single_spec(spec_str, tokens, expected_roundtrip):
@@ -885,6 +943,9 @@ def test_disambiguate_hash_by_spec(spec1, spec2, constraint, mock_packages, monk
("x platform=test platform=test", spack.spec.DuplicateArchitectureError),
("x os=fe platform=test target=fe os=fe", spack.spec.DuplicateArchitectureError),
("x target=be platform=test os=be os=fe", spack.spec.DuplicateArchitectureError),
("^[@foo] zlib", spack.parser.SpecParsingError),
# TODO: Remove this as soon as use variants are added and we can parse custom attributes
("^[foo=bar] zlib", spack.parser.SpecParsingError),
],
)
def test_error_conditions(text, exc_cls):


@@ -120,6 +120,21 @@ def test_parser_doesnt_deal_with_nonzero_offset():
elf.parse_elf(elf_at_offset_one)
def test_only_header():
# When passing only_header=True, parsing a file that is literally just a header
# (without any sections/segments) should not error.
# 32 bit
elf_32 = elf.parse_elf(io.BytesIO(b"\x7fELF\x01\x01" + b"\x00" * 46), only_header=True)
assert not elf_32.is_64_bit
assert elf_32.is_little_endian
# 64 bit
elf_64 = elf.parse_elf(io.BytesIO(b"\x7fELF\x02\x01" + b"\x00" * 58), only_header=True)
assert elf_64.is_64_bit
assert elf_64.is_little_endian
@pytest.mark.requires_executables("gcc")
@skip_unless_linux
def test_elf_get_and_replace_rpaths(binary_with_rpaths):


@@ -4,10 +4,12 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import hashlib
from typing import Any, Callable, Dict # novm
from typing import BinaryIO, Callable, Dict, Optional
import llnl.util.tty as tty
HashFactory = Callable[[], "hashlib._Hash"]
#: Set of hash algorithms that Spack can use, mapped to digest size in bytes
hashes = {"sha256": 32, "md5": 16, "sha1": 20, "sha224": 28, "sha384": 48, "sha512": 64}
# Note: keys are ordered by popularity for earliest return in ``hash_key in version_dict`` checks.
@@ -23,7 +25,7 @@
#: cache of hash functions generated
_hash_functions: Dict[str, Callable[[], Any]] = {}
_hash_functions: Dict[str, HashFactory] = {}
class DeprecatedHash:
@@ -44,55 +46,57 @@ def __call__(self, disable_alert=False):
return hashlib.new(self.hash_alg)
def hash_fun_for_algo(algo):
def hash_fun_for_algo(algo: str) -> HashFactory:
"""Get a function that can perform the specified hash algorithm."""
hash_gen = _hash_functions.get(algo)
if hash_gen is None:
if algo in _deprecated_hash_algorithms:
try:
hash_gen = DeprecatedHash(algo, tty.debug, disable_security_check=False)
fun = _hash_functions.get(algo)
if fun:
return fun
elif algo not in _deprecated_hash_algorithms:
_hash_functions[algo] = getattr(hashlib, algo)
else:
try:
deprecated_fun = DeprecatedHash(algo, tty.debug, disable_security_check=False)
# call once to get a ValueError if usedforsecurity is needed
hash_gen(disable_alert=True)
except ValueError:
# Some systems may support the 'usedforsecurity' option
# so try with that (but display a warning when it is used)
hash_gen = DeprecatedHash(algo, tty.warn, disable_security_check=True)
else:
hash_gen = getattr(hashlib, algo)
_hash_functions[algo] = hash_gen
return hash_gen
# call once to get a ValueError if usedforsecurity is needed
deprecated_fun(disable_alert=True)
except ValueError:
# Some systems may support the 'usedforsecurity' option
# so try with that (but display a warning when it is used)
deprecated_fun = DeprecatedHash(algo, tty.warn, disable_security_check=True)
_hash_functions[algo] = deprecated_fun
return _hash_functions[algo]
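The rewritten lookup memoizes one factory per algorithm and only takes the `DeprecatedHash` path for algorithms on this module's deprecation list. A quick sketch of the intended call pattern, assuming the module is importable as `spack.util.crypto`:

```python
import spack.util.crypto as crypto  # module path assumed from this diff

sha256 = crypto.hash_fun_for_algo("sha256")  # plain hashlib factory, cached on first use
hasher = sha256()                            # equivalent to hashlib.sha256()
hasher.update(b"hello")
print(hasher.hexdigest())

# A deprecated algorithm (assuming md5 is on this module's deprecation list)
# returns a DeprecatedHash wrapper that alerts when used.
md5 = crypto.hash_fun_for_algo("md5")
```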
def hash_algo_for_digest(hexdigest):
def hash_algo_for_digest(hexdigest: str) -> str:
"""Gets name of the hash algorithm for a hex digest."""
bytes = len(hexdigest) / 2
if bytes not in _size_to_hash:
raise ValueError("Spack knows no hash algorithm for this digest: %s" % hexdigest)
return _size_to_hash[bytes]
algo = _size_to_hash.get(len(hexdigest) // 2)
if algo is None:
raise ValueError(f"Spack knows no hash algorithm for this digest: {hexdigest}")
return algo
def hash_fun_for_digest(hexdigest):
def hash_fun_for_digest(hexdigest: str) -> HashFactory:
"""Gets a hash function corresponding to a hex digest."""
return hash_fun_for_algo(hash_algo_for_digest(hexdigest))
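Digest length alone identifies the algorithm (64 hex characters → 32 bytes → sha256 in the `hashes` table above), so a bare digest is enough to select a factory. A small sketch under the same import assumption:

```python
from spack.util.crypto import hash_algo_for_digest, hash_fun_for_digest  # path assumed

digest = "9f" * 32                      # 64 hex characters == 32 bytes
print(hash_algo_for_digest(digest))     # -> "sha256"
hasher = hash_fun_for_digest(digest)()  # the matching hashlib factory, instantiated
```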
def checksum(hashlib_algo, filename, **kwargs):
"""Returns a hex digest of the filename generated using an
algorithm from hashlib.
"""
block_size = kwargs.get("block_size", 2**20)
def checksum_stream(hashlib_algo: HashFactory, fp: BinaryIO, *, block_size: int = 2**20) -> str:
"""Returns a hex digest of the stream generated using given algorithm from hashlib."""
hasher = hashlib_algo()
with open(filename, "rb") as file:
while True:
data = file.read(block_size)
if not data:
break
hasher.update(data)
while True:
data = fp.read(block_size)
if not data:
break
hasher.update(data)
return hasher.hexdigest()
def checksum(hashlib_algo: HashFactory, filename: str, *, block_size: int = 2**20) -> str:
"""Returns a hex digest of the filename generated using an algorithm from hashlib."""
with open(filename, "rb") as f:
return checksum_stream(hashlib_algo, f, block_size=block_size)
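`checksum` is now a thin wrapper over `checksum_stream`, which also makes hashing in-memory data straightforward. A usage sketch; the file path is hypothetical:

```python
import hashlib
import io

from spack.util.crypto import checksum, checksum_stream  # path assumed from this diff

# Hash a file on disk in 1 MB blocks (the default block size).
digest = checksum(hashlib.sha256, "downloads/zlib-1.3.tar.gz")  # hypothetical path

# Hash any binary stream the same way.
digest2 = checksum_stream(hashlib.sha256, io.BytesIO(b"some bytes"))
```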
class Checker:
"""A checker checks files against one particular hex digest.
It will automatically determine what hashing algorithm
@@ -115,18 +119,18 @@ class Checker:
a 1MB (2**20 bytes) buffer.
"""
def __init__(self, hexdigest, **kwargs):
def __init__(self, hexdigest: str, **kwargs) -> None:
self.block_size = kwargs.get("block_size", 2**20)
self.hexdigest = hexdigest
self.sum = None
self.sum: Optional[str] = None
self.hash_fun = hash_fun_for_digest(hexdigest)
@property
def hash_name(self):
def hash_name(self) -> str:
"""Get the name of the hash function this Checker is using."""
return self.hash_fun().name.lower()
def check(self, filename):
def check(self, filename: str) -> bool:
"""Read the file with the specified name and check its checksum
against self.hexdigest. Return True if they match, False
otherwise. Actual checksum is stored in self.sum.
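Using the now-typed `Checker` is unchanged. A minimal sketch with a hypothetical file; the digest below is sha256 of the string "test":

```python
from spack.util.crypto import Checker  # path assumed from this diff

expected = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
checker = Checker(expected)       # algorithm is inferred from the digest length
print(checker.hash_name)          # -> "sha256"
if checker.check("payload.bin"):  # hypothetical file
    print("checksum OK:", checker.sum)
```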


@@ -377,7 +377,7 @@ def parse_header(f, elf):
elf.elf_hdr = ElfHeader._make(unpack(elf_header_fmt, data))
def _do_parse_elf(f, interpreter=True, dynamic_section=True):
def _do_parse_elf(f, interpreter=True, dynamic_section=True, only_header=False):
# We don't (yet?) allow parsing ELF files at a nonzero offset, we just
# jump to absolute offsets as they are specified in the ELF file.
if f.tell() != 0:
@@ -386,6 +386,9 @@ def _do_parse_elf(f, interpreter=True, dynamic_section=True):
elf = ElfFile()
parse_header(f, elf)
if only_header:
return elf
# We don't handle anything but executables and shared libraries now.
if elf.elf_hdr.e_type not in (ELF_CONSTANTS.ET_EXEC, ELF_CONSTANTS.ET_DYN):
raise ElfParsingError("Not an ET_DYN or ET_EXEC type")
@@ -403,11 +406,11 @@ def _do_parse_elf(f, interpreter=True, dynamic_section=True):
return elf
def parse_elf(f, interpreter=False, dynamic_section=False):
def parse_elf(f, interpreter=False, dynamic_section=False, only_header=False):
"""Given a file handle f for an ELF file opened in binary mode, return an ElfFile
object that stores data about rpaths"""
try:
return _do_parse_elf(f, interpreter, dynamic_section)
return _do_parse_elf(f, interpreter, dynamic_section, only_header)
except (DeprecationWarning, struct.error):
# According to the docs old versions of Python can throw DeprecationWarning
# instead of struct.error.


@@ -35,9 +35,9 @@ def __init__(self, name):
if not self.exe:
raise ProcessError("Cannot construct executable for '%s'" % name)
def add_default_arg(self, arg):
"""Add a default argument to the command."""
self.exe.append(arg)
def add_default_arg(self, *args):
"""Add default argument(s) to the command."""
self.exe.extend(args)
def add_default_env(self, key, value):
"""Set an environment variable when the command is run.


@@ -24,7 +24,6 @@ def git(required: bool = False):
# If we're running under pytest, add this to ignore the fix for CVE-2022-39253 in
# git 2.38.1+. Do this in one place; we need git to do this in all parts of Spack.
if git and "pytest" in sys.modules:
git.add_default_arg("-c")
git.add_default_arg("protocol.file.allow=always")
git.add_default_arg("-c", "protocol.file.allow=always")
return git
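With the variadic `add_default_arg`, a flag and its value can be registered in one call, as above. A short sketch of the wrapper's behavior, assuming the `spack.util.executable` module path:

```python
from spack.util.executable import Executable  # path assumed

git = Executable("git")
# Both tokens are prepended to every subsequent invocation.
git.add_default_arg("-c", "protocol.file.allow=always")
git("--version")  # effectively: git -c protocol.file.allow=always --version
```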


@@ -6,6 +6,7 @@
import os
import sys
import traceback
from typing import Optional
class ErrorFromWorker:
@@ -53,7 +54,9 @@ def __call__(self, *args, **kwargs):
return value
def imap_unordered(f, list_of_args, *, processes: int, debug=False):
def imap_unordered(
f, list_of_args, *, processes: int, maxtaskperchild: Optional[int] = None, debug=False
):
"""Wrapper around multiprocessing.Pool.imap_unordered.
Args:
@@ -62,6 +65,8 @@ def imap_unordered(f, list_of_args, *, processes: int, debug=False):
processes: maximum number of processes allowed
debug: if False, raise an exception containing just the error messages
from workers, if True an exception with complete stacktraces
maxtaskperchild: number of tasks a child process executes before it is
killed and replaced with a fresh one
Raises:
RuntimeError: if any error occurred in the worker processes
@@ -70,7 +75,7 @@ def imap_unordered(f, list_of_args, *, processes: int, debug=False):
yield from map(f, list_of_args)
return
with multiprocessing.Pool(processes) as p:
with multiprocessing.Pool(processes, maxtasksperchild=maxtaskperchild) as p:
for result in p.imap_unordered(Task(f), list_of_args):
if isinstance(result, ErrorFromWorker):
raise RuntimeError(result.stacktrace if debug else str(result))
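Passing `maxtaskperchild=1` retires each worker after a single task, which is how the unify:false environment solver reclaims clingo's memory between solves. A minimal sketch, assuming this module is importable as `spack.util.parallel`:

```python
from spack.util.parallel import imap_unordered  # module path assumed


def square(x):
    return x * x


if __name__ == "__main__":  # required for multiprocessing on spawn-based platforms
    # Each child process exits after one task, releasing its memory to the OS.
    for result in imap_unordered(square, [1, 2, 3, 4], processes=2, maxtaskperchild=1):
        print(result)
```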


@@ -98,7 +98,7 @@ def replacements():
def win_exe_ext():
return ".exe"
return r"(?:\.bat|\.exe)"
def sanitize_filename(filename: str) -> str:


@@ -8,6 +8,7 @@
"""
import os
import re
import sys
from contextlib import contextmanager
@@ -68,8 +69,19 @@ def _gather_subkey_info(self):
sub_keys, _, _ = winreg.QueryInfoKey(self.hkey)
for i in range(sub_keys):
sub_name = winreg.EnumKey(self.hkey, i)
sub_handle = winreg.OpenKeyEx(self.hkey, sub_name, access=winreg.KEY_READ)
self._keys.append(RegistryKey(os.path.join(self.path, sub_name), sub_handle))
try:
sub_handle = winreg.OpenKeyEx(self.hkey, sub_name, access=winreg.KEY_READ)
self._keys.append(RegistryKey(os.path.join(self.path, sub_name), sub_handle))
except OSError as e:
if hasattr(e, "winerror"):
if e.winerror == 5:
# This is a permission error, we can't read this key
# move on
pass
else:
raise
else:
raise
def _gather_value_info(self):
"""Compose all values for this key into a dict of form value name: RegistryValue Object"""
@@ -161,6 +173,15 @@ def __init__(self, key, root_key=HKEY.HKEY_CURRENT_USER):
self.root = root_key
self._reg = None
class KeyMatchConditions:
@staticmethod
def regex_matcher(subkey_name):
return lambda x: re.match(subkey_name, x.name)
@staticmethod
def name_matcher(subkey_name):
return lambda x: subkey_name == x.name
@contextmanager
def invalid_reg_ref_error_handler(self):
try:
@@ -193,6 +214,10 @@ def _valid_reg_check(self):
return False
return True
def _regex_match_subkeys(self, subkey):
r_subkey = re.compile(subkey)
return [key for key in self.get_subkeys() if r_subkey.match(key.name)]
@property
def reg(self):
if not self._reg:
@@ -218,51 +243,106 @@ def get_subkeys(self):
with self.invalid_reg_ref_error_handler():
return self.reg.subkeys
def get_matching_subkeys(self, subkey_name):
"""Returns all subkeys regex matching subkey name
Note: this method obtains only direct subkeys of the given key and does not
desced to transtitve subkeys. For this behavior, see `find_matching_subkeys`"""
self._regex_match_subkeys(subkey_name)
def get_values(self):
if not self._valid_reg_check():
raise RegistryError("Cannot query values from invalid key %s" % self.key)
with self.invalid_reg_ref_error_handler():
return self.reg.values
def _traverse_subkeys(self, stop_condition):
def _traverse_subkeys(self, stop_condition, collect_all_matching=False):
"""Perform simple BFS of subkeys, returning the key
that successfully triggers the stop condition.
Args:
stop_condition: callable that takes a single key argument and
returns a boolean value based on that key
collect_all_matching: boolean value; if True, the traversal collects and
returns all keys meeting the stop condition. If False, the key
that first triggers the condition is returned on its own.
Return:
the first key that triggers stop_condition (or, with
collect_all_matching, a list of all such keys), or None if no key matches
"""
collection = []
if not self._valid_reg_check():
raise RegistryError("Cannot query values from invalid key %s" % self.key)
with self.invalid_reg_ref_error_handler():
queue = self.reg.subkeys
for key in queue:
if stop_condition(key):
return key
if collect_all_matching:
collection.append(key)
else:
return key
queue.extend(key.subkeys)
return None
return collection if collection else None
def find_subkey(self, subkey_name, recursive=True):
"""If non recursive, this method is the same as get subkey with error handling
Otherwise perform a BFS of subkeys until desired key is found
Returns None or RegistryKey object corresponding to requested key name
def _find_subkey_s(self, search_key, collect_all_matching=False):
"""Retrieve one or more keys regex matching `search_key`.
One key will be returned unless `collect_all_matching` is enabled,
in which case call matches are returned.
Args:
subkey_name (str): string representing subkey to be searched for
recursive (bool): optional argument, if True, subkey need not be a direct
sub key of this registry entry, and this method will
search all subkeys recursively.
Default is True
search_key (callable): predicate taking a subkey and returning True when
it matches (see `KeyMatchConditions` for the stock
name and regex matchers)
collect_all_matching (bool): if True, return all matching keys rather
than only the first
Return:
the desired subkey as a RegistryKey object (or a list of all matches when
`collect_all_matching` is True), or None
"""
return self._traverse_subkeys(search_key, collect_all_matching=collect_all_matching)
if not recursive:
return self.get_subkey(subkey_name)
def find_subkey(self, subkey_name):
"""Perform a BFS of subkeys until desired key is found
Returns None or RegistryKey object corresponding to requested key name
else:
return self._traverse_subkeys(lambda x: x.name == subkey_name)
Args:
subkey_name (str)
Return:
the desired subkey as a RegistryKey object, or None
For more details, see the WindowsRegistryView._find_subkey_s method docstring
"""
return self._find_subkey_s(
WindowsRegistryView.KeyMatchConditions.name_matcher(subkey_name)
)
def find_matching_subkey(self, subkey_name):
"""Perform a BFS of subkeys until a key matching subkey name regex is found
Returns None or the first RegistryKey object corresponding to requested key name
Args:
subkey_name (str)
Return:
the desired subkey as a RegistryKey object, or None
For more details, see the WindowsRegistryView._find_subkey_s method docstring
"""
return self._find_subkey_s(
WindowsRegistryView.KeyMatchConditions.regex_matcher(subkey_name)
)
def find_subkeys(self, subkey_name):
"""Exactly the same as find_subkey, except this function tries to match
a regex to multiple keys
Args:
subkey_name (str)
Return:
the desired subkeys as a list of RegistryKey objects, or None
For more details, see the WindowsRegistryView._find_subkey_s method docstring
"""
kwargs = {"collect_all_matching": True}
return self._find_subkey_s(
WindowsRegistryView.KeyMatchConditions.regex_matcher(subkey_name), **kwargs
)
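The three lookups differ only in matcher and cardinality: exact name vs. regex, first hit vs. all hits. A Windows-only sketch; the key path and names are hypothetical:

```python
from spack.util.windows_registry import HKEY, WindowsRegistryView  # path assumed

view = WindowsRegistryView("SOFTWARE\\Microsoft", root_key=HKEY.HKEY_LOCAL_MACHINE)

exact = view.find_subkey("Windows")          # BFS, exact name match, first hit
first = view.find_matching_subkey(r"Win.*")  # BFS, regex match, first hit
every = view.find_subkeys(r"Win.*")          # BFS, regex match, all hits
```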
def find_value(self, val_name, recursive=True):
"""


@@ -141,10 +141,16 @@ default:
- cd share/spack/gitlab/cloud_pipelines/stacks/${SPACK_CI_STACK_NAME}
- spack env activate --without-view .
- export SPACK_CI_CONFIG_ROOT="${SPACK_ROOT}/share/spack/gitlab/cloud_pipelines/configs"
- spack
--config-scope "${SPACK_CI_CONFIG_ROOT}"
--config-scope "${SPACK_CI_CONFIG_ROOT}/${SPACK_TARGET_PLATFORM}"
--config-scope "${SPACK_CI_CONFIG_ROOT}/${SPACK_TARGET_PLATFORM}/${SPACK_TARGET_ARCH}"
${CI_STACK_CONFIG_SCOPES}
compiler find
- spack python -c "import os,sys; print(os.path.expandvars(sys.stdin.read()))"
< "${SPACK_CI_CONFIG_ROOT}/${PIPELINE_MIRROR_TEMPLATE}" > "${SPACK_CI_CONFIG_ROOT}/mirrors.yaml"
- spack config add -f "${SPACK_CI_CONFIG_ROOT}/mirrors.yaml"
- spack -v
- spack -v --color=always
--config-scope "${SPACK_CI_CONFIG_ROOT}"
--config-scope "${SPACK_CI_CONFIG_ROOT}/${SPACK_TARGET_PLATFORM}"
--config-scope "${SPACK_CI_CONFIG_ROOT}/${SPACK_TARGET_PLATFORM}/${SPACK_TARGET_ARCH}"
@@ -197,7 +203,7 @@ default:
- spack --version
- cd share/spack/gitlab/cloud_pipelines/stacks/${SPACK_CI_STACK_NAME}
- spack env activate --without-view .
- spack -v
- spack -v --color=always
ci generate --check-index-only
--buildcache-destination "${PUSH_BUILDCACHE_DEPRECATED}"
--artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
@@ -615,7 +621,7 @@ aws-isc-aarch64-build:
tutorial-generate:
extends: [ ".tutorial", ".generate-x86_64"]
image: ghcr.io/spack/tutorial-ubuntu-22.04:v2023-05-07
image: ghcr.io/spack/tutorial-ubuntu-22.04:v2023-10-30
tutorial-build:
extends: [ ".tutorial", ".build" ]
@@ -706,7 +712,7 @@ ml-linux-x86_64-rocm-build:
SPACK_CI_STACK_NAME: ml-darwin-aarch64-mps
ml-darwin-aarch64-mps-generate:
tags: [ "macos-ventura", "apple-clang-14", "aarch64-macos" ]
tags: [ "macos-ventura", "apple-clang-15", "aarch64-macos" ]
extends: [ ".ml-darwin-aarch64-mps", ".generate-base"]
ml-darwin-aarch64-mps-build:


@@ -12,7 +12,7 @@ ci:
before_script-:
- - spack list --count # ensure that spack's cache is populated
- - spack env activate --without-view ${SPACK_CONCRETE_ENV_DIR}
- spack compiler find
- spack compiler list
- if [ -n "$SPACK_BUILD_JOBS" ]; then spack config add "config:build_jobs:$SPACK_BUILD_JOBS"; fi
- - mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
# AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)


@@ -54,21 +54,6 @@ spack:
cuda:
version: [11.8.0]
compilers:
- compiler:
spec: gcc@11.4.0
paths:
cc: /usr/bin/gcc
cxx: /usr/bin/g++
f77: /usr/bin/gfortran
fc: /usr/bin/gfortran
flags: {}
operating_system: ubuntu20.04
target: aarch64
modules: []
environment: {}
extra_rpaths: []
specs:
# CPU
- adios
@@ -165,7 +150,7 @@ spack:
- swig@4.0.2-fortran
- sz3
- tasmanian
- tau +mpi +python
- tau +mpi +python +syscall
- trilinos +amesos +amesos2 +anasazi +aztec +belos +boost +epetra +epetraext +ifpack +ifpack2 +intrepid +intrepid2 +isorropia +kokkos +ml +minitensor +muelu +nox +piro +phalanx +rol +rythmos +sacado +stk +shards +shylu +stokhos +stratimikos +teko +tempus +tpetra +trilinoscouplings +zoltan +zoltan2 +superlu-dist gotype=long_long
- turbine
- umap
@@ -201,7 +186,7 @@ spack:
- flux-core +cuda
- hpctoolkit +cuda
- papi +cuda
- tau +mpi +cuda
- tau +mpi +cuda +syscall
# --
# - bricks +cuda # not respecting target=aarch64?
# - legion +cuda # legion: needs NVIDIA driver

View File

@@ -5,34 +5,6 @@ spack:
reuse: false
unify: false
compilers:
- compiler:
spec: oneapi@2023.2.1
paths:
cc: /opt/intel/oneapi/compiler/2023.2.1/linux/bin/icx
cxx: /opt/intel/oneapi/compiler/2023.2.1/linux/bin/icpx
f77: /opt/intel/oneapi/compiler/2023.2.1/linux/bin/ifx
fc: /opt/intel/oneapi/compiler/2023.2.1/linux/bin/ifx
flags: {}
operating_system: ubuntu20.04
target: x86_64
modules: []
environment: {}
extra_rpaths: []
- compiler:
spec: gcc@=11.4.0
paths:
cc: /usr/bin/gcc
cxx: /usr/bin/g++
f77: /usr/bin/gfortran
fc: /usr/bin/gfortran
flags: {}
operating_system: ubuntu20.04
target: x86_64
modules: []
environment: {}
extra_rpaths: []
packages:
all:
require: '%oneapi target=x86_64_v3'
@@ -181,7 +153,7 @@ spack:
- superlu-dist
- sz3
- tasmanian
- tau +mpi +python
- tau +mpi +python +syscall
- trilinos +amesos +amesos2 +anasazi +aztec +belos +boost +epetra +epetraext +ifpack +ifpack2 +intrepid +intrepid2 +isorropia +kokkos +ml +minitensor +muelu +nox +piro +phalanx +rol +rythmos +sacado +stk +shards +shylu +stokhos +stratimikos +teko +tempus +tpetra +trilinoscouplings +zoltan +zoltan2 +superlu-dist gotype=long_long
- turbine
- umap
@@ -227,12 +199,12 @@ spack:
- cabana +sycl ^kokkos +sycl +openmp cxxstd=17 +tests +examples
- kokkos +sycl +openmp cxxstd=17 +tests +examples
- kokkos-kernels build_type=Release %oneapi ^kokkos +sycl +openmp cxxstd=17 +tests +examples
- tau +mpi +opencl +level_zero ~pdt # tau: requires libdrm.so to be installed
- slate +sycl
- sundials +sycl cxxstd=17 +examples-install
- tau +mpi +opencl +level_zero ~pdt +syscall # tau: requires libdrm.so to be installed
# --
# - ginkgo +oneapi # InstallError: Ginkgo's oneAPI backend requires the DPC++ compiler as main CXX compiler.
# - hpctoolkit +level_zero # dyninst@12.3.0%gcc: /usr/bin/ld: libiberty/./d-demangle.c:142: undefined reference to `_intel_fast_memcpy'; can't mix intel-tbb@%oneapi with dyninst%gcc
# - sundials +sycl cxxstd=17 # sundials: include/sunmemory/sunmemory_sycl.h:20:10: fatal error: 'CL/sycl.hpp' file not found
- py-scipy


@@ -5,21 +5,6 @@ spack:
reuse: false
unify: false
compilers:
- compiler:
spec: gcc@9.4.0
paths:
cc: /usr/bin/gcc
cxx: /usr/bin/g++
f77: /usr/bin/gfortran
fc: /usr/bin/gfortran
flags: {}
operating_system: ubuntu20.04
target: ppc64le
modules: []
environment: {}
extra_rpaths: []
packages:
all:
require: "%gcc@9.4.0 target=ppc64le"
@@ -165,7 +150,7 @@ spack:
- swig@4.0.2-fortran
- sz3
- tasmanian
- tau +mpi +python # tau: has issue with `spack env depfile` build
- tau +mpi +python # +syscall fails: https://github.com/spack/spack/pull/40830#issuecomment-1790799772; tau: has issue with `spack env depfile` build
- trilinos +amesos +amesos2 +anasazi +aztec +belos +boost +epetra +epetraext +ifpack +ifpack2 +intrepid +intrepid2 +isorropia +kokkos +ml +minitensor +muelu +nox +piro +phalanx +rol +rythmos +sacado +stk +shards +shylu +stokhos +stratimikos +teko +tempus +tpetra +trilinoscouplings +zoltan +zoltan2 +superlu-dist gotype=long_long
- turbine
- umap
@@ -214,6 +199,7 @@ spack:
- caliper +cuda cuda_arch=70
- chai ~benchmarks ~tests +cuda cuda_arch=70 ^umpire ~shared
- ecp-data-vis-sdk ~rocm +adios2 ~ascent +hdf5 +vtkm +zfp ~paraview +cuda cuda_arch=70
- exago +mpi +python +raja +hiop ~rocm +cuda cuda_arch=70 ~ipopt ^hiop@1.0.0 ~sparse +mpi +raja ~rocm +cuda cuda_arch=70 #^raja@0.14.0
- flecsi +cuda cuda_arch=70
- ginkgo +cuda cuda_arch=70
- heffte +cuda cuda_arch=70


@@ -5,21 +5,6 @@ spack:
reuse: false
unify: false
compilers:
- compiler:
spec: gcc@=11.4.0
paths:
cc: /usr/bin/gcc
cxx: /usr/bin/g++
f77: /usr/bin/gfortran
fc: /usr/bin/gfortran
flags: {}
operating_system: ubuntu20.04
target: x86_64
modules: []
environment: {}
extra_rpaths: []
packages:
all:
require: '%gcc target=x86_64_v3'
@@ -255,15 +240,17 @@ spack:
specs:
# ROCM NOARCH
- hpctoolkit +rocm
- tau +mpi +rocm # tau: has issue with `spack env depfile` build
- tau +mpi +rocm +syscall # tau: has issue with `spack env depfile` build
# ROCM 908
- adios2 +kokkos +rocm amdgpu_target=gfx908
- amrex +rocm amdgpu_target=gfx908
- arborx +rocm amdgpu_target=gfx908
- cabana +rocm amdgpu_target=gfx908
- caliper +rocm amdgpu_target=gfx908
- chai ~benchmarks +rocm amdgpu_target=gfx908
- ecp-data-vis-sdk +paraview +vtkm +rocm amdgpu_target=gfx908
- exago +mpi +python +raja +hiop +rocm amdgpu_target=gfx908 ~ipopt cxxflags="-Wno-error=non-pod-varargs" ^hiop@1.0.0 ~sparse +mpi +raja +rocm amdgpu_target=gfx908
- gasnet +rocm amdgpu_target=gfx908
- ginkgo +rocm amdgpu_target=gfx908
- heffte +rocm amdgpu_target=gfx908
@@ -297,12 +284,14 @@ spack:
# - papi +rocm amdgpu_target=gfx908 # papi: https://github.com/spack/spack/issues/27898
# ROCM 90a
- adios2 +kokkos +rocm amdgpu_target=gfx90a
- amrex +rocm amdgpu_target=gfx90a
- arborx +rocm amdgpu_target=gfx90a
- cabana +rocm amdgpu_target=gfx90a
- caliper +rocm amdgpu_target=gfx90a
- chai ~benchmarks +rocm amdgpu_target=gfx90a
- ecp-data-vis-sdk +paraview +vtkm +rocm amdgpu_target=gfx90a
- exago +mpi +python +raja +hiop +rocm amdgpu_target=gfx90a ~ipopt cxxflags="-Wno-error=non-pod-varargs" ^hiop@1.0.0 ~sparse +mpi +raja +rocm amdgpu_target=gfx90a
- gasnet +rocm amdgpu_target=gfx90a
- ginkgo +rocm amdgpu_target=gfx90a
- heffte +rocm amdgpu_target=gfx90a


@@ -5,21 +5,6 @@ spack:
reuse: false
unify: false
compilers:
- compiler:
spec: gcc@=11.4.0
paths:
cc: /usr/bin/gcc
cxx: /usr/bin/g++
f77: /usr/bin/gfortran
fc: /usr/bin/gfortran
flags: {}
operating_system: ubuntu20.04
target: x86_64
modules: []
environment: {}
extra_rpaths: []
packages:
all:
require: '%gcc target=x86_64_v3'
@@ -66,6 +51,8 @@ spack:
require: "@3.4.4"
vtk-m:
require: "+examples"
visit:
require: "~gui"
cuda:
version: [11.8.0]
paraview:
@@ -172,7 +159,7 @@ spack:
- swig@4.0.2-fortran
- sz3
- tasmanian
- tau +mpi +python
- tau +mpi +python +syscall
- trilinos +amesos +amesos2 +anasazi +aztec +belos +boost +epetra +epetraext +ifpack +ifpack2 +intrepid +intrepid2 +isorropia +kokkos +ml +minitensor +muelu +nox +piro +phalanx +rol +rythmos +sacado +stk +shards +shylu +stokhos +stratimikos +teko +tempus +tpetra +trilinoscouplings +zoltan +zoltan2 +superlu-dist gotype=long_long
- turbine
- umap
@@ -207,7 +194,7 @@ spack:
- flux-core +cuda
- hpctoolkit +cuda
- papi +cuda
- tau +mpi +cuda
- tau +mpi +cuda +syscall
# --
# - legion +cuda # legion: needs NVIDIA driver
@@ -220,6 +207,7 @@ spack:
- cusz +cuda cuda_arch=80
- dealii +cuda cuda_arch=80
- ecp-data-vis-sdk ~rocm +adios2 ~ascent +hdf5 +vtkm +zfp +paraview +cuda cuda_arch=80 # +ascent fails because fides fetch error
- exago +mpi +python +raja +hiop ~rocm +cuda cuda_arch=80 ~ipopt ^hiop@1.0.0 ~sparse +mpi +raja ~rocm +cuda cuda_arch=80 #^raja@0.14.0
- flecsi +cuda cuda_arch=80
- ginkgo +cuda cuda_arch=80
- heffte +cuda cuda_arch=80
@@ -303,9 +291,10 @@ spack:
# ROCM NOARCH
- hpctoolkit +rocm
- tau +mpi +rocm # tau: has issue with `spack env depfile` build
- tau +mpi +rocm +syscall # tau: has issue with `spack env depfile` build
# ROCM 908
- adios2 +kokkos +rocm amdgpu_target=gfx908
- amrex +rocm amdgpu_target=gfx908
- arborx +rocm amdgpu_target=gfx908
- cabana +rocm amdgpu_target=gfx908
@@ -341,10 +330,12 @@ spack:
- paraview +rocm amdgpu_target=gfx908
# - vtk-m ~openmp +rocm amdgpu_target=gfx908 # vtk-m: https://github.com/spack/spack/issues/40268
# --
# - exago +mpi +python +raja +hiop +rocm amdgpu_target=gfx908 ~ipopt cxxflags="-Wno-error=non-pod-varargs" ^hiop@1.0.0 ~sparse +mpi +raja +rocm amdgpu_target=gfx908 # hiop: CMake Error at cmake/FindHiopHipLibraries.cmake:23 (find_package)
# - lbann ~cuda +rocm amdgpu_target=gfx908 # aluminum: https://github.com/spack/spack/issues/38807
# - papi +rocm amdgpu_target=gfx908 # papi: https://github.com/spack/spack/issues/27898
# ROCM 90a
- adios2 +kokkos +rocm amdgpu_target=gfx90a
- amrex +rocm amdgpu_target=gfx90a
- arborx +rocm amdgpu_target=gfx90a
- cabana +rocm amdgpu_target=gfx90a
@@ -380,6 +371,7 @@ spack:
- paraview +rocm amdgpu_target=gfx90a
# - vtk-m ~openmp +rocm amdgpu_target=gfx90a # vtk-m: https://github.com/spack/spack/issues/40268
# --
# - exago +mpi +python +raja +hiop +rocm amdgpu_target=gfx90a ~ipopt cxxflags="-Wno-error=non-pod-varargs" ^hiop@1.0.0 ~sparse +mpi +raja +rocm amdgpu_target=gfx90a # hiop: CMake Error at cmake/FindHiopHipLibraries.cmake:23 (find_package)
# - lbann ~cuda +rocm amdgpu_target=gfx90a # aluminum: https://github.com/spack/spack/issues/38807
# - papi +rocm amdgpu_target=gfx90a # papi: https://github.com/spack/spack/issues/27898


@@ -89,7 +89,7 @@ spack:
- build-job:
variables:
CI_GPG_KEY_ROOT: /etc/protected-runner
tags: [ "macos-ventura", "apple-clang-14", "aarch64-macos" ]
tags: [ "macos-ventura", "apple-clang-15", "aarch64-macos" ]
cdash:
build-group: Machine Learning MPS


@@ -1,9 +1,4 @@
spack:
config:
# allow deprecated versions in concretizations
# required for zlib
deprecated: true
view: false
packages:
all:
@@ -13,36 +8,36 @@ spack:
definitions:
- gcc_system_packages:
- matrix:
- - zlib
- zlib@1.2.8
- zlib@1.2.8 cflags=-O3
- - gmake
- gmake@4.3
- gmake@4.3 cflags=-O3
- tcl
- tcl ^zlib@1.2.8 cflags=-O3
- tcl ^gmake@4.3 cflags=-O3
- hdf5
- hdf5~mpi
- hdf5+hl+mpi ^mpich
- trilinos
- trilinos +hdf5 ^hdf5+hl+mpi ^mpich
- gcc@12.1.0
- gcc@12
- mpileaks
- lmod
- macsio@1.1+scr^scr@2.0.0~fortran^silo~fortran^hdf5~fortran
- ['%gcc@11.3.0']
- macsio@1.1+scr ^scr@2.0.0~fortran ^silo~fortran ^hdf5~fortran
- ['%gcc@11']
- gcc_old_packages:
- zlib%gcc@10.4.0
- gmake%gcc@10
- clang_packages:
- matrix:
- [zlib, tcl ^zlib@1.2.8]
- ['%clang@14.0.0']
- [gmake, tcl ^gmake@4.3]
- ['%clang@14']
- gcc_spack_built_packages:
- matrix:
- [netlib-scalapack]
- [^mpich, ^openmpi]
- [^openblas, ^netlib-lapack]
- ['%gcc@12.1.0']
- ['%gcc@12']
- matrix:
- [py-scipy^openblas, armadillo^openblas, netlib-lapack, openmpi, mpich, elpa^mpich]
- ['%gcc@12.1.0']
- [py-scipy ^openblas, armadillo ^openblas, netlib-lapack, openmpi, mpich, elpa ^mpich]
- ['%gcc@12']
specs:
- $gcc_system_packages
- $gcc_old_packages
@@ -53,7 +48,7 @@ spack:
pipeline-gen:
- build-job:
image:
name: ghcr.io/spack/tutorial-ubuntu-22.04:v2023-05-07
name: ghcr.io/spack/tutorial-ubuntu-22.04:v2023-10-30
entrypoint: ['']
cdash:
build-group: Spack Tutorial
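For context, each `matrix:` in the definitions above expands to the cross product of its rows before the specs are concretized. A rough, illustrative Python sketch of that expansion (not Spack's actual implementation):

```python
import itertools

matrix = [
    ["gmake@4.3", "tcl"],  # row 1: package constraints
    ["%gcc@11"],           # row 2: compiler constraints
]

# Every entry of each row is combined with every entry of the others.
specs = [" ".join(combo) for combo in itertools.product(*matrix)]
print(specs)  # ['gmake@4.3 %gcc@11', 'tcl %gcc@11']
```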

Some files were not shown because too many files have changed in this diff.