Compare commits

...

319 Commits

Author SHA1 Message Date
Dave Keeshan
277f8596de yosys: Update to version 0.46, also include 0.43, 0.44 and 0.45 (#47200) 2024-10-26 21:56:12 +02:00
Todd Gamblin
c8bebff7f5 Add -t short option for spack --backtrace (#47227)
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
2024-10-26 09:16:31 +02:00
Paul
61d2d21acc Add Go 1.23.2, 1.22.8, and 1.22.7 (#47225) 2024-10-25 14:15:35 -06:00
John W. Parent
7b27aed4c8 Normalize Spack Win entry points (#38648)
* Normalize Spack Win entry points

Currently Spack has multiple entry points on Windows that, in addition
to differing from *nix implementations, differ from shell to shell on
Windows. This is confusing for new users and generally unnecessary.
This PR adds a normal setup script for the batch shell while preserving
the previous "click from file explorer for spack shell" behavior.
It additionally adds a shell title to both PowerShell and cmd, letting
users know this is a Spack shell

* remove doskeys
2024-10-25 15:23:29 -04:00
Dom Heinzeller
ad0b256407 Intel/Oneapi compilers: suppress warnings when using Cray wrappers (#47046)
In #44588 we added logic to suppress deprecation warnings for the
Intel classic compilers. This depended on matching against:

* The compiler names (looking for icc, icpc, ifort)
* The compiler version

When using an Intel compiler with the Cray fortran wrappers, the first
check always fails. To support using the fortran wrappers (in combination
with the classic Intel compilers), we remove the first check and
suppress if just the version matches. This works because:

* The newer compilers like icx can handle (ignore) the flags that
  suppress deprecation warnings
* The Cray wrappers pass the underlying compiler version (e.g. they
  report what icc would report)
2024-10-25 12:17:49 -07:00
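To illustrate the relaxed check, a minimal sketch (illustrative only, not Spack's actual code; the flag and version threshold are assumptions based on the classic-compiler deprecation warnings):

```python
def deprecation_suppression_flags(reported_version: str) -> list:
    """Sketch: decide on suppression flags from the version alone.

    The Cray wrappers report the underlying classic compiler's version,
    so a version-only match works even when the executable name is not
    icc/icpc/ifort. The flag below is the one icc accepts to silence
    its own deprecation warning; treat it as an assumption here.
    """
    major = int(reported_version.split(".")[0])
    return ["-diag-disable=10441"] if major >= 2021 else []

print(deprecation_suppression_flags("2021.7.0"))  # ['-diag-disable=10441']
print(deprecation_suppression_flags("19.1.3"))    # []
```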
Gregory Lee
a2a3a83a26 Packages/javacerts (#47201)
* new openjdk variant to symlink system certificates

* Update var/spack/repos/builtin/packages/openjdk/package.py

Co-authored-by: Alec Scott <hi@alecbcs.com>

---------

Co-authored-by: Alec Scott <hi@alecbcs.com>
2024-10-25 12:45:14 -06:00
Harmen Stoppels
7d86670826 ensure write_fd.close() isn't called when sys.std* cannot be redirected 2024-10-25 10:16:44 -07:00
Harmen Stoppels
ae306b73c3 Avoid a socket to communicate effectively a bit 2024-10-25 10:16:44 -07:00
Harmen Stoppels
b63cbe4e6e Replace MultiProcessFd with Connection objects
Connection objects are Python version, platform and multiprocessing
start method independent, so better to use those than a mix of plain
file descriptors and inadequate guesses in the child process whether it
was forked or not.

This also allows us to delete the now redundant MultiProcessFd class,
hopefully making things a bit easier to follow.
2024-10-25 10:16:44 -07:00
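For context, a minimal standard-library sketch of the `Connection` objects this series switches to; they behave the same regardless of platform, Python version, or multiprocessing start method:

```python
import multiprocessing

def worker(conn):
    # A Connection works across fork/spawn and needs no guessing about
    # how the child was started, unlike raw file descriptors.
    conn.send("child output")
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = multiprocessing.Pipe()
    p = multiprocessing.Process(target=worker, args=(child_conn,))
    p.start()
    print(parent_conn.recv())  # -> "child output"
    p.join()
```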
dependabot[bot]
ef220daaca build(deps): bump actions/checkout from 4.2.1 to 4.2.2 (#47185)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4.2.1 to 4.2.2.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](eef61447b9...11bd71901b)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-25 09:48:50 -07:00
Harmen Stoppels
e86a3b68f7 file_cache.py: allow read transaction on uninitialized cache (#47212)
This allows the following

```python
cache.init_entry("my/cache")
with cache.read_transaction("my/cache") as f:
    data = f.read() if f is not None else None
```

mirroring `write_transaction`, which returns a tuple `(old, new)` where
`old` is `None` if the cache file did not exist yet.

The alternative that requires less defensive programming on the call
site would be to create the "old" file upon first read, but I did not
want to think about how to safely and atomically create the file, and
it's not unthinkable that an empty file is an invalid format (for
instance the call site may expect a JSON file, which requires at least
the bytes `{}`).
2024-10-25 17:10:14 +02:00
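For comparison, the `write_transaction` pattern the message mirrors might look like this (a sketch based on the description above; exact call sites vary):

```python
cache.init_entry("my/cache")
with cache.write_transaction("my/cache") as (old, new):
    # `old` is None if the cache file did not exist yet, matching the
    # new read_transaction behavior on an uninitialized cache.
    previous = old.read() if old is not None else None
    new.write(previous or "{}")
```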
Dave Keeshan
7319408bc7 Add version 0.0.3836 (#47204) 2024-10-25 08:30:08 +02:00
dependabot[bot]
b34159348f build(deps): bump actions/setup-python from 5.2.0 to 5.3.0 (#47209)
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5.2.0 to 5.3.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](f677139bbe...0b93645e9f)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-25 07:36:14 +02:00
Jordan Galby
f13d998d21 Add spack short version in config variables (#47016) 2024-10-25 07:34:59 +02:00
Jon Rood
2912d4a661 tioga: add v1.2.0. (#47208) 2024-10-24 22:04:33 -06:00
Jon Rood
8e2ec58859 exawind: add v1.1.0. (#47207) 2024-10-24 22:00:22 -06:00
Jon Rood
01eb26578b amr-wind: add v3.1.6. (#47205) 2024-10-24 21:53:54 -06:00
Jon Rood
fe0a8a1735 nalu-wind: add v2.1.0. (#47206) 2024-10-24 21:39:57 -06:00
Adam J. Stewart
d523f12e99 py-jupyter: add v1.1.1 (#47194) 2024-10-25 00:42:39 +02:00
Tamara Dahlgren
1b0631b69e Env help: expand and refine subcommand help and descriptions (#47089)
This PR is in response to a question in the `environments` slack channel (https://spackpm.slack.com/archives/CMHK7MF51/p1729200068557219) about inadequate CLI help/documentation for one specific subcommand.

This PR uses the approach I took for the descriptions and help for `spack test` subcommands.  Namely, I use the first line of the relevant docstring as the description, which is shown per subcommand in `spack env -h`, and the entire docstring as the help.  I then added help where it seemed appropriate.  I also tweaked argument docstrings to tighten them up, make them consistent with similar arguments elsewhere in the command, and elaborate where it seemed important.  (The only subcommand I didn't touch is `loads`.)

For example, before:
```
$ spack env update -h
usage: spack env update [-hy] env

positional arguments:
  env               name or directory of the environment to activate

optional arguments:
  -h, --help        show this help message and exit
  -y, --yes-to-all  assume "yes" is the answer to every confirmation request
```

After the changes in this PR:
```
$ spack env update -h
usage: spack env update [-hy] env

update the environment manifest to the latest schema format

    update the environment to the latest schema format, which may not be
    readable by older versions of spack

    a backup copy of the manifest is retained in case there is a need to revert
    this operation
    

positional arguments:
  env               name or directory of the environment

optional arguments:
  -h, --help        show this help message and exit
  -y, --yes-to-all  assume "yes" is the answer to every confirmation request
```

---------

Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
2024-10-24 13:55:00 -07:00
AMD Toolchain Support
65bb3a12ea hdf5: disable _Float16 support for aocc (#47123) 2024-10-24 14:44:09 -06:00
Harmen Stoppels
5ac2b8a178 compilers.yaml: require list of strings for modules (#47197) 2024-10-24 13:28:38 -06:00
Martin Lang
b063765c2e miniforge3: wrong sbang replacement (#47178) 2024-10-24 21:26:04 +02:00
Tamara Dahlgren
4511052d26 py-webdataset: new package (#47187) 2024-10-24 13:22:05 -06:00
Adam J. Stewart
3804d128e7 py-lightning-uq-box: add new package (#47132) 2024-10-24 20:08:18 +02:00
Thomas-Ulrich
f09ce00fe1 seissol: new package (#41176)
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
2024-10-24 14:36:37 +02:00
Tamara Dahlgren
cdde7c3ccf py-braceexpand: new package (#47186) 2024-10-24 04:38:56 -06:00
Laura Weber
c52c0a482f neartree: added version 5.1.1, added Makefile patches to fix libtool error (#47155) 2024-10-24 04:08:50 -06:00
Dr Marco Claudio De La Pierre
8a75cdad9a supermagic: new package (#47176) 2024-10-24 04:04:29 -06:00
Kyle Knoepfel
e0eea48ccf Restore bold uncolored font face (#47108)
Commit aa0825d642 accidentally added a semicolon
to the ANSI escape sequence even if the color code was `None` or unknown, breaking the
bold, uncolored font face.  This PR restores the old behavior.

---------

Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
2024-10-24 09:11:43 +00:00
Harmen Stoppels
61cbfc1da0 bootstrap: remove all system gnupg/patchelf executables (#47165) 2024-10-24 08:56:42 +02:00
Harmen Stoppels
d8c8074762 bootstrap: add clingo 3.13 binaries and more (#47126) 2024-10-24 08:55:14 +02:00
Paul R. C. Kent
faeef6272d llvm: add v19.1.2 , v19.1.1 (#47113) 2024-10-24 00:20:18 -06:00
Massimiliano Culpo
f6ad1e23f8 Improve Database.query* methods (#47116)
* Add type hints to all query* methods
* Inline docstrings
* Change defaults from `any` to `None` so they can be type hinted in old Python
* Pre-filter on given hashes instead of iterating over all db specs
* Fix a bug where the `--origin` option of uninstall had no effect
* Fix a bug where query args were not applied when searching by concrete spec

Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2024-10-24 08:13:07 +02:00
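The `any` → `None` default swap is about type hints: the builtin `any` is a function, so it cannot double as a typed sentinel, while `None` can be annotated as `Optional` even on old Python. A small illustrative sketch (names are hypothetical, not Spack's):

```python
from typing import List, Optional, Tuple

def query(records: List[Tuple[str, bool]],
          explicit: Optional[bool] = None) -> List[str]:
    # None means "no filter"; a bool filters on the explicit flag.
    return [name for name, exp in records
            if explicit is None or exp == explicit]

db = [("zlib", True), ("cmake", False)]
print(query(db))                 # ['zlib', 'cmake']
print(query(db, explicit=True))  # ['zlib']
```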
shanedsnyder
a0173a5a94 darshan-runtime,darshan-util,py-darshan: new package checksums for darshan-3.4.6 release (#47068)
* new packages for darshan-3.4.6 release

* set darshan-util dependencies in py-darshan
2024-10-24 06:48:29 +02:00
Tim Haines
225be45687 dire: Update Boost dependency (#47129)
* dire: Update Boost dependency

The only version currently available is 2.004, and it does not use Boost.

* Remove unused Boost import
2024-10-24 06:38:34 +02:00
Harmen Stoppels
3581821d3c py-parso: new version and fix forward compat bounds (#47171)
py-parso needs grammar files for each python version, meaning that
every future release needs a forward compat bound.
2024-10-24 06:08:33 +02:00
Harmen Stoppels
79ad6f6b48 env: continue to mark non-roots as implicitly installed on partial env installs (#47183)
Fixes a change in behavior/bug in
70412612c7, where partial environment
installs would mark the selected spec as explicitly installed, even if
it was not a root of the environment.

The desired behavior is that roots by definition are the to be
explicitly installed specs. The specs on the `spack -e ... install x`
command line are just filters for partial installs, so leave them
implicitly installed if they aren't roots.
2024-10-23 21:17:40 +00:00
Andrew W Elble
6320993409 llvm-amdgpu: support building on aarch64 (#47124)
* llvm-amdgpu: support building on aarch64

* missed removing a line
2024-10-23 14:39:45 -06:00
Scott Wittenburg
1472dcace4 ci: Remove deprecated logic from the ci module (#47062)
ci: Remove deprecated logic from the ci module

Remove the following from the ci module, schema, and tests:

- deprecated ci stack and handling of old ci config
- deprecated mirror handling logic
- support for artifacts buildcache
- support for temporary storage url
2024-10-23 12:50:55 -06:00
Matthieu Dorier
755c113c16 librdkafka: added version 2.6.0 (#47181) 2024-10-23 12:49:15 -06:00
Mosè Giordano
43bcb5056f extrae: remove duplicate unconditional dep on papi (#47179) 2024-10-23 20:31:13 +02:00
kwryankrattiger
fd1c95a432 ParaView: Various fixes to better support no mpi and fides builds (#47114)
* ParaView: Explicitly set the ENABLE_MPI on/off
* Disallow MPI in the DAG when ~mpi
* @5.13 uses 'remove_children'; use pugixml@1.11: (see #47098)
* cloud_pipelines/stacks/data-vis-sdk: paraview +raytracing: add +adios2 +fides

Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
2024-10-23 11:46:09 -06:00
Olivier Cessenat
5b5be0582f gxsview: add v2024.03.15 (#46901)
* gxsview: new version

* gxsview 2024 patches and qt6 conflicts

* gxsview 2024 demands vtk 9 minimum

* Removing the -lvtkRenderingQt for 2024.03.15

* gxsview: fontconfig inc/lib dirs added to gui/gui.pro

---------

Co-authored-by: Olivier Cessenat <cessenat@jliana.magic>
2024-10-23 19:30:23 +02:00
Thorsten Hater
aed1a3f980 pybind11-stubgen: Add 2.5.1 (#47162) 2024-10-23 19:13:23 +02:00
Adam J. Stewart
978be305a7 py-torchmetrics: add v1.5.1 (#47164) 2024-10-23 18:55:45 +02:00
AMD Toolchain Support
7ddb40a804 cp2k: apply a patch to fix access to unallocated arrays (#47170) 2024-10-23 17:46:47 +02:00
Adam J. Stewart
37664b36da py-grayskull: add v2.7.3 (#47166) 2024-10-23 17:38:13 +02:00
Todd Gamblin
f33912d707 mypy: work around typing issues with functools.partial (#47160) 2024-10-23 06:33:09 -06:00
dependabot[bot]
e785d3716e build(deps): bump sphinx from 7.4.7 to 8.1.3 in /lib/spack/docs (#47159)
Bumps [sphinx](https://github.com/sphinx-doc/sphinx) from 7.4.7 to 8.1.3.
- [Release notes](https://github.com/sphinx-doc/sphinx/releases)
- [Changelog](https://github.com/sphinx-doc/sphinx/blob/v8.1.3/CHANGES.rst)
- [Commits](https://github.com/sphinx-doc/sphinx/compare/v7.4.7...v8.1.3)

---
updated-dependencies:
- dependency-name: sphinx
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-23 06:18:20 -06:00
dependabot[bot]
328787b017 build(deps): bump types-six in /.github/workflows/requirements/style (#47158)
Bumps [types-six](https://github.com/python/typeshed) from 1.16.21.20240513 to 1.16.21.20241009.
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-six
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-23 06:14:11 -06:00
Lehman Garrison
67a40c6cc4 py-asdf: add 3.5.0, and update py-asdf-standard to match (#47156)
* py-asdf: add 3.5.0, and update py-asdf-standard to match
2024-10-23 01:43:04 -06:00
Laura Weber
eccf97af33 cvector: added version 1.0.3.1, added Makefile patch to fix libtool error (#47154) 2024-10-23 01:23:18 -06:00
Laura Weber
e63e8b5efa cqrlib: added version 1.1.3, added Makefile patch to fix libtool error (#47153) 2024-10-23 01:17:14 -06:00
Andrew W Elble
bb25210b62 py-jaxlib: backport fix for abseil-cpp on aarch64 (#47125) 2024-10-23 00:59:44 -06:00
Tim Haines
f8ab94061f gcta: use intel-oneapi-mkl (#47127)
intel-mkl fails to concretize with the 'Cannot select a single version'
error. My guess would be because all of its versions are marked
deprecated.
2024-10-23 00:36:38 -06:00
Massimiliano Culpo
ed15b73c3b Remove spurious warning, introduced in #46992 (#47152)
fixes #47135

Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2024-10-23 07:33:09 +02:00
Mathew Cleveland
1f6da280b7 draco: add v7.19.0 (#47032)
Co-authored-by: Cleveland <cleveland@lanl.gov>
Co-authored-by: Kelly (KT) Thompson <KineticTheory@users.noreply.github.com>
2024-10-23 05:21:29 +02:00
Martin Lang
1ad5739094 libvdwxc: fix broken patch (#47119) 2024-10-23 00:53:58 +02:00
Richard Berger
06f33dcdbb lammps: add new version 20240829.1 (#47099) 2024-10-23 00:42:40 +02:00
Adam J. Stewart
582254f891 sox: fix build with Apple Clang 15+ (#47128)
* sox: fix build with Apple Clang 15+

---------

Co-authored-by: adamjstewart <adamjstewart@users.noreply.github.com>
2024-10-22 14:33:09 -07:00
Adam J. Stewart
31694fe9bd py-torchaudio: fix build with Apple Clang 15+ (#47130) 2024-10-22 14:31:38 -07:00
Alec Scott
a53a14346e gopls: new package (#47137) 2024-10-22 14:19:06 -07:00
Alec Scott
c102ff953b goimports: new package (#47138) 2024-10-22 14:17:18 -07:00
Ashim Mahara
59a2a87937 py-uv: relaxing rust dependency (#47148) 2024-10-22 13:54:31 -07:00
Kenneth Moreland
d86feeac54 paraview: Add new variant +fixes for enabling Fides (#46971)
When building ParaView with ADIOS2 and allowing VTK-m to be
built, also build Fides. This reads ADIOS2 files with a
particular JSON schema, but it requires VTK-m to read data.
2024-10-22 22:34:41 +02:00
suzannepaterno
43e26b330c totalview: add v2024.3-linux-arm64, v2024.3-powerle, v2024.3-x86-64 (#47030)
* adding 2024.3, including the new release of TotalView
2024-10-22 13:04:16 -07:00
Wouter Deconinck
9c8b5f58c0 py-datrie: patch to allow gcc-14 compilation (#47017) 2024-10-22 11:38:57 -07:00
dependabot[bot]
50aa5a7b24 build(deps): bump black from 24.8.0 to 24.10.0 in /lib/spack/docs (#47118)
Bumps [black](https://github.com/psf/black) from 24.8.0 to 24.10.0.
- [Release notes](https://github.com/psf/black/releases)
- [Changelog](https://github.com/psf/black/blob/main/CHANGES.md)
- [Commits](https://github.com/psf/black/compare/24.8.0...24.10.0)

---
updated-dependencies:
- dependency-name: black
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-22 18:58:31 +02:00
dependabot[bot]
ffab156366 build(deps): bump black in /.github/workflows/requirements/style (#47117)
Bumps [black](https://github.com/psf/black) from 24.8.0 to 24.10.0.
- [Release notes](https://github.com/psf/black/releases)
- [Changelog](https://github.com/psf/black/blob/main/CHANGES.md)
- [Commits](https://github.com/psf/black/compare/24.8.0...24.10.0)

---
updated-dependencies:
- dependency-name: black
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-22 18:58:14 +02:00
John W. Parent
e147679d40 Libmng: Restore Autotools system (#46994)
* Libmng: Restore Autotools system

CMake, when building its Qt GUI, depends on Qt, which in turn depends on libmng, a CMake-based build. To avoid this cyclic dependency, we re-introduce libmng's Autotools build into Spack and require that, when Qt is built as a CMake dependency, libmng is built with Autotools

* Ensure autotools constraint is limited to non-Windows

* refactor qt-libmng relation from CMake
2024-10-22 09:39:48 -07:00
Harmen Stoppels
ef9bb7ebe5 spack arch: add --family --generic flags (#47078)
This allows users to do:

```
spack install ... target=$(spack arch --target --family)
spack install ... arch=$(spack arch --family)

spack install ... target=$(spack arch --target --generic)
spack install ... arch=$(spack arch --generic)
```

Deprecate `--generic-target` in favor of `--generic --target`
2024-10-22 14:13:11 +00:00
Massimiliano Culpo
fc443ea30e builtin repo: remove some uses of spec.compiler (#47061)
This commit remove all the uses of spec.compiler that
can be easily substituted by a more idiomatic approach,
e.g. using spec.satisfies or directives

Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2024-10-22 15:37:17 +02:00
Adam J. Stewart
b601bace24 py-torchdata: add v0.9.0 (#47120) 2024-10-22 13:32:41 +02:00
Harmen Stoppels
cbad3d464a buildcache: recognize . and .. as paths instead of names (#47105) 2024-10-22 13:05:06 +02:00
Richard Berger
b56e792295 py-sphinxcontrib-spelling: new package (#46402)
* py-sphinxcontrib-spelling: new package

* Dependency enchant: Add missing dep on pkgconfig

---------

Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
2024-10-22 01:18:33 -06:00
Stephen Nicholas Swatman
5b279c0732 acts: add version v37.1.0 (#47104)
No updates to any dependencies this week.
2024-10-22 08:57:54 +02:00
afzpatel
149753a52e kokkos: change build_environment.get_cmake_prefix_path to build_systems.cmake.get_cmake_prefix_path(self) (#47112) 2024-10-22 04:46:26 +02:00
Thomas-Ulrich
b582eacbc1 fix unzip%nvhpc (#47109) 2024-10-22 04:44:14 +02:00
Luke Diorio-Toth
037196c2bd infernal: add version 1.1.5 (#47028) 2024-10-21 16:32:44 -07:00
Tamara Dahlgren
d9e8c5f13e hip stand-alone test: simplify setting CMAKE_PREFIX_PATH (#46856) 2024-10-21 14:15:33 -07:00
Peter Scheibel
275d1d88f4 avoid double closing of fd in sub-processes (#47035)
Both `multiprocessing.connection.Connection.__del__` and `io.IOBase.__del__` called `os.close` on the same file descriptor. As of Python 3.13, this is an explicit warning. Ensure we close once by using `os.fdopen(..., closefd=False)`.
2024-10-21 18:44:28 +00:00
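A minimal sketch of the `closefd=False` idea (standard library only; not the Spack code itself):

```python
import os

r, w = os.pipe()
# closefd=False means the file object wraps the descriptor without
# owning it, so io.IOBase.__del__ will not call os.close(r) a second
# time when another owner (e.g. a Connection) closes it.
f = os.fdopen(r, "rb", closefd=False)
f.close()    # closes only the Python file object
os.close(r)  # the descriptor itself is still ours to close
os.close(w)
```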
Tom Scogland
a07d42d35b Devtools darwin (#46910)
* stacks: add a stack for devtools on darwin

After getting this whole mess building on darwin, let's keep it that
way, and maybe make it so we have some non-ML darwin binaries in spack
as well.

* reuse: false for devtools

* dtc: fix darwin dylib name and id

On mac the convention is `lib<name>.<num>.dylib`, while the makefile
creates a num-suffixed one by default. The id in the file is also a
local name rather than rewritten to the full path; this fixes both
problems.

* node-js: make whereis more deterministic

* relocation(darwin): catch Mach-O load failure

The MachO library can throw an exception rather than return no headers;
this happened in an ELF file in the test data of go-bootstrap.  Try
catching the exception and moving on for now.  We may also need to look
into why we're trying to rewrite an ELF file.

* qemu: add darwin flags to clear out warnings

There's a build failure for qemu in CI, but it's invisible because of
the immense mass of warning output.  Explicitly specify the target macos
version and remove the extraneous unknown-warning-option flag.

* dtc: libyaml is also a link dependency

libyaml is required at runtime to run the dtc binary, lack of it caused
the ci for qemu to fail when the library wasn't found.
2024-10-21 17:32:14 +00:00
Harmen Stoppels
19ad29a690 bootstrap: handle a new edge case of binary python packages with missing python-venv (#47094)
relevant for clingo installed without gcc-runtime and python-venv, which
is done for good reasons.
2024-10-21 10:46:13 -06:00
Massimiliano Culpo
4187c57250 Fix broken spack find -u (#47102)
fixes #47101

The bug was introduced in #33495, where `spack find` was not updated,
and wasn't caught by unit tests.

Now a Database can accept a custom predicate to select the installation
records. A unit test is added to prevent regressions. The weird convention
of having `any` as a default value has been replaced by the more commonly
used `None`.

Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2024-10-21 18:03:57 +02:00
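The custom-predicate idea reads roughly like this (illustrative sketch, not the actual Database API):

```python
def query(records, predicate=None):
    # None keeps the old "select everything" behavior; callers like
    # `spack find -u` can pass their own selection function.
    return [r for r in records if predicate is None or predicate(r)]

records = [{"name": "zlib", "explicit": True},
           {"name": "cmake", "explicit": False}]
print(query(records, lambda r: not r["explicit"]))
# [{'name': 'cmake', 'explicit': False}]
```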
Valentin Volkl
590be9bba1 root: fix variant detection for spack external find (#47011)
* root: fix variant detection for external

A few fixes (possibly non-exhaustive) to `spack external find root`.
Several variants have had `when=` clauses added that need to be
propagated to determine_variants. The previously used
Version.satisfies("") method also seems to have been removed. It's
slightly cumbersome that there is no self.spec to use in
determine_variants, but comparisons using Version(version_str) work, at
least

* remove debug printout
2024-10-21 09:35:27 -05:00
Harmen Stoppels
3edd68d981 docs: do not promote build_systems/* at all (#47111) 2024-10-21 13:40:29 +02:00
Harmen Stoppels
5ca0e94bdd docs: tune ranking further (#47110)
promote hand-written docs, demote generated "docs" for sources, modules, packages.
2024-10-21 13:21:13 +02:00
Harmen Stoppels
f6c9d98c8f docs search: rank api lowest and generated commands low (#47107) 2024-10-21 12:02:54 +02:00
Stephen Sachs
9854c9e5f2 Build wrf%oneapi in aws-pcluster-x86_64_v4 stack (#47075) 2024-10-21 02:56:36 -06:00
Jordan Galby
e5a602c1bb Modules suffixes config are now spec format strings (#38411) 2024-10-21 09:08:59 +02:00
Jordan Galby
37fe3b4984 Add old gtkplus 3.22.30 (#40310)
This makes it compatible with external glib 2.56 (rhel7/rhel8).

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
2024-10-21 08:40:42 +02:00
AMD Toolchain Support
a00fddef4e lammps: updates for AOCC-5 and zen5 (#47014)
Co-authored-by: viveshar <vivek.sharma2@amd.com>
2024-10-21 08:26:53 +02:00
Tamara Dahlgren
260b36e272 Docs: clarify include path options (#47083) 2024-10-21 07:26:18 +02:00
Adam J. Stewart
117480dba9 py-geocube: add v0.7.0 (#47100) 2024-10-20 20:56:41 +02:00
snehring
bc75f23927 gtkplus: swap to at-spi2-core (#47026)
Signed-off-by: Shane Nehring <snehring@iastate.edu>
2024-10-19 17:17:31 +02:00
Wouter Deconinck
b0f1a0eb7c pkgs: homepage fixes for ill-formed urls (#47038) 2024-10-19 17:16:55 +02:00
Adam J. Stewart
4d616e1168 py-torchmetrics: add v1.5.0 (#47095) 2024-10-19 09:15:52 -06:00
Sreenivasa Murthy Kolam
4de8344c16 hipsolver: add version 6.2.1 for rocm-6.2.1 (#47076) 2024-10-19 17:13:27 +02:00
Miroslav Stoyanov
411ea019f1 heffte: Update @develop for newer cmake (#47067) 2024-10-19 17:12:41 +02:00
Taylor Asplund
296f99d800 icon: add 2024.07 & 2024.10 (#47092) 2024-10-19 17:09:00 +02:00
Martin Diehl
ca4df91e7d damask: add 3.0.1 (#47093) 2024-10-19 17:08:25 +02:00
Harmen Stoppels
9b8c06a049 spack external find: show backtrace on error when --backtrace (#47082) 2024-10-19 15:45:59 +02:00
dependabot[bot]
011ff48f82 build(deps): bump python-levenshtein in /lib/spack/docs (#46494)
Bumps [python-levenshtein](https://github.com/rapidfuzz/python-Levenshtein) from 0.25.1 to 0.26.0.
- [Release notes](https://github.com/rapidfuzz/python-Levenshtein/releases)
- [Changelog](https://github.com/rapidfuzz/python-Levenshtein/blob/main/HISTORY.md)
- [Commits](https://github.com/rapidfuzz/python-Levenshtein/compare/v0.25.1...v0.26.0)

---
updated-dependencies:
- dependency-name: python-levenshtein
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-19 12:12:13 +02:00
Pranav Sivaraman
adcd05b365 sccache: new package (ccache-like tool) (#47090)
* sccache: add new package

* sccache: add older versions and minimum rust versions

* sccache: add more minimum rust versions

* sccache: add sccache executable and tag as build-tools

* sccache: add dist-server

* sccache: add determine_version and determine_variants

* sccache: add sccache-dist executable

* sccache: fix style

* Update var/spack/repos/builtin/packages/sccache/package.py

* In case building very old sccache <= 5 is not needed, these older rust versions can be omitted.

* sccache: drop older versions

Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>

* sccache: add openssl dependency

* sccache: openssl is a linux only dependency?

---------

Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
2024-10-19 05:50:57 +02:00
Pranav Sivaraman
dc160e3a52 eza: add the current version 0.20.4 (#47086)
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
2024-10-18 21:40:41 -06:00
Matt Thompson
ba953352a1 mapl: add 2.50.1 (#47087) 2024-10-19 04:36:22 +02:00
Wouter Deconinck
d47e726b76 libcgroup: add v3.1.0 (fixes CVE) (#46945) 2024-10-19 03:34:14 +02:00
AMD Toolchain Support
89ab47284f amduprof: Add v5.0 (#47081)
Co-authored-by: vijay kallesh <Vijay-teekinavar.Kallesh@amd.com>
2024-10-19 03:30:25 +02:00
Wouter Deconinck
31bdcd7dc6 rtd: bump sphinx-rtd-theme to 3.0.1 (#47002) 2024-10-19 03:29:36 +02:00
George Young
f2bd11cbf4 hicup: new package @0.9.2 (#47008)
Co-authored-by: LMS Bioinformatics <bioinformatics@lms.mrc.ac.uk>
2024-10-18 18:53:23 -06:00
Stephen Nicholas Swatman
f69e8297a7 geomodel: Rename v7.0.0 to v6.6.0 (#47079)
The GeoModel devs decided to delete the v7.0.0 release and re-release it
as v6.6.0 (see
https://gitlab.cern.ch/GeoModelDev/GeoModel/-/merge_requests/357).
2024-10-18 18:29:55 -06:00
John W. Parent
c9377d9437 SZ package: tighten constraints for Windows build (#47071) 2024-10-18 16:10:08 -07:00
James Smillie
899004e29a Boost: fix logic for controlling which libs build on Windows (#46414)
Older builds of Boost were failing on Windows because they were
adding --without-... flags for libraries that did not exist in those
versions. So:

* lib variants are updated with version range info (current range
  info for libs is not comprehensive, but represents changes over the
  last few minor versions up to 1.85)
* On Windows, --without-... options are omitted for libraries when they
  don't exist for the version of boost being built. Non-Windows uses
  a different approach, which was not affected because the new libraries
  were not activated by default. It would benefit from similar attention
  though to avoid potential future issues.
2024-10-18 14:37:41 -07:00
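The version-gating logic described above amounts to something like this sketch (library names and introduction versions here are hypothetical):

```python
# First Boost version in which each optional library appeared (assumed).
first_version = {"fiber": (1, 62), "contract": (1, 66), "json": (1, 75)}

def without_flags(boost_version, deselected):
    # Only emit --without-<lib> for libraries that exist in this
    # version; passing the flag for a nonexistent library fails b2.
    return [
        f"--without-{lib}"
        for lib in deselected
        if boost_version >= first_version[lib]
    ]

print(without_flags((1, 70), ["fiber", "contract", "json"]))
# ['--without-fiber', '--without-contract']
```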
eugeneswalker
df6427d259 e4s ci stacks: add nwchem (#47055) 2024-10-18 14:30:46 -07:00
John W. Parent
31cfcafeba Build logic fix: reorder definition of package module variables (#46992)
#44327 made sure to always run `set_package_py_globals` on all
packages before running `setup_dependent_package` for any package,
so that packages implementing the latter could depend on variables
like `spack_cc` being defined.

This ran into an undocumented dependency: `std_cmake_args` is set in
`set_package_py_globals` and makes use of `cmake_prefix_paths` (if it
is defined in the package); `py-torch`'s implementation of
`cmake_prefix_paths` depends on a variable set by
`setup_dependent_package` (`python_platlib`).

This generally restores #44327, and corrects the resulting issue by
moving assignment of `std_cmake_args` to after both actions have been
run.
2024-10-18 13:36:16 -07:00
Pranav Sivaraman
230bc7010a hyperfine: add v1.18.0 (#47084)
* hyperfine: convert to cargo package

* hyperfine: add v1.18.0

* hyperfine: add minimum cargo version
2024-10-18 21:12:40 +02:00
Jen Herting
957c0cc9da py-clip-anytorch: new package (#47050)
* py-clip-anytorch: new package

* py-clip-anytorch: ran black

py-langchain-core: ran black

py-pydantic: ran black

py-dalle2-pytorch: ran black

* [py-clip-anytorch] fixed license(checked_by)

* Apply suggestion from Wouter on fixing CI

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>

---------

Co-authored-by: Alex C Leute <acl2809@rit.edu>
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
2024-10-18 20:55:27 +02:00
Jen Herting
99e4d6b446 py-pytorch-warmup: new package (#47054)
* py-pytorch-warmup: new package

* py-clip-anytorch: ran black

py-langchain-core: ran black

py-pydantic: ran black

py-dalle2-pytorch: ran black

---------

Co-authored-by: Alex C Leute <acl2809@rit.edu>
2024-10-18 19:26:26 +02:00
Jen Herting
7acd0cd86c py-resize-right: new package (#47056)
Co-authored-by: Alex C Leute <acl2809@rit.edu>
2024-10-18 19:25:29 +02:00
Jen Herting
d3378ffd25 py-embedding-reader: New package (#47053)
Co-authored-by: Alex C Leute <acl2809@rit.edu>
2024-10-18 19:25:10 +02:00
Wouter Deconinck
2356ccc816 solr: add v8.11.4, v9.7.0 (fix CVE) (#47037)
Co-authored-by: wdconinc <wdconinc@users.noreply.github.com>
2024-10-18 19:10:35 +02:00
Wouter Deconinck
1d25275bd1 cassandra: add v5.0.1 (fix CVEs) (#47058)
* cassandra: add v5.0.1

* [@spackbot] updating style on behalf of wdconinc

---------

Co-authored-by: wdconinc <wdconinc@users.noreply.github.com>
2024-10-18 19:09:29 +02:00
Jen Herting
7678635d36 py-rotary-embedding-torch: New package (#47059)
Co-authored-by: Benjamin Meyers <bsmits@rit.edu>
2024-10-18 19:07:09 +02:00
Jen Herting
b2e28a0b08 py-x-clip: new package (#47060)
Co-authored-by: Alex C Leute <acl2809@rit.edu>
2024-10-18 19:05:37 +02:00
Jen Herting
53385f12da py-ema-pytorch: new package (#47052)
Co-authored-by: Alex C Leute <acl2809@rit.edu>
2024-10-18 18:59:36 +02:00
Jen Herting
cfae194fbd py-coca-pytorch: new package (#47051)
* py-coca-pytorch: new package

* [py-coca-pytorch] black

---------

Co-authored-by: Alex C Leute <acl2809@rit.edu>
2024-10-18 18:58:39 +02:00
Jen Herting
88c193b83a py-open-clip-torch: new package (#47049)
Co-authored-by: Alex C Leute <acl2809@rit.edu>
2024-10-18 18:57:32 +02:00
Sean Koyama
c006cb573a implement prefix property for OneAPI compiler (#47066) 2024-10-18 11:50:12 -04:00
Adam J. Stewart
d8d41e9b0e py-torch: add v2.5.0 (#47069) 2024-10-18 17:33:17 +02:00
Harmen Stoppels
c6bfe7c6bd fix use of traceback.format_exception (#47080)
Co-authored-by: Peter Scheibel <scheibel1@llnl.gov>
2024-10-18 14:54:21 +00:00
Alex Seaton
4432f5a1fe Added heyoka versions 6.0.0 & 6.1.0 (#47074) 2024-10-18 16:47:49 +02:00
Kenneth Moreland
b9e0914ab2 vtk-m: Add sycl option to vtk-m package (#46996)
Some unused methods in VTK-m resulted in compile errors. These were
not discovered because many compilers ignore unused methods in templated
classes, but the SYCL compiler for Aurora gave an error.
2024-10-18 07:25:00 -06:00
Mikael Simberg
49a8e84588 pika: Add minimum CMake version requirement when using CUDA and C++20 (#47077) 2024-10-18 15:18:34 +02:00
Harmen Stoppels
d36452cf4e clingo-bootstrap: no need for setting MACOSX_DEPLOYMENT_TARGET (#47065)
Turns out `os=...` of the spec and `MACOSX_DEPLOYMENT_TARGET` are kept
in sync, and the env variable is used to initialize
`CMAKE_MACOSX_DEPLOYMENT_TARGET`.

In bootstrap code we set the env variable, so these bits are redundant.

---------

Co-authored-by: haampie <haampie@users.noreply.github.com>
2024-10-18 04:03:58 -06:00
Wouter Deconinck
580cc3c91b curl: add v8.10.1 (fix CVE) (#46960) 2024-10-18 02:19:25 -06:00
Ian Lumsden
9ba7af404a Adds variant to toggle use of rdpmc due to icl-utk-edu/papi#238 (#47023) 2024-10-17 14:29:56 -07:00
AMD Toolchain Support
2da812cbad AOCL: add v5.0 (#46964) 2024-10-17 21:33:48 +02:00
Stephen Sachs
420266c5c4 wrf: Enable oneapi on more platforms (#47040)
* Remove the implicit CORE-AVX512 since the CPU-specific flags are added by the
compiler wrappers.
* Add `-i_use-path` to help `ifx` find `lld` even if `-gcc-name` is set in
`ifx.cfg`. This file is written by the `intel-oneapi-compilers` package to find the
correct `gcc`. Not being able to find `lld` is a bug in `ifx`. @rschon2 found
this workaround.
2024-10-17 12:54:20 -06:00
Christoph Junghans
049ade024a voropp: switch to cmake (#47039)
* voropp: migrate to cmake

* lammps: update voropp dep
2024-10-17 12:54:02 -06:00
jgraciahlrs
75c71f7291 otf-cpt: add new package for OTF-CPT (#47042)
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
2024-10-17 12:42:26 -06:00
Wouter Deconinck
0a7533a609 unrar: add v7.0.9 (#47036) 2024-10-17 12:42:10 -06:00
eugeneswalker
7ecdc175ff e4s ci stacks: add fftx cpu, cuda, and rocm builds (#47004)
* e4s ci stacks: add fftx cpu, cuda, and rocm builds

* disable fftx+rocm due to spack github issue #47034

* e4s oneapi: fftx has spack build error https://github.com/spack/spack/issues/47048
2024-10-17 10:11:32 -07:00
afzpatel
962262a1d3 llvm-amdgpu and composable-kernel: fix build failures (#46891) 2024-10-17 18:34:34 +02:00
Harmen Stoppels
adaa0a4863 clingo: use CMAKE_OSX_DEPLOYMENT_TARGET instead of *flags (#47043) 2024-10-17 13:38:59 +02:00
Wouter Deconinck
5f56eee8b0 freetype: prefer 2.13.2 due to interface change in 2.13.3 (#47021) 2024-10-17 01:21:40 -06:00
Harmen Stoppels
aa6caf9ee6 curl: mbedtls 3.6.0 bound should be forward not backward compat (#47029)
and add another backward compat bound for just 8.8
2024-10-17 08:41:25 +02:00
Ian Lumsden
1eb2cb97ad caliper: add +python variant with pybind11 bindings (#47031)
* Updates Caliper recipe to build the new Python bindings

* Implements setup_run_environment for Caliper to update PYTHONPATH
2024-10-17 04:43:34 +02:00
George Young
178a8bbdc5 isoquant: new package @3.6.1 (#47013)
Co-authored-by: LMS Bioinformatics <bioinformatics@lms.mrc.ac.uk>
2024-10-17 04:29:40 +02:00
eugeneswalker
e4c233710c e4s external rocm ci: upgrade to v6.2.1 (#46871)
* e4s external rocm ci: upgrade to v6.2.1

* use ghcr.io/spack/spack/ubuntu22.04-runner-amd64-gcc-11.4-rocm6.2.1:2024.10.08

* magma +rocm: add entry for v6.2.1
2024-10-16 19:42:19 -06:00
fgava90
b661acfa9b dakota: add conflicts and additional flags (#42906)
Co-authored-by: Gava, Francesco <francesco.gava@mclaren.com>
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
2024-10-16 19:12:56 -06:00
Ashim Mahara
7bddcd27d2 updated package specs for rust; removed redundant dependency due to inherited cmake (#47022) 2024-10-16 18:58:04 -06:00
Wouter Deconinck
5d2c67ec83 openldap: add v2.6.8; conflict gcc@14: for older (#47024) 2024-10-16 18:51:37 -06:00
Wouter Deconinck
62fd5d12c2 cyrus-sasl: patch v2.1.27:2.1.28 for gcc-14 (#47019) 2024-10-16 18:44:49 -06:00
Matthieu Dorier
64a7525e3f duckdb: install headers and libraries (#47015) 2024-10-16 17:24:04 -06:00
Harmen Stoppels
bfe434cbd5 gnuconfig: bump (#47020) 2024-10-16 16:16:13 -07:00
Matthieu Dorier
39063baf18 librdkafka: added version 2.5.3 (#47009) 2024-10-16 16:54:54 -06:00
Chris Marsh
f4a4acd272 py-cfgrib: add v0.9.14.1 (#46879)
* Add 0.9.14.1 and bound xarray version support
* Fix bounds as per review
2024-10-16 15:47:15 -07:00
H. Joe Lee
8d2a059279 hermes: add more versions, variants and depend on hermes-shm (#46602)
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
2024-10-16 23:33:05 +02:00
kwryankrattiger
34c89c0f7b CI RESTful Configuration (#41622)
* CI: Add dynamic mapping section

* Doc: Add documentation for dynamic mapping section

* Add missing schema property

* Fixes from review

* query build fix up
* add warning output for dynamic mapping request errors

* Cleanup ci schema

* Add more protections for disabling/mitigating bad endpoints for dynamic
mapping

* Remove references to "gantry" in the docs

* Fixup rtd header

* Add unit testing for dynamic-mapping section

* Add arch to dynamic-mapping query string

* Tests and cleanup schema
2024-10-16 14:06:09 -06:00
AMD Toolchain Support
e1ea9e12a6 extrae: Add single mpi lib variant (#46918)
Extrae normally separates the C and MPI fortran interception libs, but
for mixed C/Fortran applications a combined lib is needed.

Co-authored-by: fpanichi <fpanichi@amd.com>
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
2024-10-16 19:25:51 +02:00
Chris Green
5611523baf ftgl: Handle char/unsigned char API change with the update to freetype@2.13.3 (#47003)
* [ftgl] Restrict GCC 14+ patch to apply only to GCC 14+

The patch added by #46927 should only be applied where it is needed:
with GCC 11 it causes a compilation failure where none previously
existed.

* Fix the constraint for applying the unsigned char patch to ^freetype@2.13.3:

---------

Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
2024-10-16 11:18:46 -06:00
Tuomas Koskela
4ff07c3918 purify: new package (#46839)
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
2024-10-16 11:10:12 -06:00
Adam J. Stewart
49489a4815 py-pillow: add v11.0.0 (#47010) 2024-10-16 10:35:30 -06:00
eugeneswalker
fb53d31d09 ci: use ghcr.io images instead of dockerhub hosted (#46830) 2024-10-16 12:11:26 -04:00
Huston Rogers
80b9807e10 Added miniconda update (#46997)
Co-authored-by: James H. Rogers <jhrogers@spear.hpc.msstate.edu>
2024-10-16 18:03:10 +02:00
AMD Toolchain Support
b573ec3920 Update CP2K recipe for AOCC compiler (#46985) 2024-10-16 16:48:24 +02:00
Massimiliano Culpo
cbdc07248f unit-tests: install.py (#47007)
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2024-10-16 15:02:52 +02:00
Sam Grayson
db6a2523d9 py-flask-sqlalchemy: add v2.5.1 (#34999)
* Fix py-parsl specification

* Add older version of py-flask-sqlalchemy
2024-10-16 13:16:31 +02:00
Massimiliano Culpo
c710a1597f Reduce the load on clingo-cffi CI job (#46982)
The purpose of this CI job is to ensure that we
can use a modern clingo to concretize specs, if
e.g. it was installed in a virtual environment
with pip.

Since there is no need to re-test unrelated parts
of Spack, reduce the number of tests we run to just
concretize.py
2024-10-16 09:12:14 +02:00
Harmen Stoppels
8c70912b11 Update release documentation (#46991) 2024-10-16 09:11:53 +02:00
Massimiliano Culpo
64f90c38be unit-tests: oci/integration_test.py (#47006)
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2024-10-16 09:09:52 +02:00
Tom Scogland
d2f1e29927 Fix neovim on Darwin (#46905)
* lua: update luarocks resource to 3.11.1

We have kept an older 3.8 for some time, but that version uses an
incorrect value for the deployment target on macOS, causing builds for
bundles to succeed but in such a way that they can't be linked into
applications by `ld`, only loaded by dlopen.  This fixes that, and also
generally updates the tool.

* lua-luajit-openresty: add new version fix LUA_PATH

Adds a newer version of openresty's luajit, and adds the slightly odd
extra share path they use that contains the `jit.*` modules.  Without
that, things that use bytecode-saving and other jit routines (like
neovim) fail.

* lua-lpeg: fix lpeg build to work for neovim on OSX

Normally luarocks builds all lua libraries as bundles on macOS.  This
makes sense, but means they can't then be linked by LD into executables
the way neovim expects to do.  I'm not sure how this ever worked, if it
did.  This patch adds the appropriate variables to have luarocks build
the library as a shared library, and subsequently fixes the id with
install_name_tool (the built-in functionality for this does not
trigger).

This also adds a symlink from `liblpeg.dylib` to `lpeg.so` because
neovim will not build on macos without it.  See corresponding upstream
pull request at https://github.com/neovim/neovim/pull/30749
2024-10-16 04:04:33 +02:00
Wouter Deconinck
57586df91a libtiff: add v4.7.0 (fix CVE) (#46999)
* libtiff: add v4.7.0

* [@spackbot] updating style on behalf of wdconinc

---------

Co-authored-by: wdconinc <wdconinc@users.noreply.github.com>
2024-10-15 19:53:55 -06:00
Michael B Kuhn
c00f36b5e2 adding latest AMR-Wind versions and correcting 2.1.0 reference (#46954) 2024-10-16 02:29:28 +02:00
Wouter Deconinck
2a7dd29f95 krb5: add v1.21.3 (fix CVEs) (#46989)
* krb5: add 1.21.3

* krb5: fix style

* [@spackbot] updating style on behalf of wdconinc

---------

Co-authored-by: wdconinc <wdconinc@users.noreply.github.com>
2024-10-16 02:17:03 +02:00
Wouter Deconinck
58e2f7a54f hive: add v4.0.1 (#46995) 2024-10-16 02:03:47 +02:00
Wouter Deconinck
e3afe9a364 yara: add v4.5.2 (fix CVE) (#46998)
* yara: add v4.5.2

* yara: deprecate 3.9.0
2024-10-16 01:50:56 +02:00
Wouter Deconinck
b0314faa3d sqlcipher: add v4.6.1 (#47000) 2024-10-16 01:48:55 +02:00
downloadico
2099e9f5cd abinit: add v10.0.9 (#46923)
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
2024-10-16 01:47:46 +02:00
Chris Marsh
5947c13570 py-metpy: add v1.6.2 (#46990)
* Add v1.6.2

* Add variant desc

* fix style
2024-10-16 01:46:42 +02:00
Caetano Melone
1259992159 Reduce noop job resource requests (#46920)
`no-spec-to-rebuild` jobs use far less resources than they request. For example, [this](https://gitlab.spack.io/spack/spack/-/jobs/12944487) job [used](https://prometheus.spack.io/api/v1/query_range?query=container_memory_working_set_bytes{pod=%22runner-dcsgp53u-project-2-concurrent-3-0ubclrr1%22}&start=1728655743&end=1728656543&step=1s) around 3MB.

While this won't lead to any crazy cost savings, k8s requests effectively block other jobs from using the resources, so reducing this to a reasonable number is important.
2024-10-15 12:49:45 -05:00
psakievich
0477875667 remove concrete spec constraint from spack develop (#46911)
Remove the constraint for concrete specs and simply take the
max(version) if a version is not given. This should default to the
highest infinity version which is also the logical best guess for
doing development.

* Remove concrete version constraint for develop, set docs

* Add unit-test

* Update lib/spack/docs/environments.rst

Co-authored-by: kwryankrattiger <80296582+kwryankrattiger@users.noreply.github.com>

* Update lib/spack/spack/cmd/develop.py

Co-authored-by: Greg Becker <becker33@llnl.gov>

* Consolidate env collection in cmd

* Style

---------

Co-authored-by: kwryankrattiger <80296582+kwryankrattiger@users.noreply.github.com>
Co-authored-by: Greg Becker <becker33@llnl.gov>
2024-10-15 17:46:27 +00:00
G-Ragghianti
4d5844b460 Changing github branch name (#46988) 2024-10-15 09:22:03 -07:00
Garth N. Wells
fc79c37e2d (py-)fenics-dolfinx: add v0.9.0 (#46987)
* Update fenics-dolfinx to v0.9

* py-fenics-dolfinx update to v0.9

* Small updates

* Small fix
2024-10-15 17:38:18 +02:00
Wouter Deconinck
1d76ed7aa4 py-jupyter-server: add v2.14.2 (fix CVEs) (#46965)
* py-jupyter-server: add v2.14.2

* [@spackbot] updating style on behalf of wdconinc

* py-jupyter-events: add v0.10.0

* py-send2trash: add v1.8.3

* py-websocket-client: add v1.6.4, v1.7.0, v1.8.0

* py-websocket-client: back to underscore in source tarball

---------

Co-authored-by: wdconinc <wdconinc@users.noreply.github.com>
2024-10-15 17:16:14 +02:00
Bernhard Kaindl
237f886e5d gimp: Fix missing pkgconfig and gettext dependencies (#46912)
* gimp deps: Fix missing pkgconfig and gettext deps

* Let's mark @:2.10.32 as deprecated and remove after 0.23 is released.
2024-10-15 09:50:00 -05:00
Tobias Ribizel
834ed2f117 env depfile: generate Makefile with absolute script path (#46966)
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2024-10-15 13:52:31 +00:00
Stephen Nicholas Swatman
73069045ae acts dependencies: new versions as of 2024/10/15 (#46981)
This commit adds a new version of ACTS, detray, and GeoModel.
2024-10-15 06:45:18 -05:00
Harmen Stoppels
e0efd2bea2 support python 3.13 bootstrapping from sources (#46983) 2024-10-15 12:31:29 +02:00
Harmen Stoppels
b9873c5cea python: drop build-tools tag (#46980)
Remove the `build-tools` tag of python, otherwise these types of
concretizations are possible:

```
py-root
  ^py-pip
    ^python@3.12
  ^python@3.13
```

So, a package would be configured with py-pip using python 3.12, but
installed for 3.13, which does not work.
2024-10-15 11:30:07 +02:00
Massimiliano Culpo
2f711bda5f Improve behavior of spack deprecate (#46917)
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2024-10-15 09:04:12 +02:00
Tony Weaver
f8381c9a63 tempo: add master (#44298)
* Add new version for master branch

Added new version for master branch.  Also added additional functions to ensure tempo will actually run.  Tempo assumes the stage directory sticks around and references numerous files and directories there.  That has been corrected here, but only when using the master version.  The LWA-10-2020 version will also have this problem, but they may have additional setup in their compute/Spack environment to address this issue already, so I did not modify anything when that's the version.  An example of what happens in the LWA-10-17-2020 version regarding missing files is given below

user@cs:~/spack/bin$ tempo
more: cannot open /tempo.hlp: No such file or directory

* Updated to fix format errors

Flake8 check found errors.  Fixed those formatting issues

* Additional format change

Removed redundant setup_dependent_run_environment missed in previous update

* Update url to use https: https is the usual transport and is needed to support checkout behind some firewalls

---------

Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
2024-10-14 22:53:42 -06:00
kwryankrattiger
c8f61c8662 Don't require OIDC initialization for noop (#46921)
ref. https://github.com/spack/spack-infrastructure/pull/957
2024-10-14 23:39:55 -05:00
Tamara Dahlgren
507965cbc6 Docs: reduce confusion in configuration override of scope precedence section (#46977) 2024-10-15 04:07:48 +00:00
Wouter Deconinck
1f6ce56d3b lucene: add v9.12.0, v10.0.0 (fix CVE) (#46975)
* lucene: add v9.12.0, v10.0.0

* [@spackbot] updating style on behalf of wdconinc

---------

Co-authored-by: wdconinc <wdconinc@users.noreply.github.com>
2024-10-14 22:02:52 -06:00
Wouter Deconinck
3918f83ddc libmodbus: add v3.1.10 (fix CVE) (#46968)
* libmodbus: add v3.1.10

* libmodbus: deprecate older versions
2024-10-14 21:51:59 -06:00
Wouter Deconinck
d4dc13fffb libsndfile: add v1.2.2 (fix CVEs) (#46967)
* libsndfile: add v1.2.2

* [@spackbot] updating style on behalf of wdconinc

---------

Co-authored-by: wdconinc <wdconinc@users.noreply.github.com>
2024-10-14 21:51:30 -06:00
Wouter Deconinck
5008519a56 log4cxx: add v1.2.0 (#46974) 2024-10-14 21:46:10 -06:00
Wouter Deconinck
dad5ff8796 istio: add v1.23.2 (fix CVEs) (#46961) 2024-10-15 05:44:27 +02:00
Wouter Deconinck
a24220b53f keepalived: add v2.3.1 (#46963) 2024-10-14 21:30:17 -06:00
Jon Rood
2186ff720e Change URLs from http to https in curl and openssl. (#46962) 2024-10-14 21:25:11 -06:00
Garth N. Wells
65d61e12c9 Add fenics-ufcx to v0.9 (#46952) 2024-10-15 05:21:56 +02:00
Adam J. Stewart
05f3fef72c GDAL: add v3.9.3 (#46959) 2024-10-15 04:49:47 +02:00
Harmen Stoppels
21c2eedb80 detection: prefer dir instead of symlink in case of duplicate search paths (#46957) 2024-10-14 17:09:55 +00:00
Massimiliano Culpo
66a3c7bc42 archspec: update to v0.2.5 (#46958)
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2024-10-14 19:09:17 +02:00
Harmen Stoppels
8b3d3ac2de cmake: remove custom CMAKE_INSTALL_RPATH (#46685)
The CMake builder in Spack actually adds incorrect rpaths. They are
unfiltered and incorrectly ordered compared to what the compiler wrapper
adds.

There is no need to specify paths to dependencies in `CMAKE_INSTALL_RPATH`
for two reasons:

1. CMake preserves "toolchain" rpaths, which includes the rpaths injected
   by our compiler wrapper.
2. We use `CMAKE_INSTALL_RPATH_USE_LINK_PATH=ON`, so libraries we link
   to are rpath'ed automatically.

However, CMake does not create install rpaths to directories in the package's
own install prefix, so we set `CMAKE_INSTALL_RPATH` to the educated guess
`<prefix>/{lib,lib64}`, but omit dependencies.
2024-10-14 12:35:50 +02:00
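Put concretely, the resulting defines look roughly like this (a Python sketch of the argument list, with a hypothetical prefix; not the exact builder code):

```python
prefix = "/opt/spack/opt/example-1.0"  # hypothetical install prefix

cmake_args = [
    # Only the package's own lib dirs; dependency rpaths come from the
    # compiler wrapper and from USE_LINK_PATH below.
    f"-DCMAKE_INSTALL_RPATH={prefix}/lib;{prefix}/lib64",
    "-DCMAKE_INSTALL_RPATH_USE_LINK_PATH=ON",
]
print(cmake_args)
```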
Harmen Stoppels
b5610cdb8b py-greenlet: add missing forward compat bound for python (#46951) 2024-10-14 03:21:39 -06:00
John W. Parent
6c6b262140 Add "only_windows" marker for unit tests (#45979) 2024-10-14 09:02:49 +02:00
Wouter Deconinck
796e372bde tomcat: add v9.0.96, v10.1.31, v11.0.0 (fix CVEs) (#46950)
* tomcat: add v9.0.96, v10.1.31, v11.0.0

* [@spackbot] updating style on behalf of wdconinc

---------

Co-authored-by: wdconinc <wdconinc@users.noreply.github.com>
2024-10-13 16:44:16 -06:00
Laura Weber
78740942f9 tecplot: Add version 2024r1 (#46469) 2024-10-13 16:24:03 -06:00
Jordan Galby
02a991688f Fix makefile target check with Make jobserver keep-going (#46784) 2024-10-13 15:02:49 -07:00
Garth N. Wells
a8029c8ec4 Update to ffcx v0.9.0 (#46949) 2024-10-13 15:41:04 -06:00
AMD Toolchain Support
adb8f37fc5 Sort FORTRAN std when using flang (#46922) 2024-10-13 23:30:21 +02:00
Wouter Deconinck
81b41d5948 guacamole-{client,server}: add v1.5.5 (fix CVEs) (#46948)
* guacamole-client: add v1.5.5

* guacamole-server: add v1.5.5

* guacamole-client: add patch and ensure maven doesn't flag it

* guacamole-client: limit patch to 1.5; java@:16 when @:1.4
2024-10-13 15:09:16 -06:00
Adam J. Stewart
0ff980ae87 GDAL: fix Autotools build (#46946) 2024-10-13 22:15:24 +02:00
Wouter Deconinck
74a93c04d8 systemd: add v256.7 (#46944) 2024-10-13 21:54:38 +02:00
Wouter Deconinck
b72c7deacb libde265: add v1.0.15 (fix CVEs) (#46942)
Co-authored-by: Bernhard Kaindl <contact@bernhard.kaindl.dev>
2024-10-13 09:01:01 -06:00
Wouter Deconinck
b061bbbb8f perl-*: add new versions (#46935)
* perl-*: add new versions

* [@spackbot] updating style on behalf of wdconinc

* perl-task-weaken: depends on perl-module-install

---------

Co-authored-by: wdconinc <wdconinc@users.noreply.github.com>
2024-10-13 08:45:32 -06:00
Wouter Deconinck
bbfad7e979 libepoxy: add v1.5.10 (switch to meson) (#46938) 2024-10-13 16:10:06 +02:00
Garth N. Wells
3a9963b497 (py-)fenics-basix: add v0.9.0 (#46931) 2024-10-13 16:09:40 +02:00
Garth N. Wells
8ac00aa58f py-fenics-ufl: add v2024.2.0 (#46933) 2024-10-13 16:09:08 +02:00
Joseph Wang
13f80ff142 ftgl: Fix gcc14 compilation error due to type mismatch in FTContour (#46927)
* ftgl: add type fix

* ftgl: fix style

* Add comment: Fix gcc14 compilation error due to type mismatch in FTContour

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>

---------

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
2024-10-13 15:54:12 +02:00
Robert Cohn
e8291cbd74 [intel-oneapi-compilers] use llvm flags for ifx (#46866) 2024-10-13 08:19:22 -04:00
Wouter Deconinck
0dded55f39 libarchive: add v3.7.5, v3.7.6 (fix CVEs) (#46940)
* libarchive: add v3.7.5, v3.7.6 (fix CVEs)

* [@spackbot] updating style on behalf of wdconinc

---------

Co-authored-by: wdconinc <wdconinc@users.noreply.github.com>
2024-10-13 03:37:51 -06:00
Wouter Deconinck
a4ca6452c0 libvpx: add v1.14.1 (#46941) 2024-10-13 03:22:34 -06:00
Wouter Deconinck
36761715fd dosfstools: add v4.2 (#46939)
* dosfstools: add v4.2

* dosfstools: autogen.sh; depends_on gettext

* dosfstools: fix style
2024-10-13 03:12:47 -06:00
Wouter Deconinck
02b116bd56 libsamplerate: add v0.2.2; fix url and homepage (#46937) 2024-10-13 03:09:08 -06:00
Wouter Deconinck
d4d7d5830d popt: add v1.19; fix url and homepage (#46936) 2024-10-13 03:03:56 -06:00
Wouter Deconinck
389b1824e9 gtkplus: build_system={autotools,meson} (#46869)
* gtkplus: build_system={autotools,meson}

* gtkplus: fix style, fix spelling

* gtkplus: setup_dependent_build_environment in mixin

* gtkplus: swap setup_dependent_{run,build}_environment
2024-10-12 21:49:23 +02:00
Wouter Deconinck
e65be13056 openexr: add v3.3.1 (#46928) 2024-10-12 16:39:17 +02:00
John W. Parent
1580c1745c cmake: add v3.30.3, v3.30.4, v3.30.5 (#46616)
* CMake: add recent releases

* CMake: add 3.30.5
2024-10-12 16:33:18 +02:00
AMD Toolchain Support
cf54ef0fd3 AOCC: add v5.0.0 (#46929)
Co-authored-by: vijay kallesh <Vijay-teekinavar.Kallesh@amd.com>
2024-10-12 16:17:47 +02:00
Auriane R.
b8b02e0691 Replace if ... in spec with spec.satisfies in k* packages and l* packages (#46388)
* Replace if ... in spec with spec.satisfies in k* packages

* Replace if ... in spec with spec.satisfies in l* packages
2024-10-12 04:12:38 -06:00
shanedsnyder
8d986b8a99 darshan-*: ensure proper usage of spack compilers (#45636) 2024-10-11 18:30:58 -06:00
dependabot[bot]
4b836cb795 build(deps): bump docker/build-push-action from 6.8.0 to 6.9.0 (#46674)
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 6.8.0 to 6.9.0.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](32945a3392...4f58ea7922)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-11 18:25:59 -06:00
Massimiliano Culpo
d5966e676d Fix splicing related unit tests (#46914)
Some assertions are not testing DAG invariants; they pass only
because of the simple structure of the builtin.mock repository on develop.

Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2024-10-12 00:25:41 +00:00
Wouter Deconinck
e187508485 zookeeper: add v3.8.4 (#46899)
* zookeeper: add v3.8.4

* zookeeper: use bin archive, depend_on java, setup run environment, deprecate EoL

* zookeeper: fix bin url

* [@spackbot] updating style on behalf of wdconinc

---------

Co-authored-by: wdconinc <wdconinc@users.noreply.github.com>
2024-10-12 02:13:16 +02:00
Matt Thompson
80982149d5 mapl: add 2.50.0 (#46904) 2024-10-12 02:06:44 +02:00
Ken Raffenetti
a1f2e794c7 mpich: Disallow dataloop variant with GPU support (#46903)
MPICH only supports GPU-aware builds with the yaksa datatype
engine. Fixes #44092.
2024-10-12 01:45:39 +02:00
Wouter Deconinck
dbe323c631 ruby: add v3.3.5 (#46805)
* ruby: add v3.3.5

* ruby: add variant +yjit to control rust-based JIT
2024-10-11 18:39:24 -05:00
Adam J. Stewart
77ddafaaac py-sphinx: add v8.1.0 (#46907)
* py-sphinx: add v8.1.0

* py-sphinxcontrib-*help: add new versions

* blacken
2024-10-12 01:07:09 +02:00
John W. Parent
17efd6153c lz4: version 1.10.0 (#46908) 2024-10-12 01:03:22 +02:00
Joseph Wang
93f356c1cc py-xgboost: add lib64 (#46926) 2024-10-11 15:55:03 -06:00
Krishna Chilleri
386d115333 update versions of Neo4j and Redis deps (#46874)
* update versions of Neo4j and Redis deps

* deprecating older versions due to security vulnerabilities

* [@spackbot] updating style on behalf of kchilleri

* Update var/spack/repos/builtin/packages/redis/package.py

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>

* adding previous urls to use archives on project websites

* [@spackbot] updating style on behalf of kchilleri

* adding new required maven version

* label when to use specific maven versions

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>

---------

Co-authored-by: Krishna Chilleri <krishnachilleri@lanl.gov>
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
2024-10-11 12:40:01 -05:00
Dom Heinzeller
6b512210d4 Update py-scipy: add conflict for AOCC compilers (#46452)
Addresses https://github.com/spack/spack/issues/45718
2024-10-11 08:44:05 -06:00
Larry Knox
ba215ca824 hdf5: add v1.14.4-3 -> develop-1.17 (#46888)
* Add versions 1.14.4-3, 1.14.5, develop-1.16, and update develop-1.15 to
develop-1.17.
* Remove unused list_url and list_depth; fix style-disapproved spaces in
url = "...
* One more style fix.
2024-10-11 04:18:59 -06:00
Mosè Giordano
629a3e9396 julia: add v1.11.0 (#46715) 2024-10-11 10:45:03 +03:00
Garth N. Wells
08b07b9b27 py-nanobind: add v2.1.0 and v2.2.0 (#46884)
* Update nanobind versions

* Small fix

* More small fixes

* Formatting update
2024-10-10 23:19:17 -06:00
Wouter Deconinck
3a38122764 gimp: add v2.10.38 (#46816) 2024-10-11 05:27:57 +02:00
Dom Heinzeller
25ab7cc16d mvapich: hydra process manager requires pmi=simple (#46789) 2024-10-11 04:49:43 +02:00
Andrew W Elble
41773383ec llvm-amdgpu: apply patch from https://github.com/llvm/llvm-project/pull/80071 (#46753)
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2024-10-11 04:47:40 +02:00
afzpatel
9855fbf7f1 omnitrace: add versions 6.2.0 and 6.2.1 (#46848) 2024-10-11 03:47:36 +02:00
Ufuk Turunçoğlu
5ef9d7e3ed esmf: add version 8.7.0 (#46860) 2024-10-11 03:43:51 +02:00
Adam J. Stewart
5a4b7d3d44 py-torchgeo: add v0.6.1 (#46906) 2024-10-11 03:38:21 +02:00
Adam J. Stewart
9b40c1e89d py-torchmetrics: add v1.4.3 (#46902) 2024-10-11 02:44:35 +02:00
Alec Scott
edff99aab3 smee-client: add v2.0.3 (#46909) 2024-10-11 02:41:32 +02:00
Stephen Sachs
22043617aa libunistring: 1.2 needs std=c18 for icc, add icc C++ flags update (#37607) 2024-10-11 02:25:32 +02:00
Sébastien Valat
7df23c7471 malt: add v1.2.3, v1.2.4 (#46842)
* Add support for version 1.2.4 of MALT
* Use a specific URL for malt-1.2.1 due to a change in how archives are downloaded in newer versions.
   Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
* Update sha of 1.2.2, url version and # generated

---------

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
2024-10-10 16:16:55 -07:00
Axel Huebl
ef87a9a052 openpmd-api: add v0.16.0 (#46859)
* openPMD-api: 0.16.0

Signed-off-by: Axel Huebl <axel.huebl@plasma.ninja>

* [Patch] Fix: CMake Internal Control

https://github.com/openPMD/openPMD-api/pull/1678

Signed-off-by: Axel Huebl <axel.huebl@plasma.ninja>
2024-10-11 01:10:46 +02:00
Nathan Hanford
af62a062cc Installer: rewire spliced specs via RewireTask (#39136)
This PR allows users to configure explicit splicing replacement of an abstract spec in the concretizer.

concretizer:
  splice:
    explicit:
    - target: mpi
      replacement: mpich/abcdef
      transitive: true

This config block would mean "for any spec that concretizes to use mpi, splice in mpich/abcdef in place of the mpi it would naturally concretize to use." See #20262, #26873, #27919, and #46382 for PRs enabling splicing in the Spec object. This PR will be the first place the splice method is used in a user-facing manner. See https://spack.readthedocs.io/en/latest/spack.html#spack.spec.Spec.splice for more information on splicing.

This will allow users to reuse generic public binaries while splicing in the performant local mpi implementation on their system.

In the config file, the target may be any abstract spec. The replacement must be a spec that includes an abstract hash `/abcdef`. The transitive key is optional, defaulting to true if left out.

Two important items to note:

1. When writing explicit splice config, the user is in charge of ensuring that the replacement specs they use are binary compatible with whatever targets they replace. In practice, this will likely require either specific knowledge of what packages will be installed by the user's workflow, or somewhat more specific abstract "target" specs for splicing, to ensure binary compatibility.
2. Explicit splices can cause the output of the concretizer not to satisfy the input. For example, using the config above, consider a package in a binary cache `hdf5/xyzabc` that depends on mvapich2. Then the command `spack install hdf5/xyzabc` will instead install the result of splicing `mpich/abcdef` into `hdf5/xyzabc` in place of whatever mvapich2 spec it previously depended on. When this occurs, a warning message is printed: `Warning: explicit splice configuration has caused the concretized spec {concrete_spec} not to satisfy the input spec {input_spec}`.

Highlighted technical details of implementation:

1. This PR required modifying the installer to have two separate types of Tasks, `RewireTask` and `BuildTask`. Spliced specs are queued as `RewireTask` and standard specs are queued as `BuildTask`. Each spliced spec retains a pointer to its build_spec for provenance. If a RewireTask is dequeued and the associated `build_spec` is neither available in the install_tree nor from a binary cache, the RewireTask is requeued with a new dependency on a BuildTask for the build_spec, and BuildTasks are queued for the build spec and its dependencies.
2. Relocation is modified so that a spack binary can be simultaneously installed and rewired. This ensures that installing the build_spec is not necessary when splicing from a binary cache.
3. The splicing model is modified to more accurately represent build dependencies -- that is, spliced specs do not have build dependencies, as spliced specs are never built. Their build_specs retain the build dependencies, as they may be built as part of installing the spliced spec.
4. There were vestiges of the compiler bootstrapping logic that were not removed in #46237 because I asked alalazo to leave them in to avoid making the rebase for this PR harder than it needed to be. Those last remains are removed in this PR.

Co-authored-by: Nathan Hanford <hanford1@llnl.gov>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
2024-10-10 15:48:58 -07:00
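As a hedged sketch of the underlying primitive this PR builds on (`spack.spec.Spec.splice`, linked above), runnable only inside a Spack session; the specs here are illustrative:

    import spack.spec

    # Both specs must be concrete before splicing.
    target = spack.spec.Spec("hdf5 +mpi ^mvapich2").concretized()
    replacement = spack.spec.Spec("mpich").concretized()

    spliced = target.splice(replacement, transitive=True)
    # Spliced specs keep a pointer to the original spec for provenance;
    # for a never-spliced spec, build_spec is the spec itself.
    assert spliced.build_spec is not spliced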
Stephen Nicholas Swatman
e6114f544d acts dependencies: new versions as of 2024/10/07 (#46836)
This commit adds new versions of Acts, detray, and vecmem.
2024-10-10 14:43:43 -05:00
dependabot[bot]
8d651625f7 build(deps): bump actions/upload-artifact from 4.4.0 to 4.4.3 (#46895)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4.4.0 to 4.4.3.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](50769540e7...b4b15b8c7c)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-10 11:38:56 -05:00
Mikael Simberg
9346306b79 hip: Set --gcc-toolchain to ensure external HIP installs pick up correct GCC (#46573) 2024-10-10 18:15:02 +02:00
Garth N. Wells
f3a3e85bb9 py-scikit-build-core: add v0.10.7 (#46883)
* Add scikit-build-core version

* Update var/spack/repos/builtin/packages/py-scikit-build-core/package.py

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>

---------

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
2024-10-10 10:09:21 -06:00
pauleonix
caaaba464e cuda: add v12.6.2 (#46864)
* cuda: Add 12.6.2
* Update cuda build system
   - Remove gcc@6 conflict that was only a deprecation (probably has to be added again with cuda@13)
   - Update cuda_arch support by CUDA version
   - Kepler support has ended with cuda@12
   - The recently added 90a Hopper "experimental features" architecture was
    missing the dependency on cuda@12:
2024-10-10 09:48:20 -06:00
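Illustrative of the constraint pattern used in the CUDA mixin (the exact version bounds are examples, not the values merged here):

    # The 90a "experimental features" Hopper architecture needs CUDA 12+:
    depends_on("cuda@12.0:", when="cuda_arch=90a")
    # Kepler support ended with CUDA 12, so cap those arch values:
    depends_on("cuda@:11", when="cuda_arch=35")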
Chris Marsh
8fae388f57 pcre2: Fix spec reference without self
Fixes bug introduced in #46788
2024-10-10 11:45:51 -04:00
Rob Falgout
a332e0c143 Update package.py for release 2.32.0 of hypre (#46865) 2024-10-10 08:27:51 -06:00
kjrstory
bc662b8764 su2 fixes and improvements: AD, scipy/numpy, and Mutationpp setup, environment variable (#46774)
* su2 fixes and improvements: AD, scipy/numpy, and Mutationpp setup, environment variable

* su2: Conflict %gcc@13: when @:7, mpp was added with @7.1.0

* py-scipy: SciPy 1.14: requires GCC >= 9.1

---------

Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
2024-10-10 08:19:05 -06:00
Juan Miguel Carceller
7a8955597d py-cython: add v3.0.11 (#46772)
* py-cython: add v3.0.11
   Add url for cython because upstream uses lower case for 3.0.11
   Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
* Don't use f-string
* Remove old version directive for 3.0.11

---------

Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2024-10-10 08:18:34 -06:00
BOUDAOUD34
bcf9c646cf pocl: switch from deprecated master branch to main branch in git repo (#46476)
Co-authored-by: U-PALLAS\boudaoud <boudaoud@pc44.pallas.cines.fr>
2024-10-10 15:56:01 +02:00
Daryl W. Grunau
a76fffe8ff eospac: add versions 6.5.10 and 6.5.11 (#46894)
Co-authored-by: Daryl W. Grunau <dwg@lanl.gov>
2024-10-10 13:32:30 +02:00
Jen Herting
26c8714a24 [py-transformers] limit numpy to <2 (#46890) 2024-10-10 13:31:50 +02:00
Adam J. Stewart
0776ff05d2 py-cartopy: add v0.24.1 (#46882)
* py-cartopy: add v0.24.1
2024-10-10 04:48:06 -06:00
djabaay
d3beef6584 Adding "import llnl.util.tty as tty" for PETSC to correctly run for versions <3.13 (#46892)
petsc: fix missing tty import needed to print the warning.
2024-10-10 04:07:43 -06:00
Sergey Kosukhin
bdd06cb176 hdf5: conflicts zlib-ng+new_strategies (#43535) 2024-10-10 12:01:46 +02:00
Tom Scogland
f639c4f1e6 add lima package, update qemu to make it usable (#46893)
Adds the lima-vm project. To make it useful, this also adds a newer
version of qemu so qemu VMs can work, and builds qemu with flags that
allow it to do things like give the VMs networking and virtfs
filesystems.

Also adds vde as a dependency of qemu.
2024-10-10 03:25:16 -06:00
吴坎
f18a106759 apache-tvm: add missing dependencies (#46818) 2024-10-10 11:17:42 +02:00
Julien Cortial
5b01ddf832 mumps: Add version 5.7.3 (#46862) 2024-10-10 10:52:11 +02:00
Tuomas Koskela
c1fc98eef8 sopt: new package (#46837)
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
2024-10-10 10:10:16 +02:00
Stephen Sachs
e9831985e4 Use pcluster-alinux2 container image with pre-installed compilers (#44150) 2024-10-10 10:01:59 +02:00
Juan Miguel Carceller
30e9545d3e py-lxml: add 5.3.0 (#46771)
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2024-10-10 01:37:29 -06:00
Arne Becker
ce0910a82c perl-sort-naturally: new package (#46886)
* perl-sort-naturally: new package
   Adds Sort::Naturally
* Use new mechanism for testing
* Make black happier
2024-10-10 01:05:11 -06:00
psakievich
afc01f9570 CMake: Improve incremental build speed. (#46878)
* CMake: Improve incremental build speed.

CMake automatically embeds an updated configure step into make/ninja that will be called during the build phase. By default, if a `CMakeCache.txt` file exists in the build directory, CMake will use it; this, combined with `spec.is_develop`, is sufficient evidence of an incremental build.

This PR removes duplicate work/expense from CMake packages when using `spack develop`.

* Update cmake.py

* [@spackbot] updating style on behalf of psakievich

* Update cmake.py

meant self not spec...

---------

Co-authored-by: psakievich <psakievich@users.noreply.github.com>
2024-10-10 00:45:10 -06:00
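A hedged sketch of the check described above (the helper name is illustrative; `spec.is_develop` is the signal named in the message):

    import os

    def _is_incremental(pkg):
        # An existing CMakeCache.txt in the build directory, combined with
        # a develop spec, is taken as evidence of an incremental build, so
        # Spack's own configure step can be skipped; the generated
        # make/ninja files re-run CMake themselves if inputs changed.
        cache = os.path.join(pkg.build_directory, "CMakeCache.txt")
        return pkg.spec.is_develop and os.path.isfile(cache)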
Sreenivasa Murthy Kolam
fc3a484a8c bump up version for rocm-opencl for 6.2.1 release (#46881) 2024-10-10 00:40:36 -06:00
Luke Pickering
de0d5ba883 hepmc3: fix typo in cmake arg for the +protobuf variant (#46872)
* fix typo in variable name in hepmc3 variant

* set cxx standard to 14 when using protobuf

* add myself to hepmc3 maintainer list

* hepmc3: Applied suggestion of @alecbcs for spec.satisfies("+protobuf") (agreed!)

Co-authored-by: Alec Scott <hi@alecbcs.com>

* hepmc3: cxx_standard for protobuf

only set the cxx standard to meet the protobuf minimum (14) when the rootio variant is not also enabled, since that variant sets the cxx standard to match the standard requirements of ROOT's public API
2024-10-10 00:40:22 -06:00
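A hedged sketch of the final conditional (simplified; the surrounding cmake_args handling is assumed):

    def cmake_args(self):
        args = []
        # Raise the C++ standard for protobuf only when +rootio does not
        # already pin it to ROOT's public-API standard.
        if self.spec.satisfies("+protobuf") and not self.spec.satisfies("+rootio"):
            args.append(self.define("CMAKE_CXX_STANDARD", 14))
        return args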
Nils Vu
f756ab156c spectre: add v2024.09.16 (#46857) 2024-10-10 00:29:54 -06:00
Wouter Deconinck
540de118c1 imagemagick: add v7.1.1-39 (#46853)
* imagemagick: add v7.1.1-39

* [@spackbot] updating style on behalf of wdconinc

---------

Co-authored-by: wdconinc <wdconinc@users.noreply.github.com>
2024-10-10 00:18:36 -06:00
dependabot[bot]
675be13a7b build(deps): bump actions/checkout from 4.2.0 to 4.2.1 (#46854)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4.2.0 to 4.2.1.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](d632683dd7...eef61447b9)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-10 00:18:09 -06:00
Jason Hicken
3342866e0e adept: new package (#46793)
* added adept package

* forgot to remove boilerplate comment

* fixed formatting issue and use of lapack_prefix

* adept: use f-string

* removed debug variant and corresponding configure_args conditional

---------

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
2024-10-09 21:14:56 +02:00
Caetano Melone
39ff675898 py-sqlparse: add version 0.5.1 (#46876) 2024-10-09 21:12:07 +02:00
Greg Becker
f807337273 Environment.clear: ensure clearing is passed through to manifest (#46880)
* Environment.clear: ensure clearing is passed through to manifest
* test/cmd/env: make test_remove_command round-trip to disk
* cleanup vestigial variables
2024-10-09 11:13:56 -07:00
Massimiliano Culpo
8e4e3c9060 python: rework how we compute the "command" property (#46850)
Some Windows Python installations may store the Python exe in Scripts/
rather than the base directory. Update `.command` to search in both
locations on Windows. On all systems, the search is now done
recursively from the search root: on Windows, that is the base install
directory, and on other systems it is bin/.
2024-10-09 01:08:27 -06:00
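A minimal sketch of the recursive search described (the function and names are illustrative, not Spack's actual property):

    import glob
    import os

    def find_python_command(prefix, is_windows):
        # Windows: search from the base install dir (covers Scripts/);
        # elsewhere: search from bin/.
        root = prefix if is_windows else os.path.join(prefix, "bin")
        names = ("python.exe",) if is_windows else ("python3", "python")
        for name in names:
            hits = glob.glob(os.path.join(root, "**", name), recursive=True)
            if hits:
                return hits[0]
        return None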
Christoph Junghans
6d67992191 cabana: add +all, with new package "all" (A Load Balancing Library) (#46852)
* Add libALL support

* cabana: also require ALL

* cabana: Bugfix: Fix spec for cmake>=3.26 to be @3.26: and HDF5 support requires MPI

* cabana: MPI requires C: Add depends_on("c", type="build", when="+mpi")

* cabana: +mpi requires C, but at least for some CMake versions, Cabana's enable of C is too late. Patch it.

* cabana: simplify disabling of find_package calls for disabled options and improve comment

* cabana: +grid of 0.6.0 does not compile with gcc-13: It misses iostream includes

* cabana: +test requires googletest at build time: gtest is a linked library(not a plugin or tool)

* cabana: 0.6.0+cuda requires kokkos@3.7:, see https://github.com/ECP-copa/Cabana/releases

* cabana: As 0.6.0+grid does not support gcc-13 and newer, I think it's good to add 0.6.1 and 0.7.0?

---------

Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
2024-10-08 19:48:02 -06:00
Bernhard Kaindl
0f3fea511e gettext: Fix ~libxml2: Skip patch for external libxml (#46870) 2024-10-08 19:04:10 -06:00
Cameron Rutherford
a0611650e2 resolve: Add LUSOL variant and fix CMake variable definition. (#44790)
* resolve: Add LUSOL variant and fix CMake variable definition.
* Update variant with correct version constraints.
2024-10-08 16:45:41 -07:00
Harmen Stoppels
5959be577f python: add 3.13.0 (#46773) 2024-10-08 22:52:14 +03:00
Matt Thompson
9b5e508d15 mapl: add 2.49.1, 2.46.4 (#46849) 2024-10-08 13:18:14 -06:00
Ashim Mahara
66a30aef98 Added package Evodiff and dependencies (#46418)
* added py-evodiff and dependencies

* deleted the FIXME

* fixed style issues

* added versions for biotite dependencies; added hash to py-hatch-vcs

* added python version for py-hatch-cython

* updated biotraj dependencies

* - added versions for the packages and dependencies
- added more dependencies for py-hatch
- added rust versions
- added py-uv as a new package

* updated packages and their dependencies according to the PR review by @meyersbs

* typo fix for hatchling version; fix the minimum required setuptools version for evodiff

* added 1.9.0 and 1.7.0 userpath versions; required as a dependency

* added mlflow as a dependency

* changed biopython to an optional dependency according to review from @meyersbs; variant esmfold

* Updated Specs

- Pinned biotraj to 1:1 for py-biotite
- Added numpy and other dependencies for py-biotraj; they are dependent
  on the versions
- Excluded py-mlflow as a dependency for package py-evodiff; missing
  usage in
  package.
- Removed versioned dependencies from py-fair-esm
- Added a version to py-packaging
- Added py-setuptools as a dependency in py-userpath
- Added sha256 as hashes for py-uv

* style changes
2024-10-08 11:10:11 -07:00
Jen Herting
b117074df4 py-pydantic: add v2.7.4 with dep py-annotated-types (#46307)
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
2024-10-08 18:19:51 +02:00
afzpatel
9f4be17451 rocblas,miopen-hip: fix miopen-hip@6.2.1 build and rocblas build test (#46755) 2024-10-08 18:00:56 +02:00
Massimiliano Culpo
d70e9e131d Fix relocating MachO binary, when store projection changes (#46840)
* Remove "modify_object_macholib"

According to documentation, this function is used when installing
Mach-O binaries on linux. The implementation seems questionable at
least, and the code seems to be never hit (Spack currently doesn't
support installing Mach-O binaries on linux).

* Fix relocation on macOS, when store projection changes

---------

Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2024-10-08 13:32:28 +02:00
Jen Herting
d7643d4f88 [py-torchmetrics] added image variant and deprecated 1.3.0 (#46258)
* [py-torchmetrics]

- Added variant
- deprecated version 1.3.0

* [py-torchmetrics]

- py-scipy@1.0.1:
- removed upper bounds on dependencies
2024-10-08 05:26:31 -06:00
Adam J. Stewart
73b6aa9b92 py-cartopy: add v0.24.0 (#46851)
* py-cartopy: add v0.24.0
* py-owslib: add v0.31.0
2024-10-08 00:28:27 -06:00
Seth R. Johnson
6d51d94dab celeritas: add v0.5.0 (#46841) 2024-10-08 06:29:09 +02:00
Adam J. Stewart
1a965e9ec2 py-torchvision: fix build with Apple Clang 16 (#46462) 2024-10-08 05:54:39 +02:00
Adrien Bernede
a9e9b901d1 mfem: Apply minor changes (replace ' with ") (#46537) 2024-10-08 05:53:38 +02:00
afzpatel
95b46dca3d kokkos: modify standalone test to run with +rocm (#46779) 2024-10-08 05:48:50 +02:00
Wouter Deconinck
7f6ae2a51e py-codespell: add v2.3.0 (#46760) 2024-10-07 21:00:44 -06:00
Wouter Deconinck
489d5b0f21 py-urllib3: add v1.26.20 (#46739) 2024-10-07 20:31:26 -06:00
Wouter Deconinck
f884817009 harfbuzz: add v10.0.0, v10.0.1 (#46741) 2024-10-07 20:24:13 -06:00
Wouter Deconinck
a30704fdad py-ipython: add v8.28.0 (#46742) 2024-10-07 20:06:43 -06:00
Alex Richert
57eb21ac3d rust: conflicts with %intel and %oneapi (#46756) 2024-10-07 16:38:08 -07:00
arezaii
f48c36fc2c add bz2 variant, fix brotli dependency (#46803) 2024-10-07 16:20:06 -07:00
James Smillie
a09b9f0659 Windows/Testing: enable spack view tests on Windows (#46335)
Enable tests for symlink-based views (this works with almost no
modifications to the view logic). View logic is not yet robust
for hardlink/junction-based views, so those are disabled for now
(both in the tests and as subcommands to `spack view`).
2024-10-07 16:05:23 -07:00
shanedsnyder
92d940b7f4 darshan-runtime: add new variants (#46847)
* add new darshan-runtime variants
   - `lustre` variant enables instrumentation of Lustre files
       * requires Lustre headers, so Lustre is a proper dependency
         for this variant
   - `log_path` variant allows setting of a centralized log directory
     for Darshan logs (commonly used for facility deployments)
       * when this variant is used, the `DARSHAN_LOG_DIR_PATH` env var
         is no longer used to set the log file path
   - `group_readable_logs` variant sets Darshan log permissions to
     allow reads from users in the same group
* add mmap_logs variant to enable usage of mmap logs
2024-10-07 14:36:33 -07:00
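A hedged sketch of how the lustre variant and its dependency might be declared (the configure flag is hypothetical):

    variant("lustre", default=False, description="Enable Lustre file instrumentation")
    # Lustre headers are required, so Lustre is a proper dependency:
    depends_on("lustre", when="+lustre")

    def configure_args(self):
        args = []
        if self.spec.satisfies("+lustre"):
            args.append("--enable-lustre")  # hypothetical flag name
        return args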
Jonas Thies
d8c7cbe8f0 phist: new version 1.12.1 and conflict some compiler/library combinations for earlier versions (#46802) 2024-10-07 16:07:03 -05:00
John W. Parent
717d4800e1 Qt package: Add Windows Port (#46788)
Also adds support for Paraview and CMake to build with Qt support on
Windows.

The remaining edits are to enable building of Qt itself on Windows:

* Several packages needed to update `.libs` to properly locate
  libraries on Windows
* Qt needed a patch to allow it to build using a Python with a space
  in the path
* Some Qt dependencies had not been ported to Windows yet
  (e.g. `harfbuzz` and `lcms`)

This PR does not provide a sufficient GL for Qt to use Qt Quick2; as
such, Qt Quick2 is disabled on the Windows platform by this PR.

---------

Co-authored-by: Dan Lipsa <dan.lipsa@kitware.com>
2024-10-07 13:33:25 -07:00
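A hedged sketch of the kind of `.libs` update mentioned (the library name is a placeholder; `find_libraries` is Spack's stock helper):

    @property
    def libs(self):
        # On Windows, import libraries and DLLs may live outside lib/,
        # so search the whole prefix recursively.
        return find_libraries("libexample", root=self.prefix, recursive=True)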
Tamara Dahlgren
c77916146c Bugfix/Installer: properly track task queueing (#46293)
* Bugfix/Installer: properly track task queueing
* Move ordinal() to llnl.string; change time to attempt
* Convert BuildTask to use kwargs (after pkg); convert STATUS_ to BuildStatus enum
* BuildTask: instantiate with keyword only args after the request
* Installer: build request is required for initializing task
* Installer: only the initial BuildTask cannot have status REMOVED
* Change queueing check
* ordinal(): simplify suffix determination [tgamblin]
* BuildStatus: ADDED -> QUEUED [becker33]
* BuildTask: clarify TypeError for 'installed' argument
2024-10-07 10:42:09 -07:00
579 changed files with 10369 additions and 5054 deletions

View File

@@ -28,8 +28,8 @@ jobs:
run:
shell: ${{ matrix.system.shell }}
steps:
- uses: actions/checkout@d632683dd7b4114ad314bca15554477dd762a938
- uses: actions/setup-python@f677139bbe7f9c59b41e40162b753c062f5d49a3
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
with:
python-version: ${{inputs.python_version}}
- name: Install Python packages
@@ -66,7 +66,7 @@ jobs:
./share/spack/qa/validate_last_exit.ps1
spack -d audit externals
./share/spack/qa/validate_last_exit.ps1
- uses: actions/upload-artifact@50769540e7f4bd5e21e526ee35c689e35e0d6874
- uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882
if: ${{ inputs.with_coverage == 'true' && runner.os != 'Windows' }}
with:
name: coverage-audits-${{ matrix.system.os }}

View File

@@ -1,7 +1,7 @@
#!/bin/bash
set -e
source share/spack/setup-env.sh
$PYTHON bin/spack bootstrap disable github-actions-v0.4
$PYTHON bin/spack bootstrap disable github-actions-v0.5
$PYTHON bin/spack bootstrap disable spack-install
$PYTHON bin/spack $SPACK_FLAGS solve zlib
tree $BOOTSTRAP/store

View File

@@ -37,14 +37,14 @@ jobs:
make patch unzip which xz python3 python3-devel tree \
cmake bison
- name: Checkout
uses: actions/checkout@d632683dd7b4114ad314bca15554477dd762a938
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
- name: Bootstrap clingo
run: |
source share/spack/setup-env.sh
spack bootstrap disable github-actions-v0.6
spack bootstrap disable github-actions-v0.5
spack bootstrap disable github-actions-v0.4
spack external find cmake bison
spack -d solve zlib
tree ~/.spack/bootstrap/store/
@@ -60,17 +60,17 @@ jobs:
run: |
brew install cmake bison tree
- name: Checkout
uses: actions/checkout@d632683dd7b4114ad314bca15554477dd762a938
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
- uses: actions/setup-python@f677139bbe7f9c59b41e40162b753c062f5d49a3
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
with:
python-version: "3.12"
- name: Bootstrap clingo
run: |
source share/spack/setup-env.sh
spack bootstrap disable github-actions-v0.6
spack bootstrap disable github-actions-v0.5
spack bootstrap disable github-actions-v0.4
spack external find --not-buildable cmake bison
spack -d solve zlib
tree $HOME/.spack/bootstrap/store/
@@ -83,22 +83,22 @@ jobs:
steps:
- name: Setup macOS
if: ${{ matrix.runner != 'ubuntu-latest' }}
run: brew install tree gawk
- name: Remove system executables
run: |
brew install tree gawk
sudo rm -rf $(command -v gpg gpg2)
- name: Setup Ubuntu
if: ${{ matrix.runner == 'ubuntu-latest' }}
run: sudo rm -rf $(command -v gpg gpg2 patchelf)
while [ -n "$(command -v gpg gpg2 patchelf)" ]; do
sudo rm $(command -v gpg gpg2 patchelf)
done
- name: Checkout
uses: actions/checkout@d632683dd7b4114ad314bca15554477dd762a938
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
- name: Bootstrap GnuPG
run: |
source share/spack/setup-env.sh
spack solve zlib
spack bootstrap disable github-actions-v0.6
spack bootstrap disable github-actions-v0.5
spack bootstrap disable github-actions-v0.4
spack -d gpg list
tree ~/.spack/bootstrap/store/
@@ -110,19 +110,17 @@ jobs:
steps:
- name: Setup macOS
if: ${{ matrix.runner != 'ubuntu-latest' }}
run: brew install tree
- name: Remove system executables
run: |
brew install tree
# Remove GnuPG since we want to bootstrap it
sudo rm -rf /usr/local/bin/gpg
- name: Setup Ubuntu
if: ${{ matrix.runner == 'ubuntu-latest' }}
run: |
sudo rm -rf $(which gpg) $(which gpg2) $(which patchelf)
while [ -n "$(command -v gpg gpg2 patchelf)" ]; do
sudo rm $(command -v gpg gpg2 patchelf)
done
- name: Checkout
uses: actions/checkout@d632683dd7b4114ad314bca15554477dd762a938
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
- uses: actions/setup-python@f677139bbe7f9c59b41e40162b753c062f5d49a3
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
with:
python-version: |
3.8
@@ -130,15 +128,16 @@ jobs:
3.10
3.11
3.12
3.13
- name: Set bootstrap sources
run: |
source share/spack/setup-env.sh
spack bootstrap disable github-actions-v0.4
spack bootstrap disable github-actions-v0.5
spack bootstrap disable spack-install
- name: Bootstrap clingo
run: |
set -e
for ver in '3.8' '3.9' '3.10' '3.11' '3.12' ; do
for ver in '3.8' '3.9' '3.10' '3.11' '3.12' '3.13'; do
not_found=1
ver_dir="$(find $RUNNER_TOOL_CACHE/Python -wholename "*/${ver}.*/*/bin" | grep . || true)"
if [[ -d "$ver_dir" ]] ; then
@@ -172,10 +171,10 @@ jobs:
runs-on: "windows-latest"
steps:
- name: Checkout
uses: actions/checkout@d632683dd7b4114ad314bca15554477dd762a938
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
- uses: actions/setup-python@f677139bbe7f9c59b41e40162b753c062f5d49a3
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
with:
python-version: "3.12"
- name: Setup Windows
@@ -185,8 +184,8 @@ jobs:
- name: Bootstrap clingo
run: |
./share/spack/setup-env.ps1
spack bootstrap disable github-actions-v0.6
spack bootstrap disable github-actions-v0.5
spack bootstrap disable github-actions-v0.4
spack external find --not-buildable cmake bison
spack -d solve zlib
./share/spack/qa/validate_last_exit.ps1

View File

@@ -55,7 +55,7 @@ jobs:
if: github.repository == 'spack/spack'
steps:
- name: Checkout
uses: actions/checkout@d632683dd7b4114ad314bca15554477dd762a938
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: docker/metadata-action@8e5442c4ef9f78752691e2d8f8d19755c6f78e81
id: docker_meta
@@ -87,7 +87,7 @@ jobs:
fi
- name: Upload Dockerfile
uses: actions/upload-artifact@50769540e7f4bd5e21e526ee35c689e35e0d6874
uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882
with:
name: dockerfiles_${{ matrix.dockerfile[0] }}
path: dockerfiles
@@ -113,7 +113,7 @@ jobs:
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build & Deploy ${{ matrix.dockerfile[0] }}
uses: docker/build-push-action@32945a339266b759abcbdc89316275140b0fc960
uses: docker/build-push-action@4f58ea79222b3b9dc2c8bbdd6debcef730109a75
with:
context: dockerfiles/${{ matrix.dockerfile[0] }}
platforms: ${{ matrix.dockerfile[1] }}
@@ -126,7 +126,7 @@ jobs:
needs: deploy-images
steps:
- name: Merge Artifacts
uses: actions/upload-artifact/merge@50769540e7f4bd5e21e526ee35c689e35e0d6874
uses: actions/upload-artifact/merge@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882
with:
name: dockerfiles
pattern: dockerfiles_*

View File

@@ -24,7 +24,7 @@ jobs:
core: ${{ steps.filter.outputs.core }}
packages: ${{ steps.filter.outputs.packages }}
steps:
- uses: actions/checkout@d632683dd7b4114ad314bca15554477dd762a938
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
if: ${{ github.event_name == 'push' }}
with:
fetch-depth: 0

View File

@@ -8,8 +8,8 @@ jobs:
upload:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@d632683dd7b4114ad314bca15554477dd762a938
- uses: actions/setup-python@f677139bbe7f9c59b41e40162b753c062f5d49a3
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
with:
python-version: '3.11'
cache: 'pip'

View File

@@ -14,10 +14,10 @@ jobs:
build-paraview-deps:
runs-on: windows-latest
steps:
- uses: actions/checkout@d632683dd7b4114ad314bca15554477dd762a938
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
- uses: actions/setup-python@f677139bbe7f9c59b41e40162b753c062f5d49a3
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
with:
python-version: 3.9
- name: Install Python packages

View File

@@ -1,7 +1,7 @@
black==24.8.0
black==24.10.0
clingo==5.7.1
flake8==7.1.1
isort==5.13.2
mypy==1.8.0
types-six==1.16.21.20240513
types-six==1.16.21.20241009
vermin==1.6.0

View File

@@ -40,10 +40,10 @@ jobs:
on_develop: false
steps:
- uses: actions/checkout@d632683dd7b4114ad314bca15554477dd762a938
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
- uses: actions/setup-python@f677139bbe7f9c59b41e40162b753c062f5d49a3
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
with:
python-version: ${{ matrix.python-version }}
- name: Install System packages
@@ -80,7 +80,7 @@ jobs:
UNIT_TEST_COVERAGE: ${{ matrix.python-version == '3.11' }}
run: |
share/spack/qa/run-unit-tests
- uses: actions/upload-artifact@50769540e7f4bd5e21e526ee35c689e35e0d6874
- uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882
with:
name: coverage-${{ matrix.os }}-python${{ matrix.python-version }}
path: coverage
@@ -89,10 +89,10 @@ jobs:
shell:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@d632683dd7b4114ad314bca15554477dd762a938
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
- uses: actions/setup-python@f677139bbe7f9c59b41e40162b753c062f5d49a3
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
with:
python-version: '3.11'
- name: Install System packages
@@ -113,7 +113,7 @@ jobs:
COVERAGE: true
run: |
share/spack/qa/run-shell-tests
- uses: actions/upload-artifact@50769540e7f4bd5e21e526ee35c689e35e0d6874
- uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882
with:
name: coverage-shell
path: coverage
@@ -130,7 +130,7 @@ jobs:
dnf install -y \
bzip2 curl file gcc-c++ gcc gcc-gfortran git gnupg2 gzip \
make patch tcl unzip which xz
- uses: actions/checkout@d632683dd7b4114ad314bca15554477dd762a938
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- name: Setup repo and non-root user
run: |
git --version
@@ -149,32 +149,33 @@ jobs:
clingo-cffi:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@d632683dd7b4114ad314bca15554477dd762a938
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
- uses: actions/setup-python@f677139bbe7f9c59b41e40162b753c062f5d49a3
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
with:
python-version: '3.11'
python-version: '3.13'
- name: Install System packages
run: |
sudo apt-get -y update
sudo apt-get -y install coreutils cvs gfortran graphviz gnupg2 mercurial ninja-build kcov
sudo apt-get -y install coreutils gfortran graphviz gnupg2
- name: Install Python packages
run: |
pip install --upgrade pip setuptools pytest coverage[toml] pytest-cov clingo pytest-xdist
pip install --upgrade pip setuptools pytest coverage[toml] pytest-cov clingo
pip install --upgrade flake8 "isort>=4.3.5" "mypy>=0.900" "click" "black"
- name: Setup git configuration
run: |
# Need this for the git tests to succeed.
git --version
. .github/workflows/bin/setup_git.sh
- name: Run unit tests (full suite with coverage)
env:
COVERAGE: true
COVERAGE_FILE: coverage/.coverage-clingo-cffi
run: |
share/spack/qa/run-unit-tests
- uses: actions/upload-artifact@50769540e7f4bd5e21e526ee35c689e35e0d6874
. share/spack/setup-env.sh
spack bootstrap disable spack-install
spack bootstrap disable github-actions-v0.5
spack bootstrap disable github-actions-v0.6
spack bootstrap status
spack solve zlib
spack unit-test --verbose --cov --cov-config=pyproject.toml --cov-report=xml:coverage.xml lib/spack/spack/test/concretize.py
- uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882
with:
name: coverage-clingo-cffi
path: coverage
@@ -187,10 +188,10 @@ jobs:
os: [macos-13, macos-14]
python-version: ["3.11"]
steps:
- uses: actions/checkout@d632683dd7b4114ad314bca15554477dd762a938
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
- uses: actions/setup-python@f677139bbe7f9c59b41e40162b753c062f5d49a3
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
with:
python-version: ${{ matrix.python-version }}
- name: Install Python packages
@@ -212,7 +213,7 @@ jobs:
$(which spack) solve zlib
common_args=(--dist loadfile --tx '4*popen//python=./bin/spack-tmpconfig python -u ./bin/spack python' -x)
$(which spack) unit-test --verbose --cov --cov-config=pyproject.toml --cov-report=xml:coverage.xml "${common_args[@]}"
- uses: actions/upload-artifact@50769540e7f4bd5e21e526ee35c689e35e0d6874
- uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882
with:
name: coverage-${{ matrix.os }}-python${{ matrix.python-version }}
path: coverage
@@ -225,10 +226,10 @@ jobs:
powershell Invoke-Expression -Command "./share/spack/qa/windows_test_setup.ps1"; {0}
runs-on: windows-latest
steps:
- uses: actions/checkout@d632683dd7b4114ad314bca15554477dd762a938
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
- uses: actions/setup-python@f677139bbe7f9c59b41e40162b753c062f5d49a3
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
with:
python-version: 3.9
- name: Install Python packages
@@ -243,7 +244,7 @@ jobs:
run: |
spack unit-test -x --verbose --cov --cov-config=pyproject.toml
./share/spack/qa/validate_last_exit.ps1
- uses: actions/upload-artifact@50769540e7f4bd5e21e526ee35c689e35e0d6874
- uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882
with:
name: coverage-windows
path: coverage

View File

@@ -18,8 +18,8 @@ jobs:
validate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@d632683dd7b4114ad314bca15554477dd762a938
- uses: actions/setup-python@f677139bbe7f9c59b41e40162b753c062f5d49a3
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
with:
python-version: '3.11'
cache: 'pip'
@@ -35,10 +35,10 @@ jobs:
style:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@d632683dd7b4114ad314bca15554477dd762a938
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
- uses: actions/setup-python@f677139bbe7f9c59b41e40162b753c062f5d49a3
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
with:
python-version: '3.11'
cache: 'pip'
@@ -70,7 +70,7 @@ jobs:
dnf install -y \
bzip2 curl file gcc-c++ gcc gcc-gfortran git gnupg2 gzip \
make patch tcl unzip which xz
- uses: actions/checkout@d632683dd7b4114ad314bca15554477dd762a938
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- name: Setup repo and non-root user
run: |
git --version
@@ -85,7 +85,7 @@ jobs:
source share/spack/setup-env.sh
spack debug report
spack -d bootstrap now --dev
spack style -t black
spack -d style -t black
spack unit-test -V
import-check:
runs-on: ubuntu-latest
@@ -98,14 +98,14 @@ jobs:
# PR: use the base of the PR as the old commit
- name: Checkout PR base commit
if: github.event_name == 'pull_request'
uses: actions/checkout@d632683dd7b4114ad314bca15554477dd762a938
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
ref: ${{ github.event.pull_request.base.sha }}
path: old
# not a PR: use the previous commit as the old commit
- name: Checkout previous commit
if: github.event_name != 'pull_request'
uses: actions/checkout@d632683dd7b4114ad314bca15554477dd762a938
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 2
path: old
@@ -114,11 +114,11 @@ jobs:
run: git -C old reset --hard HEAD^
- name: Checkout new commit
uses: actions/checkout@d632683dd7b4114ad314bca15554477dd762a938
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
path: new
- name: Install circular import checker
uses: actions/checkout@d632683dd7b4114ad314bca15554477dd762a938
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
repository: haampie/circular-import-fighter
ref: 555519c6fd5564fd2eb844e7b87e84f4d12602e2

View File

@@ -14,3 +14,26 @@ sphinx:
python:
install:
- requirements: lib/spack/docs/requirements.txt
search:
ranking:
spack.html: -10
spack.*.html: -10
llnl.html: -10
llnl.*.html: -10
_modules/*: -10
command_index.html: -9
basic_usage.html: 5
configuration.html: 5
config_yaml.html: 5
packages_yaml.html: 5
build_settings.html: 5
environments.html: 5
containers.html: 5
mirrors.html: 5
module_file_support.html: 5
repositories.html: 5
binary_caches.html: 5
chain.html: 5
pipelines.html: 5
packaging_guide.html: 5

View File

@@ -1,71 +1,11 @@
@ECHO OFF
setlocal EnableDelayedExpansion
:: (c) 2021 Lawrence Livermore National Laboratory
:: To use this file independently of Spack's installer, execute this script in its directory, or add the
:: associated bin directory to your PATH. Invoke to launch Spack Shell.
::
:: source_dir/spack/bin/spack_cmd.bat
::
pushd %~dp0..
set SPACK_ROOT=%CD%
pushd %CD%\..
set spackinstdir=%CD%
popd
:: Check if Python is on the PATH
if not defined python_pf_ver (
(for /f "delims=" %%F in ('where python.exe') do (
set "python_pf_ver=%%F"
goto :found_python
) ) 2> NUL
)
:found_python
if not defined python_pf_ver (
:: If not, look for Python from the Spack installer
:get_builtin
(for /f "tokens=*" %%g in ('dir /b /a:d "!spackinstdir!\Python*"') do (
set "python_ver=%%g")) 2> NUL
if not defined python_ver (
echo Python was not found on your system.
echo Please install Python or add Python to your PATH.
) else (
set "py_path=!spackinstdir!\!python_ver!"
set "py_exe=!py_path!\python.exe"
)
goto :exitpoint
) else (
:: Python is already on the path
set "py_exe=!python_pf_ver!"
(for /F "tokens=* USEBACKQ" %%F in (
`"!py_exe!" --version`) do (set "output=%%F")) 2>NUL
if not "!output:Microsoft Store=!"=="!output!" goto :get_builtin
goto :exitpoint
)
:exitpoint
set "PATH=%SPACK_ROOT%\bin\;%PATH%"
if defined py_path (
set "PATH=%py_path%;%PATH%"
)
if defined py_exe (
"%py_exe%" "%SPACK_ROOT%\bin\haspywin.py"
)
set "EDITOR=notepad"
DOSKEY spacktivate=spack env activate $*
@echo **********************************************************************
@echo ** Spack Package Manager
@echo **********************************************************************
IF "%1"=="" GOTO CONTINUE
set
GOTO:EOF
:continue
set PROMPT=[spack] %PROMPT%
%comspec% /k
call "%~dp0..\share\spack\setup-env.bat"
pushd %SPACK_ROOT%
%comspec% /K

View File

@@ -9,15 +9,15 @@ bootstrap:
# may not be able to bootstrap all the software that Spack needs,
# depending on its type.
sources:
- name: 'github-actions-v0.5'
- name: github-actions-v0.6
metadata: $spack/share/spack/bootstrap/github-actions-v0.6
- name: github-actions-v0.5
metadata: $spack/share/spack/bootstrap/github-actions-v0.5
- name: 'github-actions-v0.4'
metadata: $spack/share/spack/bootstrap/github-actions-v0.4
- name: 'spack-install'
- name: spack-install
metadata: $spack/share/spack/bootstrap/spack-install
trusted:
# By default we trust bootstrapping from sources and from binaries
# produced on Github via the workflow
github-actions-v0.6: true
github-actions-v0.5: true
github-actions-v0.4: true
spack-install: true

View File

@@ -166,3 +166,74 @@ while `py-numpy` still needs an older version:
Up to Spack v0.20 ``duplicates:strategy:none`` was the default (and only) behavior. From Spack v0.21 the
default behavior is ``duplicates:strategy:minimal``.
--------
Splicing
--------
The ``splice`` key covers config attributes for splicing specs in the solver.
"Splicing" is a method for replacing a dependency with another spec
that provides the same package or virtual. There are two types of
splices, referring to different behaviors for shared dependencies
between the root spec and the new spec replacing a dependency:
"transitive" and "intransitive". A "transitive" splice is one that
resolves all conflicts by taking the dependency from the new node. An
"intransitive" splice is one that resolves all conflicts by taking the
dependency from the original root. From a theory perspective, hybrid
splices are possible but are not modeled by Spack.
All spliced specs retain a ``build_spec`` attribute that points to the
original Spec before any splice occurred. The ``build_spec`` for a
non-spliced spec is itself.
The figure below shows examples of transitive and intransitive splices:
.. figure:: images/splices.png
   :align: center
The concretizer can be configured to explicitly splice particular
replacements for a target spec. Splicing will allow the user to make
use of generically built public binary caches, while swapping in
highly optimized local builds for performance critical components
and/or components that interact closely with the specific hardware
details of the system. The most prominent candidate for splicing is
MPI providers. MPI packages have relatively well-understood ABI
characteristics, and most High Performance Computing facilities deploy
highly optimized MPI packages tailored to their particular
hardware. The following config block configures Spack to replace
whatever MPI provider each spec was concretized to use with the
particular package of ``mpich`` with the hash that begins ``abcdef``.
.. code-block:: yaml

   concretizer:
     splice:
       explicit:
       - target: mpi
         replacement: mpich/abcdef
         transitive: false
.. warning::
When configuring an explicit splice, you as the user take on the
responsibility for ensuring ABI compatibility between the specs
matched by the target and the replacement you provide. If they are
not compatible, Spack will not warn you and your application will
fail to run.
The ``target`` field of an explicit splice can be any abstract
spec. The ``replacement`` field must be a spec that includes the hash
of a concrete spec, and the replacement must either be the same
package as the target, provide the virtual that is the target, or
provide a virtual that the target provides. The ``transitive`` field
is optional -- by default, splices will be transitive.
.. note::
With explicit splices configured, it is possible for Spack to
concretize to a spec that does not satisfy the input. For example,
with the config above ``hdf5 ^mvapich2`` will concretize to use
``mpich/abcdef`` instead of ``mvapich2`` as the MPI provider. Spack
will warn the user in this case, but will not fail the
concretization.

View File

@@ -281,7 +281,7 @@ When spack queries for configuration parameters, it searches in
higher-precedence scopes first. So, settings in a higher-precedence file
can override those with the same key in a lower-precedence one. For
list-valued settings, Spack *prepends* higher-precedence settings to
lower-precedence settings. Completely ignoring higher-level configuration
lower-precedence settings. Completely ignoring lower-precedence configuration
options is supported with the ``::`` notation for keys (see
:ref:`config-overrides` below).
@@ -511,6 +511,7 @@ Spack understands over a dozen special variables. These are:
* ``$target_family``. The target family for the current host, as
detected by ArchSpec. E.g. ``x86_64`` or ``aarch64``.
* ``$date``: the current date in the format YYYY-MM-DD
* ``$spack_short_version``: the Spack version truncated to the first components.
Note that, as with shell variables, you can write these as ``$varname``

View File

@@ -712,27 +712,27 @@ Release branches
^^^^^^^^^^^^^^^^
There are currently two types of Spack releases: :ref:`major releases
<major-releases>` (``0.17.0``, ``0.18.0``, etc.) and :ref:`point releases
<point-releases>` (``0.17.1``, ``0.17.2``, ``0.17.3``, etc.). Here is a
<major-releases>` (``0.21.0``, ``0.22.0``, etc.) and :ref:`patch releases
<patch-releases>` (``0.22.1``, ``0.22.2``, ``0.22.3``, etc.). Here is a
diagram of how Spack release branches work::
o branch: develop (latest version, v0.19.0.dev0)
o branch: develop (latest version, v0.23.0.dev0)
|
o
| o branch: releases/v0.18, tag: v0.18.1
| o branch: releases/v0.22, tag: v0.22.1
o |
| o tag: v0.18.0
| o tag: v0.22.0
o |
| o
|/
o
|
o
| o branch: releases/v0.17, tag: v0.17.2
| o branch: releases/v0.21, tag: v0.21.2
o |
| o tag: v0.17.1
| o tag: v0.21.1
o |
| o tag: v0.17.0
| o tag: v0.21.0
o |
| o
|/
@@ -743,8 +743,8 @@ requests target ``develop``. The ``develop`` branch will report that its
version is that of the next **major** release with a ``.dev0`` suffix.
Each Spack release series also has a corresponding branch, e.g.
``releases/v0.18`` has ``0.18.x`` versions of Spack, and
``releases/v0.17`` has ``0.17.x`` versions. A major release is the first
``releases/v0.22`` has ``v0.22.x`` versions of Spack, and
``releases/v0.21`` has ``v0.21.x`` versions. A major release is the first
tagged version on a release branch. Minor releases are back-ported from
develop onto release branches. This is typically done by cherry-picking
bugfix commits off of ``develop``.
@@ -774,27 +774,40 @@ for more details.
Scheduling work for releases
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We schedule work for releases by creating `GitHub projects
<https://github.com/spack/spack/projects>`_. At any time, there may be
several open release projects. For example, below are two releases (from
some past version of the page linked above):
We schedule work for **major releases** through `milestones
<https://github.com/spack/spack/milestones>`_ and `GitHub Projects
<https://github.com/spack/spack/projects>`_, while **patch releases** use `labels
<https://github.com/spack/spack/labels>`_.
.. image:: images/projects.png
There is only one milestone open at a time. Its name corresponds to the next major version, for
example ``v0.23``. Important issues and pull requests should be assigned to this milestone by
core developers, so that they are not forgotten at the time of release. The milestone is closed
when the release is made, and a new milestone is created for the next major release.
This image shows one release in progress for ``0.15.1`` and another for
``0.16.0``. Each of these releases has a project board containing issues
and pull requests. GitHub shows a status bar with completed work in
green, work in progress in purple, and work not started yet in gray, so
it's fairly easy to see progress.
Bug reports in GitHub issues are automatically labelled ``bug`` and ``triage``. Spack developers
assign one of the labels ``impact-low``, ``impact-medium`` or ``impact-high``. This will make the
issue appear in the `Triaged bugs <https://github.com/orgs/spack/projects/6>`_ project board.
Important issues should be assigned to the next milestone as well, so they appear at the top of
the project board.
Spack's project boards are not firm commitments so we move work between
releases frequently. If we need to make a release and some tasks are not
yet done, we will simply move them to the next minor or major release, rather
than delaying the release to complete them.
Spack's milestones are not firm commitments so we move work between releases frequently. If we
need to make a release and some tasks are not yet done, we will simply move them to the next major
release milestone, rather than delaying the release to complete them.
For more on using GitHub project boards, see `GitHub's documentation
<https://docs.github.com/en/github/managing-your-work-on-github/about-project-boards>`_.
^^^^^^^^^^^^^^^^^^^^^
Backporting bug fixes
^^^^^^^^^^^^^^^^^^^^^
When a bug is fixed in the ``develop`` branch, it is often necessary to backport the fix to one
(or more) of the ``release/vX.Y`` branches. Only the release manager is responsible for doing
backports, but Spack maintainers are responsible for labelling pull requests (and issues if no bug
fix is available yet) with ``vX.Y.Z`` labels. The label should correspond to the next patch version
that the bug fix should be backported to.
Backports are done publicly by the release manager using a pull request named ``Backports vX.Y.Z``.
This pull request is opened from the ``backports/vX.Y.Z`` branch, targets the ``releases/vX.Y``
branch and contains a (growing) list of cherry-picked commits from the ``develop`` branch.
Typically there are one or two backport pull requests open at any given time.
.. _major-releases:
@@ -802,25 +815,21 @@ For more on using GitHub project boards, see `GitHub's documentation
Making major releases
^^^^^^^^^^^^^^^^^^^^^
Assuming a project board has already been created and all required work
completed, the steps to make the major release are:
Assuming all required work from the milestone is completed, the steps to make the major release
are:
#. Create two new project boards:
#. `Create a new milestone <https://github.com/spack/spack/milestones>`_ for the next major
release.
* One for the next major release
* One for the next point release
#. `Create a new label <https://github.com/spack/spack/labels>`_ for the next patch release.
#. Move any optional tasks that are not done to one of the new project boards.
In general, small bugfixes should go to the next point release. Major
features, refactors, and changes that could affect concretization should
go in the next major release.
#. Move any optional tasks that are not done to the next milestone.
#. Create a branch for the release, based on ``develop``:
.. code-block:: console
$ git checkout -b releases/v0.15 develop
$ git checkout -b releases/v0.23 develop
For a version ``vX.Y.Z``, the branch's name should be
``releases/vX.Y``. That is, you should create a ``releases/vX.Y``
@@ -856,8 +865,8 @@ completed, the steps to make the major release are:
Create a pull request targeting the ``develop`` branch, bumping the major
version in ``lib/spack/spack/__init__.py`` with a ``dev0`` release segment.
For instance when you have just released ``v0.15.0``, set the version
to ``(0, 16, 0, 'dev0')`` on ``develop``.
For instance when you have just released ``v0.23.0``, set the version
to ``(0, 24, 0, 'dev0')`` on ``develop``.
#. Follow the steps in :ref:`publishing-releases`.
@@ -866,82 +875,52 @@ completed, the steps to make the major release are:
#. Follow the steps in :ref:`announcing-releases`.
.. _point-releases:
.. _patch-releases:
^^^^^^^^^^^^^^^^^^^^^
Making point releases
Making patch releases
^^^^^^^^^^^^^^^^^^^^^
Assuming a project board has already been created and all required work
completed, the steps to make the point release are:
To make the patch release process both efficient and transparent, we use a *backports pull request*
which contains cherry-picked commits from the ``develop`` branch. The majority of the work is to
cherry-pick the bug fixes, which ideally should be done as soon as they land on ``develop``:
this ensures cherry-picking happens in order, and makes conflicts easier to resolve since the
changes are fresh in the mind of the developer.
#. Create a new project board for the next point release.
The backports pull request is always titled ``Backports vX.Y.Z`` and is labelled ``backports``. It
is opened from a branch named ``backports/vX.Y.Z`` and targets the ``releases/vX.Y`` branch.
#. Move any optional tasks that are not done to the next project board.
Whenever a pull request labelled ``vX.Y.Z`` is merged, cherry-pick the associated squashed commit
on ``develop`` to the ``backports/vX.Y.Z`` branch. For pull requests that were rebased (or not
squashed), cherry-pick each associated commit individually. Never force push to the
``backports/vX.Y.Z`` branch.
#. Check out the release branch (it should already exist).
.. warning::
For the ``X.Y.Z`` release, the release branch is called ``releases/vX.Y``.
For ``v0.15.1``, you would check out ``releases/v0.15``:
#. If a pull request to the release branch named ``Backports vX.Y.Z`` is not already
   in the project, create it. This pull request ought to be created as early as
   possible when working on a release project, so that we can build the release
   commits incrementally, and identify potential conflicts at an early stage.

#. Cherry-pick each pull request in the ``Done`` column of the release
   project board onto the ``Backports vX.Y.Z`` pull request.

   This is **usually** fairly simple since we squash the commits from the
   vast majority of pull requests. That means there is only one commit
   per pull request to cherry-pick. For example, `this pull request
   <https://github.com/spack/spack/pull/15777>`_ has three commits, but
   they were squashed into a single commit on merge. You can see the
   commit that was created here:

   .. image:: images/pr-commit.png

   You can easily cherry-pick it like this (assuming you already have the
   release branch checked out):

   .. code-block:: console

      $ git cherry-pick 7e46da7

   For pull requests that were rebased (or not squashed), you'll need to
   cherry-pick each associated commit individually.

   .. warning::

      It is important to cherry-pick commits in the order they happened,
      otherwise you can get conflicts while cherry-picking. When
      cherry-picking, look at the merge date, **not** the number of the
      pull request or the date it was opened.

      Sometimes you may **still** get merge conflicts even if you have
      cherry-picked all the commits in order. This generally means there
      is some other intervening pull request that the one you're trying
      to pick depends on. In these cases, you'll need to make a judgment
      call regarding those pull requests. Consider the number of affected
      files and/or the resulting differences.

      1. If the changes are small, you might just cherry-pick it.

      2. If the changes are large, then you may decide that this fix is not
         worth including in a patch release, in which case you should remove
         the label from the pull request. Remember that large, manual backports
         are seldom the right choice for a patch release.

#. When all commits are cherry-picked in the ``backports/vX.Y.Z`` branch, make the patch
   release as follows:

#. `Create a new label <https://github.com/spack/spack/labels>`_ ``vX.Y.{Z+1}`` for the next patch
   release.

#. Replace the label ``vX.Y.Z`` with ``vX.Y.{Z+1}`` for all PRs and issues that are not done.

#. Manually push a single commit with commit message ``Set version to vX.Y.Z`` to the
   ``backports/vX.Y.Z`` branch, that both bumps the Spack version number and updates the changelog:

   1. Bump the version in ``lib/spack/spack/__init__.py``.
   2. Update ``CHANGELOG.md`` with a list of the changes.
@@ -950,20 +929,22 @@ completed, the steps to make the point release are:
   This is typically a summary of the commits you cherry-picked onto the
   release branch. See `the changelog from 0.14.1
   <https://github.com/spack/spack/commit/ff0abb9838121522321df2a054d18e54b566b44a>`_.
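   For step 1, the version bump itself is a one-line change. A minimal sketch, assuming
   the version is kept as a tuple in ``lib/spack/spack/__init__.py`` (the numbers below
   are illustrative):

   .. code-block:: python

      # lib/spack/spack/__init__.py -- hypothetical before/after for a patch release
      spack_version_info = (0, 22, 2)  # bumped from (0, 22, 1)
      spack_version = ".".join(str(v) for v in spack_version_info)  # -> "0.22.2"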
#. Make sure CI passes on the **backports pull request**, including:
* Regular unit tests
* Build tests
* The E4S pipeline at `gitlab.spack.io <https://gitlab.spack.io>`_
If CI does not pass, you'll need to figure out why, and make changes
to the release branch until it does. You can make more commits, modify
or remove cherry-picked commits, or cherry-pick **more** from
``develop`` to make this happen.
#. Merge the ``Backports vX.Y.Z`` PR with the **Rebase and merge** strategy. This
is needed to keep track in the release branch of all the commits that were
cherry-picked.
#. Make sure CI passes on the last commit of the **release branch**.
#. In the rare case you need to include additional commits in the patch release after the backports
PR is merged, it is best to delete the last commit ``Set version to vX.Y.Z`` from the release
branch with a single force push, open a new backports PR named ``Backports vX.Y.Z (2)``, and
repeat the process. Avoid repeated force pushes to the release branch.
#. Follow the steps in :ref:`publishing-releases`.
@@ -1038,25 +1019,31 @@ Updating `releases/latest`
If the new release is the **highest** Spack release yet, you should
also tag it as ``releases/latest``. For example, suppose the highest
release is currently ``0.22.3``:
* If you are releasing ``0.22.4`` or ``0.23.0``, then you should tag
  it with ``releases/latest``, as these are higher than ``0.22.3``.
* If you are making a new release of an **older** major version of
Spack, e.g. ``0.21.4``, then you should not tag it as
``releases/latest`` (as there are newer major versions).
To do so, first fetch the latest tag created on GitHub, since you may not have it locally:
.. code-block:: console
$ git fetch --force git@github.com:spack/spack vX.Y.Z
Then tag ``vX.Y.Z`` as ``releases/latest`` and push the individual tag to GitHub.
.. code-block:: console
$ git tag --force releases/latest vX.Y.Z
$ git push --force git@github.com:spack/spack releases/latest
The ``--force`` argument to ``git tag`` makes ``git`` overwrite the existing ``releases/latest``
tag with the new one. Do **not** use the ``--tags`` flag when pushing, since this will push *all*
local tags.
.. _announcing-releases:

View File

@@ -425,9 +425,13 @@ Developing Packages in a Spack Environment
The ``spack develop`` command allows one to develop Spack packages in
an environment. It requires a spec containing a concrete version, and
will configure Spack to install the package from local source.
If a version is not provided from the command line interface then spack
will automatically pick the highest version the package has defined.
This means any infinity versions (``develop``, ``main``, ``stable``) will be
preferred in this selection process.
By default, ``spack develop`` will also clone the package to a subdirectory in the
environment for the local source. This package will have a special variant ``dev_path``
set, and Spack will ensure the package and its dependents are rebuilt
any time the environment is installed if the package's local source
code has been modified. Spack's native implementation to check for modifications
@@ -669,6 +673,9 @@ them to the environment.
Environments can include files or URLs. File paths can be relative or
absolute. URLs include the path to the text for individual files or
can be the path to a directory containing configuration files.
Spack supports ``file``, ``http``, ``https`` and ``ftp`` protocols (or
schemes). Spack-specific, environment and user path variables may be
used in these paths. See :ref:`config-file-variables` for more information.
^^^^^^^^^^^^^^^^^^^^^^^^
Configuration precedence

(Binary image files changed, not shown: two images removed, 44 KiB and 68 KiB; one image added, 358 KiB.)

View File

@@ -457,11 +457,11 @@ For instance, the following config options,
tcl:
all:
suffixes:
^python@3: 'python{^python.version}'
^openblas: 'openblas'
will add a ``python-3.12.1`` version string to any packages compiled with
Python matching the spec, ``python@3``. This is useful to know which
version of Python a set of Python extensions is associated with. Likewise, the
``openblas`` string is attached to any program that has openblas in the spec,
most likely via the ``+blas`` variant specification.

View File

@@ -59,7 +59,7 @@ Functional Example
------------------
The simplest fully functional standalone example of a working pipeline can be
examined live at this example `project <https://gitlab.com/spack/pipeline-quickstart>`_
on gitlab.com.
Here's the ``.gitlab-ci.yml`` file from that example that builds and runs the
@@ -67,39 +67,46 @@ pipeline:
.. code-block:: yaml

   stages: [ "generate", "build" ]

   variables:
     SPACK_REPOSITORY: "https://github.com/spack/spack.git"
     SPACK_REF: "develop-2024-10-06"
     SPACK_USER_CONFIG_PATH: ${CI_PROJECT_DIR}
     SPACK_BACKTRACE: 1

   generate-pipeline:
     tags:
       - saas-linux-small-amd64
     stage: generate
     image:
       name: ghcr.io/spack/ubuntu20.04-runner-x86_64:2023-01-01
     script:
       - git clone ${SPACK_REPOSITORY}
       - cd spack && git checkout ${SPACK_REF} && cd ../
       - . "./spack/share/spack/setup-env.sh"
       - spack --version
       - spack env activate --without-view .
       - spack -d -v --color=always
         ci generate
         --check-index-only
         --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
         --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
     artifacts:
       paths:
         - "${CI_PROJECT_DIR}/jobs_scratch_dir"

   build-pipeline:
     stage: build
     trigger:
       include:
         - artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
           job: generate-pipeline
       strategy: depend
     needs:
       - artifacts: True
         job: generate-pipeline
The key thing to note above is that there are two jobs: The first job to run,
``generate-pipeline``, runs the ``spack ci generate`` command to generate a
@@ -114,82 +121,93 @@ And here's the spack environment built by the pipeline represented as a
spack:
view: false
concretizer:
unify: false
unify: true
reuse: false
definitions:
- pkgs:
- zlib
- bzip2
- arch:
- '%gcc@7.5.0 arch=linux-ubuntu18.04-x86_64'
- bzip2 ~debug
- compiler:
- '%gcc'
specs:
- matrix:
- - $pkgs
- - $arch
mirrors: { "mirror": "s3://spack-public/mirror" }
- - $compiler
ci:
enable-artifacts-buildcache: True
rebuild-index: False
target: gitlab
pipeline-gen:
- any-job:
before_script:
- git clone ${SPACK_REPO}
- pushd spack && git checkout ${SPACK_CHECKOUT_VERSION} && popd
- . "./spack/share/spack/setup-env.sh"
- build-job:
tags: [docker]
tags:
- saas-linux-small-amd64
image:
name: ghcr.io/scottwittenburg/ecpe4s-ubuntu18.04-runner-x86_64:2020-09-01
entrypoint: [""]
name: ghcr.io/spack/ubuntu20.04-runner-x86_64:2023-01-01
before_script:
- git clone ${SPACK_REPOSITORY}
- cd spack && git checkout ${SPACK_REF} && cd ../
- . "./spack/share/spack/setup-env.sh"
- spack --version
- export SPACK_USER_CONFIG_PATH=${CI_PROJECT_DIR}
- spack config blame mirrors
The elements of this file important to spack ci pipelines are described in more
detail below, but there are a couple of things to note about the above working
example:
.. note::
   The use of ``reuse: false`` in spack environments used for pipelines is
   almost always what you want, as without it your pipelines will not rebuild
   packages even if package hashes have changed. This is due to the concretizer
   strongly preferring known hashes when ``reuse: true``.
The ``ci`` section in the above environment file contains the bare minimum
configuration required for ``spack ci generate`` to create a working pipeline.
The ``target: gitlab`` tells spack that the desired pipeline output is for
gitlab. However, this isn't strictly required, as currently gitlab is the
only possible output format for pipelines. The ``pipeline-gen`` section
contains the key information needed to specify attributes for the generated
jobs. Notice that it contains a list which has only a single element in
this case. In real pipelines it will almost certainly have more elements,
and in those cases, order is important: spack starts at the bottom of the
list and works upwards when applying attributes.
Something to note is that in this simple case, we rely on spack to
generate a reasonable script for the package build jobs (it just creates
a script that invokes ``spack ci rebuild``).
Another thing to note is the use of the ``SPACK_USER_CONFIG_PATH`` environment
variable in any generated jobs. The purpose of this is to make spack
aware of one final file in the example, the one that contains the mirror
configuration. This file, ``mirrors.yaml``, looks like this:

.. code-block:: yaml
mirrors:
buildcache-destination:
url: oci://registry.gitlab.com/spack/pipeline-quickstart
binary: true
access_pair:
id_variable: CI_REGISTRY_USER
secret_variable: CI_REGISTRY_PASSWORD
Note the name of the mirror is ``buildcache-destination``, which is required
as of Spack 0.23 (see below for more information). The mirror url simply
points to the container registry associated with the project, while
``id_variable`` and ``secret_variable`` refer to environment variables
containing the access credentials for the mirror.
When spack builds packages for this example project, they will be pushed to
the project container registry, where they will be available for subsequent
jobs to install as dependencies, or for other pipelines to use to build runnable
container images.
-----------------------------------
Spack commands supporting pipelines
@@ -417,15 +435,6 @@ configuration with a ``script`` attribute. Specifying a signing job without a sc
does not create a signing job and the job configuration attributes will be ignored.
Signing jobs are always assigned the runner tags ``aws``, ``protected``, and ``notary``.
^^^^^^^^^^^^^^^^^
Cleanup (cleanup)
^^^^^^^^^^^^^^^^^
When using ``temporary-storage-url-prefix`` the cleanup job will destroy the mirror
created for the associated Gitlab pipeline. Cleanup jobs do not allow modifying the
script, but do expect that the spack command is in the path and require a
``before_script`` to be specified that sources the ``setup-env.sh`` script.
.. _noop_jobs:
^^^^^^^^^^^^
@@ -592,6 +601,77 @@ the attributes will be merged starting from the bottom match going up to the top
In the case that no match is found in a submapping section, no additional attributes will be applied.
^^^^^^^^^^^^^^^^^^^^^^^^
Dynamic Mapping Sections
^^^^^^^^^^^^^^^^^^^^^^^^
For large scale CI where cost optimization is required, dynamic mapping allows for the use of real-time
mapping schemes served by a web service. This type of mapping does not support the ``-remove`` type
behavior, but it does follow the rest of the merge rules for configurations.
The dynamic mapping service needs to implement a single REST API endpoint that handles
GET requests of the form
``GET <URL>[:PORT][/PATH]?spec=<pkg_name@pkg_version +variant1+variant2%compiler@compiler_version>``.
An example request:
.. code-block::
https://my-dyn-mapping.spack.io/allocation?spec=zlib-ng@2.1.6 +compat+opt+shared+pic+new_strategies arch=linux-ubuntu20.04-x86_64_v3%gcc@12.0.0
An example response that updates the Kubernetes request variables, overrides the max retries
for GitLab, and prepends a note about the modifications made by the my-dyn-mapping.spack.io service:
.. code-block::
200 OK
{
"variables":
{
"KUBERNETES_CPU_REQUEST": "500m",
"KUBERNETES_MEMORY_REQUEST": "2G",
},
"retry": { "max:": "1"}
"script+:":
[
"echo \"Job modified by my-dyn-mapping.spack.io\""
]
}
The ``ci.yaml`` configuration section takes the URL endpoint as well as a number of options
to configure how responses are handled.
It is possible to specify a list of allowed and ignored configuration attributes under
``allow`` and ``ignore`` respectively. It is also possible to configure required attributes
under the ``require`` section.
The client timeout and SSL verification can be configured with the ``timeout`` and
``verify_ssl`` options. By default, ``timeout`` is set to the option in ``config:timeout``
and ``verify_ssl`` is set to the option in ``config:verify_ssl``.
Passing header parameters to the request can be achieved through the ``header`` section.
The values of the variables passed to the header may be environment variables that are
expanded at runtime, such as a private token configured on the runner.
Here is an example configuration pointing to ``my-dyn-mapping.spack.io/allocation``.
.. code-block:: yaml
ci:
- dynamic-mapping:
endpoint: my-dyn-mapping.spack.io/allocation
timeout: 10
verify_ssl: True
header:
PRIVATE_TOKEN: ${MY_PRIVATE_TOKEN}
MY_CONFIG: "fuzz_allocation:false"
allow:
- variables
ignore:
- script
require: []
^^^^^^^^^^^^^
Bootstrapping
^^^^^^^^^^^^^
@@ -670,15 +750,6 @@ environment/stack file, and in that case no bootstrapping will be done (only the
specs will be staged for building) and the runners will be expected to already
have all needed compilers installed and configured for spack to use.
^^^^^^^^^^^^^^^^^^^
Pipeline Buildcache
^^^^^^^^^^^^^^^^^^^
The ``enable-artifacts-buildcache`` key
takes a boolean and determines whether the pipeline uses artifacts to store and
pass along the buildcaches from one stage to the next (the default if you don't
provide this option is ``False``).
^^^^^^^^^^^^^^^^
Broken Specs URL
^^^^^^^^^^^^^^^^

View File

@@ -1,13 +1,13 @@
sphinx==8.1.3
sphinxcontrib-programoutput==0.17
sphinx_design==0.6.1
sphinx-rtd-theme==3.0.1
python-levenshtein==0.26.0
docutils==0.20.1
pygments==2.18.0
urllib3==2.2.3
pytest==8.3.3
isort==5.13.2
black==24.10.0
flake8==7.1.1
mypy==1.11.1

View File

@@ -18,7 +18,7 @@
* Homepage: https://pypi.python.org/pypi/archspec
* Usage: Labeling, comparison and detection of microarchitectures
* Version: 0.2.5 (commit 38ce485258ffc4fc6dd6688f8dc90cb269478c47)
astunparse
----------------

View File

@@ -81,8 +81,13 @@ def __init__(self, name, parents, vendor, features, compilers, generation=0, cpu
self.generation = generation
# Only relevant for AArch64
self.cpu_part = cpu_part
# Cache the "ancestor" computation
self._ancestors = None
# Cache the "generic" computation
self._generic = None
# Cache the "family" computation
self._family = None
@property
def ancestors(self):
@@ -174,18 +179,22 @@ def __contains__(self, feature):
@property
def family(self):
"""Returns the architecture family a given target belongs to"""
if self._family is None:
    roots = [x for x in [self] + self.ancestors if not x.ancestors]
    msg = "a target is expected to belong to just one architecture family"
    msg += f"[found {', '.join(str(x) for x in roots)}]"
    assert len(roots) == 1, msg
    self._family = roots.pop()
return self._family
@property
def generic(self):
"""Returns the best generic architecture that is compatible with self"""
generics = [x for x in [self] + self.ancestors if x.vendor == "generic"]
return max(generics, key=lambda x: len(x.ancestors))
if self._generic is None:
generics = [x for x in [self] + self.ancestors if x.vendor == "generic"]
self._generic = max(generics, key=lambda x: len(x.ancestors))
return self._generic
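# Editorial note (not part of the diff): ancestors, family, and generic depend only
# on the static target definitions, so memoizing them on the instance is safe and
# makes repeated property lookups O(1) after the first computation.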
def to_dict(self):
"""Returns a dictionary representation of this object."""

View File

@@ -1482,7 +1482,6 @@
"cldemote",
"movdir64b",
"movdiri",
"pdcm",
"serialize",
"waitpkg"
],
@@ -2237,6 +2236,84 @@
]
}
},
"zen5": {
"from": ["zen4"],
"vendor": "AuthenticAMD",
"features": [
"abm",
"aes",
"avx",
"avx2",
"avx512_bf16",
"avx512_bitalg",
"avx512bw",
"avx512cd",
"avx512dq",
"avx512f",
"avx512ifma",
"avx512vbmi",
"avx512_vbmi2",
"avx512vl",
"avx512_vnni",
"avx512_vp2intersect",
"avx512_vpopcntdq",
"avx_vnni",
"bmi1",
"bmi2",
"clflushopt",
"clwb",
"clzero",
"cppc",
"cx16",
"f16c",
"flush_l1d",
"fma",
"fsgsbase",
"gfni",
"ibrs_enhanced",
"mmx",
"movbe",
"movdir64b",
"movdiri",
"pclmulqdq",
"popcnt",
"rdseed",
"sse",
"sse2",
"sse4_1",
"sse4_2",
"sse4a",
"ssse3",
"tsc_adjust",
"vaes",
"vpclmulqdq",
"xsavec",
"xsaveopt"
],
"compilers": {
"gcc": [
{
"versions": "14.1:",
"name": "znver5",
"flags": "-march={name} -mtune={name}"
}
],
"aocc": [
{
"versions": "5.0:",
"name": "znver5",
"flags": "-march={name} -mtune={name}"
}
],
"clang": [
{
"versions": "19.1:",
"name": "znver5",
"flags": "-march={name} -mtune={name}"
}
]
}
},
"ppc64": {
"from": [],
"vendor": "generic",

View File

@@ -41,6 +41,20 @@ def comma_and(sequence: List[str]) -> str:
return comma_list(sequence, "and")
def ordinal(number: int) -> str:
"""Return the ordinal representation (1st, 2nd, 3rd, etc.) for the provided number.
Args:
number: int to convert to ordinal number
Returns: number's corresponding ordinal
"""
idx = (number % 10) << 1
tens = number % 100 // 10
suffix = "th" if tens == 1 or idx > 6 else "thstndrd"[idx : idx + 2]
return f"{number}{suffix}"
def quote(sequence: List[str], q: str = "'") -> List[str]:
"""Quotes each item in the input list with the quote character passed as second argument."""
return [f"{q}{e}{q}" for e in sequence]

View File

@@ -47,6 +47,7 @@
"copy_mode",
"filter_file",
"find",
"find_first",
"find_headers",
"find_all_headers",
"find_libraries",

View File

@@ -263,7 +263,9 @@ def match_to_ansi(match):
f"Incomplete color format: '{match.group(0)}' in '{match.string}'"
)
ansi_code = _escape(f"{styles[style]};{colors.get(color_code, '')}", color, enclose, zsh)
color_number = colors.get(color_code, "")
semi = ";" if color_number else ""
ansi_code = _escape(f"{styles[style]}{semi}{color_number}", color, enclose, zsh)
if text:
return f"{ansi_code}{text}{_escape(0, color, enclose, zsh)}"
else:

View File

@@ -10,7 +10,6 @@
import errno
import io
import multiprocessing
import multiprocessing.connection
import os
import re
import select
@@ -19,9 +18,10 @@
import threading
import traceback
from contextlib import contextmanager
from multiprocessing.connection import Connection
from threading import Thread
from types import ModuleType
from typing import Callable, Optional
import llnl.util.tty as tty
@@ -345,49 +345,6 @@ def close(self):
self.file.close()
class MultiProcessFd:
"""Return an object which stores a file descriptor and can be passed as an
argument to a function run with ``multiprocessing.Process``, such that
the file descriptor is available in the subprocess."""
def __init__(self, fd):
self._connection = None
self._fd = None
if sys.version_info >= (3, 8):
self._connection = multiprocessing.connection.Connection(fd)
else:
self._fd = fd
@property
def fd(self):
if self._connection:
return self._connection._handle
else:
return self._fd
def close(self):
if self._connection:
self._connection.close()
else:
os.close(self._fd)
def close_connection_and_file(multiprocess_fd, file):
# MultiprocessFd is intended to transmit a FD
# to a child process, this FD is then opened to a Python File object
# (using fdopen). In >= 3.8, MultiprocessFd encapsulates a
# multiprocessing.connection.Connection; Connection closes the FD
# when it is deleted, and prints a warning about duplicate closure if
# it is not explicitly closed. In < 3.8, MultiprocessFd encapsulates a
# simple FD; closing the FD here appears to conflict with
# closure of the File object (in < 3.8 that is). Therefore this needs
# to choose whether to close the File or the Connection.
if sys.version_info >= (3, 8):
multiprocess_fd.close()
else:
file.close()
@contextmanager
def replace_environment(env):
"""Replace the current environment (`os.environ`) with `env`.
@@ -545,22 +502,20 @@ def __enter__(self):
# forcing debug output.
self._saved_debug = tty._debug
# Pipe for redirecting output to logger
read_fd, self.write_fd = multiprocessing.Pipe(duplex=False)
# Pipe for communication back from the daemon
# Currently only used to save echo value between uses
self.parent_pipe, child_pipe = multiprocessing.Pipe(duplex=False)
# Sets a daemon that writes to file what it reads from a pipe
try:
    # need to pass this b/c multiprocessing closes stdin in child.
    input_fd = None
    try:
        if sys.stdin.isatty():
            input_fd = Connection(os.dup(sys.stdin.fileno()))
    except BaseException:
        # just don't forward input if this fails
        pass
@@ -569,9 +524,9 @@ def __enter__(self):
self.process = multiprocessing.Process(
target=_writer_daemon,
args=(
input_fd,
read_fd,
self.write_fd,
self.echo,
self.log_file,
child_pipe,
@@ -582,9 +537,9 @@ def __enter__(self):
self.process.start()
finally:
if input_fd:
    input_fd.close()
read_fd.close()
# Flush immediately before redirecting so that anything buffered
# goes to the original stream
@@ -602,9 +557,9 @@ def __enter__(self):
self._saved_stderr = os.dup(sys.stderr.fileno())
# redirect to the pipe we created above
os.dup2(self.write_fd.fileno(), sys.stdout.fileno())
os.dup2(self.write_fd.fileno(), sys.stderr.fileno())
self.write_fd.close()
else:
# Handle I/O the Python way. This won't redirect lower-level
@@ -617,7 +572,7 @@ def __enter__(self):
self._saved_stderr = sys.stderr
# create a file object for the pipe; redirect to it.
pipe_fd_out = os.fdopen(write_fd, "w")
pipe_fd_out = os.fdopen(self.write_fd.fileno(), "w", closefd=False)
sys.stdout = pipe_fd_out
sys.stderr = pipe_fd_out
@@ -653,6 +608,7 @@ def __exit__(self, exc_type, exc_val, exc_tb):
else:
sys.stdout = self._saved_stdout
sys.stderr = self._saved_stderr
self.write_fd.close()
# print log contents in parent if needed.
if self.log_file.write_in_parent:
@@ -866,14 +822,14 @@ def force_echo(self):
def _writer_daemon(
stdin_fd: Optional[Connection],
read_fd: Connection,
write_fd: Connection,
echo: bool,
log_file_wrapper: FileWrapper,
control_fd: Connection,
filter_fn: Optional[Callable[[str], str]],
) -> None:
"""Daemon used by ``log_output`` to write to a log file and to ``stdout``.
The daemon receives output from the parent process and writes it both
@@ -910,43 +866,37 @@ def _writer_daemon(
``StringIO`` in the parent. This is mainly for testing.
Arguments:
stdin_fd: optional input from the terminal
read_fd: pipe for reading from parent's redirected stdout
echo: initial echo setting -- controlled by user and preserved across multiple writer
daemons
log_file_wrapper: file to log all output
control_fd: multiprocessing pipe on which to send control information to the parent
filter_fn: optional function to filter each line of output
"""
# This process depends on closing all instances of write_fd to terminate the reading loop
write_fd.close()
# 1. Use line buffering (3rd param = 1) since Python 3 has a bug
# that prevents unbuffered text I/O.
# 2. Python 3.x before 3.7 does not open with UTF-8 encoding by default
in_pipe = os.fdopen(read_multiprocess_fd.fd, "r", 1, encoding="utf-8")
# 3. closefd=False because Connection has "ownership"
read_file = os.fdopen(read_fd.fileno(), "r", 1, encoding="utf-8", closefd=False)
if stdin_fd:
    stdin_file = os.fdopen(stdin_fd.fileno(), closefd=False)
else:
    stdin_file = None
# list of streams to select from
istreams = [read_file, stdin_file] if stdin_file else [read_file]
force_echo = False # parent can force echo for certain output
log_file = log_file_wrapper.unwrap()
try:
with keyboard_input(stdin_file) as kb:
while True:
# fix the terminal settings if we recently came to
# the foreground
@@ -959,12 +909,12 @@ def _writer_daemon(
# Allow user to toggle echo with 'v' key.
# Currently ignores other chars.
# only read stdin if we're in the foreground
if stdin_file and stdin_file in rlist and not _is_background_tty(stdin_file):
    # it's possible to be backgrounded between the above
    # check and the read, so we ignore SIGTTIN here.
    with ignore_signal(signal.SIGTTIN):
        try:
            if stdin_file.read(1) == "v":
echo = not echo
except IOError as e:
# If SIGTTIN is ignored, the system gives EIO
@@ -973,13 +923,13 @@ def _writer_daemon(
if e.errno != errno.EIO:
raise
if read_file in rlist:
    line_count = 0
    try:
        while line_count < 100:
            # Handle output from the calling process.
            try:
                line = _retry(read_file.readline)()
except UnicodeDecodeError:
# installs like --test=root gpgme produce non-UTF8 logs
line = "<line lost: output was not encoded as UTF-8>\n"
@@ -1008,7 +958,7 @@ def _writer_daemon(
if xoff in controls:
force_echo = False
if not _input_available(read_file):
break
finally:
if line_count > 0:
@@ -1023,14 +973,14 @@ def _writer_daemon(
finally:
# send written data back to parent if we used a StringIO
if isinstance(log_file, io.StringIO):
    control_fd.send(log_file.getvalue())
log_file_wrapper.close()
read_fd.close()
if stdin_fd:
    stdin_fd.close()
# send echo value back to the parent so it can be preserved.
control_fd.send(echo)
def _retry(function):

View File

@@ -69,4 +69,15 @@ def get_version() -> str:
return spack_version
__all__ = ["spack_version_info", "spack_version", "get_version", "get_spack_commit"]
def get_short_version() -> str:
"""Short Spack version."""
return f"{spack_version_info[0]}.{spack_version_info[1]}"
__all__ = [
"spack_version_info",
"spack_version",
"get_version",
"get_spack_commit",
"get_short_version",
]

View File

@@ -35,6 +35,7 @@
import spack.caches
import spack.config as config
import spack.database as spack_db
import spack.deptypes as dt
import spack.error
import spack.hash_types as ht
import spack.hooks
@@ -251,7 +252,7 @@ def _associate_built_specs_with_mirror(self, cache_key, mirror_url):
spec_list = [
s
for s in db.query_local(installed=any)
if s.external or db.query_local_by_spec_hash(s.dag_hash()).in_buildcache
]
@@ -712,15 +713,32 @@ def get_buildfile_manifest(spec):
return data
def deps_to_relocate(spec):
"""Return the transitive link and direct run dependencies of the spec.
This is a special traversal for dependencies we need to consider when relocating a package.
Package binaries, scripts, and other files may refer to the prefixes of dependencies, so
we need to rewrite those locations when dependencies are in a different place at install time
than they were at build time.
This traversal covers transitive link dependencies and direct run dependencies because:
1. Spack adds RPATHs for transitive link dependencies so that packages can find needed
dependency libraries.
2. Packages may call any of their *direct* run dependencies (and may bake their paths into
binaries or scripts), so we also need to search for run dependency prefixes when relocating.
This returns a deduplicated list of transitive link dependencies and direct run dependencies.
"""
deps = [
s
for s in itertools.chain(
spec.traverse(root=True, deptype="link"), spec.dependencies(deptype="run")
)
if not s.external
]
return llnl.util.lang.dedupe(deps, key=lambda s: s.dag_hash())
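# Illustrative example (not part of the diff): for a root spec `foo` with
# transitive link deps openssl -> zlib, a direct run dep python, and a
# build-only dep cmake, deps_to_relocate(foo) yields foo, openssl, zlib, and
# python; the build-only cmake prefix is not searched for during relocation.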
def get_buildinfo_dict(spec):
@@ -736,7 +754,7 @@ def get_buildinfo_dict(spec):
"relocate_binaries": manifest["binary_to_relocate"],
"relocate_links": manifest["link_to_relocate"],
"hardlinks_deduped": manifest["hardlinks_deduped"],
"hash_to_prefix": hashes_to_prefixes(spec),
"hash_to_prefix": {d.dag_hash(): str(d.prefix) for d in deps_to_relocate(spec)},
}
@@ -1631,7 +1649,6 @@ def _oci_push(
Dict[str, spack.oci.oci.Blob],
List[Tuple[Spec, BaseException]],
]:
# Spec dag hash -> blob
checksums: Dict[str, spack.oci.oci.Blob] = {}
@@ -2201,11 +2218,36 @@ def relocate_package(spec):
# First match specific prefix paths. Possibly the *local* install prefix
# of some dependency is in an upstream, so we cannot assume the original
# spack store root can be mapped uniformly to the new spack store root.
#
# If the spec is spliced, we need to handle the simultaneous mapping
# from the old install_tree to the new install_tree and from the build_spec
# to the spliced spec.
# Because foo.build_spec is foo for any non-spliced spec, we can simplify
# by checking for spliced-in nodes by checking for nodes not in the build_spec
# without any explicit check for whether the spec is spliced.
# An analog in this algorithm is any spec that shares a name or provides the same virtuals
# in the context of the relevant root spec. This ensures that the analog for a spec s
# is the spec that s replaced when we spliced.
relocation_specs = deps_to_relocate(spec)
build_spec_ids = set(id(s) for s in spec.build_spec.traverse(deptype=dt.ALL & ~dt.BUILD))
for s in relocation_specs:
analog = s
if id(s) not in build_spec_ids:
analogs = [
d
for d in spec.build_spec.traverse(deptype=dt.ALL & ~dt.BUILD)
if s._splice_match(d, self_root=spec, other_root=spec.build_spec)
]
if analogs:
# Prefer same-name analogs and prefer higher versions
# This matches the preferences in Spec.splice, so we will find same node
analog = max(analogs, key=lambda a: (a.name == s.name, a.version))
lookup_dag_hash = analog.dag_hash()
if lookup_dag_hash in hash_to_old_prefix:
old_dep_prefix = hash_to_old_prefix[lookup_dag_hash]
prefix_to_prefix_bin[old_dep_prefix] = str(s.prefix)
prefix_to_prefix_text[old_dep_prefix] = str(s.prefix)
# Only then add the generic fallback of install prefix -> install prefix.
prefix_to_prefix_text[old_prefix] = new_prefix
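# Editorial note: the per-dependency mappings above are inserted before this
# generic fallback on the assumption that substitutions are applied in order,
# so the more specific dependency prefixes win over the store-root rewrite.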
@@ -2520,7 +2562,13 @@ def _ensure_common_prefix(tar: tarfile.TarFile) -> str:
return pkg_prefix
def install_root_node(
spec: spack.spec.Spec,
unsigned=False,
force: bool = False,
sha256: Optional[str] = None,
allow_missing: bool = False,
) -> None:
"""Install the root node of a concrete spec from a buildcache.
Checking the sha256 sum of a node before installation is usually needed only
@@ -2529,11 +2577,10 @@ def install_root_node(spec, unsigned=False, force=False, sha256=None):
Args:
spec: spec to be installed (note that only the root node will be installed)
unsigned: if True allows installing unsigned binaries
force: force installation if the spec is already present in the local store
sha256: optional sha256 of the binary package, to be checked before installation
allow_missing: when true, allows installing a node with missing dependencies
"""
# Early termination
if spec.external or spec.virtual:
@@ -2543,10 +2590,10 @@ def install_root_node(spec, unsigned=False, force=False, sha256=None):
warnings.warn("Package for spec {0} already installed.".format(spec.format()))
return
download_result = download_tarball(spec.build_spec, unsigned)
if not download_result:
msg = 'download of binary cache file for spec "{0}" failed'
raise RuntimeError(msg.format(spec.build_spec.format()))
if sha256:
checker = spack.util.crypto.Checker(sha256)
@@ -2565,8 +2612,13 @@ def install_root_node(spec, unsigned=False, force=False, sha256=None):
with spack.util.path.filter_padding():
tty.msg('Installing "{0}" from a buildcache'.format(spec.format()))
extract_tarball(spec, download_result, force)
spec.package.windows_establish_runtime_linkage()
if spec.spliced: # overwrite old metadata with new
spack.store.STORE.layout.write_spec(
spec, spack.store.STORE.layout.spec_file_path(spec)
)
spack.hooks.post_install(spec, False)
spack.store.STORE.db.add(spec, allow_missing=allow_missing)
def install_single_spec(spec, unsigned=False, force=False):

View File

@@ -4,6 +4,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Common basic functions used through the spack.bootstrap package"""
import fnmatch
import glob
import importlib
import os.path
import re
@@ -60,10 +61,19 @@ def _try_import_from_store(
python, *_ = candidate_spec.dependencies("python-venv")
else:
python, *_ = candidate_spec.dependencies("python")
# if python is installed, ask it for the layout
if python.installed:
module_paths = [
os.path.join(candidate_spec.prefix, python.package.purelib),
os.path.join(candidate_spec.prefix, python.package.platlib),
]
# otherwise search for the site-packages directory
# (clingo from binaries with truncated python-venv runtime)
else:
module_paths = glob.glob(
os.path.join(candidate_spec.prefix, "lib", "python*", "site-packages")
)
path_before = list(sys.path)
# NOTE: try module_paths first and last, last allows an existing version in path

View File

@@ -37,6 +37,7 @@
import spack.binary_distribution
import spack.config
import spack.detection
import spack.mirror
import spack.platforms
import spack.spec
import spack.store
@@ -91,12 +92,7 @@ def __init__(self, conf: ConfigDictionary) -> None:
self.metadata_dir = spack.util.path.canonicalize_path(conf["metadata"])
# Promote (relative) paths to file urls
url = conf["info"]["url"]
if spack.util.url.is_path_instead_of_url(url):
if not os.path.isabs(url):
url = os.path.join(self.metadata_dir, url)
url = spack.util.url.path_to_file_url(url)
self.url = url
self.url = spack.mirror.Mirror(conf["info"]["url"]).fetch_url
@property
def mirror_scope(self) -> spack.config.InternalConfigScope:
@@ -175,7 +171,15 @@ def _install_by_hash(
query = spack.binary_distribution.BinaryCacheQuery(all_architectures=True)
for match in spack.store.find([f"/{pkg_hash}"], multiple=False, query_fn=query):
spack.binary_distribution.install_root_node(
# allow_missing is true since when bootstrapping clingo we truncate runtime
# deps such as gcc-runtime, since we link libstdc++ statically, and the other
# further runtime deps are loaded by the Python interpreter. This just silences
# warnings about missing dependencies.
match,
unsigned=True,
force=True,
sha256=pkg_sha256,
allow_missing=True,
)
def _install_and_test(

View File

@@ -44,6 +44,7 @@
from collections import defaultdict
from enum import Flag, auto
from itertools import chain
from multiprocessing.connection import Connection
from typing import Callable, Dict, List, Optional, Set, Tuple
import archspec.cpu
@@ -54,7 +55,6 @@
from llnl.util.lang import dedupe, stable_partition
from llnl.util.symlink import symlink
from llnl.util.tty.color import cescape, colorize
from llnl.util.tty.log import MultiProcessFd
import spack.build_systems._checks
import spack.build_systems.cmake
@@ -91,7 +91,7 @@
)
from spack.util.executable import Executable
from spack.util.log_parse import make_log_context, parse_log_events
from spack.util.module_cmd import load_module
#
# This can be set by the user to globally disable parallel builds.
@@ -617,13 +617,11 @@ def set_package_py_globals(pkg, context: Context = Context.BUILD):
"""
module = ModuleChangePropagator(pkg)
jobs = spack.config.determine_number_of_jobs(parallel=pkg.parallel)
module.make_jobs = jobs
if context == Context.BUILD:
module.std_meson_args = spack.build_systems.meson.MesonBuilder.std_args(pkg)
module.std_pip_args = spack.build_systems.python.PythonPipBuilder.std_args(pkg)
# TODO: make these build deps that can be installed if not found.
module.make = MakeExecutable("make", jobs)
@@ -792,21 +790,6 @@ def get_rpath_deps(pkg: spack.package_base.PackageBase) -> List[spack.spec.Spec]
return _get_rpath_deps_from_spec(pkg.spec, pkg.transitive_rpaths)
def get_rpaths(pkg):
"""Get a list of all the rpaths for a package."""
rpaths = [pkg.prefix.lib, pkg.prefix.lib64]
deps = get_rpath_deps(pkg)
rpaths.extend(d.prefix.lib for d in deps if os.path.isdir(d.prefix.lib))
rpaths.extend(d.prefix.lib64 for d in deps if os.path.isdir(d.prefix.lib64))
# Second module is our compiler mod name. We use that to get rpaths from
# module show output.
if pkg.compiler.modules and len(pkg.compiler.modules) > 1:
mod_rpath = path_from_modules([pkg.compiler.modules[1]])
if mod_rpath:
rpaths.append(mod_rpath)
return list(dedupe(filter_system_paths(rpaths)))
def load_external_modules(pkg):
"""Traverse a package's spec DAG and load any external modules.
@@ -1063,6 +1046,12 @@ def set_all_package_py_globals(self):
# This includes runtime dependencies, also runtime deps of direct build deps.
set_package_py_globals(pkg, context=Context.RUN)
# Looping over the set of packages a second time
# ensures all globals are loaded into the module space prior to
# any package setup. This guarantees package setup methods have
# access to expected module level definitions such as "spack_cc"
for dspec, flag in chain(self.external, self.nonexternal):
pkg = dspec.package
for spec in dspec.dependents():
# Note: some specs have dependents that are unreachable from the root, so avoid
# setting globals for those.
@@ -1072,6 +1061,15 @@ def set_all_package_py_globals(self):
pkg.setup_dependent_package(dependent_module, spec)
dependent_module.propagate_changes_to_mro()
if self.context == Context.BUILD:
pkg = self.specs[0].package
module = ModuleChangePropagator(pkg)
# std_cmake_args is not sufficiently static to be defined
# in set_package_py_globals and is deprecated so its handled
# here as a special case
module.std_cmake_args = spack.build_systems.cmake.CMakeBuilder.std_args(pkg)
module.propagate_changes_to_mro()
def get_env_modifications(self) -> EnvironmentModifications:
"""Returns the environment variable modifications for the given input specs and context.
Environment modifications include:
@@ -1141,40 +1139,14 @@ def _make_runnable(self, dep: spack.spec.Spec, env: EnvironmentModifications):
env.prepend_path("PATH", bin_dir)
def get_cmake_prefix_path(pkg):
# Note that unlike modifications_from_dependencies, this does not include
# any edits to CMAKE_PREFIX_PATH defined in custom
# setup_dependent_build_environment implementations of dependency packages
build_deps = set(pkg.spec.dependencies(deptype=("build", "test")))
link_deps = set(pkg.spec.traverse(root=False, deptype=("link")))
build_link_deps = build_deps | link_deps
spack_built = []
externals = []
# modifications_from_dependencies updates CMAKE_PREFIX_PATH by first
# prepending all externals and then all non-externals
for dspec in pkg.spec.traverse(root=False, order="post"):
if dspec in build_link_deps:
if dspec.external:
externals.insert(0, dspec)
else:
spack_built.insert(0, dspec)
ordered_build_link_deps = spack_built + externals
cmake_prefix_path_entries = []
for spec in ordered_build_link_deps:
cmake_prefix_path_entries.extend(spec.package.cmake_prefix_paths)
return filter_system_paths(cmake_prefix_path_entries)
def _setup_pkg_and_run(
serialized_pkg: "spack.subprocess_context.PackageInstallContext",
function: Callable,
kwargs: Dict,
write_pipe: Connection,
input_pipe: Optional[Connection],
jsfd1: Optional[Connection],
jsfd2: Optional[Connection],
):
"""Main entry point in the child process for Spack builds.
@@ -1216,13 +1188,12 @@ def _setup_pkg_and_run(
context: str = kwargs.get("context", "build")
try:
# We are in the child process. Python sets sys.stdin to open(os.devnull) to prevent our
# process and its parent from simultaneously reading from the original stdin. But, we
# assume that the parent process is not going to read from it till we are done with the
# child, so we undo Python's precaution. closefd=False since Connection has ownership.
if input_pipe is not None:
    sys.stdin = os.fdopen(input_pipe.fileno(), closefd=False)
pkg = serialized_pkg.restore()
@@ -1245,7 +1216,7 @@ def _setup_pkg_and_run(
# objects can't be sent to the parent.
exc_type = type(e)
tb = e.__traceback__
tb_string = "".join(traceback.format_exception(exc_type, e, tb))
# build up some context from the offending package so we can
# show that, too.
@@ -1291,8 +1262,8 @@ def _setup_pkg_and_run(
finally:
write_pipe.close()
if input_pipe is not None:
    input_pipe.close()
def start_build_process(pkg, function, kwargs):
@@ -1319,23 +1290,9 @@ def child_fun():
If something goes wrong, the child process catches the error and
passes it to the parent wrapped in a ChildError. The parent is
expected to handle (or re-raise) the ChildError.
This uses `multiprocessing.Process` to create the child process. The
mechanism used to create the process differs on different operating
systems and for different versions of Python. In some cases "fork"
is used (i.e. the "fork" system call) and some cases it starts an
entirely new Python interpreter process (in the docs this is referred
to as the "spawn" start method). Breaking it down by OS:
- Linux always uses fork.
- Mac OS uses fork before Python 3.8 and "spawn" for 3.8 and after.
- Windows always uses the "spawn" start method.
For more information on `multiprocessing` child process creation
mechanisms, see https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
"""
read_pipe, write_pipe = multiprocessing.Pipe(duplex=False)
input_fd = None
jobserver_fd1 = None
jobserver_fd2 = None
@@ -1344,14 +1301,13 @@ def child_fun():
try:
# Forward sys.stdin when appropriate, to allow toggling verbosity
if sys.platform != "win32" and sys.stdin.isatty() and hasattr(sys.stdin, "fileno"):
input_fd = Connection(os.dup(sys.stdin.fileno()))
mflags = os.environ.get("MAKEFLAGS", False)
if mflags:
m = re.search(r"--jobserver-[^=]*=(\d),(\d)", mflags)
if m:
jobserver_fd1 = Connection(int(m.group(1)))
jobserver_fd2 = Connection(int(m.group(2)))
p = multiprocessing.Process(
target=_setup_pkg_and_run,
@@ -1360,7 +1316,7 @@ def child_fun():
function,
kwargs,
write_pipe,
input_fd,
jobserver_fd1,
jobserver_fd2,
),
@@ -1380,8 +1336,8 @@ def child_fun():
finally:
# Close the input stream in the parent process
if input_fd is not None:
    input_fd.close()
def exitcode_msg(p):
typ = "exit" if p.exitcode >= 0 else "signal"

View File

@@ -10,7 +10,6 @@
import llnl.util.filesystem as fs
import llnl.util.tty as tty
import spack.build_environment
import spack.builder
from .cmake import CMakeBuilder, CMakePackage
@@ -297,18 +296,6 @@ def initconfig_hardware_entries(self):
def std_initconfig_entries(self):
cmake_prefix_path_env = os.environ["CMAKE_PREFIX_PATH"]
cmake_prefix_path = cmake_prefix_path_env.replace(os.pathsep, ";")
cmake_rpaths_env = spack.build_environment.get_rpaths(self.pkg)
cmake_rpaths_path = ";".join(cmake_rpaths_env)
complete_rpath_list = cmake_rpaths_path
if "SPACK_COMPILER_EXTRA_RPATHS" in os.environ:
spack_extra_rpaths_env = os.environ["SPACK_COMPILER_EXTRA_RPATHS"]
spack_extra_rpaths_path = spack_extra_rpaths_env.replace(os.pathsep, ";")
complete_rpath_list = "{0};{1}".format(complete_rpath_list, spack_extra_rpaths_path)
if "SPACK_COMPILER_IMPLICIT_RPATHS" in os.environ:
spack_implicit_rpaths_env = os.environ["SPACK_COMPILER_IMPLICIT_RPATHS"]
spack_implicit_rpaths_path = spack_implicit_rpaths_env.replace(os.pathsep, ";")
complete_rpath_list = "{0};{1}".format(complete_rpath_list, spack_implicit_rpaths_path)
return [
"#------------------{0}".format("-" * 60),
@@ -318,8 +305,6 @@ def std_initconfig_entries(self):
"#------------------{0}\n".format("-" * 60),
cmake_cache_string("CMAKE_PREFIX_PATH", cmake_prefix_path),
cmake_cache_string("CMAKE_INSTALL_RPATH_USE_LINK_PATH", "ON"),
cmake_cache_string("CMAKE_BUILD_RPATH", complete_rpath_list),
cmake_cache_string("CMAKE_INSTALL_RPATH", complete_rpath_list),
self.define_cmake_cache_from_variant("CMAKE_BUILD_TYPE", "build_type"),
]

View File

@@ -8,17 +8,19 @@
import platform
import re
import sys
from itertools import chain
from typing import List, Optional, Set, Tuple
import llnl.util.filesystem as fs
from llnl.util.lang import stable_partition
import spack.build_environment
import spack.builder
import spack.deptypes as dt
import spack.error
import spack.package_base
from spack.directives import build_system, conflicts, depends_on, variant
from spack.multimethod import when
from spack.util.environment import filter_system_paths
from ._checks import BaseBuilder, execute_build_time_tests
@@ -152,6 +154,24 @@ def _values(x):
conflicts(f"generator={x}")
def get_cmake_prefix_path(pkg: spack.package_base.PackageBase) -> List[str]:
"""Obtain the CMAKE_PREFIX_PATH entries for a package, based on the cmake_prefix_path package
attribute of direct build/test and transitive link dependencies."""
# Add direct build/test deps
selected: Set[str] = {s.dag_hash() for s in pkg.spec.dependencies(deptype=dt.BUILD | dt.TEST)}
# Add transitive link deps
selected.update(s.dag_hash() for s in pkg.spec.traverse(root=False, deptype=dt.LINK))
# Separate out externals so they do not shadow Spack prefixes
externals, spack_built = stable_partition(
(s for s in pkg.spec.traverse(root=False, order="topo") if s.dag_hash() in selected),
lambda x: x.external,
)
return filter_system_paths(
path for spec in chain(spack_built, externals) for path in spec.package.cmake_prefix_paths
)
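# Illustrative (not part of the diff): for a package with build deps {ninja},
# test deps {catch2}, and transitive link deps hdf5 -> zlib, the selected set
# is {ninja, catch2, hdf5, zlib}; Spack-built prefixes are listed before
# external ones so that external (often system) packages cannot shadow them.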
class CMakePackage(spack.package_base.PackageBase):
"""Specialized class for packages built using CMake
@@ -358,6 +378,16 @@ def std_args(pkg, generator=None):
"-G",
generator,
define("CMAKE_INSTALL_PREFIX", pathlib.Path(pkg.prefix).as_posix()),
define("CMAKE_INSTALL_RPATH_USE_LINK_PATH", True),
# only include the install prefix lib dirs; rpaths for deps are added by USE_LINK_PATH
define(
"CMAKE_INSTALL_RPATH",
[
pathlib.Path(pkg.prefix, "lib").as_posix(),
pathlib.Path(pkg.prefix, "lib64").as_posix(),
],
),
define("CMAKE_PREFIX_PATH", get_cmake_prefix_path(pkg)),
define("CMAKE_BUILD_TYPE", build_type),
]
@@ -372,15 +402,6 @@ def std_args(pkg, generator=None):
_conditional_cmake_defaults(pkg, args)
_maybe_set_python_hints(pkg, args)
# Set up CMake rpath
args.extend(
[
define("CMAKE_INSTALL_RPATH_USE_LINK_PATH", True),
define("CMAKE_INSTALL_RPATH", spack.build_environment.get_rpaths(pkg)),
define("CMAKE_PREFIX_PATH", spack.build_environment.get_cmake_prefix_path(pkg)),
]
)
return args
@staticmethod
@@ -541,6 +562,13 @@ def cmake_args(self):
def cmake(self, pkg, spec, prefix):
"""Runs ``cmake`` in the build directory"""
# skip cmake phase if it is an incremental develop build
if spec.is_develop and os.path.isfile(
os.path.join(self.build_directory, "CMakeCache.txt")
):
return
options = self.std_cmake_args
options += self.cmake_args()
options.append(os.path.abspath(self.root_cmakelists_dir))

View File

@@ -110,8 +110,8 @@ def compute_capabilities(arch_list: Iterable[str]) -> List[str]:
depends_on("cuda@5.0:10.2", when="cuda_arch=30")
depends_on("cuda@5.0:10.2", when="cuda_arch=32")
depends_on("cuda@5.0:", when="cuda_arch=35")
depends_on("cuda@6.5:", when="cuda_arch=37")
depends_on("cuda@5.0:11.8", when="cuda_arch=35")
depends_on("cuda@6.5:11.8", when="cuda_arch=37")
depends_on("cuda@6.0:", when="cuda_arch=50")
depends_on("cuda@6.5:", when="cuda_arch=52")
@@ -131,6 +131,7 @@ def compute_capabilities(arch_list: Iterable[str]) -> List[str]:
depends_on("cuda@11.8:", when="cuda_arch=89")
depends_on("cuda@12.0:", when="cuda_arch=90")
depends_on("cuda@12.0:", when="cuda_arch=90a")
# From the NVIDIA install guide we know of conflicts for particular
# platforms (linux, darwin), architectures (x86, powerpc) and compilers
@@ -149,7 +150,6 @@ def compute_capabilities(arch_list: Iterable[str]) -> List[str]:
# minimum supported versions
conflicts("%gcc@:4", when="+cuda ^cuda@11.0:")
conflicts("%gcc@:5", when="+cuda ^cuda@11.4:")
conflicts("%gcc@:7.2", when="+cuda ^cuda@12.4:")
conflicts("%clang@:6", when="+cuda ^cuda@12.2:")
# maximum supported version

View File

@@ -10,6 +10,7 @@
import os
import re
import shutil
import ssl
import stat
import subprocess
import sys
@@ -19,21 +20,21 @@
from collections import defaultdict, namedtuple
from typing import Dict, List, Optional, Set, Tuple
from urllib.error import HTTPError, URLError
from urllib.parse import quote, urlencode, urlparse
from urllib.request import HTTPHandler, HTTPSHandler, Request, build_opener
import ruamel.yaml
import llnl.util.filesystem as fs
import llnl.util.tty as tty
from llnl.util.lang import Singleton, memoized
from llnl.util.tty.color import cescape, colorize
import spack
import spack.binary_distribution as bindist
import spack.concretize
import spack.config as cfg
import spack.environment as ev
import spack.error
import spack.main
import spack.mirror
import spack.paths
@@ -50,6 +51,31 @@
from spack.reporters.cdash import SPACK_CDASH_TIMEOUT
from spack.reporters.cdash import build_stamp as cdash_build_stamp
def _urlopen():
error_handler = web_util.SpackHTTPDefaultErrorHandler()
# One opener with HTTPS ssl enabled
with_ssl = build_opener(
HTTPHandler(), HTTPSHandler(context=web_util.ssl_create_default_context()), error_handler
)
# One opener with HTTPS ssl disabled
without_ssl = build_opener(
HTTPHandler(), HTTPSHandler(context=ssl._create_unverified_context()), error_handler
)
# And dynamically dispatch based on the config:verify_ssl.
def dispatch_open(fullurl, data=None, timeout=None, verify_ssl=True):
opener = with_ssl if verify_ssl else without_ssl
timeout = timeout or spack.config.get("config:connect_timeout", 1)
return opener.open(fullurl, data, timeout)
return dispatch_open
_dyn_mapping_urlopener = Singleton(_urlopen)
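A self-contained sketch of the dispatch pattern above, stripped of the Spack error handler and config lookup (replaced by plain defaults here): both openers are built once, then one is chosen per request.

import ssl
from urllib.request import HTTPHandler, HTTPSHandler, build_opener

with_ssl = build_opener(HTTPHandler(), HTTPSHandler(context=ssl.create_default_context()))
without_ssl = build_opener(HTTPHandler(), HTTPSHandler(context=ssl._create_unverified_context()))

def dispatch_open(fullurl, data=None, timeout=10, verify_ssl=True):
    # Pick the SSL-verifying or non-verifying opener per call.
    opener = with_ssl if verify_ssl else without_ssl
    return opener.open(fullurl, data, timeout)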
# See https://docs.gitlab.com/ee/ci/yaml/#retry for descriptions of conditions
JOB_RETRY_CONDITIONS = [
# "always",
@@ -69,8 +95,6 @@
TEMP_STORAGE_MIRROR_NAME = "ci_temporary_mirror"
SPACK_RESERVED_TAGS = ["public", "protected", "notary"]
# TODO: Remove this in Spack 0.23
SHARED_PR_MIRROR_URL = "s3://spack-binaries-prs/shared_pr_mirror"
JOB_NAME_FORMAT = (
"{name}{@version} {/hash:7} {%compiler.name}{@compiler.version}{ arch=architecture}"
)
@@ -175,11 +199,11 @@ def _remove_satisfied_deps(deps, satisfied_list):
return nodes, edges, stages
def _print_staging_summary(spec_labels, stages, mirrors_to_check, rebuild_decisions):
def _print_staging_summary(spec_labels, stages, rebuild_decisions):
if not stages:
return
mirrors = spack.mirror.MirrorCollection(mirrors=mirrors_to_check, binary=True)
mirrors = spack.mirror.MirrorCollection(binary=True)
tty.msg("Checked the following mirrors for binaries:")
for m in mirrors.values():
tty.msg(f" {m.fetch_url}")
@@ -226,21 +250,14 @@ def _spec_matches(spec, match_string):
return spec.intersects(match_string)
def _format_job_needs(
dep_jobs, build_group, prune_dag, rebuild_decisions, enable_artifacts_buildcache
):
def _format_job_needs(dep_jobs, build_group, prune_dag, rebuild_decisions):
needs_list = []
for dep_job in dep_jobs:
dep_spec_key = _spec_ci_label(dep_job)
rebuild = rebuild_decisions[dep_spec_key].rebuild
if not prune_dag or rebuild:
needs_list.append(
{
"job": get_job_name(dep_job, build_group),
"artifacts": enable_artifacts_buildcache,
}
)
needs_list.append({"job": get_job_name(dep_job, build_group), "artifacts": False})
return needs_list
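After this change the helper always emits `artifacts: False`; a hypothetical return value (job name fabricated) looks like:

# Illustration only; the job name below is made up.
needs_list = [
    {"job": "zlib@1.3.1 /abcdefg %gcc@12.3.0 arch=linux-ubuntu22.04-x86_64",
     "artifacts": False}
]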
@@ -384,12 +401,6 @@ def __init__(self, ci_config, spec_labels, stages):
self.ir = {
"jobs": {},
"temporary-storage-url-prefix": self.ci_config.get(
"temporary-storage-url-prefix", None
),
"enable-artifacts-buildcache": self.ci_config.get(
"enable-artifacts-buildcache", False
),
"rebuild-index": self.ci_config.get("rebuild-index", True),
"broken-specs-url": self.ci_config.get("broken-specs-url", None),
"broken-tests-packages": self.ci_config.get("broken-tests-packages", []),
@@ -405,9 +416,20 @@ def __init__(self, ci_config, spec_labels, stages):
if name not in ["any", "build"]:
jobs[name] = self.__init_job("")
def __init_job(self, spec):
def __init_job(self, release_spec):
"""Initialize job object"""
return {"spec": spec, "attributes": {}}
job_object = {"spec": release_spec, "attributes": {}}
if release_spec:
job_vars = job_object["attributes"].setdefault("variables", {})
job_vars["SPACK_JOB_SPEC_DAG_HASH"] = release_spec.dag_hash()
job_vars["SPACK_JOB_SPEC_PKG_NAME"] = release_spec.name
job_vars["SPACK_JOB_SPEC_PKG_VERSION"] = release_spec.format("{version}")
job_vars["SPACK_JOB_SPEC_COMPILER_NAME"] = release_spec.format("{compiler.name}")
job_vars["SPACK_JOB_SPEC_COMPILER_VERSION"] = release_spec.format("{compiler.version}")
job_vars["SPACK_JOB_SPEC_ARCH"] = release_spec.format("{architecture}")
job_vars["SPACK_JOB_SPEC_VARIANTS"] = release_spec.format("{variants}")
return job_object
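As a hypothetical illustration, for a concrete spec such as zlib@1.3.1 built with gcc@12.3.0, the variables seeded above would take roughly this shape (hash, arch, and variants fabricated):

job_vars = {
    "SPACK_JOB_SPEC_DAG_HASH": "abcdef1234567",  # fabricated
    "SPACK_JOB_SPEC_PKG_NAME": "zlib",
    "SPACK_JOB_SPEC_PKG_VERSION": "1.3.1",
    "SPACK_JOB_SPEC_COMPILER_NAME": "gcc",
    "SPACK_JOB_SPEC_COMPILER_VERSION": "12.3.0",
    "SPACK_JOB_SPEC_ARCH": "linux-ubuntu22.04-x86_64",
    "SPACK_JOB_SPEC_VARIANTS": "+optimize+pic+shared",
}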
def __is_named(self, section):
"""Check if a pipeline-gen configuration section is for a named job,
@@ -500,6 +522,7 @@ def generate_ir(self):
for section in reversed(pipeline_gen):
name = self.__is_named(section)
has_submapping = "submapping" in section
has_dynmapping = "dynamic-mapping" in section
section = cfg.InternalConfigScope._process_dict_keyname_overrides(section)
if name:
@@ -542,6 +565,108 @@ def _apply_section(dest, src):
job["attributes"] = self.__apply_submapping(
job["attributes"], job["spec"], section
)
elif has_dynmapping:
mapping = section["dynamic-mapping"]
dynmap_name = mapping.get("name")
# Check if this section should be skipped
dynmap_skip = os.environ.get("SPACK_CI_SKIP_DYNAMIC_MAPPING")
if dynmap_name and dynmap_skip:
if re.match(dynmap_skip, dynmap_name):
continue
# Get the endpoint
endpoint = mapping["endpoint"]
endpoint_url = urlparse(endpoint)
# Configure the request header
header = {"User-Agent": web_util.SPACK_USER_AGENT}
header.update(mapping.get("header", {}))
# Expand header environment variables
# i.e. if tokens are passed
for key, value in header.items():
header[key] = os.path.expandvars(value)
verify_ssl = mapping.get("verify_ssl", spack.config.get("config:verify_ssl", True))
timeout = mapping.get("timeout", spack.config.get("config:connect_timeout", 1))
required = mapping.get("require", [])
allowed = mapping.get("allow", [])
ignored = mapping.get("ignore", [])
# required keys are implicitly allowed
allowed = sorted(set(allowed + required))
ignored = sorted(set(ignored))
required = sorted(set(required))
# Make sure required things are not also ignored
assert not any([ikey in required for ikey in ignored])
def job_query(job):
job_vars = job["attributes"]["variables"]
query = (
"{SPACK_JOB_SPEC_PKG_NAME}@{SPACK_JOB_SPEC_PKG_VERSION}"
# The preceding spaces are required (ref. https://github.com/spack/spack-gantry/blob/develop/docs/api.md#allocation)
" {SPACK_JOB_SPEC_VARIANTS}"
" arch={SPACK_JOB_SPEC_ARCH}"
"%{SPACK_JOB_SPEC_COMPILER_NAME}@{SPACK_JOB_SPEC_COMPILER_VERSION}"
).format_map(job_vars)
return f"spec={quote(query)}"
for job in jobs.values():
if not job["spec"]:
continue
# Create request for this job
query = job_query(job)
request = Request(
endpoint_url._replace(query=query).geturl(), headers=header, method="GET"
)
try:
response = _dyn_mapping_urlopener(
request, verify_ssl=verify_ssl, timeout=timeout
)
except Exception as e:
# For now just ignore any errors from dynamic mapping and continue
# This is still experimental, and failures should not stop CI
# from running normally
tty.warn(f"Failed to fetch dynamic mapping for query:\n\t{query}")
tty.warn(f"{e}")
continue
config = json.load(codecs.getreader("utf-8")(response))
# Strip ignore keys
if ignored:
for key in ignored:
if key in config:
config.pop(key)
# Only keep allowed keys
clean_config = {}
if allowed:
for key in allowed:
if key in config:
clean_config[key] = config[key]
else:
clean_config = config
# Verify all of the required keys are present
if required:
missing_keys = []
for key in required:
if key not in clean_config:
missing_keys.append(key)
if missing_keys:
tty.warn(f"Response missing required keys: {missing_keys}")
if clean_config:
job["attributes"] = spack.config.merge_yaml(
job.get("attributes", {}), clean_config
)
for _, job in jobs.items():
if job["spec"]:
@@ -558,14 +683,13 @@ def generate_gitlab_ci_yaml(
prune_dag=False,
check_index_only=False,
artifacts_root=None,
remote_mirror_override=None,
):
"""Generate a gitlab yaml file to run a dynamic child pipeline from
the spec matrix in the active environment.
Arguments:
env (spack.environment.Environment): Activated environment object
which must contain a gitlab-ci section describing how to map
which must contain a ci section describing how to map
specs to runners
print_summary (bool): Should we print a summary of all the jobs in
the stages in which they were placed.
@@ -580,39 +704,21 @@ def generate_gitlab_ci_yaml(
artifacts_root (str): Path where artifacts like logs, environment
files (spack.yaml, spack.lock), etc should be written. GitLab
requires this to be within the project directory.
remote_mirror_override (str): Typically only needed when one spack.yaml
is used to populate several mirrors with binaries, based on some
criteria. Spack protected pipelines populate different mirrors based
on branch name, facilitated by this option. DEPRECATED
"""
with spack.concretize.disable_compiler_existence_check():
with env.write_transaction():
env.concretize()
env.write()
yaml_root = env.manifest[ev.TOP_LEVEL_KEY]
# Get the joined "ci" config with all of the current scopes resolved
ci_config = cfg.get("ci")
config_deprecated = False
if not ci_config:
tty.warn("Environment does not have `ci` a configuration")
gitlabci_config = yaml_root.get("gitlab-ci")
if not gitlabci_config:
tty.die("Environment yaml does not have `gitlab-ci` config section. Cannot recover.")
tty.warn(
"The `gitlab-ci` configuration is deprecated in favor of `ci`.\n",
"To update run \n\t$ spack env update /path/to/ci/spack.yaml",
)
translate_deprecated_config(gitlabci_config)
ci_config = gitlabci_config
config_deprecated = True
raise SpackCIError("Environment does not have a `ci` configuration")
# Default target is gitlab...and only target is gitlab
if not ci_config.get("target", "gitlab") == "gitlab":
tty.die('Spack CI module only generates target "gitlab"')
raise SpackCIError('Spack CI module only generates target "gitlab"')
cdash_config = cfg.get("cdash")
cdash_handler = CDashHandler(cdash_config) if "build-group" in cdash_config else None
@@ -673,12 +779,6 @@ def generate_gitlab_ci_yaml(
spack_pipeline_type = os.environ.get("SPACK_PIPELINE_TYPE", None)
copy_only_pipeline = spack_pipeline_type == "spack_copy_only"
if copy_only_pipeline and config_deprecated:
tty.warn(
"SPACK_PIPELINE_TYPE=spack_copy_only is not supported when using\n",
"deprecated ci configuration, a no-op pipeline will be generated\n",
"instead.",
)
def ensure_expected_target_path(path):
"""Returns passed paths with all Windows path separators exchanged
@@ -697,38 +797,16 @@ def ensure_expected_target_path(path):
return path
pipeline_mirrors = spack.mirror.MirrorCollection(binary=True)
deprecated_mirror_config = False
buildcache_destination = None
if "buildcache-destination" in pipeline_mirrors:
if remote_mirror_override:
tty.die(
"Using the deprecated --buildcache-destination cli option and "
"having a mirror named 'buildcache-destination' at the same time "
"is not allowed"
)
buildcache_destination = pipeline_mirrors["buildcache-destination"]
else:
deprecated_mirror_config = True
# TODO: This will be an error in Spack 0.23
if "buildcache-destination" not in pipeline_mirrors:
raise SpackCIError("spack ci generate requires a mirror named 'buildcache-destination'")
# TODO: Remove this block in spack 0.23
remote_mirror_url = None
if deprecated_mirror_config:
if "mirrors" not in yaml_root or len(yaml_root["mirrors"].values()) < 1:
tty.die("spack ci generate requires an env containing a mirror")
ci_mirrors = yaml_root["mirrors"]
mirror_urls = [url for url in ci_mirrors.values()]
remote_mirror_url = mirror_urls[0]
buildcache_destination = pipeline_mirrors["buildcache-destination"]
spack_buildcache_copy = os.environ.get("SPACK_COPY_BUILDCACHE", None)
if spack_buildcache_copy:
buildcache_copies = {}
buildcache_copy_src_prefix = (
buildcache_destination.fetch_url
if buildcache_destination
else remote_mirror_override or remote_mirror_url
)
buildcache_copy_src_prefix = buildcache_destination.fetch_url
buildcache_copy_dest_prefix = spack_buildcache_copy
# Check for a list of "known broken" specs that we should not bother
@@ -738,55 +816,10 @@ def ensure_expected_target_path(path):
if "broken-specs-url" in ci_config:
broken_specs_url = ci_config["broken-specs-url"]
enable_artifacts_buildcache = False
if "enable-artifacts-buildcache" in ci_config:
tty.warn("Support for enable-artifacts-buildcache will be removed in Spack 0.23")
enable_artifacts_buildcache = ci_config["enable-artifacts-buildcache"]
rebuild_index_enabled = True
if "rebuild-index" in ci_config and ci_config["rebuild-index"] is False:
rebuild_index_enabled = False
temp_storage_url_prefix = None
if "temporary-storage-url-prefix" in ci_config:
tty.warn("Support for temporary-storage-url-prefix will be removed in Spack 0.23")
temp_storage_url_prefix = ci_config["temporary-storage-url-prefix"]
# If a remote mirror override (alternate buildcache destination) was
# specified, add it here in case it has already built hashes we might
# generate.
# TODO: Remove this block in Spack 0.23
mirrors_to_check = None
if deprecated_mirror_config and remote_mirror_override:
if spack_pipeline_type == "spack_protected_branch":
# Overriding the main mirror in this case might result
# in skipping jobs on a release pipeline because specs are
# up to date in develop. Eventually we want to notice and take
# advantage of this by scheduling a job to copy the spec from
# develop to the release, but until we have that, this makes
# sure we schedule a rebuild job if the spec isn't already in
# override mirror.
mirrors_to_check = {"override": remote_mirror_override}
# If we have a remote override and we want generate pipeline using
# --check-index-only, then the override mirror needs to be added to
# the configured mirrors when bindist.update() is run, or else we
# won't fetch its index and include in our local cache.
spack.mirror.add(
spack.mirror.Mirror(remote_mirror_override, name="ci_pr_mirror"),
cfg.default_modify_scope(),
)
# TODO: Remove this block in Spack 0.23
shared_pr_mirror = None
if deprecated_mirror_config and spack_pipeline_type == "spack_pull_request":
stack_name = os.environ.get("SPACK_CI_STACK_NAME", "")
shared_pr_mirror = url_util.join(SHARED_PR_MIRROR_URL, stack_name)
spack.mirror.add(
spack.mirror.Mirror(shared_pr_mirror, name="ci_shared_pr_mirror"),
cfg.default_modify_scope(),
)
pipeline_artifacts_dir = artifacts_root
if not pipeline_artifacts_dir:
proj_dir = os.environ.get("CI_PROJECT_DIR", os.getcwd())
@@ -795,9 +828,8 @@ def ensure_expected_target_path(path):
pipeline_artifacts_dir = os.path.abspath(pipeline_artifacts_dir)
concrete_env_dir = os.path.join(pipeline_artifacts_dir, "concrete_environment")
# Now that we've added the mirrors we know about, they should be properly
# reflected in the environment manifest file, so copy that into the
# concrete environment directory, along with the spack.lock file.
# Copy the environment manifest file into the concrete environment directory,
# along with the spack.lock file.
if not os.path.exists(concrete_env_dir):
os.makedirs(concrete_env_dir)
shutil.copyfile(env.manifest_path, os.path.join(concrete_env_dir, "spack.yaml"))
@@ -822,18 +854,12 @@ def ensure_expected_target_path(path):
env_includes.extend(include_scopes)
env_yaml_root["spack"]["include"] = [ensure_expected_target_path(i) for i in env_includes]
if "gitlab-ci" in env_yaml_root["spack"] and "ci" not in env_yaml_root["spack"]:
env_yaml_root["spack"]["ci"] = env_yaml_root["spack"].pop("gitlab-ci")
translate_deprecated_config(env_yaml_root["spack"]["ci"])
with open(os.path.join(concrete_env_dir, "spack.yaml"), "w") as fd:
fd.write(syaml.dump_config(env_yaml_root, default_flow_style=False))
job_log_dir = os.path.join(pipeline_artifacts_dir, "logs")
job_repro_dir = os.path.join(pipeline_artifacts_dir, "reproduction")
job_test_dir = os.path.join(pipeline_artifacts_dir, "tests")
# TODO: Remove this line in Spack 0.23
local_mirror_dir = os.path.join(pipeline_artifacts_dir, "mirror")
user_artifacts_dir = os.path.join(pipeline_artifacts_dir, "user_data")
# We communicate relative paths to the downstream jobs to avoid issues in
@@ -847,8 +873,6 @@ def ensure_expected_target_path(path):
rel_job_log_dir = os.path.relpath(job_log_dir, ci_project_dir)
rel_job_repro_dir = os.path.relpath(job_repro_dir, ci_project_dir)
rel_job_test_dir = os.path.relpath(job_test_dir, ci_project_dir)
# TODO: Remove this line in Spack 0.23
rel_local_mirror_dir = os.path.join(local_mirror_dir, ci_project_dir)
rel_user_artifacts_dir = os.path.relpath(user_artifacts_dir, ci_project_dir)
# Speed up staging by first fetching binary indices from all mirrors
@@ -910,7 +934,7 @@ def ensure_expected_target_path(path):
continue
up_to_date_mirrors = bindist.get_mirrors_for_spec(
spec=release_spec, mirrors_to_check=mirrors_to_check, index_only=check_index_only
spec=release_spec, index_only=check_index_only
)
spec_record.rebuild = not up_to_date_mirrors
@@ -952,36 +976,16 @@ def main_script_replacements(cmd):
job_name = get_job_name(release_spec, build_group)
job_vars = job_object.setdefault("variables", {})
job_vars["SPACK_JOB_SPEC_DAG_HASH"] = release_spec_dag_hash
job_vars["SPACK_JOB_SPEC_PKG_NAME"] = release_spec.name
job_vars["SPACK_JOB_SPEC_PKG_VERSION"] = release_spec.format("{version}")
job_vars["SPACK_JOB_SPEC_COMPILER_NAME"] = release_spec.format("{compiler.name}")
job_vars["SPACK_JOB_SPEC_COMPILER_VERSION"] = release_spec.format("{compiler.version}")
job_vars["SPACK_JOB_SPEC_ARCH"] = release_spec.format("{architecture}")
job_vars["SPACK_JOB_SPEC_VARIANTS"] = release_spec.format("{variants}")
job_object["needs"] = []
if spec_label in dependencies:
if enable_artifacts_buildcache:
# Get dependencies transitively, so they're all
# available in the artifacts buildcache.
dep_jobs = [d for d in release_spec.traverse(deptype="all", root=False)]
else:
# In this case, "needs" is only used for scheduling
# purposes, so we only get the direct dependencies.
dep_jobs = []
for dep_label in dependencies[spec_label]:
dep_jobs.append(spec_labels[dep_label])
# In this case, "needs" is only used for scheduling
# purposes, so we only get the direct dependencies.
dep_jobs = []
for dep_label in dependencies[spec_label]:
dep_jobs.append(spec_labels[dep_label])
job_object["needs"].extend(
_format_job_needs(
dep_jobs,
build_group,
prune_dag,
rebuild_decisions,
enable_artifacts_buildcache,
)
_format_job_needs(dep_jobs, build_group, prune_dag, rebuild_decisions)
)
rebuild_spec = spec_record.rebuild
@@ -1038,6 +1042,7 @@ def main_script_replacements(cmd):
# Let downstream jobs know whether the spec needed rebuilding, regardless
# whether DAG pruning was enabled or not.
job_vars = job_object["variables"]
job_vars["SPACK_SPEC_NEEDS_REBUILD"] = str(rebuild_spec)
if cdash_handler:
@@ -1062,19 +1067,6 @@ def main_script_replacements(cmd):
},
)
# TODO: Remove this block in Spack 0.23
if enable_artifacts_buildcache:
bc_root = os.path.join(local_mirror_dir, "build_cache")
job_object["artifacts"]["paths"].extend(
[
os.path.join(bc_root, p)
for p in [
bindist.tarball_name(release_spec, ".spec.json"),
bindist.tarball_directory_name(release_spec),
]
]
)
job_object["stage"] = stage_name
job_object["retry"] = {"max": 2, "when": JOB_RETRY_CONDITIONS}
job_object["interruptible"] = True
@@ -1089,15 +1081,7 @@ def main_script_replacements(cmd):
job_id += 1
if print_summary:
_print_staging_summary(spec_labels, stages, mirrors_to_check, rebuild_decisions)
# Clean up remote mirror override if enabled
# TODO: Remove this block in Spack 0.23
if deprecated_mirror_config:
if remote_mirror_override:
spack.mirror.remove("ci_pr_mirror", cfg.default_modify_scope())
if spack_pipeline_type == "spack_pull_request":
spack.mirror.remove("ci_shared_pr_mirror", cfg.default_modify_scope())
_print_staging_summary(spec_labels, stages, rebuild_decisions)
tty.debug(f"{job_id} build jobs generated in {stage_id} stages")
@@ -1119,7 +1103,7 @@ def main_script_replacements(cmd):
"when": ["runner_system_failure", "stuck_or_timeout_failure", "script_failure"],
}
if copy_only_pipeline and not config_deprecated:
if copy_only_pipeline:
stage_names.append("copy")
sync_job = copy.deepcopy(spack_ci_ir["jobs"]["copy"]["attributes"])
sync_job["stage"] = "copy"
@@ -1129,17 +1113,12 @@ def main_script_replacements(cmd):
if "variables" not in sync_job:
sync_job["variables"] = {}
sync_job["variables"]["SPACK_COPY_ONLY_DESTINATION"] = (
buildcache_destination.fetch_url
if buildcache_destination
else remote_mirror_override or remote_mirror_url
)
sync_job["variables"]["SPACK_COPY_ONLY_DESTINATION"] = buildcache_destination.fetch_url
if "buildcache-source" in pipeline_mirrors:
buildcache_source = pipeline_mirrors["buildcache-source"].fetch_url
else:
# TODO: Remove this condition in Spack 0.23
buildcache_source = os.environ.get("SPACK_SOURCE_MIRROR", None)
if "buildcache-source" not in pipeline_mirrors:
raise SpackCIError("Copy-only pipelines require a mirror named 'buildcache-source'")
buildcache_source = pipeline_mirrors["buildcache-source"].fetch_url
sync_job["variables"]["SPACK_BUILDCACHE_SOURCE"] = buildcache_source
sync_job["dependencies"] = []
@@ -1147,27 +1126,6 @@ def main_script_replacements(cmd):
job_id += 1
if job_id > 0:
# TODO: Remove this block in Spack 0.23
if temp_storage_url_prefix:
# There were some rebuild jobs scheduled, so we will need to
# schedule a job to clean up the temporary storage location
# associated with this pipeline.
stage_names.append("cleanup-temp-storage")
cleanup_job = copy.deepcopy(spack_ci_ir["jobs"]["cleanup"]["attributes"])
cleanup_job["stage"] = "cleanup-temp-storage"
cleanup_job["when"] = "always"
cleanup_job["retry"] = service_job_retries
cleanup_job["interruptible"] = True
cleanup_job["script"] = _unpack_script(
cleanup_job["script"],
op=lambda cmd: cmd.replace("mirror_prefix", temp_storage_url_prefix),
)
cleanup_job["dependencies"] = []
output_object["cleanup"] = cleanup_job
if (
"script" in spack_ci_ir["jobs"]["signing"]["attributes"]
and spack_pipeline_type == "spack_protected_branch"
@@ -1184,11 +1142,9 @@ def main_script_replacements(cmd):
signing_job["interruptible"] = True
if "variables" not in signing_job:
signing_job["variables"] = {}
signing_job["variables"]["SPACK_BUILDCACHE_DESTINATION"] = (
buildcache_destination.push_url # need the s3 url for aws s3 sync
if buildcache_destination
else remote_mirror_override or remote_mirror_url
)
signing_job["variables"][
"SPACK_BUILDCACHE_DESTINATION"
] = buildcache_destination.push_url
signing_job["dependencies"] = []
output_object["sign-pkgs"] = signing_job
@@ -1199,9 +1155,7 @@ def main_script_replacements(cmd):
final_job = spack_ci_ir["jobs"]["reindex"]["attributes"]
final_job["stage"] = "stage-rebuild-index"
target_mirror = remote_mirror_override or remote_mirror_url
if buildcache_destination:
target_mirror = buildcache_destination.push_url
target_mirror = buildcache_destination.push_url
final_job["script"] = _unpack_script(
final_job["script"],
op=lambda cmd: cmd.replace("{index_target_mirror}", target_mirror),
@@ -1227,17 +1181,11 @@ def main_script_replacements(cmd):
"SPACK_CONCRETE_ENV_DIR": rel_concrete_env_dir,
"SPACK_VERSION": spack_version,
"SPACK_CHECKOUT_VERSION": version_to_clone,
# TODO: Remove this line in Spack 0.23
"SPACK_REMOTE_MIRROR_URL": remote_mirror_url,
"SPACK_JOB_LOG_DIR": rel_job_log_dir,
"SPACK_JOB_REPRO_DIR": rel_job_repro_dir,
"SPACK_JOB_TEST_DIR": rel_job_test_dir,
# TODO: Remove this line in Spack 0.23
"SPACK_LOCAL_MIRROR_DIR": rel_local_mirror_dir,
"SPACK_PIPELINE_TYPE": str(spack_pipeline_type),
"SPACK_CI_STACK_NAME": os.environ.get("SPACK_CI_STACK_NAME", "None"),
# TODO: Remove this line in Spack 0.23
"SPACK_CI_SHARED_PR_MIRROR_URL": shared_pr_mirror or "None",
"SPACK_REBUILD_CHECK_UP_TO_DATE": str(prune_dag),
"SPACK_REBUILD_EVERYTHING": str(rebuild_everything),
"SPACK_REQUIRE_SIGNING": os.environ.get("SPACK_REQUIRE_SIGNING", "False"),
@@ -1246,10 +1194,6 @@ def main_script_replacements(cmd):
for item, val in output_vars.items():
output_vars[item] = ensure_expected_target_path(val)
# TODO: Remove this block in Spack 0.23
if deprecated_mirror_config and remote_mirror_override:
(output_object["variables"]["SPACK_REMOTE_MIRROR_OVERRIDE"]) = remote_mirror_override
spack_stack_name = os.environ.get("SPACK_CI_STACK_NAME", None)
if spack_stack_name:
output_object["variables"]["SPACK_CI_STACK_NAME"] = spack_stack_name
@@ -1276,15 +1220,8 @@ def main_script_replacements(cmd):
noop_job["retry"] = 0
noop_job["allow_failure"] = True
if copy_only_pipeline and config_deprecated:
tty.debug("Generating no-op job as copy-only is unsupported here.")
noop_job["script"] = [
'echo "copy-only pipelines are not supported with deprecated ci configs"'
]
output_object = {"unsupported-copy": noop_job}
else:
tty.debug("No specs to rebuild, generating no-op job")
output_object = {"no-specs-to-rebuild": noop_job}
tty.debug("No specs to rebuild, generating no-op job")
output_object = {"no-specs-to-rebuild": noop_job}
# Ensure the child pipeline always runs
output_object["workflow"] = {"rules": [{"when": "always"}]}
@@ -2322,83 +2259,6 @@ def report_skipped(self, spec: spack.spec.Spec, report_dir: str, reason: Optiona
reporter.test_skipped_report(report_dir, spec, reason)
def translate_deprecated_config(config):
# Remove all deprecated keys from config
mappings = config.pop("mappings", [])
match_behavior = config.pop("match_behavior", "first")
build_job = {}
if "image" in config:
build_job["image"] = config.pop("image")
if "tags" in config:
build_job["tags"] = config.pop("tags")
if "variables" in config:
build_job["variables"] = config.pop("variables")
# Scripts always override in old CI
if "before_script" in config:
build_job["before_script:"] = config.pop("before_script")
if "script" in config:
build_job["script:"] = config.pop("script")
if "after_script" in config:
build_job["after_script:"] = config.pop("after_script")
signing_job = None
if "signing-job-attributes" in config:
signing_job = {"signing-job": config.pop("signing-job-attributes")}
service_job_attributes = None
if "service-job-attributes" in config:
service_job_attributes = config.pop("service-job-attributes")
# If this config already has pipeline-gen, do nothing more
if "pipeline-gen" in config:
return bool(mappings or build_job or signing_job or service_job_attributes)
config["target"] = "gitlab"
config["pipeline-gen"] = []
pipeline_gen = config["pipeline-gen"]
# Build Job
submapping = []
for section in mappings:
submapping_section = {"match": section["match"]}
if "runner-attributes" in section:
remapped_attributes = {}
if match_behavior == "first":
for key, value in section["runner-attributes"].items():
# Scripts always override in old CI
if key == "script":
remapped_attributes["script:"] = value
elif key == "before_script":
remapped_attributes["before_script:"] = value
elif key == "after_script":
remapped_attributes["after_script:"] = value
else:
remapped_attributes[key] = value
else:
# Handle "merge" behavior be allowing scripts to merge in submapping section
remapped_attributes = section["runner-attributes"]
submapping_section["build-job"] = remapped_attributes
if "remove-attributes" in section:
# Old format only allowed tags in this section, so no extra checks are needed
submapping_section["build-job-remove"] = section["remove-attributes"]
submapping.append(submapping_section)
pipeline_gen.append({"submapping": submapping, "match_behavior": match_behavior})
if build_job:
pipeline_gen.append({"build-job": build_job})
# Signing Job
if signing_job:
pipeline_gen.append(signing_job)
# Service Jobs
if service_job_attributes:
pipeline_gen.append({"reindex-job": service_job_attributes})
pipeline_gen.append({"noop-job": service_job_attributes})
pipeline_gen.append({"cleanup-job": service_job_attributes})
return True
class SpackCIError(spack.error.SpackError):
def __init__(self, msg):
super().__init__(msg)

View File

@@ -19,12 +19,23 @@
def setup_parser(subparser):
# DEPRECATED: equivalent to --generic --target
subparser.add_argument(
"-g", "--generic-target", action="store_true", help="show the best generic target"
"-g",
"--generic-target",
action="store_true",
help="show the best generic target (deprecated)",
)
subparser.add_argument(
"--known-targets", action="store_true", help="show a list of all known targets and exit"
)
target_type = subparser.add_mutually_exclusive_group()
target_type.add_argument(
"--family", action="store_true", help="print generic ISA (x86_64, aarch64, ppc64le, ...)"
)
target_type.add_argument(
"--generic", action="store_true", help="print feature level (x86_64_v3, armv8.4a, ...)"
)
parts = subparser.add_mutually_exclusive_group()
parts2 = subparser.add_mutually_exclusive_group()
parts.add_argument(
@@ -80,6 +91,7 @@ def display_target_group(header, target_group):
def arch(parser, args):
if args.generic_target:
# TODO: add deprecation warning in 0.24
print(archspec.cpu.host().generic)
return
@@ -96,6 +108,10 @@ def arch(parser, args):
host_platform = spack.platforms.host()
host_os = host_platform.operating_system(os_args)
host_target = host_platform.target(target_args)
if args.family:
host_target = host_target.family
elif args.generic:
host_target = host_target.generic
architecture = spack.spec.ArchSpec((str(host_platform), str(host_os), str(host_target)))
if args.platform:

View File

@@ -62,13 +62,6 @@ def setup_parser(subparser):
"path to the file where generated jobs file should be written. "
"default is .gitlab-ci.yml in the root of the repository",
)
generate.add_argument(
"--copy-to",
default=None,
help="path to additional directory for job files\n\n"
"this option provides an absolute path to a directory where the generated "
"jobs yaml file should be copied. default is not to copy",
)
generate.add_argument(
"--optimize",
action="store_true",
@@ -83,12 +76,6 @@ def setup_parser(subparser):
default=False,
help="(DEPRECATED) disable DAG scheduling (use 'plain' dependencies)",
)
generate.add_argument(
"--buildcache-destination",
default=None,
help="override the mirror configured in the environment\n\n"
"allows for pushing binaries from the generated pipeline to a different location",
)
prune_group = generate.add_mutually_exclusive_group()
prune_group.add_argument(
"--prune-dag",
@@ -214,20 +201,10 @@ def ci_generate(args):
env = spack.cmd.require_active_env(cmd_name="ci generate")
if args.copy_to:
tty.warn("The flag --copy-to is deprecated and will be removed in Spack 0.23")
if args.buildcache_destination:
tty.warn(
"The flag --buildcache-destination is deprecated and will be removed in Spack 0.23"
)
output_file = args.output_file
copy_yaml_to = args.copy_to
prune_dag = args.prune_dag
index_only = args.index_only
artifacts_root = args.artifacts_root
buildcache_destination = args.buildcache_destination
if not output_file:
output_file = os.path.abspath(".gitlab-ci.yml")
@@ -245,15 +222,8 @@ def ci_generate(args):
prune_dag=prune_dag,
check_index_only=index_only,
artifacts_root=artifacts_root,
remote_mirror_override=buildcache_destination,
)
if copy_yaml_to:
copy_to_dir = os.path.dirname(copy_yaml_to)
if not os.path.exists(copy_to_dir):
os.makedirs(copy_to_dir)
shutil.copyfile(output_file, copy_yaml_to)
def ci_reindex(args):
"""rebuild the buildcache index for the remote mirror
@@ -298,22 +268,13 @@ def ci_rebuild(args):
job_log_dir = os.environ.get("SPACK_JOB_LOG_DIR")
job_test_dir = os.environ.get("SPACK_JOB_TEST_DIR")
repro_dir = os.environ.get("SPACK_JOB_REPRO_DIR")
# TODO: Remove this in Spack 0.23
local_mirror_dir = os.environ.get("SPACK_LOCAL_MIRROR_DIR")
concrete_env_dir = os.environ.get("SPACK_CONCRETE_ENV_DIR")
ci_pipeline_id = os.environ.get("CI_PIPELINE_ID")
ci_job_name = os.environ.get("CI_JOB_NAME")
signing_key = os.environ.get("SPACK_SIGNING_KEY")
job_spec_pkg_name = os.environ.get("SPACK_JOB_SPEC_PKG_NAME")
job_spec_dag_hash = os.environ.get("SPACK_JOB_SPEC_DAG_HASH")
spack_pipeline_type = os.environ.get("SPACK_PIPELINE_TYPE")
# TODO: Remove this in Spack 0.23
remote_mirror_override = os.environ.get("SPACK_REMOTE_MIRROR_OVERRIDE")
# TODO: Remove this in Spack 0.23
remote_mirror_url = os.environ.get("SPACK_REMOTE_MIRROR_URL")
spack_ci_stack_name = os.environ.get("SPACK_CI_STACK_NAME")
# TODO: Remove this in Spack 0.23
shared_pr_mirror_url = os.environ.get("SPACK_CI_SHARED_PR_MIRROR_URL")
rebuild_everything = os.environ.get("SPACK_REBUILD_EVERYTHING")
require_signing = os.environ.get("SPACK_REQUIRE_SIGNING")
@@ -333,12 +294,10 @@ def ci_rebuild(args):
job_log_dir = os.path.join(ci_project_dir, job_log_dir)
job_test_dir = os.path.join(ci_project_dir, job_test_dir)
repro_dir = os.path.join(ci_project_dir, repro_dir)
local_mirror_dir = os.path.join(ci_project_dir, local_mirror_dir)
concrete_env_dir = os.path.join(ci_project_dir, concrete_env_dir)
# Debug print some of the key environment variables we should have received
tty.debug("pipeline_artifacts_dir = {0}".format(pipeline_artifacts_dir))
tty.debug("remote_mirror_url = {0}".format(remote_mirror_url))
tty.debug("job_spec_pkg_name = {0}".format(job_spec_pkg_name))
# Query the environment manifest to find out whether we're reporting to a
@@ -370,51 +329,11 @@ def ci_rebuild(args):
full_rebuild = True if rebuild_everything and rebuild_everything.lower() == "true" else False
pipeline_mirrors = spack.mirror.MirrorCollection(binary=True)
deprecated_mirror_config = False
buildcache_destination = None
if "buildcache-destination" in pipeline_mirrors:
buildcache_destination = pipeline_mirrors["buildcache-destination"]
else:
deprecated_mirror_config = True
# TODO: This will be an error in Spack 0.23
if "buildcache-destination" not in pipeline_mirrors:
tty.die("spack ci rebuild requires a mirror named 'buildcache-destination")
# If no override url exists, then just push binary package to the
# normal remote mirror url.
# TODO: Remove in Spack 0.23
buildcache_mirror_url = remote_mirror_override or remote_mirror_url
if buildcache_destination:
buildcache_mirror_url = buildcache_destination.push_url
# Figure out what is our temporary storage mirror: Is it artifacts
# buildcache? Or temporary-storage-url-prefix? In some cases we need to
# force something or pipelines might not have a way to propagate build
# artifacts from upstream to downstream jobs.
# TODO: Remove this in Spack 0.23
pipeline_mirror_url = None
# TODO: Remove this in Spack 0.23
temp_storage_url_prefix = None
if "temporary-storage-url-prefix" in ci_config:
temp_storage_url_prefix = ci_config["temporary-storage-url-prefix"]
pipeline_mirror_url = url_util.join(temp_storage_url_prefix, ci_pipeline_id)
# TODO: Remove this in Spack 0.23
enable_artifacts_mirror = False
if "enable-artifacts-buildcache" in ci_config:
enable_artifacts_mirror = ci_config["enable-artifacts-buildcache"]
if enable_artifacts_mirror or (
spack_is_pr_pipeline and not enable_artifacts_mirror and not temp_storage_url_prefix
):
# If you explicitly enabled the artifacts buildcache feature, or
# if this is a PR pipeline but you did not enable either of the
# per-pipeline temporary storage features, we force the use of
# artifacts buildcache. Otherwise jobs will not have binary
# dependencies from previous stages available since we do not
# allow pushing binaries to the remote mirror during PR pipelines.
enable_artifacts_mirror = True
pipeline_mirror_url = url_util.path_to_file_url(local_mirror_dir)
mirror_msg = "artifact buildcache enabled, mirror url: {0}".format(pipeline_mirror_url)
tty.debug(mirror_msg)
buildcache_destination = pipeline_mirrors["buildcache-destination"]
# Get the concrete spec to be built by this job.
try:
@@ -489,48 +408,7 @@ def ci_rebuild(args):
fd.write(spack_info.encode("utf8"))
fd.write(b"\n")
pipeline_mirrors = []
# If we decided there should be a temporary storage mechanism, add that
# mirror now so it's used when we check for a hash match already
# built for this spec.
# TODO: Remove this block in Spack 0.23
if pipeline_mirror_url:
mirror = spack.mirror.Mirror(pipeline_mirror_url, name=spack_ci.TEMP_STORAGE_MIRROR_NAME)
spack.mirror.add(mirror, cfg.default_modify_scope())
pipeline_mirrors.append(pipeline_mirror_url)
# Check configured mirrors for a built spec with a matching hash
# TODO: Remove this block in Spack 0.23
mirrors_to_check = None
if remote_mirror_override:
if spack_pipeline_type == "spack_protected_branch":
# Passing "mirrors_to_check" below means we *only* look in the override
# mirror to see if we should skip building, which is what we want.
mirrors_to_check = {"override": remote_mirror_override}
# Adding this mirror to the list of configured mirrors means dependencies
# could be installed from either the override mirror or any other configured
# mirror (e.g. remote_mirror_url which is defined in the environment or
# pipeline_mirror_url), which is also what we want.
spack.mirror.add(
spack.mirror.Mirror(remote_mirror_override, name="mirror_override"),
cfg.default_modify_scope(),
)
pipeline_mirrors.append(remote_mirror_override)
# TODO: Remove this in Spack 0.23
if deprecated_mirror_config and spack_pipeline_type == "spack_pull_request":
if shared_pr_mirror_url != "None":
pipeline_mirrors.append(shared_pr_mirror_url)
matches = (
None
if full_rebuild
else bindist.get_mirrors_for_spec(
job_spec, mirrors_to_check=mirrors_to_check, index_only=False
)
)
matches = None if full_rebuild else bindist.get_mirrors_for_spec(job_spec, index_only=False)
if matches:
# Got a hash match on at least one configured mirror. All
@@ -542,25 +420,10 @@ def ci_rebuild(args):
tty.msg("No need to rebuild {0}, found hash match at: ".format(job_spec_pkg_name))
for match in matches:
tty.msg(" {0}".format(match["mirror_url"]))
# TODO: Remove this block in Spack 0.23
if enable_artifacts_mirror:
matching_mirror = matches[0]["mirror_url"]
build_cache_dir = os.path.join(local_mirror_dir, "build_cache")
tty.debug("Getting {0} buildcache from {1}".format(job_spec_pkg_name, matching_mirror))
tty.debug("Downloading to {0}".format(build_cache_dir))
bindist.download_single_spec(job_spec, build_cache_dir, mirror_url=matching_mirror)
# Now we are done and successful
return 0
# Before beginning the install, if this is a "rebuild everything" pipeline, we
# only want to keep the mirror being used by the current pipeline as it's binary
# package destination. This ensures that the when we rebuild everything, we only
# consume binary dependencies built in this pipeline.
# TODO: Remove this in Spack 0.23
if deprecated_mirror_config and full_rebuild:
spack_ci.remove_other_mirrors(pipeline_mirrors, cfg.default_modify_scope())
# No hash match anywhere means we need to rebuild spec
# Start with spack arguments
@@ -681,17 +544,11 @@ def ci_rebuild(args):
cdash_handler.copy_test_results(reports_dir, job_test_dir)
if install_exit_code == 0:
# If the install succeeded, push it to one or more mirrors. Failure to push to any mirror
# If the install succeeded, push it to the buildcache destination. Failure to push
# will result in a non-zero exit code. Pushing is best-effort.
mirror_urls = [buildcache_mirror_url]
# TODO: Remove this block in Spack 0.23
if pipeline_mirror_url:
mirror_urls.append(pipeline_mirror_url)
for result in spack_ci.create_buildcache(
input_spec=job_spec,
destination_mirror_urls=mirror_urls,
destination_mirror_urls=[buildcache_destination.push_url],
sign_binaries=spack_ci.can_sign_binaries(),
):
if not result.success:

View File

@@ -660,34 +660,32 @@ def mirror_name_or_url(m):
# accidentally to a dir in the current working directory.
# If there's a \ or / in the name, it's interpreted as a path or url.
if "/" in m or "\\" in m:
if "/" in m or "\\" in m or m in (".", ".."):
return spack.mirror.Mirror(m)
# Otherwise, the named mirror is required to exist.
try:
return spack.mirror.require_mirror_name(m)
except ValueError as e:
raise argparse.ArgumentTypeError(
str(e) + ". Did you mean {}?".format(os.path.join(".", m))
)
raise argparse.ArgumentTypeError(f"{e}. Did you mean {os.path.join('.', m)}?") from e
def mirror_url(url):
try:
return spack.mirror.Mirror.from_url(url)
except ValueError as e:
raise argparse.ArgumentTypeError(str(e))
raise argparse.ArgumentTypeError(str(e)) from e
def mirror_directory(path):
try:
return spack.mirror.Mirror.from_local_path(path)
except ValueError as e:
raise argparse.ArgumentTypeError(str(e))
raise argparse.ArgumentTypeError(str(e)) from e
def mirror_name(name):
try:
return spack.mirror.require_mirror_name(name)
except ValueError as e:
raise argparse.ArgumentTypeError(str(e))
raise argparse.ArgumentTypeError(str(e)) from e
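A quick illustration of the widened path check in mirror_name_or_url above:

def looks_like_path_or_url(m):
    # "." and ".." now count as paths, so they resolve to directories
    # instead of failing a named-mirror lookup.
    return "/" in m or "\\" in m or m in (".", "..")

for m in (".", "..", "my-mirror", "s3://bucket/cache"):
    print(m, "->", "path/url" if looks_like_path_or_url(m) else "named mirror")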

View File

@@ -99,5 +99,5 @@ def deconcretize(parser, args):
" Use `spack deconcretize --all` to deconcretize ALL specs.",
)
specs = spack.cmd.parse_specs(args.specs) if args.specs else [any]
specs = spack.cmd.parse_specs(args.specs) if args.specs else [None]
deconcretize_specs(args, specs)

View File

@@ -85,8 +85,14 @@ def _retrieve_develop_source(spec: spack.spec.Spec, abspath: str) -> None:
def develop(parser, args):
# Note: we could put develop specs in any scope, but I assume
# users would only ever want to do this for either (a) an active
# env or (b) a specified config file (e.g. that is included by
# an environment)
# TODO: when https://github.com/spack/spack/pull/35307 is merged,
# an active env is not required if a scope is specified
env = spack.cmd.require_active_env(cmd_name="develop")
if not args.spec:
env = spack.cmd.require_active_env(cmd_name="develop")
if args.clone is False:
raise SpackError("No spec provided to spack develop command")
@@ -116,16 +122,18 @@ def develop(parser, args):
raise SpackError("spack develop requires at most one named spec")
spec = specs[0]
version = spec.versions.concrete_range_as_version
if not version:
raise SpackError("Packages to develop must have a concrete version")
# look up the maximum version so infinity versions are preferred for develop
version = max(spec.package_class.versions.keys())
tty.msg(f"Defaulting to highest version: {spec.name}@{version}")
spec.versions = spack.version.VersionList([version])
# If user does not specify --path, we choose to create a directory in the
# active environment's directory, named after the spec
path = args.path or spec.name
if not os.path.isabs(path):
env = spack.cmd.require_active_env(cmd_name="develop")
abspath = spack.util.path.canonicalize_path(path, default_wd=env.path)
else:
abspath = path
@@ -149,13 +157,6 @@ def develop(parser, args):
_retrieve_develop_source(spec, abspath)
# Note: we could put develop specs in any scope, but I assume
# users would only ever want to do this for either (a) an active
# env or (b) a specified config file (e.g. that is included by
# an environment)
# TODO: when https://github.com/spack/spack/pull/35307 is merged,
# an active env is not required if a scope is specified
env = spack.cmd.require_active_env(cmd_name="develop")
tty.debug("Updating develop config for {0} transactionally".format(env.name))
with env.write_transaction():
if args.build_directory is not None:

View File

@@ -57,35 +57,41 @@
# env create
#
def env_create_setup_parser(subparser):
"""create a new environment"""
subparser.add_argument("env_name", metavar="env", help="name or directory of environment")
"""create a new environment
create a new environment or, optionally, copy an existing environment
a manifest file results in a new abstract environment while a lock file
creates a new concrete environment
"""
subparser.add_argument(
"env_name", metavar="env", help="name or directory of the new environment"
)
subparser.add_argument(
"-d", "--dir", action="store_true", help="create an environment in a specific directory"
)
subparser.add_argument(
"--keep-relative",
action="store_true",
help="copy relative develop paths verbatim into the new environment"
" when initializing from envfile",
help="copy envfile's relative develop paths verbatim",
)
view_opts = subparser.add_mutually_exclusive_group()
view_opts.add_argument(
"--without-view", action="store_true", help="do not maintain a view for this environment"
)
view_opts.add_argument(
"--with-view",
help="specify that this environment should maintain a view at the"
" specified path (by default the view is maintained in the"
" environment directory)",
"--with-view", help="maintain view at WITH_VIEW (vs. environment's directory)"
)
subparser.add_argument(
"envfile",
nargs="?",
default=None,
help="either a lockfile (must end with '.json' or '.lock') or a manifest file",
help="manifest or lock file (ends with '.json' or '.lock')",
)
subparser.add_argument(
"--include-concrete", action="append", help="name of old environment to copy specs from"
"--include-concrete",
action="append",
help="copy concrete specs from INCLUDE_CONCRETE's environment",
)
@@ -173,7 +179,7 @@ def _env_create(
# env activate
#
def env_activate_setup_parser(subparser):
"""set the current environment"""
"""set the active environment"""
shells = subparser.add_mutually_exclusive_group()
shells.add_argument(
"--sh",
@@ -213,14 +219,14 @@ def env_activate_setup_parser(subparser):
view_options = subparser.add_mutually_exclusive_group()
view_options.add_argument(
"--with-view",
"-v",
"--with-view",
metavar="name",
help="set runtime environment variables for specific view",
help="set runtime environment variables for the named view",
)
view_options.add_argument(
"--without-view",
"-V",
"--without-view",
action="store_true",
help="do not set runtime environment variables for any view",
)
@@ -230,14 +236,14 @@ def env_activate_setup_parser(subparser):
"--prompt",
action="store_true",
default=False,
help="decorate the command line prompt when activating",
help="add the active environment to the command line prompt",
)
subparser.add_argument(
"--temp",
action="store_true",
default=False,
help="create and activate an environment in a temporary directory",
help="create and activate in a temporary directory",
)
subparser.add_argument(
"--create",
@@ -249,13 +255,12 @@ def env_activate_setup_parser(subparser):
"--envfile",
nargs="?",
default=None,
help="either a lockfile (must end with '.json' or '.lock') or a manifest file",
help="manifest or lock file (ends with '.json' or '.lock')",
)
subparser.add_argument(
"--keep-relative",
action="store_true",
help="copy relative develop paths verbatim into the new environment"
" when initializing from envfile",
help="copy envfile's relative develop paths verbatim when create",
)
subparser.add_argument(
"-d",
@@ -269,10 +274,7 @@ def env_activate_setup_parser(subparser):
dest="env_name",
nargs="?",
default=None,
help=(
"name of managed environment or directory of the independent env"
" (when using --dir/-d) to activate"
),
help=("name or directory of the environment being activated"),
)
@@ -385,7 +387,7 @@ def env_activate(args):
# env deactivate
#
def env_deactivate_setup_parser(subparser):
"""deactivate any active environment in the shell"""
"""deactivate the active environment"""
shells = subparser.add_mutually_exclusive_group()
shells.add_argument(
"--sh",
@@ -448,23 +450,27 @@ def env_deactivate(args):
# env remove
#
def env_remove_setup_parser(subparser):
"""remove an existing environment"""
subparser.add_argument("rm_env", metavar="env", nargs="+", help="environment(s) to remove")
"""remove managed environment(s)
remove existing environment(s) managed by Spack
directory environments and manifests embedded in repositories must be
removed manually
"""
subparser.add_argument(
"rm_env", metavar="env", nargs="+", help="name(s) of the environment(s) being removed"
)
arguments.add_common_arguments(subparser, ["yes_to_all"])
subparser.add_argument(
"-f",
"--force",
action="store_true",
help="remove the environment even if it is included in another environment",
help="force removal even when included in other environment(s)",
)
def env_remove(args):
"""Remove a *named* environment.
This removes an environment managed by Spack. Directory environments
and manifests embedded in repositories should be removed manually.
"""
"""remove existing environment(s)"""
remove_envs = []
valid_envs = []
bad_envs = []
@@ -519,29 +525,32 @@ def env_remove(args):
# env rename
#
def env_rename_setup_parser(subparser):
"""rename an existing environment"""
"""rename an existing environment
rename a managed environment or move an independent/directory environment
operation cannot be performed to or from an active environment
"""
subparser.add_argument(
"mv_from", metavar="from", help="name (or path) of existing environment"
)
subparser.add_argument(
"mv_to", metavar="to", help="new name (or path) for existing environment"
"mv_from", metavar="from", help="current name or directory of the environment"
)
subparser.add_argument("mv_to", metavar="to", help="new name or directory for the environment")
subparser.add_argument(
"-d",
"--dir",
action="store_true",
help="the specified arguments correspond to directory paths",
help="positional arguments are environment directory paths",
)
subparser.add_argument(
"-f", "--force", action="store_true", help="allow overwriting of an existing environment"
"-f",
"--force",
action="store_true",
help="force renaming even if overwriting an existing environment",
)
def env_rename(args):
"""Rename an environment.
This renames a managed environment or moves an independent environment.
"""
"""rename or move an existing environment"""
# Directory option has been specified
if args.dir:
@@ -590,7 +599,7 @@ def env_rename(args):
# env list
#
def env_list_setup_parser(subparser):
"""list managed environments"""
"""list all managed environments"""
def env_list(args):
@@ -626,13 +635,14 @@ def actions():
# env view
#
def env_view_setup_parser(subparser):
"""manage a view associated with the environment"""
"""manage the environment's view
provide the path when enabling a view with a non-default path
"""
subparser.add_argument(
"action", choices=ViewAction.actions(), help="action to take for the environment's view"
)
subparser.add_argument(
"view_path", nargs="?", help="when enabling a view, optionally set the path manually"
)
subparser.add_argument("view_path", nargs="?", help="view's non-default path when enabling it")
def env_view(args):
@@ -660,7 +670,7 @@ def env_view(args):
# env status
#
def env_status_setup_parser(subparser):
"""print whether there is an active environment"""
"""print active environment status"""
def env_status(args):
@@ -720,14 +730,22 @@ def env_loads(args):
def env_update_setup_parser(subparser):
"""update environments to the latest format"""
"""update the environment manifest to the latest schema format
update the environment to the latest schema format, which may not be
readable by older versions of spack
a backup copy of the manifest is retained in case there is a need to revert
this operation
"""
subparser.add_argument(
metavar="env", dest="update_env", help="name or directory of the environment to activate"
metavar="env", dest="update_env", help="name or directory of the environment"
)
spack.cmd.common.arguments.add_common_arguments(subparser, ["yes_to_all"])
def env_update(args):
"""update the manifest to the latest format"""
manifest_file = ev.manifest_file(args.update_env)
backup_file = manifest_file + ".bkp"
@@ -757,14 +775,22 @@ def env_update(args):
def env_revert_setup_parser(subparser):
"""restore environments to their state before update"""
"""restore the environment manifest to its previous format
revert the environment's manifest to the schema format from its last
'spack env update'
the current manifest will be overwritten by the backup copy and the backup
copy will be removed
"""
subparser.add_argument(
metavar="env", dest="revert_env", help="name or directory of the environment to activate"
metavar="env", dest="revert_env", help="name or directory of the environment"
)
spack.cmd.common.arguments.add_common_arguments(subparser, ["yes_to_all"])
def env_revert(args):
"""restore the environment manifest to its previous format"""
manifest_file = ev.manifest_file(args.revert_env)
backup_file = manifest_file + ".bkp"
@@ -796,15 +822,19 @@ def env_revert(args):
def env_depfile_setup_parser(subparser):
"""generate a depfile from the concrete environment specs"""
"""generate a depfile to exploit parallel builds across specs
requires the active environment to be concrete
"""
subparser.add_argument(
"--make-prefix",
"--make-target-prefix",
default=None,
metavar="TARGET",
help="prefix Makefile targets (and variables) with <TARGET>/<name>\n\nby default "
"the absolute path to the directory makedeps under the environment metadata dir is "
"used. can be set to an empty string --make-prefix ''",
help="prefix Makefile targets/variables with <TARGET>/<name>,\n"
"which can be an empty string (--make-prefix '')\n"
"defaults to the absolute path of the environment's makedeps\n"
"environment metadata dir\n",
)
subparser.add_argument(
"--make-disable-jobserver",
@@ -819,8 +849,8 @@ def env_depfile_setup_parser(subparser):
type=arguments.use_buildcache,
default="package:auto,dependencies:auto",
metavar="[{auto,only,never},][package:{auto,only,never},][dependencies:{auto,only,never}]",
help="when using `only`, redundant build dependencies are pruned from the DAG\n\n"
"this flag is passed on to the generated spack install commands",
help="use `only` to prune redundant build dependencies\n"
"option is also passed to generated spack install commands",
)
subparser.add_argument(
"-o",
@@ -834,14 +864,14 @@ def env_depfile_setup_parser(subparser):
"--generator",
default="make",
choices=("make",),
help="specify the depfile type\n\ncurrently only make is supported",
help="specify the depfile type (only supports `make`)",
)
subparser.add_argument(
metavar="specs",
dest="specs",
nargs=argparse.REMAINDER,
default=None,
help="generate a depfile only for matching specs in the environment",
help="limit the generated file to matching specs",
)
@@ -910,7 +940,12 @@ def setup_parser(subparser):
setup_parser_cmd_name = "env_%s_setup_parser" % name
setup_parser_cmd = globals()[setup_parser_cmd_name]
subsubparser = sp.add_parser(name, aliases=aliases, help=setup_parser_cmd.__doc__)
subsubparser = sp.add_parser(
name,
aliases=aliases,
description=setup_parser_cmd.__doc__,
help=spack.cmd.first_line(setup_parser_cmd.__doc__),
)
setup_parser_cmd(subsubparser)
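A small self-contained sketch of why the docstring is split this way, assuming `spack.cmd.first_line` returns the first line of a docstring: the full text becomes the subcommand's description, while only its first line appears in the parent command's listing.

import argparse

def first_line(docstring):  # stand-in for spack.cmd.first_line (assumption)
    return docstring.strip().splitlines()[0]

doc = """create a new environment

create a new environment or, optionally, copy an existing environment"""
parser = argparse.ArgumentParser(prog="env")
sp = parser.add_subparsers(metavar="SUBCOMMAND")
sp.add_parser("create", description=doc, help=first_line(doc))
parser.print_help()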

View File

@@ -174,17 +174,17 @@ def query_arguments(args):
if (args.missing or args.only_missing) and not args.only_deprecated:
installed.append(InstallStatuses.MISSING)
known = any
predicate_fn = None
if args.unknown:
known = False
predicate_fn = lambda x: not spack.repo.PATH.exists(x.spec.name)
explicit = any
explicit = None
if args.explicit:
explicit = True
if args.implicit:
explicit = False
q_args = {"installed": installed, "known": known, "explicit": explicit}
q_args = {"installed": installed, "predicate_fn": predicate_fn, "explicit": explicit}
install_tree = args.install_tree
upstreams = spack.config.get("upstreams", {})
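A hedged sketch of the new `predicate_fn` contract, modeled standalone (the record shape is assumed from the lambda above): instead of a `known` flag interpreted by the database, the caller passes an arbitrary predicate over install records.

from collections import namedtuple

Spec = namedtuple("Spec", "name")
Record = namedtuple("Record", "spec")

records = [Record(Spec("zlib")), Record(Spec("mystery-pkg"))]  # hypothetical DB records
known = {"zlib", "cmake"}  # packages the repo knows about (assumption)
predicate_fn = lambda rec: rec.spec.name not in known  # models --unknown
print([r.spec.name for r in records if predicate_fn(r)])  # ['mystery-pkg']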

View File

@@ -80,8 +80,8 @@ def find_matching_specs(specs, allow_multiple_matches=False):
has_errors = True
# No installed package matches the query
if len(matching) == 0 and spec is not any:
tty.die("{0} does not match any installed packages.".format(spec))
if len(matching) == 0 and spec is not None:
tty.die(f"{spec} does not match any installed packages.")
specs_from_cli.extend(matching)
@@ -116,6 +116,6 @@ def mark(parser, args):
" Use `spack mark --all` to mark ALL packages.",
)
# [any] here handles the --all case by forcing all specs to be returned
specs = spack.cmd.parse_specs(args.specs) if args.specs else [any]
# [None] here handles the --all case by forcing all specs to be returned
specs = spack.cmd.parse_specs(args.specs) if args.specs else [None]
mark_specs(args, specs)

View File

@@ -378,7 +378,10 @@ def refresh(module_type, specs, args):
def modules_cmd(parser, args, module_type, callbacks=callbacks):
# Qualifiers to be used when querying the db for specs
constraint_qualifiers = {
"refresh": {"installed": True, "known": lambda x: not spack.repo.PATH.exists(x)}
"refresh": {
"installed": True,
"predicate_fn": lambda x: spack.repo.PATH.exists(x.spec.name),
}
}
query_args = constraint_qualifiers.get(args.subparser_name, {})

View File

@@ -165,7 +165,7 @@ def test_run(args):
if args.fail_fast:
spack.config.set("config:fail_fast", True, scope="command_line")
explicit = args.explicit or any
explicit = args.explicit or None
explicit_str = "explicitly " if args.explicit else ""
# Get specs to test

View File

@@ -90,6 +90,7 @@ def find_matching_specs(
env: optional active environment
specs: list of specs to be matched against installed packages
allow_multiple_matches: if True multiple matches are admitted
origin: origin of the spec
Return:
list: list of specs
@@ -98,7 +99,7 @@ def find_matching_specs(
hashes = env.all_hashes() if env else None
# List of specs that match expressions given via command line
specs_from_cli = []
specs_from_cli: List["spack.spec.Spec"] = []
has_errors = False
for spec in specs:
install_query = [InstallStatuses.INSTALLED, InstallStatuses.DEPRECATED]
@@ -116,7 +117,7 @@ def find_matching_specs(
has_errors = True
# No installed package matches the query
if len(matching) == 0 and spec is not any:
if len(matching) == 0 and spec is not None:
if env:
pkg_type = "packages in environment '%s'" % env.name
else:
@@ -213,7 +214,7 @@ def get_uninstall_list(args, specs: List[spack.spec.Spec], env: Optional[ev.Envi
# Gets the list of installed specs that match the ones given via cli
# args.all takes care of the case where '-a' is given in the cli
matching_specs = find_matching_specs(env, specs, args.all)
matching_specs = find_matching_specs(env, specs, args.all, origin=args.origin)
dependent_specs = installed_dependents(matching_specs)
all_uninstall_specs = matching_specs + dependent_specs if args.dependents else matching_specs
other_dependent_envs = dependent_environments(all_uninstall_specs, current_env=env)
@@ -301,6 +302,6 @@ def uninstall(parser, args):
" Use `spack uninstall --all` to uninstall ALL packages.",
)
# [any] here handles the --all case by forcing all specs to be returned
specs = spack.cmd.parse_specs(args.specs) if args.specs else [any]
# [None] here handles the --all case by forcing all specs to be returned
specs = spack.cmd.parse_specs(args.specs) if args.specs else [None]
uninstall_specs(args, specs)

View File

@@ -33,6 +33,8 @@
YamlFilesystemView.
"""
import sys
import llnl.util.tty as tty
from llnl.util.link_tree import MergeConflictError
@@ -178,7 +180,12 @@ def setup_parser(sp):
def view(parser, args):
"Produce a view of a set of packages."
"""Produce a view of a set of packages."""
if sys.platform == "win32" and args.action in ("hardlink", "hard"):
# Hard-linked views are not yet allowed on Windows.
# See https://github.com/spack/spack/pull/46335#discussion_r1757411915
tty.die("Hard linking is not supported on Windows. Please use symlinks or copy methods.")
specs = spack.cmd.parse_specs(args.specs)
path = args.path[0]

View File

@@ -275,7 +275,7 @@ def __init__(
operating_system,
target,
paths,
modules=None,
modules: Optional[List[str]] = None,
alias=None,
environment=None,
extra_rpaths=None,

View File

@@ -92,6 +92,14 @@ def c11_flag(self):
else:
return "-std=c1x"
@property
def c18_flag(self):
# c18 supported since oneapi 2022, which is classic version 2021.5.0
if self.real_version < Version("21.5.0"):
raise UnsupportedCompilerFlag(self, "the C18 standard", "c18_flag", "< 21.5.0")
else:
return "-std=c18"
@property
def cc_pic_flag(self):
return "-fPIC"
@@ -116,9 +124,8 @@ def setup_custom_environment(self, pkg, env):
# Edge cases for Intel's oneAPI compilers when using the legacy classic compilers:
# Always pass flags to disable deprecation warnings, since these warnings can
# confuse tools that parse the output of compiler commands (e.g. version checks).
if self.cc and self.cc.endswith("icc") and self.real_version >= Version("2021"):
if self.real_version >= Version("2021") and self.real_version <= Version("2023"):
env.append_flags("SPACK_ALWAYS_CFLAGS", "-diag-disable=10441")
if self.cxx and self.cxx.endswith("icpc") and self.real_version >= Version("2021"):
env.append_flags("SPACK_ALWAYS_CXXFLAGS", "-diag-disable=10441")
if self.fc and self.fc.endswith("ifort") and self.real_version >= Version("2021"):
if self.real_version >= Version("2021") and self.real_version <= Version("2024"):
env.append_flags("SPACK_ALWAYS_FFLAGS", "-diag-disable=10448")

View File

@@ -293,6 +293,17 @@ def platform_toolset_ver(self):
vs22_toolset = Version(toolset_ver) > Version("142")
return toolset_ver if not vs22_toolset else "143"
@property
def visual_studio_version(self):
"""The four digit Visual Studio version (i.e. 2019 or 2022)
Note: This differs from the msvc version or toolset version as
those properties track the compiler and build tools version
respectively, whereas this tracks the VS release associated
with a given MSVC compiler.
"""
return re.search(r"[0-9]{4}", self.cc).group(0)
def _compiler_version(self, compiler):
"""Returns version object for given compiler"""
# ignore_errors below is true here due to ifx's

View File

@@ -7,7 +7,9 @@
from os.path import dirname, join
from llnl.util import tty
from llnl.util.filesystem import ancestor
import spack.util.executable
from spack.compiler import Compiler
from spack.version import Version
@@ -116,6 +118,24 @@ def fc_pic_flag(self):
def stdcxx_libs(self):
return ("-cxxlib",)
@property
def prefix(self):
# OneAPI reports its install prefix when running ``--version``
# on the line ``InstalledDir: <prefix>/bin/compiler``.
cc = spack.util.executable.Executable(self.cc)
with self.compiler_environment():
oneapi_output = cc("--version", output=str, error=str)
for line in oneapi_output.splitlines():
if line.startswith("InstalledDir:"):
oneapi_prefix = line.split(":")[1].strip()
# Go from <prefix>/bin/compiler to <prefix>
return ancestor(oneapi_prefix, 2)
raise RuntimeError(
"could not find install prefix of OneAPI from output:\n\t{}".format(oneapi_output)
)
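
A standalone sketch of the parsing done by the prefix property above, fed hypothetical `--version` output (the real property shells out to the compiler inside its environment):

from llnl.util.filesystem import ancestor

oneapi_output = (
    "Intel(R) oneAPI DPC++/C++ Compiler 2024.0.0 (2024.0.0.20231017)\n"
    "InstalledDir: /opt/intel/oneapi/compiler/2024.0/bin/compiler\n"
)
for line in oneapi_output.splitlines():
    if line.startswith("InstalledDir:"):
        # Go from <prefix>/bin/compiler to <prefix>
        print(ancestor(line.split(":")[1].strip(), 2))  # -> /opt/intel/oneapi/compiler/2024.0
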
def setup_custom_environment(self, pkg, env):
# workaround bug in icpx driver where it requires sycl-post-link is on the PATH
# It is located in the same directory as the driver. Error message:
@@ -131,11 +151,14 @@ def setup_custom_environment(self, pkg, env):
# Edge cases for Intel's oneAPI compilers when using the legacy classic compilers:
# Always pass flags to disable deprecation warnings, since these warnings can
# confuse tools that parse the output of compiler commands (e.g. version checks).
if self.cc and self.cc.endswith("icc") and self.real_version >= Version("2021"):
# This is really only needed for Fortran, since oneapi@ should be using either
# icx+icpx+ifx or icx+icpx+ifort. But to be on the safe side (some users may
# want to try to swap icpx against icpc, for example), and since the Intel LLVM
# compilers accept these diag-disable flags, we apply them for all compilers.
if self.real_version >= Version("2021") and self.real_version <= Version("2023"):
env.append_flags("SPACK_ALWAYS_CFLAGS", "-diag-disable=10441")
if self.cxx and self.cxx.endswith("icpc") and self.real_version >= Version("2021"):
env.append_flags("SPACK_ALWAYS_CXXFLAGS", "-diag-disable=10441")
if self.fc and self.fc.endswith("ifort") and self.real_version >= Version("2021"):
if self.real_version >= Version("2021") and self.real_version <= Version("2024"):
env.append_flags("SPACK_ALWAYS_FFLAGS", "-diag-disable=10448")
# 2024 release bumped the libsycl version because of an ABI

View File

@@ -32,6 +32,7 @@
Container,
Dict,
Generator,
Iterable,
List,
NamedTuple,
Optional,
@@ -290,55 +291,6 @@ def __reduce__(self):
return ForbiddenLock, tuple()
_QUERY_DOCSTRING = """
Args:
query_spec: queries iterate through specs in the database and
return those that satisfy the supplied ``query_spec``. If
query_spec is `any`, This will match all specs in the
database. If it is a spec, we'll evaluate
``spec.satisfies(query_spec)``
known (bool or None): Specs that are "known" are those
for which Spack can locate a ``package.py`` file -- i.e.,
Spack "knows" how to install them. Specs that are unknown may
represent packages that existed in a previous version of
Spack, but have since either changed their name or
been removed
installed (bool or InstallStatus or typing.Iterable or None):
if ``True``, includes only installed
specs in the search; if ``False`` only missing specs, and if
``any``, all specs in database. If an InstallStatus or iterable
of InstallStatus, returns specs whose install status
(installed, deprecated, or missing) matches (one of) the
InstallStatus. (default: True)
explicit (bool or None): A spec that was installed
following a specific user request is marked as explicit. If
instead it was pulled-in as a dependency of a user requested
spec it's considered implicit.
start_date (datetime.datetime or None): filters the query
discarding specs that have been installed before ``start_date``.
end_date (datetime.datetime or None): filters the query discarding
specs that have been installed after ``end_date``.
hashes (Container): list or set of hashes that we can use to
restrict the search
in_buildcache (bool or None): Specs that are marked in
this database as part of an associated binary cache are
``in_buildcache``. All other specs are not. This field is used
for querying mirror indices. Default is ``any``.
Returns:
list of specs that match the query
"""
class LockConfiguration(NamedTuple):
"""Data class to configure locks in Database objects
@@ -604,6 +556,9 @@ def _path(self, spec: "spack.spec.Spec") -> pathlib.Path:
return self.dir / f"{spec.name}-{spec.dag_hash()}"
SelectType = Callable[[InstallRecord], bool]
class Database:
#: Fields written for each install record
record_fields: Tuple[str, ...] = DEFAULT_INSTALL_RECORD_FIELDS
@@ -1245,7 +1200,7 @@ def _add(
self._data[key].explicit = explicit
@_autospec
def add(self, spec: "spack.spec.Spec", *, explicit: bool = False) -> None:
def add(self, spec: "spack.spec.Spec", *, explicit: bool = False, allow_missing=False) -> None:
"""Add spec at path to database, locking and reading DB to sync.
``add()`` will lock and read from the DB on disk.
@@ -1254,7 +1209,7 @@ def add(self, spec: "spack.spec.Spec", *, explicit: bool = False) -> None:
# TODO: ensure that spec is concrete?
# Entire add is transactional.
with self.write_transaction():
self._add(spec, explicit=explicit)
self._add(spec, explicit=explicit, allow_missing=allow_missing)
def _get_matching_spec_key(self, spec: "spack.spec.Spec", **kwargs) -> str:
"""Get the exact spec OR get a single spec that matches."""
@@ -1525,62 +1480,51 @@ def get_by_hash(self, dag_hash, default=None, installed=any):
def _query(
self,
query_spec=any,
known=any,
installed=True,
explicit=any,
start_date=None,
end_date=None,
hashes=None,
in_buildcache=any,
origin=None,
):
"""Run a query on the database."""
query_spec: Optional[Union[str, "spack.spec.Spec"]] = None,
*,
predicate_fn: Optional[SelectType] = None,
installed: Union[bool, InstallStatus, List[InstallStatus]] = True,
explicit: Optional[bool] = None,
start_date: Optional[datetime.datetime] = None,
end_date: Optional[datetime.datetime] = None,
hashes: Optional[Iterable[str]] = None,
in_buildcache: Optional[bool] = None,
origin: Optional[str] = None,
) -> List["spack.spec.Spec"]:
# TODO: Specs are a lot like queries. Should there be a
# TODO: wildcard spec object, and should specs have attributes
# TODO: like installed and known that can be queried? Or are
# TODO: these really special cases that only belong here?
# Restrict the set of records over which we iterate first
matching_hashes = self._data
if hashes is not None:
matching_hashes = {h: self._data[h] for h in hashes if h in self._data}
if query_spec is not any:
if not isinstance(query_spec, spack.spec.Spec):
query_spec = spack.spec.Spec(query_spec)
if isinstance(query_spec, str):
query_spec = spack.spec.Spec(query_spec)
# Just look up concrete specs with hashes; no fancy search.
if query_spec.concrete:
# TODO: handling of hashes restriction is not particularly elegant.
hash_key = query_spec.dag_hash()
if hash_key in self._data and (not hashes or hash_key in hashes):
return [self._data[hash_key].spec]
else:
return []
if query_spec is not None and query_spec.concrete:
hash_key = query_spec.dag_hash()
if hash_key not in matching_hashes:
return []
matching_hashes = {hash_key: matching_hashes[hash_key]}
# Abstract specs require more work -- currently we test
# against everything.
results = []
start_date = start_date or datetime.datetime.min
end_date = end_date or datetime.datetime.max
# save specs whose name doesn't match for last, to avoid a virtual check
deferred = []
for key, rec in self._data.items():
if hashes is not None and rec.spec.dag_hash() not in hashes:
continue
for rec in matching_hashes.values():
if origin and not (origin == rec.origin):
continue
if not rec.install_type_matches(installed):
continue
if in_buildcache is not any and rec.in_buildcache != in_buildcache:
if in_buildcache is not None and rec.in_buildcache != in_buildcache:
continue
if explicit is not any and rec.explicit != explicit:
if explicit is not None and rec.explicit != explicit:
continue
if known is not any and known(rec.spec.name):
if predicate_fn is not None and not predicate_fn(rec):
continue
if start_date or end_date:
@@ -1588,7 +1532,7 @@ def _query(
if not (start_date < inst_date < end_date):
continue
if query_spec is any:
if query_spec is None or query_spec.concrete:
results.append(rec.spec)
continue
@@ -1606,36 +1550,118 @@ def _query(
# If we did find something, the query spec can't be virtual b/c we matched an actual
# package installation, so skip the virtual check entirely. If we *didn't* find anything,
# check all the deferred specs *if* the query is virtual.
if not results and query_spec is not any and deferred and query_spec.virtual:
if not results and query_spec is not None and deferred and query_spec.virtual:
results = [spec for spec in deferred if spec.satisfies(query_spec)]
return results
if _query.__doc__ is None:
_query.__doc__ = ""
_query.__doc__ += _QUERY_DOCSTRING
def query_local(
self,
query_spec: Optional[Union[str, "spack.spec.Spec"]] = None,
*,
predicate_fn: Optional[SelectType] = None,
installed: Union[bool, InstallStatus, List[InstallStatus]] = True,
explicit: Optional[bool] = None,
start_date: Optional[datetime.datetime] = None,
end_date: Optional[datetime.datetime] = None,
hashes: Optional[List[str]] = None,
in_buildcache: Optional[bool] = None,
origin: Optional[str] = None,
) -> List["spack.spec.Spec"]:
"""Queries the local Spack database.
def query_local(self, *args, **kwargs):
"""Query only the local Spack database.
This function doesn't guarantee any sorting of the returned data for performance reasons,
since comparing specs for __lt__ may be an expensive operation.
This function doesn't guarantee any sorting of the returned
data for performance reasons, since comparing specs for __lt__
may be an expensive operation.
Args:
query_spec: if query_spec is ``None``, match all specs in the database.
If it is a spec, return all specs matching ``spec.satisfies(query_spec)``.
predicate_fn: optional predicate taking an InstallRecord as argument, and returning
whether that record is selected for the query. It can be used to craft criteria
that need some data for selection not provided by the Database itself.
installed: if ``True``, includes only installed specs in the search. If ``False`` only
missing specs, and if ``any``, all specs in database. If an InstallStatus or
iterable of InstallStatus, returns specs whose install status matches at least
one of the InstallStatus.
explicit: a spec that was installed following a specific user request is marked as
explicit. If instead it was pulled-in as a dependency of a user requested spec
it's considered implicit.
start_date: if set considers only specs installed from the starting date.
end_date: if set considers only specs installed until the ending date.
in_buildcache: specs that are marked in this database as part of an associated binary
cache are ``in_buildcache``. All other specs are not. This field is used for
querying mirror indices. By default, it does not check this status.
hashes: list of hashes used to restrict the search
origin: origin of the spec
"""
with self.read_transaction():
return self._query(*args, **kwargs)
return self._query(
query_spec,
predicate_fn=predicate_fn,
installed=installed,
explicit=explicit,
start_date=start_date,
end_date=end_date,
hashes=hashes,
in_buildcache=in_buildcache,
origin=origin,
)
if query_local.__doc__ is None:
query_local.__doc__ = ""
query_local.__doc__ += _QUERY_DOCSTRING
def query(
self,
query_spec: Optional[Union[str, "spack.spec.Spec"]] = None,
*,
predicate_fn: Optional[SelectType] = None,
installed: Union[bool, InstallStatus, List[InstallStatus]] = True,
explicit: Optional[bool] = None,
start_date: Optional[datetime.datetime] = None,
end_date: Optional[datetime.datetime] = None,
in_buildcache: Optional[bool] = None,
hashes: Optional[List[str]] = None,
origin: Optional[str] = None,
install_tree: str = "all",
):
"""Queries the Spack database including all upstream databases.
def query(self, *args, **kwargs):
"""Query the Spack database including all upstream databases.
Args:
query_spec: if query_spec is ``None``, match all specs in the database.
If it is a spec, return all specs matching ``spec.satisfies(query_spec)``.
Additional Arguments:
install_tree (str): query 'all' (default), 'local', 'upstream', or upstream path
predicate_fn: optional predicate taking an InstallRecord as argument, and returning
whether that record is selected for the query. It can be used to craft criteria
that need some data for selection not provided by the Database itself.
installed: if ``True``, includes only installed specs in the search. If ``False`` only
missing specs, and if ``any``, all specs in database. If an InstallStatus or
iterable of InstallStatus, returns specs whose install status matches at least
one of the InstallStatus.
explicit: a spec that was installed following a specific user request is marked as
explicit. If instead it was pulled-in as a dependency of a user requested spec
it's considered implicit.
start_date: if set considers only specs installed from the starting date.
end_date: if set considers only specs installed until the ending date.
in_buildcache: specs that are marked in this database as part of an associated binary
cache are ``in_buildcache``. All other specs are not. This field is used for
querying mirror indices. By default, it does not check this status.
hashes: list of hashes used to restrict the search
install_tree: query 'all' (default), 'local', 'upstream', or upstream path
origin: origin of the spec
"""
install_tree = kwargs.pop("install_tree", "all")
valid_trees = ["all", "upstream", "local", self.root] + [u.root for u in self.upstream_dbs]
if install_tree not in valid_trees:
msg = "Invalid install_tree argument to Database.query()\n"
@@ -1651,28 +1677,54 @@ def query(self, *args, **kwargs):
# queries for upstream DBs need to *not* lock - we may not
# have permissions to do this and the upstream DBs won't know about
# us anyway (so e.g. they should never uninstall specs)
upstream_results.extend(upstream_db._query(*args, **kwargs) or [])
upstream_results.extend(
upstream_db._query(
query_spec,
predicate_fn=predicate_fn,
installed=installed,
explicit=explicit,
start_date=start_date,
end_date=end_date,
hashes=hashes,
in_buildcache=in_buildcache,
origin=origin,
)
or []
)
local_results = []
local_results: Set["spack.spec.Spec"] = set()
if install_tree in ("all", "local") or self.root == install_tree:
local_results = set(self.query_local(*args, **kwargs))
local_results = set(
self.query_local(
query_spec,
predicate_fn=predicate_fn,
installed=installed,
explicit=explicit,
start_date=start_date,
end_date=end_date,
hashes=hashes,
in_buildcache=in_buildcache,
origin=origin,
)
)
results = list(local_results) + list(x for x in upstream_results if x not in local_results)
return sorted(results)
if query.__doc__ is None:
query.__doc__ = ""
query.__doc__ += _QUERY_DOCSTRING
def query_one(self, query_spec, known=any, installed=True):
def query_one(
self,
query_spec: Optional[Union[str, "spack.spec.Spec"]],
predicate_fn: Optional[SelectType] = None,
installed: Union[bool, InstallStatus, List[InstallStatus]] = True,
) -> Optional["spack.spec.Spec"]:
"""Query for exactly one spec that matches the query spec.
Raises an assertion error if more than one spec matches the
query. Returns None if no installed package matches.
Returns None if no installed package matches.
Raises:
AssertionError: if more than one spec matches the query.
"""
concrete_specs = self.query(query_spec, known=known, installed=installed)
concrete_specs = self.query(query_spec, predicate_fn=predicate_fn, installed=installed)
assert len(concrete_specs) <= 1
return concrete_specs[0] if concrete_specs else None
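
Putting the new keyword-only interface together, a usage sketch (assumes a Spack checkout; `mpileaks` is a placeholder spec string):

import datetime

import spack.store

db = spack.store.STORE.db

# Explicitly installed specs from the last week, local store only:
week_ago = datetime.datetime.now() - datetime.timedelta(days=7)
recent = db.query(explicit=True, start_date=week_ago, install_tree="local")

# Exactly-one lookup: returns None if absent, asserts if ambiguous.
maybe = db.query_one("mpileaks")
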

View File

@@ -11,6 +11,7 @@
import os.path
import re
import sys
import traceback
import warnings
from typing import Dict, Iterable, List, Optional, Set, Tuple, Type
@@ -18,6 +19,7 @@
import llnl.util.lang
import llnl.util.tty
import spack.error
import spack.spec
import spack.util.elf as elf_utils
import spack.util.environment
@@ -66,6 +68,21 @@ def file_identifier(path):
return s.st_dev, s.st_ino
def dedupe_paths(paths: List[str]) -> List[str]:
"""Deduplicate paths based on inode and device number. In case the list contains first a
symlink and then the directory it points to, the symlink is replaced with the directory path.
This ensures that we pick for example ``/usr/bin`` over ``/bin`` if the latter is a symlink to
the former."""
seen: Dict[Tuple[int, int], str] = {}
for path in paths:
identifier = file_identifier(path)
if identifier not in seen:
seen[identifier] = path
elif not os.path.islink(path):
seen[identifier] = path
return list(seen.values())
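
Behavior sketch, assuming a layout where /bin is a symlink to /usr/bin (so both share a device/inode pair):

paths = dedupe_paths(["/bin", "/usr/bin"])
assert paths == ["/usr/bin"]  # the non-symlink path wins, regardless of input order
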
def executables_in_path(path_hints: List[str]) -> Dict[str, str]:
"""Get the paths of all executables available from the current PATH.
@@ -82,8 +99,7 @@ def executables_in_path(path_hints: List[str]) -> Dict[str, str]:
"""
search_paths = llnl.util.filesystem.search_paths_for_executables(*path_hints)
# Make sure we don't doubly list /usr/lib and /lib etc
search_paths = list(llnl.util.lang.dedupe(search_paths, key=file_identifier))
return path_to_dict(search_paths)
return path_to_dict(dedupe_paths(search_paths))
def accept_elf(path, host_compat):
@@ -144,7 +160,7 @@ def libraries_in_ld_and_system_library_path(
search_paths = list(filter(os.path.isdir, search_paths))
# Make sure we don't doubly list /usr/lib and /lib etc
search_paths = list(llnl.util.lang.dedupe(search_paths, key=file_identifier))
search_paths = dedupe_paths(search_paths)
try:
host_compat = elf_utils.get_elf_compat(sys.executable)
@@ -260,8 +276,12 @@ def detect_specs(
)
except Exception as e:
specs = []
if spack.error.SHOW_BACKTRACE:
details = traceback.format_exc()
else:
details = f"[{e.__class__.__name__}: {e}]"
warnings.warn(
f'error detecting "{pkg.name}" from prefix {candidate_path} [{str(e)}]'
f'error detecting "{pkg.name}" from prefix {candidate_path}: {details}'
)
if not specs:
@@ -435,9 +455,9 @@ def by_path(
llnl.util.tty.debug(
f"[EXTERNAL DETECTION] Skipping {pkg_name}: timeout reached"
)
except Exception as e:
except Exception:
llnl.util.tty.debug(
f"[EXTERNAL DETECTION] Skipping {pkg_name}: exception occured {e}"
f"[EXTERNAL DETECTION] Skipping {pkg_name}: {traceback.format_exc()}"
)
return result

View File

@@ -9,11 +9,13 @@
import os
import re
import shlex
from enum import Enum
from typing import List, Optional
import spack.deptypes as dt
import spack.environment.environment as ev
import spack.paths
import spack.spec
import spack.traverse as traverse
@@ -226,6 +228,7 @@ def to_dict(self):
"install_deps_target": self._target("install-deps"),
"any_hash_target": self._target("%"),
"jobserver_support": self.jobserver_support,
"spack_script": shlex.quote(spack.paths.spack_script),
"adjacency_list": self.make_adjacency_list,
"phony_convenience_targets": " ".join(self.phony_convenience_targets),
"pkg_ids_variable": self.pkg_identifier_variable,

View File

@@ -1159,6 +1159,8 @@ def clear(self, re_read=False):
# things that cannot be recreated from file
self.new_specs = [] # write packages for these on write()
self.manifest.clear()
@property
def active(self):
"""True if this environment is currently active."""
@@ -1954,17 +1956,16 @@ def install_specs(self, specs: Optional[List[Spec]] = None, **install_args):
specs = specs if specs is not None else roots
# Extend the set of specs to overwrite with modified dev specs and their parents
overwrite: Set[str] = set()
overwrite.update(install_args.get("overwrite", []), self._dev_specs_that_need_overwrite())
install_args["overwrite"] = overwrite
install_args["overwrite"] = {
*install_args.get("overwrite", ()),
*self._dev_specs_that_need_overwrite(),
}
explicit: Set[str] = set()
explicit.update(
install_args.get("explicit", []),
(s.dag_hash() for s in specs),
(s.dag_hash() for s in roots),
)
install_args["explicit"] = explicit
# Only environment roots are marked explicit
install_args["explicit"] = {
*install_args.get("explicit", ()),
*(s.dag_hash() for s in roots),
}
PackageInstaller([spec.package for spec in specs], **install_args).install()
@@ -2163,6 +2164,13 @@ def _concrete_specs_dict(self):
# Assumes no legacy formats, since this was just created.
spec_dict[ht.dag_hash.name] = s.dag_hash()
concrete_specs[s.dag_hash()] = spec_dict
if s.build_spec is not s:
for d in s.build_spec.traverse():
build_spec_dict = d.node_dict_with_hashes(hash=ht.dag_hash)
build_spec_dict[ht.dag_hash.name] = d.dag_hash()
concrete_specs[d.dag_hash()] = build_spec_dict
return concrete_specs
def _concrete_roots_dict(self):
@@ -2322,7 +2330,7 @@ def filter_specs(self, reader, json_specs_by_hash, order_concretized):
specs_by_hash[lockfile_key] = spec
# Second pass: For each spec, get its dependencies from the node dict
# and add them to the spec
# and add them to the spec, including build specs
for lockfile_key, node_dict in json_specs_by_hash.items():
name, data = reader.name_and_data(node_dict)
for _, dep_hash, deptypes, _, virtuals in reader.dependencies_from_node_dict(data):
@@ -2330,6 +2338,10 @@ def filter_specs(self, reader, json_specs_by_hash, order_concretized):
specs_by_hash[dep_hash], depflag=dt.canonicalize(deptypes), virtuals=virtuals
)
if "build_spec" in node_dict:
_, bhash, _ = reader.extract_build_spec_info_from_node_dict(node_dict)
specs_by_hash[lockfile_key]._build_spec = specs_by_hash[bhash]
# Traverse the root specs one at a time in the order they appear.
# The first time we see each DAG hash, that's the one we want to
# keep. This is only required as long as we support older lockfile
@@ -2789,6 +2801,11 @@ def remove_user_spec(self, user_spec: str) -> None:
raise SpackEnvironmentError(msg) from e
self.changed = True
def clear(self) -> None:
"""Clear all user specs from the list of root specs"""
self.configuration["specs"] = []
self.changed = True
def override_user_spec(self, user_spec: str, idx: int) -> None:
"""Overrides the user spec at index idx with the one passed as input.

View File

@@ -48,8 +48,6 @@ def activate_header(env, shell, prompt=None, view: Optional[str] = None):
cmds += 'set "SPACK_ENV=%s"\n' % env.path
if view:
cmds += 'set "SPACK_ENV_VIEW=%s"\n' % view
# TODO: despacktivate
# TODO: prompt
elif shell == "pwsh":
cmds += "$Env:SPACK_ENV='%s'\n" % env.path
if view:

View File

@@ -12,6 +12,9 @@
#: this is module-scoped because it needs to be set very early
debug = 0
#: whether to show a backtrace when an error is printed, enabled with --backtrace.
SHOW_BACKTRACE = False
class SpackError(Exception):
"""This is the superclass for all Spack errors.

View File

@@ -33,6 +33,7 @@
from llnl.util.tty.color import colorize
import spack.config
import spack.directory_layout
import spack.paths
import spack.projections
import spack.relocate
@@ -50,7 +51,7 @@
_projections_path = ".spack/projections.yaml"
LinkCallbackType = Callable[[str, str, "FilesystemView", Optional["spack.spec.Spec"]], None]
LinkCallbackType = Callable[[str, str, "FilesystemView", Optional[spack.spec.Spec]], None]
def view_symlink(src: str, dst: str, *args, **kwargs) -> None:
@@ -62,7 +63,7 @@ def view_hardlink(src: str, dst: str, *args, **kwargs) -> None:
def view_copy(
src: str, dst: str, view: "FilesystemView", spec: Optional["spack.spec.Spec"] = None
src: str, dst: str, view: "FilesystemView", spec: Optional[spack.spec.Spec] = None
) -> None:
"""
Copy a file from src to dst.
@@ -100,10 +101,12 @@ def view_copy(
spack.relocate.relocate_text(files=[dst], prefixes=prefix_to_projection)
try:
os.chown(dst, src_stat.st_uid, src_stat.st_gid)
except OSError:
tty.debug(f"Can't change the permissions for {dst}")
# The os module on Windows does not have a chown function.
if sys.platform != "win32":
try:
os.chown(dst, src_stat.st_uid, src_stat.st_gid)
except OSError:
tty.debug(f"Can't change the permissions for {dst}")
#: supported string values for `link_type` in an env, mapped to canonical values
@@ -158,7 +161,7 @@ class FilesystemView:
def __init__(
self,
root: str,
layout: "spack.directory_layout.DirectoryLayout",
layout: spack.directory_layout.DirectoryLayout,
*,
projections: Optional[Dict] = None,
ignore_conflicts: bool = False,
@@ -180,7 +183,10 @@ def __init__(
# Setup link function to include view
self.link_type = link_type
self.link = ft.partial(function_for_link_type(link_type), view=self)
self._link = function_for_link_type(link_type)
def link(self, src: str, dst: str, spec: Optional[spack.spec.Spec] = None) -> None:
self._link(src, dst, self, spec)
def add_specs(self, *specs, **kwargs):
"""
@@ -281,7 +287,7 @@ class YamlFilesystemView(FilesystemView):
def __init__(
self,
root: str,
layout: "spack.directory_layout.DirectoryLayout",
layout: spack.directory_layout.DirectoryLayout,
*,
projections: Optional[Dict] = None,
ignore_conflicts: bool = False,

View File

@@ -2,8 +2,7 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""
This module encapsulates package installation functionality.
"""This module encapsulates package installation functionality.
The PackageInstaller coordinates concurrent builds of packages for the same
Spack instance by leveraging the dependency DAG and file system locks. It
@@ -17,16 +16,18 @@
File system locks enable coordination such that no two processes attempt to
build the same or a failed dependency package.
Failures to install dependency packages result in removal of their dependents'
build tasks from the current process. A failure file is also written (and
locked) so that other processes can detect the failure and adjust their build
tasks accordingly.
If a dependency package fails to install, its dependents' tasks will be
removed from the installing process's queue. A failure file is also written
and locked. Other processes use this file to detect the failure and dequeue
its dependents.
This module supports the coordination of local and distributed concurrent
installations of packages in a Spack instance.
"""
import copy
import enum
import glob
import heapq
import io
@@ -42,6 +43,7 @@
import llnl.util.filesystem as fs
import llnl.util.lock as lk
import llnl.util.tty as tty
from llnl.string import ordinal
from llnl.util.lang import pretty_seconds
from llnl.util.tty.color import colorize
from llnl.util.tty.log import log_output
@@ -57,6 +59,7 @@
import spack.package_base
import spack.package_prefs as prefs
import spack.repo
import spack.rewiring
import spack.spec
import spack.store
import spack.util.executable
@@ -70,25 +73,32 @@
#: were added (see https://docs.python.org/2/library/heapq.html).
_counter = itertools.count(0)
#: Build status indicating task has been added.
STATUS_ADDED = "queued"
#: Build status indicating the spec failed to install
STATUS_FAILED = "failed"
class BuildStatus(enum.Enum):
"""Different build (task) states."""
#: Build status indicating the spec is being installed (possibly by another
#: process)
STATUS_INSTALLING = "installing"
#: Build status indicating task has been added/queued.
QUEUED = enum.auto()
#: Build status indicating the spec was successfully installed
STATUS_INSTALLED = "installed"
#: Build status indicating the spec failed to install
FAILED = enum.auto()
#: Build status indicating the task has been popped from the queue
STATUS_DEQUEUED = "dequeued"
#: Build status indicating the spec is being installed (possibly by another
#: process)
INSTALLING = enum.auto()
#: Build status indicating task has been removed (to maintain priority
#: queue invariants).
STATUS_REMOVED = "removed"
#: Build status indicating the spec was successfully installed
INSTALLED = enum.auto()
#: Build status indicating the task has been popped from the queue
DEQUEUED = enum.auto()
#: Build status indicating task has been removed (to maintain priority
#: queue invariants).
REMOVED = enum.auto()
def __str__(self):
return f"{self.name.lower()}"
def _write_timer_json(pkg, timer, cache):
@@ -101,13 +111,22 @@ def _write_timer_json(pkg, timer, cache):
return
class InstallAction:
class ExecuteResult(enum.Enum):
# Task succeeded
SUCCESS = enum.auto()
# Task failed
FAILED = enum.auto()
# Task is missing build spec and will be requeued
MISSING_BUILD_SPEC = enum.auto()
class InstallAction(enum.Enum):
#: Don't perform an install
NONE = 0
NONE = enum.auto()
#: Do a standard install
INSTALL = 1
INSTALL = enum.auto()
#: Do an overwrite install
OVERWRITE = 2
OVERWRITE = enum.auto()
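
A small sketch of the enum round-trip (using the classes above): BuildStatus stringifies to its lowercase name, so existing log messages keep their old wording, while ExecuteResult lets the installer dispatch on a task's outcome.

status = BuildStatus.QUEUED
print(f"status '{status}'")  # -> status 'queued'

rc = ExecuteResult.MISSING_BUILD_SPEC
if rc == ExecuteResult.MISSING_BUILD_SPEC:
    pass  # requeue with a dependency on a BuildTask for the build spec
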
class InstallStatus:
@@ -431,7 +450,7 @@ def _process_binary_cache_tarball(
"""
with timer.measure("fetch"):
download_result = binary_distribution.download_tarball(
pkg.spec, unsigned, mirrors_for_spec
pkg.spec.build_spec, unsigned, mirrors_for_spec
)
if download_result is None:
@@ -442,6 +461,11 @@ def _process_binary_cache_tarball(
with timer.measure("install"), spack.util.path.filter_padding():
binary_distribution.extract_tarball(pkg.spec, download_result, force=False, timer=timer)
if pkg.spec.spliced: # overwrite old metadata with new
spack.store.STORE.layout.write_spec(
pkg.spec, spack.store.STORE.layout.spec_file_path(pkg.spec)
)
if hasattr(pkg, "_post_buildcache_install_hook"):
pkg._post_buildcache_install_hook()
@@ -677,7 +701,7 @@ def log(pkg: "spack.package_base.PackageBase") -> None:
def package_id(spec: "spack.spec.Spec") -> str:
"""A "unique" package identifier for installation purposes
The identifier is used to track build tasks, locks, install, and
The identifier is used to track tasks, locks, install, and
failure statuses.
The identifier needs to distinguish between combinations of compilers
@@ -736,14 +760,14 @@ def __init__(self, pkg: "spack.package_base.PackageBase", install_args: dict):
)
def __repr__(self) -> str:
"""Returns a formal representation of the build request."""
"""Return a formal representation of the build request."""
rep = f"{self.__class__.__name__}("
for attr, value in self.__dict__.items():
rep += f"{attr}={value.__repr__()}, "
return f"{rep.strip(', ')})"
def __str__(self) -> str:
"""Returns a printable version of the build request."""
"""Return a printable version of the build request."""
return f"package={self.pkg.name}, install_args={self.install_args}"
def _add_default_args(self) -> None:
@@ -840,37 +864,42 @@ def traverse_dependencies(self, spec=None, visited=None) -> Iterator["spack.spec
yield dep
class BuildTask:
"""Class for representing the build task for a package."""
class Task:
"""Base class for representing a task for a package."""
def __init__(
self,
pkg: "spack.package_base.PackageBase",
request: Optional[BuildRequest],
compiler: bool,
start: float,
attempts: int,
status: str,
installed: Set[str],
request: BuildRequest,
*,
compiler: bool = False,
start: float = 0.0,
attempts: int = 0,
status: BuildStatus = BuildStatus.QUEUED,
installed: Set[str] = set(),
):
"""
Instantiate a build task for a package.
Instantiate a task for a package.
Args:
pkg: the package to be built and installed
request: the associated install request where ``None`` can be
used to indicate the package was explicitly requested by the user
compiler: whether task is for a bootstrap compiler
request: the associated install request
start: the initial start time for the package, in seconds
attempts: the number of attempts to install the package
attempts: the number of attempts to install the package, which
should be 0 when the task is initially instantiated
status: the installation status
installed: the identifiers of packages that have
installed: the (string) identifiers of packages that have
been installed so far
Raises:
``InstallError`` if the build status is incompatible with the task
``TypeError`` if provided an argument of the wrong type
``ValueError`` if provided an argument with the wrong value or state
"""
# Ensure dealing with a package that has a concrete spec
if not isinstance(pkg, spack.package_base.PackageBase):
raise ValueError(f"{str(pkg)} must be a package")
raise TypeError(f"{str(pkg)} must be a package")
self.pkg = pkg
if not self.pkg.spec.concrete:
@@ -881,26 +910,34 @@ def __init__(
# The explicit build request associated with the package
if not isinstance(request, BuildRequest):
raise ValueError(f"{str(pkg)} must have a build request")
raise TypeError(f"{request} is not a valid build request")
self.request = request
# Initialize the status to an active state. The status is used to
# ensure priority queue invariants when tasks are "removed" from the
# queue.
if status == STATUS_REMOVED:
raise spack.error.InstallError(
f"Cannot create a build task for {self.pkg_id} with status '{status}'", pkg=pkg
)
if not isinstance(status, BuildStatus):
raise TypeError(f"{status} is not a valid build status")
# The initial build task cannot have status "removed".
if attempts == 0 and status == BuildStatus.REMOVED:
raise spack.error.InstallError(
f"Cannot create a task for {self.pkg_id} with status '{status}'", pkg=pkg
)
self.status = status
# Package is associated with a bootstrap compiler
self.compiler = compiler
# cache the PID, which is used for distributed build messages in self.execute
self.pid = os.getpid()
# The initial start time for processing the spec
self.start = start
if not isinstance(installed, set):
raise TypeError(
f"BuildTask constructor requires 'installed' be a 'set', "
f"not '{installed.__class__.__name__}'."
)
# Set of dependents, which needs to include the requesting package
# to support tracking of parallel, multi-spec, environment installs.
self.dependents = set(get_dependent_ids(self.pkg.spec))
@@ -921,16 +958,22 @@ def __init__(
)
# List of uninstalled dependencies, which is used to establish
# the priority of the build task.
#
# the priority of the task.
self.uninstalled_deps = set(
pkg_id for pkg_id in self.dependencies if pkg_id not in installed
)
# Ensure key sequence-related properties are updated accordingly.
self.attempts = 0
self.attempts = attempts
self._update()
def execute(self, install_status: InstallStatus) -> ExecuteResult:
"""Execute the work of this task.
The ``install_status`` is an ``InstallStatus`` object used to format progress reporting for
this task in the context of the full ``BuildRequest``."""
raise NotImplementedError
def __eq__(self, other):
return self.key == other.key
@@ -950,14 +993,14 @@ def __ne__(self, other):
return self.key != other.key
def __repr__(self) -> str:
"""Returns a formal representation of the build task."""
"""Returns a formal representation of the task."""
rep = f"{self.__class__.__name__}("
for attr, value in self.__dict__.items():
rep += f"{attr}={value.__repr__()}, "
return f"{rep.strip(', ')})"
def __str__(self) -> str:
"""Returns a printable version of the build task."""
"""Returns a printable version of the task."""
dependencies = f"#dependencies={len(self.dependencies)}"
return "priority={0}, status={1}, start={2}, {3}".format(
self.priority, self.status, self.start, dependencies
@@ -974,8 +1017,7 @@ def _update(self) -> None:
def add_dependent(self, pkg_id: str) -> None:
"""
Ensure the dependent package id is in the task's list so it will be
properly updated when this package is installed.
Ensure the package is in this task's ``dependents`` list.
Args:
pkg_id: package identifier of the dependent package
@@ -984,6 +1026,20 @@ def add_dependent(self, pkg_id: str) -> None:
tty.debug(f"Adding {pkg_id} as a dependent of {self.pkg_id}")
self.dependents.add(pkg_id)
def add_dependency(self, pkg_id, installed=False):
"""
Ensure the package is in this task's ``dependencies`` list.
Args:
pkg_id (str): package identifier of the dependency package
installed (bool): install status of the dependency package
"""
if pkg_id != self.pkg_id and pkg_id not in self.dependencies:
tty.debug(f"Adding {pkg_id} as a depencency of {self.pkg_id}")
self.dependencies.add(pkg_id)
if not installed:
self.uninstalled_deps.add(pkg_id)
def flag_installed(self, installed: List[str]) -> None:
"""
Ensure the dependency is not considered to still be uninstalled.
@@ -1000,6 +1056,39 @@ def flag_installed(self, installed: List[str]) -> None:
level=2,
)
def _setup_install_dir(self, pkg: "spack.package_base.PackageBase") -> None:
"""
Create and ensure proper access controls for the install directory.
Write a small metadata file with the current spack environment.
Args:
pkg: the package to be built and installed
"""
# Move to a module level method.
if not os.path.exists(pkg.spec.prefix):
path = spack.util.path.debug_padded_filter(pkg.spec.prefix)
tty.debug(f"Creating the installation directory {path}")
spack.store.STORE.layout.create_install_directory(pkg.spec)
else:
# Set the proper group for the prefix
group = prefs.get_package_group(pkg.spec)
if group:
fs.chgrp(pkg.spec.prefix, group)
# Set the proper permissions.
# This has to be done after group because changing groups blows
# away the sticky group bit on the directory
mode = os.stat(pkg.spec.prefix).st_mode
perms = prefs.get_package_dir_permissions(pkg.spec)
if mode != perms:
os.chmod(pkg.spec.prefix, perms)
# Ensure the metadata path exists as well
fs.mkdirp(spack.store.STORE.layout.metadata_path(pkg.spec), mode=perms)
# Always write host environment - we assume this can change
spack.store.STORE.layout.write_host_environment(pkg.spec)
@property
def explicit(self) -> bool:
return self.pkg.spec.dag_hash() in self.request.install_args.get("explicit", [])
@@ -1030,7 +1119,7 @@ def key(self) -> Tuple[int, int]:
"""The key is the tuple (# uninstalled dependencies, sequence)."""
return (self.priority, self.sequence)
def next_attempt(self, installed) -> "BuildTask":
def next_attempt(self, installed) -> "Task":
"""Create a new, updated task for the next installation attempt."""
task = copy.copy(self)
task._update()
@@ -1044,6 +1133,100 @@ def priority(self):
return len(self.uninstalled_deps)
class BuildTask(Task):
"""Class for representing a build task for a package."""
def execute(self, install_status):
"""
Perform the installation of the requested spec and/or dependency
represented by the build task.
"""
install_args = self.request.install_args
tests = install_args.get("tests")
unsigned = install_args.get("unsigned")
pkg, pkg_id = self.pkg, self.pkg_id
tty.msg(install_msg(pkg_id, self.pid, install_status))
self.start = self.start or time.time()
self.status = BuildStatus.INSTALLING
# Use the binary cache if requested
if self.use_cache:
if _install_from_cache(pkg, self.explicit, unsigned):
return ExecuteResult.SUCCESS
elif self.cache_only:
raise spack.error.InstallError(
"No binary found when cache-only was specified", pkg=pkg
)
else:
tty.msg(f"No binary for {pkg_id} found: installing from source")
pkg.run_tests = tests is True or tests and pkg.name in tests
# hook that allows tests to inspect the Package before installation
# see unit_test_check() docs.
if not pkg.unit_test_check():
return ExecuteResult.FAILED
try:
# Create stage object now and let it be serialized for the child process. That
# way monkeypatch in tests works correctly.
pkg.stage
self._setup_install_dir(pkg)
# Create a child process to do the actual installation.
# Preserve verbosity settings across installs.
spack.package_base.PackageBase._verbose = spack.build_environment.start_build_process(
pkg, build_process, install_args
)
# Note: PARENT of the build process adds the new package to
# the database, so that we don't need to re-read from file.
spack.store.STORE.db.add(pkg.spec, explicit=self.explicit)
except spack.error.StopPhase as e:
# A StopPhase exception means that do_install was asked to
# stop early from clients, and is not an error at this point
pid = f"{self.pid}: " if tty.show_pid() else ""
tty.debug(f"{pid}{str(e)}")
tty.debug(f"Package stage directory: {pkg.stage.source_path}")
return ExecuteResult.SUCCESS
class RewireTask(Task):
"""Class for representing a rewire task for a package."""
def execute(self, install_status):
"""Execute rewire task
Rewire tasks are executed either by rewiring an already installed self.pkg.spec.build_spec,
or by downloading and rewiring a binary for it.
If the build spec is neither installed nor available as a binary, return
ExecuteResult.MISSING_BUILD_SPEC. This prompts the installer to requeue the task with a
dependency on the BuildTask that installs self.pkg.spec.build_spec.
"""
oldstatus = self.status
self.status = BuildStatus.INSTALLING
tty.msg(install_msg(self.pkg_id, self.pid, install_status))
self.start = self.start or time.time()
if not self.pkg.spec.build_spec.installed:
try:
install_args = self.request.install_args
unsigned = install_args.get("unsigned")
_process_binary_cache_tarball(self.pkg, explicit=self.explicit, unsigned=unsigned)
_print_installed_pkg(self.pkg.prefix)
return ExecuteResult.SUCCESS
except BaseException as e:
tty.error(f"Failed to rewire {self.pkg.spec} from binary. {e}")
self.status = oldstatus
return ExecuteResult.MISSING_BUILD_SPEC
spack.rewiring.rewire_node(self.pkg.spec, self.explicit)
_print_installed_pkg(self.pkg.prefix)
return ExecuteResult.SUCCESS
class PackageInstaller:
"""
Class for managing the install process for a Spack instance based on a bottom-up DAG approach.
@@ -1137,11 +1320,11 @@ def __init__(
# List of build requests
self.build_requests = [BuildRequest(pkg, install_args) for pkg in packages]
# Priority queue of build tasks
self.build_pq: List[Tuple[Tuple[int, int], BuildTask]] = []
# Priority queue of tasks
self.build_pq: List[Tuple[Tuple[int, int], Task]] = []
# Mapping of unique package ids to build task
self.build_tasks: Dict[str, BuildTask] = {}
# Mapping of unique package ids to task
self.build_tasks: Dict[str, Task] = {}
# Cache of package locks for failed packages, keyed on package's ids
self.failed: Dict[str, Optional[lk.Lock]] = {}
@@ -1162,6 +1345,9 @@ def __init__(
# fast then that option applies to all build requests.
self.fail_fast = False
# Initializing all_dependencies to empty. This will be set later in _init_queue.
self.all_dependencies: Dict[str, Set[str]] = {}
def __repr__(self) -> str:
"""Returns a formal representation of the package installer."""
rep = f"{self.__class__.__name__}("
@@ -1180,23 +1366,19 @@ def __str__(self) -> str:
def _add_init_task(
self,
pkg: "spack.package_base.PackageBase",
request: Optional[BuildRequest],
is_compiler: bool,
request: BuildRequest,
all_deps: Dict[str, Set[str]],
) -> None:
"""
Creates and queus the initial build task for the package.
Creates and queues the initial task for the package.
Args:
pkg: the package to be built and installed
request (BuildRequest or None): the associated install request
where ``None`` can be used to indicate the package was
explicitly requested by the user
is_compiler (bool): whether task is for a bootstrap compiler
all_deps (defaultdict(set)): dictionary of all dependencies and
associated dependents
request: the associated install request
all_deps: dictionary of all dependencies and associated dependents
"""
task = BuildTask(pkg, request, is_compiler, 0, 0, STATUS_ADDED, self.installed)
cls = RewireTask if pkg.spec.spliced else BuildTask
task = cls(pkg, request=request, status=BuildStatus.QUEUED, installed=self.installed)
for dep_id in task.dependencies:
all_deps[dep_id].add(package_id(pkg.spec))
@@ -1270,7 +1452,7 @@ def _check_deps_status(self, request: BuildRequest) -> None:
else:
lock.release_read()
def _prepare_for_install(self, task: BuildTask) -> None:
def _prepare_for_install(self, task: Task) -> None:
"""
Check the database and leftover installation directories/files and
prepare for a new install attempt for an uninstalled package.
@@ -1278,7 +1460,7 @@ def _prepare_for_install(self, task: BuildTask) -> None:
and ensuring the database is up-to-date.
Args:
task (BuildTask): the build task whose associated package is
task: the task whose associated package is
being checked
"""
install_args = task.request.install_args
@@ -1329,7 +1511,7 @@ def _prepare_for_install(self, task: BuildTask) -> None:
spack.store.STORE.db.update_explicit(task.pkg.spec, True)
def _cleanup_all_tasks(self) -> None:
"""Cleanup all build tasks to include releasing their locks."""
"""Cleanup all tasks to include releasing their locks."""
for pkg_id in self.locks:
self._release_lock(pkg_id)
@@ -1361,7 +1543,7 @@ def _cleanup_failed(self, pkg_id: str) -> None:
def _cleanup_task(self, pkg: "spack.package_base.PackageBase") -> None:
"""
Cleanup the build task for the spec
Cleanup the task for the spec
Args:
pkg: the package being installed
@@ -1433,7 +1615,7 @@ def _ensure_locked(
if lock_type == "read":
# Wait until the other process finishes if there are no more
# build tasks with priority 0 (i.e., with no uninstalled
# tasks with priority 0 (i.e., with no uninstalled
# dependencies).
no_p0 = len(self.build_tasks) == 0 or not self._next_is_pri0()
timeout = None if no_p0 else 3.0
@@ -1485,6 +1667,33 @@ def _ensure_locked(
self.locks[pkg_id] = (lock_type, lock)
return self.locks[pkg_id]
def _requeue_with_build_spec_tasks(self, task):
"""Requeue the task and its missing build spec dependencies"""
# Full install of the build_spec is necessary because it didn't already exist somewhere
spec = task.pkg.spec
for dep in spec.build_spec.traverse():
dep_pkg = dep.package
dep_id = package_id(dep)
if dep_id not in self.build_tasks:
self._add_init_task(dep_pkg, task.request, self.all_dependencies)
# Clear any persistent failure markings _unless_ they are
# associated with another process in this parallel build
# of the spec.
spack.store.STORE.failure_tracker.clear(dep, force=False)
# Queue the build spec.
build_pkg_id = package_id(spec.build_spec)
build_spec_task = self.build_tasks[build_pkg_id]
spec_pkg_id = package_id(spec)
spec_task = task.next_attempt(self.installed)
spec_task.status = BuildStatus.QUEUED
# Convey a build spec as a dependency of a deployed spec.
build_spec_task.add_dependent(spec_pkg_id)
spec_task.add_dependency(build_pkg_id)
self._push_task(spec_task)
def _add_tasks(self, request: BuildRequest, all_deps):
"""Add tasks to the priority queue for the given build request.
@@ -1514,7 +1723,7 @@ def _add_tasks(self, request: BuildRequest, all_deps):
dep_id = package_id(dep)
if dep_id not in self.build_tasks:
self._add_init_task(dep_pkg, request, False, all_deps)
self._add_init_task(dep_pkg, request, all_deps=all_deps)
# Clear any persistent failure markings _unless_ they are
# associated with another process in this parallel build
@@ -1532,80 +1741,29 @@ def _add_tasks(self, request: BuildRequest, all_deps):
self._check_deps_status(request)
# Now add the package itself, if appropriate
self._add_init_task(request.pkg, request, False, all_deps)
self._add_init_task(request.pkg, request, all_deps=all_deps)
# Ensure if one request is to fail fast then all requests will.
fail_fast = bool(request.install_args.get("fail_fast"))
self.fail_fast = self.fail_fast or fail_fast
def _install_task(self, task: BuildTask, install_status: InstallStatus) -> None:
def _install_task(self, task: Task, install_status: InstallStatus) -> None:
"""
Perform the installation of the requested spec and/or dependency
represented by the build task.
represented by the task.
Args:
task: the installation build task for a package
task: the installation task for a package
install_status: the installation status for the package"""
explicit = task.explicit
install_args = task.request.install_args
cache_only = task.cache_only
use_cache = task.use_cache
tests = install_args.get("tests", False)
assert isinstance(tests, (bool, list)) # make mypy happy.
unsigned: Optional[bool] = install_args.get("unsigned")
pkg, pkg_id = task.pkg, task.pkg_id
tty.msg(install_msg(pkg_id, self.pid, install_status))
task.start = task.start or time.time()
task.status = STATUS_INSTALLING
# Use the binary cache if requested
if use_cache:
if _install_from_cache(pkg, explicit, unsigned):
self._update_installed(task)
return
elif cache_only:
raise spack.error.InstallError(
"No binary found when cache-only was specified", pkg=pkg
)
else:
tty.msg(f"No binary for {pkg_id} found: installing from source")
pkg.run_tests = tests if isinstance(tests, bool) else pkg.name in tests
# hook that allows tests to inspect the Package before installation
# see unit_test_check() docs.
if not pkg.unit_test_check():
return
try:
self._setup_install_dir(pkg)
# Create stage object now and let it be serialized for the child process. That
# way monkeypatch in tests works correctly.
pkg.stage
# Create a child process to do the actual installation.
# Preserve verbosity settings across installs.
spack.package_base.PackageBase._verbose = spack.build_environment.start_build_process(
pkg, build_process, install_args
)
# Note: PARENT of the build process adds the new package to
# the database, so that we don't need to re-read from file.
spack.store.STORE.db.add(pkg.spec, explicit=explicit)
except spack.error.StopPhase as e:
# A StopPhase exception means that the installer was asked to stop early from clients,
# and is not an error at this point
pid = f"{self.pid}: " if tty.show_pid() else ""
tty.debug(f"{pid}{str(e)}")
tty.debug(f"Package stage directory: {pkg.stage.source_path}")
rc = task.execute(install_status)
if rc == ExecuteResult.MISSING_BUILD_SPEC:
self._requeue_with_build_spec_tasks(task)
else: # if rc == ExecuteResult.SUCCESS or rc == ExecuteResult.FAILED
self._update_installed(task)
def _next_is_pri0(self) -> bool:
"""
Determine if the next build task has priority 0
Determine if the next task has priority 0
Return:
True if it does, False otherwise
@@ -1615,31 +1773,31 @@ def _next_is_pri0(self) -> bool:
task = self.build_pq[0][1]
return task.priority == 0
def _pop_task(self) -> Optional[BuildTask]:
def _pop_task(self) -> Optional[Task]:
"""
Remove and return the lowest priority build task.
Remove and return the lowest priority task.
Source: Variant of function at docs.python.org/2/library/heapq.html
"""
while self.build_pq:
task = heapq.heappop(self.build_pq)[1]
if task.status != STATUS_REMOVED:
if task.status != BuildStatus.REMOVED:
del self.build_tasks[task.pkg_id]
task.status = STATUS_DEQUEUED
task.status = BuildStatus.DEQUEUED
return task
return None
def _push_task(self, task: BuildTask) -> None:
def _push_task(self, task: Task) -> None:
"""
Push (or queue) the specified build task for the package.
Push (or queue) the specified task for the package.
Source: Customization of "add_task" function at
docs.python.org/2/library/heapq.html
Args:
task: the installation build task for a package
task: the installation task for a package
"""
msg = "{0} a build task for {1} with status '{2}'"
msg = "{0} a task for {1} with status '{2}'"
skip = "Skipping requeue of task for {0}: {1}"
# Ensure do not (re-)queue installed or failed packages whose status
@@ -1652,9 +1810,11 @@ def _push_task(self, task: BuildTask) -> None:
tty.debug(skip.format(task.pkg_id, "failed"))
return
# Remove any associated build task since its sequence will change
# Remove any associated task since its sequence will change
self._remove_task(task.pkg_id)
desc = "Queueing" if task.attempts == 0 else "Requeueing"
desc = (
"Queueing" if task.attempts == 1 else f"Requeueing ({ordinal(task.attempts)} attempt)"
)
tty.debug(msg.format(desc, task.pkg_id, task.status))
# Now add the new task to the queue with a new sequence number to
@@ -1685,9 +1845,9 @@ def _release_lock(self, pkg_id: str) -> None:
except Exception as exc:
tty.warn(err.format(exc.__class__.__name__, ltype, pkg_id, str(exc)))
def _remove_task(self, pkg_id: str) -> Optional[BuildTask]:
def _remove_task(self, pkg_id: str) -> Optional[Task]:
"""
Mark the existing package build task as being removed and return it.
Mark the existing package task as being removed and return it.
Raises KeyError if not found.
Source: Variant of function at docs.python.org/2/library/heapq.html
@@ -1696,71 +1856,39 @@ def _remove_task(self, pkg_id: str) -> Optional[BuildTask]:
pkg_id: identifier for the package to be removed
"""
if pkg_id in self.build_tasks:
tty.debug(f"Removing build task for {pkg_id} from list")
tty.debug(f"Removing task for {pkg_id} from list")
task = self.build_tasks.pop(pkg_id)
task.status = STATUS_REMOVED
task.status = BuildStatus.REMOVED
return task
else:
return None
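
The three methods above implement the classic lazy-removal priority queue; a generic, self-contained illustration of the pattern (not the installer's exact code):

import heapq
import itertools
from typing import Dict, List, Tuple

REMOVED = "removed"
counter = itertools.count()
pq: List[Tuple[Tuple[int, int], dict]] = []
tasks: Dict[str, dict] = {}

def push(pkg_id: str, priority: int) -> None:
    if pkg_id in tasks:  # requeueing invalidates the old heap entry
        tasks[pkg_id]["status"] = REMOVED
    task = {"id": pkg_id, "status": "queued"}
    tasks[pkg_id] = task
    heapq.heappush(pq, ((priority, next(counter)), task))

def pop() -> dict:
    while pq:
        task = heapq.heappop(pq)[1]
        if task["status"] != REMOVED:  # skip stale entries lazily
            del tasks[task["id"]]
            return task
    raise KeyError("pop from empty task queue")
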
def _requeue_task(self, task: BuildTask, install_status: InstallStatus) -> None:
def _requeue_task(self, task: Task, install_status: InstallStatus) -> None:
"""
Requeues a task that appears to be in progress by another process.
Args:
task (BuildTask): the installation build task for a package
task (Task): the installation task for a package
"""
if task.status not in [STATUS_INSTALLED, STATUS_INSTALLING]:
if task.status not in [BuildStatus.INSTALLED, BuildStatus.INSTALLING]:
tty.debug(
f"{install_msg(task.pkg_id, self.pid, install_status)} "
"in progress by another process"
)
new_task = task.next_attempt(self.installed)
new_task.status = STATUS_INSTALLING
new_task.status = BuildStatus.INSTALLING
self._push_task(new_task)
def _setup_install_dir(self, pkg: "spack.package_base.PackageBase") -> None:
"""
Create and ensure proper access controls for the install directory.
Write a small metadata file with the current spack environment.
Args:
pkg: the package to be built and installed
"""
if not os.path.exists(pkg.spec.prefix):
path = spack.util.path.debug_padded_filter(pkg.spec.prefix)
tty.debug(f"Creating the installation directory {path}")
spack.store.STORE.layout.create_install_directory(pkg.spec)
else:
# Set the proper group for the prefix
group = prefs.get_package_group(pkg.spec)
if group:
fs.chgrp(pkg.spec.prefix, group)
# Set the proper permissions.
# This has to be done after group because changing groups blows
# away the sticky group bit on the directory
mode = os.stat(pkg.spec.prefix).st_mode
perms = prefs.get_package_dir_permissions(pkg.spec)
if mode != perms:
os.chmod(pkg.spec.prefix, perms)
# Ensure the metadata path exists as well
fs.mkdirp(spack.store.STORE.layout.metadata_path(pkg.spec), mode=perms)
# Always write host environment - we assume this can change
spack.store.STORE.layout.write_host_environment(pkg.spec)
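The order of operations above matters: the group is changed before the mode because chgrp can clear the setgid ("sticky group") bit. A minimal stdlib-only sketch of the same idea, with Spack's fs/prefs helpers swapped for shutil/os calls:

```python
import os
import shutil

def set_prefix_access(prefix: str, group: str, perms: int) -> None:
    """Sketch only: set group first, then mode, since chgrp can drop
    the setgid bit that perms may include (e.g. stat.S_ISGID)."""
    if group:
        shutil.chown(prefix, group=group)  # may clear setgid on the directory
    if os.stat(prefix).st_mode != perms:
        os.chmod(prefix, perms)            # reinstate the intended mode bits
```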
def _update_failed(
self, task: BuildTask, mark: bool = False, exc: Optional[BaseException] = None
self, task: Task, mark: bool = False, exc: Optional[BaseException] = None
) -> None:
"""
Update the task and transitive dependents as failed; optionally mark
externally as failed; and remove associated build tasks.
externally as failed; and remove associated tasks.
Args:
task: the build task for the failed package
task: the task for the failed package
mark: ``True`` if the package and its dependencies are to
be marked as "failed", otherwise, ``False``
exc: optional exception if associated with the failure
@@ -1772,34 +1900,34 @@ def _update_failed(
self.failed[pkg_id] = spack.store.STORE.failure_tracker.mark(task.pkg.spec)
else:
self.failed[pkg_id] = None
task.status = STATUS_FAILED
task.status = BuildStatus.FAILED
for dep_id in task.dependents:
if dep_id in self.build_tasks:
tty.warn(f"Skipping build of {dep_id} since {pkg_id} failed")
# Ensure the dependent's uninstalled dependents are
# up-to-date and their build tasks removed.
# up-to-date and their tasks removed.
dep_task = self.build_tasks[dep_id]
self._update_failed(dep_task, mark)
self._remove_task(dep_id)
else:
tty.debug(f"No build task for {dep_id} to skip since {pkg_id} failed")
tty.debug(f"No task for {dep_id} to skip since {pkg_id} failed")
def _update_installed(self, task: BuildTask) -> None:
def _update_installed(self, task: Task) -> None:
"""
Mark the task as installed and ensure dependent build tasks are aware.
Mark the task as installed and ensure dependent tasks are aware.
Args:
task (BuildTask): the build task for the installed package
task: the task for the installed package
"""
task.status = STATUS_INSTALLED
task.status = BuildStatus.INSTALLED
self._flag_installed(task.pkg, task.dependents)
def _flag_installed(
self, pkg: "spack.package_base.PackageBase", dependent_ids: Optional[Set[str]] = None
) -> None:
"""
Flag the package as installed and ensure known by all build tasks of
Flag the package as installed and ensure it is known by all tasks of
known dependents.
Args:
@@ -1827,7 +1955,7 @@ def _flag_installed(
dep_task = self.build_tasks[dep_id]
self._push_task(dep_task.next_attempt(self.installed))
else:
tty.debug(f"{dep_id} has no build task to update for {pkg_id}'s success")
tty.debug(f"{dep_id} has no task to update for {pkg_id}'s success")
def _init_queue(self) -> None:
"""Initialize the build queue from the list of build requests."""
@@ -1846,8 +1974,9 @@ def _init_queue(self) -> None:
task = self.build_tasks[dep_id]
for dependent_id in dependents.difference(task.dependents):
task.add_dependent(dependent_id)
self.all_dependencies = all_dependencies
def _install_action(self, task: BuildTask) -> int:
def _install_action(self, task: Task) -> InstallAction:
"""
Determine whether the installation should be overwritten (if it already
exists) or skipped (if it has been handled by another process).
@@ -1995,7 +2124,6 @@ def install(self) -> None:
self._update_installed(task)
path = spack.util.path.debug_padded_filter(pkg.prefix)
_print_installed_pkg(path)
else:
# At this point we've failed to get a write or a read
# lock, which means another process has taken a write
@@ -2035,8 +2163,6 @@ def install(self) -> None:
# wrapper -- silence mypy
OverwriteInstall(self, spack.store.STORE.db, task, install_status).install() # type: ignore[arg-type] # noqa: E501
self._update_installed(task)
# If we installed then we should keep the prefix
stop_before_phase = getattr(pkg, "stop_before_phase", None)
last_phase = getattr(pkg, "last_phase", None)
@@ -2080,7 +2206,9 @@ def install(self) -> None:
)
# Terminate if requested to do so on the first failure.
if self.fail_fast:
raise spack.error.InstallError(f"{fail_fast_err}: {str(exc)}", pkg=pkg)
raise spack.error.InstallError(
f"{fail_fast_err}: {str(exc)}", pkg=pkg
) from exc
# Terminate when a single build request has failed, or summarize errors later.
if task.is_build_request:
@@ -2096,7 +2224,8 @@ def install(self) -> None:
# Perform basic task cleanup for the installed spec to
# include downgrading the write to a read lock
self._cleanup_task(pkg)
if pkg.spec.installed:
self._cleanup_task(pkg)
# Cleanup, which includes releasing all of the read locks
self._cleanup_all_tasks()
@@ -2365,6 +2494,15 @@ def build_process(pkg: "spack.package_base.PackageBase", install_args: dict) ->
def deprecate(spec: "spack.spec.Spec", deprecator: "spack.spec.Spec", link_fn) -> None:
"""Deprecate this package in favor of deprecator spec"""
# Here we assume we don't deprecate across different stores, and that the same
# hash means the same binary artifacts
if spec.dag_hash() == deprecator.dag_hash():
return
# We can't really have control over external specs, and cannot link anything in their place
if spec.external:
return
# Install deprecator if it isn't installed already
if not spack.store.STORE.db.query(deprecator):
PackageInstaller([deprecator.package], explicit=True).install()
@@ -2395,7 +2533,7 @@ def __init__(
self,
installer: PackageInstaller,
database: spack.database.Database,
task: BuildTask,
task: Task,
install_status: InstallStatus,
):
self.installer = installer

View File

@@ -102,9 +102,6 @@
spack_ld_library_path = os.environ.get("LD_LIBRARY_PATH", "")
#: Whether to print backtraces on error
SHOW_BACKTRACE = False
def add_all_commands(parser):
"""Add all spack subcommands to the parser."""
@@ -492,6 +489,7 @@ def make_argument_parser(**kwargs):
help="add stacktraces to all printed statements",
)
parser.add_argument(
"-t",
"--backtrace",
action="store_true",
default="SPACK_BACKTRACE" in os.environ,
@@ -527,8 +525,7 @@ def setup_main_options(args):
if args.debug or args.backtrace:
spack.error.debug = True
global SHOW_BACKTRACE
SHOW_BACKTRACE = True
spack.error.SHOW_BACKTRACE = True
if args.debug:
spack.util.debug.register_interrupt_handler()
@@ -1021,19 +1018,19 @@ def main(argv=None):
e.die() # gracefully die on any SpackErrors
except KeyboardInterrupt:
if spack.config.get("config:debug") or SHOW_BACKTRACE:
if spack.config.get("config:debug") or spack.error.SHOW_BACKTRACE:
raise
sys.stderr.write("\n")
tty.error("Keyboard interrupt.")
return signal.SIGINT.value
except SystemExit as e:
if spack.config.get("config:debug") or SHOW_BACKTRACE:
if spack.config.get("config:debug") or spack.error.SHOW_BACKTRACE:
traceback.print_exc()
return e.code
except Exception as e:
if spack.config.get("config:debug") or SHOW_BACKTRACE:
if spack.config.get("config:debug") or spack.error.SHOW_BACKTRACE:
raise
tty.error(e)
return 3

View File

@@ -89,9 +89,8 @@ def from_url(url: str):
"""Create an anonymous mirror by URL. This method validates the URL."""
if not urllib.parse.urlparse(url).scheme in supported_url_schemes:
raise ValueError(
'"{}" is not a valid mirror URL. Scheme must be once of {}.'.format(
url, ", ".join(supported_url_schemes)
)
f'"{url}" is not a valid mirror URL. '
f"Scheme must be one of {supported_url_schemes}."
)
return Mirror(url)
@@ -759,7 +758,7 @@ def require_mirror_name(mirror_name):
"""Find a mirror by name and raise if it does not exist"""
mirror = spack.mirror.MirrorCollection().get(mirror_name)
if not mirror:
raise ValueError('no mirror named "{0}"'.format(mirror_name))
raise ValueError(f'no mirror named "{mirror_name}"')
return mirror

View File

@@ -527,7 +527,8 @@ def use_name(self):
parts = name.split("/")
name = os.path.join(*parts)
# Add optional suffixes based on constraints
path_elements = [name] + self.conf.suffixes
path_elements = [name]
path_elements.extend(map(self.spec.format, self.conf.suffixes))
return "-".join(path_elements)
@property
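With this change, suffixes from the modules configuration are passed through `Spec.format`, so format tokens expand per spec. A hedged illustration (the suffix string is hypothetical):

```python
import spack.spec

# A suffix configured as "{compiler.name}{compiler.version}" in modules.yaml
# now expands per spec rather than being appended verbatim:
spec = spack.spec.Spec("zlib %gcc@12.3.0").concretized()
print(spec.format("{compiler.name}{compiler.version}"))  # -> "gcc12.3.0"
```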

View File

@@ -1855,13 +1855,22 @@ def _has_make_target(self, target):
#
# BSD Make:
# make: don't know how to make test. Stop
#
# Note: "Stop." is not printed when running a Make jobserver (spack env depfile) that runs
# with `make -k/--keep-going`
missing_target_msgs = [
"No rule to make target `{0}'. Stop.",
"No rule to make target '{0}'. Stop.",
"don't know how to make {0}. Stop",
"No rule to make target `{0}'.",
"No rule to make target '{0}'.",
"don't know how to make {0}.",
]
kwargs = {"fail_on_error": False, "output": os.devnull, "error": str}
kwargs = {
"fail_on_error": False,
"output": os.devnull,
"error": str,
# Remove MAKEFLAGS to avoid inherited flags from Make jobserver (spack env depfile)
"extra_env": {"MAKEFLAGS": ""},
}
stderr = make("-n", target, **kwargs)
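The two changes work together: the trailing "Stop." is dropped from the expected messages because `make -k/--keep-going` (as used by the depfile jobserver) omits it, and MAKEFLAGS is scrubbed so inherited jobserver flags cannot skew the probe. A rough standalone sketch of the same probe, assuming a plain subprocess call in place of Spack's Executable wrapper:

```python
import os
import subprocess

def has_make_target(target: str) -> bool:
    """Dry-run make and scan stderr for 'missing target' messages."""
    env = dict(os.environ, MAKEFLAGS="")  # drop inherited jobserver flags
    proc = subprocess.run(
        ["make", "-n", target], env=env,
        stdout=subprocess.DEVNULL, stderr=subprocess.PIPE, text=True,
    )
    missing = (
        f"No rule to make target `{target}'.",  # GNU make, old quoting
        f"No rule to make target '{target}'.",  # GNU make, new quoting
        f"don't know how to make {target}.",    # BSD make
    )
    return not any(msg in proc.stderr for msg in missing)
```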

View File

@@ -205,23 +205,33 @@ def macho_find_paths(orig_rpaths, deps, idpath, old_layout_root, prefix_to_prefi
paths_to_paths dictionary which maps all of the old paths to new paths
"""
paths_to_paths = dict()
# Sort from longest path to shortest, to ensure we try /foo/bar/baz before /foo/bar
prefix_iteration_order = sorted(prefix_to_prefix, key=len, reverse=True)
for orig_rpath in orig_rpaths:
if orig_rpath.startswith(old_layout_root):
for old_prefix, new_prefix in prefix_to_prefix.items():
for old_prefix in prefix_iteration_order:
new_prefix = prefix_to_prefix[old_prefix]
if orig_rpath.startswith(old_prefix):
new_rpath = re.sub(re.escape(old_prefix), new_prefix, orig_rpath)
paths_to_paths[orig_rpath] = new_rpath
break
else:
paths_to_paths[orig_rpath] = orig_rpath
if idpath:
for old_prefix, new_prefix in prefix_to_prefix.items():
for old_prefix in prefix_iteration_order:
new_prefix = prefix_to_prefix[old_prefix]
if idpath.startswith(old_prefix):
paths_to_paths[idpath] = re.sub(re.escape(old_prefix), new_prefix, idpath)
break
for dep in deps:
for old_prefix, new_prefix in prefix_to_prefix.items():
for old_prefix in prefix_iteration_order:
new_prefix = prefix_to_prefix[old_prefix]
if dep.startswith(old_prefix):
paths_to_paths[dep] = re.sub(re.escape(old_prefix), new_prefix, dep)
break
if dep.startswith("@"):
paths_to_paths[dep] = dep
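Iterating prefixes longest-first is what makes nested prefixes safe to remap; a tiny self-contained illustration of the idea:

```python
def remap(path: str, prefix_to_prefix: dict) -> str:
    """Replace the longest matching old prefix (illustrative sketch)."""
    for old in sorted(prefix_to_prefix, key=len, reverse=True):
        if path.startswith(old):
            return prefix_to_prefix[old] + path[len(old):]
    return path

mapping = {"/foo/bar": "/new/bar", "/foo/bar/baz": "/new/baz"}
assert remap("/foo/bar/baz/lib", mapping) == "/new/baz/lib"  # not /new/bar/...
```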
@@ -270,40 +280,14 @@ def modify_macho_object(cur_path, rpaths, deps, idpath, paths_to_paths):
install_name_tool = executable.Executable("install_name_tool")
install_name_tool(*args)
return
def modify_object_macholib(cur_path, paths_to_paths):
"""
This function is used when installing Mach-O buildcaches on Linux by
rewriting mach-o loader commands for dependency library paths of
mach-o binaries and the id path for mach-o libraries.
Rewriting of rpaths is handled by replace_prefix_bin.
Inputs
mach-o binary to be modified
dictionary mapping paths in old install layout to new install layout
"""
dll = macholib.MachO.MachO(cur_path)
dll.rewriteLoadCommands(paths_to_paths.get)
try:
f = open(dll.filename, "rb+")
for header in dll.headers:
f.seek(0)
dll.write(f)
f.seek(0, 2)
f.flush()
f.close()
except Exception:
pass
return
def macholib_get_paths(cur_path):
"""Get rpaths, dependent libraries, and library id of mach-o objects."""
headers = macholib.MachO.MachO(cur_path).headers
headers = []
try:
headers = macholib.MachO.MachO(cur_path).headers
except ValueError:
pass
if not headers:
tty.warn("Failed to read Mach-O headers: {0}".format(cur_path))
commands = []
@@ -415,10 +399,7 @@ def relocate_macho_binaries(
# normalized paths
rel_to_orig = macho_make_paths_normal(orig_path_name, rpaths, deps, idpath)
# replace the relativized paths with normalized paths
if sys.platform == "darwin":
modify_macho_object(path_name, rpaths, deps, idpath, rel_to_orig)
else:
modify_object_macholib(path_name, rel_to_orig)
modify_macho_object(path_name, rpaths, deps, idpath, rel_to_orig)
# get the normalized paths in the mach-o binary
rpaths, deps, idpath = macholib_get_paths(path_name)
# get the mapping of paths in old prefix to path in new prefix
@@ -426,10 +407,7 @@ def relocate_macho_binaries(
rpaths, deps, idpath, old_layout_root, prefix_to_prefix
)
# replace the old paths with new paths
if sys.platform == "darwin":
modify_macho_object(path_name, rpaths, deps, idpath, paths_to_paths)
else:
modify_object_macholib(path_name, paths_to_paths)
modify_macho_object(path_name, rpaths, deps, idpath, paths_to_paths)
# get the new normalized path in the mach-o binary
rpaths, deps, idpath = macholib_get_paths(path_name)
# get the mapping of paths to relative paths in the new prefix
@@ -437,10 +415,7 @@ def relocate_macho_binaries(
path_name, new_layout_root, rpaths, deps, idpath
)
# replace the new paths with relativized paths in the new prefix
if sys.platform == "darwin":
modify_macho_object(path_name, rpaths, deps, idpath, paths_to_paths)
else:
modify_object_macholib(path_name, paths_to_paths)
modify_macho_object(path_name, rpaths, deps, idpath, paths_to_paths)
else:
# get the paths in the old prefix
rpaths, deps, idpath = macholib_get_paths(path_name)
@@ -449,10 +424,7 @@ def relocate_macho_binaries(
rpaths, deps, idpath, old_layout_root, prefix_to_prefix
)
# replace the old paths with new paths
if sys.platform == "darwin":
modify_macho_object(path_name, rpaths, deps, idpath, paths_to_paths)
else:
modify_object_macholib(path_name, paths_to_paths)
modify_macho_object(path_name, rpaths, deps, idpath, paths_to_paths)
def _transform_rpaths(orig_rpaths, orig_root, new_prefixes):

View File

@@ -12,6 +12,7 @@
from llnl.util.symlink import readlink, symlink
import spack.binary_distribution as bindist
import spack.deptypes as dt
import spack.error
import spack.hooks
import spack.platforms
@@ -52,6 +53,7 @@ def rewire_node(spec, explicit):
its subgraph. Binaries, text, and links are all changed in accordance with
the splice. The resulting package is then 'installed.'"""
tempdir = tempfile.mkdtemp()
# copy anything installed to a temporary directory
shutil.copytree(spec.build_spec.prefix, os.path.join(tempdir, spec.dag_hash()))
@@ -59,8 +61,21 @@ def rewire_node(spec, explicit):
# compute prefix-to-prefix for every node from the build spec to the spliced
# spec
prefix_to_prefix = OrderedDict({spec.build_spec.prefix: spec.prefix})
for build_dep in spec.build_spec.traverse(root=False):
prefix_to_prefix[build_dep.prefix] = spec[build_dep.name].prefix
build_spec_ids = set(id(s) for s in spec.build_spec.traverse(deptype=dt.ALL & ~dt.BUILD))
for s in bindist.deps_to_relocate(spec):
analog = s
if id(s) not in build_spec_ids:
analogs = [
d
for d in spec.build_spec.traverse(deptype=dt.ALL & ~dt.BUILD)
if s._splice_match(d, self_root=spec, other_root=spec.build_spec)
]
if analogs:
# Prefer same-name analogs and prefer higher versions
# This matches the preferences in Spec.splice, so we will find same node
analog = max(analogs, key=lambda a: (a.name == s.name, a.version))
prefix_to_prefix[analog.prefix] = s.prefix
manifest = bindist.get_buildfile_manifest(spec.build_spec)
platform = spack.platforms.by_name(spec.platform)
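The analog choice leans on Python's lexicographic tuple comparison: an exact name match outranks any version, and the highest version wins among same-name candidates. A minimal illustration with stand-in objects rather than Spack Specs:

```python
from collections import namedtuple

Node = namedtuple("Node", ["name", "version"])

def pick_analog(analogs, wanted_name):
    # (True, ...) sorts above (False, ...), so a name match always wins;
    # among name matches, the larger version tuple wins.
    return max(analogs, key=lambda a: (a.name == wanted_name, a.version))

candidates = [Node("blis", (1, 0, 0)), Node("openblas", (0, 3, 24))]
assert pick_analog(candidates, "openblas").name == "openblas"
```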

View File

@@ -11,8 +11,6 @@
from llnl.util.lang import union_dicts
import spack.schema.gitlab_ci
# Schema for script fields
# List of lists and/or strings
# This is similar to what is allowed in
@@ -47,7 +45,7 @@
"tags": {"type": "array", "items": {"type": "string"}},
"variables": {
"type": "object",
"patternProperties": {r"[\w\d\-_\.]+": {"type": "string"}},
"patternProperties": {r"[\w\d\-_\.]+": {"type": ["string", "number"]}},
},
"before_script": script_schema,
"script": script_schema,
@@ -77,58 +75,54 @@
},
}
named_attributes_schema = {
"oneOf": [
{
dynamic_mapping_schema = {
"type": "object",
"additionalProperties": False,
"required": ["dynamic-mapping"],
"properties": {
"dynamic-mapping": {
"type": "object",
"additionalProperties": False,
"properties": {"noop-job": attributes_schema, "noop-job-remove": attributes_schema},
},
{
"type": "object",
"additionalProperties": False,
"properties": {"build-job": attributes_schema, "build-job-remove": attributes_schema},
},
{
"type": "object",
"additionalProperties": False,
"properties": {"copy-job": attributes_schema, "copy-job-remove": attributes_schema},
},
{
"type": "object",
"additionalProperties": False,
"required": ["endpoint"],
"properties": {
"reindex-job": attributes_schema,
"reindex-job-remove": attributes_schema,
"name": {"type": "string"},
# "endpoint" cannot have http patternProperties constaint as it is a required field
# Constrain is applied in code
"endpoint": {"type": "string"},
"timeout": {"type": "integer", "minimum": 0},
"verify_ssl": {"type": "boolean", "default": False},
"header": {"type": "object", "additionalProperties": False},
"allow": {"type": "array", "items": {"type": "string"}},
"require": {"type": "array", "items": {"type": "string"}},
"ignore": {"type": "array", "items": {"type": "string"}},
},
},
{
"type": "object",
"additionalProperties": False,
"properties": {
"signing-job": attributes_schema,
"signing-job-remove": attributes_schema,
},
},
{
"type": "object",
"additionalProperties": False,
"properties": {
"cleanup-job": attributes_schema,
"cleanup-job-remove": attributes_schema,
},
},
{
"type": "object",
"additionalProperties": False,
"properties": {"any-job": attributes_schema, "any-job-remove": attributes_schema},
},
]
}
},
}
def job_schema(name: str):
return {
"type": "object",
"additionalProperties": False,
"properties": {f"{name}-job": attributes_schema, f"{name}-job-remove": attributes_schema},
}
pipeline_gen_schema = {
"type": "array",
"items": {"oneOf": [submapping_schema, named_attributes_schema]},
"items": {
"oneOf": [
submapping_schema,
dynamic_mapping_schema,
job_schema("any"),
job_schema("build"),
job_schema("cleanup"),
job_schema("copy"),
job_schema("noop"),
job_schema("reindex"),
job_schema("signing"),
]
},
}
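Each pipeline-gen entry must now match exactly one branch of the `oneOf`; a quick hedged check, reusing the schemas defined above and the jsonschema library this changeset's tests already use:

```python
import jsonschema

# "build-job" fits job_schema("build") but is rejected by every other
# job_schema(name) thanks to additionalProperties: False.
entry = {"build-job": {"tags": ["x86_64"], "variables": {"RETRIES": 2}}}
jsonschema.validate([entry], pipeline_gen_schema)  # numbers now validate too
```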
core_shared_properties = union_dicts(
@@ -141,39 +135,8 @@
}
)
# TODO: Remove in Spack 0.23
ci_properties = {
"anyOf": [
{
"type": "object",
"additionalProperties": False,
# "required": ["mappings"],
"properties": union_dicts(
core_shared_properties, {"enable-artifacts-buildcache": {"type": "boolean"}}
),
},
{
"type": "object",
"additionalProperties": False,
# "required": ["mappings"],
"properties": union_dicts(
core_shared_properties, {"temporary-storage-url-prefix": {"type": "string"}}
),
},
]
}
#: Properties for inclusion in other schemas
properties: Dict[str, Any] = {
"ci": {
"oneOf": [
# TODO: Replace with core-shared-properties in Spack 0.23
ci_properties,
# Allow legacy format under `ci` for `config update ci`
spack.schema.gitlab_ci.gitlab_ci_properties,
]
}
}
properties: Dict[str, Any] = {"ci": core_shared_properties}
#: Full schema with metadata
schema = {
@@ -183,21 +146,3 @@
"additionalProperties": False,
"properties": properties,
}
def update(data):
import llnl.util.tty as tty
import spack.ci
import spack.environment as ev
# Warn if deprecated section is still in the environment
ci_env = ev.active_environment()
if ci_env:
env_config = ci_env.manifest[ev.TOP_LEVEL_KEY]
if "gitlab-ci" in env_config:
tty.die("Error: `gitlab-ci` section detected with `ci`, these are not compatible")
# Detect if the ci section is using the new pipeline-gen
# If it is, assume it has already been converted
return spack.ci.translate_deprecated_config(data)

View File

@@ -61,7 +61,10 @@
"target": {"type": "string"},
"alias": {"anyOf": [{"type": "string"}, {"type": "null"}]},
"modules": {
"anyOf": [{"type": "string"}, {"type": "null"}, {"type": "array"}]
"anyOf": [
{"type": "null"},
{"type": "array", "items": {"type": "string"}},
]
},
"implicit_rpaths": implicit_rpaths,
"environment": spack.schema.environment.definition,

View File

@@ -55,6 +55,26 @@
"unify": {
"oneOf": [{"type": "boolean"}, {"type": "string", "enum": ["when_possible"]}]
},
"splice": {
"type": "object",
"additionalProperties": False,
"properties": {
"explicit": {
"type": "array",
"default": [],
"items": {
"type": "object",
"required": ["target", "replacement"],
"additionalProperties": False,
"properties": {
"target": {"type": "string"},
"replacement": {"type": "string"},
"transitive": {"type": "boolean", "default": False},
},
},
}
},
},
"duplicates": {
"type": "object",
"properties": {

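For reference, a hedged sketch of data satisfying the new splice schema, written as the equivalent Python structure (in practice this sits under `concretizer:` in spack.yaml; the hash suffix is a placeholder, not a real hash):

```python
splice_config = {
    "splice": {
        "explicit": [
            {
                "target": "mpi",                   # abstract spec to be replaced
                "replacement": "mvapich2/abcdef",  # must carry an abstract hash
                "transitive": False,               # matches the schema default
            }
        ]
    }
}
```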
View File

@@ -12,7 +12,6 @@
from llnl.util.lang import union_dicts
import spack.schema.gitlab_ci # DEPRECATED
import spack.schema.merged
from .spec_list import spec_list_schema
@@ -26,8 +25,6 @@
"default": {},
"additionalProperties": False,
"properties": union_dicts(
# Include deprecated "gitlab-ci" section
spack.schema.gitlab_ci.properties,
# merged configuration scope schemas
spack.schema.merged.properties,
# extra environment schema properties
@@ -58,15 +55,6 @@ def update(data):
Returns:
True if data was changed, False otherwise
"""
import spack.ci
if "gitlab-ci" in data:
data["ci"] = data.pop("gitlab-ci")
if "ci" in data:
return spack.ci.translate_deprecated_config(data["ci"])
# There are not currently any deprecated attributes in this section
# that have not been removed
return False

View File

@@ -1,125 +0,0 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Schema for gitlab-ci.yaml configuration file.
.. literalinclude:: ../spack/schema/gitlab_ci.py
:lines: 15-
"""
from typing import Any, Dict
from llnl.util.lang import union_dicts
image_schema = {
"oneOf": [
{"type": "string"},
{
"type": "object",
"properties": {
"name": {"type": "string"},
"entrypoint": {"type": "array", "items": {"type": "string"}},
},
},
]
}
runner_attributes_schema_items = {
"image": image_schema,
"tags": {"type": "array", "items": {"type": "string"}},
"variables": {"type": "object", "patternProperties": {r"[\w\d\-_\.]+": {"type": "string"}}},
"before_script": {"type": "array", "items": {"type": "string"}},
"script": {"type": "array", "items": {"type": "string"}},
"after_script": {"type": "array", "items": {"type": "string"}},
}
runner_selector_schema = {
"type": "object",
"additionalProperties": True,
"required": ["tags"],
"properties": runner_attributes_schema_items,
}
remove_attributes_schema = {
"type": "object",
"additionalProperties": False,
"required": ["tags"],
"properties": {"tags": {"type": "array", "items": {"type": "string"}}},
}
core_shared_properties = union_dicts(
runner_attributes_schema_items,
{
"bootstrap": {
"type": "array",
"items": {
"anyOf": [
{"type": "string"},
{
"type": "object",
"additionalProperties": False,
"required": ["name"],
"properties": {
"name": {"type": "string"},
"compiler-agnostic": {"type": "boolean", "default": False},
},
},
]
},
},
"match_behavior": {"type": "string", "enum": ["first", "merge"], "default": "first"},
"mappings": {
"type": "array",
"items": {
"type": "object",
"additionalProperties": False,
"required": ["match"],
"properties": {
"match": {"type": "array", "items": {"type": "string"}},
"remove-attributes": remove_attributes_schema,
"runner-attributes": runner_selector_schema,
},
},
},
"service-job-attributes": runner_selector_schema,
"signing-job-attributes": runner_selector_schema,
"rebuild-index": {"type": "boolean"},
"broken-specs-url": {"type": "string"},
"broken-tests-packages": {"type": "array", "items": {"type": "string"}},
},
)
gitlab_ci_properties = {
"anyOf": [
{
"type": "object",
"additionalProperties": False,
"required": ["mappings"],
"properties": union_dicts(
core_shared_properties, {"enable-artifacts-buildcache": {"type": "boolean"}}
),
},
{
"type": "object",
"additionalProperties": False,
"required": ["mappings"],
"properties": union_dicts(
core_shared_properties, {"temporary-storage-url-prefix": {"type": "string"}}
),
},
]
}
#: Properties for inclusion in other schemas
properties: Dict[str, Any] = {"gitlab-ci": gitlab_ci_properties}
#: Full schema with metadata
schema = {
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Spack gitlab-ci configuration file schema",
"type": "object",
"additionalProperties": False,
"properties": properties,
}

View File

@@ -523,7 +523,12 @@ def _compute_specs_from_answer_set(self):
node = SpecBuilder.make_node(pkg=providers[0])
candidate = answer.get(node)
if candidate and candidate.satisfies(input_spec):
if candidate and candidate.build_spec.satisfies(input_spec):
if not candidate.satisfies(input_spec):
tty.warn(
"explicit splice configuration has caused the concretized spec"
f" {candidate} not to satisfy the input spec {input_spec}"
)
self._concrete_specs.append(answer[node])
self._concrete_specs_by_input[input_spec] = answer[node]
else:
@@ -3814,7 +3819,33 @@ def build_specs(self, function_tuples):
spack.version.git_ref_lookup.GitRefLookup(spec.fullname)
)
return self._specs
specs = self.execute_explicit_splices()
return specs
def execute_explicit_splices(self):
splice_config = spack.config.CONFIG.get("concretizer:splice:explicit", [])
splice_triples = []
for splice_set in splice_config:
target = splice_set["target"]
replacement = spack.spec.Spec(splice_set["replacement"])
assert replacement.abstract_hash
replacement.replace_hash()
transitive = splice_set.get("transitive", False)
splice_triples.append((target, replacement, transitive))
specs = {}
for key, spec in self._specs.items():
current_spec = spec
for target, replacement, transitive in splice_triples:
if target in current_spec:
# matches root or non-root
# e.g. mvapich2%gcc
current_spec = current_spec.splice(replacement, transitive)
new_key = NodeArgument(id=key.id, pkg=current_spec.name)
specs[new_key] = current_spec
return specs
def _develop_specs_from_env(spec, env):

View File

@@ -4183,7 +4183,7 @@ def _virtuals_provided(self, root):
"""Return set of virtuals provided by self in the context of root"""
if root is self:
# Could be using any virtual the package can provide
return set(self.package.virtuals_provided)
return set(v.name for v in self.package.virtuals_provided)
hashes = [s.dag_hash() for s in root.traverse()]
in_edges = set(
@@ -4206,7 +4206,7 @@ def _splice_match(self, other, self_root, other_root):
return True
return bool(
self._virtuals_provided(self_root)
bool(self._virtuals_provided(self_root))
and self._virtuals_provided(self_root) <= other._virtuals_provided(other_root)
)
@@ -4226,29 +4226,24 @@ def _splice_detach_and_add_dependents(self, replacement, context):
# Only set it if it hasn't been spliced before
ancestor._build_spec = ancestor._build_spec or ancestor.copy()
ancestor.clear_cached_hashes(ignore=(ht.package_hash.attr,))
for edge in ancestor.edges_to_dependencies(depflag=dt.BUILD):
if edge.depflag & ~dt.BUILD:
edge.depflag &= ~dt.BUILD
else:
ancestor._dependencies[edge.spec.name].remove(edge)
edge.spec._dependents[ancestor.name].remove(edge)
# For each direct dependent in the link/run graph, replace the dependency on
# node with one on replacement
# For each build dependent, restrict the edge to build-only
for edge in self.edges_from_dependents():
if edge.parent not in ancestors_in_context:
continue
build_dep = edge.depflag & dt.BUILD
other_dep = edge.depflag & ~dt.BUILD
if build_dep:
parent_edge = [e for e in edge.parent._dependencies[self.name] if e.spec is self]
assert len(parent_edge) == 1
edge.depflag = dt.BUILD
parent_edge[0].depflag = dt.BUILD
else:
edge.parent._dependencies.edges[self.name].remove(edge)
self._dependents.edges[edge.parent.name].remove(edge)
edge.parent._dependencies.edges[self.name].remove(edge)
self._dependents.edges[edge.parent.name].remove(edge)
edge.parent._add_dependency(replacement, depflag=edge.depflag, virtuals=edge.virtuals)
if other_dep:
edge.parent._add_dependency(replacement, depflag=other_dep, virtuals=edge.virtuals)
def _splice_helper(self, replacement, self_root, other_root):
def _splice_helper(self, replacement):
"""Main loop of a transitive splice.
The while loop around a traversal of self ensures that changes to self from previous
@@ -4276,8 +4271,7 @@ def _splice_helper(self, replacement, self_root, other_root):
replacements_by_name[node.name].append(node)
virtuals = node._virtuals_provided(root=replacement)
for virtual in virtuals:
# Virtual may be spec or str, get name or return str
replacements_by_name[getattr(virtual, "name", virtual)].append(node)
replacements_by_name[virtual].append(node)
changed = True
while changed:
@@ -4298,8 +4292,8 @@ def _splice_helper(self, replacement, self_root, other_root):
for virtual in node._virtuals_provided(root=self):
analogs += [
r
for r in replacements_by_name[getattr(virtual, "name", virtual)]
if r._splice_match(node, self_root=self_root, other_root=other_root)
for r in replacements_by_name[virtual]
if node._splice_match(r, self_root=self, other_root=replacement)
]
# No match, keep iterating over self
@@ -4313,34 +4307,56 @@ def _splice_helper(self, replacement, self_root, other_root):
# No splice needed here, keep checking
if analog == node:
continue
node._splice_detach_and_add_dependents(analog, context=self)
changed = True
break
def splice(self, other, transitive):
"""Splices dependency "other" into this ("target") Spec, and return the
result as a concrete Spec.
If transitive, then other and its dependencies will be extrapolated to
a list of Specs and spliced in accordingly.
For example, let there exist a dependency graph as follows:
T
| \
Z<-H
In this example, Spec T depends on H and Z, and H also depends on Z.
Suppose, however, that we wish to use a different H, known as H'. This
function will splice in the new H' in one of two ways:
1. transitively, where H' depends on the Z' it was built with, and the
new T* also directly depends on this new Z', or
2. intransitively, where the new T* and H' both depend on the original
Z.
Since the Spec returned by this splicing function is no longer deployed
the same way it was built, any such changes are tracked by setting the
build_spec to point to the corresponding dependency from the original
Spec.
"""
def splice(self, other: "Spec", transitive: bool = True) -> "Spec":
"""Returns a new, spliced concrete Spec with the "other" dependency and,
optionally, its dependencies.
Args:
other: alternate dependency
transitive: include other's dependencies
Returns: a concrete, spliced version of the current Spec
When transitive is "True", use the dependencies from "other" to reconcile
conflicting dependencies. When transitive is "False", use dependencies from self.
For example, suppose we have the following dependency graph:
T
| \
Z<-H
Spec T depends on H and Z, and H also depends on Z. Now we want to use
a different H, called H'. This function can be used to splice in H' to
create a new spec, called T*. If H' was built with Z', then transitive
"True" will ensure H' and T* both depend on Z':
T*
| \
Z'<-H'
If transitive is "False", then H' and T* will both depend on
the original Z, resulting in a new H'*
T*
| \
Z<-H'*
Provenance of the build is tracked through the "build_spec" property
of the spliced spec and any correspondingly modified dependency specs.
The build specs are set to that of the original spec, so the original
spec's provenance is preserved unchanged."""
assert self.concrete
assert other.concrete
if self._splice_match(other, self_root=self, other_root=other):
return other.copy()
if not any(
node._splice_match(other, self_root=self, other_root=other)
for node in self.traverse(root=False, deptype=dt.LINK | dt.RUN)
@@ -4379,12 +4395,12 @@ def mask_build_deps(in_spec):
# Transitively splice any relevant nodes from new into base
# This handles all shared dependencies between self and other
spec._splice_helper(replacement, self_root=self, other_root=other)
spec._splice_helper(replacement)
else:
# Do the same thing as the transitive splice, but reversed
node_pairs = make_node_pairs(other, replacement)
mask_build_deps(replacement)
replacement._splice_helper(spec, self_root=other, other_root=self)
replacement._splice_helper(spec)
# Intransitively splice replacement into spec
# This is very simple now that all shared dependencies have been handled
@@ -4392,13 +4408,14 @@ def mask_build_deps(in_spec):
if node._splice_match(other, self_root=spec, other_root=other):
node._splice_detach_and_add_dependents(replacement, context=spec)
# Set up build dependencies for modified nodes
# Also modify build_spec because the existing ones had build deps removed
# For nodes that were spliced, modify the build spec to ensure build deps are preserved
# For nodes that were not spliced, replace the build deps on the spec itself
for orig, copy in node_pairs:
for edge in orig.edges_to_dependencies(depflag=dt.BUILD):
copy._add_dependency(edge.spec, depflag=dt.BUILD, virtuals=edge.virtuals)
if copy._build_spec:
copy._build_spec = orig.build_spec.copy()
else:
for edge in orig.edges_to_dependencies(depflag=dt.BUILD):
copy._add_dependency(edge.spec, depflag=dt.BUILD, virtuals=edge.virtuals)
return spec
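A hedged usage sketch of the retyped API, with hypothetical package names (both specs must be concrete, per the asserts at the top of `splice`):

```python
import spack.spec

t = spack.spec.Spec("t-package").concretized()  # hypothetical packages
h_prime = spack.spec.Spec("h-package@2.0").concretized()

t_star = t.splice(h_prime, transitive=True)  # T* uses H' and H''s own Z'
assert t_star.build_spec.satisfies(t)        # provenance kept on build_spec
```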
@@ -4797,7 +4814,7 @@ def _load(cls, data):
virtuals=virtuals,
)
if "build_spec" in node.keys():
_, bhash, _ = cls.build_spec_from_node_dict(node, hash_type=hash_type)
_, bhash, _ = cls.extract_build_spec_info_from_node_dict(node, hash_type=hash_type)
node_spec._build_spec = hash_dict[bhash]["node_spec"]
return hash_dict[root_spec_hash]["node_spec"]
@@ -4925,7 +4942,7 @@ def extract_info_from_dep(cls, elt, hash):
return dep_hash, deptypes, hash_type, virtuals
@classmethod
def build_spec_from_node_dict(cls, node, hash_type=ht.dag_hash.name):
def extract_build_spec_info_from_node_dict(cls, node, hash_type=ht.dag_hash.name):
build_spec_dict = node["build_spec"]
return build_spec_dict["name"], build_spec_dict[hash_type], hash_type

View File

@@ -68,22 +68,6 @@ def cache_directory(tmpdir):
spack.config.caches = old_cache_path
@pytest.fixture(scope="module")
def mirror_dir(tmpdir_factory):
dir = tmpdir_factory.mktemp("mirror")
dir.ensure("build_cache", dir=True)
yield str(dir)
dir.join("build_cache").remove()
@pytest.fixture(scope="function")
def test_mirror(mirror_dir):
mirror_url = url_util.path_to_file_url(mirror_dir)
mirror_cmd("add", "--scope", "site", "test-mirror-func", mirror_url)
yield mirror_dir
mirror_cmd("rm", "--scope=site", "test-mirror-func")
@pytest.fixture(scope="module")
def config_directory(tmp_path_factory):
# Copy defaults to a temporary "site" scope
@@ -222,9 +206,9 @@ def dummy_prefix(tmpdir):
@pytest.mark.requires_executables(*args)
@pytest.mark.maybeslow
@pytest.mark.usefixtures(
"default_config", "cache_directory", "install_dir_default_layout", "test_mirror"
"default_config", "cache_directory", "install_dir_default_layout", "temporary_mirror"
)
def test_default_rpaths_create_install_default_layout(mirror_dir):
def test_default_rpaths_create_install_default_layout(temporary_mirror_dir):
"""
Test the creation and installation of buildcaches with default rpaths
into the default directory layout scheme.
@@ -237,13 +221,12 @@ def test_default_rpaths_create_install_default_layout(mirror_dir):
install_cmd("--no-cache", sy_spec.name)
# Create a buildcache
buildcache_cmd("push", "-u", mirror_dir, cspec.name, sy_spec.name)
buildcache_cmd("push", "-u", temporary_mirror_dir, cspec.name, sy_spec.name)
# Test force overwrite create buildcache (-f option)
buildcache_cmd("push", "-uf", mirror_dir, cspec.name)
buildcache_cmd("push", "-uf", temporary_mirror_dir, cspec.name)
# Create mirror index
buildcache_cmd("update-index", mirror_dir)
buildcache_cmd("update-index", temporary_mirror_dir)
# List the buildcaches in the mirror
buildcache_cmd("list", "-alv")
@@ -271,9 +254,9 @@ def test_default_rpaths_create_install_default_layout(mirror_dir):
@pytest.mark.maybeslow
@pytest.mark.nomockstage
@pytest.mark.usefixtures(
"default_config", "cache_directory", "install_dir_non_default_layout", "test_mirror"
"default_config", "cache_directory", "install_dir_non_default_layout", "temporary_mirror"
)
def test_default_rpaths_install_nondefault_layout(mirror_dir):
def test_default_rpaths_install_nondefault_layout(temporary_mirror_dir):
"""
Test the creation and installation of buildcaches with default rpaths
into the non-default directory layout scheme.
@@ -294,9 +277,9 @@ def test_default_rpaths_install_nondefault_layout(mirror_dir):
@pytest.mark.maybeslow
@pytest.mark.nomockstage
@pytest.mark.usefixtures(
"default_config", "cache_directory", "install_dir_default_layout", "test_mirror"
"default_config", "cache_directory", "install_dir_default_layout", "temporary_mirror"
)
def test_relative_rpaths_install_default_layout(mirror_dir):
def test_relative_rpaths_install_default_layout(temporary_mirror_dir):
"""
Test the creation and installation of buildcaches with relative
rpaths into the default directory layout scheme.
@@ -323,9 +306,9 @@ def test_relative_rpaths_install_default_layout(mirror_dir):
@pytest.mark.maybeslow
@pytest.mark.nomockstage
@pytest.mark.usefixtures(
"default_config", "cache_directory", "install_dir_non_default_layout", "test_mirror"
"default_config", "cache_directory", "install_dir_non_default_layout", "temporary_mirror"
)
def test_relative_rpaths_install_nondefault(mirror_dir):
def test_relative_rpaths_install_nondefault(temporary_mirror_dir):
"""
Test the installation of buildcaches with relativized rpaths
into the non-default directory layout scheme.
@@ -374,9 +357,9 @@ def test_push_and_fetch_keys(mock_gnupghome, tmp_path):
@pytest.mark.maybeslow
@pytest.mark.nomockstage
@pytest.mark.usefixtures(
"default_config", "cache_directory", "install_dir_non_default_layout", "test_mirror"
"default_config", "cache_directory", "install_dir_non_default_layout", "temporary_mirror"
)
def test_built_spec_cache(mirror_dir):
def test_built_spec_cache(temporary_mirror_dir):
"""Because the buildcache list command fetches the buildcache index
and uses it to populate the binary_distribution built spec cache, when
this test calls get_mirrors_for_spec, it is testing the population of
@@ -397,7 +380,7 @@ def fake_dag_hash(spec, length=None):
return "tal4c7h4z0gqmixb1eqa92mjoybxn5l6"[:length]
@pytest.mark.usefixtures("install_mockery", "mock_packages", "mock_fetch", "test_mirror")
@pytest.mark.usefixtures("install_mockery", "mock_packages", "mock_fetch", "temporary_mirror")
def test_spec_needs_rebuild(monkeypatch, tmpdir):
"""Make sure needs_rebuild properly compares remote hash
against locally computed one, avoiding unnecessary rebuilds"""
@@ -518,7 +501,7 @@ def mock_list_url(url, recursive=False):
@pytest.mark.usefixtures("mock_fetch", "install_mockery")
def test_update_sbang(tmpdir, test_mirror):
def test_update_sbang(tmpdir, temporary_mirror):
"""Test the creation and installation of buildcaches with default rpaths
into the non-default directory layout scheme, triggering an update of the
sbang.
@@ -529,7 +512,7 @@ def test_update_sbang(tmpdir, test_mirror):
old_spec_hash_str = "/{0}".format(old_spec.dag_hash())
# Need a fake mirror with *function* scope.
mirror_dir = test_mirror
mirror_dir = temporary_mirror
# Assume all commands will concretize old_spec the same way.
install_cmd("--no-cache", old_spec.name)

View File

@@ -516,6 +516,30 @@ def test_setting_dtags_based_on_config(config_setting, expected_flag, config, mo
assert dtags_to_add.value == expected_flag
def test_module_globals_available_at_setup_dependent_time(
monkeypatch, mutable_config, mock_packages, working_env
):
"""Spack built package externaltest depends on an external package
externaltool. Externaltool's setup_dependent_package needs to be able to
access globals on the dependent"""
def setup_dependent_package(module, dependent_spec):
# Make sure set_package_py_globals was already called on
# dependents
# ninja is always set by the setup context and is not None
dependent_module = dependent_spec.package.module
assert hasattr(dependent_module, "ninja")
assert dependent_module.ninja is not None
dependent_spec.package.test_attr = True
externaltool = spack.spec.Spec("externaltest").concretized()
monkeypatch.setattr(
externaltool["externaltool"].package, "setup_dependent_package", setup_dependent_package
)
spack.build_environment.setup_package(externaltool.package, False)
assert externaltool.package.test_attr
def test_build_jobs_sequential_is_sequential():
assert (
spack.config.determine_number_of_jobs(

View File

@@ -12,22 +12,39 @@
def test_build_task_errors(install_mockery):
with pytest.raises(ValueError, match="must be a package"):
inst.BuildTask("abc", None, False, 0, 0, 0, set())
"""Check expected errors when instantiating a BuildTask."""
spec = spack.spec.Spec("trivial-install-test-package")
pkg_cls = spack.repo.PATH.get_pkg_class(spec.name)
with pytest.raises(ValueError, match="must have a concrete spec"):
inst.BuildTask(pkg_cls(spec), None, False, 0, 0, 0, set())
# The value of the request argument is expected to not be checked.
for pkg in [None, "abc"]:
with pytest.raises(TypeError, match="must be a package"):
inst.BuildTask(pkg, None)
with pytest.raises(ValueError, match="must have a concrete spec"):
inst.BuildTask(pkg_cls(spec), None)
# Using a concretized package now means the request argument is checked.
spec.concretize()
assert spec.concrete
with pytest.raises(ValueError, match="must have a build request"):
inst.BuildTask(spec.package, None, False, 0, 0, 0, set())
with pytest.raises(TypeError, match="is not a valid build request"):
inst.BuildTask(spec.package, None)
# Using a valid package and spec, the next check is the status argument.
request = inst.BuildRequest(spec.package, {})
with pytest.raises(spack.error.InstallError, match="Cannot create a build task"):
inst.BuildTask(spec.package, request, False, 0, 0, inst.STATUS_REMOVED, set())
with pytest.raises(TypeError, match="is not a valid build status"):
inst.BuildTask(spec.package, request, status="queued")
# Now we can check that build tasks cannot be created when the status
# indicates the task is/should've been removed.
with pytest.raises(spack.error.InstallError, match="Cannot create a task"):
inst.BuildTask(spec.package, request, status=inst.BuildStatus.REMOVED)
# Also make sure to not accept an incompatible installed argument value.
with pytest.raises(TypeError, match="'installed' be a 'set', not 'str'"):
inst.BuildTask(spec.package, request, installed="mpileaks")
def test_build_task_basics(install_mockery):
@@ -37,7 +54,7 @@ def test_build_task_basics(install_mockery):
# Ensure key properties match expectations
request = inst.BuildRequest(spec.package, {})
task = inst.BuildTask(spec.package, request, False, 0, 0, inst.STATUS_ADDED, set())
task = inst.BuildTask(spec.package, request=request, status=inst.BuildStatus.QUEUED)
assert not task.explicit
assert task.priority == len(task.uninstalled_deps)
assert task.key == (task.priority, task.sequence)
@@ -59,16 +76,16 @@ def test_build_task_strings(install_mockery):
# Ensure key properties match expectations
request = inst.BuildRequest(spec.package, {})
task = inst.BuildTask(spec.package, request, False, 0, 0, inst.STATUS_ADDED, set())
task = inst.BuildTask(spec.package, request=request, status=inst.BuildStatus.QUEUED)
# Cover __repr__
irep = task.__repr__()
assert irep.startswith(task.__class__.__name__)
assert "status='queued'" in irep # == STATUS_ADDED
assert "BuildStatus.QUEUED" in irep
assert "sequence=" in irep
# Cover __str__
istr = str(task)
assert "status=queued" in istr # == STATUS_ADDED
assert "status=queued" in istr # == BuildStatus.QUEUED
assert "#dependencies=1" in istr
assert "priority=" in istr

View File

@@ -170,7 +170,7 @@ def test_remove_and_add_a_source(mutable_config):
assert not sources
# Add it back and check we restored the initial state
_bootstrap("add", "github-actions", "$spack/share/spack/bootstrap/github-actions-v0.5")
_bootstrap("add", "github-actions", "$spack/share/spack/bootstrap/github-actions-v0.6")
sources = spack.bootstrap.core.bootstrapping_sources()
assert len(sources) == 1

View File

@@ -2,11 +2,11 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import filecmp
import json
import os
import pathlib
import shutil
from io import BytesIO
from typing import NamedTuple
import jsonschema
@@ -26,7 +26,6 @@
import spack.util.spack_yaml as syaml
from spack.cmd.ci import FAILED_CREATE_BUILDCACHE_CODE
from spack.schema.buildcache_spec import schema as specfile_schema
from spack.schema.ci import schema as ci_schema
from spack.schema.database_index import schema as db_idx_schema
from spack.spec import Spec
@@ -196,7 +195,7 @@ def test_ci_generate_with_env(ci_generate_test, tmp_path, mock_binary_index):
- matrix:
- [$old-gcc-pkgs]
mirrors:
some-mirror: {mirror_url}
buildcache-destination: {mirror_url}
ci:
pipeline-gen:
- submapping:
@@ -238,7 +237,9 @@ def test_ci_generate_with_env(ci_generate_test, tmp_path, mock_binary_index):
assert "rebuild-index" in yaml_contents
rebuild_job = yaml_contents["rebuild-index"]
assert rebuild_job["script"][0] == f"spack buildcache update-index --keys {mirror_url}"
assert (
rebuild_job["script"][0] == f"spack buildcache update-index --keys {mirror_url.as_uri()}"
)
assert rebuild_job["custom_attribute"] == "custom!"
assert "variables" in yaml_contents
@@ -248,31 +249,28 @@ def test_ci_generate_with_env(ci_generate_test, tmp_path, mock_binary_index):
def test_ci_generate_with_env_missing_section(ci_generate_test, tmp_path, mock_binary_index):
"""Make sure we get a reasonable message if we omit gitlab-ci section"""
_, _, output = ci_generate_test(
f"""\
env_yaml = f"""\
spack:
specs:
- archive-files
mirrors:
some-mirror: {tmp_path / 'ci-mirror'}
""",
fail_on_error=False,
)
assert "Environment does not have `ci` a configuration" in output
buildcache-destination: {tmp_path / 'ci-mirror'}
"""
expect = "Environment does not have a `ci` configuration"
with pytest.raises(ci.SpackCIError, match=expect):
ci_generate_test(env_yaml)
def test_ci_generate_with_cdash_token(ci_generate_test, tmp_path, mock_binary_index, monkeypatch):
"""Make sure we it doesn't break if we configure cdash"""
monkeypatch.setenv("SPACK_CDASH_AUTH_TOKEN", "notreallyatokenbutshouldnotmatter")
backup_file = tmp_path / "backup-ci.yml"
spack_yaml_content = f"""\
spack:
specs:
- archive-files
mirrors:
some-mirror: {tmp_path / "ci-mirror"}
buildcache-destination: {tmp_path / "ci-mirror"}
ci:
enable-artifacts-buildcache: True
pipeline-gen:
- submapping:
- match:
@@ -287,16 +285,15 @@ def test_ci_generate_with_cdash_token(ci_generate_test, tmp_path, mock_binary_in
project: Not used
site: Nothing
"""
spack_yaml, original_file, output = ci_generate_test(
spack_yaml_content, "--copy-to", str(backup_file)
)
spack_yaml, original_file, output = ci_generate_test(spack_yaml_content)
yaml_contents = syaml.load(original_file.read_text())
# That fake token should still have resulted in being unable to
# That fake token should have resulted in being unable to
# register build group with cdash, but the workload should
# still have been generated.
assert "Problem populating buildgroup" in output
assert backup_file.exists()
assert filecmp.cmp(str(original_file), str(backup_file))
expected_keys = ["rebuild-index", "stages", "variables", "workflow"]
assert all([key in yaml_contents.keys() for key in expected_keys])
def test_ci_generate_with_custom_settings(
@@ -311,7 +308,7 @@ def test_ci_generate_with_custom_settings(
specs:
- archive-files
mirrors:
some-mirror: {tmp_path / "ci-mirror"}
buildcache-destination: {tmp_path / "ci-mirror"}
ci:
pipeline-gen:
- submapping:
@@ -386,9 +383,8 @@ def test_ci_generate_pkg_with_deps(ci_generate_test, tmp_path, ci_base_environme
specs:
- flatten-deps
mirrors:
some-mirror: {tmp_path / 'ci-mirror'}
buildcache-destination: {tmp_path / 'ci-mirror'}
ci:
enable-artifacts-buildcache: True
pipeline-gen:
- submapping:
- match:
@@ -421,13 +417,8 @@ def test_ci_generate_pkg_with_deps(ci_generate_test, tmp_path, ci_base_environme
def test_ci_generate_for_pr_pipeline(ci_generate_test, tmp_path, monkeypatch):
"""Test that PR pipelines do not include a final stage job for
rebuilding the mirror index, even if that job is specifically
configured.
"""
"""Test generation of a PR pipeline with disabled rebuild-index"""
monkeypatch.setenv("SPACK_PIPELINE_TYPE", "spack_pull_request")
monkeypatch.setenv("SPACK_PR_BRANCH", "fake-test-branch")
monkeypatch.setattr(spack.ci, "SHARED_PR_MIRROR_URL", f"{tmp_path / 'shared-pr-mirror'}")
spack_yaml, outputfile, _ = ci_generate_test(
f"""\
@@ -435,9 +426,8 @@ def test_ci_generate_for_pr_pipeline(ci_generate_test, tmp_path, monkeypatch):
specs:
- flatten-deps
mirrors:
some-mirror: {tmp_path / 'ci-mirror'}
buildcache-destination: {tmp_path / 'ci-mirror'}
ci:
enable-artifacts-buildcache: True
pipeline-gen:
- submapping:
- match:
@@ -473,7 +463,7 @@ def test_ci_generate_with_external_pkg(ci_generate_test, tmp_path, monkeypatch):
- archive-files
- externaltest
mirrors:
some-mirror: {tmp_path / "ci-mirror"}
buildcache-destination: {tmp_path / "ci-mirror"}
ci:
pipeline-gen:
- submapping:
@@ -539,7 +529,6 @@ def create_rebuild_env(
broken_specs_path = scratch / "naughty-list"
mirror_url = mirror_dir.as_uri()
temp_storage_url = (tmp_path / "temp-storage").as_uri()
ci_job_url = "https://some.domain/group/project/-/jobs/42"
ci_pipeline_url = "https://some.domain/group/project/-/pipelines/7"
@@ -554,11 +543,10 @@ def create_rebuild_env(
specs:
- $packages
mirrors:
test-mirror: {mirror_dir}
buildcache-destination: {mirror_dir}
ci:
broken-specs-url: {broken_specs_path.as_uri()}
broken-tests-packages: {json.dumps([pkg_name] if broken_tests else [])}
temporary-storage-url-prefix: {temp_storage_url}
pipeline-gen:
- submapping:
- match:
@@ -710,7 +698,7 @@ def test_ci_require_signing(
specs:
- archive-files
mirrors:
test-mirror: {tmp_path / "ci-mirror"}
buildcache-destination: {tmp_path / "ci-mirror"}
ci:
pipeline-gen:
- submapping:
@@ -758,9 +746,8 @@ def test_ci_nothing_to_rebuild(
specs:
- $packages
mirrors:
test-mirror: {mirror_url}
buildcache-destination: {mirror_url}
ci:
enable-artifacts-buildcache: true
pipeline-gen:
- submapping:
- match:
@@ -787,103 +774,20 @@ def test_ci_nothing_to_rebuild(
"SPACK_JOB_LOG_DIR": "log_dir",
"SPACK_JOB_REPRO_DIR": "repro_dir",
"SPACK_JOB_TEST_DIR": "test_dir",
"SPACK_LOCAL_MIRROR_DIR": str(mirror_dir),
"SPACK_CONCRETE_ENV_DIR": str(tmp_path),
"SPACK_JOB_SPEC_DAG_HASH": env.concrete_roots()[0].dag_hash(),
"SPACK_JOB_SPEC_PKG_NAME": "archive-files",
"SPACK_COMPILER_ACTION": "NONE",
"SPACK_REMOTE_MIRROR_URL": mirror_url,
}
)
def fake_dl_method(spec, *args, **kwargs):
print("fake download buildcache {0}".format(spec.name))
monkeypatch.setattr(spack.binary_distribution, "download_single_spec", fake_dl_method)
ci_out = ci_cmd("rebuild", output=str)
assert "No need to rebuild archive-files" in ci_out
assert "fake download buildcache archive-files" in ci_out
env_cmd("deactivate")
def test_ci_generate_mirror_override(
tmp_path: pathlib.Path,
mutable_mock_env_path,
install_mockery,
mock_fetch,
mock_binary_index,
ci_base_environment,
):
"""Ensure that protected pipelines using --buildcache-destination do not
skip building specs that are not in the override mirror when they are
found in the main mirror."""
os.environ.update({"SPACK_PIPELINE_TYPE": "spack_protected_branch"})
mirror_url = (tmp_path / "mirror").as_uri()
with open(tmp_path / "spack.yaml", "w") as f:
f.write(
f"""
spack:
definitions:
- packages: [patchelf]
specs:
- $packages
mirrors:
test-mirror: {mirror_url}
ci:
pipeline-gen:
- submapping:
- match:
- patchelf
build-job:
tags:
- donotcare
image: donotcare
- cleanup-job:
tags:
- nonbuildtag
image: basicimage
"""
)
with working_dir(tmp_path):
env_cmd("create", "test", "./spack.yaml")
first_ci_yaml = str(tmp_path / ".gitlab-ci-1.yml")
second_ci_yaml = str(tmp_path / ".gitlab-ci-2.yml")
with ev.read("test"):
install_cmd()
buildcache_cmd("push", "-u", mirror_url, "patchelf")
buildcache_cmd("update-index", mirror_url, output=str)
# This generate should not trigger a rebuild of patchelf, since it's in
# the main mirror referenced in the environment.
ci_cmd("generate", "--check-index-only", "--output-file", first_ci_yaml)
# Because we used a mirror override (--buildcache-destination) on a
# spack protected pipeline, we expect to only look in the override
# mirror for the spec, and thus the patchelf job should be generated in
# this pipeline
ci_cmd(
"generate",
"--check-index-only",
"--output-file",
second_ci_yaml,
"--buildcache-destination",
(tmp_path / "does-not-exist").as_uri(),
)
with open(first_ci_yaml) as fd1:
first_yaml = fd1.read()
assert "no-specs-to-rebuild" in first_yaml
with open(second_ci_yaml) as fd2:
second_yaml = fd2.read()
assert "no-specs-to-rebuild" not in second_yaml
@pytest.mark.disable_clean_stage_check
def test_push_to_build_cache(
tmp_path: pathlib.Path,
@@ -910,9 +814,8 @@ def test_push_to_build_cache(
specs:
- $packages
mirrors:
test-mirror: {mirror_url}
buildcache-destination: {mirror_url}
ci:
enable-artifacts-buildcache: True
pipeline-gen:
- submapping:
- match:
@@ -1048,7 +951,7 @@ def test_ci_generate_override_runner_attrs(
- flatten-deps
- pkg-a
mirrors:
some-mirror: {tmp_path / "ci-mirror"}
buildcache-destination: {tmp_path / "ci-mirror"}
ci:
pipeline-gen:
- match_behavior: {match_behavior}
@@ -1188,7 +1091,7 @@ def test_ci_rebuild_index(
specs:
- callpath
mirrors:
test-mirror: {mirror_url}
buildcache-destination: {mirror_url}
ci:
pipeline-gen:
- submapping:
@@ -1244,7 +1147,7 @@ def fake_stack_changed(env_path, rev1="HEAD^", rev2="HEAD"):
- archive-files
- callpath
mirrors:
some-mirror: {tmp_path / 'ci-mirror'}
buildcache-destination: {tmp_path / 'ci-mirror'}
ci:
pipeline-gen:
- build-job:
@@ -1307,101 +1210,15 @@ def test_ci_subcommands_without_mirror(
with ev.read("test"):
# Check the 'generate' subcommand
output = ci_cmd(
"generate",
"--output-file",
str(tmp_path / ".gitlab-ci.yml"),
output=str,
fail_on_error=False,
)
assert "spack ci generate requires an env containing a mirror" in output
expect = "spack ci generate requires a mirror named 'buildcache-destination'"
with pytest.raises(ci.SpackCIError, match=expect):
ci_cmd("generate", "--output-file", str(tmp_path / ".gitlab-ci.yml"))
# Also check the 'rebuild-index' subcommand
output = ci_cmd("rebuild-index", output=str, fail_on_error=False)
assert "spack ci rebuild-index requires an env containing a mirror" in output
def test_ensure_only_one_temporary_storage():
"""Make sure 'gitlab-ci' section of env does not allow specification of
both 'enable-artifacts-buildcache' and 'temporary-storage-url-prefix'."""
gitlab_ci_template = """
ci:
{0}
pipeline-gen:
- submapping:
- match:
- notcheckedhere
build-job:
tags:
- donotcare
"""
enable_artifacts = "enable-artifacts-buildcache: True"
temp_storage = "temporary-storage-url-prefix: file:///temp/mirror"
specify_both = f"{enable_artifacts}\n {temp_storage}"
specify_neither = ""
# User can specify "enable-artifacts-buildcache" (boolean)
yaml_obj = syaml.load(gitlab_ci_template.format(enable_artifacts))
jsonschema.validate(yaml_obj, ci_schema)
# User can also specify "temporary-storage-url-prefix" (string)
yaml_obj = syaml.load(gitlab_ci_template.format(temp_storage))
jsonschema.validate(yaml_obj, ci_schema)
# However, specifying both should fail to validate
yaml_obj = syaml.load(gitlab_ci_template.format(specify_both))
with pytest.raises(jsonschema.ValidationError):
jsonschema.validate(yaml_obj, ci_schema)
# Specifying neither should be fine too, as neither of these properties
# should be required
yaml_obj = syaml.load(gitlab_ci_template.format(specify_neither))
jsonschema.validate(yaml_obj, ci_schema)
def test_ci_generate_temp_storage_url(ci_generate_test, tmp_path, mock_binary_index):
"""Verify correct behavior when using temporary-storage-url-prefix"""
_, outputfile, _ = ci_generate_test(
f"""\
spack:
specs:
- archive-files
mirrors:
some-mirror: {(tmp_path / "ci-mirror").as_uri()}
ci:
temporary-storage-url-prefix: {(tmp_path / "temp-mirror").as_uri()}
pipeline-gen:
- submapping:
- match:
- archive-files
build-job:
tags:
- donotcare
image: donotcare
- cleanup-job:
custom_attribute: custom!
"""
)
yaml_contents = syaml.load(outputfile.read_text())
assert "cleanup" in yaml_contents
cleanup_job = yaml_contents["cleanup"]
assert cleanup_job["custom_attribute"] == "custom!"
assert "script" in cleanup_job
cleanup_task = cleanup_job["script"][0]
assert cleanup_task.startswith("spack -d mirror destroy")
assert "stages" in yaml_contents
stages = yaml_contents["stages"]
# Cleanup job should be 2nd to last, just before rebuild-index
assert "stage" in cleanup_job
assert cleanup_job["stage"] == stages[-2]
def test_ci_generate_read_broken_specs_url(
tmp_path: pathlib.Path,
mutable_mock_env_path,
@@ -1438,7 +1255,7 @@ def test_ci_generate_read_broken_specs_url(
- flatten-deps
- pkg-a
mirrors:
some-mirror: {(tmp_path / "ci-mirror").as_uri()}
buildcache-destination: {(tmp_path / "ci-mirror").as_uri()}
ci:
broken-specs-url: "{broken_specs_url}"
pipeline-gen:
@@ -1483,9 +1300,8 @@ def test_ci_generate_external_signing_job(ci_generate_test, tmp_path, monkeypatc
specs:
- archive-files
mirrors:
some-mirror: {(tmp_path / "ci-mirror").as_uri()}
buildcache-destination: {(tmp_path / "ci-mirror").as_uri()}
ci:
temporary-storage-url-prefix: {(tmp_path / "temp-mirror").as_uri()}
pipeline-gen:
- submapping:
- match:
@@ -1540,7 +1356,7 @@ def test_ci_reproduce(
specs:
- $packages
mirrors:
test-mirror: {tmp_path / "ci-mirror"}
buildcache-destination: {tmp_path / "ci-mirror"}
ci:
pipeline-gen:
- submapping:
@@ -1671,106 +1487,6 @@ def test_cmd_first_line():
assert spack.cmd.first_line(doc) == first
legacy_spack_yaml_contents = """
spack:
definitions:
- old-gcc-pkgs:
- archive-files
- callpath
# specify ^openblas-with-lapack to ensure that builtin.mock repo flake8
# package (which can also provide lapack) is not chosen, as it violates
# a package-level check which requires exactly one fetch strategy (this
# is apparently not an issue for other tests that use it).
- hypre@0.2.15 ^openblas-with-lapack
specs:
- matrix:
- [$old-gcc-pkgs]
mirrors:
test-mirror: {mirror_url}
{key}:
match_behavior: first
mappings:
- match:
- arch=test-debian6-core2
runner-attributes:
tags:
- donotcare
image: donotcare
- match:
- arch=test-debian6-m1
runner-attributes:
tags:
- donotcare
image: donotcare
service-job-attributes:
image: donotcare
tags: [donotcare]
cdash:
build-group: Not important
url: https://my.fake.cdash
project: Not used
site: Nothing
"""
@pytest.mark.regression("36409")
def test_gitlab_ci_deprecated(
tmp_path: pathlib.Path,
mutable_mock_env_path,
install_mockery,
monkeypatch,
ci_base_environment,
mock_binary_index,
):
mirror_url = (tmp_path / "ci-mirror").as_uri()
with open(tmp_path / "spack.yaml", "w") as f:
f.write(legacy_spack_yaml_contents.format(mirror_url=mirror_url, key="gitlab-ci"))
with working_dir(tmp_path):
with ev.Environment("."):
ci_cmd("generate", "--output-file", "generated-pipeline.yaml")
with open("generated-pipeline.yaml") as f:
yaml_contents = syaml.load(f)
assert "stages" in yaml_contents
assert len(yaml_contents["stages"]) == 5
assert yaml_contents["stages"][0] == "stage-0"
assert yaml_contents["stages"][4] == "stage-rebuild-index"
assert "rebuild-index" in yaml_contents
rebuild_job = yaml_contents["rebuild-index"]
expected = f"spack buildcache update-index --keys {mirror_url}"
assert rebuild_job["script"][0] == expected
assert "variables" in yaml_contents
assert "SPACK_ARTIFACTS_ROOT" in yaml_contents["variables"]
artifacts_root = yaml_contents["variables"]["SPACK_ARTIFACTS_ROOT"]
assert artifacts_root == "jobs_scratch_dir"
@pytest.mark.regression("36045")
def test_gitlab_ci_update(
tmp_path: pathlib.Path,
mutable_mock_env_path,
install_mockery,
monkeypatch,
ci_base_environment,
mock_binary_index,
):
with open(tmp_path / "spack.yaml", "w") as f:
f.write(
legacy_spack_yaml_contents.format(mirror_url=(tmp_path / "mirror").as_uri(), key="ci")
)
env_cmd("update", "-y", str(tmp_path))
with open(tmp_path / "spack.yaml") as f:
yaml_contents = syaml.load(f)
ci_root = yaml_contents["spack"]["ci"]
assert "pipeline-gen" in ci_root
def test_gitlab_config_scopes(ci_generate_test, tmp_path):
"""Test pipeline generation with real configs included"""
configs_path = os.path.join(spack_paths.share_path, "gitlab", "cloud_pipelines", "configs")
@@ -1784,7 +1500,7 @@ def test_gitlab_config_scopes(ci_generate_test, tmp_path):
specs:
- flatten-deps
mirrors:
-    some-mirror: {tmp_path / "ci-mirror"}
+    buildcache-destination: {tmp_path / "ci-mirror"}
ci:
pipeline-gen:
- build-job:
@@ -1846,3 +1562,91 @@ def test_ci_generate_mirror_config(
pipeline_doc = syaml.load(f)
assert fst not in pipeline_doc["rebuild-index"]["script"][0]
assert snd in pipeline_doc["rebuild-index"]["script"][0]
def dynamic_mapping_setup(tmpdir):
filename = str(tmpdir.join("spack.yaml"))
with open(filename, "w") as f:
f.write(
"""\
spack:
specs:
- pkg-a
mirrors:
buildcache-destination: https://my.fake.mirror
ci:
pipeline-gen:
- dynamic-mapping:
endpoint: https://fake.spack.io/mapper
require: ["variables"]
ignore: ["ignored_field"]
allow: ["variables", "retry"]
"""
)
spec_a = Spec("pkg-a")
spec_a.concretize()
return ci.get_job_name(spec_a)
def test_ci_dynamic_mapping_empty(
tmpdir,
working_env,
mutable_mock_env_path,
install_mockery,
mock_packages,
monkeypatch,
ci_base_environment,
):
# The fake endpoint will always return an empty dictionary
def fake_dyn_mapping_urlopener(*args, **kwargs):
return BytesIO("{}".encode())
monkeypatch.setattr(ci, "_dyn_mapping_urlopener", fake_dyn_mapping_urlopener)
_ = dynamic_mapping_setup(tmpdir)
with tmpdir.as_cwd():
env_cmd("create", "test", "./spack.yaml")
outputfile = str(tmpdir.join(".gitlab-ci.yml"))
with ev.read("test"):
output = ci_cmd("generate", "--output-file", outputfile)
assert "Response missing required keys: ['variables']" in output
def test_ci_dynamic_mapping_full(
tmpdir,
working_env,
mutable_mock_env_path,
install_mockery,
mock_packages,
monkeypatch,
ci_base_environment,
):
# The fake endpoint will always return a full mapping, including ignored and unallowed fields
def fake_dyn_mapping_urlopener(*args, **kwargs):
return BytesIO(
json.dumps(
{"variables": {"MY_VAR": "hello"}, "ignored_field": 0, "unallowed_field": 0}
).encode()
)
monkeypatch.setattr(ci, "_dyn_mapping_urlopener", fake_dyn_mapping_urlopener)
label = dynamic_mapping_setup(tmpdir)
with tmpdir.as_cwd():
env_cmd("create", "test", "./spack.yaml")
outputfile = str(tmpdir.join(".gitlab-ci.yml"))
with ev.read("test"):
ci_cmd("generate", "--output-file", outputfile)
with open(outputfile) as of:
pipeline_doc = syaml.load(of.read())
assert label in pipeline_doc
job = pipeline_doc[label]
assert job.get("variables", {}).get("MY_VAR") == "hello"
assert "ignored_field" not in job
assert "unallowed_field" not in job

View File

@@ -164,3 +164,30 @@ def test_concretize_deprecated(mock_packages, mock_archive, mock_fetch, install_
spec = spack.spec.Spec("libelf@0.8.10")
with pytest.raises(spack.spec.SpecDeprecatedError):
spec.concretize()
@pytest.mark.usefixtures("mock_packages", "mock_archive", "mock_fetch", "install_mockery")
@pytest.mark.regression("46915")
def test_deprecate_spec_with_external_dependency(mutable_config, temporary_store, tmp_path):
"""Tests that we can deprecate a spec that has an external dependency"""
packages_yaml = {
"libelf": {
"buildable": False,
"externals": [{"spec": "libelf@0.8.13", "prefix": str(tmp_path / "libelf")}],
}
}
mutable_config.set("packages", packages_yaml)
install("--fake", "dyninst ^libdwarf@=20111030")
install("--fake", "libdwarf@=20130729")
# Ensure we are using the external libelf
db = temporary_store.db
libelf = db.query_one("libelf")
assert libelf.external
deprecated_spec = db.query_one("libdwarf@=20111030")
new_libdwarf = db.query_one("libdwarf@=20130729")
deprecate("-y", "libdwarf@=20111030", "libdwarf@=20130729")
assert db.deprecator(deprecated_spec) == new_libdwarf

View File

@@ -65,6 +65,12 @@ def test_develop_no_clone(self, tmpdir):
develop("--no-clone", "-p", str(tmpdir), "mpich@1.0")
self.check_develop(e, spack.spec.Spec("mpich@=1.0"), str(tmpdir))
def test_develop_no_version(self, tmpdir):
env("create", "test")
with ev.read("test") as e:
develop("--no-clone", "-p", str(tmpdir), "mpich")
self.check_develop(e, spack.spec.Spec("mpich@=main"), str(tmpdir))
def test_develop(self):
env("create", "test")
with ev.read("test") as e:

View File

@@ -38,6 +38,7 @@
import spack.util.spack_json as sjson
import spack.util.spack_yaml
from spack.cmd.env import _env_create
from spack.installer import PackageInstaller
from spack.main import SpackCommand, SpackCommandError
from spack.spec import Spec
from spack.stage import stage_prefix
@@ -574,42 +575,76 @@ def test_remove_command():
with ev.read("test"):
add("mpileaks")
with ev.read("test"):
assert "mpileaks" in find()
assert "mpileaks@" not in find()
assert "mpileaks@" not in find("--show-concretized")
with ev.read("test"):
remove("mpileaks")
with ev.read("test"):
assert "mpileaks" not in find()
assert "mpileaks@" not in find()
assert "mpileaks@" not in find("--show-concretized")
with ev.read("test"):
add("mpileaks")
with ev.read("test"):
assert "mpileaks" in find()
assert "mpileaks@" not in find()
assert "mpileaks@" not in find("--show-concretized")
with ev.read("test"):
concretize()
with ev.read("test"):
assert "mpileaks" in find()
assert "mpileaks@" not in find()
assert "mpileaks@" in find("--show-concretized")
with ev.read("test"):
remove("mpileaks")
with ev.read("test"):
assert "mpileaks" not in find()
# removed but still in last concretized specs
assert "mpileaks@" in find("--show-concretized")
with ev.read("test"):
concretize()
with ev.read("test"):
assert "mpileaks" not in find()
assert "mpileaks@" not in find()
# now the lockfile is regenerated and it's gone.
assert "mpileaks@" not in find("--show-concretized")
def test_remove_command_all():
# Need separate ev.read calls for each command to ensure we test round-trip to disk
env("create", "test")
test_pkgs = ("mpileaks", "zlib")
with ev.read("test"):
for name in test_pkgs:
add(name)
with ev.read("test"):
for name in test_pkgs:
assert name in find()
assert f"{name}@" not in find()
with ev.read("test"):
remove("-a")
with ev.read("test"):
for name in test_pkgs:
assert name not in find()
def test_bad_remove_included_env():
env("create", "test")
test = ev.read("test")
@@ -769,6 +804,39 @@ def test_user_removed_spec(environment_from_manifest):
assert not any(x.name == "hypre" for x in env_specs)
def test_lockfile_spliced_specs(environment_from_manifest, install_mockery):
"""Test that an environment can round-trip a spliced spec."""
# Create a local install for zmpi to splice in
# Default concretization does not use zmpi
zmpi = spack.spec.Spec("zmpi").concretized()
PackageInstaller([zmpi.package], fake=True).install()
e1 = environment_from_manifest(
f"""
spack:
specs:
- mpileaks
concretizer:
splice:
explicit:
- target: mpi
replacement: zmpi/{zmpi.dag_hash()}
"""
)
with e1:
e1.concretize()
e1.write()
# By reading into a second environment, we force a round trip to json
e2 = _env_create("test2", init_file=e1.lock_path)
# The only concretized spec is mpileaks
for _, spec in e2.concretized_specs():
assert spec.spliced
assert spec["mpi"].satisfies(f"zmpi@{zmpi.version}")
assert spec["mpi"].build_spec.satisfies(zmpi)
def test_init_from_lockfile(environment_from_manifest):
"""Test that an environment can be instantiated from a lockfile."""
e1 = environment_from_manifest(
@@ -3885,7 +3953,7 @@ def test_environment_depfile_makefile(depfile_flags, expected_installs, tmpdir,
)
# Do make dry run.
-    out = make("-n", "-f", makefile, output=str)
+    out = make("-n", "-f", makefile, "SPACK=spack", output=str)
specs_that_make_would_install = _parse_dry_run_package_installs(out)
@@ -3923,7 +3991,7 @@ def test_depfile_works_with_gitversions(tmpdir, mock_packages, monkeypatch):
env("depfile", "-o", makefile, "--make-disable-jobserver", "--make-prefix=prefix")
# Do a dry run on the generated depfile
-    out = make("-n", "-f", makefile, output=str)
+    out = make("-n", "-f", makefile, "SPACK=spack", output=str)
# Check that all specs are there (without duplicates)
specs_that_make_would_install = _parse_dry_run_package_installs(out)
@@ -3985,7 +4053,12 @@ def test_depfile_phony_convenience_targets(
# Phony install/* target should install picked package and all its deps
specs_that_make_would_install = _parse_dry_run_package_installs(
make("-n", picked_spec.format("install/{name}-{version}-{hash}"), output=str)
make(
"-n",
picked_spec.format("install/{name}-{version}-{hash}"),
"SPACK=spack",
output=str,
)
)
assert set(specs_that_make_would_install) == set(expected_installs)
@@ -3993,7 +4066,12 @@ def test_depfile_phony_convenience_targets(
# Phony install-deps/* target shouldn't install picked package
specs_that_make_would_install = _parse_dry_run_package_installs(
make("-n", picked_spec.format("install-deps/{name}-{version}-{hash}"), output=str)
make(
"-n",
picked_spec.format("install-deps/{name}-{version}-{hash}"),
"SPACK=spack",
output=str,
)
)
assert set(specs_that_make_would_install) == set(expected_installs) - {picked_package}
@@ -4053,7 +4131,7 @@ def test_spack_package_ids_variable(tmpdir, mock_packages):
make = Executable("make")
# Do dry run.
-    out = make("-n", "-C", str(tmpdir), output=str)
+    out = make("-n", "-C", str(tmpdir), "SPACK=spack", output=str)
# post-install: <hash> should've been executed
with ev.read("test") as test:

View File

@@ -70,11 +70,11 @@ def test_query_arguments():
q_args = query_arguments(args)
assert "installed" in q_args
assert "known" in q_args
assert "predicate_fn" in q_args
assert "explicit" in q_args
assert q_args["installed"] == ["installed"]
assert q_args["known"] is any
assert q_args["explicit"] is any
assert q_args["predicate_fn"] is None
assert q_args["explicit"] is None
assert "start_date" in q_args
assert "end_date" not in q_args
assert q_args["install_tree"] == "all"

View File

@@ -4,9 +4,12 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os.path
import sys
import pytest
from llnl.util.symlink import _windows_can_symlink
import spack.util.spack_yaml as s_yaml
from spack.installer import PackageInstaller
from spack.main import SpackCommand
@@ -16,7 +19,16 @@
install = SpackCommand("install")
view = SpackCommand("view")
-pytestmark = pytest.mark.not_on_windows("does not run on windows")
+if sys.platform == "win32":
+    if not _windows_can_symlink():
+        pytest.skip(
+            "Windows must be able to create symlinks to run tests.", allow_module_level=True
+        )
+    # TODO: Skipping hardlink command testing on windows until robust checks can be added.
+    # See https://github.com/spack/spack/pull/46335#discussion_r1757411915
+    commands = ["symlink", "add", "copy", "relocate"]
+else:
+    commands = ["hardlink", "symlink", "hard", "add", "copy", "relocate"]
def create_projection_file(tmpdir, projection):
@@ -28,7 +40,7 @@ def create_projection_file(tmpdir, projection):
return projection_file
@pytest.mark.parametrize("cmd", ["hardlink", "symlink", "hard", "add", "copy", "relocate"])
@pytest.mark.parametrize("cmd", commands)
def test_view_link_type(tmpdir, mock_packages, mock_archive, mock_fetch, install_mockery, cmd):
install("libdwarf")
viewpath = str(tmpdir.mkdir("view_{0}".format(cmd)))
@@ -41,7 +53,7 @@ def test_view_link_type(tmpdir, mock_packages, mock_archive, mock_fetch, install
assert os.path.islink(package_prefix) == is_link_cmd
@pytest.mark.parametrize("add_cmd", ["hardlink", "symlink", "hard", "add", "copy", "relocate"])
@pytest.mark.parametrize("add_cmd", commands)
def test_view_link_type_remove(
tmpdir, mock_packages, mock_archive, mock_fetch, install_mockery, add_cmd
):
@@ -55,7 +67,7 @@ def test_view_link_type_remove(
assert not os.path.exists(bindir)
@pytest.mark.parametrize("cmd", ["hardlink", "symlink", "hard", "add", "copy", "relocate"])
@pytest.mark.parametrize("cmd", commands)
def test_view_projections(tmpdir, mock_packages, mock_archive, mock_fetch, install_mockery, cmd):
install("libdwarf@20130207")

View File

@@ -461,9 +461,13 @@ def test_intel_flags():
unsupported_flag_test("cxx14_flag", "intel@=14.0")
supported_flag_test("cxx14_flag", "-std=c++1y", "intel@=15.0")
supported_flag_test("cxx14_flag", "-std=c++14", "intel@=15.0.2")
unsupported_flag_test("cxx17_flag", "intel@=18")
supported_flag_test("cxx17_flag", "-std=c++17", "intel@=19.0")
unsupported_flag_test("c99_flag", "intel@=11.0")
supported_flag_test("c99_flag", "-std=c99", "intel@=12.0")
unsupported_flag_test("c11_flag", "intel@=15.0")
supported_flag_test("c18_flag", "-std=c18", "intel@=21.5.0")
unsupported_flag_test("c18_flag", "intel@=21.4.0")
supported_flag_test("c11_flag", "-std=c1x", "intel@=16.0")
supported_flag_test("cc_pic_flag", "-fPIC", "intel@=1.0")
supported_flag_test("cxx_pic_flag", "-fPIC", "intel@=1.0")

View File

@@ -2281,6 +2281,31 @@ def test_virtuals_are_annotated_on_edges(self, spec_str):
edges = spec.edges_to_dependencies(name="callpath")
assert len(edges) == 1 and edges[0].virtuals == ()
@pytest.mark.parametrize("transitive", [True, False])
def test_explicit_splices(
self, mutable_config, database_mutable_config, mock_packages, transitive, capfd
):
mpich_spec = database_mutable_config.query("mpich")[0]
splice_info = {
"target": "mpi",
"replacement": f"/{mpich_spec.dag_hash()}",
"transitive": transitive,
}
spack.config.CONFIG.set("concretizer", {"splice": {"explicit": [splice_info]}})
spec = spack.spec.Spec("hdf5 ^zmpi").concretized()
assert spec.satisfies(f"^mpich@{mpich_spec.version}")
assert spec.build_spec.dependencies(name="zmpi", deptype="link")
assert spec["mpi"].build_spec.satisfies(mpich_spec)
assert not spec.build_spec.satisfies(f"^mpich/{mpich_spec.dag_hash()}")
assert not spec.dependencies(name="zmpi", deptype="link")
captured = capfd.readouterr()
assert "Warning: explicit splice configuration has caused" in captured.err
assert "hdf5 ^zmpi" in captured.err
assert str(spec) in captured.err
@pytest.mark.db
@pytest.mark.parametrize(
"spec_str,mpi_name",

View File

@@ -472,6 +472,13 @@ def test_substitute_date(mock_low_high_config):
assert date.today().strftime("%Y-%m-%d") in new_path
def test_substitute_spack_version():
version = spack.spack_version_info
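    # e.g. with spack_version_info starting (0, 23, ...), "$spack_short_version" expands to "0.23"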
assert spack_path.canonicalize_path(
"spack$spack_short_version/test"
) == spack_path.canonicalize_path(f"spack{version[0]}.{version[1]}/test")
PAD_STRING = spack_path.SPACK_PATH_PADDING_CHARS
MAX_PATH_LEN = spack_path.get_system_path_max()
MAX_PADDED_LEN = MAX_PATH_LEN - spack_path.SPACK_MAX_INSTALL_PATH_LENGTH

View File

@@ -62,8 +62,11 @@
import spack.version
from spack.fetch_strategy import URLFetchStrategy
from spack.installer import PackageInstaller
from spack.main import SpackCommand
from spack.util.pattern import Bunch
mirror_cmd = SpackCommand("mirror")
@pytest.fixture(autouse=True)
def check_config_fixture(request):
@@ -989,6 +992,38 @@ def install_mockery(temporary_store: spack.store.Store, mutable_config, mock_pac
temporary_store.failure_tracker.clear_all()
@pytest.fixture(scope="module")
def temporary_mirror_dir(tmpdir_factory):
dir = tmpdir_factory.mktemp("mirror")
dir.ensure("build_cache", dir=True)
yield str(dir)
dir.join("build_cache").remove()
@pytest.fixture(scope="function")
def temporary_mirror(temporary_mirror_dir):
mirror_url = url_util.path_to_file_url(temporary_mirror_dir)
mirror_cmd("add", "--scope", "site", "test-mirror-func", mirror_url)
yield temporary_mirror_dir
mirror_cmd("rm", "--scope=site", "test-mirror-func")
@pytest.fixture(scope="function")
def mutable_temporary_mirror_dir(tmpdir_factory):
dir = tmpdir_factory.mktemp("mirror")
dir.ensure("build_cache", dir=True)
yield str(dir)
dir.join("build_cache").remove()
@pytest.fixture(scope="function")
def mutable_temporary_mirror(mutable_temporary_mirror_dir):
mirror_url = url_util.path_to_file_url(mutable_temporary_mirror_dir)
mirror_cmd("add", "--scope", "site", "test-mirror-func", mirror_url)
yield mutable_temporary_mirror_dir
mirror_cmd("rm", "--scope=site", "test-mirror-func")
@pytest.fixture(scope="function")
def temporary_store(tmpdir, request):
"""Hooks a temporary empty store for the test function."""
@@ -1980,6 +2015,11 @@ def pytest_runtest_setup(item):
if not_on_windows_marker and sys.platform == "win32":
pytest.skip(*not_on_windows_marker.args)
# Skip items marked "only windows" if they're run anywhere but Windows
only_windows_marker = item.get_closest_marker(name="only_windows")
if only_windows_marker and sys.platform != "win32":
pytest.skip(*only_windows_marker.args)
def _sequential_executor(*args, **kwargs):
return spack.util.parallel.SequentialExecutor()

View File

@@ -1,5 +1,5 @@
bootstrap:
sources:
- name: 'github-actions'
-    metadata: $spack/share/spack/bootstrap/github-actions-v0.5
+    metadata: $spack/share/spack/bootstrap/github-actions-v0.6
trusted: {}

View File

@@ -0,0 +1,9 @@
enable:
- tcl
tcl:
all:
autoload: none
mpileaks:
suffixes:
mpileaks: 'debug={variants.debug.value}'
'^mpi': 'mpi={^mpi.name}-v{^mpi.version}'

View File

@@ -1181,3 +1181,20 @@ def test_reindex_with_upstreams(tmp_path, monkeypatch, mock_packages, config):
assert not reindexed_local_store.db.query_local("callpath")
assert reindexed_local_store.db.query("callpath") == [callpath]
assert reindexed_local_store.db.query_local("mpileaks") == [mpileaks]
@pytest.mark.regression("47101")
def test_query_with_predicate_fn(database):
all_specs = database.query()
# Specs whose name starts with a given prefix
specs = database.query(predicate_fn=lambda x: x.spec.name.startswith("mpil"))
assert specs and all(x.name.startswith("mpil") for x in specs)
assert len(specs) < len(all_specs)
# Recipe is currently known/unknown
specs = database.query(predicate_fn=lambda x: spack.repo.PATH.exists(x.spec.name))
assert specs == all_specs
specs = database.query(predicate_fn=lambda x: not spack.repo.PATH.exists(x.spec.name))
assert not specs
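
The predicate receives the whole database record (note the lambdas above access x.spec), so record-level filters compose with the structured query arguments. A hedged usage sketch (hypothetical query, not taken from the test above):

# Sketch: explicitly installed records whose spec name starts with "mpi".
specs = database.query(explicit=True, predicate_fn=lambda rec: rec.spec.name.startswith("mpi"))
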

View File

@@ -7,6 +7,7 @@
import spack.config
import spack.detection
import spack.detection.common
import spack.detection.path
import spack.spec
@@ -26,3 +27,28 @@ def test_detection_update_config(mutable_config):
external_gcc = externals[0]
assert external_gcc["spec"] == "cmake@3.27.5"
assert external_gcc["prefix"] == "/usr/bin"
def test_dedupe_paths(tmp_path):
"""Test that ``dedupe_paths`` deals with symlinked directories, retaining the target"""
x = tmp_path / "x"
y = tmp_path / "y"
z = tmp_path / "z"
x.mkdir()
y.mkdir()
z.symlink_to("x", target_is_directory=True)
# dedupe repeated dirs, should preserve order
assert spack.detection.path.dedupe_paths([str(x), str(y), str(x)]) == [str(x), str(y)]
assert spack.detection.path.dedupe_paths([str(y), str(x), str(y)]) == [str(y), str(x)]
# dedupe repeated symlinks
assert spack.detection.path.dedupe_paths([str(z), str(y), str(z)]) == [str(z), str(y)]
assert spack.detection.path.dedupe_paths([str(y), str(z), str(y)]) == [str(y), str(z)]
# when both the symlink and its target are present, only the target is retained,
# at the position of the first occurrence.
assert spack.detection.path.dedupe_paths([str(x), str(y), str(z)]) == [str(x), str(y)]
assert spack.detection.path.dedupe_paths([str(z), str(y), str(x)]) == [str(x), str(y)]
assert spack.detection.path.dedupe_paths([str(y), str(z), str(x)]) == [str(y), str(x)]
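
A minimal sketch of the deduplication behavior these assertions describe: key each path on os.path.realpath so a symlink and its target collapse into one entry, keep first-seen order, and let a real directory replace a previously seen symlink to it in place. A sketch only, not Spack's implementation:

import os

def dedupe_paths_sketch(paths):
    seen = {}  # realpath -> index into result
    result = []
    for path in paths:
        key = os.path.realpath(path)
        if key not in seen:
            seen[key] = len(result)
            result.append(path)
        elif not os.path.islink(path):
            # A real directory outranks a previously recorded symlink to it,
            # but keeps the position of the first occurrence.
            result[seen[key]] = path
    return result
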

View File

@@ -892,3 +892,17 @@ def test_stack_enforcement_is_strict(tmp_path, matrix_line, config, mock_package
with pytest.raises(Exception):
with ev.Environment(tmp_path) as e:
e.concretize()
def test_only_roots_are_explicitly_installed(tmp_path, mock_packages, config, temporary_store):
"""When installing specific non-root specs from an environment, we continue to mark them
as implicitly installed. What makes installs explicit is that they are roots of the env."""
env = ev.create_in_dir(tmp_path)
env.add("mpileaks")
env.concretize()
mpileaks = env.concrete_roots()[0]
callpath = mpileaks["callpath"]
env.install_specs([callpath], fake=True)
assert callpath in temporary_store.db.query(explicit=False)
env.install_specs([mpileaks], fake=True)
assert temporary_store.db.query(explicit=True) == [mpileaks]

View File

@@ -353,21 +353,21 @@ def test_install_prefix_collision_fails(config, mock_fetch, mock_packages, tmpdi
Test that different specs with coinciding install prefixes will fail
to install.
"""
projections = {"projections": {"all": "all-specs-project-to-this-prefix"}}
projections = {"projections": {"all": "one-prefix-per-package-{name}"}}
with spack.store.use_store(str(tmpdir), extra_data=projections):
with spack.config.override("config:checksum", False):
pkg_a = Spec("libelf@0.8.13").concretized().package
pkg_b = Spec("libelf@0.8.12").concretized().package
-            PackageInstaller([pkg_a], explicit=True).install()
+            PackageInstaller([pkg_a], explicit=True, fake=True).install()
            with pytest.raises(InstallError, match="Install prefix collision"):
-                PackageInstaller([pkg_b], explicit=True).install()
+                PackageInstaller([pkg_b], explicit=True, fake=True).install()
def test_store(install_mockery, mock_fetch):
spec = Spec("cmake-client").concretized()
pkg = spec.package
-    PackageInstaller([pkg], explicit=True).install()
+    PackageInstaller([pkg], fake=True, explicit=True).install()
@pytest.mark.disable_clean_stage_check

View File

@@ -29,6 +29,7 @@
import spack.store
import spack.util.lock as lk
from spack.installer import PackageInstaller
from spack.main import SpackCommand
def _mock_repo(root, namespace):
@@ -73,7 +74,7 @@ def create_build_task(
pkg: spack.package_base.PackageBase, install_args: Optional[dict] = None
) -> inst.BuildTask:
request = inst.BuildRequest(pkg, {} if install_args is None else install_args)
-    return inst.BuildTask(pkg, request, False, 0, 0, inst.STATUS_ADDED, set())
+    return inst.BuildTask(pkg, request=request, status=inst.BuildStatus.QUEUED)
def create_installer(
@@ -640,6 +641,88 @@ def test_prepare_for_install_on_installed(install_mockery, monkeypatch):
installer._prepare_for_install(task)
def test_installer_init_requests(install_mockery):
"""Test of installer initial requests."""
spec_name = "dependent-install"
with spack.config.override("config:install_missing_compilers", True):
installer = create_installer([spec_name], {})
# There is only one explicit request in this case
assert len(installer.build_requests) == 1
request = installer.build_requests[0]
assert request.pkg.name == spec_name
@pytest.mark.parametrize("transitive", [True, False])
def test_install_spliced(install_mockery, mock_fetch, monkeypatch, capsys, transitive):
"""Test installing a spliced spec"""
spec = spack.spec.Spec("splice-t").concretized()
dep = spack.spec.Spec("splice-h+foo").concretized()
# Do the splice.
out = spec.splice(dep, transitive)
installer = create_installer([out], {"verbose": True, "fail_fast": True})
installer.install()
for node in out.traverse():
assert node.installed
assert node.build_spec.installed
@pytest.mark.parametrize("transitive", [True, False])
def test_install_spliced_build_spec_installed(install_mockery, capfd, mock_fetch, transitive):
"""Test installing a spliced spec with the build spec already installed"""
spec = spack.spec.Spec("splice-t").concretized()
dep = spack.spec.Spec("splice-h+foo").concretized()
# Do the splice.
out = spec.splice(dep, transitive)
PackageInstaller([out.build_spec.package]).install()
installer = create_installer([out], {"verbose": True, "fail_fast": True})
installer._init_queue()
for _, task in installer.build_pq:
assert isinstance(task, inst.RewireTask if task.pkg.spec.spliced else inst.BuildTask)
installer.install()
for node in out.traverse():
assert node.installed
assert node.build_spec.installed
@pytest.mark.not_on_windows("lacking windows support for binary installs")
@pytest.mark.parametrize("transitive", [True, False])
@pytest.mark.parametrize(
"root_str", ["splice-t^splice-h~foo", "splice-h~foo", "splice-vt^splice-a"]
)
def test_install_splice_root_from_binary(
install_mockery, mock_fetch, mutable_temporary_mirror, transitive, root_str
):
"""Test installing a spliced spec with the root available in binary cache"""
# Test splicing and rewiring a spec with the same name, different hash.
original_spec = spack.spec.Spec(root_str).concretized()
spec_to_splice = spack.spec.Spec("splice-h+foo").concretized()
PackageInstaller([original_spec.package, spec_to_splice.package]).install()
out = original_spec.splice(spec_to_splice, transitive)
buildcache = SpackCommand("buildcache")
buildcache(
"push",
"--unsigned",
"--update-index",
mutable_temporary_mirror,
str(original_spec),
str(spec_to_splice),
)
uninstall = SpackCommand("uninstall")
uninstall("-ay")
PackageInstaller([out.package], unsigned=True).install()
assert len(spack.store.STORE.db.query()) == len(list(out.traverse()))
def test_install_task_use_cache(install_mockery, monkeypatch):
installer = create_installer(["trivial-install-test-package"], {})
request = installer.build_requests[0]
@@ -650,6 +733,33 @@ def test_install_task_use_cache(install_mockery, monkeypatch):
assert request.pkg_id in installer.installed
def test_install_task_requeue_build_specs(install_mockery, monkeypatch, capfd):
"""Check that a missing build_spec spec is added by _install_task."""
# This test also ensures coverage of most of the new
# _requeue_with_build_spec_tasks method.
def _missing(*args, **kwargs):
return inst.ExecuteResult.MISSING_BUILD_SPEC
# Set the configuration to ensure _requeue_with_build_spec_tasks actually
# does something.
with spack.config.override("config:install_missing_compilers", True):
installer = create_installer(["depb"], {})
installer._init_queue()
request = installer.build_requests[0]
task = create_build_task(request.pkg)
# Drop one of the specs so its task is missing before _install_task
popped_task = installer._pop_task()
assert inst.package_id(popped_task.pkg.spec) not in installer.build_tasks
monkeypatch.setattr(task, "execute", _missing)
installer._install_task(task, None)
# Ensure the dropped task/spec was added back by _install_task
assert inst.package_id(popped_task.pkg.spec) in installer.build_tasks
def test_release_lock_write_n_exception(install_mockery, tmpdir, capsys):
"""Test _release_lock for supposed write lock with exception."""
installer = create_installer(["trivial-install-test-package"], {})
@@ -698,7 +808,7 @@ def test_requeue_task(install_mockery, capfd):
ids = list(installer.build_tasks)
assert len(ids) == 1
qtask = installer.build_tasks[ids[0]]
-    assert qtask.status == inst.STATUS_INSTALLING
+    assert qtask.status == inst.BuildStatus.INSTALLING
assert qtask.sequence > task.sequence
assert qtask.attempts == task.attempts + 1
@@ -745,8 +855,10 @@ def _chgrp(path, group, follow_symlinks=True):
monkeypatch.setattr(prefs, "get_package_group", _get_group)
monkeypatch.setattr(fs, "chgrp", _chgrp)
installer = create_installer(["trivial-install-test-package"], {})
-    spec = installer.build_requests[0].pkg.spec
+    build_task = create_build_task(
+        spack.spec.Spec("trivial-install-test-package").concretized().package
+    )
+    spec = build_task.request.pkg.spec
fs.touchp(spec.prefix)
metadatadir = spack.store.STORE.layout.metadata_path(spec)
@@ -756,7 +868,7 @@ def _chgrp(path, group, follow_symlinks=True):
metadatadir = None
# Should fail with a "not a directory" error
with pytest.raises(OSError, match=metadatadir):
-        installer._setup_install_dir(spec.package)
+        build_task._setup_install_dir(spec.package)
out = str(capfd.readouterr()[0])
@@ -843,79 +955,74 @@ def test_install_failed_not_fast(install_mockery, monkeypatch, capsys):
assert "Skipping build of pkg-a" in out
-def test_install_fail_on_interrupt(install_mockery, monkeypatch):
+def _interrupt(installer, task, install_status, **kwargs):
+    if task.pkg.name == "pkg-a":
+        raise KeyboardInterrupt("mock keyboard interrupt for pkg-a")
+    else:
+        return installer._real_install_task(task, None)
+        # installer.installed.add(task.pkg.name)
+def test_install_fail_on_interrupt(install_mockery, mock_fetch, monkeypatch):
    """Test ctrl-c interrupted install."""
    spec_name = "pkg-a"
    err_msg = "mock keyboard interrupt for {0}".format(spec_name)
-    def _interrupt(installer, task, install_status, **kwargs):
-        if task.pkg.name == spec_name:
-            raise KeyboardInterrupt(err_msg)
-        else:
-            installer.installed.add(task.pkg.name)
-    installer = create_installer([spec_name], {})
+    installer = create_installer([spec_name], {"fake": True})
+    setattr(inst.PackageInstaller, "_real_install_task", inst.PackageInstaller._install_task)
    # Raise a KeyboardInterrupt error to trigger early termination
    monkeypatch.setattr(inst.PackageInstaller, "_install_task", _interrupt)
    with pytest.raises(KeyboardInterrupt, match=err_msg):
        installer.install()
-    assert "pkg-b" in installer.installed  # ensure dependency of pkg-a is 'installed'
-    assert spec_name not in installer.installed
+    assert not any(i.startswith("pkg-a-") for i in installer.installed)
+    assert any(
+        i.startswith("pkg-b-") for i in installer.installed
+    )  # ensure dependency of a is 'installed'
-def test_install_fail_single(install_mockery, monkeypatch):
+class MyBuildException(Exception):
+    pass
+def _install_fail_my_build_exception(installer, task, install_status, **kwargs):
+    print(task, task.pkg.name)
+    if task.pkg.name == "pkg-a":
+        raise MyBuildException("mock internal package build error for pkg-a")
+    else:
+        # No need for more complex logic here because no splices
+        task.execute(install_status)
+        installer._update_installed(task)
+def test_install_fail_single(install_mockery, mock_fetch, monkeypatch):
    """Test expected results for failure of single package."""
-    spec_name = "pkg-a"
-    err_msg = "mock internal package build error for {0}".format(spec_name)
-    class MyBuildException(Exception):
-        pass
-    def _install(installer, task, install_status, **kwargs):
-        if task.pkg.name == spec_name:
-            raise MyBuildException(err_msg)
-        else:
-            installer.installed.add(task.pkg.name)
-    installer = create_installer([spec_name], {})
+    installer = create_installer(["pkg-a"], {"fake": True})
    # Raise a MyBuildException error to trigger early termination
-    monkeypatch.setattr(inst.PackageInstaller, "_install_task", _install)
+    monkeypatch.setattr(inst.PackageInstaller, "_install_task", _install_fail_my_build_exception)
-    with pytest.raises(MyBuildException, match=err_msg):
+    with pytest.raises(MyBuildException, match="mock internal package build error for pkg-a"):
        installer.install()
-    assert "pkg-b" in installer.installed  # ensure dependency of a is 'installed'
-    assert spec_name not in installer.installed
+    # ensure dependency of a is 'installed' and a is not
+    assert any(pkg_id.startswith("pkg-b-") for pkg_id in installer.installed)
+    assert not any(pkg_id.startswith("pkg-a-") for pkg_id in installer.installed)
-def test_install_fail_multi(install_mockery, monkeypatch):
+def test_install_fail_multi(install_mockery, mock_fetch, monkeypatch):
    """Test expected results for failure of multiple packages."""
-    spec_name = "pkg-c"
-    err_msg = "mock internal package build error"
-    class MyBuildException(Exception):
-        pass
-    def _install(installer, task, install_status, **kwargs):
-        if task.pkg.name == spec_name:
-            raise MyBuildException(err_msg)
-        else:
-            installer.installed.add(task.pkg.name)
-    installer = create_installer([spec_name, "pkg-a"], {})
+    installer = create_installer(["pkg-a", "pkg-c"], {"fake": True})
    # Raise a MyBuildException error to trigger early termination
-    monkeypatch.setattr(inst.PackageInstaller, "_install_task", _install)
+    monkeypatch.setattr(inst.PackageInstaller, "_install_task", _install_fail_my_build_exception)
    with pytest.raises(spack.error.InstallError, match="Installation request failed"):
        installer.install()
-    assert "pkg-a" in installer.installed  # ensure the the second spec installed
-    assert spec_name not in installer.installed
+    # ensure the second spec installed but not the first
+    assert any(pkg_id.startswith("pkg-c-") for pkg_id in installer.installed)
+    assert not any(pkg_id.startswith("pkg-a-") for pkg_id in installer.installed)
def test_install_fail_fast_on_detect(install_mockery, monkeypatch, capsys):

View File

@@ -1000,7 +1000,7 @@ def setup_test_dirs():
shutil.rmtree(tmpdir.join("f"))
-@pytest.mark.skipif(sys.platform != "win32", reason="No-op on non Windows")
+@pytest.mark.only_windows("Test is for Windows specific behavior")
def test_windows_sfn(tmpdir):
# first check some standard Windows locations
# that we know require SFN (short file name) paths

View File

@@ -5,7 +5,6 @@
"""Tests for ``llnl/util/symlink.py``"""
import os
import sys
-import tempfile
import pytest
@@ -37,7 +36,7 @@ def test_symlink_dir(tmpdir):
assert symlink.islink(link_dir)
-@pytest.mark.skipif(sys.platform != "win32", reason="Test is only for Windows")
+@pytest.mark.only_windows("Test is for Windows specific behavior")
def test_symlink_source_not_exists(tmpdir):
"""Test the symlink.symlink method for the case where a source path does not exist"""
with tmpdir.as_cwd():
@@ -71,7 +70,7 @@ def test_symlink_src_relative_to_link(tmpdir):
assert os.path.lexists(link_dir)
-@pytest.mark.skipif(sys.platform != "win32", reason="Test is only for Windows")
+@pytest.mark.only_windows("Test is for Windows specific behavior")
def test_symlink_src_not_relative_to_link(tmpdir):
"""Test the symlink.symlink functionality where the source value does not exist relative to
the link and not relative to the cwd. NOTE that this symlink api call is EXPECTED to raise
@@ -98,7 +97,7 @@ def test_symlink_src_not_relative_to_link(tmpdir):
assert not os.path.lexists(link_dir)
-@pytest.mark.skipif(sys.platform != "win32", reason="Test is only for Windows")
+@pytest.mark.only_windows("Test is for Windows specific behavior")
def test_symlink_link_already_exists(tmpdir):
"""Test the symlink.symlink method for the case where a link already exists"""
with tmpdir.as_cwd():
@@ -113,7 +112,7 @@ def test_symlink_link_already_exists(tmpdir):
@pytest.mark.skipif(not symlink._windows_can_symlink(), reason="Test requires elevated privileges")
-@pytest.mark.skipif(sys.platform != "win32", reason="Test is only for Windows")
+@pytest.mark.only_windows("Test is for Windows specific behavior")
def test_symlink_win_file(tmpdir):
"""Check that symlink.symlink makes a symlink file when run with elevated permissions"""
with tmpdir.as_cwd():
@@ -130,7 +129,7 @@ def test_symlink_win_file(tmpdir):
@pytest.mark.skipif(not symlink._windows_can_symlink(), reason="Test requires elevated privileges")
-@pytest.mark.skipif(sys.platform != "win32", reason="Test is only for Windows")
+@pytest.mark.only_windows("Test is for Windows specific behavior")
def test_symlink_win_dir(tmpdir):
"""Check that symlink.symlink makes a symlink dir when run with elevated permissions"""
with tmpdir.as_cwd():
@@ -147,7 +146,7 @@ def test_symlink_win_dir(tmpdir):
assert not symlink._windows_is_junction(link_dir)
-@pytest.mark.skipif(sys.platform != "win32", reason="Test is only for Windows")
+@pytest.mark.only_windows("Test is for Windows specific behavior")
def test_windows_create_junction(tmpdir):
"""Test the symlink._windows_create_junction method"""
with tmpdir.as_cwd():
@@ -163,7 +162,7 @@ def test_windows_create_junction(tmpdir):
assert not os.path.islink(junction_link_dir)
-@pytest.mark.skipif(sys.platform != "win32", reason="Test is only for Windows")
+@pytest.mark.only_windows("Test is for Windows specific behavior")
def test_windows_create_hard_link(tmpdir):
"""Test the symlink._windows_create_hard_link method"""
with tmpdir.as_cwd():
@@ -179,7 +178,7 @@ def test_windows_create_hard_link(tmpdir):
assert not os.path.islink(link_file)
-@pytest.mark.skipif(sys.platform != "win32", reason="Test is only for Windows")
+@pytest.mark.only_windows("Test is for Windows specific behavior")
def test_windows_create_link_dir(tmpdir):
"""Test the functionality of the windows_create_link method with a directory
which should result in making a junction.
@@ -198,7 +197,7 @@ def test_windows_create_link_dir(tmpdir):
assert not os.path.islink(link_dir)
-@pytest.mark.skipif(sys.platform != "win32", reason="Test is only for Windows")
+@pytest.mark.only_windows("Test is for Windows specific behavior")
def test_windows_create_link_file(tmpdir):
"""Test the functionality of the windows_create_link method with a file
which should result in the creation of a hard link. It also tests the
@@ -215,7 +214,7 @@ def test_windows_create_link_file(tmpdir):
assert not symlink._windows_is_junction(link_file)
-@pytest.mark.skipif(sys.platform != "win32", reason="Test is only for Windows")
+@pytest.mark.only_windows("Test is for Windows specific behavior")
def test_windows_read_link(tmpdir):
"""Makes sure symlink.readlink can read the link source for hard links and
junctions on windows."""

View File

@@ -8,6 +8,7 @@
import pytest
from llnl.util.filesystem import working_dir
from llnl.util.symlink import resolve_link_target_relative_to_the_link
import spack.caches
@@ -19,6 +20,7 @@
import spack.util.executable
import spack.util.spack_json as sjson
import spack.util.url as url_util
from spack.cmd.common.arguments import mirror_name_or_url
from spack.spec import Spec
from spack.util.executable import which
from spack.util.spack_yaml import SpackYAMLError
@@ -357,3 +359,12 @@ def test_update_connection_params(direction):
assert m.get_access_token(direction) == "token"
assert m.get_profile(direction) == "profile"
assert m.get_endpoint_url(direction) == "https://example.com"
def test_mirror_name_or_url_dir_parsing(tmp_path):
curdir = tmp_path / "mirror"
curdir.mkdir()
with working_dir(curdir):
assert mirror_name_or_url(".").fetch_url == curdir.as_uri()
assert mirror_name_or_url("..").fetch_url == tmp_path.as_uri()

Some files were not shown because too many files have changed in this diff.