Compare commits

...

64 Commits

Author SHA1 Message Date
Gregory Becker
47a1ed8d91 wip 2022-11-08 17:03:45 -08:00
Mikael Simberg
e53a19a08d Patch fmt for hipcc/dpcpp (#33733)
* Patch fmt for hipcc/dpcpp
* Add msimberg as fmt maintainer
2022-11-08 12:14:14 -08:00
Xavier Delaruelle
a8470a7efe environment-modules: add version 5.2.0 (#33762) 2022-11-08 13:10:22 -07:00
Wouter Deconinck
0f26d4402e hepmc3: new version 3.2.5 (#33748)
Changelog at https://gitlab.cern.ch/hepmc/HepMC3/-/tags/3.2.5

Maintainer: @vvolkl
2022-11-08 11:54:28 -08:00
Harmen Stoppels
4d28a64661 fix racy sbang (#33549)
Spack currently creates a temporary sbang that is moved "atomically" into place,
but this temporary file causes races when multiple processes start installing sbang.

Let's just stick to an idempotent approach. Notice that we only re-install sbang
if Spack updates it (we compare file contents), and sbang was only touched
18 times in the past 6 years, whereas we hit the sbang tempfile issue
frequently with parallel installs on a fresh Spack instance in CI.

Also fixes a bug where permissions weren't updated if the config changed but
the latest version of the sbang file was already installed.
2022-11-08 11:05:05 -08:00
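
The idempotent scheme described above boils down to "compare contents, then copy via a uniquely named temp file and rename." A minimal sketch in Python (illustrative only, not Spack's actual code; `install_sbang_like` and the mode value are hypothetical):

```python
import filecmp
import os
import shutil
import tempfile

def install_sbang_like(src, dest, mode=0o755):
    # Idempotent: if contents already match, only refresh permissions
    # (covers the bugfix noted above for config-only changes).
    if os.path.exists(dest) and filecmp.cmp(src, dest, shallow=False):
        os.chmod(dest, mode)
        return
    # Copy to a uniquely named temp file in the destination directory,
    # then rename into place: rename is atomic on POSIX within one
    # filesystem, and the unique name avoids racing installers.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest) or ".")
    os.close(fd)
    shutil.copyfile(src, tmp)
    os.chmod(tmp, mode)
    os.rename(tmp, dest)
```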
eugeneswalker
4a5e68816b hypre +rocm: needs explicit rocprim dep (#33745) 2022-11-08 10:50:02 -08:00
Greg Becker
97fe7ad32b use pwd for usernames on unix (#19980) 2022-11-08 10:36:10 -08:00
Harmen Stoppels
052bf6b9df python: 3.11.0 (#33507) 2022-11-08 09:38:45 -08:00
Stephen Sachs
80f5939a94 Install from source if binary cache checksum validation fails (#31696)
* Fix https://github.com/spack/spack/issues/31640

Some packages in the binary cache fail checksum validation. Instead of having to
go back and manually install all failed packages with the `--no-cache` option,
requeue those failed packages for installation from source:

```shell
$ spack install py-pip
==> Installing py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7
==> Fetching https://binaries.spack.io/releases/v0.18/build_cache/linux-amzn2-graviton2-gcc-7.3.1-py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7.spec.json.sig
gpg: Signature made Wed 20 Jul 2022 12:13:43 PM UTC using RSA key ID 3DB0C723
gpg: Good signature from "Spack Project Official Binaries <maintainers@spack.io>"
==> Fetching https://binaries.spack.io/releases/v0.18/build_cache/linux-amzn2-graviton2/gcc-7.3.1/py-pip-21.3.1/linux-amzn2-graviton2-gcc-7.3.1-py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7.spack
==> Extracting py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7 from binary cache
==> Error: Failed to install py-pip due to NoChecksumException: Requeue for manual installation.
==> Installing py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7
==> Using cached archive: /shared/spack/var/spack/cache/_source-cache/archive/de/deaf32dcd9ab821e359cd8330786bcd077604b5c5730c0b096eda46f95c24a2d
==> No patches needed for py-pip
==> py-pip: Executing phase: 'install'
==> py-pip: Successfully installed py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7
  Fetch: 0.01s.  Build: 2.81s.  Total: 2.82s.
[+] /shared/spack/opt/spack/linux-amzn2-graviton2/gcc-7.3.1/py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7
```

* Cleanup style

* better wording

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>

* Update lib/spack/spack/installer.py

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>

* changes quotes for style checks

* Update lib/spack/spack/installer.py

Co-authored-by: kwryankrattiger <80296582+kwryankrattiger@users.noreply.github.com>

* Addressing @kwryankrattiger's comment to use local `use_cache`

Co-authored-by: Stephen Sachs <stesachs@amazon.com>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: kwryankrattiger <80296582+kwryankrattiger@users.noreply.github.com>
2022-11-08 09:19:55 -08:00
Dave Love
bca8b52a8d cosma: Add shared option (#33751) 2022-11-08 18:06:58 +01:00
Annop Wongwathanarat
e4c2d1afc6 gromacs: enable linking with armpl-gcc FFT (#33750) 2022-11-08 08:44:03 -07:00
Massimiliano Culpo
89976af732 scons: fix Scons builder after multi build-system refactoring (#33753) 2022-11-08 16:03:15 +01:00
Massimiliano Culpo
a079722b1c r: fix order of execution for Makeconf filtering (#33752)
fixes #33747
2022-11-08 14:58:33 +01:00
Harmen Stoppels
f332ac6d21 More jobs in Gitlab CI (#33688)
Use at most 32 jobs when available.
2022-11-08 12:53:11 +01:00
Massimiliano Culpo
e4218595de Rework unit test to avoid tripping into concretization slowdown (#33749) 2022-11-08 10:56:24 +01:00
Greg Becker
c9561c5a0e intel oneapi classic bootstrapping (#31285)
The `intel` compiler at versions > 20 is provided by the `intel-oneapi-compilers-classic`
package (a thin wrapper around the `intel-oneapi-compilers` package), and the `oneapi`
compiler is provided by the `intel-oneapi-compilers` package. 

Prior to this work, neither of these compilers could be bootstrapped by Spack as part of
an install with `install_missing_compilers: True`.

Changes made to make these two packages bootstrappable:

1. The `intel-oneapi-compilers-classic` package includes a bin directory and symlinks
   to the compiler executables, not just logical pointers in Spack.
2. Spack can look for bootstrapped compilers in directories other than `$prefix/bin`,
   defined on a per-package basis.
3. `intel-oneapi-compilers` specifies a non-default search directory for the
   compiler executables.
4. The `spack.compilers` module can now make more advanced associations between
   packages and compilers, not just simple name translations.
5. Spack support for lmod hierarchies accounts for differences between package
   names and the associated compiler names for `intel-oneapi-compilers/oneapi`,
   `intel-oneapi-compilers-classic/intel@20:`, `llvm+clang/clang`, and
   `llvm-amdgpu/rocmcc`.

- [x] full end-to-end testing
- [x] add unit tests
2022-11-07 21:50:16 -08:00
Sergey Kosukhin
f099a68e65 mpich: patch @3.4 to fix checking whether the datatype is contiguous (#33328)
Co-authored-by: Thomas Jahns <jahns@dkrz.de>
2022-11-07 22:02:21 -07:00
Adam J. Stewart
54abc7fb7e PyTorch: add v1.13.0 (#33596)
* PyTorch: add v1.13.0
* py-torchaudio: add v0.13.0
* py-torchaudio: add all versions
* py-torchvision: jpeg required for all backends
* py-torchtext: add v0.14.0
* py-torchtext: fix build
* py-torchaudio: fix build
* py-torchtext: update version tag
* Use Spack-built sox
* Explicitly disable sox build
* https -> http
2022-11-07 19:42:35 -08:00
Cédric Chevalier
23eb2dc9d6 Add zstd support for elfutils (#33695)
* Add zstd support for elfutils
Not defining `+zstd` implies the `--without-zstd` flag to configure.
This avoids automatic library detection and thus makes the build depend
only on Spack-installed dependencies.
* Use autotools helper "with_or_without"
* Revert use of with_or_without
Using `with_or_without()` with the `variant` keyword does not seem to work.
2022-11-07 20:30:23 -07:00
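
The explicit `--with`/`--without` handling described in the first bullet is plain AutotoolsPackage idiom; a hedged sketch of what such a recipe fragment can look like (illustrative, not the actual elfutils package code):

```python
# Fragment of a hypothetical AutotoolsPackage recipe: pass an explicit
# flag for both variant states so configure never auto-detects a
# system-provided zstd behind Spack's back.
def configure_args(self):
    args = []
    if "+zstd" in self.spec:
        args.append("--with-zstd")
    else:
        args.append("--without-zstd")
    return args
```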
Peter Scheibel
1a3415619e "spack uninstall": don't modify env (#33711)
"spack install foo" no longer adds package "foo" to the environment
(i.e. to the list of root specs) by default: you must specify "--add".
Likewise "spack uninstall foo" no longer removes package "foo" from
the environment: you must specify --remove. Generally this means
that install/uninstall commands will no longer modify the users list
of root specs (which many users found problematic: they had to
deactivate an environment if they wanted to uninstall a spec without
changing their spack.yaml description).

In more detail: if you have environments e1 and e2, and specs [P, Q, R]
such that P depends on R, Q depends on R, [P, R] are in e1, and [Q, R]
are in e2:

* `spack uninstall --dependents --remove r` in e1: removes R from e1
  (but does not uninstall it) and uninstalls (and removes) P
* `spack uninstall -f --dependents r` in e1: will uninstall P, Q, and
   R (i.e. e2 will have dependent specs uninstalled as a side effect)
* `spack uninstall -f --dependents --remove r` in e1: this uninstalls
   P, Q, and R, and removes [P, R] from e1
* `spack uninstall -f --remove r` in e1: uninstalls R (so it is
  "missing" in both environments) and removes R from e1 (note that e1
  would still install R as a dependency of P, but it would no longer
  be listed as a root spec)
* `spack uninstall --dependents r` in e1: will fail because e2 needs R

Individual unit tests were created for each of these scenarios.
2022-11-08 03:24:51 +00:00
Jordan Galby
84a3d32aa3 Fix missing "*.spack*" files in views (#30980)
All files/dirs containing ".spack" anywhere in their name were ignored when
generating a spack view.

For example, this happened with the 'r' package.
2022-11-08 02:58:19 +00:00
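
The bug amounts to matching ".spack" anywhere in a path instead of matching the metadata directory itself; roughly (hypothetical predicates, not the actual view code):

```python
import os

def old_ignore(path):
    # Overly broad: skips any file or directory whose path merely
    # contains ".spack" somewhere in its name.
    return ".spack" in path

def fixed_ignore(path):
    # Narrower: skip only paths with an actual ".spack" component,
    # i.e. Spack's metadata directory.
    return ".spack" in path.split(os.sep)
```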
akhursev
69d4637671 2022.3.1 oneAPI release promotion (#33742) 2022-11-07 18:14:23 -07:00
Harmen Stoppels
3693622edf reorder packages.yaml: requirements first, then preferences (#33741)
* reorder packages.yaml: requirements first, then preferences
* expand preferences vs reuse vs requirements
2022-11-07 16:16:11 -08:00
Scott Wittenburg
b3b675157c gitlab ci: Add "script_failure" as a reason for retrying service jobs (#33420)
Somehow a network error when cloning the repo for CI gets
categorized by GitLab as a script failure. To make sure we retry
jobs that failed for that reason or a similar one, include
"script_failure" as one of the reasons for retrying service jobs
(which include "no specs to rebuild" jobs, update buildcache
index jobs, and temp storage cleanup jobs).
2022-11-07 16:11:04 -07:00
Tom Scogland
6241cdb27b encode development requirements in pyproject.toml (#32616)
Add a `project` block to the TOML config along with development and CI
dependencies, plus a minimal `build-system` block that does essentially
nothing, so that Spack can be bootstrapped to a full development
environment with:

```shell
$ hatch -e dev shell
```

or for a minimal environment without hatch:

```shell
$ python3 -m venv venv
$ source venv/bin/activate
$ python3 -m pip install --upgrade pip
$ python3 -m pip install -e '.[dev]'
```

This means we can re-use the requirements list throughout the workflow
yaml files and otherwise maintain this list in *one place* rather than
several disparate ones. We may be stuck with a couple of extra lists
temporarily to continue supporting Python 2.7, but aside from that there
are fewer places to get out of sync, plus a couple of new bootstrap options.

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2022-11-07 15:00:22 -08:00
dependabot[bot]
28d669cb39 build(deps): bump docker/setup-buildx-action from 2.2.0 to 2.2.1 (#33399)
Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 2.2.0 to 2.2.1.
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](c74574e6c8...8c0edbc76e)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-11-07 20:49:42 +00:00
dependabot[bot]
c0170a675b build(deps): bump actions/upload-artifact from 3.1.0 to 3.1.1 (#33471)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 3.1.0 to 3.1.1.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](3cea537223...83fd05a356)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-11-07 21:39:38 +01:00
Scott Wittenburg
28a77c2821 binary_distribution: Speed up buildcache update-index (#32796)
This change uses the AWS CLI, if available, to retrieve spec files
from the mirror to a local temp directory, then parallelizes the
reading of those files from disk using multiprocessing.ThreadPool.

If the AWS CLI is not available, a ThreadPool is used to fetch
and read the spec files from the mirror.

Using the AWS CLI results in a ~16x speedup when recreating the binary
mirror index, while just parallelizing the fetching and reading
yields a ~3x speedup.
2022-11-07 21:31:14 +01:00
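
A rough sketch of the fallback path (plain urllib stands in for Spack's web utilities; names are illustrative):

```python
import urllib.request
from multiprocessing.pool import ThreadPool

def fetch_and_read(url):
    # Fetch one spec file from the mirror and return its text.
    with urllib.request.urlopen(url) as response:
        return response.read().decode("utf-8")

def read_all_specs(spec_urls, concurrency=8):
    # Spec files are small and fetching is I/O-bound, so threads are
    # enough; no need for process-level parallelism.
    pool = ThreadPool(processes=concurrency)
    try:
        return pool.map(fetch_and_read, spec_urls)
    finally:
        pool.terminate()
        pool.join()
```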
Sergey Kosukhin
01a5788517 eckit: fix underlinking (#33739) 2022-11-07 12:54:36 -07:00
Harmen Stoppels
cc84ab1e92 Remove known issues (#33738) 2022-11-07 19:41:16 +00:00
dependabot[bot]
e0e20e3e79 Bump docker/build-push-action from 3.1.1 to 3.2.0 (#33271)
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 3.1.1 to 3.2.0.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](c84f382811...c56af95754)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-11-07 20:40:06 +01:00
dependabot[bot]
9d08feb63e Bump dorny/paths-filter from 2.10.2 to 2.11.1 (#33270)
Bumps [dorny/paths-filter](https://github.com/dorny/paths-filter) from 2.10.2 to 2.11.1.
- [Release notes](https://github.com/dorny/paths-filter/releases)
- [Changelog](https://github.com/dorny/paths-filter/blob/master/CHANGELOG.md)
- [Commits](b2feaf19c2...4512585405)

---
updated-dependencies:
- dependency-name: dorny/paths-filter
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-11-07 20:38:32 +01:00
dependabot[bot]
8be6378688 Bump docker/setup-qemu-action from 2.0.0 to 2.1.0 (#33269)
Bumps [docker/setup-qemu-action](https://github.com/docker/setup-qemu-action) from 2.0.0 to 2.1.0.
- [Release notes](https://github.com/docker/setup-qemu-action/releases)
- [Commits](8b122486ce...e81a89b173)

---
updated-dependencies:
- dependency-name: docker/setup-qemu-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-11-07 20:13:09 +01:00
Greg Becker
ec05543054 Bugfix: Compiler bootstrapping for compilers that are independently present in env (#32228)
The compiler bootstrapping logic currently does not add a task when the compiler package is already in the install task queue. This causes failures when the compiler package is added without the additional metadata telling the task to update the compilers list.

Solution: requeue compilers for bootstrapping when needed, to update `task.compiler` metadata.
2022-11-07 09:38:51 -08:00
Greg Becker
a30b60f9a6 Apply dev specs for dependencies of roots (#30909)
Currently, develop specs that are not roots and are not explicitly listed dependencies 
of the roots are not applied.

- [x] ensure dev specs are applied.

Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
2022-11-07 09:37:03 -08:00
Adam J. Stewart
8fb8381b6f Rust: don't apply constraints to nightly/beta versions (#33723) 2022-11-07 09:21:46 -08:00
Laura Bellentani
1dcb5d1fa7 quantum-espresso: improve concretization for intel libraries (#33312) 2022-11-07 16:47:07 +01:00
Yang Zongze
96b8240ea6 singularity: add new versions (#33462) 2022-11-07 16:44:09 +01:00
Harmen Stoppels
47df88404a Simplify repeated _add_dependency calls for same package (#33732) 2022-11-07 16:33:18 +01:00
Veselin Dobrev
476e647c94 GLVis: new versions: v4.1, v4.2 (#33728) 2022-11-07 16:31:59 +01:00
Sajid Ali
c3851704a2 openblas confuses flang/flang-new, so do not set TIME with ~fortran (#33163)
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Co-authored-by: eugeneswalker <38933153+eugeneswalker@users.noreply.github.com>
2022-11-07 16:20:03 +01:00
Axel Huebl
1eb35d0378 Doc: lsb-release (#32479)
Without the `lsb-release` tool installed, Spack cannot identify the
Ubuntu/Debian version.
2022-11-07 16:13:17 +01:00
Sergey Kosukhin
74c3fbdf87 netcdf-c: add variant optimize (#33642) 2022-11-07 15:49:55 +01:00
Michael Kuhn
492525fda5 socat: new package (#33713) 2022-11-07 15:45:50 +01:00
Adam J. Stewart
9dcd4fac15 py-modin: add new package (#33724)
Co-authored-by: adamjstewart <adamjstewart@users.noreply.github.com>
2022-11-07 15:43:37 +01:00
Harmen Stoppels
2ab974f530 concretizer:unify:true as a default (#31787)
`spack env create` enables a view by default (in a weird hidden
directory, but well...). This is asking for trouble with the other
default of `concretizer:unify:false`, since having different flavors of
the same spec in an environment leads to collision errors when
generating the view.

Changing this default improves the user experience: `unify:true` makes
the most sense, since whenever the issue comes up in Slack the user ends
up changing the concretization config anyway (having different flavors
of the same spec was never the intention), and install times decrease.

Further, we improve the docs and drop the duplicate root spec limitation.
2022-11-07 15:38:24 +01:00
Massimiliano Culpo
e045dabb3a archspec: update version, translate renamed uarchs (#33556)
* Update archspec version

* Add a translation table from old names
2022-11-07 04:50:38 -08:00
Tim Haines
f8e4ad5209 elfutils: add version 0.188 (#33715) 2022-11-07 12:23:10 +01:00
Greg Becker
4b84cd8af5 bugfix for matrices with dependencies by hash (#22991)
Dependencies specified by hash are unique in Spack in that the abstract
specs are created with internal structure. In this case, the constraint
generation for spec matrices fails due to flattening the structure.

It turns out that the dep_difference method for Spec.constrain does not
need to operate on transitive deps to ensure correctness. Removing transitive
deps from this method resolves the bug.

- [x] Includes regression test
2022-11-06 16:49:35 -08:00
Greg Becker
fce7bf179f solver setup: extract virtual dependencies from reusable specs (#32434)
* extract virtual dependencies from reusable specs
* bugfix to avoid establishing new node for virtual
2022-11-06 16:47:07 -08:00
Greg Becker
f3db624b86 package preferences: allow specs to be configured buildable when their virtuals are not (#18269)
* respect spec buildable that overrides virtual buildable
2022-11-06 16:45:38 -08:00
Greg Becker
22c2f3fe89 improve error message for dependency on nonexistent compiler (#32084) 2022-11-06 16:44:11 -08:00
Greg Becker
52cc798948 solver: do not punish explicitly requested compiler mismatches (#30074) 2022-11-06 16:40:00 -08:00
Michael Kuhn
258edf7dac MesonPackage: disable automatic download and install of dependencies (#33717)
Without this, Meson will use its Wraps to automatically download and
install dependencies. We want to manage dependencies explicitly,
so this functionality is disabled.
2022-11-06 19:34:43 +00:00
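
Meson's switch for this is `--wrap-mode=nodownload`; a sketch of how a recipe can pin it explicitly (illustrative; the PR applies the equivalent at the MesonPackage build-system level):

```python
from spack.package import *

class Example(MesonPackage):
    """Hypothetical package illustrating explicit wrap handling."""

    def meson_args(self):
        # "nodownload" makes Meson fail instead of fetching a wrap,
        # so every dependency must come from Spack.
        return ["--wrap-mode=nodownload"]
```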
Greg Becker
d4b45605c8 allow multiple compatible deps from CLI (#21262)
Currently, Spack can fail for a valid spec if the spec is constructed from overlapping, but not conflicting, concrete specs via the hash.

For example, if abcdef and ghijkl are the hashes of specs that both depend on zlib/mnopqr, then foo ^/abcdef ^/ghijkl will fail to construct a spec, with the error message "Cannot depend on zlib... twice".

This PR changes this behavior to check whether the specs are compatible before failing.

With this PR, foo ^/abcdef ^/ghijkl will concretize.

As a side-effect, so will foo ^zlib ^zlib and other specs that are redundant on their dependencies.
2022-11-06 11:30:37 -08:00
Morten Kristensen
8b4b26fcbd py-vermin: add latest version 1.5.0 (#33727) 2022-11-06 10:32:17 -08:00
John W. Parent
f07f75a47b CMake: add versions 3.24.3, 3.23.4, and 3.23.5 (#33700) 2022-11-06 14:21:42 +01:00
Glenn Johnson
f286a7fa9a Add version 4.2.2 to R (#33726) 2022-11-06 12:40:14 +01:00
Erik Schnetter
fffc4c4846 z3: New version 4.11.2 (#33725) 2022-11-06 11:05:57 +01:00
Greg Becker
27e1d28c0b canonicalize_path: add arch information to substitutions (#29810)
Co-authored-by: becker33 <becker33@users.noreply.github.com>
2022-11-06 10:11:59 +01:00
Emilio J. Padrón González
e550f48b17 ADD version 0.19.0 in py-gym recipe (#33701)
* ADD version 0.19.0 in py-gym recipe

* Fix py-gym download url and dependencies for v0.19.0

* Fix stupid error in previous commit: no change in py-cloudpickle dep

* Yes, I should've paid more attention! O:)

I think now it is right, thanks!
2022-11-05 16:28:58 -07:00
Adam J. Stewart
0f32f7d0e9 py-transformers: add v4.24.0 (#33716)
* py-transformers: add v4.24.0

* Internet access still required
2022-11-05 12:38:49 -05:00
Massimiliano Culpo
5558940ce6 Add support for Python 3.11 (#33505)
Argparse started raising ArgumentError exceptions
when the same parser is added twice. Therefore, we
perform the addition only if the parser is not there
already.

Port match syntax to our unparser.
2022-11-05 15:59:12 +01:00
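
The argparse change is easy to reproduce standalone; the guard looks roughly like this (hypothetical subcommand name):

```python
import argparse

parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers(dest="command")
subparsers.add_parser("install")

# On Python 3.11, registering the same subcommand a second time raises
# argparse.ArgumentError, so check before adding:
if "install" not in subparsers.choices:
    subparsers.add_parser("install")
```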
Erik Schnetter
c9fcb8aadc openssh: New version 9.1p1 (#33668)
Co-authored-by: Bernhard Kaindl <43588962+bernhardkaindl@users.noreply.github.com>
2022-11-05 15:34:59 +01:00
138 changed files with 3021 additions and 733 deletions


@@ -25,7 +25,7 @@ jobs:
python-version: ${{inputs.python_version}}
- name: Install Python packages
run: |
-pip install --upgrade pip six setuptools pytest codecov 'coverage[toml]<=6.2'
+pip install --upgrade pip six setuptools pytest codecov coverage[toml]
- name: Package audits (with coverage)
if: ${{ inputs.with_coverage == 'true' }}
run: |


@@ -80,16 +80,16 @@ jobs:
fi
- name: Upload Dockerfile
-uses: actions/upload-artifact@3cea5372237819ed00197afe530f5a7ea3e805c8
+uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb
with:
name: dockerfiles
path: dockerfiles
- name: Set up QEMU
-uses: docker/setup-qemu-action@8b122486cedac8393e77aa9734c3528886e4a1a8 # @v1
+uses: docker/setup-qemu-action@e81a89b1732b9c48d79cd809d8d81d79c4647a18 # @v1
- name: Set up Docker Buildx
-uses: docker/setup-buildx-action@c74574e6c82eeedc46366be1b0d287eff9085eb6 # @v1
+uses: docker/setup-buildx-action@8c0edbc76e98fa90f69d9a2c020dcb50019dc325 # @v1
- name: Log in to GitHub Container Registry
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # @v1
@@ -106,7 +106,7 @@ jobs:
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build & Deploy ${{ matrix.dockerfile[0] }}
-uses: docker/build-push-action@c84f38281176d4c9cdb1626ffafcd6b3911b5d94 # @v2
+uses: docker/build-push-action@c56af957549030174b10d6867f20e78cfd7debc5 # @v2
with:
context: dockerfiles/${{ matrix.dockerfile[0] }}
platforms: ${{ matrix.dockerfile[1] }}


@@ -46,7 +46,7 @@ jobs:
with:
fetch-depth: 0
# For pull requests it's not necessary to checkout the code
-- uses: dorny/paths-filter@b2feaf19c27470162a626bd6fa8438ae5b263721
+- uses: dorny/paths-filter@4512585405083f25c027a35db413c2b3b9006d50
id: filter
with:
# See https://github.com/dorny/paths-filter/issues/56 for the syntax used below


@@ -14,7 +14,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
-python-version: ['2.7', '3.6', '3.7', '3.8', '3.9', '3.10']
+python-version: ['2.7', '3.6', '3.7', '3.8', '3.9', '3.10', '3.11']
concretizer: ['clingo']
on_develop:
- ${{ github.ref == 'refs/heads/develop' }}
@@ -22,7 +22,7 @@ jobs:
- python-version: 2.7
concretizer: original
on_develop: ${{ github.ref == 'refs/heads/develop' }}
-- python-version: '3.10'
+- python-version: '3.11'
concretizer: original
on_develop: ${{ github.ref == 'refs/heads/develop' }}
exclude:
@@ -35,6 +35,9 @@ jobs:
- python-version: '3.9'
concretizer: 'clingo'
on_develop: false
+- python-version: '3.10'
+concretizer: 'clingo'
+on_develop: false
steps:
- uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8 # @v2
@@ -86,7 +89,7 @@ jobs:
SPACK_TEST_SOLVER: ${{ matrix.concretizer }}
SPACK_TEST_PARALLEL: 2
COVERAGE: true
-UNIT_TEST_COVERAGE: ${{ (matrix.python-version == '3.10') }}
+UNIT_TEST_COVERAGE: ${{ (matrix.python-version == '3.11') }}
run: |
share/spack/qa/run-unit-tests
- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
@@ -101,7 +104,7 @@ jobs:
fetch-depth: 0
- uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984 # @v2
with:
-python-version: '3.10'
+python-version: '3.11'
- name: Install System packages
run: |
sudo apt-get -y update
@@ -109,7 +112,7 @@ jobs:
sudo apt-get install -y coreutils kcov csh zsh tcsh fish dash bash
- name: Install Python packages
run: |
-pip install --upgrade pip six setuptools pytest codecov coverage[toml]==6.2 pytest-xdist
+pip install --upgrade pip six setuptools pytest codecov coverage[toml] pytest-xdist
- name: Setup git configuration
run: |
# Need this for the git tests to succeed.
@@ -158,7 +161,7 @@ jobs:
fetch-depth: 0
- uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984 # @v2
with:
-python-version: '3.10'
+python-version: '3.11'
- name: Install System packages
run: |
sudo apt-get -y update


@@ -21,7 +21,7 @@ jobs:
- uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8 # @v2
- uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984 # @v2
with:
-python-version: '3.10'
+python-version: '3.11'
cache: 'pip'
- name: Install Python Packages
run: |
@@ -40,7 +40,7 @@ jobs:
fetch-depth: 0
- uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984 # @v2
with:
-python-version: '3.10'
+python-version: '3.11'
cache: 'pip'
- name: Install Python packages
run: |
@@ -57,4 +57,4 @@ jobs:
uses: ./.github/workflows/audit.yaml
with:
with_coverage: ${{ inputs.with_coverage }}
-python_version: '3.10'
+python_version: '3.11'


@@ -109,11 +109,11 @@ jobs:
echo "installer_root=$((pwd).Path)" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf8 -Append
env:
ProgressPreference: SilentlyContinue
-- uses: actions/upload-artifact@3cea5372237819ed00197afe530f5a7ea3e805c8
+- uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb
with:
name: Windows Spack Installer Bundle
path: ${{ env.installer_root }}\pkg\Spack.exe
-- uses: actions/upload-artifact@3cea5372237819ed00197afe530f5a7ea3e805c8
+- uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb
with:
name: Windows Spack Installer
path: ${{ env.installer_root}}\pkg\Spack.msi


@@ -49,52 +49,8 @@ spack_prefix = os.path.dirname(os.path.dirname(spack_file))
spack_lib_path = os.path.join(spack_prefix, "lib", "spack")
sys.path.insert(0, spack_lib_path)
# Add external libs
spack_external_libs = os.path.join(spack_lib_path, "external")
if sys.version_info[:2] <= (2, 7):
sys.path.insert(0, os.path.join(spack_external_libs, "py2"))
sys.path.insert(0, spack_external_libs)
# Here we delete ruamel.yaml in case it has been already imported from site
# (see #9206 for a broader description of the issue).
#
# Briefly: ruamel.yaml produces a .pth file when installed with pip that
# makes the site installed package the preferred one, even though sys.path
# is modified to point to another version of ruamel.yaml.
if "ruamel.yaml" in sys.modules:
del sys.modules["ruamel.yaml"]
if "ruamel" in sys.modules:
del sys.modules["ruamel"]
# The following code is here to avoid failures when updating
# the develop version, due to spurious argparse.pyc files remaining
# in the libs/spack/external directory, see:
# https://github.com/spack/spack/pull/25376
# TODO: Remove in v0.18.0 or later
try:
import argparse
except ImportError:
argparse_pyc = os.path.join(spack_external_libs, "argparse.pyc")
if not os.path.exists(argparse_pyc):
raise
try:
os.remove(argparse_pyc)
import argparse # noqa: F401
except Exception:
msg = (
"The file\n\n\t{0}\n\nis corrupted and cannot be deleted by Spack. "
"Either delete it manually or ask some administrator to "
"delete it for you."
)
print(msg.format(argparse_pyc))
sys.exit(1)
import spack.main # noqa: E402
from spack_installable.main import main # noqa: E402
# Once we've set up the system path, run the spack main method
if __name__ == "__main__":
sys.exit(spack.main.main())
sys.exit(main())


@@ -33,4 +33,4 @@ concretizer:
# environments can always be activated. When "false" perform concretization separately
# on each root spec, allowing different versions and variants of the same package in
# an environment.
-unify: false
+unify: true


@@ -302,88 +302,31 @@ microarchitectures considered during the solve are constrained to be compatible
host Spack is currently running on. For instance, if this option is set to ``true``, a
user cannot concretize for ``target=icelake`` while running on an Haswell node.
.. _package-preferences:
-------------------
Package Preferences
-------------------
Spack can be configured to prefer certain compilers, package
versions, dependencies, and variants during concretization.
The preferred configuration can be controlled via the
``~/.spack/packages.yaml`` file for user configurations, or the
``etc/spack/packages.yaml`` site configuration.
Here's an example ``packages.yaml`` file that sets preferred packages:
.. code-block:: yaml
packages:
opencv:
compiler: [gcc@4.9]
variants: +debug
gperftools:
version: [2.2, 2.4, 2.3]
all:
compiler: [gcc@4.4.7, 'gcc@4.6:', intel, clang, pgi]
target: [sandybridge]
providers:
mpi: [mvapich2, mpich, openmpi]
At a high level, this example is specifying how packages should be
concretized. The opencv package should prefer using GCC 4.9 and
be built with debug options. The gperftools package should prefer version
2.2 over 2.4. Every package on the system should prefer mvapich2 for
its MPI and GCC 4.4.7 (except for opencv, which overrides this by preferring GCC 4.9).
These options are used to fill in implicit defaults. Any of them can be overwritten
on the command line if explicitly requested.
Each ``packages.yaml`` file begins with the string ``packages:`` and
package names are specified on the next level. The special string ``all``
applies settings to *all* packages. Underneath each package name is one
or more components: ``compiler``, ``variants``, ``version``,
``providers``, and ``target``. Each component has an ordered list of
spec ``constraints``, with earlier entries in the list being preferred
over later entries.
Sometimes a package installation may have constraints that forbid
the first concretization rule, in which case Spack will use the first
legal concretization rule. Going back to the example, if a user
requests gperftools 2.3 or later, then Spack will install version 2.4
as the 2.4 version of gperftools is preferred over 2.3.
An explicit concretization rule in the preferred section will always
take preference over unlisted concretizations. In the above example,
xlc isn't listed in the compiler list. Every listed compiler from
gcc to pgi will thus be preferred over the xlc compiler.
The syntax for the ``provider`` section differs slightly from other
concretization rules. A provider lists a value that packages may
``depend_on`` (e.g, MPI) and a list of rules for fulfilling that
dependency.
.. _package-requirements:
--------------------
Package Requirements
--------------------
You can use the configuration to force the concretizer to choose
specific properties for packages when building them. Like preferences,
these are only applied when the package is required by some other
request (e.g. if the package is needed as a dependency of a
request to ``spack install``).
Spack can be configured to always use certain compilers, package
versions, and variants during concretization through package
requirements.
An example of where this is useful is if you have a package that
is normally built as a dependency but only under certain circumstances
(e.g. only when a variant on a dependent is active): you can make
sure that it always builds the way you want it to; this distinguishes
package configuration requirements from constraints that you add to
``spack install`` or to environments (in those cases, the associated
packages are always built).
Package requirements are useful when you find yourself repeatedly
specifying the same constraints on the command line, and wish that
Spack respects these constraints whether you mention them explicitly
or not. Another use case is specifying constraints that should apply
to all root specs in an environment, without having to repeat the
constraint everywhere.
The following is an example of how to enforce package properties in
``packages.yaml``:
Apart from that, requirements config is more flexible than constraints
on the command line, because it can specify constraints on packages
*when they occur* as a dependency. In contrast, on the command line it
is not possible to specify constraints on dependencies while also keeping
those dependencies optional.
The package requirements configuration is specified in ``packages.yaml``
keyed by package name:
.. code-block:: yaml
@@ -452,15 +395,15 @@ under ``all`` are disregarded. For example, with a configuration like this:
cmake:
require: '%gcc'
Spack requires ``cmake`` to use ``gcc`` and all other nodes (including cmake dependencies)
to use ``clang``.
Spack requires ``cmake`` to use ``gcc`` and all other nodes (including ``cmake``
dependencies) to use ``clang``.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting requirements on virtual specs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A requirement on a virtual spec applies whenever that virtual is present in the DAG. This
can be useful for fixing which virtual provider you want to use:
A requirement on a virtual spec applies whenever that virtual is present in the DAG.
This can be useful for fixing which virtual provider you want to use:
.. code-block:: yaml
@@ -470,8 +413,8 @@ can be useful for fixing which virtual provider you want to use:
With the configuration above the only allowed ``mpi`` provider is ``mvapich2 %gcc``.
Requirements on the virtual spec and on the specific provider are both applied, if present. For
instance with a configuration like:
Requirements on the virtual spec and on the specific provider are both applied, if
present. For instance with a configuration like:
.. code-block:: yaml
@@ -483,6 +426,66 @@ instance with a configuration like:
you will use ``mvapich2~cuda %gcc`` as an ``mpi`` provider.
.. _package-preferences:
-------------------
Package Preferences
-------------------
In some cases package requirements can be too strong, and package
preferences are the better option. Package preferences do not impose
constraints on packages for particular versions or variants values,
they rather only set defaults -- the concretizer is free to change
them if it must due to other constraints. Also note that package
preferences are of lower priority than reuse of already installed
packages.
Here's an example ``packages.yaml`` file that sets preferred packages:
.. code-block:: yaml
packages:
opencv:
compiler: [gcc@4.9]
variants: +debug
gperftools:
version: [2.2, 2.4, 2.3]
all:
compiler: [gcc@4.4.7, 'gcc@4.6:', intel, clang, pgi]
target: [sandybridge]
providers:
mpi: [mvapich2, mpich, openmpi]
At a high level, this example is specifying how packages are preferably
concretized. The opencv package should prefer using GCC 4.9 and
be built with debug options. The gperftools package should prefer version
2.2 over 2.4. Every package on the system should prefer mvapich2 for
its MPI and GCC 4.4.7 (except for opencv, which overrides this by preferring GCC 4.9).
These options are used to fill in implicit defaults. Any of them can be overwritten
on the command line if explicitly requested.
Package preferences accept the follow keys or components under
the specific package (or ``all``) section: ``compiler``, ``variants``,
``version``, ``providers``, and ``target``. Each component has an
ordered list of spec ``constraints``, with earlier entries in the
list being preferred over later entries.
Sometimes a package installation may have constraints that forbid
the first concretization rule, in which case Spack will use the first
legal concretization rule. Going back to the example, if a user
requests gperftools 2.3 or later, then Spack will install version 2.4
as the 2.4 version of gperftools is preferred over 2.3.
An explicit concretization rule in the preferred section will always
take preference over unlisted concretizations. In the above example,
xlc isn't listed in the compiler list. Every listed compiler from
gcc to pgi will thus be preferred over the xlc compiler.
The syntax for the ``provider`` section differs slightly from other
concretization rules. A provider lists a value that packages may
``depends_on`` (e.g, MPI) and a list of rules for fulfilling that
dependency.
.. _package_permissions:
-------------------


@@ -405,6 +405,17 @@ Spack understands several special variables. These are:
* ``$user``: name of the current user
* ``$user_cache_path``: user cache directory (``~/.spack`` unless
:ref:`overridden <local-config-overrides>`)
* ``$architecture``: the architecture triple of the current host, as
detected by Spack.
* ``$arch``: alias for ``$architecture``.
* ``$platform``: the platform of the current host, as detected by Spack.
* ``$operating_system``: the operating system of the current host, as
detected by the ``distro`` python module.
* ``$os``: alias for ``$operating_system``.
* ``$target``: the ISA target for the current host, as detected by
ArchSpec. E.g. ``skylake`` or ``neoverse-n1``.
* ``$target_family``. The target family for the current host, as
detected by ArchSpec. E.g. ``x86_64`` or ``aarch64``.
Note that, as with shell variables, you can write these as ``$varname``
or with braces to distinguish the variable from surrounding characters:
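
For instance, the substitutions (including the brace form) can be checked from Python, assuming Spack's `spack.util.path.canonicalize_path` helper, which implements them:

```python
import spack.util.path as spath

# $architecture expands to a platform-os-target triple for this host,
# e.g. "linux-ubuntu22.04-skylake" (exact values vary by machine).
print(spath.canonicalize_path("/mirrors/$architecture"))

# Braces separate the variable name from surrounding characters:
print(spath.canonicalize_path("/stage-${target_family}/build"))
```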


@@ -519,27 +519,33 @@ available from the yaml file.
^^^^^^^^^^^^^^^^^^^
Spec concretization
^^^^^^^^^^^^^^^^^^^
An environment can be concretized in three different modes and the behavior active under any environment
is determined by the ``concretizer:unify`` property. By default specs are concretized *separately*, one after the other:
An environment can be concretized in three different modes and the behavior active under
any environment is determined by the ``concretizer:unify`` configuration option.
The *default* mode is to unify all specs:
.. code-block:: yaml
spack:
specs:
- hdf5~mpi
- hdf5+mpi
- zlib@1.2.8
concretizer:
unify: false
unify: true
This mode of operation permits to deploy a full software stack where multiple configurations of the same package
need to be installed alongside each other using the best possible selection of transitive dependencies. The downside
is that redundancy of installations is disregarded completely, and thus environments might be more bloated than
strictly needed. In the example above, for instance, if a version of ``zlib`` newer than ``1.2.8`` is known to Spack,
then it will be used for both ``hdf5`` installations.
This means that any package in the environment corresponds to a single concrete spec. In
the above example, when ``hdf5`` depends down the line of ``zlib``, it is required to
take ``zlib@1.2.8`` instead of a newer version. This mode of concretization is
particularly useful when environment views are used: if every package occurs in
only one flavor, it is usually possible to merge all install directories into a view.
If redundancy of the environment is a concern, Spack provides a way to install it *together where possible*,
i.e. trying to maximize reuse of dependencies across different specs:
A downside of unified concretization is that it can be overly strict. For example, a
concretization error would happen when both ``hdf5+mpi`` and ``hdf5~mpi`` are specified
in an environment.
The second mode is to *unify when possible*: this makes concretization of root specs
more independent. Instead of requiring reuse of dependencies across different root
specs, reuse is only maximized:
.. code-block:: yaml
@@ -551,26 +557,27 @@ i.e. trying to maximize reuse of dependencies across different specs:
concretizer:
unify: when_possible
Also in this case Spack allows having multiple configurations of the same package, but privileges the reuse of
specs over other factors. Going back to our example, this means that both ``hdf5`` installations will use
``zlib@1.2.8`` as a dependency even if newer versions of that library are available.
Central installations done at HPC centers by system administrators or user support groups are a common case
that fits either of these two modes.
This means that both ``hdf5`` installations will use ``zlib@1.2.8`` as a dependency even
if newer versions of that library are available.
Environments can also be configured to concretize all the root specs *together*, in a self-consistent way, to
ensure that each package in the environment comes with a single configuration:
The third mode of operation is to concretize root specs entirely independently by
disabling unified concretization:
.. code-block:: yaml
spack:
specs:
- hdf5~mpi
- hdf5+mpi
- zlib@1.2.8
concretizer:
unify: true
unify: false
This mode of operation is usually what is required by software developers that want to deploy their development
environment and have a single view of it in the filesystem.
In this example ``hdf5`` is concretized separately, and does not consider ``zlib@1.2.8``
as a constraint or preference. Instead, it will take the latest possible version.
The last two concretization options are typically useful for system administrators and
user support groups providing a large software stack for their HPC center.
.. note::
@@ -581,10 +588,10 @@ environment and have a single view of it in the filesystem.
.. admonition:: Re-concretization of user specs
When concretizing specs *together* or *together where possible* the entire set of specs will be
When using *unified* concretization (when possible), the entire set of specs will be
re-concretized after any addition of new user specs, to ensure that
the environment remains consistent / minimal. When instead the specs are concretized
separately only the new specs will be re-concretized after any addition.
the environment remains consistent / minimal. When instead unified concretization is
disabled, only the new specs will be concretized after any addition.
^^^^^^^^^^^^^
Spec Matrices


@@ -44,7 +44,7 @@ A build matrix showing which packages are working on which systems is shown belo
yum install -y epel-release
yum update -y
yum --enablerepo epel groupinstall -y "Development Tools"
-yum --enablerepo epel install -y curl findutils gcc-c++ gcc gcc-gfortran git gnupg2 hostname iproute make patch python3 python3-pip python3-setuptools unzip
+yum --enablerepo epel install -y curl findutils gcc-c++ gcc gcc-gfortran git gnupg2 hostname iproute redhat-lsb-core make patch python3 python3-pip python3-setuptools unzip
python3 -m pip install boto3
.. tab-item:: macOS Brew


@@ -56,7 +56,6 @@ or refer to the full manual below.
basic_usage
Tutorial: Spack 101 <https://spack-tutorial.readthedocs.io>
replace_conda_homebrew
-known_issues
.. toctree::
:maxdepth: 2


@@ -1,40 +0,0 @@
.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
============
Known Issues
============
This is a list of known issues in Spack. It provides ways of getting around these
problems if you encounter them.
------------------------------------------------
Spack does not seem to respect ``packages.yaml``
------------------------------------------------
.. note::
This issue is **resolved** as of v0.19.0.dev0 commit
`8281a0c5feabfc4fe180846d6fe95cfe53420bc5`, through the introduction of package
requirements. See :ref:`package-requirements`.
A common problem in Spack v0.18.0 up to v0.19.0.dev0 is that package, compiler and target
preferences specified in ``packages.yaml`` do not seem to be respected. Spack picks the
"wrong" compilers and their versions, package versions and variants, and
micro-architectures.
This is however not a bug. In order to reduce the number of builds of the same
packages, the concretizer values reuse of installed packages higher than preferences
set in ``packages.yaml``. Note that ``packages.yaml`` specifies only preferences, not
hard constraints.
There are multiple workarounds:
1. Disable reuse during concretization: ``spack install --fresh <spec>`` when installing
from the command line, or ``spack concretize --fresh --force`` when using
environments.
2. Turn preferences into constrains, by moving them to the input spec. For example,
use ``spack spec zlib%gcc@12`` when you want to force GCC 12 even if ``zlib`` was
already installed with GCC 10.


@@ -1,5 +1,5 @@
Name, Supported Versions, Notes, Requirement Reason
-Python, 2.7/3.6-3.10, , Interpreter for Spack
+Python, 2.7/3.6-3.11, , Interpreter for Spack
C/C++ Compilers, , , Building software
make, , , Build software
patch, , , Build software
@@ -11,6 +11,7 @@ bzip2, , , Compress/Decompress archives
xz, , , Compress/Decompress archives
zstd, , Optional, Compress/Decompress archives
file, , , Create/Use Buildcaches
+lsb-release, , , Linux: identify operating system version
gnupg2, , , Sign/Verify Buildcaches
git, , , Manage Software Repositories
svn, , Optional, Manage Software Repositories


@@ -18,7 +18,7 @@
* Homepage: https://pypi.python.org/pypi/archspec
* Usage: Labeling, comparison and detection of microarchitectures
-* Version: 0.1.4 (commit e2cfdc266174488dee78b8c9058e36d60dc1b548)
+* Version: 0.2.0 (commit 77640e572725ad97f18e63a04857155752ace045)
argparse
--------


@@ -132,9 +132,15 @@ def sysctl(*args):
"model name": sysctl("-n", "machdep.cpu.brand_string"),
}
else:
-model = (
-    "m1" if "Apple" in sysctl("-n", "machdep.cpu.brand_string") else "unknown"
-)
+model = "unknown"
+model_str = sysctl("-n", "machdep.cpu.brand_string").lower()
+if "m2" in model_str:
+    model = "m2"
+elif "m1" in model_str:
+    model = "m1"
+elif "apple" in model_str:
+    model = "m1"
info = {
"vendor_id": "Apple",
"flags": [],
@@ -322,14 +328,26 @@ def compatibility_check_for_aarch64(info, target):
features = set(info.get("Features", "").split())
vendor = info.get("CPU implementer", "generic")
# At the moment it's not clear how to detect compatibility with
# a specific version of the architecture
if target.vendor == "generic" and target.name != "aarch64":
return False
arch_root = TARGETS[basename]
return (
(target == arch_root or arch_root in target.ancestors)
and target.vendor in (vendor, "generic")
# On macOS it seems impossible to get all the CPU features with syctl info
and (target.features.issubset(features) or platform.system() == "Darwin")
arch_root_and_vendor = arch_root == target.family and target.vendor in (
vendor,
"generic",
)
# On macOS it seems impossible to get all the CPU features
# with syctl info, but for ARM we can get the exact model
if platform.system() == "Darwin":
model_key = info.get("model", basename)
model = TARGETS[model_key]
return arch_root_and_vendor and (target == model or target in model.ancestors)
return arch_root_and_vendor and target.features.issubset(features)
@compatibility_check(architecture_family="riscv64")
def compatibility_check_for_riscv64(info, target):


@@ -85,7 +85,7 @@
"intel": [
{
"versions": ":",
"name": "x86-64",
"name": "pentium4",
"flags": "-march={name} -mtune=generic"
}
],
@@ -2093,8 +2093,163 @@
]
}
},
"thunderx2": {
"armv8.1a": {
"from": ["aarch64"],
"vendor": "generic",
"features": [],
"compilers": {
"gcc": [
{
"versions": "5:",
"flags": "-march=armv8.1-a -mtune=generic"
}
],
"clang": [
{
"versions": ":",
"flags": "-march=armv8.1-a -mtune=generic"
}
],
"apple-clang": [
{
"versions": ":",
"flags": "-march=armv8.1-a -mtune=generic"
}
],
"arm": [
{
"versions": ":",
"flags": "-march=armv8.1-a -mtune=generic"
}
]
}
},
"armv8.2a": {
"from": ["armv8.1a"],
"vendor": "generic",
"features": [],
"compilers": {
"gcc": [
{
"versions": "6:",
"flags": "-march=armv8.2-a -mtune=generic"
}
],
"clang": [
{
"versions": ":",
"flags": "-march=armv8.2-a -mtune=generic"
}
],
"apple-clang": [
{
"versions": ":",
"flags": "-march=armv8.2-a -mtune=generic"
}
],
"arm": [
{
"versions": ":",
"flags": "-march=armv8.2-a -mtune=generic"
}
]
}
},
"armv8.3a": {
"from": ["armv8.2a"],
"vendor": "generic",
"features": [],
"compilers": {
"gcc": [
{
"versions": "6:",
"flags": "-march=armv8.3-a -mtune=generic"
}
],
"clang": [
{
"versions": "6:",
"flags": "-march=armv8.3-a -mtune=generic"
}
],
"apple-clang": [
{
"versions": ":",
"flags": "-march=armv8.3-a -mtune=generic"
}
],
"arm": [
{
"versions": ":",
"flags": "-march=armv8.3-a -mtune=generic"
}
]
}
},
"armv8.4a": {
"from": ["armv8.3a"],
"vendor": "generic",
"features": [],
"compilers": {
"gcc": [
{
"versions": "8:",
"flags": "-march=armv8.4-a -mtune=generic"
}
],
"clang": [
{
"versions": "8:",
"flags": "-march=armv8.4-a -mtune=generic"
}
],
"apple-clang": [
{
"versions": ":",
"flags": "-march=armv8.4-a -mtune=generic"
}
],
"arm": [
{
"versions": ":",
"flags": "-march=armv8.4-a -mtune=generic"
}
]
}
},
"armv8.5a": {
"from": ["armv8.4a"],
"vendor": "generic",
"features": [],
"compilers": {
"gcc": [
{
"versions": "9:",
"flags": "-march=armv8.5-a -mtune=generic"
}
],
"clang": [
{
"versions": "11:",
"flags": "-march=armv8.5-a -mtune=generic"
}
],
"apple-clang": [
{
"versions": ":",
"flags": "-march=armv8.5-a -mtune=generic"
}
],
"arm": [
{
"versions": ":",
"flags": "-march=armv8.5-a -mtune=generic"
}
]
}
},
"thunderx2": {
"from": ["armv8.1a"],
"vendor": "Cavium",
"features": [
"fp",
@@ -2141,7 +2296,7 @@
}
},
"a64fx": {
"from": ["aarch64"],
"from": ["armv8.2a"],
"vendor": "Fujitsu",
"features": [
"fp",
@@ -2209,7 +2364,7 @@
]
}
},
"graviton": {
"cortex_a72": {
"from": ["aarch64"],
"vendor": "ARM",
"features": [
@@ -2235,19 +2390,19 @@
},
{
"versions": "6:",
"flags" : "-march=armv8-a+crc+crypto -mtune=cortex-a72"
"flags" : "-mcpu=cortex-a72"
}
],
"clang" : [
{
"versions": "3.9:",
"flags" : "-march=armv8-a+crc+crypto"
"flags" : "-mcpu=cortex-a72"
}
]
}
},
"graviton2": {
"from": ["graviton"],
"neoverse_n1": {
"from": ["cortex_a72", "armv8.2a"],
"vendor": "ARM",
"features": [
"fp",
@@ -2296,7 +2451,7 @@
},
{
"versions": "9.0:",
"flags" : "-march=armv8.2-a+fp16+rcpc+dotprod+crypto -mtune=neoverse-n1"
"flags" : "-mcpu=neoverse-n1"
}
],
"clang" : [
@@ -2307,6 +2462,10 @@
{
"versions": "5:",
"flags" : "-march=armv8.2-a+fp16+rcpc+dotprod+crypto"
},
{
"versions": "10:",
"flags" : "-mcpu=neoverse-n1"
}
],
"arm" : [
@@ -2317,11 +2476,11 @@
]
}
},
"graviton3": {
"from": ["graviton2"],
"neoverse_v1": {
"from": ["neoverse_n1", "armv8.4a"],
"vendor": "ARM",
"features": [
"fp",
"fp",
"asimd",
"evtstrm",
"aes",
@@ -2384,11 +2543,11 @@
},
{
"versions": "9.0:9.9",
"flags" : "-march=armv8.4-a+crypto+rcpc+sha3+sm4+sve+rng+nodotprod -mtune=neoverse-v1"
"flags" : "-mcpu=neoverse-v1"
},
{
"versions": "10.0:",
"flags" : "-march=armv8.4-a+crypto+rcpc+sha3+sm4+sve+rng+ssbs+i8mm+bf16+nodotprod -mtune=neoverse-v1"
"flags" : "-mcpu=neoverse-v1"
}
],
@@ -2404,6 +2563,10 @@
{
"versions": "11:",
"flags" : "-march=armv8.4-a+sve+ssbs+fp16+bf16+crypto+i8mm+rng"
},
{
"versions": "12:",
"flags" : "-mcpu=neoverse-v1"
}
],
"arm" : [
@@ -2419,7 +2582,7 @@
}
},
"m1": {
"from": ["aarch64"],
"from": ["armv8.4a"],
"vendor": "Apple",
"features": [
"fp",
@@ -2484,6 +2647,76 @@
]
}
},
"m2": {
"from": ["m1", "armv8.5a"],
"vendor": "Apple",
"features": [
"fp",
"asimd",
"evtstrm",
"aes",
"pmull",
"sha1",
"sha2",
"crc32",
"atomics",
"fphp",
"asimdhp",
"cpuid",
"asimdrdm",
"jscvt",
"fcma",
"lrcpc",
"dcpop",
"sha3",
"asimddp",
"sha512",
"asimdfhm",
"dit",
"uscat",
"ilrcpc",
"flagm",
"ssbs",
"sb",
"paca",
"pacg",
"dcpodp",
"flagm2",
"frint",
"ecv",
"bf16",
"i8mm",
"bti"
],
"compilers": {
"gcc": [
{
"versions": "8.0:",
"flags" : "-march=armv8.5-a -mtune=generic"
}
],
"clang" : [
{
"versions": "9.0:12.0",
"flags" : "-march=armv8.5-a"
},
{
"versions": "13.0:",
"flags" : "-mcpu=apple-m1"
}
],
"apple-clang": [
{
"versions": "11.0:12.5",
"flags" : "-march=armv8.5-a"
},
{
"versions": "13.0:",
"flags" : "-mcpu=vortex"
}
]
}
},
"arm": {
"from": [],
"vendor": "generic",


@@ -1000,16 +1000,45 @@ def hash_directory(directory, ignore=[]):
return md5_hash.hexdigest()
def _try_unlink(path):
try:
os.unlink(path)
except (IOError, OSError):
# But if that fails, that's OK.
pass
@contextmanager
@system_path_filter
def write_tmp_and_move(filename):
"""Write to a temporary file, then move into place."""
dirname = os.path.dirname(filename)
basename = os.path.basename(filename)
tmp = os.path.join(dirname, ".%s.tmp" % basename)
with open(tmp, "w") as f:
yield f
shutil.move(tmp, filename)
def write_tmp_and_move(path, mode="w"):
"""Write to a temporary file in the same directory, then move into place."""
# Rely on NamedTemporaryFile to give a unique file without races
# in the directory of the target file.
file = tempfile.NamedTemporaryFile(
prefix="." + os.path.basename(path),
suffix=".tmp",
dir=os.path.dirname(path),
mode=mode,
delete=False, # we delete it ourselves
)
tmp_path = file.name
try:
yield file
except BaseException:
# On any failure, try to remove the temporary file.
_try_unlink(tmp_path)
raise
finally:
# Always close the file decriptor
file.close()
# Atomically move into existence.
try:
os.rename(tmp_path, path)
except (IOError, OSError):
_try_unlink(tmp_path)
raise
@contextmanager


@@ -3,11 +3,20 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
#: (major, minor, micro, dev release) tuple
spack_version_info = (0, 19, 0, "dev0")
#: PEP440 canonical <major>.<minor>.<micro>.<devN> string
spack_version = ".".join(str(s) for s in spack_version_info)
__version__ = "0.19.0.dev0"
spack_version = __version__
def __try_int(v):
try:
return int(v)
except ValueError:
return v
#: (major, minor, micro, dev release) tuple
spack_version_info = tuple([__try_int(v) for v in __version__.split(".")])
__all__ = ["spack_version_info", "spack_version"]
__version__ = spack_version


@@ -7,6 +7,7 @@
import collections
import hashlib
import json
import multiprocessing.pool
import os
import shutil
import sys
@@ -45,6 +46,7 @@
from spack.relocate import utf8_paths_to_single_binary_regex
from spack.spec import Spec
from spack.stage import Stage
from spack.util.executable import which
_build_cache_relative_path = "build_cache"
_build_cache_keys_relative_path = "_pgp"
@@ -72,6 +74,10 @@ def __init__(self, errors):
super(FetchCacheError, self).__init__(self.message)
class ListMirrorSpecsError(spack.error.SpackError):
"""Raised when unable to retrieve list of specs from the mirror"""
class BinaryCacheIndex(object):
"""
The BinaryCacheIndex tracks what specs are available on (usually remote)
@@ -881,37 +887,52 @@ def sign_specfile(key, force, specfile_path):
spack.util.gpg.sign(key, specfile_path, signed_specfile_path, clearsign=True)
def _fetch_spec_from_mirror(spec_url):
s = None
tty.debug("fetching {0}".format(spec_url))
_, _, spec_file = web_util.read_from_url(spec_url)
spec_file_contents = codecs.getreader("utf-8")(spec_file).read()
# Need full spec.json name or this gets confused with index.json.
if spec_url.endswith(".json.sig"):
specfile_json = Spec.extract_json_from_clearsig(spec_file_contents)
s = Spec.from_dict(specfile_json)
elif spec_url.endswith(".json"):
s = Spec.from_json(spec_file_contents)
elif spec_url.endswith(".yaml"):
s = Spec.from_yaml(spec_file_contents)
return s
def _read_specs_and_push_index(file_list, read_method, cache_prefix, db, temp_dir, concurrency):
"""Read all the specs listed in the provided list, using thread given thread parallelism,
generate the index, and push it to the mirror.
Args:
file_list (list(str)): List of urls or file paths pointing at spec files to read
read_method: A function taking a single argument, either a url or a file path,
and which reads the spec file at that location, and returns the spec.
cache_prefix (str): prefix of the build cache on s3 where index should be pushed.
db: A spack database used for adding specs and then writing the index.
temp_dir (str): Location to write index.json and hash for pushing
concurrency (int): Number of parallel processes to use when fetching
def _read_specs_and_push_index(file_list, cache_prefix, db, db_root_dir):
for file_path in file_list:
try:
s = _fetch_spec_from_mirror(url_util.join(cache_prefix, file_path))
except (URLError, web_util.SpackWebError) as url_err:
tty.error("Error reading specfile: {0}".format(file_path))
tty.error(url_err)
Return:
None
"""
if s:
db.add(s, None)
db.mark(s, "in_buildcache", True)
def _fetch_spec_from_mirror(spec_url):
spec_file_contents = read_method(spec_url)
if spec_file_contents:
# Need full spec.json name or this gets confused with index.json.
if spec_url.endswith(".json.sig"):
specfile_json = Spec.extract_json_from_clearsig(spec_file_contents)
return Spec.from_dict(specfile_json)
if spec_url.endswith(".json"):
return Spec.from_json(spec_file_contents)
if spec_url.endswith(".yaml"):
return Spec.from_yaml(spec_file_contents)
tp = multiprocessing.pool.ThreadPool(processes=concurrency)
try:
fetched_specs = tp.map(
llnl.util.lang.star(_fetch_spec_from_mirror), [(f,) for f in file_list]
)
finally:
tp.terminate()
tp.join()
for fetched_spec in fetched_specs:
db.add(fetched_spec, None)
db.mark(fetched_spec, "in_buildcache", True)
# Now generate the index, compute its hash, and push the two files to
# the mirror.
index_json_path = os.path.join(db_root_dir, "index.json")
index_json_path = os.path.join(temp_dir, "index.json")
with open(index_json_path, "w") as f:
db._write_to_file(f)
@@ -921,7 +942,7 @@ def _read_specs_and_push_index(file_list, cache_prefix, db, db_root_dir):
index_hash = compute_hash(index_string)
# Write the hash out to a local file
index_hash_path = os.path.join(db_root_dir, "index.json.hash")
index_hash_path = os.path.join(temp_dir, "index.json.hash")
with open(index_hash_path, "w") as f:
f.write(index_hash)
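A hedged sketch of the fetch pattern in the hunk above. `llnl.util.lang.star` only unpacks argument tuples, so a plain single-argument callable behaves the same way; the fetch body below is a stand-in, not Spack's:

```python
import multiprocessing.pool

def fetch_spec(url):
    # Stand-in for _fetch_spec_from_mirror: read and parse one spec file.
    return "spec parsed from {0}".format(url)

urls = ["mirror/a.spec.json", "mirror/b.spec.json.sig"]

# ThreadPool rather than Pool: the work is I/O-bound, and threads avoid
# pickling the callable and spawning worker processes.
tp = multiprocessing.pool.ThreadPool(processes=4)
try:
    fetched = tp.map(fetch_spec, urls)
finally:
    tp.terminate()
    tp.join()

print(fetched)
```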
@@ -942,31 +963,142 @@ def _read_specs_and_push_index(file_list, cache_prefix, db, db_root_dir):
)
def generate_package_index(cache_prefix):
"""Create the build cache index page.
def _specs_from_cache_aws_cli(cache_prefix):
"""Use aws cli to sync all the specs into a local temporary directory.
Creates (or replaces) the "index.json" page at the location given in
cache_prefix. This page contains a link for each binary package (.yaml or
.json) under cache_prefix.
Args:
cache_prefix (str): prefix of the build cache on s3
Return:
List of the local file paths and a function that can read each one from the file system.
"""
read_fn = None
file_list = None
aws = which("aws")
def file_read_method(file_path):
with open(file_path) as fd:
return fd.read()
tmpspecsdir = tempfile.mkdtemp()
sync_command_args = [
"s3",
"sync",
"--exclude",
"*",
"--include",
"*.spec.json.sig",
"--include",
"*.spec.json",
"--include",
"*.spec.yaml",
cache_prefix,
tmpspecsdir,
]
try:
tty.debug(
"Using aws s3 sync to download specs from {0} to {1}".format(cache_prefix, tmpspecsdir)
)
aws(*sync_command_args, output=os.devnull, error=os.devnull)
file_list = fsys.find(tmpspecsdir, ["*.spec.json.sig", "*.spec.json", "*.spec.yaml"])
read_fn = file_read_method
except Exception:
tty.warn("Failed to use aws s3 sync to retrieve specs, falling back to parallel fetch")
shutil.rmtree(tmpspecsdir)
return file_list, read_fn
def _specs_from_cache_fallback(cache_prefix):
"""Use spack.util.web module to get a list of all the specs at the remote url.
Args:
cache_prefix (str): Base url of mirror (location of spec files)
Return:
The list of complete spec file urls and a function that can read each one from its
remote location (also using the spack.util.web module).
"""
read_fn = None
file_list = None
def url_read_method(url):
contents = None
try:
_, _, spec_file = web_util.read_from_url(url)
contents = codecs.getreader("utf-8")(spec_file).read()
except (URLError, web_util.SpackWebError) as url_err:
tty.error("Error reading specfile: {0}".format(url))
tty.error(url_err)
return contents
try:
file_list = [
entry
url_util.join(cache_prefix, entry)
for entry in web_util.list_url(cache_prefix)
if entry.endswith(".yaml")
or entry.endswith("spec.json")
or entry.endswith("spec.json.sig")
]
read_fn = url_read_method
except KeyError as inst:
msg = "No packages at {0}: {1}".format(cache_prefix, inst)
tty.warn(msg)
return
except Exception as err:
# If we got some kind of S3 error (access denied or other connection
# problem), the first non-boto-specific class in the exception
# hierarchy is Exception. Just print a warning and return
msg = "Encountered problem listing packages at {0}: {1}".format(cache_prefix, err)
tty.warn(msg)
return file_list, read_fn
def _spec_files_from_cache(cache_prefix):
"""Get a list of all the spec files in the mirror and a function to
read them.
Args:
cache_prefix (str): Base url of mirror (location of spec files)
Return:
A tuple where the first item is a list of absolute file paths or
urls pointing to the specs that should be read from the mirror,
and the second item is a function taking a url or file path and
returning the spec read from that location.
"""
callbacks = []
if cache_prefix.startswith("s3"):
callbacks.append(_specs_from_cache_aws_cli)
callbacks.append(_specs_from_cache_fallback)
for specs_from_cache_fn in callbacks:
file_list, read_fn = specs_from_cache_fn(cache_prefix)
if file_list:
return file_list, read_fn
raise ListMirrorSpecsError("Failed to get list of specs from {0}".format(cache_prefix))
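The function above is a small strategy-with-fallback chain. A toy model of the control flow, with illustrative bodies in place of the aws cli and web fetchers:

```python
def from_aws_cli(prefix):
    # Fast path: may legitimately come back empty, e.g. no aws executable.
    return None, None

def from_web_fallback(prefix):
    return [prefix + "/a.spec.json"], lambda url: "contents of {0}".format(url)

def spec_files(prefix):
    callbacks = []
    if prefix.startswith("s3"):
        callbacks.append(from_aws_cli)  # only worth trying on s3 mirrors
    callbacks.append(from_web_fallback)
    for fn in callbacks:
        file_list, read_fn = fn(prefix)
        if file_list:  # first strategy that yields results wins
            return file_list, read_fn
    raise RuntimeError("Failed to get list of specs from {0}".format(prefix))

files, read_fn = spec_files("s3://my-mirror")
print(files, read_fn(files[0]))
```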
def generate_package_index(cache_prefix, concurrency=32):
"""Create or replace the build cache index on the given mirror. The
buildcache index contains an entry for each binary package under the
cache_prefix.
Args:
cache_prefix (str): Base url of binary mirror.
concurrency (int): The desired threading concurrency to use when
fetching the spec files from the mirror.
Return:
None
"""
try:
file_list, read_fn = _spec_files_from_cache(cache_prefix)
except ListMirrorSpecsError as err:
tty.error("Unabled to generate package index, {0}".format(err))
return
if any(x.endswith(".yaml") for x in file_list):
@@ -989,7 +1121,7 @@ def generate_package_index(cache_prefix):
)
try:
_read_specs_and_push_index(file_list, cache_prefix, db, db_root_dir)
_read_specs_and_push_index(file_list, read_fn, cache_prefix, db, db_root_dir, concurrency)
except Exception as err:
msg = "Encountered problem pushing package index to {0}: {1}".format(cache_prefix, err)
tty.warn(msg)

View File

@@ -142,15 +142,17 @@ def std_args(pkg):
default_library = "shared"
args = [
"--prefix={0}".format(pkg.prefix),
"-Dprefix={0}".format(pkg.prefix),
# If we do not specify libdir explicitly, Meson chooses something
# like lib/x86_64-linux-gnu, which causes problems when trying to
# find libraries and pkg-config files.
# See https://github.com/mesonbuild/meson/issues/2197
"--libdir={0}".format(pkg.prefix.lib),
"-Dlibdir={0}".format(pkg.prefix.lib),
"-Dbuildtype={0}".format(build_type),
"-Dstrip={0}".format(strip),
"-Ddefault_library={0}".format(default_library),
# Do not automatically download and install dependencies
"-Dwrap_mode=nodownload",
]
return args
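For a hypothetical prefix, the resulting argument list now looks like this. `meson setup` accepts `-D<option>=<value>` for built-in options such as `prefix` and `libdir`, so every option takes the same uniform form; all values below are illustrative:

```python
prefix = "/opt/spack/example-1.0"  # illustrative install prefix

args = [
    "-Dprefix={0}".format(prefix),
    "-Dlibdir={0}/lib".format(prefix),  # avoid lib/x86_64-linux-gnu
    "-Dbuildtype=release",
    "-Dstrip=true",
    "-Ddefault_library=shared",
    "-Dwrap_mode=nodownload",  # never fetch dependencies behind Spack's back
]
print(" ".join(args))
```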

View File

@@ -2,6 +2,7 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import glob
import inspect
import os
import re
@@ -223,8 +224,8 @@ def headers(self):
"""Discover header files in platlib."""
# Headers may be in either location
include = self.prefix.join(self.spec["python"].package.include)
platlib = self.prefix.join(self.spec["python"].package.platlib)
include = self.prefix.join(self.include)
platlib = self.prefix.join(self.platlib)
headers = fs.find_all_headers(include) + fs.find_all_headers(platlib)
if headers:
@@ -233,13 +234,29 @@ def headers(self):
msg = "Unable to locate {} headers in {} or {}"
raise NoHeadersError(msg.format(self.spec.name, include, platlib))
@property
def include(self):
include = glob.glob(self.prefix.include.join("python*"))
if include:
return include[0]
return self.spec["python"].package.include
@property
def platlib(self):
for libname in ("lib", "lib64"):
platlib = glob.glob(self.prefix.join(libname).join("python*").join("site-packages"))
if platlib:
return platlib[0]
return self.spec["python"].package.platlib
@property
def libs(self):
"""Discover libraries in platlib."""
# Remove py- prefix in package name
library = "lib" + self.spec.name[3:].replace("-", "?")
root = self.prefix.join(self.spec["python"].package.platlib)
root = self.prefix.join(self.platlib)
for shared in [True, False]:
libs = fs.find_libraries(library, root, shared=shared, recursive=True)
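A standalone sketch of the glob probing these properties perform, with plain `os.path.join` standing in for Spack's prefix-join helpers and the fallback argument standing in for python's own directories:

```python
import glob
import os

def find_platlib(prefix, fallback):
    # Probe lib and lib64 for a site-packages dir created by this package;
    # fall back to python's platlib only when the package has neither.
    for libname in ("lib", "lib64"):
        hits = glob.glob(os.path.join(prefix, libname, "python*", "site-packages"))
        if hits:
            return hits[0]
    return fallback

print(find_platlib("/opt/spack/py-example", "/usr/lib/python3.10/site-packages"))
```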

View File

@@ -21,8 +21,6 @@ class SConsPackage(spack.package_base.PackageBase):
#: build-system class we are using
build_system_class = "SConsPackage"
#: Callback names for build-time test
build_time_test_callbacks = ["build_test"]
#: Legacy buildsystem attribute used to deserialize and install old specs
legacy_buildsystem = "scons"
@@ -48,18 +46,24 @@ class SConsBuilder(BaseBuilder):
phases = ("build", "install")
#: Names associated with package methods in the old build-system format
legacy_methods = ("build_args", "install_args", "build_test")
legacy_methods = ("install_args", "build_test")
#: Same as legacy_methods, but the signature is different
legacy_long_methods = ("build_args",)
#: Names associated with package attributes in the old build-system format
legacy_attributes = ()
legacy_attributes = ("build_time_test_callbacks",)
def build_args(self):
#: Callback names for build-time test
build_time_test_callbacks = ["build_test"]
def build_args(self, spec, prefix):
"""Arguments to pass to build."""
return []
def build(self, pkg, spec, prefix):
"""Build the package."""
args = self.build_args()
args = self.build_args(spec, prefix)
inspect.getmodule(self.pkg).scons(*args)
def install_args(self):

View File

@@ -1167,7 +1167,14 @@ def generate_gitlab_ci_yaml(
"after_script",
]
service_job_retries = {"max": 2, "when": ["runner_system_failure", "stuck_or_timeout_failure"]}
service_job_retries = {
"max": 2,
"when": [
"runner_system_failure",
"stuck_or_timeout_failure",
"script_failure",
],
}
if job_id > 0:
if temp_storage_url_prefix:

View File

@@ -531,7 +531,6 @@ def ci_rebuild(args):
slash_hash = "/{}".format(job_spec.dag_hash())
deps_install_args = install_args
root_install_args = install_args + [
"--no-add",
"--keep-stage",
"--only=package",
"--use-buildcache=package:never,dependencies:only",

View File

@@ -193,14 +193,22 @@ def setup_parser(subparser):
default=False,
help="(with environment) only install already concretized specs",
)
subparser.add_argument(
"--no-add",
updateenv_group = subparser.add_mutually_exclusive_group()
updateenv_group.add_argument(
"--add",
action="store_true",
default=False,
help="""(with environment) partially install an environment, limiting
to concrete specs in the environment matching the arguments.
Non-roots remain installed implicitly.""",
help="""(with environment) add spec to the environment as a root.""",
)
updateenv_group.add_argument(
"--no-add",
action="store_false",
dest="add",
help="""(with environment) do not add spec to the environment as a
root (the default behavior).""",
)
subparser.add_argument(
"-f",
"--file",
@@ -289,11 +297,12 @@ def install_specs_inside_environment(specs, install_kwargs, cli_args):
# the matches. Getting to this point means there were either
# no matches or exactly one match.
if not m_spec and cli_args.no_add:
if not m_spec and not cli_args.add:
msg = (
"You asked to install {0} without adding it (--no-add), but no such spec "
"exists in environment"
).format(abstract.name)
"Cannot install '{0}' because it is not in the current environment."
" You can add it to the environment with 'spack add {0}', or as part"
" of the install command with 'spack install --add {0}'"
).format(str(abstract))
tty.die(msg)
if not m_spec:
@@ -303,14 +312,16 @@ def install_specs_inside_environment(specs, install_kwargs, cli_args):
tty.debug("exactly one match for {0} in env -> {1}".format(m_spec.name, m_spec.dag_hash()))
if m_spec in env.roots() or cli_args.no_add:
# either the single match is a root spec (and --no-add is
# the default for roots) or --no-add was stated explicitly
if m_spec in env.roots() or not cli_args.add:
# either the single match is a root spec (in which case
# the spec is not added to the env again), or the user did
# not specify --add (in which case it is assumed we are
# installing already-concretized specs in the env)
tty.debug("just install {0}".format(m_spec.name))
specs_to_install.append(m_spec)
else:
# the single match is not a root (i.e. it's a dependency),
# and --no-add was not specified, so we'll add it as a
# and --add was specified, so we'll add it as a
# root before installing
tty.debug("add {0} then install it".format(m_spec.name))
specs_to_add.append((abstract, concrete))
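The two install flags share one destination, which is what makes `--no-add` a harmless way of spelling the default; a minimal argparse reproduction of the setup above:

```python
import argparse

parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group()
group.add_argument("--add", action="store_true", default=False)
# store_false into the same dest: --no-add just re-asserts the default.
group.add_argument("--no-add", action="store_false", dest="add")

assert parser.parse_args([]).add is False          # default: don't add roots
assert parser.parse_args(["--add"]).add is True
assert parser.parse_args(["--no-add"]).add is False
# parser.parse_args(["--add", "--no-add"]) exits with a usage error
```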

View File

@@ -5,7 +5,6 @@
from __future__ import print_function
import itertools
import sys
from llnl.util import tty
@@ -61,6 +60,13 @@ def setup_parser(subparser):
dest="force",
help="remove regardless of whether other packages or environments " "depend on this one",
)
subparser.add_argument(
"--remove",
action="store_true",
dest="remove",
help="if in an environment, then the spec should also be removed from "
"the environment description",
)
arguments.add_common_arguments(
subparser, ["recurse_dependents", "yes_to_all", "installed_specs"]
)
@@ -134,13 +140,21 @@ def installed_dependents(specs, env):
env (spack.environment.Environment or None): the active environment, or None
Returns:
tuple: two mappings: one from specs to their dependent environments in the
active environment (or global scope if there is no environment), and one from
specs to their dependents in *inactive* environments (empty if there is no
environment)
tuple: two mappings: one from specs to their dependent installs in the
active environment, and one from specs to dependent installs outside of
the active environment.
Any of the input specs may appear in both mappings (if there are
dependents both inside and outside the current environment).
If a dependent spec is used both by the active environment and by
an inactive environment, it will only appear in the first mapping.
If there is no currently active environment, the first mapping will be
empty.
"""
active_dpts = {}
inactive_dpts = {}
outside_dpts = {}
env_hashes = set(env.all_hashes()) if env else set()
@@ -153,12 +167,12 @@ def installed_dependents(specs, env):
# dpts that are outside this environment
for dpt in installed:
if dpt not in specs:
if not env or dpt.dag_hash() in env_hashes:
if dpt.dag_hash() in env_hashes:
active_dpts.setdefault(spec, set()).add(dpt)
else:
inactive_dpts.setdefault(spec, set()).add(dpt)
outside_dpts.setdefault(spec, set()).add(dpt)
return active_dpts, inactive_dpts
return active_dpts, outside_dpts
def dependent_environments(specs):
@@ -262,31 +276,65 @@ def is_ready(dag_hash):
def get_uninstall_list(args, specs, env):
# Gets the list of installed specs that match the ones give via cli
"""Returns uninstall_list and remove_list: these may overlap (some things
may be both uninstalled and removed from the current environment).
It is assumed we are in an environment if --remove is specified (this
method raises an exception otherwise).
uninstall_list is topologically sorted: dependents come before
dependencies (so if a user uninstalls specs in the order provided,
the dependents will always be uninstalled first).
"""
if args.remove and not env:
raise ValueError("Can only use --remove when in an environment")
# Gets the list of installed specs that match the ones given via cli
# args.all takes care of the case where '-a' is given in the cli
uninstall_list = find_matching_specs(env, specs, args.all, args.force, args.origin)
base_uninstall_specs = set(find_matching_specs(env, specs, args.all, args.force))
# Takes care of '-R'
active_dpts, inactive_dpts = installed_dependents(uninstall_list, env)
active_dpts, outside_dpts = installed_dependents(base_uninstall_specs, env)
# It will be useful to track the unified set of specs with dependents, as
# well as to separately track specs in the current env with dependents
spec_to_dpts = {}
for spec, dpts in active_dpts.items():
spec_to_dpts[spec] = list(dpts)
for spec, dpts in outside_dpts.items():
if spec in spec_to_dpts:
spec_to_dpts[spec].extend(dpts)
else:
spec_to_dpts[spec] = list(dpts)
# if we are in the global scope, we complain if you try to remove a
# spec that's in an environment. If we're in an environment, we'll
# just *remove* it from the environment, so we ignore this
# error when *in* an environment
spec_envs = dependent_environments(uninstall_list)
spec_envs = inactive_dependent_environments(spec_envs)
all_uninstall_specs = set(base_uninstall_specs)
if args.dependents:
for spec, lst in active_dpts.items():
all_uninstall_specs.update(lst)
for spec, lst in outside_dpts.items():
all_uninstall_specs.update(lst)
# Process spec_dependents and update uninstall_list
has_error = not args.force and (
(active_dpts and not args.dependents) # dependents in the current env
or (not env and spec_envs) # there are environments that need specs
# For each spec that we intend to uninstall, this tracks the set of
# environments outside the current active environment which depend on the
# spec. There may be environments not managed directly with Spack: such
# environments would not be included here.
spec_to_other_envs = inactive_dependent_environments(
dependent_environments(all_uninstall_specs)
)
has_error = not args.force and (
# There are dependents in the current env and we didn't ask to remove
# dependents
(spec_to_dpts and not args.dependents)
# An environment different than the current env (if any) depends on
# one or more of the specs to be uninstalled. There may also be
# packages in those envs which depend on the base set of packages
# to uninstall, but this covers that scenario.
or (not args.remove and spec_to_other_envs)
)
# say why each problem spec is needed
if has_error:
specs = set(active_dpts)
if not env:
specs.update(set(spec_envs)) # environments depend on this
# say why each problem spec is needed
specs = set(spec_to_dpts)
specs.update(set(spec_to_other_envs)) # environments depend on this
for i, spec in enumerate(sorted(specs)):
# space out blocks of reasons
@@ -296,66 +344,86 @@ def get_uninstall_list(args, specs, env):
spec_format = "{name}{@version}{%compiler}{/hash:7}"
tty.info("Will not uninstall %s" % spec.cformat(spec_format), format="*r")
dependents = active_dpts.get(spec)
if dependents:
dependents = spec_to_dpts.get(spec)
if dependents and not args.dependents:
print("The following packages depend on it:")
spack.cmd.display_specs(dependents, **display_args)
if not env:
envs = spec_envs.get(spec)
if envs:
print("It is used by the following environments:")
colify([e.name for e in envs], indent=4)
envs = spec_to_other_envs.get(spec)
if envs:
if env:
env_context_qualifier = " other"
else:
env_context_qualifier = ""
print("It is used by the following{0} environments:".format(env_context_qualifier))
colify([e.name for e in envs], indent=4)
msgs = []
if active_dpts:
if spec_to_dpts and not args.dependents:
msgs.append("use `spack uninstall --dependents` to remove dependents too")
if spec_envs:
if spec_to_other_envs:
msgs.append("use `spack env remove` to remove from environments")
print()
tty.die("There are still dependents.", *msgs)
elif args.dependents:
for spec, lst in active_dpts.items():
uninstall_list.extend(lst)
uninstall_list = list(set(uninstall_list))
# If we are in an environment, this will track specs in this environment
# which should only be removed from the environment rather than uninstalled
remove_only = set()
if args.remove and not args.force:
remove_only.update(spec_to_other_envs)
if remove_only:
tty.info(
"The following specs will be removed but not uninstalled because"
" they are also used by another environment: {speclist}".format(
speclist=", ".join(x.name for x in remove_only)
)
)
# only force-remove (don't completely uninstall) specs that still
# have external dependent envs or pkgs
removes = set(inactive_dpts)
if env:
removes.update(spec_envs)
# Compute the set of specs that should be removed from the current env.
# This may overlap (some specs may be uninstalled and also removed from
# the current environment).
if args.remove:
remove_specs = set(base_uninstall_specs)
if args.dependents:
# Any spec matched from the cli, or dependent of, should be removed
# from the environment
for spec, lst in active_dpts.items():
remove_specs.update(lst)
else:
remove_specs = set()
# remove anything in removes from the uninstall list
uninstall_list = set(uninstall_list) - removes
all_uninstall_specs -= remove_only
# Inefficient topological sort: uninstall dependents before dependencies
all_uninstall_specs = sorted(
all_uninstall_specs, key=lambda x: sum(1 for i in x.traverse()), reverse=True
)
return uninstall_list, removes
return list(all_uninstall_specs), list(remove_specs)
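The sort above is correct because a dependent's traversal always visits strictly more nodes than that of any of its dependencies: it sees itself plus everything its dependency sees. A toy check of the invariant with stand-in node objects:

```python
class Node(object):
    def __init__(self, name, deps=()):
        self.name, self.deps = name, list(deps)

    def traverse(self):
        yield self  # self plus everything reachable below
        for dep in self.deps:
            for node in dep.traverse():
                yield node

libelf = Node("libelf")
mpileaks = Node("mpileaks", [libelf])  # dependent: strictly bigger DAG

order = sorted(
    [libelf, mpileaks],
    key=lambda s: sum(1 for _ in s.traverse()),
    reverse=True,
)
assert [n.name for n in order] == ["mpileaks", "libelf"]  # dependents first
```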
def uninstall_specs(args, specs):
env = ev.active_environment()
uninstall_list, remove_list = get_uninstall_list(args, specs, env)
anything_to_do = set(uninstall_list).union(set(remove_list))
if not anything_to_do:
if not uninstall_list:
tty.warn("There are no package to uninstall.")
return
if not args.yes_to_all:
confirm_removal(anything_to_do)
if env:
# Remove all the specs that are supposed to be uninstalled or just
# removed.
with env.write_transaction():
for spec in itertools.chain(remove_list, uninstall_list):
_remove_from_env(spec, env)
env.write()
confirm_removal(uninstall_list)
# Uninstall everything on the list
do_uninstall(env, uninstall_list, args.force)
if env:
with env.write_transaction():
for spec in remove_list:
_remove_from_env(spec, env)
env.write()
env.regenerate_views()
def confirm_removal(specs):
"""Display the list of specs to be removed and ask for confirmation.

View File

@@ -49,12 +49,26 @@
"clang": "llvm+clang",
"oneapi": "intel-oneapi-compilers",
"rocmcc": "llvm-amdgpu",
"intel@2020:": "intel-oneapi-compilers-classic",
}
# TODO: generating this from the previous dict causes docs errors
package_name_to_compiler_name = {
"llvm": "clang",
"intel-oneapi-compilers": "oneapi",
"llvm-amdgpu": "rocmcc",
"intel-oneapi-compilers-classic": "intel",
}
def pkg_spec_for_compiler(cspec):
"""Return the spec of the package that provides the compiler."""
spec_str = "%s@%s" % (_compiler_to_pkg.get(cspec.name, cspec.name), cspec.versions)
for spec, package in _compiler_to_pkg.items():
if cspec.satisfies(spec):
spec_str = "%s@%s" % (package, cspec.versions)
break
else:
spec_str = str(cspec)
return spack.spec.Spec(spec_str)
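The lookup is now constraint-based rather than name-based, so a version range such as `intel@2020:` can route to a different package than plain `intel`. A rough standalone model, with a deliberately crude `satisfies` in place of `Spec.satisfies` (string comparison of the leading version component; real Spec matching is far more general):

```python
_compiler_to_pkg = {
    "clang": "llvm+clang",
    "oneapi": "intel-oneapi-compilers",
    "intel@2020:": "intel-oneapi-compilers-classic",
}

def satisfies(cspec, constraint):
    # Crude stand-in: handles "name" and "name@version:" constraints only.
    if "@" not in constraint:
        return cspec.split("@")[0] == constraint
    name, vmin = constraint.rstrip(":").split("@")
    cname, cver = cspec.split("@")
    return cname == name and cver.split(".")[0] >= vmin

def pkg_spec_for_compiler(cspec):
    for constraint, package in _compiler_to_pkg.items():
        if satisfies(cspec, constraint):
            return "%s@%s" % (package, cspec.split("@")[1])
    return cspec  # no mapping: package name equals compiler name

print(pkg_spec_for_compiler("intel@2021.4.0"))  # intel-oneapi-compilers-classic@2021.4.0
print(pkg_spec_for_compiler("intel@19.0.4"))    # intel@19.0.4 (classic package)
```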

View File

@@ -45,7 +45,7 @@
import llnl.util.lang
import llnl.util.tty as tty
from llnl.util.filesystem import mkdirp, rename
from llnl.util.filesystem import mkdirp, write_tmp_and_move
import spack.compilers
import spack.paths
@@ -287,10 +287,8 @@ def _write_section(self, section):
parent = os.path.dirname(self.path)
mkdirp(parent)
tmp = os.path.join(parent, ".%s.tmp" % os.path.basename(self.path))
with open(tmp, "w") as f:
with write_tmp_and_move(self.path) as f:
syaml.dump_config(data_to_write, stream=f, default_flow_style=False)
rename(tmp, self.path)
except (yaml.YAMLError, IOError) as e:
raise ConfigFileError("Error writing to config file: '%s'" % str(e))

View File

@@ -102,7 +102,7 @@ def __init__(self, root, **kwargs):
@property
def hidden_file_regexes(self):
return (re.escape(self.metadata_dir),)
return ("^{0}$".format(re.escape(self.metadata_dir)),)
def relative_path_for_spec(self, spec):
_check_concrete(spec)

View File

@@ -1322,30 +1322,25 @@ def _concretize_together(self, tests=False):
if user_specs_did_not_change:
return []
# Check that user specs don't have duplicate packages
counter = collections.defaultdict(int)
for user_spec in self.user_specs:
counter[user_spec.name] += 1
duplicates = []
for name, count in counter.items():
if count > 1:
duplicates.append(name)
if duplicates:
msg = (
"environment that are configured to concretize specs"
" together cannot contain more than one spec for each"
" package [{0}]".format(", ".join(duplicates))
)
raise SpackEnvironmentError(msg)
# Proceed with concretization
self.concretized_user_specs = []
self.concretized_order = []
self.specs_by_hash = {}
concrete_specs = spack.concretize.concretize_specs_together(*self.user_specs, tests=tests)
try:
concrete_specs = spack.concretize.concretize_specs_together(
*self.user_specs, tests=tests
)
except spack.error.UnsatisfiableSpecError as e:
# "Enhance" the error message for multiple root specs, suggest a less strict
# form of concretization.
if len(self.user_specs) > 1:
e.message += (
". Consider setting `concretizer:unify` to `when_possible` "
"or `false` to relax the concretizer strictness."
)
raise
concretized_specs = [x for x in zip(self.user_specs, concrete_specs)]
for abstract, concrete in concretized_specs:
self._add_concrete_spec(abstract, concrete)

View File

@@ -186,44 +186,39 @@ def install_sbang():
``sbang`` here ensures that users can access the script and that
``sbang`` itself is in a short path.
"""
# copy in a new version of sbang if it differs from what's in spack
sbang_path = sbang_install_path()
if os.path.exists(sbang_path) and filecmp.cmp(spack.paths.sbang_script, sbang_path):
return
# make $install_tree/bin
all = spack.spec.Spec("all")
group_name = spack.package_prefs.get_package_group(all)
config_mode = spack.package_prefs.get_package_dir_permissions(all)
group_id = grp.getgrnam(group_name).gr_gid if group_name else None
# First setup the bin dir correctly.
sbang_bin_dir = os.path.dirname(sbang_path)
fs.mkdirp(sbang_bin_dir)
if not os.path.isdir(sbang_bin_dir):
fs.mkdirp(sbang_bin_dir)
# get permissions for bin dir from configuration files
group_name = spack.package_prefs.get_package_group(spack.spec.Spec("all"))
config_mode = spack.package_prefs.get_package_dir_permissions(spack.spec.Spec("all"))
if group_name:
os.chmod(sbang_bin_dir, config_mode) # Use package directory permissions
# Set group and ownership like we do on package directories
if group_id:
os.chown(sbang_bin_dir, os.stat(sbang_bin_dir).st_uid, group_id)
os.chmod(sbang_bin_dir, config_mode)
else:
fs.set_install_permissions(sbang_bin_dir)
# set group on sbang_bin_dir if not already set (only if set in configuration)
# TODO: after we drop python2 support, use shutil.chown to avoid gid lookups that
# can fail for remote groups
if group_name and os.stat(sbang_bin_dir).st_gid != grp.getgrnam(group_name).gr_gid:
os.chown(sbang_bin_dir, os.stat(sbang_bin_dir).st_uid, grp.getgrnam(group_name).gr_gid)
# Then check if we need to install sbang itself.
try:
already_installed = filecmp.cmp(spack.paths.sbang_script, sbang_path)
except (IOError, OSError):
already_installed = False
# copy over the fresh copy of `sbang`
sbang_tmp_path = os.path.join(
os.path.dirname(sbang_path),
".%s.tmp" % os.path.basename(sbang_path),
)
shutil.copy(spack.paths.sbang_script, sbang_tmp_path)
if not already_installed:
with fs.write_tmp_and_move(sbang_path) as f:
shutil.copy(spack.paths.sbang_script, f.name)
# set permissions on `sbang` (including group if set in configuration)
os.chmod(sbang_tmp_path, config_mode)
if group_name:
os.chown(sbang_tmp_path, os.stat(sbang_tmp_path).st_uid, grp.getgrnam(group_name).gr_gid)
# Finally, move the new `sbang` into place atomically
os.rename(sbang_tmp_path, sbang_path)
# Set permissions on `sbang` (including group if set in configuration)
os.chmod(sbang_path, config_mode)
if group_id:
os.chown(sbang_path, os.stat(sbang_path).st_uid, group_id)
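The shape of the fix, reduced to a standalone sketch: compare first, copy only on a mismatch, but reapply permissions unconditionally so a configuration change takes effect even when the file content is already current (the real code additionally routes the copy through write_tmp_and_move for atomicity, omitted here):

```python
import filecmp
import os
import shutil

def install_file(src, dest, mode=0o755):
    try:
        up_to_date = filecmp.cmp(src, dest)
    except (IOError, OSError):
        up_to_date = False  # dest missing or unreadable: (re)install
    if not up_to_date:
        shutil.copy(src, dest)
    os.chmod(dest, mode)  # always reapply: permission-only changes stick
```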
def post_install(spec):

View File

@@ -803,8 +803,34 @@ def _add_bootstrap_compilers(self, compiler, architecture, pkgs, request, all_de
"""
packages = _packages_needed_to_bootstrap_compiler(compiler, architecture, pkgs)
for (comp_pkg, is_compiler) in packages:
if package_id(comp_pkg) not in self.build_tasks:
pkgid = package_id(comp_pkg)
if pkgid not in self.build_tasks:
self._add_init_task(comp_pkg, request, is_compiler, all_deps)
elif is_compiler:
# ensure it's queued as a compiler
self._modify_existing_task(pkgid, "compiler", True)
def _modify_existing_task(self, pkgid, attr, value):
"""
Update a task in-place to modify its behavior.
Currently used to update the ``compiler`` field on tasks
that were originally created as a dependency of a compiler,
but are compilers in their own right.
For example, ``intel-oneapi-compilers-classic`` depends on
``intel-oneapi-compilers``, which can cause the latter to be
queued first as a non-compiler, and only later as a compiler.
"""
for i, tup in enumerate(self.build_pq):
key, task = tup
if task.pkg_id == pkgid:
tty.debug(
"Modifying task for {0} to treat it as a compiler".format(pkgid),
level=2,
)
setattr(task, attr, value)
self.build_pq[i] = (key, task)
def _add_init_task(self, pkg, request, is_compiler, all_deps):
"""
@@ -1215,6 +1241,12 @@ def _add_tasks(self, request, all_deps):
fail_fast = request.install_args.get("fail_fast")
self.fail_fast = self.fail_fast or fail_fast
def _add_compiler_package_to_config(self, pkg):
compiler_search_prefix = getattr(pkg, "compiler_search_prefix", pkg.spec.prefix)
spack.compilers.add_compilers_to_config(
spack.compilers.find_compilers([compiler_search_prefix])
)
def _install_task(self, task):
"""
Perform the installation of the requested spec and/or dependency
@@ -1240,9 +1272,7 @@ def _install_task(self, task):
if use_cache and _install_from_cache(pkg, cache_only, explicit, unsigned):
self._update_installed(task)
if task.compiler:
spack.compilers.add_compilers_to_config(
spack.compilers.find_compilers([pkg.spec.prefix])
)
self._add_compiler_package_to_config(pkg)
return
pkg.run_tests = tests is True or tests and pkg.name in tests
@@ -1270,9 +1300,7 @@ def _install_task(self, task):
# If a compiler, ensure it is added to the configuration
if task.compiler:
spack.compilers.add_compilers_to_config(
spack.compilers.find_compilers([pkg.spec.prefix])
)
self._add_compiler_package_to_config(pkg)
except spack.build_environment.StopPhase as e:
# A StopPhase exception means that do_install was asked to
# stop early from clients, and is not an error at this point
@@ -1691,9 +1719,7 @@ def install(self):
# It's an already installed compiler, add it to the config
if task.compiler:
spack.compilers.add_compilers_to_config(
spack.compilers.find_compilers([pkg.spec.prefix])
)
self._add_compiler_package_to_config(pkg)
else:
# At this point we've failed to get a write or a read
@@ -1747,6 +1773,16 @@ def install(self):
spack.hooks.on_install_cancel(task.request.pkg.spec)
raise
except binary_distribution.NoChecksumException as exc:
if not task.cache_only:
# Checking hash on downloaded binary failed.
err = "Failed to install {0} from binary cache due to {1}:"
err += " Requeueing to install from source."
tty.error(err.format(pkg.name, str(exc)))
task.use_cache = False
self._requeue_task(task)
continue
except (Exception, SystemExit) as exc:
self._update_failed(task, True, exc)
spack.hooks.on_install_failure(task.request.pkg.spec)
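A toy model of the requeue-on-bad-checksum flow above: the failed task goes back on the queue with its cache flag flipped, so the next attempt builds from source instead of aborting the install (installer names and bodies here are stand-ins):

```python
import collections

class ChecksumError(Exception):
    pass

def install_from_cache(name):
    raise ChecksumError("bad checksum for {0}".format(name))

def install_from_source(name):
    return "{0} built from source".format(name)

queue = collections.deque([("pkg-a", True)])  # (name, use_cache)
while queue:
    name, use_cache = queue.popleft()
    try:
        result = install_from_cache(name) if use_cache else install_from_source(name)
        print(result)
    except ChecksumError as exc:
        print("cache failed ({0}); requeueing from source".format(exc))
        queue.append((name, False))  # retry the same task without the cache
```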

View File

@@ -343,17 +343,21 @@ def add_command(self, cmd_name):
self._remove_action(self._actions[-1])
self.subparsers = self.add_subparsers(metavar="COMMAND", dest="command")
# each command module implements a parser() function, to which we
# pass its subparser for setup.
module = spack.cmd.get_module(cmd_name)
if cmd_name not in self.subparsers._name_parser_map:
# each command module implements a parser() function, to which we
# pass its subparser for setup.
module = spack.cmd.get_module(cmd_name)
# build a list of aliases
alias_list = [k for k, v in aliases.items() if v == cmd_name]
# build a list of aliases
alias_list = [k for k, v in aliases.items() if v == cmd_name]
subparser = self.subparsers.add_parser(
cmd_name, aliases=alias_list, help=module.description, description=module.description
)
module.setup_parser(subparser)
subparser = self.subparsers.add_parser(
cmd_name,
aliases=alias_list,
help=module.description,
description=module.description,
)
module.setup_parser(subparser)
# return the callable function for the command
return spack.cmd.get_command(cmd_name)

View File

@@ -184,22 +184,10 @@ def provides(self):
# If it is in the list of supported compilers family -> compiler
if self.spec.name in spack.compilers.supported_compilers():
provides["compiler"] = spack.spec.CompilerSpec(str(self.spec))
# Special case for llvm
if self.spec.name == "llvm":
provides["compiler"] = spack.spec.CompilerSpec(str(self.spec))
provides["compiler"].name = "clang"
# Special case for llvm-amdgpu
if self.spec.name == "llvm-amdgpu":
provides["compiler"] = spack.spec.CompilerSpec(str(self.spec))
provides["compiler"].name = "rocmcc"
# Special case for oneapi
if self.spec.name == "intel-oneapi-compilers":
provides["compiler"] = spack.spec.CompilerSpec(str(self.spec))
provides["compiler"].name = "oneapi"
# Special case for oneapi classic
if self.spec.name == "intel-oneapi-compilers-classic":
provides["compiler"] = spack.spec.CompilerSpec(str(self.spec))
provides["compiler"].name = "intel"
elif self.spec.name in spack.compilers.package_name_to_compiler_name:
# If it is the package for a supported compiler, but of a different name
cname = spack.compilers.package_name_to_compiler_name[self.spec.name]
provides["compiler"] = spack.spec.CompilerSpec("%s@%s" % (cname, self.spec.version))
# All the other tokens in the hierarchy must be virtual dependencies
for x in self.hierarchy_tokens:

View File

@@ -195,23 +195,23 @@ def _package(maybe_abstract_spec):
def is_spec_buildable(spec):
"""Return true if the spec is configured as buildable"""
allpkgs = spack.config.get("packages")
all_buildable = allpkgs.get("all", {}).get("buildable", True)
so_far = all_buildable # the default "so far"
def _package(s):
pkg_cls = spack.repo.path.get_pkg_class(s.name)
return pkg_cls(s)
# Get the list of names for which all_buildable is overridden
reverse = [
name
# check whether any providers for this package override the default
if any(
_package(spec).provides(name) and entry.get("buildable", so_far) != so_far
for name, entry in allpkgs.items()
if entry.get("buildable", all_buildable) != all_buildable
]
# Does this spec override all_buildable
spec_reversed = spec.name in reverse or any(_package(spec).provides(name) for name in reverse)
return not all_buildable if spec_reversed else all_buildable
):
so_far = not so_far
spec_buildable = allpkgs.get(spec.name, {}).get("buildable", so_far)
return spec_buildable
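A toy model of the resulting precedence, with plain dicts standing in for spack.config and for the provider check: the global `all` default comes first, any virtual the package provides may flip it, and the package's own entry has the last word:

```python
allpkgs = {
    "all": {"buildable": False},  # site policy: prefer externals
    "mpi": {"buildable": True},   # ...but MPI providers may be built
}

def is_buildable(name, provides=()):
    so_far = allpkgs.get("all", {}).get("buildable", True)
    # A virtual this package provides can override the global default...
    if any(allpkgs.get(v, {}).get("buildable", so_far) != so_far for v in provides):
        so_far = not so_far
    # ...and the package's own entry, the most specific, wins outright.
    return allpkgs.get(name, {}).get("buildable", so_far)

print(is_buildable("openmpi", provides=("mpi",)))  # True: provider override
print(is_buildable("zlib"))                        # False: global default
```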
def get_package_dir_permissions(spec):

View File

@@ -102,7 +102,15 @@ def getter(node):
ast_sym = ast_getter("symbol", "term")
#: Order of precedence for version origins. Topmost types are preferred.
version_origin_fields = ["spec", "external", "packages_yaml", "package_py", "installed"]
version_origin_fields = [
"spec",
"dev_spec",
"external",
"packages_yaml",
"package_py",
"installed",
]
#: Look up version precedence strings by enum id
version_origin_str = {i: name for i, name in enumerate(version_origin_fields)}
@@ -1463,6 +1471,12 @@ class Body(object):
if concrete_build_deps or dtype != "build":
clauses.append(fn.depends_on(spec.name, dep.name, dtype))
# Ensure Spack will not co-concretize this with another provider
# for the same virtual
for virtual in dep.package.virtuals_provided:
clauses.append(fn.virtual_node(virtual.name))
clauses.append(fn.provider(dep.name, virtual.name))
# imposing hash constraints for all but pure build deps of
# already-installed concrete specs.
if concrete_build_deps or dspec.deptypes != ("build",):
@@ -1483,7 +1497,7 @@ class Body(object):
return clauses
def build_version_dict(self, possible_pkgs, specs):
def build_version_dict(self, possible_pkgs):
"""Declare any versions in specs not declared in packages."""
self.declared_versions = collections.defaultdict(list)
self.possible_versions = collections.defaultdict(set)
@@ -1524,6 +1538,8 @@ def key_fn(item):
DeclaredVersion(version=ver, idx=idx, origin=version_provenance.packages_yaml)
)
def add_concrete_versions_from_specs(self, specs, origin):
"""Add concrete versions to possible versions from lists of CLI/dev specs."""
for spec in specs:
for dep in spec.traverse():
if not dep.versions.concrete:
@@ -1547,7 +1563,7 @@ def key_fn(item):
# about*, add it to the known versions. Use idx=0, which is the
# best possible, so they're guaranteed to be used preferentially.
self.declared_versions[dep.name].append(
DeclaredVersion(version=dep.version, idx=0, origin=version_provenance.spec)
DeclaredVersion(version=dep.version, idx=0, origin=origin)
)
self.possible_versions[dep.name].add(dep.version)
@@ -1938,11 +1954,28 @@ def setup(self, driver, specs, reuse=None):
# rules to generate an ASP program.
self.gen = driver
# Calculate develop specs
# they will be used in addition to command line specs
# in determining known versions/targets/os
dev_specs = ()
env = ev.active_environment()
if env:
dev_specs = tuple(
spack.spec.Spec(info["spec"]).constrained(
"dev_path=%s"
% spack.util.path.canonicalize_path(info["path"], default_wd=env.path)
)
for name, info in env.dev_specs.items()
)
specs = tuple(specs) # ensure compatible types to add
# get possible compilers
self.possible_compilers = self.generate_possible_compilers(specs)
# traverse all specs and packages to build dict of possible versions
self.build_version_dict(possible, specs)
self.build_version_dict(possible)
self.add_concrete_versions_from_specs(specs, version_provenance.spec)
self.add_concrete_versions_from_specs(dev_specs, version_provenance.dev_spec)
self.gen.h1("Concrete input spec definitions")
self.define_concrete_input_specs(specs, possible)
@@ -1960,8 +1993,8 @@ def setup(self, driver, specs, reuse=None):
# architecture defaults
self.platform_defaults()
self.os_defaults(specs)
self.target_defaults(specs)
self.os_defaults(specs + dev_specs)
self.target_defaults(specs + dev_specs)
self.virtual_providers()
self.provider_defaults()
@@ -1978,11 +2011,8 @@ def setup(self, driver, specs, reuse=None):
self.target_preferences(pkg)
# Inject dev_path from environment
env = ev.active_environment()
if env:
for spec in sorted(specs):
for dep in spec.traverse():
_develop_specs_from_env(dep, env)
for ds in dev_specs:
self.condition(spack.spec.Spec(ds.name), ds, msg="%s is a develop spec" % ds.name)
self.gen.h1("Spec Constraints")
self.literal_specs(specs)
@@ -2305,8 +2335,7 @@ def _develop_specs_from_env(spec, env):
"Internal Error: The dev_path for spec {name} is not connected to a valid environment"
"path. Please note that develop specs can only be used inside an environment"
"These paths should be the same:\n\tdev_path:{dev_path}\n\tenv_based_path:{env_path}"
)
error_msg.format(name=spec.name, dev_path=spec.variants["dev_path"], env_path=path)
).format(name=spec.name, dev_path=spec.variants["dev_path"], env_path=path)
assert spec.variants["dev_path"].value == path, error_msg
else:

View File

@@ -44,6 +44,8 @@ node_flag_set(Package, Flag, Value) :- attr("node_flag_set", Package, Flag, Val
node_compiler_version_set(Package, Compiler, Version)
:- attr("node_compiler_version_set", Package, Compiler, Version).
node_compiler_set(Package, Compiler)
:- attr("node_compiler_set", Package, Compiler).
variant_default_value_from_cli(Package, Variant, Value)
:- attr("variant_default_value_from_cli", Package, Variant, Value).
@@ -73,7 +75,8 @@ version_declared(Package, Version, Weight) :- version_declared(Package, Version,
:- version_declared(Package, Version, Weight, "installed"),
version(Package, Version),
version_weight(Package, Weight),
not hash(Package, _).
not hash(Package, _),
internal_error("Reuse version weight used for built package").
% versions are declared w/priority -- declared with priority implies declared
version_declared(Package, Version) :- version_declared(Package, Version, _).
@@ -119,19 +122,22 @@ possible_version_weight(Package, Weight)
:- version(Package, Version),
version_weight(Package, Weight),
version_declared(Package, Version, Weight, "external"),
not external(Package).
not external(Package),
internal_error("External weight used for built package").
% we can't use a weight from an installed spec if we are building it
% and vice-versa
:- version(Package, Version),
version_weight(Package, Weight),
version_declared(Package, Version, Weight, "installed"),
build(Package).
build(Package),
internal_error("Reuse version weight used for build package").
:- version(Package, Version),
version_weight(Package, Weight),
not version_declared(Package, Version, Weight, "installed"),
not build(Package).
not build(Package),
internal_error("Build version weight used for reused package").
1 { version_weight(Package, Weight) : version_declared(Package, Version, Weight) } 1
:- version(Package, Version),
@@ -195,12 +201,14 @@ attr(Name, A1, A2, A3, A4) :- impose(ID), imposed_constraint(ID, Name, A1, A2, A
% we cannot have additional variant values when we are working with concrete specs
:- node(Package), hash(Package, Hash),
variant_value(Package, Variant, Value),
not imposed_constraint(Hash, "variant_value", Package, Variant, Value).
not imposed_constraint(Hash, "variant_value", Package, Variant, Value),
internal_error("imposed hash without imposing all variant values").
% we cannot have additional flag values when we are working with concrete specs
:- node(Package), hash(Package, Hash),
node_flag(Package, FlagType, Flag),
not imposed_constraint(Hash, "node_flag", Package, FlagType, Flag).
not imposed_constraint(Hash, "node_flag", Package, FlagType, Flag),
internal_error("imposed hash without imposing all flag values").
#defined condition/2.
#defined condition_requirement/3.
@@ -835,7 +843,8 @@ os_compatible(OS1, OS3) :- os_compatible(OS1, OS2), os_compatible(OS2, OS3).
% for which we can build software. We need a cardinality constraint
% since we might have more than one "buildable_os(OS)" fact.
:- not 1 { os_compatible(CurrentOS, ReusedOS) : buildable_os(CurrentOS) },
node_os(Package, ReusedOS).
node_os(Package, ReusedOS),
internal_error("Reused OS incompatible with build OS").
% If an OS is set explicitly respect the value
node_os(Package, OS) :- node_os_set(Package, OS), node(Package).
@@ -960,10 +969,15 @@ error(2, "'{0}' compiler constraints '%{1}@{2}' and '%{3}@{4}' are incompatible"
node_compiler(Package, Compiler) :- node_compiler_version(Package, Compiler, _).
% We can't have a compiler be enforced and select the version from another compiler
:- node_compiler(Package, Compiler1),
node_compiler_version(Package, Compiler2, _),
Compiler1 != Compiler2,
internal_error("Mismatch between selected compiler and compiler version").
error(2, "Cannot concretize {0} with two compilers {1}@{2} and {3}@{4}", Package, C1, V1, C2, V2)
:- node_compiler_version(Package, C1, V1),
node_compiler_version(Package, C2, V2),
(C1, V1) != (C2, V2).
error(2, "Cannot concretize {0} with two compilers {1} and {2}@{3}", Package, Compiler1, Compiler2, Version)
:- node_compiler(Package, Compiler1),
node_compiler_version(Package, Compiler2, Version),
Compiler1 != Compiler2.
% If the compiler of a node cannot be satisfied, raise
error(1, "No valid compiler for {0} satisfies '%{1}'", Package, Compiler)
@@ -1011,6 +1025,12 @@ compiler_match(Package, Dependency)
compiler_mismatch(Package, Dependency)
:- depends_on(Package, Dependency),
not node_compiler_set(Dependency, _),
not compiler_match(Package, Dependency).
compiler_mismatch_required(Package, Dependency)
:- depends_on(Package, Dependency),
node_compiler_set(Dependency, _),
not compiler_match(Package, Dependency).
#defined node_compiler_set/2.
@@ -1032,7 +1052,8 @@ compiler_weight(Package, 100)
not default_compiler_preference(Compiler, Version, _).
% For the time being, be strict and reuse only if the compiler match one we have on the system
:- node_compiler_version(Package, Compiler, Version), not compiler_version(Compiler, Version).
error(2, "Compiler {1}@{2} requested for {0} cannot be found. Set install_missing_compilers:true if intended.", Package, Compiler, Version)
:- node_compiler_version(Package, Compiler, Version), not compiler_version(Compiler, Version).
#defined node_compiler_preference/4.
#defined default_compiler_preference/3.
@@ -1282,7 +1303,7 @@ opt_criterion(45, "preferred providers (non-roots)").
}.
% Try to minimize the number of compiler mismatches in the DAG.
opt_criterion(40, "compiler mismatches").
opt_criterion(40, "compiler mismatches that are not from CLI").
#minimize{ 0@240: #true }.
#minimize{ 0@40: #true }.
#minimize{
@@ -1291,6 +1312,15 @@ opt_criterion(40, "compiler mismatches").
build_priority(Package, Priority)
}.
opt_criterion(39, "compiler mismatches that are not from CLI").
#minimize{ 0@239: #true }.
#minimize{ 0@39: #true }.
#minimize{
1@39+Priority,Package,Dependency
: compiler_mismatch_required(Package, Dependency),
build_priority(Package, Priority)
}.
% Try to minimize the number of compiler mismatches in the DAG.
opt_criterion(35, "OS mismatches").
#minimize{ 0@235: #true }.

View File

@@ -1558,10 +1558,24 @@ def _set_compiler(self, compiler):
def _add_dependency(self, spec, deptypes):
"""Called by the parser to add another spec as a dependency."""
if spec.name in self._dependencies:
if spec.name not in self._dependencies:
self.add_dependency_edge(spec, deptypes)
return
# Keep the intersection of constraints when a dependency is added
# multiple times. Currently we only allow identical edge types.
orig = self._dependencies[spec.name]
try:
dspec = next(dspec for dspec in orig if deptypes == dspec.deptypes)
except StopIteration:
raise DuplicateDependencyError("Cannot depend on '%s' twice" % spec)
self.add_dependency_edge(spec, deptypes)
try:
dspec.spec.constrain(spec)
except spack.error.UnsatisfiableSpecError:
raise DuplicateDependencyError(
"Cannot depend on incompatible specs '%s' and '%s'" % (dspec.spec, spec)
)
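A toy model of the new merge rule, with string sets standing in for real Spec constraints (Spec.constrain does the actual intersection and raises UnsatisfiableSpecError on contradiction):

```python
class DuplicateDependencyError(Exception):
    pass

deps = {}  # name -> {"deptypes": tuple, "constraints": set}

def add_dependency(name, constraints, deptypes=("build", "link")):
    # A repeated dependency is no longer an automatic error: edges with
    # identical deptypes are merged by tightening their constraints.
    if name not in deps:
        deps[name] = {"deptypes": deptypes, "constraints": set(constraints)}
        return
    edge = deps[name]
    if edge["deptypes"] != deptypes:
        raise DuplicateDependencyError("Cannot depend on '%s' twice" % name)
    merged = edge["constraints"] | set(constraints)
    if len([c for c in merged if c.startswith("@=")]) > 1:
        # two different exact version pins cannot both hold
        raise DuplicateDependencyError("incompatible constraints on " + name)
    edge["constraints"] = merged

add_dependency("zlib", {"@1.2:"})
add_dependency("zlib", {"+shared"})  # merged into one edge, not an error
print(sorted(deps["zlib"]["constraints"]))  # ['+shared', '@1.2:']
```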
def add_dependency_edge(self, dependency_spec, deptype):
"""Add a dependency edge to this spec.
@@ -3552,7 +3566,9 @@ def _constrain_dependencies(self, other):
)
# Update with additional constraints from other spec
for name in other.dep_difference(self):
# operate on direct dependencies only, because a concrete dep
# represented by hash may have structure that needs to be preserved
for name in other.direct_dep_difference(self):
dep_spec_copy = other._get_dependency(name)
dep_copy = dep_spec_copy.spec
deptypes = dep_spec_copy.deptypes
@@ -3573,10 +3589,10 @@ def constrained(self, other, deps=True):
clone.constrain(other, deps)
return clone
def dep_difference(self, other):
def direct_dep_difference(self, other):
"""Returns dependencies in self that are not in other."""
mine = set(s.name for s in self.traverse(root=False))
mine.difference_update(s.name for s in other.traverse(root=False))
mine = set(dname for dname in self._dependencies)
mine.difference_update(dname for dname in other._dependencies)
return mine
def _autospec(self, spec_like):
@@ -4855,7 +4871,7 @@ def merge_abstract_anonymous_specs(*abstract_specs):
merged_spec[name].constrain(current_spec_constraint[name], deps=False)
# Update with additional constraints from other spec
for name in current_spec_constraint.dep_difference(merged_spec):
for name in current_spec_constraint.direct_dep_difference(merged_spec):
edge = next(iter(current_spec_constraint.edges_to_dependencies(name)))
merged_spec._add_dependency(edge.spec.copy(), edge.deptypes)

View File

@@ -3,6 +3,7 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import functools
import warnings
import six
@@ -34,6 +35,14 @@ def _impl(self, other):
return _impl
#: Translation table for deprecated archspec names
_DEPRECATED_ARCHSPEC_NAMES = {
"graviton": "cortex_a72",
"graviton2": "neoverse_n1",
"graviton3": "neoverse_v1",
}
class Target(object):
def __init__(self, name, module_name=None):
"""Target models microarchitectures and their compatibility.
@@ -45,6 +54,10 @@ def __init__(self, name, module_name=None):
like Cray (e.g. craype-compiler)
"""
if not isinstance(name, archspec.cpu.Microarchitecture):
if name in _DEPRECATED_ARCHSPEC_NAMES:
msg = "'target={}' is deprecated, use 'target={}' instead"
name, old_name = _DEPRECATED_ARCHSPEC_NAMES[name], name
warnings.warn(msg.format(old_name, name))
name = archspec.cpu.TARGETS.get(name, archspec.cpu.generic_microarchitecture(name))
self.microarchitecture = name
self.module_name = module_name
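The alias handling reduces to a small lookup-plus-warning pattern; a self-contained sketch (warnings.warn emits a UserWarning by default):

```python
import warnings

_DEPRECATED_NAMES = {"graviton2": "neoverse_n1"}

def canonical_target(name):
    # Translate a deprecated name so existing specs keep working, but
    # nudge the user toward the new spelling.
    if name in _DEPRECATED_NAMES:
        new_name = _DEPRECATED_NAMES[name]
        warnings.warn("'target={0}' is deprecated, use 'target={1}' instead".format(name, new_name))
        return new_name
    return name

print(canonical_target("graviton2"))  # neoverse_n1
```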

View File

@@ -1011,7 +1011,6 @@ def mystrip(s):
assert "--keep-stage" in install_parts
assert "--no-check-signature" not in install_parts
assert "--no-add" in install_parts
assert "-f" in install_parts
flag_index = install_parts.index("-f")
assert "archive-files.json" in install_parts[flag_index + 1]
@@ -1261,7 +1260,7 @@ def test_push_mirror_contents(
with open(json_path, "w") as ypfd:
ypfd.write(spec_json)
install_cmd("--keep-stage", json_path)
install_cmd("--add", "--keep-stage", json_path)
# env, spec, json_path, mirror_url, build_id, sign_binaries
ci.push_mirror_contents(env, json_path, mirror_url, True)
@@ -1623,7 +1622,7 @@ def test_ci_rebuild_index(
with open(json_path, "w") as ypfd:
ypfd.write(spec_json)
install_cmd("--keep-stage", "-f", json_path)
install_cmd("--add", "--keep-stage", "-f", json_path)
buildcache_cmd("create", "-u", "-a", "-f", "--mirror-url", mirror_url, "callpath")
ci_cmd("rebuild-index")

View File

@@ -254,13 +254,18 @@ def test_dev_build_env_version_mismatch(
def test_dev_build_multiple(
tmpdir, mock_packages, install_mockery, mutable_mock_env_path, mock_fetch
):
"""Test spack install with multiple developer builds"""
"""Test spack install with multiple developer builds
Test that only the root needs to be specified in the environment
Test that versions known only from the dev specs are included in the solve,
even if they come from a non-root
"""
# setup dev-build-test-install package for dev build
# Wait to concretize inside the environment to set dev_path on the specs;
# without the environment, the user would need to set dev_path for both the
# root and dependency if they wanted a dev build for both.
leaf_dir = tmpdir.mkdir("leaf")
leaf_spec = spack.spec.Spec("dev-build-test-install@0.0.0")
leaf_spec = spack.spec.Spec("dev-build-test-install@1.0.0")
leaf_pkg_cls = spack.repo.path.get_pkg_class(leaf_spec.name)
with leaf_dir.as_cwd():
with open(leaf_pkg_cls.filename, "w") as f:
@@ -283,13 +288,12 @@ def test_dev_build_multiple(
"""\
env:
specs:
- dev-build-test-install@0.0.0
- dev-build-test-dependent@0.0.0
develop:
dev-build-test-install:
path: %s
spec: dev-build-test-install@0.0.0
spec: dev-build-test-install@1.0.0
dev-build-test-dependent:
spec: dev-build-test-dependent@0.0.0
path: %s
@@ -300,6 +304,7 @@ def test_dev_build_multiple(
env("create", "test", "./spack.yaml")
with ev.read("test"):
# Do concretization inside environment for dev info
# These specs are the source of truth to compare against the installs
leaf_spec.concretize()
root_spec.concretize()

View File

@@ -18,6 +18,7 @@
import spack.cmd.env
import spack.environment as ev
import spack.environment.shell
import spack.error
import spack.modules
import spack.paths
import spack.repo
@@ -220,7 +221,7 @@ def test_env_install_single_spec(install_mockery, mock_fetch):
e = ev.read("test")
with e:
install("cmake-client")
install("--add", "cmake-client")
e = ev.read("test")
assert e.user_specs[0].name == "cmake-client"
@@ -255,7 +256,7 @@ def test_env_modifications_error_on_activate(install_mockery, mock_fetch, monkey
e = ev.read("test")
with e:
install("cmake-client")
install("--add", "cmake-client")
def setup_error(pkg, env):
raise RuntimeError("cmake-client had issues!")
@@ -276,7 +277,7 @@ def test_activate_adds_transitive_run_deps_to_path(install_mockery, mock_fetch,
e = ev.read("test")
with e:
install("depends-on-run-env")
install("--add", "depends-on-run-env")
env_variables = {}
spack.environment.shell.activate(e).apply_modifications(env_variables)
@@ -289,7 +290,7 @@ def test_env_install_same_spec_twice(install_mockery, mock_fetch):
e = ev.read("test")
with e:
# The first installation outputs the package prefix, updates the view
out = install("cmake-client")
out = install("--add", "cmake-client")
assert "Updating view at" in out
# The second installation reports all packages already installed
@@ -448,7 +449,7 @@ def test_env_status_broken_view(
):
env_dir = str(tmpdir)
with ev.Environment(env_dir):
install("trivial-install-test-package")
install("--add", "trivial-install-test-package")
# switch to a new repo that doesn't include the installed package
# test that Spack detects the missing package and warns the user
@@ -467,7 +468,7 @@ def test_env_activate_broken_view(
mutable_mock_env_path, mock_archive, mock_fetch, mock_custom_repository, install_mockery
):
with ev.create("test"):
install("trivial-install-test-package")
install("--add", "trivial-install-test-package")
# switch to a new repo that doesn't include the installed package
# test that Spack detects the missing package and fails gracefully
@@ -1056,7 +1057,9 @@ def test_roots_display_with_variants():
assert "boost +shared" in out
def test_uninstall_removes_from_env(mock_stage, mock_fetch, install_mockery):
def test_uninstall_keeps_in_env(mock_stage, mock_fetch, install_mockery):
# 'spack uninstall' without --remove should not change the environment
# spack.yaml file, just uninstall specs
env("create", "test")
with ev.read("test"):
add("mpileaks")
@@ -1064,12 +1067,32 @@ def test_uninstall_removes_from_env(mock_stage, mock_fetch, install_mockery):
install("--fake")
test = ev.read("test")
assert any(s.name == "mpileaks" for s in test.specs_by_hash.values())
assert any(s.name == "libelf" for s in test.specs_by_hash.values())
# Save this spec to check later if it is still in the env
(mpileaks_hash,) = list(x for x, y in test.specs_by_hash.items() if y.name == "mpileaks")
orig_user_specs = test.user_specs
orig_concretized_specs = test.concretized_order
with ev.read("test"):
uninstall("-ya")
test = ev.read("test")
assert test.concretized_order == orig_concretized_specs
assert test.user_specs.specs == orig_user_specs.specs
assert mpileaks_hash in test.specs_by_hash
assert not test.specs_by_hash[mpileaks_hash].package.installed
def test_uninstall_removes_from_env(mock_stage, mock_fetch, install_mockery):
# 'spack uninstall --remove' should update the environment
env("create", "test")
with ev.read("test"):
add("mpileaks")
add("libelf")
install("--fake")
with ev.read("test"):
uninstall("-y", "-a", "--remove")
test = ev.read("test")
assert not test.specs_by_hash
assert not test.concretized_order
@@ -1255,7 +1278,7 @@ def test_env_updates_view_install_package(tmpdir, mock_stage, mock_fetch, instal
view_dir = tmpdir.join("view")
env("create", "--with-view=%s" % view_dir, "test")
with ev.read("test"):
install("--fake", "mpileaks")
install("--fake", "--add", "mpileaks")
assert os.path.exists(str(view_dir.join(".spack/mpileaks")))
@@ -1275,7 +1298,7 @@ def test_env_updates_view_uninstall(tmpdir, mock_stage, mock_fetch, install_mock
view_dir = tmpdir.join("view")
env("create", "--with-view=%s" % view_dir, "test")
with ev.read("test"):
install("--fake", "mpileaks")
install("--fake", "--add", "mpileaks")
check_mpileaks_and_deps_in_view(view_dir)
@@ -1324,7 +1347,7 @@ def test_env_updates_view_force_remove(tmpdir, mock_stage, mock_fetch, install_m
view_dir = tmpdir.join("view")
env("create", "--with-view=%s" % view_dir, "test")
with ev.read("test"):
install("--fake", "mpileaks")
install("--add", "--fake", "mpileaks")
check_mpileaks_and_deps_in_view(view_dir)
@@ -2403,7 +2426,9 @@ def test_duplicate_packages_raise_when_concretizing_together():
e.add("mpileaks~opt")
e.add("mpich")
with pytest.raises(ev.SpackEnvironmentError, match=r"cannot contain more"):
with pytest.raises(
spack.error.UnsatisfiableSpecError, match=r"relax the concretizer strictness"
):
e.concretize()

View File

@@ -356,7 +356,7 @@ def test_find_prefix_in_env(
"""Test `find` formats requiring concrete specs work in environments."""
env("create", "test")
with ev.read("test"):
install("mpileaks")
install("--add", "mpileaks")
find("-p")
find("-l")
find("-L")

View File

@@ -771,7 +771,7 @@ def test_install_only_dependencies_in_env(
dep = Spec("dependency-install").concretized()
root = Spec("dependent-install").concretized()
install("-v", "--only", "dependencies", "dependent-install")
install("-v", "--only", "dependencies", "--add", "dependent-install")
assert os.path.exists(dep.prefix)
assert not os.path.exists(root.prefix)
@@ -800,7 +800,7 @@ def test_install_only_dependencies_of_all_in_env(
def test_install_no_add_in_env(tmpdir, mock_fetch, install_mockery, mutable_mock_env_path):
# To test behavior of --no-add option, we create the following environment:
# To test behavior of --add option, we create the following environment:
#
# mpileaks
# ^callpath
@@ -849,18 +849,19 @@ def test_install_no_add_in_env(tmpdir, mock_fetch, install_mockery, mutable_mock
# Assert using --no-add with a spec not in the env fails
inst_out = install("--no-add", "boost", fail_on_error=False, output=str)
assert "no such spec exists in environment" in inst_out
assert "You can add it to the environment with 'spack add " in inst_out
# Ensure using --no-add with an ambiguous spec fails
# Without --add, ensure that install fails if the spec matches more
# than one root
with pytest.raises(ev.SpackEnvironmentError) as err:
inst_out = install("--no-add", "a", output=str)
inst_out = install("a", output=str)
assert "a matches multiple specs in the env" in str(err)
# With "--no-add", install an unambiguous dependency spec (that already
# exists as a dep in the environment) using --no-add and make sure it
# gets installed (w/ deps), but is not added to the environment.
install("--no-add", "dyninst")
# Install an unambiguous dependency spec (that already exists as a dep
# in the environment) and make sure it gets installed (w/ deps),
# but is not added to the environment.
install("dyninst")
find_output = find("-l", output=str)
assert "dyninst" in find_output
@@ -872,31 +873,30 @@ def test_install_no_add_in_env(tmpdir, mock_fetch, install_mockery, mutable_mock
assert all([s in env_specs for s in post_install_specs])
# Make sure we can install a concrete dependency spec from a spec.json
# file on disk, using the ``--no-add`` option, and the spec is installed
# but not added as a root
# file on disk, and the spec is installed but not added as a root
mpi_spec_json_path = tmpdir.join("{0}.json".format(mpi_spec.name))
with open(mpi_spec_json_path.strpath, "w") as fd:
fd.write(mpi_spec.to_json(hash=ht.dag_hash))
install("--no-add", "-f", mpi_spec_json_path.strpath)
install("-f", mpi_spec_json_path.strpath)
assert mpi_spec not in e.roots()
find_output = find("-l", output=str)
assert mpi_spec.name in find_output
# Without "--no-add", install an unambiguous depependency spec (that
# already exists as a dep in the environment) without --no-add and make
# sure it is added as a root of the environment as well as installed.
# Install an unambiguous dependency spec (that already exists as a
# dep in the environment) with --add and make sure it is added as a
# root of the environment as well as installed.
assert b_spec not in e.roots()
install("b")
install("--add", "b")
assert b_spec in e.roots()
assert b_spec not in e.uninstalled_specs()
# Without "--no-add", install a novel spec and make sure it is added
# as a root and installed.
install("bowtie")
# Install a novel spec with --add and make sure it is added as a root
# and installed.
install("--add", "bowtie")
assert any([s.name == "bowtie" for s in e.roots()])
assert not any([s.name == "bowtie" for s in e.uninstalled_specs()])
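Net effect of the changes in this file: inside an active environment, a bare `spack install <spec>` no longer adds new roots; `--add` opts into the old behavior, and `--no-add` remains as the explicit spelling of the new default. A minimal sketch of the new semantics, assuming the same mock fixtures these tests use (`mock_fetch`, `install_mockery`, `mutable_mock_env_path`):

```python
import spack.environment as ev
from spack.main import SpackCommand

env = SpackCommand("env")
add = SpackCommand("add")
install = SpackCommand("install")

env("create", "demo")
with ev.read("demo"):
    add("libelf")
    install("--fake")                       # installs the env's existing roots
    install("--fake", "--add", "libdwarf")  # --add is now required to grow the roots

assert any(s.name == "libdwarf" for s in ev.read("demo").roots())
```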

View File

@@ -3,10 +3,13 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import sys
import pytest
import llnl.util.tty as tty
import spack.environment
import spack.store
from spack.main import SpackCommand, SpackCommandError
@@ -166,3 +169,187 @@ def _warn(*args, **kwargs):
monkeypatch.setattr(tty, "warn", _warn)
# Now try to uninstall and check this doesn't trigger warnings
uninstall("-y", "-a")
# Note: I want to use https://docs.pytest.org/en/7.1.x/how-to/skipping.html#skip-all-test-functions-of-a-class-or-module
# but the style formatter insists on separating these two lines.
@pytest.mark.skipif(sys.platform == "win32", reason="Envs unsupported on Windows")
class TestUninstallFromEnv(object):
"""Tests an installation with two environments e1 and e2, which each have
shared package installations:
e1 has dt-diamond-left -> dt-diamond-bottom
e2 has dt-diamond-right -> dt-diamond-bottom
"""
env = SpackCommand("env")
add = SpackCommand("add")
concretize = SpackCommand("concretize")
find = SpackCommand("find")
@pytest.fixture
def environment_setup(
self, mutable_mock_env_path, config, mock_packages, mutable_database, install_mockery
):
TestUninstallFromEnv.env("create", "e1")
e1 = spack.environment.read("e1")
with e1:
TestUninstallFromEnv.add("dt-diamond-left")
TestUninstallFromEnv.add("dt-diamond-bottom")
TestUninstallFromEnv.concretize()
install("--fake")
TestUninstallFromEnv.env("create", "e2")
e2 = spack.environment.read("e2")
with e2:
TestUninstallFromEnv.add("dt-diamond-right")
TestUninstallFromEnv.add("dt-diamond-bottom")
TestUninstallFromEnv.concretize()
install("--fake")
def test_basic_env_sanity(self, environment_setup):
for env_name in ["e1", "e2"]:
e = spack.environment.read(env_name)
with e:
for _, concretized_spec in e.concretized_specs():
assert concretized_spec.package.installed
def test_uninstall_force_dependency_shared_between_envs(self, environment_setup):
"""If you "spack uninstall -f --dependents dt-diamond-bottom" from
e1, then all packages should be uninstalled (but not removed) from
both e1 and e2.
"""
e1 = spack.environment.read("e1")
with e1:
uninstall("-f", "-y", "--dependents", "dt-diamond-bottom")
# The specs should still be in the environment, since
# --remove was not specified
assert set(root.name for (root, _) in e1.concretized_specs()) == set(
["dt-diamond-left", "dt-diamond-bottom"]
)
for _, concretized_spec in e1.concretized_specs():
assert not concretized_spec.package.installed
# Everything in e2 depended on dt-diamond-bottom, so should also
# have been uninstalled. The roots should be unchanged though.
e2 = spack.environment.read("e2")
with e2:
assert set(root.name for (root, _) in e2.concretized_specs()) == set(
["dt-diamond-right", "dt-diamond-bottom"]
)
for _, concretized_spec in e2.concretized_specs():
assert not concretized_spec.package.installed
def test_uninstall_remove_dependency_shared_between_envs(self, environment_setup):
"""If you "spack uninstall --dependents --remove dt-diamond-bottom" from
e1, then all packages are removed from e1 (it is now empty);
dt-diamond-left is also uninstalled (since only e1 needs it) but
dt-diamond-bottom is not uninstalled (since e2 needs it).
"""
e1 = spack.environment.read("e1")
with e1:
dtdiamondleft = next(
concrete
for (_, concrete) in e1.concretized_specs()
if concrete.name == "dt-diamond-left"
)
output = uninstall("-y", "--dependents", "--remove", "dt-diamond-bottom")
assert "The following specs will be removed but not uninstalled" in output
assert not list(e1.roots())
assert not dtdiamondleft.package.installed
# Since -f was not specified, all specs in e2 should still be installed
# (and e2 should be unchanged)
e2 = spack.environment.read("e2")
with e2:
assert set(root.name for (root, _) in e2.concretized_specs()) == set(
["dt-diamond-right", "dt-diamond-bottom"]
)
for _, concretized_spec in e2.concretized_specs():
assert concretized_spec.package.installed
def test_uninstall_dependency_shared_between_envs_fail(self, environment_setup):
"""If you "spack uninstall --dependents dt-diamond-bottom" from
e1 (without --remove or -f), then this should fail (this is needed by
e2).
"""
e1 = spack.environment.read("e1")
with e1:
output = uninstall("-y", "--dependents", "dt-diamond-bottom", fail_on_error=False)
assert "There are still dependents." in output
assert "use `spack env remove`" in output
# The environment should be unchanged and nothing should have been
# uninstalled
assert set(root.name for (root, _) in e1.concretized_specs()) == set(
["dt-diamond-left", "dt-diamond-bottom"]
)
for _, concretized_spec in e1.concretized_specs():
assert concretized_spec.package.installed
def test_uninstall_force_and_remove_dependency_shared_between_envs(self, environment_setup):
"""If you "spack uninstall -f --dependents --remove dt-diamond-bottom" from
e1, then all packages should be uninstalled and removed from e1.
All packages will also be uninstalled from e2, but the roots will
remain unchanged.
"""
e1 = spack.environment.read("e1")
with e1:
dtdiamondleft = next(
concrete
for (_, concrete) in e1.concretized_specs()
if concrete.name == "dt-diamond-left"
)
uninstall("-f", "-y", "--dependents", "--remove", "dt-diamond-bottom")
assert not list(e1.roots())
assert not dtdiamondleft.package.installed
e2 = spack.environment.read("e2")
with e2:
assert set(root.name for (root, _) in e2.concretized_specs()) == set(
["dt-diamond-right", "dt-diamond-bottom"]
)
for _, concretized_spec in e2.concretized_specs():
assert not concretized_spec.package.installed
def test_uninstall_keep_dependents_dependency_shared_between_envs(self, environment_setup):
"""If you "spack uninstall -f --remove dt-diamond-bottom" from
e1, then dt-diamond-bottom should be uninstalled, which leaves
"dangling" references in both environments, since
dt-diamond-left and dt-diamond-right both need it.
"""
e1 = spack.environment.read("e1")
with e1:
dtdiamondleft = next(
concrete
for (_, concrete) in e1.concretized_specs()
if concrete.name == "dt-diamond-left"
)
uninstall("-f", "-y", "--remove", "dt-diamond-bottom")
# dt-diamond-bottom was removed from e1's roots and force-uninstalled,
# even though dt-diamond-left still depends on it (hence -f)
assert set(x.name for x in e1.roots()) == set(["dt-diamond-left"])
assert dtdiamondleft.package.installed
e2 = spack.environment.read("e2")
with e2:
assert set(root.name for (root, _) in e2.concretized_specs()) == set(
["dt-diamond-right", "dt-diamond-bottom"]
)
dtdiamondright = next(
concrete
for (_, concrete) in e2.concretized_specs()
if concrete.name == "dt-diamond-right"
)
assert dtdiamondright.package.installed
dtdiamondbottom = next(
concrete
for (_, concrete) in e2.concretized_specs()
if concrete.name == "dt-diamond-bottom"
)
assert not dtdiamondbottom.package.installed
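Taken together, the four shared-dependency tests above pin down how the uninstall flags compose when another environment still needs the spec. A rough summary sketch (comments only, not exhaustive):

```python
# spack uninstall dt-diamond-bottom from e1, while e2 still needs it:
#
#   --dependents                  -> refuses: "There are still dependents."
#   --dependents --remove         -> roots removed from e1; only specs no other
#                                    env needs (dt-diamond-left) are uninstalled
#   -f --dependents               -> force-uninstalled everywhere; both envs
#                                    keep their now-uninstalled roots
#   -f --dependents --remove      -> removed from e1 and force-uninstalled;
#                                    e2's roots are left dangling
#   -f --remove (no --dependents) -> only dt-diamond-bottom itself is
#                                    uninstalled, leaving dangling references
```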

View File

@@ -10,6 +10,7 @@
import spack.util.spack_yaml as s_yaml
from spack.main import SpackCommand
from spack.spec import Spec
activate = SpackCommand("activate")
extensions = SpackCommand("extensions")
@@ -261,3 +262,34 @@ def test_view_fails_with_missing_projections_file(tmpdir):
projection_file = os.path.join(str(tmpdir), "nonexistent")
with pytest.raises(SystemExit):
view("symlink", "--projection-file", projection_file, viewpath, "foo")
@pytest.mark.parametrize("with_projection", [False, True])
@pytest.mark.parametrize("cmd", ["symlink", "copy"])
def test_view_files_not_ignored(
tmpdir, mock_packages, mock_archive, mock_fetch, config, install_mockery, cmd, with_projection
):
spec = Spec("view-not-ignored").concretized()
pkg = spec.package
pkg.do_install()
pkg.assert_installed(spec.prefix)
install("view-dir-file") # Arbitrary package to add noise
viewpath = str(tmpdir.mkdir("view_{0}".format(cmd)))
if with_projection:
proj = str(tmpdir.join("proj.yaml"))
with open(proj, "w") as f:
f.write('{"projections":{"all":"{name}"}}')
prefix_in_view = os.path.join(viewpath, "view-not-ignored")
args = ["--projection-file", proj]
else:
prefix_in_view = viewpath
args = []
view(cmd, *(args + [viewpath, "view-not-ignored", "view-dir-file"]))
pkg.assert_installed(prefix_in_view)
view("remove", viewpath, "view-not-ignored")
pkg.assert_not_installed(prefix_in_view)

View File

@@ -352,6 +352,17 @@ def test_concretize_propagate_compiler_flag_not_passed_to_dependent(self):
assert set(spec.compiler_flags["cflags"]) == set(["-g"])
assert spec.satisfies("^openblas cflags='-O3'")
def test_mixing_compilers_only_affects_subdag(self):
spack.config.set("packages:all:compiler", ["clang", "gcc"])
spec = Spec("dt-diamond%gcc ^dt-diamond-bottom%clang").concretized()
for dep in spec.traverse():
assert ("%clang" in dep) == (dep.name == "dt-diamond-bottom")
def test_compiler_inherited_upwards(self):
spec = Spec("dt-diamond ^dt-diamond-bottom%clang").concretized()
for dep in spec.traverse():
assert "%clang" in dep
def test_architecture_inheritance(self):
"""test_architecture_inheritance is likely to fail with an
UnavailableCompilerVersionError if the architecture is concretized
@@ -1695,6 +1706,28 @@ def test_best_effort_coconcretize_preferences(self, specs, expected_spec, occura
counter += 1
assert counter == occurances, concrete_specs
def test_coconcretize_reuse_and_virtuals(self):
import spack.solver.asp
if spack.config.get("config:concretizer") == "original":
pytest.skip("Original concretizer cannot reuse")
reusable_specs = []
for s in ["mpileaks ^mpich", "zmpi"]:
reusable_specs.extend(spack.spec.Spec(s).concretized().traverse(root=True))
root_specs = [spack.spec.Spec("mpileaks"), spack.spec.Spec("zmpi")]
with spack.config.override("concretizer:reuse", True):
solver = spack.solver.asp.Solver()
setup = spack.solver.asp.SpackSolverSetup()
result, _, _ = solver.driver.solve(setup, root_specs, reuse=reusable_specs)
for spec in result.specs:
assert "zmpi" in spec
@pytest.mark.regression("30864")
def test_misleading_error_message_on_version(self, mutable_database):
# For this bug to be triggered we need a reusable dependency

View File

@@ -8,8 +8,6 @@
import pytest
import archspec
import spack.config
import spack.package_prefs
import spack.repo
@@ -105,28 +103,16 @@ def test_preferred_variants_from_wildcard(self):
update_packages("multivalue-variant", "variants", "foo=bar")
assert_variant_values("multivalue-variant foo=*", foo=("bar",))
def test_preferred_compilers(self):
@pytest.mark.parametrize(
"compiler_str,spec_str",
[("gcc@4.5.0", "mpileaks"), ("clang@12.0.0", "mpileaks"), ("gcc@4.5.0", "openmpi")],
)
def test_preferred_compilers(self, compiler_str, spec_str):
"""Test preferred compilers are applied correctly"""
if spack.config.get("config:concretizer") == "original":
pytest.skip("Fixing the parser broke this test for the original concretizer.")
# Need to make sure the test uses an available compiler
arch = spack.spec.ArchSpec(("test", "redhat6", archspec.cpu.host().name))
compiler_list = spack.compilers.compiler_specs_for_arch(arch)
assert compiler_list
# Try the first available compiler
compiler = str(compiler_list[0])
update_packages("mpileaks", "compiler", [compiler])
spec = concretize("mpileaks")
assert spec.compiler == spack.spec.CompilerSpec(compiler)
# Try the last available compiler
compiler = str(compiler_list[-1])
update_packages("mpileaks", "compiler", [compiler])
spec = concretize("mpileaks os=redhat6")
assert spec.compiler == spack.spec.CompilerSpec(compiler)
spec = spack.spec.Spec(spec_str)
update_packages(spec.name, "compiler", [compiler_str])
spec.concretize()
assert spec.compiler == spack.spec.CompilerSpec(compiler_str)
def test_preferred_target(self, mutable_mock_repo):
"""Test preferred targets are applied correctly"""
@@ -395,6 +381,23 @@ def test_buildable_false_all_true_virtual(self):
spec = Spec("mpich")
assert spack.package_prefs.is_spec_buildable(spec)
def test_buildable_false_virtual_true_pacakge(self):
conf = syaml.load_config(
"""\
mpi:
buildable: false
mpich:
buildable: true
"""
)
spack.config.set("packages", conf, scope="concretize")
spec = Spec("zmpi")
assert not spack.package_prefs.is_spec_buildable(spec)
spec = Spec("mpich")
assert spack.package_prefs.is_spec_buildable(spec)
def test_config_permissions_from_all(self, configure_permissions):
# Although these aren't strictly about concretization, they are
# configured in the same file and therefore convenient to test here.

View File

@@ -379,6 +379,17 @@ def test_substitute_config_variables(mock_low_high_config, monkeypatch):
os.path.join(mock_low_high_config.scopes["low"].path, os.path.join("foo", "bar", "baz"))
)
# test architecture information is in replacements
assert spack_path.canonicalize_path(
os.path.join("foo", "$platform", "bar")
) == os.path.abspath(os.path.join("foo", "test", "bar"))
host_target = spack.platforms.host().target("default_target")
host_target_family = str(host_target.microarchitecture.family)
assert spack_path.canonicalize_path(
os.path.join("foo", "$target_family", "bar")
) == os.path.abspath(os.path.join("foo", host_target_family, "bar"))
packages_merge_low = {"packages": {"foo": {"variants": ["+v1"]}, "bar": {"variants": ["+v2"]}}}

View File

@@ -467,6 +467,37 @@ def _conc_spec(compiler):
assert packages
def test_update_tasks_for_compiler_packages_as_compiler(mock_packages, config, monkeypatch):
spec = spack.spec.Spec("trivial-install-test-package").concretized()
installer = inst.PackageInstaller([(spec.package, {})])
# Add a task to the queue
installer._add_init_task(spec.package, installer.build_requests[0], False, {})
# monkeypatch to make the list of compilers be what we test
def fake_package_list(compiler, architecture, pkgs):
return [(spec.package, True)]
monkeypatch.setattr(inst, "_packages_needed_to_bootstrap_compiler", fake_package_list)
installer._add_bootstrap_compilers("fake", "fake", "fake", None, {})
# Check that the only task is now a compiler task
assert len(installer.build_pq) == 1
assert installer.build_pq[0][1].compiler
def test_bootstrapping_compilers_with_different_names_from_spec(
install_mockery, mutable_config, mock_fetch
):
with spack.config.override("config:install_missing_compilers", True):
with spack.concretize.disable_compiler_existence_check():
spec = spack.spec.Spec("trivial-install-test-package%oneapi@22.2.0").concretized()
spec.package.do_install()
assert spack.spec.CompilerSpec("oneapi@22.2.0") in spack.compilers.all_compiler_specs()
def test_dump_packages_deps_ok(install_mockery, tmpdir, mock_packages):
"""Test happy path for dump_packages with dependencies."""

View File

@@ -81,6 +81,15 @@ def test_file_layout(self, compiler, provider, factory, module_configuration):
else:
assert repetitions == 1
def test_compilers_provided_different_name(self, factory, module_configuration):
module_configuration("complex_hierarchy")
module, spec = factory("intel-oneapi-compilers%clang@3.3")
provides = module.conf.provides
assert "compiler" in provides
assert provides["compiler"] == spack.spec.CompilerSpec("oneapi@3.0")
def test_simple_case(self, modulefile_content, module_configuration):
"""Tests the generation of a simple TCL module file."""
@@ -298,7 +307,7 @@ def test_modules_relative_to_view(
):
with ev.Environment(str(tmpdir), with_view=True) as e:
module_configuration("with_view")
install("cmake")
install("--add", "cmake")
spec = spack.spec.Spec("cmake").concretized()

View File

@@ -200,3 +200,22 @@ def test_spec_list_matrix_exclude(self, mock_packages):
]
speclist = SpecList("specs", matrix)
assert len(speclist.specs) == 1
@pytest.mark.regression("22991")
def test_spec_list_constraints_with_structure(
self, mock_packages, mock_fetch, install_mockery
):
# Setup by getting hash and installing package with dep
libdwarf_spec = Spec("libdwarf").concretized()
libdwarf_spec.package.do_install()
# Create matrix
matrix = {
"matrix": [["mpileaks"], ["^callpath"], ["^libdwarf/%s" % libdwarf_spec.dag_hash()]]
}
# ensure the concrete spec was retained in the matrix entry of which
# it is a dependency
speclist = SpecList("specs", [matrix])
assert len(speclist.specs) == 1
assert libdwarf_spec in speclist.specs[0]
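For context, a small sketch of the matrix expansion this exercises: each constraint row is folded into the cartesian product of the package rows, which is why the hash-anchored `^libdwarf/<hash>` above survives into the resulting entry. Assuming the same mock packages:

```python
from spack.spec import Spec
from spack.spec_list import SpecList

matrix = {"matrix": [["mpileaks"], ["^callpath"]]}
speclist = SpecList("specs", [matrix])

assert len(speclist.specs) == 1               # one product entry: mpileaks ^callpath
assert Spec("callpath") in speclist.specs[0]  # the constraint was folded in
```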

View File

@@ -293,6 +293,12 @@ def test_canonicalize(self):
self.check_parse("x ^y", "x@: ^y@:")
def test_parse_redundant_deps(self):
self.check_parse("x ^y@foo", "x ^y@foo ^y@foo")
self.check_parse("x ^y@foo+bar", "x ^y@foo ^y+bar")
self.check_parse("x ^y@foo+bar", "x ^y@foo+bar ^y")
self.check_parse("x ^y@foo+bar", "x ^y ^y@foo+bar")
def test_parse_errors(self):
errors = ["x@@1.2", "x ^y@@1.2", "x@1.2::", "x::"]
self._check_raises(SpecParseError, errors)
@@ -481,7 +487,7 @@ def test_multiple_versions(self):
self._check_raises(MultipleVersionError, multiples)
def test_duplicate_dependency(self):
self._check_raises(DuplicateDependencyError, ["x ^y ^y"])
self._check_raises(DuplicateDependencyError, ["x ^y@1 ^y@2"])
def test_duplicate_compiler(self):
duplicates = [

View File

@@ -178,6 +178,71 @@ async def f():
"""
match_literal = """\
match status:
case 400:
return "Bad request"
case 404 | 418:
return "Not found"
case _:
return "Something's wrong with the internet"
"""
match_with_noop = """\
match status:
case 400:
return "Bad request"
"""
match_literal_and_variable = """\
match point:
case (0, 0):
print("Origin")
case (0, y):
print(f"Y={y}")
case (x, 0):
print(f"X={x}")
case (x, y):
print(f"X={x}, Y={y}")
case _:
raise ValueError("Not a point")
"""
match_classes = """\
class Point:
x: int
y: int
def location(point):
match point:
case Point(x=0, y=0):
print("Origin is the point's location.")
case Point(x=0, y=y):
print(f"Y={y} and the point is on the y-axis.")
case Point(x=x, y=0):
print(f"X={x} and the point is on the x-axis.")
case Point():
print("The point is located somewhere else on the plane.")
case _:
print("Not a point")
"""
match_nested = """\
match points:
case []:
print("No points in the list.")
case [Point(0, 0)]:
print("The origin is the only point in the list.")
case [Point(x, y)]:
print(f"A single point {x}, {y} is in the list.")
case [Point(0, y1), Point(0, y2)]:
print(f"Two points on the Y axis at {y1}, {y2} are in the list.")
case _:
print("Something else is found in the list.")
"""
def check_ast_roundtrip(code1, filename="internal", mode="exec"):
ast1 = compile(str(code1), filename, mode, ast.PyCF_ONLY_AST)
code2 = spack.util.unparse.unparse(ast1)
@@ -512,3 +577,12 @@ def test_async_with():
@pytest.mark.skipif(sys.version_info < (3, 5), reason="Not supported < 3.5")
def test_async_with_as():
check_ast_roundtrip(async_with_as)
@pytest.mark.skipif(sys.version_info < (3, 10), reason="Not supported < 3.10")
@pytest.mark.parametrize(
"literal",
[match_literal, match_with_noop, match_literal_and_variable, match_classes, match_nested],
)
def test_match_literal(literal):
check_ast_roundtrip(literal)

View File

@@ -27,16 +27,49 @@
__all__ = ["substitute_config_variables", "substitute_path_variables", "canonicalize_path"]
def architecture():
# break circular import
import spack.platforms
import spack.spec
host_platform = spack.platforms.host()
host_os = host_platform.operating_system("default_os")
host_target = host_platform.target("default_target")
return spack.spec.ArchSpec((str(host_platform), str(host_os), str(host_target)))
def get_user():
# Use pwd where available because it accounts for effective uids when using ksu and similar
try:
# user pwd for unix systems
import pwd
return pwd.getpwuid(os.geteuid()).pw_name
except ImportError:
# fallback on getpass
return getpass.getuser()
# Substitutions to perform
def replacements():
# break circular import from spack.util.executable
import spack.paths
arch = architecture()
return {
"spack": spack.paths.prefix,
"user": getpass.getuser(),
"user": get_user(),
"tempdir": tempfile.gettempdir(),
"user_cache_path": spack.paths.user_cache_path,
"architecture": str(arch),
"arch": str(arch),
"platform": str(arch.platform),
"operating_system": str(arch.os),
"os": str(arch.os),
"target": str(arch.target),
"target_family": str(arch.target.microarchitecture.family),
}
@@ -245,6 +278,13 @@ def substitute_config_variables(path):
- $tempdir Default temporary directory returned by tempfile.gettempdir()
- $user The current user's username
- $user_cache_path The user cache directory (~/.spack, unless overridden)
- $architecture The spack architecture triple for the current system
- $arch The spack architecture triple for the current system
- $platform The spack platform for the current system
- $os The OS of the current system
- $operating_system The OS of the current system
- $target The ISA target detected for the system
- $target_family The family of the target detected for the system
These are substituted case-insensitively into the path, and users can
use either ``$var`` or ``${var}`` syntax for the variables. $env is only
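For reference, a short sketch of how the new variables expand through `canonicalize_path`; the values shown are illustrative and depend entirely on the host:

```python
import spack.util.path as spack_path

# On an x86_64 Linux host these might expand to, for example:
spack_path.canonicalize_path("$user_cache_path/$platform")  # ~/.spack/linux
spack_path.canonicalize_path("logs/$target_family")         # <cwd>/logs/x86_64
spack_path.canonicalize_path("${architecture}")             # <cwd>/linux-ubuntu20.04-skylake
```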

View File

@@ -1243,3 +1243,95 @@ def visit_withitem(self, node):
if node.optional_vars:
self.write(" as ")
self.dispatch(node.optional_vars)
def visit_Match(self, node):
self.fill("match ")
self.dispatch(node.subject)
with self.block():
for case in node.cases:
self.dispatch(case)
def visit_match_case(self, node):
self.fill("case ")
self.dispatch(node.pattern)
if node.guard:
self.write(" if ")
self.dispatch(node.guard)
with self.block():
self.dispatch(node.body)
def visit_MatchValue(self, node):
self.dispatch(node.value)
def visit_MatchSingleton(self, node):
self._write_constant(node.value)
def visit_MatchSequence(self, node):
with self.delimit("[", "]"):
interleave(lambda: self.write(", "), self.dispatch, node.patterns)
def visit_MatchStar(self, node):
name = node.name
if name is None:
name = "_"
self.write("*{}".format(name))
def visit_MatchMapping(self, node):
def write_key_pattern_pair(pair):
k, p = pair
self.dispatch(k)
self.write(": ")
self.dispatch(p)
with self.delimit("{", "}"):
keys = node.keys
interleave(
lambda: self.write(", "),
write_key_pattern_pair,
zip(keys, node.patterns),
)
rest = node.rest
if rest is not None:
if keys:
self.write(", ")
self.write("**{}".format(rest))
def visit_MatchClass(self, node):
self.set_precedence(_Precedence.ATOM, node.cls)
self.dispatch(node.cls)
with self.delimit("(", ")"):
patterns = node.patterns
interleave(lambda: self.write(", "), self.dispatch, patterns)
attrs = node.kwd_attrs
if attrs:
def write_attr_pattern(pair):
attr, pattern = pair
self.write("{}=".format(attr))
self.dispatch(pattern)
if patterns:
self.write(", ")
interleave(
lambda: self.write(", "),
write_attr_pattern,
zip(attrs, node.kwd_patterns),
)
def visit_MatchAs(self, node):
name = node.name
pattern = node.pattern
if name is None:
self.write("_")
elif pattern is None:
self.write(node.name)
else:
with self.require_parens(_Precedence.TEST, node):
self.set_precedence(_Precedence.BOR, node.pattern)
self.dispatch(node.pattern)
self.write(" as {}".format(node.name))
def visit_MatchOr(self, node):
with self.require_parens(_Precedence.BOR, node):
self.set_precedence(pnext(_Precedence.BOR), *node.patterns)
interleave(lambda: self.write(" | "), self.dispatch, node.patterns)
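A quick roundtrip check for the new `match` visitors, in the spirit of `check_ast_roundtrip` from the tests above (requires Python 3.10+, since `match` is new syntax there):

```python
import ast

import spack.util.unparse

src = """\
match p:
    case [x, *rest] if x > 0:
        print(x, rest)
    case {"k": v, **extra}:
        print(v, extra)
    case _:
        pass
"""
tree = compile(src, "<match-demo>", "exec", ast.PyCF_ONLY_AST)
regen = compile(spack.util.unparse.unparse(tree), "<match-demo>", "exec", ast.PyCF_ONLY_AST)
assert ast.dump(tree) == ast.dump(regen)
```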

View File

@@ -0,0 +1,59 @@
import os
import sys
from os.path import dirname as dn
def main(argv=None):
# Find spack's location and its prefix.
this_file = os.path.realpath(os.path.expanduser(__file__))
spack_prefix = dn(dn(dn(dn(this_file))))
# Allow spack libs to be imported in our scripts
spack_lib_path = os.path.join(spack_prefix, "lib", "spack")
sys.path.insert(0, spack_lib_path)
# Add external libs
spack_external_libs = os.path.join(spack_lib_path, "external")
if sys.version_info[:2] <= (2, 7):
sys.path.insert(0, os.path.join(spack_external_libs, "py2"))
sys.path.insert(0, spack_external_libs)
# Here we delete ruamel.yaml in case it has already been imported from site
# (see #9206 for a broader description of the issue).
#
# Briefly: ruamel.yaml produces a .pth file when installed with pip that
# makes the site installed package the preferred one, even though sys.path
# is modified to point to another version of ruamel.yaml.
if "ruamel.yaml" in sys.modules:
del sys.modules["ruamel.yaml"]
if "ruamel" in sys.modules:
del sys.modules["ruamel"]
# The following code is here to avoid failures when updating
# the develop version, due to spurious argparse.pyc files remaining
# in the libs/spack/external directory, see:
# https://github.com/spack/spack/pull/25376
# TODO: Remove in v0.18.0 or later
try:
import argparse # noqa: F401
except ImportError:
argparse_pyc = os.path.join(spack_external_libs, "argparse.pyc")
if not os.path.exists(argparse_pyc):
raise
try:
os.remove(argparse_pyc)
import argparse # noqa: F401
except Exception:
msg = (
"The file\n\n\t{0}\n\nis corrupted and cannot be deleted by Spack. "
"Either delete it manually or ask some administrator to "
"delete it for you."
)
print(msg.format(argparse_pyc))
sys.exit(1)
import spack.main # noqa: E402
sys.exit(spack.main.main(argv))

View File

@@ -1,3 +1,74 @@
[project]
name="spack"
description="The spack package manager"
dependencies=[
"clingo",
"setuptools",
"six",
"types-six",
]
dynamic = ["version"]
[project.scripts]
spack = "lib.spack.spack_installable.main:main"
[tool.hatch.version]
path = "lib/spack/spack/__init__.py"
[project.optional-dependencies]
dev = [
"pip>=21.3",
"pytest",
"pytest-xdist",
"setuptools",
"click==8.0.2",
"black==21.12b0",
"mypy",
"isort",
"flake8",
"vermin",
]
ci = [
"pytest-cov",
"codecov[toml]",
]
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[tool.hatch.build.targets.wheel]
include = [
"/bin",
"/etc",
"/lib",
"/share",
"/var",
"CITATION.cff",
"COPYRIGHT",
"LICENSE-APACHE",
"LICENSE-MIT",
"NOTICE",
"README.md",
"SECURITY.md",
]
[tool.hatch.envs.default]
features = [
"dev",
]
[tool.hatch.envs.default.scripts]
spack = "./bin/spack"
style = "./bin/spack style"
test = "./bin/spack unit-test"
[tool.hatch.envs.ci]
features = [
"dev",
"ci",
]
[tool.black]
line-length = 99
target-version = ['py27', 'py35', 'py36', 'py37', 'py38', 'py39', 'py310']
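The `[project.scripts]` table above is what wires a pip-installed `spack` command to the new `spack_installable` shim. A minimal sketch of checking that wiring after an editable install (`pip install -e .`); note that `entry_points(group=..., name=...)` needs Python 3.10+:

```python
from importlib.metadata import entry_points

(ep,) = entry_points(group="console_scripts", name="spack")
main = ep.load()       # resolves lib.spack.spack_installable.main:main
main(["--version"])    # prints Spack's version, then exits via sys.exit
```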

View File

@@ -6,6 +6,7 @@ spack:
unify: false
config:
build_jobs: 32
concretizer: clingo
install_tree:
root: /home/software/spack
@@ -245,6 +246,7 @@ spack:
script:
- uname -a || true
- grep -E 'vendor|model name' /proc/cpuinfo 2>/dev/null | sort -u || head -n10 /proc/cpuinfo 2>/dev/null || true
- nproc
- curl -Lfs 'https://github.com/JuliaBinaryWrappers/GNUMake_jll.jl/releases/download/GNUMake-v4.3.0+1/GNUMake.v4.3.0.aarch64-linux-gnu.tar.gz' -o gmake.tar.gz
- printf '2322c175fb092b426f9eb6c24ee22d94ffa6759c3d0c260b74d81abd8120122b gmake.tar.gz' | sha256sum --check --strict --quiet
- tar -xzf gmake.tar.gz -C /usr bin/make 2> /dev/null

View File

@@ -6,6 +6,7 @@ spack:
unify: false
config:
build_jobs: 32
concretizer: clingo
install_tree:
root: /home/software/spack
@@ -242,6 +243,7 @@ spack:
script:
- uname -a || true
- grep -E 'vendor|model name' /proc/cpuinfo 2>/dev/null | sort -u || head -n10 /proc/cpuinfo 2>/dev/null || true
- nproc
- curl -Lfs 'https://github.com/JuliaBinaryWrappers/GNUMake_jll.jl/releases/download/GNUMake-v4.3.0+1/GNUMake.v4.3.0.x86_64-linux-gnu.tar.gz' -o gmake.tar.gz
- printf 'fef1f59e56d2d11e6d700ba22d3444b6e583c663d6883fd0a4f63ab8bd280f0f gmake.tar.gz' | sha256sum --check --strict --quiet
- tar -xzf gmake.tar.gz -C /usr bin/make 2> /dev/null

View File

@@ -6,6 +6,7 @@ spack:
unify: false
config:
build_jobs: 32
concretizer: clingo
install_tree:
root: /home/software/spack
@@ -152,6 +153,7 @@ spack:
script:
- uname -a || true
- grep -E 'vendor|model name' /proc/cpuinfo 2>/dev/null | sort -u || head -n10 /proc/cpuinfo 2>/dev/null || true
- nproc
- curl -Lfs 'https://github.com/JuliaBinaryWrappers/GNUMake_jll.jl/releases/download/GNUMake-v4.3.0+1/GNUMake.v4.3.0.aarch64-linux-gnu.tar.gz' -o gmake.tar.gz
- printf '2322c175fb092b426f9eb6c24ee22d94ffa6759c3d0c260b74d81abd8120122b gmake.tar.gz' | sha256sum --check --strict --quiet
- tar -xzf gmake.tar.gz -C /usr bin/make 2> /dev/null

View File

@@ -6,6 +6,7 @@ spack:
unify: false
config:
build_jobs: 32
concretizer: clingo
install_tree:
root: /home/software/spack
@@ -163,6 +164,7 @@ spack:
script:
- uname -a || true
- grep -E 'vendor|model name' /proc/cpuinfo 2>/dev/null | sort -u || head -n10 /proc/cpuinfo 2>/dev/null || true
- nproc
- curl -Lfs 'https://github.com/JuliaBinaryWrappers/GNUMake_jll.jl/releases/download/GNUMake-v4.3.0+1/GNUMake.v4.3.0.x86_64-linux-gnu.tar.gz' -o gmake.tar.gz
- printf 'fef1f59e56d2d11e6d700ba22d3444b6e583c663d6883fd0a4f63ab8bd280f0f gmake.tar.gz' | sha256sum --check --strict --quiet
- tar -xzf gmake.tar.gz -C /usr bin/make 2> /dev/null

View File

@@ -6,6 +6,7 @@ spack:
unify: when_possible
config:
build_jobs: 32
install_tree:
root: /home/software/spack
padded_length: 512
@@ -36,6 +37,7 @@ spack:
script:
- uname -a || true
- grep -E 'vendor|model name' /proc/cpuinfo 2>/dev/null | sort -u || head -n10 /proc/cpuinfo 2>/dev/null || true
- nproc
- curl -Lfs 'https://github.com/JuliaBinaryWrappers/GNUMake_jll.jl/releases/download/GNUMake-v4.3.0+1/GNUMake.v4.3.0.x86_64-linux-gnu.tar.gz' -o gmake.tar.gz
- printf 'fef1f59e56d2d11e6d700ba22d3444b6e583c663d6883fd0a4f63ab8bd280f0f gmake.tar.gz' | sha256sum --check --strict --quiet
- tar -xzf gmake.tar.gz -C /usr bin/make 2> /dev/null

View File

@@ -6,6 +6,7 @@ spack:
unify: false
config:
build_jobs: 32
install_tree:
root: /home/software/spack
padded_length: 512
@@ -49,6 +50,7 @@ spack:
script:
- uname -a || true
- grep -E 'vendor|model name' /proc/cpuinfo 2>/dev/null | sort -u || head -n10 /proc/cpuinfo 2>/dev/null || true
- nproc
- curl -Lfs 'https://github.com/JuliaBinaryWrappers/GNUMake_jll.jl/releases/download/GNUMake-v4.3.0+1/GNUMake.v4.3.0.x86_64-linux-gnu.tar.gz' -o gmake.tar.gz
- printf 'fef1f59e56d2d11e6d700ba22d3444b6e583c663d6883fd0a4f63ab8bd280f0f gmake.tar.gz' | sha256sum --check --strict --quiet
- tar -xzf gmake.tar.gz -C /usr bin/make 2> /dev/null

View File

@@ -216,6 +216,7 @@ spack:
script:
- uname -a || true
- grep -E 'vendor|model name' /proc/cpuinfo 2>/dev/null | sort -u || head -n10 /proc/cpuinfo 2>/dev/null || true
- nproc
- curl -Lfs 'https://github.com/JuliaBinaryWrappers/GNUMake_jll.jl/releases/download/GNUMake-v4.3.0+1/GNUMake.v4.3.0.powerpc64le-linux-gnu.tar.gz' -o gmake.tar.gz
- printf '8096d202fe0a0c400b8c0573c4b9e009f2f10d2fa850a3f495340f16e9c42454 gmake.tar.gz' | sha256sum --check --strict --quiet
- tar -xzf gmake.tar.gz -C /usr bin/make 2> /dev/null

View File

@@ -6,6 +6,7 @@ spack:
unify: when_possible
config:
build_jobs: 32
concretizer: clingo
install_tree:
root: /home/software/spack
@@ -268,6 +269,7 @@ spack:
script:
- uname -a || true
- grep -E 'vendor|model name' /proc/cpuinfo 2>/dev/null | sort -u || head -n10 /proc/cpuinfo 2>/dev/null || true
- nproc
- curl -Lfs 'https://github.com/JuliaBinaryWrappers/GNUMake_jll.jl/releases/download/GNUMake-v4.3.0+1/GNUMake.v4.3.0.x86_64-linux-gnu.tar.gz' -o gmake.tar.gz
- printf 'fef1f59e56d2d11e6d700ba22d3444b6e583c663d6883fd0a4f63ab8bd280f0f gmake.tar.gz' | sha256sum --check --strict --quiet
- tar -xzf gmake.tar.gz -C /usr bin/make 2> /dev/null

View File

@@ -6,6 +6,7 @@ spack:
unify: false
config:
build_jobs: 32
concretizer: clingo
install_tree:
root: /home/software/spack
@@ -258,6 +259,7 @@ spack:
script:
- uname -a || true
- grep -E 'vendor|model name' /proc/cpuinfo 2>/dev/null | sort -u || head -n10 /proc/cpuinfo 2>/dev/null || true
- nproc
- curl -Lfs 'https://github.com/JuliaBinaryWrappers/GNUMake_jll.jl/releases/download/GNUMake-v4.3.0+1/GNUMake.v4.3.0.x86_64-linux-gnu.tar.gz' -o gmake.tar.gz
- printf 'fef1f59e56d2d11e6d700ba22d3444b6e583c663d6883fd0a4f63ab8bd280f0f gmake.tar.gz' | sha256sum --check --strict --quiet
- tar -xzf gmake.tar.gz -C /usr bin/make 2> /dev/null

View File

@@ -6,6 +6,7 @@ spack:
unify: false
config:
build_jobs: 32
concretizer: clingo
install_tree:
root: /home/software/spack
@@ -23,6 +24,9 @@ spack:
# Horovod
- py-horovod
# Hugging Face
- py-transformers
# JAX
# https://github.com/google/jax/issues/12614
# - py-jax
@@ -89,6 +93,7 @@ spack:
script:
- uname -a || true
- grep -E 'vendor|model name' /proc/cpuinfo 2>/dev/null | sort -u || head -n10 /proc/cpuinfo 2>/dev/null || true
- nproc
- curl -Lfs 'https://github.com/JuliaBinaryWrappers/GNUMake_jll.jl/releases/download/GNUMake-v4.3.0+1/GNUMake.v4.3.0.x86_64-linux-gnu.tar.gz' -o gmake.tar.gz
- printf 'fef1f59e56d2d11e6d700ba22d3444b6e583c663d6883fd0a4f63ab8bd280f0f gmake.tar.gz' | sha256sum --check --strict --quiet
- tar -xzf gmake.tar.gz -C /usr bin/make 2> /dev/null

View File

@@ -6,6 +6,7 @@ spack:
unify: false
config:
build_jobs: 32
concretizer: clingo
install_tree:
root: /home/software/spack
@@ -26,6 +27,9 @@ spack:
# Horovod
- py-horovod
# Hugging Face
- py-transformers
# JAX
# https://github.com/google/jax/issues/12614
# - py-jax
@@ -92,6 +96,7 @@ spack:
script:
- uname -a || true
- grep -E 'vendor|model name' /proc/cpuinfo 2>/dev/null | sort -u || head -n10 /proc/cpuinfo 2>/dev/null || true
- nproc
- curl -Lfs 'https://github.com/JuliaBinaryWrappers/GNUMake_jll.jl/releases/download/GNUMake-v4.3.0+1/GNUMake.v4.3.0.x86_64-linux-gnu.tar.gz' -o gmake.tar.gz
- printf 'fef1f59e56d2d11e6d700ba22d3444b6e583c663d6883fd0a4f63ab8bd280f0f gmake.tar.gz' | sha256sum --check --strict --quiet
- tar -xzf gmake.tar.gz -C /usr bin/make 2> /dev/null

View File

@@ -6,6 +6,7 @@ spack:
unify: false
config:
build_jobs: 32
concretizer: clingo
install_tree:
root: /home/software/spack
@@ -28,6 +29,9 @@ spack:
# Horovod
- py-horovod
# Hugging Face
- py-transformers
# JAX
# https://github.com/google/jax/issues/12614
# - py-jax
@@ -95,6 +99,7 @@ spack:
script:
- uname -a || true
- grep -E 'vendor|model name' /proc/cpuinfo 2>/dev/null | sort -u || head -n10 /proc/cpuinfo 2>/dev/null || true
- nproc
- curl -Lfs 'https://github.com/JuliaBinaryWrappers/GNUMake_jll.jl/releases/download/GNUMake-v4.3.0+1/GNUMake.v4.3.0.x86_64-linux-gnu.tar.gz' -o gmake.tar.gz
- printf 'fef1f59e56d2d11e6d700ba22d3444b6e583c663d6883fd0a4f63ab8bd280f0f gmake.tar.gz' | sha256sum --check --strict --quiet
- tar -xzf gmake.tar.gz -C /usr bin/make 2> /dev/null

View File

@@ -6,6 +6,7 @@ spack:
unify: false
config:
build_jobs: 32
concretizer: clingo
install_tree:
root: /home/software/spack
@@ -58,6 +59,7 @@ spack:
script:
- uname -a || true
- grep -E 'vendor|model name' /proc/cpuinfo 2>/dev/null | sort -u || head -n10 /proc/cpuinfo 2>/dev/null || true
- nproc
- curl -Lfs 'https://github.com/JuliaBinaryWrappers/GNUMake_jll.jl/releases/download/GNUMake-v4.3.0+1/GNUMake.v4.3.0.aarch64-linux-gnu.tar.gz' -o gmake.tar.gz
- printf '2322c175fb092b426f9eb6c24ee22d94ffa6759c3d0c260b74d81abd8120122b gmake.tar.gz' | sha256sum --check --strict --quiet
- tar -xzf gmake.tar.gz -C /usr bin/make 2> /dev/null

View File

@@ -6,6 +6,7 @@ spack:
unify: false
config:
build_jobs: 32
concretizer: clingo
install_tree:
root: /home/software/spack
@@ -63,6 +64,7 @@ spack:
script:
- uname -a || true
- grep -E 'vendor|model name' /proc/cpuinfo 2>/dev/null | sort -u || head -n10 /proc/cpuinfo 2>/dev/null || true
- nproc
- curl -Lfs 'https://github.com/JuliaBinaryWrappers/GNUMake_jll.jl/releases/download/GNUMake-v4.3.0+1/GNUMake.v4.3.0.x86_64-linux-gnu.tar.gz' -o gmake.tar.gz
- printf 'fef1f59e56d2d11e6d700ba22d3444b6e583c663d6883fd0a4f63ab8bd280f0f gmake.tar.gz' | sha256sum --check --strict --quiet
- tar -xzf gmake.tar.gz -C /usr bin/make 2> /dev/null

View File

@@ -6,6 +6,7 @@ spack:
unify: false
config:
build_jobs: 32
concretizer: clingo
install_tree:
root: /home/software/radiuss
@@ -66,6 +67,7 @@ spack:
script:
- uname -a || true
- grep -E 'vendor|model name' /proc/cpuinfo 2>/dev/null | sort -u || head -n10 /proc/cpuinfo 2>/dev/null || true
- nproc
- curl -Lfs 'https://github.com/JuliaBinaryWrappers/GNUMake_jll.jl/releases/download/GNUMake-v4.3.0+1/GNUMake.v4.3.0.x86_64-linux-gnu.tar.gz' -o gmake.tar.gz
- printf 'fef1f59e56d2d11e6d700ba22d3444b6e583c663d6883fd0a4f63ab8bd280f0f gmake.tar.gz' | sha256sum --check --strict --quiet
- tar -xzf gmake.tar.gz -C /usr bin/make 2> /dev/null

View File

@@ -6,6 +6,7 @@ spack:
unify: false
config:
build_jobs: 32
install_tree:
root: /home/software/spack
padded_length: 512
@@ -67,6 +68,7 @@ spack:
script:
- uname -a || true
- grep -E 'vendor|model name' /proc/cpuinfo 2>/dev/null | sort -u || head -n10 /proc/cpuinfo 2>/dev/null || true
- nproc
- curl -Lfs 'https://github.com/JuliaBinaryWrappers/GNUMake_jll.jl/releases/download/GNUMake-v4.3.0+1/GNUMake.v4.3.0.x86_64-linux-gnu.tar.gz' -o gmake.tar.gz
- printf 'fef1f59e56d2d11e6d700ba22d3444b6e583c663d6883fd0a4f63ab8bd280f0f gmake.tar.gz' | sha256sum --check --strict --quiet
- tar -xzf gmake.tar.gz -C /usr bin/make 2> /dev/null

View File

@@ -1203,7 +1203,7 @@ _spack_info() {
_spack_install() {
if $list_options
then
SPACK_COMPREPLY="-h --help --only -u --until -j --jobs --overwrite --fail-fast --keep-prefix --keep-stage --dont-restage --use-cache --no-cache --cache-only --use-buildcache --include-build-deps --no-check-signature --show-log-on-error --source -n --no-checksum --deprecated -v --verbose --fake --only-concrete --no-add -f --file --clean --dirty --test --log-format --log-file --help-cdash --cdash-upload-url --cdash-build --cdash-site --cdash-track --cdash-buildstamp -y --yes-to-all -U --fresh --reuse"
SPACK_COMPREPLY="-h --help --only -u --until -j --jobs --overwrite --fail-fast --keep-prefix --keep-stage --dont-restage --use-cache --no-cache --cache-only --use-buildcache --include-build-deps --no-check-signature --show-log-on-error --source -n --no-checksum --deprecated -v --verbose --fake --only-concrete --add --no-add -f --file --clean --dirty --test --log-format --log-file --help-cdash --cdash-upload-url --cdash-build --cdash-site --cdash-track --cdash-buildstamp -y --yes-to-all -U --fresh --reuse"
else
_all_packages
fi
@@ -1824,7 +1824,7 @@ _spack_undevelop() {
_spack_uninstall() {
if $list_options
then
SPACK_COMPREPLY="-h --help -f --force -R --dependents -y --yes-to-all -a --all --origin"
SPACK_COMPREPLY="-h --help -f --force --remove -R --dependents -y --yes-to-all -a --all --origin"
else
_installed_packages
fi

View File

@@ -0,0 +1,28 @@
# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class IntelOneapiCompilers(Package):
"""Simple compiler package."""
homepage = "http://www.example.com"
url = "http://www.example.com/oneapi-1.0.tar.gz"
version("1.0", "0123456789abcdef0123456789abcdef")
version("2.0", "abcdef0123456789abcdef0123456789")
version("3.0", "def0123456789abcdef0123456789abc")
@property
def compiler_search_prefix(self):
return self.prefix.foo.bar.baz.bin
def install(self, spec, prefix):
# Create the minimal compiler that will fool `spack compiler find`
mkdirp(self.compiler_search_prefix)
with open(self.compiler_search_prefix.icx, "w") as f:
f.write('#!/bin/bash\necho "oneAPI DPC++ Compiler %s"' % str(spec.version))
set_executable(self.compiler_search_prefix.icx)

View File

@@ -0,0 +1,46 @@
# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os.path
from spack.package import *
class ViewNotIgnored(Package):
"""Install files that should not be ignored by spack."""
homepage = "http://www.spack.org"
url = "http://www.spack.org/downloads/aml-1.0.tar.gz"
has_code = False
version("0.1.0", sha256="cc89a8768693f1f11539378b21cdca9f0ce3fc5cb564f9b3e4154a051dcea69b")
install_test_files = [
"foo.spack",
".spack.bar",
"aspack",
"bin/foo.spack",
"bin/.spack.bar",
"bin/aspack",
]
def install(self, spec, prefix):
for test_file in self.install_test_files:
path = os.path.join(prefix, test_file)
mkdirp(os.path.dirname(path))
with open(path, "w") as f:
f.write(test_file)
@classmethod
def assert_installed(cls, prefix):
for test_file in cls.install_test_files:
path = os.path.join(prefix, test_file)
assert os.path.exists(path), "Missing installed file: {}".format(path)
@classmethod
def assert_not_installed(cls, prefix):
for test_file in cls.install_test_files:
path = os.path.join(prefix, test_file)
assert not os.path.exists(path), "File was not uninstalled: {}".format(path)

View File

@@ -28,9 +28,12 @@ class Cmake(Package):
executables = ["^cmake$"]
version("master", branch="master")
version("3.24.3", sha256="b53aa10fa82bff84ccdb59065927b72d3bee49f4d86261249fc0984b3b367291")
version("3.24.2", sha256="0d9020f06f3ddf17fb537dc228e1a56c927ee506b486f55fe2dc19f69bf0c8db")
version("3.24.1", sha256="4931e277a4db1a805f13baa7013a7757a0cbfe5b7932882925c7061d9d1fa82b")
version("3.24.0", sha256="c2b61f7cdecb1576cad25f918a8f42b8685d88a832fd4b62b9e0fa32e915a658")
version("3.23.5", sha256="f2944cde7a140b992ba5ccea2009a987a92413762250de22ebbace2319a0f47d")
version("3.23.4", sha256="aa8b6c17a5adf04de06e42c06adc7e25b21e4fe8378f44f703a861e5f6ac59c7")
version("3.23.3", sha256="06fefaf0ad94989724b56f733093c2623f6f84356e5beb955957f9ce3ee28809")
version("3.23.2", sha256="f316b40053466f9a416adf981efda41b160ca859e97f6a484b447ea299ff26aa")
version("3.23.1", sha256="33fd10a8ec687a4d0d5b42473f10459bb92b3ae7def2b745dc10b192760869f3")

View File

@@ -33,6 +33,7 @@ class Cosma(CMakePackage):
variant("cuda", default=False, description="Build with cuBLAS support")
variant("rocm", default=False, description="Build with rocBLAS support")
variant("scalapack", default=False, description="Build with ScaLAPACK API")
variant("shared", default=False, description="Build the shared library version")
depends_on("cmake@3.12:", type="build")
depends_on("mpi@3:")
@@ -91,10 +92,11 @@ def cosma_scalapack_cmake_arg(self):
def cmake_args(self):
return [
self.define("COSMA_WITH_TESTS", "OFF"),
self.define("COSMA_WITH_APPS", "OFF"),
self.define("COSMA_WITH_PROFILING", "OFF"),
self.define("COSMA_WITH_BENCHMARKS", "OFF"),
self.define("COSMA_WITH_TESTS", False),
self.define("COSMA_WITH_APPS", False),
self.define("COSMA_WITH_PROFILING", False),
self.define("COSMA_WITH_BENCHMARKS", False),
self.define("COSMA_BLAS", self.cosma_blas_cmake_arg()),
self.define("COSMA_SCALAPACK", self.cosma_scalapack_cmake_arg()),
self.define_from_variant("BUILD_SHARED_LIBS", "shared"),
]

View File

@@ -137,4 +137,12 @@ def cmake_args(self):
# (the LAPACK backend is still built though):
args.append(self.define("ENABLE_LAPACK", "linalg=lapack" in self.spec))
if "+admin" in self.spec and "+termlib" in self.spec["ncurses"]:
# Make sure that libeckit_cmd is linked to a library that resolves 'setupterm',
# 'tputs', etc. That is either libncurses (when 'ncurses~termlib') or libtinfo (when
# 'ncurses+termlib'). CMake considers the latter only if CURSES_NEED_NCURSES is set to
# TRUE. Note that the installation of eckit does not fail without this but the building
# of a dependent package (e.g. fdb) might fail due to the undefined references.
args.append(self.define("CURSES_NEED_NCURSES", True))
return args

View File

@@ -24,6 +24,7 @@ class Elfutils(AutotoolsPackage, SourcewarePackage):
maintainers = ["mwkrentel"]
version("0.188", sha256="fb8b0e8d0802005b9a309c60c1d8de32dd2951b56f0c3a3cb56d21ce01595dff")
version("0.187", sha256="e70b0dfbe610f90c4d1fe0d71af142a4e25c3c4ef9ebab8d2d72b65159d454c8")
version("0.186", sha256="7f6fb9149b1673d38d9178a0d3e0fb8a1ec4f53a9f4c2ff89469609879641177")
version("0.185", sha256="dc8d3e74ab209465e7f568e1b3bb9a5a142f8656e2b57d10049a73da2ae6b5a6")
@@ -46,6 +47,7 @@ class Elfutils(AutotoolsPackage, SourcewarePackage):
# Libraries for reading compressed DWARF sections.
variant("bzip2", default=False, description="Support bzip2 compressed sections.")
variant("xz", default=False, description="Support xz (lzma) compressed sections.")
variant("zstd", default=False, description="Support zstd compressed sections.", when="@0.182:")
# Native language support from libintl.
variant("nls", default=True, description="Enable Native Language Support.")
@@ -69,6 +71,7 @@ class Elfutils(AutotoolsPackage, SourcewarePackage):
depends_on("bzip2", type="link", when="+bzip2")
depends_on("xz", type="link", when="+xz")
depends_on("zstd", type="link", when="+zstd")
depends_on("zlib", type="link")
depends_on("gettext", when="+nls")
depends_on("m4", type="build")
@@ -114,6 +117,8 @@ def configure_args(self):
else:
args.append("--without-lzma")
args.extend(self.with_or_without("zstd", activation_value="prefix"))
# zlib is required
args.append("--with-zlib=%s" % spec["zlib"].prefix)

View File

@@ -13,10 +13,11 @@ class EnvironmentModules(Package):
"""
homepage = "https://cea-hpc.github.io/modules/"
url = "https://github.com/cea-hpc/modules/releases/download/v5.1.1/modules-5.1.1.tar.gz"
url = "https://github.com/cea-hpc/modules/releases/download/v5.2.0/modules-5.2.0.tar.gz"
maintainers = ["xdelaruelle"]
version("5.2.0", sha256="48f9f10864303df628a48cab17074820a6251ad8cd7d66dd62aa7798af479254")
version("5.1.1", sha256="1985f79e0337f63d6564b08db0238cf96a276a4184def822bb8ad37996dc8295")
version("5.1.0", sha256="1ab1e859b9c8bca8a8d332945366567fae4cf8dd7e312a689daaff46e7ffa949")
version("5.0.1", sha256="33a598eaff0713de09e479c2636ecde188b982584e56377f234e5065a61be7ba")

View File

@@ -12,10 +12,10 @@ class Flac(AutotoolsPackage):
homepage = "https://xiph.org/flac/index.html"
url = "http://downloads.xiph.org/releases/flac/flac-1.3.2.tar.xz"
version("1.4.2", sha256="e322d58a1f48d23d9dd38f432672865f6f79e73a6f9cc5a5f57fcaa83eb5a8e4")
version("1.3.3", sha256="213e82bd716c9de6db2f98bcadbc4c24c7e2efe8c75939a1a84e28539c4e1748")
version("1.3.2", sha256="91cfc3ed61dc40f47f050a109b08610667d73477af6ef36dcad31c31a4a8d53f")
version("1.3.1", sha256="4773c0099dba767d963fd92143263be338c48702172e8754b9bc5103efe1c56c")
version("1.3.0", sha256="fa2d64aac1f77e31dfbb270aeb08f5b32e27036a52ad15e69a77e309528010dc")
depends_on("libvorbis")
depends_on("id3lib")
depends_on("libogg@1.1.2:")

View File

@@ -13,6 +13,7 @@ class Fmt(CMakePackage):
homepage = "https://fmt.dev/"
url = "https://github.com/fmtlib/fmt/releases/download/7.1.3/fmt-7.1.3.zip"
maintainers = ["msimberg"]
version("9.1.0", sha256="cceb4cb9366e18a5742128cb3524ce5f50e88b476f1e54737a47ffdf4df4c996")
version("9.0.0", sha256="fc96dd2d2fdf2bded630787adba892c23cb9e35c6fd3273c136b0c57d4651ad6")
@@ -70,6 +71,13 @@ class Fmt(CMakePackage):
# Only allow [[attributes]] on C++11 and higher
patch("fmt-attributes-cpp11_4.1.0.patch", when="@4.1.0")
# Fix compilation with hipcc/dpcpp: https://github.com/fmtlib/fmt/issues/3005
patch(
"https://github.com/fmtlib/fmt/commit/0b0f7cfbfcebd021c910078003d413354bd843e2.patch?full_index=1",
sha256="08fb707bf8b4fc890d6eed29217ead666558cbae38f9249e22ddb82212f0eb4a",
when="@9.0.0:9.1.0",
)
def cmake_args(self):
spec = self.spec
args = []

View File

@@ -2,6 +2,9 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import sys
import spack.build_systems.makefile
from spack.package import *
@@ -13,7 +16,7 @@ class Glvis(MakefilePackage):
git = "https://github.com/glvis/glvis.git"
tags = ["radiuss"]
maintainers = ["goxberry", "v-dobrev", "tzanio", "tomstitt"]
maintainers = ["v-dobrev", "tzanio", "tomstitt", "goxberry"]
# glvis (like mfem) is downloaded from a URL shortener at request
# of upstream author Tzanio Kolev <tzanio@llnl.gov>. See here:
@@ -39,6 +42,20 @@ class Glvis(MakefilePackage):
version("develop", branch="master")
version(
"4.2",
sha256="314fb04040cd0a8128d6dac62ba67d7067c2c097364e5747182ee8371049b42a",
url="https://bit.ly/glvis-4-2",
extension=".tar.gz",
)
version(
"4.1",
sha256="7542c2942167533eec10d59b8331d18241798bbd86a7efbe51dc479db4127407",
url="https://bit.ly/glvis-4-1",
extension=".tar.gz",
)
version(
"4.0",
sha256="68331eaea8b93968ed6bf395388c2730b27bbcb4b7809ce44277726edccd9f08",
@@ -83,22 +100,31 @@ class Glvis(MakefilePackage):
variant("fonts", default=True, description="Use antialiased fonts via freetype & fontconfig")
depends_on("mfem@develop", when="@develop")
depends_on("mfem@4.4.0:", when="@4.2")
depends_on("mfem@4.3.0:", when="@4.1")
depends_on("mfem@4.0.0:", when="@4.0")
depends_on("mfem@3.4.0", when="@3.4")
depends_on("mfem@3.3", when="@3.3")
depends_on("mfem@3.2", when="@3.2")
depends_on("mfem@3.1", when="@3.1")
depends_on("gl")
depends_on("glu")
depends_on("libx11", when="@:3.5")
with when("@:3"):
depends_on("gl")
depends_on("glu")
depends_on("libx11")
with when("@4.0:,develop"):
with when("@4.0:"):
# On Mac, we use the OpenGL framework
if sys.platform.startswith("linux"):
depends_on("gl")
depends_on("sdl2")
depends_on("glm")
# On Mac, use external glew, e.g. from Homebrew
depends_on("glew")
# On Mac, use external freetype and fontconfig, e.g. from /opt/X11
depends_on("freetype")
depends_on("fontconfig")
depends_on("xxd", type="build")
with when("+fonts"):
depends_on("freetype")
@@ -106,7 +132,6 @@ class Glvis(MakefilePackage):
depends_on("libpng", when="screenshots=png")
depends_on("libtiff", when="screenshots=tiff")
depends_on("uuid", when="platform=linux")
class MakefileBuilder(spack.build_systems.makefile.MakefileBuilder):
@@ -127,15 +152,20 @@ def common_args(self):
"CONFIG_MK={0}".format(self.spec["mfem"].package.config_mk),
]
if self.spec.satisfies("@4.0:") or self.spec.satisfies("@develop"):
# TODO: glu and fontconfig dirs
if self.spec.satisfies("@4.0:"):
# Spack will inject the necessary include dirs and link paths via
# its compiler wrapper, so we can skip them:
result += [
"GLM_DIR={0}".format(spec["glm"].prefix),
"SDL_DIR={0}".format(spec["sdl2"].prefix),
"GLEW_DIR={0}".format(spec["glew"].prefix),
"FREETYPE_DIR={0}".format(spec["freetype"].prefix),
"OPENGL_DIR={0}".format(spec["gl"].home),
"GLM_DIR=",
"SDL_DIR=",
"GLEW_DIR=",
"FREETYPE_DIR=",
"OPENGL_DIR=",
]
# Spack will not inject include dirs like /usr/include/freetype2,
# so we need to do it ourselves:
if spec["freetype"].external:
result += ["GL_OPTS={0}".format(spec["freetype"].headers.cpp_flags)]
else:
gl_libs = spec["glu"].libs + spec["gl"].libs + spec["libx11"].libs
@@ -174,13 +204,13 @@ def fonts_args(self):
]
def xwd_args(self):
if self.spec.satisfies("@4.0:") or self.spec.satisfies("@develop"):
if self.spec.satisfies("@4.0:"):
return ["GLVIS_USE_LIBPNG=NO", "GLVIS_USE_LIBTIFF=NO"]
return ["USE_LIBPNG=NO", "USE_LIBTIFF=NO"]
def png_args(self):
prefix_args = ["USE_LIBPNG=YES", "USE_LIBTIFF=NO"]
if self.spec.satisfies("@4.0:") or self.spec.satisfies("@develop"):
if self.spec.satisfies("@4.0:"):
prefix_args = ["GLVIS_USE_LIBPNG=YES", "GLVIS_USE_LIBTIFF=NO"]
libpng = self.spec["libpng"]
@@ -191,7 +221,7 @@ def png_args(self):
def tiff_args(self):
prefix_args = ["USE_LIBPNG=NO", "USE_LIBTIFF=YES"]
if self.spec.satisfies("@4.0:") or self.spec.satisfies("@develop"):
if self.spec.satisfies("@4.0:"):
prefix_args = ["GLVIS_USE_LIBPNG=NO", "GLVIS_USE_LIBTIFF=YES"]
libtiff = self.spec["libtiff"]

View File

@@ -489,6 +489,13 @@ def cmake_args(self):
options.append(
"-DFFTWF_LIBRARIES={0}".format(self.spec["amdfftw"].libs.joined(";"))
)
elif "^armpl-gcc" in self.spec:
options.append(
"-DFFTWF_INCLUDE_DIR={0}".format(self.spec["armpl-gcc"].headers.directories[0])
)
options.append(
"-DFFTWF_LIBRARY={0}".format(self.spec["armpl-gcc"].libs.joined(";"))
)
# Ensure that the GROMACS log files report how the code was patched
# during the build, so that any problems are easier to diagnose.

View File

@@ -18,6 +18,7 @@ class Hepmc3(CMakePackage):
maintainers = ["vvolkl"]
version("3.2.5", sha256="cd0f75c80f75549c59cc2a829ece7601c77de97cb2a5ab75790cac8e1d585032")
version("3.2.4", sha256="e088fccfd1a6c2f8e1089f457101bee1e5c7a9777e9d51c6419c8a288a49e1bb")
version("3.2.3", sha256="8caadacc2c969883cd1f994b622795fc885fb4b15dad8c8ae64bcbdbf0cbd47d")
version("3.2.2", sha256="0e8cb4f78f804e38f7d29875db66f65e4c77896749d723548cc70fb7965e2d41")

View File

@@ -97,6 +97,7 @@ class Hypre(AutotoolsPackage, CudaPackage, ROCmPackage):
depends_on("rocsparse", when="+rocm")
depends_on("rocthrust", when="+rocm")
depends_on("rocrand", when="+rocm")
depends_on("rocprim", when="+rocm")
depends_on("umpire", when="+umpire")
for sm_ in CudaPackage.cuda_arch_values:
depends_on(

View File

@@ -27,6 +27,12 @@ class IntelOneapiAdvisor(IntelOneApiPackage):
)
if platform.system() == "Linux":
version(
"2022.3.1",
url="https://registrationcenter-download.intel.com/akdlm/irc_nas/18985/l_oneapi_advisor_p_2022.3.1.15323_offline.sh",
sha256="f05b58c2f13972b3ac979e4796bcc12a234b1e077400b5d00fc5df46cd228899",
expand=False,
)
version(
"2022.3.0",
url="https://registrationcenter-download.intel.com/akdlm/irc_nas/18872/l_oneapi_advisor_p_2022.3.0.8704_offline.sh",

View File

@@ -30,6 +30,12 @@ class IntelOneapiCcl(IntelOneApiLibraryPackage):
depends_on("intel-oneapi-mpi")
if platform.system() == "Linux":
version(
"2021.7.1",
url="https://registrationcenter-download.intel.com/akdlm/irc_nas/19029/l_oneapi_ccl_p_2021.7.1.16948_offline.sh",
sha256="daab05a0779db343b600253df8fea93ab0ed20bd630d89883dd651b6b540b1b2",
expand=False,
)
version(
"2021.7.0",
url="https://registrationcenter-download.intel.com/akdlm/irc_nas/18891/l_oneapi_ccl_p_2021.7.0.8733_offline.sh",

View File

@@ -2,6 +2,9 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
from llnl.util.link_tree import LinkTree
from spack.package import *
@@ -19,8 +22,6 @@ class IntelOneapiCompilersClassic(Package):
has_code = False
phases = []
# Versions before 2021 are in the `intel` package
# intel-oneapi versions before 2022 use intel@19.0.4
for ver, oneapi_ver in {
@@ -35,8 +36,19 @@ class IntelOneapiCompilersClassic(Package):
version(ver)
depends_on("intel-oneapi-compilers@" + oneapi_ver, when="@" + ver, type="run")
@property
def oneapi_compiler_prefix(self):
oneapi_version = self.spec["intel-oneapi-compilers"].version
return self.spec["intel-oneapi-compilers"].prefix.compiler.join(str(oneapi_version))
def setup_run_environment(self, env):
"""Adds environment variables to the generated module file."""
"""Adds environment variables to the generated module file.
These environment variables come from running:
.. code-block:: console
$ source {prefix}/{component}/{version}/env/vars.sh
and from setting CC/CXX/F77/FC
"""
oneapi_pkg = self.spec["intel-oneapi-compilers"].package
oneapi_pkg.setup_run_environment(env)
@@ -46,3 +58,26 @@ def setup_run_environment(self, env):
env.set("CXX", bin_prefix.icpc)
env.set("F77", bin_prefix.ifort)
env.set("FC", bin_prefix.ifort)
    def install(self, spec, prefix):
        # If we symlink top-level directories directly, files won't show up in views.
        # Create real dirs and symlink files instead.
        self.symlink_dir(self.oneapi_compiler_prefix.linux.bin.intel64, prefix.bin)
        self.symlink_dir(self.oneapi_compiler_prefix.linux.lib, prefix.lib)
        self.symlink_dir(self.oneapi_compiler_prefix.linux.include, prefix.include)
        self.symlink_dir(self.oneapi_compiler_prefix.linux.compiler, prefix.compiler)
        self.symlink_dir(self.oneapi_compiler_prefix.documentation.en.man, prefix.man)
    def symlink_dir(self, src, dest):
        # Create a real directory at dest
        mkdirp(dest)
        # Symlink all files in src to dest, keeping directories as dirs
        for entry in os.listdir(src):
            src_path = os.path.join(src, entry)
            dest_path = os.path.join(dest, entry)
            if os.path.isdir(src_path) and os.access(src_path, os.X_OK):
                link_tree = LinkTree(src_path)
                link_tree.merge(dest_path)
            else:
                os.symlink(src_path, dest_path)
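
The `install()` comment captures the key constraint: Spack views merge directories entry-by-entry, so directories must be real while individual files may be links. A minimal standalone sketch of the same idea using only the standard library (the helper name is hypothetical):

```python
import os

def symlink_files_only(src, dest):
    """Recreate src under dest: real directories, symlinked files."""
    os.makedirs(dest, exist_ok=True)
    for entry in os.listdir(src):
        src_path = os.path.join(src, entry)
        dest_path = os.path.join(dest, entry)
        if os.path.isdir(src_path):
            symlink_files_only(src_path, dest_path)  # recurse: keep dirs real
        else:
            os.symlink(src_path, dest_path)  # link individual files
```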


@@ -10,6 +10,17 @@
from spack.package import *
linux_versions = [
    {
        "version": "2022.2.1",
        "cpp": {
            "url": "https://registrationcenter-download.intel.com/akdlm/irc_nas/19049/l_dpcpp-cpp-compiler_p_2022.2.1.16991_offline.sh",
            "sha256": "3f0f02f9812a0cdf01922d2df9348910c6a4cb4f9dfe50fc7477a59bbb1f7173",
        },
        "ftn": {
            "url": "https://registrationcenter-download.intel.com/akdlm/irc_nas/18998/l_fortran-compiler_p_2022.2.1.16992_offline.sh",
            "sha256": "64f1d1efbcdc3ac2182bec18313ca23f800d94f69758db83a1394490d9d4b042",
        },
    },
    {
        "version": "2022.2.0",
        "cpp": {
@@ -138,6 +149,10 @@ class IntelOneapiCompilers(IntelOneApiPackage):
    def component_dir(self):
        return "compiler"

    @property
    def compiler_search_prefix(self):
        return self.prefix.compiler.join(str(self.version)).linux.bin
    def setup_run_environment(self, env):
        """Adds environment variables to the generated module file.
@@ -158,7 +173,7 @@ def setup_run_environment(self, env):
    def install(self, spec, prefix):
        # Copy instead of install to speed up debugging
        # install_tree("/opt/intel/oneapi/compiler", self.prefix)

        # install cpp
        super(IntelOneapiCompilers, self).install(spec, prefix)
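
For context, a table like `linux_versions` above is typically consumed in the class body to register one `version()` for the C++ installer and a matching Fortran-installer `resource()` per entry. A hedged sketch, whose directive arguments are assumptions and may differ from the package's exact code:

```python
# Inside the IntelOneapiCompilers class body (illustrative, not the
# package's exact code):
for v in linux_versions:
    version(v["version"], url=v["cpp"]["url"], sha256=v["cpp"]["sha256"], expand=False)
    resource(
        name="fortran-installer",
        url=v["ftn"]["url"],
        sha256=v["ftn"]["sha256"],
        when="@{0}".format(v["version"]),
        expand=False,
        placement="fortran-installer",
    )
```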


@@ -29,6 +29,12 @@ class IntelOneapiDal(IntelOneApiLibraryPackage):
    )

    if platform.system() == "Linux":
        version(
            "2021.7.1",
            url="https://registrationcenter-download.intel.com/akdlm/irc_nas/19032/l_daal_oneapi_p_2021.7.1.16996_offline.sh",
            sha256="2328927480b0ba5d380028f981717b63ee323f8a1616a491a160a0a0b239e285",
            expand=False,
        )
        version(
            "2021.7.0",
            url="https://registrationcenter-download.intel.com/akdlm/irc_nas/18895/l_daal_oneapi_p_2021.7.0.8746_offline.sh",


@@ -29,6 +29,12 @@ class IntelOneapiDnn(IntelOneApiLibraryPackage):
    )

    if platform.system() == "Linux":
        version(
            "2022.2.1",
            url="https://registrationcenter-download.intel.com/akdlm/irc_nas/19035/l_onednn_p_2022.2.1.16994_offline.sh",
            sha256="2102964a36a5b58b529385706e6829456ee5225111c33dfce6326fff5175aace",
            expand=False,
        )
        version(
            "2022.2.0",
            url="https://registrationcenter-download.intel.com/akdlm/irc_nas/18933/l_onednn_p_2022.2.0.8750_offline.sh",


@@ -22,6 +22,12 @@ class IntelOneapiDpct(IntelOneApiPackage):
    homepage = "https://www.intel.com/content/www/us/en/developer/tools/oneapi/dpc-compatibility-tool.html#gs.2p8km6"

    if platform.system() == "Linux":
        version(
            "2022.2.1",
            url="https://registrationcenter-download.intel.com/akdlm/irc_nas/18991/l_dpcpp-ct_p_2022.2.1.14994_offline.sh",
            sha256="ea2fbe36de70eb3c78c97133f81e0b2a2fbcfc9525e77125a183d7af446ef3e6",
            expand=False,
        )
        version(
            "2022.2.0",
            url="https://registrationcenter-download.intel.com/akdlm/irc_nas/18908/l_dpcpp-ct_p_2022.2.0.8701_offline.sh",


@@ -25,6 +25,12 @@ class IntelOneapiDpl(IntelOneApiLibraryPackage):
    homepage = "https://github.com/oneapi-src/oneDPL"

    if platform.system() == "Linux":
        version(
            "2021.7.2",
            url="https://registrationcenter-download.intel.com/akdlm/irc_nas/19046/l_oneDPL_p_2021.7.2.15007_offline.sh",
            sha256="84d60a6b1978ff45d2c416f18ca7df542eaa8c0b18dc3abf4bb0824a91b4fc44",
            expand=False,
        )
        version(
            "2021.7.1",
            url="https://registrationcenter-download.intel.com/akdlm/irc_nas/18846/l_oneDPL_p_2021.7.1.8713_offline.sh",


@@ -27,6 +27,12 @@ class IntelOneapiInspector(IntelOneApiPackage):
    homepage = "https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/inspector.html"

    if platform.system() == "Linux":
        version(
            "2022.3.1",
            url="https://registrationcenter-download.intel.com/akdlm/irc_nas/19005/l_inspector_oneapi_p_2022.3.1.15318_offline.sh",
            sha256="62aa2abf6928c0f4fc60ccfb69375297f823c183aea2519d7344e09c9734c1f8",
            expand=False,
        )
        version(
            "2022.3.0",
            url="https://registrationcenter-download.intel.com/akdlm/irc_nas/18924/l_inspector_oneapi_p_2022.3.0.8706_offline.sh",

Some files were not shown because too many files have changed in this diff.