Compare commits

...

134 Commits

Author SHA1 Message Date
Harmen Stoppels
6467ddefa3 Set version to v0.21.3 2024-10-02 20:05:46 +02:00
Massimiliano Culpo
f0fee0c223 Add compatibility of sequoia with previous macOS versions (#45127)
* Add compatibility of sequoia with previous macOS versions

* Add compatibility of sequoia with previous macOS versions
2024-10-02 20:05:46 +02:00
Massimiliano Culpo
4668616f55 ASP-based solver: update os compatibility for macOS (#43862) 2024-10-02 20:05:46 +02:00
Adam J. Stewart
280a77424d Add support for macOS Sequoia (#45018) 2024-10-02 20:05:46 +02:00
Harmen Stoppels
71fc3e74bd archspec: revert f2c8cdd1bc532f6e55209ef8112f79c5565f082d 2024-10-02 20:05:46 +02:00
Massimiliano Culpo
80fbef84bd Bump archspec to latest commit (#46445)
This should fix an issue with Neoverse XX detection

Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2024-10-02 20:05:46 +02:00
Massimiliano Culpo
bd9186038a Update archspec to v0.2.5-dev (7e6740012b897ae4a950f0bba7e9726b767e921f) (#45721) 2024-10-02 20:05:46 +02:00
Massimiliano Culpo
4da21d8f60 Update vendored archspec to v0.2.4 (#44005) 2024-10-02 20:05:46 +02:00
Massimiliano Culpo
581a339cf9 Update archspec to v0.2.3 (#42854) 2024-10-02 20:05:46 +02:00
Harmen Stoppels
45f673b822 remove macos-11: it no longer exists 2024-10-02 20:05:46 +02:00
Harmen Stoppels
4a9a5b6eed Unit tests: skip tests that intermittently fail on Windows (#42909) 2024-10-02 20:05:46 +02:00
Massimiliano Culpo
a2b92822d4 Add macos-14 as a runner (Apple M1) (#42728)
* Add macos-14 as a runner (Apple M1)

* Mark a test xfail

We need to check later if this test needs modifications
on Apple Silicon chips.

---------

Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
Co-authored-by: alalazo <alalazo@users.noreply.github.com>
2024-10-02 20:05:46 +02:00
Harmen Stoppels
0b79ead099 tests: fix wrong install name (#46711) 2024-10-02 20:05:46 +02:00
Todd Gamblin
8fe02b8d50 bugfix: make test_requires_directive work on more platforms (#41943)
Literal compiler config in `test_requires_directive` specifically lists `target:
x86_64`, but it doesn't need to, and the unnecessary target makes the test fail on
non-`x86_64` machines.

- [x] Remove target from config yaml in `test_requires_directive`
2024-10-02 20:05:46 +02:00
Harmen Stoppels
3530e44c02 url join: fix oci scheme (#46483)
* url.py: also special case oci scheme in join

* avoid fetching keys from oci mirror
2024-10-02 20:05:46 +02:00
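For context, the underlying pitfall (an illustration, not code from the PR): Python's `urljoin` returns the second argument unchanged for schemes it does not treat as relative, such as `oci://`, so the base URL is silently dropped unless the scheme is special-cased:

```python
from urllib.parse import urljoin

# http(s) is in urllib's uses_relative list, so joining works:
print(urljoin("https://example.com/v2/", "manifests/latest"))
# -> https://example.com/v2/manifests/latest

# oci is not, so urljoin discards the base entirely:
print(urljoin("oci://ghcr.io/user/image", "manifests/latest"))
# -> manifests/latest
```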
Harmen Stoppels
532d844f26 spack.util.url: fix join breakage in python 3.12.6 (#46453) 2024-10-02 20:05:46 +02:00
Harmen Stoppels
907238a7e8 directives: forward compat for c, cxx, fortran deps 2024-10-02 20:05:46 +02:00
Harmen Stoppels
ed816d3f0c Add pkg- prefix to builtin.mock a b c d ... (#45205) 2024-10-02 20:05:46 +02:00
Massimiliano Culpo
d2096e8fc1 Allow unit test to work on Apple M1/M2 (#43363) 2024-10-02 20:05:46 +02:00
Harmen Stoppels
0c766f2e9b Set version to v0.21.3.dev0 2024-10-02 20:05:46 +02:00
Massimiliano Culpo
89319413d5 Update CHANGELOG and set version to v0.21.2 2024-03-03 16:02:50 +01:00
Axel Huebl
a6ef73f7f2 Fix mgard: OpenMP on AppleClang (#42933)
macOS AppleClang does not provide OpenMP by default with Xcode.
Use LLVM's OpenMP to fix compile errors of mgard with OpenMP (the default).
2024-03-03 16:02:49 +01:00
Adam J. Stewart
4a84039795 py-transformers: add v4.35.2 (#41266) 2024-03-02 21:09:49 +01:00
Alec Scott
e5f4c6ad75 rust: add v1.75.0 & v1.74.0, merge related variants into +dev, add rust-analyzer (#41903)
* Add rust-analyzer as variant to rust build

* Expose cargo module only when +cargo

* rust: add v1.74.0 and v1.75.0 and remove variants in favor of +dev

* [@spackbot] updating style on behalf of alecbcs

* Fix variant typo

---------

Co-authored-by: alecbcs <alecbcs@users.noreply.github.com>
2024-03-01 21:01:52 +01:00
Alec Scott
c0c66a0fed rust: add v1.73.0 and add support for external openssl certs (#41161)
Co-authored-by: Tom Scogland <scogland1@llnl.gov>
2024-03-01 21:01:51 +01:00
Massimiliano Culpo
3d73193ecc Fix style tests for backports 2024-02-29 11:35:38 +01:00
Massimiliano Culpo
eb27ee7ae8 ASP-based solver: fix issue with conditional requirements and trigger conditions (#42566)
The lack of a rule to avoid enforcing requirements on multi-valued variants, when the condition activating the environment was not met, resulted in multiple optimal solutions. The fix is to prevent imposing a requirement if the when= rule activating it is not met.
2024-02-29 11:35:38 +01:00
Harmen Stoppels
2883559b49 compilers: fixup order of arguments to satisfies (#42682) 2024-02-29 11:35:38 +01:00
Massimiliano Culpo
7ce9f621d9 Fix copyright year for CI 2024-02-29 11:35:38 +01:00
Matthew Whitlock
0616290c7f Update packages_yaml.rst (#42438)
Fix an incorrect example.
2024-02-29 11:35:38 +01:00
Greg Becker
9963e2a20c Fix using sticky variants in externals (#42253) 2024-02-29 11:35:38 +01:00
Harmen Stoppels
cb4312996c repo.py: pass package name not fully qualified package name (#42217) 2024-02-29 11:35:38 +01:00
Harmen Stoppels
b107de072b oci: use pickleable errors (#42160) 2024-02-29 11:35:38 +01:00
Harmen Stoppels
dd58e922e7 oci: only push in parallel when forking (#42143) 2024-02-29 11:35:38 +01:00
Massimiliano Culpo
b23a829c4c Fix a bug when a required provider is requested for multiple virtuals (#42088) 2024-02-29 11:35:38 +01:00
Massimiliano Culpo
9af5eca9ec Fix using fully-qualified namespaces from root specs (#41957)
Explicitly requested namespaces are annotated during
the setup phase, and used to retrieve the correct package
class.

An attribute for the namespace has been added for each node.

Currently, a single namespace per package is allowed
during concretization.
2024-02-29 11:35:38 +01:00
Jordan Galby
2489b137d9 Fix setup-env when going back and forth between instances (#40924)
* setup-env: Fix back and forth between two instances

* setup-env.csh: Fix SPACK_ROOT when switch to a different instance

i.e. Always look for the current SPACK_ROOT

* setup-env: Update comments
2024-02-29 11:35:38 +01:00
Owen Solberg
64d046100a Containerize: accommodate nested or pre-existing spack-env paths (#41558)
The current `mkdir {{ paths.environment }}` will generate an error if:
* `{{ paths.environment }}` already exists, or
* `{{ paths.environment }}` is nested in non-existing dirs.

Adding `-p` to the command will make this robust to both possibilities.

Set noclobber bash option when writing manifest.
2024-02-29 11:35:38 +01:00
Massimiliano Culpo
000fe1b5a1 Set version to 0.21.2.dev0 2024-02-29 11:35:38 +01:00
Massimiliano Culpo
e30fedab10 Update CHANGELOG and version 2024-01-12 10:16:58 +01:00
eugeneswalker
a7c6df1b5a e4s ci: disable gpu test stack (#41296) 2024-01-12 10:16:58 +01:00
Harmen Stoppels
19e86088cd binary_distribution.py: support build cache layout 2 (#41773)
Add forward compatibility for tarballs created by Spack 0.22, which
use build cache layout version 2.

Spack 0.21 continues to produce build cache layout version 1 tarballs.

Build cache layout version 2 also lists parent directories of the
package prefix in the tarball, which is required for certain container
runtimes.
2024-01-11 09:40:22 +01:00
Harmen Stoppels
0295b466d7 Dont expect __qualname__ to exist (#41989) 2024-01-11 09:40:22 +01:00
Harmen Stoppels
68e9547615 installer.py: do not tty.die when cache only fails (#41990) 2024-01-11 09:40:22 +01:00
Harmen Stoppels
0ab9c8b904 installer.py: don't dereference stage before installing from binaries (#41986)
This fixes an issue where pkg.stage throws because a patch cannot be found,
but the patch is redundant because the spec is reused from a build cache and
will be installed from existing binaries.
2024-01-11 09:40:22 +01:00
Miguel Dias Costa
dd768bb6c3 update BerkeleyGW source urls (#38218)
* update url for BerkeleyGW version 3.0.1
* update source urls and add version 3.1.0 to berkeleygw package
2024-01-11 09:40:22 +01:00
Harmen Stoppels
b15f9d011c Spec.format: error on old style format strings (#41934) 2024-01-11 09:40:22 +01:00
Massimiliano Culpo
4f7cce68b8 ASP-based solver: don't error for type mismatch on preferences (#41138)
This commit discards type mismatches or failures to validate a package preference during concretization. The values discarded are logged as debug level messages. It also adds a config audit to help users spot misconfigurations in packages.yaml preferences.
2024-01-11 09:40:22 +01:00
Massimiliano Culpo
f39a1e5fc8 Fix an issue with deconcretization/reconcretization of environments (#41294) 2024-01-11 09:40:22 +01:00
Massimiliano Culpo
df2a3bd531 ASP-based solver: use a unique ID counter (#41290)
* solver: use a unique counter for condition, triggers and effects

* Do not reset counters when re-running setup

  What we need is just a unique ID, it doesn't need
  to start from zero every time.
2024-01-11 09:40:22 +01:00
Todd Gamblin
e29049d9c0 bugfix: sort variants in spack info --variants-by-name (#41389)
This was missed while backporting the new `spack info` command from #40326.

Variants should be sorted by name when invoking `spack info --variants-by-name`.
2024-01-11 09:40:22 +01:00
Stephen Sachs
de2249c334 clingo-bootstrap: use new Spack API for environment modifications (#41574) 2024-01-11 09:40:22 +01:00
Harmen Stoppels
03d7643480 tests: fix more cases of env variables (#41226) 2024-01-11 09:40:22 +01:00
Massimiliano Culpo
9f2b8eef7a Refactor a test to not use the "working_env" fixture (#41308)
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
2024-01-11 09:40:22 +01:00
Harmen Stoppels
f57ac8d2da tests: fix issue with os.environ binding (#41342) 2024-01-11 09:40:22 +01:00
Harmen Stoppels
539fa5c39a test_variant_propagation_with_unify_false: missing fixture (#41345) 2024-01-11 09:40:22 +01:00
Harmen Stoppels
50710c2b6e tests: fix side effects of default_config fixture (#41361)
* tests: default_config drop scope

* use default_config elsewhere

* use parse_install_tree for missing defaults in default config
2024-01-11 09:40:22 +01:00
Harmen Stoppels
f1b2515ce1 tests: add missing mutable db (#41359) 2024-01-11 09:40:22 +01:00
Harmen Stoppels
37f65e769c unit tests: replace /bin/bash with /bin/sh (#41495) 2024-01-11 09:40:22 +01:00
Harmen Stoppels
16b600c193 tests: use temporary_store (#41369) 2024-01-11 09:40:22 +01:00
Harmen Stoppels
bb62f71aa0 asp.py: remove "CLI" reference (#41718)
Can also be an environment root, or programmatically
`Spec("x").concretized()`.
2024-01-11 09:40:22 +01:00
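A small sketch of the programmatic path mentioned above (assuming it runs inside `spack python`; not code from the commit):

```python
import spack.spec

# A solve can be triggered from Python, not only from the CLI or an
# environment; this is the Spec("x").concretized() case from the message.
spec = spack.spec.Spec("zlib").concretized()
print(spec.format("{name}{@version}{/hash:7}"))
```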
Juan Miguel Carceller
fff8e16cdd Add a webgui patch (#41404)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2024-01-11 09:40:22 +01:00
Dave Keeshan
33c287eed8 Fix filter_compiler_wrapper where compiler is None (#41502)
Fix filter_compiler_wrapper for cases where the compiler returned is None; this happens on some installed gcc systems that do not have Fortran built into them as standard, e.g. gcc@11.4.0 on Ubuntu 22.04
2024-01-11 09:40:22 +01:00
Robert Cohn
2d5ccd3068 handle use of an unconfigured compiler (#41213) 2024-01-10 20:28:14 +01:00
Michael Kuhn
125085c580 Fix multi-word aliases (#41126)
PR #40929 reverted the argument parsing to make `spack --verbose
install` work again. It looks like `--verbose` is the only instance
where this kind of argument inheritance is used since all other commands
override arguments with the same name instead. For instance, `spack
--bootstrap clean` does not invoke `spack clean --bootstrap`.

Therefore, fix multi-word aliases again by parsing the resolved
arguments and instead explicitly passing down `args.verbose` to commands.
2024-01-10 20:28:14 +01:00
Massimiliano Culpo
e70e401be1 spack graph: fix coloring with environments (#41240)
If we use all specs, we won't correctly color build-only dependencies
2024-01-10 20:28:14 +01:00
John W. Parent
e8bbd7763c MSVC preview version breaks clingo build (#41185)
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2024-01-10 20:28:14 +01:00
Harmen Stoppels
712db8c85c setup_platform_environment before package env mods (#41205)
This roughly restores the order of operations from Spack 0.20,
where `AutotoolsPackage.setup_build_environment` would
override the env variable set in `setup_platform_environment` on
macOS.
2024-01-10 20:28:14 +01:00
Massimiliano Culpo
47c560d526 ASP-based solver: don't emit spurious debug output (#41218)
When improving the error message, we started `#show`ing
a lot more symbols in the answer set, but we forgot to
suppress the debug messages warning about UNKNOWN SYMBOLs.
2024-01-10 20:28:14 +01:00
Harmen Stoppels
5b386cf9b1 test_which: do not mutate os.environ 2024-01-10 20:28:14 +01:00
Harmen Stoppels
2cbe84b1ee docs: document how spack picks a version / variant (#41070) 2024-01-10 20:28:14 +01:00
Massimiliano Culpo
e1f98fd206 Improve the error message for deprecated preferences (#41075)
Improves the warning for deprecated preferences, and adds a configuration
audit to get files:lines details of the issues.

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2024-01-10 20:28:14 +01:00
Massimiliano Culpo
d5ef4f8c83 Add audit check to spot when= arguments using wrong named specs (#41107)
* Add audit check to spot when= arguments using named specs

* Fix package issues caught by the new audit
2024-01-10 20:28:14 +01:00
Harmen Stoppels
b49022822d docs: packages config on separate page, demote bootstrapping (#41085) 2024-01-10 20:28:14 +01:00
Massimiliano Culpo
198bd87914 Fix infinite recursion when computing concretization errors (#41061) 2024-01-10 20:28:14 +01:00
Harmen Stoppels
080a781b81 Set version to 0.21.1.dev0 2024-01-10 20:28:14 +01:00
Todd Gamblin
65d3221a9c Update version and CHANGELOG.md for v0.21.0
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
2023-11-11 03:32:22 -08:00
Todd Gamblin
f6b17f6329 update release branch for tutorial command 2023-11-11 03:32:22 -08:00
Greg Becker
09e9bb5c3d spack deconcretize command (#38803)
We have two ways to concretize now:
* `spack concretize` concretizes only the root specs that are not concrete in the environment.
* `spack concretize -f` eliminates all cached concretization data and reconcretizes the *entire* environment.

This PR adds `spack deconcretize`, which eliminates cached concretization data for a spec.  This allows
users greater control over what is preserved from their `spack.lock` file and what is reused when not
using `spack concretize -f`.  If you want to update a spec installed in your environment, you can call
`spack deconcretize` on it, and that spec and any relevant dependents will be removed from the lock file.

`spack concretize` has two options:
* `--root`: limits deconcretized specs to *specific* roots in the environment. You can use this to
  deconcretize exactly one root in a `unify: false` environment.  i.e., if `foo` root is a dependent
  of `bar`, both roots, `spack deconcretize bar` will *not* deconcretize `foo`.
* `--all`: deconcretize *all* specs that match the input spec. By default `spack deconcretize`
  will complain about multiple matches, like `spack uninstall`.
2023-11-11 03:32:22 -08:00
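A minimal usage sketch of the workflow described above (hypothetical session; `meson` stands in for any spec in the environment):

```console
$ spack env activate myenv
$ spack deconcretize meson    # drops meson and its dependents from spack.lock
$ spack concretize            # re-resolves only what was deconcretized
```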
Massimiliano Culpo
f6dc557764 builtin.repo: fix ^mkl pattern in minor packages (#41003)
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2023-11-11 03:32:22 -08:00
Massimiliano Culpo
74172fb9d2 gromacs et al: fix ^mkl pattern (#41002)
The ^mkl pattern was used to refer to three packages
even though none of the software using it actually
depended on "mkl".

This pattern, which follows Hyrum's law, is now being
removed in favor of a more explicit one.

In this PR gromacs, abinit, lammps, and quantum-espresso
are modified.

Intel packages are also modified to provide "lapack"
and "blas" together.
2023-11-11 03:32:22 -08:00
Harmen Stoppels
c266e69cde env: compute env mods only for installed roots (#40997)
And improve the error message (load vs unload).

Of course you could have some uninstalled dependency too, but as long as
it doesn't implement `setup_run_environment` etc, I don't think it hurts
to attempt to load the root anyways, given that failure to do so is a
warning, not a fatal error.
2023-11-11 03:32:22 -08:00
Todd Gamblin
fe57ec2ab7 info: rework spack info command to display variants better (#40998)
This changes variant display to use a much more legible format, and to use screen space
much better (particularly on narrow terminals). It also adds color to the variant display
to match other parts of `spack info`.

Descriptions and variant value lists that were frequently squished into a tiny column
before now have closer to the full terminal width.

This change also preserves any whitespace formatting present in `package.py`, so package
maintainers can make easier-to-read descriptions of variant values if they want. For
example, `gasnet` has had a nice description of the `conduits` variant for a while, but
it was wrapped and made illegible by `spack info`. That is now fixed and the original
newlines are kept.

Conditional variants are grouped by their when clauses by default, but if you do not
like the grouping, you can display all the variants in order with `--variants-by-name`.
I'm not sure when people will prefer this, but it makes it easier to tell that a
particular variant is/isn't there. I do think grouping by `when` is the better default.
2023-11-11 03:32:22 -08:00
Adam J. Stewart
3c3476a176 py-black: add v23.10: (#40959) 2023-11-11 03:32:22 -08:00
Scott Wittenburg
67f20c3e5c buildcache: skip unrecognized metadata files (#40941)
This commit improves forward compatibility of Spack with newer build cache metadata formats.

Before this commit, invalid or unrecognized metadata would be fatal errors, now they just cause
a mirror to be skipped.

Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
2023-11-11 03:32:22 -08:00
Satish Balay
1baf712b87 intel-oneapi-mkl: do not set __INTEL_POST_CFLAGS env variable (#40947)
This triggers warnings from the icx compiler, which breaks the petsc configure:

$ I_MPI_CC=icx /opt/intel/oneapi/mpi/2021.7.0/bin/mpiicc -E a.c > /dev/null
$ __INTEL_POST_CFLAGS=-Wl,-rpath,/opt/intel/oneapi/mkl/2022.2.0/lib/intel64 I_MPI_CC=icx /opt/intel/oneapi/mpi/2021.7.0/bin/mpiicc -E a.c > /dev/null
icx: warning: -Wl,-rpath,/opt/intel/oneapi/mkl/2022.2.0/lib/intel64: 'linker' input unused [-Wunused-command-line-argument]
2023-11-09 11:44:57 +01:00
Massimiliano Culpo
c73ec0b36d modules: remove deprecated code and test data (#40966)
This removes a few deprecated attributes from the
schema of the "modules" section. Test data for
deprecated options is removed as well.
2023-11-09 11:44:57 +01:00
Harmen Stoppels
9d58d5e645 modules: restore exclude_implicits (#40958) 2023-11-09 11:44:57 +01:00
Massimiliano Culpo
fc5fd7fc60 tcl: filter compiler wrappers to avoid pointing to Spack (#40946) 2023-11-09 11:44:57 +01:00
Tom Vander Aa
efa59510a8 libevent: always autogen.sh (#40945)
The libevent release tarballs ship with a `configure` script generated by an old `libtool`. The `libtool` generated by `configure` is not compatible with `MACOSX_DEPLOYMENT_TARGET` > 10. Regenerating the `configure` scripts fixes the build on macOS.

Original configure contains:
```
    case $host_os in
    rhapsody* | darwin1.[012])
      _lt_dar_allow_undefined='$wl-undefined ${wl}suppress' ;;
    darwin1.*)
      _lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;;
    darwin*) # darwin 5.x on
      # if running on 10.5 or later, the deployment target defaults
      # to the OS version, if on x86, and 10.4, the deployment
      # target defaults to 10.4. Don't you love it?
      case ${MACOSX_DEPLOYMENT_TARGET-10.0},$host in
        10.0,*86*-darwin8*|10.0,*-darwin[91]*)
          _lt_dar_allow_undefined='$wl-undefined ${wl}dynamic_lookup' ;;
        10.[012][,.]*)
          _lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;;
        10.*)
          _lt_dar_allow_undefined='$wl-undefined ${wl}dynamic_lookup' ;;
      esac
```

After re-running `autogen.sh`:
```
    case $host_os in
    rhapsody* | darwin1.[012])
      _lt_dar_allow_undefined='$wl-undefined ${wl}suppress' ;;
    darwin1.*)
      _lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;;
    darwin*)
      case $MACOSX_DEPLOYMENT_TARGET,$host in
        10.[012],*|,*powerpc*-darwin[5-8]*)
          _lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;;
        *)
          _lt_dar_allow_undefined='$wl-undefined ${wl}dynamic_lookup' ;;
      esac
```
2023-11-09 11:44:57 +01:00
Harmen Stoppels
6094ee5eae Revert "defaults/modules.yaml: hide implicits (#40906)" (#40955)
This reverts commit a2f00886e9.
2023-11-09 11:44:57 +01:00
Greg Becker
f7cacdbf40 tutorial stack: update for changes to the basics section for SC23 (#40942) 2023-11-08 08:56:33 +01:00
Harmen Stoppels
5152738084 tutorial: use lmod@8.7.18 because @8.7.19: has bugs (#40939) 2023-11-08 08:56:33 +01:00
Richarda Butler
9ba8d60789 Propagate variant across nodes that don't have that variant (#38512)
Before this PR, variants were not propagated to leaf nodes that could accept
the propagated value, if some intermediate node couldn't accept it.

This PR fixes that issue by marking nodes as "candidate" for propagation
and by setting the variant only if it can be accepted by the node.

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2023-11-08 08:56:33 +01:00
Harmen Stoppels
3c8cd8d30c Ensure global command line arguments end up in args like before (#40929) 2023-11-08 08:56:33 +01:00
Harmen Stoppels
53a31bbbbf tutorial pipeline: force gcc@12.3.0 (#40937) 2023-11-08 08:56:33 +01:00
Harmen Stoppels
bac314a4f1 spack tutorial: use backports/v0.21.0 branch 2023-11-08 08:56:33 +01:00
Harmen Stoppels
10ba172611 catch exceptions in which_string (#40935) 2023-11-08 08:56:33 +01:00
Massimiliano Culpo
c232bf435a Change container labeling so that "latest" is the latest tag (#40593)
* Use `major.minor.patch`, `major.minor`, `major` in tags

* Ensure `latest` is the semver largest version, and not "latest in time"

* Remove Ubuntu 18.04 from the list of images
2023-11-07 11:53:36 +01:00
Massimiliano Culpo
f3537bc66b ASP: targets, compilers and providers soft-preferences are only global (#31261)
Modify the packages.yaml schema so that soft-preferences on targets,
compilers and providers can only be specified under the "all" attribute.
This makes them effectively global preferences.

Version preferences instead can only be specified under a
package-specific section.

If a preference attribute is found in a section where it should
not be, it will be ignored and a warning is printed to the screen.
2023-11-07 07:46:06 +01:00
Massimiliano Culpo
4004f27bc0 archspec: update to v0.2.2 (#40917)
Adds support for Neoverse V2
2023-11-07 07:44:52 +01:00
Todd Gamblin
910190f55b database: optimize query() by skipping unnecessary virtual checks (#40898)
Most queries will end up calling `spec.satisfies(query)` on everything in the DB, which
will cause Spack to ask whether the query spec is virtual if its name doesn't match the
target spec's. This can be expensive, because it can cause Spack to check if any new
virtuals showed up in *all* the packages it knows about. That can currently trigger
thousands of `stat()` calls.

We can avoid the virtual check for most successful queries if we consider that if there
*is* a match by name, the query spec *can't* be virtual. This PR adds an optimization to
the query loop to save any comparisons that would trigger a virtual check for last.

- [x] Add a `deferred` list to the `query()` loop.
- [x] First run through the `query()` loop *only* checks for name matches.
- [x] Query loop now returns early if there's a name match, skipping most `satisfies()` calls.
- [x] Second run through the `deferred()` list only runs if query spec is virtual.
- [x] Fix up handling of concrete specs.
- [x] Add test for querying virtuals in DB.
- [x] Avoid allocating deferred if not necessary.

---------

Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
2023-11-07 01:00:37 +00:00
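A rough sketch of the deferral pattern described above, with illustrative names (not Spack's actual `query()` code):

```python
def query(db_specs, query_spec, is_virtual):
    # Cheap name comparisons first; the expensive virtual check runs
    # only if nothing in the database matched the query spec by name.
    matches, deferred, name_matched = [], [], False
    for spec in db_specs:
        if spec.name == query_spec.name:
            name_matched = True
            if spec.satisfies(query_spec):
                matches.append(spec)
        else:
            deferred.append(spec)  # may still match through a virtual
    # A name match means the query spec cannot be virtual, so the
    # deferred specs only need checking when no name ever matched.
    if not name_matched and is_virtual(query_spec.name):
        matches.extend(s for s in deferred if s.satisfies(query_spec))
    return matches
```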
Harmen Stoppels
4ce80b95f3 spack compiler find --[no]-mixed-toolchain (#40902)
Currently there's some hacky logic in the AppleClang compiler that makes
it also accept `gfortran` as a fortran compiler if `flang` is not found.

This is guarded by `if sys.platform` checks s.t. it only applies to
Darwin.

But on Linux the feature of detecting mixed toolchains is highly
requested too, cause it's rather annoying to run into a failed build of
`openblas` after dozens of minutes of compiling its dependencies, just
because clang doesn't have a fortran compiler.

In particular in CI where the system compilers may change during system
updates, it's typically impossible to fix compilers in a hand-written
compilers.yaml config file: the config will almost certainly be outdated
sooner or later, and maintaining one config file per target machine and
writing logic to select the correct config is rather undesirable too.

---

This PR introduces a flag `spack compiler find --mixed-toolchain` that
fills out missing `fc` and `f77` entries in `clang` / `apple-clang` by
picking the best matching `gcc`.

It is enabled by default on macOS, but not on Linux, matching current
behavior of `spack compiler find`.

The "best matching gcc" logic and compiler path updates are identical to
how compiler path dictionaries are currently flattened "horizontally"
(per compiler id). This just adds logic to do the same "vertically"
(across different compiler ids).

So, with this change on Ubuntu 22.04:

```
$ spack compiler find --mixed-toolchain
==> Added 6 new compilers to /home/harmen/.spack/linux/compilers.yaml
    gcc@13.1.0  gcc@12.3.0  gcc@11.4.0  gcc@10.5.0  clang@16.0.0  clang@15.0.7
==> Compilers are defined in the following files:
    /home/harmen/.spack/linux/compilers.yaml

```

you finally get:

```
compilers:
- compiler:
    spec: clang@=15.0.7
    paths:
      cc: /usr/bin/clang
      cxx: /usr/bin/clang++
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran
    flags: {}
    operating_system: ubuntu23.04
    target: x86_64
    modules: []
    environment: {}
    extra_rpaths: []
- compiler:
    spec: clang@=16.0.0
    paths:
      cc: /usr/bin/clang-16
      cxx: /usr/bin/clang++-16
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran
    flags: {}
    operating_system: ubuntu23.04
    target: x86_64
    modules: []
    environment: {}
    extra_rpaths: []
```

The "best gcc" is automatically default system gcc, since it has no
suffixes / prefixes.
2023-11-06 15:17:31 -08:00
Sinan
8f1f9048ec package/qgis: add latest ltr (#40752)
* package/qgis: add latest ltr

* fix bug

* [@spackbot] updating style on behalf of Sinan81

* make flake happy

---------

Co-authored-by: sbulut <sbulut@3vgeomatics.com>
Co-authored-by: Sinan81 <Sinan81@users.noreply.github.com>
2023-11-06 15:55:20 -07:00
Harmen Stoppels
e7372a54a1 docs: expand section about relocation, suggest padding (#40909) 2023-11-06 14:49:54 -08:00
Michael Kuhn
5074b7e922 Add support for aliases (#17229)
Add a new config section: `config:aliases`, which is a dictionary mapping aliases
to commands.

For instance:


```yaml
config:
    aliases:
        sp: spec -I
```

will define a new command `sp` that will execute `spec` with the `-I`
argument. 

Aliases cannot override existing commands, and this is ensured with a test.

We cannot currently alias subcommands. Spack will warn about any aliases
containing a space, but will not error, which leaves room for subcommand
aliases in the future.

---------

Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
2023-11-06 14:37:46 -08:00
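A hypothetical session using the alias from the example above:

```console
$ spack sp hdf5    # runs: spack spec -I hdf5
```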
Harmen Stoppels
461eb944bd Don't let runtime env variables of compiler like deps leak into the build environment (#40916)
* Test that setup_run_environment changes to CC/CXX/FC/F77 are dropped in build env

* compilers set in run env shouldn't impact build

Adds `drop` to EnvironmentModifications courtesy of @haampie, and uses
it to clear modifications of CC, CXX, F77 and FC made by
`setup_{,dependent_}run_environment` routines when producing an
environment in BUILD context.

* comment / style

* comment

---------

Co-authored-by: Tom Scogland <scogland1@llnl.gov>
2023-11-06 14:30:27 -08:00
Harmen Stoppels
4700108b5b fix prefix_inspections keys in example (#40904) 2023-11-06 13:22:13 -08:00
Harmen Stoppels
3384181868 docs: mention public build cache for GHA (#40908) 2023-11-06 13:21:16 -08:00
Vicente Bolea
f0f6e54b29 adios2: add v2.9.2 release (#40832) 2023-11-06 12:15:29 -08:00
Harmen Stoppels
a2f00886e9 defaults/modules.yaml: hide implicits (#40906) 2023-11-06 10:37:29 -08:00
Harmen Stoppels
1235084c20 Introduce default_args context manager (#39964)
This adds a rather trivial context manager that lets you deduplicate repeated
arguments in directives, e.g.

```python
depends_on("py-x@1", when="@1", type=("build", "run"))
depends_on("py-x@2", when="@2", type=("build", "run"))
depends_on("py-x@3", when="@3", type=("build", "run"))
depends_on("py-x@4", when="@4", type=("build", "run"))
```

can be condensed to

```python
with default_args(type=("build", "run")):
    depends_on("py-x@1", when="@1")
    depends_on("py-x@2", when="@2")
    depends_on("py-x@3", when="@3")
    depends_on("py-x@4", when="@4")
```

The advantage is it's clear for humans, the downside it's less clear for type checkers due to type erasure.
2023-11-06 10:22:29 -08:00
Greg Becker
b5538960c3 error messages: condition chaining (#40173)
Create chains of causation for error messages.

The current implementation is only completed for some of the many errors presented by the concretizer. The rest will need to be filled out over time, but this demonstrates the capability.

The basic idea is to associate conditions in the solver with one another in causal relationships, and to associate errors with the proximate causes of their facts in the condition graph. Then we can construct causal trees to explain errors, which will hopefully present users with useful information to avoid the error or report issues.

Technically, this is implemented as a secondary solve. The concretizer computes the optimal model, and if the optimal model contains an error, then a secondary solve computes causation information about the error(s) in the concretizer output.

Examples:

$ spack solve hdf5 ^cmake@3.0.1
==> Error: concretization failed for the following reasons:

   1. Cannot satisfy 'cmake@3.0.1'
   2. Cannot satisfy 'cmake@3.0.1'
        required because hdf5 ^cmake@3.0.1 requested from CLI 
   3. Cannot satisfy 'cmake@3.18:' and 'cmake@3.0.1
        required because hdf5 ^cmake@3.0.1 requested from CLI 
        required because hdf5 depends on cmake@3.18: when @1.13: 
          required because hdf5 ^cmake@3.0.1 requested from CLI 
   4. Cannot satisfy 'cmake@3.12:' and 'cmake@3.0.1
        required because hdf5 depends on cmake@3.12: 
          required because hdf5 ^cmake@3.0.1 requested from CLI 
        required because hdf5 ^cmake@3.0.1 requested from CLI

$ spack spec cmake ^curl~ldap   # <-- with curl configured non-buildable and an external with `+ldap`
==> Error: concretization failed for the following reasons:

   1. Attempted to use external for 'curl' which does not satisfy any configured external spec
   2. Attempted to build package curl which is not buildable and does not have a satisfying external
        attr('variant_value', 'curl', 'ldap', 'True') is an external constraint for curl which was not satisfied
   3. Attempted to build package curl which is not buildable and does not have a satisfying external
        attr('variant_value', 'curl', 'gssapi', 'True') is an external constraint for curl which was not satisfied
   4. Attempted to build package curl which is not buildable and does not have a satisfying external
        'curl+ldap' is an external constraint for curl which was not satisfied
        'curl~ldap' required
        required because cmake ^curl~ldap requested from CLI 

$ spack solve yambo+mpi ^hdf5~mpi
==> Error: concretization failed for the following reasons:

   1. 'hdf5' required multiple values for single-valued variant 'mpi'
   2. 'hdf5' required multiple values for single-valued variant 'mpi'
    Requested '~mpi' and '+mpi'
        required because yambo depends on hdf5+mpi when +mpi 
          required because yambo+mpi ^hdf5~mpi requested from CLI 
        required because yambo+mpi ^hdf5~mpi requested from CLI 
   3. 'hdf5' required multiple values for single-valued variant 'mpi'
    Requested '~mpi' and '+mpi'
        required because netcdf-c depends on hdf5+mpi when +mpi 
          required because netcdf-fortran depends on netcdf-c 
            required because yambo depends on netcdf-fortran 
              required because yambo+mpi ^hdf5~mpi requested from CLI 
          required because netcdf-fortran depends on netcdf-c@4.7.4: when @4.5.3: 
            required because yambo depends on netcdf-fortran 
              required because yambo+mpi ^hdf5~mpi requested from CLI 
          required because yambo depends on netcdf-c 
            required because yambo+mpi ^hdf5~mpi requested from CLI 
          required because yambo depends on netcdf-c+mpi when +mpi 
            required because yambo+mpi ^hdf5~mpi requested from CLI 
        required because yambo+mpi ^hdf5~mpi requested from CLI 

Future work:

In addition to fleshing out the causes of other errors, I would like to find a way to associate different components of the error messages with different causes. In this example it's pretty easy to infer which part is which, but I'm not confident that will always be the case. 

See the previous PR #34500 for discussion of how the condition chains are incomplete. In the future, we may need custom logic for individual attributes to associate some important choice rules with conditions such that clingo choices or other derivations can be part of the explanation.

---------

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2023-11-06 09:55:21 -08:00
Michael Kuhn
d3d82e8d6b c-blosc2: add v2.11.1 (#40889) 2023-11-06 09:48:42 -08:00
Tamara Dahlgren
17a9198c78 Environments: remove environments created with SpackYAMLErrors (#40878) 2023-11-06 18:48:28 +01:00
Juan Miguel Carceller
c6c689be28 pythia8: fix configure args (#40644)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2023-11-06 09:33:23 -08:00
AMD Toolchain Support
ab563c09d2 enable threading in amdlibflame (#40852)
Co-authored-by: vkallesh <Vijay-teekinavar.Kallesh@amd.com>
2023-11-06 09:20:19 -08:00
Sergio Sánchez Ramírez
abdac36fd5 Add Python as build dependency of Julia (#40903) 2023-11-06 09:03:38 -07:00
Harmen Stoppels
b8a18f0a78 mpich: remove unnecessary tuples and upperbounds (#40899)
* mpich: remove unnecessary tuples

* remove redundant :3.3.99 upperbound
2023-11-06 07:58:50 -07:00
Wouter Deconinck
17656b2ea0 qt: new version 5.15.11 (#40884)
* qt: new version 5.15.11

* qt: open end patch for qtlocation when gcc-10:
2023-11-06 06:08:19 -07:00
Harmen Stoppels
3c641c8509 spack env activate: create & activate default environment without args (#40756)
This PR implements the concept of "default environment", which doesn't have to be
created explicitly. The aim is to lower the barrier for adopting environments.

To (create and) activate the default environment, run

```
$ spack env activate
```

This mimics the behavior of

```
$ cd
```

which brings you to your home directory.

This is not a breaking change, since `spack env activate` without arguments
currently errors. It is similar to the already existing `spack env activate --temp`
command which always creates an env in a temporary directory, the difference
is that the default environment is a managed / named environment named `default`.

The name `default` is not a reserved name, it's just that `spack env activate`
creates it for you if you don't have it already.

With this change, you can get started with environments faster:

```
$ spack env activate [--prompt]
$ spack install --add x y z
```

instead of

```
$ spack env create default
==> Created environment 'default' in /Users/harmenstoppels/spack/var/spack/environments/default
==> You can activate this environment with:
==>   spack env activate default
$ spack env activate [--prompt] default 
$ spack install --add x y z
```

Notice that Spack supports switching (but not stacking) environments, so the
parallel with `cd` is pretty clear:

```
$ spack env activate named_env
$ spack env status
==> In environment named_env
$ spack env activate
$ spack env status
==> In environment default
```
2023-11-05 22:53:26 -08:00
Michael Kuhn
141c7de5d8 Add command and package suggestions (#40895)
* Add command suggestions

This adds suggestions of similar commands in case users mistype a
command. Before:
```
$ spack spack
==> Error: spack is not a recognized Spack command or extension command; check with `spack commands`.
```
After:
```
$ spack spack
==> Error: spack is not a recognized Spack command or extension command; check with `spack commands`.

Did you mean one of the following commands?
  spec
  patch
```

* Add package name suggestions

* Remove suggestion to run spack clean -m
2023-11-05 14:32:09 -08:00
Todd Gamblin
f6b23b4653 bugfix: compress aliases for first command in completion (#40890)
This completes to `spack concretize`:

```
spack conc<tab>
```

but this still gets hung up on the difference between `concretize` and `concretise`:

```
spack -e . conc<tab>
```

We were checking `"$COMP_CWORD" = 1`, which tracks the word on the command line
including any flags and their args, but we should track `"$COMP_CWORD_NO_FLAGS" = 1` to
figure out if the arg we're completing is the first real command.
2023-11-05 10:15:37 +00:00
Harmen Stoppels
4755b28398 Hidden modules: always append hash (#40868) 2023-11-05 08:56:11 +01:00
Tamara Dahlgren
c9dfb9b0fd Environments: Add support for including definitions files (#33960)
This PR adds support for including separate definitions from `spack.yaml`.

Supporting the inclusion of files with definitions enables users to make
curated/standardized collections of packages that can be re-used by others.
2023-11-05 00:47:06 -07:00
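A hedged sketch of what this enables; the file name, list name, and specs are all illustrative:

```yaml
# definitions.yaml -- a curated, reusable list of specs
definitions:
- base_specs: [zlib, hdf5+mpi]
```

```yaml
# spack.yaml -- pulls the definitions in via include:
spack:
  include:
  - definitions.yaml
  specs:
  - $base_specs
```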
Veselin Dobrev
5a67c578b7 mfem: allow cuda/rocm builds with superlu-dist built without cuda/rocm (#40847) 2023-11-04 20:15:56 -05:00
Michael Kuhn
e47be18acb c-blosc: add v1.21.5 (#40888) 2023-11-04 16:51:37 -07:00
Harmen Stoppels
6593d22c4e spack.modules.common: pass spec to SetupContext (#40886)
Currently module globals aren't set before running
`setup_[dependent_]run_environment` to compute environment modifications
for module files. This commit fixes that.
2023-11-04 20:42:47 +00:00
Massimiliano Culpo
f51dad976e hdf5-vol-async: better specify dependency condition (#40882) 2023-11-04 20:31:52 +01:00
Cameron Rutherford
ff8cd597e0 hiop: fix cuda constraints (#40875) 2023-11-04 13:09:59 -05:00
eugeneswalker
fd22d109a6 sundials +sycl: add cxxflags=-fsycl via flag_handler (#40845) 2023-11-04 08:55:19 -05:00
zv-io
88ee3a0fba linux-headers: support multiple versions (#40877)
The download URL for linux-headers was hardcoded to 4.x;
we need to derive the correct URL from the version number.
2023-11-04 12:21:12 +01:00
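In packaging terms the usual fix is a `url_for_version` override; a sketch of the idea (not the package's actual code):

```python
def url_for_version(self, version):
    # kernel.org groups releases by major version: v4.x, v5.x, v6.x, ...
    return f"https://www.kernel.org/pub/linux/kernel/v{version[0]}.x/linux-{version}.tar.xz"
```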
Massimiliano Culpo
f50377de7f environment: solve one spec per child process (#40876)
Looking at the memory profiles of concurrent solves
for environment with unify:false, it seems memory
is only ramping up.

This exchange in the potassco mailing list:
 https://sourceforge.net/p/potassco/mailman/potassco-users/thread/b55b5b8c2e8945409abb3fa3c935c27e%40lohn.at/#msg36517698

Seems to suggest that clingo doesn't release memory
until end of the application.

Since when unify:false we distribute work to processes,
here we give a maxtaskperchild=1, so we clean memory
after each solve.
2023-11-03 23:10:42 +00:00
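The pattern involved, as a standalone sketch (not Spack's code):

```python
import multiprocessing

def solve_one(spec_str):
    # Stand-in for a clingo solve; whatever memory clingo holds is
    # freed when the worker process exits.
    return spec_str.upper()

if __name__ == "__main__":
    # maxtasksperchild=1 recycles each worker after a single task, so
    # memory is reclaimed between solves instead of accumulating.
    with multiprocessing.Pool(processes=2, maxtasksperchild=1) as pool:
        print(pool.map(solve_one, ["zlib", "hdf5", "cmake"]))
```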
Adam J. Stewart
8e96d3a051 GDAL: add v3.7.3 (#40865) 2023-11-03 22:59:52 +01:00
277 changed files with 8267 additions and 3778 deletions

@@ -176,7 +176,7 @@ jobs:
     runs-on: ${{ matrix.macos-version }}
     strategy:
       matrix:
-        macos-version: ['macos-11', 'macos-12']
+        macos-version: ['macos-12']
     steps:
     - name: Install dependencies
       run: |

@@ -38,12 +38,11 @@ jobs:
       # Meaning of the various items in the matrix list
       # 0: Container name (e.g. ubuntu-bionic)
       # 1: Platforms to build for
-      # 2: Base image (e.g. ubuntu:18.04)
+      # 2: Base image (e.g. ubuntu:22.04)
       dockerfile: [[amazon-linux, 'linux/amd64,linux/arm64', 'amazonlinux:2'],
                    [centos7, 'linux/amd64,linux/arm64,linux/ppc64le', 'centos:7'],
                    [centos-stream, 'linux/amd64,linux/arm64,linux/ppc64le', 'centos:stream'],
                    [leap15, 'linux/amd64,linux/arm64,linux/ppc64le', 'opensuse/leap:15'],
-                   [ubuntu-bionic, 'linux/amd64,linux/arm64,linux/ppc64le', 'ubuntu:18.04'],
                    [ubuntu-focal, 'linux/amd64,linux/arm64,linux/ppc64le', 'ubuntu:20.04'],
                    [ubuntu-jammy, 'linux/amd64,linux/arm64,linux/ppc64le', 'ubuntu:22.04'],
                    [almalinux8, 'linux/amd64,linux/arm64,linux/ppc64le', 'almalinux:8'],
@@ -58,18 +57,20 @@ jobs:
       - name: Checkout
         uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # @v2
-      - name: Set Container Tag Normal (Nightly)
-        run: |
-          container="${{ matrix.dockerfile[0] }}:latest"
-          echo "container=${container}" >> $GITHUB_ENV
-          echo "versioned=${container}" >> $GITHUB_ENV
-      # On a new release create a container with the same tag as the release.
-      - name: Set Container Tag on Release
-        if: github.event_name == 'release'
-        run: |
-          versioned="${{matrix.dockerfile[0]}}:${GITHUB_REF##*/}"
-          echo "versioned=${versioned}" >> $GITHUB_ENV
+      - uses: docker/metadata-action@96383f45573cb7f253c731d3b3ab81c87ef81934
+        id: docker_meta
+        with:
+          images: |
+            ghcr.io/${{ github.repository_owner }}/${{ matrix.dockerfile[0] }}
+            ${{ github.repository_owner }}/${{ matrix.dockerfile[0] }}
+          tags: |
+            type=schedule,pattern=nightly
+            type=schedule,pattern=develop
+            type=semver,pattern={{version}}
+            type=semver,pattern={{major}}.{{minor}}
+            type=semver,pattern={{major}}
+            type=ref,event=branch
+            type=ref,event=pr
       - name: Generate the Dockerfile
         env:
@@ -92,13 +93,13 @@ jobs:
           path: dockerfiles
       - name: Set up QEMU
-        uses: docker/setup-qemu-action@68827325e0b33c7199eb31dd4e31fbe9023e06e3 # @v1
+        uses: docker/setup-qemu-action@68827325e0b33c7199eb31dd4e31fbe9023e06e3
       - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@f95db51fddba0c2d1ec667646a06c2ce06100226 # @v1
+        uses: docker/setup-buildx-action@f95db51fddba0c2d1ec667646a06c2ce06100226
       - name: Log in to GitHub Container Registry
-        uses: docker/login-action@343f7c4344506bcbf9b4de18042ae17996df046d # @v1
+        uses: docker/login-action@343f7c4344506bcbf9b4de18042ae17996df046d
         with:
           registry: ghcr.io
           username: ${{ github.actor }}
@@ -106,21 +107,18 @@ jobs:
       - name: Log in to DockerHub
         if: github.event_name != 'pull_request'
-        uses: docker/login-action@343f7c4344506bcbf9b4de18042ae17996df046d # @v1
+        uses: docker/login-action@343f7c4344506bcbf9b4de18042ae17996df046d
         with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
       - name: Build & Deploy ${{ matrix.dockerfile[0] }}
-        uses: docker/build-push-action@0565240e2d4ab88bba5387d719585280857ece09 # @v2
+        uses: docker/build-push-action@0565240e2d4ab88bba5387d719585280857ece09
         with:
          context: dockerfiles/${{ matrix.dockerfile[0] }}
          platforms: ${{ matrix.dockerfile[1] }}
          push: ${{ github.event_name != 'pull_request' }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
-         tags: |
-           spack/${{ env.container }}
-           spack/${{ env.versioned }}
-           ghcr.io/spack/${{ env.container }}
-           ghcr.io/spack/${{ env.versioned }}
+         tags: ${{ steps.docker_meta.outputs.tags }}
+         labels: ${{ steps.docker_meta.outputs.labels }}

@@ -1,3 +1,345 @@
# v0.21.3 (2024-10-02)
## Bugfixes
- Forward compatibility with Spack 0.23 packages with language dependencies (#45205, #45191)
- Forward compatibility with `urllib` from Python 3.12.6+ (#46453, #46483)
- Bump `archspec` to 0.2.5-dev for better aarch64 and Windows support (#42854, #44005,
#45721, #46445)
- Support macOS Sequoia (#45018, #45127, #43862)
- CI and test maintenance (#42909, #42728, #46711, #41943, #43363)
# v0.21.2 (2024-03-01)
## Bugfixes
- Containerize: accommodate nested or pre-existing spack-env paths (#41558)
- Fix setup-env script, when going back and forth between instances (#40924)
- Fix using fully-qualified namespaces from root specs (#41957)
- Fix a bug when a required provider is requested for multiple virtuals (#42088)
- OCI buildcaches:
- only push in parallel when forking (#42143)
- use pickleable errors (#42160)
- Fix using sticky variants in externals (#42253)
- Fix a rare issue with conditional requirements and multi-valued variants (#42566)
## Package updates
- rust: add v1.75, rework a few variants (#41161,#41903)
- py-transformers: add v4.35.2 (#41266)
- mgard: fix OpenMP on AppleClang (#42933)
# v0.21.1 (2024-01-11)
## New features
- Add support for reading buildcaches created by Spack v0.22 (#41773)
## Bugfixes
- spack graph: fix coloring with environments (#41240)
- spack info: sort variants in --variants-by-name (#41389)
- Spec.format: error on old style format strings (#41934)
- ASP-based solver:
- fix infinite recursion when computing concretization errors (#41061)
- don't error for type mismatch on preferences (#41138)
- don't emit spurious debug output (#41218)
- Improve the error message for deprecated preferences (#41075)
- Fix MSVC preview version breaking clingo build on Windows (#41185)
- Fix multi-word aliases (#41126)
- Add a warning for unconfigured compiler (#41213)
- environment: fix an issue with deconcretization/reconcretization of specs (#41294)
- buildcache: don't error if a patch is missing, when installing from binaries (#41986)
- Multiple improvements to unit-tests (#41215,#41369,#41495,#41359,#41361,#41345,#41342,#41308,#41226)
## Package updates
- root: add a webgui patch to address security issue (#41404)
- BerkeleyGW: update source urls (#38218)
# v0.21.0 (2023-11-11)
`v0.21.0` is a major feature release.
## Features in this release
1. **Better error messages with condition chaining**
In v0.18, we added better error messages that could tell you what problem happened,
but they couldn't tell you *why* it happened. `0.21` adds *condition chaining* to the
solver, and Spack can now trace back through the conditions that led to an error and
build a tree of potential causes and where they came from. For example:
```console
$ spack solve hdf5 ^cmake@3.0.1
==> Error: concretization failed for the following reasons:
1. Cannot satisfy 'cmake@3.0.1'
2. Cannot satisfy 'cmake@3.0.1'
required because hdf5 ^cmake@3.0.1 requested from CLI
3. Cannot satisfy 'cmake@3.18:' and 'cmake@3.0.1
required because hdf5 ^cmake@3.0.1 requested from CLI
required because hdf5 depends on cmake@3.18: when @1.13:
required because hdf5 ^cmake@3.0.1 requested from CLI
4. Cannot satisfy 'cmake@3.12:' and 'cmake@3.0.1
required because hdf5 depends on cmake@3.12:
required because hdf5 ^cmake@3.0.1 requested from CLI
required because hdf5 ^cmake@3.0.1 requested from CLI
```
More details in #40173.
2. **OCI build caches**
You can now use an arbitrary [OCI](https://opencontainers.org) registry as a build
cache:
```console
$ spack mirror add my_registry oci://user/image # Dockerhub
$ spack mirror add my_registry oci://ghcr.io/haampie/spack-test # GHCR
$ spack mirror set --push --oci-username ... --oci-password ... my_registry # set login creds
$ spack buildcache push my_registry [specs...]
```
And you can optionally add a base image to get *runnable* images:
```console
$ spack buildcache push --base-image ubuntu:23.04 my_registry python
Pushed ... as [image]:python-3.11.2-65txfcpqbmpawclvtasuog4yzmxwaoia.spack
$ docker run --rm -it [image]:python-3.11.2-65txfcpqbmpawclvtasuog4yzmxwaoia.spack
```
This creates a container image from the Spack installations on the host system,
without the need to run `spack install` from a `Dockerfile` or `sif` file. It also
addresses the inconvenience of losing binaries of dependencies when `RUN spack
install` fails inside `docker build`.
Further, the container image layers and build cache tarballs are the same files. This
means that `spack install` and `docker pull` use the exact same underlying binaries.
If you previously used `spack install` inside of `docker build`, this feature helps
you save storage by a factor of two.
More details in #38358.
3. **Multiple versions of build dependencies**
Increasingly, complex package builds require multiple versions of some build
dependencies. For example, Python packages frequently require very specific versions
of `setuptools`, `cython`, and sometimes different physics packages require different
versions of Python to build. The concretizer enforced that every solve was *unified*,
i.e., that there only be one version of every package. The concretizer now supports
"duplicate" nodes for *build dependencies*, but enforces unification through
transitive link and run dependencies. This will allow it to better resolve complex
dependency graphs in ecosystems like Python, and it also gets us very close to
modeling compilers as proper dependencies.
This change required a major overhaul of the concretizer, as well as a number of
performance optimizations. See #38447, #39621.
4. **Cherry-picking virtual dependencies**
You can now select only a subset of virtual dependencies from a spec that may provide
more. For example, if you want `mpich` to be your `mpi` provider, you can be explicit
by writing:
```
hdf5 ^[virtuals=mpi] mpich
```
Or, if you want to use, e.g., `intel-parallel-studio` for `blas` along with an external
`lapack` like `openblas`, you could write:
```
strumpack ^[virtuals=blas] intel-parallel-studio+mkl ^[virtuals=lapack] openblas
```
The `virtuals=mpi` is an edge attribute, and dependency edges in Spack graphs now
track which virtuals they satisfied. More details in #17229 and #35322.
Note for packaging: in Spack 0.21 `spec.satisfies("^virtual")` is true if and only if
the package specifies `depends_on("virtual")`. This is different from Spack 0.20,
where depending on a provider implied depending on the virtual provided. See #41002
for an example where `^mkl` was being used to test for several `mkl` providers in a
package that did not depend on `mkl`.
5. **License directive**
Spack packages can now have license metadata, with the new `license()` directive:
```python
license("Apache-2.0")
```
Licenses use [SPDX identifiers](https://spdx.org/licenses), and you can use SPDX
expressions to combine them:
```python
license("Apache-2.0 OR MIT")
```
Like other directives in Spack, it's conditional, so you can handle complex cases like
Spack itself:
```python
license("LGPL-2.1", when="@:0.11")
license("Apache-2.0 OR MIT", when="@0.12:")
```
More details in #39346, #40598.
6. **`spack deconcretize` command**
We are getting close to having a `spack update` command for environments, but we're
not quite there yet. This is the next best thing. `spack deconcretize` gives you
control over what you want to update in an already concrete environment. If you have
an environment built with, say, `meson`, and you want to update your `meson` version,
you can run:
```console
spack deconcretize meson
```
and have everything that depends on `meson` rebuilt the next time you run `spack
concretize`. In a future Spack version, we'll handle all of this in a single command,
but for now you can use this to drop bits of your lockfile and resolve your
dependencies again. More in #38803.
7. **UI Improvements**
The venerable `spack info` command was looking shabby compared to the rest of Spack's
UI, so we reworked it to have a bit more flair. `spack info` now makes much better
use of terminal space and shows variants, their values, and their descriptions much
more clearly. Conditional variants are grouped separately so you can more easily
understand how packages are structured. More in #40998.
`spack checksum` now allows you to filter versions from your editor, or by version
range. It also notifies you about potential download URL changes. See #40403.
8. **Environments can include definitions**
Spack did not previously support using `include:` with the
[definitions](https://spack.readthedocs.io/en/latest/environments.html#spec-list-references)
section of an environment, but now it does. You can use this to curate lists of specs
and more easily reuse them across environments. See #33960.
9. **Aliases**
You can now add aliases to Spack commands in `config.yaml`, e.g. this might enshrine
your favorite args to `spack find` as `spack f`:
```yaml
config:
aliases:
f: find -lv
```
See #17229.
10. **Improved autoloading of modules**
Spack 0.20 was the first release to enable autoloading of direct dependencies in
module files.
The downside of this was that `module avail` and `module load` tab completion would
show users too many modules to choose from, and many users disabled generating
modules for dependencies through `exclude_implicits: true`. Further, it was
necessary to keep hashes in module names to avoid file name clashes.
In this release, you can start using `hide_implicits: true` instead, which exposes
only explicitly installed packages to the user, while still autoloading
dependencies. On top of that, you can safely use `hash_length: 0`, as this config
now only applies to the modules exposed to the user -- you don't have to worry about
file name clashes for hidden dependencies.
Note: for `tcl` this feature requires Modules 4.7 or higher
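A sketch of a `modules.yaml` using these options; the exact key placement is our reading of the release notes:
```yaml
modules:
  default:
    tcl:
      hide_implicits: true  # only explicitly installed packages are exposed
      hash_length: 0        # now applies only to the exposed module names
      all:
        autoload: direct    # dependencies are still autoloaded
```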
11. **Updated container labeling**
Nightly Docker images from the `develop` branch will now be tagged as `:develop` and
`:nightly`. The `:latest` tag is no longer associated with `:develop`, but with the
latest stable release. Releases will be tagged with `:{major}`, `:{major}.{minor}`
and `:{major}.{minor}.{patch}`. `ubuntu:18.04` has also been removed from the list of
generated Docker images, as it is no longer supported. See #40593.
## Other new commands and directives
* `spack env activate` without arguments now loads a `default` environment that you do
not have to create (#40756).
* `spack find -H` / `--hashes`: a new shortcut for piping `spack find` output to
other commands (#38663)
* Add `spack checksum --verify`, fix `--add` (#38458)
* New `default_args` context manager factors out common args for directives (#39964)
* `spack compiler find --[no]-mixed-toolchain` lets you easily mix `clang` and
`gfortran` on Linux (#40902)
## Performance improvements
* `spack external find` execution is now much faster (#39843)
* `spack location -i` now much faster on success (#40898)
* Drop redundant rpaths post install (#38976)
* ASP-based solver: avoid cycles in clingo using hidden directive (#40720)
* Fix multiple quadratic complexity issues in environments (#38771)
## Other new features of note
* archspec: update to v0.2.2, support for Sapphire Rapids, Power10, Neoverse V2 (#40917)
* Propagate variants across nodes that don't have that variant (#38512)
* Implement fish completion (#29549)
* Can now distinguish between source/binary mirror; don't ping mirror.spack.io as much (#34523)
* Improve status reporting on install (add [n/total] display) (#37903)
## Windows
This release has the best Windows support of any Spack release yet, with numerous
improvements and much larger swaths of tests passing:
* MSVC and SDK improvements (#37711, #37930, #38500, #39823, #39180)
* Windows external finding: update default paths; treat .bat as executable on Windows (#39850)
* Windows decompression: fix removal of intermediate file (#38958)
* Windows: executable/path handling (#37762)
* Windows build systems: use ninja and enable tests (#33589)
* Windows testing (#36970, #36972, #36973, #36840, #36977, #36792, #36834, #34696, #36971)
* Windows PowerShell support (#39118, #37951)
* Windows symlinking and libraries (#39933, #38599, #34701, #38578, #34701)
## Notable refactors
* User-specified flags take precedence over others in Spack compiler wrappers (#37376)
* Improve setup of build, run, and test environments (#35737, #40916)
* `make` is no longer a required system dependency of Spack (#40380)
* Support Python 3.12 (#40404, #40155, #40153)
* docs: Replace package list with packages.spack.io (#40251)
* Drop Python 2 constructs in Spack (#38720, #38718, #38703)
## Binary cache and stack updates
* e4s arm stack: duplicate and target neoverse v1 (#40369)
* Add macOS ML CI stacks (#36586)
* E4S Cray CI Stack (#37837)
* e4s cray: expand spec list (#38947)
* e4s cray sles ci: expand spec list (#39081)
## Removals, deprecations, and syntax changes
* ASP: targets, compilers and providers soft-preferences are only global (#31261)
* Parser: fix ambiguity with whitespace in version ranges (#40344)
* Module file generation is disabled by default; you'll need to enable it to use it (#37258)
* Remove deprecated "extra_instructions" option for containers (#40365)
* Stand-alone test feature deprecation postponed to v0.22 (#40600)
* buildcache push: make `--allow-root` the default and deprecate the option (#38878)
## Notable bugfixes
* Bugfix: propagation of multivalued variants (#39833)
* Allow `/` in git versions (#39398)
* Fetch & patch: actually acquire stage lock, and many more issues (#38903)
* Environment/depfile: better escaping of targets with Git versions (#37560)
* Prevent "spack external find" to error out on wrong permissions (#38755)
* lmod: allow core compiler to be specified with a version range (#37789)
## Spack community stats
* 7,469 total packages, 303 new since `v0.20.0`
* 150 new Python packages
* 34 new R packages
* 353 people contributed to this release
* 336 committers to packages
* 65 committers to core
# v0.20.3 (2023-10-31)
## Bugfixes


@@ -229,3 +229,11 @@ config:
flags:
# Whether to keep -Werror flags active in package builds.
keep_werror: 'none'
# A mapping of aliases that can be used to define new commands. For instance,
# `sp: spec -I` will define a new command `sp` that will execute `spec` with
# the `-I` argument. Aliases cannot override existing commands.
aliases:
concretise: concretize
containerise: containerize
rm: remove


@@ -155,6 +155,33 @@ List of popular build caches
* `Extreme-scale Scientific Software Stack (E4S) <https://e4s-project.github.io/>`_: `build cache <https://oaciss.uoregon.edu/e4s/inventory.html>`_
----------
Relocation
----------
When using buildcaches across different machines, it is likely that the install
root will be different from the one used to build the binaries.
To address this issue, Spack automatically relocates all paths encoded in binaries
and scripts to their new location upon install.
Note that there are some cases where this is not possible: if binaries are built in
a relatively short path, and then installed to a longer path, there may not be enough
space in the binary to encode the new path. In this case, Spack will fail to install
the package from the build cache, and a source build is required.
To reduce the likelihood of this happening, it is highly recommended to add padding to
the install root during the build, as specified in the :ref:`config <config-yaml>`
section of the configuration:
.. code-block:: yaml
config:
install_tree:
root: /opt/spack
padded_length: 128
-----------------------------------------
OCI / Docker V2 registries as build cache
@@ -216,29 +243,34 @@ other system dependencies. However, they are still compatible with tools like
are `alternative drivers <https://docs.docker.com/storage/storagedriver/>`_.
------------------------------------
Using a buildcache in GitHub Actions
Spack build cache for GitHub Actions
------------------------------------
GitHub Actions is a popular CI/CD platform for building and testing software,
but each CI job has limited resources, making from source builds too slow for
many applications. Spack build caches can be used to share binaries between CI
runs, speeding up CI significantly.
To significantly speed up Spack in GitHub Actions, binaries can be cached in
GitHub Packages. This service is an OCI registry that can be linked to a GitHub
repository.
A typical workflow is to include a ``spack.yaml`` environment in your repository
that specifies the packages to install:
that specifies the packages to install, the target architecture, and the build
cache to use under ``mirrors``:
.. code-block:: yaml
spack:
specs: [pkg-x, pkg-y]
packages:
all:
require: target=x86_64_v2
mirrors:
github_packages: oci://ghcr.io/<user>/<repo>
specs:
- python@3.11
config:
install_tree:
root: /opt/spack
padded_length: 128
packages:
all:
require: target=x86_64_v2
mirrors:
local-buildcache: oci://ghcr.io/<organization>/<repository>
And a GitHub action that sets up Spack, installs packages from the build cache
or from sources, and pushes newly built binaries to the build cache:
A GitHub action can then be used to install the packages and push them to the
build cache:
.. code-block:: yaml
@@ -252,26 +284,35 @@ or from sources, and pushes newly built binaries to the build cache:
jobs:
example:
runs-on: ubuntu-22.04
permissions:
packages: write
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Install Spack
run: |
git clone --depth=1 https://github.com/spack/spack.git
echo "$PWD/spack/bin/" >> "$GITHUB_PATH"
- name: Checkout Spack
uses: actions/checkout@v3
with:
repository: spack/spack
path: spack
- name: Setup Spack
run: echo "$PWD/spack/bin" >> "$GITHUB_PATH"
- name: Concretize
run: spack -e . concretize
- name: Install
run: spack -e . install --no-check-signature --fail-fast
run: spack -e . install --no-check-signature
- name: Run tests
run: ./my_view/bin/python3 -c 'print("hello world")'
- name: Push to buildcache
run: |
spack -e . mirror set --oci-username <user> --oci-password "${{ secrets.GITHUB_TOKEN }}" github_packages
spack -e . buildcache push --base-image ubuntu:22.04 --unsigned --update-index github_packages
if: always()
spack -e . mirror set --oci-username ${{ github.actor }} --oci-password "${{ secrets.GITHUB_TOKEN }}" local-buildcache
spack -e . buildcache push --base-image ubuntu:22.04 --unsigned --update-index local-buildcache
if: ${{ !cancelled() }}
The first time this action runs, it will build the packages from source and
push them to the build cache. Subsequent runs will pull the binaries from the
@@ -281,15 +322,15 @@ over source builds.
The build cache entries appear in the GitHub Packages section of your repository,
and contain instructions for pulling and running them with ``docker`` or ``podman``.
----------
Relocation
----------
Initial build and later installation do not necessarily happen at the same
location. Spack provides a relocation capability and corrects for RPATHs and
non-relocatable scripts. However, many packages compile paths directly into
binary artifacts. In such cases, the package's build instructions would need
to be adjusted for better relocatability.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Using Spack's public build cache for GitHub Actions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Spack offers a public build cache for GitHub Actions with a set of common packages,
which lets you get started quickly. See the following resources for more information:
* `spack/github-actions-buildcache <https://github.com/spack/github-actions-buildcache>`_
.. _cmd-spack-buildcache:


@@ -37,7 +37,11 @@ to enable reuse for a single installation, and you can use:
spack install --fresh <spec>
to do a fresh install if ``reuse`` is enabled by default.
``reuse: true`` is the default.
``reuse: dependencies`` is the default.
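For reference, a minimal ``concretizer.yaml`` sketch that sets this option
explicitly could look like:

.. code-block:: yaml

   concretizer:
     reuse: dependencies  # other allowed values are true and false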
.. seealso::
FAQ: :ref:`Why does Spack pick particular versions and variants? <faq-concretizer-precedence>`
------------------------------------------
Selection of the target microarchitectures
@@ -99,551 +103,3 @@ while `py-numpy` still needs an older version:
Up to Spack v0.20, ``duplicates:strategy:none`` was the default (and only) behavior. From Spack v0.21, the
default behavior is ``duplicates:strategy:minimal``.
.. _build-settings:
================================
Package Settings (packages.yaml)
================================
Spack allows you to customize how your software is built through the
``packages.yaml`` file. Using it, you can make Spack prefer particular
implementations of virtual dependencies (e.g., MPI or BLAS/LAPACK),
or you can make it prefer to build with particular compilers. You can
also tell Spack to use *external* software installations already
present on your system.
At a high level, the ``packages.yaml`` file is structured like this:
.. code-block:: yaml
packages:
package1:
# settings for package1
package2:
# settings for package2
# ...
all:
# settings that apply to all packages.
So you can either set build preferences specifically for *one* package,
or you can specify that certain settings should apply to *all* packages.
The types of settings you can customize are described in detail below.
Spack's build defaults are in the default
``etc/spack/defaults/packages.yaml`` file. You can override them in
``~/.spack/packages.yaml`` or ``etc/spack/packages.yaml``. For more
details on how this works, see :ref:`configuration-scopes`.
.. _sec-external-packages:
-----------------
External Packages
-----------------
Spack can be configured to use externally-installed
packages rather than building its own packages. This may be desirable
if machines ship with system packages, such as a customized MPI
that should be used instead of Spack building its own MPI.
External packages are configured through the ``packages.yaml`` file.
Here's an example of an external configuration:
.. code-block:: yaml
packages:
openmpi:
externals:
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.4.3
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
prefix: /opt/openmpi-1.4.3-debug
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.6.5-intel
This example lists three installations of OpenMPI, one built with GCC,
one built with GCC and debug information, and another built with Intel.
If Spack is asked to build a package that uses one of these MPIs as a
dependency, it will use the pre-installed OpenMPI in
the given directory. Note that the specified path is the top-level
install prefix, not the ``bin`` subdirectory.
``packages.yaml`` can also be used to specify modules to load instead
of the installation prefixes. The following example says that module
``CMake/3.7.2`` provides cmake version 3.7.2.
.. code-block:: yaml
cmake:
externals:
- spec: cmake@3.7.2
modules:
- CMake/3.7.2
Each ``packages.yaml`` begins with a ``packages:`` attribute, followed
by a list of package names. To specify externals, add an ``externals:``
attribute under the package name, which lists externals.
Each external should specify a ``spec:`` string that is as
well-defined as reasonably possible. If a
package lacks a spec component, such as a missing compiler or
package version, then Spack will guess the missing component based
on its most-favored packages, and it may guess incorrectly.
Each package version and compiler listed in an external should
have entries in Spack's packages and compiler configuration, even
though the package and compiler may not ever be built.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Prevent packages from being built from sources
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Adding an external spec in ``packages.yaml`` allows Spack to use an external location,
but it does not prevent Spack from building packages from sources. In the above example,
Spack might choose for many valid reasons to start building and linking with the
latest version of OpenMPI rather than continue using the pre-installed OpenMPI versions.
To prevent this, the ``packages.yaml`` configuration also allows packages
to be flagged as non-buildable. The previous example could be modified to
be:
.. code-block:: yaml
packages:
openmpi:
externals:
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.4.3
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
prefix: /opt/openmpi-1.4.3-debug
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.6.5-intel
buildable: False
The addition of the ``buildable`` flag tells Spack that it should never build
its own version of OpenMPI from sources, and it will instead always rely on a pre-built
OpenMPI.
.. note::
If ``concretizer:reuse`` is on (see :ref:`concretizer-options` for more information on that flag),
pre-built specs include specs already available from a local store, an upstream store, a registered
buildcache, or specs marked as externals in ``packages.yaml``. If ``concretizer:reuse`` is off, only
external specs in ``packages.yaml`` are included in the list of pre-built specs.
If an external module is specified as not buildable, then Spack will load the
external module into the build environment, so it can be used for linking.
The ``buildable`` flag does not need to be paired with external packages.
It can also be used alone to forbid packages that may be
buggy or otherwise undesirable.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Non-buildable virtual packages
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Virtual packages in Spack can also be specified as not buildable, and
external implementations can be provided. In the example above,
OpenMPI is configured as not buildable, but Spack will often prefer
other MPI implementations over the externally available OpenMPI. Spack
can be configured with every MPI provider not buildable individually,
but more conveniently:
.. code-block:: yaml
packages:
mpi:
buildable: False
openmpi:
externals:
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.4.3
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
prefix: /opt/openmpi-1.4.3-debug
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.6.5-intel
Spack can then use any of the listed external implementations of MPI
to satisfy a dependency, and will choose depending on the compiler and
architecture.
In cases where the concretizer is configured to reuse specs, and other ``mpi`` providers
(available via stores or buildcaches) are not wanted, Spack can be configured to require
specs matching only the available externals:
.. code-block:: yaml
packages:
mpi:
buildable: False
require:
- one_of: [
"openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64",
"openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug",
"openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
]
openmpi:
externals:
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.4.3
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
prefix: /opt/openmpi-1.4.3-debug
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.6.5-intel
This configuration prevents any spec using MPI and originating from stores or buildcaches from being reused,
unless it matches the requirements under ``packages:mpi:require``. For more information on requirements see
:ref:`package-requirements`.
.. _cmd-spack-external-find:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Automatically Find External Packages
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can run the :ref:`spack external find <spack-external-find>` command
to search for system-provided packages and add them to ``packages.yaml``.
After running this command your ``packages.yaml`` may include new entries:
.. code-block:: yaml
packages:
cmake:
externals:
- spec: cmake@3.17.2
prefix: /usr
Generally this is useful for detecting a small set of commonly-used packages;
for now it is mostly limited to finding build-only dependencies.
Specific limitations include:
* Packages are not discoverable by default: For a package to be
discoverable with ``spack external find``, it needs to add special
logic. See :ref:`here <make-package-findable>` for more details.
* The logic does not search through module files; it can only detect
packages with executables defined in ``PATH``. You can help Spack locate
externals that use module files by loading any associated modules for
packages that you want Spack to know about before running
``spack external find``.
* Spack does not overwrite existing entries in the package configuration:
If there is an external defined for a spec at any configuration scope,
then Spack will not add a new external entry (``spack config blame packages``
can help locate all external entries).
.. _package-requirements:
--------------------
Package Requirements
--------------------
Spack can be configured to always use certain compilers, package
versions, and variants during concretization through package
requirements.
Package requirements are useful when you find yourself repeatedly
specifying the same constraints on the command line, and wish that
Spack respects these constraints whether you mention them explicitly
or not. Another use case is specifying constraints that should apply
to all root specs in an environment, without having to repeat the
constraint everywhere.
Apart from that, the requirements configuration is more flexible than constraints
on the command line, because it can specify constraints on packages
*when they occur* as a dependency. In contrast, on the command line it
is not possible to specify constraints on dependencies while also keeping
those dependencies optional.
^^^^^^^^^^^^^^^^^^^
Requirements syntax
^^^^^^^^^^^^^^^^^^^
The package requirements configuration is specified in ``packages.yaml``,
keyed by package name and expressed using the Spec syntax. In the simplest
case you can specify attributes that you always want the package to have
by providing a single spec string to ``require``:
.. code-block:: yaml
packages:
libfabric:
require: "@1.13.2"
In the above example, ``libfabric`` will always build with version 1.13.2. If you
need to compose multiple configuration scopes, ``require`` accepts a list of
strings:
.. code-block:: yaml
packages:
libfabric:
require:
- "@1.13.2"
- "%gcc"
In this case ``libfabric`` will always build with version 1.13.2 **and** using GCC
as a compiler.
For more complex use cases, ``require`` also accepts a list of objects. These objects
must have either an ``any_of`` or a ``one_of`` field containing a list of spec strings,
and they can optionally have ``when`` and ``message`` attributes:
.. code-block:: yaml
packages:
openmpi:
require:
- any_of: ["@4.1.5", "%gcc"]
message: "in this example only 4.1.5 can build with other compilers"
``any_of`` is a list of specs. One of those specs must be satisfied,
and the concretized spec is also allowed to match more than one.
In the above example, that means you could build ``openmpi@4.1.5%gcc``,
``openmpi@4.1.5%clang`` or ``openmpi@3.9%gcc``, but
not ``openmpi@3.9%clang``.
If a custom message is provided, and the requirement is not satisfiable,
Spack will print the custom error message:
.. code-block:: console
$ spack spec openmpi@3.9%clang
==> Error: in this example only 4.1.5 can build with other compilers
We could express a similar requirement using the ``when`` attribute:
.. code-block:: yaml
packages:
openmpi:
require:
- any_of: ["%gcc"]
when: "@:4.1.4"
message: "in this example only 4.1.5 can build with other compilers"
In the example above, if the version turns out to be 4.1.4 or less, we require the compiler to be GCC.
For readability, Spack also allows a ``spec`` key accepting a string when there is only a single
constraint:
.. code-block:: yaml
packages:
openmpi:
require:
- spec: "%gcc"
when: "@:4.1.4"
message: "in this example only 4.1.5 can build with other compilers"
This code snippet and the one before it are semantically equivalent.
Finally, instead of ``any_of`` you can use ``one_of``, which also takes a list of specs. The final
concretized spec must match one and only one of them:
.. code-block:: yaml
packages:
mpich:
require:
- one_of: ["+cuda", "+rocm"]
In the example above, that means you could build ``mpich+cuda`` or ``mpich+rocm`` but not ``mpich+cuda+rocm``.
.. note::
For ``any_of`` and ``one_of``, the order of specs indicates a
preference: items that appear earlier in the list are preferred
(note that these preferences can be ignored in favor of others).
.. note::
When using a conditional requirement, Spack is allowed to actively avoid the triggering
condition (the ``when=...`` spec) if that leads to a concrete spec with better scores in
the optimization criteria. To check the current optimization criteria and their
priorities you can run ``spack solve zlib``.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting default requirements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can also set default requirements for all packages under ``all``
like this:
.. code-block:: yaml
packages:
all:
require: '%clang'
which means every spec will be required to use ``clang`` as a compiler.
Note that in this case ``all`` represents a *default set of requirements* -
if there are specific package requirements, then the default requirements
under ``all`` are disregarded. For example, with a configuration like this:
.. code-block:: yaml
packages:
all:
require: '%clang'
cmake:
require: '%gcc'
Spack requires ``cmake`` to use ``gcc`` and all other nodes (including ``cmake``
dependencies) to use ``clang``.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting requirements on virtual specs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A requirement on a virtual spec applies whenever that virtual is present in the DAG.
This can be useful for fixing which virtual provider you want to use:
.. code-block:: yaml
packages:
mpi:
require: 'mvapich2 %gcc'
With the configuration above the only allowed ``mpi`` provider is ``mvapich2 %gcc``.
Requirements on the virtual spec and on the specific provider are both applied, if
present. For instance with a configuration like:
.. code-block:: yaml
packages:
mpi:
require: 'mvapich2 %gcc'
mvapich2:
require: '~cuda'
you will use ``mvapich2~cuda %gcc`` as an ``mpi`` provider.
.. _package-preferences:
-------------------
Package Preferences
-------------------
In some cases package requirements can be too strong, and package
preferences are the better option. Package preferences do not impose
constraints on packages for particular versions or variant values;
rather, they only set defaults -- the concretizer is free to change
them if it must due to other constraints. Also note that package
preferences are of lower priority than reuse of already installed
packages.
Here's an example ``packages.yaml`` file that sets preferred packages:
.. code-block:: yaml
packages:
opencv:
compiler: [gcc@4.9]
variants: +debug
gperftools:
version: [2.2, 2.4, 2.3]
all:
compiler: [gcc@4.4.7, 'gcc@4.6:', intel, clang, pgi]
target: [sandybridge]
providers:
mpi: [mvapich2, mpich, openmpi]
At a high level, this example is specifying how packages are preferably
concretized. The opencv package should prefer using GCC 4.9 and
be built with debug options. The gperftools package should prefer version
2.2 over 2.4. Every package on the system should prefer mvapich2 for
its MPI and GCC 4.4.7 (except for opencv, which overrides this by preferring GCC 4.9).
These options are used to fill in implicit defaults. Any of them can be overwritten
on the command line if explicitly requested.
Package preferences accept the following keys or components under
the specific package (or ``all``) section: ``compiler``, ``variants``,
``version``, ``providers``, and ``target``. Each component has an
ordered list of spec ``constraints``, with earlier entries in the
list being preferred over later entries.
Sometimes a package installation may have constraints that forbid
the first concretization rule, in which case Spack will use the first
legal concretization rule. Going back to the example, if a user
requests gperftools 2.3 or later, then Spack will install version 2.4,
since version 2.4 is preferred over 2.3.
An explicit concretization rule in the preferred section will always
take preference over unlisted concretizations. In the above example,
xlc isn't listed in the compiler list. Every listed compiler from
gcc to pgi will thus be preferred over the xlc compiler.
The syntax for the ``provider`` section differs slightly from other
concretization rules. A provider lists a value that packages may
``depends_on`` (e.g., MPI) and a list of rules for fulfilling that
dependency.
.. _package_permissions:
-------------------
Package Permissions
-------------------
Spack can be configured to assign permissions to the files installed
by a package.
In the ``packages.yaml`` file under ``permissions``, the attributes
``read``, ``write``, and ``group`` control the package
permissions. These attributes can be set per-package, or for all
packages under ``all``. If permissions are set under ``all`` and for a
specific package, the package-specific settings take precedence.
The ``read`` and ``write`` attributes take one of ``user``, ``group``,
and ``world``.
.. code-block:: yaml
packages:
all:
permissions:
write: group
group: spack
my_app:
permissions:
read: group
group: my_team
The permissions settings describe the broadest level of access to
installations of the specified packages. The execute permissions of
the file are set to the same level as read permissions for those files
that are executable. The default setting for ``read`` is ``world``,
and for ``write`` is ``user``. In the example above, installations of
``my_app`` will be installed with user and group permissions but no
world permissions, and owned by the group ``my_team``. All other
packages will be installed with user and group write privileges, and
world read privileges. Those packages will be owned by the group
``spack``.
The ``group`` attribute assigns a Unix-style group to a package. All
files installed by the package will be owned by the assigned group,
and the sticky group bit will be set on the install prefix and all
directories inside the install prefix. This will ensure that even
manually placed files within the install prefix are owned by the
assigned group. If no group is assigned, Spack will defer to the
default OS behavior.
----------------------------
Assigning Package Attributes
----------------------------
You can assign class-level attributes in the configuration:
.. code-block:: yaml
packages:
mpileaks:
# Override existing attributes
url: http://www.somewhereelse.com/mpileaks-1.0.tar.gz
# ... or add new ones
x: 1
Attributes set this way will be accessible to any method executed
in the package.py file (e.g., the ``install()`` method). Values for these
attributes may be any value parseable by YAML.
These can only be applied to specific packages, not ``all`` or
virtual packages.


@@ -392,7 +392,7 @@ See section
:ref:`Configuration Scopes <configuration-scopes>`
for an explanation about the different files
and section
:ref:`Build customization <build-settings>`
:ref:`Build customization <packages-config>`
for specifics and examples for ``packages.yaml`` files.
.. If your system administrator did not provide modules for pre-installed Intel


@@ -304,3 +304,17 @@ To work properly, this requires your terminal to reset its title after
Spack has finished its work, otherwise Spack's status information will
remain in the terminal's title indefinitely. Most terminals should already
be set up this way and clear Spack's status information.
-----------
``aliases``
-----------
Aliases can be used to define new Spack commands. They can either be shortcuts
for longer commands or include specific arguments for convenience. For instance,
if users want to use ``spack install``'s ``-v`` argument all the time, they can
create a new alias called ``inst`` that will always call ``install -v``:
.. code-block:: yaml
aliases:
inst: install -v


@@ -17,7 +17,7 @@ case you want to skip directly to specific docs:
* :ref:`config.yaml <config-yaml>`
* :ref:`mirrors.yaml <mirrors>`
* :ref:`modules.yaml <modules>`
* :ref:`packages.yaml <build-settings>`
* :ref:`packages.yaml <packages-config>`
* :ref:`repos.yaml <repositories>`
You can also add any of these as inline configuration in the YAML


@@ -0,0 +1,77 @@
.. Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
==========================
Frequently Asked Questions
==========================
This page contains answers to frequently asked questions about Spack.
If you have questions that are not answered here, feel free to ask on
`Slack <https://slack.spack.io>`_ or `GitHub Discussions
<https://github.com/spack/spack/discussions>`_. If you've learned the
answer to a question that you think should be here, please consider
contributing to this page.
.. _faq-concretizer-precedence:
-----------------------------------------------------
Why does Spack pick particular versions and variants?
-----------------------------------------------------
This question comes up in a variety of forms:
1. Why does Spack seem to ignore my package preferences from ``packages.yaml`` config?
2. Why does Spack toggle a variant instead of using the default from the ``package.py`` file?
The short answer is that Spack always picks an optimal configuration
based on a complex set of criteria\ [#f1]_. These criteria are more nuanced
than always choosing the latest versions or default variants.
.. note::
As a rule of thumb: requirements + constraints > reuse > preferences > defaults.
The following set of criteria (from lowest to highest precedence) explain
common cases where concretization output may seem surprising at first.
1. :ref:`Package preferences <package-preferences>` configured in ``packages.yaml``
override variant defaults from ``package.py`` files, and influence the optimal
ordering of versions. Preferences are specified as follows:
.. code-block:: yaml
packages:
foo:
version: [1.0, 1.1]
variants: ~mpi
2. :ref:`Reuse concretization <concretizer-options>` configured in ``concretizer.yaml``
overrides preferences, since it's typically faster to reuse an existing spec than to
build a preferred one from sources. When build caches are enabled, specs may be reused
from a remote location too. Reuse concretization is configured as follows:
.. code-block:: yaml
concretizer:
reuse: dependencies # other options are 'true' and 'false'
3. :ref:`Package requirements <package-requirements>` configured in ``packages.yaml``,
and constraints from the command line as well as ``package.py`` files override all
of the above. Requirements are specified as follows:
.. code-block:: yaml
packages:
foo:
require:
- "@1.2: +mpi"
Requirements and constraints restrict the set of possible solutions, while reuse
behavior and preferences influence what an optimal solution looks like.
.. rubric:: Footnotes
.. [#f1] The exact list of criteria can be retrieved with the ``spack solve`` command.


@@ -55,6 +55,7 @@ or refer to the full manual below.
getting_started
basic_usage
replace_conda_homebrew
frequently_asked_questions
.. toctree::
:maxdepth: 2
@@ -70,7 +71,7 @@ or refer to the full manual below.
configuration
config_yaml
bootstrapping
packages_yaml
build_settings
environments
containers
@@ -78,6 +79,7 @@ or refer to the full manual below.
module_file_support
repositories
binary_caches
bootstrapping
command_index
chain
extensions


@@ -519,11 +519,11 @@ inspections and customize them per-module-set.
modules:
prefix_inspections:
bin:
./bin:
- PATH
man:
./man:
- MANPATH
'':
./:
- CMAKE_PREFIX_PATH
Prefix inspections are only applied if the relative path inside the
@@ -579,7 +579,7 @@ the view.
view_relative_modules:
use_view: my_view
prefix_inspections:
bin:
./bin:
- PATH
view:
my_view:


@@ -0,0 +1,560 @@
.. Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _packages-config:
================================
Package Settings (packages.yaml)
================================
Spack allows you to customize how your software is built through the
``packages.yaml`` file. Using it, you can make Spack prefer particular
implementations of virtual dependencies (e.g., MPI or BLAS/LAPACK),
or you can make it prefer to build with particular compilers. You can
also tell Spack to use *external* software installations already
present on your system.
At a high level, the ``packages.yaml`` file is structured like this:
.. code-block:: yaml
packages:
package1:
# settings for package1
package2:
# settings for package2
# ...
all:
# settings that apply to all packages.
So you can either set build preferences specifically for *one* package,
or you can specify that certain settings should apply to *all* packages.
The types of settings you can customize are described in detail below.
Spack's build defaults are in the default
``etc/spack/defaults/packages.yaml`` file. You can override them in
``~/.spack/packages.yaml`` or ``etc/spack/packages.yaml``. For more
details on how this works, see :ref:`configuration-scopes`.
.. _sec-external-packages:
-----------------
External Packages
-----------------
Spack can be configured to use externally-installed
packages rather than building its own packages. This may be desirable
if machines ship with system packages, such as a customized MPI
that should be used instead of Spack building its own MPI.
External packages are configured through the ``packages.yaml`` file.
Here's an example of an external configuration:
.. code-block:: yaml
packages:
openmpi:
externals:
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.4.3
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
prefix: /opt/openmpi-1.4.3-debug
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.6.5-intel
This example lists three installations of OpenMPI, one built with GCC,
one built with GCC and debug information, and another built with Intel.
If Spack is asked to build a package that uses one of these MPIs as a
dependency, it will use the pre-installed OpenMPI in
the given directory. Note that the specified path is the top-level
install prefix, not the ``bin`` subdirectory.
``packages.yaml`` can also be used to specify modules to load instead
of the installation prefixes. The following example says that module
``CMake/3.7.2`` provides cmake version 3.7.2.
.. code-block:: yaml
cmake:
externals:
- spec: cmake@3.7.2
modules:
- CMake/3.7.2
Each ``packages.yaml`` begins with a ``packages:`` attribute, followed
by a list of package names. To specify externals, add an ``externals:``
attribute under the package name, which lists externals.
Each external should specify a ``spec:`` string that is as
well-defined as reasonably possible. If a
package lacks a spec component, such as a missing compiler or
package version, then Spack will guess the missing component based
on its most-favored packages, and it may guess incorrectly.
Each package version and compiler listed in an external should
have entries in Spack's packages and compiler configuration, even
though the package and compiler may not ever be built.
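For instance, the ``gcc@4.4.7`` compiler used by the externals above should have a
corresponding entry in Spack's compiler configuration; a minimal ``compilers.yaml``
sketch (the paths below are hypothetical) could look like:

.. code-block:: yaml

   compilers:
   - compiler:
       spec: gcc@4.4.7
       operating_system: debian7
       target: x86_64
       modules: []
       paths:
         cc: /usr/bin/gcc-4.4
         cxx: /usr/bin/g++-4.4
         f77: /usr/bin/gfortran-4.4
         fc: /usr/bin/gfortran-4.4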
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Prevent packages from being built from sources
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Adding an external spec in ``packages.yaml`` allows Spack to use an external location,
but it does not prevent Spack from building packages from sources. In the above example,
Spack might choose for many valid reasons to start building and linking with the
latest version of OpenMPI rather than continue using the pre-installed OpenMPI versions.
To prevent this, the ``packages.yaml`` configuration also allows packages
to be flagged as non-buildable. The previous example could be modified to
be:
.. code-block:: yaml
packages:
openmpi:
externals:
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.4.3
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
prefix: /opt/openmpi-1.4.3-debug
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.6.5-intel
buildable: False
The addition of the ``buildable`` flag tells Spack that it should never build
its own version of OpenMPI from sources, and it will instead always rely on a pre-built
OpenMPI.
.. note::
If ``concretizer:reuse`` is on (see :ref:`concretizer-options` for more information on that flag),
pre-built specs include specs already available from a local store, an upstream store, a registered
buildcache, or specs marked as externals in ``packages.yaml``. If ``concretizer:reuse`` is off, only
external specs in ``packages.yaml`` are included in the list of pre-built specs.
If an external module is specified as not buildable, then Spack will load the
external module into the build environment, so it can be used for linking.
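For example, a minimal sketch of an external that is provided through a module
(the ``OpenMPI/4.1.5`` module name here is hypothetical) could look like:

.. code-block:: yaml

   packages:
     openmpi:
       buildable: false
       externals:
       - spec: openmpi@4.1.5
         modules:
         - OpenMPI/4.1.5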
The ``buildable`` flag does not need to be paired with external packages.
It can also be used alone to forbid packages that may be
buggy or otherwise undesirable.
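For example, the following sketch (with ``openssl`` as a hypothetical choice)
forbids a package entirely, without declaring any externals for it:

.. code-block:: yaml

   packages:
     openssl:
       buildable: false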
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Non-buildable virtual packages
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Virtual packages in Spack can also be specified as not buildable, and
external implementations can be provided. In the example above,
OpenMPI is configured as not buildable, but Spack will often prefer
other MPI implementations over the externally available OpenMPI. Spack
can be configured with every MPI provider not buildable individually,
but more conveniently:
.. code-block:: yaml
packages:
mpi:
buildable: False
openmpi:
externals:
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.4.3
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
prefix: /opt/openmpi-1.4.3-debug
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.6.5-intel
Spack can then use any of the listed external implementations of MPI
to satisfy a dependency, and will choose depending on the compiler and
architecture.
In cases where the concretizer is configured to reuse specs, and other ``mpi`` providers
(available via stores or buildcaches) are not wanted, Spack can be configured to require
specs matching only the available externals:
.. code-block:: yaml
packages:
mpi:
buildable: False
require:
- one_of: [
"openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64",
"openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug",
"openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
]
openmpi:
externals:
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.4.3
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
prefix: /opt/openmpi-1.4.3-debug
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.6.5-intel
This configuration prevents any spec using MPI and originating from stores or buildcaches from being reused,
unless it matches the requirements under ``packages:mpi:require``. For more information on requirements see
:ref:`package-requirements`.
.. _cmd-spack-external-find:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Automatically Find External Packages
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can run the :ref:`spack external find <spack-external-find>` command
to search for system-provided packages and add them to ``packages.yaml``.
After running this command your ``packages.yaml`` may include new entries:
.. code-block:: yaml
packages:
cmake:
externals:
- spec: cmake@3.17.2
prefix: /usr
Generally this is useful for detecting a small set of commonly-used packages;
for now it is mostly limited to finding build-only dependencies.
Specific limitations include:
* Packages are not discoverable by default: For a package to be
discoverable with ``spack external find``, it needs to add special
logic. See :ref:`here <make-package-findable>` for more details.
* The logic does not search through module files; it can only detect
packages with executables defined in ``PATH``. You can help Spack locate
externals that use module files by loading any associated modules for
packages that you want Spack to know about before running
``spack external find``.
* Spack does not overwrite existing entries in the package configuration:
If there is an external defined for a spec at any configuration scope,
then Spack will not add a new external entry (``spack config blame packages``
can help locate all external entries).
.. _package-requirements:
--------------------
Package Requirements
--------------------
Spack can be configured to always use certain compilers, package
versions, and variants during concretization through package
requirements.
Package requirements are useful when you find yourself repeatedly
specifying the same constraints on the command line, and wish that
Spack respects these constraints whether you mention them explicitly
or not. Another use case is specifying constraints that should apply
to all root specs in an environment, without having to repeat the
constraint everywhere.
Apart from that, the requirements configuration is more flexible than constraints
on the command line, because it can specify constraints on packages
*when they occur* as a dependency. In contrast, on the command line it
is not possible to specify constraints on dependencies while also keeping
those dependencies optional.
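For example, in the following hypothetical sketch the ``+mpi`` requirement applies to
``hdf5`` whenever it appears in a DAG, whether as a root or as a dependency, while
specs that do not use ``hdf5`` are unaffected:

.. code-block:: yaml

   packages:
     hdf5:
       require: "+mpi"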
.. seealso::
FAQ: :ref:`Why does Spack pick particular versions and variants? <faq-concretizer-precedence>`
^^^^^^^^^^^^^^^^^^^
Requirements syntax
^^^^^^^^^^^^^^^^^^^
The package requirements configuration is specified in ``packages.yaml``,
keyed by package name and expressed using the Spec syntax. In the simplest
case you can specify attributes that you always want the package to have
by providing a single spec string to ``require``:
.. code-block:: yaml
packages:
libfabric:
require: "@1.13.2"
In the above example, ``libfabric`` will always build with version 1.13.2. If you
need to compose multiple configuration scopes, ``require`` accepts a list of
strings:
.. code-block:: yaml
packages:
libfabric:
require:
- "@1.13.2"
- "%gcc"
In this case ``libfabric`` will always build with version 1.13.2 **and** using GCC
as a compiler.
For more complex use cases, ``require`` also accepts a list of objects. These objects
must have either an ``any_of`` or a ``one_of`` field containing a list of spec strings,
and they can optionally have ``when`` and ``message`` attributes:
.. code-block:: yaml
packages:
openmpi:
require:
- any_of: ["@4.1.5", "%gcc"]
message: "in this example only 4.1.5 can build with other compilers"
``any_of`` is a list of specs. One of those specs must be satisfied,
and the concretized spec is also allowed to match more than one.
In the above example, that means you could build ``openmpi@4.1.5%gcc``,
``openmpi@4.1.5%clang`` or ``openmpi@3.9%gcc``, but
not ``openmpi@3.9%clang``.
If a custom message is provided, and the requirement is not satisfiable,
Spack will print the custom error message:
.. code-block:: console
$ spack spec openmpi@3.9%clang
==> Error: in this example only 4.1.5 can build with other compilers
We could express a similar requirement using the ``when`` attribute:
.. code-block:: yaml
packages:
openmpi:
require:
- any_of: ["%gcc"]
when: "@:4.1.4"
message: "in this example only 4.1.5 can build with other compilers"
In the example above, if the version turns out to be 4.1.4 or less, we require the compiler to be GCC.
For readability, Spack also allows a ``spec`` key accepting a string when there is only a single
constraint:
.. code-block:: yaml
packages:
openmpi:
require:
- spec: "%gcc"
when: "@:4.1.4"
message: "in this example only 4.1.5 can build with other compilers"
This code snippet and the one before it are semantically equivalent.
Finally, instead of ``any_of`` you can use ``one_of``, which also takes a list of specs. The final
concretized spec must match one and only one of them:
.. code-block:: yaml
packages:
mpich:
require:
- one_of: ["+cuda", "+rocm"]
In the example above, that means you could build ``mpich+cuda`` or ``mpich+rocm`` but not ``mpich+cuda+rocm``.
.. note::
For ``any_of`` and ``one_of``, the order of specs indicates a
preference: items that appear earlier in the list are preferred
(note that these preferences can be ignored in favor of others).
.. note::
When using a conditional requirement, Spack is allowed to actively avoid the triggering
condition (the ``when=...`` spec) if that leads to a concrete spec with better scores in
the optimization criteria. To check the current optimization criteria and their
priorities you can run ``spack solve zlib``.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting default requirements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can also set default requirements for all packages under ``all``
like this:
.. code-block:: yaml
packages:
all:
require: '%clang'
which means every spec will be required to use ``clang`` as a compiler.
Note that in this case ``all`` represents a *default set of requirements* -
if there are specific package requirements, then the default requirements
under ``all`` are disregarded. For example, with a configuration like this:
.. code-block:: yaml
packages:
all:
require: '%clang'
cmake:
require: '%gcc'
Spack requires ``cmake`` to use ``gcc`` and all other nodes (including ``cmake``
dependencies) to use ``clang``.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting requirements on virtual specs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A requirement on a virtual spec applies whenever that virtual is present in the DAG.
This can be useful for fixing which virtual provider you want to use:
.. code-block:: yaml
packages:
mpi:
require: 'mvapich2 %gcc'
With the configuration above the only allowed ``mpi`` provider is ``mvapich2 %gcc``.
Requirements on the virtual spec and on the specific provider are both applied, if
present. For instance with a configuration like:
.. code-block:: yaml
packages:
mpi:
require: 'mvapich2 %gcc'
mvapich2:
require: '~cuda'
you will use ``mvapich2~cuda %gcc`` as an ``mpi`` provider.
.. _package-preferences:
-------------------
Package Preferences
-------------------
In some cases package requirements can be too strong, and package
preferences are the better option. Package preferences do not impose
constraints on packages for particular versions or variant values;
rather, they only set defaults. The concretizer is free to change
them if it must, due to other constraints, and it also prefers reusing
installed packages over building new ones that would better match the
preferences.
.. seealso::
FAQ: :ref:`Why does Spack pick particular versions and variants? <faq-concretizer-precedence>`
Most package preferences (``compilers``, ``target`` and ``providers``)
can only be set globally under the ``all`` section of ``packages.yaml``:
.. code-block:: yaml
packages:
all:
compiler: [gcc@12.2.0, clang@12:, oneapi@2023:]
target: [x86_64_v3]
providers:
mpi: [mvapich2, mpich, openmpi]
These preferences override Spack's defaults and effectively reorder priorities
when looking for the best compiler, target, or virtual package provider. Each
preference takes an ordered list of spec constraints, with earlier entries in
the list being preferred over later entries.
In the example above, all packages prefer to be compiled with ``gcc@12.2.0``,
to target the ``x86_64_v3`` microarchitecture, and to use ``mvapich2`` if they
depend on ``mpi``.
The ``variants`` and ``version`` preferences can be set under
package specific sections of the ``packages.yaml`` file:
.. code-block:: yaml
packages:
opencv:
variants: +debug
gperftools:
version: [2.2, 2.4, 2.3]
In this case, the preference for ``opencv`` is to build with debug options, while
``gperftools`` prefers version 2.2 over 2.4.
Any preference can be overwritten on the command line if explicitly requested.
Preferences cannot overcome explicit constraints, as they only set a preferred
ordering among homogeneous attribute values. Going back to the example, if
``gperftools@2.3:`` was requested, then Spack will install version 2.4
since the most preferred version 2.2 is prohibited by the version constraint.
.. _package_permissions:
-------------------
Package Permissions
-------------------
Spack can be configured to assign permissions to the files installed
by a package.
In the ``packages.yaml`` file under ``permissions``, the attributes
``read``, ``write``, and ``group`` control the package
permissions. These attributes can be set per-package, or for all
packages under ``all``. If permissions are set under ``all`` and for a
specific package, the package-specific settings take precedence.
The ``read`` and ``write`` attributes take one of ``user``, ``group``,
and ``world``.
.. code-block:: yaml
packages:
all:
permissions:
write: group
group: spack
my_app:
permissions:
read: group
group: my_team
The permissions settings describe the broadest level of access to
installations of the specified packages. The execute permissions of
the file are set to the same level as read permissions for those files
that are executable. The default setting for ``read`` is ``world``,
and for ``write`` is ``user``. In the example above, installations of
``my_app`` will be installed with user and group permissions but no
world permissions, and owned by the group ``my_team``. All other
packages will be installed with user and group write privileges, and
world read privileges. Those packages will be owned by the group
``spack``.
The ``group`` attribute assigns a Unix-style group to a package. All
files installed by the package will be owned by the assigned group,
and the sticky group bit will be set on the install prefix and all
directories inside the install prefix. This will ensure that even
manually placed files within the install prefix are owned by the
assigned group. If no group is assigned, Spack will defer to the
default OS behavior.
----------------------------
Assigning Package Attributes
----------------------------
You can assign class-level attributes in the configuration:
.. code-block:: yaml
packages:
mpileaks:
package_attributes:
# Override existing attributes
url: http://www.somewhereelse.com/mpileaks-1.0.tar.gz
# ... or add new ones
x: 1
Attributes set this way will be accessible to any method executed
in the package.py file (e.g., the ``install()`` method). Values for these
attributes may be any value parseable by YAML.
These can only be applied to specific packages, not ``all`` or
virtual packages.


@@ -3503,6 +3503,56 @@ is equivalent to:
Constraints from nested context managers are also combined together, but they are rarely
needed or recommended.
.. _default_args:
------------------------
Common default arguments
------------------------
Similarly, if directives have a common set of default arguments, you can
group them together in a ``with default_args()`` block:
.. code-block:: python
class PyExample(PythonPackage):
with default_args(type=("build", "run")):
depends_on("py-foo")
depends_on("py-foo@2:", when="@2:")
depends_on("py-bar")
depends_on("py-bz")
The above is short for:
.. code-block:: python
class PyExample(PythonPackage):
depends_on("py-foo", type=("build", "run"))
depends_on("py-foo@2:", when="@2:", type=("build", "run"))
depends_on("py-bar", type=("build", "run"))
depends_on("py-bz", type=("build", "run"))
.. note::
The ``with when()`` context manager is composable, while ``with default_args()``
merely overrides the default. For example:
.. code-block:: python
with default_args(when="+feature"):
depends_on("foo")
depends_on("bar")
depends_on("baz", when="+baz")
is equivalent to:
.. code-block:: python
depends_on("foo", when="+feature")
depends_on("bar", when="+feature")
depends_on("baz", when="+baz") # Note: not when="+feature+baz"
.. _install-method:
------------------


@@ -18,7 +18,7 @@
* Homepage: https://pypi.python.org/pypi/archspec
* Usage: Labeling, comparison and detection of microarchitectures
* Version: 0.2.1 (commit df43a1834460bf94516136951c4729a3100603ec)
* Version: 0.2.5-dev (commit cbb1fd5eb397a70d466e5160b393b87b0dbcc78f)
astunparse
----------------


@@ -1,2 +1,3 @@
"""Init file to avoid namespace packages"""
__version__ = "0.2.1"
__version__ = "0.2.4"


@@ -3,6 +3,7 @@
"""
import sys
from .cli import main
sys.exit(main())


@@ -46,7 +46,11 @@ def _make_parser() -> argparse.ArgumentParser:
def cpu() -> int:
"""Run the `archspec cpu` subcommand."""
print(archspec.cpu.host())
try:
print(archspec.cpu.host())
except FileNotFoundError as exc:
print(exc)
return 1
return 0


@@ -5,10 +5,14 @@
"""The "cpu" package permits to query and compare different
CPU microarchitectures.
"""
from .microarchitecture import Microarchitecture, UnsupportedMicroarchitecture
from .microarchitecture import TARGETS, generic_microarchitecture
from .microarchitecture import version_components
from .detect import host
from .detect import brand_string, host
from .microarchitecture import (
TARGETS,
Microarchitecture,
UnsupportedMicroarchitecture,
generic_microarchitecture,
version_components,
)
__all__ = [
"Microarchitecture",
@@ -17,4 +21,5 @@
"generic_microarchitecture",
"host",
"version_components",
"brand_string",
]


@@ -4,15 +4,17 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Detection of CPU microarchitectures"""
import collections
import functools
import os
import platform
import re
import struct
import subprocess
import warnings
from typing import Dict, List, Optional, Set, Tuple, Union
from .microarchitecture import generic_microarchitecture, TARGETS
from .schema import TARGETS_JSON
from ..vendor.cpuid.cpuid import CPUID
from .microarchitecture import TARGETS, Microarchitecture, generic_microarchitecture
from .schema import CPUID_JSON, TARGETS_JSON
#: Mapping from operating systems to chain of commands
#: to obtain a dictionary of raw info on the current cpu
@@ -22,43 +24,51 @@
#: functions checking the compatibility of the host with a given target
COMPATIBILITY_CHECKS = {}
# Constants for commonly used architectures
X86_64 = "x86_64"
AARCH64 = "aarch64"
PPC64LE = "ppc64le"
PPC64 = "ppc64"
RISCV64 = "riscv64"
def info_dict(operating_system):
"""Decorator to mark functions that are meant to return raw info on
the current cpu.
def detection(operating_system: str):
"""Decorator to mark functions that are meant to return partial information on the current cpu.
Args:
operating_system (str or tuple): operating system for which the marked
function is a viable factory of raw info dictionaries.
operating_system: operating system where this function can be used.
"""
def decorator(factory):
INFO_FACTORY[operating_system].append(factory)
@functools.wraps(factory)
def _impl():
info = factory()
# Check that info contains a few mandatory fields
msg = 'field "{0}" is missing from raw info dictionary'
assert "vendor_id" in info, msg.format("vendor_id")
assert "flags" in info, msg.format("flags")
assert "model" in info, msg.format("model")
assert "model_name" in info, msg.format("model_name")
return info
return _impl
return factory
return decorator
@info_dict(operating_system="Linux")
def proc_cpuinfo():
"""Returns a raw info dictionary by parsing the first entry of
``/proc/cpuinfo``
"""
info = {}
def partial_uarch(
name: str = "",
vendor: str = "",
features: Optional[Set[str]] = None,
generation: int = 0,
cpu_part: str = "",
) -> Microarchitecture:
"""Construct a partial microarchitecture, from information gathered during system scan."""
return Microarchitecture(
name=name,
parents=[],
vendor=vendor,
features=features or set(),
compilers={},
generation=generation,
cpu_part=cpu_part,
)
@detection(operating_system="Linux")
def proc_cpuinfo() -> Microarchitecture:
"""Returns a partial Microarchitecture, obtained from scanning ``/proc/cpuinfo``"""
data = {}
with open("/proc/cpuinfo") as file: # pylint: disable=unspecified-encoding
for line in file:
key, separator, value = line.partition(":")
@@ -70,11 +80,122 @@ def proc_cpuinfo():
#
# we are on a blank line separating two cpus. Exit early as
# we want to read just the first entry in /proc/cpuinfo
if separator != ":" and info:
if separator != ":" and data:
break
info[key.strip()] = value.strip()
return info
data[key.strip()] = value.strip()
architecture = _machine()
if architecture == X86_64:
return partial_uarch(
vendor=data.get("vendor_id", "generic"), features=_feature_set(data, key="flags")
)
if architecture == AARCH64:
return partial_uarch(
vendor=_canonicalize_aarch64_vendor(data),
features=_feature_set(data, key="Features"),
cpu_part=data.get("CPU part", ""),
)
if architecture in (PPC64LE, PPC64):
generation_match = re.search(r"POWER(\d+)", data.get("cpu", ""))
try:
generation = int(generation_match.group(1))
except AttributeError:
# There might be no match under emulated environments. For instance
# emulating a ppc64le with QEMU and Docker still reports the host
# /proc/cpuinfo and not a Power
generation = 0
return partial_uarch(generation=generation)
if architecture == RISCV64:
if data.get("uarch") == "sifive,u74-mc":
data["uarch"] = "u74mc"
return partial_uarch(name=data.get("uarch", RISCV64))
return generic_microarchitecture(architecture)
class CpuidInfoCollector:
"""Collects the information we need on the host CPU from cpuid"""
# pylint: disable=too-few-public-methods
def __init__(self):
self.cpuid = CPUID()
registers = self.cpuid.registers_for(**CPUID_JSON["vendor"]["input"])
self.highest_basic_support = registers.eax
self.vendor = struct.pack("III", registers.ebx, registers.edx, registers.ecx).decode(
"utf-8"
)
registers = self.cpuid.registers_for(**CPUID_JSON["highest_extension_support"]["input"])
self.highest_extension_support = registers.eax
self.features = self._features()
def _features(self):
result = set()
def check_features(data):
registers = self.cpuid.registers_for(**data["input"])
for feature_check in data["bits"]:
current = getattr(registers, feature_check["register"])
if self._is_bit_set(current, feature_check["bit"]):
result.add(feature_check["name"])
for call_data in CPUID_JSON["flags"]:
if call_data["input"]["eax"] > self.highest_basic_support:
continue
check_features(call_data)
for call_data in CPUID_JSON["extension-flags"]:
if call_data["input"]["eax"] > self.highest_extension_support:
continue
check_features(call_data)
return result
def _is_bit_set(self, register: int, bit: int) -> bool:
mask = 1 << bit
return register & mask > 0
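A minimal sketch of the bit test, using the leaf-1 EDX convention where bit 26 signals sse2:
edx = 1 << 26               # hypothetical EDX value with only the sse2 bit set
assert edx & (1 << 26) > 0  # the feature would be recorded as present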
def brand_string(self) -> Optional[str]:
"""Returns the brand string, if available."""
if self.highest_extension_support < 0x80000004:
return None
r1 = self.cpuid.registers_for(eax=0x80000002, ecx=0)
r2 = self.cpuid.registers_for(eax=0x80000003, ecx=0)
r3 = self.cpuid.registers_for(eax=0x80000004, ecx=0)
result = struct.pack(
"IIIIIIIIIIII",
r1.eax,
r1.ebx,
r1.ecx,
r1.edx,
r2.eax,
r2.ebx,
r2.ecx,
r2.edx,
r3.eax,
r3.ebx,
r3.ecx,
r3.edx,
).decode("utf-8")
return result.strip("\x00")
@detection(operating_system="Windows")
def cpuid_info():
"""Returns a partial Microarchitecture, obtained from running the cpuid instruction"""
architecture = _machine()
if architecture == X86_64:
data = CpuidInfoCollector()
return partial_uarch(vendor=data.vendor, features=data.features)
return generic_microarchitecture(architecture)
def _check_output(args, env):
@@ -83,14 +204,25 @@ def _check_output(args, env):
return str(output.decode("utf-8"))
WINDOWS_MAPPING = {
"AMD64": X86_64,
"ARM64": AARCH64,
}
def _machine():
""" "Return the machine architecture we are on"""
"""Return the machine architecture we are on"""
operating_system = platform.system()
# If we are not on Darwin, trust what Python tells us
if operating_system != "Darwin":
# If we are not on Darwin or Windows, trust what Python tells us
if operating_system not in ("Darwin", "Windows"):
return platform.machine()
# Normalize windows specific names
if operating_system == "Windows":
platform_machine = platform.machine()
return WINDOWS_MAPPING.get(platform_machine, platform_machine)
# On Darwin it might happen that we are on M1, but using an interpreter
# built for x86_64. In that case "platform.machine() == 'x86_64'", so we
# need to fix that.
@@ -103,54 +235,47 @@ def _machine():
if "Apple" in output:
# Note that a native Python interpreter on Apple M1 would return
# "arm64" instead of "aarch64". Here we normalize to the latter.
return "aarch64"
return AARCH64
return "x86_64"
return X86_64
@info_dict(operating_system="Darwin")
def sysctl_info_dict():
@detection(operating_system="Darwin")
def sysctl_info() -> Microarchitecture:
"""Returns a raw info dictionary parsing the output of sysctl."""
child_environment = _ensure_bin_usrbin_in_path()
def sysctl(*args):
def sysctl(*args: str) -> str:
return _check_output(["sysctl"] + list(args), env=child_environment).strip()
if _machine() == "x86_64":
flags = (
sysctl("-n", "machdep.cpu.features").lower()
+ " "
+ sysctl("-n", "machdep.cpu.leaf7_features").lower()
if _machine() == X86_64:
features = (
f'{sysctl("-n", "machdep.cpu.features").lower()} '
f'{sysctl("-n", "machdep.cpu.leaf7_features").lower()}'
)
info = {
"vendor_id": sysctl("-n", "machdep.cpu.vendor"),
"flags": flags,
"model": sysctl("-n", "machdep.cpu.model"),
"model name": sysctl("-n", "machdep.cpu.brand_string"),
}
else:
model = "unknown"
model_str = sysctl("-n", "machdep.cpu.brand_string").lower()
if "m2" in model_str:
model = "m2"
elif "m1" in model_str:
model = "m1"
elif "apple" in model_str:
model = "m1"
features = set(features.split())
info = {
"vendor_id": "Apple",
"flags": [],
"model": model,
"CPU implementer": "Apple",
"model name": sysctl("-n", "machdep.cpu.brand_string"),
}
return info
# Flags detected on Darwin turned to their linux counterpart
for darwin_flag, linux_flag in TARGETS_JSON["conversions"]["darwin_flags"].items():
if darwin_flag in features:
features.update(linux_flag.split())
return partial_uarch(vendor=sysctl("-n", "machdep.cpu.vendor"), features=features)
model = "unknown"
model_str = sysctl("-n", "machdep.cpu.brand_string").lower()
if "m2" in model_str:
model = "m2"
elif "m1" in model_str:
model = "m1"
elif "apple" in model_str:
model = "m1"
return partial_uarch(name=model, vendor="Apple")
def _ensure_bin_usrbin_in_path():
# Make sure that /sbin and /usr/sbin are in PATH as sysctl is
# usually found there
# Make sure that /sbin and /usr/sbin are in PATH as sysctl is usually found there
child_environment = dict(os.environ.items())
search_paths = child_environment.get("PATH", "").split(os.pathsep)
for additional_path in ("/sbin", "/usr/sbin"):
@@ -160,22 +285,10 @@ def _ensure_bin_usrbin_in_path():
return child_environment
def adjust_raw_flags(info):
"""Adjust the flags detected on the system to homogenize
slightly different representations.
"""
# Flags detected on Darwin turned to their linux counterpart
flags = info.get("flags", [])
d2l = TARGETS_JSON["conversions"]["darwin_flags"]
for darwin_flag, linux_flag in d2l.items():
if darwin_flag in flags:
info["flags"] += " " + linux_flag
def adjust_raw_vendor(info):
"""Adjust the vendor field to make it human readable"""
if "CPU implementer" not in info:
return
def _canonicalize_aarch64_vendor(data: Dict[str, str]) -> str:
"""Adjust the vendor field to make it human-readable"""
if "CPU implementer" not in data:
return "generic"
# Mapping numeric codes to vendor (ARM). This list is a merge from
# different sources:
@@ -185,43 +298,37 @@ def adjust_raw_vendor(info):
# https://github.com/gcc-mirror/gcc/blob/master/gcc/config/aarch64/aarch64-cores.def
# https://patchwork.kernel.org/patch/10524949/
arm_vendors = TARGETS_JSON["conversions"]["arm_vendors"]
arm_code = info["CPU implementer"]
if arm_code in arm_vendors:
info["CPU implementer"] = arm_vendors[arm_code]
arm_code = data["CPU implementer"]
return arm_vendors.get(arm_code, arm_code)
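For example, assuming the vendored table maps the ARM Ltd. implementer code "0x41" to "ARM":
assert _canonicalize_aarch64_vendor({"CPU implementer": "0x41"}) == "ARM"
assert _canonicalize_aarch64_vendor({}) == "generic"  # no implementer field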
def raw_info_dictionary():
"""Returns a dictionary with information on the cpu of the current host.
def _feature_set(data: Dict[str, str], key: str) -> Set[str]:
return set(data.get(key, "").split())
This function calls all the viable factories one after the other until
there's one that is able to produce the requested information.
def detected_info() -> Microarchitecture:
"""Returns a partial Microarchitecture with information on the CPU of the current host.
This function calls all the viable factories one after the other until there's one that is
able to produce the requested information. Falls back to a generic microarchitecture if none
of the calls succeed.
"""
# pylint: disable=broad-except
info = {}
for factory in INFO_FACTORY[platform.system()]:
try:
info = factory()
return factory()
except Exception as exc:
warnings.warn(str(exc))
if info:
adjust_raw_flags(info)
adjust_raw_vendor(info)
break
return info
return generic_microarchitecture(_machine())
def compatible_microarchitectures(info):
"""Returns an unordered list of known micro-architectures that are
compatible with the info dictionary passed as argument.
Args:
info (dict): dictionary containing information on the host cpu
def compatible_microarchitectures(info: Microarchitecture) -> List[Microarchitecture]:
"""Returns an unordered list of known micro-architectures that are compatible with the
partial Microarchitecture passed as input.
"""
architecture_family = _machine()
# If a tester is not registered, be conservative and assume no known
# target is compatible with the host
# If a tester is not registered, assume no known target is compatible with the host
tester = COMPATIBILITY_CHECKS.get(architecture_family, lambda x, y: False)
return [x for x in TARGETS.values() if tester(info, x)] or [
generic_microarchitecture(architecture_family)
@@ -230,8 +337,8 @@ def compatible_microarchitectures(info):
def host():
"""Detects the host micro-architecture and returns it."""
# Retrieve a dictionary with raw information on the host's cpu
info = raw_info_dictionary()
# Retrieve information on the host's cpu
info = detected_info()
# Get a list of possible candidates for this micro-architecture
candidates = compatible_microarchitectures(info)
@@ -244,6 +351,10 @@ def sorting_fn(item):
generic_candidates = [c for c in candidates if c.vendor == "generic"]
best_generic = max(generic_candidates, key=sorting_fn)
# Relevant for AArch64. Filter on "cpu_part" if we have any match
if info.cpu_part != "" and any(c for c in candidates if info.cpu_part == c.cpu_part):
candidates = [c for c in candidates if info.cpu_part == c.cpu_part]
# Filter the candidates to be descendant of the best generic candidate.
# This is to avoid that the lack of a niche feature that can be disabled
# from e.g. BIOS prevents detection of a reasonably performant architecture
@@ -258,16 +369,15 @@ def sorting_fn(item):
return max(candidates, key=sorting_fn)
def compatibility_check(architecture_family):
def compatibility_check(architecture_family: Union[str, Tuple[str, ...]]):
"""Decorator to register a function as a proper compatibility check.
A compatibility check function takes the raw info dictionary as a first
argument and an arbitrary target as the second argument. It returns True
if the target is compatible with the info dictionary, False otherwise.
A compatibility check function takes a partial Microarchitecture object as a first argument,
and an arbitrary target Microarchitecture as the second argument. It returns True if the
target is compatible with first argument, False otherwise.
Args:
architecture_family (str or tuple): architecture family for which
this test can be used, e.g. x86_64 or ppc64le etc.
architecture_family: architecture family for which this test can be used
"""
# Turn the argument into something iterable
if isinstance(architecture_family, str):
@@ -280,86 +390,70 @@ def decorator(func):
return decorator
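A hypothetical registration, only to illustrate the decorator's contract (s390x is not an architecture family shipped in the JSON files):
@compatibility_check(architecture_family="s390x")  # hypothetical family
def compatibility_check_for_s390x(info, target):
    # Sketch: accept any target of the family whose vendor matches the host
    return target.family.name == "s390x" and target.vendor in (info.vendor, "generic")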
@compatibility_check(architecture_family=("ppc64le", "ppc64"))
@compatibility_check(architecture_family=(PPC64LE, PPC64))
def compatibility_check_for_power(info, target):
"""Compatibility check for PPC64 and PPC64LE architectures."""
basename = platform.machine()
generation_match = re.search(r"POWER(\d+)", info.get("cpu", ""))
try:
generation = int(generation_match.group(1))
except AttributeError:
# There might be no match under emulated environments. For instance
# emulating a ppc64le with QEMU and Docker still reports the host
# /proc/cpuinfo and not a Power
generation = 0
# We can use a target if it descends from our machine type and our
# generation (9 for POWER9, etc) is at least its generation.
arch_root = TARGETS[basename]
arch_root = TARGETS[_machine()]
return (
target == arch_root or arch_root in target.ancestors
) and target.generation <= generation
) and target.generation <= info.generation
@compatibility_check(architecture_family="x86_64")
@compatibility_check(architecture_family=X86_64)
def compatibility_check_for_x86_64(info, target):
"""Compatibility check for x86_64 architectures."""
basename = "x86_64"
vendor = info.get("vendor_id", "generic")
features = set(info.get("flags", "").split())
# We can use a target if it descends from our machine type, is from our
# vendor, and we have all of its features
arch_root = TARGETS[basename]
arch_root = TARGETS[X86_64]
return (
(target == arch_root or arch_root in target.ancestors)
and target.vendor in (vendor, "generic")
and target.features.issubset(features)
and target.vendor in (info.vendor, "generic")
and target.features.issubset(info.features)
)
@compatibility_check(architecture_family="aarch64")
@compatibility_check(architecture_family=AARCH64)
def compatibility_check_for_aarch64(info, target):
"""Compatibility check for AARCH64 architectures."""
basename = "aarch64"
features = set(info.get("Features", "").split())
vendor = info.get("CPU implementer", "generic")
# At the moment it's not clear how to detect compatibility with
# At the moment, it's not clear how to detect compatibility with
# a specific version of the architecture
if target.vendor == "generic" and target.name != "aarch64":
if target.vendor == "generic" and target.name != AARCH64:
return False
arch_root = TARGETS[basename]
arch_root = TARGETS[AARCH64]
arch_root_and_vendor = arch_root == target.family and target.vendor in (
vendor,
info.vendor,
"generic",
)
# On macOS it seems impossible to get all the CPU features
# with sysctl info, but for ARM we can get the exact model
if platform.system() == "Darwin":
model_key = info.get("model", basename)
model = TARGETS[model_key]
model = TARGETS[info.name]
return arch_root_and_vendor and (target == model or target in model.ancestors)
return arch_root_and_vendor and target.features.issubset(features)
return arch_root_and_vendor and target.features.issubset(info.features)
@compatibility_check(architecture_family="riscv64")
@compatibility_check(architecture_family=RISCV64)
def compatibility_check_for_riscv64(info, target):
"""Compatibility check for riscv64 architectures."""
basename = "riscv64"
uarch = info.get("uarch")
# sifive unmatched board
if uarch == "sifive,u74-mc":
uarch = "u74mc"
# catch-all for unknown uarchs
else:
uarch = "riscv64"
arch_root = TARGETS[basename]
arch_root = TARGETS[RISCV64]
return (target == arch_root or arch_root in target.ancestors) and (
target == uarch or target.vendor == "generic"
target.name == info.name or target.vendor == "generic"
)
def brand_string() -> Optional[str]:
"""Returns the brand string of the host, if detected, or None."""
if platform.system() == "Darwin":
return _check_output(
["sysctl", "-n", "machdep.cpu.brand_string"], env=_ensure_bin_usrbin_in_path()
).strip()
if host().family == X86_64:
return CpuidInfoCollector().brand_string()
return None


@@ -2,9 +2,7 @@
# Archspec Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Types and functions to manage information
on CPU microarchitectures.
"""
"""Types and functions to manage information on CPU microarchitectures."""
import functools
import platform
import re
@@ -13,6 +11,7 @@
import archspec
import archspec.cpu.alias
import archspec.cpu.schema
from .alias import FEATURE_ALIASES
from .schema import LazyDictionary
@@ -47,7 +46,7 @@ class Microarchitecture:
which has "broadwell" as a parent, supports running binaries
optimized for "broadwell".
vendor (str): vendor of the micro-architecture
features (list of str): supported CPU flags. Note that the semantic
features (set of str): supported CPU flags. Note that the semantic
of the flags in this field might vary among architectures, if
at all present. For instance x86_64 processors will list all
the flags supported by a given CPU while Arm processors will
@@ -64,21 +63,24 @@ class Microarchitecture:
passed in as argument above.
* versions: versions that support this micro-architecture.
generation (int): generation of the micro-architecture, if
relevant.
generation (int): generation of the micro-architecture, if relevant.
cpu_part (str): cpu part of the architecture, if relevant.
"""
# pylint: disable=too-many-arguments
# pylint: disable=too-many-arguments,too-many-instance-attributes
#: Aliases for micro-architecture's features
feature_aliases = FEATURE_ALIASES
def __init__(self, name, parents, vendor, features, compilers, generation=0):
def __init__(self, name, parents, vendor, features, compilers, generation=0, cpu_part=""):
self.name = name
self.parents = parents
self.vendor = vendor
self.features = features
self.compilers = compilers
# Only relevant for PowerPC
self.generation = generation
# Only relevant for AArch64
self.cpu_part = cpu_part
# Cache the ancestor computation
self._ancestors = None
@@ -110,6 +112,7 @@ def __eq__(self, other):
and self.parents == other.parents # avoid ancestors here
and self.compilers == other.compilers
and self.generation == other.generation
and self.cpu_part == other.cpu_part
)
@coerce_target_names
@@ -142,7 +145,8 @@ def __repr__(self):
cls_name = self.__class__.__name__
fmt = (
cls_name + "({0.name!r}, {0.parents!r}, {0.vendor!r}, "
"{0.features!r}, {0.compilers!r}, {0.generation!r})"
"{0.features!r}, {0.compilers!r}, generation={0.generation!r}, "
"cpu_part={0.cpu_part!r})"
)
return fmt.format(self)
@@ -180,24 +184,30 @@ def generic(self):
generics = [x for x in [self] + self.ancestors if x.vendor == "generic"]
return max(generics, key=lambda x: len(x.ancestors))
def to_dict(self, return_list_of_items=False):
"""Returns a dictionary representation of this object.
def to_dict(self):
"""Returns a dictionary representation of this object."""
return {
"name": str(self.name),
"vendor": str(self.vendor),
"features": sorted(str(x) for x in self.features),
"generation": self.generation,
"parents": [str(x) for x in self.parents],
"compilers": self.compilers,
"cpupart": self.cpu_part,
}
Args:
return_list_of_items (bool): if True returns an ordered list of
items instead of the dictionary
"""
list_of_items = [
("name", str(self.name)),
("vendor", str(self.vendor)),
("features", sorted(str(x) for x in self.features)),
("generation", self.generation),
("parents", [str(x) for x in self.parents]),
]
if return_list_of_items:
return list_of_items
return dict(list_of_items)
@staticmethod
def from_dict(data) -> "Microarchitecture":
"""Construct a microarchitecture from a dictionary representation."""
return Microarchitecture(
name=data["name"],
parents=[TARGETS[x] for x in data["parents"]],
vendor=data["vendor"],
features=set(data["features"]),
compilers=data.get("compilers", {}),
generation=data.get("generation", 0),
cpu_part=data.get("cpupart", ""),
)
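Round-tripping a known target through these two methods now preserves the cpu_part field as well; a quick sketch, assuming the neoverse_v1 entry shown elsewhere in this diff:
data = TARGETS["neoverse_v1"].to_dict()
assert data["cpupart"] == "0xd40"
assert Microarchitecture.from_dict(data) == TARGETS["neoverse_v1"]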
def optimization_flags(self, compiler, version):
"""Returns a string containing the optimization flags that needs
@@ -271,9 +281,7 @@ def tuplify(ver):
flags = flags_fmt.format(**compiler_entry)
return flags
msg = (
"cannot produce optimized binary for micro-architecture '{0}' with {1}@{2}"
)
msg = "cannot produce optimized binary for micro-architecture '{0}' with {1}@{2}"
if compiler_info:
versions = [x["versions"] for x in compiler_info]
msg += f' [supported compiler versions are {", ".join(versions)}]'
@@ -289,9 +297,7 @@ def generic_microarchitecture(name):
Args:
name (str): name of the micro-architecture
"""
return Microarchitecture(
name, parents=[], vendor="generic", features=[], compilers={}
)
return Microarchitecture(name, parents=[], vendor="generic", features=set(), compilers={})
def version_components(version):
@@ -344,9 +350,10 @@ def fill_target_from_dict(name, data, targets):
features = set(values["features"])
compilers = values.get("compilers", {})
generation = values.get("generation", 0)
cpu_part = values.get("cpupart", "")
targets[name] = Microarchitecture(
name, parents, vendor, features, compilers, generation
name, parents, vendor, features, compilers, generation=generation, cpu_part=cpu_part
)
known_targets = {}


@@ -7,7 +7,9 @@
"""
import collections.abc
import json
import os.path
import os
import pathlib
from typing import Tuple
class LazyDictionary(collections.abc.MutableMapping):
@@ -46,21 +48,65 @@ def __len__(self):
return len(self.data)
def _load_json_file(json_file):
json_dir = os.path.join(os.path.dirname(__file__), "..", "json", "cpu")
json_dir = os.path.abspath(json_dir)
#: Environment variable that might point to a directory with a user defined JSON file
DIR_FROM_ENVIRONMENT = "ARCHSPEC_CPU_DIR"
def _factory():
filename = os.path.join(json_dir, json_file)
with open(filename, "r", encoding="utf-8") as file:
return json.load(file)
#: Environment variable that might point to a directory with extensions to JSON files
EXTENSION_DIR_FROM_ENVIRONMENT = "ARCHSPEC_EXTENSION_CPU_DIR"
return _factory
def _json_file(filename: str, allow_custom: bool = False) -> Tuple[pathlib.Path, pathlib.Path]:
"""Given a filename, returns the absolute path for the main JSON file, and an
optional absolute path for an extension JSON file.
Args:
filename: filename for the JSON file
allow_custom: if True, allows overriding the location where the file resides
"""
json_dir = pathlib.Path(__file__).parent / ".." / "json" / "cpu"
if allow_custom and DIR_FROM_ENVIRONMENT in os.environ:
json_dir = pathlib.Path(os.environ[DIR_FROM_ENVIRONMENT])
json_dir = json_dir.absolute()
json_file = json_dir / filename
extension_file = None
if allow_custom and EXTENSION_DIR_FROM_ENVIRONMENT in os.environ:
extension_dir = pathlib.Path(os.environ[EXTENSION_DIR_FROM_ENVIRONMENT])
extension_dir = extension_dir.absolute()
extension_file = extension_dir / filename
return json_file, extension_file
def _load(json_file: pathlib.Path, extension_file: pathlib.Path):
with open(json_file, "r", encoding="utf-8") as file:
data = json.load(file)
if not extension_file or not extension_file.exists():
return data
with open(extension_file, "r", encoding="utf-8") as file:
extension_data = json.load(file)
top_level_sections = list(data.keys())
for key in top_level_sections:
if key not in extension_data:
continue
data[key].update(extension_data[key])
return data
#: In memory representation of the data in microarchitectures.json,
#: loaded on first access
TARGETS_JSON = LazyDictionary(_load_json_file("microarchitectures.json"))
TARGETS_JSON = LazyDictionary(_load, *_json_file("microarchitectures.json", allow_custom=True))
#: JSON schema for microarchitectures.json, loaded on first access
SCHEMA = LazyDictionary(_load_json_file("microarchitectures_schema.json"))
TARGETS_JSON_SCHEMA = LazyDictionary(_load, *_json_file("microarchitectures_schema.json"))
#: Information on how to call 'cpuid' to get information on the HOST CPU
CPUID_JSON = LazyDictionary(_load, *_json_file("cpuid.json", allow_custom=True))
#: JSON schema for cpuid.json, loaded on first access
CPUID_JSON_SCHEMA = LazyDictionary(_load, *_json_file("cpuid_schema.json"))
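A sketch of how the two hooks combine, using a hypothetical extension directory; per top-level section, entries from the extension file update those of the vendored file:
import os  # already imported at the top of this module

os.environ["ARCHSPEC_EXTENSION_CPU_DIR"] = "/tmp/archspec-extensions"  # hypothetical path
json_file, extension_file = _json_file("microarchitectures.json", allow_custom=True)
data = _load(json_file, extension_file)  # merged view; extension entries win per section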


@@ -9,11 +9,11 @@ language specific APIs.
Currently the repository contains the following JSON files:
```console
.
├── COPYRIGHT
└── cpu
   ├── microarchitectures.json # Contains information on CPU microarchitectures
   └── microarchitectures_schema.json # Schema for the file above
cpu/
├── cpuid.json # Contains information on CPUID calls to retrieve vendor and features on x86_64
├── cpuid_schema.json # Schema for the file above
├── microarchitectures.json # Contains information on CPU microarchitectures
└── microarchitectures_schema.json # Schema for the file above
```

File diff suppressed because it is too large


@@ -0,0 +1,134 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Schema for microarchitecture definitions and feature aliases",
"type": "object",
"additionalProperties": false,
"properties": {
"vendor": {
"type": "object",
"additionalProperties": false,
"properties": {
"description": {
"type": "string"
},
"input": {
"type": "object",
"additionalProperties": false,
"properties": {
"eax": {
"type": "integer"
},
"ecx": {
"type": "integer"
}
}
}
}
},
"highest_extension_support": {
"type": "object",
"additionalProperties": false,
"properties": {
"description": {
"type": "string"
},
"input": {
"type": "object",
"additionalProperties": false,
"properties": {
"eax": {
"type": "integer"
},
"ecx": {
"type": "integer"
}
}
}
}
},
"flags": {
"type": "array",
"items": {
"type": "object",
"additionalProperties": false,
"properties": {
"description": {
"type": "string"
},
"input": {
"type": "object",
"additionalProperties": false,
"properties": {
"eax": {
"type": "integer"
},
"ecx": {
"type": "integer"
}
}
},
"bits": {
"type": "array",
"items": {
"type": "object",
"additionalProperties": false,
"properties": {
"name": {
"type": "string"
},
"register": {
"type": "string"
},
"bit": {
"type": "integer"
}
}
}
}
}
}
},
"extension-flags": {
"type": "array",
"items": {
"type": "object",
"additionalProperties": false,
"properties": {
"description": {
"type": "string"
},
"input": {
"type": "object",
"additionalProperties": false,
"properties": {
"eax": {
"type": "integer"
},
"ecx": {
"type": "integer"
}
}
},
"bits": {
"type": "array",
"items": {
"type": "object",
"additionalProperties": false,
"properties": {
"name": {
"type": "string"
},
"register": {
"type": "string"
},
"bit": {
"type": "integer"
}
}
}
}
}
}
}
}
}
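A document shaped by this schema would look like the following sketch (illustrative values, not the shipped cpuid.json; the sse2 bit position matches the convention used in example.py further down):
EXAMPLE_CPUID_DATA = {
    "vendor": {"description": "vendor string", "input": {"eax": 0, "ecx": 0}},
    "highest_extension_support": {
        "description": "highest extended leaf",
        "input": {"eax": 2147483648, "ecx": 0},  # 0x80000000
    },
    "flags": [
        {
            "description": "leaf 1 feature bits",
            "input": {"eax": 1, "ecx": 0},
            "bits": [{"name": "sse2", "register": "edx", "bit": 26}],
        }
    ],
    "extension-flags": [],
}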

View File

@@ -2225,10 +2225,14 @@
],
"nvhpc": [
{
"versions": "21.11:",
"versions": "21.11:23.8",
"name": "zen3",
"flags": "-tp {name}",
"warnings": "zen4 is not fully supported by nvhpc yet, falling back to zen3"
"warnings": "zen4 is not fully supported by nvhpc versions < 23.9, falling back to zen3"
},
{
"versions": "23.9:",
"flags": "-tp {name}"
}
]
}
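The effect of this version split can be observed through the public API; a sketch, assuming the vendored copy is importable:
import archspec.cpu

zen4 = archspec.cpu.TARGETS["zen4"]
# nvhpc up to 23.8 falls back to zen3 and emits the warning above;
# from 23.9 on, the native target name is substituted into the flag.
assert zen4.optimization_flags("nvhpc", "23.9") == "-tp zen4"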
@@ -2318,6 +2322,26 @@
]
}
},
"power10": {
"from": ["power9"],
"vendor": "IBM",
"generation": 10,
"features": [],
"compilers": {
"gcc": [
{
"versions": "11.1:",
"flags": "-mcpu={name} -mtune={name}"
}
],
"clang": [
{
"versions": "11.0:",
"flags": "-mcpu={name} -mtune={name}"
}
]
}
},
"ppc64le": {
"from": [],
"vendor": "generic",
@@ -2405,6 +2429,29 @@
]
}
},
"power10le": {
"from": ["power9le"],
"vendor": "IBM",
"generation": 10,
"features": [],
"compilers": {
"gcc": [
{
"name": "power10",
"versions": "11.1:",
"flags": "-mcpu={name} -mtune={name}"
}
],
"clang": [
{
"versions": "11.0:",
"family": "ppc64le",
"name": "power10",
"flags": "-mcpu={name} -mtune={name}"
}
]
}
},
"aarch64": {
"from": [],
"vendor": "generic",
@@ -2592,6 +2639,37 @@
]
}
},
"armv9.0a": {
"from": ["armv8.5a"],
"vendor": "generic",
"features": [],
"compilers": {
"gcc": [
{
"versions": "12:",
"flags": "-march=armv9-a -mtune=generic"
}
],
"clang": [
{
"versions": "14:",
"flags": "-march=armv9-a -mtune=generic"
}
],
"apple-clang": [
{
"versions": ":",
"flags": "-march=armv9-a -mtune=generic"
}
],
"arm": [
{
"versions": ":",
"flags": "-march=armv9-a -mtune=generic"
}
]
}
},
"thunderx2": {
"from": ["armv8.1a"],
"vendor": "Cavium",
@@ -2637,7 +2715,8 @@
"flags": "-mcpu=thunderx2t99"
}
]
}
},
"cpupart": "0x0af"
},
"a64fx": {
"from": ["armv8.2a"],
@@ -2705,7 +2784,8 @@
"flags": "-march=armv8.2-a+crc+crypto+fp16+sve"
}
]
}
},
"cpupart": "0x001"
},
"cortex_a72": {
"from": ["aarch64"],
@@ -2742,7 +2822,8 @@
"flags" : "-mcpu=cortex-a72"
}
]
}
},
"cpupart": "0xd08"
},
"neoverse_n1": {
"from": ["cortex_a72", "armv8.2a"],
@@ -2763,8 +2844,7 @@
"asimdrdm",
"lrcpc",
"dcpop",
"asimddp",
"ssbs"
"asimddp"
],
"compilers" : {
"gcc": [
@@ -2813,8 +2893,12 @@
],
"arm" : [
{
"versions": "20:",
"versions": "20:21.9",
"flags" : "-march=armv8.2-a+fp16+rcpc+dotprod+crypto"
},
{
"versions": "22:",
"flags" : "-mcpu=neoverse-n1"
}
],
"nvhpc" : [
@@ -2824,7 +2908,8 @@
"flags": "-tp {name}"
}
]
}
},
"cpupart": "0xd0c"
},
"neoverse_v1": {
"from": ["neoverse_n1", "armv8.4a"],
@@ -2848,8 +2933,6 @@
"lrcpc",
"dcpop",
"sha3",
"sm3",
"sm4",
"asimddp",
"sha512",
"sve",
@@ -2858,9 +2941,6 @@
"uscat",
"ilrcpc",
"flagm",
"ssbs",
"paca",
"pacg",
"dcpodp",
"svei8mm",
"svebf16",
@@ -2928,7 +3008,7 @@
},
{
"versions": "11:",
"flags" : "-march=armv8.4-a+sve+ssbs+fp16+bf16+crypto+i8mm+rng"
"flags" : "-march=armv8.4-a+sve+fp16+bf16+crypto+i8mm+rng"
},
{
"versions": "12:",
@@ -2942,7 +3022,7 @@
},
{
"versions": "22:",
"flags" : "-march=armv8.4-a+sve+ssbs+fp16+bf16+crypto+i8mm+rng"
"flags" : "-mcpu=neoverse-v1"
}
],
"nvhpc" : [
@@ -2952,7 +3032,224 @@
"flags": "-tp {name}"
}
]
}
},
"cpupart": "0xd40"
},
"neoverse_v2": {
"from": ["neoverse_n1", "armv9.0a"],
"vendor": "ARM",
"features": [
"fp",
"asimd",
"evtstrm",
"aes",
"pmull",
"sha1",
"sha2",
"crc32",
"atomics",
"fphp",
"asimdhp",
"cpuid",
"asimdrdm",
"jscvt",
"fcma",
"lrcpc",
"dcpop",
"sha3",
"asimddp",
"sha512",
"sve",
"asimdfhm",
"uscat",
"ilrcpc",
"flagm",
"sb",
"dcpodp",
"sve2",
"flagm2",
"frint",
"svei8mm",
"svebf16",
"i8mm",
"bf16"
],
"compilers" : {
"gcc": [
{
"versions": "4.8:5.99",
"flags": "-march=armv8-a"
},
{
"versions": "6:6.99",
"flags" : "-march=armv8.1-a"
},
{
"versions": "7.0:7.99",
"flags" : "-march=armv8.2-a -mtune=cortex-a72"
},
{
"versions": "8.0:8.99",
"flags" : "-march=armv8.4-a+sve -mtune=cortex-a72"
},
{
"versions": "9.0:9.99",
"flags" : "-march=armv8.5-a+sve -mtune=cortex-a76"
},
{
"versions": "10.0:11.3.99",
"flags" : "-march=armv8.5-a+sve+sve2+i8mm+bf16 -mtune=cortex-a77"
},
{
"versions": "11.4:11.99",
"flags" : "-mcpu=neoverse-v2"
},
{
"versions": "12.0:12.2.99",
"flags" : "-march=armv9-a+i8mm+bf16 -mtune=cortex-a710"
},
{
"versions": "12.3:",
"flags" : "-mcpu=neoverse-v2"
}
],
"clang" : [
{
"versions": "9.0:10.99",
"flags" : "-march=armv8.5-a+sve"
},
{
"versions": "11.0:13.99",
"flags" : "-march=armv8.5-a+sve+sve2+i8mm+bf16"
},
{
"versions": "14.0:15.99",
"flags" : "-march=armv9-a+i8mm+bf16"
},
{
"versions": "16.0:",
"flags" : "-mcpu=neoverse-v2"
}
],
"arm" : [
{
"versions": "23.04.0:",
"flags" : "-mcpu=neoverse-v2"
}
],
"nvhpc" : [
{
"versions": "23.3:",
"name": "neoverse-v2",
"flags": "-tp {name}"
}
]
},
"cpupart": "0xd4f"
},
"neoverse_n2": {
"from": ["neoverse_n1", "armv9.0a"],
"vendor": "ARM",
"features": [
"fp",
"asimd",
"evtstrm",
"aes",
"pmull",
"sha1",
"sha2",
"crc32",
"atomics",
"fphp",
"asimdhp",
"cpuid",
"asimdrdm",
"jscvt",
"fcma",
"lrcpc",
"dcpop",
"sha3",
"asimddp",
"sha512",
"sve",
"asimdfhm",
"uscat",
"ilrcpc",
"flagm",
"sb",
"dcpodp",
"sve2",
"flagm2",
"frint",
"svei8mm",
"svebf16",
"i8mm",
"bf16"
],
"compilers" : {
"gcc": [
{
"versions": "4.8:5.99",
"flags": "-march=armv8-a"
},
{
"versions": "6:6.99",
"flags" : "-march=armv8.1-a"
},
{
"versions": "7.0:7.99",
"flags" : "-march=armv8.2-a -mtune=cortex-a72"
},
{
"versions": "8.0:8.99",
"flags" : "-march=armv8.4-a+sve -mtune=cortex-a72"
},
{
"versions": "9.0:9.99",
"flags" : "-march=armv8.5-a+sve -mtune=cortex-a76"
},
{
"versions": "10.0:10.99",
"flags" : "-march=armv8.5-a+sve+sve2+i8mm+bf16 -mtune=cortex-a77"
},
{
"versions": "11.0:",
"flags" : "-mcpu=neoverse-n2"
}
],
"clang" : [
{
"versions": "9.0:10.99",
"flags" : "-march=armv8.5-a+sve"
},
{
"versions": "11.0:13.99",
"flags" : "-march=armv8.5-a+sve+sve2+i8mm+bf16"
},
{
"versions": "14.0:15.99",
"flags" : "-march=armv9-a+i8mm+bf16"
},
{
"versions": "16.0:",
"flags" : "-mcpu=neoverse-n2"
}
],
"arm" : [
{
"versions": "23.04.0:",
"flags" : "-mcpu=neoverse-n2"
}
],
"nvhpc" : [
{
"versions": "23.3:",
"name": "neoverse-n1",
"flags": "-tp {name}"
}
]
},
"cpupart": "0xd49"
},
"m1": {
"from": ["armv8.4a"],
@@ -3018,7 +3315,8 @@
"flags" : "-mcpu=apple-m1"
}
]
}
},
"cpupart": "0x022"
},
"m2": {
"from": ["m1", "armv8.5a"],
@@ -3096,7 +3394,8 @@
"flags" : "-mcpu=apple-m2"
}
]
}
},
"cpupart": "0x032"
},
"arm": {
"from": [],


@@ -52,6 +52,9 @@
}
}
}
},
"cpupart": {
"type": "string"
}
},
"required": [
@@ -107,4 +110,4 @@
"additionalProperties": false
}
}
}
}


@@ -0,0 +1,20 @@
The MIT License (MIT)
Copyright (c) 2014 Anders Høst
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


@@ -0,0 +1,76 @@
cpuid.py
========
Now, this is silly!
Pure Python library for accessing information about x86 processors
by querying the [CPUID](http://en.wikipedia.org/wiki/CPUID)
instruction. Well, not exactly pure Python...
It works by allocating a small piece of virtual memory, copying
a raw x86 function to that memory, giving the memory execute
permissions and then calling the memory as a function. The injected
function executes the CPUID instruction and copies the result back
to a ctypes.Structure where it can be read by Python.
It should work fine on both 32- and 64-bit versions of Windows and Linux
running x86 processors. Apple OS X and other BSD systems should also work,
though they have not been tested...
Why?
----
For poops and giggles. Plus, having access to a low-level feature
without having to compile a C wrapper is pretty neat.
Examples
--------
Getting info with eax=0:
import cpuid
q = cpuid.CPUID()
eax, ebx, ecx, edx = q(0)
Running the files:
$ python example.py
Vendor ID : GenuineIntel
CPU name : Intel(R) Xeon(R) CPU W3550 @ 3.07GHz
Vector instructions supported:
SSE : Yes
SSE2 : Yes
SSE3 : Yes
SSSE3 : Yes
SSE4.1 : Yes
SSE4.2 : Yes
SSE4a : --
AVX : --
AVX2 : --
$ python cpuid.py
CPUID A B C D
00000000 0000000b 756e6547 6c65746e 49656e69
00000001 000106a5 00100800 009ce3bd bfebfbff
00000002 55035a01 00f0b2e4 00000000 09ca212c
00000003 00000000 00000000 00000000 00000000
00000004 00000000 00000000 00000000 00000000
00000005 00000040 00000040 00000003 00001120
00000006 00000003 00000002 00000001 00000000
00000007 00000000 00000000 00000000 00000000
00000008 00000000 00000000 00000000 00000000
00000009 00000000 00000000 00000000 00000000
0000000a 07300403 00000044 00000000 00000603
0000000b 00000000 00000000 00000095 00000000
80000000 80000008 00000000 00000000 00000000
80000001 00000000 00000000 00000001 28100800
80000002 65746e49 2952286c 6f655820 2952286e
80000003 55504320 20202020 20202020 57202020
80000004 30353533 20402020 37302e33 007a4847
80000005 00000000 00000000 00000000 00000000
80000006 00000000 00000000 01006040 00000000
80000007 00000000 00000000 00000000 00000100
80000008 00003024 00000000 00000000 00000000


@@ -0,0 +1,172 @@
# -*- coding: utf-8 -*-
#
# Copyright (c) 2024 Anders Høst
#
from __future__ import print_function
import platform
import os
import ctypes
from ctypes import c_uint32, c_long, c_ulong, c_size_t, c_void_p, POINTER, CFUNCTYPE
# Posix x86_64:
# Three first call registers : RDI, RSI, RDX
# Volatile registers : RAX, RCX, RDX, RSI, RDI, R8-11
# Windows x86_64:
# Three first call registers : RCX, RDX, R8
# Volatile registers : RAX, RCX, RDX, R8-11
# cdecl 32 bit:
# Three first call registers : Stack (%esp)
# Volatile registers : EAX, ECX, EDX
_POSIX_64_OPC = [
0x53, # push %rbx
0x89, 0xf0, # mov %esi,%eax
0x89, 0xd1, # mov %edx,%ecx
0x0f, 0xa2, # cpuid
0x89, 0x07, # mov %eax,(%rdi)
0x89, 0x5f, 0x04, # mov %ebx,0x4(%rdi)
0x89, 0x4f, 0x08, # mov %ecx,0x8(%rdi)
0x89, 0x57, 0x0c, # mov %edx,0xc(%rdi)
0x5b, # pop %rbx
0xc3 # retq
]
_WINDOWS_64_OPC = [
0x53, # push %rbx
0x89, 0xd0, # mov %edx,%eax
0x49, 0x89, 0xc9, # mov %rcx,%r9
0x44, 0x89, 0xc1, # mov %r8d,%ecx
0x0f, 0xa2, # cpuid
0x41, 0x89, 0x01, # mov %eax,(%r9)
0x41, 0x89, 0x59, 0x04, # mov %ebx,0x4(%r9)
0x41, 0x89, 0x49, 0x08, # mov %ecx,0x8(%r9)
0x41, 0x89, 0x51, 0x0c, # mov %edx,0xc(%r9)
0x5b, # pop %rbx
0xc3 # retq
]
_CDECL_32_OPC = [
0x53, # push %ebx
0x57, # push %edi
0x8b, 0x7c, 0x24, 0x0c, # mov 0xc(%esp),%edi
0x8b, 0x44, 0x24, 0x10, # mov 0x10(%esp),%eax
0x8b, 0x4c, 0x24, 0x14, # mov 0x14(%esp),%ecx
0x0f, 0xa2, # cpuid
0x89, 0x07, # mov %eax,(%edi)
0x89, 0x5f, 0x04, # mov %ebx,0x4(%edi)
0x89, 0x4f, 0x08, # mov %ecx,0x8(%edi)
0x89, 0x57, 0x0c, # mov %edx,0xc(%edi)
0x5f, # pop %edi
0x5b, # pop %ebx
0xc3 # ret
]
is_windows = os.name == "nt"
is_64bit = ctypes.sizeof(ctypes.c_voidp) == 8
class CPUID_struct(ctypes.Structure):
_register_names = ("eax", "ebx", "ecx", "edx")
_fields_ = [(r, c_uint32) for r in _register_names]
def __getitem__(self, item):
if item not in self._register_names:
raise KeyError(item)
return getattr(self, item)
def __repr__(self):
return "eax=0x{:x}, ebx=0x{:x}, ecx=0x{:x}, edx=0x{:x}".format(self.eax, self.ebx, self.ecx, self.edx)
class CPUID(object):
def __init__(self):
if platform.machine() not in ("AMD64", "x86_64", "x86", "i686"):
raise SystemError("Only available for x86")
if is_windows:
if is_64bit:
# VirtualAlloc seems to fail under some weird
# circumstances when ctypes.windll.kernel32 is
# used under 64 bit Python. CDLL fixes this.
self.win = ctypes.CDLL("kernel32.dll")
opc = _WINDOWS_64_OPC
else:
# Here ctypes.windll.kernel32 is needed to get the
# right DLL. Otherwise it will fail when running
# 32 bit Python on 64 bit Windows.
self.win = ctypes.windll.kernel32
opc = _CDECL_32_OPC
else:
opc = _POSIX_64_OPC if is_64bit else _CDECL_32_OPC
size = len(opc)
code = (ctypes.c_ubyte * size)(*opc)
if is_windows:
self.win.VirtualAlloc.restype = c_void_p
self.win.VirtualAlloc.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_ulong, ctypes.c_ulong]
self.addr = self.win.VirtualAlloc(None, size, 0x1000, 0x40)
if not self.addr:
raise MemoryError("Could not allocate RWX memory")
ctypes.memmove(self.addr, code, size)
else:
from mmap import (
mmap,
MAP_PRIVATE,
MAP_ANONYMOUS,
PROT_WRITE,
PROT_READ,
PROT_EXEC,
)
self.mm = mmap(
-1,
size,
flags=MAP_PRIVATE | MAP_ANONYMOUS,
prot=PROT_WRITE | PROT_READ | PROT_EXEC,
)
self.mm.write(code)
self.addr = ctypes.addressof(ctypes.c_int.from_buffer(self.mm))
func_type = CFUNCTYPE(None, POINTER(CPUID_struct), c_uint32, c_uint32)
self.func_ptr = func_type(self.addr)
def __call__(self, eax, ecx=0):
struct = self.registers_for(eax=eax, ecx=ecx)
return struct.eax, struct.ebx, struct.ecx, struct.edx
def registers_for(self, eax, ecx=0):
"""Calls cpuid with eax and ecx set as the input arguments, and returns a structure
containing eax, ebx, ecx, and edx.
"""
struct = CPUID_struct()
self.func_ptr(struct, eax, ecx)
return struct
def __del__(self):
if is_windows:
self.win.VirtualFree.restype = c_long
self.win.VirtualFree.argtypes = [c_void_p, c_size_t, c_ulong]
self.win.VirtualFree(self.addr, 0, 0x8000)
else:
self.mm.close()
if __name__ == "__main__":
def valid_inputs():
cpuid = CPUID()
for eax in (0x0, 0x80000000):
highest, _, _, _ = cpuid(eax)
while eax <= highest:
regs = cpuid(eax)
yield (eax, regs)
eax += 1
print(" ".join(x.ljust(8) for x in ("CPUID", "A", "B", "C", "D")).strip())
for eax, regs in valid_inputs():
print("%08x" % eax, " ".join("%08x" % reg for reg in regs))


@@ -0,0 +1,62 @@
# -*- coding: utf-8 -*-
#
# Copyright (c) 2024 Anders Høst
#
from __future__ import print_function
import struct
import cpuid
def cpu_vendor(cpu):
_, b, c, d = cpu(0)
return struct.pack("III", b, d, c).decode("utf-8")
def cpu_name(cpu):
name = "".join((struct.pack("IIII", *cpu(0x80000000 + i)).decode("utf-8")
for i in range(2, 5)))
return name.split('\x00', 1)[0]
def is_set(cpu, leaf, subleaf, reg_idx, bit):
"""
@param {leaf} %eax
@param {subleaf} %ecx, 0 in most cases
@param {reg_idx} idx of [%eax, %ebx, %ecx, %edx], 0-based
@param {bit} bit of reg selected by {reg_idx}, 0-based
"""
regs = cpu(leaf, subleaf)
if (1 << bit) & regs[reg_idx]:
return "Yes"
else:
return "--"
if __name__ == "__main__":
cpu = cpuid.CPUID()
print("Vendor ID : %s" % cpu_vendor(cpu))
print("CPU name : %s" % cpu_name(cpu))
print()
print("Vector instructions supported:")
print("SSE : %s" % is_set(cpu, 1, 0, 3, 25))
print("SSE2 : %s" % is_set(cpu, 1, 0, 3, 26))
print("SSE3 : %s" % is_set(cpu, 1, 0, 2, 0))
print("SSSE3 : %s" % is_set(cpu, 1, 0, 2, 9))
print("SSE4.1 : %s" % is_set(cpu, 1, 0, 2, 19))
print("SSE4.2 : %s" % is_set(cpu, 1, 0, 2, 20))
print("SSE4a : %s" % is_set(cpu, 0x80000001, 0, 2, 6))
print("AVX : %s" % is_set(cpu, 1, 0, 2, 28))
print("AVX2 : %s" % is_set(cpu, 7, 0, 1, 5))
print("BMI1 : %s" % is_set(cpu, 7, 0, 1, 3))
print("BMI2 : %s" % is_set(cpu, 7, 0, 1, 8))
# Intel RDT CMT/MBM
print("L3 Monitoring : %s" % is_set(cpu, 0xf, 0, 3, 1))
print("L3 Occupancy : %s" % is_set(cpu, 0xf, 1, 3, 0))
print("L3 Total BW : %s" % is_set(cpu, 0xf, 1, 3, 1))
print("L3 Local BW : %s" % is_set(cpu, 0xf, 1, 3, 2))


@@ -4,7 +4,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
#: PEP440 canonical <major>.<minor>.<micro>.<devN> string
__version__ = "0.21.0.dev0"
__version__ = "0.21.3"
spack_version = __version__


@@ -40,6 +40,7 @@ def _search_duplicate_compilers(error_cls):
import collections.abc
import glob
import inspect
import io
import itertools
import pathlib
import pickle
@@ -54,6 +55,7 @@ def _search_duplicate_compilers(error_cls):
import spack.repo
import spack.spec
import spack.util.crypto
import spack.util.spack_yaml as syaml
import spack.variant
#: Map an audit tag to a list of callables implementing checks
@@ -250,6 +252,88 @@ def _search_duplicate_specs_in_externals(error_cls):
return errors
@config_packages
def _deprecated_preferences(error_cls):
"""Search package preferences deprecated in v0.21 (and slated for removal in v0.22)"""
# TODO (v0.22): remove this audit as the attributes will not be allowed in config
errors = []
packages_yaml = spack.config.CONFIG.get_config("packages")
def make_error(attribute_name, config_data, summary):
s = io.StringIO()
s.write("Occurring in the following file:\n")
dict_view = syaml.syaml_dict((k, v) for k, v in config_data.items() if k == attribute_name)
syaml.dump_config(dict_view, stream=s, blame=True)
return error_cls(summary=summary, details=[s.getvalue()])
if "all" in packages_yaml and "version" in packages_yaml["all"]:
summary = "Using the deprecated 'version' attribute under 'packages:all'"
errors.append(make_error("version", packages_yaml["all"], summary))
for package_name in packages_yaml:
if package_name == "all":
continue
package_conf = packages_yaml[package_name]
for attribute in ("compiler", "providers", "target"):
if attribute not in package_conf:
continue
summary = (
f"Using the deprecated '{attribute}' attribute " f"under 'packages:{package_name}'"
)
errors.append(make_error(attribute, package_conf, summary))
return errors
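For reference, a packages.yaml that trips both branches would parse to something like the following dict (hypothetical content):
packages_yaml = {
    "all": {"version": ["2:"]},       # deprecated: 'version' under 'packages:all'
    "mpich": {"target": ["x86_64"]},  # deprecated per-package 'target' attribute
}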
@config_packages
def _avoid_mismatched_variants(error_cls):
"""Warns if variant preferences have mismatched types or names."""
errors = []
packages_yaml = spack.config.CONFIG.get_config("packages")
def make_error(config_data, summary):
s = io.StringIO()
s.write("Occurring in the following file:\n")
syaml.dump_config(config_data, stream=s, blame=True)
return error_cls(summary=summary, details=[s.getvalue()])
for pkg_name in packages_yaml:
# 'all:' must be more forgiving, since it is setting defaults for everything
if pkg_name == "all" or "variants" not in packages_yaml[pkg_name]:
continue
preferences = packages_yaml[pkg_name]["variants"]
if not isinstance(preferences, list):
preferences = [preferences]
for variants in preferences:
current_spec = spack.spec.Spec(variants)
pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)
for variant in current_spec.variants.values():
# Variant does not exist at all
if variant.name not in pkg_cls.variants:
summary = (
f"Setting a preference for the '{pkg_name}' package to the "
f"non-existing variant '{variant.name}'"
)
errors.append(make_error(preferences, summary))
continue
# Variant cannot accept this value
s = spack.spec.Spec(pkg_name)
try:
s.update_variant_validate(variant.name, variant.value)
except Exception:
summary = (
f"Setting the variant '{variant.name}' of the '{pkg_name}' package "
f"to the invalid value '{str(variant)}'"
)
errors.append(make_error(preferences, summary))
return errors
#: Sanity checks on package directives
package_directives = AuditClass(
group="packages",
@@ -776,7 +860,7 @@ def _version_constraints_are_satisfiable_by_some_version_in_repo(pkgs, error_cls
)
except Exception:
summary = (
"{0}: dependency on {1} cannot be satisfied " "by known versions of {1.name}"
"{0}: dependency on {1} cannot be satisfied by known versions of {1.name}"
).format(pkg_name, s)
details = ["happening in " + filename]
if dependency_pkg_cls is not None:
@@ -818,6 +902,53 @@ def _analyze_variants_in_directive(pkg, constraint, directive, error_cls):
return errors
@package_directives
def _named_specs_in_when_arguments(pkgs, error_cls):
"""Reports named specs in the 'when=' attribute of a directive.
Note that 'conflicts' is the only directive allowing that.
"""
errors = []
for pkg_name in pkgs:
pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)
def _extracts_errors(triggers, summary):
_errors = []
for trigger in list(triggers):
when_spec = spack.spec.Spec(trigger)
if when_spec.name is not None and when_spec.name != pkg_name:
details = [f"using '{trigger}', should be '^{trigger}'"]
_errors.append(error_cls(summary=summary, details=details))
return _errors
for dname, triggers in pkg_cls.dependencies.items():
summary = f"{pkg_name}: wrong 'when=' condition for the '{dname}' dependency"
errors.extend(_extracts_errors(triggers, summary))
for vname, (variant, triggers) in pkg_cls.variants.items():
summary = f"{pkg_name}: wrong 'when=' condition for the '{vname}' variant"
errors.extend(_extracts_errors(triggers, summary))
for provided, triggers in pkg_cls.provided.items():
summary = f"{pkg_name}: wrong 'when=' condition for the '{provided}' virtual"
errors.extend(_extracts_errors(triggers, summary))
for _, triggers in pkg_cls.requirements.items():
triggers = [when_spec for when_spec, _, _ in triggers]
summary = f"{pkg_name}: wrong 'when=' condition in 'requires' directive"
errors.extend(_extracts_errors(triggers, summary))
triggers = list(pkg_cls.patches)
summary = f"{pkg_name}: wrong 'when=' condition in 'patch' directives"
errors.extend(_extracts_errors(triggers, summary))
triggers = list(pkg_cls.resources)
summary = f"{pkg_name}: wrong 'when=' condition in 'resource' directives"
errors.extend(_extracts_errors(triggers, summary))
return llnl.util.lang.dedupe(errors)
#: Sanity checks on package directives
external_detection = AuditClass(
group="externals",


@@ -66,8 +66,10 @@
from spack.stage import Stage
from spack.util.executable import which
_build_cache_relative_path = "build_cache"
_build_cache_keys_relative_path = "_pgp"
BUILD_CACHE_RELATIVE_PATH = "build_cache"
BUILD_CACHE_KEYS_RELATIVE_PATH = "_pgp"
CURRENT_BUILD_CACHE_LAYOUT_VERSION = 1
FORWARD_COMPAT_BUILD_CACHE_LAYOUT_VERSION = 2
class BuildCacheDatabase(spack_db.Database):
@@ -481,7 +483,7 @@ def _fetch_and_cache_index(self, mirror_url, cache_entry={}):
scheme = urllib.parse.urlparse(mirror_url).scheme
if scheme != "oci" and not web_util.url_exists(
url_util.join(mirror_url, _build_cache_relative_path, "index.json")
url_util.join(mirror_url, BUILD_CACHE_RELATIVE_PATH, "index.json")
):
return False
@@ -600,6 +602,10 @@ def __init__(self, msg):
super().__init__(msg)
class InvalidMetadataFile(spack.error.SpackError):
pass
class UnsignedPackageException(spack.error.SpackError):
"""
Raised if installation of unsigned package is attempted without
@@ -614,11 +620,11 @@ def compute_hash(data):
def build_cache_relative_path():
return _build_cache_relative_path
return BUILD_CACHE_RELATIVE_PATH
def build_cache_keys_relative_path():
return _build_cache_keys_relative_path
return BUILD_CACHE_KEYS_RELATIVE_PATH
def build_cache_prefix(prefix):
@@ -1401,7 +1407,7 @@ def _build_tarball_in_stage_dir(spec: Spec, out_url: str, stage_dir: str, option
spec_dict = sjson.load(content)
else:
raise ValueError("{0} not a valid spec file type".format(spec_file))
spec_dict["buildcache_layout_version"] = 1
spec_dict["buildcache_layout_version"] = CURRENT_BUILD_CACHE_LAYOUT_VERSION
spec_dict["binary_cache_checksum"] = {"hash_algorithm": "sha256", "hash": checksum}
with open(specfile_path, "w") as outfile:
@@ -1560,6 +1566,42 @@ def _delete_staged_downloads(download_result):
download_result["specfile_stage"].destroy()
def _get_valid_spec_file(path: str, max_supported_layout: int) -> Tuple[Dict, int]:
"""Read and validate a spec file, returning the spec dict with its layout version, or raising
InvalidMetadataFile if invalid."""
try:
with open(path, "rb") as f:
binary_content = f.read()
except OSError:
raise InvalidMetadataFile(f"No such file: {path}")
# In the future we may support transparently decompressing compressed spec files.
if binary_content[:2] == b"\x1f\x8b":
raise InvalidMetadataFile("Compressed spec files are not supported")
try:
as_string = binary_content.decode("utf-8")
if path.endswith(".json.sig"):
spec_dict = Spec.extract_json_from_clearsig(as_string)
else:
spec_dict = json.loads(as_string)
except Exception as e:
raise InvalidMetadataFile(f"Could not parse {path} due to: {e}") from e
# Ensure this version is not too new.
try:
layout_version = int(spec_dict.get("buildcache_layout_version", 0))
except ValueError as e:
raise InvalidMetadataFile("Could not parse layout version") from e
if layout_version > max_supported_layout:
raise InvalidMetadataFile(
f"Layout version {layout_version} is too new for this version of Spack"
)
return spec_dict, layout_version
def download_tarball(spec, unsigned=False, mirrors_for_spec=None):
"""
Download binary tarball for given package into stage area, returning
@@ -1652,6 +1694,18 @@ def download_tarball(spec, unsigned=False, mirrors_for_spec=None):
try:
local_specfile_stage.fetch()
local_specfile_stage.check()
try:
_get_valid_spec_file(
local_specfile_stage.save_filename,
FORWARD_COMPAT_BUILD_CACHE_LAYOUT_VERSION,
)
except InvalidMetadataFile as e:
tty.warn(
f"Ignoring binary package for {spec.name}/{spec.dag_hash()[:7]} "
f"from {mirror} due to invalid metadata file: {e}"
)
local_specfile_stage.destroy()
continue
except Exception:
continue
local_specfile_stage.cache_local()
@@ -1674,14 +1728,26 @@ def download_tarball(spec, unsigned=False, mirrors_for_spec=None):
else:
ext = "json.sig" if try_signed else "json"
specfile_path = url_util.join(mirror, _build_cache_relative_path, specfile_prefix)
specfile_path = url_util.join(mirror, BUILD_CACHE_RELATIVE_PATH, specfile_prefix)
specfile_url = f"{specfile_path}.{ext}"
spackfile_url = url_util.join(mirror, _build_cache_relative_path, tarball)
spackfile_url = url_util.join(mirror, BUILD_CACHE_RELATIVE_PATH, tarball)
local_specfile_stage = try_fetch(specfile_url)
if local_specfile_stage:
local_specfile_path = local_specfile_stage.save_filename
signature_verified = False
try:
_get_valid_spec_file(
local_specfile_path, FORWARD_COMPAT_BUILD_CACHE_LAYOUT_VERSION
)
except InvalidMetadataFile as e:
tty.warn(
f"Ignoring binary package for {spec.name}/{spec.dag_hash()[:7]} "
f"from {mirror} due to invalid metadata file: {e}"
)
local_specfile_stage.destroy()
continue
if try_signed and not unsigned:
# If we found a signed specfile at the root, try to verify
# the signature immediately. We will not download the
@@ -1961,11 +2027,12 @@ def _extract_inner_tarball(spec, filename, extract_to, unsigned, remote_checksum
def _tar_strip_component(tar: tarfile.TarFile, prefix: str):
"""Strip the top-level directory `prefix` from the member names in a tarfile."""
"""Yield all members of tarfile that start with given prefix, and strip that prefix (including
symlinks)"""
# Including trailing /, otherwise we end up with absolute paths.
regex = re.compile(re.escape(prefix) + "/*")
# Remove the top-level directory from the member (link)names.
# Only yield members in the package prefix.
# Note: when a tarfile is created, relative in-prefix symlinks are
# expanded to matching member names of tarfile entries. So, we have
# to ensure that those are updated too.
@@ -1973,12 +2040,14 @@ def _tar_strip_component(tar: tarfile.TarFile, prefix: str):
# them.
for m in tar.getmembers():
result = regex.match(m.name)
assert result is not None
if not result:
continue
m.name = m.name[result.end() :]
if m.linkname:
result = regex.match(m.linkname)
if result:
m.linkname = m.linkname[result.end() :]
yield m
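The stripping itself is a single regex match over member names; a minimal sketch with a hypothetical prefix:
import re

prefix = "tmp/stage/pkg-1.0"  # hypothetical common prefix
regex = re.compile(re.escape(prefix) + "/*")
match = regex.match("tmp/stage/pkg-1.0/bin/tool")
assert "tmp/stage/pkg-1.0/bin/tool"[match.end():] == "bin/tool"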
def extract_tarball(spec, download_result, unsigned=False, force=False, timer=timer.NULL_TIMER):
@@ -2001,24 +2070,16 @@ def extract_tarball(spec, download_result, unsigned=False, force=False, timer=ti
)
specfile_path = download_result["specfile_stage"].save_filename
with open(specfile_path, "r") as inputfile:
content = inputfile.read()
if specfile_path.endswith(".json.sig"):
spec_dict = Spec.extract_json_from_clearsig(content)
else:
spec_dict = sjson.load(content)
spec_dict, layout_version = _get_valid_spec_file(
specfile_path, FORWARD_COMPAT_BUILD_CACHE_LAYOUT_VERSION
)
bchecksum = spec_dict["binary_cache_checksum"]
filename = download_result["tarball_stage"].save_filename
signature_verified = download_result["signature_verified"]
tmpdir = None
if (
"buildcache_layout_version" not in spec_dict
or int(spec_dict["buildcache_layout_version"]) < 1
):
if layout_version == 0:
# Handle the older buildcache layout where the .spack file
# contains a spec json, maybe an .asc file (signature),
# and another tarball containing the actual install tree.
@@ -2029,7 +2090,7 @@ def extract_tarball(spec, download_result, unsigned=False, force=False, timer=ti
_delete_staged_downloads(download_result)
shutil.rmtree(tmpdir)
raise e
else:
elif 1 <= layout_version <= 2:
# Newer buildcache layout: the .spack file contains just
# in the install tree, the signature, if it exists, is
# wrapped around the spec.json at the root. If sig verify
@@ -2053,12 +2114,13 @@ def extract_tarball(spec, download_result, unsigned=False, force=False, timer=ti
raise NoChecksumException(
tarfile_path, size, contents, "sha256", expected, local_checksum
)
try:
with closing(tarfile.open(tarfile_path, "r")) as tar:
# Remove install prefix from tarfile to extract directly into spec.prefix
_tar_strip_component(tar, prefix=_ensure_common_prefix(tar))
tar.extractall(path=spec.prefix)
tar.extractall(
path=spec.prefix,
members=_tar_strip_component(tar, prefix=_ensure_common_prefix(tar)),
)
except Exception:
shutil.rmtree(spec.prefix, ignore_errors=True)
_delete_staged_downloads(download_result)
@@ -2093,20 +2155,47 @@ def extract_tarball(spec, download_result, unsigned=False, force=False, timer=ti
def _ensure_common_prefix(tar: tarfile.TarFile) -> str:
# Get the shortest length directory.
common_prefix = min((e.name for e in tar.getmembers() if e.isdir()), key=len, default=None)
# Find the lowest `binary_distribution` file (hard-coded forward slash is on purpose).
binary_distribution = min(
(
e.name
for e in tar.getmembers()
if e.isfile() and e.name.endswith(".spack/binary_distribution")
),
key=len,
default=None,
)
if common_prefix is None:
raise ValueError("Tarball does not contain a common prefix")
if binary_distribution is None:
raise ValueError("Tarball is not a Spack package, missing binary_distribution file")
# Validate that each file starts with the prefix
pkg_path = pathlib.PurePosixPath(binary_distribution).parent.parent
# Even the most ancient Spack versions have required the package's own prefix dir to be listed, so
# guard against broken tarballs where `path.parent.parent` is empty.
if pkg_path == pathlib.PurePosixPath():
raise ValueError("Invalid tarball, missing package prefix dir")
pkg_prefix = str(pkg_path)
# Ensure all tar entries are in the pkg_prefix dir, and if they're not, they should be parent
# dirs of it.
has_prefix = False
for member in tar.getmembers():
if not member.name.startswith(common_prefix):
raise ValueError(
f"Tarball contains file {member.name} outside of prefix {common_prefix}"
)
stripped = member.name.rstrip("/")
if not (
stripped.startswith(pkg_prefix) or member.isdir() and pkg_prefix.startswith(stripped)
):
raise ValueError(f"Tarball contains file {stripped} outside of prefix {pkg_prefix}")
if member.isdir() and stripped == pkg_prefix:
has_prefix = True
return common_prefix
# This is technically not required, but let's be defensive about the existence of the package
# prefix dir.
if not has_prefix:
raise ValueError(f"Tarball does not contain a common prefix {pkg_prefix}")
return pkg_prefix
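A self-contained sketch of the happy path, building an in-memory tarball with the layout this function expects (all paths hypothetical):
import io
import tarfile

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for dirname in ("opt", "opt/spack", "opt/spack/pkg-1.0"):
        info = tarfile.TarInfo(dirname)
        info.type = tarfile.DIRTYPE  # directory entries, including the package prefix
        tar.addfile(info)
    payload = b"{}"
    info = tarfile.TarInfo("opt/spack/pkg-1.0/.spack/binary_distribution")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    assert _ensure_common_prefix(tar) == "opt/spack/pkg-1.0"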
def install_root_node(spec, unsigned=False, force=False, sha256=None):
@@ -2184,10 +2273,10 @@ def try_direct_fetch(spec, mirrors=None):
for mirror in binary_mirrors:
buildcache_fetch_url_json = url_util.join(
mirror.fetch_url, _build_cache_relative_path, specfile_name
mirror.fetch_url, BUILD_CACHE_RELATIVE_PATH, specfile_name
)
buildcache_fetch_url_signed_json = url_util.join(
mirror.fetch_url, _build_cache_relative_path, signed_specfile_name
mirror.fetch_url, BUILD_CACHE_RELATIVE_PATH, signed_specfile_name
)
try:
_, _, fs = web_util.read_from_url(buildcache_fetch_url_signed_json)
@@ -2291,8 +2380,11 @@ def get_keys(install=False, trust=False, force=False, mirrors=None):
for mirror in mirror_collection.values():
fetch_url = mirror.fetch_url
# TODO: oci:// does not support signing.
if fetch_url.startswith("oci://"):
continue
keys_url = url_util.join(
fetch_url, _build_cache_relative_path, _build_cache_keys_relative_path
fetch_url, BUILD_CACHE_RELATIVE_PATH, BUILD_CACHE_KEYS_RELATIVE_PATH
)
keys_index = url_util.join(keys_url, "index.json")
@@ -2357,7 +2449,7 @@ def push_keys(*mirrors, **kwargs):
for mirror in mirrors:
push_url = getattr(mirror, "push_url", mirror)
keys_url = url_util.join(
push_url, _build_cache_relative_path, _build_cache_keys_relative_path
push_url, BUILD_CACHE_RELATIVE_PATH, BUILD_CACHE_KEYS_RELATIVE_PATH
)
keys_local = url_util.local_file_path(keys_url)
@@ -2495,11 +2587,11 @@ def download_buildcache_entry(file_descriptions, mirror_url=None):
)
if mirror_url:
mirror_root = os.path.join(mirror_url, _build_cache_relative_path)
mirror_root = os.path.join(mirror_url, BUILD_CACHE_RELATIVE_PATH)
return _download_buildcache_entry(mirror_root, file_descriptions)
for mirror in spack.mirror.MirrorCollection(binary=True).values():
mirror_root = os.path.join(mirror.fetch_url, _build_cache_relative_path)
mirror_root = os.path.join(mirror.fetch_url, BUILD_CACHE_RELATIVE_PATH)
if _download_buildcache_entry(mirror_root, file_descriptions):
return True
@@ -2590,7 +2682,7 @@ def __init__(self, url, local_hash, urlopen=web_util.urlopen):
def get_remote_hash(self):
# Failure to fetch index.json.hash is not fatal
url_index_hash = url_util.join(self.url, _build_cache_relative_path, "index.json.hash")
url_index_hash = url_util.join(self.url, BUILD_CACHE_RELATIVE_PATH, "index.json.hash")
try:
response = self.urlopen(urllib.request.Request(url_index_hash, headers=self.headers))
except urllib.error.URLError:
@@ -2611,7 +2703,7 @@ def conditional_fetch(self) -> FetchIndexResult:
return FetchIndexResult(etag=None, hash=None, data=None, fresh=True)
# Otherwise, download index.json
url_index = url_util.join(self.url, _build_cache_relative_path, "index.json")
url_index = url_util.join(self.url, BUILD_CACHE_RELATIVE_PATH, "index.json")
try:
response = self.urlopen(urllib.request.Request(url_index, headers=self.headers))
@@ -2655,7 +2747,7 @@ def __init__(self, url, etag, urlopen=web_util.urlopen):
def conditional_fetch(self) -> FetchIndexResult:
# Just do a conditional fetch immediately
url = url_util.join(self.url, _build_cache_relative_path, "index.json")
url = url_util.join(self.url, BUILD_CACHE_RELATIVE_PATH, "index.json")
headers = {
"User-Agent": web_util.SPACK_USER_AGENT,
"If-None-Match": '"{}"'.format(self.etag),


@@ -213,7 +213,8 @@ def _root_spec(spec_str: str) -> str:
if str(spack.platforms.host()) == "darwin":
spec_str += " %apple-clang"
elif str(spack.platforms.host()) == "windows":
spec_str += " %msvc"
# TODO (johnwparent): Remove version constraint when clingo patch is up
spec_str += " %msvc@:19.37"
else:
spec_str += " %gcc"


@@ -143,7 +143,9 @@ def _bootstrap_config_scopes() -> Sequence["spack.config.ConfigScope"]:
def _add_compilers_if_missing() -> None:
arch = spack.spec.ArchSpec.frontend_arch()
if not spack.compilers.compilers_for_arch(arch):
new_compilers = spack.compilers.find_new_compilers()
new_compilers = spack.compilers.find_new_compilers(
mixed_toolchain=sys.platform == "darwin"
)
if new_compilers:
spack.compilers.add_compilers_to_config(new_compilers, init_config=False)


@@ -324,19 +324,29 @@ def set_compiler_environment_variables(pkg, env):
# ttyout, ttyerr, etc.
link_dir = spack.paths.build_env_path
# Set SPACK compiler variables so that our wrapper knows what to call
# Set SPACK compiler variables so that our wrapper knows what to
# call. If there is no compiler configured then use a default
# wrapper which will emit an error if it is used.
if compiler.cc:
env.set("SPACK_CC", compiler.cc)
env.set("CC", os.path.join(link_dir, compiler.link_paths["cc"]))
else:
env.set("CC", os.path.join(link_dir, "cc"))
if compiler.cxx:
env.set("SPACK_CXX", compiler.cxx)
env.set("CXX", os.path.join(link_dir, compiler.link_paths["cxx"]))
else:
env.set("CC", os.path.join(link_dir, "c++"))
if compiler.f77:
env.set("SPACK_F77", compiler.f77)
env.set("F77", os.path.join(link_dir, compiler.link_paths["f77"]))
else:
env.set("F77", os.path.join(link_dir, "f77"))
if compiler.fc:
env.set("SPACK_FC", compiler.fc)
env.set("FC", os.path.join(link_dir, compiler.link_paths["fc"]))
else:
env.set("FC", os.path.join(link_dir, "fc"))
# Set SPACK compiler rpath flags so that our wrapper knows what to use
env.set("SPACK_CC_RPATH_ARG", compiler.cc_rpath_arg)
@@ -743,15 +753,16 @@ def setup_package(pkg, dirty, context: Context = Context.BUILD):
set_compiler_environment_variables(pkg, env_mods)
set_wrapper_variables(pkg, env_mods)
tty.debug("setup_package: grabbing modifications from dependencies")
env_mods.extend(setup_context.get_env_modifications())
tty.debug("setup_package: collected all modifications from dependencies")
# architecture specific setup
# Platform specific setup goes before package specific setup. This is for setting
# defaults like MACOSX_DEPLOYMENT_TARGET on macOS.
platform = spack.platforms.by_name(pkg.spec.architecture.platform)
target = platform.target(pkg.spec.architecture.target)
platform.setup_platform_environment(pkg, env_mods)
tty.debug("setup_package: grabbing modifications from dependencies")
env_mods.extend(setup_context.get_env_modifications())
tty.debug("setup_package: collected all modifications from dependencies")
if context == Context.TEST:
env_mods.prepend_path("PATH", ".")
elif context == Context.BUILD and not dirty and not env_mods.is_unset("CPATH"):
@@ -778,7 +789,7 @@ def setup_package(pkg, dirty, context: Context = Context.BUILD):
for mod in ["cray-mpich", "cray-libsci"]:
module("unload", mod)
if target.module_name:
if target and target.module_name:
load_module(target.module_name)
load_external_modules(pkg)
@@ -1016,10 +1027,17 @@ def get_env_modifications(self) -> EnvironmentModifications:
self._make_runnable(dspec, env)
if self.should_setup_run_env & flag:
run_env_mods = EnvironmentModifications()
for spec in dspec.dependents(deptype=dt.LINK | dt.RUN):
if id(spec) in self.nodes_in_subdag:
pkg.setup_dependent_run_environment(env, spec)
pkg.setup_run_environment(env)
pkg.setup_dependent_run_environment(run_env_mods, spec)
pkg.setup_run_environment(run_env_mods)
if self.context == Context.BUILD:
# Don't let the runtime environment of compiler-like dependencies leak into the
# build env
run_env_mods.drop("CC", "CXX", "F77", "FC")
env.extend(run_env_mods)
return env
def _make_buildtime_detectable(self, dep: spack.spec.Spec, env: EnvironmentModifications):
@@ -1315,7 +1333,7 @@ def make_stack(tb, stack=None):
# don't provide context if the code is actually in the base classes.
obj = frame.f_locals["self"]
func = getattr(obj, tb.tb_frame.f_code.co_name, "")
if func:
if func and hasattr(func, "__qualname__"):
typename, *_ = func.__qualname__.partition(".")
if isinstance(obj, CONTEXT_BASES) and typename not in basenames:
break


@@ -9,11 +9,10 @@
import shutil
from os.path import basename, dirname, isdir
from llnl.util.filesystem import find_headers, find_libraries, join_path
from llnl.util.filesystem import find_headers, find_libraries, join_path, mkdirp
from llnl.util.link_tree import LinkTree
from spack.directives import conflicts, variant
from spack.package import mkdirp
from spack.util.environment import EnvironmentModifications
from spack.util.executable import Executable
@@ -212,3 +211,7 @@ def link_flags(self):
@property
def ld_flags(self):
return "{0} {1}".format(self.search_flags, self.link_flags)
#: Tuple of Intel math libraries, exported to packages
INTEL_MATH_LIBRARIES = ("intel-mkl", "intel-oneapi-mkl", "intel-parallel-studio")


@@ -2,6 +2,8 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import warnings
import llnl.util.tty as tty
import llnl.util.tty.colify
import llnl.util.tty.color as cl
@@ -52,8 +54,10 @@ def setup_parser(subparser):
def configs(parser, args):
reports = spack.audit.run_group(args.subcommand)
_process_reports(reports)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
reports = spack.audit.run_group(args.subcommand)
_process_reports(reports)
def packages(parser, args):


@@ -7,13 +7,14 @@
import glob
import hashlib
import json
import multiprocessing
import multiprocessing.pool
import os
import shutil
import sys
import tempfile
import urllib.request
from typing import Dict, List, Optional, Tuple
from typing import Dict, List, Optional, Tuple, Union
import llnl.util.tty as tty
from llnl.string import plural
@@ -307,8 +308,30 @@ def _progress(i: int, total: int):
return ""
def _make_pool():
return multiprocessing.pool.Pool(determine_number_of_jobs(parallel=True))
class NoPool:
def map(self, func, args):
return [func(a) for a in args]
def starmap(self, func, args):
return [func(*a) for a in args]
def __enter__(self):
return self
def __exit__(self, *args):
pass
MaybePool = Union[multiprocessing.pool.Pool, NoPool]
def _make_pool() -> MaybePool:
"""Can't use threading because it's unsafe, and can't use spawned processes because of globals.
That leaves only forking."""
if multiprocessing.get_start_method() == "fork":
return multiprocessing.pool.Pool(determine_number_of_jobs(parallel=True))
else:
return NoPool()
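The docstring above captures the constraint; a runnable sketch of the same start-method guard with a toy workload:

```python
import multiprocessing
import multiprocessing.pool

def _square(x):
    return x * x

if __name__ == "__main__":
    # Use a real pool only where fork is the start method (e.g. Linux):
    # spawn re-imports the module and loses module-level state, and
    # threads are unsafe here, so everything else runs serially.
    if multiprocessing.get_start_method() == "fork":
        with multiprocessing.pool.Pool(2) as pool:
            print(pool.map(_square, range(4)))
    else:
        print([_square(x) for x in range(4)])
```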
def push_fn(args):
@@ -591,7 +614,7 @@ def _push_oci(
image_ref: ImageReference,
installed_specs_with_deps: List[Spec],
tmpdir: str,
pool: multiprocessing.pool.Pool,
pool: MaybePool,
) -> List[str]:
"""Push specs to an OCI registry
@@ -692,11 +715,10 @@ def _config_from_tag(image_ref: ImageReference, tag: str) -> Optional[dict]:
return config if "spec" in config else None
def _update_index_oci(
image_ref: ImageReference, tmpdir: str, pool: multiprocessing.pool.Pool
) -> None:
response = spack.oci.opener.urlopen(urllib.request.Request(url=image_ref.tags_url()))
spack.oci.opener.ensure_status(response, 200)
def _update_index_oci(image_ref: ImageReference, tmpdir: str, pool: MaybePool) -> None:
request = urllib.request.Request(url=image_ref.tags_url())
response = spack.oci.opener.urlopen(request)
spack.oci.opener.ensure_status(request, response, 200)
tags = json.load(response)["tags"]
# Fetch all image config files in parallel


@@ -796,7 +796,9 @@ def names(args: Namespace, out: IO) -> None:
commands = copy.copy(spack.cmd.all_commands())
if args.aliases:
commands.extend(spack.main.aliases.keys())
aliases = spack.config.get("config:aliases")
if aliases:
commands.extend(aliases.keys())
colify(commands, output=out)
@@ -812,8 +814,10 @@ def bash(args: Namespace, out: IO) -> None:
parser = spack.main.make_argument_parser()
spack.main.add_all_commands(parser)
aliases = ";".join(f"{key}:{val}" for key, val in spack.main.aliases.items())
out.write(f'SPACK_ALIASES="{aliases}"\n\n')
aliases_config = spack.config.get("config:aliases")
if aliases_config:
aliases = ";".join(f"{key}:{val}" for key, val in aliases_config.items())
out.write(f'SPACK_ALIASES="{aliases}"\n\n')
writer = BashCompletionWriter(parser.prog, out, args.aliases)
writer.write(parser)


@@ -0,0 +1,30 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import sys
from typing import List
import llnl.util.tty as tty
import spack.cmd
display_args = {"long": True, "show_flags": False, "variants": False, "indent": 4}
def confirm_action(specs: List[spack.spec.Spec], participle: str, noun: str):
"""Display the list of specs to be acted on and ask for confirmation.
Args:
specs: specs to be acted on
participle: action expressed as a participle, e.g. "uninstalled"
noun: action expressed as a noun, e.g. "uninstallation"
"""
tty.msg(f"The following {len(specs)} packages will be {participle}:\n")
spack.cmd.display_specs(specs, **display_args)
print("")
answer = tty.get_yes_or_no("Do you want to proceed?", default=False)
if not answer:
tty.msg(f"Aborting {noun}")
sys.exit(0)


@@ -31,6 +31,19 @@ def setup_parser(subparser):
aliases=["add"],
help="search the system for compilers to add to Spack configuration",
)
mixed_toolchain_group = find_parser.add_mutually_exclusive_group()
mixed_toolchain_group.add_argument(
"--mixed-toolchain",
action="store_true",
default=sys.platform == "darwin",
help="Allow mixed toolchains (for example: clang, clang++, gfortran)",
)
mixed_toolchain_group.add_argument(
"--no-mixed-toolchain",
action="store_false",
dest="mixed_toolchain",
help="Do not allow mixed toolchains (for example: clang, clang++, gfortran)",
)
find_parser.add_argument("add_paths", nargs=argparse.REMAINDER)
find_parser.add_argument(
"--scope",
@@ -86,7 +99,9 @@ def compiler_find(args):
# Below scope=None because we want new compilers that don't appear
# in any other configuration.
new_compilers = spack.compilers.find_new_compilers(paths, scope=None)
new_compilers = spack.compilers.find_new_compilers(
paths, scope=None, mixed_toolchain=args.mixed_toolchain
)
if new_compilers:
spack.compilers.add_compilers_to_config(new_compilers, scope=args.scope, init_config=False)
n = len(new_compilers)


@@ -407,7 +407,9 @@ def config_prefer_upstream(args):
pkgs = {}
for spec in pref_specs:
# Collect all the upstream compilers and versions for this package.
pkg = pkgs.get(spec.name, {"version": [], "compiler": []})
pkg = pkgs.get(spec.name, {"version": []})
all = pkgs.get("all", {"compiler": []})
pkgs["all"] = all
pkgs[spec.name] = pkg
# We have no existing variant if this is our first added version.
@@ -418,8 +420,8 @@ def config_prefer_upstream(args):
pkg["version"].append(version)
compiler = str(spec.compiler)
if compiler not in pkg["compiler"]:
pkg["compiler"].append(compiler)
if compiler not in all["compiler"]:
all["compiler"].append(compiler)
# Get and list all the variants that differ from the default.
variants = []


@@ -0,0 +1,103 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import argparse
import sys
from typing import List
import llnl.util.tty as tty
import spack.cmd
import spack.cmd.common.arguments as arguments
import spack.cmd.common.confirmation as confirmation
import spack.environment as ev
import spack.spec
description = "remove specs from the concretized lockfile of an environment"
section = "environments"
level = "long"
# Arguments for display_specs when we find ambiguity
display_args = {"long": True, "show_flags": False, "variants": False, "indent": 4}
def setup_parser(subparser):
subparser.add_argument(
"--root", action="store_true", help="deconcretize only specific environment roots"
)
arguments.add_common_arguments(subparser, ["yes_to_all", "specs"])
subparser.add_argument(
"-a",
"--all",
action="store_true",
dest="all",
help="deconcretize ALL specs that match each supplied spec",
)
def get_deconcretize_list(
args: argparse.Namespace, specs: List[spack.spec.Spec], env: ev.Environment
) -> List[spack.spec.Spec]:
"""
Get list of environment roots to deconcretize
"""
env_specs = [s for _, s in env.concretized_specs()]
to_deconcretize = []
errors = []
for s in specs:
if args.root:
# find all roots matching given spec
to_deconc = [e for e in env_specs if e.satisfies(s)]
else:
# find all roots matching or depending on a matching spec
to_deconc = [e for e in env_specs if any(d.satisfies(s) for d in e.traverse())]
if len(to_deconc) < 1:
tty.warn(f"No matching specs to deconcretize for {s}")
elif len(to_deconc) > 1 and not args.all:
errors.append((s, to_deconc))
to_deconcretize.extend(to_deconc)
if errors:
for spec, matching in errors:
tty.error(f"{spec} matches multiple concrete specs:")
sys.stderr.write("\n")
spack.cmd.display_specs(matching, output=sys.stderr, **display_args)
sys.stderr.write("\n")
sys.stderr.flush()
tty.die("Use '--all' to deconcretize all matching specs, or be more specific")
return to_deconcretize
def deconcretize_specs(args, specs):
env = spack.cmd.require_active_env(cmd_name="deconcretize")
if args.specs:
deconcretize_list = get_deconcretize_list(args, specs, env)
else:
deconcretize_list = [s for _, s in env.concretized_specs()]
if not args.yes_to_all:
confirmation.confirm_action(deconcretize_list, "deconcretized", "deconcretization")
with env.write_transaction():
for spec in deconcretize_list:
env.deconcretize(spec)
env.write()
def deconcretize(parser, args):
if not args.specs and not args.all:
tty.die(
"deconcretize requires at least one spec argument.",
" Use `spack deconcretize --all` to deconcretize ALL specs.",
)
specs = spack.cmd.parse_specs(args.specs) if args.specs else [any]
deconcretize_specs(args, specs)


@@ -99,10 +99,7 @@ def dev_build(self, args):
spec = specs[0]
if not spack.repo.PATH.exists(spec.name):
tty.die(
"No package for '{0}' was found.".format(spec.name),
" Use `spack create` to create a new package",
)
raise spack.repo.UnknownPackageError(spec.name)
if not spec.versions.concrete_range_as_version:
tty.die(


@@ -43,10 +43,7 @@ def edit_package(name, repo_path, namespace):
if not os.access(path, os.R_OK):
tty.die("Insufficient permissions on '%s'!" % path)
else:
tty.die(
"No package for '{0}' was found.".format(spec.name),
" Use `spack create` to create a new package",
)
raise spack.repo.UnknownPackageError(spec.name)
editor(path)


@@ -5,6 +5,7 @@
import argparse
import os
import shlex
import shutil
import sys
import tempfile
@@ -144,10 +145,13 @@ def create_temp_env_directory():
return tempfile.mkdtemp(prefix="spack-")
def env_activate(args):
if not args.activate_env and not args.dir and not args.temp:
tty.die("spack env activate requires an environment name, directory, or --temp")
def _tty_info(msg):
"""tty.info like function that prints the equivalent printf statement for eval."""
decorated = f'{colorize("@*b{==>}")} {msg}\n'
print(f"printf {shlex.quote(decorated)};")
def env_activate(args):
if not args.shell:
spack.cmd.common.shell_init_instructions(
"spack env activate", " eval `spack env activate {sh_arg} [...]`"
@@ -160,12 +164,25 @@ def env_activate(args):
env_name_or_dir = args.activate_env or args.dir
# When executing `spack env activate` without further arguments, activate
# the default environment. It's created when it doesn't exist yet.
if not env_name_or_dir and not args.temp:
short_name = "default"
if not ev.exists(short_name):
ev.create(short_name)
action = "Created and activated"
else:
action = "Activated"
env_path = ev.root(short_name)
_tty_info(f"{action} default environment in {env_path}")
# Temporary environment
if args.temp:
elif args.temp:
env = create_temp_env_directory()
env_path = os.path.abspath(env)
short_name = os.path.basename(env_path)
ev.create_in_dir(env).write(regenerate=False)
_tty_info(f"Created and activated temporary environment in {env_path}")
# Managed environment
elif ev.exists(env_name_or_dir) and not args.dir:
@@ -385,7 +402,7 @@ def env_remove(args):
try:
env = ev.read(env_name)
read_envs.append(env)
except spack.config.ConfigFormatError:
except (spack.config.ConfigFormatError, ev.SpackEnvironmentConfigError):
bad_envs.append(env_name)
if not args.yes_to_all:
@@ -553,8 +570,8 @@ def env_update_setup_parser(subparser):
def env_update(args):
manifest_file = ev.manifest_file(args.update_env)
backup_file = manifest_file + ".bkp"
needs_update = not ev.is_latest_format(manifest_file)
needs_update = not ev.is_latest_format(manifest_file)
if not needs_update:
tty.msg('No update needed for the environment "{0}"'.format(args.update_env))
return


@@ -6,6 +6,7 @@
import llnl.util.tty as tty
import spack.cmd.common.arguments
import spack.cmd.common.confirmation
import spack.cmd.uninstall
import spack.environment as ev
import spack.store
@@ -41,6 +42,6 @@ def gc(parser, args):
return
if not args.yes_to_all:
spack.cmd.uninstall.confirm_removal(specs)
spack.cmd.common.confirmation.confirm_action(specs, "uninstalled", "uninstallation")
spack.cmd.uninstall.do_uninstall(specs, force=False)


@@ -61,7 +61,7 @@ def graph(parser, args):
args.dot = True
env = ev.active_environment()
if env:
specs = env.all_specs()
specs = env.concrete_roots()
else:
specs = spack.store.STORE.db.query()


@@ -3,6 +3,7 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import sys
import textwrap
from itertools import zip_longest
@@ -16,6 +17,7 @@
import spack.install_test
import spack.repo
import spack.spec
import spack.version
from spack.package_base import preferred_version
description = "get detailed information on a particular package"
@@ -53,6 +55,7 @@ def setup_parser(subparser):
("--tags", print_tags.__doc__),
("--tests", print_tests.__doc__),
("--virtuals", print_virtuals.__doc__),
("--variants-by-name", "list variants in strict name order; don't group by condition"),
]
for opt, help_comment in options:
subparser.add_argument(opt, action="store_true", help=help_comment)
@@ -77,35 +80,10 @@ def license(s):
class VariantFormatter:
def __init__(self, variants):
self.variants = variants
def __init__(self, pkg):
self.variants = pkg.variants
self.headers = ("Name [Default]", "When", "Allowed values", "Description")
# Formats
fmt_name = "{0} [{1}]"
# Initialize column widths with the length of the
# corresponding headers, as they cannot be shorter
# than that
self.column_widths = [len(x) for x in self.headers]
# Expand columns based on max line lengths
for k, e in variants.items():
v, w = e
candidate_max_widths = (
len(fmt_name.format(k, self.default(v))), # Name [Default]
len(str(w)),
len(v.allowed_values), # Allowed values
len(v.description), # Description
)
self.column_widths = (
max(self.column_widths[0], candidate_max_widths[0]),
max(self.column_widths[1], candidate_max_widths[1]),
max(self.column_widths[2], candidate_max_widths[2]),
max(self.column_widths[3], candidate_max_widths[3]),
)
# Don't let name or possible values be less than max widths
_, cols = tty.terminal_size()
max_name = min(self.column_widths[0], 30)
@@ -137,6 +115,8 @@ def default(self, v):
def lines(self):
if not self.variants:
yield " None"
return
else:
yield " " + self.fmt % self.headers
underline = tuple([w * "=" for w in self.column_widths])
@@ -271,15 +251,165 @@ def print_tests(pkg):
color.cprint(" None")
def print_variants(pkg):
def _fmt_value(v):
if v is None or isinstance(v, bool):
return str(v).lower()
else:
return str(v)
def _fmt_name_and_default(variant):
"""Print colorized name [default] for a variant."""
return color.colorize(f"@c{{{variant.name}}} @C{{[{_fmt_value(variant.default)}]}}")
def _fmt_when(when, indent):
return color.colorize(f"{indent * ' '}@B{{when}} {color.cescape(when)}")
def _fmt_variant_description(variant, width, indent):
"""Format a variant's description, preserving explicit line breaks."""
return "\n".join(
textwrap.fill(
line, width=width, initial_indent=indent * " ", subsequent_indent=indent * " "
)
for line in variant.description.split("\n")
)
def _fmt_variant(variant, max_name_default_len, indent, when=None, out=None):
out = out or sys.stdout
_, cols = tty.terminal_size()
name_and_default = _fmt_name_and_default(variant)
name_default_len = color.clen(name_and_default)
values = variant.values
if not isinstance(variant.values, (tuple, list, spack.variant.DisjointSetsOfValues)):
values = [variant.values]
# put 'none' first, sort the rest by value
sorted_values = sorted(values, key=lambda v: (v != "none", v))
pad = 4 # min padding between 'name [default]' and values
value_indent = (indent + max_name_default_len + pad) * " " # left edge of values
# This preserves any formatting (i.e., newlines) from how the description was
# written in package.py, but still wraps long lines for small terminals.
# This allows some packages to provide detailed help on their variants (see, e.g., gasnet).
formatted_values = "\n".join(
textwrap.wrap(
f"{', '.join(_fmt_value(v) for v in sorted_values)}",
width=cols - 2,
initial_indent=value_indent,
subsequent_indent=value_indent,
)
)
formatted_values = formatted_values[indent + name_default_len + pad :]
# name [default] value1, value2, value3, ...
padding = pad * " "
color.cprint(f"{indent * ' '}{name_and_default}{padding}@c{{{formatted_values}}}", stream=out)
# when <spec>
description_indent = indent + 4
if when is not None and when != spack.spec.Spec():
out.write(_fmt_when(when, description_indent - 2))
out.write("\n")
# description, preserving explicit line breaks from the way it's written in the package file
out.write(_fmt_variant_description(variant, cols - 2, description_indent))
out.write("\n")
def _variants_by_name_when(pkg):
"""Adaptor to get variants keyed by { name: { when: { [Variant...] } }."""
# TODO: replace with pkg.variants_by_name(when=True) when unified directive dicts are merged.
variants = {}
for name, (variant, whens) in sorted(pkg.variants.items()):
for when in whens:
variants.setdefault(name, {}).setdefault(when, []).append(variant)
return variants
def _variants_by_when_name(pkg):
"""Adaptor to get variants keyed by { when: { name: Variant } }"""
# TODO: replace with pkg.variants when unified directive dicts are merged.
variants = {}
for name, (variant, whens) in pkg.variants.items():
for when in whens:
variants.setdefault(when, {})[name] = variant
return variants
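A toy illustration, with fabricated variant data, of the two shapes these adaptors produce:

```python
# {name: (variant, [when specs])}, roughly how the directive data is keyed
pkg_variants = {"cuda": ("<Variant cuda>", ["", "@2:"])}

by_name_when, by_when_name = {}, {}
for name, (variant, whens) in pkg_variants.items():
    for when in whens:
        by_name_when.setdefault(name, {}).setdefault(when, []).append(variant)
        by_when_name.setdefault(when, {})[name] = variant

print(by_name_when)  # {'cuda': {'': ['<Variant cuda>'], '@2:': ['<Variant cuda>']}}
print(by_when_name)  # {'': {'cuda': '<Variant cuda>'}, '@2:': {'cuda': '<Variant cuda>'}}
```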
def _print_variants_header(pkg):
"""output variants"""
if not pkg.variants:
print(" None")
return
color.cprint("")
color.cprint(section_title("Variants:"))
formatter = VariantFormatter(pkg.variants)
for line in formatter.lines:
color.cprint(color.cescape(line))
variants_by_name = _variants_by_name_when(pkg)
# Calculate the max length of the "name [default]" part of the variant display
# This lets us know where to print variant values.
max_name_default_len = max(
color.clen(_fmt_name_and_default(variant))
for name, when_variants in variants_by_name.items()
for variants in when_variants.values()
for variant in variants
)
return max_name_default_len, variants_by_name
def _unconstrained_ver_first(item):
"""sort key that puts specs with open version ranges first"""
spec, _ = item
return (spack.version.any_version not in spec.versions, spec)
def print_variants_grouped_by_when(pkg):
max_name_default_len, _ = _print_variants_header(pkg)
indent = 4
variants = _variants_by_when_name(pkg)
for when, variants_by_name in sorted(variants.items(), key=_unconstrained_ver_first):
padded_values = max_name_default_len + 4
start_indent = indent
if when != spack.spec.Spec():
sys.stdout.write("\n")
sys.stdout.write(_fmt_when(when, indent))
sys.stdout.write("\n")
# indent names slightly inside 'when', but line up values
padded_values -= 2
start_indent += 2
for name, variant in sorted(variants_by_name.items()):
_fmt_variant(variant, padded_values, start_indent, None, out=sys.stdout)
def print_variants_by_name(pkg):
max_name_default_len, variants_by_name = _print_variants_header(pkg)
max_name_default_len += 4
indent = 4
for name, when_variants in variants_by_name.items():
for when, variants in sorted(when_variants.items(), key=_unconstrained_ver_first):
for variant in variants:
_fmt_variant(variant, max_name_default_len, indent, when, out=sys.stdout)
sys.stdout.write("\n")
def print_variants(pkg):
"""output variants"""
print_variants_grouped_by_when(pkg)
def print_versions(pkg):
@@ -300,18 +430,24 @@ def print_versions(pkg):
pad = padder(pkg.versions, 4)
preferred = preferred_version(pkg)
url = ""
if pkg.has_code:
url = fs.for_package_version(pkg, preferred)
def get_url(version):
try:
return fs.for_package_version(pkg, version)
except spack.fetch_strategy.InvalidArgsError:
return "No URL"
url = get_url(preferred) if pkg.has_code else ""
line = version(" {0}".format(pad(preferred))) + color.cescape(url)
color.cprint(line)
color.cwrite(line)
print()
safe = []
deprecated = []
for v in reversed(sorted(pkg.versions)):
if pkg.has_code:
url = fs.for_package_version(pkg, v)
url = get_url(v)
if pkg.versions[v].get("deprecated", False):
deprecated.append((v, url))
else:
@@ -384,7 +520,12 @@ def info(parser, args):
else:
color.cprint(" None")
color.cprint(section_title("Homepage: ") + pkg.homepage)
if getattr(pkg, "homepage"):
color.cprint(section_title("Homepage: ") + pkg.homepage)
_print_variants = (
print_variants_by_name if args.variants_by_name else print_variants_grouped_by_when
)
# Now output optional information in expected order
sections = [
@@ -392,7 +533,7 @@ def info(parser, args):
(args.all or args.detectable, print_detectable),
(args.all or args.tags, print_tags),
(args.all or not args.no_versions, print_versions),
(args.all or not args.no_variants, print_variants),
(args.all or not args.no_variants, _print_variants),
(args.all or args.phases, print_phases),
(args.all or not args.no_dependencies, print_dependencies),
(args.all or args.virtuals, print_virtuals),


@@ -23,7 +23,7 @@
# tutorial configuration parameters
tutorial_branch = "releases/v0.20"
tutorial_branch = "releases/v0.21"
tutorial_mirror = "file:///mirror"
tutorial_key = os.path.join(spack.paths.share_path, "keys", "tutorial.pub")


@@ -11,10 +11,9 @@
import spack.cmd
import spack.cmd.common.arguments as arguments
import spack.cmd.common.confirmation as confirmation
import spack.environment as ev
import spack.error
import spack.package_base
import spack.repo
import spack.spec
import spack.store
import spack.traverse as traverse
@@ -278,7 +277,7 @@ def uninstall_specs(args, specs):
return
if not args.yes_to_all:
confirm_removal(uninstall_list)
confirmation.confirm_action(uninstall_list, "uninstalled", "uninstallation")
# Uninstall everything on the list
do_uninstall(uninstall_list, args.force)
@@ -292,21 +291,6 @@ def uninstall_specs(args, specs):
env.regenerate_views()
def confirm_removal(specs: List[spack.spec.Spec]):
"""Display the list of specs to be removed and ask for confirmation.
Args:
specs: specs to be removed
"""
tty.msg("The following {} packages will be uninstalled:\n".format(len(specs)))
spack.cmd.display_specs(specs, **display_args)
print("")
answer = tty.get_yes_or_no("Do you want to proceed?", default=False)
if not answer:
tty.msg("Aborting uninstallation")
sys.exit(0)
def uninstall(parser, args):
if not args.specs and not args.all:
tty.die(


@@ -10,7 +10,7 @@
import itertools
import multiprocessing.pool
import os
from typing import Dict, List
from typing import Dict, List, Optional, Tuple
import archspec.cpu
@@ -21,6 +21,7 @@
import spack.compiler
import spack.config
import spack.error
import spack.operating_systems
import spack.paths
import spack.platforms
import spack.spec
@@ -111,16 +112,16 @@ def _to_dict(compiler):
def get_compiler_config(scope=None, init_config=True):
"""Return the compiler configuration for the specified architecture."""
config = spack.config.get("compilers", scope=scope) or []
config = spack.config.CONFIG.get("compilers", scope=scope) or []
if config or not init_config:
return config
merged_config = spack.config.get("compilers")
merged_config = spack.config.CONFIG.get("compilers")
if merged_config:
return config
_init_compiler_config(scope=scope)
config = spack.config.get("compilers", scope=scope)
config = spack.config.CONFIG.get("compilers", scope=scope)
return config
@@ -153,6 +154,14 @@ def add_compilers_to_config(compilers, scope=None, init_config=True):
"""
compiler_config = get_compiler_config(scope, init_config)
for compiler in compilers:
if not compiler.cc:
tty.debug(f"{compiler.spec} does not have a C compiler")
if not compiler.cxx:
tty.debug(f"{compiler.spec} does not have a C++ compiler")
if not compiler.f77:
tty.debug(f"{compiler.spec} does not have a Fortran77 compiler")
if not compiler.fc:
tty.debug(f"{compiler.spec} does not have a Fortran compiler")
compiler_config.append(_to_dict(compiler))
spack.config.set("compilers", compiler_config, scope=scope)
@@ -223,13 +232,16 @@ def all_compiler_specs(scope=None, init_config=True):
]
def find_compilers(path_hints=None):
def find_compilers(
path_hints: Optional[List[str]] = None, *, mixed_toolchain=False
) -> List["spack.compiler.Compiler"]:
"""Return the list of compilers found in the paths given as arguments.
Args:
path_hints (list or None): list of path hints where to look for.
A sensible default based on the ``PATH`` environment variable
will be used if the value is None
path_hints: list of path hints where to look for. A sensible default based on the ``PATH``
environment variable will be used if the value is None
mixed_toolchain: allow mixing compilers from different toolchains if otherwise missing for
a certain language
"""
if path_hints is None:
path_hints = get_path("PATH")
@@ -250,7 +262,7 @@ def find_compilers(path_hints=None):
finally:
tp.close()
def valid_version(item):
def valid_version(item: Tuple[Optional[DetectVersionArgs], Optional[str]]) -> bool:
value, error = item
if error is None:
return True
@@ -262,25 +274,37 @@ def valid_version(item):
pass
return False
def remove_errors(item):
def remove_errors(
item: Tuple[Optional[DetectVersionArgs], Optional[str]]
) -> DetectVersionArgs:
value, _ = item
assert value is not None
return value
return make_compiler_list(map(remove_errors, filter(valid_version, detected_versions)))
return make_compiler_list(
[remove_errors(detected) for detected in detected_versions if valid_version(detected)],
mixed_toolchain=mixed_toolchain,
)
def find_new_compilers(path_hints=None, scope=None):
def find_new_compilers(
path_hints: Optional[List[str]] = None,
scope: Optional[str] = None,
*,
mixed_toolchain: bool = False,
):
"""Same as ``find_compilers`` but return only the compilers that are not
already in compilers.yaml.
Args:
path_hints (list or None): list of path hints where to look for.
A sensible default based on the ``PATH`` environment variable
will be used if the value is None
scope (str): scope to look for a compiler. If None consider the
merged configuration.
path_hints: list of path hints where to look for. A sensible default based on the ``PATH``
environment variable will be used if the value is None
scope: scope to look for a compiler. If None consider the merged configuration.
mixed_toolchain: allow mixing compilers from different toolchains if otherwise missing for
a certain language
"""
compilers = find_compilers(path_hints)
compilers = find_compilers(path_hints, mixed_toolchain=mixed_toolchain)
return select_new_compilers(compilers, scope)
@@ -490,9 +514,10 @@ def get_compilers(config, cspec=None, arch_spec=None):
for items in config:
items = items["compiler"]
# NOTE: in principle this should be equality not satisfies, but config can still
# be written in old format gcc@10.1.0 instead of gcc@=10.1.0.
if cspec and not cspec.satisfies(items["spec"]):
# We might use equality here.
if cspec and not spack.spec.parse_with_version_concrete(
items["spec"], compiler=True
).satisfies(cspec):
continue
# If an arch spec is given, confirm that this compiler
@@ -638,7 +663,9 @@ def all_compiler_types():
)
def arguments_to_detect_version_fn(operating_system, paths):
def arguments_to_detect_version_fn(
operating_system: spack.operating_systems.OperatingSystem, paths: List[str]
) -> List[DetectVersionArgs]:
"""Returns a list of DetectVersionArgs tuples to be used in a
corresponding function to detect compiler versions.
@@ -646,8 +673,7 @@ def arguments_to_detect_version_fn(operating_system, paths):
function by providing a method called with the same name.
Args:
operating_system (spack.operating_systems.OperatingSystem): the operating system
on which we are looking for compilers
operating_system: the operating system on which we are looking for compilers
paths: paths to search for compilers
Returns:
@@ -656,10 +682,10 @@ def arguments_to_detect_version_fn(operating_system, paths):
compilers in this OS.
"""
def _default(search_paths):
command_arguments = []
def _default(search_paths: List[str]) -> List[DetectVersionArgs]:
command_arguments: List[DetectVersionArgs] = []
files_to_be_tested = fs.files_in(*search_paths)
for compiler_name in spack.compilers.supported_compilers_for_host_platform():
for compiler_name in supported_compilers_for_host_platform():
compiler_cls = class_for_compiler_name(compiler_name)
for language in ("cc", "cxx", "f77", "fc"):
@@ -684,7 +710,9 @@ def _default(search_paths):
return fn(paths)
def detect_version(detect_version_args):
def detect_version(
detect_version_args: DetectVersionArgs,
) -> Tuple[Optional[DetectVersionArgs], Optional[str]]:
"""Computes the version of a compiler and adds it to the information
passed as input.
@@ -693,8 +721,7 @@ def detect_version(detect_version_args):
needs to be checked by the code dispatching the calls.
Args:
detect_version_args (DetectVersionArgs): information on the
compiler for which we should detect the version.
detect_version_args: information on the compiler for which we should detect the version.
Returns:
A ``(DetectVersionArgs, error)`` tuple. If ``error`` is ``None`` the
@@ -710,7 +737,7 @@ def _default(fn_args):
path = fn_args.path
# Get compiler names and the callback to detect their versions
callback = getattr(compiler_cls, "{0}_version".format(language))
callback = getattr(compiler_cls, f"{language}_version")
try:
version = callback(path)
@@ -736,13 +763,15 @@ def _default(fn_args):
return fn(detect_version_args)
def make_compiler_list(detected_versions):
def make_compiler_list(
detected_versions: List[DetectVersionArgs], mixed_toolchain: bool = False
) -> List["spack.compiler.Compiler"]:
"""Process a list of detected versions and turn them into a list of
compiler specs.
Args:
detected_versions (list): list of DetectVersionArgs containing a
valid version
detected_versions: list of DetectVersionArgs containing a valid version
mixed_toolchain: allow mixing compilers from different toolchains if a language is missing
Returns:
list: list of Compiler objects
@@ -751,7 +780,7 @@ def make_compiler_list(detected_versions):
sorted_compilers = sorted(detected_versions, key=group_fn)
# Gather items in a dictionary by the id, name variation and language
compilers_d = {}
compilers_d: Dict[CompilerID, Dict[NameVariation, dict]] = {}
for sort_key, group in itertools.groupby(sorted_compilers, key=group_fn):
compiler_id, name_variation, language = sort_key
by_compiler_id = compilers_d.setdefault(compiler_id, {})
@@ -760,7 +789,7 @@ def make_compiler_list(detected_versions):
def _default_make_compilers(cmp_id, paths):
operating_system, compiler_name, version = cmp_id
compiler_cls = spack.compilers.class_for_compiler_name(compiler_name)
compiler_cls = class_for_compiler_name(compiler_name)
spec = spack.spec.CompilerSpec(compiler_cls.name, f"={version}")
paths = [paths.get(x, None) for x in ("cc", "cxx", "f77", "fc")]
# TODO: johnwparent - revist the following line as per discussion at:
@@ -782,13 +811,14 @@ def _default_make_compilers(cmp_id, paths):
getattr(variation, "suffix", None),
)
compilers = []
# Flatten to a list of compiler id, primary variation and compiler dictionary
flat_compilers: List[Tuple[CompilerID, NameVariation, dict]] = []
for compiler_id, by_compiler_id in compilers_d.items():
ordered = sorted(by_compiler_id, key=sort_fn)
selected_variation = ordered[0]
selected = by_compiler_id[selected_variation]
# fill any missing parts from subsequent entries
# Fill any missing parts from subsequent entries (without mixing toolchains)
for lang in ["cxx", "f77", "fc"]:
if lang not in selected:
next_lang = next(
@@ -797,14 +827,63 @@ def _default_make_compilers(cmp_id, paths):
if next_lang:
selected[lang] = next_lang
operating_system, _, _ = compiler_id
make_compilers = getattr(operating_system, "make_compilers", _default_make_compilers)
flat_compilers.append((compiler_id, selected_variation, selected))
compilers.extend(make_compilers(compiler_id, selected))
# Next, fill out the blanks of missing compilers by creating a mixed toolchain (if requested)
if mixed_toolchain:
make_mixed_toolchain(flat_compilers)
# Finally, create the compiler list
compilers = []
for compiler_id, _, compiler in flat_compilers:
make_compilers = getattr(compiler_id.os, "make_compilers", _default_make_compilers)
compilers.extend(make_compilers(compiler_id, compiler))
return compilers
def make_mixed_toolchain(compilers: List[Tuple[CompilerID, NameVariation, dict]]) -> None:
"""Add missing compilers across toolchains when they are missing for a particular language.
This currently only adds the most sensible gfortran to (apple)-clang if it doesn't have a
fortran compiler (no flang)."""
# First collect the clangs that are missing a fortran compiler
clangs_without_flang = [
(id, variation, compiler)
for id, variation, compiler in compilers
if id.compiler_name in ("clang", "apple-clang")
and "f77" not in compiler
and "fc" not in compiler
]
if not clangs_without_flang:
return
# Filter on GCCs with fortran compiler
gccs_with_fortran = [
(id, variation, compiler)
for id, variation, compiler in compilers
if id.compiler_name == "gcc" and "f77" in compiler and "fc" in compiler
]
# Sort these GCCs by "best variation" (no prefix / suffix first)
gccs_with_fortran.sort(
key=lambda x: (getattr(x[1], "prefix", None), getattr(x[1], "suffix", None))
)
# Attach the optimal GCC fortran compiler to the clangs that don't have one
for clang_id, _, clang_compiler in clangs_without_flang:
gcc_compiler = next(
(gcc[2] for gcc in gccs_with_fortran if gcc[0].os == clang_id.os), None
)
if not gcc_compiler:
continue
# Update the fc / f77 entries
clang_compiler["f77"] = gcc_compiler["f77"]
clang_compiler["fc"] = gcc_compiler["fc"]
def is_mixed_toolchain(compiler):
"""Returns True if the current compiler is a mixed toolchain,
False otherwise.


@@ -5,7 +5,6 @@
import os
import re
import sys
import llnl.util.lang
@@ -114,17 +113,6 @@ def extract_version_from_output(cls, output):
return ".".join(match.groups())
return "unknown"
@classmethod
def fc_version(cls, fortran_compiler):
if sys.platform == "darwin":
return cls.default_version("clang")
return cls.default_version(fortran_compiler)
@classmethod
def f77_version(cls, f77):
return cls.fc_version(f77)
@property
def stdcxx_libs(self):
return ("-lstdc++",)


@@ -5,7 +5,6 @@
import os
import re
import sys
import llnl.util.lang
@@ -39,10 +38,10 @@ class Clang(Compiler):
cxx_names = ["clang++"]
# Subclasses use possible names of Fortran 77 compiler
f77_names = ["flang", "gfortran", "xlf_r"]
f77_names = ["flang"]
# Subclasses use possible names of Fortran 90 compiler
fc_names = ["flang", "gfortran", "xlf90_r"]
fc_names = ["flang"]
version_argument = "--version"
@@ -182,16 +181,3 @@ def extract_version_from_output(cls, output):
if match:
ver = match.group(match.lastindex)
return ver
@classmethod
def fc_version(cls, fc):
# We could map from gcc/gfortran version to clang version, but on macOS
# we normally mix any version of gfortran with any version of clang.
if sys.platform == "darwin":
return cls.default_version("clang")
else:
return cls.default_version(fc)
@classmethod
def f77_version(cls, f77):
return cls.fc_version(f77)


@@ -9,6 +9,8 @@
import sys
from typing import Dict, List, Set
import archspec.cpu
import spack.compiler
import spack.operating_systems.windows_os
import spack.platforms
@@ -185,6 +187,9 @@ def __init__(self, *args, **kwargs):
# get current platform architecture and format for vcvars argument
arch = spack.platforms.real_host().default.lower()
arch = arch.replace("-", "_")
if str(archspec.cpu.host().family) == "x86_64":
arch = "amd64"
self.vcvars_call = VCVarsInvocation(vcvars_script_path, arch, self.msvc_version)
env_cmds.append(self.vcvars_call)
# Below is a check for a valid fortran path


@@ -69,6 +69,7 @@
SECTION_SCHEMAS = {
"compilers": spack.schema.compilers.schema,
"concretizer": spack.schema.concretizer.schema,
"definitions": spack.schema.definitions.schema,
"mirrors": spack.schema.mirrors.schema,
"repos": spack.schema.repos.schema,
"packages": spack.schema.packages.schema,
@@ -994,6 +995,7 @@ def read_config_file(filename, schema=None):
key = next(iter(data))
schema = _ALL_SCHEMAS[key]
validate(data, schema)
return data
except StopIteration:


@@ -1522,14 +1522,18 @@ def _query(
# TODO: like installed and known that can be queried? Or are
# TODO: these really special cases that only belong here?
# Just look up concrete specs with hashes; no fancy search.
if isinstance(query_spec, spack.spec.Spec) and query_spec.concrete:
# TODO: handling of hashes restriction is not particularly elegant.
hash_key = query_spec.dag_hash()
if hash_key in self._data and (not hashes or hash_key in hashes):
return [self._data[hash_key].spec]
else:
return []
if query_spec is not any:
if not isinstance(query_spec, spack.spec.Spec):
query_spec = spack.spec.Spec(query_spec)
# Just look up concrete specs with hashes; no fancy search.
if query_spec.concrete:
# TODO: handling of hashes restriction is not particularly elegant.
hash_key = query_spec.dag_hash()
if hash_key in self._data and (not hashes or hash_key in hashes):
return [self._data[hash_key].spec]
else:
return []
# Abstract specs require more work -- currently we test
# against everything.
@@ -1537,6 +1541,9 @@ def _query(
start_date = start_date or datetime.datetime.min
end_date = end_date or datetime.datetime.max
# save specs whose name doesn't match for last, to avoid a virtual check
deferred = []
for key, rec in self._data.items():
if hashes is not None and rec.spec.dag_hash() not in hashes:
continue
@@ -1561,8 +1568,26 @@ def _query(
if not (start_date < inst_date < end_date):
continue
if query_spec is any or rec.spec.satisfies(query_spec):
if query_spec is any:
results.append(rec.spec)
continue
# check anon specs and exact name matches first
if not query_spec.name or rec.spec.name == query_spec.name:
if rec.spec.satisfies(query_spec):
results.append(rec.spec)
# save potential virtual matches for later, but not if we already found a match
elif not results:
deferred.append(rec.spec)
# Checking for virtuals is expensive, so we save it for last and only if needed.
# If we get here, we didn't find anything in the DB that matched by name.
# If we did find something, the query spec can't be virtual b/c we matched an actual
# package installation, so skip the virtual check entirely. If we *didn't* find anything,
# check all the deferred specs *if* the query is virtual.
if not results and query_spec is not any and deferred and query_spec.virtual:
results = [spec for spec in deferred if spec.satisfies(query_spec)]
return results
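The same two-pass idea in isolation, with fabricated records and a stand-in for the expensive virtual-provider check:

```python
records = {"openmpi@4.1.5": "openmpi", "zlib@1.3": "zlib"}  # spec -> name
provides = {"openmpi@4.1.5": {"mpi"}}  # stand-in for spec.satisfies(virtual)
query_name = "mpi"  # hypothetical virtual query

results, deferred = [], []
for spec, name in records.items():
    if name == query_name:  # cheap name comparison first
        results.append(spec)
    elif not results:  # remember possible virtual matches
        deferred.append(spec)

# Pay for the virtual check only when nothing matched by name.
if not results and deferred:
    results = [s for s in deferred if query_name in provides.get(s, set())]

print(results)  # ['openmpi@4.1.5']
```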


@@ -137,6 +137,7 @@ class DirectiveMeta(type):
_directive_dict_names: Set[str] = set()
_directives_to_be_executed: List[str] = []
_when_constraints_from_context: List[str] = []
_default_args: List[dict] = []
def __new__(cls, name, bases, attr_dict):
# Initialize the attribute containing the list of directives
@@ -199,6 +200,16 @@ def pop_from_context():
"""Pop the last constraint from the context"""
return DirectiveMeta._when_constraints_from_context.pop()
@staticmethod
def push_default_args(default_args):
"""Push default arguments"""
DirectiveMeta._default_args.append(default_args)
@staticmethod
def pop_default_args():
"""Pop default arguments"""
return DirectiveMeta._default_args.pop()
@staticmethod
def directive(dicts=None):
"""Decorator for Spack directives.
@@ -259,7 +270,13 @@ def _decorator(decorated_function):
directive_names.append(decorated_function.__name__)
@functools.wraps(decorated_function)
def _wrapper(*args, **kwargs):
def _wrapper(*args, **_kwargs):
# First merge default args with kwargs
kwargs = dict()
for default_args in DirectiveMeta._default_args:
kwargs.update(default_args)
kwargs.update(_kwargs)
# Inject when arguments from the context
if DirectiveMeta._when_constraints_from_context:
# Check that directives not yet supporting the when= argument
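The merge above means later pushes and explicit keywords win over earlier defaults; a simplified standalone sketch (names illustrative, not the directive API):

```python
import contextlib

_default_args = []  # stack of default kwarg dicts, like DirectiveMeta._default_args

@contextlib.contextmanager
def default_args(**kwargs):
    _default_args.append(kwargs)
    try:
        yield
    finally:
        _default_args.pop()

def directive(spec, **explicit):
    merged = {}
    for defaults in _default_args:  # earlier pushes first...
        merged.update(defaults)
    merged.update(explicit)  # ...explicit keywords override everything
    print(spec, merged)

with default_args(type="build"):
    directive("cmake")              # cmake {'type': 'build'}
    directive("ninja", type="run")  # ninja {'type': 'run'}
```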
@@ -446,6 +463,8 @@ def _depends_on(pkg, spec, when=None, type=dt.DEFAULT_TYPES, patches=None):
dep_spec = spack.spec.Spec(spec)
if not dep_spec.name:
raise DependencyError("Invalid dependency specification in package '%s':" % pkg.name, spec)
elif dep_spec.name in ("c", "cxx", "fortran"): # forward compat for language deps
return
if pkg.name == dep_spec.name:
raise CircularReferenceError("Package '%s' cannot depend on itself." % pkg.name)


@@ -339,6 +339,7 @@
from .environment import (
TOP_LEVEL_KEY,
Environment,
SpackEnvironmentConfigError,
SpackEnvironmentError,
SpackEnvironmentViewError,
activate,
@@ -372,6 +373,7 @@
__all__ = [
"TOP_LEVEL_KEY",
"Environment",
"SpackEnvironmentConfigError",
"SpackEnvironmentError",
"SpackEnvironmentViewError",
"activate",


@@ -342,7 +342,7 @@ def create_in_dir(
manifest.flush()
except spack.config.ConfigFormatError as e:
except (spack.config.ConfigFormatError, SpackEnvironmentConfigError) as e:
shutil.rmtree(manifest_dir)
raise e
@@ -396,7 +396,13 @@ def all_environments():
def _read_yaml(str_or_file):
"""Read YAML from a file for round-trip parsing."""
data = syaml.load_config(str_or_file)
try:
data = syaml.load_config(str_or_file)
except syaml.SpackYAMLError as e:
raise SpackEnvironmentConfigError(
f"Invalid environment configuration detected: {e.message}"
)
filename = getattr(str_or_file, "name", None)
default_data = spack.config.validate(data, spack.schema.env.schema, filename)
return data, default_data
@@ -781,10 +787,18 @@ def _re_read(self):
"""Reinitialize the environment object."""
self.clear(re_read=True)
self.manifest = EnvironmentManifestFile(self.path)
self._read()
self._read(re_read=True)
def _read(self):
self._construct_state_from_manifest()
def _read(self, re_read=False):
# If the manifest has included files, then some of the information
# (e.g., definitions) MAY be in those files. So we need to ensure
# the config is populated with any associated spec lists in order
# to fully construct the manifest state.
includes = self.manifest[TOP_LEVEL_KEY].get("include", [])
if includes and not re_read:
prepare_config_scope(self)
self._construct_state_from_manifest(re_read)
if os.path.exists(self.lock_path):
with open(self.lock_path) as f:
@@ -798,21 +812,30 @@ def write_transaction(self):
"""Get a write lock context manager for use in a `with` block."""
return lk.WriteTransaction(self.txlock, acquire=self._re_read)
def _construct_state_from_manifest(self):
def _process_definition(self, item):
"""Process a single spec definition item."""
entry = copy.deepcopy(item)
when = _eval_conditional(entry.pop("when", "True"))
assert len(entry) == 1
if when:
name, spec_list = next(iter(entry.items()))
user_specs = SpecList(name, spec_list, self.spec_lists.copy())
if name in self.spec_lists:
self.spec_lists[name].extend(user_specs)
else:
self.spec_lists[name] = user_specs
def _construct_state_from_manifest(self, re_read=False):
"""Read manifest file and set up user specs."""
self.spec_lists = collections.OrderedDict()
if not re_read:
for item in spack.config.get("definitions", []):
self._process_definition(item)
env_configuration = self.manifest[TOP_LEVEL_KEY]
for item in env_configuration.get("definitions", []):
entry = copy.deepcopy(item)
when = _eval_conditional(entry.pop("when", "True"))
assert len(entry) == 1
if when:
name, spec_list = next(iter(entry.items()))
user_specs = SpecList(name, spec_list, self.spec_lists.copy())
if name in self.spec_lists:
self.spec_lists[name].extend(user_specs)
else:
self.spec_lists[name] = user_specs
self._process_definition(item)
spec_list = env_configuration.get(user_speclist_name, [])
user_specs = SpecList(
@@ -857,7 +880,9 @@ def clear(self, re_read=False):
yaml, and need to be maintained when re-reading an existing
environment.
"""
self.spec_lists = {user_speclist_name: SpecList()} # specs from yaml
self.spec_lists = collections.OrderedDict()
self.spec_lists[user_speclist_name] = SpecList()
self.dev_specs = {} # dev-build specs from yaml
self.concretized_user_specs = [] # user specs from last concretize
self.concretized_order = [] # roots of last concretize, in order
@@ -1006,7 +1031,8 @@ def included_config_scopes(self):
elif include_url.scheme:
raise ValueError(
"Unsupported URL scheme for environment include: {}".format(config_path)
f"Unsupported URL scheme ({include_url.scheme}) for "
f"environment include: {config_path}"
)
# treat relative paths as relative to the environment
@@ -1068,8 +1094,10 @@ def update_stale_references(self, from_list=None):
from_list = next(iter(self.spec_lists.keys()))
index = list(self.spec_lists.keys()).index(from_list)
# spec_lists is an OrderedDict, all list entries after the modified
# list may refer to the modified list. Update stale references
# spec_lists is an OrderedDict to ensure lists read from the manifest
# are maintainted in order, hence, all list entries after the modified
# list may refer to the modified list requiring stale references to be
# updated.
for i, (name, speclist) in enumerate(
list(self.spec_lists.items())[index + 1 :], index + 1
):
@@ -1167,7 +1195,7 @@ def change_existing_spec(
def remove(self, query_spec, list_name=user_speclist_name, force=False):
"""Remove specs from an environment that match a query_spec"""
err_msg_header = (
f"cannot remove {query_spec} from '{list_name}' definition "
f"Cannot remove '{query_spec}' from '{list_name}' definition "
f"in {self.manifest.manifest_file}"
)
query_spec = Spec(query_spec)
@@ -1198,11 +1226,10 @@ def remove(self, query_spec, list_name=user_speclist_name, force=False):
list_to_change.remove(spec)
self.update_stale_references(list_name)
new_specs = set(self.user_specs)
except spack.spec_list.SpecListError:
except spack.spec_list.SpecListError as e:
# define new specs list
new_specs = set(self.user_specs)
msg = f"Spec '{spec}' is part of a spec matrix and "
msg += f"cannot be removed from list '{list_to_change}'."
msg = str(e)
if force:
msg += " It will be removed from the concrete specs."
# Mock new specs, so we can remove this spec from concrete spec lists
@@ -1331,7 +1358,7 @@ def concretize(self, force=False, tests=False):
# Remove concrete specs that no longer correlate to a user spec
for spec in set(self.concretized_user_specs) - set(self.user_specs):
self.deconcretize(spec)
self.deconcretize(spec, concrete=False)
# Pick the right concretization strategy
if self.unify == "when_possible":
@@ -1346,15 +1373,36 @@ def concretize(self, force=False, tests=False):
msg = "concretization strategy not implemented [{0}]"
raise SpackEnvironmentError(msg.format(self.unify))
def deconcretize(self, spec):
def deconcretize(self, spec: spack.spec.Spec, concrete: bool = True):
"""
Remove specified spec from environment concretization
Arguments:
spec: Spec to deconcretize. This must be a root of the environment
concrete: If True, find all instances of spec as concrete in the environment.
If False, find a single instance of the abstract spec as root of the environment.
"""
# spec has to be a root of the environment
index = self.concretized_user_specs.index(spec)
dag_hash = self.concretized_order.pop(index)
del self.concretized_user_specs[index]
if concrete:
dag_hash = spec.dag_hash()
pairs = zip(self.concretized_user_specs, self.concretized_order)
filtered = [(spec, h) for spec, h in pairs if h != dag_hash]
# Cannot use zip and unpack two values; it fails if filtered is empty
self.concretized_user_specs = [s for s, _ in filtered]
self.concretized_order = [h for _, h in filtered]
else:
index = self.concretized_user_specs.index(spec)
dag_hash = self.concretized_order.pop(index)
del self.concretized_user_specs[index]
# If this was the only user spec that concretized to this concrete spec, remove it
if dag_hash not in self.concretized_order:
del self.specs_by_hash[dag_hash]
# if we deconcretized a dependency that doesn't correspond to a root, it
# won't be here.
if dag_hash in self.specs_by_hash:
del self.specs_by_hash[dag_hash]
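In isolation, the concrete branch filters out every (user spec, hash) pair carrying the matching hash, while the abstract branch pops a single root by index; a toy sketch with fabricated hashes:

```python
user_specs = ["zlib", "zlib cflags=-O2", "hdf5"]
order = ["aaa", "aaa", "bbb"]  # two roots concretized to the same hash

def deconcretize_concrete(dag_hash):
    pairs = [(s, h) for s, h in zip(user_specs, order) if h != dag_hash]
    # zip/unpack would fail on an empty list, hence two comprehensions
    return [s for s, _ in pairs], [h for _, h in pairs]

print(deconcretize_concrete("aaa"))  # (['hdf5'], ['bbb'])
```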
def _get_specs_to_concretize(
self,
@@ -1525,7 +1573,11 @@ def _concretize_separately(self, tests=False):
batch = []
for j, (i, concrete, duration) in enumerate(
spack.util.parallel.imap_unordered(
_concretize_task, args, processes=num_procs, debug=tty.is_debug()
_concretize_task,
args,
processes=num_procs,
debug=tty.is_debug(),
maxtaskperchild=1,
)
):
batch.append((i, concrete))
@@ -1708,11 +1760,14 @@ def _env_modifications_for_view(
self, view: ViewDescriptor, reverse: bool = False
) -> spack.util.environment.EnvironmentModifications:
try:
mods = uenv.environment_modifications_for_specs(*self.concrete_roots(), view=view)
with spack.store.STORE.db.read_transaction():
installed_roots = [s for s in self.concrete_roots() if s.installed]
mods = uenv.environment_modifications_for_specs(*installed_roots, view=view)
except Exception as e:
# Failing to setup spec-specific changes shouldn't be a hard error.
tty.warn(
"couldn't load runtime environment due to {}: {}".format(e.__class__.__name__, e)
f"could not {'unload' if reverse else 'load'} runtime environment due "
f"to {e.__class__.__name__}: {e}"
)
return spack.util.environment.EnvironmentModifications()
return mods.reversed() if reverse else mods
@@ -2063,7 +2118,7 @@ def matching_spec(self, spec):
def removed_specs(self):
"""Tuples of (user spec, concrete spec) for all specs that will be
removed on nexg concretize."""
removed on next concretize."""
needed = set()
for s, c in self.concretized_specs():
if s in self.user_specs:
@@ -2722,7 +2777,7 @@ def override_user_spec(self, user_spec: str, idx: int) -> None:
self.changed = True
def add_definition(self, user_spec: str, list_name: str) -> None:
"""Appends a user spec to the first active definition mathing the name passed as argument.
"""Appends a user spec to the first active definition matching the name passed as argument.
Args:
user_spec: user spec to be appended
@@ -2935,3 +2990,7 @@ class SpackEnvironmentError(spack.error.SpackError):
class SpackEnvironmentViewError(SpackEnvironmentError):
"""Class for errors regarding view generation."""
class SpackEnvironmentConfigError(SpackEnvironmentError):
"""Class for Spack environment-specific configuration errors."""

View File

@@ -5,6 +5,7 @@
"""Service functions and classes to implement the hooks
for Spack's command extensions.
"""
import difflib
import importlib
import os
import re
@@ -176,10 +177,19 @@ class CommandNotFoundError(spack.error.SpackError):
"""
def __init__(self, cmd_name):
super().__init__(
msg = (
"{0} is not a recognized Spack command or extension command;"
" check with `spack commands`.".format(cmd_name)
)
long_msg = None
similar = difflib.get_close_matches(cmd_name, spack.cmd.all_commands())
if 1 <= len(similar) <= 5:
long_msg = "\nDid you mean one of the following commands?\n "
long_msg += "\n ".join(similar)
super().__init__(msg, long_msg)
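difflib.get_close_matches is a stdlib helper that ranks candidates by SequenceMatcher similarity (default cutoff 0.6, at most n=3 results), which is what drives the new suggestion. For example:

import difflib

print(difflib.get_close_matches("instal", ["install", "uninstall", "info", "spec"]))
# ['install', 'uninstall']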
class ExtensionNamingError(spack.error.SpackError):

View File

@@ -3,17 +3,22 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from typing import Optional, Set
from llnl.util import tty
import spack.config
import spack.modules
import spack.spec
def _for_each_enabled(spec, method_name, explicit=None):
def _for_each_enabled(
spec: spack.spec.Spec, method_name: str, explicit: Optional[bool] = None
) -> None:
"""Calls a method for each enabled module"""
set_names = set(spack.config.get("modules", {}).keys())
set_names: Set[str] = set(spack.config.get("modules", {}).keys())
for name in set_names:
enabled = spack.config.get("modules:%s:enable" % name)
enabled = spack.config.get(f"modules:{name}:enable")
if not enabled:
tty.debug("NO MODULE WRITTEN: list of enabled module files is empty")
continue
@@ -28,7 +33,7 @@ def _for_each_enabled(spec, method_name, explicit=None):
tty.warn(msg.format(method_name, str(e)))
def post_install(spec, explicit):
def post_install(spec, explicit: bool):
import spack.environment as ev # break import cycle
if ev.active_environment():

View File

@@ -380,14 +380,13 @@ def _print_timer(pre: str, pkg_id: str, timer: timer.BaseTimer) -> None:
def _install_from_cache(
pkg: "spack.package_base.PackageBase", cache_only: bool, explicit: bool, unsigned: bool = False
pkg: "spack.package_base.PackageBase", explicit: bool, unsigned: bool = False
) -> bool:
"""
Extract the package from binary cache
Install the package from binary cache
Args:
pkg: package to install from the binary cache
cache_only: only extract from binary cache
explicit: ``True`` if installing the package was explicitly
requested by the user, otherwise, ``False``
unsigned: ``True`` if binary package signatures should not be checked,
@@ -399,15 +398,11 @@ def _install_from_cache(
installed_from_cache = _try_install_from_binary_cache(
pkg, explicit, unsigned=unsigned, timer=t
)
pkg_id = package_id(pkg)
if not installed_from_cache:
pre = f"No binary for {pkg_id} found"
if cache_only:
tty.die(f"{pre} when cache-only specified")
tty.msg(f"{pre}: installing from source")
return False
t.stop()
pkg_id = package_id(pkg)
tty.debug(f"Successfully extracted {pkg_id} from binary cache")
_write_timer_json(pkg, t, True)
@@ -1335,7 +1330,6 @@ def _prepare_for_install(self, task: BuildTask) -> None:
"""
install_args = task.request.install_args
keep_prefix = install_args.get("keep_prefix")
restage = install_args.get("restage")
# Make sure the package is ready to be locally installed.
self._ensure_install_ready(task.pkg)
@@ -1367,10 +1361,6 @@ def _prepare_for_install(self, task: BuildTask) -> None:
else:
tty.debug(f"{task.pkg_id} is partially installed")
# Destroy the stage for a locally installed, non-DIYStage, package
if restage and task.pkg.stage.managed_by_spack:
task.pkg.stage.destroy()
if (
rec
and installed_in_db
@@ -1671,11 +1661,16 @@ def _install_task(self, task: BuildTask, install_status: InstallStatus) -> None:
task.status = STATUS_INSTALLING
# Use the binary cache if requested
if use_cache and _install_from_cache(pkg, cache_only, explicit, unsigned):
self._update_installed(task)
if task.compiler:
self._add_compiler_package_to_config(pkg)
return
if use_cache:
if _install_from_cache(pkg, explicit, unsigned):
self._update_installed(task)
if task.compiler:
self._add_compiler_package_to_config(pkg)
return
elif cache_only:
raise InstallError("No binary found when cache-only was specified", pkg=pkg)
else:
tty.msg(f"No binary for {pkg_id} found: installing from source")
pkg.run_tests = tests if isinstance(tests, bool) else pkg.name in tests
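Taken together with the earlier _install_from_cache hunk, the cache-only check now lives in _install_task rather than deep inside the cache helper. A standalone sketch of the resulting control flow (names here are stand-ins, not Spack API):

def install(use_cache, cache_only, try_cache, build_from_source):
    if use_cache:
        if try_cache():
            return "installed from cache"
        if cache_only:
            raise RuntimeError("No binary found when cache-only was specified")
        print("No binary found: installing from source")
    return build_from_source()

print(install(use_cache=True, cache_only=False,
              try_cache=lambda: False,
              build_from_source=lambda: "built from source"))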
@@ -1691,6 +1686,10 @@ def _install_task(self, task: BuildTask, install_status: InstallStatus) -> None:
try:
self._setup_install_dir(pkg)
# Create the stage object now and let it be serialized for the child process. That
# way, monkeypatching in tests works correctly.
pkg.stage
# Create a child process to do the actual installation.
# Preserve verbosity settings across installs.
spack.package_base.PackageBase._verbose = spack.build_environment.start_build_process(
@@ -2223,11 +2222,6 @@ def install(self) -> None:
if not keep_prefix and not action == InstallAction.OVERWRITE:
pkg.remove_prefix()
# The subprocess *may* have removed the build stage. Mark it
# not created so that the next time pkg.stage is invoked, we
# check the filesystem for it.
pkg.stage.created = False
# Perform basic task cleanup for the installed spec to
# include downgrading the write to a read lock
self._cleanup_task(pkg)
@@ -2297,6 +2291,9 @@ def __init__(self, pkg: "spack.package_base.PackageBase", install_args: dict):
# whether to keep the build stage after installation
self.keep_stage = install_args.get("keep_stage", False)
# whether to restage
self.restage = install_args.get("restage", False)
# whether to skip the patch phase
self.skip_patch = install_args.get("skip_patch", False)
@@ -2327,9 +2324,13 @@ def __init__(self, pkg: "spack.package_base.PackageBase", install_args: dict):
def run(self) -> bool:
"""Main entry point from ``build_process`` to kick off install in child."""
self.pkg.stage.keep = self.keep_stage
stage = self.pkg.stage
stage.keep = self.keep_stage
with self.pkg.stage:
if self.restage:
stage.destroy()
with stage:
self.timer.start("stage")
if not self.fake:

View File

@@ -16,11 +16,13 @@
import os.path
import pstats
import re
import shlex
import signal
import subprocess as sp
import sys
import traceback
import warnings
from typing import List, Tuple
import archspec.cpu
@@ -49,9 +51,6 @@
#: names of profile statistics
stat_names = pstats.Stats.sort_arg_dict_default
#: top-level aliases for Spack commands
aliases = {"concretise": "concretize", "containerise": "containerize", "rm": "remove"}
#: help levels in order of detail (i.e., number of commands shown)
levels = ["short", "long"]
@@ -359,7 +358,10 @@ def add_command(self, cmd_name):
module = spack.cmd.get_module(cmd_name)
# build a list of aliases
alias_list = [k for k, v in aliases.items() if v == cmd_name]
alias_list = []
aliases = spack.config.get("config:aliases")
if aliases:
alias_list = [k for k, v in aliases.items() if shlex.split(v)[0] == cmd_name]
subparser = self.subparsers.add_parser(
cmd_name,
@@ -670,7 +672,6 @@ def __init__(self, command_name, subprocess=False):
Windows, where it is always False.
"""
self.parser = make_argument_parser()
self.command = self.parser.add_command(command_name)
self.command_name = command_name
# TODO: figure out how to support this on windows
self.subprocess = subprocess if sys.platform != "win32" else False
@@ -702,13 +703,14 @@ def __call__(self, *argv, **kwargs):
if self.subprocess:
p = sp.Popen(
[spack.paths.spack_script, self.command_name] + prepend + list(argv),
[spack.paths.spack_script] + prepend + [self.command_name] + list(argv),
stdout=sp.PIPE,
stderr=sp.STDOUT,
)
out, self.returncode = p.communicate()
out = out.decode()
else:
command = self.parser.add_command(self.command_name)
args, unknown = self.parser.parse_known_args(
prepend + [self.command_name] + list(argv)
)
@@ -716,7 +718,7 @@ def __call__(self, *argv, **kwargs):
out = io.StringIO()
try:
with log_output(out, echo=True):
self.returncode = _invoke_command(self.command, self.parser, args, unknown)
self.returncode = _invoke_command(command, self.parser, args, unknown)
except SystemExit as e:
self.returncode = e.code
@@ -870,6 +872,46 @@ def restore_macos_dyld_vars():
os.environ[dyld_var] = os.environ[stored_var_name]
def resolve_alias(cmd_name: str, cmd: List[str]) -> Tuple[str, List[str]]:
"""Resolves aliases in the given command.
Args:
cmd_name: command name.
cmd: command line arguments.
Returns:
new command name and arguments.
"""
all_commands = spack.cmd.all_commands()
aliases = spack.config.get("config:aliases")
if aliases:
for key, value in aliases.items():
if " " in key:
tty.warn(
f"Alias '{key}' (mapping to '{value}') contains a space"
", which is not supported."
)
if key in all_commands:
tty.warn(
f"Alias '{key}' (mapping to '{value}') attempts to override"
" a built-in command."
)
if cmd_name not in all_commands:
alias = None
if aliases:
alias = aliases.get(cmd_name)
if alias is not None:
alias_parts = shlex.split(alias)
cmd_name = alias_parts[0]
cmd = alias_parts + cmd[1:]
return cmd_name, cmd
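Because alias values are split with shlex, an alias can carry its own arguments; only the first token is matched against command names. A sketch of the rewrite with illustrative config:aliases values:

import shlex

aliases = {"rm": "remove", "sps": "spec --format {hash:7}"}  # example mapping
cmd = ["sps", "zlib"]
alias_parts = shlex.split(aliases[cmd[0]])
cmd_name, cmd = alias_parts[0], alias_parts + cmd[1:]
print(cmd_name, cmd)  # spec ['spec', '--format', '{hash:7}', 'zlib']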
def _main(argv=None):
"""Logic for the main entry point for the Spack command.
@@ -962,7 +1004,7 @@ def _main(argv=None):
# Try to load the particular command the caller asked for.
cmd_name = args.command[0]
cmd_name = aliases.get(cmd_name, cmd_name)
cmd_name, args.command = resolve_alias(cmd_name, args.command)
# set up a bootstrap context, if asked.
# bootstrap context needs to include parsing the command, b/c things
@@ -974,14 +1016,16 @@ def _main(argv=None):
bootstrap_context = bootstrap.ensure_bootstrap_configuration()
with bootstrap_context:
return finish_parse_and_run(parser, cmd_name, env_format_error)
return finish_parse_and_run(parser, cmd_name, args, env_format_error)
def finish_parse_and_run(parser, cmd_name, env_format_error):
def finish_parse_and_run(parser, cmd_name, main_args, env_format_error):
"""Finish parsing after we know the command to run."""
# add the found command to the parser and re-run then re-parse
command = parser.add_command(cmd_name)
args, unknown = parser.parse_known_args()
args, unknown = parser.parse_known_args(main_args.command)
# we need to inherit verbose since the install command checks for it
args.verbose = main_args.verbose
# Now that we know what command this is and what its args are, determine
# whether we can continue with a bad environment and raise if not.

View File

@@ -93,7 +93,7 @@ def _filter_compiler_wrappers_impl(pkg_or_builder):
replacements = []
for idx, (env_var, compiler_path) in enumerate(compiler_vars):
if env_var in os.environ:
if env_var in os.environ and compiler_path is not None:
# filter spack wrapper and links to spack wrapper in case
# build system runs realpath
wrapper = os.environ[env_var]

View File

@@ -7,10 +7,15 @@
include Tcl non-hierarchical modules, Lua hierarchical modules, and others.
"""
from .common import disable_modules
from typing import Dict, Type
from .common import BaseModuleFileWriter, disable_modules
from .lmod import LmodModulefileWriter
from .tcl import TclModulefileWriter
__all__ = ["TclModulefileWriter", "LmodModulefileWriter", "disable_modules"]
module_types = {"tcl": TclModulefileWriter, "lmod": LmodModulefileWriter}
module_types: Dict[str, Type[BaseModuleFileWriter]] = {
"tcl": TclModulefileWriter,
"lmod": LmodModulefileWriter,
}

View File

@@ -35,7 +35,7 @@
import os.path
import re
import string
from typing import Optional
from typing import List, Optional
import llnl.util.filesystem
import llnl.util.tty as tty
@@ -50,6 +50,7 @@
import spack.projections as proj
import spack.repo
import spack.schema.environment
import spack.spec
import spack.store
import spack.tengine as tengine
import spack.util.environment
@@ -395,16 +396,14 @@ class BaseConfiguration:
default_projections = {"all": "{name}/{version}-{compiler.name}-{compiler.version}"}
def __init__(self, spec, module_set_name, explicit=None):
def __init__(self, spec: spack.spec.Spec, module_set_name: str, explicit: bool) -> None:
# Module where type(self) is defined
self.module = inspect.getmodule(self)
m = inspect.getmodule(self)
assert m is not None # make mypy happy
self.module = m
# Spec for which we want to generate a module file
self.spec = spec
self.name = module_set_name
# Software installation has been explicitly asked (get this information from
# db when querying an existing module, like during a refresh or rm operations)
if explicit is None:
explicit = spec._installed_explicitly()
self.explicit = explicit
# Dictionary of configuration options that should be applied
# to the spec
@@ -458,7 +457,11 @@ def suffixes(self):
if constraint in self.spec:
suffixes.append(suffix)
suffixes = list(dedupe(suffixes))
if self.hash:
# For hidden modules we can always add a fixed-length hash as a suffix, since it guards
# against file name clashes, and the module is not exposed to the user anyway.
if self.hidden:
suffixes.append(self.spec.dag_hash(length=7))
elif self.hash:
suffixes.append(self.hash)
return suffixes
@@ -483,43 +486,35 @@ def excluded(self):
spec = self.spec
conf = self.module.configuration(self.name)
# Compute the list of include rules that match
include_rules = conf.get("include", [])
include_matches = [x for x in include_rules if spec.satisfies(x)]
# Compute the list of exclude rules that match
exclude_rules = conf.get("exclude", [])
exclude_matches = [x for x in exclude_rules if spec.satisfies(x)]
# Compute the list of matching include / exclude rules, and whether excluded as implicit
include_matches = [x for x in conf.get("include", []) if spec.satisfies(x)]
exclude_matches = [x for x in conf.get("exclude", []) if spec.satisfies(x)]
excluded_as_implicit = not self.explicit and conf.get("exclude_implicits", False)
def debug_info(line_header, match_list):
if match_list:
msg = "\t{0} : {1}".format(line_header, spec.cshort_spec)
tty.debug(msg)
tty.debug(f"\t{line_header} : {spec.cshort_spec}")
for rule in match_list:
tty.debug("\t\tmatches rule: {0}".format(rule))
tty.debug(f"\t\tmatches rule: {rule}")
debug_info("INCLUDE", include_matches)
debug_info("EXCLUDE", exclude_matches)
if not include_matches and exclude_matches:
return True
if excluded_as_implicit:
tty.debug(f"\tEXCLUDED_AS_IMPLICIT : {spec.cshort_spec}")
return False
return not include_matches and (exclude_matches or excluded_as_implicit)
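The collapsed return statement encodes the precedence between the rules: a matching include always wins, and implicit exclusion only applies when nothing includes the spec. The same logic in isolation:

def excluded(include_matches, exclude_matches, excluded_as_implicit):
    return not include_matches and bool(exclude_matches or excluded_as_implicit)

assert excluded([], ["%gcc"], False)           # excluded by an explicit rule
assert not excluded(["zlib"], ["%gcc"], True)  # an include match overrides everything
assert excluded([], [], True)                  # implicit spec with no include rule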
@property
def hidden(self):
"""Returns True if the module has been hidden, False otherwise."""
# A few variables for convenience of writing the method
spec = self.spec
conf = self.module.configuration(self.name)
hidden_as_implicit = not self.explicit and conf.get(
"hide_implicits", conf.get("exclude_implicits", False)
)
hidden_as_implicit = not self.explicit and conf.get("hide_implicits", False)
if hidden_as_implicit:
tty.debug(f"\tHIDDEN_AS_IMPLICIT : {spec.cshort_spec}")
tty.debug(f"\tHIDDEN_AS_IMPLICIT : {self.spec.cshort_spec}")
return hidden_as_implicit
@@ -551,8 +546,7 @@ def exclude_env_vars(self):
def _create_list_for(self, what):
include = []
for item in self.conf[what]:
conf = type(self)(item, self.name)
if not conf.excluded:
if not self.module.make_configuration(item, self.name).excluded:
include.append(item)
return include
@@ -731,7 +725,9 @@ def environment_modifications(self):
# for that to work, globals have to be set on the package modules, and the
# whole chain of setup_dependent_package has to be followed from leaf to spec.
# So: just run it here, but don't collect env mods.
spack.build_environment.SetupContext(context=Context.RUN).set_all_package_py_globals()
spack.build_environment.SetupContext(
spec, context=Context.RUN
).set_all_package_py_globals()
# Then run setup_dependent_run_environment before setup_run_environment.
for dep in spec.dependencies(deptype=("link", "run")):
@@ -824,8 +820,7 @@ def autoload(self):
def _create_module_list_of(self, what):
m = self.conf.module
name = self.conf.name
explicit = self.conf.explicit
return [m.make_layout(x, name, explicit).use_name for x in getattr(self.conf, what)]
return [m.make_layout(x, name).use_name for x in getattr(self.conf, what)]
@tengine.context_property
def verbose(self):
@@ -834,12 +829,19 @@ def verbose(self):
class BaseModuleFileWriter:
def __init__(self, spec, module_set_name, explicit=None):
default_template: str
hide_cmd_format: str
modulerc_header: List[str]
def __init__(
self, spec: spack.spec.Spec, module_set_name: str, explicit: Optional[bool] = None
) -> None:
self.spec = spec
# This class is meant to be derived. Get the module of the
# actual writer.
self.module = inspect.getmodule(self)
assert self.module is not None # make mypy happy
m = self.module
# Create the triplet of configuration/layout/context

View File

@@ -6,8 +6,7 @@
import collections
import itertools
import os.path
import posixpath
from typing import Any, Dict, List
from typing import Dict, List, Optional, Tuple
import llnl.util.filesystem as fs
import llnl.util.lang as lang
@@ -24,18 +23,19 @@
#: lmod specific part of the configuration
def configuration(module_set_name):
config_path = "modules:%s:lmod" % module_set_name
config = spack.config.get(config_path, {})
return config
def configuration(module_set_name: str) -> dict:
return spack.config.get(f"modules:{module_set_name}:lmod", {})
# Caches the configuration, keyed by (spec hash, module set name, explicit)
configuration_registry: Dict[str, Any] = {}
configuration_registry: Dict[Tuple[str, str, bool], BaseConfiguration] = {}
def make_configuration(spec, module_set_name, explicit):
def make_configuration(
spec: spack.spec.Spec, module_set_name: str, explicit: Optional[bool] = None
) -> BaseConfiguration:
"""Returns the lmod configuration for spec"""
explicit = bool(spec._installed_explicitly()) if explicit is None else explicit
key = (spec.dag_hash(), module_set_name, explicit)
try:
return configuration_registry[key]
@@ -45,16 +45,18 @@ def make_configuration(spec, module_set_name, explicit):
)
def make_layout(spec, module_set_name, explicit):
def make_layout(
spec: spack.spec.Spec, module_set_name: str, explicit: Optional[bool] = None
) -> BaseFileLayout:
"""Returns the layout information for spec"""
conf = make_configuration(spec, module_set_name, explicit)
return LmodFileLayout(conf)
return LmodFileLayout(make_configuration(spec, module_set_name, explicit))
def make_context(spec, module_set_name, explicit):
def make_context(
spec: spack.spec.Spec, module_set_name: str, explicit: Optional[bool] = None
) -> BaseContext:
"""Returns the context information for spec"""
conf = make_configuration(spec, module_set_name, explicit)
return LmodContext(conf)
return LmodContext(make_configuration(spec, module_set_name, explicit))
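Both module systems now memoize configurations keyed by (dag hash, module set name, explicit), so repeated writer construction reuses one object per distinct triple. The registry pattern, reduced to its essentials:

from typing import Dict, Tuple

_registry: Dict[Tuple[str, str, bool], dict] = {}

def cached_configuration(dag_hash: str, set_name: str, explicit: bool) -> dict:
    key = (dag_hash, set_name, explicit)
    try:
        return _registry[key]
    except KeyError:
        # stand-in for constructing the real configuration object
        return _registry.setdefault(key, {"key": key})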
def guess_core_compilers(name, store=False) -> List[spack.spec.CompilerSpec]:
@@ -97,10 +99,7 @@ def guess_core_compilers(name, store=False) -> List[spack.spec.CompilerSpec]:
class LmodConfiguration(BaseConfiguration):
"""Configuration class for lmod module files."""
# Note: Posixpath is used here as well as below as opposed to
# os.path.join due to spack.spec.Spec.format
# requiring forward slash path seperators at this stage
default_projections = {"all": posixpath.join("{name}", "{version}")}
default_projections = {"all": "{name}/{version}"}
@property
def core_compilers(self) -> List[spack.spec.CompilerSpec]:
@@ -274,19 +273,16 @@ def filename(self):
hierarchy_name = os.path.join(*parts)
# Compute the absolute path
fullname = os.path.join(
return os.path.join(
self.arch_dirname, # root for lmod files on this architecture
hierarchy_name, # relative path
".".join([self.use_name, self.extension]), # file name
f"{self.use_name}.{self.extension}", # file name
)
return fullname
@property
def modulerc(self):
"""Returns the modulerc file associated with current module file"""
return os.path.join(
os.path.dirname(self.filename), ".".join([".modulerc", self.extension])
)
return os.path.join(os.path.dirname(self.filename), f".modulerc.{self.extension}")
def token_to_path(self, name, value):
"""Transforms a hierarchy token into the corresponding path part.
@@ -319,9 +315,7 @@ def path_part_fmt(token):
# we need to append a hash to the version to distinguish
# among flavors of the same library (e.g. openblas~openmp vs.
# openblas+openmp)
path = path_part_fmt(token=value)
path = "-".join([path, value.dag_hash(length=7)])
return path
return f"{path_part_fmt(token=value)}-{value.dag_hash(length=7)}"
@property
def available_path_parts(self):
@@ -333,8 +327,7 @@ def available_path_parts(self):
# List of services that are part of the hierarchy
hierarchy = self.conf.hierarchy_tokens
# Tokenize each part that is both in the hierarchy and available
parts = [self.token_to_path(x, available[x]) for x in hierarchy if x in available]
return parts
return [self.token_to_path(x, available[x]) for x in hierarchy if x in available]
@property
@lang.memoized
@@ -452,7 +445,7 @@ def missing(self):
@lang.memoized
def unlocked_paths(self):
"""Returns the list of paths that are unlocked unconditionally."""
layout = make_layout(self.spec, self.conf.name, self.conf.explicit)
layout = make_layout(self.spec, self.conf.name)
return [os.path.join(*parts) for parts in layout.unlocked_paths[None]]
@tengine.context_property
@@ -460,7 +453,7 @@ def conditionally_unlocked_paths(self):
"""Returns the list of paths that are unlocked conditionally.
Each item in the list is a tuple with the structure (condition, path).
"""
layout = make_layout(self.spec, self.conf.name, self.conf.explicit)
layout = make_layout(self.spec, self.conf.name)
value = []
conditional_paths = layout.unlocked_paths
conditional_paths.pop(None)
@@ -482,9 +475,9 @@ def manipulate_path(token):
class LmodModulefileWriter(BaseModuleFileWriter):
"""Writer class for lmod module files."""
default_template = posixpath.join("modules", "modulefile.lua")
default_template = "modules/modulefile.lua"
modulerc_header: list = []
modulerc_header = []
hide_cmd_format = 'hide_version("%s")'

View File

@@ -7,28 +7,29 @@
non-hierarchical modules.
"""
import os.path
import posixpath
from typing import Any, Dict
from typing import Dict, Optional, Tuple
import spack.config
import spack.spec
import spack.tengine as tengine
from .common import BaseConfiguration, BaseContext, BaseFileLayout, BaseModuleFileWriter
#: Tcl specific part of the configuration
def configuration(module_set_name):
config_path = "modules:%s:tcl" % module_set_name
config = spack.config.get(config_path, {})
return config
def configuration(module_set_name: str) -> dict:
return spack.config.get(f"modules:{module_set_name}:tcl", {})
# Caches the configuration {spec_hash: configuration}
configuration_registry: Dict[str, Any] = {}
configuration_registry: Dict[Tuple[str, str, bool], BaseConfiguration] = {}
def make_configuration(spec, module_set_name, explicit):
def make_configuration(
spec: spack.spec.Spec, module_set_name: str, explicit: Optional[bool] = None
) -> BaseConfiguration:
"""Returns the tcl configuration for spec"""
explicit = bool(spec._installed_explicitly()) if explicit is None else explicit
key = (spec.dag_hash(), module_set_name, explicit)
try:
return configuration_registry[key]
@@ -38,16 +39,18 @@ def make_configuration(spec, module_set_name, explicit):
)
def make_layout(spec, module_set_name, explicit):
def make_layout(
spec: spack.spec.Spec, module_set_name: str, explicit: Optional[bool] = None
) -> BaseFileLayout:
"""Returns the layout information for spec"""
conf = make_configuration(spec, module_set_name, explicit)
return TclFileLayout(conf)
return TclFileLayout(make_configuration(spec, module_set_name, explicit))
def make_context(spec, module_set_name, explicit):
def make_context(
spec: spack.spec.Spec, module_set_name: str, explicit: Optional[bool] = None
) -> BaseContext:
"""Returns the context information for spec"""
conf = make_configuration(spec, module_set_name, explicit)
return TclContext(conf)
return TclContext(make_configuration(spec, module_set_name, explicit))
class TclConfiguration(BaseConfiguration):
@@ -75,10 +78,7 @@ def prerequisites(self):
class TclModulefileWriter(BaseModuleFileWriter):
"""Writer class for tcl module files."""
# Note: Posixpath is used here as opposed to
# os.path.join due to spack.spec.Spec.format
# requiring forward slash path seperators at this stage
default_template = posixpath.join("modules", "modulefile.tcl")
default_template = "modules/modulefile.tcl"
modulerc_header = ["#%Module4.7"]

View File

@@ -26,6 +26,7 @@
"""
import functools
import inspect
from contextlib import contextmanager
from llnl.util.lang import caller_locals
@@ -271,6 +272,13 @@ def __exit__(self, exc_type, exc_val, exc_tb):
spack.directives.DirectiveMeta.pop_from_context()
@contextmanager
def default_args(**kwargs):
spack.directives.DirectiveMeta.push_default_args(kwargs)
yield
spack.directives.DirectiveMeta.pop_default_args()
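default_args pushes its keyword arguments onto the directive context, so every directive inside the with block inherits them unless it overrides them explicitly. A typical use in a package recipe might look like this (a sketch, not a complete package):

from spack.package import *

class Example(Package):
    with default_args(type="build"):
        depends_on("cmake")   # same as depends_on("cmake", type="build")
        depends_on("ninja")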
class MultiMethodError(spack.error.SpackError):
"""Superclass for multimethod dispatch errors"""

View File

@@ -134,7 +134,7 @@ def upload_blob(
return True
# Otherwise, do another PUT request.
spack.oci.opener.ensure_status(response, 202)
spack.oci.opener.ensure_status(request, response, 202)
assert "Location" in response.headers
# Can be absolute or relative, joining handles both
@@ -143,19 +143,16 @@ def upload_blob(
)
f.seek(0)
response = _urlopen(
Request(
url=upload_url,
method="PUT",
data=f,
headers={
"Content-Type": "application/octet-stream",
"Content-Length": str(file_size),
},
)
request = Request(
url=upload_url,
method="PUT",
data=f,
headers={"Content-Type": "application/octet-stream", "Content-Length": str(file_size)},
)
spack.oci.opener.ensure_status(response, 201)
response = _urlopen(request)
spack.oci.opener.ensure_status(request, response, 201)
# print elapsed time and # MB/s
_log_upload_progress(digest, file_size, time.time() - start)
@@ -189,16 +186,16 @@ def upload_manifest(
if not tag:
ref = ref.with_digest(digest)
response = _urlopen(
Request(
url=ref.manifest_url(),
method="PUT",
data=data,
headers={"Content-Type": oci_manifest["mediaType"]},
)
request = Request(
url=ref.manifest_url(),
method="PUT",
data=data,
headers={"Content-Type": oci_manifest["mediaType"]},
)
spack.oci.opener.ensure_status(response, 201)
response = _urlopen(request)
spack.oci.opener.ensure_status(request, response, 201)
return digest, size

View File

@@ -310,19 +310,15 @@ def http_error_401(self, req: Request, fp, code, msg, headers):
# Login failed, avoid infinite recursion where we go back and
# forth between auth server and registry
if hasattr(req, "login_attempted"):
raise urllib.error.HTTPError(
req.full_url, code, f"Failed to login to {req.full_url}: {msg}", headers, fp
raise spack.util.web.DetailedHTTPError(
req, code, f"Failed to login: {msg}", headers, fp
)
# On 401 Unauthorized, parse the WWW-Authenticate header
# to determine what authentication is required
if "WWW-Authenticate" not in headers:
raise urllib.error.HTTPError(
req.full_url,
code,
"Cannot login to registry, missing WWW-Authenticate header",
headers,
fp,
raise spack.util.web.DetailedHTTPError(
req, code, "Cannot login to registry, missing WWW-Authenticate header", headers, fp
)
header_value = headers["WWW-Authenticate"]
@@ -330,8 +326,8 @@ def http_error_401(self, req: Request, fp, code, msg, headers):
try:
challenge = get_bearer_challenge(parse_www_authenticate(header_value))
except ValueError as e:
raise urllib.error.HTTPError(
req.full_url,
raise spack.util.web.DetailedHTTPError(
req,
code,
f"Cannot login to registry, malformed WWW-Authenticate header: {header_value}",
headers,
@@ -340,8 +336,8 @@ def http_error_401(self, req: Request, fp, code, msg, headers):
# If there is no bearer challenge, we can't handle it
if not challenge:
raise urllib.error.HTTPError(
req.full_url,
raise spack.util.web.DetailedHTTPError(
req,
code,
f"Cannot login to registry, unsupported authentication scheme: {header_value}",
headers,
@@ -356,8 +352,8 @@ def http_error_401(self, req: Request, fp, code, msg, headers):
timeout=req.timeout,
)
except ValueError as e:
raise urllib.error.HTTPError(
req.full_url,
raise spack.util.web.DetailedHTTPError(
req,
code,
f"Cannot login to registry, failed to obtain bearer token: {e}",
headers,
@@ -412,13 +408,13 @@ def create_opener():
return opener
def ensure_status(response: HTTPResponse, status: int):
def ensure_status(request: urllib.request.Request, response: HTTPResponse, status: int):
"""Raise an error if the response status is not the expected one."""
if response.status == status:
return
raise urllib.error.HTTPError(
response.geturl(), response.status, response.reason, response.info(), None
raise spack.util.web.DetailedHTTPError(
request, response.status, response.reason, response.info(), None
)

View File

@@ -143,6 +143,7 @@ def __init__(self):
"12": "monterey",
"13": "ventura",
"14": "sonoma",
"15": "sequoia",
}
version = macos_version()

View File

@@ -49,6 +49,7 @@
from spack.build_systems.nmake import NMakePackage
from spack.build_systems.octave import OctavePackage
from spack.build_systems.oneapi import (
INTEL_MATH_LIBRARIES,
IntelOneApiLibraryPackage,
IntelOneApiPackage,
IntelOneApiStaticLibraryList,
@@ -85,7 +86,7 @@
UpstreamPackageError,
)
from spack.mixins import filter_compiler_wrappers
from spack.multimethod import when
from spack.multimethod import default_args, when
from spack.package_base import (
DependencyConflictError,
build_system_flags,

View File

@@ -24,8 +24,9 @@
import textwrap
import time
import traceback
import typing
import warnings
from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Type, TypeVar
from typing import Any, Callable, Dict, Iterable, List, Optional, Set, Tuple, Type, TypeVar, Union
import llnl.util.filesystem as fsys
import llnl.util.tty as tty
@@ -682,13 +683,13 @@ def __init__(self, spec):
@classmethod
def possible_dependencies(
cls,
transitive=True,
expand_virtuals=True,
transitive: bool = True,
expand_virtuals: bool = True,
depflag: dt.DepFlag = dt.ALL,
visited=None,
missing=None,
virtuals=None,
):
visited: Optional[dict] = None,
missing: Optional[dict] = None,
virtuals: Optional[set] = None,
) -> Dict[str, Set[str]]:
"""Return dict of possible dependencies of this package.
Args:
@@ -2449,14 +2450,21 @@ def flatten_dependencies(spec, flat_dir):
dep_files.merge(flat_dir + "/" + name)
def possible_dependencies(*pkg_or_spec, **kwargs):
def possible_dependencies(
*pkg_or_spec: Union[str, spack.spec.Spec, typing.Type[PackageBase]],
transitive: bool = True,
expand_virtuals: bool = True,
depflag: dt.DepFlag = dt.ALL,
missing: Optional[dict] = None,
virtuals: Optional[set] = None,
) -> Dict[str, Set[str]]:
"""Get the possible dependencies of a number of packages.
See ``PackageBase.possible_dependencies`` for details.
"""
packages = []
for pos in pkg_or_spec:
if isinstance(pos, PackageMeta):
if isinstance(pos, PackageMeta) and issubclass(pos, PackageBase):
packages.append(pos)
continue
@@ -2469,9 +2477,16 @@ def possible_dependencies(*pkg_or_spec, **kwargs):
else:
packages.append(pos.package_class)
visited = {}
visited: Dict[str, Set[str]] = {}
for pkg in packages:
pkg.possible_dependencies(visited=visited, **kwargs)
pkg.possible_dependencies(
visited=visited,
transitive=transitive,
expand_virtuals=expand_virtuals,
depflag=depflag,
missing=missing,
virtuals=virtuals,
)
return visited

View File

@@ -6,6 +6,7 @@
import abc
import collections.abc
import contextlib
import difflib
import errno
import functools
import importlib
@@ -489,7 +490,7 @@ def read(self, stream):
self.index = spack.tag.TagIndex.from_json(stream, self.repository)
def update(self, pkg_fullname):
self.index.update_package(pkg_fullname)
self.index.update_package(pkg_fullname.split(".")[-1])
def write(self, stream):
self.index.to_json(stream)
@@ -1516,7 +1517,18 @@ def __init__(self, name, repo=None):
long_msg = "Did you mean to specify a filename with './{0}'?"
long_msg = long_msg.format(name)
else:
long_msg = "You may need to run 'spack clean -m'."
long_msg = "Use 'spack create' to create a new package."
if not repo:
repo = spack.repo.PATH
# We need to compare the base package name
pkg_name = name.rsplit(".", 1)[-1]
similar = difflib.get_close_matches(pkg_name, repo.all_package_names())
if 1 <= len(similar) <= 5:
long_msg += "\n\nDid you mean one of the following packages?\n "
long_msg += "\n ".join(similar)
super().__init__(msg, long_msg)
self.name = name

View File

@@ -62,3 +62,25 @@ def _deprecated_properties(validator, deprecated, instance, schema):
Validator = llnl.util.lang.Singleton(_make_validator)
spec_list_schema = {
"type": "array",
"default": [],
"items": {
"anyOf": [
{
"type": "object",
"additionalProperties": False,
"properties": {
"matrix": {
"type": "array",
"items": {"type": "array", "items": {"type": "string"}},
},
"exclude": {"type": "array", "items": {"type": "string"}},
},
},
{"type": "string"},
{"type": "null"},
]
},
}

View File

@@ -92,6 +92,7 @@
"url_fetch_method": {"type": "string", "enum": ["urllib", "curl"]},
"additional_external_search_paths": {"type": "array", "items": {"type": "string"}},
"binary_index_ttl": {"type": "integer", "minimum": 0},
"aliases": {"type": "object", "patternProperties": {r"\w[\w-]*": {"type": "string"}}},
},
"deprecatedProperties": {
"properties": ["terminal_title"],

View File

@@ -0,0 +1,34 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Schema for definitions
.. literalinclude:: _spack_root/lib/spack/spack/schema/definitions.py
:lines: 13-
"""
import spack.schema
#: Properties for inclusion in other schemas
properties = {
"definitions": {
"type": "array",
"default": [],
"items": {
"type": "object",
"properties": {"when": {"type": "string"}},
"patternProperties": {r"^(?!when$)\w*": spack.schema.spec_list_schema},
},
}
}
#: Full schema with metadata
schema = {
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Spack definitions configuration file schema",
"type": "object",
"additionalProperties": False,
"properties": properties,
}
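The new top-level definitions section accepts a list of named spec lists, each optionally guarded by a when condition. A validation sketch (assuming Spack's vendored jsonschema and this module are importable; the values are illustrative):

import jsonschema  # vendored by Spack

import spack.schema.definitions

data = {
    "definitions": [
        {"compilers": ["%gcc", "%clang"]},
        {"when": 'platform == "linux"', "mpis": ["mpich", "openmpi"]},
    ]
}
jsonschema.validate(data, spack.schema.definitions.schema)  # raises on invalid input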

View File

@@ -12,34 +12,11 @@
import spack.schema.gitlab_ci # DEPRECATED
import spack.schema.merged
import spack.schema.packages
import spack.schema.projections
#: Top level key in a manifest file
TOP_LEVEL_KEY = "spack"
spec_list_schema = {
"type": "array",
"default": [],
"items": {
"anyOf": [
{
"type": "object",
"additionalProperties": False,
"properties": {
"matrix": {
"type": "array",
"items": {"type": "array", "items": {"type": "string"}},
},
"exclude": {"type": "array", "items": {"type": "string"}},
},
},
{"type": "string"},
{"type": "null"},
]
},
}
projections_scheme = spack.schema.projections.properties["projections"]
schema = {
@@ -75,16 +52,7 @@
}
},
},
"definitions": {
"type": "array",
"default": [],
"items": {
"type": "object",
"properties": {"when": {"type": "string"}},
"patternProperties": {r"^(?!when$)\w*": spec_list_schema},
},
},
"specs": spec_list_schema,
"specs": spack.schema.spec_list_schema,
"view": {
"anyOf": [
{"type": "boolean"},

View File

@@ -17,6 +17,7 @@
import spack.schema.concretizer
import spack.schema.config
import spack.schema.container
import spack.schema.definitions
import spack.schema.mirrors
import spack.schema.modules
import spack.schema.packages
@@ -32,6 +33,7 @@
spack.schema.config.properties,
spack.schema.container.properties,
spack.schema.ci.properties,
spack.schema.definitions.properties,
spack.schema.mirrors.properties,
spack.schema.modules.properties,
spack.schema.packages.properties,

View File

@@ -18,9 +18,7 @@
#: IS ADDED IMMEDIATELY BELOW THE MODULE TYPE ATTRIBUTE
spec_regex = (
r"(?!hierarchy|core_specs|verbose|hash_length|defaults|filter_hierarchy_specs|hide|"
r"whitelist|blacklist|" # DEPRECATED: remove in 0.20.
r"include|exclude|" # use these more inclusive/consistent options
r"projections|naming_scheme|core_compilers|all)(^\w[\w-]*)"
r"include|exclude|projections|naming_scheme|core_compilers|all)(^\w[\w-]*)"
)
#: Matches a valid name for a module set
@@ -46,14 +44,7 @@
"default": {},
"additionalProperties": False,
"properties": {
# DEPRECATED: remove in 0.20.
"environment_blacklist": {
"type": "array",
"default": [],
"items": {"type": "string"},
},
# use exclude_env_vars instead
"exclude_env_vars": {"type": "array", "default": [], "items": {"type": "string"}},
"exclude_env_vars": {"type": "array", "default": [], "items": {"type": "string"}}
},
},
"template": {"type": "string"},
@@ -80,11 +71,6 @@
"properties": {
"verbose": {"type": "boolean", "default": False},
"hash_length": {"type": "integer", "minimum": 0, "default": 7},
# DEPRECATED: remove in 0.20.
"whitelist": array_of_strings,
"blacklist": array_of_strings,
"blacklist_implicits": {"type": "boolean", "default": False},
# whitelist/blacklist have been replaced with include/exclude
"include": array_of_strings,
"exclude": array_of_strings,
"exclude_implicits": {"type": "boolean", "default": False},
@@ -188,52 +174,3 @@
"additionalProperties": False,
"properties": properties,
}
# deprecated keys and their replacements
old_to_new_key = {"exclude_implicits": "hide_implicits"}
def update_keys(data, key_translations):
"""Change blacklist/whitelist to exclude/include.
Arguments:
data (dict): data from a valid modules configuration.
key_translations (dict): A dictionary of keys to translate to
their respective values.
Return:
(bool) whether anything was changed in data
"""
changed = False
if isinstance(data, dict):
keys = list(data.keys())
for key in keys:
value = data[key]
translation = key_translations.get(key)
if translation:
data[translation] = data.pop(key)
changed = True
changed |= update_keys(value, key_translations)
elif isinstance(data, list):
for elt in data:
changed |= update_keys(elt, key_translations)
return changed
def update(data):
"""Update the data in place to remove deprecated properties.
Args:
data (dict): dictionary to be updated
Returns:
True if data was changed, False otherwise
"""
# translate blacklist/whitelist to exclude/include
return update_keys(data, old_to_new_key)

View File

@@ -8,6 +8,68 @@
:lines: 13-
"""
permissions = {
"type": "object",
"additionalProperties": False,
"properties": {
"read": {"type": "string", "enum": ["user", "group", "world"]},
"write": {"type": "string", "enum": ["user", "group", "world"]},
"group": {"type": "string"},
},
}
variants = {"oneOf": [{"type": "string"}, {"type": "array", "items": {"type": "string"}}]}
requirements = {
"oneOf": [
# 'require' can be a list of requirement_groups.
# each requirement group is a list of one or more
# specs. Either at least one or exactly one spec
# in the group must be satisfied (depending on
# whether you use "any_of" or "one_of",
# respectively)
{
"type": "array",
"items": {
"oneOf": [
{
"type": "object",
"additionalProperties": False,
"properties": {
"one_of": {"type": "array", "items": {"type": "string"}},
"any_of": {"type": "array", "items": {"type": "string"}},
"spec": {"type": "string"},
"message": {"type": "string"},
"when": {"type": "string"},
},
},
{"type": "string"},
]
},
},
# Shorthand for a single requirement group with
# one member
{"type": "string"},
]
}
package_attributes = {
"type": "object",
"additionalProperties": False,
"patternProperties": {r"\w+": {}},
}
REQUIREMENT_URL = "https://spack.readthedocs.io/en/latest/packages_yaml.html#package-requirements"
#: Properties for inclusion in other schemas
properties = {
@@ -15,57 +77,14 @@
"type": "object",
"default": {},
"additionalProperties": False,
"patternProperties": {
r"\w[\w-]*": { # package name
"properties": {
"all": { # package name
"type": "object",
"default": {},
"additionalProperties": False,
"properties": {
"require": {
"oneOf": [
# 'require' can be a list of requirement_groups.
# each requirement group is a list of one or more
# specs. Either at least one or exactly one spec
# in the group must be satisfied (depending on
# whether you use "any_of" or "one_of",
# respectively)
{
"type": "array",
"items": {
"oneOf": [
{
"type": "object",
"additionalProperties": False,
"properties": {
"one_of": {
"type": "array",
"items": {"type": "string"},
},
"any_of": {
"type": "array",
"items": {"type": "string"},
},
"spec": {"type": "string"},
"message": {"type": "string"},
"when": {"type": "string"},
},
},
{"type": "string"},
]
},
},
# Shorthand for a single requirement group with
# one member
{"type": "string"},
]
},
"version": {
"type": "array",
"default": [],
# version strings (type should be string, number is still possible
# but deprecated. this is to avoid issues with e.g. 3.10 -> 3.1)
"items": {"anyOf": [{"type": "string"}, {"type": "number"}]},
},
"require": requirements,
"version": {}, # Here only to warn users on ignored properties
"target": {
"type": "array",
"default": [],
@@ -78,22 +97,10 @@
"items": {"type": "string"},
}, # compiler specs
"buildable": {"type": "boolean", "default": True},
"permissions": {
"type": "object",
"additionalProperties": False,
"properties": {
"read": {"type": "string", "enum": ["user", "group", "world"]},
"write": {"type": "string", "enum": ["user", "group", "world"]},
"group": {"type": "string"},
},
},
"permissions": permissions,
# If 'get_full_repo' is promoted to a Package-level
# attribute, it could be useful to set it here
"package_attributes": {
"type": "object",
"additionalProperties": False,
"patternProperties": {r"\w+": {}},
},
"package_attributes": package_attributes,
"providers": {
"type": "object",
"default": {},
@@ -106,12 +113,40 @@
}
},
},
"variants": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}},
]
"variants": variants,
},
"deprecatedProperties": {
"properties": ["version"],
"message": "setting version preferences in the 'all' section of packages.yaml "
"is deprecated and will be removed in v0.22\n\n\tThese preferences "
"will be ignored by Spack. You can set them only in package-specific sections "
"of the same file.\n",
"error": False,
},
}
},
"patternProperties": {
r"(?!^all$)(^\w[\w-]*)": { # package name
"type": "object",
"default": {},
"additionalProperties": False,
"properties": {
"require": requirements,
"version": {
"type": "array",
"default": [],
# version strings
"items": {"anyOf": [{"type": "string"}, {"type": "number"}]},
},
"target": {}, # Here only to warn users on ignored properties
"compiler": {}, # Here only to warn users on ignored properties
"buildable": {"type": "boolean", "default": True},
"permissions": permissions,
# If 'get_full_repo' is promoted to a Package-level
# attribute, it could be useful to set it here
"package_attributes": package_attributes,
"providers": {}, # Here only to warn users on ignored properties
"variants": variants,
"externals": {
"type": "array",
"items": {
@@ -127,6 +162,18 @@
},
},
},
"deprecatedProperties": {
"properties": ["target", "compiler", "providers"],
"message": "setting 'compiler:', 'target:' or 'provider:' preferences in "
"a package-specific section of packages.yaml is deprecated, and will be "
"removed in v0.22.\n\n\tThese preferences will be ignored by Spack, and "
"can be set only in the 'all' section of the same file. "
"You can run:\n\n\t\t$ spack audit configs\n\n\tto get better diagnostics, "
"including files:lines where the deprecated attributes are used.\n\n"
"\tUse requirements to enforce conditions on specific packages: "
f"{REQUIREMENT_URL}\n",
"error": False,
},
}
},
}
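Concretely, the split gives every key a fixed home: compiler:, target: and providers: preferences are honored only under all, while require:, version: and externals: belong to named packages; the stray keys remain in the schema solely to trigger the deprecation warnings above. As a Python-dict sketch of a conforming packages.yaml (values are illustrative):

packages = {
    "all": {
        "compiler": ["gcc@12"],   # preferences: valid only under 'all'
        "target": ["x86_64_v3"],
    },
    "zlib": {
        "require": "@1.2.13",     # requirements and versions: per-package only
        "version": ["1.2.13"],
    },
}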

View File

@@ -8,11 +8,13 @@
import enum
import itertools
import os
import pathlib
import pprint
import re
import types
import typing
import warnings
from typing import Dict, List, NamedTuple, Optional, Sequence, Tuple, Union
from typing import Callable, Dict, List, NamedTuple, Optional, Sequence, Set, Tuple, Union
import archspec.cpu
@@ -337,6 +339,13 @@ def __getattr__(self, name):
fn = AspFunctionBuilder()
TransformFunction = Callable[[spack.spec.Spec, List[AspFunction]], List[AspFunction]]
def remove_node(spec: spack.spec.Spec, facts: List[AspFunction]) -> List[AspFunction]:
"""Transformation that removes all "node" and "virtual_node" from the input list of facts."""
return list(filter(lambda x: x.args[0] not in ("node", "virtual_node"), facts))
def _create_counter(specs, tests):
strategy = spack.config.CONFIG.get("concretizer:duplicates:strategy", "none")
@@ -371,7 +380,7 @@ def check_packages_exist(specs):
for spec in specs:
for s in spec.traverse():
try:
check_passed = repo.exists(s.name) or repo.is_virtual(s.name)
check_passed = repo.repo_for_pkg(s).exists(s.name) or repo.is_virtual(s.name)
except Exception as e:
msg = "Cannot find package: {0}".format(str(e))
check_passed = False
@@ -684,7 +693,7 @@ def extract_args(model, predicate_name):
class ErrorHandler:
def __init__(self, model):
self.model = model
self.error_args = extract_args(model, "error")
self.full_model = None
def multiple_values_error(self, attribute, pkg):
return f'Cannot select a single "{attribute}" for package "{pkg}"'
@@ -692,6 +701,48 @@ def multiple_values_error(self, attribute, pkg):
def no_value_error(self, attribute, pkg):
return f'Cannot select a single "{attribute}" for package "{pkg}"'
def _get_cause_tree(
self,
cause: Tuple[str, str],
conditions: Dict[str, str],
condition_causes: List[Tuple[Tuple[str, str], Tuple[str, str]]],
seen: Set,
indent: str = " ",
) -> List[str]:
"""
Implementation of the recursion for self.get_cause_tree. Much of this operates on tuples
(condition_id, set_id), meaning that the condition represented by the former held in the
condition set represented by the latter.
"""
seen.add(cause)
parents = [c for e, c in condition_causes if e == cause and c not in seen]
local = "required because %s " % conditions[cause[0]]
return [indent + local] + [
c
for parent in parents
for c in self._get_cause_tree(
parent, conditions, condition_causes, seen, indent=indent + " "
)
]
def get_cause_tree(self, cause: Tuple[str, str]) -> List[str]:
"""
Get the cause tree associated with the given cause.
Arguments:
cause: The root cause of the tree (final condition)
Returns:
A list of strings describing the causes, formatted to display tree structure.
"""
conditions: Dict[str, str] = dict(extract_args(self.full_model, "condition_reason"))
condition_causes: List[Tuple[Tuple[str, str], Tuple[str, str]]] = list(
((Effect, EID), (Cause, CID))
for Effect, EID, Cause, CID in extract_args(self.full_model, "condition_cause")
)
return self._get_cause_tree(cause, conditions, condition_causes, set())
def handle_error(self, msg, *args):
"""Handle an error state derived by the solver."""
if msg == "multiple_values_error":
@@ -700,14 +751,31 @@ def handle_error(self, msg, *args):
if msg == "no_value_error":
return self.no_value_error(*args)
try:
idx = args.index("startcauses")
except ValueError:
msg_args = args
causes = []
else:
msg_args = args[:idx]
cause_args = args[idx + 1 :]
cause_args_conditions = cause_args[::2]
cause_args_ids = cause_args[1::2]
causes = list(zip(cause_args_conditions, cause_args_ids))
msg = msg.format(*msg_args)
# For variant formatting, we sometimes have to construct specs
# to format values properly. Find/replace all occurrences of
# Spec(...) with the string representation of the spec mentioned
msg = msg.format(*args)
specs_to_construct = re.findall(r"Spec\(([^)]*)\)", msg)
for spec_str in specs_to_construct:
msg = msg.replace("Spec(%s)" % spec_str, str(spack.spec.Spec(spec_str)))
for cause in set(causes):
for c in self.get_cause_tree(cause):
msg += f"\n{c}"
return msg
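The slicing pairs the flat, alternating tail of the error atom back into (condition, set) tuples. In isolation:

args = ("{0} needs {1}", "pkg-a", "pkg-b", "startcauses", "c1", "s1", "c2", "s2")
idx = args.index("startcauses")
cause_args = args[idx + 1:]
causes = list(zip(cause_args[::2], cause_args[1::2]))
print(causes)  # [('c1', 's1'), ('c2', 's2')]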
def message(self, errors) -> str:
@@ -719,13 +787,40 @@ def message(self, errors) -> str:
return "\n".join([header] + messages)
def raise_if_errors(self):
if not self.error_args:
initial_error_args = extract_args(self.model, "error")
if not initial_error_args:
return
error_causation = clingo.Control()
parent_dir = pathlib.Path(__file__).parent
errors_lp = parent_dir / "error_messages.lp"
def on_model(model):
self.full_model = model.symbols(shown=True, terms=True)
with error_causation.backend() as backend:
for atom in self.model:
atom_id = backend.add_atom(atom)
backend.add_rule([atom_id], [], choice=False)
error_causation.load(str(errors_lp))
error_causation.ground([("base", []), ("error_messages", [])])
_ = error_causation.solve(on_model=on_model)
# No choices so there will be only one model
error_args = extract_args(self.full_model, "error")
errors = sorted(
[(int(priority), msg, args) for priority, msg, *args in self.error_args], reverse=True
[(int(priority), msg, args) for priority, msg, *args in error_args], reverse=True
)
msg = self.message(errors)
try:
msg = self.message(errors)
except Exception as e:
msg = (
f"unexpected error during concretization [{str(e)}]. "
f"Please report a bug at https://github.com/spack/spack/issues"
)
raise spack.error.SpackError(msg)
raise UnsatisfiableSpecError(msg)
@@ -919,14 +1014,6 @@ def on_model(model):
# record the possible dependencies in the solve
result.possible_dependencies = setup.pkgs
# print any unknown functions in the model
for sym in best_model:
if sym.name not in ("attr", "error", "opt_criterion"):
tty.debug(
"UNKNOWN SYMBOL: %s(%s)"
% (sym.name, ", ".join(intermediate_repr(sym.arguments)))
)
elif cores:
result.control = self.control
result.cores.extend(cores)
@@ -1031,11 +1118,8 @@ def __init__(self, tests=False):
self.reusable_and_possible = ConcreteSpecsByHash()
# id for dummy variables
self._condition_id_counter = itertools.count()
self._trigger_id_counter = itertools.count()
self._id_counter = itertools.count()
self._trigger_cache = collections.defaultdict(dict)
self._effect_id_counter = itertools.count()
self._effect_cache = collections.defaultdict(dict)
# Caches to optimize the setup phase of the solver
@@ -1049,6 +1133,7 @@ def __init__(self, tests=False):
# Set during the call to setup
self.pkgs = None
self.explicitly_required_namespaces = {}
def pkg_version_rules(self, pkg):
"""Output declared versions of a package.
@@ -1061,7 +1146,9 @@ def key_fn(version):
# Origins are sorted by "provenance" first, see the Provenance enumeration above
return version.origin, version.idx
pkg = packagize(pkg)
if isinstance(pkg, str):
pkg = self.pkg_class(pkg)
declared_versions = self.declared_versions[pkg.name]
partially_sorted_versions = sorted(set(declared_versions), key=key_fn)
@@ -1116,7 +1203,7 @@ def conflict_rules(self, pkg):
default_msg = "{0}: '{1}' conflicts with '{2}'"
no_constraint_msg = "{0}: conflicts with '{1}'"
for trigger, constraints in pkg.conflicts.items():
trigger_msg = "conflict trigger %s" % str(trigger)
trigger_msg = f"conflict is triggered when {str(trigger)}"
trigger_spec = spack.spec.Spec(trigger)
trigger_id = self.condition(
trigger_spec, name=trigger_spec.name or pkg.name, msg=trigger_msg
@@ -1128,7 +1215,11 @@ def conflict_rules(self, pkg):
conflict_msg = no_constraint_msg.format(pkg.name, trigger)
else:
conflict_msg = default_msg.format(pkg.name, trigger, constraint)
constraint_msg = "conflict constraint %s" % str(constraint)
spec_for_msg = (
spack.spec.Spec(pkg.name) if constraint == spack.spec.Spec() else constraint
)
constraint_msg = f"conflict applies to spec {str(spec_for_msg)}"
constraint_id = self.condition(constraint, name=pkg.name, msg=constraint_msg)
self.gen.fact(
fn.pkg_fact(pkg.name, fn.conflict(trigger_id, constraint_id, conflict_msg))
@@ -1167,32 +1258,9 @@ def compiler_facts(self):
matches = sorted(indexed_possible_compilers, key=lambda x: ppk(x[1].spec))
for weight, (compiler_id, cspec) in enumerate(matches):
f = fn.default_compiler_preference(compiler_id, weight)
f = fn.compiler_weight(compiler_id, weight)
self.gen.fact(f)
def package_compiler_defaults(self, pkg):
"""Facts about packages' compiler prefs."""
packages = spack.config.get("packages")
pkg_prefs = packages.get(pkg.name)
if not pkg_prefs or "compiler" not in pkg_prefs:
return
compiler_list = self.possible_compilers
compiler_list = sorted(compiler_list, key=lambda x: (x.name, x.version), reverse=True)
ppk = spack.package_prefs.PackagePrefs(pkg.name, "compiler", all=False)
matches = sorted(compiler_list, key=lambda x: ppk(x.spec))
for i, compiler in enumerate(reversed(matches)):
self.gen.fact(
fn.pkg_fact(
pkg.name,
fn.node_compiler_preference(
compiler.spec.name, compiler.spec.version, -i * 100
),
)
)
def package_requirement_rules(self, pkg):
rules = self.requirement_rules_from_package_py(pkg)
rules.extend(self.requirement_rules_from_packages_yaml(pkg))
@@ -1272,7 +1340,10 @@ def _rule_from_str(
)
def pkg_rules(self, pkg, tests):
pkg = packagize(pkg)
pkg = self.pkg_class(pkg)
# Namespace of the package
self.gen.fact(fn.pkg_fact(pkg.name, fn.namespace(pkg.namespace)))
# versions
self.pkg_version_rules(pkg)
@@ -1284,9 +1355,6 @@ def pkg_rules(self, pkg, tests):
# conflicts
self.conflict_rules(pkg)
# default compilers for this package
self.package_compiler_defaults(pkg)
# virtuals
self.package_provider_rules(pkg)
@@ -1310,7 +1378,7 @@ def trigger_rules(self):
self.gen.h2("Trigger conditions")
for name in self._trigger_cache:
cache = self._trigger_cache[name]
for spec_str, (trigger_id, requirements) in cache.items():
for (spec_str, _), (trigger_id, requirements) in cache.items():
self.gen.fact(fn.pkg_fact(name, fn.trigger_id(trigger_id)))
self.gen.fact(fn.pkg_fact(name, fn.trigger_msg(spec_str)))
for predicate in requirements:
@@ -1323,7 +1391,7 @@ def effect_rules(self):
self.gen.h2("Imposed requirements")
for name in self._effect_cache:
cache = self._effect_cache[name]
for spec_str, (effect_id, requirements) in cache.items():
for (spec_str, _), (effect_id, requirements) in cache.items():
self.gen.fact(fn.pkg_fact(name, fn.effect_id(effect_id)))
self.gen.fact(fn.pkg_fact(name, fn.effect_msg(spec_str)))
for predicate in requirements:
@@ -1422,18 +1490,26 @@ def variant_rules(self, pkg):
self.gen.newline()
def condition(self, required_spec, imposed_spec=None, name=None, msg=None, node=False):
def condition(
self,
required_spec: spack.spec.Spec,
imposed_spec: Optional[spack.spec.Spec] = None,
name: Optional[str] = None,
msg: Optional[str] = None,
transform_required: Optional[TransformFunction] = None,
transform_imposed: Optional[TransformFunction] = remove_node,
):
"""Generate facts for a dependency or virtual provider condition.
Arguments:
required_spec (spack.spec.Spec): the spec that triggers this condition
imposed_spec (spack.spec.Spec or None): the spec with constraints that
are imposed when this condition is triggered
name (str or None): name for `required_spec` (required if
required_spec is anonymous, ignored if not)
msg (str or None): description of the condition
node (bool): if False does not emit "node" or "virtual_node" requirements
from the imposed spec
required_spec: the constraints that trigger this condition
imposed_spec: the constraints that are imposed when this condition is triggered
name: name for `required_spec` (required if required_spec is anonymous, ignored if not)
msg: description of the condition
transform_required: transformation applied to facts from the required spec. Defaults
to leaving facts as they are.
transform_imposed: transformation applied to facts from the imposed spec. Defaults
to removing "node" and "virtual_node" facts.
Returns:
int: id of the condition created by this function
"""
@@ -1445,16 +1521,20 @@ def condition(self, required_spec, imposed_spec=None, name=None, msg=None, node=
# In this way, if a condition can't be emitted but the exception is handled in the caller,
# we won't emit partial facts.
condition_id = next(self._condition_id_counter)
condition_id = next(self._id_counter)
self.gen.fact(fn.pkg_fact(named_cond.name, fn.condition(condition_id)))
self.gen.fact(fn.condition_reason(condition_id, msg))
cache = self._trigger_cache[named_cond.name]
named_cond_key = str(named_cond)
named_cond_key = (str(named_cond), transform_required)
if named_cond_key not in cache:
trigger_id = next(self._trigger_id_counter)
trigger_id = next(self._id_counter)
requirements = self.spec_clauses(named_cond, body=True, required_from=name)
if transform_required:
requirements = transform_required(named_cond, requirements)
cache[named_cond_key] = (trigger_id, requirements)
trigger_id, requirements = cache[named_cond_key]
self.gen.fact(fn.pkg_fact(named_cond.name, fn.condition_trigger(condition_id, trigger_id)))
@@ -1463,14 +1543,14 @@ def condition(self, required_spec, imposed_spec=None, name=None, msg=None, node=
return condition_id
cache = self._effect_cache[named_cond.name]
imposed_spec_key = str(imposed_spec)
imposed_spec_key = (str(imposed_spec), transform_imposed)
if imposed_spec_key not in cache:
effect_id = next(self._effect_id_counter)
effect_id = next(self._id_counter)
requirements = self.spec_clauses(imposed_spec, body=False, required_from=name)
if not node:
requirements = list(
filter(lambda x: x.args[0] not in ("node", "virtual_node"), requirements)
)
if transform_imposed:
requirements = transform_imposed(imposed_spec, requirements)
cache[imposed_spec_key] = (effect_id, requirements)
effect_id, requirements = cache[imposed_spec_key]
self.gen.fact(fn.pkg_fact(named_cond.name, fn.condition_effect(condition_id, effect_id)))
@@ -1530,21 +1610,32 @@ def package_dependencies_rules(self, pkg):
if not depflag:
continue
msg = "%s depends on %s" % (pkg.name, dep.spec.name)
msg = f"{pkg.name} depends on {dep.spec}"
if cond != spack.spec.Spec():
msg += " when %s" % cond
msg += f" when {cond}"
else:
pass
condition_id = self.condition(cond, dep.spec, pkg.name, msg)
self.gen.fact(
fn.pkg_fact(pkg.name, fn.dependency_condition(condition_id, dep.spec.name))
)
def track_dependencies(input_spec, requirements):
return requirements + [fn.attr("track_dependencies", input_spec.name)]
for t in dt.ALL_FLAGS:
if t & depflag:
# there is a declared dependency of type t
self.gen.fact(fn.dependency_type(condition_id, dt.flag_to_string(t)))
def dependency_holds(input_spec, requirements):
return remove_node(input_spec, requirements) + [
fn.attr(
"dependency_holds", pkg.name, input_spec.name, dt.flag_to_string(t)
)
for t in dt.ALL_FLAGS
if t & depflag
]
self.condition(
cond,
dep.spec,
name=pkg.name,
msg=msg,
transform_required=track_dependencies,
transform_imposed=dependency_holds,
)
self.gen.newline()
@@ -1559,6 +1650,7 @@ def virtual_preferences(self, pkg_name, func):
for i, provider in enumerate(providers):
provider_name = spack.spec.Spec(provider).name
func(vspec, provider_name, i)
self.gen.newline()
def provider_defaults(self):
self.gen.h2("Default virtual providers")
@@ -1584,9 +1676,10 @@ def provider_requirements(self):
rules = self._rules_from_requirements(
virtual_str, requirements, kind=RequirementKind.VIRTUAL
)
self.emit_facts_from_requirement_rules(rules)
self.trigger_rules()
self.effect_rules()
if rules:
self.emit_facts_from_requirement_rules(rules)
self.trigger_rules()
self.effect_rules()
def emit_facts_from_requirement_rules(self, rules: List[RequirementRule]):
"""Generate facts to enforce requirements.
@@ -1639,8 +1732,17 @@ def emit_facts_from_requirement_rules(self, rules: List[RequirementRule]):
when_spec = spack.spec.Spec(pkg_name)
try:
# With virtual we want to emit "node" and "virtual_node" in imposed specs
transform: Optional[TransformFunction] = remove_node
if virtual:
transform = None
member_id = self.condition(
required_spec=when_spec, imposed_spec=spec, name=pkg_name, node=virtual
required_spec=when_spec,
imposed_spec=spec,
name=pkg_name,
transform_imposed=transform,
msg=f"{spec_str} is a requirement for package {pkg_name}",
)
except Exception as e:
# Do not raise if the rule comes from the 'all' subsection, since usability
@@ -1703,8 +1805,13 @@ def external_packages(self):
# Declare external conditions with a local index into packages.yaml
for local_idx, spec in enumerate(external_specs):
msg = "%s available as external when satisfying %s" % (spec.name, spec)
condition_id = self.condition(spec, msg=msg)
self.gen.fact(fn.pkg_fact(pkg_name, fn.possible_external(condition_id, local_idx)))
def external_imposition(input_spec, requirements):
return requirements + [
fn.attr("external_conditions_hold", input_spec.name, local_idx)
]
self.condition(spec, spec, msg=msg, transform_imposed=external_imposition)
self.possible_versions[spec.name].add(spec.version)
self.gen.newline()
@@ -1726,7 +1833,13 @@ def preferred_variants(self, pkg_name):
# perform validation of the variant and values
spec = spack.spec.Spec(pkg_name)
spec.update_variant_validate(variant_name, values)
try:
spec.update_variant_validate(variant_name, values)
except (spack.variant.InvalidVariantValueError, KeyError, ValueError) as e:
tty.debug(
f"[SETUP]: rejected {str(variant)} as a preference for {pkg_name}: {str(e)}"
)
continue
for value in values:
self.variant_values_from_specs.add((pkg_name, variant.name, value))
@@ -1734,8 +1847,8 @@ def preferred_variants(self, pkg_name):
fn.variant_default_value_from_packages_yaml(pkg_name, variant.name, value)
)
def target_preferences(self, pkg_name):
key_fn = spack.package_prefs.PackagePrefs(pkg_name, "target")
def target_preferences(self):
key_fn = spack.package_prefs.PackagePrefs("all", "target")
if not self.target_specs_cache:
self.target_specs_cache = [
@@ -1745,17 +1858,25 @@ def target_preferences(self, pkg_name):
package_targets = self.target_specs_cache[:]
package_targets.sort(key=key_fn)
offset = 0
best_default = self.default_targets[0][1]
for i, preferred in enumerate(package_targets):
if str(preferred.architecture.target) == best_default and i != 0:
offset = 100
self.gen.fact(
fn.pkg_fact(
pkg_name, fn.target_weight(str(preferred.architecture.target), i + offset)
)
)
self.gen.fact(fn.target_weight(str(preferred.architecture.target), i))
def flag_defaults(self):
self.gen.h2("Compiler flag defaults")
# types of flags that can be on specs
for flag in spack.spec.FlagMap.valid_compiler_flags():
self.gen.fact(fn.flag_type(flag))
self.gen.newline()
# flags from compilers.yaml
compilers = all_compilers_in_config()
for compiler in compilers:
for name, flags in compiler.flags.items():
for flag in flags:
self.gen.fact(
fn.compiler_version_flag(compiler.name, compiler.version, name, flag)
)
def spec_clauses(self, *args, **kwargs):
"""Wrap a call to `_spec_clauses()` into a try/except block that
@@ -1808,7 +1929,7 @@ class Head:
node_flag = fn.attr("node_flag_set")
node_flag_source = fn.attr("node_flag_source")
node_flag_propagate = fn.attr("node_flag_propagate")
variant_propagate = fn.attr("variant_propagate")
variant_propagation_candidate = fn.attr("variant_propagation_candidate")
class Body:
node = fn.attr("node")
@@ -1822,7 +1943,7 @@ class Body:
node_flag = fn.attr("node_flag")
node_flag_source = fn.attr("node_flag_source")
node_flag_propagate = fn.attr("node_flag_propagate")
variant_propagate = fn.attr("variant_propagate")
variant_propagation_candidate = fn.attr("variant_propagation_candidate")
f = Body if body else Head
@@ -1857,7 +1978,7 @@ class Body:
if not spec.concrete:
reserved_names = spack.directives.reserved_names
if not spec.virtual and vname not in reserved_names:
pkg_cls = spack.repo.PATH.get_pkg_class(spec.name)
pkg_cls = self.pkg_class(spec.name)
try:
variant_def, _ = pkg_cls.variants[vname]
except KeyError:
@@ -1871,7 +1992,9 @@ class Body:
clauses.append(f.variant_value(spec.name, vname, value))
if variant.propagate:
clauses.append(f.variant_propagate(spec.name, vname, value, spec.name))
clauses.append(
f.variant_propagation_candidate(spec.name, vname, value, spec.name)
)
# Tell the concretizer that this is a possible value for the
# variant, to account for things like int/str values where we
@@ -1918,6 +2041,7 @@ class Body:
if not body:
for virtual in virtuals:
clauses.append(fn.attr("provider_set", spec.name, virtual))
clauses.append(fn.attr("virtual_node", virtual))
else:
for virtual in virtuals:
clauses.append(fn.attr("virtual_on_incoming_edges", spec.name, virtual))
@@ -1973,7 +2097,7 @@ def define_package_versions_and_validate_preferences(
"""Declare any versions in specs not declared in packages."""
packages_yaml = spack.config.get("packages")
for pkg_name in possible_pkgs:
pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)
pkg_cls = self.pkg_class(pkg_name)
# All the versions from the corresponding package.py file. Since concepts
# like being a "develop" version or being preferred exist only at a
@@ -2061,7 +2185,7 @@ def _supported_targets(self, compiler_name, compiler_version, targets):
try:
with warnings.catch_warnings():
warnings.simplefilter("ignore")
target.optimization_flags(compiler_name, compiler_version)
target.optimization_flags(compiler_name, str(compiler_version))
supported.append(target)
except archspec.cpu.UnsupportedMicroarchitecture:
continue
@@ -2208,6 +2332,8 @@ def target_defaults(self, specs):
self.default_targets = list(sorted(set(self.default_targets)))
self.target_preferences()
def virtual_providers(self):
self.gen.h2("Virtual providers")
msg = (
@@ -2431,14 +2557,8 @@ def setup(
reuse: list of concrete specs that can be reused
allow_deprecated: if True adds deprecated versions into the solve
"""
self._condition_id_counter = itertools.count()
# preliminary checks
check_packages_exist(specs)
# get list of all possible dependencies
self.possible_virtuals = set(x.name for x in specs if x.virtual)
node_counter = _create_counter(specs, tests=self.tests)
self.possible_virtuals = node_counter.possible_virtuals()
self.pkgs = node_counter.possible_dependencies()
@@ -2451,6 +2571,10 @@ def setup(
if missing_deps:
raise spack.spec.InvalidDependencyError(spec.name, missing_deps)
for node in spack.traverse.traverse_nodes(specs):
if node.namespace is not None:
self.explicitly_required_namespaces[node.name] = node.namespace
# driver is used by all the functions below to add facts and
# rules to generate an ASP program.
self.gen = driver
@@ -2529,7 +2653,6 @@ def setup(
self.pkg_rules(pkg, tests=self.tests)
self.gen.h2("Package preferences: %s" % pkg)
self.preferred_variants(pkg)
self.target_preferences(pkg)
self.gen.h1("Develop specs")
# Inject dev_path from environment
@@ -2555,20 +2678,43 @@ def setup(
self.define_target_constraints()
def literal_specs(self, specs):
for idx, spec in enumerate(specs):
for spec in specs:
self.gen.h2("Spec: %s" % str(spec))
self.gen.fact(fn.literal(idx))
condition_id = next(self._id_counter)
trigger_id = next(self._id_counter)
self.gen.fact(fn.literal(idx, "virtual_root" if spec.virtual else "root", spec.name))
for clause in self.spec_clauses(spec):
self.gen.fact(fn.literal(idx, *clause.args))
if clause.args[0] == "variant_set":
self.gen.fact(
fn.literal(idx, "variant_default_value_from_cli", *clause.args[1:])
# Special condition triggered by "literal_solved"
self.gen.fact(fn.literal(trigger_id))
self.gen.fact(fn.pkg_fact(spec.name, fn.condition_trigger(condition_id, trigger_id)))
self.gen.fact(fn.condition_reason(condition_id, f"{spec} requested explicitly"))
imposed_spec_key = str(spec), None
cache = self._effect_cache[spec.name]
if imposed_spec_key in cache:
effect_id, requirements = cache[imposed_spec_key]
else:
effect_id = next(self._id_counter)
requirements = self.spec_clauses(spec)
root_name = spec.name
for clause in requirements:
clause_name = clause.args[0]
if clause_name == "variant_set":
requirements.append(
fn.attr("variant_default_value_from_cli", *clause.args[1:])
)
elif clause_name in ("node", "virtual_node", "hash"):
# These facts are needed to compute the "condition_set" of the root
pkg_name = clause.args[1]
self.gen.fact(fn.mentioned_in_literal(trigger_id, root_name, pkg_name))
requirements.append(fn.attr("virtual_root" if spec.virtual else "root", spec.name))
cache[imposed_spec_key] = (effect_id, requirements)
self.gen.fact(fn.pkg_fact(spec.name, fn.condition_effect(condition_id, effect_id)))
if self.concretize_everything:
self.gen.fact(fn.solve_literal(idx))
self.gen.fact(fn.solve_literal(trigger_id))
self.effect_rules()
def validate_and_define_versions_from_requirements(
self, *, allow_deprecated: bool, require_checksum: bool
@@ -2638,6 +2784,13 @@ def _specs_from_requires(self, pkg_name, section):
for s in spec_group[key]:
yield _spec_with_default_name(s, pkg_name)
def pkg_class(self, pkg_name: str) -> typing.Type["spack.package_base.PackageBase"]:
request = pkg_name
if pkg_name in self.explicitly_required_namespaces:
namespace = self.explicitly_required_namespaces[pkg_name]
request = f"{namespace}.{pkg_name}"
return spack.repo.PATH.get_pkg_class(request)
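
A hedged usage sketch of the lookup above (repository and package names are hypothetical):

# setup() records namespaces seen on input specs, e.g. for "myrepo.zlib":
#   self.explicitly_required_namespaces["zlib"] = "myrepo"
# pkg_class() then resolves the class from that repository instead of the
# highest-priority repo that happens to provide a package of the same name:
pkg_cls = self.pkg_class("zlib")  # asks spack.repo.PATH for "myrepo.zlib"
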
class SpecBuilder:
"""Class with actions to rebuild a spec from ASP results."""
@@ -2649,9 +2802,11 @@ class SpecBuilder:
r"^.*_propagate$",
r"^.*_satisfies$",
r"^.*_set$",
r"^dependency_holds$",
r"^node_compiler$",
r"^package_hash$",
r"^root$",
r"^track_dependencies$",
r"^variant_default_value_from_cli$",
r"^virtual_node$",
r"^virtual_root$",
@@ -2695,6 +2850,9 @@ def _arch(self, node):
self._specs[node].architecture = arch
return arch
def namespace(self, node, namespace):
self._specs[node].namespace = namespace
def node_platform(self, node, platform):
self._arch(node).platform = platform
@@ -2909,14 +3067,6 @@ def build_specs(self, function_tuples):
action(*args)
# namespace assignment is done after the fact, as it is not
# currently part of the solve
for spec in self._specs.values():
if spec.namespace:
continue
repo = spack.repo.PATH.repo_for_pkg(spec)
spec.namespace = repo.namespace
# fix flags after all specs are constructed
self.reorder_flags()

View File

@@ -10,9 +10,8 @@
% ID of the nodes in the "root" link-run sub-DAG
#const min_dupe_id = 0.
#const link_run = 0.
#const direct_link_run = 1.
#const direct_build = 2.
#const direct_link_run = 0.
#const direct_build = 1.
% Allow clingo to create nodes
{ attr("node", node(0..X-1, Package)) } :- max_dupes(Package, X), not virtual(Package).
@@ -30,23 +29,24 @@
:- attr("variant_value", PackageNode, _, _), not attr("node", PackageNode).
:- attr("node_flag_compiler_default", PackageNode), not attr("node", PackageNode).
:- attr("node_flag", PackageNode, _, _), not attr("node", PackageNode).
:- attr("node_flag_source", PackageNode, _, _), not attr("node", PackageNode).
:- attr("no_flags", PackageNode, _), not attr("node", PackageNode).
:- attr("external_spec_selected", PackageNode, _), not attr("node", PackageNode).
:- attr("depends_on", ParentNode, _, _), not attr("node", ParentNode).
:- attr("depends_on", _, ChildNode, _), not attr("node", ChildNode).
:- attr("node_flag_source", ParentNode, _, _), not attr("node", ParentNode).
:- attr("node_flag_source", _, _, ChildNode), not attr("node", ChildNode).
:- attr("virtual_node", VirtualNode), not provider(_, VirtualNode), internal_error("virtual node with no provider").
:- provider(_, VirtualNode), not attr("virtual_node", VirtualNode), internal_error("provider with no virtual node").
:- provider(PackageNode, _), not attr("node", PackageNode), internal_error("provider with no real node").
:- attr("virtual_node", VirtualNode), not provider(_, VirtualNode).
:- provider(_, VirtualNode), not attr("virtual_node", VirtualNode).
:- provider(PackageNode, _), not attr("node", PackageNode).
:- attr("root", node(ID, PackageNode)), ID > min_dupe_id.
:- attr("root", node(ID, PackageNode)), ID > min_dupe_id, internal_error("root with a non-minimal duplicate ID").
% Nodes in the "root" unification set cannot depend on non-root nodes if the dependency is "link" or "run"
:- attr("depends_on", node(min_dupe_id, Package), node(ID, _), "link"), ID != min_dupe_id, unification_set("root", node(min_dupe_id, Package)).
:- attr("depends_on", node(min_dupe_id, Package), node(ID, _), "run"), ID != min_dupe_id, unification_set("root", node(min_dupe_id, Package)).
:- attr("depends_on", node(min_dupe_id, Package), node(ID, _), "link"), ID != min_dupe_id, unification_set("root", node(min_dupe_id, Package)), internal_error("link dependency out of the root unification set").
:- attr("depends_on", node(min_dupe_id, Package), node(ID, _), "run"), ID != min_dupe_id, unification_set("root", node(min_dupe_id, Package)), internal_error("run dependency out of the root unification set").
% Namespaces are statically assigned by a package fact
attr("namespace", node(ID, Package), Namespace) :- attr("node", node(ID, Package)), pkg_fact(Package, namespace(Namespace)).
% Rules on "unification sets", i.e. on sets of nodes allowing a single configuration of any given package
unify(SetID, PackageName) :- unification_set(SetID, node(_, PackageName)).
@@ -86,22 +86,24 @@ unification_set(SetID, VirtualNode)
%----
% In the "root" unification set only ID = 0 are allowed
:- unification_set("root", node(ID, _)), ID != 0.
:- unification_set("root", node(ID, _)), ID != 0, internal_error("root unification set has node with non-zero unification set ID").
% In the "root" unification set we allow only packages from the link-run possible subDAG
:- unification_set("root", node(_, Package)), not possible_in_link_run(Package), not virtual(Package).
:- unification_set("root", node(_, Package)), not possible_in_link_run(Package), not virtual(Package), internal_error("package outside possible link/run graph in root unification set").
% Each node must belong to at least one unification set
:- attr("node", PackageNode), not unification_set(_, PackageNode).
:- attr("node", PackageNode), not unification_set(_, PackageNode), internal_error("node belongs to no unification set").
% Cannot have a node with an ID if lower IDs of the same package are not used
:- attr("node", node(ID1, Package)),
not attr("node", node(ID2, Package)),
max_dupes(Package, X), ID1=0..X-1, ID2=0..X-1, ID2 < ID1.
max_dupes(Package, X), ID1=0..X-1, ID2=0..X-1, ID2 < ID1,
internal_error("node skipped id number").
:- attr("virtual_node", node(ID1, Package)),
not attr("virtual_node", node(ID2, Package)),
max_dupes(Package, X), ID1=0..X-1, ID2=0..X-1, ID2 < ID1.
max_dupes(Package, X), ID1=0..X-1, ID2=0..X-1, ID2 < ID1,
internal_error("virtual node skipped id number").
%-----------------------------------------------------------------------------
% Map literal input specs to facts that drive the solve
@@ -115,29 +117,28 @@ multiple_nodes_attribute("depends_on").
multiple_nodes_attribute("virtual_on_edge").
multiple_nodes_attribute("provider_set").
% Map constraint on the literal ID to facts on the node
attr(Name, node(min_dupe_id, A1)) :- literal(LiteralID, Name, A1), solve_literal(LiteralID).
attr(Name, node(min_dupe_id, A1), A2) :- literal(LiteralID, Name, A1, A2), solve_literal(LiteralID), not multiple_nodes_attribute(Name).
attr(Name, node(min_dupe_id, A1), A2, A3) :- literal(LiteralID, Name, A1, A2, A3), solve_literal(LiteralID), not multiple_nodes_attribute(Name).
attr(Name, node(min_dupe_id, A1), A2, A3, A4) :- literal(LiteralID, Name, A1, A2, A3, A4), solve_literal(LiteralID).
trigger_condition_holds(TriggerID, node(min_dupe_id, Package)) :-
solve_literal(TriggerID),
pkg_fact(Package, condition_trigger(_, TriggerID)),
literal(TriggerID).
% Special cases where nodes occur in arguments other than A1
attr("node_flag_source", node(min_dupe_id, A1), A2, node(min_dupe_id, A3)) :- literal(LiteralID, "node_flag_source", A1, A2, A3), solve_literal(LiteralID).
attr("depends_on", node(min_dupe_id, A1), node(min_dupe_id, A2), A3) :- literal(LiteralID, "depends_on", A1, A2, A3), solve_literal(LiteralID).
trigger_node(TriggerID, Node, Node) :-
trigger_condition_holds(TriggerID, Node),
literal(TriggerID).
attr("virtual_node", node(min_dupe_id, Virtual)) :- literal(LiteralID, "provider_set", _, Virtual), solve_literal(LiteralID).
attr("provider_set", node(min_dupe_id, Provider), node(min_dupe_id, Virtual)) :- literal(LiteralID, "provider_set", Provider, Virtual), solve_literal(LiteralID).
provider(node(min_dupe_id, Provider), node(min_dupe_id, Virtual)) :- literal(LiteralID, "provider_set", Provider, Virtual), solve_literal(LiteralID).
% Since we trigger the existence of literal nodes from a condition, we need to construct
% the condition_set/2 manually below
mentioned_in_literal(Root, Mentioned) :- mentioned_in_literal(TriggerID, Root, Mentioned), solve_literal(TriggerID).
condition_set(node(min_dupe_id, Root), node(min_dupe_id, Mentioned)) :- mentioned_in_literal(Root, Mentioned).
% Discriminate between "roots" that have been explicitly requested, and roots that are deduced from "virtual roots"
explicitly_requested_root(node(min_dupe_id, A1)) :- literal(LiteralID, "root", A1), solve_literal(LiteralID).
explicitly_requested_root(node(min_dupe_id, Package)) :-
solve_literal(TriggerID),
trigger_and_effect(Package, TriggerID, EffectID),
imposed_constraint(EffectID, "root", Package).
#defined concretize_everything/0.
#defined literal/1.
#defined literal/3.
#defined literal/4.
#defined literal/5.
#defined literal/6.
% Attributes for node packages which must have a single value
attr_single_value("version").
@@ -235,7 +236,8 @@ possible_version_weight(node(ID, Package), Weight)
1 { version_weight(node(ID, Package), Weight) : pkg_fact(Package, version_declared(Version, Weight)) } 1
:- attr("version", node(ID, Package), Version),
attr("node", node(ID, Package)).
attr("node", node(ID, Package)),
internal_error("version weights must exist and be unique").
% node_version_satisfies implies that exactly one of the satisfying versions
% is the package's version, and vice versa.
@@ -249,7 +251,8 @@ possible_version_weight(node(ID, Package), Weight)
% bound on the choice rule to avoid false positives with the error below
1 { attr("version", node(ID, Package), Version) : pkg_fact(Package, version_satisfies(Constraint, Version)) }
:- attr("node_version_satisfies", node(ID, Package), Constraint),
pkg_fact(Package, version_satisfies(Constraint, _)).
pkg_fact(Package, version_satisfies(Constraint, _)),
internal_error("must choose a single version to satisfy version constraints").
% More specific error message if the version cannot satisfy some constraint
% Otherwise covered by `no_version_error` and `versions_conflict_error`.
@@ -362,7 +365,7 @@ imposed_nodes(ConditionID, PackageNode, node(X, A1))
% Conditions that hold may impose constraints on other specs
attr(Name, node(X, A1)) :- impose(ID, PackageNode), imposed_constraint(ID, Name, A1), imposed_nodes(ID, PackageNode, node(X, A1)).
attr(Name, node(X, A1), A2) :- impose(ID, PackageNode), imposed_constraint(ID, Name, A1, A2), imposed_nodes(ID, PackageNode, node(X, A1)).
attr(Name, node(X, A1), A2) :- impose(ID, PackageNode), imposed_constraint(ID, Name, A1, A2), imposed_nodes(ID, PackageNode, node(X, A1)), not multiple_nodes_attribute(Name).
attr(Name, node(X, A1), A2, A3) :- impose(ID, PackageNode), imposed_constraint(ID, Name, A1, A2, A3), imposed_nodes(ID, PackageNode, node(X, A1)), not multiple_nodes_attribute(Name).
attr(Name, node(X, A1), A2, A3, A4) :- impose(ID, PackageNode), imposed_constraint(ID, Name, A1, A2, A3, A4), imposed_nodes(ID, PackageNode, node(X, A1)).
@@ -373,6 +376,16 @@ attr("node_flag_source", node(X, A1), A2, node(Y, A3))
imposed_constraint(ID, "node_flag_source", A1, A2, A3),
condition_set(node(Y, A3), node(X, A1)).
% Provider set is relevant only for literals, since it's the only place where `^[virtuals=foo] bar`
% might appear in the HEAD of a rule
attr("provider_set", node(min_dupe_id, Provider), node(min_dupe_id, Virtual))
:- solve_literal(TriggerID),
trigger_and_effect(_, TriggerID, EffectID),
impose(EffectID, _),
imposed_constraint(EffectID, "provider_set", Provider, Virtual).
provider(ProviderNode, VirtualNode) :- attr("provider_set", ProviderNode, VirtualNode).
% Here we can't use the condition set because it's a recursive definition that doesn't define the
% node index and leads to unsatisfiability. Hence we say that one and only one node index must
% satisfy the dependency.
@@ -432,24 +445,11 @@ depends_on(PackageNode, DependencyNode) :- attr("depends_on", PackageNode, Depen
% concrete. We chop off dependencies for externals, and dependencies of
% concrete specs don't need to be resolved -- they arise from the concrete
% specs themselves.
dependency_holds(node(NodeID, Package), Dependency, Type) :-
pkg_fact(Package, dependency_condition(ID, Dependency)),
dependency_type(ID, Type),
build(node(NodeID, Package)),
not external(node(NodeID, Package)),
condition_holds(ID, node(NodeID, Package)).
% We cut off dependencies of externals (as we don't really know them).
% Don't impose constraints on dependencies that don't exist.
do_not_impose(EffectID, node(NodeID, Package)) :-
not dependency_holds(node(NodeID, Package), Dependency, _),
attr("node", node(NodeID, Package)),
pkg_fact(Package, dependency_condition(ID, Dependency)),
pkg_fact(Package, condition_effect(ID, EffectID)).
attr("track_dependencies", Node) :- build(Node), not external(Node).
% If a dependency holds on a package node, there must be one and only one dependency node satisfying it
1 { attr("depends_on", PackageNode, node(0..Y-1, Dependency), Type) : max_dupes(Dependency, Y) } 1
:- dependency_holds(PackageNode, Dependency, Type),
:- attr("dependency_holds", PackageNode, Dependency, Type),
not virtual(Dependency).
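
For reference, the `dependency_holds` transform in the Python hunk earlier in this diff is what emits these facts; a sketch of what one declared dependency produces (package names illustrative):

# "mpileaks depends on mpi" with depflag = link|run is imposed as one fact per flag:
facts = [
    fn.attr("dependency_holds", "mpileaks", "mpi", "link"),
    fn.attr("dependency_holds", "mpileaks", "mpi", "run"),
]
# The choice rule above then selects exactly one node(0..Y-1, "mpi") per fact.
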
% all nodes in the graph must be reachable from some root
@@ -499,7 +499,7 @@ error(100, "Package '{0}' needs to provide both '{1}' and '{2}' together, but pr
% if a package depends on a virtual, it's not external and we have a
% provider for that virtual then it depends on the provider
node_depends_on_virtual(PackageNode, Virtual, Type)
:- dependency_holds(PackageNode, Virtual, Type),
:- attr("dependency_holds", PackageNode, Virtual, Type),
virtual(Virtual),
not external(PackageNode).
@@ -509,7 +509,7 @@ node_depends_on_virtual(PackageNode, Virtual) :- node_depends_on_virtual(Package
:- node_depends_on_virtual(PackageNode, Virtual, Type).
attr("virtual_on_edge", PackageNode, ProviderNode, Virtual)
:- dependency_holds(PackageNode, Virtual, Type),
:- attr("dependency_holds", PackageNode, Virtual, Type),
attr("depends_on", PackageNode, ProviderNode, Type),
provider(ProviderNode, node(_, Virtual)),
not external(PackageNode).
@@ -592,21 +592,15 @@ possible_provider_weight(DependencyNode, VirtualNode, 0, "external")
:- provider(DependencyNode, VirtualNode),
external(DependencyNode).
% A provider mentioned in packages.yaml can use a weight
% according to its priority in the list of providers
possible_provider_weight(node(DependencyID, Dependency), node(VirtualID, Virtual), Weight, "packages_yaml")
:- provider(node(DependencyID, Dependency), node(VirtualID, Virtual)),
depends_on(node(ID, Package), node(DependencyID, Dependency)),
pkg_fact(Package, provider_preference(Virtual, Dependency, Weight)).
% A provider mentioned in the default configuration can use a weight
% according to its priority in the list of providers
possible_provider_weight(node(DependencyID, Dependency), node(VirtualID, Virtual), Weight, "default")
:- provider(node(DependencyID, Dependency), node(VirtualID, Virtual)),
default_provider_preference(Virtual, Dependency, Weight).
possible_provider_weight(node(ProviderID, Provider), node(VirtualID, Virtual), Weight, "default")
:- provider(node(ProviderID, Provider), node(VirtualID, Virtual)),
default_provider_preference(Virtual, Provider, Weight).
% Any provider can use 100 as a weight, which is very high and discourages its use
possible_provider_weight(node(DependencyID, Dependency), VirtualNode, 100, "fallback") :- provider(node(DependencyID, Dependency), VirtualNode).
possible_provider_weight(node(ProviderID, Provider), VirtualNode, 100, "fallback")
:- provider(node(ProviderID, Provider), VirtualNode).
% do not warn if generated program contains none of these.
#defined virtual/1.
@@ -624,11 +618,11 @@ possible_provider_weight(node(DependencyID, Dependency), VirtualNode, 100, "fall
pkg_fact(Package, version_declared(Version, Weight, "external")) }
:- external(node(ID, Package)).
error(100, "Attempted to use external for '{0}' which does not satisfy any configured external spec", Package)
error(100, "Attempted to use external for '{0}' which does not satisfy any configured external spec version", Package)
:- external(node(ID, Package)),
not external_version(node(ID, Package), _, _).
error(100, "Attempted to use external for '{0}' which does not satisfy any configured external spec", Package)
error(100, "Attempted to use external for '{0}' which does not satisfy a unique configured external spec version", Package)
:- external(node(ID, Package)),
2 { external_version(node(ID, Package), Version, Weight) }.
@@ -657,18 +651,15 @@ external(PackageNode) :- attr("external_spec_selected", PackageNode, _).
% determine if an external spec has been selected
attr("external_spec_selected", node(ID, Package), LocalIndex) :-
external_conditions_hold(node(ID, Package), LocalIndex),
attr("external_conditions_hold", node(ID, Package), LocalIndex),
attr("node", node(ID, Package)),
not attr("hash", node(ID, Package), _).
external_conditions_hold(node(PackageID, Package), LocalIndex) :-
pkg_fact(Package, possible_external(ID, LocalIndex)), condition_holds(ID, node(PackageID, Package)).
% it cannot happen that a spec is external, but none of the external specs
% conditions hold.
error(100, "Attempted to use external for '{0}' which does not satisfy any configured external spec", Package)
:- external(node(ID, Package)),
not external_conditions_hold(node(ID, Package), _).
not attr("external_conditions_hold", node(ID, Package), _).
%-----------------------------------------------------------------------------
% Config required semantics
@@ -707,6 +698,26 @@ requirement_group_satisfied(node(ID, Package), X) :-
activate_requirement(node(ID, Package), X),
requirement_group(Package, X).
% Do not impose requirements, if the conditional requirement is not active
do_not_impose(EffectID, node(ID, Package)) :-
trigger_condition_holds(TriggerID, node(ID, Package)),
pkg_fact(Package, condition_trigger(ConditionID, TriggerID)),
pkg_fact(Package, condition_effect(ConditionID, EffectID)),
requirement_group_member(ConditionID, Package, RequirementID),
not activate_requirement(node(ID, Package), RequirementID).
% When we have a required provider, we need to ensure that the provider/2 facts respect
% the requirement. This is particularly important for packages that could provide multiple
% virtuals independently
required_provider(Provider, Virtual)
:- requirement_group_member(ConditionID, Virtual, RequirementID),
condition_holds(ConditionID, _),
virtual(Virtual),
pkg_fact(Virtual, condition_effect(ConditionID, EffectID)),
imposed_constraint(EffectID, "node", Provider).
:- provider(node(Y, Package), node(X, Virtual)), required_provider(Provider, Virtual), Package != Provider.
% TODO: the following two choice rules allow the solver to add compiler
% flags if their only source is from a requirement. This is overly-specific
% and should use a more-generic approach like in https://github.com/spack/spack/pull/37180
@@ -769,23 +780,36 @@ node_has_variant(node(ID, Package), Variant) :-
pkg_fact(Package, variant(Variant)),
attr("node", node(ID, Package)).
attr("variant_propagate", PackageNode, Variant, Value, Source) :-
% Variant propagation is forwarded to dependencies
attr("variant_propagation_candidate", PackageNode, Variant, Value, Source) :-
attr("node", PackageNode),
depends_on(ParentNode, PackageNode),
attr("variant_propagate", ParentNode, Variant, Value, Source),
not attr("variant_set", PackageNode, Variant).
attr("variant_value", node(_, Source), Variant, Value),
attr("variant_propagation_candidate", ParentNode, Variant, _, Source).
attr("variant_value", node(ID, Package), Variant, Value) :-
attr("node", node(ID, Package)),
% If the node is a candidate and it has the variant and value,
% then that variant and value should be propagated
attr("variant_propagate", node(ID, Package), Variant, Value, Source) :-
attr("variant_propagation_candidate", node(ID, Package), Variant, Value, Source),
node_has_variant(node(ID, Package), Variant),
attr("variant_propagate", node(ID, Package), Variant, Value, _),
pkg_fact(Package, variant_possible_value(Variant, Value)).
pkg_fact(Package, variant_possible_value(Variant, Value)),
not attr("variant_set", node(ID, Package), Variant).
% Propagate the value, if there is the corresponding attribute
attr("variant_value", PackageNode, Variant, Value) :- attr("variant_propagate", PackageNode, Variant, Value, _).
% If a variant is propagated, we cannot have extraneous values (this is for multi-valued variants)
variant_is_propagated(PackageNode, Variant) :- attr("variant_propagate", PackageNode, Variant, _, _).
:- variant_is_propagated(PackageNode, Variant),
attr("variant_value", PackageNode, Variant, Value),
not attr("variant_propagate", PackageNode, Variant, Value, _).
% Cannot receive different values from different sources on the same variant
error(100, "{0} and {1} cannot both propagate variant '{2}' to package {3} with values '{4}' and '{5}'", Source1, Source2, Variant, Package, Value1, Value2) :-
attr("variant_propagate", node(X, Package), Variant, Value1, Source1),
attr("variant_propagate", node(X, Package), Variant, Value2, Source2),
node_has_variant(node(X, Package), Variant),
Value1 < Value2.
Value1 < Value2, Source1 < Source2.
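
For intuition, the `variant_propagation_candidate` facts originate from specs whose variants were parsed with propagation syntax (see the spec_clauses hunk above); a hedged example of that syntax, with the package name illustrative:

from spack.spec import Spec

Spec("hdf5 ++shared")  # propagate shared=True to descendants that define the variant
Spec("hdf5 ~~shared")  # propagate shared=False the same way
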
% a variant cannot be set if it is not a variant on the package
error(100, "Cannot set variant '{0}' for package '{1}' because the variant condition cannot be satisfied for the given spec", Variant, Package)
@@ -887,8 +911,9 @@ variant_default_not_used(node(ID, Package), Variant, Value)
% The variant is set in an external spec
external_with_variant_set(node(NodeID, Package), Variant, Value)
:- attr("variant_value", node(NodeID, Package), Variant, Value),
condition_requirement(ID, "variant_value", Package, Variant, Value),
pkg_fact(Package, possible_external(ID, _)),
condition_requirement(TriggerID, "variant_value", Package, Variant, Value),
trigger_and_effect(Package, TriggerID, EffectID),
imposed_constraint(EffectID, "external_conditions_hold", Package, _),
external(node(NodeID, Package)),
attr("node", node(NodeID, Package)).
@@ -1064,7 +1089,7 @@ attr("node_target", PackageNode, Target)
node_target_weight(node(ID, Package), Weight)
:- attr("node", node(ID, Package)),
attr("node_target", node(ID, Package), Target),
pkg_fact(Package, target_weight(Target, Weight)).
target_weight(Target, Weight).
% compatibility rules for targets among nodes
node_target_match(ParentNode, DependencyNode)
@@ -1186,23 +1211,17 @@ compiler_mismatch_required(PackageNode, DependencyNode)
#defined allow_compiler/2.
% compilers weighted by preference according to packages.yaml
compiler_weight(node(ID, Package), Weight)
node_compiler_weight(node(ID, Package), Weight)
:- node_compiler(node(ID, Package), CompilerID),
compiler_name(CompilerID, Compiler),
compiler_version(CompilerID, V),
pkg_fact(Package, node_compiler_preference(Compiler, V, Weight)).
compiler_weight(node(ID, Package), Weight)
compiler_weight(CompilerID, Weight).
node_compiler_weight(node(ID, Package), 100)
:- node_compiler(node(ID, Package), CompilerID),
compiler_name(CompilerID, Compiler),
compiler_version(CompilerID, V),
not pkg_fact(Package, node_compiler_preference(Compiler, V, _)),
default_compiler_preference(CompilerID, Weight).
compiler_weight(node(ID, Package), 100)
:- node_compiler(node(ID, Package), CompilerID),
compiler_name(CompilerID, Compiler),
compiler_version(CompilerID, V),
not pkg_fact(Package, node_compiler_preference(Compiler, V, _)),
not default_compiler_preference(CompilerID, _).
not compiler_weight(CompilerID, _).
% For the time being, be strict and reuse only if the compiler match one we have on the system
error(100, "Compiler {1}@{2} requested for {0} cannot be found. Set install_missing_compilers:true if intended.", Package, Compiler, Version)
@@ -1210,7 +1229,7 @@ error(100, "Compiler {1}@{2} requested for {0} cannot be found. Set install_miss
not node_compiler(node(ID, Package), _).
#defined node_compiler_preference/4.
#defined default_compiler_preference/3.
#defined compiler_weight/3.
%-----------------------------------------------------------------------------
% Compiler flags
@@ -1534,7 +1553,7 @@ opt_criterion(15, "non-preferred compilers").
#minimize{ 0@15: #true }.
#minimize{
Weight@15+Priority,PackageNode
: compiler_weight(PackageNode, Weight),
: node_compiler_weight(PackageNode, Weight),
build_priority(PackageNode, Priority)
}.

View File

@@ -24,4 +24,29 @@
#show error/5.
#show error/6.
% for error causation
#show condition_reason/2.
% For error messages to use later
#show pkg_fact/2.
#show condition_holds/2.
#show imposed_constraint/3.
#show imposed_constraint/4.
#show imposed_constraint/5.
#show imposed_constraint/6.
#show condition_requirement/3.
#show condition_requirement/4.
#show condition_requirement/5.
#show condition_requirement/6.
#show node_has_variant/2.
#show build/1.
#show external/1.
#show external_version/3.
#show trigger_and_effect/3.
#show unification_set/2.
#show provider/2.
#show condition_nodes/3.
#show trigger_node/3.
#show imposed_nodes/3.
% debug

View File

@@ -0,0 +1,239 @@
% Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
% Spack Project Developers. See the top-level COPYRIGHT file for details.
%
% SPDX-License-Identifier: (Apache-2.0 OR MIT)
%=============================================================================
% This logic program adds detailed error messages to Spack's concretizer
%=============================================================================
#program error_messages.
% Create a causal tree between trigger conditions by locating the effect conditions
% that are triggers for another condition. Condition2 is caused by Condition1
condition_cause(Condition2, ID2, Condition1, ID1) :-
condition_holds(Condition2, node(ID2, Package2)),
pkg_fact(Package2, condition_trigger(Condition2, Trigger)),
condition_requirement(Trigger, Name, Package),
condition_nodes(Trigger, TriggerNode, node(ID, Package)),
trigger_node(Trigger, TriggerNode, node(ID2, Package2)),
attr(Name, node(ID, Package)),
condition_holds(Condition1, node(ID1, Package1)),
pkg_fact(Package1, condition_effect(Condition1, Effect)),
imposed_constraint(Effect, Name, Package),
imposed_nodes(Effect, node(ID1, Package1), node(ID, Package)).
condition_cause(Condition2, ID2, Condition1, ID1) :-
condition_holds(Condition2, node(ID2, Package2)),
pkg_fact(Package2, condition_trigger(Condition2, Trigger)),
condition_requirement(Trigger, Name, Package, A1),
condition_nodes(Trigger, TriggerNode, node(ID, Package)),
trigger_node(Trigger, TriggerNode, node(ID2, Package2)),
attr(Name, node(ID, Package), A1),
condition_holds(Condition1, node(ID1, Package1)),
pkg_fact(Package1, condition_effect(Condition1, Effect)),
imposed_constraint(Effect, Name, Package, A1),
imposed_nodes(Effect, node(ID1, Package1), node(ID, Package)).
condition_cause(Condition2, ID2, Condition1, ID1) :-
condition_holds(Condition2, node(ID2, Package2)),
pkg_fact(Package2, condition_trigger(Condition2, Trigger)),
condition_requirement(Trigger, Name, Package, A1, A2),
condition_nodes(Trigger, TriggerNode, node(ID, Package)),
trigger_node(Trigger, TriggerNode, node(ID2, Package2)),
attr(Name, node(ID, Package), A1, A2),
condition_holds(Condition1, node(ID1, Package1)),
pkg_fact(Package1, condition_effect(Condition1, Effect)),
imposed_constraint(Effect, Name, Package, A1, A2),
imposed_nodes(Effect, node(ID1, Package1), node(ID, Package)).
condition_cause(Condition2, ID2, Condition1, ID1) :-
condition_holds(Condition2, node(ID2, Package2)),
pkg_fact(Package2, condition_trigger(Condition2, Trigger)),
condition_requirement(Trigger, Name, Package, A1, A2, A3),
condition_nodes(Trigger, TriggerNode, node(ID, Package)),
trigger_node(Trigger, TriggerNode, node(ID2, Package2)),
attr(Name, node(ID, Package), A1, A2, A3),
condition_holds(Condition1, node(ID1, Package1)),
pkg_fact(Package1, condition_effect(Condition1, Effect)),
imposed_constraint(Effect, Name, Package, A1, A2, A3),
imposed_nodes(Effect, node(ID1, Package1), node(ID, Package)).
% special condition cause for dependency conditions
% we can't simply impose the existence of the node for dependency conditions
% because we need to allow for the choice of which dupe ID the node gets
condition_cause(Condition2, ID2, Condition1, ID1) :-
condition_holds(Condition2, node(ID2, Package2)),
pkg_fact(Package2, condition_trigger(Condition2, Trigger)),
condition_requirement(Trigger, "node", Package),
condition_nodes(Trigger, TriggerNode, node(ID, Package)),
trigger_node(Trigger, TriggerNode, node(ID2, Package2)),
attr("node", node(ID, Package)),
condition_holds(Condition1, node(ID1, Package1)),
pkg_fact(Package1, condition_effect(Condition1, Effect)),
imposed_constraint(Effect, "dependency_holds", Parent, Package, Type),
imposed_nodes(Effect, node(ID1, Package1), node(ID, Package)),
attr("depends_on", node(X, Parent), node(ID, Package), Type).
% The literal startcauses is used to separate the variables that are part of the error from the
% ones describing the causal tree of the error. After startcauses, each successive pair must be
% a condition and a condition_set id for which it holds.
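
A minimal Python sketch of splitting such an error atom's arguments at the marker described above (illustrative only; the actual consumer lives elsewhere in the solver):

def split_error_args(args):
    # Arguments before "startcauses" format the error itself; after it, the
    # arguments come in (condition, condition_set id) pairs for the causal tree.
    if "startcauses" not in args:
        return args, []
    i = args.index("startcauses")
    return args[:i], list(zip(args[i + 1::2], args[i + 2::2]))
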
% More specific error message if the version cannot satisfy some constraint
% Otherwise covered by `no_version_error` and `versions_conflict_error`.
error(1, "Cannot satisfy '{0}@{1}'", Package, Constraint, startcauses, ConstraintCause, CauseID)
:- attr("node_version_satisfies", node(ID, Package), Constraint),
pkg_fact(TriggerPkg, condition_effect(ConstraintCause, EffectID)),
imposed_constraint(EffectID, "node_version_satisfies", Package, Constraint),
condition_holds(ConstraintCause, node(CauseID, TriggerPkg)),
attr("version", node(ID, Package), Version),
not pkg_fact(Package, version_satisfies(Constraint, Version)).
error(0, "Cannot satisfy '{0}@{1}' and '{0}@{2}", Package, Constraint1, Constraint2, startcauses, Cause1, C1ID, Cause2, C2ID)
:- attr("node_version_satisfies", node(ID, Package), Constraint1),
pkg_fact(TriggerPkg1, condition_effect(Cause1, EffectID1)),
imposed_constraint(EffectID1, "node_version_satisfies", Package, Constraint1),
condition_holds(Cause1, node(C1ID, TriggerPkg1)),
% two constraints
attr("node_version_satisfies", node(ID, Package), Constraint2),
pkg_fact(TriggerPkg2, condition_effect(Cause2, EffectID2)),
imposed_constraint(EffectID2, "node_version_satisfies", Package, Constraint2),
condition_holds(Cause2, node(C2ID, TriggerPkg2)),
% version chosen
attr("version", node(ID, Package), Version),
% version satisfies one but not the other
pkg_fact(Package, version_satisfies(Constraint1, Version)),
not pkg_fact(Package, version_satisfies(Constraint2, Version)).
% causation tracking error for no or multiple virtual providers
error(0, "Cannot find a valid provider for virtual {0}", Virtual, startcauses, Cause, CID)
:- attr("virtual_node", node(X, Virtual)),
not provider(_, node(X, Virtual)),
imposed_constraint(EID, "dependency_holds", Parent, Virtual, Type),
pkg_fact(TriggerPkg, condition_effect(Cause, EID)),
condition_holds(Cause, node(CID, TriggerPkg)).
% At most one variant value for single-valued variants
error(0, "'{0}' required multiple values for single-valued variant '{1}'\n Requested 'Spec({1}={2})' and 'Spec({1}={3})'", Package, Variant, Value1, Value2, startcauses, Cause1, X, Cause2, X)
:- attr("node", node(X, Package)),
node_has_variant(node(X, Package), Variant),
pkg_fact(Package, variant_single_value(Variant)),
build(node(X, Package)),
attr("variant_value", node(X, Package), Variant, Value1),
imposed_constraint(EID1, "variant_set", Package, Variant, Value1),
pkg_fact(TriggerPkg1, condition_effect(Cause1, EID1)),
condition_holds(Cause1, node(X, TriggerPkg1)),
attr("variant_value", node(X, Package), Variant, Value2),
imposed_constraint(EID2, "variant_set", Package, Variant, Value2),
pkg_fact(TriggerPkg2, condition_effect(Cause2, EID2)),
condition_holds(Cause2, node(X, TriggerPkg2)),
Value1 < Value2. % see[1] in concretize.lp
% Externals have to specify external conditions
error(0, "Attempted to use external for {0} which does not satisfy any configured external spec version", Package, startcauses, ExternalCause, CID)
:- external(node(ID, Package)),
attr("external_spec_selected", node(ID, Package), Index),
imposed_constraint(EID, "external_conditions_hold", Package, Index),
pkg_fact(TriggerPkg, condition_effect(ExternalCause, EID)),
condition_holds(ExternalCause, node(CID, TriggerPkg)),
not external_version(node(ID, Package), _, _).
error(0, "Attempted to build package {0} which is not buildable and does not have a satisfying external\n attr('{1}', '{2}') is an external constraint for {0} which was not satisfied", Package, Name, A1)
:- external(node(ID, Package)),
not attr("external_conditions_hold", node(ID, Package), _),
imposed_constraint(EID, "external_conditions_hold", Package, _),
trigger_and_effect(Package, TID, EID),
condition_requirement(TID, Name, A1),
not attr(Name, node(_, A1)).
error(0, "Attempted to build package {0} which is not buildable and does not have a satisfying external\n attr('{1}', '{2}', '{3}') is an external constraint for {0} which was not satisfied", Package, Name, A1, A2)
:- external(node(ID, Package)),
not attr("external_conditions_hold", node(ID, Package), _),
imposed_constraint(EID, "external_conditions_hold", Package, _),
trigger_and_effect(Package, TID, EID),
condition_requirement(TID, Name, A1, A2),
not attr(Name, node(_, A1), A2).
error(0, "Attempted to build package {0} which is not buildable and does not have a satisfying external\n attr('{1}', '{2}', '{3}', '{4}') is an external constraint for {0} which was not satisfied", Package, Name, A1, A2, A3)
:- external(node(ID, Package)),
not attr("external_conditions_hold", node(ID, Package), _),
imposed_constraint(EID, "external_conditions_hold", Package, _),
trigger_and_effect(Package, TID, EID),
condition_requirement(TID, Name, A1, A2, A3),
not attr(Name, node(_, A1), A2, A3).
error(0, "Attempted to build package {0} which is not buildable and does not have a satisfying external\n 'Spec({0} {1}={2})' is an external constraint for {0} which was not satisfied\n 'Spec({0} {1}={3})' required", Package, Variant, Value, OtherValue, startcauses, OtherValueCause, CID)
:- external(node(ID, Package)),
not attr("external_conditions_hold", node(ID, Package), _),
imposed_constraint(EID, "external_conditions_hold", Package, _),
trigger_and_effect(Package, TID, EID),
condition_requirement(TID, "variant_value", Package, Variant, Value),
not attr("variant_value", node(ID, Package), Variant, Value),
attr("variant_value", node(ID, Package), Variant, OtherValue),
imposed_constraint(EID2, "variant_set", Package, Variant, OtherValue),
pkg_fact(TriggerPkg, condition_effect(OtherValueCause, EID2)),
condition_holds(OtherValueCause, node(CID, TriggerPkg)).
error(0, "Attempted to build package {0} which is not buildable and does not have a satisfying external\n attr('{1}', '{2}', '{3}', '{4}', '{5}') is an external constraint for {0} which was not satisfied", Package, Name, A1, A2, A3, A4)
:- external(node(ID, Package)),
not attr("external_conditions_hold", node(ID, Package), _),
imposed_constraint(EID, "external_conditions_hold", Package, _),
trigger_and_effect(Package, TID, EID),
condition_requirement(TID, Name, A1, A2, A3, A4),
not attr(Name, node(_, A1), A2, A3, A4).
% error message with causes for conflicts
error(0, Msg, startcauses, TriggerID, ID1, ConstraintID, ID2)
:- attr("node", node(ID, Package)),
pkg_fact(Package, conflict(TriggerID, ConstraintID, Msg)),
% node(ID1, TriggerPackage) is node(ID2, Package) in most, but not all, cases
condition_holds(TriggerID, node(ID1, TriggerPackage)),
condition_holds(ConstraintID, node(ID2, Package)),
unification_set(X, node(ID2, Package)),
unification_set(X, node(ID1, TriggerPackage)),
not external(node(ID, Package)), % ignore conflicts for externals
not attr("hash", node(ID, Package), _). % ignore conflicts for installed packages
% variables to show
#show error/2.
#show error/3.
#show error/4.
#show error/5.
#show error/6.
#show error/7.
#show error/8.
#show error/9.
#show error/10.
#show error/11.
#show condition_cause/4.
#show condition_reason/2.
% Define all variables used to avoid warnings at runtime when the model doesn't happen to have one
#defined error/2.
#defined error/3.
#defined error/4.
#defined error/5.
#defined error/6.
#defined attr/2.
#defined attr/3.
#defined attr/4.
#defined attr/5.
#defined pkg_fact/2.
#defined imposed_constraint/3.
#defined imposed_constraint/4.
#defined imposed_constraint/5.
#defined imposed_constraint/6.
#defined condition_requirement/3.
#defined condition_requirement/4.
#defined condition_requirement/5.
#defined condition_requirement/6.
#defined condition_holds/2.
#defined unification_set/2.
#defined external/1.
#defined trigger_and_effect/3.
#defined build/1.
#defined node_has_variant/2.
#defined provider/2.
#defined external_version/3.

View File

@@ -11,19 +11,14 @@
%-----------------
% Domain heuristic
%-----------------
#heuristic attr("hash", node(0, Package), Hash) : literal(_, "root", Package). [45, init]
#heuristic attr("root", node(0, Package)) : literal(_, "root", Package). [45, true]
#heuristic attr("node", node(0, Package)) : literal(_, "root", Package). [45, true]
#heuristic attr("node", node(0, Package)) : literal(_, "node", Package). [45, true]
% Root node
#heuristic attr("version", node(0, Package), Version) : pkg_fact(Package, version_declared(Version, 0)), attr("root", node(0, Package)). [35, true]
#heuristic version_weight(node(0, Package), 0) : pkg_fact(Package, version_declared(Version, 0)), attr("root", node(0, Package)). [35, true]
#heuristic attr("variant_value", node(0, Package), Variant, Value) : variant_default_value(Package, Variant, Value), attr("root", node(0, Package)). [35, true]
#heuristic attr("node_target", node(0, Package), Target) : pkg_fact(Package, target_weight(Target, 0)), attr("root", node(0, Package)). [35, true]
#heuristic attr("node_target", node(0, Package), Target) : target_weight(Target, 0), attr("root", node(0, Package)). [35, true]
#heuristic node_target_weight(node(0, Package), 0) : attr("root", node(0, Package)). [35, true]
#heuristic node_compiler(node(0, Package), CompilerID) : default_compiler_preference(ID, 0), compiler_id(ID), attr("root", node(0, Package)). [35, true]
#heuristic node_compiler(node(0, Package), CompilerID) : compiler_weight(ID, 0), compiler_id(ID), attr("root", node(0, Package)). [35, true]
% Providers
#heuristic attr("node", node(0, Package)) : default_provider_preference(Virtual, Package, 0), possible_in_link_run(Package). [30, true]

View File

@@ -13,7 +13,7 @@
#heuristic attr("variant_value", node(ID, Package), Variant, Value) : variant_default_value(Package, Variant, Value), attr("node", node(ID, Package)), ID > 0. [25-5*ID, true]
#heuristic attr("node_target", node(ID, Package), Target) : pkg_fact(Package, target_weight(Target, 0)), attr("node", node(ID, Package)), ID > 0. [25-5*ID, true]
#heuristic node_target_weight(node(ID, Package), 0) : attr("node", node(ID, Package)), ID > 0. [25-5*ID, true]
#heuristic node_compiler(node(ID, Package), CompilerID) : default_compiler_preference(CompilerID, 0), compiler_id(CompilerID), attr("node", node(ID, Package)), ID > 0. [25-5*ID, true]
#heuristic node_compiler(node(ID, Package), CompilerID) : compiler_weight(CompilerID, 0), compiler_id(CompilerID), attr("node", node(ID, Package)), ID > 0. [25-5*ID, true]
% node(ID, _), split build dependencies
#heuristic attr("version", node(ID, Package), Version) : pkg_fact(Package, version_declared(Version, 0)), attr("node", node(ID, Package)), multiple_unification_sets(Package), ID > 0. [25, true]
@@ -21,4 +21,4 @@
#heuristic attr("variant_value", node(ID, Package), Variant, Value) : variant_default_value(Package, Variant, Value), attr("node", node(ID, Package)), multiple_unification_sets(Package), ID > 0. [25, true]
#heuristic attr("node_target", node(ID, Package), Target) : pkg_fact(Package, target_weight(Target, 0)), attr("node", node(ID, Package)), multiple_unification_sets(Package), ID > 0. [25, true]
#heuristic node_target_weight(node(ID, Package), 0) : attr("node", node(ID, Package)), multiple_unification_sets(Package), ID > 0. [25, true]
#heuristic node_compiler(node(ID, Package), CompilerID) : default_compiler_preference(CompilerID, 0), compiler_id(CompilerID), attr("node", node(ID, Package)), multiple_unification_sets(Package), ID > 0. [25, true]
#heuristic node_compiler(node(ID, Package), CompilerID) : compiler_weight(CompilerID, 0), compiler_id(CompilerID), attr("node", node(ID, Package)), multiple_unification_sets(Package), ID > 0. [25, true]

View File

@@ -10,6 +10,9 @@
%=============================================================================
% macOS
os_compatible("sequoia", "sonoma").
os_compatible("sonoma", "ventura").
os_compatible("ventura", "monterey").
os_compatible("monterey", "bigsur").
os_compatible("bigsur", "catalina").

View File

@@ -213,6 +213,19 @@ def __call__(self, match):
return clr.colorize(re.sub(_SEPARATORS, insert_color(), str(spec)) + "@.")
OLD_STYLE_FMT_RE = re.compile(r"\${[A-Z]+}")
def ensure_modern_format_string(fmt: str) -> None:
"""Ensure that the format string does not contain old ${...} syntax."""
result = OLD_STYLE_FMT_RE.search(fmt)
if result:
raise SpecFormatStringError(
f"Format string `{fmt}` contains old syntax `{result.group(0)}`. "
"This is no longer supported."
)
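
A quick illustration of the guard (the format strings are examples, not taken from this diff):

ensure_modern_format_string("{name}-{version}")       # modern syntax: accepted
ensure_modern_format_string("${PACKAGE}-${VERSION}")  # old syntax: raises SpecFormatStringError
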
@lang.lazy_lexicographic_ordering
class ArchSpec:
"""Aggregate the target platform, the operating system and the target microarchitecture."""
@@ -4360,6 +4373,7 @@ def format(self, format_string=DEFAULT_FORMAT, **kwargs):
that accepts a string and returns another one
"""
ensure_modern_format_string(format_string)
color = kwargs.get("color", False)
transform = kwargs.get("transform", {})

View File

@@ -93,8 +93,8 @@ def remove(self, spec):
if (isinstance(s, str) and not s.startswith("$")) and Spec(s) == Spec(spec)
]
if not remove:
msg = "Cannot remove %s from SpecList %s\n" % (spec, self.name)
msg += "Either %s is not in %s or %s is " % (spec, self.name, spec)
msg = f"Cannot remove {spec} from SpecList {self.name}.\n"
msg += f"Either {spec} is not in {self.name} or {spec} is "
msg += "expanded from a matrix and cannot be removed directly."
raise SpecListError(msg)
@@ -133,9 +133,8 @@ def _parse_reference(self, name):
# Make sure the reference is valid
if name not in self._reference:
msg = "SpecList %s refers to " % self.name
msg += "named list %s " % name
msg += "which does not appear in its reference dict"
msg = f"SpecList '{self.name}' refers to named list '{name}'"
msg += " which does not appear in its reference dict."
raise UndefinedReferenceError(msg)
return (name, sigil)

View File

@@ -102,7 +102,10 @@ def to_dict_or_value(self):
if self.microarchitecture.vendor == "generic":
return str(self)
return syaml.syaml_dict(self.microarchitecture.to_dict(return_list_of_items=True))
# Get rid of compiler flag information before turning the uarch into a dict
uarch_dict = self.microarchitecture.to_dict()
uarch_dict.pop("compilers", None)
return syaml.syaml_dict(uarch_dict.items())
def __repr__(self):
cls_name = self.__class__.__name__

View File

@@ -8,13 +8,16 @@
import pytest
import archspec.cpu
import llnl.util.filesystem as fs
import spack.compilers
import spack.concretize
import spack.operating_systems
import spack.platforms
import spack.target
from spack.spec import ArchSpec, CompilerSpec, Spec
from spack.spec import ArchSpec, Spec
@pytest.fixture(scope="module")
@@ -121,52 +124,60 @@ def test_arch_spec_container_semantic(item, architecture_str):
 @pytest.mark.parametrize(
     "compiler_spec,target_name,expected_flags",
     [
-        # Check compilers with version numbers from a single toolchain
+        # Homogeneous compilers
         ("gcc@4.7.2", "ivybridge", "-march=core-avx-i -mtune=core-avx-i"),
-        # Check mixed toolchains
-        ("clang@8.0.0", "broadwell", ""),
         ("clang@3.5", "x86_64", "-march=x86-64 -mtune=generic"),
         # Check Apple's Clang compilers
         ("apple-clang@9.1.0", "x86_64", "-march=x86-64"),
+        # Mixed toolchain
+        ("clang@8.0.0", "broadwell", ""),
     ],
 )
 @pytest.mark.filterwarnings("ignore:microarchitecture specific")
-def test_optimization_flags(compiler_spec, target_name, expected_flags, config):
+def test_optimization_flags(compiler_spec, target_name, expected_flags, compiler_factory):
     target = spack.target.Target(target_name)
-    compiler = spack.compilers.compilers_for_spec(compiler_spec).pop()
+    compiler_dict = compiler_factory(spec=compiler_spec, operating_system="")["compiler"]
+    if compiler_spec == "clang@8.0.0":
+        compiler_dict["paths"] = {
+            "cc": "/path/to/clang-8",
+            "cxx": "/path/to/clang++-8",
+            "f77": "/path/to/gfortran-9",
+            "fc": "/path/to/gfortran-9",
+        }
+    compiler = spack.compilers.compiler_from_dict(compiler_dict)
     opt_flags = target.optimization_flags(compiler)
     assert opt_flags == expected_flags
 
 
 @pytest.mark.parametrize(
-    "compiler,real_version,target_str,expected_flags",
+    "compiler_str,real_version,target_str,expected_flags",
     [
-        (CompilerSpec("gcc@=9.2.0"), None, "haswell", "-march=haswell -mtune=haswell"),
+        ("gcc@=9.2.0", None, "haswell", "-march=haswell -mtune=haswell"),
         # Check that custom string versions are accepted
-        (
-            CompilerSpec("gcc@=10foo"),
-            "9.2.0",
-            "icelake",
-            "-march=icelake-client -mtune=icelake-client",
-        ),
+        ("gcc@=10foo", "9.2.0", "icelake", "-march=icelake-client -mtune=icelake-client"),
         # Check that we run version detection (4.4.0 doesn't support icelake)
-        (
-            CompilerSpec("gcc@=4.4.0-special"),
-            "9.2.0",
-            "icelake",
-            "-march=icelake-client -mtune=icelake-client",
-        ),
+        ("gcc@=4.4.0-special", "9.2.0", "icelake", "-march=icelake-client -mtune=icelake-client"),
         # Check that the special case for Apple's clang is treated correctly
         # i.e. it won't try to detect the version again
-        (CompilerSpec("apple-clang@=9.1.0"), None, "x86_64", "-march=x86-64"),
+        ("apple-clang@=9.1.0", None, "x86_64", "-march=x86-64"),
     ],
 )
 def test_optimization_flags_with_custom_versions(
-    compiler, real_version, target_str, expected_flags, monkeypatch, config
+    compiler_str,
+    real_version,
+    target_str,
+    expected_flags,
+    monkeypatch,
+    mutable_config,
+    compiler_factory,
 ):
     target = spack.target.Target(target_str)
+    compiler_dict = compiler_factory(spec=compiler_str, operating_system="redhat6")
+    mutable_config.set("compilers", [compiler_dict])
     if real_version:
         monkeypatch.setattr(spack.compiler.Compiler, "get_real_version", lambda x: real_version)
+    compiler = spack.compilers.compiler_from_dict(compiler_dict["compiler"])
     opt_flags = target.optimization_flags(compiler)
     assert opt_flags == expected_flags
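Both rewritten tests now build their compiler configuration inline instead of depending on whatever the global test config happens to contain. The compiler_factory fixture itself is defined elsewhere in the test suite; a hedged sketch of the entry shape it presumably returns, following the compilers.yaml schema (paths and target are hypothetical placeholders):

    def compiler_entry(spec: str, operating_system: str) -> dict:
        # One entry of the "compilers" configuration section (compilers.yaml shape).
        return {
            "compiler": {
                "spec": spec,
                "operating_system": operating_system,
                "target": "x86_64",
                "modules": [],
                "paths": {
                    "cc": "/path/to/cc",
                    "cxx": "/path/to/cxx",
                    "f77": "/path/to/f77",
                    "fc": "/path/to/fc",
                },
            }
        }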
@@ -201,13 +212,16 @@ def test_satisfy_strict_constraint_when_not_concrete(architecture_tuple, constra
 )
 @pytest.mark.usefixtures("mock_packages", "config")
 @pytest.mark.only_clingo("Fixing the parser broke this test for the original concretizer.")
+@pytest.mark.skipif(
+    str(archspec.cpu.host().family) != "x86_64", reason="tests are for x86_64 uarch ranges"
+)
 def test_concretize_target_ranges(root_target_range, dep_target_range, result, monkeypatch):
     # Monkeypatch so that all concretization is done as if the machine is core2
     monkeypatch.setattr(spack.platforms.test.Test, "default", "core2")
-    spec = Spec(f"a %gcc@10 foobar=bar target={root_target_range} ^b target={dep_target_range}")
+    spec = Spec(
+        f"pkg-a %gcc@10 foobar=bar target={root_target_range} ^pkg-b target={dep_target_range}"
+    )
     with spack.concretize.disable_compiler_existence_check():
         spec.concretize()
 
-    assert spec.target == spec["b"].target == result
+    assert spec.target == spec["pkg-b"].target == result
 
 
 @pytest.mark.parametrize(


@@ -4,7 +4,9 @@
 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
 import filecmp
 import glob
+import gzip
+import io
 import json
 import os
 import platform
 import sys
@@ -17,6 +19,8 @@
 import py
 import pytest
 
+import archspec.cpu
+
 from llnl.util.filesystem import join_path, visit_directory_tree
 
 import spack.binary_distribution as bindist
@@ -199,6 +203,9 @@ def dummy_prefix(tmpdir):
     with open(data, "w") as f:
         f.write("hello world")
 
+    with open(p.join(".spack", "binary_distribution"), "w") as f:
+        f.write("{}")
+
     os.symlink("app", relative_app_link)
     os.symlink(app, absolute_app_link)
@@ -568,11 +575,20 @@ def test_update_sbang(tmpdir, test_mirror):
     uninstall_cmd("-y", "/%s" % new_spec.dag_hash())
 
 
-def test_install_legacy_buildcache_layout(install_mockery_mutable_config):
+@pytest.mark.skipif(
+    str(archspec.cpu.host().family) != "x86_64",
+    reason="test data uses gcc 4.5.0 which does not support aarch64",
+)
+def test_install_legacy_buildcache_layout(
+    mutable_config, compiler_factory, install_mockery_mutable_config
+):
     """Legacy buildcache layout involved a nested archive structure
     where the .spack file contained a repeated spec.json and another
     compressed archive file containing the install tree. This test
     makes sure we can still read that layout."""
+    mutable_config.set(
+        "compilers", [compiler_factory(spec="gcc@4.5.0", operating_system="debian6")]
+    )
     legacy_layout_dir = os.path.join(test_path, "data", "mirrors", "legacy_layout")
     mirror_url = "file://{0}".format(legacy_layout_dir)
     filename = (
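For reference, the nested structure the docstring above describes looks roughly like this (file names illustrative, derived only from the docstring):

    <name>.spack              outer tar archive fetched from the mirror
    ├── <name>.spec.json      repeated copy of the spec metadata
    └── <name>.tar.gz         compressed archive of the installed tree

Newer layouts avoid the inner archive, which is why reading this one needs a dedicated compatibility path.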
@@ -1022,7 +1038,9 @@ def test_tarball_common_prefix(dummy_prefix, tmpdir):
-            bindist._tar_strip_component(tar, common_prefix)
-
             # Extract into prefix2
-            tar.extractall(path="prefix2")
+            tar.extractall(
+                path="prefix2", members=bindist._tar_strip_component(tar, common_prefix)
+            )
 
         # Verify files are all there at the correct level.
         assert set(os.listdir("prefix2")) == {"bin", "share", ".spack"}
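The rewritten call passes members=... to extractall, which suggests _tar_strip_component now yields adjusted members lazily rather than mutating the archive up front. A minimal sketch of that stdlib pattern, assuming a simplified helper (Spack's real implementation must also rewrite linkname for symlinks and hardlinks):

    import tarfile
    from typing import Iterator

    def strip_component(tar: tarfile.TarFile, prefix: str) -> Iterator[tarfile.TarInfo]:
        # Yield each member with "prefix/" removed from its path, so that
        # extractall() writes it at the top of the destination directory.
        for member in tar.getmembers():
            if member.name.startswith(prefix + "/"):
                member.name = member.name[len(prefix) + 1 :]
                yield member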
@@ -1042,13 +1060,30 @@ def test_tarball_common_prefix(dummy_prefix, tmpdir):
 )
+def test_tarfile_missing_binary_distribution_file(tmpdir):
+    """A tarfile that does not contain a .spack/binary_distribution file cannot be
+    used to install."""
+    with tmpdir.as_cwd():
+        # An empty .spack dir.
+        with tarfile.open("empty.tar", mode="w") as tar:
+            tarinfo = tarfile.TarInfo(name="example/.spack")
+            tarinfo.type = tarfile.DIRTYPE
+            tar.addfile(tarinfo)
+
+        with pytest.raises(ValueError, match="missing binary_distribution file"):
+            bindist._ensure_common_prefix(tarfile.open("empty.tar", mode="r"))
+
+
 def test_tarfile_without_common_directory_prefix_fails(tmpdir):
     """A tarfile that only contains files without a common package directory
     should fail to extract, as we won't know where to put the files."""
     with tmpdir.as_cwd():
         # Create a broken tarball with just a file, no directories.
         with tarfile.open("empty.tar", mode="w") as tar:
-            tar.addfile(tarfile.TarInfo(name="example/file"), fileobj=io.BytesIO(b"hello"))
+            tar.addfile(
+                tarfile.TarInfo(name="example/.spack/binary_distribution"),
+                fileobj=io.BytesIO(b"hello"),
+            )
 
         with pytest.raises(ValueError, match="Tarball does not contain a common prefix"):
             bindist._ensure_common_prefix(tarfile.open("empty.tar", mode="r"))
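Together these two tests pin down the invariant checked before extraction: every entry must live under one top-level directory, and that directory must contain .spack/binary_distribution. A hedged sketch consistent with the error messages the tests match, not Spack's exact implementation:

    import tarfile

    def ensure_common_prefix(tar: tarfile.TarFile) -> str:
        names = [member.name for member in tar.getmembers()]
        if not names:
            raise ValueError("Tarball is empty")
        # All entries must share a single top-level directory...
        prefix = names[0].split("/", 1)[0]
        if not all(n == prefix or n.startswith(prefix + "/") for n in names):
            raise ValueError("Tarball does not contain a common prefix")
        # ...and that directory must hold the buildcache metadata file.
        if f"{prefix}/.spack/binary_distribution" not in names:
            raise ValueError(f"Tarball is missing binary_distribution file in {prefix}/.spack")
        return prefix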
@@ -1112,3 +1147,77 @@ def test_tarfile_of_spec_prefix(tmpdir):
     assert tar.getmember(f"{expected_prefix}/b_directory/file").isreg()
     assert tar.getmember(f"{expected_prefix}/c_directory").isdir()
     assert tar.getmember(f"{expected_prefix}/c_directory/file").isreg()
+
+
+@pytest.mark.parametrize("layout,expect_success", [(None, True), (1, True), (2, False)])
+def test_get_valid_spec_file(tmp_path, layout, expect_success):
+    # Test reading spec.json files with and without an explicit layout version.
+    spec_dict = Spec("example").to_dict()
+    path = tmp_path / "spec.json"
+    effective_layout = layout or 0  # If not specified it should be 0
+
+    # Add a layout version
+    if layout is not None:
+        spec_dict["buildcache_layout_version"] = layout
+
+    # Save to file
+    with open(path, "w") as f:
+        json.dump(spec_dict, f)
+
+    try:
+        spec_dict_disk, layout_disk = bindist._get_valid_spec_file(
+            str(path), max_supported_layout=1
+        )
+        assert expect_success
+        assert spec_dict_disk == spec_dict
+        assert layout_disk == effective_layout
+    except bindist.InvalidMetadataFile:
+        assert not expect_success
+
+
+def test_get_valid_spec_file_doesnt_exist(tmp_path):
+    with pytest.raises(bindist.InvalidMetadataFile, match="No such file"):
+        bindist._get_valid_spec_file(str(tmp_path / "no-such-file"), max_supported_layout=1)
+
+
+def test_get_valid_spec_file_gzipped(tmp_path):
+    # Create a gzipped file, contents don't matter
+    path = tmp_path / "spec.json.gz"
+    with gzip.open(path, "wb") as f:
+        f.write(b"hello")
+
+    with pytest.raises(
+        bindist.InvalidMetadataFile, match="Compressed spec files are not supported"
+    ):
+        bindist._get_valid_spec_file(str(path), max_supported_layout=1)
+
+
+@pytest.mark.parametrize("filename", ["spec.json", "spec.json.sig"])
+def test_get_valid_spec_file_no_json(tmp_path, filename):
+    tmp_path.joinpath(filename).write_text("not json")
+    with pytest.raises(bindist.InvalidMetadataFile):
+        bindist._get_valid_spec_file(str(tmp_path / filename), max_supported_layout=1)
+
+
+def test_download_tarball_with_unsupported_layout_fails(tmp_path, mutable_config, capsys):
+    layout_version = bindist.FORWARD_COMPAT_BUILD_CACHE_LAYOUT_VERSION + 1
+    spec = Spec("gmake@4.4.1%gcc@13.1.0 arch=linux-ubuntu23.04-zen2")
+    spec._mark_concrete()
+    spec_dict = spec.to_dict()
+    spec_dict["buildcache_layout_version"] = layout_version
+
+    # Set up a basic local build cache structure
+    path = (
+        tmp_path / bindist.build_cache_relative_path() / bindist.tarball_name(spec, ".spec.json")
+    )
+    path.parent.mkdir(parents=True)
+    with open(path, "w") as f:
+        json.dump(spec_dict, f)
+
+    # Configure it as a mirror.
+    mirror_cmd("add", "test-mirror", str(tmp_path))
+
+    # We shouldn't be able to "download" this.
+    assert bindist.download_tarball(spec, unsigned=True) is None
+
+    # And there should be a warning about an unsupported layout version.
+    assert f"Layout version {layout_version} is too new" in capsys.readouterr().err


@@ -437,14 +437,14 @@ def test_parallel_false_is_not_propagating(default_mock_concretization):
     # a foobar=bar (parallel = False)
     # |
     # b (parallel = True)
-    s = default_mock_concretization("a foobar=bar")
+    s = default_mock_concretization("pkg-a foobar=bar")
     spack.build_environment.set_package_py_globals(s.package)
-    assert s["a"].package.module.make_jobs == 1
+    assert s["pkg-a"].package.module.make_jobs == 1
 
-    spack.build_environment.set_package_py_globals(s["b"].package)
-    assert s["b"].package.module.make_jobs == spack.build_environment.determine_number_of_jobs(
-        parallel=s["b"].package.parallel
+    spack.build_environment.set_package_py_globals(s["pkg-b"].package)
+    assert s["pkg-b"].package.module.make_jobs == spack.build_environment.determine_number_of_jobs(
+        parallel=s["pkg-b"].package.parallel
     )
@@ -540,7 +540,7 @@ def test_dirty_disable_module_unload(config, mock_packages, working_env, mock_mo
     """Test that on CRAY platform 'module unload' is not called if the 'dirty'
     option is on.
    """
-    s = spack.spec.Spec("a").concretized()
+    s = spack.spec.Spec("pkg-a").concretized()
 
     # If called with "dirty" we don't unload modules, so no calls to the
     # `module` function on Cray
@@ -652,3 +652,18 @@ def test_monkey_patching_works_across_virtual(default_mock_concretization):
     s["mpich"].foo = "foo"
     assert s["mpich"].foo == "foo"
     assert s["mpi"].foo == "foo"
+
+
+def test_clear_compiler_related_runtime_variables_of_build_deps(default_mock_concretization):
+    """Verify that Spack drops CC, CXX, FC and F77 from the dependency-related build
+    environment variable changes if they are set in setup_run_environment. Spack manages
+    those variables elsewhere."""
+    s = default_mock_concretization("build-env-compiler-var-a")
+    ctx = spack.build_environment.SetupContext(s, context=Context.BUILD)
+    result = {}
+    ctx.get_env_modifications().apply_modifications(result)
+    assert "CC" not in result
+    assert "CXX" not in result
+    assert "FC" not in result
+    assert "F77" not in result
+    assert result["ANOTHER_VAR"] == "this-should-be-present"


@@ -9,6 +9,8 @@
 import py.path
 import pytest
 
+import archspec.cpu
+
 import llnl.util.filesystem as fs
 
 import spack.build_systems.autotools
@@ -95,7 +97,7 @@ def test_negative_ninja_check(self, input_dir, test_dir, concretize_and_setup):
 @pytest.mark.usefixtures("config", "mock_packages")
 class TestAutotoolsPackage:
     def test_with_or_without(self, default_mock_concretization):
-        s = default_mock_concretization("a")
+        s = default_mock_concretization("pkg-a")
         options = s.package.with_or_without("foo")
 
         # Ensure that values that are not representing a feature
@@ -127,7 +129,7 @@ def activate(value):
         assert "--without-lorem-ipsum" in options
 
     def test_none_is_allowed(self, default_mock_concretization):
-        s = default_mock_concretization("a foo=none")
+        s = default_mock_concretization("pkg-a foo=none")
         options = s.package.with_or_without("foo")
 
         # Ensure that values that are not representing a feature
@@ -209,6 +211,9 @@ def test_autotools_gnuconfig_replacement_disabled(
             assert "gnuconfig version of config.guess" not in f.read()
 
     @pytest.mark.disable_clean_stage_check
+    @pytest.mark.skipif(
+        str(archspec.cpu.host().family) != "x86_64", reason="test data is specific for x86_64"
+    )
     def test_autotools_gnuconfig_replacement_no_gnuconfig(self, mutable_database, monkeypatch):
         """
         Tests whether a useful error message is shown when patch_config_files is

@@ -25,7 +25,7 @@ def test_error_when_multiple_specs_are_given():
     assert "only takes one spec" in output
 
 
-@pytest.mark.parametrize("args", [("--", "/bin/bash", "-c", "echo test"), ("--",), ()])
+@pytest.mark.parametrize("args", [("--", "/bin/sh", "-c", "echo test"), ("--",), ()])
 @pytest.mark.usefixtures("config", "mock_packages", "working_env")
 def test_build_env_requires_a_spec(args):
     output = build_env(*args, fail_on_error=False)
@@ -35,7 +35,7 @@ def test_build_env_requires_a_spec(args):
 _out_file = "env.out"
 
 
-@pytest.mark.parametrize("shell", ["pwsh", "bat"] if sys.platform == "win32" else ["bash"])
+@pytest.mark.parametrize("shell", ["pwsh", "bat"] if sys.platform == "win32" else ["sh"])
 @pytest.mark.usefixtures("config", "mock_packages", "working_env")
 def test_dump(shell_as, shell, tmpdir):
     with tmpdir.as_cwd():
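Swapping /bin/bash for /bin/sh (and bash for sh) keeps these cases runnable on minimal systems where bash is not installed. The parametrized args mirror the command-line form of the command, roughly equivalent to (spec name illustrative):

    spack build-env zlib -- /bin/sh -c "echo test"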

Some files were not shown because too many files have changed in this diff Show More