Compare commits: backports/... → develop-20 (63 commits)

Commit SHA1s: 15dcd3c65c, 49c2894def, 1ae37f6720, 15f6368c7f, 57b63228ce, 13abfb7013,
b41fc1ec79, 124e41da23, f6039d1d45, 8871bd5ba5, efe85755d8, 7aaa17856d, fbf02b561a,
4027a2139b, f0ced1af42, 2e45edf4e3, 4bcfb01566, b8bb8a70ce, dd2b436b5a, da2cc2351c,
383ec19a0c, 45f8a0e42c, 4636a7f14f, 38f3f57a54, b17d7cd0e6, b5e2f23b6c, 7a4df732e1,
7e6aaf9458, 2d35d29e0f, 1baed0d833, cadc2a1aa5, 78449ba92b, 26d6bfbb7f, 3405fe60f1,
53c266b161, ed8ecc469e, b2840acd52, c35250b313, e114853115, 89fc9a9d47, afc693645a,
4ac0e511ad, b0355d6cc0, 300d53d6f8, 0b344e0fd3, 15adb308bf, 050d565375, f6ef2c254e,
62c27b1924, 2ff0766aa4, dc245e87f9, c1f134e2a0, 391940d2eb, 8c061e51e3, 5774df6b7a,
3a5c1eb5f3, 3a2ec729f7, a093f4a8ce, b8302a8277, 32f319157d, 75dfad8788, f3ba20db26,
6301edbd5d
.github/workflows/bootstrap.yml (vendored, 2 changes)

```diff
@@ -176,7 +176,7 @@ jobs:
     runs-on: ${{ matrix.macos-version }}
     strategy:
       matrix:
-        macos-version: ['macos-12']
+        macos-version: ['macos-11', 'macos-12']
     steps:
     - name: Install dependencies
       run: |
```
CHANGELOG.md (342 changes)

@@ -1,345 +1,3 @@

# v0.21.3 (2024-10-02)

## Bugfixes

- Forward compatibility with Spack 0.23 packages with language dependencies (#45205, #45191)
- Forward compatibility with `urllib` from Python 3.12.6+ (#46453, #46483)
- Bump `archspec` to 0.2.5-dev for better aarch64 and Windows support (#42854, #44005,
  #45721, #46445)
- Support macOS Sequoia (#45018, #45127, #43862)
- CI and test maintenance (#42909, #42728, #46711, #41943, #43363)

# v0.21.2 (2024-03-01)

## Bugfixes

- Containerize: accommodate nested or pre-existing spack-env paths (#41558)
- Fix setup-env script when going back and forth between instances (#40924)
- Fix using fully-qualified namespaces from root specs (#41957)
- Fix a bug when a required provider is requested for multiple virtuals (#42088)
- OCI buildcaches:
  - only push in parallel when forking (#42143)
  - use pickleable errors (#42160)
- Fix using sticky variants in externals (#42253)
- Fix a rare issue with conditional requirements and multi-valued variants (#42566)

## Package updates

- rust: add v1.75, rework a few variants (#41161, #41903)
- py-transformers: add v4.35.2 (#41266)
- mgard: fix OpenMP on AppleClang (#42933)

# v0.21.1 (2024-01-11)

## New features

- Add support for reading buildcaches created by Spack v0.22 (#41773)

## Bugfixes

- spack graph: fix coloring with environments (#41240)
- spack info: sort variants in --variants-by-name (#41389)
- Spec.format: error on old-style format strings (#41934)
- ASP-based solver:
  - fix infinite recursion when computing concretization errors (#41061)
  - don't error for type mismatch on preferences (#41138)
  - don't emit spurious debug output (#41218)
- Improve the error message for deprecated preferences (#41075)
- Fix MSVC preview version breaking clingo build on Windows (#41185)
- Fix multi-word aliases (#41126)
- Add a warning for unconfigured compiler (#41213)
- environment: fix an issue with deconcretization/reconcretization of specs (#41294)
- buildcache: don't error if a patch is missing when installing from binaries (#41986)
- Multiple improvements to unit tests (#41215, #41369, #41495, #41359, #41361, #41345, #41342, #41308, #41226)

## Package updates

- root: add a webgui patch to address security issue (#41404)
- BerkeleyGW: update source urls (#38218)

# v0.21.0 (2023-11-11)

`v0.21.0` is a major feature release.

## Features in this release

1. **Better error messages with condition chaining**

   In v0.18, we added better error messages that could tell you what problem happened,
   but they couldn't tell you *why* it happened. `0.21` adds *condition chaining* to the
   solver, and Spack can now trace back through the conditions that led to an error and
   build a tree of potential causes and where they came from. For example:

   ```console
   $ spack solve hdf5 ^cmake@3.0.1
   ==> Error: concretization failed for the following reasons:

      1. Cannot satisfy 'cmake@3.0.1'
      2. Cannot satisfy 'cmake@3.0.1'
         required because hdf5 ^cmake@3.0.1 requested from CLI
      3. Cannot satisfy 'cmake@3.18:' and 'cmake@3.0.1
         required because hdf5 ^cmake@3.0.1 requested from CLI
         required because hdf5 depends on cmake@3.18: when @1.13:
         required because hdf5 ^cmake@3.0.1 requested from CLI
      4. Cannot satisfy 'cmake@3.12:' and 'cmake@3.0.1
         required because hdf5 depends on cmake@3.12:
         required because hdf5 ^cmake@3.0.1 requested from CLI
         required because hdf5 ^cmake@3.0.1 requested from CLI
   ```

   More details in #40173.

2. **OCI build caches**

   You can now use an arbitrary [OCI](https://opencontainers.org) registry as a build
   cache:

   ```console
   $ spack mirror add my_registry oci://user/image # Dockerhub
   $ spack mirror add my_registry oci://ghcr.io/haampie/spack-test # GHCR
   $ spack mirror set --push --oci-username ... --oci-password ... my_registry # set login creds
   $ spack buildcache push my_registry [specs...]
   ```

   And you can optionally add a base image to get *runnable* images:

   ```console
   $ spack buildcache push --base-image ubuntu:23.04 my_registry python
   Pushed ... as [image]:python-3.11.2-65txfcpqbmpawclvtasuog4yzmxwaoia.spack

   $ docker run --rm -it [image]:python-3.11.2-65txfcpqbmpawclvtasuog4yzmxwaoia.spack
   ```

   This creates a container image from the Spack installations on the host system,
   without the need to run `spack install` from a `Dockerfile` or `sif` file. It also
   addresses the inconvenience of losing binaries of dependencies when `RUN spack
   install` fails inside `docker build`.

   Further, the container image layers and build cache tarballs are the same files. This
   means that `spack install` and `docker pull` use the exact same underlying binaries.
   If you previously used `spack install` inside of `docker build`, this feature helps
   you save storage by a factor of two.

   More details in #38358.

3. **Multiple versions of build dependencies**

   Increasingly, complex package builds require multiple versions of some build
   dependencies. For example, Python packages frequently require very specific versions
   of `setuptools` and `cython`, and sometimes different physics packages require
   different versions of Python to build. Previously, the concretizer enforced that
   every solve was *unified*, i.e., that there only be one version of every package.
   The concretizer now supports "duplicate" nodes for *build dependencies*, but enforces
   unification through transitive link and run dependencies. This will allow it to
   better resolve complex dependency graphs in ecosystems like Python, and it also
   gets us very close to modeling compilers as proper dependencies.

   This change required a major overhaul of the concretizer, as well as a number of
   performance optimizations. See #38447, #39621.

4. **Cherry-picking virtual dependencies**

   You can now select only a subset of virtual dependencies from a spec that may provide
   more. For example, if you want `mpich` to be your `mpi` provider, you can be explicit
   by writing:

   ```
   hdf5 ^[virtuals=mpi] mpich
   ```

   Or, if you want to use, e.g., `intel-parallel-studio` for `blas` along with an external
   `lapack` like `openblas`, you could write:

   ```
   strumpack ^[virtuals=blas] intel-parallel-studio+mkl ^[virtuals=lapack] openblas
   ```

   The `virtuals=mpi` is an edge attribute, and dependency edges in Spack graphs now
   track which virtuals they satisfied. More details in #17229 and #35322.

   Note for packaging: in Spack 0.21 `spec.satisfies("^virtual")` is true if and only if
   the package specifies `depends_on("virtual")`. This is different from Spack 0.20,
   where depending on a provider implied depending on the virtual provided. See #41002
   for an example where `^mkl` was being used to test for several `mkl` providers in a
   package that did not depend on `mkl`.
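
   As a minimal sketch of what that change means inside a hypothetical recipe (the
   package method and flag below are illustrative, not from the release notes):

   ```python
   def configure_args(self):
       # Spack 0.21: true only if this package declares depends_on("mkl"),
       # directly or via a dependency edge; in 0.20 this was also true
       # whenever a provider such as intel-mkl happened to be in the DAG.
       if self.spec.satisfies("^mkl"):
           return ["--with-mkl"]
       return []
   ```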

5. **License directive**

   Spack packages can now have license metadata, with the new `license()` directive:

   ```python
   license("Apache-2.0")
   ```

   Licenses use [SPDX identifiers](https://spdx.org/licenses), and you can use SPDX
   expressions to combine them:

   ```python
   license("Apache-2.0 OR MIT")
   ```

   Like other directives in Spack, it's conditional, so you can handle complex cases like
   Spack itself:

   ```python
   license("LGPL-2.1", when="@:0.11")
   license("Apache-2.0 OR MIT", when="@0.12:")
   ```

   More details in #39346, #40598.

6. **`spack deconcretize` command**

   We are getting close to having a `spack update` command for environments, but we're
   not quite there yet. This is the next best thing. `spack deconcretize` gives you
   control over what you want to update in an already concrete environment. If you have
   an environment built with, say, `meson`, and you want to update your `meson` version,
   you can run:

   ```console
   spack deconcretize meson
   ```

   and have everything that depends on `meson` rebuilt the next time you run `spack
   concretize`. In a future Spack version, we'll handle all of this in a single command,
   but for now you can use this to drop bits of your lockfile and resolve your
   dependencies again. More in #38803.

7. **UI Improvements**

   The venerable `spack info` command was looking shabby compared to the rest of Spack's
   UI, so we reworked it to have a bit more flair. `spack info` now makes much better
   use of terminal space and shows variants, their values, and their descriptions much
   more clearly. Conditional variants are grouped separately so you can more easily
   understand how packages are structured. More in #40998.

   `spack checksum` now allows you to filter versions from your editor, or by version
   range. It also notifies you about potential download URL changes. See #40403.

8. **Environments can include definitions**

   Spack did not previously support using `include:` with the
   [definitions](https://spack.readthedocs.io/en/latest/environments.html#spec-list-references)
   section of an environment, but now it does. You can use this to curate lists of specs
   and more easily reuse them across environments. See #33960.
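
   As a rough illustration of the workflow (the file name and list name below are
   hypothetical, not from the release notes), a curated spec list can live in an
   included file and be referenced from the environment:

   ```yaml
   # definitions.yaml (hypothetical included file)
   definitions:
   - my_solvers: [petsc, hypre, trilinos]
   ---
   # spack.yaml, pulling the list in via include
   spack:
     include:
     - definitions.yaml
     specs:
     - $my_solvers
   ```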

9. **Aliases**

   You can now add aliases to Spack commands in `config.yaml`, e.g. this might enshrine
   your favorite args to `spack find` as `spack f`:

   ```yaml
   config:
     aliases:
       f: find -lv
   ```

   See #17229.

10. **Improved autoloading of modules**

    Spack 0.20 was the first release to enable autoloading of direct dependencies in
    module files.

    The downside of this was that `module avail` and `module load` tab completion would
    show users too many modules to choose from, and many users disabled generating
    modules for dependencies through `exclude_implicits: true`. Further, it was
    necessary to keep hashes in module names to avoid file name clashes.

    In this release, you can start using `hide_implicits: true` instead, which exposes
    only explicitly installed packages to the user, while still autoloading
    dependencies. On top of that, you can safely use `hash_length: 0`, as this config
    now only applies to the modules exposed to the user -- you don't have to worry about
    file name clashes for hidden dependencies.

    Note: for `tcl` this feature requires Modules 4.7 or higher.
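
    A minimal `modules.yaml` sketch of what that might look like (the exact nesting
    under the `default` module set and the `tcl` module type is an assumption here;
    check the modules documentation for your Spack version):

    ```yaml
    modules:
      default:
        tcl:
          hide_implicits: true  # only explicitly installed packages get visible modules
          hash_length: 0        # safe now: applies only to the visible modules
    ```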

11. **Updated container labeling**

    Nightly Docker images from the `develop` branch will now be tagged as `:develop` and
    `:nightly`. The `:latest` tag is no longer associated with `:develop`, but with the
    latest stable release. Releases will be tagged with `:{major}`, `:{major}.{minor}`
    and `:{major}.{minor}.{patch}`. `ubuntu:18.04` has also been removed from the list of
    generated Docker images, as it is no longer supported. See #40593.

## Other new commands and directives

* `spack env activate` without arguments now loads a `default` environment that you do
  not have to create (#40756); see the snippet after this list.
* `spack find -H` / `--hashes`: a new shortcut for piping `spack find` output to
  other commands (#38663)
* Add `spack checksum --verify`, fix `--add` (#38458)
* New `default_args` context manager factors out common args for directives (#39964)
* `spack compiler find --[no]-mixed-toolchain` lets you easily mix `clang` and
  `gfortran` on Linux (#40902)
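
For instance, the first two items combine into a short session like this (a sketch;
output is omitted since it depends on what is installed):

```console
$ spack env activate        # enters the built-in 'default' environment
$ spack install zlib
$ spack find -H             # prints one hash per installed spec, handy for piping
```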

## Performance improvements

* `spack external find` execution is now much faster (#39843)
* `spack location -i` now much faster on success (#40898)
* Drop redundant rpaths post install (#38976)
* ASP-based solver: avoid cycles in clingo using hidden directive (#40720)
* Fix multiple quadratic complexity issues in environments (#38771)

## Other new features of note

* archspec: update to v0.2.2, support for Sapphire Rapids, Power10, Neoverse V2 (#40917)
* Propagate variants across nodes that don't have that variant (#38512)
* Implement fish completion (#29549)
* Can now distinguish between source/binary mirror; don't ping mirror.spack.io as much (#34523)
* Improve status reporting on install (add [n/total] display) (#37903)

## Windows

This release has the best Windows support of any Spack release yet, with numerous
improvements and much larger swaths of tests passing:

* MSVC and SDK improvements (#37711, #37930, #38500, #39823, #39180)
* Windows external finding: update default paths; treat .bat as executable on Windows (#39850)
* Windows decompression: fix removal of intermediate file (#38958)
* Windows: executable/path handling (#37762)
* Windows build systems: use ninja and enable tests (#33589)
* Windows testing (#36970, #36972, #36973, #36840, #36977, #36792, #36834, #34696, #36971)
* Windows PowerShell support (#39118, #37951)
* Windows symlinking and libraries (#39933, #38599, #34701, #38578, #34701)

## Notable refactors

* User-specified flags take precedence over others in Spack compiler wrappers (#37376)
* Improve setup of build, run, and test environments (#35737, #40916)
* `make` is no longer a required system dependency of Spack (#40380)
* Support Python 3.12 (#40404, #40155, #40153)
* docs: Replace package list with packages.spack.io (#40251)
* Drop Python 2 constructs in Spack (#38720, #38718, #38703)

## Binary cache and stack updates

* e4s arm stack: duplicate and target neoverse v1 (#40369)
* Add macOS ML CI stacks (#36586)
* E4S Cray CI Stack (#37837)
* e4s cray: expand spec list (#38947)
* e4s cray sles ci: expand spec list (#39081)

## Removals, deprecations, and syntax changes

* ASP: targets, compilers and providers soft-preferences are only global (#31261)
* Parser: fix ambiguity with whitespace in version ranges (#40344)
* Module file generation is disabled by default; you'll need to enable it to use it (#37258)
* Remove deprecated "extra_instructions" option for containers (#40365)
* Stand-alone test feature deprecation postponed to v0.22 (#40600)
* buildcache push: make `--allow-root` the default and deprecate the option (#38878)

## Notable Bugfixes

* Bugfix: propagation of multivalued variants (#39833)
* Allow `/` in git versions (#39398)
* Fetch & patch: actually acquire stage lock, and many more issues (#38903)
* Environment/depfile: better escaping of targets with Git versions (#37560)
* Prevent "spack external find" from erroring out on wrong permissions (#38755)
* lmod: allow core compiler to be specified with a version range (#37789)

## Spack community stats

* 7,469 total packages, 303 new since `v0.20.0`
* 150 new Python packages
* 34 new R packages
* 353 people contributed to this release
* 336 committers to packages
* 65 committers to core

# v0.20.3 (2023-10-31)

## Bugfixes

```diff
@@ -37,11 +37,7 @@ to enable reuse for a single installation, and you can use:
 
    spack install --fresh <spec>
 
 to do a fresh install if ``reuse`` is enabled by default.
-``reuse: dependencies`` is the default.
-
-.. seealso::
-
-   FAQ: :ref:`Why does Spack pick particular versions and variants? <faq-concretizer-precedence>`
+``reuse: true`` is the default.
 
 ------------------------------------------
 Selection of the target microarchitectures
 ------------------------------------------
```
@@ -103,3 +99,547 @@ while `py-numpy` still needs an older version:

Up to Spack v0.20 ``duplicates:strategy:none`` was the default (and only) behavior.
From Spack v0.21 the default behavior is ``duplicates:strategy:minimal``.
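
For reference, a minimal ``concretizer.yaml`` sketch selecting that behavior
explicitly (shown only as an illustration; ``minimal`` is already the v0.21 default):

.. code-block:: yaml

   concretizer:
     duplicates:
       strategy: minimal  # allow duplicate nodes only for build dependencies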

.. _build-settings:

================================
Package Settings (packages.yaml)
================================

Spack allows you to customize how your software is built through the
``packages.yaml`` file. Using it, you can make Spack prefer particular
implementations of virtual dependencies (e.g., MPI or BLAS/LAPACK),
or you can make it prefer to build with particular compilers. You can
also tell Spack to use *external* software installations already
present on your system.

At a high level, the ``packages.yaml`` file is structured like this:

.. code-block:: yaml

   packages:
     package1:
       # settings for package1
     package2:
       # settings for package2
     # ...
     all:
       # settings that apply to all packages.

So you can either set build preferences specifically for *one* package,
or you can specify that certain settings should apply to *all* packages.
The types of settings you can customize are described in detail below.

Spack's build defaults are in the default
``etc/spack/defaults/packages.yaml`` file. You can override them in
``~/.spack/packages.yaml`` or ``etc/spack/packages.yaml``. For more
details on how this works, see :ref:`configuration-scopes`.

.. _sec-external-packages:

-----------------
External Packages
-----------------

Spack can be configured to use externally-installed
packages rather than building its own packages. This may be desirable
if machines ship with system packages, such as a customized MPI
that should be used instead of Spack building its own MPI.

External packages are configured through the ``packages.yaml`` file.
Here's an example of an external configuration:

.. code-block:: yaml

   packages:
     openmpi:
       externals:
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.4.3
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
         prefix: /opt/openmpi-1.4.3-debug
       - spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.6.5-intel

This example lists three installations of OpenMPI, one built with GCC,
one built with GCC and debug information, and another built with Intel.
If Spack is asked to build a package that uses one of these MPIs as a
dependency, it will use the pre-installed OpenMPI in
the given directory. Note that the specified path is the top-level
install prefix, not the ``bin`` subdirectory.

``packages.yaml`` can also be used to specify modules to load instead
of the installation prefixes. The following example says that module
``CMake/3.7.2`` provides cmake version 3.7.2.

.. code-block:: yaml

   cmake:
     externals:
     - spec: cmake@3.7.2
       modules:
       - CMake/3.7.2

Each ``packages.yaml`` begins with a ``packages:`` attribute, followed
by a list of package names. To specify externals, add an ``externals:``
attribute under the package name, which lists externals.
Each external should specify a ``spec:`` string that should be as
well-defined as reasonably possible. If a
package lacks a spec component, such as missing a compiler or
package version, then Spack will guess the missing component based
on its most-favored packages, and it may guess incorrectly.

Each package version and compiler listed in an external should
have entries in Spack's packages and compiler configuration, even
though the package and compiler may not ever be built.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Prevent packages from being built from sources
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Adding an external spec in ``packages.yaml`` allows Spack to use an external location,
but it does not prevent Spack from building packages from sources. In the above example,
Spack might choose for many valid reasons to start building and linking with the
latest version of OpenMPI rather than continue using the pre-installed OpenMPI versions.

To prevent this, the ``packages.yaml`` configuration also allows packages
to be flagged as non-buildable. The previous example could be modified to
be:

.. code-block:: yaml

   packages:
     openmpi:
       externals:
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.4.3
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
         prefix: /opt/openmpi-1.4.3-debug
       - spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.6.5-intel
       buildable: False

The addition of the ``buildable`` flag tells Spack that it should never build
its own version of OpenMPI from sources, and it will instead always rely on a pre-built
OpenMPI.

.. note::

   If ``concretizer:reuse`` is on (see :ref:`concretizer-options` for more information on that flag)
   pre-built specs include specs already available from a local store, an upstream store, a registered
   buildcache or specs marked as externals in ``packages.yaml``. If ``concretizer:reuse`` is off, only
   external specs in ``packages.yaml`` are included in the list of pre-built specs.

If an external module is specified as not buildable, then Spack will load the
external module into the build environment, where it can be used for linking.

The ``buildable`` flag does not need to be paired with external packages.
It could also be used alone to forbid packages that may be
buggy or otherwise undesirable.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Non-buildable virtual packages
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Virtual packages in Spack can also be specified as not buildable, and
external implementations can be provided. In the example above,
OpenMPI is configured as not buildable, but Spack will often prefer
other MPI implementations over the externally available OpenMPI. Spack
could be configured with every MPI provider marked not buildable
individually, but it is more convenient to flag the virtual package:

.. code-block:: yaml

   packages:
     mpi:
       buildable: False
     openmpi:
       externals:
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.4.3
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
         prefix: /opt/openmpi-1.4.3-debug
       - spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.6.5-intel

Spack can then use any of the listed external implementations of MPI
to satisfy a dependency, and will choose depending on the compiler and
architecture.

In cases where the concretizer is configured to reuse specs, and other ``mpi`` providers
(available via stores or buildcaches) are not wanted, Spack can be configured to require
specs matching only the available externals:

.. code-block:: yaml

   packages:
     mpi:
       buildable: False
       require:
       - one_of: [
           "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64",
           "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug",
           "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
         ]
     openmpi:
       externals:
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.4.3
       - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
         prefix: /opt/openmpi-1.4.3-debug
       - spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
         prefix: /opt/openmpi-1.6.5-intel

This configuration prevents any spec using MPI and originating from stores or buildcaches
from being reused, unless it matches the requirements under ``packages:mpi:require``.
For more information on requirements see :ref:`package-requirements`.

.. _cmd-spack-external-find:

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Automatically Find External Packages
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can run the :ref:`spack external find <spack-external-find>` command
to search for system-provided packages and add them to ``packages.yaml``.
After running this command your ``packages.yaml`` may include new entries:

.. code-block:: yaml

   packages:
     cmake:
       externals:
       - spec: cmake@3.17.2
         prefix: /usr

Generally this is useful for detecting a small set of commonly-used packages;
for now this is generally limited to finding build-only dependencies.
Specific limitations include:

* Packages are not discoverable by default: For a package to be
  discoverable with ``spack external find``, it needs to add special
  logic. See :ref:`here <make-package-findable>` for more details.
* The logic does not search through module files; it can only detect
  packages with executables defined in ``PATH``. You can help Spack locate
  externals which use module files by loading any associated modules for
  packages that you want Spack to know about before running
  ``spack external find``.
* Spack does not overwrite existing entries in the package configuration:
  If there is an external defined for a spec at any configuration scope,
  then Spack will not add a new external entry (``spack config blame packages``
  can help locate all external entries).
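
A typical invocation looks like the following (a sketch; the set of detected
packages depends on the host system):

.. code-block:: console

   $ spack external find          # scan PATH for all detectable packages
   $ spack external find cmake    # or restrict the search to specific packages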

.. _package-requirements:

--------------------
Package Requirements
--------------------

Spack can be configured to always use certain compilers, package
versions, and variants during concretization through package
requirements.

Package requirements are useful when you find yourself repeatedly
specifying the same constraints on the command line, and wish that
Spack respects these constraints whether you mention them explicitly
or not. Another use case is specifying constraints that should apply
to all root specs in an environment, without having to repeat the
constraint everywhere.

Apart from that, requirements config is more flexible than constraints
on the command line, because it can specify constraints on packages
*when they occur* as a dependency. In contrast, on the command line it
is not possible to specify constraints on dependencies while also keeping
those dependencies optional.

^^^^^^^^^^^^^^^^^^^
Requirements syntax
^^^^^^^^^^^^^^^^^^^

The package requirements configuration is specified in ``packages.yaml``,
keyed by package name and expressed using the Spec syntax. In the simplest
case you can specify attributes that you always want the package to have
by providing a single spec string to ``require``:

.. code-block:: yaml

   packages:
     libfabric:
       require: "@1.13.2"

In the above example, ``libfabric`` will always build with version 1.13.2. If you
need to compose multiple configuration scopes ``require`` accepts a list of
strings:

.. code-block:: yaml

   packages:
     libfabric:
       require:
       - "@1.13.2"
       - "%gcc"

In this case ``libfabric`` will always build with version 1.13.2 **and** using GCC
as a compiler.

For more complex use cases, ``require`` also accepts a list of objects. These objects
must have either an ``any_of`` or a ``one_of`` field containing a list of spec strings,
and they can optionally have a ``when`` and a ``message`` attribute:

.. code-block:: yaml

   packages:
     openmpi:
       require:
       - any_of: ["@4.1.5", "%gcc"]
         message: "in this example only 4.1.5 can build with other compilers"

``any_of`` is a list of specs. One of those specs must be satisfied,
and it is also allowed for the concretized spec to match more than one.
In the above example, that means you could build ``openmpi@4.1.5%gcc``,
``openmpi@4.1.5%clang`` or ``openmpi@3.9%gcc``, but
not ``openmpi@3.9%clang``.

If a custom message is provided, and the requirement is not satisfiable,
Spack will print the custom error message:

.. code-block:: console

   $ spack spec openmpi@3.9%clang
   ==> Error: in this example only 4.1.5 can build with other compilers

We could express a similar requirement using the ``when`` attribute:

.. code-block:: yaml

   packages:
     openmpi:
       require:
       - any_of: ["%gcc"]
         when: "@:4.1.4"
         message: "in this example only 4.1.5 can build with other compilers"

In the example above, if the version turns out to be 4.1.4 or less, we require the
compiler to be GCC. For readability, Spack also allows a ``spec`` key accepting a
string when there is only a single constraint:

.. code-block:: yaml

   packages:
     openmpi:
       require:
       - spec: "%gcc"
         when: "@:4.1.4"
         message: "in this example only 4.1.5 can build with other compilers"

This code snippet and the one before it are semantically equivalent.

Finally, instead of ``any_of`` you can use ``one_of``, which also takes a list of
specs. The final concretized spec must match one and only one of them:

.. code-block:: yaml

   packages:
     mpich:
       require:
       - one_of: ["+cuda", "+rocm"]

In the example above, that means you could build ``mpich+cuda`` or ``mpich+rocm``
but not ``mpich+cuda+rocm``.

.. note::

   For ``any_of`` and ``one_of``, the order of specs indicates a
   preference: items that appear earlier in the list are preferred
   (note that these preferences can be ignored in favor of others).

.. note::

   When using a conditional requirement, Spack is allowed to actively avoid the triggering
   condition (the ``when=...`` spec) if that leads to a concrete spec with better scores in
   the optimization criteria. To check the current optimization criteria and their
   priorities you can run ``spack solve zlib``.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting default requirements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also set default requirements for all packages under ``all``
like this:

.. code-block:: yaml

   packages:
     all:
       require: '%clang'

which means every spec will be required to use ``clang`` as a compiler.

Note that in this case ``all`` represents a *default set of requirements* -
if there are specific package requirements, then the default requirements
under ``all`` are disregarded. For example, with a configuration like this:

.. code-block:: yaml

   packages:
     all:
       require: '%clang'
     cmake:
       require: '%gcc'

Spack requires ``cmake`` to use ``gcc`` and all other nodes (including ``cmake``
dependencies) to use ``clang``.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting requirements on virtual specs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A requirement on a virtual spec applies whenever that virtual is present in the DAG.
This can be useful for fixing which virtual provider you want to use:

.. code-block:: yaml

   packages:
     mpi:
       require: 'mvapich2 %gcc'

With the configuration above the only allowed ``mpi`` provider is ``mvapich2 %gcc``.

Requirements on the virtual spec and on the specific provider are both applied, if
present. For instance with a configuration like:

.. code-block:: yaml

   packages:
     mpi:
       require: 'mvapich2 %gcc'
     mvapich2:
       require: '~cuda'

you will use ``mvapich2~cuda %gcc`` as an ``mpi`` provider.

.. _package-preferences:

-------------------
Package Preferences
-------------------

In some cases package requirements can be too strong, and package
preferences are the better option. Package preferences do not impose
constraints on packages for particular versions or variant values;
they only set defaults. The concretizer is free to change
them if it must, due to other constraints, and also prefers reusing
installed packages over building new ones that are a better match for
preferences.

Most package preferences (``compilers``, ``target`` and ``providers``)
can only be set globally under the ``all`` section of ``packages.yaml``:

.. code-block:: yaml

   packages:
     all:
       compiler: [gcc@12.2.0, clang@12:, oneapi@2023:]
       target: [x86_64_v3]
       providers:
         mpi: [mvapich2, mpich, openmpi]

These preferences override Spack's default and effectively reorder priorities
when looking for the best compiler, target or virtual package provider. Each
preference takes an ordered list of spec constraints, with earlier entries in
the list being preferred over later entries.

In the example above all packages prefer to be compiled with ``gcc@12.2.0``,
to target the ``x86_64_v3`` microarchitecture and to use ``mvapich2`` if they
depend on ``mpi``.

The ``variants`` and ``version`` preferences can be set under
package-specific sections of the ``packages.yaml`` file:

.. code-block:: yaml

   packages:
     opencv:
       variants: +debug
     gperftools:
       version: [2.2, 2.4, 2.3]

In this case, the preference for ``opencv`` is to build with debug options, while
``gperftools`` prefers version 2.2 over 2.4.

Any preference can be overwritten on the command line if explicitly requested.

Preferences cannot overcome explicit constraints, as they only set a preferred
ordering among homogeneous attribute values. Going back to the example, if
``gperftools@2.3:`` was requested, then Spack will install version 2.4
since the most preferred version 2.2 is prohibited by the version constraint.

.. _package_permissions:

-------------------
Package Permissions
-------------------

Spack can be configured to assign permissions to the files installed
by a package.

In the ``packages.yaml`` file under ``permissions``, the attributes
``read``, ``write``, and ``group`` control the package
permissions. These attributes can be set per-package, or for all
packages under ``all``. If permissions are set under ``all`` and for a
specific package, the package-specific settings take precedence.

The ``read`` and ``write`` attributes take one of ``user``, ``group``,
and ``world``.

.. code-block:: yaml

   packages:
     all:
       permissions:
         write: group
         group: spack
     my_app:
       permissions:
         read: group
         group: my_team

The permissions settings describe the broadest level of access to
installations of the specified packages. The execute permissions of
the file are set to the same level as read permissions for those files
that are executable. The default setting for ``read`` is ``world``,
and for ``write`` is ``user``. In the example above, installations of
``my_app`` will be installed with user and group permissions but no
world permissions, and owned by the group ``my_team``. All other
packages will be installed with user and group write privileges, and
world read privileges. Those packages will be owned by the group
``spack``.

The ``group`` attribute assigns a Unix-style group to a package. All
files installed by the package will be owned by the assigned group,
and the sticky group bit will be set on the install prefix and all
directories inside the install prefix. This will ensure that even
manually placed files within the install prefix are owned by the
assigned group. If no group is assigned, Spack will allow the OS
default behavior to go as expected.

----------------------------
Assigning Package Attributes
----------------------------

You can assign class-level attributes in the configuration:

.. code-block:: yaml

   packages:
     mpileaks:
       # Override existing attributes
       url: http://www.somewhereelse.com/mpileaks-1.0.tar.gz
       # ... or add new ones
       x: 1

Attributes set this way will be accessible to any method executed
in the package.py file (e.g. the ``install()`` method). Values for these
attributes may be any value parseable by YAML.

These can only be applied to specific packages, not "all" or
virtual packages.
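
To make the mechanism concrete, here is a sketch of how the ``mpileaks`` recipe
above could read the injected attribute (hypothetical code, not part of the docs
being diffed):

.. code-block:: python

   from spack.package import *


   class Mpileaks(AutotoolsPackage):
       # The "url" and "x" values from packages.yaml above are attached to
       # this class before any method runs.

       def install(self, spec, prefix):
           print(self.url)  # the overridden download URL
           print(self.x)    # 1, injected from the configuration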

```diff
@@ -392,7 +392,7 @@ See section
 :ref:`Configuration Scopes <configuration-scopes>`
 for an explanation about the different files
 and section
-:ref:`Build customization <packages-config>`
+:ref:`Build customization <build-settings>`
 for specifics and examples for ``packages.yaml`` files.
 
 .. If your system administrator did not provide modules for pre-installed Intel
```
```diff
@@ -17,7 +17,7 @@ case you want to skip directly to specific docs:
 * :ref:`config.yaml <config-yaml>`
 * :ref:`mirrors.yaml <mirrors>`
 * :ref:`modules.yaml <modules>`
-* :ref:`packages.yaml <packages-config>`
+* :ref:`packages.yaml <build-settings>`
 * :ref:`repos.yaml <repositories>`
 
 You can also add any of these as inline configuration in the YAML
```
```diff
@@ -243,9 +243,11 @@ lower-precedence settings. Completely ignoring higher-level configuration
 options is supported with the ``::`` notation for keys (see
 :ref:`config-overrides` below).
 
-There are also special notations for string concatenation and precedence override.
-Using the ``+:`` notation can be used to force *prepending* strings or lists. For lists, this is identical
-to the default behavior. Using the ``-:`` works similarly, but for *appending* values.
+There are also special notations for string concatenation and precedence override:
+
+* ``+:`` will force *prepending* strings or lists. For lists, this is the default behavior.
+* ``-:`` works similarly, but for *appending* values.
 
 :ref:`config-prepend-append`
 
 ^^^^^^^^^^^
```
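
As an illustration of those operators (a sketch with made-up values; see the linked
section for the authoritative examples), two configuration scopes might combine
like this:

```yaml
# lower-precedence scope
config:
  build_stage: [/var/spack/stage]
---
# higher-precedence scope: "+:" forces prepending, so the merged result
# would be [/tmp/fast-stage, /var/spack/stage]
config:
  build_stage+: [/tmp/fast-stage]
```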
@@ -1,77 +0,0 @@

.. Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
   Spack Project Developers. See the top-level COPYRIGHT file for details.

   SPDX-License-Identifier: (Apache-2.0 OR MIT)

==========================
Frequently Asked Questions
==========================

This page contains answers to frequently asked questions about Spack.
If you have questions that are not answered here, feel free to ask on
`Slack <https://slack.spack.io>`_ or `GitHub Discussions
<https://github.com/spack/spack/discussions>`_. If you've learned the
answer to a question that you think should be here, please consider
contributing to this page.

.. _faq-concretizer-precedence:

-----------------------------------------------------
Why does Spack pick particular versions and variants?
-----------------------------------------------------

This question comes up in a variety of forms:

1. Why does Spack seem to ignore my package preferences from ``packages.yaml`` config?
2. Why does Spack toggle a variant instead of using the default from the ``package.py`` file?

The short answer is that Spack always picks an optimal configuration
based on a complex set of criteria\ [#f1]_. These criteria are more nuanced
than always choosing the latest versions or default variants.

.. note::

   As a rule of thumb: requirements + constraints > reuse > preferences > defaults.

The following set of criteria (from lowest to highest precedence) explains
common cases where concretization output may seem surprising at first.

1. :ref:`Package preferences <package-preferences>` configured in ``packages.yaml``
   override variant defaults from ``package.py`` files, and influence the optimal
   ordering of versions. Preferences are specified as follows:

   .. code-block:: yaml

      packages:
        foo:
          version: [1.0, 1.1]
          variants: ~mpi

2. :ref:`Reuse concretization <concretizer-options>` configured in ``concretizer.yaml``
   overrides preferences, since it's typically faster to reuse an existing spec than to
   build a preferred one from sources. When build caches are enabled, specs may be reused
   from a remote location too. Reuse concretization is configured as follows:

   .. code-block:: yaml

      concretizer:
        reuse: dependencies  # other options are 'true' and 'false'

3. :ref:`Package requirements <package-requirements>` configured in ``packages.yaml``,
   and constraints from the command line as well as ``package.py`` files override all
   of the above. Requirements are specified as follows:

   .. code-block:: yaml

      packages:
        foo:
          require:
          - "@1.2: +mpi"

Requirements and constraints restrict the set of possible solutions, while reuse
behavior and preferences influence what an optimal solution looks like.

.. rubric:: Footnotes

.. [#f1] The exact list of criteria can be retrieved with the ``spack solve`` command
```diff
@@ -55,7 +55,6 @@ or refer to the full manual below.
 
    getting_started
    basic_usage
    replace_conda_homebrew
-   frequently_asked_questions
 
 .. toctree::
    :maxdepth: 2
@@ -71,7 +70,7 @@ or refer to the full manual below.
 
    configuration
    config_yaml
-   packages_yaml
+   bootstrapping
    build_settings
    environments
    containers
@@ -79,7 +78,6 @@ or refer to the full manual below.
 
    module_file_support
    repositories
    binary_caches
-   bootstrapping
    command_index
    chain
    extensions
```
@@ -1,560 +0,0 @@
|
||||
.. Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
|
||||
Spack Project Developers. See the top-level COPYRIGHT file for details.
|
||||
|
||||
SPDX-License-Identifier: (Apache-2.0 OR MIT)
|
||||
|
||||
|
||||
.. _packages-config:
|
||||
|
||||
================================
|
||||
Package Settings (packages.yaml)
|
||||
================================
|
||||
|
||||
Spack allows you to customize how your software is built through the
|
||||
``packages.yaml`` file. Using it, you can make Spack prefer particular
|
||||
implementations of virtual dependencies (e.g., MPI or BLAS/LAPACK),
|
||||
or you can make it prefer to build with particular compilers. You can
|
||||
also tell Spack to use *external* software installations already
|
||||
present on your system.
|
||||
|
||||
At a high level, the ``packages.yaml`` file is structured like this:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
packages:
|
||||
package1:
|
||||
# settings for package1
|
||||
package2:
|
||||
# settings for package2
|
||||
# ...
|
||||
all:
|
||||
# settings that apply to all packages.
|
||||
|
||||
So you can either set build preferences specifically for *one* package,
|
||||
or you can specify that certain settings should apply to *all* packages.
|
||||
The types of settings you can customize are described in detail below.
|
||||
|
||||
Spack's build defaults are in the default
|
||||
``etc/spack/defaults/packages.yaml`` file. You can override them in
|
||||
``~/.spack/packages.yaml`` or ``etc/spack/packages.yaml``. For more
|
||||
details on how this works, see :ref:`configuration-scopes`.
|
||||
|
||||
.. _sec-external-packages:
|
||||
|
||||
-----------------
|
||||
External Packages
|
||||
-----------------
|
||||
|
||||
Spack can be configured to use externally-installed
|
||||
packages rather than building its own packages. This may be desirable
|
||||
if machines ship with system packages, such as a customized MPI
|
||||
that should be used instead of Spack building its own MPI.
|
||||
|
||||
External packages are configured through the ``packages.yaml`` file.
|
||||
Here's an example of an external configuration:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
packages:
|
||||
openmpi:
|
||||
externals:
|
||||
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
|
||||
prefix: /opt/openmpi-1.4.3
|
||||
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
|
||||
prefix: /opt/openmpi-1.4.3-debug
|
||||
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
|
||||
prefix: /opt/openmpi-1.6.5-intel
|
||||
|
||||
This example lists three installations of OpenMPI, one built with GCC,
|
||||
one built with GCC and debug information, and another built with Intel.
|
||||
If Spack is asked to build a package that uses one of these MPIs as a
|
||||
dependency, it will use the pre-installed OpenMPI in
|
||||
the given directory. Note that the specified path is the top-level
|
||||
install prefix, not the ``bin`` subdirectory.
|
||||
|
||||
``packages.yaml`` can also be used to specify modules to load instead
|
||||
of the installation prefixes. The following example says that module
|
||||
``CMake/3.7.2`` provides cmake version 3.7.2.
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
cmake:
|
||||
externals:
|
||||
- spec: cmake@3.7.2
|
||||
modules:
|
||||
- CMake/3.7.2
|
||||
|
||||
Each ``packages.yaml`` begins with a ``packages:`` attribute, followed
|
||||
by a list of package names. To specify externals, add an ``externals:``
|
||||
attribute under the package name, which lists externals.
|
||||
Each external should specify a ``spec:`` string that should be as
|
||||
well-defined as reasonably possible. If a
|
||||
package lacks a spec component, such as missing a compiler or
|
||||
package version, then Spack will guess the missing component based
|
||||
on its most-favored packages, and it may guess incorrectly.
|
||||
|
||||
Each package version and compiler listed in an external should
|
||||
have entries in Spack's packages and compiler configuration, even
|
||||
though the package and compiler may not ever be built.
|
||||
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
Prevent packages from being built from sources
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Adding an external spec in ``packages.yaml`` allows Spack to use an external location,
|
||||
but it does not prevent Spack from building packages from sources. In the above example,
|
||||
Spack might choose for many valid reasons to start building and linking with the
|
||||
latest version of OpenMPI rather than continue using the pre-installed OpenMPI versions.
|
||||
|
||||
To prevent this, the ``packages.yaml`` configuration also allows packages
|
||||
to be flagged as non-buildable. The previous example could be modified to
|
||||
be:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
packages:
|
||||
openmpi:
|
||||
externals:
|
||||
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
|
||||
prefix: /opt/openmpi-1.4.3
|
||||
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
|
||||
prefix: /opt/openmpi-1.4.3-debug
|
||||
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
|
||||
prefix: /opt/openmpi-1.6.5-intel
|
||||
buildable: False
|
||||
|
||||
The addition of the ``buildable`` flag tells Spack that it should never build
|
||||
its own version of OpenMPI from sources, and it will instead always rely on a pre-built
|
||||
OpenMPI.
|
||||
|
||||
.. note::
|
||||
|
||||
If ``concretizer:reuse`` is on (see :ref:`concretizer-options` for more information on that flag)
|
||||
pre-built specs include specs already available from a local store, an upstream store, a registered
|
||||
buildcache or specs marked as externals in ``packages.yaml``. If ``concretizer:reuse`` is off, only
|
||||
external specs in ``packages.yaml`` are included in the list of pre-built specs.
|
||||
|
||||
If an external module is specified as not buildable, then Spack will load the
|
||||
external module into the build environment which can be used for linking.
|
||||
|
||||
The ``buildable`` does not need to be paired with external packages.
|
||||
It could also be used alone to forbid packages that may be
|
||||
buggy or otherwise undesirable.
|
||||
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
Non-buildable virtual packages
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Virtual packages in Spack can also be specified as not buildable, and
|
||||
external implementations can be provided. In the example above,
|
||||
OpenMPI is configured as not buildable, but Spack will often prefer
|
||||
other MPI implementations over the externally available OpenMPI. Spack
|
||||
can be configured with every MPI provider not buildable individually,
|
||||
but more conveniently:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
packages:
|
||||
mpi:
|
||||
buildable: False
|
||||
openmpi:
|
||||
externals:
|
||||
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
|
||||
prefix: /opt/openmpi-1.4.3
|
||||
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
|
||||
prefix: /opt/openmpi-1.4.3-debug
|
||||
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
|
||||
prefix: /opt/openmpi-1.6.5-intel
|
||||
|
||||
Spack can then use any of the listed external implementations of MPI
|
||||
to satisfy a dependency, and will choose depending on the compiler and
|
||||
architecture.
|
||||
|
||||
In cases where the concretizer is configured to reuse specs, and other ``mpi`` providers
|
||||
(available via stores or buildcaches) are not wanted, Spack can be configured to require
|
||||
specs matching only the available externals:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
packages:
|
||||
mpi:
|
||||
buildable: False
|
||||
require:
|
||||
- one_of: [
|
||||
"openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64",
|
||||
"openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug",
|
||||
"openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
|
||||
]
|
||||
openmpi:
|
||||
externals:
|
||||
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
|
||||
prefix: /opt/openmpi-1.4.3
|
||||
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
|
||||
prefix: /opt/openmpi-1.4.3-debug
|
||||
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
|
||||
prefix: /opt/openmpi-1.6.5-intel
|
||||
|
||||
This configuration prevents any spec using MPI and originating from stores or buildcaches to be reused,
|
||||
unless it matches the requirements under ``packages:mpi:require``. For more information on requirements see
|
||||
:ref:`package-requirements`.
|
||||
|
||||
.. _cmd-spack-external-find:
|
||||
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
Automatically Find External Packages
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
You can run the :ref:`spack external find <spack-external-find>` command
|
||||
to search for system-provided packages and add them to ``packages.yaml``.
|
||||
After running this command your ``packages.yaml`` may include new entries:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
packages:
|
||||
cmake:
|
||||
externals:
|
||||
- spec: cmake@3.17.2
|
||||
prefix: /usr

This is useful for detecting a small set of commonly-used packages; for now
detection is mostly limited to finding build-only dependencies. Specific
limitations include:

* Packages are not discoverable by default: For a package to be
  discoverable with ``spack external find``, it needs to add special
  logic. See :ref:`here <make-package-findable>` for more details.

* The logic does not search through module files; it can only detect
  packages with executables defined in ``PATH``. You can help Spack locate
  externals which use module files by loading any associated modules for
  packages that you want Spack to know about before running
  ``spack external find``.

* Spack does not overwrite existing entries in the package configuration:
  If there is an external defined for a spec at any configuration scope,
  then Spack will not add a new external entry (``spack config blame packages``
  can help locate all external entries).

.. _package-requirements:

--------------------
Package Requirements
--------------------

Spack can be configured to always use certain compilers, package
versions, and variants during concretization through package
requirements.

Package requirements are useful when you find yourself repeatedly
specifying the same constraints on the command line, and wish that
Spack would respect these constraints whether you mention them explicitly
or not. Another use case is specifying constraints that should apply
to all root specs in an environment, without having to repeat the
constraint everywhere.

Requirements are also more flexible than constraints on the command line,
because they can constrain a package *when it occurs* as a dependency. In
contrast, on the command line it is not possible to specify constraints on
dependencies while also keeping those dependencies optional.

.. seealso::

   FAQ: :ref:`Why does Spack pick particular versions and variants? <faq-concretizer-precedence>`

^^^^^^^^^^^^^^^^^^^
Requirements syntax
^^^^^^^^^^^^^^^^^^^

The package requirements configuration is specified in ``packages.yaml``,
keyed by package name and expressed using the Spec syntax. In the simplest
case you can specify attributes that you always want the package to have
by providing a single spec string to ``require``:

.. code-block:: yaml

   packages:
     libfabric:
       require: "@1.13.2"

In the above example, ``libfabric`` will always build with version 1.13.2. If you
need to compose multiple configuration scopes, ``require`` accepts a list of
strings:

.. code-block:: yaml

   packages:
     libfabric:
       require:
       - "@1.13.2"
       - "%gcc"

In this case ``libfabric`` will always build with version 1.13.2 **and** using GCC
as a compiler.

For more complex use cases, ``require`` also accepts a list of objects. These objects
must have either an ``any_of`` or a ``one_of`` field containing a list of spec strings,
and they can optionally have a ``when`` and a ``message`` attribute:

.. code-block:: yaml

   packages:
     openmpi:
       require:
       - any_of: ["@4.1.5", "%gcc"]
         message: "in this example only 4.1.5 can build with other compilers"

``any_of`` is a list of specs. At least one of those specs must be satisfied,
and it is also allowed for the concretized spec to match more than one.
In the above example, that means you could build ``openmpi@4.1.5%gcc``,
``openmpi@4.1.5%clang`` or ``openmpi@3.9%gcc``, but
not ``openmpi@3.9%clang``.

If a custom message is provided, and the requirement is not satisfiable,
Spack will print the custom error message:

.. code-block:: console

   $ spack spec openmpi@3.9%clang
   ==> Error: in this example only 4.1.5 can build with other compilers

We could express a similar requirement using the ``when`` attribute:

.. code-block:: yaml

   packages:
     openmpi:
       require:
       - any_of: ["%gcc"]
         when: "@:4.1.4"
         message: "in this example only 4.1.5 can build with other compilers"

In the example above, if the version turns out to be 4.1.4 or less, we require
the compiler to be GCC. For readability, Spack also allows a ``spec`` key
accepting a string when there is only a single constraint:

.. code-block:: yaml

   packages:
     openmpi:
       require:
       - spec: "%gcc"
         when: "@:4.1.4"
         message: "in this example only 4.1.5 can build with other compilers"

This code snippet and the one before it are semantically equivalent.

Finally, instead of ``any_of`` you can use ``one_of``, which also takes a list of specs.
The final concretized spec must match one and only one of them:

.. code-block:: yaml

   packages:
     mpich:
       require:
       - one_of: ["+cuda", "+rocm"]

In the example above, that means you could build ``mpich+cuda`` or ``mpich+rocm``
but not ``mpich+cuda+rocm``.

.. note::

   For ``any_of`` and ``one_of``, the order of specs indicates a
   preference: items that appear earlier in the list are preferred
   (note that these preferences can be ignored in favor of others).

.. note::

   When using a conditional requirement, Spack is allowed to actively avoid the triggering
   condition (the ``when=...`` spec) if that leads to a concrete spec with better scores in
   the optimization criteria. To check the current optimization criteria and their
   priorities you can run ``spack solve zlib``.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting default requirements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also set default requirements for all packages under ``all``
like this:

.. code-block:: yaml

   packages:
     all:
       require: '%clang'

which means every spec will be required to use ``clang`` as a compiler.

Note that in this case ``all`` represents a *default set of requirements*:
if there are specific package requirements, then the default requirements
under ``all`` are disregarded. For example, with a configuration like this:

.. code-block:: yaml

   packages:
     all:
       require: '%clang'
     cmake:
       require: '%gcc'

Spack requires ``cmake`` to use ``gcc`` and all other nodes (including ``cmake``
dependencies) to use ``clang``.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting requirements on virtual specs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A requirement on a virtual spec applies whenever that virtual is present in the DAG.
This can be useful for fixing which virtual provider you want to use:

.. code-block:: yaml

   packages:
     mpi:
       require: 'mvapich2 %gcc'

With the configuration above the only allowed ``mpi`` provider is ``mvapich2 %gcc``.

Requirements on the virtual spec and on the specific provider are both applied, if
present. For instance with a configuration like:

.. code-block:: yaml

   packages:
     mpi:
       require: 'mvapich2 %gcc'
     mvapich2:
       require: '~cuda'

you will use ``mvapich2~cuda %gcc`` as an ``mpi`` provider.

.. _package-preferences:

-------------------
Package Preferences
-------------------

In some cases package requirements can be too strong, and package
preferences are the better option. Package preferences do not impose
constraints on packages for particular versions or variant values;
they only set defaults. The concretizer is free to change
them if it must, due to other constraints, and also prefers reusing
installed packages over building new ones that are a better match for
preferences.

.. seealso::

   FAQ: :ref:`Why does Spack pick particular versions and variants? <faq-concretizer-precedence>`

Most package preferences (``compilers``, ``target`` and ``providers``)
can only be set globally under the ``all`` section of ``packages.yaml``:

.. code-block:: yaml

   packages:
     all:
       compiler: [gcc@12.2.0, clang@12:, oneapi@2023:]
       target: [x86_64_v3]
       providers:
         mpi: [mvapich2, mpich, openmpi]

These preferences override Spack's default and effectively reorder priorities
when looking for the best compiler, target or virtual package provider. Each
preference takes an ordered list of spec constraints, with earlier entries in
the list being preferred over later entries.

In the example above all packages prefer to be compiled with ``gcc@12.2.0``,
to target the ``x86_64_v3`` microarchitecture and to use ``mvapich2`` if they
depend on ``mpi``.

The ``variants`` and ``version`` preferences can be set under
package-specific sections of the ``packages.yaml`` file:

.. code-block:: yaml

   packages:
     opencv:
       variants: +debug
     gperftools:
       version: [2.2, 2.4, 2.3]

In this case, the preference for ``opencv`` is to build with debug options, while
``gperftools`` prefers version 2.2 over 2.4.

Any preference can be overwritten on the command line if explicitly requested.

Preferences cannot overcome explicit constraints, as they only set a preferred
ordering among homogeneous attribute values. Going back to the example, if
``gperftools@2.3:`` is requested, then Spack will install version 2.4,
since the most preferred version, 2.2, is prohibited by the version constraint.
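
As a sketch, you can check this with ``spack spec`` (the output below is
hypothetical and abbreviated; concrete details depend on your system):

.. code-block:: console

   $ spack spec gperftools@2.3:
   ...
   gperftools@2.4
   ...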

.. _package_permissions:

-------------------
Package Permissions
-------------------

Spack can be configured to assign permissions to the files installed
by a package.

In the ``packages.yaml`` file under ``permissions``, the attributes
``read``, ``write``, and ``group`` control the package
permissions. These attributes can be set per-package, or for all
packages under ``all``. If permissions are set under ``all`` and for a
specific package, the package-specific settings take precedence.

The ``read`` and ``write`` attributes take one of ``user``, ``group``,
and ``world``.

.. code-block:: yaml

   packages:
     all:
       permissions:
         write: group
         group: spack
     my_app:
       permissions:
         read: group
         group: my_team

The permissions settings describe the broadest level of access to
installations of the specified packages. The execute permissions of
the file are set to the same level as read permissions for those files
that are executable. The default setting for ``read`` is ``world``,
and for ``write`` is ``user``. In the example above, installations of
``my_app`` will be installed with user and group permissions but no
world permissions, and owned by the group ``my_team``. All other
packages will be installed with user and group write privileges, and
world read privileges. Those packages will be owned by the group
``spack``.

The ``group`` attribute assigns a Unix-style group to a package. All
files installed by the package will be owned by the assigned group,
and the sticky group bit will be set on the install prefix and all
directories inside the install prefix. This ensures that even
manually placed files within the install prefix are owned by the
assigned group. If no group is assigned, Spack defers to the
operating system's default behavior.

----------------------------
Assigning Package Attributes
----------------------------

You can assign class-level attributes in the configuration:

.. code-block:: yaml

   packages:
     mpileaks:
       package_attributes:
         # Override existing attributes
         url: http://www.somewhereelse.com/mpileaks-1.0.tar.gz
         # ... or add new ones
         x: 1

Attributes set this way will be accessible to any method executed
in the ``package.py`` file (e.g. the ``install()`` method). Values for these
attributes may be any value parseable by YAML.
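
Inside the recipe, such attributes then behave like ordinary class
attributes. A minimal sketch, assuming the hypothetical ``mpileaks``
entry above (the recipe class and its base class are illustrative):

.. code-block:: python

   from spack.package import *


   class Mpileaks(AutotoolsPackage):
       """Hypothetical recipe reading attributes injected via packages.yaml."""

       def install(self, spec, prefix):
           # Both values come from the package_attributes section above:
           # "url" overrides the class-level url, "x" is a new attribute.
           print(self.url, self.x)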

These can only be applied to specific packages, not ``all`` or
virtual packages.

2 lib/spack/external/__init__.py vendored
@@ -18,7 +18,7 @@

* Homepage: https://pypi.python.org/pypi/archspec
* Usage: Labeling, comparison and detection of microarchitectures
* Version: 0.2.5-dev (commit cbb1fd5eb397a70d466e5160b393b87b0dbcc78f)
* Version: 0.2.2 (commit 1dc58a5776dd77e6fc6e4ba5626af5b1fb24996e)

astunparse
----------------

3 lib/spack/external/archspec/__init__.py vendored
@@ -1,3 +1,2 @@
"""Init file to avoid namespace packages"""

__version__ = "0.2.4"
__version__ = "0.2.2"

1 lib/spack/external/archspec/__main__.py vendored
@@ -3,7 +3,6 @@
"""

import sys

from .cli import main

sys.exit(main())

6 lib/spack/external/archspec/cli.py vendored
@@ -46,11 +46,7 @@ def _make_parser() -> argparse.ArgumentParser:

def cpu() -> int:
    """Run the `archspec cpu` subcommand."""
    try:
        print(archspec.cpu.host())
    except FileNotFoundError as exc:
        print(exc)
        return 1
    print(archspec.cpu.host())
    return 0

13 lib/spack/external/archspec/cpu/__init__.py vendored
@@ -5,14 +5,10 @@
"""The "cpu" package permits to query and compare different
CPU microarchitectures.
"""
from .detect import brand_string, host
from .microarchitecture import (
    TARGETS,
    Microarchitecture,
    UnsupportedMicroarchitecture,
    generic_microarchitecture,
    version_components,
)
from .microarchitecture import Microarchitecture, UnsupportedMicroarchitecture
from .microarchitecture import TARGETS, generic_microarchitecture
from .microarchitecture import version_components
from .detect import host

__all__ = [
    "Microarchitecture",
@@ -21,5 +17,4 @@
    "generic_microarchitecture",
    "host",
    "version_components",
    "brand_string",
]

420 lib/spack/external/archspec/cpu/detect.py vendored
@@ -4,17 +4,15 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Detection of CPU microarchitectures"""
import collections
import functools
import os
import platform
import re
import struct
import subprocess
import warnings
from typing import Dict, List, Optional, Set, Tuple, Union

from ..vendor.cpuid.cpuid import CPUID
from .microarchitecture import TARGETS, Microarchitecture, generic_microarchitecture
from .schema import CPUID_JSON, TARGETS_JSON
from .microarchitecture import generic_microarchitecture, TARGETS
from .schema import TARGETS_JSON

#: Mapping from operating systems to chain of commands
#: to obtain a dictionary of raw info on the current cpu
@@ -24,51 +22,43 @@
#: functions checking the compatibility of the host with a given target
COMPATIBILITY_CHECKS = {}

# Constants for commonly used architectures
X86_64 = "x86_64"
AARCH64 = "aarch64"
PPC64LE = "ppc64le"
PPC64 = "ppc64"
RISCV64 = "riscv64"


def detection(operating_system: str):
    """Decorator to mark functions that are meant to return partial information on the current cpu.
def info_dict(operating_system):
    """Decorator to mark functions that are meant to return raw info on
    the current cpu.

    Args:
        operating_system: operating system where this function can be used.
        operating_system (str or tuple): operating system for which the marked
            function is a viable factory of raw info dictionaries.
    """

    def decorator(factory):
        INFO_FACTORY[operating_system].append(factory)
        return factory

        @functools.wraps(factory)
        def _impl():
            info = factory()

            # Check that info contains a few mandatory fields
            msg = 'field "{0}" is missing from raw info dictionary'
            assert "vendor_id" in info, msg.format("vendor_id")
            assert "flags" in info, msg.format("flags")
            assert "model" in info, msg.format("model")
            assert "model_name" in info, msg.format("model_name")

            return info

        return _impl

    return decorator


def partial_uarch(
    name: str = "",
    vendor: str = "",
    features: Optional[Set[str]] = None,
    generation: int = 0,
    cpu_part: str = "",
) -> Microarchitecture:
    """Construct a partial microarchitecture, from information gathered during system scan."""
    return Microarchitecture(
        name=name,
        parents=[],
        vendor=vendor,
        features=features or set(),
        compilers={},
        generation=generation,
        cpu_part=cpu_part,
    )


@detection(operating_system="Linux")
def proc_cpuinfo() -> Microarchitecture:
    """Returns a partial Microarchitecture, obtained from scanning ``/proc/cpuinfo``"""
    data = {}
@info_dict(operating_system="Linux")
def proc_cpuinfo():
    """Returns a raw info dictionary by parsing the first entry of
    ``/proc/cpuinfo``
    """
    info = {}
    with open("/proc/cpuinfo") as file:  # pylint: disable=unspecified-encoding
        for line in file:
            key, separator, value = line.partition(":")
@@ -80,122 +70,11 @@ def proc_cpuinfo() -> Microarchitecture:
            #
            # we are on a blank line separating two cpus. Exit early as
            # we want to read just the first entry in /proc/cpuinfo
            if separator != ":" and data:
            if separator != ":" and info:
                break

            data[key.strip()] = value.strip()

    architecture = _machine()
    if architecture == X86_64:
        return partial_uarch(
            vendor=data.get("vendor_id", "generic"), features=_feature_set(data, key="flags")
        )

    if architecture == AARCH64:
        return partial_uarch(
            vendor=_canonicalize_aarch64_vendor(data),
            features=_feature_set(data, key="Features"),
            cpu_part=data.get("CPU part", ""),
        )

    if architecture in (PPC64LE, PPC64):
        generation_match = re.search(r"POWER(\d+)", data.get("cpu", ""))
        try:
            generation = int(generation_match.group(1))
        except AttributeError:
            # There might be no match under emulated environments. For instance
            # emulating a ppc64le with QEMU and Docker still reports the host
            # /proc/cpuinfo and not a Power
            generation = 0
        return partial_uarch(generation=generation)

    if architecture == RISCV64:
        if data.get("uarch") == "sifive,u74-mc":
            data["uarch"] = "u74mc"
        return partial_uarch(name=data.get("uarch", RISCV64))

    return generic_microarchitecture(architecture)


class CpuidInfoCollector:
    """Collects the information we need on the host CPU from cpuid"""

    # pylint: disable=too-few-public-methods
    def __init__(self):
        self.cpuid = CPUID()

        registers = self.cpuid.registers_for(**CPUID_JSON["vendor"]["input"])
        self.highest_basic_support = registers.eax
        self.vendor = struct.pack("III", registers.ebx, registers.edx, registers.ecx).decode(
            "utf-8"
        )

        registers = self.cpuid.registers_for(**CPUID_JSON["highest_extension_support"]["input"])
        self.highest_extension_support = registers.eax

        self.features = self._features()

    def _features(self):
        result = set()

        def check_features(data):
            registers = self.cpuid.registers_for(**data["input"])
            for feature_check in data["bits"]:
                current = getattr(registers, feature_check["register"])
                if self._is_bit_set(current, feature_check["bit"]):
                    result.add(feature_check["name"])

        for call_data in CPUID_JSON["flags"]:
            if call_data["input"]["eax"] > self.highest_basic_support:
                continue
            check_features(call_data)

        for call_data in CPUID_JSON["extension-flags"]:
            if call_data["input"]["eax"] > self.highest_extension_support:
                continue
            check_features(call_data)

        return result

    def _is_bit_set(self, register: int, bit: int) -> bool:
        mask = 1 << bit
        return register & mask > 0

    def brand_string(self) -> Optional[str]:
        """Returns the brand string, if available."""
        if self.highest_extension_support < 0x80000004:
            return None

        r1 = self.cpuid.registers_for(eax=0x80000002, ecx=0)
        r2 = self.cpuid.registers_for(eax=0x80000003, ecx=0)
        r3 = self.cpuid.registers_for(eax=0x80000004, ecx=0)
        result = struct.pack(
            "IIIIIIIIIIII",
            r1.eax,
            r1.ebx,
            r1.ecx,
            r1.edx,
            r2.eax,
            r2.ebx,
            r2.ecx,
            r2.edx,
            r3.eax,
            r3.ebx,
            r3.ecx,
            r3.edx,
        ).decode("utf-8")
        return result.strip("\x00")


@detection(operating_system="Windows")
def cpuid_info():
    """Returns a partial Microarchitecture, obtained from running the cpuid instruction"""
    architecture = _machine()
    if architecture == X86_64:
        data = CpuidInfoCollector()
        return partial_uarch(vendor=data.vendor, features=data.features)

    return generic_microarchitecture(architecture)
            info[key.strip()] = value.strip()
    return info


def _check_output(args, env):
@@ -204,25 +83,14 @@ def _check_output(args, env):
    return str(output.decode("utf-8"))


WINDOWS_MAPPING = {
    "AMD64": X86_64,
    "ARM64": AARCH64,
}


def _machine():
    """Return the machine architecture we are on"""
    """ "Return the machine architecture we are on"""
    operating_system = platform.system()

    # If we are not on Darwin or Windows, trust what Python tells us
    if operating_system not in ("Darwin", "Windows"):
    # If we are not on Darwin, trust what Python tells us
    if operating_system != "Darwin":
        return platform.machine()

    # Normalize windows specific names
    if operating_system == "Windows":
        platform_machine = platform.machine()
        return WINDOWS_MAPPING.get(platform_machine, platform_machine)

    # On Darwin it might happen that we are on M1, but using an interpreter
    # built for x86_64. In that case "platform.machine() == 'x86_64'", so we
    # need to fix that.
@@ -235,47 +103,54 @@ def _machine():
    if "Apple" in output:
        # Note that a native Python interpreter on Apple M1 would return
        # "arm64" instead of "aarch64". Here we normalize to the latter.
        return AARCH64
        return "aarch64"

    return X86_64
    return "x86_64"


@detection(operating_system="Darwin")
def sysctl_info() -> Microarchitecture:
@info_dict(operating_system="Darwin")
def sysctl_info_dict():
    """Returns a raw info dictionary parsing the output of sysctl."""
    child_environment = _ensure_bin_usrbin_in_path()

    def sysctl(*args: str) -> str:
    def sysctl(*args):
        return _check_output(["sysctl"] + list(args), env=child_environment).strip()

    if _machine() == X86_64:
        features = (
            f'{sysctl("-n", "machdep.cpu.features").lower()} '
            f'{sysctl("-n", "machdep.cpu.leaf7_features").lower()}'
    if _machine() == "x86_64":
        flags = (
            sysctl("-n", "machdep.cpu.features").lower()
            + " "
            + sysctl("-n", "machdep.cpu.leaf7_features").lower()
        )
        features = set(features.split())
        info = {
            "vendor_id": sysctl("-n", "machdep.cpu.vendor"),
            "flags": flags,
            "model": sysctl("-n", "machdep.cpu.model"),
            "model name": sysctl("-n", "machdep.cpu.brand_string"),
        }
    else:
        model = "unknown"
        model_str = sysctl("-n", "machdep.cpu.brand_string").lower()
        if "m2" in model_str:
            model = "m2"
        elif "m1" in model_str:
            model = "m1"
        elif "apple" in model_str:
            model = "m1"

        # Flags detected on Darwin turned to their linux counterpart
        for darwin_flag, linux_flag in TARGETS_JSON["conversions"]["darwin_flags"].items():
            if darwin_flag in features:
                features.update(linux_flag.split())

        return partial_uarch(vendor=sysctl("-n", "machdep.cpu.vendor"), features=features)

    model = "unknown"
    model_str = sysctl("-n", "machdep.cpu.brand_string").lower()
    if "m2" in model_str:
        model = "m2"
    elif "m1" in model_str:
        model = "m1"
    elif "apple" in model_str:
        model = "m1"

    return partial_uarch(name=model, vendor="Apple")
        info = {
            "vendor_id": "Apple",
            "flags": [],
            "model": model,
            "CPU implementer": "Apple",
            "model name": sysctl("-n", "machdep.cpu.brand_string"),
        }
    return info


def _ensure_bin_usrbin_in_path():
    # Make sure that /sbin and /usr/sbin are in PATH as sysctl is usually found there
    # Make sure that /sbin and /usr/sbin are in PATH as sysctl is
    # usually found there
    child_environment = dict(os.environ.items())
    search_paths = child_environment.get("PATH", "").split(os.pathsep)
    for additional_path in ("/sbin", "/usr/sbin"):
@@ -285,10 +160,22 @@ def _ensure_bin_usrbin_in_path():
    return child_environment


def _canonicalize_aarch64_vendor(data: Dict[str, str]) -> str:
    """Adjust the vendor field to make it human-readable"""
    if "CPU implementer" not in data:
        return "generic"
def adjust_raw_flags(info):
    """Adjust the flags detected on the system to homogenize
    slightly different representations.
    """
    # Flags detected on Darwin turned to their linux counterpart
    flags = info.get("flags", [])
    d2l = TARGETS_JSON["conversions"]["darwin_flags"]
    for darwin_flag, linux_flag in d2l.items():
        if darwin_flag in flags:
            info["flags"] += " " + linux_flag


def adjust_raw_vendor(info):
    """Adjust the vendor field to make it human readable"""
    if "CPU implementer" not in info:
        return

    # Mapping numeric codes to vendor (ARM). This list is a merge from
    # different sources:
@@ -298,37 +185,43 @@ def _canonicalize_aarch64_vendor(data: Dict[str, str]) -> str:
    # https://github.com/gcc-mirror/gcc/blob/master/gcc/config/aarch64/aarch64-cores.def
    # https://patchwork.kernel.org/patch/10524949/
    arm_vendors = TARGETS_JSON["conversions"]["arm_vendors"]
    arm_code = data["CPU implementer"]
    return arm_vendors.get(arm_code, arm_code)
    arm_code = info["CPU implementer"]
    if arm_code in arm_vendors:
        info["CPU implementer"] = arm_vendors[arm_code]


def _feature_set(data: Dict[str, str], key: str) -> Set[str]:
    return set(data.get(key, "").split())
def raw_info_dictionary():
    """Returns a dictionary with information on the cpu of the current host.


def detected_info() -> Microarchitecture:
    """Returns a partial Microarchitecture with information on the CPU of the current host.

    This function calls all the viable factories one after the other until there's one that is
    able to produce the requested information. Falls-back to a generic microarchitecture, if none
    of the calls succeed.
    This function calls all the viable factories one after the other until
    there's one that is able to produce the requested information.
    """
    # pylint: disable=broad-except
    info = {}
    for factory in INFO_FACTORY[platform.system()]:
        try:
            return factory()
            info = factory()
        except Exception as exc:
            warnings.warn(str(exc))

    return generic_microarchitecture(_machine())
        if info:
            adjust_raw_flags(info)
            adjust_raw_vendor(info)
            break

    return info


def compatible_microarchitectures(info: Microarchitecture) -> List[Microarchitecture]:
    """Returns an unordered list of known micro-architectures that are compatible with the
    partial Microarchitecture passed as input.
def compatible_microarchitectures(info):
    """Returns an unordered list of known micro-architectures that are
    compatible with the info dictionary passed as argument.

    Args:
        info (dict): dictionary containing information on the host cpu
    """
    architecture_family = _machine()
    # If a tester is not registered, assume no known target is compatible with the host
    # If a tester is not registered, be conservative and assume no known
    # target is compatible with the host
    tester = COMPATIBILITY_CHECKS.get(architecture_family, lambda x, y: False)
    return [x for x in TARGETS.values() if tester(info, x)] or [
        generic_microarchitecture(architecture_family)
@@ -337,8 +230,8 @@ def compatible_microarchitectures(info: Microarchitec

def host():
    """Detects the host micro-architecture and returns it."""
    # Retrieve information on the host's cpu
    info = detected_info()
    # Retrieve a dictionary with raw information on the host's cpu
    info = raw_info_dictionary()

    # Get a list of possible candidates for this micro-architecture
    candidates = compatible_microarchitectures(info)
@@ -351,10 +244,6 @@ def sorting_fn(item):
    generic_candidates = [c for c in candidates if c.vendor == "generic"]
    best_generic = max(generic_candidates, key=sorting_fn)

    # Relevant for AArch64. Filter on "cpu_part" if we have any match
    if info.cpu_part != "" and any(c for c in candidates if info.cpu_part == c.cpu_part):
        candidates = [c for c in candidates if info.cpu_part == c.cpu_part]

    # Filter the candidates to be descendant of the best generic candidate.
    # This is to avoid that the lack of a niche feature that can be disabled
    # from e.g. BIOS prevents detection of a reasonably performant architecture
@@ -369,15 +258,16 @@ def sorting_fn(item):
    return max(candidates, key=sorting_fn)


def compatibility_check(architecture_family: Union[str, Tuple[str, ...]]):
def compatibility_check(architecture_family):
    """Decorator to register a function as a proper compatibility check.

    A compatibility check function takes a partial Microarchitecture object as a first argument,
    and an arbitrary target Microarchitecture as the second argument. It returns True if the
    target is compatible with first argument, False otherwise.
    A compatibility check function takes the raw info dictionary as a first
    argument and an arbitrary target as the second argument. It returns True
    if the target is compatible with the info dictionary, False otherwise.

    Args:
        architecture_family: architecture family for which this test can be used
        architecture_family (str or tuple): architecture family for which
            this test can be used, e.g. x86_64 or ppc64le etc.
    """
    # Turn the argument into something iterable
    if isinstance(architecture_family, str):
@@ -390,70 +280,86 @@ def decorator(func):
    return decorator


@compatibility_check(architecture_family=(PPC64LE, PPC64))
@compatibility_check(architecture_family=("ppc64le", "ppc64"))
def compatibility_check_for_power(info, target):
    """Compatibility check for PPC64 and PPC64LE architectures."""
    basename = platform.machine()
    generation_match = re.search(r"POWER(\d+)", info.get("cpu", ""))
    try:
        generation = int(generation_match.group(1))
    except AttributeError:
        # There might be no match under emulated environments. For instance
        # emulating a ppc64le with QEMU and Docker still reports the host
        # /proc/cpuinfo and not a Power
        generation = 0

    # We can use a target if it descends from our machine type and our
    # generation (9 for POWER9, etc) is at least its generation.
    arch_root = TARGETS[_machine()]
    arch_root = TARGETS[basename]
    return (
        target == arch_root or arch_root in target.ancestors
    ) and target.generation <= info.generation
    ) and target.generation <= generation


@compatibility_check(architecture_family=X86_64)
@compatibility_check(architecture_family="x86_64")
def compatibility_check_for_x86_64(info, target):
    """Compatibility check for x86_64 architectures."""
    basename = "x86_64"
    vendor = info.get("vendor_id", "generic")
    features = set(info.get("flags", "").split())

    # We can use a target if it descends from our machine type, is from our
    # vendor, and we have all of its features
    arch_root = TARGETS[X86_64]
    arch_root = TARGETS[basename]
    return (
        (target == arch_root or arch_root in target.ancestors)
        and target.vendor in (info.vendor, "generic")
        and target.features.issubset(info.features)
        and target.vendor in (vendor, "generic")
        and target.features.issubset(features)
    )


@compatibility_check(architecture_family=AARCH64)
@compatibility_check(architecture_family="aarch64")
def compatibility_check_for_aarch64(info, target):
    """Compatibility check for AARCH64 architectures."""
    # At the moment, it's not clear how to detect compatibility with
    basename = "aarch64"
    features = set(info.get("Features", "").split())
    vendor = info.get("CPU implementer", "generic")

    # At the moment it's not clear how to detect compatibility with
    # a specific version of the architecture
    if target.vendor == "generic" and target.name != AARCH64:
    if target.vendor == "generic" and target.name != "aarch64":
        return False

    arch_root = TARGETS[AARCH64]
    arch_root = TARGETS[basename]
    arch_root_and_vendor = arch_root == target.family and target.vendor in (
        info.vendor,
        vendor,
        "generic",
    )

    # On macOS it seems impossible to get all the CPU features
    # with syctl info, but for ARM we can get the exact model
    if platform.system() == "Darwin":
        model = TARGETS[info.name]
        model_key = info.get("model", basename)
        model = TARGETS[model_key]
        return arch_root_and_vendor and (target == model or target in model.ancestors)

    return arch_root_and_vendor and target.features.issubset(info.features)
    return arch_root_and_vendor and target.features.issubset(features)


@compatibility_check(architecture_family=RISCV64)
@compatibility_check(architecture_family="riscv64")
def compatibility_check_for_riscv64(info, target):
    """Compatibility check for riscv64 architectures."""
    arch_root = TARGETS[RISCV64]
    basename = "riscv64"
    uarch = info.get("uarch")

    # sifive unmatched board
    if uarch == "sifive,u74-mc":
        uarch = "u74mc"
    # catch-all for unknown uarchs
    else:
        uarch = "riscv64"

    arch_root = TARGETS[basename]
    return (target == arch_root or arch_root in target.ancestors) and (
        target.name == info.name or target.vendor == "generic"
        target == uarch or target.vendor == "generic"
    )


def brand_string() -> Optional[str]:
    """Returns the brand string of the host, if detected, or None."""
    if platform.system() == "Darwin":
        return _check_output(
            ["sysctl", "-n", "machdep.cpu.brand_string"], env=_ensure_bin_usrbin_in_path()
        ).strip()

    if host().family == X86_64:
        return CpuidInfoCollector().brand_string()

    return None
|
@@ -2,7 +2,9 @@
|
||||
# Archspec Project Developers. See the top-level COPYRIGHT file for details.
|
||||
#
|
||||
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
|
||||
"""Types and functions to manage information on CPU microarchitectures."""
|
||||
"""Types and functions to manage information
|
||||
on CPU microarchitectures.
|
||||
"""
|
||||
import functools
|
||||
import platform
|
||||
import re
|
||||
@@ -11,7 +13,6 @@
|
||||
import archspec
|
||||
import archspec.cpu.alias
|
||||
import archspec.cpu.schema
|
||||
|
||||
from .alias import FEATURE_ALIASES
|
||||
from .schema import LazyDictionary
|
||||
|
||||
@@ -46,7 +47,7 @@ class Microarchitecture:
|
||||
which has "broadwell" as a parent, supports running binaries
|
||||
optimized for "broadwell".
|
||||
vendor (str): vendor of the micro-architecture
|
||||
features (set of str): supported CPU flags. Note that the semantic
|
||||
features (list of str): supported CPU flags. Note that the semantic
|
||||
of the flags in this field might vary among architectures, if
|
||||
at all present. For instance x86_64 processors will list all
|
||||
the flags supported by a given CPU while Arm processors will
|
||||
@@ -63,24 +64,21 @@ class Microarchitecture:
|
||||
passed in as argument above.
|
||||
* versions: versions that support this micro-architecture.
|
||||
|
||||
generation (int): generation of the micro-architecture, if relevant.
|
||||
cpu_part (str): cpu part of the architecture, if relevant.
|
||||
generation (int): generation of the micro-architecture, if
|
||||
relevant.
|
||||
"""
|
||||
|
||||
# pylint: disable=too-many-arguments,too-many-instance-attributes
|
||||
# pylint: disable=too-many-arguments
|
||||
#: Aliases for micro-architecture's features
|
||||
feature_aliases = FEATURE_ALIASES
|
||||
|
||||
def __init__(self, name, parents, vendor, features, compilers, generation=0, cpu_part=""):
|
||||
def __init__(self, name, parents, vendor, features, compilers, generation=0):
|
||||
self.name = name
|
||||
self.parents = parents
|
||||
self.vendor = vendor
|
||||
self.features = features
|
||||
self.compilers = compilers
|
||||
# Only relevant for PowerPC
|
||||
self.generation = generation
|
||||
# Only relevant for AArch64
|
||||
self.cpu_part = cpu_part
|
||||
# Cache the ancestor computation
|
||||
self._ancestors = None
|
||||
|
||||
@@ -112,7 +110,6 @@ def __eq__(self, other):
|
||||
and self.parents == other.parents # avoid ancestors here
|
||||
and self.compilers == other.compilers
|
||||
and self.generation == other.generation
|
||||
and self.cpu_part == other.cpu_part
|
||||
)
|
||||
|
||||
@coerce_target_names
|
||||
@@ -145,8 +142,7 @@ def __repr__(self):
|
||||
cls_name = self.__class__.__name__
|
||||
fmt = (
|
||||
cls_name + "({0.name!r}, {0.parents!r}, {0.vendor!r}, "
|
||||
"{0.features!r}, {0.compilers!r}, generation={0.generation!r}, "
|
||||
"cpu_part={0.cpu_part!r})"
|
||||
"{0.features!r}, {0.compilers!r}, {0.generation!r})"
|
||||
)
|
||||
return fmt.format(self)
|
||||
|
||||
@@ -184,30 +180,24 @@ def generic(self):
        generics = [x for x in [self] + self.ancestors if x.vendor == "generic"]
        return max(generics, key=lambda x: len(x.ancestors))

    def to_dict(self):
        """Returns a dictionary representation of this object."""
        return {
            "name": str(self.name),
            "vendor": str(self.vendor),
            "features": sorted(str(x) for x in self.features),
            "generation": self.generation,
            "parents": [str(x) for x in self.parents],
            "compilers": self.compilers,
            "cpupart": self.cpu_part,
        }
    def to_dict(self, return_list_of_items=False):
        """Returns a dictionary representation of this object.

    @staticmethod
    def from_dict(data) -> "Microarchitecture":
        """Construct a microarchitecture from a dictionary representation."""
        return Microarchitecture(
            name=data["name"],
            parents=[TARGETS[x] for x in data["parents"]],
            vendor=data["vendor"],
            features=set(data["features"]),
            compilers=data.get("compilers", {}),
            generation=data.get("generation", 0),
            cpu_part=data.get("cpupart", ""),
        )
        Args:
            return_list_of_items (bool): if True returns an ordered list of
                items instead of the dictionary
        """
        list_of_items = [
            ("name", str(self.name)),
            ("vendor", str(self.vendor)),
            ("features", sorted(str(x) for x in self.features)),
            ("generation", self.generation),
            ("parents", [str(x) for x in self.parents]),
        ]
        if return_list_of_items:
            return list_of_items

        return dict(list_of_items)

    def optimization_flags(self, compiler, version):
        """Returns a string containing the optimization flags that needs
@@ -281,7 +271,9 @@ def tuplify(ver):
        flags = flags_fmt.format(**compiler_entry)
        return flags

    msg = "cannot produce optimized binary for micro-architecture '{0}' with {1}@{2}"
    msg = (
        "cannot produce optimized binary for micro-architecture '{0}' with {1}@{2}"
    )
    if compiler_info:
        versions = [x["versions"] for x in compiler_info]
        msg += f' [supported compiler versions are {", ".join(versions)}]'
@@ -297,7 +289,9 @@ def generic_microarchitecture(name):
    Args:
        name (str): name of the micro-architecture
    """
    return Microarchitecture(name, parents=[], vendor="generic", features=set(), compilers={})
    return Microarchitecture(
        name, parents=[], vendor="generic", features=[], compilers={}
    )


def version_components(version):
@@ -350,10 +344,9 @@ def fill_target_from_dict(name, data, targets):
    features = set(values["features"])
    compilers = values.get("compilers", {})
    generation = values.get("generation", 0)
    cpu_part = values.get("cpupart", "")

    targets[name] = Microarchitecture(
        name, parents, vendor, features, compilers, generation=generation, cpu_part=cpu_part
        name, parents, vendor, features, compilers, generation
    )

known_targets = {}

68 lib/spack/external/archspec/cpu/schema.py vendored
@@ -7,9 +7,7 @@
"""
import collections.abc
import json
import os
import pathlib
from typing import Tuple
import os.path


class LazyDictionary(collections.abc.MutableMapping):
@@ -48,65 +46,21 @@ def __len__(self):
        return len(self.data)


#: Environment variable that might point to a directory with a user defined JSON file
DIR_FROM_ENVIRONMENT = "ARCHSPEC_CPU_DIR"
def _load_json_file(json_file):
    json_dir = os.path.join(os.path.dirname(__file__), "..", "json", "cpu")
    json_dir = os.path.abspath(json_dir)

#: Environment variable that might point to a directory with extensions to JSON files
EXTENSION_DIR_FROM_ENVIRONMENT = "ARCHSPEC_EXTENSION_CPU_DIR"
    def _factory():
        filename = os.path.join(json_dir, json_file)
        with open(filename, "r", encoding="utf-8") as file:
            return json.load(file)


def _json_file(filename: str, allow_custom: bool = False) -> Tuple[pathlib.Path, pathlib.Path]:
    """Given a filename, returns the absolute path for the main JSON file, and an
    optional absolute path for an extension JSON file.

    Args:
        filename: filename for the JSON file
        allow_custom: if True, allows overriding the location where the file resides
    """
    json_dir = pathlib.Path(__file__).parent / ".." / "json" / "cpu"
    if allow_custom and DIR_FROM_ENVIRONMENT in os.environ:
        json_dir = pathlib.Path(os.environ[DIR_FROM_ENVIRONMENT])
    json_dir = json_dir.absolute()
    json_file = json_dir / filename

    extension_file = None
    if allow_custom and EXTENSION_DIR_FROM_ENVIRONMENT in os.environ:
        extension_dir = pathlib.Path(os.environ[EXTENSION_DIR_FROM_ENVIRONMENT])
        extension_dir.absolute()
        extension_file = extension_dir / filename

    return json_file, extension_file


def _load(json_file: pathlib.Path, extension_file: pathlib.Path):
    with open(json_file, "r", encoding="utf-8") as file:
        data = json.load(file)

    if not extension_file or not extension_file.exists():
        return data

    with open(extension_file, "r", encoding="utf-8") as file:
        extension_data = json.load(file)

    top_level_sections = list(data.keys())
    for key in top_level_sections:
        if key not in extension_data:
            continue

        data[key].update(extension_data[key])

    return data
    return _factory


#: In memory representation of the data in microarchitectures.json,
#: loaded on first access
TARGETS_JSON = LazyDictionary(_load, *_json_file("microarchitectures.json", allow_custom=True))
TARGETS_JSON = LazyDictionary(_load_json_file("microarchitectures.json"))

#: JSON schema for microarchitectures.json, loaded on first access
TARGETS_JSON_SCHEMA = LazyDictionary(_load, *_json_file("microarchitectures_schema.json"))

#: Information on how to call 'cpuid' to get information on the HOST CPU
CPUID_JSON = LazyDictionary(_load, *_json_file("cpuid.json", allow_custom=True))

#: JSON schema for cpuid.json, loaded on first access
CPUID_JSON_SCHEMA = LazyDictionary(_load, *_json_file("cpuid_schema.json"))
SCHEMA = LazyDictionary(_load_json_file("microarchitectures_schema.json"))

10 lib/spack/external/archspec/json/README.md vendored
@@ -9,11 +9,11 @@ language specific APIs.

Currently the repository contains the following JSON files:
```console
cpu/
├── cpuid.json                      # Contains information on CPUID calls to retrieve vendor and features on x86_64
├── cpuid_schema.json               # Schema for the file above
├── microarchitectures.json         # Contains information on CPU microarchitectures
└── microarchitectures_schema.json  # Schema for the file above
.
├── COPYRIGHT
└── cpu
    ├── microarchitectures.json         # Contains information on CPU microarchitectures
    └── microarchitectures_schema.json  # Schema for the file above
```

1050 lib/spack/external/archspec/json/cpu/cpuid.json vendored
File diff suppressed because it is too large

lib/spack/external/archspec/json/cpu/cpuid_schema.json vendored
@@ -1,134 +0,0 @@
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Schema for microarchitecture definitions and feature aliases",
  "type": "object",
  "additionalProperties": false,
  "properties": {
    "vendor": {
      "type": "object",
      "additionalProperties": false,
      "properties": {
        "description": {
          "type": "string"
        },
        "input": {
          "type": "object",
          "additionalProperties": false,
          "properties": {
            "eax": {
              "type": "integer"
            },
            "ecx": {
              "type": "integer"
            }
          }
        }
      }
    },
    "highest_extension_support": {
      "type": "object",
      "additionalProperties": false,
      "properties": {
        "description": {
          "type": "string"
        },
        "input": {
          "type": "object",
          "additionalProperties": false,
          "properties": {
            "eax": {
              "type": "integer"
            },
            "ecx": {
              "type": "integer"
            }
          }
        }
      }
    },
    "flags": {
      "type": "array",
      "items": {
        "type": "object",
        "additionalProperties": false,
        "properties": {
          "description": {
            "type": "string"
          },
          "input": {
            "type": "object",
            "additionalProperties": false,
            "properties": {
              "eax": {
                "type": "integer"
              },
              "ecx": {
                "type": "integer"
              }
            }
          },
          "bits": {
            "type": "array",
            "items": {
              "type": "object",
              "additionalProperties": false,
              "properties": {
                "name": {
                  "type": "string"
                },
                "register": {
                  "type": "string"
                },
                "bit": {
                  "type": "integer"
                }
              }
            }
          }
        }
      }
    },
    "extension-flags": {
      "type": "array",
      "items": {
        "type": "object",
        "additionalProperties": false,
        "properties": {
          "description": {
            "type": "string"
          },
          "input": {
            "type": "object",
            "additionalProperties": false,
            "properties": {
              "eax": {
                "type": "integer"
              },
              "ecx": {
                "type": "integer"
              }
            }
          },
          "bits": {
            "type": "array",
            "items": {
              "type": "object",
              "additionalProperties": false,
              "properties": {
                "name": {
                  "type": "string"
                },
                "register": {
                  "type": "string"
                },
                "bit": {
                  "type": "integer"
                }
              }
            }
          }
        }
      }
    }
  }
}

lib/spack/external/archspec/json/cpu/microarchitectures.json vendored
@@ -2225,14 +2225,10 @@
        ],
        "nvhpc": [
          {
            "versions": "21.11:23.8",
            "versions": "21.11:",
            "name": "zen3",
            "flags": "-tp {name}",
            "warnings": "zen4 is not fully supported by nvhpc versions < 23.9, falling back to zen3"
          },
          {
            "versions": "23.9:",
            "flags": "-tp {name}"
            "warnings": "zen4 is not fully supported by nvhpc yet, falling back to zen3"
          }
        ]
      }
@@ -2715,8 +2711,7 @@
            "flags": "-mcpu=thunderx2t99"
          }
        ]
      },
      "cpupart": "0x0af"
    }
  },
  "a64fx": {
    "from": ["armv8.2a"],
@@ -2784,8 +2779,7 @@
            "flags": "-march=armv8.2-a+crc+crypto+fp16+sve"
          }
        ]
      },
      "cpupart": "0x001"
    }
  },
  "cortex_a72": {
    "from": ["aarch64"],
@@ -2822,8 +2816,7 @@
            "flags" : "-mcpu=cortex-a72"
          }
        ]
      },
      "cpupart": "0xd08"
    }
  },
  "neoverse_n1": {
    "from": ["cortex_a72", "armv8.2a"],
@@ -2844,7 +2837,8 @@
      "asimdrdm",
      "lrcpc",
      "dcpop",
      "asimddp"
      "asimddp",
      "ssbs"
    ],
    "compilers" : {
      "gcc": [
@@ -2908,8 +2902,7 @@
            "flags": "-tp {name}"
          }
        ]
      },
      "cpupart": "0xd0c"
    }
  },
  "neoverse_v1": {
    "from": ["neoverse_n1", "armv8.4a"],
@@ -2933,6 +2926,8 @@
      "lrcpc",
      "dcpop",
      "sha3",
      "sm3",
      "sm4",
      "asimddp",
      "sha512",
      "sve",
@@ -2941,6 +2936,9 @@
      "uscat",
      "ilrcpc",
      "flagm",
      "ssbs",
      "paca",
      "pacg",
      "dcpodp",
      "svei8mm",
      "svebf16",
@@ -3008,7 +3006,7 @@
          },
          {
            "versions": "11:",
            "flags" : "-march=armv8.4-a+sve+fp16+bf16+crypto+i8mm+rng"
            "flags" : "-march=armv8.4-a+sve+ssbs+fp16+bf16+crypto+i8mm+rng"
          },
          {
            "versions": "12:",
@@ -3032,8 +3030,7 @@
            "flags": "-tp {name}"
          }
        ]
      },
      "cpupart": "0xd40"
    }
  },
  "neoverse_v2": {
    "from": ["neoverse_n1", "armv9.0a"],
@@ -3057,22 +3054,35 @@
      "lrcpc",
      "dcpop",
      "sha3",
      "sm3",
      "sm4",
      "asimddp",
      "sha512",
      "sve",
      "asimdfhm",
      "dit",
      "uscat",
      "ilrcpc",
      "flagm",
      "ssbs",
      "sb",
      "paca",
      "pacg",
      "dcpodp",
      "sve2",
      "sveaes",
      "svepmull",
      "svebitperm",
      "svesha3",
      "svesm4",
      "flagm2",
      "frint",
      "svei8mm",
      "svebf16",
      "i8mm",
      "bf16"
      "bf16",
      "dgh",
      "bti"
    ],
    "compilers" : {
      "gcc": [
@@ -3097,19 +3107,15 @@
            "flags" : "-march=armv8.5-a+sve -mtune=cortex-a76"
          },
          {
            "versions": "10.0:11.3.99",
            "versions": "10.0:11.99",
            "flags" : "-march=armv8.5-a+sve+sve2+i8mm+bf16 -mtune=cortex-a77"
          },
          {
            "versions": "11.4:11.99",
            "flags" : "-mcpu=neoverse-v2"
          },
          {
            "versions": "12.0:12.2.99",
            "versions": "12.0:12.99",
            "flags" : "-march=armv9-a+i8mm+bf16 -mtune=cortex-a710"
          },
          {
            "versions": "12.3:",
            "versions": "13.0:",
            "flags" : "-mcpu=neoverse-v2"
          }
        ],
@@ -3144,112 +3150,7 @@
            "flags": "-tp {name}"
          }
        ]
      },
      "cpupart": "0xd4f"
    },
  "neoverse_n2": {
    "from": ["neoverse_n1", "armv9.0a"],
    "vendor": "ARM",
    "features": [
      "fp",
      "asimd",
      "evtstrm",
      "aes",
      "pmull",
      "sha1",
      "sha2",
      "crc32",
      "atomics",
      "fphp",
      "asimdhp",
      "cpuid",
      "asimdrdm",
      "jscvt",
      "fcma",
      "lrcpc",
      "dcpop",
      "sha3",
      "asimddp",
      "sha512",
      "sve",
      "asimdfhm",
      "uscat",
      "ilrcpc",
      "flagm",
      "sb",
      "dcpodp",
      "sve2",
      "flagm2",
      "frint",
      "svei8mm",
      "svebf16",
      "i8mm",
      "bf16"
    ],
    "compilers" : {
      "gcc": [
        {
          "versions": "4.8:5.99",
          "flags": "-march=armv8-a"
        },
        {
          "versions": "6:6.99",
          "flags" : "-march=armv8.1-a"
        },
        {
          "versions": "7.0:7.99",
          "flags" : "-march=armv8.2-a -mtune=cortex-a72"
        },
        {
          "versions": "8.0:8.99",
          "flags" : "-march=armv8.4-a+sve -mtune=cortex-a72"
        },
        {
          "versions": "9.0:9.99",
          "flags" : "-march=armv8.5-a+sve -mtune=cortex-a76"
        },
        {
          "versions": "10.0:10.99",
          "flags" : "-march=armv8.5-a+sve+sve2+i8mm+bf16 -mtune=cortex-a77"
        },
        {
          "versions": "11.0:",
          "flags" : "-mcpu=neoverse-n2"
        }
      ],
      "clang" : [
        {
          "versions": "9.0:10.99",
          "flags" : "-march=armv8.5-a+sve"
        },
        {
          "versions": "11.0:13.99",
          "flags" : "-march=armv8.5-a+sve+sve2+i8mm+bf16"
        },
        {
          "versions": "14.0:15.99",
          "flags" : "-march=armv9-a+i8mm+bf16"
        },
        {
          "versions": "16.0:",
          "flags" : "-mcpu=neoverse-n2"
        }
      ],
      "arm" : [
        {
          "versions": "23.04.0:",
          "flags" : "-mcpu=neoverse-n2"
        }
      ],
      "nvhpc" : [
        {
          "versions": "23.3:",
          "name": "neoverse-n1",
          "flags": "-tp {name}"
        }
      ]
    },
    "cpupart": "0xd49"
  }
},
"m1": {
  "from": ["armv8.4a"],
@@ -3315,8 +3216,7 @@
            "flags" : "-mcpu=apple-m1"
          }
        ]
      },
      "cpupart": "0x022"
    }
  },
  "m2": {
    "from": ["m1", "armv8.5a"],
@@ -3394,8 +3294,7 @@
            "flags" : "-mcpu=apple-m2"
          }
        ]
      },
      "cpupart": "0x032"
    }
  },
  "arm": {
    "from": [],
@@ -52,9 +52,6 @@
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"cpupart": {
|
||||
"type": "string"
|
||||
}
|
||||
},
|
||||
"required": [
|
||||
@@ -110,4 +107,4 @@
|
||||
"additionalProperties": false
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
20 lib/spack/external/archspec/vendor/cpuid/LICENSE vendored
@@ -1,20 +0,0 @@
The MIT License (MIT)

Copyright (c) 2014 Anders Høst

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
@@ -1,76 +0,0 @@
cpuid.py
========

Now, this is silly!

Pure Python library for accessing information about x86 processors
by querying the [CPUID](http://en.wikipedia.org/wiki/CPUID)
instruction. Well, not exactly pure Python...

It works by allocating a small piece of virtual memory, copying
a raw x86 function to that memory, giving the memory execute
permissions and then calling the memory as a function. The injected
function executes the CPUID instruction and copies the result back
to a ctypes.Structure where it can be read by Python.

It should work fine on both 32 and 64 bit versions of Windows and Linux
running x86 processors. Apple OS X and other BSD systems should also work,
not tested though...


Why?
----
For poops and giggles. Plus, having access to a low-level feature
without having to compile a C wrapper is pretty neat.


Examples
--------
Getting info with eax=0:

    import cpuid

    q = cpuid.CPUID()
    eax, ebx, ecx, edx = q(0)

Running the files:

    $ python example.py
    Vendor ID : GenuineIntel
    CPU name  : Intel(R) Xeon(R) CPU W3550 @ 3.07GHz

    Vector instructions supported:
    SSE    : Yes
    SSE2   : Yes
    SSE3   : Yes
    SSSE3  : Yes
    SSE4.1 : Yes
    SSE4.2 : Yes
    SSE4a  : --
    AVX    : --
    AVX2   : --

    $ python cpuid.py
    CPUID    A        B        C        D
    00000000 0000000b 756e6547 6c65746e 49656e69
    00000001 000106a5 00100800 009ce3bd bfebfbff
    00000002 55035a01 00f0b2e4 00000000 09ca212c
    00000003 00000000 00000000 00000000 00000000
    00000004 00000000 00000000 00000000 00000000
    00000005 00000040 00000040 00000003 00001120
    00000006 00000003 00000002 00000001 00000000
    00000007 00000000 00000000 00000000 00000000
    00000008 00000000 00000000 00000000 00000000
    00000009 00000000 00000000 00000000 00000000
    0000000a 07300403 00000044 00000000 00000603
    0000000b 00000000 00000000 00000095 00000000
    80000000 80000008 00000000 00000000 00000000
    80000001 00000000 00000000 00000001 28100800
    80000002 65746e49 2952286c 6f655820 2952286e
    80000003 55504320 20202020 20202020 57202020
    80000004 30353533 20402020 37302e33 007a4847
    80000005 00000000 00000000 00000000 00000000
    80000006 00000000 00000000 01006040 00000000
    80000007 00000000 00000000 00000000 00000100
    80000008 00003024 00000000 00000000 00000000
172 lib/spack/external/archspec/vendor/cpuid/cpuid.py vendored
@@ -1,172 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright (c) 2024 Anders Høst
#

from __future__ import print_function

import platform
import os
import ctypes
from ctypes import c_uint32, c_long, c_ulong, c_size_t, c_void_p, POINTER, CFUNCTYPE

# Posix x86_64:
# Three first call registers : RDI, RSI, RDX
# Volatile registers : RAX, RCX, RDX, RSI, RDI, R8-11

# Windows x86_64:
# Three first call registers : RCX, RDX, R8
# Volatile registers : RAX, RCX, RDX, R8-11

# cdecl 32 bit:
# Three first call registers : Stack (%esp)
# Volatile registers : EAX, ECX, EDX

_POSIX_64_OPC = [
    0x53,                    # push %rbx
    0x89, 0xf0,              # mov %esi,%eax
    0x89, 0xd1,              # mov %edx,%ecx
    0x0f, 0xa2,              # cpuid
    0x89, 0x07,              # mov %eax,(%rdi)
    0x89, 0x5f, 0x04,        # mov %ebx,0x4(%rdi)
    0x89, 0x4f, 0x08,        # mov %ecx,0x8(%rdi)
    0x89, 0x57, 0x0c,        # mov %edx,0xc(%rdi)
    0x5b,                    # pop %rbx
    0xc3                     # retq
]

_WINDOWS_64_OPC = [
    0x53,                    # push %rbx
    0x89, 0xd0,              # mov %edx,%eax
    0x49, 0x89, 0xc9,        # mov %rcx,%r9
    0x44, 0x89, 0xc1,        # mov %r8d,%ecx
    0x0f, 0xa2,              # cpuid
    0x41, 0x89, 0x01,        # mov %eax,(%r9)
    0x41, 0x89, 0x59, 0x04,  # mov %ebx,0x4(%r9)
    0x41, 0x89, 0x49, 0x08,  # mov %ecx,0x8(%r9)
    0x41, 0x89, 0x51, 0x0c,  # mov %edx,0xc(%r9)
    0x5b,                    # pop %rbx
    0xc3                     # retq
]

_CDECL_32_OPC = [
    0x53,                    # push %ebx
    0x57,                    # push %edi
    0x8b, 0x7c, 0x24, 0x0c,  # mov 0xc(%esp),%edi
    0x8b, 0x44, 0x24, 0x10,  # mov 0x10(%esp),%eax
    0x8b, 0x4c, 0x24, 0x14,  # mov 0x14(%esp),%ecx
    0x0f, 0xa2,              # cpuid
    0x89, 0x07,              # mov %eax,(%edi)
    0x89, 0x5f, 0x04,        # mov %ebx,0x4(%edi)
    0x89, 0x4f, 0x08,        # mov %ecx,0x8(%edi)
    0x89, 0x57, 0x0c,        # mov %edx,0xc(%edi)
    0x5f,                    # pop %edi
    0x5b,                    # pop %ebx
    0xc3                     # ret
]

is_windows = os.name == "nt"
is_64bit = ctypes.sizeof(ctypes.c_voidp) == 8


class CPUID_struct(ctypes.Structure):
    _register_names = ("eax", "ebx", "ecx", "edx")
    _fields_ = [(r, c_uint32) for r in _register_names]

    def __getitem__(self, item):
        if item not in self._register_names:
            raise KeyError(item)
        return getattr(self, item)

    def __repr__(self):
        return "eax=0x{:x}, ebx=0x{:x}, ecx=0x{:x}, edx=0x{:x}".format(self.eax, self.ebx, self.ecx, self.edx)


class CPUID(object):
    def __init__(self):
        if platform.machine() not in ("AMD64", "x86_64", "x86", "i686"):
            raise SystemError("Only available for x86")

        if is_windows:
            if is_64bit:
                # VirtualAlloc seems to fail under some weird
                # circumstances when ctypes.windll.kernel32 is
                # used under 64 bit Python. CDLL fixes this.
                self.win = ctypes.CDLL("kernel32.dll")
                opc = _WINDOWS_64_OPC
            else:
                # Here ctypes.windll.kernel32 is needed to get the
                # right DLL. Otherwise it will fail when running
                # 32 bit Python on 64 bit Windows.
                self.win = ctypes.windll.kernel32
                opc = _CDECL_32_OPC
        else:
            opc = _POSIX_64_OPC if is_64bit else _CDECL_32_OPC

        size = len(opc)
        code = (ctypes.c_ubyte * size)(*opc)

        if is_windows:
            self.win.VirtualAlloc.restype = c_void_p
            self.win.VirtualAlloc.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_ulong, ctypes.c_ulong]
            self.addr = self.win.VirtualAlloc(None, size, 0x1000, 0x40)
            if not self.addr:
                raise MemoryError("Could not allocate RWX memory")
            ctypes.memmove(self.addr, code, size)
        else:
            from mmap import (
                mmap,
                MAP_PRIVATE,
                MAP_ANONYMOUS,
                PROT_WRITE,
                PROT_READ,
                PROT_EXEC,
            )

            self.mm = mmap(
                -1,
                size,
                flags=MAP_PRIVATE | MAP_ANONYMOUS,
                prot=PROT_WRITE | PROT_READ | PROT_EXEC,
            )
            self.mm.write(code)
            self.addr = ctypes.addressof(ctypes.c_int.from_buffer(self.mm))

        func_type = CFUNCTYPE(None, POINTER(CPUID_struct), c_uint32, c_uint32)
        self.func_ptr = func_type(self.addr)

    def __call__(self, eax, ecx=0):
        struct = self.registers_for(eax=eax, ecx=ecx)
        return struct.eax, struct.ebx, struct.ecx, struct.edx

    def registers_for(self, eax, ecx=0):
        """Calls cpuid with eax and ecx set as the input arguments, and returns a structure
        containing eax, ebx, ecx, and edx.
        """
        struct = CPUID_struct()
        self.func_ptr(struct, eax, ecx)
        return struct

    def __del__(self):
        if is_windows:
            self.win.VirtualFree.restype = c_long
            self.win.VirtualFree.argtypes = [c_void_p, c_size_t, c_ulong]
            self.win.VirtualFree(self.addr, 0, 0x8000)
        else:
            self.mm.close()


if __name__ == "__main__":
    def valid_inputs():
        cpuid = CPUID()
        for eax in (0x0, 0x80000000):
            highest, _, _, _ = cpuid(eax)
            while eax <= highest:
                regs = cpuid(eax)
                yield (eax, regs)
                eax += 1

    print(" ".join(x.ljust(8) for x in ("CPUID", "A", "B", "C", "D")).strip())
    for eax, regs in valid_inputs():
        print("%08x" % eax, " ".join("%08x" % reg for reg in regs))
@@ -1,62 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright (c) 2024 Anders Høst
#

from __future__ import print_function

import struct
import cpuid


def cpu_vendor(cpu):
    _, b, c, d = cpu(0)
    return struct.pack("III", b, d, c).decode("utf-8")


def cpu_name(cpu):
    name = "".join((struct.pack("IIII", *cpu(0x80000000 + i)).decode("utf-8")
                    for i in range(2, 5)))

    return name.split('\x00', 1)[0]


def is_set(cpu, leaf, subleaf, reg_idx, bit):
    """
    @param {leaf} %eax
    @param {subleaf} %ecx, 0 in most cases
    @param {reg_idx} idx of [%eax, %ebx, %ecx, %edx], 0-based
    @param {bit} bit of reg selected by {reg_idx}, 0-based
    """
    regs = cpu(leaf, subleaf)

    if (1 << bit) & regs[reg_idx]:
        return "Yes"
    else:
        return "--"


if __name__ == "__main__":
    cpu = cpuid.CPUID()

    print("Vendor ID : %s" % cpu_vendor(cpu))
    print("CPU name  : %s" % cpu_name(cpu))
    print()
    print("Vector instructions supported:")
    print("SSE    : %s" % is_set(cpu, 1, 0, 3, 25))
    print("SSE2   : %s" % is_set(cpu, 1, 0, 3, 26))
    print("SSE3   : %s" % is_set(cpu, 1, 0, 2, 0))
    print("SSSE3  : %s" % is_set(cpu, 1, 0, 2, 9))
    print("SSE4.1 : %s" % is_set(cpu, 1, 0, 2, 19))
    print("SSE4.2 : %s" % is_set(cpu, 1, 0, 2, 20))
    print("SSE4a  : %s" % is_set(cpu, 0x80000001, 0, 2, 6))
    print("AVX    : %s" % is_set(cpu, 1, 0, 2, 28))
    print("AVX2   : %s" % is_set(cpu, 7, 0, 1, 5))
    print("BMI1   : %s" % is_set(cpu, 7, 0, 1, 3))
    print("BMI2   : %s" % is_set(cpu, 7, 0, 1, 8))
    # Intel RDT CMT/MBM
    print("L3 Monitoring : %s" % is_set(cpu, 0xf, 0, 3, 1))
    print("L3 Occupancy  : %s" % is_set(cpu, 0xf, 1, 3, 0))
    print("L3 Total BW   : %s" % is_set(cpu, 0xf, 1, 3, 1))
    print("L3 Local BW   : %s" % is_set(cpu, 0xf, 1, 3, 2))
@@ -332,19 +332,7 @@ def close(self):
class MultiProcessFd:
    """Return an object which stores a file descriptor and can be passed as an
    argument to a function run with ``multiprocessing.Process``, such that
    the file descriptor is available in the subprocess. It provides access via
    the `fd` property.

    This object takes control over the associated FD: files opened from this
    using `fdopen` need to use `closefd=False`.
    """

    # As for why you have to fdopen(..., closefd=False): when a
    # multiprocessing.connection.Connection object stores an fd, it assumes
    # control over it, and will attempt to close it when gc'ed during __del__;
    # if you fdopen(multiprocessfd.fd, closefd=True) then the resulting file
    # will also assume control, and you can see warnings when there is an
    # attempted double close.
    the file descriptor is available in the subprocess."""

    def __init__(self, fd):
        self._connection = None
@@ -357,20 +345,33 @@ def __init__(self, fd):
    @property
    def fd(self):
        if self._connection:
            return self._connection.fileno()
            return self._connection._handle
        else:
            return self._fd

    def close(self):
        """Rather than `.close()`ing any file opened from the associated
        `.fd`, the `MultiProcessFd` should be closed with this.
        """
        if self._connection:
            self._connection.close()
        else:
            os.close(self._fd)


def close_connection_and_file(multiprocess_fd, file):
    # MultiprocessFd is intended to transmit a FD
    # to a child process, this FD is then opened to a Python File object
    # (using fdopen). In >= 3.8, MultiprocessFd encapsulates a
    # multiprocessing.connection.Connection; Connection closes the FD
    # when it is deleted, and prints a warning about duplicate closure if
    # it is not explicitly closed. In < 3.8, MultiprocessFd encapsulates a
    # simple FD; closing the FD here appears to conflict with
    # closure of the File object (in < 3.8 that is). Therefore this needs
    # to choose whether to close the File or the Connection.
    if sys.version_info >= (3, 8):
        multiprocess_fd.close()
    else:
        file.close()
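Aside: the descriptor-ownership rule described in those comments can be demonstrated with the standard library alone. A minimal POSIX-flavored sketch (illustrative, not Spack code) of why a file built on a Connection-owned descriptor is opened with `closefd=False`:

```python
import multiprocessing.connection
import os

# A Connection takes ownership of the descriptor it wraps and closes it
# when it is closed (or garbage-collected).
r, w = os.pipe()
conn = multiprocessing.connection.Connection(r)

# closefd=False keeps the file object from *also* claiming ownership;
# otherwise two owners would attempt to close the same descriptor.
reader = os.fdopen(conn.fileno(), "r", closefd=False)

reader.close()  # closes only the Python file object
conn.close()    # the single "real" close of the descriptor
os.close(w)
```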
@contextmanager
def replace_environment(env):
    """Replace the current environment (`os.environ`) with `env`.
@@ -915,10 +916,10 @@ def _writer_daemon(
        # 1. Use line buffering (3rd param = 1) since Python 3 has a bug
        # that prevents unbuffered text I/O.
        # 2. Python 3.x before 3.7 does not open with UTF-8 encoding by default
        in_pipe = os.fdopen(read_multiprocess_fd.fd, "r", 1, encoding="utf-8", closefd=False)
        in_pipe = os.fdopen(read_multiprocess_fd.fd, "r", 1, encoding="utf-8")

        if stdin_multiprocess_fd:
            stdin = os.fdopen(stdin_multiprocess_fd.fd, closefd=False)
            stdin = os.fdopen(stdin_multiprocess_fd.fd)
        else:
            stdin = None

@@ -1008,9 +1009,9 @@ def _writer_daemon(
        if isinstance(log_file, io.StringIO):
            control_pipe.send(log_file.getvalue())
        log_file_wrapper.close()
        read_multiprocess_fd.close()
        close_connection_and_file(read_multiprocess_fd, in_pipe)
        if stdin_multiprocess_fd:
            stdin_multiprocess_fd.close()
            close_connection_and_file(stdin_multiprocess_fd, stdin)

        # send echo value back to the parent so it can be preserved.
        control_pipe.send(echo)
@@ -4,7 +4,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

#: PEP440 canonical <major>.<minor>.<micro>.<devN> string
__version__ = "0.21.3"
__version__ = "0.22.0.dev0"
spack_version = __version__
@@ -40,7 +40,6 @@ def _search_duplicate_compilers(error_cls):
import collections.abc
import glob
import inspect
import io
import itertools
import pathlib
import pickle
@@ -55,7 +54,6 @@ def _search_duplicate_compilers(error_cls):
import spack.repo
import spack.spec
import spack.util.crypto
import spack.util.spack_yaml as syaml
import spack.variant

#: Map an audit tag to a list of callables implementing checks
@@ -252,88 +250,6 @@ def _search_duplicate_specs_in_externals(error_cls):
    return errors


@config_packages
def _deprecated_preferences(error_cls):
    """Search package preferences deprecated in v0.21 (and slated for removal in v0.22)"""
    # TODO (v0.22): remove this audit as the attributes will not be allowed in config
    errors = []
    packages_yaml = spack.config.CONFIG.get_config("packages")

    def make_error(attribute_name, config_data, summary):
        s = io.StringIO()
        s.write("Occurring in the following file:\n")
        dict_view = syaml.syaml_dict((k, v) for k, v in config_data.items() if k == attribute_name)
        syaml.dump_config(dict_view, stream=s, blame=True)
        return error_cls(summary=summary, details=[s.getvalue()])

    if "all" in packages_yaml and "version" in packages_yaml["all"]:
        summary = "Using the deprecated 'version' attribute under 'packages:all'"
        errors.append(make_error("version", packages_yaml["all"], summary))

    for package_name in packages_yaml:
        if package_name == "all":
            continue

        package_conf = packages_yaml[package_name]
        for attribute in ("compiler", "providers", "target"):
            if attribute not in package_conf:
                continue
            summary = (
                f"Using the deprecated '{attribute}' attribute " f"under 'packages:{package_name}'"
            )
            errors.append(make_error(attribute, package_conf, summary))

    return errors


@config_packages
def _avoid_mismatched_variants(error_cls):
    """Warns if variant preferences have mismatched types or names."""
    errors = []
    packages_yaml = spack.config.CONFIG.get_config("packages")

    def make_error(config_data, summary):
        s = io.StringIO()
        s.write("Occurring in the following file:\n")
        syaml.dump_config(config_data, stream=s, blame=True)
        return error_cls(summary=summary, details=[s.getvalue()])

    for pkg_name in packages_yaml:
        # 'all:' must be more forgiving, since it is setting defaults for everything
        if pkg_name == "all" or "variants" not in packages_yaml[pkg_name]:
            continue

        preferences = packages_yaml[pkg_name]["variants"]
        if not isinstance(preferences, list):
            preferences = [preferences]

        for variants in preferences:
            current_spec = spack.spec.Spec(variants)
            pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)
            for variant in current_spec.variants.values():
                # Variant does not exist at all
                if variant.name not in pkg_cls.variants:
                    summary = (
                        f"Setting a preference for the '{pkg_name}' package to the "
                        f"non-existing variant '{variant.name}'"
                    )
                    errors.append(make_error(preferences, summary))
                    continue

                # Variant cannot accept this value
                s = spack.spec.Spec(pkg_name)
                try:
                    s.update_variant_validate(variant.name, variant.value)
                except Exception:
                    summary = (
                        f"Setting the variant '{variant.name}' of the '{pkg_name}' package "
                        f"to the invalid value '{str(variant)}'"
                    )
                    errors.append(make_error(preferences, summary))

    return errors


#: Sanity checks on package directives
package_directives = AuditClass(
    group="packages",
@@ -860,7 +776,7 @@ def _version_constraints_are_satisfiable_by_some_version_in_repo(pkgs, error_cls
            )
        except Exception:
            summary = (
                "{0}: dependency on {1} cannot be satisfied by known versions of {1.name}"
                "{0}: dependency on {1} cannot be satisfied " "by known versions of {1.name}"
            ).format(pkg_name, s)
            details = ["happening in " + filename]
            if dependency_pkg_cls is not None:
@@ -902,53 +818,6 @@ def _analyze_variants_in_directive(pkg, constraint, directive, error_cls):
    return errors


@package_directives
def _named_specs_in_when_arguments(pkgs, error_cls):
    """Reports named specs in the 'when=' attribute of a directive.

    Note that 'conflicts' is the only directive allowing that.
    """
    errors = []
    for pkg_name in pkgs:
        pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)

        def _extracts_errors(triggers, summary):
            _errors = []
            for trigger in list(triggers):
                when_spec = spack.spec.Spec(trigger)
                if when_spec.name is not None and when_spec.name != pkg_name:
                    details = [f"using '{trigger}', should be '^{trigger}'"]
                    _errors.append(error_cls(summary=summary, details=details))
            return _errors

        for dname, triggers in pkg_cls.dependencies.items():
            summary = f"{pkg_name}: wrong 'when=' condition for the '{dname}' dependency"
            errors.extend(_extracts_errors(triggers, summary))

        for vname, (variant, triggers) in pkg_cls.variants.items():
            summary = f"{pkg_name}: wrong 'when=' condition for the '{vname}' variant"
            errors.extend(_extracts_errors(triggers, summary))

        for provided, triggers in pkg_cls.provided.items():
            summary = f"{pkg_name}: wrong 'when=' condition for the '{provided}' virtual"
            errors.extend(_extracts_errors(triggers, summary))

        for _, triggers in pkg_cls.requirements.items():
            triggers = [when_spec for when_spec, _, _ in triggers]
            summary = f"{pkg_name}: wrong 'when=' condition in 'requires' directive"
            errors.extend(_extracts_errors(triggers, summary))

        triggers = list(pkg_cls.patches)
        summary = f"{pkg_name}: wrong 'when=' condition in 'patch' directives"
        errors.extend(_extracts_errors(triggers, summary))

        triggers = list(pkg_cls.resources)
        summary = f"{pkg_name}: wrong 'when=' condition in 'resource' directives"
        errors.extend(_extracts_errors(triggers, summary))

    return llnl.util.lang.dedupe(errors)
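Aside: what that audit flags is easiest to see in a hypothetical package fragment (assumes Spack's package DSL, not standalone-runnable; package names are made up):

```python
# Hypothetical package.py fragment illustrating the named-spec check.
class Example(Package):
    # Flagged: "mpi@3:" is a *named* spec whose name ('mpi') differs from
    # this package, so the condition does not mean what it appears to mean.
    depends_on("hdf5", when="mpi@3:")

    # Suggested spelling from the audit's details: a leading ^ makes it an
    # anonymous spec constraining this package's 'mpi' dependency.
    depends_on("hdf5", when="^mpi@3:")
```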
#: Sanity checks on package directives
external_detection = AuditClass(
    group="externals",
@@ -69,7 +69,6 @@
BUILD_CACHE_RELATIVE_PATH = "build_cache"
BUILD_CACHE_KEYS_RELATIVE_PATH = "_pgp"
CURRENT_BUILD_CACHE_LAYOUT_VERSION = 1
FORWARD_COMPAT_BUILD_CACHE_LAYOUT_VERSION = 2


class BuildCacheDatabase(spack_db.Database):
@@ -1697,7 +1696,7 @@ def download_tarball(spec, unsigned=False, mirrors_for_spec=None):
            try:
                _get_valid_spec_file(
                    local_specfile_stage.save_filename,
                    FORWARD_COMPAT_BUILD_CACHE_LAYOUT_VERSION,
                    CURRENT_BUILD_CACHE_LAYOUT_VERSION,
                )
            except InvalidMetadataFile as e:
                tty.warn(
@@ -1738,7 +1737,7 @@ def download_tarball(spec, unsigned=False, mirrors_for_spec=None):

            try:
                _get_valid_spec_file(
                    local_specfile_path, FORWARD_COMPAT_BUILD_CACHE_LAYOUT_VERSION
                    local_specfile_path, CURRENT_BUILD_CACHE_LAYOUT_VERSION
                )
            except InvalidMetadataFile as e:
                tty.warn(
@@ -2027,12 +2026,11 @@ def _extract_inner_tarball(spec, filename, extract_to, unsigned, remote_checksum


def _tar_strip_component(tar: tarfile.TarFile, prefix: str):
    """Yield all members of tarfile that start with given prefix, and strip that prefix (including
    symlinks)"""
    """Strip the top-level directory `prefix` from the member names in a tarfile."""
    # Including trailing /, otherwise we end up with absolute paths.
    regex = re.compile(re.escape(prefix) + "/*")

    # Only yield members in the package prefix.
    # Remove the top-level directory from the member (link)names.
    # Note: when a tarfile is created, relative in-prefix symlinks are
    # expanded to matching member names of tarfile entries. So, we have
    # to ensure that those are updated too.
@@ -2040,14 +2038,12 @@ def _tar_strip_component(tar: tarfile.TarFile, prefix: str):
    # them.
    for m in tar.getmembers():
        result = regex.match(m.name)
        if not result:
            continue
        assert result is not None
        m.name = m.name[result.end() :]
        if m.linkname:
            result = regex.match(m.linkname)
            if result:
                m.linkname = m.linkname[result.end() :]
        yield m
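Aside: the generator form of `_tar_strip_component` is designed to feed `TarFile.extractall(members=...)`, which is exactly the call the hunks below change. A self-contained sketch of that pattern (standard library only; the archive name is hypothetical):

```python
import re
import tarfile

def strip_component(tar: tarfile.TarFile, prefix: str):
    """Yield members under `prefix` with that leading directory removed."""
    regex = re.compile(re.escape(prefix) + "/*")
    for m in tar.getmembers():
        match = regex.match(m.name)
        if not match:
            continue  # skip entries outside the prefix
        m.name = m.name[match.end():]
        if m.linkname:  # keep in-archive symlink targets consistent
            link = regex.match(m.linkname)
            if link:
                m.linkname = m.linkname[link.end():]
        yield m

# Usage: extract "pkg-1.0/..." entries directly into ./dest
with tarfile.open("pkg-1.0.tar.gz") as tar:
    tar.extractall(path="dest", members=strip_component(tar, "pkg-1.0"))
```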
def extract_tarball(spec, download_result, unsigned=False, force=False, timer=timer.NULL_TIMER):
@@ -2071,7 +2067,7 @@ def extract_tarball(spec, download_result, unsigned=False, force=False, timer=ti

    specfile_path = download_result["specfile_stage"].save_filename
    spec_dict, layout_version = _get_valid_spec_file(
        specfile_path, FORWARD_COMPAT_BUILD_CACHE_LAYOUT_VERSION
        specfile_path, CURRENT_BUILD_CACHE_LAYOUT_VERSION
    )
    bchecksum = spec_dict["binary_cache_checksum"]

@@ -2090,7 +2086,7 @@ def extract_tarball(spec, download_result, unsigned=False, force=False, timer=ti
            _delete_staged_downloads(download_result)
            shutil.rmtree(tmpdir)
            raise e
    elif 1 <= layout_version <= 2:
    elif layout_version == 1:
        # Newer buildcache layout: the .spack file contains just
        # in the install tree, the signature, if it exists, is
        # wrapped around the spec.json at the root. If sig verify
@@ -2117,10 +2113,8 @@ def extract_tarball(spec, download_result, unsigned=False, force=False, timer=ti
        try:
            with closing(tarfile.open(tarfile_path, "r")) as tar:
                # Remove install prefix from tarfile to extract directly into spec.prefix
                tar.extractall(
                    path=spec.prefix,
                    members=_tar_strip_component(tar, prefix=_ensure_common_prefix(tar)),
                )
                _tar_strip_component(tar, prefix=_ensure_common_prefix(tar))
                tar.extractall(path=spec.prefix)
        except Exception:
            shutil.rmtree(spec.prefix, ignore_errors=True)
            _delete_staged_downloads(download_result)
@@ -2155,47 +2149,20 @@ def extract_tarball(spec, download_result, unsigned=False, force=False, timer=ti


def _ensure_common_prefix(tar: tarfile.TarFile) -> str:
    # Find the lowest `binary_distribution` file (hard-coded forward slash is on purpose).
    binary_distribution = min(
        (
            e.name
            for e in tar.getmembers()
            if e.isfile() and e.name.endswith(".spack/binary_distribution")
        ),
        key=len,
        default=None,
    )
    # Get the shortest length directory.
    common_prefix = min((e.name for e in tar.getmembers() if e.isdir()), key=len, default=None)

    if binary_distribution is None:
        raise ValueError("Tarball is not a Spack package, missing binary_distribution file")
    if common_prefix is None:
        raise ValueError("Tarball does not contain a common prefix")

    pkg_path = pathlib.PurePosixPath(binary_distribution).parent.parent

    # Even the most ancient Spack version has required to list the dir of the package itself, so
    # guard against broken tarballs where `path.parent.parent` is empty.
    if pkg_path == pathlib.PurePosixPath():
        raise ValueError("Invalid tarball, missing package prefix dir")

    pkg_prefix = str(pkg_path)

    # Ensure all tar entries are in the pkg_prefix dir, and if they're not, they should be parent
    # dirs of it.
    has_prefix = False
    # Validate that each file starts with the prefix
    for member in tar.getmembers():
        stripped = member.name.rstrip("/")
        if not (
            stripped.startswith(pkg_prefix) or member.isdir() and pkg_prefix.startswith(stripped)
        ):
            raise ValueError(f"Tarball contains file {stripped} outside of prefix {pkg_prefix}")
        if member.isdir() and stripped == pkg_prefix:
            has_prefix = True
        if not member.name.startswith(common_prefix):
            raise ValueError(
                f"Tarball contains file {member.name} outside of prefix {common_prefix}"
            )

    # This is technically not required, but let's be defensive about the existence of the package
    # prefix dir.
    if not has_prefix:
        raise ValueError(f"Tarball does not contain a common prefix {pkg_prefix}")

    return pkg_prefix
    return common_prefix


def install_root_node(spec, unsigned=False, force=False, sha256=None):
@@ -2380,9 +2347,6 @@ def get_keys(install=False, trust=False, force=False, mirrors=None):

    for mirror in mirror_collection.values():
        fetch_url = mirror.fetch_url
        # TODO: oci:// does not support signing.
        if fetch_url.startswith("oci://"):
            continue
        keys_url = url_util.join(
            fetch_url, BUILD_CACHE_RELATIVE_PATH, BUILD_CACHE_KEYS_RELATIVE_PATH
        )
@@ -213,8 +213,7 @@ def _root_spec(spec_str: str) -> str:
    if str(spack.platforms.host()) == "darwin":
        spec_str += " %apple-clang"
    elif str(spack.platforms.host()) == "windows":
        # TODO (johnwparent): Remove version constraint when clingo patch is up
        spec_str += " %msvc@:19.37"
        spec_str += " %msvc"
    else:
        spec_str += " %gcc"
@@ -324,29 +324,19 @@ def set_compiler_environment_variables(pkg, env):
    # ttyout, ttyerr, etc.
    link_dir = spack.paths.build_env_path

    # Set SPACK compiler variables so that our wrapper knows what to
    # call. If there is no compiler configured then use a default
    # wrapper which will emit an error if it is used.
    # Set SPACK compiler variables so that our wrapper knows what to call
    if compiler.cc:
        env.set("SPACK_CC", compiler.cc)
        env.set("CC", os.path.join(link_dir, compiler.link_paths["cc"]))
    else:
        env.set("CC", os.path.join(link_dir, "cc"))
    if compiler.cxx:
        env.set("SPACK_CXX", compiler.cxx)
        env.set("CXX", os.path.join(link_dir, compiler.link_paths["cxx"]))
    else:
        env.set("CC", os.path.join(link_dir, "c++"))
    if compiler.f77:
        env.set("SPACK_F77", compiler.f77)
        env.set("F77", os.path.join(link_dir, compiler.link_paths["f77"]))
    else:
        env.set("F77", os.path.join(link_dir, "f77"))
    if compiler.fc:
        env.set("SPACK_FC", compiler.fc)
        env.set("FC", os.path.join(link_dir, compiler.link_paths["fc"]))
    else:
        env.set("FC", os.path.join(link_dir, "fc"))

    # Set SPACK compiler rpath flags so that our wrapper knows what to use
    env.set("SPACK_CC_RPATH_ARG", compiler.cc_rpath_arg)
@@ -753,16 +743,15 @@ def setup_package(pkg, dirty, context: Context = Context.BUILD):
    set_compiler_environment_variables(pkg, env_mods)
    set_wrapper_variables(pkg, env_mods)

    # Platform specific setup goes before package specific setup. This is for setting
    # defaults like MACOSX_DEPLOYMENT_TARGET on macOS.
    platform = spack.platforms.by_name(pkg.spec.architecture.platform)
    target = platform.target(pkg.spec.architecture.target)
    platform.setup_platform_environment(pkg, env_mods)

    tty.debug("setup_package: grabbing modifications from dependencies")
    env_mods.extend(setup_context.get_env_modifications())
    tty.debug("setup_package: collected all modifications from dependencies")

    # architecture specific setup
    platform = spack.platforms.by_name(pkg.spec.architecture.platform)
    target = platform.target(pkg.spec.architecture.target)
    platform.setup_platform_environment(pkg, env_mods)

    if context == Context.TEST:
        env_mods.prepend_path("PATH", ".")
    elif context == Context.BUILD and not dirty and not env_mods.is_unset("CPATH"):
@@ -789,7 +778,7 @@ def setup_package(pkg, dirty, context: Context = Context.BUILD):
        for mod in ["cray-mpich", "cray-libsci"]:
            module("unload", mod)

    if target and target.module_name:
    if target.module_name:
        load_module(target.module_name)

    load_external_modules(pkg)
@@ -1098,7 +1087,7 @@ def _setup_pkg_and_run(
        # that the parent process is not going to read from it till we
        # are done with the child, so we undo Python's precaution.
        if input_multiprocess_fd is not None:
            sys.stdin = os.fdopen(input_multiprocess_fd.fd, closefd=False)
            sys.stdin = os.fdopen(input_multiprocess_fd.fd)

        pkg = serialized_pkg.restore()

@@ -1333,7 +1322,7 @@ def make_stack(tb, stack=None):
            # don't provide context if the code is actually in the base classes.
            obj = frame.f_locals["self"]
            func = getattr(obj, tb.tb_frame.f_code.co_name, "")
            if func and hasattr(func, "__qualname__"):
            if func:
                typename, *_ = func.__qualname__.partition(".")
                if isinstance(obj, CONTEXT_BASES) and typename not in basenames:
                    break
@@ -34,6 +34,11 @@ def cmake_cache_option(name, boolean_value, comment="", force=False):
    return 'set({0} {1} CACHE BOOL "{2}"{3})\n'.format(name, value, comment, force_str)


def cmake_cache_filepath(name, value, comment=""):
    """Generate a string for a cmake cache variable of type FILEPATH"""
    return 'set({0} "{1}" CACHE FILEPATH "{2}")\n'.format(name, value, comment)
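Aside: a quick check of what the new helper emits (the function body repeated from the hunk above; the variable name and path are just examples):

```python
def cmake_cache_filepath(name, value, comment=""):
    """Generate a string for a cmake cache variable of type FILEPATH"""
    return 'set({0} "{1}" CACHE FILEPATH "{2}")\n'.format(name, value, comment)

print(cmake_cache_filepath("CMAKE_HIP_COMPILER", "/opt/rocm/llvm/bin/clang++"))
# -> set(CMAKE_HIP_COMPILER "/opt/rocm/llvm/bin/clang++" CACHE FILEPATH "")
```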
class CachedCMakeBuilder(CMakeBuilder):
    #: Phases of a Cached CMake package
    #: Note: the initconfig phase is used for developer builds as a final phase to stop on
@@ -257,6 +262,15 @@ def initconfig_hardware_entries(self):
            entries.append(
                cmake_cache_path("HIP_CXX_COMPILER", "{0}".format(self.spec["hip"].hipcc))
            )
            llvm_bin = spec["llvm-amdgpu"].prefix.bin
            llvm_prefix = spec["llvm-amdgpu"].prefix
            # Some ROCm systems seem to point to /<path>/rocm-<ver>/ and
            # others point to /<path>/rocm-<ver>/llvm
            if os.path.basename(os.path.normpath(llvm_prefix)) != "llvm":
                llvm_bin = os.path.join(llvm_prefix, "llvm/bin/")
            entries.append(
                cmake_cache_filepath("CMAKE_HIP_COMPILER", os.path.join(llvm_bin, "clang++"))
            )
            archs = self.spec.variants["amdgpu_target"].value
            if archs[0] != "none":
                arch_str = ";".join(archs)
@@ -277,7 +291,7 @@ def std_initconfig_entries(self):
            "#------------------{0}".format("-" * 60),
            "# CMake executable path: {0}".format(self.pkg.spec["cmake"].command.path),
            "#------------------{0}\n".format("-" * 60),
            cmake_cache_path("CMAKE_PREFIX_PATH", cmake_prefix_path),
            cmake_cache_string("CMAKE_PREFIX_PATH", cmake_prefix_path),
            self.define_cmake_cache_from_variant("CMAKE_BUILD_TYPE", "build_type"),
        ]
@@ -46,7 +46,22 @@
from spack.reporters import CDash, CDashConfiguration
from spack.reporters.cdash import build_stamp as cdash_build_stamp

JOB_RETRY_CONDITIONS = ["always"]
# See https://docs.gitlab.com/ee/ci/yaml/#retry for descriptions of conditions
JOB_RETRY_CONDITIONS = [
    # "always",
    "unknown_failure",
    "script_failure",
    "api_failure",
    "stuck_or_timeout_failure",
    "runner_system_failure",
    "runner_unsupported",
    "stale_schedule",
    # "job_execution_timeout",
    "archived_failure",
    "unmet_prerequisites",
    "scheduler_failure",
    "data_integrity_failure",
]

TEMP_STORAGE_MIRROR_NAME = "ci_temporary_mirror"
SPACK_RESERVED_TAGS = ["public", "protected", "notary"]
@@ -2,8 +2,6 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import warnings

import llnl.util.tty as tty
import llnl.util.tty.colify
import llnl.util.tty.color as cl
@@ -54,10 +52,8 @@ def setup_parser(subparser):


def configs(parser, args):
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        reports = spack.audit.run_group(args.subcommand)
        _process_reports(reports)
    reports = spack.audit.run_group(args.subcommand)
    _process_reports(reports)


def packages(parser, args):
@@ -7,14 +7,13 @@
import glob
import hashlib
import json
import multiprocessing
import multiprocessing.pool
import os
import shutil
import sys
import tempfile
import urllib.request
from typing import Dict, List, Optional, Tuple, Union
from typing import Dict, List, Optional, Tuple

import llnl.util.tty as tty
from llnl.string import plural
@@ -308,30 +307,8 @@ def _progress(i: int, total: int):
    return ""


class NoPool:
    def map(self, func, args):
        return [func(a) for a in args]

    def starmap(self, func, args):
        return [func(*a) for a in args]

    def __enter__(self):
        return self

    def __exit__(self, *args):
        pass


MaybePool = Union[multiprocessing.pool.Pool, NoPool]


def _make_pool() -> MaybePool:
    """Can't use threading because it's unsafe, and can't use spawned processes because of globals.
    That leaves only forking"""
    if multiprocessing.get_start_method() == "fork":
        return multiprocessing.pool.Pool(determine_number_of_jobs(parallel=True))
    else:
        return NoPool()
def _make_pool():
    return multiprocessing.pool.Pool(determine_number_of_jobs(parallel=True))
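Aside: a sketch of how the fork-only pool is meant to be consumed; `NoPool` lets the same `with`/`starmap` code run serially when fork is unavailable. It assumes the `NoPool`/`_make_pool` definitions from the hunk above, and the worker function is hypothetical:

```python
def _work(name, size):
    # hypothetical per-spec task
    return f"{name}: {size} bytes"

with _make_pool() as pool:
    results = pool.starmap(_work, [("zlib", 120), ("cmake", 450)])
print(results)  # same result whether pool is a fork Pool or a NoPool
```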
def push_fn(args):
@@ -614,7 +591,7 @@ def _push_oci(
    image_ref: ImageReference,
    installed_specs_with_deps: List[Spec],
    tmpdir: str,
    pool: MaybePool,
    pool: multiprocessing.pool.Pool,
) -> List[str]:
    """Push specs to an OCI registry

@@ -715,10 +692,11 @@ def _config_from_tag(image_ref: ImageReference, tag: str) -> Optional[dict]:
    return config if "spec" in config else None


def _update_index_oci(image_ref: ImageReference, tmpdir: str, pool: MaybePool) -> None:
    request = urllib.request.Request(url=image_ref.tags_url())
    response = spack.oci.opener.urlopen(request)
    spack.oci.opener.ensure_status(request, response, 200)
def _update_index_oci(
    image_ref: ImageReference, tmpdir: str, pool: multiprocessing.pool.Pool
) -> None:
    response = spack.oci.opener.urlopen(urllib.request.Request(url=image_ref.tags_url()))
    spack.oci.opener.ensure_status(response, 200)
    tags = json.load(response)["tags"]

    # Fetch all image config files in parallel
@@ -61,7 +61,7 @@ def graph(parser, args):
        args.dot = True
    env = ev.active_environment()
    if env:
        specs = env.concrete_roots()
        specs = env.all_specs()
    else:
        specs = spack.store.STORE.db.query()
@@ -327,7 +327,7 @@ def _variants_by_name_when(pkg):
    """Adaptor to get variants keyed by { name: { when: { [Variant...] } }."""
    # TODO: replace with pkg.variants_by_name(when=True) when unified directive dicts are merged.
    variants = {}
    for name, (variant, whens) in sorted(pkg.variants.items()):
    for name, (variant, whens) in pkg.variants.items():
        for when in whens:
            variants.setdefault(name, {}).setdefault(when, []).append(variant)
    return variants
@@ -112,16 +112,16 @@ def _to_dict(compiler):
def get_compiler_config(scope=None, init_config=True):
    """Return the compiler configuration for the specified architecture."""

    config = spack.config.CONFIG.get("compilers", scope=scope) or []
    config = spack.config.get("compilers", scope=scope) or []
    if config or not init_config:
        return config

    merged_config = spack.config.CONFIG.get("compilers")
    merged_config = spack.config.get("compilers")
    if merged_config:
        return config

    _init_compiler_config(scope=scope)
    config = spack.config.CONFIG.get("compilers", scope=scope)
    config = spack.config.get("compilers", scope=scope)
    return config


@@ -154,14 +154,6 @@ def add_compilers_to_config(compilers, scope=None, init_config=True):
    """
    compiler_config = get_compiler_config(scope, init_config)
    for compiler in compilers:
        if not compiler.cc:
            tty.debug(f"{compiler.spec} does not have a C compiler")
        if not compiler.cxx:
            tty.debug(f"{compiler.spec} does not have a C++ compiler")
        if not compiler.f77:
            tty.debug(f"{compiler.spec} does not have a Fortran77 compiler")
        if not compiler.fc:
            tty.debug(f"{compiler.spec} does not have a Fortran compiler")
        compiler_config.append(_to_dict(compiler))
    spack.config.set("compilers", compiler_config, scope=scope)

@@ -514,10 +506,9 @@ def get_compilers(config, cspec=None, arch_spec=None):
    for items in config:
        items = items["compiler"]

        # We might use equality here.
        if cspec and not spack.spec.parse_with_version_concrete(
            items["spec"], compiler=True
        ).satisfies(cspec):
        # NOTE: in principle this should be equality not satisfies, but config can still
        # be written in old format gcc@10.1.0 instead of gcc@=10.1.0.
        if cspec and not cspec.satisfies(items["spec"]):
            continue

        # If an arch spec is given, confirm that this compiler
@@ -9,8 +9,6 @@
import sys
from typing import Dict, List, Set

import archspec.cpu

import spack.compiler
import spack.operating_systems.windows_os
import spack.platforms
@@ -187,9 +185,6 @@ def __init__(self, *args, **kwargs):
        # get current platform architecture and format for vcvars argument
        arch = spack.platforms.real_host().default.lower()
        arch = arch.replace("-", "_")
        if str(archspec.cpu.host().family) == "x86_64":
            arch = "amd64"

        self.vcvars_call = VCVarsInvocation(vcvars_script_path, arch, self.msvc_version)
        env_cmds.append(self.vcvars_call)
        # Below is a check for a valid fortran path
@@ -463,8 +463,6 @@ def _depends_on(pkg, spec, when=None, type=dt.DEFAULT_TYPES, patches=None):
    dep_spec = spack.spec.Spec(spec)
    if not dep_spec.name:
        raise DependencyError("Invalid dependency specification in package '%s':" % pkg.name, spec)
    elif dep_spec.name in ("c", "cxx", "fortran"):  # forward compat for language deps
        return
    if pkg.name == dep_spec.name:
        raise CircularReferenceError("Package '%s' cannot depend on itself." % pkg.name)
@@ -380,13 +380,14 @@ def _print_timer(pre: str, pkg_id: str, timer: timer.BaseTimer) -> None:


def _install_from_cache(
    pkg: "spack.package_base.PackageBase", explicit: bool, unsigned: bool = False
    pkg: "spack.package_base.PackageBase", cache_only: bool, explicit: bool, unsigned: bool = False
) -> bool:
    """
    Install the package from binary cache
    Extract the package from binary cache

    Args:
        pkg: package to install from the binary cache
        cache_only: only extract from binary cache
        explicit: ``True`` if installing the package was explicitly
            requested by the user, otherwise, ``False``
        unsigned: ``True`` if binary package signatures to be checked,
@@ -398,11 +399,15 @@ def _install_from_cache(
    installed_from_cache = _try_install_from_binary_cache(
        pkg, explicit, unsigned=unsigned, timer=t
    )
    pkg_id = package_id(pkg)
    if not installed_from_cache:
        pre = f"No binary for {pkg_id} found"
        if cache_only:
            tty.die(f"{pre} when cache-only specified")

        tty.msg(f"{pre}: installing from source")
        return False
    t.stop()

    pkg_id = package_id(pkg)
    tty.debug(f"Successfully extracted {pkg_id} from binary cache")

    _write_timer_json(pkg, t, True)
@@ -1330,6 +1335,7 @@ def _prepare_for_install(self, task: BuildTask) -> None:
        """
        install_args = task.request.install_args
        keep_prefix = install_args.get("keep_prefix")
        restage = install_args.get("restage")

        # Make sure the package is ready to be locally installed.
        self._ensure_install_ready(task.pkg)
@@ -1361,6 +1367,10 @@ def _prepare_for_install(self, task: BuildTask) -> None:
            else:
                tty.debug(f"{task.pkg_id} is partially installed")

        # Destroy the stage for a locally installed, non-DIYStage, package
        if restage and task.pkg.stage.managed_by_spack:
            task.pkg.stage.destroy()

        if (
            rec
            and installed_in_db
@@ -1661,16 +1671,11 @@ def _install_task(self, task: BuildTask, install_status: InstallStatus) -> None:
        task.status = STATUS_INSTALLING

        # Use the binary cache if requested
        if use_cache:
            if _install_from_cache(pkg, explicit, unsigned):
                self._update_installed(task)
                if task.compiler:
                    self._add_compiler_package_to_config(pkg)
                return
            elif cache_only:
                raise InstallError("No binary found when cache-only was specified", pkg=pkg)
            else:
                tty.msg(f"No binary for {pkg_id} found: installing from source")
        if use_cache and _install_from_cache(pkg, cache_only, explicit, unsigned):
            self._update_installed(task)
            if task.compiler:
                self._add_compiler_package_to_config(pkg)
            return

        pkg.run_tests = tests if isinstance(tests, bool) else pkg.name in tests

@@ -1686,10 +1691,6 @@ def _install_task(self, task: BuildTask, install_status: InstallStatus) -> None:
        try:
            self._setup_install_dir(pkg)

            # Create stage object now and let it be serialized for the child process. That
            # way monkeypatch in tests works correctly.
            pkg.stage

            # Create a child process to do the actual installation.
            # Preserve verbosity settings across installs.
            spack.package_base.PackageBase._verbose = spack.build_environment.start_build_process(
@@ -2222,6 +2223,11 @@ def install(self) -> None:
                if not keep_prefix and not action == InstallAction.OVERWRITE:
                    pkg.remove_prefix()

                # The subprocess *may* have removed the build stage. Mark it
                # not created so that the next time pkg.stage is invoked, we
                # check the filesystem for it.
                pkg.stage.created = False

                # Perform basic task cleanup for the installed spec to
                # include downgrading the write to a read lock
                self._cleanup_task(pkg)
@@ -2291,9 +2297,6 @@ def __init__(self, pkg: "spack.package_base.PackageBase", install_args: dict):
        # whether to keep the build stage after installation
        self.keep_stage = install_args.get("keep_stage", False)

        # whether to restage
        self.restage = install_args.get("restage", False)

        # whether to skip the patch phase
        self.skip_patch = install_args.get("skip_patch", False)

@@ -2324,13 +2327,9 @@ def __init__(self, pkg: "spack.package_base.PackageBase", install_args: dict):
    def run(self) -> bool:
        """Main entry point from ``build_process`` to kick off install in child."""

        stage = self.pkg.stage
        stage.keep = self.keep_stage
        self.pkg.stage.keep = self.keep_stage

        if self.restage:
            stage.destroy()

        with stage:
        with self.pkg.stage:
            self.timer.start("stage")

            if not self.fake:
@@ -1016,16 +1016,14 @@ def _main(argv=None):
    bootstrap_context = bootstrap.ensure_bootstrap_configuration()

    with bootstrap_context:
        return finish_parse_and_run(parser, cmd_name, args, env_format_error)
        return finish_parse_and_run(parser, cmd_name, args.command, env_format_error)


def finish_parse_and_run(parser, cmd_name, main_args, env_format_error):
def finish_parse_and_run(parser, cmd_name, cmd, env_format_error):
    """Finish parsing after we know the command to run."""
    # add the found command to the parser and re-run then re-parse
    command = parser.add_command(cmd_name)
    args, unknown = parser.parse_known_args(main_args.command)
    # we need to inherit verbose since the install command checks for it
    args.verbose = main_args.verbose
    args, unknown = parser.parse_known_args()

    # Now that we know what command this is and what its args are, determine
    # whether we can continue with a bad environment and raise if not.
@@ -93,7 +93,7 @@ def _filter_compiler_wrappers_impl(pkg_or_builder):
    replacements = []

    for idx, (env_var, compiler_path) in enumerate(compiler_vars):
        if env_var in os.environ and compiler_path is not None:
        if env_var in os.environ:
            # filter spack wrapper and links to spack wrapper in case
            # build system runs realpath
            wrapper = os.environ[env_var]
@@ -134,7 +134,7 @@ def upload_blob(
        return True

    # Otherwise, do another PUT request.
    spack.oci.opener.ensure_status(request, response, 202)
    spack.oci.opener.ensure_status(response, 202)
    assert "Location" in response.headers

    # Can be absolute or relative, joining handles both
@@ -143,16 +143,19 @@ def upload_blob(
    )
    f.seek(0)

    request = Request(
        url=upload_url,
        method="PUT",
        data=f,
        headers={"Content-Type": "application/octet-stream", "Content-Length": str(file_size)},
    response = _urlopen(
        Request(
            url=upload_url,
            method="PUT",
            data=f,
            headers={
                "Content-Type": "application/octet-stream",
                "Content-Length": str(file_size),
            },
        )
    )

    response = _urlopen(request)

    spack.oci.opener.ensure_status(request, response, 201)
    spack.oci.opener.ensure_status(response, 201)

    # print elapsed time and # MB/s
    _log_upload_progress(digest, file_size, time.time() - start)
@@ -186,16 +189,16 @@ def upload_manifest(
    if not tag:
        ref = ref.with_digest(digest)

    request = Request(
        url=ref.manifest_url(),
        method="PUT",
        data=data,
        headers={"Content-Type": oci_manifest["mediaType"]},
    response = _urlopen(
        Request(
            url=ref.manifest_url(),
            method="PUT",
            data=data,
            headers={"Content-Type": oci_manifest["mediaType"]},
        )
    )

    response = _urlopen(request)

    spack.oci.opener.ensure_status(request, response, 201)
    spack.oci.opener.ensure_status(response, 201)
    return digest, size
@@ -310,15 +310,19 @@ def http_error_401(self, req: Request, fp, code, msg, headers):
        # Login failed, avoid infinite recursion where we go back and
        # forth between auth server and registry
        if hasattr(req, "login_attempted"):
            raise spack.util.web.DetailedHTTPError(
                req, code, f"Failed to login: {msg}", headers, fp
            raise urllib.error.HTTPError(
                req.full_url, code, f"Failed to login to {req.full_url}: {msg}", headers, fp
            )

        # On 401 Unauthorized, parse the WWW-Authenticate header
        # to determine what authentication is required
        if "WWW-Authenticate" not in headers:
            raise spack.util.web.DetailedHTTPError(
                req, code, "Cannot login to registry, missing WWW-Authenticate header", headers, fp
            raise urllib.error.HTTPError(
                req.full_url,
                code,
                "Cannot login to registry, missing WWW-Authenticate header",
                headers,
                fp,
            )

        header_value = headers["WWW-Authenticate"]
@@ -326,8 +330,8 @@ def http_error_401(self, req: Request, fp, code, msg, headers):
        try:
            challenge = get_bearer_challenge(parse_www_authenticate(header_value))
        except ValueError as e:
            raise spack.util.web.DetailedHTTPError(
                req,
            raise urllib.error.HTTPError(
                req.full_url,
                code,
                f"Cannot login to registry, malformed WWW-Authenticate header: {header_value}",
                headers,
@@ -336,8 +340,8 @@ def http_error_401(self, req: Request, fp, code, msg, headers):

        # If there is no bearer challenge, we can't handle it
        if not challenge:
            raise spack.util.web.DetailedHTTPError(
                req,
            raise urllib.error.HTTPError(
                req.full_url,
                code,
                f"Cannot login to registry, unsupported authentication scheme: {header_value}",
                headers,
@@ -352,8 +356,8 @@ def http_error_401(self, req: Request, fp, code, msg, headers):
                timeout=req.timeout,
            )
        except ValueError as e:
            raise spack.util.web.DetailedHTTPError(
                req,
            raise urllib.error.HTTPError(
                req.full_url,
                code,
                f"Cannot login to registry, failed to obtain bearer token: {e}",
                headers,
@@ -408,13 +412,13 @@ def create_opener():
    return opener


def ensure_status(request: urllib.request.Request, response: HTTPResponse, status: int):
def ensure_status(response: HTTPResponse, status: int):
    """Raise an error if the response status is not the expected one."""
    if response.status == status:
        return

    raise spack.util.web.DetailedHTTPError(
        request, response.status, response.reason, response.info(), None
    raise urllib.error.HTTPError(
        response.geturl(), response.status, response.reason, response.info(), None
    )
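Aside: the `ensure_status` pattern above, reduced to the plain standard library (no Spack imports; the URL is a placeholder and would need to resolve for the example to run end to end):

```python
import urllib.error
import urllib.request

def ensure_status(response, status: int):
    """Raise if the response did not come back with the expected status."""
    if response.status != status:
        raise urllib.error.HTTPError(
            response.geturl(), response.status, response.reason, response.info(), None
        )

response = urllib.request.urlopen("https://example.com/v2/tags/list")
ensure_status(response, 200)  # raises HTTPError on any other status
```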
@@ -143,7 +143,6 @@ def __init__(self):
            "12": "monterey",
            "13": "ventura",
            "14": "sonoma",
            "15": "sequoia",
        }

        version = macos_version()
@@ -32,6 +32,7 @@
from spack.build_systems.bundle import BundlePackage
from spack.build_systems.cached_cmake import (
    CachedCMakePackage,
    cmake_cache_filepath,
    cmake_cache_option,
    cmake_cache_path,
    cmake_cache_string,
@@ -24,9 +24,8 @@
import textwrap
import time
import traceback
import typing
import warnings
from typing import Any, Callable, Dict, Iterable, List, Optional, Set, Tuple, Type, TypeVar, Union
from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Type, TypeVar

import llnl.util.filesystem as fsys
import llnl.util.tty as tty

@@ -683,13 +682,13 @@ def __init__(self, spec):
@classmethod
def possible_dependencies(
cls,
transitive: bool = True,
expand_virtuals: bool = True,
transitive=True,
expand_virtuals=True,
depflag: dt.DepFlag = dt.ALL,
visited: Optional[dict] = None,
missing: Optional[dict] = None,
virtuals: Optional[set] = None,
) -> Dict[str, Set[str]]:
visited=None,
missing=None,
virtuals=None,
):
"""Return dict of possible dependencies of this package.

Args:

@@ -2450,21 +2449,14 @@ def flatten_dependencies(spec, flat_dir):
dep_files.merge(flat_dir + "/" + name)


def possible_dependencies(
*pkg_or_spec: Union[str, spack.spec.Spec, typing.Type[PackageBase]],
transitive: bool = True,
expand_virtuals: bool = True,
depflag: dt.DepFlag = dt.ALL,
missing: Optional[dict] = None,
virtuals: Optional[set] = None,
) -> Dict[str, Set[str]]:
def possible_dependencies(*pkg_or_spec, **kwargs):
"""Get the possible dependencies of a number of packages.

See ``PackageBase.possible_dependencies`` for details.
"""
packages = []
for pos in pkg_or_spec:
if isinstance(pos, PackageMeta) and issubclass(pos, PackageBase):
if isinstance(pos, PackageMeta):
packages.append(pos)
continue

@@ -2477,16 +2469,9 @@ def possible_dependencies(
else:
packages.append(pos.package_class)

visited: Dict[str, Set[str]] = {}
visited = {}
for pkg in packages:
pkg.possible_dependencies(
    visited=visited,
    transitive=transitive,
    expand_virtuals=expand_virtuals,
    depflag=depflag,
    missing=missing,
    virtuals=virtuals,
)
pkg.possible_dependencies(visited=visited, **kwargs)

return visited
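Both variants of the module-level `possible_dependencies` above share one aggregation pattern: a single `visited` mapping is threaded through every per-package call, so the result is the union of all dependency edges. A condensed sketch of that pattern (the `pkg` objects stand in for package classes with the method shown in the hunk):

```python
from typing import Dict, Set

def possible_dependencies(*pkgs, transitive: bool = True) -> Dict[str, Set[str]]:
    # One shared dict of name -> set of dependency names accumulates the
    # edges contributed by every package in turn.
    visited: Dict[str, Set[str]] = {}
    for pkg in pkgs:
        pkg.possible_dependencies(visited=visited, transitive=transitive)
    return visited
```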
@@ -490,7 +490,7 @@ def read(self, stream):
self.index = spack.tag.TagIndex.from_json(stream, self.repository)

def update(self, pkg_fullname):
self.index.update_package(pkg_fullname.split(".")[-1])
self.index.update_package(pkg_fullname)

def write(self, stream):
self.index.to_json(stream)
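The one-line change above strips a repository namespace before updating the tag index, so fully qualified names are stored under their short name. The string operation involved is plain Python and behaves like this:

```python
def short_name(pkg_fullname: str) -> str:
    # "builtin.mpich" -> "mpich"; a bare "mpich" passes through unchanged,
    # because str.split always returns at least one element.
    return pkg_fullname.split(".")[-1]

assert short_name("builtin.mpich") == "mpich"
assert short_name("mpich") == "mpich"
```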
@@ -69,8 +69,6 @@
"patternProperties": {r"\w+": {}},
}

REQUIREMENT_URL = "https://spack.readthedocs.io/en/latest/packages_yaml.html#package-requirements"

#: Properties for inclusion in other schemas
properties = {
"packages": {

@@ -119,7 +117,7 @@
"properties": ["version"],
"message": "setting version preferences in the 'all' section of packages.yaml "
"is deprecated and will be removed in v0.22\n\n\tThese preferences "
"will be ignored by Spack. You can set them only in package-specific sections "
"will be ignored by Spack. You can set them only in package specific sections "
"of the same file.\n",
"error": False,
},

@@ -164,14 +162,10 @@
},
"deprecatedProperties": {
"properties": ["target", "compiler", "providers"],
"message": "setting 'compiler:', 'target:' or 'provider:' preferences in "
"a package-specific section of packages.yaml is deprecated, and will be "
"removed in v0.22.\n\n\tThese preferences will be ignored by Spack, and "
"can be set only in the 'all' section of the same file. "
"You can run:\n\n\t\t$ spack audit configs\n\n\tto get better diagnostics, "
"including files:lines where the deprecated attributes are used.\n\n"
"\tUse requirements to enforce conditions on specific packages: "
f"{REQUIREMENT_URL}\n",
"message": "setting compiler, target or provider preferences in a package "
"specific section of packages.yaml is deprecated, and will be removed in "
"v0.22.\n\n\tThese preferences will be ignored by Spack. You "
"can set them only in the 'all' section of the same file.\n",
"error": False,
},
}
@@ -12,7 +12,6 @@
import pprint
import re
import types
import typing
import warnings
from typing import Callable, Dict, List, NamedTuple, Optional, Sequence, Set, Tuple, Union

@@ -380,7 +379,7 @@ def check_packages_exist(specs):
for spec in specs:
for s in spec.traverse():
try:
check_passed = repo.repo_for_pkg(s).exists(s.name) or repo.is_virtual(s.name)
check_passed = repo.exists(s.name) or repo.is_virtual(s.name)
except Exception as e:
msg = "Cannot find package: {0}".format(str(e))
check_passed = False

@@ -714,7 +713,7 @@ def _get_cause_tree(
(condition_id, set_id) in which the latter idea means that the condition represented by
the former held in the condition set represented by the latter.
"""
seen.add(cause)
seen = set(seen) | set(cause)
parents = [c for e, c in condition_causes if e == cause and c not in seen]
local = "required because %s " % conditions[cause[0]]

@@ -813,14 +812,7 @@ def on_model(model):
errors = sorted(
    [(int(priority), msg, args) for priority, msg, *args in error_args], reverse=True
)
try:
    msg = self.message(errors)
except Exception as e:
    msg = (
        f"unexpected error during concretization [{str(e)}]. "
        f"Please report a bug at https://github.com/spack/spack/issues"
    )
    raise spack.error.SpackError(msg)
msg = self.message(errors)
raise UnsatisfiableSpecError(msg)


@@ -1014,6 +1006,14 @@ def on_model(model):
# record the possible dependencies in the solve
result.possible_dependencies = setup.pkgs

# print any unknown functions in the model
for sym in best_model:
    if sym.name not in ("attr", "error", "opt_criterion"):
        tty.debug(
            "UNKNOWN SYMBOL: %s(%s)"
            % (sym.name, ", ".join([str(s) for s in intermediate_repr(sym.arguments)]))
        )

elif cores:
result.control = self.control
result.cores.extend(cores)

@@ -1118,8 +1118,11 @@ def __init__(self, tests=False):

self.reusable_and_possible = ConcreteSpecsByHash()

self._id_counter = itertools.count()
# id for dummy variables
self._condition_id_counter = itertools.count()
self._trigger_id_counter = itertools.count()
self._trigger_cache = collections.defaultdict(dict)
self._effect_id_counter = itertools.count()
self._effect_cache = collections.defaultdict(dict)

# Caches to optimize the setup phase of the solver
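The counter hunk above is the crux of several later hunks: one side of the compare uses a single shared `_id_counter` so every condition, trigger, and effect gets a globally unique id, while the other keeps three independent `itertools.count()` streams whose values overlap. The difference is easy to see with the standard library alone:

```python
import itertools

# One shared counter: ids are unique across conditions, triggers and effects.
shared = itertools.count()
cond_id, trig_id, eff_id = next(shared), next(shared), next(shared)  # 0, 1, 2

# Three independent counters: the streams overlap, so a condition and a
# trigger can both receive id 0 and must never be compared across kinds.
cond, trig = itertools.count(), itertools.count()
assert next(cond) == next(trig) == 0
```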
@@ -1133,7 +1136,6 @@ def __init__(self, tests=False):

# Set during the call to setup
self.pkgs = None
self.explicitly_required_namespaces = {}

def pkg_version_rules(self, pkg):
"""Output declared versions of a package.

@@ -1146,9 +1148,7 @@ def key_fn(version):
# Origins are sorted by "provenance" first, see the Provenance enumeration above
return version.origin, version.idx

if isinstance(pkg, str):
    pkg = self.pkg_class(pkg)

pkg = packagize(pkg)
declared_versions = self.declared_versions[pkg.name]
partially_sorted_versions = sorted(set(declared_versions), key=key_fn)

@@ -1340,10 +1340,7 @@ def _rule_from_str(
)

def pkg_rules(self, pkg, tests):
pkg = self.pkg_class(pkg)

# Namespace of the package
self.gen.fact(fn.pkg_fact(pkg.name, fn.namespace(pkg.namespace)))
pkg = packagize(pkg)

# versions
self.pkg_version_rules(pkg)

@@ -1521,7 +1518,7 @@ def condition(
# In this way, if a condition can't be emitted but the exception is handled in the caller,
# we won't emit partial facts.

condition_id = next(self._id_counter)
condition_id = next(self._condition_id_counter)
self.gen.fact(fn.pkg_fact(named_cond.name, fn.condition(condition_id)))
self.gen.fact(fn.condition_reason(condition_id, msg))

@@ -1529,7 +1526,7 @@ def condition(

named_cond_key = (str(named_cond), transform_required)
if named_cond_key not in cache:
trigger_id = next(self._id_counter)
trigger_id = next(self._trigger_id_counter)
requirements = self.spec_clauses(named_cond, body=True, required_from=name)

if transform_required:

@@ -1545,7 +1542,7 @@ def condition(
cache = self._effect_cache[named_cond.name]
imposed_spec_key = (str(imposed_spec), transform_imposed)
if imposed_spec_key not in cache:
effect_id = next(self._id_counter)
effect_id = next(self._effect_id_counter)
requirements = self.spec_clauses(imposed_spec, body=False, required_from=name)

if transform_imposed:

@@ -1676,10 +1673,9 @@ def provider_requirements(self):
rules = self._rules_from_requirements(
    virtual_str, requirements, kind=RequirementKind.VIRTUAL
)
if rules:
    self.emit_facts_from_requirement_rules(rules)
    self.trigger_rules()
    self.effect_rules()
self.emit_facts_from_requirement_rules(rules)
self.trigger_rules()
self.effect_rules()

def emit_facts_from_requirement_rules(self, rules: List[RequirementRule]):
"""Generate facts to enforce requirements.

@@ -1806,12 +1802,15 @@ def external_packages(self):
for local_idx, spec in enumerate(external_specs):
msg = "%s available as external when satisfying %s" % (spec.name, spec)

def external_imposition(input_spec, requirements):
    return requirements + [
        fn.attr("external_conditions_hold", input_spec.name, local_idx)
    ]
def external_imposition(input_spec, _):
    return [fn.attr("external_conditions_hold", input_spec.name, local_idx)]

self.condition(spec, spec, msg=msg, transform_imposed=external_imposition)
self.condition(
    spec,
    spack.spec.Spec(spec.name),
    msg=msg,
    transform_imposed=external_imposition,
)
self.possible_versions[spec.name].add(spec.version)
self.gen.newline()

@@ -1833,13 +1832,7 @@ def preferred_variants(self, pkg_name):

# perform validation of the variant and values
spec = spack.spec.Spec(pkg_name)
try:
    spec.update_variant_validate(variant_name, values)
except (spack.variant.InvalidVariantValueError, KeyError, ValueError) as e:
    tty.debug(
        f"[SETUP]: rejected {str(variant)} as a preference for {pkg_name}: {str(e)}"
    )
    continue
spec.update_variant_validate(variant_name, values)

for value in values:
self.variant_values_from_specs.add((pkg_name, variant.name, value))

@@ -1978,7 +1971,7 @@ class Body:
if not spec.concrete:
reserved_names = spack.directives.reserved_names
if not spec.virtual and vname not in reserved_names:
pkg_cls = self.pkg_class(spec.name)
pkg_cls = spack.repo.PATH.get_pkg_class(spec.name)
try:
variant_def, _ = pkg_cls.variants[vname]
except KeyError:

@@ -2097,7 +2090,7 @@ def define_package_versions_and_validate_preferences(
"""Declare any versions in specs not declared in packages."""
packages_yaml = spack.config.get("packages")
for pkg_name in possible_pkgs:
pkg_cls = self.pkg_class(pkg_name)
pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)

# All the versions from the corresponding package.py file. Since concepts
# like being a "develop" version or being preferred exist only at a

@@ -2185,7 +2178,7 @@ def _supported_targets(self, compiler_name, compiler_version, targets):
try:
with warnings.catch_warnings():
warnings.simplefilter("ignore")
target.optimization_flags(compiler_name, str(compiler_version))
target.optimization_flags(compiler_name, compiler_version)
supported.append(target)
except archspec.cpu.UnsupportedMicroarchitecture:
continue

@@ -2557,8 +2550,14 @@ def setup(
reuse: list of concrete specs that can be reused
allow_deprecated: if True adds deprecated versions into the solve
"""
self._condition_id_counter = itertools.count()

# preliminary checks
check_packages_exist(specs)

# get list of all possible dependencies
self.possible_virtuals = set(x.name for x in specs if x.virtual)

node_counter = _create_counter(specs, tests=self.tests)
self.possible_virtuals = node_counter.possible_virtuals()
self.pkgs = node_counter.possible_dependencies()

@@ -2571,10 +2570,6 @@ def setup(
if missing_deps:
raise spack.spec.InvalidDependencyError(spec.name, missing_deps)

for node in spack.traverse.traverse_nodes(specs):
    if node.namespace is not None:
        self.explicitly_required_namespaces[node.name] = node.namespace

# driver is used by all the functions below to add facts and
# rules to generate an ASP program.
self.gen = driver

@@ -2680,21 +2675,23 @@ def setup(
def literal_specs(self, specs):
for spec in specs:
self.gen.h2("Spec: %s" % str(spec))
condition_id = next(self._id_counter)
trigger_id = next(self._id_counter)
condition_id = next(self._condition_id_counter)
trigger_id = next(self._trigger_id_counter)

# Special condition triggered by "literal_solved"
self.gen.fact(fn.literal(trigger_id))
self.gen.fact(fn.pkg_fact(spec.name, fn.condition_trigger(condition_id, trigger_id)))
self.gen.fact(fn.condition_reason(condition_id, f"{spec} requested explicitly"))
self.gen.fact(fn.condition_reason(condition_id, f"{spec} requested from CLI"))

# Effect imposes the spec
imposed_spec_key = str(spec), None
cache = self._effect_cache[spec.name]
if imposed_spec_key in cache:
    effect_id, requirements = cache[imposed_spec_key]
else:
    effect_id = next(self._id_counter)
    requirements = self.spec_clauses(spec)
msg = (
    "literal specs have different requirements. clear cache before computing literals"
)
assert imposed_spec_key not in cache, msg
effect_id = next(self._effect_id_counter)
requirements = self.spec_clauses(spec)
root_name = spec.name
for clause in requirements:
clause_name = clause.args[0]

@@ -2784,13 +2781,6 @@ def _specs_from_requires(self, pkg_name, section):
for s in spec_group[key]:
yield _spec_with_default_name(s, pkg_name)

def pkg_class(self, pkg_name: str) -> typing.Type["spack.package_base.PackageBase"]:
    request = pkg_name
    if pkg_name in self.explicitly_required_namespaces:
        namespace = self.explicitly_required_namespaces[pkg_name]
        request = f"{namespace}.{pkg_name}"
    return spack.repo.PATH.get_pkg_class(request)
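The `pkg_class` helper in the last hunk (present on only one side of the compare) routes a package name through any explicitly requested namespace before asking the repository for its class. A hedged sketch of the lookup, where `repo_get_pkg_class` stands in for `spack.repo.PATH.get_pkg_class` and the mapping mirrors `self.explicitly_required_namespaces`:

```python
def pkg_class(pkg_name: str, explicit_namespaces: dict, repo_get_pkg_class):
    # If the user pinned a namespace for this package, qualify the request,
    # e.g. "mpich" becomes "builtin.mpich"; otherwise pass the name through.
    request = pkg_name
    if pkg_name in explicit_namespaces:
        request = f"{explicit_namespaces[pkg_name]}.{pkg_name}"
    return repo_get_pkg_class(request)
```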
class SpecBuilder:
"""Class with actions to rebuild a spec from ASP results."""

@@ -2802,11 +2792,9 @@ class SpecBuilder:
r"^.*_propagate$",
r"^.*_satisfies$",
r"^.*_set$",
r"^dependency_holds$",
r"^node_compiler$",
r"^package_hash$",
r"^root$",
r"^track_dependencies$",
r"^variant_default_value_from_cli$",
r"^virtual_node$",
r"^virtual_root$",

@@ -2850,9 +2838,6 @@ def _arch(self, node):
self._specs[node].architecture = arch
return arch

def namespace(self, node, namespace):
    self._specs[node].namespace = namespace

def node_platform(self, node, platform):
self._arch(node).platform = platform

@@ -3067,6 +3052,14 @@ def build_specs(self, function_tuples):

action(*args)

# namespace assignment is done after the fact, as it is not
# currently part of the solve
for spec in self._specs.values():
    if spec.namespace:
        continue
    repo = spack.repo.PATH.repo_for_pkg(spec)
    spec.namespace = repo.namespace

# fix flags after all specs are constructed
self.reorder_flags()
|
||||
:- attr("depends_on", node(min_dupe_id, Package), node(ID, _), "link"), ID != min_dupe_id, unification_set("root", node(min_dupe_id, Package)), internal_error("link dependency out of the root unification set").
|
||||
:- attr("depends_on", node(min_dupe_id, Package), node(ID, _), "run"), ID != min_dupe_id, unification_set("root", node(min_dupe_id, Package)), internal_error("run dependency out of the root unification set").
|
||||
|
||||
% Namespaces are statically assigned by a package fact
|
||||
attr("namespace", node(ID, Package), Namespace) :- attr("node", node(ID, Package)), pkg_fact(Package, namespace(Namespace)).
|
||||
|
||||
% Rules on "unification sets", i.e. on sets of nodes allowing a single configuration of any given package
|
||||
unify(SetID, PackageName) :- unification_set(SetID, node(_, PackageName)).
|
||||
:- 2 { unification_set(SetID, node(_, PackageName)) }, unify(SetID, PackageName).
|
||||
@@ -698,26 +695,6 @@ requirement_group_satisfied(node(ID, Package), X) :-
|
||||
activate_requirement(node(ID, Package), X),
|
||||
requirement_group(Package, X).
|
||||
|
||||
% Do not impose requirements, if the conditional requirement is not active
|
||||
do_not_impose(EffectID, node(ID, Package)) :-
|
||||
trigger_condition_holds(TriggerID, node(ID, Package)),
|
||||
pkg_fact(Package, condition_trigger(ConditionID, TriggerID)),
|
||||
pkg_fact(Package, condition_effect(ConditionID, EffectID)),
|
||||
requirement_group_member(ConditionID , Package, RequirementID),
|
||||
not activate_requirement(node(ID, Package), RequirementID).
|
||||
|
||||
% When we have a required provider, we need to ensure that the provider/2 facts respect
|
||||
% the requirement. This is particularly important for packages that could provide multiple
|
||||
% virtuals independently
|
||||
required_provider(Provider, Virtual)
|
||||
:- requirement_group_member(ConditionID, Virtual, RequirementID),
|
||||
condition_holds(ConditionID, _),
|
||||
virtual(Virtual),
|
||||
pkg_fact(Virtual, condition_effect(ConditionID, EffectID)),
|
||||
imposed_constraint(EffectID, "node", Provider).
|
||||
|
||||
:- provider(node(Y, Package), node(X, Virtual)), required_provider(Provider, Virtual), Package != Provider.
|
||||
|
||||
% TODO: the following two choice rules allow the solver to add compiler
|
||||
% flags if their only source is from a requirement. This is overly-specific
|
||||
% and should use a more-generic approach like in https://github.com/spack/spack/pull/37180
|
||||
|
@@ -10,9 +10,6 @@
%=============================================================================

% macOS
os_compatible("sequoia", "sonoma").
os_compatible("sonoma", "ventura").
os_compatible("ventura", "monterey").
os_compatible("monterey", "bigsur").
os_compatible("bigsur", "catalina").
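These facts only relate adjacent macOS releases; compatibility across several releases follows from chaining them (assuming, as is typical for such relations in the logic program, that `os_compatible` is closed transitively). An illustrative Python sketch of the relation those pairs imply:

```python
# Each pair reads "binaries built for the second OS can run on the first".
pairs = [("sequoia", "sonoma"), ("sonoma", "ventura"), ("ventura", "monterey"),
         ("monterey", "bigsur"), ("bigsur", "catalina")]

def compatible_with(newer: str) -> set:
    # Walk the chain of adjacent releases to get every older compatible OS.
    succ = dict(pairs)
    out, cur = set(), newer
    while cur in succ:
        cur = succ[cur]
        out.add(cur)
    return out

assert compatible_with("sonoma") == {"ventura", "monterey", "bigsur", "catalina"}
```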
@@ -213,19 +213,6 @@ def __call__(self, match):
return clr.colorize(re.sub(_SEPARATORS, insert_color(), str(spec)) + "@.")


OLD_STYLE_FMT_RE = re.compile(r"\${[A-Z]+}")


def ensure_modern_format_string(fmt: str) -> None:
    """Ensure that the format string does not contain old ${...} syntax."""
    result = OLD_STYLE_FMT_RE.search(fmt)
    if result:
        raise SpecFormatStringError(
            f"Format string `{fmt}` contains old syntax `{result.group(0)}`. "
            "This is no longer supported."
        )


@lang.lazy_lexicographic_ordering
class ArchSpec:
"""Aggregate the target platform, the operating system and the target microarchitecture."""

@@ -4373,7 +4360,6 @@ def format(self, format_string=DEFAULT_FORMAT, **kwargs):
that accepts a string and returns another one

"""
ensure_modern_format_string(format_string)
color = kwargs.get("color", False)
transform = kwargs.get("transform", {})
@@ -102,10 +102,7 @@ def to_dict_or_value(self):
if self.microarchitecture.vendor == "generic":
return str(self)

# Get rid of compiler flag information before turning the uarch into a dict
uarch_dict = self.microarchitecture.to_dict()
uarch_dict.pop("compilers", None)
return syaml.syaml_dict(uarch_dict.items())
return syaml.syaml_dict(self.microarchitecture.to_dict(return_list_of_items=True))

def __repr__(self):
cls_name = self.__class__.__name__
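One side of the hunk above drops the compiler optimization-flag table before serializing a microarchitecture. The key operation is `dict.pop` with a default, which is a no-op when the key is absent, so dictionaries without that entry pass through untouched:

```python
# Illustrative data only; the real dict comes from archspec's to_dict().
uarch_dict = {"name": "zen2", "vendor": "AuthenticAMD", "compilers": {"gcc": "..."}}
uarch_dict.pop("compilers", None)  # remove if present, ignore if missing
assert "compilers" not in uarch_dict
```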
@@ -8,16 +8,13 @@
import pytest

import archspec.cpu

import llnl.util.filesystem as fs

import spack.compilers
import spack.concretize
import spack.operating_systems
import spack.platforms
import spack.target
from spack.spec import ArchSpec, Spec
from spack.spec import ArchSpec, CompilerSpec, Spec


@pytest.fixture(scope="module")

@@ -124,60 +121,52 @@ def test_arch_spec_container_semantic(item, architecture_str):
@pytest.mark.parametrize(
"compiler_spec,target_name,expected_flags",
[
# Homogeneous compilers
# Check compilers with version numbers from a single toolchain
("gcc@4.7.2", "ivybridge", "-march=core-avx-i -mtune=core-avx-i"),
("clang@3.5", "x86_64", "-march=x86-64 -mtune=generic"),
("apple-clang@9.1.0", "x86_64", "-march=x86-64"),
# Mixed toolchain
# Check mixed toolchains
("clang@8.0.0", "broadwell", ""),
("clang@3.5", "x86_64", "-march=x86-64 -mtune=generic"),
# Check Apple's Clang compilers
("apple-clang@9.1.0", "x86_64", "-march=x86-64"),
],
)
@pytest.mark.filterwarnings("ignore:microarchitecture specific")
def test_optimization_flags(compiler_spec, target_name, expected_flags, compiler_factory):
def test_optimization_flags(compiler_spec, target_name, expected_flags, config):
target = spack.target.Target(target_name)
compiler_dict = compiler_factory(spec=compiler_spec, operating_system="")["compiler"]
if compiler_spec == "clang@8.0.0":
    compiler_dict["paths"] = {
        "cc": "/path/to/clang-8",
        "cxx": "/path/to/clang++-8",
        "f77": "/path/to/gfortran-9",
        "fc": "/path/to/gfortran-9",
    }
compiler = spack.compilers.compiler_from_dict(compiler_dict)

compiler = spack.compilers.compilers_for_spec(compiler_spec).pop()
opt_flags = target.optimization_flags(compiler)
assert opt_flags == expected_flags


@pytest.mark.parametrize(
"compiler_str,real_version,target_str,expected_flags",
"compiler,real_version,target_str,expected_flags",
[
("gcc@=9.2.0", None, "haswell", "-march=haswell -mtune=haswell"),
(CompilerSpec("gcc@=9.2.0"), None, "haswell", "-march=haswell -mtune=haswell"),
# Check that custom string versions are accepted
("gcc@=10foo", "9.2.0", "icelake", "-march=icelake-client -mtune=icelake-client"),
(
    CompilerSpec("gcc@=10foo"),
    "9.2.0",
    "icelake",
    "-march=icelake-client -mtune=icelake-client",
),
# Check that we run version detection (4.4.0 doesn't support icelake)
("gcc@=4.4.0-special", "9.2.0", "icelake", "-march=icelake-client -mtune=icelake-client"),
(
    CompilerSpec("gcc@=4.4.0-special"),
    "9.2.0",
    "icelake",
    "-march=icelake-client -mtune=icelake-client",
),
# Check that the special case for Apple's clang is treated correctly
# i.e. it won't try to detect the version again
("apple-clang@=9.1.0", None, "x86_64", "-march=x86-64"),
(CompilerSpec("apple-clang@=9.1.0"), None, "x86_64", "-march=x86-64"),
],
)
def test_optimization_flags_with_custom_versions(
compiler_str,
real_version,
target_str,
expected_flags,
monkeypatch,
mutable_config,
compiler_factory,
compiler, real_version, target_str, expected_flags, monkeypatch, config
):
target = spack.target.Target(target_str)
compiler_dict = compiler_factory(spec=compiler_str, operating_system="redhat6")
mutable_config.set("compilers", [compiler_dict])
if real_version:
monkeypatch.setattr(spack.compiler.Compiler, "get_real_version", lambda x: real_version)
compiler = spack.compilers.compiler_from_dict(compiler_dict["compiler"])

opt_flags = target.optimization_flags(compiler)
assert opt_flags == expected_flags

@@ -212,16 +201,13 @@ def test_satisfy_strict_constraint_when_not_concrete(architecture_tuple, constra
)
@pytest.mark.usefixtures("mock_packages", "config")
@pytest.mark.only_clingo("Fixing the parser broke this test for the original concretizer.")
@pytest.mark.skipif(
    str(archspec.cpu.host().family) != "x86_64", reason="tests are for x86_64 uarch ranges"
)
def test_concretize_target_ranges(root_target_range, dep_target_range, result, monkeypatch):
spec = Spec(
    f"pkg-a %gcc@10 foobar=bar target={root_target_range} ^pkg-b target={dep_target_range}"
)
# Monkeypatch so that all concretization is done as if the machine is core2
monkeypatch.setattr(spack.platforms.test.Test, "default", "core2")
spec = Spec(f"a %gcc@10 foobar=bar target={root_target_range} ^b target={dep_target_range}")
with spack.concretize.disable_compiler_existence_check():
spec.concretize()
assert spec.target == spec["pkg-b"].target == result
assert spec.target == spec["b"].target == result


@pytest.mark.parametrize(
@@ -19,8 +19,6 @@
import py
import pytest

import archspec.cpu

from llnl.util.filesystem import join_path, visit_directory_tree

import spack.binary_distribution as bindist

@@ -203,9 +201,6 @@ def dummy_prefix(tmpdir):
with open(data, "w") as f:
f.write("hello world")

with open(p.join(".spack", "binary_distribution"), "w") as f:
    f.write("{}")

os.symlink("app", relative_app_link)
os.symlink(app, absolute_app_link)

@@ -575,20 +570,11 @@ def test_update_sbang(tmpdir, test_mirror):
uninstall_cmd("-y", "/%s" % new_spec.dag_hash())


@pytest.mark.skipif(
    str(archspec.cpu.host().family) != "x86_64",
    reason="test data uses gcc 4.5.0 which does not support aarch64",
)
def test_install_legacy_buildcache_layout(
    mutable_config, compiler_factory, install_mockery_mutable_config
):
def test_install_legacy_buildcache_layout(install_mockery_mutable_config):
"""Legacy buildcache layout involved a nested archive structure
where the .spack file contained a repeated spec.json and another
compressed archive file containing the install tree. This test
makes sure we can still read that layout."""
mutable_config.set(
    "compilers", [compiler_factory(spec="gcc@4.5.0", operating_system="debian6")]
)
legacy_layout_dir = os.path.join(test_path, "data", "mirrors", "legacy_layout")
mirror_url = "file://{0}".format(legacy_layout_dir)
filename = (

@@ -1038,9 +1024,7 @@ def test_tarball_common_prefix(dummy_prefix, tmpdir):
bindist._tar_strip_component(tar, common_prefix)

# Extract into prefix2
tar.extractall(
    path="prefix2", members=bindist._tar_strip_component(tar, common_prefix)
)
tar.extractall(path="prefix2")

# Verify files are all there at the correct level.
assert set(os.listdir("prefix2")) == {"bin", "share", ".spack"}

@@ -1060,30 +1044,13 @@ def test_tarball_common_prefix(dummy_prefix, tmpdir):
)


def test_tarfile_missing_binary_distribution_file(tmpdir):
    """A tarfile that does not contain a .spack/binary_distribution file cannot be
    used to install."""
    with tmpdir.as_cwd():
        # An empty .spack dir.
        with tarfile.open("empty.tar", mode="w") as tar:
            tarinfo = tarfile.TarInfo(name="example/.spack")
            tarinfo.type = tarfile.DIRTYPE
            tar.addfile(tarinfo)

        with pytest.raises(ValueError, match="missing binary_distribution file"):
            bindist._ensure_common_prefix(tarfile.open("empty.tar", mode="r"))


def test_tarfile_without_common_directory_prefix_fails(tmpdir):
    """A tarfile that only contains files without a common package directory
    should fail to extract, as we won't know where to put the files."""
    with tmpdir.as_cwd():
        # Create a broken tarball with just a file, no directories.
        with tarfile.open("empty.tar", mode="w") as tar:
            tar.addfile(
                tarfile.TarInfo(name="example/.spack/binary_distribution"),
                fileobj=io.BytesIO(b"hello"),
            )
            tar.addfile(tarfile.TarInfo(name="example/file"), fileobj=io.BytesIO(b"hello"))

        with pytest.raises(ValueError, match="Tarball does not contain a common prefix"):
            bindist._ensure_common_prefix(tarfile.open("empty.tar", mode="r"))

@@ -1199,7 +1166,7 @@ def test_get_valid_spec_file_no_json(tmp_path, filename):


def test_download_tarball_with_unsupported_layout_fails(tmp_path, mutable_config, capsys):
layout_version = bindist.FORWARD_COMPAT_BUILD_CACHE_LAYOUT_VERSION + 1
layout_version = bindist.CURRENT_BUILD_CACHE_LAYOUT_VERSION + 1
spec = Spec("gmake@4.4.1%gcc@13.1.0 arch=linux-ubuntu23.04-zen2")
spec._mark_concrete()
spec_dict = spec.to_dict()
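The common-prefix hunk above passes the rewritten members straight to `extractall`, so the archive's shared top-level directory never appears on disk. A hedged sketch of that idiom with only standard `tarfile` calls, where `strip` stands in for `bindist._tar_strip_component` and the archive name is hypothetical:

```python
import tarfile
from typing import Iterator

def strip(tar: tarfile.TarFile, prefix: str) -> Iterator[tarfile.TarInfo]:
    # Rewrite member names so the shared top-level directory disappears from
    # the extracted layout; the bare prefix entry itself is skipped.
    for m in tar.getmembers():
        if m.name.startswith(prefix + "/"):
            m.name = m.name[len(prefix) + 1 :]
            yield m

with tarfile.open("example.tar") as tar:  # hypothetical archive
    tar.extractall(path="prefix2", members=strip(tar, "example"))
```

A real implementation would also need to rewrite `linkname` on symlink members so relative links keep resolving; the sketch omits that for brevity.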
@@ -437,14 +437,14 @@ def test_parallel_false_is_not_propagating(default_mock_concretization):
# a foobar=bar (parallel = False)
# |
# b (parallel =True)
s = default_mock_concretization("pkg-a foobar=bar")
s = default_mock_concretization("a foobar=bar")

spack.build_environment.set_package_py_globals(s.package)
assert s["pkg-a"].package.module.make_jobs == 1
assert s["a"].package.module.make_jobs == 1

spack.build_environment.set_package_py_globals(s["pkg-b"].package)
assert s["pkg-b"].package.module.make_jobs == spack.build_environment.determine_number_of_jobs(
    parallel=s["pkg-b"].package.parallel
spack.build_environment.set_package_py_globals(s["b"].package)
assert s["b"].package.module.make_jobs == spack.build_environment.determine_number_of_jobs(
    parallel=s["b"].package.parallel
)


@@ -540,7 +540,7 @@ def test_dirty_disable_module_unload(config, mock_packages, working_env, mock_mo
"""Test that on CRAY platform 'module unload' is not called if the 'dirty'
option is on.
"""
s = spack.spec.Spec("pkg-a").concretized()
s = spack.spec.Spec("a").concretized()

# If called with "dirty" we don't unload modules, so no calls to the
# `module` function on Cray
@@ -9,8 +9,6 @@
import py.path
import pytest

import archspec.cpu

import llnl.util.filesystem as fs

import spack.build_systems.autotools

@@ -97,7 +95,7 @@ def test_negative_ninja_check(self, input_dir, test_dir, concretize_and_setup):
@pytest.mark.usefixtures("config", "mock_packages")
class TestAutotoolsPackage:
def test_with_or_without(self, default_mock_concretization):
s = default_mock_concretization("pkg-a")
s = default_mock_concretization("a")
options = s.package.with_or_without("foo")

# Ensure that values that are not representing a feature

@@ -129,7 +127,7 @@ def activate(value):
assert "--without-lorem-ipsum" in options

def test_none_is_allowed(self, default_mock_concretization):
s = default_mock_concretization("pkg-a foo=none")
s = default_mock_concretization("a foo=none")
options = s.package.with_or_without("foo")

# Ensure that values that are not representing a feature

@@ -211,9 +209,6 @@ def test_autotools_gnuconfig_replacement_disabled(
assert "gnuconfig version of config.guess" not in f.read()

@pytest.mark.disable_clean_stage_check
@pytest.mark.skipif(
    str(archspec.cpu.host().family) != "x86_64", reason="test data is specific for x86_64"
)
def test_autotools_gnuconfig_replacement_no_gnuconfig(self, mutable_database, monkeypatch):
"""
Tests whether a useful error message is shown when patch_config_files is
@@ -25,7 +25,7 @@ def test_error_when_multiple_specs_are_given():
assert "only takes one spec" in output


@pytest.mark.parametrize("args", [("--", "/bin/sh", "-c", "echo test"), ("--",), ()])
@pytest.mark.parametrize("args", [("--", "/bin/bash", "-c", "echo test"), ("--",), ()])
@pytest.mark.usefixtures("config", "mock_packages", "working_env")
def test_build_env_requires_a_spec(args):
output = build_env(*args, fail_on_error=False)

@@ -35,7 +35,7 @@ def test_build_env_requires_a_spec(args):
_out_file = "env.out"


@pytest.mark.parametrize("shell", ["pwsh", "bat"] if sys.platform == "win32" else ["sh"])
@pytest.mark.parametrize("shell", ["pwsh", "bat"] if sys.platform == "win32" else ["bash"])
@pytest.mark.usefixtures("config", "mock_packages", "working_env")
def test_dump(shell_as, shell, tmpdir):
with tmpdir.as_cwd():
@@ -102,24 +102,24 @@ def test_specs_staging(config, tmpdir):

"""
builder = repo.MockRepositoryBuilder(tmpdir)
builder.add_package("pkg-g")
builder.add_package("pkg-f")
builder.add_package("pkg-e")
builder.add_package("pkg-d", dependencies=[("pkg-f", None, None), ("pkg-g", None, None)])
builder.add_package("pkg-c")
builder.add_package("pkg-b", dependencies=[("pkg-d", None, None), ("pkg-e", None, None)])
builder.add_package("pkg-a", dependencies=[("pkg-b", None, None), ("pkg-c", None, None)])
builder.add_package("g")
builder.add_package("f")
builder.add_package("e")
builder.add_package("d", dependencies=[("f", None, None), ("g", None, None)])
builder.add_package("c")
builder.add_package("b", dependencies=[("d", None, None), ("e", None, None)])
builder.add_package("a", dependencies=[("b", None, None), ("c", None, None)])

with repo.use_repositories(builder.root):
spec_a = Spec("pkg-a").concretized()
spec_a = Spec("a").concretized()

spec_a_label = ci._spec_deps_key(spec_a)
spec_b_label = ci._spec_deps_key(spec_a["pkg-b"])
spec_c_label = ci._spec_deps_key(spec_a["pkg-c"])
spec_d_label = ci._spec_deps_key(spec_a["pkg-d"])
spec_e_label = ci._spec_deps_key(spec_a["pkg-e"])
spec_f_label = ci._spec_deps_key(spec_a["pkg-f"])
spec_g_label = ci._spec_deps_key(spec_a["pkg-g"])
spec_b_label = ci._spec_deps_key(spec_a["b"])
spec_c_label = ci._spec_deps_key(spec_a["c"])
spec_d_label = ci._spec_deps_key(spec_a["d"])
spec_e_label = ci._spec_deps_key(spec_a["e"])
spec_f_label = ci._spec_deps_key(spec_a["f"])
spec_g_label = ci._spec_deps_key(spec_a["g"])

spec_labels, dependencies, stages = ci.stage_spec_jobs([spec_a])

@@ -1256,7 +1256,7 @@ def test_ci_generate_override_runner_attrs(
spack:
  specs:
    - flatten-deps
    - pkg-a
    - a
  mirrors:
    some-mirror: https://my.fake.mirror
  ci:

@@ -1273,12 +1273,12 @@ def test_ci_generate_override_runner_attrs(
- match:
    - dependency-install
- match:
    - pkg-a
    - a
  build-job:
    tags:
      - specific-a-2
- match:
    - pkg-a
    - a
  build-job-remove:
    tags:
      - toplevel2

@@ -1338,8 +1338,8 @@ def test_ci_generate_override_runner_attrs(
assert global_vars["SPACK_CHECKOUT_VERSION"] == git_version or "v0.20.0.test0"

for ci_key in yaml_contents.keys():
if ci_key.startswith("pkg-a"):
    # Make sure pkg-a's attributes override variables, and all the
if ci_key.startswith("a"):
    # Make sure a's attributes override variables, and all the
    # scripts. Also, make sure the 'toplevel' tag doesn't
    # appear twice, but that a's specific extra tag does appear
    the_elt = yaml_contents[ci_key]

@@ -1798,7 +1798,7 @@ def test_ci_generate_read_broken_specs_url(
tmpdir, mutable_mock_env_path, install_mockery, mock_packages, monkeypatch, ci_base_environment
):
"""Verify that `broken-specs-url` works as intended"""
spec_a = Spec("pkg-a")
spec_a = Spec("a")
spec_a.concretize()
a_dag_hash = spec_a.dag_hash()

@@ -1824,7 +1824,7 @@ def test_ci_generate_read_broken_specs_url(
spack:
  specs:
    - flatten-deps
    - pkg-a
    - a
  mirrors:
    some-mirror: https://my.fake.mirror
  ci:

@@ -1832,9 +1832,9 @@ def test_ci_generate_read_broken_specs_url(
pipeline-gen:
- submapping:
  - match:
      - pkg-a
      - a
      - flatten-deps
      - pkg-b
      - b
      - dependency-install
    build-job:
      tags:

@@ -2000,7 +2000,7 @@ def test_ci_reproduce(

install_script = os.path.join(working_dir.strpath, "install.sh")
with open(install_script, "w") as fd:
fd.write("#!/bin/sh\n\n#fake install\nspack install blah\n")
fd.write("#!/bin/bash\n\n#fake install\nspack install blah\n")

spack_info_file = os.path.join(working_dir.strpath, "spack_info.txt")
with open(spack_info_file, "w") as fd:
@@ -81,14 +81,14 @@ def test_match_spec_env(mock_packages, mutable_mock_env_path):
"""
# Initial sanity check: we are planning on choosing a non-default
# value, so make sure that is in fact not the default.
check_defaults = spack.cmd.parse_specs(["pkg-a"], concretize=True)[0]
check_defaults = spack.cmd.parse_specs(["a"], concretize=True)[0]
assert not check_defaults.satisfies("foobar=baz")

e = ev.create("test")
e.add("pkg-a foobar=baz")
e.add("a foobar=baz")
e.concretize()
with e:
env_spec = spack.cmd.matching_spec_from_env(spack.cmd.parse_specs(["pkg-a"])[0])
env_spec = spack.cmd.matching_spec_from_env(spack.cmd.parse_specs(["a"])[0])
assert env_spec.satisfies("foobar=baz")
assert env_spec.concrete

@@ -96,12 +96,12 @@ def test_match_spec_env(mock_packages, mutable_mock_env_path):
@pytest.mark.usefixtures("config")
def test_multiple_env_match_raises_error(mock_packages, mutable_mock_env_path):
e = ev.create("test")
e.add("pkg-a foobar=baz")
e.add("pkg-a foobar=fee")
e.add("a foobar=baz")
e.add("a foobar=fee")
e.concretize()
with e:
with pytest.raises(ev.SpackEnvironmentError) as exc_info:
spack.cmd.matching_spec_from_env(spack.cmd.parse_specs(["pkg-a"])[0])
spack.cmd.matching_spec_from_env(spack.cmd.parse_specs(["a"])[0])

assert "matches multiple specs" in exc_info.value.message

@@ -109,16 +109,16 @@ def test_multiple_env_match_raises_error(mock_packages, mutable_mock_env_path):
@pytest.mark.usefixtures("config")
def test_root_and_dep_match_returns_root(mock_packages, mutable_mock_env_path):
e = ev.create("test")
e.add("pkg-b@0.9")
e.add("pkg-a foobar=bar") # Depends on b, should choose b@1.0
e.add("b@0.9")
e.add("a foobar=bar") # Depends on b, should choose b@1.0
e.concretize()
with e:
# This query matches the root b and b as a dependency of a. In that
# case the root instance should be preferred.
env_spec1 = spack.cmd.matching_spec_from_env(spack.cmd.parse_specs(["pkg-b"])[0])
env_spec1 = spack.cmd.matching_spec_from_env(spack.cmd.parse_specs(["b"])[0])
assert env_spec1.satisfies("@0.9")

env_spec2 = spack.cmd.matching_spec_from_env(spack.cmd.parse_specs(["pkg-b@1.0"])[0])
env_spec2 = spack.cmd.matching_spec_from_env(spack.cmd.parse_specs(["b@1.0"])[0])
assert env_spec2
@@ -111,10 +111,10 @@ def test_compiler_find_no_apple_gcc(no_compilers_yaml, working_env, mock_executa
@pytest.mark.regression("37996")
def test_compiler_remove(mutable_config, mock_packages):
"""Tests that we can remove a compiler from configuration."""
assert spack.spec.CompilerSpec("gcc@=4.8.0") in spack.compilers.all_compiler_specs()
args = spack.util.pattern.Bunch(all=True, compiler_spec="gcc@4.8.0", add_paths=[], scope=None)
assert spack.spec.CompilerSpec("gcc@=4.5.0") in spack.compilers.all_compiler_specs()
args = spack.util.pattern.Bunch(all=True, compiler_spec="gcc@4.5.0", add_paths=[], scope=None)
spack.cmd.compiler.compiler_remove(args)
assert spack.spec.CompilerSpec("gcc@=4.8.0") not in spack.compilers.all_compiler_specs()
assert spack.spec.CompilerSpec("gcc@=4.5.0") not in spack.compilers.all_compiler_specs()


@pytest.mark.regression("37996")

@@ -123,10 +123,10 @@ def test_removing_compilers_from_multiple_scopes(mutable_config, mock_packages):
site_config = spack.config.get("compilers", scope="site")
spack.config.set("compilers", site_config, scope="user")

assert spack.spec.CompilerSpec("gcc@=4.8.0") in spack.compilers.all_compiler_specs()
args = spack.util.pattern.Bunch(all=True, compiler_spec="gcc@4.8.0", add_paths=[], scope=None)
assert spack.spec.CompilerSpec("gcc@=4.5.0") in spack.compilers.all_compiler_specs()
args = spack.util.pattern.Bunch(all=True, compiler_spec="gcc@4.5.0", add_paths=[], scope=None)
spack.cmd.compiler.compiler_remove(args)
assert spack.spec.CompilerSpec("gcc@=4.8.0") not in spack.compilers.all_compiler_specs()
assert spack.spec.CompilerSpec("gcc@=4.5.0") not in spack.compilers.all_compiler_specs()


@pytest.mark.not_on_windows("Cannot execute bash script on Windows")
@@ -51,8 +51,8 @@ def test_concretize_root_test_dependencies_are_concretized(unify, mutable_mock_e

with ev.read("test") as e:
e.unify = unify
add("pkg-a")
add("pkg-b")
add("a")
add("b")
concretize("--test", "root")
assert e.matching_spec("test-dependency")
@@ -15,26 +15,26 @@
def test_env(mutable_mock_env_path, config, mock_packages):
ev.create("test")
with ev.read("test") as e:
e.add("pkg-a@2.0 foobar=bar ^pkg-b@1.0")
e.add("pkg-a@1.0 foobar=bar ^pkg-b@0.9")
e.add("a@2.0 foobar=bar ^b@1.0")
e.add("a@1.0 foobar=bar ^b@0.9")
e.concretize()
e.write()


def test_deconcretize_dep(test_env):
with ev.read("test") as e:
deconcretize("-y", "pkg-b@1.0")
deconcretize("-y", "b@1.0")
specs = [s for s, _ in e.concretized_specs()]

assert len(specs) == 1
assert specs[0].satisfies("pkg-a@1.0")
assert specs[0].satisfies("a@1.0")


def test_deconcretize_all_dep(test_env):
with ev.read("test") as e:
with pytest.raises(SpackCommandError):
deconcretize("-y", "pkg-b")
deconcretize("-y", "--all", "pkg-b")
deconcretize("-y", "b")
deconcretize("-y", "--all", "b")
specs = [s for s, _ in e.concretized_specs()]

assert len(specs) == 0

@@ -42,27 +42,27 @@ def test_deconcretize_all_dep(test_env):

def test_deconcretize_root(test_env):
with ev.read("test") as e:
output = deconcretize("-y", "--root", "pkg-b@1.0")
output = deconcretize("-y", "--root", "b@1.0")
assert "No matching specs to deconcretize" in output
assert len(e.concretized_order) == 2

deconcretize("-y", "--root", "pkg-a@2.0")
deconcretize("-y", "--root", "a@2.0")
specs = [s for s, _ in e.concretized_specs()]

assert len(specs) == 1
assert specs[0].satisfies("pkg-a@1.0")
assert specs[0].satisfies("a@1.0")


def test_deconcretize_all_root(test_env):
with ev.read("test") as e:
with pytest.raises(SpackCommandError):
deconcretize("-y", "--root", "pkg-a")
deconcretize("-y", "--root", "a")

output = deconcretize("-y", "--root", "--all", "pkg-b")
output = deconcretize("-y", "--root", "--all", "b")
assert "No matching specs to deconcretize" in output
assert len(e.concretized_order) == 2

deconcretize("-y", "--root", "--all", "pkg-a")
deconcretize("-y", "--root", "--all", "a")
specs = [s for s, _ in e.concretized_specs()]

assert len(specs) == 0
@@ -27,9 +27,7 @@
import spack.package_base
import spack.paths
import spack.repo
import spack.store
import spack.util.spack_json as sjson
import spack.util.spack_yaml
from spack.cmd.env import _env_create
from spack.main import SpackCommand, SpackCommandError
from spack.spec import Spec

@@ -348,7 +346,7 @@ def test_env_install_two_specs_same_dep(install_mockery, mock_fetch, tmpdir, cap
"""\
spack:
  specs:
  - pkg-a
  - a
  - depb
"""
)

@@ -367,8 +365,8 @@ def test_env_install_two_specs_same_dep(install_mockery, mock_fetch, tmpdir, cap
depb = spack.store.STORE.db.query_one("depb", installed=True)
assert depb, "Expected depb to be installed"

a = spack.store.STORE.db.query_one("pkg-a", installed=True)
assert a, "Expected pkg-a to be installed"
a = spack.store.STORE.db.query_one("a", installed=True)
assert a, "Expected a to be installed"


def test_remove_after_concretize():

@@ -637,7 +635,7 @@ def test_env_view_external_prefix(tmp_path, mutable_database, mock_packages):
"""\
spack:
  specs:
  - pkg-a
  - a
  view: true
"""
)

@@ -645,9 +643,9 @@ def test_env_view_external_prefix(tmp_path, mutable_database, mock_packages):
external_config = io.StringIO(
"""\
packages:
  pkg-a:
  a:
    externals:
    - spec: pkg-a@2.0
    - spec: a@2.0
      prefix: {a_prefix}
    buildable: false
""".format(

@@ -3540,7 +3538,7 @@ def test_environment_created_in_users_location(mutable_mock_env_path, tmp_path):
assert os.path.isdir(os.path.join(env_dir, dir_name))


def test_environment_created_from_lockfile_has_view(mock_packages, temporary_store, tmpdir):
def test_environment_created_from_lockfile_has_view(mock_packages, tmpdir):
"""When an env is created from a lockfile, a view should be generated for it"""
env_a = str(tmpdir.join("a"))
env_b = str(tmpdir.join("b"))
@@ -248,7 +248,6 @@ def _determine_variants(cls, exes, version_str):
assert gcc.external_path == os.path.sep + os.path.join("opt", "gcc", "bin")


@pytest.mark.not_on_windows("Fails spuriously on Windows")
def test_new_entries_are_reported_correctly(mock_executable, mutable_config, monkeypatch):
# Prepare an environment to detect a fake gcc
gcc_exe = mock_executable("gcc", output="echo 4.2.1")
@@ -43,7 +43,7 @@ def test_find_gpg(cmd_name, version, tmpdir, mock_gnupghome, monkeypatch):
f.write(TEMPLATE.format(version=version))
fs.set_executable(fname)

monkeypatch.setenv("PATH", str(tmpdir))
monkeypatch.setitem(os.environ, "PATH", str(tmpdir))
if version == "undetectable" or version.endswith("1.3.4"):
with pytest.raises(spack.util.gpg.SpackGPGError):
spack.util.gpg.init(force=True)

@@ -54,7 +54,7 @@ def test_find_gpg(cmd_name, version, tmpdir, mock_gnupghome, monkeypatch):


def test_no_gpg_in_path(tmpdir, mock_gnupghome, monkeypatch, mutable_config):
monkeypatch.setenv("PATH", str(tmpdir))
monkeypatch.setitem(os.environ, "PATH", str(tmpdir))
bootstrap("disable")
with pytest.raises(RuntimeError):
spack.util.gpg.init(force=True)
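The two lines swapped in each GPG hunk are equivalent: `monkeypatch.setenv` is pytest's dedicated API for environment variables, while `monkeypatch.setitem(os.environ, ...)` mutates the mapping generically; both are undone automatically at test teardown. For example:

```python
import os

def test_path_is_isolated(tmp_path, monkeypatch):
    # Either call has the same effect and is reverted after the test.
    monkeypatch.setenv("PATH", str(tmp_path))                # dedicated API
    monkeypatch.setitem(os.environ, "PATH", str(tmp_path))   # generic API
    assert os.environ["PATH"] == str(tmp_path)
```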
@@ -88,7 +88,7 @@ def check(pkg):
assert pkg.run_tests

monkeypatch.setattr(spack.package_base.PackageBase, "unit_test_check", check)
install("--test=all", "pkg-a")
install("--test=all", "a")


def test_install_package_already_installed(

@@ -570,58 +570,61 @@ def test_cdash_upload_build_error(tmpdir, mock_fetch, install_mockery, capfd):
@pytest.mark.disable_clean_stage_check
def test_cdash_upload_clean_build(tmpdir, mock_fetch, install_mockery, capfd):
# capfd interferes with Spack's capturing of e.g., Build.xml output
with capfd.disabled(), tmpdir.as_cwd():
    install("--log-file=cdash_reports", "--log-format=cdash", "pkg-a")
    report_dir = tmpdir.join("cdash_reports")
    assert report_dir in tmpdir.listdir()
    report_file = report_dir.join("pkg-a_Build.xml")
    assert report_file in report_dir.listdir()
    content = report_file.open().read()
    assert "</Build>" in content
    assert "<Text>" not in content
with capfd.disabled():
    with tmpdir.as_cwd():
        install("--log-file=cdash_reports", "--log-format=cdash", "a")
        report_dir = tmpdir.join("cdash_reports")
        assert report_dir in tmpdir.listdir()
        report_file = report_dir.join("a_Build.xml")
        assert report_file in report_dir.listdir()
        content = report_file.open().read()
        assert "</Build>" in content
        assert "<Text>" not in content
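A recurring stylistic change through the remaining CDash hunks: nested `with` blocks are flattened into a single statement. The semantics are identical, with contexts entered left to right and exited in reverse, as this self-contained sketch shows:

```python
from contextlib import contextmanager

@contextmanager
def ctx(name):
    print("enter", name)
    yield
    print("exit", name)

# Single-statement form: enters ctx("outer") first, then ctx("inner") ...
with ctx("outer"), ctx("inner"):
    pass

# ... exactly like the nested form; only indentation differs.
with ctx("outer"):
    with ctx("inner"):
        pass
```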
@pytest.mark.disable_clean_stage_check
|
||||
def test_cdash_upload_extra_params(tmpdir, mock_fetch, install_mockery, capfd):
|
||||
# capfd interferes with Spack's capture of e.g., Build.xml output
|
||||
with capfd.disabled(), tmpdir.as_cwd():
|
||||
install(
|
||||
"--log-file=cdash_reports",
|
||||
"--log-format=cdash",
|
||||
"--cdash-build=my_custom_build",
|
||||
"--cdash-site=my_custom_site",
|
||||
"--cdash-track=my_custom_track",
|
||||
"pkg-a",
|
||||
)
|
||||
report_dir = tmpdir.join("cdash_reports")
|
||||
assert report_dir in tmpdir.listdir()
|
||||
report_file = report_dir.join("pkg-a_Build.xml")
|
||||
assert report_file in report_dir.listdir()
|
||||
content = report_file.open().read()
|
||||
assert 'Site BuildName="my_custom_build - pkg-a"' in content
|
||||
assert 'Name="my_custom_site"' in content
|
||||
assert "-my_custom_track" in content
|
||||
with capfd.disabled():
|
||||
with tmpdir.as_cwd():
|
||||
install(
|
||||
"--log-file=cdash_reports",
|
||||
"--log-format=cdash",
|
||||
"--cdash-build=my_custom_build",
|
||||
"--cdash-site=my_custom_site",
|
||||
"--cdash-track=my_custom_track",
|
||||
"a",
|
||||
)
|
||||
report_dir = tmpdir.join("cdash_reports")
|
||||
assert report_dir in tmpdir.listdir()
|
||||
report_file = report_dir.join("a_Build.xml")
|
||||
assert report_file in report_dir.listdir()
|
||||
content = report_file.open().read()
|
||||
assert 'Site BuildName="my_custom_build - a"' in content
|
||||
assert 'Name="my_custom_site"' in content
|
||||
assert "-my_custom_track" in content
|
||||
|
||||
|
||||
@pytest.mark.disable_clean_stage_check
|
||||
def test_cdash_buildstamp_param(tmpdir, mock_fetch, install_mockery, capfd):
|
||||
# capfd interferes with Spack's capture of e.g., Build.xml output
|
||||
with capfd.disabled(), tmpdir.as_cwd():
|
||||
cdash_track = "some_mocked_track"
|
||||
buildstamp_format = "%Y%m%d-%H%M-{0}".format(cdash_track)
|
||||
buildstamp = time.strftime(buildstamp_format, time.localtime(int(time.time())))
|
||||
install(
|
||||
"--log-file=cdash_reports",
|
||||
"--log-format=cdash",
|
||||
"--cdash-buildstamp={0}".format(buildstamp),
|
||||
"pkg-a",
|
||||
)
|
||||
report_dir = tmpdir.join("cdash_reports")
|
||||
assert report_dir in tmpdir.listdir()
|
||||
report_file = report_dir.join("pkg-a_Build.xml")
|
||||
assert report_file in report_dir.listdir()
|
||||
content = report_file.open().read()
|
||||
assert buildstamp in content
|
||||
with capfd.disabled():
|
||||
with tmpdir.as_cwd():
|
||||
cdash_track = "some_mocked_track"
|
||||
buildstamp_format = "%Y%m%d-%H%M-{0}".format(cdash_track)
|
||||
buildstamp = time.strftime(buildstamp_format, time.localtime(int(time.time())))
|
||||
install(
|
||||
"--log-file=cdash_reports",
|
||||
"--log-format=cdash",
|
||||
"--cdash-buildstamp={0}".format(buildstamp),
|
||||
"a",
|
||||
)
|
||||
report_dir = tmpdir.join("cdash_reports")
|
||||
assert report_dir in tmpdir.listdir()
|
||||
report_file = report_dir.join("a_Build.xml")
|
||||
assert report_file in report_dir.listdir()
|
||||
content = report_file.open().read()
|
||||
assert buildstamp in content
|
||||
|
||||
|
||||
@pytest.mark.disable_clean_stage_check
|
||||
@@ -629,37 +632,38 @@ def test_cdash_install_from_spec_json(
|
||||
tmpdir, mock_fetch, install_mockery, capfd, mock_packages, mock_archive, config
|
||||
):
|
||||
# capfd interferes with Spack's capturing
|
||||
with capfd.disabled(), tmpdir.as_cwd():
|
||||
spec_json_path = str(tmpdir.join("spec.json"))
|
||||
with capfd.disabled():
|
||||
with tmpdir.as_cwd():
|
||||
spec_json_path = str(tmpdir.join("spec.json"))
|
||||
|
||||
pkg_spec = Spec("pkg-a")
|
||||
pkg_spec.concretize()
|
||||
pkg_spec = Spec("a")
|
||||
pkg_spec.concretize()
|
||||
|
||||
with open(spec_json_path, "w") as fd:
|
||||
fd.write(pkg_spec.to_json(hash=ht.dag_hash))
|
||||
with open(spec_json_path, "w") as fd:
|
||||
fd.write(pkg_spec.to_json(hash=ht.dag_hash))
|
||||
|
||||
install(
|
||||
"--log-format=cdash",
|
||||
"--log-file=cdash_reports",
|
||||
"--cdash-build=my_custom_build",
|
||||
"--cdash-site=my_custom_site",
|
||||
"--cdash-track=my_custom_track",
|
||||
"-f",
|
||||
spec_json_path,
|
||||
)
|
||||
install(
|
||||
"--log-format=cdash",
|
||||
"--log-file=cdash_reports",
|
||||
"--cdash-build=my_custom_build",
|
||||
"--cdash-site=my_custom_site",
|
||||
"--cdash-track=my_custom_track",
|
||||
"-f",
|
||||
spec_json_path,
|
||||
)
|
||||
|
||||
report_dir = tmpdir.join("cdash_reports")
|
||||
assert report_dir in tmpdir.listdir()
|
||||
report_file = report_dir.join("pkg-a_Configure.xml")
|
||||
assert report_file in report_dir.listdir()
|
||||
content = report_file.open().read()
|
||||
install_command_regex = re.compile(
|
||||
r"<ConfigureCommand>(.+)</ConfigureCommand>", re.MULTILINE | re.DOTALL
|
||||
)
|
||||
m = install_command_regex.search(content)
|
||||
assert m
|
||||
install_command = m.group(1)
|
||||
assert "pkg-a@" in install_command
|
||||
report_dir = tmpdir.join("cdash_reports")
|
||||
assert report_dir in tmpdir.listdir()
|
||||
report_file = report_dir.join("a_Configure.xml")
|
||||
assert report_file in report_dir.listdir()
|
||||
content = report_file.open().read()
|
||||
install_command_regex = re.compile(
|
||||
r"<ConfigureCommand>(.+)</ConfigureCommand>", re.MULTILINE | re.DOTALL
|
||||
)
|
||||
m = install_command_regex.search(content)
|
||||
assert m
|
||||
install_command = m.group(1)
|
||||
assert "a@" in install_command


@pytest.mark.disable_clean_stage_check
@@ -791,15 +795,15 @@ def test_install_no_add_in_env(tmpdir, mock_fetch, install_mockery, mutable_mock
# ^libdwarf
# ^mpich
# libelf@0.8.10
# pkg-a~bvv
# ^pkg-b
# pkg-a
# ^pkg-b
# a~bvv
# ^b
# a
# ^b
e = ev.create("test", with_view=False)
e.add("mpileaks")
e.add("libelf@0.8.10") # so env has both root and dep libelf specs
e.add("pkg-a")
e.add("pkg-a ~bvv")
e.add("a")
e.add("a ~bvv")
e.concretize()
e.write()
env_specs = e.all_specs()
@@ -810,9 +814,9 @@ def test_install_no_add_in_env(tmpdir, mock_fetch, install_mockery, mutable_mock

# First find and remember some target concrete specs in the environment
for e_spec in env_specs:
if e_spec.satisfies(Spec("pkg-a ~bvv")):
if e_spec.satisfies(Spec("a ~bvv")):
a_spec = e_spec
elif e_spec.name == "pkg-b":
elif e_spec.name == "b":
b_spec = e_spec
elif e_spec.satisfies(Spec("mpi")):
mpi_spec = e_spec
@@ -835,8 +839,8 @@ def test_install_no_add_in_env(tmpdir, mock_fetch, install_mockery, mutable_mock
assert "You can add specs to the environment with 'spack add " in inst_out

# Without --add, ensure that two packages "a" get installed
inst_out = install("pkg-a", output=str)
assert len([x for x in e.all_specs() if x.installed and x.name == "pkg-a"]) == 2
inst_out = install("a", output=str)
assert len([x for x in e.all_specs() if x.installed and x.name == "a"]) == 2

# Install an unambiguous dependency spec (that already exists as a dep
# in the environment) and make sure it gets installed (w/ deps),
@@ -869,7 +873,7 @@ def test_install_no_add_in_env(tmpdir, mock_fetch, install_mockery, mutable_mock
# root of the environment as well as installed.
assert b_spec not in e.roots()

install("--add", "pkg-b")
install("--add", "b")

assert b_spec in e.roots()
assert b_spec not in e.uninstalled_specs()
@@ -900,37 +904,39 @@ def test_install_help_cdash():


@pytest.mark.disable_clean_stage_check
def test_cdash_auth_token(tmpdir, mock_fetch, install_mockery, monkeypatch, capfd):
def test_cdash_auth_token(tmpdir, mock_fetch, install_mockery, capfd):
# capfd interferes with Spack's capturing
with tmpdir.as_cwd(), capfd.disabled():
monkeypatch.setenv("SPACK_CDASH_AUTH_TOKEN", "asdf")
out = install("-v", "--log-file=cdash_reports", "--log-format=cdash", "pkg-a")
assert "Using CDash auth token from environment" in out
with tmpdir.as_cwd():
with capfd.disabled():
os.environ["SPACK_CDASH_AUTH_TOKEN"] = "asdf"
out = install("-v", "--log-file=cdash_reports", "--log-format=cdash", "a")
assert "Using CDash auth token from environment" in out


@pytest.mark.not_on_windows("Windows log_output logs phase header out of order")
@pytest.mark.disable_clean_stage_check
def test_cdash_configure_warning(tmpdir, mock_fetch, install_mockery, capfd):
# capfd interferes with Spack's capturing of e.g., Build.xml output
with capfd.disabled(), tmpdir.as_cwd():
# Test would fail if install raised an error.
with capfd.disabled():
with tmpdir.as_cwd():
# Test would fail if install raised an error.

# Ensure that even on non-x86_64 architectures, there are no
# dependencies installed
spec = Spec("configure-warning").concretized()
spec.clear_dependencies()
specfile = "./spec.json"
with open(specfile, "w") as f:
f.write(spec.to_json())
# Ensure that even on non-x86_64 architectures, there are no
# dependencies installed
spec = spack.spec.Spec("configure-warning").concretized()
spec.clear_dependencies()
specfile = "./spec.json"
with open(specfile, "w") as f:
f.write(spec.to_json())

install("--log-file=cdash_reports", "--log-format=cdash", specfile)
# Verify Configure.xml exists with expected contents.
report_dir = tmpdir.join("cdash_reports")
assert report_dir in tmpdir.listdir()
report_file = report_dir.join("Configure.xml")
assert report_file in report_dir.listdir()
content = report_file.open().read()
assert "foo: No such file or directory" in content
install("--log-file=cdash_reports", "--log-format=cdash", specfile)
# Verify Configure.xml exists with expected contents.
report_dir = tmpdir.join("cdash_reports")
assert report_dir in tmpdir.listdir()
report_file = report_dir.join("Configure.xml")
assert report_file in report_dir.listdir()
content = report_file.open().read()
assert "foo: No such file or directory" in content


@pytest.mark.not_on_windows("ArchSpec gives test platform debian rather than windows")
@@ -947,7 +953,7 @@ def test_compiler_bootstrap(
assert CompilerSpec("gcc@=12.0") not in compilers.all_compiler_specs()

# Test succeeds if it does not raise an error
install("pkg-a%gcc@=12.0")
install("a%gcc@=12.0")


@pytest.mark.not_on_windows("Binary mirrors not supported on windows")
@@ -987,8 +993,8 @@ def test_compiler_bootstrap_from_binary_mirror(
# Now make sure that when the compiler is installed from binary mirror,
# it also gets configured as a compiler. Test succeeds if it does not
# raise an error
install("--no-check-signature", "--cache-only", "--only", "dependencies", "pkg-b%gcc@=10.2.0")
install("--no-cache", "--only", "package", "pkg-b%gcc@10.2.0")
install("--no-check-signature", "--cache-only", "--only", "dependencies", "b%gcc@=10.2.0")
install("--no-cache", "--only", "package", "b%gcc@10.2.0")


@pytest.mark.not_on_windows("ArchSpec gives test platform debian rather than windows")
@@ -1008,7 +1014,7 @@ def test_compiler_bootstrap_already_installed(

# Test succeeds if it does not raise an error
install("gcc@=12.0")
install("pkg-a%gcc@=12.0")
install("a%gcc@=12.0")


def test_install_fails_no_args(tmpdir):
@@ -1190,7 +1196,7 @@ def test_report_filename_for_cdash(install_mockery_mutable_config, mock_fetch):
parser = argparse.ArgumentParser()
spack.cmd.install.setup_parser(parser)
args = parser.parse_args(
["--cdash-upload-url", "https://blahblah/submit.php?project=debugging", "pkg-a"]
["--cdash-upload-url", "https://blahblah/submit.php?project=debugging", "a"]
)
specs = spack.cmd.install.concrete_specs_from_cli(args, {})
filename = spack.cmd.install.report_filename(args, specs)

@@ -133,7 +133,7 @@ def test_maintainers_list_packages(mock_packages, capfd):


def test_maintainers_list_fails(mock_packages, capfd):
out = maintainers("pkg-a", fail_on_error=False)
out = maintainers("a", fail_on_error=False)
assert not out
assert maintainers.returncode == 1

@@ -11,7 +11,6 @@
import spack.config
import spack.main
import spack.modules
import spack.spec
import spack.store

module = spack.main.SpackCommand("module")
@@ -179,8 +178,8 @@ def test_setdefault_command(mutable_database, mutable_config):
}
}
spack.config.set("modules", data)
# Install two different versions of pkg-a
other_spec, preferred = "pkg-a@1.0", "pkg-a@2.0"
# Install two different versions of a package
other_spec, preferred = "a@1.0", "a@2.0"

spack.spec.Spec(other_spec).concretized().package.do_install(fake=True)
spack.spec.Spec(preferred).concretized().package.do_install(fake=True)

@@ -28,8 +28,8 @@ def install(self, spec, prefix):
pass
"""

abc = {"mockpkg-a", "mockpkg-b", "mockpkg-c"}
abd = {"mockpkg-a", "mockpkg-b", "mockpkg-d"}
abc = set(("pkg-a", "pkg-b", "pkg-c"))
abd = set(("pkg-a", "pkg-b", "pkg-d"))


# Force all tests to use a git repository *in* the mock packages repo.
@@ -53,33 +53,27 @@ def mock_pkg_git_repo(git, tmpdir_factory):
git("config", "user.name", "Spack Testing")
git("-c", "commit.gpgsign=false", "commit", "-m", "initial mock repo commit")

# add commit with mockpkg-a, mockpkg-b, mockpkg-c packages
mkdirp("mockpkg-a", "mockpkg-b", "mockpkg-c")
with open("mockpkg-a/package.py", "w") as f:
# add commit with pkg-a, pkg-b, pkg-c packages
mkdirp("pkg-a", "pkg-b", "pkg-c")
with open("pkg-a/package.py", "w") as f:
f.write(pkg_template.format(name="PkgA"))
with open("mockpkg-b/package.py", "w") as f:
with open("pkg-b/package.py", "w") as f:
f.write(pkg_template.format(name="PkgB"))
with open("mockpkg-c/package.py", "w") as f:
with open("pkg-c/package.py", "w") as f:
f.write(pkg_template.format(name="PkgC"))
git("add", "mockpkg-a", "mockpkg-b", "mockpkg-c")
git("-c", "commit.gpgsign=false", "commit", "-m", "add mockpkg-a, mockpkg-b, mockpkg-c")
git("add", "pkg-a", "pkg-b", "pkg-c")
git("-c", "commit.gpgsign=false", "commit", "-m", "add pkg-a, pkg-b, pkg-c")

# remove mockpkg-c, add mockpkg-d
with open("mockpkg-b/package.py", "a") as f:
f.write("\n# change mockpkg-b")
git("add", "mockpkg-b")
mkdirp("mockpkg-d")
with open("mockpkg-d/package.py", "w") as f:
# remove pkg-c, add pkg-d
with open("pkg-b/package.py", "a") as f:
f.write("\n# change pkg-b")
git("add", "pkg-b")
mkdirp("pkg-d")
with open("pkg-d/package.py", "w") as f:
f.write(pkg_template.format(name="PkgD"))
git("add", "mockpkg-d")
git("rm", "-rf", "mockpkg-c")
git(
"-c",
"commit.gpgsign=false",
"commit",
"-m",
"change mockpkg-b, remove mockpkg-c, add mockpkg-d",
)
git("add", "pkg-d")
git("rm", "-rf", "pkg-c")
git("-c", "commit.gpgsign=false", "commit", "-m", "change pkg-b, remove pkg-c, add pkg-d")

with spack.repo.use_repositories(str(repo_path)):
yield mock_repo_packages
@@ -92,11 +86,12 @@ def mock_pkg_names():
# Be sure to include virtual packages since packages with stand-alone
# tests may inherit additional tests from the virtuals they provide,
# such as packages that implement `mpi`.
return {
names = set(
name
for name in repo.all_package_names(include_virtuals=True)
if not name.startswith("mockpkg-")
}
if not name.startswith("pkg-")
)
return names


def split(output):
@@ -118,17 +113,17 @@ def test_mock_packages_path(mock_packages):

def test_pkg_add(git, mock_pkg_git_repo):
with working_dir(mock_pkg_git_repo):
mkdirp("mockpkg-e")
with open("mockpkg-e/package.py", "w") as f:
mkdirp("pkg-e")
with open("pkg-e/package.py", "w") as f:
f.write(pkg_template.format(name="PkgE"))

pkg("add", "mockpkg-e")
pkg("add", "pkg-e")

with working_dir(mock_pkg_git_repo):
try:
assert "A mockpkg-e/package.py" in git("status", "--short", output=str)
assert "A pkg-e/package.py" in git("status", "--short", output=str)
finally:
shutil.rmtree("mockpkg-e")
shutil.rmtree("pkg-e")
# Removing a package mid-run disrupts Spack's caching
if spack.repo.PATH.repos[0]._fast_package_checker:
spack.repo.PATH.repos[0]._fast_package_checker.invalidate()
@@ -143,10 +138,10 @@ def test_pkg_list(mock_pkg_git_repo, mock_pkg_names):
assert sorted(mock_pkg_names) == sorted(out)

out = split(pkg("list", "HEAD^"))
assert sorted(mock_pkg_names.union(["mockpkg-a", "mockpkg-b", "mockpkg-c"])) == sorted(out)
assert sorted(mock_pkg_names.union(["pkg-a", "pkg-b", "pkg-c"])) == sorted(out)

out = split(pkg("list", "HEAD"))
assert sorted(mock_pkg_names.union(["mockpkg-a", "mockpkg-b", "mockpkg-d"])) == sorted(out)
assert sorted(mock_pkg_names.union(["pkg-a", "pkg-b", "pkg-d"])) == sorted(out)

# test with three dots to make sure pkg calls `git merge-base`
out = split(pkg("list", "HEAD^^..."))
@@ -156,25 +151,25 @@ def test_pkg_list(mock_pkg_git_repo, mock_pkg_names):
@pytest.mark.not_on_windows("stdout format conflict")
def test_pkg_diff(mock_pkg_git_repo, mock_pkg_names):
out = split(pkg("diff", "HEAD^^", "HEAD^"))
assert out == ["HEAD^:", "mockpkg-a", "mockpkg-b", "mockpkg-c"]
assert out == ["HEAD^:", "pkg-a", "pkg-b", "pkg-c"]

out = split(pkg("diff", "HEAD^^", "HEAD"))
assert out == ["HEAD:", "mockpkg-a", "mockpkg-b", "mockpkg-d"]
assert out == ["HEAD:", "pkg-a", "pkg-b", "pkg-d"]

out = split(pkg("diff", "HEAD^", "HEAD"))
assert out == ["HEAD^:", "mockpkg-c", "HEAD:", "mockpkg-d"]
assert out == ["HEAD^:", "pkg-c", "HEAD:", "pkg-d"]


@pytest.mark.not_on_windows("stdout format conflict")
def test_pkg_added(mock_pkg_git_repo):
out = split(pkg("added", "HEAD^^", "HEAD^"))
assert ["mockpkg-a", "mockpkg-b", "mockpkg-c"] == out
assert ["pkg-a", "pkg-b", "pkg-c"] == out

out = split(pkg("added", "HEAD^^", "HEAD"))
assert ["mockpkg-a", "mockpkg-b", "mockpkg-d"] == out
assert ["pkg-a", "pkg-b", "pkg-d"] == out

out = split(pkg("added", "HEAD^", "HEAD"))
assert ["mockpkg-d"] == out
assert ["pkg-d"] == out

out = split(pkg("added", "HEAD", "HEAD"))
assert out == []
@@ -189,7 +184,7 @@ def test_pkg_removed(mock_pkg_git_repo):
assert out == []

out = split(pkg("removed", "HEAD^", "HEAD"))
assert out == ["mockpkg-c"]
assert out == ["pkg-c"]


@pytest.mark.not_on_windows("stdout format conflict")
@@ -201,34 +196,34 @@ def test_pkg_changed(mock_pkg_git_repo):
assert out == []

out = split(pkg("changed", "--type", "a", "HEAD^^", "HEAD^"))
assert out == ["mockpkg-a", "mockpkg-b", "mockpkg-c"]
assert out == ["pkg-a", "pkg-b", "pkg-c"]

out = split(pkg("changed", "--type", "r", "HEAD^^", "HEAD^"))
assert out == []

out = split(pkg("changed", "--type", "ar", "HEAD^^", "HEAD^"))
assert out == ["mockpkg-a", "mockpkg-b", "mockpkg-c"]
assert out == ["pkg-a", "pkg-b", "pkg-c"]

out = split(pkg("changed", "--type", "arc", "HEAD^^", "HEAD^"))
assert out == ["mockpkg-a", "mockpkg-b", "mockpkg-c"]
assert out == ["pkg-a", "pkg-b", "pkg-c"]

out = split(pkg("changed", "HEAD^", "HEAD"))
assert out == ["mockpkg-b"]
assert out == ["pkg-b"]

out = split(pkg("changed", "--type", "c", "HEAD^", "HEAD"))
assert out == ["mockpkg-b"]
assert out == ["pkg-b"]

out = split(pkg("changed", "--type", "a", "HEAD^", "HEAD"))
assert out == ["mockpkg-d"]
assert out == ["pkg-d"]

out = split(pkg("changed", "--type", "r", "HEAD^", "HEAD"))
assert out == ["mockpkg-c"]
assert out == ["pkg-c"]

out = split(pkg("changed", "--type", "ar", "HEAD^", "HEAD"))
assert out == ["mockpkg-c", "mockpkg-d"]
assert out == ["pkg-c", "pkg-d"]

out = split(pkg("changed", "--type", "arc", "HEAD^", "HEAD"))
assert out == ["mockpkg-b", "mockpkg-c", "mockpkg-d"]
assert out == ["pkg-b", "pkg-c", "pkg-d"]

# invalid type argument
with pytest.raises(spack.main.SpackCommandError):
@@ -294,7 +289,7 @@ def test_pkg_canonical_source(mock_packages):


def test_pkg_hash(mock_packages):
output = pkg("hash", "pkg-a", "pkg-b").strip().split()
output = pkg("hash", "a", "b").strip().split()
assert len(output) == 2 and all(len(elt) == 32 for elt in output)

output = pkg("hash", "multimethod").strip().split()

@@ -59,7 +59,7 @@ def test_spec_concretizer_args(mutable_config, mutable_database):
def test_spec_parse_dependency_variant_value():
"""Verify that we can provide multiple key=value variants to multiple separate
packages within a spec string."""
output = spec("multivalue-variant fee=barbaz ^ pkg-a foobar=baz")
output = spec("multivalue-variant fee=barbaz ^ a foobar=baz")

assert "fee=barbaz" in output
assert "foobar=baz" in output

@@ -10,14 +10,10 @@

from llnl.util.filesystem import copy_tree

import spack.cmd.common.arguments
import spack.cmd.install
import spack.cmd.test
import spack.config
import spack.install_test
import spack.package_base
import spack.paths
import spack.spec
import spack.store
from spack.install_test import TestStatus
from spack.main import SpackCommand

@@ -11,7 +11,7 @@
import llnl.util.filesystem as fs

import spack.compiler
import spack.compilers
import spack.compilers as compilers
import spack.spec
import spack.util.environment
from spack.compiler import Compiler
@@ -25,14 +25,12 @@ class MockOs:
pass

compiler_name = "gcc"
compiler_cls = spack.compilers.class_for_compiler_name(compiler_name)
compiler_cls = compilers.class_for_compiler_name(compiler_name)
monkeypatch.setattr(compiler_cls, "cc_version", lambda x: version)

compiler_id = spack.compilers.CompilerID(
os=MockOs, compiler_name=compiler_name, version=None
)
variation = spack.compilers.NameVariation(prefix="", suffix="")
return spack.compilers.DetectVersionArgs(
compiler_id = compilers.CompilerID(os=MockOs, compiler_name=compiler_name, version=None)
variation = compilers.NameVariation(prefix="", suffix="")
return compilers.DetectVersionArgs(
id=compiler_id, variation=variation, language="cc", path=path
)

@@ -58,21 +56,15 @@ def test_multiple_conflicting_compiler_definitions(mutable_config):
mutable_config.update_config("compilers", compiler_config)

arch_spec = spack.spec.ArchSpec(("test", "test", "test"))
cmp = spack.compilers.compiler_for_spec("clang@=0.0.0", arch_spec)
cmp = compilers.compiler_for_spec("clang@=0.0.0", arch_spec)
assert cmp.f77 == "f77"


def test_get_compiler_duplicates(mutable_config, compiler_factory):
def test_get_compiler_duplicates(config):
# In this case there is only one instance of the specified compiler in
# the test configuration (so it is not actually a duplicate), but the
# method behaves the same.
cnl_compiler = compiler_factory(spec="gcc@4.5.0", operating_system="CNL")
# CNL compiler has no target attribute, and this is essential to make detection pass
del cnl_compiler["compiler"]["target"]
mutable_config.set(
"compilers", [compiler_factory(spec="gcc@4.5.0", operating_system="SuSE11"), cnl_compiler]
)
cfg_file_to_duplicates = spack.compilers.get_compiler_duplicates(
cfg_file_to_duplicates = compilers.get_compiler_duplicates(
"gcc@4.5.0", spack.spec.ArchSpec("cray-CNL-xeon")
)

@@ -81,6 +73,13 @@ def test_get_compiler_duplicates(mutable_config, compiler_factory):
assert len(duplicates) == 1


def test_all_compilers(config):
all_compilers = compilers.all_compilers()
filtered = [x for x in all_compilers if str(x.spec) == "clang@=3.3"]
filtered = [x for x in filtered if x.operating_system == "SuSE11"]
assert len(filtered) == 1


@pytest.mark.parametrize(
"input_version,expected_version,expected_error",
[(None, None, "Couldn't get version for compiler /usr/bin/gcc"), ("4.9", "4.9", None)],
@@ -89,7 +88,7 @@ def test_version_detection_is_empty(
make_args_for_version, input_version, expected_version, expected_error
):
args = make_args_for_version(version=input_version)
result, error = spack.compilers.detect_version(args)
result, error = compilers.detect_version(args)
if not error:
assert result.id.version == expected_version

@@ -105,7 +104,7 @@ def test_compiler_flags_from_config_are_grouped():
"modules": None,
}

compiler = spack.compilers.compiler_from_dict(compiler_entry)
compiler = compilers.compiler_from_dict(compiler_entry)
assert any(x == "-foo-flag foo-val" for x in compiler.flags["cflags"])


@@ -254,8 +253,8 @@ def test_get_compiler_link_paths_load_env(working_env, monkeypatch, tmpdir):
gcc = str(tmpdir.join("gcc"))
with open(gcc, "w") as f:
f.write(
"""#!/bin/sh
if [ "$ENV_SET" = "1" ] && [ "$MODULE_LOADED" = "1" ]; then
"""#!/bin/bash
if [[ $ENV_SET == "1" && $MODULE_LOADED == "1" ]]; then
echo '"""
+ no_flag_output
+ """'
@@ -289,7 +288,7 @@ def flag_value(flag, spec):
else:
compiler_entry = copy(default_compiler_entry)
compiler_entry["spec"] = spec
compiler = spack.compilers.compiler_from_dict(compiler_entry)
compiler = compilers.compiler_from_dict(compiler_entry)

return getattr(compiler, flag)

@@ -659,27 +658,9 @@ def test_xl_r_flags():
"compiler_spec,expected_result",
[("gcc@4.7.2", False), ("clang@3.3", False), ("clang@8.0.0", True)],
)
def test_detecting_mixed_toolchains(
compiler_spec, expected_result, mutable_config, compiler_factory
):
mixed_c = compiler_factory(spec="clang@8.0.0", operating_system="debian6")
mixed_c["compiler"]["paths"] = {
"cc": "/path/to/clang-8",
"cxx": "/path/to/clang++-8",
"f77": "/path/to/gfortran-9",
"fc": "/path/to/gfortran-9",
}
mutable_config.set(
"compilers",
[
compiler_factory(spec="gcc@4.7.2", operating_system="debian6"),
compiler_factory(spec="clang@3.3", operating_system="debian6"),
mixed_c,
],
)

compiler = spack.compilers.compilers_for_spec(compiler_spec).pop()
assert spack.compilers.is_mixed_toolchain(compiler) is expected_result
def test_detecting_mixed_toolchains(compiler_spec, expected_result, config):
compiler = compilers.compilers_for_spec(compiler_spec).pop()
assert compilers.is_mixed_toolchain(compiler) is expected_result


@pytest.mark.regression("14798,13733")
@@ -720,8 +701,8 @@ def test_compiler_get_real_version(working_env, monkeypatch, tmpdir):
gcc = str(tmpdir.join("gcc"))
with open(gcc, "w") as f:
f.write(
"""#!/bin/sh
if [ "$CMP_ON" = "1" ]; then
"""#!/bin/bash
if [[ $CMP_ON == "1" ]]; then
echo "$CMP_VER"
fi
"""
@@ -758,49 +739,6 @@ def module(*args):
assert version == test_version


@pytest.mark.regression("42679")
def test_get_compilers(config):
"""Tests that we can select compilers whose versions differ only for a suffix."""
common = {
"flags": {},
"operating_system": "ubuntu23.10",
"target": "x86_64",
"modules": [],
"environment": {},
"extra_rpaths": [],
}
with_suffix = {
"spec": "gcc@13.2.0-suffix",
"paths": {
"cc": "/usr/bin/gcc-13.2.0-suffix",
"cxx": "/usr/bin/g++-13.2.0-suffix",
"f77": "/usr/bin/gfortran-13.2.0-suffix",
"fc": "/usr/bin/gfortran-13.2.0-suffix",
},
**common,
}
without_suffix = {
"spec": "gcc@13.2.0",
"paths": {
"cc": "/usr/bin/gcc-13.2.0",
"cxx": "/usr/bin/g++-13.2.0",
"f77": "/usr/bin/gfortran-13.2.0",
"fc": "/usr/bin/gfortran-13.2.0",
},
**common,
}

compilers = [{"compiler": without_suffix}, {"compiler": with_suffix}]

assert spack.compilers.get_compilers(
compilers, cspec=spack.spec.CompilerSpec("gcc@=13.2.0-suffix")
) == [spack.compilers._compiler_from_config_entry(with_suffix)]

assert spack.compilers.get_compilers(
compilers, cspec=spack.spec.CompilerSpec("gcc@=13.2.0")
) == [spack.compilers._compiler_from_config_entry(without_suffix)]
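(A hedged reading of the assertions just above, not new behavior introduced by the diff: `@=` requests an exact version, so the "-suffix" entry and the plain entry stay distinguishable even though they share the 13.2.0 prefix.)

# CompilerSpec("gcc@=13.2.0")        -> selects only the entry without "-suffix"
# CompilerSpec("gcc@=13.2.0-suffix") -> selects only the suffixed entry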


def test_compiler_get_real_version_fails(working_env, monkeypatch, tmpdir):
# Test variables
test_version = "2.2.2"
@@ -809,8 +747,8 @@ def test_compiler_get_real_version_fails(working_env, monkeypatch, tmpdir):
gcc = str(tmpdir.join("gcc"))
with open(gcc, "w") as f:
f.write(
"""#!/bin/sh
if [ "$CMP_ON" = "1" ]; then
"""#!/bin/bash
if [[ $CMP_ON == "1" ]]; then
echo "$CMP_VER"
fi
"""
@@ -863,7 +801,7 @@ def test_compiler_flags_use_real_version(working_env, monkeypatch, tmpdir):
gcc = str(tmpdir.join("gcc"))
with open(gcc, "w") as f:
f.write(
"""#!/bin/sh
"""#!/bin/bash
echo "4.4.4"
"""
) # Version for which c++11 flag is -std=c++0x

@@ -23,8 +23,6 @@
import spack.platforms
import spack.repo
import spack.solver.asp
import spack.store
import spack.util.file_cache
import spack.variant as vt
from spack.concretize import find_spec
from spack.spec import CompilerSpec, Spec
@@ -205,9 +203,7 @@ def change(self, changes=None):
# TODO: in case tests using this fixture start failing.
if sys.modules.get("spack.pkg.changing.changing"):
del sys.modules["spack.pkg.changing.changing"]
if sys.modules.get("spack.pkg.changing.root"):
del sys.modules["spack.pkg.changing.root"]
if sys.modules.get("spack.pkg.changing"):
del sys.modules["spack.pkg.changing"]

# Change the recipe
@@ -226,20 +222,6 @@ def change(self, changes=None):
yield _changing_pkg


@pytest.fixture()
def clang12_with_flags(compiler_factory):
c = compiler_factory(spec="clang@12.2.0", operating_system="redhat6")
c["compiler"]["flags"] = {"cflags": "-O3", "cxxflags": "-O3"}
return c


@pytest.fixture()
def gcc11_with_flags(compiler_factory):
c = compiler_factory(spec="gcc@11.1.0", operating_system="redhat6")
c["compiler"]["flags"] = {"cflags": "-O0 -g", "cxxflags": "-O0 -g", "fflags": "-O0 -g"}
return c


# This must use the mutable_config fixture because the test
# adjusting_default_target_based_on_compiler uses the current_host fixture,
# which changes the config.
@@ -332,35 +314,18 @@ def test_provides_handles_multiple_providers_of_same_version(self):
assert Spec("builtin.mock.multi-provider-mpi@1.10.0") in providers
assert Spec("builtin.mock.multi-provider-mpi@1.8.8") in providers

def test_different_compilers_get_different_flags(
self, mutable_config, clang12_with_flags, gcc11_with_flags
):
"""Tests that nodes get the flags of the associated compiler."""
mutable_config.set("compilers", [clang12_with_flags, gcc11_with_flags])
def test_different_compilers_get_different_flags(self):
client = Spec(
"cmake-client %gcc@11.1.0 platform=test os=fe target=fe"
" ^cmake %clang@12.2.0 platform=test os=fe target=fe"
).concretized()
+ " ^cmake %clang@12.2.0 platform=test os=fe target=fe"
)
client.concretize()
cmake = client["cmake"]
assert set(client.compiler_flags["cflags"]) == {"-O0", "-g"}
assert set(cmake.compiler_flags["cflags"]) == {"-O3"}
assert set(client.compiler_flags["fflags"]) == {"-O0", "-g"}
assert set(client.compiler_flags["cflags"]) == set(["-O0", "-g"])
assert set(cmake.compiler_flags["cflags"]) == set(["-O3"])
assert set(client.compiler_flags["fflags"]) == set(["-O0", "-g"])
assert not set(cmake.compiler_flags["fflags"])

@pytest.mark.regression("9908")
def test_spec_flags_maintain_order(self, mutable_config, gcc11_with_flags):
"""Tests that Spack assembles flags in a consistent way (i.e. with the same ordering),
for successive concretizations.
"""
mutable_config.set("compilers", [gcc11_with_flags])
spec_str = "libelf %gcc@11.1.0 os=redhat6"
for _ in range(3):
s = Spec(spec_str).concretized()
assert all(
s.compiler_flags[x] == ["-O0", "-g"] for x in ("cflags", "cxxflags", "fflags")
)

@pytest.mark.xfail(reason="Broken, needs to be fixed")
def test_compiler_flags_from_compiler_and_dependent(self):
client = Spec("cmake-client %clang@12.2.0 platform=test os=fe target=fe cflags==-g")
client.concretize()
@@ -368,10 +333,9 @@ def test_compiler_flags_from_compiler_and_dependent(self):
for spec in [client, cmake]:
assert spec.compiler_flags["cflags"] == ["-O3", "-g"]

def test_compiler_flags_differ_identical_compilers(self, mutable_config, clang12_with_flags):
mutable_config.set("compilers", [clang12_with_flags])
def test_compiler_flags_differ_identical_compilers(self):
# Correct arch to use test compiler that has flags
spec = Spec("pkg-a %clang@12.2.0 platform=test os=fe target=fe")
spec = Spec("a %clang@12.2.0 platform=test os=fe target=fe")

# Get the compiler that matches the spec (
compiler = spack.compilers.compiler_for_spec("clang@=12.2.0", spec.architecture)
@@ -424,23 +388,28 @@ def test_compiler_inherited_upwards(self):
for dep in spec.traverse():
assert "%clang" in dep

def test_architecture_inheritance(self):
"""test_architecture_inheritance is likely to fail with an
UnavailableCompilerVersionError if the architecture is concretized
incorrectly.
"""
spec = Spec("cmake-client %gcc@11.1.0 os=fe ^ cmake")
spec.concretize()
assert spec["cmake"].architecture == spec.architecture

@pytest.mark.only_clingo("Fixing the parser broke this test for the original concretizer")
def test_architecture_deep_inheritance(self, mock_targets, compiler_factory):
def test_architecture_deep_inheritance(self, mock_targets):
"""Make sure that indirect dependencies receive architecture
information from the root even when partial architecture information
is provided by an intermediate dependency.
"""
cnl_compiler = compiler_factory(spec="gcc@4.5.0", operating_system="CNL")
# CNL compiler has no target attribute, and this is essential to make detection pass
del cnl_compiler["compiler"]["target"]
with spack.config.override("compilers", [cnl_compiler]):
spec_str = "mpileaks %gcc@4.5.0 os=CNL target=nocona ^dyninst os=CNL ^callpath os=CNL"
spec = Spec(spec_str).concretized()
for s in spec.traverse(root=False):
assert s.architecture.target == spec.architecture.target
spec_str = "mpileaks %gcc@4.5.0 os=CNL target=nocona" " ^dyninst os=CNL ^callpath os=CNL"
spec = Spec(spec_str).concretized()
for s in spec.traverse(root=False):
assert s.architecture.target == spec.architecture.target

def test_compiler_flags_from_user_are_grouped(self):
spec = Spec('pkg-a%gcc cflags="-O -foo-flag foo-val" platform=test')
spec = Spec('a%gcc cflags="-O -foo-flag foo-val" platform=test')
spec.concretize()
cflags = spec.compiler_flags["cflags"]
assert any(x == "-foo-flag foo-val" for x in cflags)
@@ -548,20 +517,20 @@ def test_concretize_propagate_multivalue_variant(self):
spec = Spec("multivalue-variant foo==baz,fee")
spec.concretize()

assert spec.satisfies("^pkg-a foo=baz,fee")
assert spec.satisfies("^pkg-b foo=baz,fee")
assert not spec.satisfies("^pkg-a foo=bar")
assert not spec.satisfies("^pkg-b foo=bar")
assert spec.satisfies("^a foo=baz,fee")
assert spec.satisfies("^b foo=baz,fee")
assert not spec.satisfies("^a foo=bar")
assert not spec.satisfies("^b foo=bar")

def test_no_matching_compiler_specs(self, mock_low_high_config):
# only relevant when not building compilers as needed
with spack.concretize.enable_compiler_existence_check():
s = Spec("pkg-a %gcc@=0.0.0")
s = Spec("a %gcc@=0.0.0")
with pytest.raises(spack.concretize.UnavailableCompilerVersionError):
s.concretize()

def test_no_compilers_for_arch(self):
s = Spec("pkg-a arch=linux-rhel0-x86_64")
s = Spec("a arch=linux-rhel0-x86_64")
with pytest.raises(spack.error.SpackError):
s.concretize()

@@ -770,7 +739,7 @@ def test_regression_issue_7941(self):
# The string representation of a spec containing
# an explicit multi-valued variant and a dependency
# might be parsed differently than the originating spec
s = Spec("pkg-a foobar=bar ^pkg-b")
s = Spec("a foobar=bar ^b")
t = Spec(str(s))

s.concretize()
@@ -872,12 +841,9 @@ def test_concretize_anonymous_dep(self, spec_str):
],
)
@pytest.mark.only_clingo("Original concretizer cannot work around conflicts")
def test_compiler_conflicts_in_package_py(
self, spec_str, expected_str, clang12_with_flags, gcc11_with_flags
):
with spack.config.override("compilers", [clang12_with_flags, gcc11_with_flags]):
s = Spec(spec_str).concretized()
assert s.satisfies(expected_str)
def test_compiler_conflicts_in_package_py(self, spec_str, expected_str):
s = Spec(spec_str).concretized()
assert s.satisfies(expected_str)

@pytest.mark.parametrize(
"spec_str,expected,unexpected",
@@ -1142,14 +1108,14 @@ def test_conditional_provides_or_depends_on(self):
[
# Check that True is treated correctly and attaches test deps
# to all nodes in the DAG
("pkg-a", True, ["pkg-a"], []),
("pkg-a foobar=bar", True, ["pkg-a", "pkg-b"], []),
("a", True, ["a"], []),
("a foobar=bar", True, ["a", "b"], []),
# Check that a list of names activates the dependency only for
# packages in that list
("pkg-a foobar=bar", ["pkg-a"], ["pkg-a"], ["pkg-b"]),
("pkg-a foobar=bar", ["pkg-b"], ["pkg-b"], ["pkg-a"]),
("a foobar=bar", ["a"], ["a"], ["b"]),
("a foobar=bar", ["b"], ["b"], ["a"]),
# Check that False disregard test dependencies
("pkg-a foobar=bar", False, [], ["pkg-a", "pkg-b"]),
("a foobar=bar", False, [], ["a", "b"]),
],
)
def test_activating_test_dependencies(self, spec_str, tests_arg, with_dep, without_dep):
@@ -1167,18 +1133,16 @@ def test_activating_test_dependencies(self, spec_str, tests_arg, with_dep, witho

@pytest.mark.regression("20019")
@pytest.mark.only_clingo("Use case not supported by the original concretizer")
def test_compiler_match_is_preferred_to_newer_version(self, compiler_factory):
def test_compiler_match_is_preferred_to_newer_version(self):
# This spec depends on openblas. Openblas has a conflict
# that doesn't allow newer versions with gcc@4.4.0. Check
# that an old version of openblas is selected, rather than
# a different compiler for just that node.
with spack.config.override(
"compilers", [compiler_factory(spec="gcc@10.1.0", operating_system="redhat6")]
):
spec_str = "simple-inheritance+openblas %gcc@10.1.0 os=redhat6"
s = Spec(spec_str).concretized()
assert "openblas@0.2.15" in s
assert s["openblas"].satisfies("%gcc@10.1.0")
spec_str = "simple-inheritance+openblas %gcc@10.1.0 os=redhat6"
s = Spec(spec_str).concretized()

assert "openblas@0.2.15" in s
assert s["openblas"].satisfies("%gcc@10.1.0")

@pytest.mark.regression("19981")
def test_target_ranges_in_conflicts(self):
@@ -1203,12 +1167,8 @@ def test_variant_not_default(self):

@pytest.mark.regression("20055")
@pytest.mark.only_clingo("Use case not supported by the original concretizer")
def test_custom_compiler_version(self, mutable_config, compiler_factory, monkeypatch):
mutable_config.set(
"compilers", [compiler_factory(spec="gcc@10foo", operating_system="redhat6")]
)
monkeypatch.setattr(spack.compiler.Compiler, "real_version", "10.2.1")
s = Spec("pkg-a %gcc@10foo os=redhat6").concretized()
def test_custom_compiler_version(self):
s = Spec("a %gcc@10foo os=redhat6").concretized()
assert "%gcc@10foo" in s

def test_all_patches_applied(self):
@@ -1333,10 +1293,10 @@ def test_reuse_installed_packages_when_package_def_changes(
@pytest.mark.only_clingo("Use case not supported by the original concretizer")
def test_reuse_with_flags(self, mutable_database, mutable_config):
spack.config.set("concretizer:reuse", True)
spec = Spec("pkg-a cflags=-g cxxflags=-g").concretized()
spec = Spec("a cflags=-g cxxflags=-g").concretized()
spack.store.STORE.db.add(spec, None)

testspec = Spec("pkg-a cflags=-g")
testspec = Spec("a cflags=-g")
testspec.concretize()
assert testspec == spec

@@ -1407,16 +1367,11 @@ def test_external_with_non_default_variant_as_dependency(self):
("mpileaks%gcc@10.2.1 platform=test os=redhat6", "os=redhat6"),
],
)
def test_os_selection_when_multiple_choices_are_possible(
self, spec_str, expected_os, compiler_factory
):
# GCC 10.2.1 is defined both for debian and for redhat
with spack.config.override(
"compilers", [compiler_factory(spec="gcc@10.2.1", operating_system="redhat6")]
):
s = Spec(spec_str).concretized()
for node in s.traverse():
assert node.satisfies(expected_os)
def test_os_selection_when_multiple_choices_are_possible(self, spec_str, expected_os):
s = Spec(spec_str).concretized()

for node in s.traverse():
assert node.satisfies(expected_os)

@pytest.mark.regression("22718")
@pytest.mark.parametrize(
@@ -1544,30 +1499,6 @@ def test_sticky_variant_in_package(self):
s = Spec("sticky-variant %clang").concretized()
assert s.satisfies("%clang") and s.satisfies("~allow-gcc")

@pytest.mark.regression("42172")
@pytest.mark.only_clingo("Original concretizer cannot use sticky variants")
@pytest.mark.parametrize(
"spec,allow_gcc",
[
("sticky-variant@1.0+allow-gcc", True),
("sticky-variant@1.0~allow-gcc", False),
("sticky-variant@1.0", False),
],
)
def test_sticky_variant_in_external(self, spec, allow_gcc):
# setup external for sticky-variant+allow-gcc
config = {"externals": [{"spec": spec, "prefix": "/fake/path"}], "buildable": False}
spack.config.set("packages:sticky-variant", config)

maybe = llnl.util.lang.nullcontext if allow_gcc else pytest.raises
with maybe(spack.error.SpackError):
s = Spec("sticky-variant-dependent%gcc").concretized()

if allow_gcc:
assert s.satisfies("%gcc")
assert s["sticky-variant"].satisfies("+allow-gcc")
assert s["sticky-variant"].external

@pytest.mark.only_clingo("Use case not supported by the original concretizer")
def test_do_not_invent_new_concrete_versions_unless_necessary(self):
# ensure we select a known satisfying version rather than creating
@@ -1673,35 +1604,33 @@ def test_installed_version_is_selected_only_for_reuse(
assert not new_root["changing"].satisfies("@1.0")

@pytest.mark.regression("28259")
def test_reuse_with_unknown_namespace_dont_raise(
self, temporary_store, mock_custom_repository
):
def test_reuse_with_unknown_namespace_dont_raise(self, mock_custom_repository):
with spack.repo.use_repositories(mock_custom_repository, override=False):
s = Spec("pkg-c").concretized()
s = Spec("c").concretized()
assert s.namespace != "builtin.mock"
s.package.do_install(fake=True, explicit=True)

with spack.config.override("concretizer:reuse", True):
s = Spec("pkg-c").concretized()
s = Spec("c").concretized()
assert s.namespace == "builtin.mock"

@pytest.mark.regression("28259")
def test_reuse_with_unknown_package_dont_raise(self, tmpdir, temporary_store, monkeypatch):
builder = spack.repo.MockRepositoryBuilder(tmpdir.mkdir("mock.repo"), namespace="myrepo")
builder.add_package("pkg-c")
def test_reuse_with_unknown_package_dont_raise(self, tmpdir, monkeypatch):
builder = spack.repo.MockRepositoryBuilder(tmpdir, namespace="myrepo")
builder.add_package("c")
with spack.repo.use_repositories(builder.root, override=False):
s = Spec("pkg-c").concretized()
s = Spec("c").concretized()
assert s.namespace == "myrepo"
s.package.do_install(fake=True, explicit=True)

del sys.modules["spack.pkg.myrepo.pkg-c"]
del sys.modules["spack.pkg.myrepo.c"]
del sys.modules["spack.pkg.myrepo"]
builder.remove("pkg-c")
builder.remove("c")
with spack.repo.use_repositories(builder.root, override=False) as repos:
# TODO (INJECT CONFIGURATION): unclear why the cache needs to be invalidated explicitly
repos.repos[0]._pkg_checker.invalidate()
with spack.config.override("concretizer:reuse", True):
s = Spec("pkg-c").concretized()
s = Spec("c").concretized()
assert s.namespace == "builtin.mock"

@pytest.mark.parametrize(
@@ -1813,20 +1742,20 @@ def test_misleading_error_message_on_version(self, mutable_database):
@pytest.mark.only_clingo("Use case not supported by the original concretizer")
def test_version_weight_and_provenance(self):
"""Test package preferences during coconcretization."""
reusable_specs = [Spec(spec_str).concretized() for spec_str in ("pkg-b@0.9", "pkg-b@1.0")]
root_spec = Spec("pkg-a foobar=bar")
reusable_specs = [Spec(spec_str).concretized() for spec_str in ("b@0.9", "b@1.0")]
root_spec = Spec("a foobar=bar")

with spack.config.override("concretizer:reuse", True):
solver = spack.solver.asp.Solver()
setup = spack.solver.asp.SpackSolverSetup()
result, _, _ = solver.driver.solve(setup, [root_spec], reuse=reusable_specs)
# The result here should have a single spec to build ('pkg-a')
# and it should be using pkg-b@1.0 with a version badness of 2
# The result here should have a single spec to build ('a')
# and it should be using b@1.0 with a version badness of 2
# The provenance is:
# version_declared("pkg-b","1.0",0,"package_py").
# version_declared("pkg-b","0.9",1,"package_py").
# version_declared("pkg-b","1.0",2,"installed").
# version_declared("pkg-b","0.9",3,"installed").
# version_declared("b","1.0",0,"package_py").
# version_declared("b","0.9",1,"package_py").
# version_declared("b","1.0",2,"installed").
# version_declared("b","0.9",3,"installed").
#
# Depending on the target, it may also use gnuconfig
result_spec = result.specs[0]
@@ -1839,12 +1768,12 @@ def test_version_weight_and_provenance(self):

for criterion in criteria:
assert criterion in result.criteria
assert result_spec.satisfies("^pkg-b@1.0")
assert result_spec.satisfies("^b@1.0")

@pytest.mark.regression("31169")
@pytest.mark.only_clingo("Use case not supported by the original concretizer")
def test_not_reusing_incompatible_os_or_compiler(self):
root_spec = Spec("pkg-b")
root_spec = Spec("b")
s = root_spec.concretized()
wrong_compiler, wrong_os = s.copy(), s.copy()
wrong_compiler.compiler = spack.spec.CompilerSpec("gcc@12.1.0")
@@ -2099,7 +2028,7 @@ def test_external_python_extension_find_unified_python(self):
"specs",
[
["mpileaks^ callpath ^dyninst@8.1.1:8 ^mpich2@1.3:1"],
["multivalue-variant ^pkg-a@2:2"],
["multivalue-variant ^a@2:2"],
["v1-consumer ^conditional-provider@1:1 +disable-v1"],
],
)
@@ -2149,7 +2078,7 @@ def test_compiler_match_constraints_when_selected(self):
},
]
spack.config.set("compilers", compiler_configuration)
s = Spec("pkg-a %gcc@:11").concretized()
s = Spec("a %gcc@:11").concretized()
assert s.compiler.version == ver("=11.1.0"), s

@pytest.mark.regression("36339")
@@ -2170,7 +2099,7 @@ def test_compiler_with_custom_non_numeric_version(self, mock_executable):
}
]
spack.config.set("compilers", compiler_configuration)
s = Spec("pkg-a %gcc@foo").concretized()
s = Spec("a %gcc@foo").concretized()
assert s.compiler.version == ver("=foo")

@pytest.mark.regression("36628")
@@ -2196,7 +2125,7 @@ def test_concretization_with_compilers_supporting_target_any(self):
]

with spack.config.override("compilers", compiler_configuration):
s = Spec("pkg-a").concretized()
s = spack.spec.Spec("a").concretized()
assert s.satisfies("%gcc@12.1.0")

@pytest.mark.parametrize("spec_str", ["mpileaks", "mpileaks ^mpich"])
@@ -2231,7 +2160,7 @@ def test_dont_define_new_version_from_input_if_checksum_required(self, working_e
with pytest.raises(spack.error.UnsatisfiableSpecError):
# normally spack concretizes to @=3.0 if it's not defined in package.py, except
# when checksums are required
Spec("pkg-a@=3.0").concretized()
Spec("a@=3.0").concretized()

@pytest.mark.regression("39570")
@pytest.mark.db
@@ -2251,33 +2180,6 @@ def test_reuse_python_from_cli_and_extension_from_db(self, mutable_database):

assert with_reuse.dag_hash() == without_reuse.dag_hash()

@pytest.mark.regression("35536")
@pytest.mark.parametrize(
"spec_str,expected_namespaces",
[
# Single node with fully qualified namespace
("builtin.mock.gmake", {"gmake": "builtin.mock"}),
# Dependency with fully qualified namespace
("hdf5 ^builtin.mock.gmake", {"gmake": "builtin.mock", "hdf5": "duplicates.test"}),
("hdf5 ^gmake", {"gmake": "duplicates.test", "hdf5": "duplicates.test"}),
],
)
@pytest.mark.only_clingo("Uses specs requiring multiple gmake specs")
def test_select_lower_priority_package_from_repository_stack(
self, spec_str, expected_namespaces
):
"""Tests that a user can explicitly select a lower priority, fully qualified dependency
from cli.
"""
# 'builtin.mock" and "duplicates.test" share a 'gmake' package
additional_repo = os.path.join(spack.paths.repos_path, "duplicates.test")
with spack.repo.use_repositories(additional_repo, override=False):
s = Spec(spec_str).concretized()

for name, namespace in expected_namespaces.items():
assert s[name].concrete
assert s[name].namespace == namespace


@pytest.fixture()
def duplicates_test_repository():
@@ -2456,9 +2358,7 @@ def test_drop_moving_targets(v_str, v_opts, checksummed):
class TestConcreteSpecsByHash:
"""Tests the container of concrete specs"""

@pytest.mark.parametrize(
"input_specs", [["pkg-a"], ["pkg-a foobar=bar", "pkg-b"], ["pkg-a foobar=baz", "pkg-b"]]
)
@pytest.mark.parametrize("input_specs", [["a"], ["a foobar=bar", "b"], ["a foobar=baz", "b"]])
def test_adding_specs(self, input_specs, default_mock_concretization):
"""Tests that concrete specs in the container are equivalent, but stored as different
objects in memory.

@@ -16,8 +16,8 @@
version_error_messages = [
"Cannot satisfy 'fftw@:1.0' and 'fftw@1.1:",
" required because quantum-espresso depends on fftw@:1.0",
" required because quantum-espresso ^fftw@1.1: requested explicitly",
" required because quantum-espresso ^fftw@1.1: requested explicitly",
" required because quantum-espresso ^fftw@1.1: requested from CLI",
" required because quantum-espresso ^fftw@1.1: requested from CLI",
]

external_error_messages = [
@@ -30,15 +30,15 @@
" which was not satisfied"
),
" 'quantum-espresso+veritas' required",
" required because quantum-espresso+veritas requested explicitly",
" required because quantum-espresso+veritas requested from CLI",
]

variant_error_messages = [
"'fftw' required multiple values for single-valued variant 'mpi'",
" Requested '~mpi' and '+mpi'",
" required because quantum-espresso depends on fftw+mpi when +invino",
" required because quantum-espresso+invino ^fftw~mpi requested explicitly",
" required because quantum-espresso+invino ^fftw~mpi requested explicitly",
" required because quantum-espresso+invino ^fftw~mpi requested from CLI",
" required because quantum-espresso+invino ^fftw~mpi requested from CLI",
]

external_config = {

@@ -105,7 +105,7 @@ def test_preferred_variants_from_wildcard(self):

@pytest.mark.parametrize(
"compiler_str,spec_str",
[("gcc@=4.8.0", "mpileaks"), ("clang@=12.0.0", "mpileaks"), ("gcc@=4.8.0", "openmpi")],
[("gcc@=4.5.0", "mpileaks"), ("clang@=12.0.0", "mpileaks"), ("gcc@=4.5.0", "openmpi")],
)
def test_preferred_compilers(self, compiler_str, spec_str):
"""Test preferred compilers are applied correctly"""
@@ -504,13 +504,3 @@ def test_sticky_variant_accounts_for_packages_yaml(self):
with spack.config.override("packages:sticky-variant", {"variants": "+allow-gcc"}):
s = Spec("sticky-variant %gcc").concretized()
assert s.satisfies("%gcc") and s.satisfies("+allow-gcc")

@pytest.mark.regression("41134")
@pytest.mark.only_clingo("Not backporting the fix to the old concretizer")
def test_default_preference_variant_different_type_does_not_error(self):
"""Tests that a different type for an existing variant in the 'all:' section of
packages.yaml doesn't fail with an error.
"""
with spack.config.override("packages:all", {"variants": "+foo"}):
s = Spec("pkg-a").concretized()
assert s.satisfies("foo=bar")

@@ -872,8 +872,6 @@ def test_skip_requirement_when_default_requirement_condition_cannot_be_met(

def test_requires_directive(concretize_scope, mock_packages):
compilers_yaml = pathlib.Path(concretize_scope) / "compilers.yaml"

# NOTE: target is omitted here so that the test works on aarch64, as well.
compilers_yaml.write_text(
"""
compilers::
@@ -885,6 +883,7 @@ def test_requires_directive(concretize_scope, mock_packages):
f77: null
fc: null
operating_system: debian6
target: x86_64
modules: []
"""
)
@@ -897,69 +896,3 @@ def test_requires_directive(concretize_scope, mock_packages):
# This package can only be compiled with clang
with pytest.raises(spack.error.SpackError, match="can only be compiled with Clang"):
Spec("requires_clang").concretized()


@pytest.mark.regression("42084")
def test_requiring_package_on_multiple_virtuals(concretize_scope, mock_packages):
update_packages_config(
"""
packages:
all:
providers:
scalapack: [netlib-scalapack]
blas:
require: intel-parallel-studio
lapack:
require: intel-parallel-studio
scalapack:
require: intel-parallel-studio
"""
)
s = Spec("dla-future").concretized()

assert s["blas"].name == "intel-parallel-studio"
assert s["lapack"].name == "intel-parallel-studio"
assert s["scalapack"].name == "intel-parallel-studio"


@pytest.mark.parametrize(
"spec_str,expected,not_expected",
[
(
"forward-multi-value +cuda cuda_arch=10 ^dependency-mv~cuda",
["cuda_arch=10", "^dependency-mv~cuda"],
["cuda_arch=11", "^dependency-mv cuda_arch=10", "^dependency-mv cuda_arch=11"],
),
(
"forward-multi-value +cuda cuda_arch=10 ^dependency-mv+cuda",
["cuda_arch=10", "^dependency-mv cuda_arch=10"],
["cuda_arch=11", "^dependency-mv cuda_arch=11"],
),
(
"forward-multi-value +cuda cuda_arch=11 ^dependency-mv+cuda",
["cuda_arch=11", "^dependency-mv cuda_arch=11"],
["cuda_arch=10", "^dependency-mv cuda_arch=10"],
),
(
"forward-multi-value +cuda cuda_arch=10,11 ^dependency-mv+cuda",
["cuda_arch=10,11", "^dependency-mv cuda_arch=10,11"],
[],
),
],
)
def test_forward_multi_valued_variant_using_requires(
spec_str, expected, not_expected, config, mock_packages
):
"""Tests that a package can forward multivalue variants to dependencies, using
`requires` directives of the form:

for _val in ("shared", "static"):
requires(f"^some-virtual-mv libs={_val}", when=f"libs={_val} ^some-virtual-mv")
"""
s = Spec(spec_str).concretized()

for constraint in expected:
assert s.satisfies(constraint)

for constraint in not_expected:
assert not s.satisfies(constraint)

@@ -1176,13 +1176,13 @@ def test_license_dir_config(mutable_config, mock_packages):
expected_dir = spack.paths.default_license_dir
assert spack.config.get("config:license_dir") == expected_dir
assert spack.package_base.PackageBase.global_license_dir == expected_dir
assert spack.repo.PATH.get_pkg_class("pkg-a").global_license_dir == expected_dir
assert spack.repo.PATH.get_pkg_class("a").global_license_dir == expected_dir

rel_path = os.path.join(os.path.sep, "foo", "bar", "baz")
spack.config.set("config:license_dir", rel_path)
assert spack.config.get("config:license_dir") == rel_path
assert spack.package_base.PackageBase.global_license_dir == rel_path
assert spack.repo.PATH.get_pkg_class("pkg-a").global_license_dir == rel_path
assert spack.repo.PATH.get_pkg_class("a").global_license_dir == rel_path


@pytest.mark.regression("22547")
@@ -1239,11 +1239,11 @@ def test_user_config_path_is_default_when_env_var_is_empty(working_env):
assert os.path.expanduser("~%s.spack" % os.sep) == spack.paths._get_user_config_path()


def test_default_install_tree(monkeypatch, default_config):
def test_default_install_tree(monkeypatch):
s = spack.spec.Spec("nonexistent@x.y.z %none@a.b.c arch=foo-bar-baz")
monkeypatch.setattr(s, "dag_hash", lambda: "abc123")
_, _, projections = spack.store.parse_install_tree(spack.config.get("config"))
assert s.format(projections["all"]) == "foo-bar-baz/none-a.b.c/nonexistent-x.y.z-abc123"
projection = spack.config.get("config:install_tree:projections:all", scope="defaults")
assert s.format(projection) == "foo-bar-baz/none-a.b.c/nonexistent-x.y.z-abc123"


def test_local_config_can_be_disabled(working_env):

@@ -21,7 +21,6 @@
import py
import pytest

import archspec.cpu
import archspec.cpu.microarchitecture
import archspec.cpu.schema

@@ -597,7 +596,7 @@ def mutable_mock_repo(mock_repo_path, request):
def mock_custom_repository(tmpdir, mutable_mock_repo):
"""Create a custom repository with a single package "c" and return its path."""
builder = spack.repo.MockRepositoryBuilder(tmpdir.mkdir("myrepo"))
builder.add_package("pkg-c")
builder.add_package("c")
return builder.root


@@ -630,7 +629,7 @@ def platform_config():
spack.config.add_default_platform_scope(spack.platforms.real_host().name)


@pytest.fixture
@pytest.fixture(scope="session")
def default_config():
"""Isolates the default configuration from the user configs.

@@ -709,13 +708,14 @@ def configuration_dir(tmpdir_factory, linux_os):
t.write(content)

compilers_yaml = test_config.join("compilers.yaml")
content = "".join(compilers_yaml.read()).format(
linux_os=linux_os, target=str(archspec.cpu.host().family)
)
content = "".join(compilers_yaml.read()).format(linux_os)
t = tmpdir.join("site", "compilers.yaml")
t.write(content)
yield tmpdir

# Once done, cleanup the directory
shutil.rmtree(str(tmpdir))


def _create_mock_configuration_scopes(configuration_dir):
"""Create the configuration scopes used in `config` and `mutable_config`."""
@@ -1953,50 +1953,17 @@ def pytest_runtest_setup(item):

@pytest.fixture(scope="function")
def disable_parallel_buildcache_push(monkeypatch):
"""Disable process pools in tests."""
monkeypatch.setattr(spack.cmd.buildcache, "_make_pool", spack.cmd.buildcache.NoPool)
class MockPool:
def map(self, func, args):
return [func(a) for a in args]

def starmap(self, func, args):
return [func(*a) for a in args]

def create_test_repo(tmpdir, pkg_name_content_tuples):
repo_path = str(tmpdir)
repo_yaml = tmpdir.join("repo.yaml")
with open(str(repo_yaml), "w") as f:
f.write(
"""\
repo:
namespace: testcfgrequirements
"""
)
def __enter__(self):
return self

packages_dir = tmpdir.join("packages")
for pkg_name, pkg_str in pkg_name_content_tuples:
pkg_dir = packages_dir.ensure(pkg_name, dir=True)
pkg_file = pkg_dir.join("package.py")
with open(str(pkg_file), "w") as f:
f.write(pkg_str)
def __exit__(self, *args):
pass

return spack.repo.Repo(repo_path)
|
||||
|
||||
|
||||
@pytest.fixture()
|
||||
def compiler_factory():
|
||||
"""Factory for a compiler dict, taking a spec and an OS as arguments."""
|
||||
|
||||
def _factory(*, spec, operating_system):
|
||||
return {
|
||||
"compiler": {
|
||||
"spec": spec,
|
||||
"operating_system": operating_system,
|
||||
"paths": {"cc": "/path/to/cc", "cxx": "/path/to/cxx", "f77": None, "fc": None},
|
||||
"modules": [],
|
||||
"target": str(archspec.cpu.host().family),
|
||||
}
|
||||
}
|
||||
|
||||
return _factory
|
||||
|
||||
|
||||
@pytest.fixture()
|
||||
def host_architecture_str():
|
||||
"""Returns the broad architecture family (x86_64, aarch64, etc.)"""
|
||||
return str(archspec.cpu.host().family)
|
||||
monkeypatch.setattr(spack.cmd.buildcache, "_make_pool", MockPool)
|
||||
|
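The `disable_parallel_buildcache_push` hunk above replaces Spack's real process pool with a serial stand-in so buildcache pushes run in-process during tests. Reassembled from the interleaved lines of that hunk, the pattern looks roughly like this (a sketch: the `_make_pool` attribute name is taken from the diff, the fixture wiring is ordinary pytest):

```python
class MockPool:
    """Drop-in replacement for a multiprocessing pool that runs work serially."""

    def map(self, func, args):
        return [func(a) for a in args]

    def starmap(self, func, args):
        return [func(*a) for a in args]

    def __enter__(self):
        return self

    def __exit__(self, *args):
        pass


# Inside the pytest fixture, pool creation is routed to the serial stand-in:
# monkeypatch.setattr(spack.cmd.buildcache, "_make_pool", MockPool)
```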
@@ -1,41 +1,353 @@
compilers:
- compiler:
    spec: gcc@=4.8.0
    operating_system: {linux_os.name}{linux_os.version}
    spec: clang@3.3
    operating_system: {0.name}{0.version}
    paths:
      cc: /path/to/clang
      cxx: /path/to/clang++
      f77: None
      fc: None
    modules: 'None'
    target: x86_64
- compiler:
    spec: gcc@4.5.0
    operating_system: {0.name}{0.version}
    paths:
      cc: /path/to/gcc
      cxx: /path/to/g++
      f77: None
      fc: None
    modules: []
    target: {target}
    modules: 'None'
    target: x86_64
- compiler:
    spec: gcc@=4.8.0
    spec: gcc@4.5.0
    operating_system: redhat6
    paths:
      cc: /path/to/gcc
      cxx: /path/to/g++
      f77: None
      fc: None
    modules: []
    target: {target}
    modules: 'None'
    target: x86_64
- compiler:
    spec: clang@=12.0.0
    operating_system: {linux_os.name}{linux_os.version}
    spec: clang@3.3
    operating_system: CNL
    paths:
      cc: /path/to/clang
      cxx: /path/to/clang++
      f77: None
      fc: None
    modules: []
    target: {target}
    modules: 'None'
- compiler:
    spec: gcc@=10.2.1
    operating_system: {linux_os.name}{linux_os.version}
    spec: clang@3.3
    operating_system: SuSE11
    paths:
      cc: /path/to/clang
      cxx: /path/to/clang++
      f77: None
      fc: None
    modules: 'None'
    target: x86_64
- compiler:
    spec: clang@3.3
    operating_system: yosemite
    paths:
      cc: /path/to/clang
      cxx: /path/to/clang++
      f77: None
      fc: None
    modules: 'None'
    target: x86_64
- compiler:
    paths:
      cc: /path/to/gcc
      cxx: /path/to/g++
      f77: /path/to/gfortran
      fc: /path/to/gfortran
    operating_system: CNL
    spec: gcc@4.5.0
    modules: 'None'
- compiler:
    paths:
      cc: /path/to/gcc
      cxx: /path/to/g++
      f77: /path/to/gfortran
      fc: /path/to/gfortran
    operating_system: SuSE11
    spec: gcc@4.5.0
    modules: 'None'
    target: x86_64
- compiler:
    paths:
      cc: /path/to/gcc
      cxx: /path/to/g++
      f77: /path/to/gfortran
      fc: /path/to/gfortran
    operating_system: yosemite
    spec: gcc@4.5.0
    modules: 'None'
    target: x86_64
- compiler:
    paths:
      cc: /path/to/gcc
      cxx: /path/to/g++
      f77: /path/to/gfortran
      fc: /path/to/gfortran
    operating_system: elcapitan
    spec: gcc@4.5.0
    modules: 'None'
    target: x86_64
- compiler:
    spec: clang@3.3
    operating_system: elcapitan
    paths:
      cc: /path/to/clang
      cxx: /path/to/clang++
      f77: None
      fc: None
    modules: 'None'
    target: x86_64
- compiler:
    spec: gcc@4.7.2
    operating_system: redhat6
    paths:
      cc: /path/to/gcc472
      cxx: /path/to/g++472
      f77: /path/to/gfortran472
      fc: /path/to/gfortran472
    flags:
      cflags: -O0 -g
      cxxflags: -O0 -g
      fflags: -O0 -g
    modules: 'None'
    target: x86_64
- compiler:
    spec: gcc@4.4.0
    operating_system: redhat6
    paths:
      cc: /path/to/gcc440
      cxx: /path/to/g++440
      f77: /path/to/gfortran440
      fc: /path/to/gfortran440
    modules: 'None'
    target: x86_64
- compiler:
    spec: clang@3.5
    operating_system: redhat6
    paths:
      cc: /path/to/clang35
      cxx: /path/to/clang++35
      f77: None
      fc: None
    flags:
      cflags: -O3
      cxxflags: -O3
    modules: 'None'
    target: x86_64
- compiler:
    spec: clang@8.0.0
    operating_system: redhat7
    paths:
      cc: /path/to/clang-8
      cxx: /path/to/clang++-8
      f77: /path/to/gfortran-9
      fc: /path/to/gfortran-9
    flags:
      cflags: -O3
      cxxflags: -O3
    modules: 'None'
    target: x86_64
- compiler:
    spec: apple-clang@9.1.0
    operating_system: elcapitan
    paths:
      cc: /path/to/clang
      cxx: /path/to/clang++
      f77: None
      fc: None
    modules: 'None'
    target: x86_64
- compiler:
    spec: gcc@10foo
    operating_system: redhat6
    paths:
      cc: /path/to/gcc
      cxx: /path/to/g++
      f77: /path/to/gfortran
      fc: /path/to/gfortran
    modules: 'None'
    target: x86_64
- compiler:
    spec: gcc@4.4.0-special
    operating_system: redhat6
    paths:
      cc: /path/to/gcc
      cxx: /path/to/g++
      f77: /path/to/gfortran
      fc: /path/to/gfortran
    modules: 'None'
    target: x86_64
- compiler:
    spec: clang@12.0.0
    operating_system: {0.name}{0.version}
    paths:
      cc: /path/to/clang
      cxx: /path/to/clang++
      f77: None
      fc: None
    modules: 'None'
    target: aarch64
- compiler:
    spec: gcc@10.2.1
    operating_system: {0.name}{0.version}
    paths:
      cc: /path/to/gcc
      cxx: /path/to/g++
      f77: None
      fc: None
    modules: []
    target: {target}
    modules: 'None'
    target: aarch64
- compiler:
    spec: clang@12.0.0
    operating_system: redhat6
    paths:
      cc: /path/to/clang
      cxx: /path/to/clang++
      f77: None
      fc: None
    modules: 'None'
    target: aarch64
- compiler:
    spec: gcc@10.2.1
    operating_system: redhat6
    paths:
      cc: /path/to/gcc
      cxx: /path/to/g++
      f77: None
      fc: None
    modules: 'None'
    target: aarch64
- compiler:
    spec: gcc@10.1.0
    operating_system: redhat6
    paths:
      cc: /path/to/gcc
      cxx: /path/to/g++
      f77: None
      fc: None
    modules: 'None'
    target: aarch64
- compiler:
    spec: gcc@11.1.0
    operating_system: redhat6
    paths:
      cc: /path/to/gcc
      cxx: /path/to/g++
      f77: None
      fc: None
    flags:
      cflags: -O0 -g
      cxxflags: -O0 -g
      fflags: -O0 -g
    modules: 'None'
    target: aarch64
- compiler:
    spec: clang@12.2.0
    operating_system: redhat6
    paths:
      cc: /path/to/clang35
      cxx: /path/to/clang++35
      f77: None
      fc: None
    flags:
      cflags: -O3
      cxxflags: -O3
    modules: 'None'
    target: aarch64
- compiler:
    spec: gcc@10foo
    operating_system: redhat6
    paths:
      cc: /path/to/gcc
      cxx: /path/to/g++
      f77: /path/to/gfortran
      fc: /path/to/gfortran
    modules: 'None'
    target: aarch64
- compiler:
    spec: clang@12.0.0
    operating_system: {0.name}{0.version}
    paths:
      cc: /path/to/clang
      cxx: /path/to/clang++
      f77: None
      fc: None
    modules: 'None'
    target: x86_64
- compiler:
    spec: gcc@10.2.1
    operating_system: {0.name}{0.version}
    paths:
      cc: /path/to/gcc
      cxx: /path/to/g++
      f77: None
      fc: None
    modules: 'None'
    target: x86_64
- compiler:
    spec: clang@12.0.0
    operating_system: redhat6
    paths:
      cc: /path/to/clang
      cxx: /path/to/clang++
      f77: None
      fc: None
    modules: 'None'
    target: x86_64
- compiler:
    spec: gcc@10.2.1
    operating_system: redhat6
    paths:
      cc: /path/to/gcc
      cxx: /path/to/g++
      f77: None
      fc: None
    modules: 'None'
    target: x86_64
- compiler:
    spec: gcc@10.1.0
    operating_system: redhat6
    paths:
      cc: /path/to/gcc
      cxx: /path/to/g++
      f77: None
      fc: None
    modules: 'None'
    target: x86_64
- compiler:
    spec: gcc@11.1.0
    operating_system: redhat6
    paths:
      cc: /path/to/gcc
      cxx: /path/to/g++
      f77: None
      fc: None
    flags:
      cflags: -O0 -g
      cxxflags: -O0 -g
      fflags: -O0 -g
    modules: 'None'
    target: x86_64
- compiler:
    spec: clang@12.2.0
    operating_system: redhat6
    paths:
      cc: /path/to/clang35
      cxx: /path/to/clang++35
      f77: None
      fc: None
    flags:
      cflags: -O3
      cxxflags: -O3
    modules: 'None'
    target: x86_64
@@ -69,7 +69,7 @@ def test_spec_installed_upstream(

# a known installed spec should say that it's installed
with spack.repo.use_repositories(mock_custom_repository):
spec = spack.spec.Spec("pkg-c").concretized()
spec = spack.spec.Spec("c").concretized()
assert not spec.installed
assert not spec.installed_upstream

@@ -813,7 +813,7 @@ def test_query_virtual_spec(database):

def test_failed_spec_path_error(database):
"""Ensure spec not concrete check is covered."""
s = spack.spec.Spec("pkg-a")
s = spack.spec.Spec("a")
with pytest.raises(AssertionError, match="concrete spec required"):
spack.store.STORE.failure_tracker.mark(s)

@@ -828,7 +828,7 @@ def _is(self, spec):
# Pretend the spec has been failure locked
monkeypatch.setattr(spack.database.FailureTracker, "lock_taken", _is)

s = spack.spec.Spec("pkg-a").concretized()
s = spack.spec.Spec("a").concretized()
spack.store.STORE.failure_tracker.clear(s)
out = capfd.readouterr()[0]
assert "Retaining failure marking" in out
@@ -846,7 +846,7 @@ def _is(self, spec):
# Ensure raise OSError when try to remove the non-existent marking
monkeypatch.setattr(spack.database.FailureTracker, "persistent_mark", _is)

s = spack.spec.Spec("pkg-a").concretized()
s = default_mock_concretization("a")
spack.store.STORE.failure_tracker.clear(s, force=True)
out = capfd.readouterr()[1]
assert "Removing failure marking despite lock" in out
@@ -860,16 +860,15 @@ def test_mark_failed(default_mock_concretization, mutable_database, monkeypatch,
def _raise_exc(lock):
raise lk.LockTimeoutError("write", "/mock-lock", 1.234, 10)

# Ensure attempt to acquire write lock on the mark raises the exception
monkeypatch.setattr(lk.Lock, "acquire_write", _raise_exc)

with tmpdir.as_cwd():
s = spack.spec.Spec("pkg-a").concretized()

# Ensure attempt to acquire write lock on the mark raises the exception
monkeypatch.setattr(lk.Lock, "acquire_write", _raise_exc)

s = default_mock_concretization("a")
spack.store.STORE.failure_tracker.mark(s)

out = str(capsys.readouterr()[1])
assert "Unable to mark pkg-a as failed" in out
assert "Unable to mark a as failed" in out

spack.store.STORE.failure_tracker.clear_all()

@@ -878,7 +877,7 @@ def _raise_exc(lock):
def test_prefix_failed(default_mock_concretization, mutable_database, monkeypatch):
"""Add coverage to failed operation."""

s = spack.spec.Spec("pkg-a").concretized()
s = default_mock_concretization("a")

# Confirm the spec is not already marked as failed
assert not spack.store.STORE.failure_tracker.has_failed(s)
@@ -902,7 +901,7 @@ def test_prefix_write_lock_error(default_mock_concretization, mutable_database,
def _raise(db, spec):
raise lk.LockError("Mock lock error")

s = spack.spec.Spec("pkg-a").concretized()
s = default_mock_concretization("a")

# Ensure subsequent lock operations fail
monkeypatch.setattr(lk.Lock, "acquire_write", _raise)
@@ -30,7 +30,7 @@ def test_true_directives_exist(mock_packages):

assert cls.dependencies
assert spack.spec.Spec() in cls.dependencies["extendee"]
assert spack.spec.Spec() in cls.dependencies["pkg-b"]
assert spack.spec.Spec() in cls.dependencies["b"]

assert cls.resources
assert spack.spec.Spec() in cls.resources
@@ -43,7 +43,7 @@ def test_constraints_from_context(mock_packages):
pkg_cls = spack.repo.PATH.get_pkg_class("with-constraint-met")

assert pkg_cls.dependencies
assert spack.spec.Spec("@1.0") in pkg_cls.dependencies["pkg-b"]
assert spack.spec.Spec("@1.0") in pkg_cls.dependencies["b"]

assert pkg_cls.conflicts
assert (spack.spec.Spec("+foo@1.0"), None) in pkg_cls.conflicts["%gcc"]
@@ -54,7 +54,7 @@ def test_constraints_from_context_are_merged(mock_packages):
pkg_cls = spack.repo.PATH.get_pkg_class("with-constraint-met")

assert pkg_cls.dependencies
assert spack.spec.Spec("@0.14:15 ^pkg-b@3.8:4.0") in pkg_cls.dependencies["pkg-c"]
assert spack.spec.Spec("@0.14:15 ^b@3.8:4.0") in pkg_cls.dependencies["c"]


@pytest.mark.regression("27754")
@@ -68,7 +68,7 @@ def test_extends_spec(config, mock_packages):

@pytest.mark.regression("34368")
def test_error_on_anonymous_dependency(config, mock_packages):
pkg = spack.repo.PATH.get_pkg_class("pkg-a")
pkg = spack.repo.PATH.get_pkg_class("a")
with pytest.raises(spack.directives.DependencyError):
spack.directives._depends_on(pkg, "@4.5")
@@ -377,10 +377,10 @@ def test_can_add_specs_to_environment_without_specs_attribute(tmp_path, mock_pac
"""
)
env = ev.Environment(tmp_path)
env.add("pkg-a")
env.add("a")

assert len(env.user_specs) == 1
assert env.manifest.pristine_yaml_content["spack"]["specs"] == ["pkg-a"]
assert env.manifest.pristine_yaml_content["spack"]["specs"] == ["a"]


@pytest.mark.parametrize(
@@ -578,7 +578,7 @@ def test_conflicts_with_packages_that_are_not_dependencies(
spack:
specs:
- {spec_str}
- pkg-b
- b
concretizer:
unify: true
"""
@@ -695,7 +695,7 @@ def test_removing_spec_from_manifest_with_exact_duplicates(

@pytest.mark.regression("35298")
@pytest.mark.only_clingo("Propagation not supported in the original concretizer")
def test_variant_propagation_with_unify_false(tmp_path, mock_packages, config):
def test_variant_propagation_with_unify_false(tmp_path, mock_packages):
"""Spack distributes concretizations to different processes, when unify:false is selected and
the number of roots is 2 or more. When that happens, the specs to be concretized need to be
properly reconstructed on the worker process, if variant propagation was requested.
@@ -706,7 +706,7 @@ def test_variant_propagation_with_unify_false(tmp_path, mock_packages, config):
spack:
specs:
- parent-foo ++foo
- pkg-c
- c
concretizer:
unify: false
"""
@@ -778,32 +778,3 @@ def test_env_with_include_def_missing(mutable_mock_env_path, mock_packages):
with e:
with pytest.raises(UndefinedReferenceError, match=r"which does not appear"):
e.concretize()


@pytest.mark.regression("41292")
def test_deconcretize_then_concretize_does_not_error(mutable_mock_env_path, mock_packages):
"""Tests that, after having deconcretized a spec, we can reconcretize an environment which
has 2 or more user specs mapping to the same concrete spec.
"""
mutable_mock_env_path.mkdir()
spack_yaml = mutable_mock_env_path / ev.manifest_name
spack_yaml.write_text(
"""spack:
specs:
# These two specs concretize to the same hash
- pkg-c
- pkg-c@1.0
# Spec used to trigger the bug
- pkg-a
concretizer:
unify: true
"""
)
e = ev.Environment(mutable_mock_env_path)
with e:
e.concretize()
e.deconcretize(spack.spec.Spec("pkg-a"), concrete=False)
e.concretize()
assert len(e.concrete_roots()) == 3
all_root_hashes = {x.dag_hash() for x in e.concrete_roots()}
assert len(all_root_hashes) == 2
@@ -12,12 +12,10 @@
import llnl.util.filesystem as fs

import spack.error
import spack.mirror
import spack.patch
import spack.repo
import spack.store
import spack.util.spack_json as sjson
from spack import binary_distribution
from spack.package_base import (
InstallError,
PackageBase,
@@ -120,28 +118,61 @@ def remove_prefix(self):
self.wrapped_rm_prefix()


class MockStage:
def __init__(self, wrapped_stage):
self.wrapped_stage = wrapped_stage
self.test_destroyed = False

def __enter__(self):
self.create()
return self

def __exit__(self, exc_type, exc_val, exc_tb):
if exc_type is None:
self.destroy()

def destroy(self):
self.test_destroyed = True
self.wrapped_stage.destroy()

def create(self):
self.wrapped_stage.create()

def __getattr__(self, attr):
if attr == "wrapped_stage":
# This attribute may not be defined at some point during unpickling
raise AttributeError()
return getattr(self.wrapped_stage, attr)


def test_partial_install_delete_prefix_and_stage(install_mockery, mock_fetch, working_env):
s = Spec("canfail").concretized()

instance_rm_prefix = s.package.remove_prefix

s.package.remove_prefix = mock_remove_prefix
with pytest.raises(MockInstallError):
s.package.do_install()
assert os.path.isdir(s.package.prefix)
rm_prefix_checker = RemovePrefixChecker(instance_rm_prefix)
s.package.remove_prefix = rm_prefix_checker.remove_prefix
try:
s.package.remove_prefix = mock_remove_prefix
with pytest.raises(MockInstallError):
s.package.do_install()
assert os.path.isdir(s.package.prefix)
rm_prefix_checker = RemovePrefixChecker(instance_rm_prefix)
s.package.remove_prefix = rm_prefix_checker.remove_prefix

# must clear failure markings for the package before re-installing it
spack.store.STORE.failure_tracker.clear(s, True)
# must clear failure markings for the package before re-installing it
spack.store.STORE.failure_tracker.clear(s, True)

s.package.set_install_succeed()
s.package.do_install(restage=True)
assert rm_prefix_checker.removed
assert s.package.spec.installed
s.package.set_install_succeed()
s.package.stage = MockStage(s.package.stage)

s.package.do_install(restage=True)
assert rm_prefix_checker.removed
assert s.package.stage.test_destroyed
assert s.package.spec.installed

finally:
s.package.remove_prefix = instance_rm_prefix


@pytest.mark.not_on_windows("Fails spuriously on Windows")
@pytest.mark.disable_clean_stage_check
def test_failing_overwrite_install_should_keep_previous_installation(
mock_fetch, install_mockery, working_env
@@ -326,8 +357,10 @@ def test_partial_install_keep_prefix(install_mockery, mock_fetch, monkeypatch, w
spack.store.STORE.failure_tracker.clear(s, True)

s.package.set_install_succeed()
s.package.stage = MockStage(s.package.stage)
s.package.do_install(keep_prefix=True)
assert s.package.spec.installed
assert not s.package.stage.test_destroyed


def test_second_install_no_overwrite_first(install_mockery, mock_fetch, monkeypatch):
@@ -611,48 +644,3 @@ def test_empty_install_sanity_check_prefix(
spec = Spec("failing-empty-install").concretized()
with pytest.raises(spack.build_environment.ChildError, match="Nothing was installed"):
spec.package.do_install()


def test_install_from_binary_with_missing_patch_succeeds(
temporary_store: spack.store.Store, mutable_config, tmp_path, mock_packages
):
"""If a patch is missing in the local package repository, but was present when building and
pushing the package to a binary cache, installation from that binary cache shouldn't error out
because of the missing patch."""
# Create a spec s with non-existing patches
s = Spec("trivial-install-test-package").concretized()
patches = ["a" * 64]
s_dict = s.to_dict()
s_dict["spec"]["nodes"][0]["patches"] = patches
s_dict["spec"]["nodes"][0]["parameters"]["patches"] = patches
s = Spec.from_dict(s_dict)

# Create an install dir for it
os.makedirs(os.path.join(s.prefix, ".spack"))
with open(os.path.join(s.prefix, ".spack", "spec.json"), "w") as f:
s.to_json(f)

# And register it in the database
temporary_store.db.add(s, directory_layout=temporary_store.layout, explicit=True)

# Push it to a binary cache
build_cache = tmp_path / "my_build_cache"
binary_distribution.push_or_raise(
s,
build_cache.as_uri(),
binary_distribution.PushOptions(unsigned=True, regenerate_index=True),
)

# Now re-install it.
s.package.do_uninstall()
assert not temporary_store.db.query_local_by_spec_hash(s.dag_hash())

# Source install: fails, we don't have the patch.
with pytest.raises(spack.error.SpecError, match="Couldn't find patch for package"):
s.package.do_install()

# Binary install: succeeds, we don't need the patch.
spack.mirror.add(spack.mirror.Mirror.from_local_path(str(build_cache)))
s.package.do_install(package_cache_only=True, dependencies_cache_only=True, unsigned=True)

assert temporary_store.db.query_local_by_spec_hash(s.dag_hash())
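The `MockStage` class in the hunk above is a small delegation pattern: it records whether `destroy()` was called while forwarding every other attribute to the wrapped stage. A minimal sketch of that pattern in isolation (the `wrapped_stage` guard mirrors the unpickling comment in the diff; the wrapped object's real interface is assumed):

```python
class MockStage:
    """Wraps a stage object and records whether destroy() ran."""

    def __init__(self, wrapped_stage):
        self.wrapped_stage = wrapped_stage
        self.test_destroyed = False

    def destroy(self):
        self.test_destroyed = True
        self.wrapped_stage.destroy()

    def __getattr__(self, attr):
        # __getattr__ is only called for attributes not found normally;
        # guard against infinite recursion while "wrapped_stage" is still
        # unset (e.g. part-way through unpickling).
        if attr == "wrapped_stage":
            raise AttributeError()
        return getattr(self.wrapped_stage, attr)
```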
@@ -11,8 +11,6 @@
import py
import pytest

import archspec.cpu

import llnl.util.filesystem as fs
import llnl.util.lock as ulk
import llnl.util.tty as tty
@@ -132,7 +130,7 @@ def test_hms(sec, result):

def test_get_dependent_ids(install_mockery, mock_packages):
# Concretize the parent package, which handle dependency too
spec = spack.spec.Spec("pkg-a")
spec = spack.spec.Spec("a")
spec.concretize()
assert spec.concrete

@@ -167,19 +165,23 @@ def test_install_msg(monkeypatch):
assert inst.install_msg(name, pid, None) == expected


def test_install_from_cache_errors(install_mockery):
"""Test to ensure cover install from cache errors."""
def test_install_from_cache_errors(install_mockery, capsys):
"""Test to ensure cover _install_from_cache errors."""
spec = spack.spec.Spec("trivial-install-test-package")
spec.concretize()
assert spec.concrete

# Check with cache-only
with pytest.raises(inst.InstallError, match="No binary found when cache-only was specified"):
spec.package.do_install(package_cache_only=True, dependencies_cache_only=True)
with pytest.raises(SystemExit):
inst._install_from_cache(spec.package, True, True, False)

captured = str(capsys.readouterr())
assert "No binary" in captured
assert "found when cache-only specified" in captured
assert not spec.package.installed_from_binary_cache

# Check when don't expect to install only from binary cache
assert not inst._install_from_cache(spec.package, explicit=True, unsigned=False)
assert not inst._install_from_cache(spec.package, False, True, False)
assert not spec.package.installed_from_binary_cache


@@ -190,7 +192,7 @@ def test_install_from_cache_ok(install_mockery, monkeypatch):
monkeypatch.setattr(inst, "_try_install_from_binary_cache", _true)
monkeypatch.setattr(spack.hooks, "post_install", _noop)

assert inst._install_from_cache(spec.package, explicit=True, unsigned=False)
assert inst._install_from_cache(spec.package, True, True, False)


def test_process_external_package_module(install_mockery, monkeypatch, capfd):
@@ -223,11 +225,11 @@ def _spec(spec, unsigned=False, mirrors_for_spec=None):
# Skip database updates
monkeypatch.setattr(spack.database.Database, "add", _noop)

spec = spack.spec.Spec("pkg-a").concretized()
spec = spack.spec.Spec("a").concretized()
assert inst._process_binary_cache_tarball(spec.package, explicit=False, unsigned=False)

out = capfd.readouterr()[0]
assert "Extracting pkg-a" in out
assert "Extracting a" in out
assert "from binary cache" in out


@@ -278,7 +280,7 @@ def test_installer_prune_built_build_deps(install_mockery, monkeypatch, tmpdir):

@property
def _mock_installed(self):
return self.name == "pkg-c"
return self.name in ["c"]

# Mock the installed property to say that (b) is installed
monkeypatch.setattr(spack.spec.Spec, "installed", _mock_installed)
@@ -286,25 +288,24 @@ def _mock_installed(self):
# Create mock repository with packages (a), (b), (c), (d), and (e)
builder = spack.repo.MockRepositoryBuilder(tmpdir.mkdir("mock-repo"))

builder.add_package("pkg-a", dependencies=[("pkg-b", "build", None), ("pkg-c", "build", None)])
builder.add_package("pkg-b", dependencies=[("pkg-d", "build", None)])
builder.add_package("a", dependencies=[("b", "build", None), ("c", "build", None)])
builder.add_package("b", dependencies=[("d", "build", None)])
builder.add_package(
"pkg-c",
dependencies=[("pkg-d", "build", None), ("pkg-e", "all", None), ("pkg-f", "build", None)],
"c", dependencies=[("d", "build", None), ("e", "all", None), ("f", "build", None)]
)
builder.add_package("pkg-d")
builder.add_package("pkg-e")
builder.add_package("pkg-f")
builder.add_package("d")
builder.add_package("e")
builder.add_package("f")

with spack.repo.use_repositories(builder.root):
const_arg = installer_args(["pkg-a"], {})
const_arg = installer_args(["a"], {})
installer = create_installer(const_arg)

installer._init_queue()

# Assert that (c) is not in the build_pq
result = {task.pkg_id[:5] for _, task in installer.build_pq}
expected = {"pkg-a", "pkg-b", "pkg-c", "pkg-d", "pkg-e"}
result = set([task.pkg_id[0] for _, task in installer.build_pq])
expected = set(["a", "b", "c", "d", "e"])
assert result == expected


@@ -419,7 +420,8 @@ def test_ensure_locked_have(install_mockery, tmpdir, capsys):

@pytest.mark.parametrize("lock_type,reads,writes", [("read", 1, 0), ("write", 0, 1)])
def test_ensure_locked_new_lock(install_mockery, tmpdir, lock_type, reads, writes):
const_arg = installer_args(["pkg-a"], {})
pkg_id = "a"
const_arg = installer_args([pkg_id], {})
installer = create_installer(const_arg)
spec = installer.build_requests[0].pkg.spec
with tmpdir.as_cwd():
@@ -438,7 +440,8 @@ def _pl(db, spec, timeout):
lock.default_timeout = 1e-9 if timeout is None else None
return lock

const_arg = installer_args(["pkg-a"], {})
pkg_id = "a"
const_arg = installer_args([pkg_id], {})
installer = create_installer(const_arg)
spec = installer.build_requests[0].pkg.spec

@@ -495,7 +498,7 @@ def test_packages_needed_to_bootstrap_compiler_packages(install_mockery, monkeyp
spec.concretize()

def _conc_spec(compiler):
return spack.spec.Spec("pkg-a").concretized()
return spack.spec.Spec("a").concretized()

# Ensure we can get past functions that are precluding obtaining
# packages.
@@ -529,10 +532,6 @@ def fake_package_list(compiler, architecture, pkgs):
assert installer.build_pq[0][1].compiler


@pytest.mark.skipif(
str(archspec.cpu.host().family) != "x86_64",
reason="OneAPI compiler is not supported on other architectures",
)
def test_bootstrapping_compilers_with_different_names_from_spec(
install_mockery, mutable_config, mock_fetch, archspec_host_is_spack_test_host
):
@@ -603,7 +602,7 @@ def test_clear_failures_success(tmpdir):
"""Test the clear_failures happy path."""
failures = spack.database.FailureTracker(str(tmpdir), default_timeout=0.1)

spec = spack.spec.Spec("pkg-a")
spec = spack.spec.Spec("a")
spec._mark_concrete()

# Set up a test prefix failure lock
@@ -629,7 +628,7 @@ def test_clear_failures_success(tmpdir):
def test_clear_failures_errs(tmpdir, capsys):
"""Test the clear_failures exception paths."""
failures = spack.database.FailureTracker(str(tmpdir), default_timeout=0.1)
spec = spack.spec.Spec("pkg-a")
spec = spack.spec.Spec("a")
spec._mark_concrete()
failures.mark(spec)

@@ -691,11 +690,11 @@ def test_check_deps_status_install_failure(install_mockery):
"""Tests that checking the dependency status on a request to install
'a' fails, if we mark the dependency as failed.
"""
s = spack.spec.Spec("pkg-a").concretized()
s = spack.spec.Spec("a").concretized()
for dep in s.traverse(root=False):
spack.store.STORE.failure_tracker.mark(dep)

const_arg = installer_args(["pkg-a"], {})
const_arg = installer_args(["a"], {})
installer = create_installer(const_arg)
request = installer.build_requests[0]

@@ -704,7 +703,7 @@ def test_check_deps_status_install_failure(install_mockery):


def test_check_deps_status_write_locked(install_mockery, monkeypatch):
const_arg = installer_args(["pkg-a"], {})
const_arg = installer_args(["a"], {})
installer = create_installer(const_arg)
request = installer.build_requests[0]

@@ -716,7 +715,7 @@ def test_check_deps_status_write_locked(install_mockery, monkeypatch):


def test_check_deps_status_external(install_mockery, monkeypatch):
const_arg = installer_args(["pkg-a"], {})
const_arg = installer_args(["a"], {})
installer = create_installer(const_arg)
request = installer.build_requests[0]

@@ -729,7 +728,7 @@ def test_check_deps_status_external(install_mockery, monkeypatch):


def test_check_deps_status_upstream(install_mockery, monkeypatch):
const_arg = installer_args(["pkg-a"], {})
const_arg = installer_args(["a"], {})
installer = create_installer(const_arg)
request = installer.build_requests[0]

@@ -806,7 +805,7 @@ def test_install_task_add_compiler(install_mockery, monkeypatch, capfd):
def _add(_compilers):
tty.msg(config_msg)

const_arg = installer_args(["pkg-a"], {})
const_arg = installer_args(["a"], {})
installer = create_installer(const_arg)
task = create_build_task(installer.build_requests[0].pkg)
task.compiler = True
@@ -844,7 +843,7 @@ def test_release_lock_write_n_exception(install_mockery, tmpdir, capsys):
@pytest.mark.parametrize("installed", [True, False])
def test_push_task_skip_processed(install_mockery, installed):
"""Test to ensure skip re-queueing a processed package."""
const_arg = installer_args(["pkg-a"], {})
const_arg = installer_args(["a"], {})
installer = create_installer(const_arg)
assert len(list(installer.build_tasks)) == 0

@@ -862,7 +861,7 @@ def test_push_task_skip_processed(install_mockery, installed):

def test_requeue_task(install_mockery, capfd):
"""Test to ensure cover _requeue_task."""
const_arg = installer_args(["pkg-a"], {})
const_arg = installer_args(["a"], {})
installer = create_installer(const_arg)
task = create_build_task(installer.build_requests[0].pkg)

@@ -880,7 +879,7 @@ def test_requeue_task(install_mockery, capfd):
assert qtask.attempts == task.attempts + 1

out = capfd.readouterr()[1]
assert "Installing pkg-a" in out
assert "Installing a" in out
assert " in progress by another process" in out


@@ -893,17 +892,17 @@ def _mktask(pkg):
def _rmtask(installer, pkg_id):
raise RuntimeError("Raise an exception to test except path")

const_arg = installer_args(["pkg-a"], {})
const_arg = installer_args(["a"], {})
installer = create_installer(const_arg)
spec = installer.build_requests[0].pkg.spec

# Cover task removal happy path
installer.build_tasks["pkg-a"] = _mktask(spec.package)
installer.build_tasks["a"] = _mktask(spec.package)
installer._cleanup_all_tasks()
assert len(installer.build_tasks) == 0

# Cover task removal exception path
installer.build_tasks["pkg-a"] = _mktask(spec.package)
installer.build_tasks["a"] = _mktask(spec.package)
monkeypatch.setattr(inst.PackageInstaller, "_remove_task", _rmtask)
installer._cleanup_all_tasks()
assert len(installer.build_tasks) == 1
@@ -997,7 +996,7 @@ def test_install_uninstalled_deps(install_mockery, monkeypatch, capsys):

def test_install_failed(install_mockery, monkeypatch, capsys):
"""Test install with failed install."""
const_arg = installer_args(["pkg-b"], {})
const_arg = installer_args(["b"], {})
installer = create_installer(const_arg)

# Make sure the package is identified as failed
@@ -1013,7 +1012,7 @@ def test_install_failed(install_mockery, monkeypatch, capsys):

def test_install_failed_not_fast(install_mockery, monkeypatch, capsys):
"""Test install with failed install."""
const_arg = installer_args(["pkg-a"], {"fail_fast": False})
const_arg = installer_args(["a"], {"fail_fast": False})
installer = create_installer(const_arg)

# Make sure the package is identified as failed
@@ -1024,12 +1023,12 @@ def test_install_failed_not_fast(install_mockery, monkeypatch, capsys):

out = str(capsys.readouterr())
assert "failed to install" in out
assert "Skipping build of pkg-a" in out
assert "Skipping build of a" in out


def test_install_fail_on_interrupt(install_mockery, monkeypatch):
"""Test ctrl-c interrupted install."""
spec_name = "pkg-a"
spec_name = "a"
err_msg = "mock keyboard interrupt for {0}".format(spec_name)

def _interrupt(installer, task, install_status, **kwargs):
@@ -1047,13 +1046,13 @@ def _interrupt(installer, task, install_status, **kwargs):
with pytest.raises(KeyboardInterrupt, match=err_msg):
installer.install()

assert "pkg-b" in installer.installed # ensure dependency of pkg-a is 'installed'
assert "b" in installer.installed # ensure dependency of a is 'installed'
assert spec_name not in installer.installed


def test_install_fail_single(install_mockery, monkeypatch):
"""Test expected results for failure of single package."""
spec_name = "pkg-a"
spec_name = "a"
err_msg = "mock internal package build error for {0}".format(spec_name)

class MyBuildException(Exception):
@@ -1074,13 +1073,13 @@ def _install(installer, task, install_status, **kwargs):
with pytest.raises(MyBuildException, match=err_msg):
installer.install()

assert "pkg-b" in installer.installed # ensure dependency of a is 'installed'
assert "b" in installer.installed # ensure dependency of a is 'installed'
assert spec_name not in installer.installed


def test_install_fail_multi(install_mockery, monkeypatch):
"""Test expected results for failure of multiple packages."""
spec_name = "pkg-c"
spec_name = "c"
err_msg = "mock internal package build error"

class MyBuildException(Exception):
@@ -1092,7 +1091,7 @@ def _install(installer, task, install_status, **kwargs):
else:
installer.installed.add(task.pkg.name)

const_arg = installer_args([spec_name, "pkg-a"], {})
const_arg = installer_args([spec_name, "a"], {})
installer = create_installer(const_arg)

# Raise a KeyboardInterrupt error to trigger early termination
@@ -1101,14 +1100,14 @@ def _install(installer, task, install_status, **kwargs):
with pytest.raises(inst.InstallError, match="Installation request failed"):
installer.install()

assert "pkg-a" in installer.installed # ensure the the second spec installed
assert "a" in installer.installed # ensure the the second spec installed
assert spec_name not in installer.installed


def test_install_fail_fast_on_detect(install_mockery, monkeypatch, capsys):
"""Test fail_fast install when an install failure is detected."""
const_arg = installer_args(["pkg-b"], {"fail_fast": False})
const_arg.extend(installer_args(["pkg-c"], {"fail_fast": True}))
const_arg = installer_args(["b"], {"fail_fast": False})
const_arg.extend(installer_args(["c"], {"fail_fast": True}))
installer = create_installer(const_arg)
pkg_ids = [inst.package_id(spec.package) for spec, _ in const_arg]

@@ -1138,7 +1137,7 @@ def _test_install_fail_fast_on_except_patch(installer, **kwargs):
@pytest.mark.disable_clean_stage_check
def test_install_fail_fast_on_except(install_mockery, monkeypatch, capsys):
"""Test fail_fast install when an install failure results from an error."""
const_arg = installer_args(["pkg-a"], {"fail_fast": True})
const_arg = installer_args(["a"], {"fail_fast": True})
installer = create_installer(const_arg)

# Raise a non-KeyboardInterrupt exception to trigger fast failure.
@@ -1153,7 +1152,7 @@ def test_install_fail_fast_on_except(install_mockery, monkeypatch, capsys):
installer.install()

out = str(capsys.readouterr())
assert "Skipping build of pkg-a" in out
assert "Skipping build of a" in out


def test_install_lock_failures(install_mockery, monkeypatch, capfd):
@@ -1162,7 +1161,7 @@ def test_install_lock_failures(install_mockery, monkeypatch, capfd):
def _requeued(installer, task, install_status):
tty.msg("requeued {0}".format(task.pkg.spec.name))

const_arg = installer_args(["pkg-b"], {})
const_arg = installer_args(["b"], {})
installer = create_installer(const_arg)

# Ensure never acquire a lock
@@ -1182,7 +1181,7 @@ def _requeued(installer, task, install_status):

def test_install_lock_installed_requeue(install_mockery, monkeypatch, capfd):
"""Cover basic install handling for installed package."""
const_arg = installer_args(["pkg-b"], {})
const_arg = installer_args(["b"], {})
b, _ = const_arg[0]
installer = create_installer(const_arg)
b_pkg_id = inst.package_id(b.package)
@@ -1238,7 +1237,7 @@ def _requeued(installer, task, install_status):
# Ensure don't continually requeue the task
monkeypatch.setattr(inst.PackageInstaller, "_requeue_task", _requeued)

const_arg = installer_args(["pkg-b"], {})
const_arg = installer_args(["b"], {})
installer = create_installer(const_arg)

with pytest.raises(inst.InstallError, match="request failed"):
@@ -1254,7 +1253,7 @@ def _requeued(installer, task, install_status):

def test_install_skip_patch(install_mockery, mock_fetch):
"""Test the path skip_patch install path."""
spec_name = "pkg-b"
spec_name = "b"
const_arg = installer_args([spec_name], {"fake": False, "skip_patch": True})
installer = create_installer(const_arg)

@@ -1281,7 +1280,7 @@ def test_overwrite_install_backup_success(temporary_store, config, mock_packages
of the original prefix, and leave the original spec marked installed.
"""
# Get a build task. TODO: refactor this to avoid calling internal methods
const_arg = installer_args(["pkg-b"], {})
const_arg = installer_args(["b"])
installer = create_installer(const_arg)
installer._init_queue()
task = installer._pop_task()
@@ -1342,7 +1341,7 @@ def remove(self, spec):
self.called = True

# Get a build task. TODO: refactor this to avoid calling internal methods
const_arg = installer_args(["pkg-b"], {})
const_arg = installer_args(["b"])
installer = create_installer(const_arg)
installer._init_queue()
task = installer._pop_task()
@@ -1371,8 +1370,8 @@ def test_term_status_line():
# accept that. `with log_output(buf)` doesn't really work because it trims output
# and we actually want to test for escape sequences etc.
x = inst.TermStatusLine(enabled=True)
x.add("pkg-a")
x.add("pkg-b")
x.add("a")
x.add("b")
x.clear()
@@ -9,7 +9,10 @@
This just tests whether the right args are getting passed to make.
"""
import os
import shutil
import sys
import tempfile
import unittest

import pytest

@@ -17,104 +20,110 @@
from spack.util.environment import path_put_first

pytestmark = pytest.mark.skipif(
sys.platform == "win32", reason="MakeExecutable not supported on Windows"
sys.platform == "win32",
reason="MakeExecutable \
not supported on Windows",
)


@pytest.fixture(autouse=True)
def make_executable(tmp_path, working_env):
make_exe = tmp_path / "make"
with open(make_exe, "w") as f:
f.write("#!/bin/sh\n")
f.write('echo "$@"')
os.chmod(make_exe, 0o700)
class MakeExecutableTest(unittest.TestCase):
def setUp(self):
self.tmpdir = tempfile.mkdtemp()

path_put_first("PATH", [tmp_path])
make_exe = os.path.join(self.tmpdir, "make")
with open(make_exe, "w") as f:
f.write("#!/bin/sh\n")
f.write('echo "$@"')
os.chmod(make_exe, 0o700)

path_put_first("PATH", [self.tmpdir])

def test_make_normal():
make = MakeExecutable("make", 8)
assert make(output=str).strip() == "-j8"
assert make("install", output=str).strip() == "-j8 install"
def tearDown(self):
shutil.rmtree(self.tmpdir)

def test_make_normal(self):
make = MakeExecutable("make", 8)
self.assertEqual(make(output=str).strip(), "-j8")
self.assertEqual(make("install", output=str).strip(), "-j8 install")

def test_make_explicit():
make = MakeExecutable("make", 8)
assert make(parallel=True, output=str).strip() == "-j8"
assert make("install", parallel=True, output=str).strip() == "-j8 install"
def test_make_explicit(self):
make = MakeExecutable("make", 8)
self.assertEqual(make(parallel=True, output=str).strip(), "-j8")
self.assertEqual(make("install", parallel=True, output=str).strip(), "-j8 install")

def test_make_one_job(self):
make = MakeExecutable("make", 1)
self.assertEqual(make(output=str).strip(), "-j1")
self.assertEqual(make("install", output=str).strip(), "-j1 install")

def test_make_one_job():
make = MakeExecutable("make", 1)
assert make(output=str).strip() == "-j1"
assert make("install", output=str).strip() == "-j1 install"
def test_make_parallel_false(self):
make = MakeExecutable("make", 8)
self.assertEqual(make(parallel=False, output=str).strip(), "-j1")
self.assertEqual(make("install", parallel=False, output=str).strip(), "-j1 install")

def test_make_parallel_disabled(self):
make = MakeExecutable("make", 8)

def test_make_parallel_false():
make = MakeExecutable("make", 8)
assert make(parallel=False, output=str).strip() == "-j1"
assert make("install", parallel=False, output=str).strip() == "-j1 install"
os.environ["SPACK_NO_PARALLEL_MAKE"] = "true"
self.assertEqual(make(output=str).strip(), "-j1")
self.assertEqual(make("install", output=str).strip(), "-j1 install")

os.environ["SPACK_NO_PARALLEL_MAKE"] = "1"
self.assertEqual(make(output=str).strip(), "-j1")
self.assertEqual(make("install", output=str).strip(), "-j1 install")

def test_make_parallel_disabled(monkeypatch):
make = MakeExecutable("make", 8)
# These don't disable (false and random string)
os.environ["SPACK_NO_PARALLEL_MAKE"] = "false"
self.assertEqual(make(output=str).strip(), "-j8")
self.assertEqual(make("install", output=str).strip(), "-j8 install")

monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "true")
assert make(output=str).strip() == "-j1"
assert make("install", output=str).strip() == "-j1 install"
os.environ["SPACK_NO_PARALLEL_MAKE"] = "foobar"
self.assertEqual(make(output=str).strip(), "-j8")
self.assertEqual(make("install", output=str).strip(), "-j8 install")

monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "1")
assert make(output=str).strip() == "-j1"
assert make("install", output=str).strip() == "-j1 install"
del os.environ["SPACK_NO_PARALLEL_MAKE"]

# These don't disable (false and random string)
monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "false")
assert make(output=str).strip() == "-j8"
assert make("install", output=str).strip() == "-j8 install"
def test_make_parallel_precedence(self):
make = MakeExecutable("make", 8)

monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "foobar")
assert make(output=str).strip() == "-j8"
assert make("install", output=str).strip() == "-j8 install"
# These should work
os.environ["SPACK_NO_PARALLEL_MAKE"] = "true"
self.assertEqual(make(parallel=True, output=str).strip(), "-j1")
self.assertEqual(make("install", parallel=True, output=str).strip(), "-j1 install")

os.environ["SPACK_NO_PARALLEL_MAKE"] = "1"
self.assertEqual(make(parallel=True, output=str).strip(), "-j1")
self.assertEqual(make("install", parallel=True, output=str).strip(), "-j1 install")

def test_make_parallel_precedence(monkeypatch):
make = MakeExecutable("make", 8)
# These don't disable (false and random string)
os.environ["SPACK_NO_PARALLEL_MAKE"] = "false"
self.assertEqual(make(parallel=True, output=str).strip(), "-j8")
self.assertEqual(make("install", parallel=True, output=str).strip(), "-j8 install")

# These should work
monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "true")
assert make(parallel=True, output=str).strip() == "-j1"
assert make("install", parallel=True, output=str).strip() == "-j1 install"
os.environ["SPACK_NO_PARALLEL_MAKE"] = "foobar"
self.assertEqual(make(parallel=True, output=str).strip(), "-j8")
self.assertEqual(make("install", parallel=True, output=str).strip(), "-j8 install")

monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "1")
assert make(parallel=True, output=str).strip() == "-j1"
assert make("install", parallel=True, output=str).strip() == "-j1 install"
del os.environ["SPACK_NO_PARALLEL_MAKE"]

# These don't disable (false and random string)
monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "false")
assert make(parallel=True, output=str).strip() == "-j8"
assert make("install", parallel=True, output=str).strip() == "-j8 install"
def test_make_jobs_env(self):
make = MakeExecutable("make", 8)
dump_env = {}
self.assertEqual(
make(output=str, jobs_env="MAKE_PARALLELISM", _dump_env=dump_env).strip(), "-j8"
)
self.assertEqual(dump_env["MAKE_PARALLELISM"], "8")

monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "foobar")
assert make(parallel=True, output=str).strip() == "-j8"
assert make("install", parallel=True, output=str).strip() == "-j8 install"
def test_make_jobserver(self):
make = MakeExecutable("make", 8)
os.environ["MAKEFLAGS"] = "--jobserver-auth=X,Y"
self.assertEqual(make(output=str).strip(), "")
self.assertEqual(make(parallel=False, output=str).strip(), "-j1")
del os.environ["MAKEFLAGS"]


def test_make_jobs_env():
make = MakeExecutable("make", 8)
dump_env = {}
assert make(output=str, jobs_env="MAKE_PARALLELISM", _dump_env=dump_env).strip() == "-j8"
assert dump_env["MAKE_PARALLELISM"] == "8"


def test_make_jobserver(monkeypatch):
make = MakeExecutable("make", 8)
monkeypatch.setenv("MAKEFLAGS", "--jobserver-auth=X,Y")
assert make(output=str).strip() == ""
assert make(parallel=False, output=str).strip() == "-j1"


def test_make_jobserver_not_supported(monkeypatch):
make = MakeExecutable("make", 8, supports_jobserver=False)
monkeypatch.setenv("MAKEFLAGS", "--jobserver-auth=X,Y")
# Currently fallback on default job count, Maybe it should force -j1 ?
assert make(output=str).strip() == "-j8"
def test_make_jobserver_not_supported(self):
make = MakeExecutable("make", 8, supports_jobserver=False)
os.environ["MAKEFLAGS"] = "--jobserver-auth=X,Y"
# Currently fallback on default job count, Maybe it should force -j1 ?
self.assertEqual(make(output=str).strip(), "-j8")
del os.environ["MAKEFLAGS"]
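The hunk above converts `unittest.TestCase` methods to plain pytest functions; the key change is replacing direct `os.environ` writes (with manual `del` cleanup) by `monkeypatch.setenv`, which undoes itself when the test ends. A minimal, self-contained sketch of the idiom (the variable name is taken from the diff; the test body is illustrative only):

```python
import os


def test_env_toggle(monkeypatch):
    # monkeypatch restores the original environment automatically at
    # teardown, so no manual `del os.environ[...]` is needed afterwards.
    monkeypatch.setenv("SPACK_NO_PARALLEL_MAKE", "true")
    assert os.environ["SPACK_NO_PARALLEL_MAKE"] == "true"
```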
@@ -289,8 +289,8 @@ def test_mirror_cache_symlinks(tmpdir):
@pytest.mark.parametrize(
"specs,expected_specs",
[
(["pkg-a"], ["pkg-a@=1.0", "pkg-a@=2.0"]),
(["pkg-a", "brillig"], ["pkg-a@=1.0", "pkg-a@=2.0", "brillig@=1.0.0", "brillig@=2.0.0"]),
(["a"], ["a@=1.0", "a@=2.0"]),
(["a", "brillig"], ["a@=1.0", "a@=2.0", "brillig@=1.0.0", "brillig@=2.0.0"]),
],
)
def test_get_all_versions(specs, expected_specs):
@@ -27,13 +27,16 @@
]


def test_module_function_change_env(tmp_path):
environb = {b"TEST_MODULE_ENV_VAR": b"TEST_FAIL", b"NOT_AFFECTED": b"NOT_AFFECTED"}
src_file = tmp_path / "src_me"
src_file.write_text("export TEST_MODULE_ENV_VAR=TEST_SUCCESS\n")
module("load", str(src_file), module_template=f". {src_file} 2>&1", environb=environb)
assert environb[b"TEST_MODULE_ENV_VAR"] == b"TEST_SUCCESS"
assert environb[b"NOT_AFFECTED"] == b"NOT_AFFECTED"
def test_module_function_change_env(tmpdir, working_env):
src_file = str(tmpdir.join("src_me"))
with open(src_file, "w") as f:
f.write("export TEST_MODULE_ENV_VAR=TEST_SUCCESS\n")

os.environ["NOT_AFFECTED"] = "NOT_AFFECTED"
module("load", src_file, module_template=". {0} 2>&1".format(src_file))

assert os.environ["TEST_MODULE_ENV_VAR"] == "TEST_SUCCESS"
assert os.environ["NOT_AFFECTED"] == "NOT_AFFECTED"


def test_module_function_no_change(tmpdir):
@@ -7,8 +7,6 @@

import pytest

import archspec.cpu

import spack.environment as ev
import spack.main
import spack.modules.lmod
@@ -102,19 +100,14 @@ def test_file_layout(self, compiler, provider, factory, module_configuration):
else:
assert repetitions == 1

def test_compilers_provided_different_name(
self, factory, module_configuration, compiler_factory
):
with spack.config.override(
"compilers", [compiler_factory(spec="clang@3.3", operating_system="debian6")]
):
module_configuration("complex_hierarchy")
module, spec = factory("intel-oneapi-compilers%clang@3.3")
def test_compilers_provided_different_name(self, factory, module_configuration):
module_configuration("complex_hierarchy")
module, spec = factory("intel-oneapi-compilers%clang@3.3")

provides = module.conf.provides
provides = module.conf.provides

assert "compiler" in provides
assert provides["compiler"] == spack.spec.CompilerSpec("oneapi@=3.0")
assert "compiler" in provides
assert provides["compiler"] == spack.spec.CompilerSpec("oneapi@=3.0")

def test_simple_case(self, modulefile_content, module_configuration):
"""Tests the generation of a simple Lua module file."""
@@ -143,9 +136,6 @@ def test_autoload_all(self, modulefile_content, module_configuration):

assert len([x for x in content if "depends_on(" in x]) == 5

@pytest.mark.skipif(
str(archspec.cpu.host().family) != "x86_64", reason="test data is specific for x86_64"
)
def test_alter_environment(self, modulefile_content, module_configuration):
"""Tests modifications to run-time environment."""

@@ -217,9 +207,6 @@ def test_setenv_raw_value(self, modulefile_content, module_configuration):

assert len([x for x in content if 'setenv("FOO", "{{name}}, {name}, {{}}, {}")' in x]) == 1

@pytest.mark.skipif(
str(archspec.cpu.host().family) != "x86_64", reason="test data is specific for x86_64"
)
def test_help_message(self, modulefile_content, module_configuration):
"""Tests the generation of module help message."""

@@ -343,16 +330,14 @@ def test_override_template_in_package(self, modulefile_content, module_configura

assert "Override successful!" in content

def test_override_template_in_modules_yaml(
self, modulefile_content, module_configuration, host_architecture_str
):
def test_override_template_in_modules_yaml(self, modulefile_content, module_configuration):
"""Tests overriding a template from `modules.yaml`"""
module_configuration("override_template")

content = modulefile_content("override-module-templates")
assert "Override even better!" in content

content = modulefile_content(f"mpileaks target={host_architecture_str}")
content = modulefile_content("mpileaks target=x86_64")
assert "Override even better!" in content

@pytest.mark.usefixtures("config")
@@ -7,8 +7,6 @@
|
||||
|
||||
import pytest
|
||||
|
||||
import archspec.cpu
|
||||
|
||||
import spack.modules.common
|
||||
import spack.modules.tcl
|
||||
import spack.spec
|
||||
@@ -91,29 +89,22 @@ def test_autoload_all(self, modulefile_content, module_configuration):
|
||||
assert len([x for x in content if "depends-on " in x]) == 2
|
||||
assert len([x for x in content if "module load " in x]) == 2
|
||||
|
||||
def test_prerequisites_direct(
|
||||
self, modulefile_content, module_configuration, host_architecture_str
|
||||
):
|
||||
def test_prerequisites_direct(self, modulefile_content, module_configuration):
|
||||
"""Tests asking direct dependencies as prerequisites."""
|
||||
|
||||
module_configuration("prerequisites_direct")
|
||||
content = modulefile_content(f"mpileaks target={host_architecture_str}")
|
||||
content = modulefile_content("mpileaks target=x86_64")
|
||||
|
||||
assert len([x for x in content if "prereq" in x]) == 2
|
||||
|
||||
def test_prerequisites_all(
|
||||
self, modulefile_content, module_configuration, host_architecture_str
|
||||
):
|
||||
def test_prerequisites_all(self, modulefile_content, module_configuration):
|
||||
"""Tests asking all dependencies as prerequisites."""
|
||||
|
||||
module_configuration("prerequisites_all")
|
||||
content = modulefile_content(f"mpileaks target={host_architecture_str}")
|
||||
content = modulefile_content("mpileaks target=x86_64")
|
||||
|
||||
assert len([x for x in content if "prereq" in x]) == 5

    @pytest.mark.skipif(
        str(archspec.cpu.host().family) != "x86_64", reason="test data is specific for x86_64"
    )
    def test_alter_environment(self, modulefile_content, module_configuration):
        """Tests modifications to run-time environment."""

@@ -186,9 +177,6 @@ def test_setenv_raw_value(self, modulefile_content, module_configuration):

        assert len([x for x in content if "setenv FOO {{{name}}, {name}, {{}}, {}}" in x]) == 1

    @pytest.mark.skipif(
        str(archspec.cpu.host().family) != "x86_64", reason="test data is specific for x86_64"
    )
    def test_help_message(self, modulefile_content, module_configuration):
        """Tests the generation of module help message."""

@@ -231,7 +219,7 @@ def test_help_message(self, modulefile_content, module_configuration):
        )
        assert help_msg in "".join(content)

    def test_exclude(self, modulefile_content, module_configuration, host_architecture_str):
    def test_exclude(self, modulefile_content, module_configuration):
        """Tests excluding the generation of selected modules."""

        module_configuration("exclude")
@@ -243,9 +231,9 @@ def test_exclude(self, modulefile_content, module_configuratu
        # and IOError on Python 2 or common bases like EnvironmentError
        # which are not officially documented
        with pytest.raises(Exception):
            modulefile_content(f"callpath target={host_architecture_str}")
            modulefile_content("callpath target=x86_64")

        content = modulefile_content(f"zmpi target={host_architecture_str}")
        content = modulefile_content("zmpi target=x86_64")

        assert len([x for x in content if "module load " in x]) == 1

@@ -415,16 +403,14 @@ def test_override_template_in_package(self, modulefile_content, module_configura

        assert "Override successful!" in content

    def test_override_template_in_modules_yaml(
        self, modulefile_content, module_configuration, host_architecture_str
    ):
    def test_override_template_in_modules_yaml(self, modulefile_content, module_configuration):
        """Tests overriding a template from `modules.yaml`"""
        module_configuration("override_template")

        content = modulefile_content("override-module-templates")
        assert "Override even better!" in content

        content = modulefile_content(f"mpileaks target={host_architecture_str}")
        content = modulefile_content("mpileaks target=x86_64")
        assert "Override even better!" in content

    def test_extend_context(self, modulefile_content, module_configuration):

@@ -69,15 +69,9 @@ def test_no_version_match(pkg_name):
        ("", "boolean_false_first", "True"),
    ],
)
def test_multimethod_calls(
    pkg_name, constraint_str, method_name, expected_result, compiler_factory
):
    # Add apple-clang, as it is required by one of the tests
    with spack.config.override(
        "compilers", [compiler_factory(spec="apple-clang@9.1.0", operating_system="elcapitan")]
    ):
        s = spack.spec.Spec(pkg_name + constraint_str).concretized()
        msg = f"Method {method_name} from {s} is giving a wrong result"
def test_multimethod_calls(pkg_name, constraint_str, method_name, expected_result):
    s = spack.spec.Spec(pkg_name + constraint_str).concretized()
    msg = "Method {0} from {1} is giving a wrong result".format(method_name, s)
    assert getattr(s.package, method_name)() == expected_result, msg


@@ -13,44 +13,28 @@
        # Normalize simple conditionals
        ("optional-dep-test", {"optional-dep-test": None}),
        ("optional-dep-test~a", {"optional-dep-test~a": None}),
        ("optional-dep-test+a", {"optional-dep-test+a": {"pkg-a": None}}),
        ("optional-dep-test a=true", {"optional-dep-test a=true": {"pkg-a": None}}),
        ("optional-dep-test a=true", {"optional-dep-test+a": {"pkg-a": None}}),
        ("optional-dep-test@1.1", {"optional-dep-test@1.1": {"pkg-b": None}}),
        ("optional-dep-test%intel", {"optional-dep-test%intel": {"pkg-c": None}}),
        (
            "optional-dep-test%intel@64.1",
            {"optional-dep-test%intel@64.1": {"pkg-c": None, "pkg-d": None}},
        ),
        ("optional-dep-test+a", {"optional-dep-test+a": {"a": None}}),
        ("optional-dep-test a=true", {"optional-dep-test a=true": {"a": None}}),
        ("optional-dep-test a=true", {"optional-dep-test+a": {"a": None}}),
        ("optional-dep-test@1.1", {"optional-dep-test@1.1": {"b": None}}),
        ("optional-dep-test%intel", {"optional-dep-test%intel": {"c": None}}),
        ("optional-dep-test%intel@64.1", {"optional-dep-test%intel@64.1": {"c": None, "d": None}}),
        (
            "optional-dep-test%intel@64.1.2",
            {"optional-dep-test%intel@64.1.2": {"pkg-c": None, "pkg-d": None}},
            {"optional-dep-test%intel@64.1.2": {"c": None, "d": None}},
        ),
        ("optional-dep-test%clang@35", {"optional-dep-test%clang@35": {"pkg-e": None}}),
        ("optional-dep-test%clang@35", {"optional-dep-test%clang@35": {"e": None}}),
        # Normalize multiple conditionals
        ("optional-dep-test+a@1.1", {"optional-dep-test+a@1.1": {"pkg-a": None, "pkg-b": None}}),
        (
            "optional-dep-test+a%intel",
            {"optional-dep-test+a%intel": {"pkg-a": None, "pkg-c": None}},
        ),
        (
            "optional-dep-test@1.1%intel",
            {"optional-dep-test@1.1%intel": {"pkg-b": None, "pkg-c": None}},
        ),
        ("optional-dep-test+a@1.1", {"optional-dep-test+a@1.1": {"a": None, "b": None}}),
        ("optional-dep-test+a%intel", {"optional-dep-test+a%intel": {"a": None, "c": None}}),
        ("optional-dep-test@1.1%intel", {"optional-dep-test@1.1%intel": {"b": None, "c": None}}),
        (
            "optional-dep-test@1.1%intel@64.1.2+a",
            {
                "optional-dep-test@1.1%intel@64.1.2+a": {
                    "pkg-a": None,
                    "pkg-b": None,
                    "pkg-c": None,
                    "pkg-d": None,
                }
            },
            {"optional-dep-test@1.1%intel@64.1.2+a": {"a": None, "b": None, "c": None, "d": None}},
        ),
        (
            "optional-dep-test@1.1%clang@36.5+a",
            {"optional-dep-test@1.1%clang@36.5+a": {"pkg-b": None, "pkg-a": None, "pkg-e": None}},
            {"optional-dep-test@1.1%clang@36.5+a": {"b": None, "a": None, "e": None}},
        ),
        # Chained MPI
        (
@@ -60,10 +44,7 @@
        # Each of these dependencies comes from a conditional
        # dependency on another. This requires iterating to evaluate
        # the whole chain.
        (
            "optional-dep-test+f",
            {"optional-dep-test+f": {"pkg-f": None, "pkg-g": None, "mpi": None}},
        ),
        ("optional-dep-test+f", {"optional-dep-test+f": {"f": None, "g": None, "mpi": None}}),
    ]
)
def spec_and_expected(request):
@@ -82,12 +63,12 @@ def test_normalize(spec_and_expected, config, mock_packages):
def test_default_variant(config, mock_packages):
    spec = Spec("optional-dep-test-3")
    spec.concretize()
    assert "pkg-a" in spec
    assert "a" in spec

    spec = Spec("optional-dep-test-3~var")
    spec.concretize()
    assert "pkg-a" in spec
    assert "a" in spec

    spec = Spec("optional-dep-test-3+var")
    spec.concretize()
    assert "pkg-b" in spec
    assert "b" in spec

@@ -21,7 +21,6 @@
import spack.install_test
import spack.package_base
import spack.repo
import spack.spec
from spack.build_systems.generic import Package
from spack.installer import InstallError

@@ -142,19 +141,19 @@ def setup_install_test(source_paths, test_root):
    "spec,sources,extras,expect",
    [
        (
            "pkg-a",
            "a",
            ["example/a.c"],  # Source(s)
            ["example/a.c"],  # Extra test source
            ["example/a.c"],
        ),  # Test install dir source(s)
        (
            "pkg-b",
            "b",
            ["test/b.cpp", "test/b.hpp", "example/b.txt"],  # Source(s)
            ["test"],  # Extra test source
            ["test/b.cpp", "test/b.hpp"],
        ),  # Test install dir source
        (
            "pkg-c",
            "c",
            ["examples/a.py", "examples/b.py", "examples/c.py", "tests/d.py"],
            ["examples/b.py", "tests"],
            ["examples/b.py", "tests/d.py"],
@@ -202,7 +201,7 @@ def test_cache_extra_sources(install_mockery, spec, sources, extras, expect):


def test_cache_extra_sources_fails(install_mockery):
    s = spack.spec.Spec("pkg-a").concretized()
    s = spack.spec.Spec("a").concretized()
    s.package.spec.concretize()

    with pytest.raises(InstallError) as exc_info:
@@ -226,7 +225,7 @@ class URLsPackage(spack.package.Package):
        url = "https://www.example.com/url-package-1.0.tgz"
        urls = ["https://www.example.com/archive"]

    s = spack.spec.Spec("pkg-a")
    s = spack.spec.Spec("a")
    with pytest.raises(ValueError, match="defines both"):
        URLsPackage(s)

@@ -236,7 +235,7 @@ class LicensedPackage(spack.package.Package):
        extendees = None  # currently a required attribute for is_extension()
        license_files = None

    s = spack.spec.Spec("pkg-a")
    s = spack.spec.Spec("a")
    pkg = LicensedPackage(s)
    assert pkg.global_license_file is None

@@ -249,21 +248,21 @@ class BaseTestPackage(Package):


def test_package_version_fails():
    s = spack.spec.Spec("pkg-a")
    s = spack.spec.Spec("a")
    pkg = BaseTestPackage(s)
    with pytest.raises(ValueError, match="does not have a concrete version"):
        pkg.version()


def test_package_tester_fails():
    s = spack.spec.Spec("pkg-a")
    s = spack.spec.Spec("a")
    pkg = BaseTestPackage(s)
    with pytest.raises(ValueError, match="without concrete version"):
        pkg.tester()


def test_package_fetcher_fails():
    s = spack.spec.Spec("pkg-a")
    s = spack.spec.Spec("a")
    pkg = BaseTestPackage(s)
    with pytest.raises(ValueError, match="without concrete version"):
        pkg.fetcher
@@ -281,7 +280,7 @@ def compilers(compiler, arch_spec):

    monkeypatch.setattr(spack.compilers, "compilers_for_spec", compilers)

    s = spack.spec.Spec("pkg-a")
    s = spack.spec.Spec("a")
    pkg = BaseTestPackage(s)
    pkg.test_requires_compiler = True
    pkg.do_test()

@@ -517,7 +517,7 @@ def test_manual_download(
    def _instr(pkg):
        return f"Download instructions for {pkg.spec.name}"

    spec = default_mock_concretization("pkg-a")
    spec = default_mock_concretization("a")
    spec.package.manual_download = manual
    if instr:
        monkeypatch.setattr(spack.package_base.PackageBase, "download_instr", _instr)
@@ -543,7 +543,7 @@ def test_fetch_without_code_is_noop(
    default_mock_concretization, install_mockery, fetching_not_allowed
):
    """do_fetch for packages without code should be a no-op"""
    pkg = default_mock_concretization("pkg-a").package
    pkg = default_mock_concretization("a").package
    pkg.has_code = False
    pkg.do_fetch()

@@ -552,7 +552,7 @@ def test_fetch_external_package_is_noop(
    default_mock_concretization, install_mockery, fetching_not_allowed
):
    """do_fetch for packages without code should be a no-op"""
    spec = default_mock_concretization("pkg-a")
    spec = default_mock_concretization("a")
    spec.external_path = "/some/where"
    assert spec.external
    spec.package.do_fetch()

@@ -9,8 +9,6 @@

import pytest

import archspec.cpu

import spack.concretize
import spack.paths
import spack.platforms
@@ -292,7 +290,6 @@ def test_relocate_text_bin_raise_if_new_prefix_is_longer(tmpdir):


@pytest.mark.requires_executables("install_name_tool", "file", "cc")
@pytest.mark.skipif(str(archspec.cpu.host().family) != "x86_64", reason="failing on Apple M1/M2")
def test_fixup_macos_rpaths(make_dylib, make_object_file):
    # For each of these tests except for the "correct" case, the first fixup
    # should make changes, and the second fixup should be a null-op.

@@ -9,8 +9,6 @@
import spack.package_base
import spack.paths
import spack.repo
import spack.spec
import spack.util.file_cache


@pytest.fixture(params=["packages", "", "foo"])
@@ -32,25 +30,25 @@ def extra_repo(tmpdir_factory, request):


def test_repo_getpkg(mutable_mock_repo):
    mutable_mock_repo.get_pkg_class("pkg-a")
    mutable_mock_repo.get_pkg_class("builtin.mock.pkg-a")
    mutable_mock_repo.get_pkg_class("a")
    mutable_mock_repo.get_pkg_class("builtin.mock.a")


def test_repo_multi_getpkg(mutable_mock_repo, extra_repo):
    mutable_mock_repo.put_first(extra_repo[0])
    mutable_mock_repo.get_pkg_class("pkg-a")
    mutable_mock_repo.get_pkg_class("builtin.mock.pkg-a")
    mutable_mock_repo.get_pkg_class("a")
    mutable_mock_repo.get_pkg_class("builtin.mock.a")


def test_repo_multi_getpkgclass(mutable_mock_repo, extra_repo):
    mutable_mock_repo.put_first(extra_repo[0])
    mutable_mock_repo.get_pkg_class("pkg-a")
    mutable_mock_repo.get_pkg_class("builtin.mock.pkg-a")
    mutable_mock_repo.get_pkg_class("a")
    mutable_mock_repo.get_pkg_class("builtin.mock.a")


def test_repo_pkg_with_unknown_namespace(mutable_mock_repo):
    with pytest.raises(spack.repo.UnknownNamespaceError):
        mutable_mock_repo.get_pkg_class("unknown.pkg-a")
        mutable_mock_repo.get_pkg_class("unknown.a")


def test_repo_unknown_pkg(mutable_mock_repo):
@@ -144,14 +142,14 @@ def test_get_all_mock_packages(mock_packages):

def test_repo_path_handles_package_removal(tmpdir, mock_packages):
    builder = spack.repo.MockRepositoryBuilder(tmpdir, namespace="removal")
    builder.add_package("pkg-c")
    builder.add_package("c")
    with spack.repo.use_repositories(builder.root, override=False) as repos:
        r = repos.repo_for_pkg("pkg-c")
        r = repos.repo_for_pkg("c")
        assert r.namespace == "removal"

    builder.remove("pkg-c")
    builder.remove("c")
    with spack.repo.use_repositories(builder.root, override=False) as repos:
        r = repos.repo_for_pkg("pkg-c")
        r = repos.repo_for_pkg("c")
        assert r.namespace == "builtin.mock"


@@ -138,19 +138,19 @@ def test_specify_preinstalled_dep(tmpdir, monkeypatch):
    transitive dependency that is only supplied by the preinstalled package.
    """
    builder = spack.repo.MockRepositoryBuilder(tmpdir)
    builder.add_package("pkg-c")
    builder.add_package("pkg-b", dependencies=[("pkg-c", None, None)])
    builder.add_package("pkg-a", dependencies=[("pkg-b", None, None)])
    builder.add_package("c")
    builder.add_package("b", dependencies=[("c", None, None)])
    builder.add_package("a", dependencies=[("b", None, None)])

    with spack.repo.use_repositories(builder.root):
        b_spec = Spec("pkg-b").concretized()
        monkeypatch.setattr(Spec, "installed", property(lambda x: x.name != "pkg-a"))
        b_spec = Spec("b").concretized()
        monkeypatch.setattr(Spec, "installed", property(lambda x: x.name != "a"))

        a_spec = Spec("pkg-a")
        a_spec = Spec("a")
        a_spec._add_dependency(b_spec, depflag=dt.BUILD | dt.LINK, virtuals=())
        a_spec.concretize()

        assert {x.name for x in a_spec.traverse()} == {"pkg-a", "pkg-b", "pkg-c"}
        assert set(x.name for x in a_spec.traverse()) == set(["a", "b", "c"])
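
A detail worth noting in the hunk above: `Spec.installed` is a property, so faking it requires handing `monkeypatch.setattr` a new `property` object on the class, not a bare function. A self-contained sketch of the pattern with a stand-in class (not the real `Spec`):

```python
import pytest


class FakeSpec:
    """Stand-in for spack.spec.Spec, only to demonstrate the pattern."""

    def __init__(self, name):
        self.name = name

    @property
    def installed(self):
        return False


def test_fake_installed(monkeypatch):
    # Properties live on the class, so the replacement must also be a property.
    monkeypatch.setattr(FakeSpec, "installed", property(lambda self: self.name != "a"))
    assert FakeSpec("b").installed
    assert not FakeSpec("a").installed
```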


@pytest.mark.usefixtures("config")
@@ -982,15 +982,15 @@ def test_synthetic_construction_of_split_dependencies_from_same_package(mock_pac
    # Construct in a synthetic way (i.e. without using the solver)
    # the following spec:
    #
    #          pkg-b
    #          b
    #   build /  \ link,run
    # pkg-c@2.0   pkg-c@1.0
    #   c@2.0      c@1.0
    #
    # To demonstrate that a spec can now hold two direct
    # dependencies from the same package
    root = Spec("pkg-b").concretized()
    link_run_spec = Spec("pkg-c@=1.0").concretized()
    build_spec = Spec("pkg-c@=2.0").concretized()
    root = Spec("b").concretized()
    link_run_spec = Spec("c@=1.0").concretized()
    build_spec = Spec("c@=2.0").concretized()

    root.add_dependency_edge(link_run_spec, depflag=dt.LINK, virtuals=())
    root.add_dependency_edge(link_run_spec, depflag=dt.RUN, virtuals=())
@@ -998,10 +998,10 @@ def test_synthetic_construction_of_split_dependencies_from_same_pac

    # Check dependencies from the perspective of root
    assert len(root.dependencies()) == 2
    assert all(x.name == "pkg-c" for x in root.dependencies())
    assert all(x.name == "c" for x in root.dependencies())

    assert "@2.0" in root.dependencies(name="pkg-c", deptype=dt.BUILD)[0]
    assert "@1.0" in root.dependencies(name="pkg-c", deptype=dt.LINK | dt.RUN)[0]
    assert "@2.0" in root.dependencies(name="c", deptype=dt.BUILD)[0]
    assert "@1.0" in root.dependencies(name="c", deptype=dt.LINK | dt.RUN)[0]

    # Check parent from the perspective of the dependencies
    assert len(build_spec.dependents()) == 1
@@ -1013,30 +1013,30 @@ def test_synthetic_construction_of_split_dependencies_from_same_pac
def test_synthetic_construction_bootstrapping(mock_packages, config):
    # Construct the following spec:
    #
    #   pkg-b@2.0
    #   b@2.0
    #     | build
    #   pkg-b@1.0
    #   b@1.0
    #
    root = Spec("pkg-b@=2.0").concretized()
    bootstrap = Spec("pkg-b@=1.0").concretized()
    root = Spec("b@=2.0").concretized()
    bootstrap = Spec("b@=1.0").concretized()

    root.add_dependency_edge(bootstrap, depflag=dt.BUILD, virtuals=())

    assert len(root.dependencies()) == 1
    assert root.dependencies()[0].name == "pkg-b"
    assert root.name == "pkg-b"
    assert root.dependencies()[0].name == "b"
    assert root.name == "b"


def test_addition_of_different_deptypes_in_multiple_calls(mock_packages, config):
    # Construct the following spec:
    #
    #   pkg-b@2.0
    #   b@2.0
    #     | build,link,run
    #   pkg-b@1.0
    #   b@1.0
    #
    # with three calls and check we always have a single edge
    root = Spec("pkg-b@=2.0").concretized()
    bootstrap = Spec("pkg-b@=1.0").concretized()
    root = Spec("b@=2.0").concretized()
    bootstrap = Spec("b@=1.0").concretized()

    for current_depflag in (dt.BUILD, dt.LINK, dt.RUN):
        root.add_dependency_edge(bootstrap, depflag=current_depflag, virtuals=())
@@ -1063,9 +1063,9 @@ def test_addition_of_different_deptypes_in_multiple_calls(mock_packages, config)
def test_adding_same_deptype_with_the_same_name_raises(
    mock_packages, config, c1_depflag, c2_depflag
):
    p = Spec("pkg-b@=2.0").concretized()
    c1 = Spec("pkg-b@=1.0").concretized()
    c2 = Spec("pkg-b@=2.0").concretized()
    p = Spec("b@=2.0").concretized()
    c1 = Spec("b@=1.0").concretized()
    c2 = Spec("b@=2.0").concretized()

    p.add_dependency_edge(c1, depflag=c1_depflag, virtuals=())
    with pytest.raises(spack.error.SpackError):

@@ -373,7 +373,7 @@ def test_satisfies_single_valued_variant(self):
        https://github.com/spack/spack/pull/2386#issuecomment-282147639
        is handled correctly.
        """
        a = Spec("pkg-a foobar=bar")
        a = Spec("a foobar=bar")
        a.concretize()

        assert a.satisfies("foobar=bar")
@@ -390,21 +390,21 @@ def test_satisfies_single_valued_variant(self):
        assert "foo=bar" in a

        # Check that conditional dependencies are treated correctly
        assert "^pkg-b" in a
        assert "^b" in a

    def test_unsatisfied_single_valued_variant(self):
        a = Spec("pkg-a foobar=baz")
        a = Spec("a foobar=baz")
        a.concretize()
        assert "^pkg-b" not in a
        assert "^b" not in a

        mv = Spec("multivalue-variant")
        mv.concretize()
        assert "pkg-a@1.0" not in mv
        assert "a@1.0" not in mv

    def test_indirect_unsatisfied_single_valued_variant(self):
        spec = Spec("singlevalue-variant-dependent")
        spec.concretize()
        assert "pkg-a@1.0" not in spec
        assert "a@1.0" not in spec

    def test_unsatisfiable_multi_value_variant(self, default_mock_concretization):
        # Semantics for a multi-valued variant is different
@@ -734,6 +734,18 @@ def test_spec_formatting_escapes(self, default_mock_concretization):
        with pytest.raises(SpecFormatStringError):
            spec.format(fmt_str)

    @pytest.mark.regression("9908")
    def test_spec_flags_maintain_order(self):
        # Spack was assembling flags in a manner that could result in
        # different orderings for repeated concretizations of the same
        # spec and config
        spec_str = "libelf %gcc@11.1.0 os=redhat6"
        for _ in range(3):
            s = Spec(spec_str).concretized()
            assert all(
                s.compiler_flags[x] == ["-O0", "-g"] for x in ("cflags", "cxxflags", "fflags")
            )

    def test_combination_of_wildcard_or_none(self):
        # Test that using 'none' and another value raises
        with pytest.raises(spack.variant.InvalidVariantValueCombinationError):
@@ -982,8 +994,8 @@ def test_splice_swap_names_mismatch_virtuals(self, default_mock_concretization,
        spec.splice(dep, transitive)

    def test_spec_override(self):
        init_spec = Spec("pkg-a foo=baz foobar=baz cflags=-O3 cxxflags=-O1")
        change_spec = Spec("pkg-a foo=fee cflags=-O2")
        init_spec = Spec("a foo=baz foobar=baz cflags=-O3 cxxflags=-O1")
        change_spec = Spec("a foo=fee cflags=-O2")
        new_spec = Spec.override(init_spec, change_spec)
        new_spec.concretize()
        assert "foo=fee" in new_spec
@@ -1253,15 +1265,15 @@ def test_spec_installed(default_mock_concretization, database):
    spec = Spec("not-a-real-package")
    assert not spec.installed

    # pkg-a is not in the mock DB and is not installed
    spec = default_mock_concretization("pkg-a")
    # 'a' is not in the mock DB and is not installed
    spec = default_mock_concretization("a")
    assert not spec.installed


@pytest.mark.regression("30678")
def test_call_dag_hash_on_old_dag_hash_spec(mock_packages, default_mock_concretization):
    # create a concrete spec
    a = default_mock_concretization("pkg-a")
    a = default_mock_concretization("a")
    dag_hashes = {spec.name: spec.dag_hash() for spec in a.traverse()}

    # make it look like an old DAG hash spec with no package hash on the spec.
@@ -1309,8 +1321,8 @@ def test_unsupported_compiler():


def test_package_hash_affects_dunder_and_dag_hash(mock_packages, default_mock_concretization):
    a1 = default_mock_concretization("pkg-a")
    a2 = default_mock_concretization("pkg-a")
    a1 = default_mock_concretization("a")
    a2 = default_mock_concretization("a")

    assert hash(a1) == hash(a2)
    assert a1.dag_hash() == a2.dag_hash()
@@ -1334,8 +1346,8 @@ def test_intersects_and_satisfies_on_concretized_spec(default_mock_concretizatio
    """Test that a spec obtained by concretizing an abstract spec, satisfies the abstract spec
    but not vice-versa.
    """
    a1 = default_mock_concretization("pkg-a@1.0")
    a2 = Spec("pkg-a@1.0")
    a1 = default_mock_concretization("a@1.0")
    a2 = Spec("a@1.0")

    assert a1.intersects(a2)
    assert a2.intersects(a1)
@@ -1461,17 +1473,17 @@ def test_constrain(factory, lhs_str, rhs_str, result, constrained_str):


def test_abstract_hash_intersects_and_satisfies(default_mock_concretization):
    concrete: Spec = default_mock_concretization("pkg-a")
    concrete: Spec = default_mock_concretization("a")
    hash = concrete.dag_hash()
    hash_5 = hash[:5]
    hash_6 = hash[:6]
    # abstract hash that doesn't have a common prefix with the others.
    hash_other = f"{'a' if hash_5[0] == 'b' else 'b'}{hash_5[1:]}"

    abstract_5 = Spec(f"pkg-a/{hash_5}")
    abstract_6 = Spec(f"pkg-a/{hash_6}")
    abstract_none = Spec(f"pkg-a/{hash_other}")
    abstract = Spec("pkg-a")
    abstract_5 = Spec(f"a/{hash_5}")
    abstract_6 = Spec(f"a/{hash_6}")
    abstract_none = Spec(f"a/{hash_other}")
    abstract = Spec("a")

    def assert_subset(a: Spec, b: Spec):
        assert a.intersects(b) and b.intersects(a) and a.satisfies(b) and not b.satisfies(a)
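
The `hash_5`/`hash_6`/`hash_other` cases above reduce to prefix matching: two abstract hashes can denote the same spec only if one is a prefix of the other. A toy check of just that rule (illustration only, not Spack's implementation):

```python
def hashes_intersect(lhs: str, rhs: str) -> bool:
    # hash_5 and hash_6 intersect (one extends the other); hash_other,
    # which differs in its first character, intersects neither.
    return lhs.startswith(rhs) or rhs.startswith(lhs)


assert hashes_intersect("abc12", "abc123")
assert not hashes_intersect("bbc12", "abc123")
```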

@@ -1505,9 +1517,3 @@ def test_edge_equality_does_not_depend_on_virtual_order():
    assert edge1 == edge2
    assert tuple(sorted(edge1.virtuals)) == edge1.virtuals
    assert tuple(sorted(edge2.virtuals)) == edge1.virtuals


def test_old_format_strings_trigger_error(default_mock_concretization):
    s = Spec("pkg-a").concretized()
    with pytest.raises(SpecFormatStringError):
        s.format("${PACKAGE}-${VERSION}-${HASH}")
Some files were not shown because too many files have changed in this diff.