Compare commits

..

15 Commits

Author SHA1 Message Date
Todd Gamblin
5fe93fee1e Update CHANGELOG.md for v0.22.0 2024-05-12 02:06:28 +02:00
Todd Gamblin
8207f11333 Bump version to v0.22 2024-05-11 17:54:12 +02:00
Todd Gamblin
5bb5d2696f changelog: add changes from 0.21.1 and 0.21.2 (#44136)
These changes were added to the release branch but did not make it onto `develop`.
2024-05-11 17:48:27 +02:00
Harmen Stoppels
55f37dffe5 oci: improve default_retry (#44132)
Apparently urllib can throw a range of different exceptions:

1. HTTPError
2. URLError with e.reason set to the actual exception
3. TimeoutError from getresponse, which is not wrapped
2024-05-11 15:44:40 +02:00
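A hedged Python sketch of handling those three shapes (`is_retryable` is a hypothetical helper, not Spack's actual `default_retry`, and the status codes are illustrative):
```python
import urllib.error

def is_retryable(exc: Exception) -> bool:
    """Classify the three exception shapes urllib can throw."""
    # 1. HTTPError carries the status code directly
    #    (check it first: HTTPError subclasses URLError)
    if isinstance(exc, urllib.error.HTTPError):
        return exc.code in (429, 500, 502, 503, 504)
    # 2. URLError wraps the actual exception in e.reason
    if isinstance(exc, urllib.error.URLError) and isinstance(exc.reason, Exception):
        return is_retryable(exc.reason)
    # 3. TimeoutError from getresponse arrives unwrapped
    return isinstance(exc, TimeoutError)
```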
Harmen Stoppels
252a5bd71b PythonExtension: fix issue where package does not extend python (#44109) 2024-05-10 10:48:06 +02:00
Massimiliano Culpo
f55224f161 Fix filtering external specs (#44093)
When an include filter on externals is present, implicitly
include libcs.

Also, do not penalize deprecated versions if they come
from externals.
2024-05-09 20:48:43 +02:00
Massimiliano Culpo
189ae4b06e CI/Update macOS runners: macos-latest switched to macos-14 (#44094)
macos-latest switched to macos-14, so now we are running
two identical jobs.
2024-05-09 20:48:43 +02:00
Harmen Stoppels
5e9c702fa7 gcc: use -idirafter for libc headers (#44081)
GCC C++ headers like cstdlib use `#include_next <stdlib.h>` to wrap libc
headers. We're using `-isystem` for libc, which puts those headers too
early in the search path. `-idirafter` fixes this so `include_next`
works.
2024-05-08 20:45:39 +02:00
Harmen Stoppels
965bb4d3c0 gitlab ci: tutorial: add julia and vim (#44073) 2024-05-08 14:19:22 +02:00
Todd Gamblin
354f98c94a r: patch R-CVE-2024-27322 for r@3.5:4.3.3 (#44050)
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
2024-05-08 14:19:22 +02:00
Tamara Dahlgren
5dce480154 Remove dead environment creation code (#44065) 2024-05-08 14:19:22 +02:00
Richarda Butler
f634d48b7c Include concrete environments with include_concrete (#33768)
Add the ability to include any number of (potentially nested) concrete environments, e.g.:

```yaml
   spack:
     specs: []
     concretizer:
         unify: true
     include_concrete:
     - /path/to/environment1
     - /path/to/environment2
```

or, from the CLI:

```console
   $ spack env create myenv
   $ spack -e myenv add python
   $ spack -e myenv concretize
   $ spack env create --include-concrete myenv included_env
```

The contents of included concrete environments' spack.lock files are
included in the environment's lock file at creation time. Any changes
to included concrete environments are only reflected after the environment
is re-concretized from the re-concretized included environments.

- [x] Concretize included envs
- [x] Save concrete specs in memory by hash
- [x] Add included envs to combined env's lock file
- [x] Add test
- [x] Update documentation

    Co-authored-by: Kayla Butler <butler59@llnl.gov>
    Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
    Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
    Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
2024-05-08 14:19:22 +02:00
Massimiliano Culpo
4daee565ae Update the tutorial command to point to releases/v0.22 (#44056) 2024-05-08 14:19:22 +02:00
Massimiliano Culpo
8e4dbdc2d7 Bump removal version in deprecation messages (#44064) 2024-05-08 14:19:22 +02:00
Harmen Stoppels
4f6adc03cd gitlab: dont build paraview for neoverse v2 (#44060) 2024-05-08 14:19:22 +02:00
828 changed files with 5992 additions and 9018 deletions

View File

@@ -28,7 +28,7 @@ jobs:
run:
shell: ${{ matrix.system.shell }}
steps:
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
- uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: ${{inputs.python_version}}
@@ -61,7 +61,7 @@ jobs:
./share/spack/qa/validate_last_exit.ps1
spack -d audit externals
./share/spack/qa/validate_last_exit.ps1
- uses: codecov/codecov-action@125fc84a9a348dbcf27191600683ec096ec9021c
- uses: codecov/codecov-action@5ecb98a3c6b747ed38dc09f787459979aebb39be
if: ${{ inputs.with_coverage == 'true' }}
with:
flags: unittests,audits

View File

@@ -37,7 +37,7 @@ jobs:
make patch unzip which xz python3 python3-devel tree \
cmake bison
- name: Checkout
uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
with:
fetch-depth: 0
- name: Bootstrap clingo
@@ -60,7 +60,7 @@ jobs:
run: |
brew install cmake bison tree
- name: Checkout
uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
with:
fetch-depth: 0
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
@@ -84,13 +84,15 @@ jobs:
- name: Setup macOS
if: ${{ matrix.runner != 'ubuntu-latest' }}
run: |
brew install tree gawk
sudo rm -rf $(command -v gpg gpg2)
brew install tree
# Remove GnuPG since we want to bootstrap it
sudo rm -rf /usr/local/bin/gpg
- name: Setup Ubuntu
if: ${{ matrix.runner == 'ubuntu-latest' }}
run: sudo rm -rf $(command -v gpg gpg2 patchelf)
run: |
sudo rm -rf $(which gpg) $(which gpg2) $(which patchelf)
- name: Checkout
uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
with:
fetch-depth: 0
- name: Bootstrap GnuPG
@@ -119,7 +121,7 @@ jobs:
run: |
sudo rm -rf $(which gpg) $(which gpg2) $(which patchelf)
- name: Checkout
uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
with:
fetch-depth: 0
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d

View File

@@ -56,7 +56,7 @@ jobs:
if: github.repository == 'spack/spack'
steps:
- name: Checkout
uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
- uses: docker/metadata-action@8e5442c4ef9f78752691e2d8f8d19755c6f78e81
id: docker_meta

View File

@@ -36,7 +36,7 @@ jobs:
core: ${{ steps.filter.outputs.core }}
packages: ${{ steps.filter.outputs.packages }}
steps:
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
- uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
if: ${{ github.event_name == 'push' }}
with:
fetch-depth: 0

View File

@@ -14,7 +14,7 @@ jobs:
build-paraview-deps:
runs-on: windows-latest
steps:
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
- uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
with:
fetch-depth: 0
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d

View File

@@ -3,5 +3,5 @@ clingo==5.7.1
flake8==7.0.0
isort==5.13.2
mypy==1.8.0
types-six==1.16.21.20240513
types-six==1.16.21.9
vermin==1.6.0

View File

@@ -51,7 +51,7 @@ jobs:
on_develop: false
steps:
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
- uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
with:
fetch-depth: 0
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
@@ -91,7 +91,7 @@ jobs:
UNIT_TEST_COVERAGE: ${{ matrix.python-version == '3.11' }}
run: |
share/spack/qa/run-unit-tests
- uses: codecov/codecov-action@125fc84a9a348dbcf27191600683ec096ec9021c
- uses: codecov/codecov-action@5ecb98a3c6b747ed38dc09f787459979aebb39be
with:
flags: unittests,linux,${{ matrix.concretizer }}
token: ${{ secrets.CODECOV_TOKEN }}
@@ -100,7 +100,7 @@ jobs:
shell:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
- uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
with:
fetch-depth: 0
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
@@ -124,7 +124,7 @@ jobs:
COVERAGE: true
run: |
share/spack/qa/run-shell-tests
- uses: codecov/codecov-action@125fc84a9a348dbcf27191600683ec096ec9021c
- uses: codecov/codecov-action@5ecb98a3c6b747ed38dc09f787459979aebb39be
with:
flags: shelltests,linux
token: ${{ secrets.CODECOV_TOKEN }}
@@ -141,7 +141,7 @@ jobs:
dnf install -y \
bzip2 curl file gcc-c++ gcc gcc-gfortran git gnupg2 gzip \
make patch tcl unzip which xz
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
- uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
- name: Setup repo and non-root user
run: |
git --version
@@ -160,7 +160,7 @@ jobs:
clingo-cffi:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
- uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
with:
fetch-depth: 0
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
@@ -185,7 +185,7 @@ jobs:
SPACK_TEST_SOLVER: clingo
run: |
share/spack/qa/run-unit-tests
- uses: codecov/codecov-action@125fc84a9a348dbcf27191600683ec096ec9021c
- uses: codecov/codecov-action@5ecb98a3c6b747ed38dc09f787459979aebb39be
with:
flags: unittests,linux,clingo
token: ${{ secrets.CODECOV_TOKEN }}
@@ -198,7 +198,7 @@ jobs:
os: [macos-13, macos-14]
python-version: ["3.11"]
steps:
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
- uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
with:
fetch-depth: 0
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
@@ -223,7 +223,7 @@ jobs:
$(which spack) solve zlib
common_args=(--dist loadfile --tx '4*popen//python=./bin/spack-tmpconfig python -u ./bin/spack python' -x)
$(which spack) unit-test --verbose --cov --cov-config=pyproject.toml --cov-report=xml:coverage.xml "${common_args[@]}"
- uses: codecov/codecov-action@125fc84a9a348dbcf27191600683ec096ec9021c
- uses: codecov/codecov-action@5ecb98a3c6b747ed38dc09f787459979aebb39be
with:
flags: unittests,macos
token: ${{ secrets.CODECOV_TOKEN }}

View File

@@ -18,7 +18,7 @@ jobs:
validate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
- uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: '3.11'
@@ -35,7 +35,7 @@ jobs:
style:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
- uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
with:
fetch-depth: 0
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
@@ -70,7 +70,7 @@ jobs:
dnf install -y \
bzip2 curl file gcc-c++ gcc gcc-gfortran git gnupg2 gzip \
make patch tcl unzip which xz
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
- uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
- name: Setup repo and non-root user
run: |
git --version

View File

@@ -15,7 +15,7 @@ jobs:
unit-tests:
runs-on: windows-latest
steps:
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
- uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
with:
fetch-depth: 0
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
@@ -33,7 +33,7 @@ jobs:
./share/spack/qa/validate_last_exit.ps1
coverage combine -a
coverage xml
- uses: codecov/codecov-action@125fc84a9a348dbcf27191600683ec096ec9021c
- uses: codecov/codecov-action@5ecb98a3c6b747ed38dc09f787459979aebb39be
with:
flags: unittests,windows
token: ${{ secrets.CODECOV_TOKEN }}
@@ -41,7 +41,7 @@ jobs:
unit-tests-cmd:
runs-on: windows-latest
steps:
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
- uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
with:
fetch-depth: 0
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
@@ -59,7 +59,7 @@ jobs:
./share/spack/qa/validate_last_exit.ps1
coverage combine -a
coverage xml
- uses: codecov/codecov-action@125fc84a9a348dbcf27191600683ec096ec9021c
- uses: codecov/codecov-action@5ecb98a3c6b747ed38dc09f787459979aebb39be
with:
flags: unittests,windows
token: ${{ secrets.CODECOV_TOKEN }}
@@ -67,7 +67,7 @@ jobs:
build-abseil:
runs-on: windows-latest
steps:
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
- uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
with:
fetch-depth: 0
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d

View File

@@ -1,3 +1,320 @@
# v0.22.0 (2024-05-12)
`v0.22.0` is a major feature release.
## Features in this release
1. **Compiler dependencies**
We are in the process of making compilers proper dependencies in Spack, and a number
of changes in `v0.22` support that effort. You may notice nodes in your dependency
graphs for compiler runtime libraries like `gcc-runtime` or `libgfortran`, and you
may notice that Spack graphs now include `libc`. We've also begun moving compiler
configuration from `compilers.yaml` to `packages.yaml` to make it consistent with
other externals. We are trying to do this with the least disruption possible, so
your existing `compilers.yaml` files should still work. We expect to be done with
this transition by the `v0.23` release in November.
* #41104: Packages compiled with `%gcc` on Linux, macOS and FreeBSD now depend on a
new package `gcc-runtime`, which contains a copy of the shared compiler runtime
libraries. This enables gcc runtime libraries to be installed and relocated when
using a build cache. When building minimal Spack-generated container images it is
no longer necessary to install libgfortran, libgomp etc. using the system package
manager.
* #42062: Packages compiled with `%oneapi` now depend on a new package
`intel-oneapi-runtime`. This is similar to `gcc-runtime`, and the runtimes can
provide virtuals and compilers can inject dependencies on virtuals into compiled
packages. This allows us to model library soname compatibility and allows
compilers like `%oneapi` to provide virtuals like `sycl` (which can also be
provided by standalone libraries). Note that until we have an agreement in place
with Intel, Intel packages are marked `redistribute(source=False, binary=False)`
and must be downloaded outside of Spack.
* #43272: changes to the optimization criteria of the solver improve the hit-rate of
buildcaches by a fair amount. The solver now uses more relaxed compatibility rules and
will not try to strictly match compilers or targets of reused specs. Users can still
enforce the previous strict behavior with `require:` sections in `packages.yaml`
(see the sketch after this list).
Note that to enforce correct linking, Spack will *not* reuse old `%gcc` and
`%oneapi` specs that do not have the runtime libraries as a dependency.
* #43539: Spack will reuse specs built with compilers that are *not* explicitly
configured in `compilers.yaml`. Because we can now keep runtime libraries in build
cache, we do not require you to also have a local configured compiler to *use* the
runtime libraries. This improves reuse in buildcaches and avoids conflicts with OS
updates that happen underneath Spack.
* #43190: binary compatibility on `linux` is now based on the `libc` version,
instead of on the `os` tag. Spack builds now detect the host `libc` (`glibc` or
`musl`) and add it as an implicit external node in the dependency graph. Binaries
with a `libc` with the same name and a version less than or equal to that of the
detected `libc` can be reused. This is only on `linux`, not `macos` or `Windows`.
* #43464: each package that can provide a compiler is now detectable using `spack
external find`. External packages defining compiler paths are effectively used as
compilers, and `spack external find -t compiler` can be used as a substitute for
`spack compiler find`. More details on this transition are in
[the docs](https://spack.readthedocs.io/en/latest/getting_started.html#manual-compiler-configuration)
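For the strict-matching escape hatch mentioned in the #43272 bullet, a minimal `packages.yaml` sketch (the compiler version is hypothetical):
```yaml
packages:
  all:
    # Require one exact compiler everywhere, so the solver
    # will not reuse specs built with anything else.
    require:
    - "%gcc@12.3.0"
```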
2. **Improved `spack find` UI for Environments**
If you're working in an environment, you likely care about:
* What are the roots
* Which ones are installed / not installed
* What's been added that still needs to be concretized
We've tweaked `spack find` in environments to show this information much more
clearly. Installation status is shown next to each root, so you can see what is
installed. Roots are also shown in bold in the list of installed packages. There is
also a new option for `spack find -r` / `--only-roots` that will only show env
roots, if you don't want to look at all the installed specs.
More details in #42334.
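A quick sketch of the new flag (the environment name `myenv` is hypothetical):
```console
$ spack env activate myenv
$ spack find -r    # same as --only-roots: show just the environment's roots
```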
3. **Improved command-line string quoting**
We are making some breaking changes to how Spack parses specs on the CLI in order to
respect shell quoting instead of trying to fight it. If you (sadly) had to write
something like this on the command line:
```
spack install zlib cflags=\"-O2 -g\"
```
That will now result in an error. The correct format (which you probably expected in
the first place) is:
```
spack install zlib cflags="-O2 -g"
```
Quoted strings can now also include special characters, enabling commands like:
```
spack install zlib ldflags='-Wl,-rpath=$ORIGIN/_libs'
```
To reduce ambiguity in parsing, do *not* put spaces around `=` and `==` in
flags or variants, as this will now result in an error:
```
spack install zlib cflags = "-O2 -g"
```
More details and discussion in #30634.
4. **Revert default `spack install` behavior to `--reuse`**
We changed the default concretizer behavior from `--reuse` to `--reuse-deps` in
#30990 (in `v0.20`), which meant that *every* `spack install` invocation would
attempt to build a new version of the requested package / any environment roots.
While this is a common ask for *upgrading* and for *developer* workflows, we don't
think it should be the default for a package manager.
We are going to try to stick to this policy:
1. Prioritize reuse and build as little as possible by default.
2. Only upgrade or install duplicates if they are explicitly asked for, or if there
is a known security issue that necessitates an upgrade.
With the install command you now have three options:
* `--reuse` (default): reuse as many existing installations as possible.
* `--reuse-deps` / `--fresh-roots`: upgrade (freshen) roots but reuse dependencies if possible.
* `--fresh`: install fresh versions of requested packages (roots) and their dependencies.
We've also introduced `--fresh-roots` as an alias for `--reuse-deps` to make it more clear
that it may give you fresh versions. More details in #41302 and #43988.
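A sketch of the three modes side by side, using a hypothetical package:
```console
$ spack install hdf5                # default --reuse: reuse everything possible
$ spack install --fresh-roots hdf5  # alias of --reuse-deps: fresh root, reused deps
$ spack install --fresh hdf5        # fresh root and fresh dependencies
```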
5. **More control over reused specs**
You can now control which packages to reuse and how. There is a new
`concretizer:reuse` config option, which accepts the following properties:
- `roots`: `true` to reuse roots, `false` to reuse just dependencies
- `exclude`: list of constraints used to select which specs *not* to reuse
- `include`: list of constraints used to select which specs *to* reuse
- `from`: list of sources for reused specs (some combination of `local`,
`buildcache`, or `external`)
For example, to reuse only specs compiled with GCC, you could write:
```yaml
concretizer:
reuse:
roots: true
include:
- "%gcc"
```
Or, if `openmpi` must be used from externals, and it must be the only external used:
```yaml
concretizer:
reuse:
roots: true
from:
- type: local
exclude: ["openmpi"]
- type: buildcache
exclude: ["openmpi"]
- type: external
include: ["openmpi"]
```
6. **Add new `redistribute()` directive**
Some packages can't be redistributed in source or binary form. We need an explicit
way to say that in a package.
Now there is a `redistribute()` directive so that package authors can write:
```python
class MyPackage(Package):
redistribute(source=False, binary=False)
```
Like other directives, this works with `when=`:
```python
class MyPackage(Package):
# 12.0 and higher are proprietary
redistribute(source=False, binary=False, when="@12.0:")
# can't redistribute when we depend on some proprietary dependency
redistribute(source=False, binary=False, when="^proprietary-dependency")
```
More in #20185.
7. **New `conflict:` and `prefer:` syntax for package preferences**
Previously, you could express conflicts and preferences in `packages.yaml` through
some contortions with `require:`:
```yaml
packages:
zlib-ng:
require:
- one_of: ["%clang", "@:"] # conflict on %clang
- any_of: ["+shared", "@:"] # strong preference for +shared
```
You can now use `conflict:` and `prefer:` for a much more readable configuration:
```yaml
packages:
zlib-ng:
conflict:
- "%clang"
prefer:
- "+shared"
```
See [the documentation](https://spack.readthedocs.io/en/latest/packages_yaml.html#conflicts-and-strong-preferences)
and #41832 for more details.
8. **`include_concrete` in environments**
You may want to build on the *concrete* contents of another environment without
changing that environment. You can now include the concrete specs from another
environment's `spack.lock` with `include_concrete`:
```yaml
spack:
specs: []
concretizer:
unify: true
include_concrete:
- /path/to/environment1
- /path/to/environment2
```
Now, when *this* environment is concretized, it will bring in the already concrete
specs from `environment1` and `environment2`, and build on top of them without
changing them. This is useful if you have phased deployments, where old deployments
should not be modified but you want to use as many of them as possible. More details
in #33768.
9. **`python-venv` isolation**
Spack has unique requirements for Python because it:
1. installs every package in its own independent directory, and
2. allows users to register *external* python installations.
External installations may contain their own installed packages that can interfere
with Spack installations, and some distributions (Debian and Ubuntu) even change the
`sysconfig` in ways that alter the installation layout of installed Python packages
(e.g., with the addition of a `/local` prefix on Debian or Ubuntu). To isolate Spack
from these and other issues, we now insert a small `python-venv` package in between
`python` and packages that need to install Python code. This isolates Spack's build
environment, isolates Spack from any issues with an external python, and resolves a
large number of issues we've had with Python installations.
See #40773 for further details.
## New commands, options, and directives
* Allow packages to be pushed to build cache after install from source (#42423)
* `spack develop`: stage build artifacts in same root as non-dev builds (#41373)
* Don't delete `spack develop` build artifacts after install (#43424)
* `spack find`: add options for local/upstream only (#42999)
* `spack logs`: print log files for packages (either partially built or installed) (#42202)
* `patch`: support reversing patches (#43040)
* `develop`: Add -b/--build-directory option to set build_directory package attribute (#39606)
* `spack list`: add `--namespace` / `--repo` option (#41948)
* directives: add `checked_by` field to `license()`, add some license checks
* `spack gc`: add options for environments and build dependencies (#41731)
* Add `--create` to `spack env activate` (#40896)
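A couple of the commands above in action, as a hedged sketch (the environment and package names are hypothetical):
```console
$ spack env activate --create demo   # create and activate `demo` in one step (#40896)
$ spack logs zlib                    # print the build log for zlib (#42202)
```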
## Performance improvements
* environment.py: fix excessive re-reads (#43746)
* ruamel yaml: fix quadratic complexity bug (#43745)
* Refactor to improve `spec format` speed (#43712)
* Do not acquire a write lock on the env post install if no views (#43505)
* asp.py: fewer calls to `spec.copy()` (#43715)
* spec.py: early return in `__str__`
* avoid `jinja2` import at startup unless needed (#43237)
## Other new features of note
* `archspec`: update to `v0.2.4`: support for Windows, bugfixes for `neoverse-v1` and
`neoverse-v2` detection.
* `spack config get`/`blame`: with no args, show entire config
* `spack env create <env>`: create the environment in a directory if `<env>` is dir-like (#44024)
* ASP-based solver: update os compatibility for macOS (#43862)
* Add handling of custom ssl certs in urllib ops (#42953)
* Add ability to rename environments (#43296)
* Add config option and compiler support to reuse across OS's (#42693)
* Support for prereleases (#43140)
* Only reuse externals when configured (#41707)
* Environments: Add support for including views (#42250)
* Make signed/unsigned a mirror configuration property (#41507)
## Removals, deprecations, and syntax changes
* remove `dpcpp` compiler and package (#43418)
* `spack load`: remove --only argument (#42120)
## Notable Bugfixes
* repo.py: drop deleted packages from provider cache (#43779)
* Allow `+` in module file names (#41999)
* `cmd/python`: use runpy to allow multiprocessing in scripts (#41789)
* Show extension commands with spack -h (#41726)
* Support environment variable expansion inside module projections (#42917)
* Alert user to failed concretizations (#42655)
* shell: fix zsh color formatting for PS1 in environments (#39497)
* spack mirror create --all: include patches (#41579)
## Spack community stats
* 7,994 total packages; 525 since `v0.21.0`
* 178 new Python packages, 5 new R packages
* 358 people contributed to this release
* 344 committers to packages
* 45 committers to core
# v0.21.2 (2024-03-01)
## Bugfixes
@@ -27,7 +344,7 @@
- spack graph: fix coloring with environments (#41240)
- spack info: sort variants in --variants-by-name (#41389)
- Spec.format: error on old style format strings (#41934)
- ASP-based solver:
- fix infinite recursion when computing concretization errors (#41061)
- don't error for type mismatch on preferences (#41138)
- don't emit spurious debug output (#41218)

View File

@@ -32,7 +32,7 @@
Spack is a multi-platform package manager that builds and installs
multiple versions and configurations of software. It works on Linux,
macOS, Windows, and many supercomputers. Spack is non-destructive: installing a
macOS, and many supercomputers. Spack is non-destructive: installing a
new version of a package does not break existing installations, so many
configurations of the same package can coexist.

View File

@@ -144,5 +144,3 @@ switch($SpackSubCommand)
"unload" {Invoke-SpackLoad}
default {python "$Env:SPACK_ROOT/bin/spack" $SpackCMD_params $SpackSubCommand $SpackSubCommandArgs}
}
exit $LASTEXITCODE

View File

@@ -0,0 +1,16 @@
# -------------------------------------------------------------------------
# This is the default configuration for Spack's module file generation.
#
# Settings here are versioned with Spack and are intended to provide
# sensible defaults out of the box. Spack maintainers should edit this
# file to keep it current.
#
# Users can override these settings by editing the following files.
#
# Per-spack-instance settings (overrides defaults):
# $SPACK_ROOT/etc/spack/modules.yaml
#
# Per-user settings (overrides default and site settings):
# ~/.spack/modules.yaml
# -------------------------------------------------------------------------
modules: {}

View File

@@ -0,0 +1,19 @@
# -------------------------------------------------------------------------
# This file controls default concretization preferences for Spack.
#
# Settings here are versioned with Spack and are intended to provide
# sensible defaults out of the box. Spack maintainers should edit this
# file to keep it current.
#
# Users can override these settings by editing the following files.
#
# Per-spack-instance settings (overrides defaults):
# $SPACK_ROOT/etc/spack/packages.yaml
#
# Per-user settings (overrides default and site settings):
# ~/.spack/packages.yaml
# -------------------------------------------------------------------------
packages:
all:
providers:
iconv: [glibc, musl, libiconv]

View File

@@ -0,0 +1,19 @@
# -------------------------------------------------------------------------
# This file controls default concretization preferences for Spack.
#
# Settings here are versioned with Spack and are intended to provide
# sensible defaults out of the box. Spack maintainers should edit this
# file to keep it current.
#
# Users can override these settings by editing the following files.
#
# Per-spack-instance settings (overrides defaults):
# $SPACK_ROOT/etc/spack/packages.yaml
#
# Per-user settings (overrides default and site settings):
# ~/.spack/packages.yaml
# -------------------------------------------------------------------------
packages:
all:
providers:
iconv: [glibc, musl, libiconv]

View File

@@ -38,9 +38,10 @@ packages:
lapack: [openblas, amdlibflame]
libc: [glibc, musl]
libgfortran: [ gcc-runtime ]
libglx: [mesa+glx]
libglx: [mesa+glx, mesa18+glx]
libifcore: [ intel-oneapi-runtime ]
libllvm: [llvm]
libosmesa: [mesa+osmesa, mesa18+osmesa]
lua-lang: [lua, lua-luajit-openresty, lua-luajit]
luajit: [lua-luajit-openresty, lua-luajit]
mariadb-client: [mariadb-c-client, mariadb]

View File

@@ -1433,12 +1433,22 @@ the reserved keywords ``platform``, ``os`` and ``target``:
$ spack install libelf os=ubuntu18.04
$ spack install libelf target=broadwell
or together by using the reserved keyword ``arch``:
.. code-block:: console
$ spack install libelf arch=cray-CNL10-haswell
Normally users don't have to bother specifying the architecture if they
are installing software for their current host, as in that case the
values will be detected automatically. If you need fine-grained control
over which packages use which targets (or over *all* packages' default
target), see :ref:`package-preferences`.
.. admonition:: Cray machines
The situation is a little bit different for Cray machines and a detailed
explanation on how the architecture can be set on them can be found at :ref:`cray-support`
.. _support-for-microarchitectures:

View File

@@ -147,15 +147,6 @@ example, the ``bash`` shell is used to run the ``autogen.sh`` script.
def autoreconf(self, spec, prefix):
which("bash")("autogen.sh")
If the ``package.py`` has build instructions in a separate
:ref:`builder class <multiple_build_systems>`, the signature for a phase changes slightly:
.. code-block:: python
class AutotoolsBuilder(AutotoolsBuilder):
def autoreconf(self, pkg, spec, prefix):
which("bash")("autogen.sh")
"""""""""""""""""""""""""""""""""""""""
patching configure or Makefile.in files
"""""""""""""""""""""""""""""""""""""""

View File

@@ -25,7 +25,7 @@ use Spack to build packages with the tools.
The Spack Python class ``IntelOneapiPackage`` is a base class that is
used by ``IntelOneapiCompilers``, ``IntelOneapiMkl``,
``IntelOneapiTbb`` and other classes to implement the oneAPI
packages. Search for ``oneAPI`` at `packages.spack.io <https://packages.spack.io>`_ for the full
packages. Search for ``oneAPI`` at `<packages.spack.io>`_ for the full
list of available oneAPI packages, or use::
spack list -d oneAPI

View File

@@ -11,8 +11,7 @@ Chaining Spack Installations
You can point your Spack installation to another installation to use any
packages that are installed there. To register the other Spack instance,
you can add it as an entry to ``upstreams.yaml`` at any of the
:ref:`configuration-scopes`:
you can add it as an entry to ``upstreams.yaml``:
.. code-block:: yaml
@@ -23,8 +22,7 @@ you can add it as an entry to ``upstreams.yaml`` at any of the
install_tree: /path/to/another/spack/opt/spack
``install_tree`` must point to the ``opt/spack`` directory inside of the
Spack base directory, or the location of the ``install_tree`` defined
in :ref:`config.yaml <config-yaml>`.
Spack base directory.
Once the upstream Spack instance has been added, ``spack find`` will
automatically check the upstream instance when querying installed packages,

View File

@@ -1364,6 +1364,187 @@ This will write the private key to the file `dinosaur.priv`.
or for help on an issue or the Spack slack.
.. _cray-support:
-------------
Spack on Cray
-------------
Spack differs slightly when used on a Cray system. The architecture spec
can differentiate between the front-end and back-end processor and operating system.
For example, on Edison at NERSC, the back-end target processor
is "Ivy Bridge", so you can specify to use the back-end this way:
.. code-block:: console
$ spack install zlib target=ivybridge
You can also use the operating system to build against the back-end:
.. code-block:: console
$ spack install zlib os=CNL10
Notice that the name includes both the operating system name and the major
version number concatenated together.
Alternatively, if you want to build something for the front-end,
you can specify the front-end target processor. The processor for a login node
on Edison is "Sandy bridge" so we specify on the command line like so:
.. code-block:: console
$ spack install zlib target=sandybridge
And the front-end operating system is:
.. code-block:: console
$ spack install zlib os=SuSE11
^^^^^^^^^^^^^^^^^^^^^^^
Cray compiler detection
^^^^^^^^^^^^^^^^^^^^^^^
Spack can detect compilers using two methods. For the front-end, we treat
everything the same. The difference lies in back-end compiler detection.
Back-end compiler detection is made via the Tcl module avail command.
Once it detects the compiler it writes the appropriate PrgEnv and compiler
module name to compilers.yaml and sets the paths to each compiler with Cray's
compiler wrapper names (i.e. cc, CC, ftn). During build time, Spack will load
the correct PrgEnv and compiler module and will call the appropriate wrapper.
The compilers.yaml config file will also differ. There is a
modules section that is filled with the compiler's Programming Environment
and module name. On other systems, this field is empty []:
.. code-block:: yaml
- compiler:
modules:
- PrgEnv-intel
- intel/15.0.109
As mentioned earlier, the compiler paths will look different on a Cray system.
Since most compilers are invoked using cc, CC and ftn, the paths for each
compiler are replaced with their respective Cray compiler wrapper names:
.. code-block:: yaml
paths:
cc: cc
cxx: CC
f77: ftn
fc: ftn
As opposed to an explicit path to the compiler executable. This allows Spack
to call the Cray compiler wrappers during build time.
For more on compiler configuration, check out :ref:`compiler-config`.
Spack sets the default Cray link type to dynamic, to better match other
platforms. Individual packages can enable static linking (which is the
default outside of Spack on Cray systems) using the ``-static`` flag.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting defaults and using Cray modules
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you want to use default compilers for each PrgEnv and also be able
to load cray external modules, you will need to set up a ``packages.yaml``.
Here's an example of an external configuration for cray modules:
.. code-block:: yaml
packages:
mpich:
externals:
- spec: "mpich@7.3.1%gcc@5.2.0 arch=cray_xc-haswell-CNL10"
modules:
- cray-mpich
- spec: "mpich@7.3.1%intel@16.0.0.109 arch=cray_xc-haswell-CNL10"
modules:
- cray-mpich
all:
providers:
mpi: [mpich]
This tells Spack that any package depending on mpi should load the
cray-mpich module into the environment. You can then use whatever
environment variables, libraries, etc., are brought into the environment
via module load.
.. note::
For Cray-provided packages, it is best to use ``modules:`` instead of ``prefix:``
in ``packages.yaml``, because the Cray Programming Environment heavily relies on
modules (e.g., loading the ``cray-mpich`` module adds MPI libraries to the
compiler wrapper link line).
You can set the default compiler that Spack can use for each compiler type.
If you want to use the Cray defaults, then set them under ``all:`` in packages.yaml.
In the compiler field, set the compiler specs in your order of preference.
Whenever you build with that compiler type, Spack will concretize to that version.
Here is an example of a full packages.yaml used at NERSC:
.. code-block:: yaml
packages:
mpich:
externals:
- spec: "mpich@7.3.1%gcc@5.2.0 arch=cray_xc-CNL10-ivybridge"
modules:
- cray-mpich
- spec: "mpich@7.3.1%intel@16.0.0.109 arch=cray_xc-SuSE11-ivybridge"
modules:
- cray-mpich
buildable: False
netcdf:
externals:
- spec: "netcdf@4.3.3.1%gcc@5.2.0 arch=cray_xc-CNL10-ivybridge"
modules:
- cray-netcdf
- spec: "netcdf@4.3.3.1%intel@16.0.0.109 arch=cray_xc-CNL10-ivybridge"
modules:
- cray-netcdf
buildable: False
hdf5:
externals:
- spec: "hdf5@1.8.14%gcc@5.2.0 arch=cray_xc-CNL10-ivybridge"
modules:
- cray-hdf5
- spec: "hdf5@1.8.14%intel@16.0.0.109 arch=cray_xc-CNL10-ivybridge"
modules:
- cray-hdf5
buildable: False
all:
compiler: [gcc@5.2.0, intel@16.0.0.109]
providers:
mpi: [mpich]
Here we tell Spack that whenever we want to build with gcc, use version 5.2.0, and
if we want to build with Intel compilers, use version 16.0.0.109. We add a spec
for each compiler type for each of the Cray modules. This ensures that for each
compiler on our system we can use that external module.
For more on external packages check out the section :ref:`sec-external-packages`.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Using Linux containers on Cray machines
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Spack uses environment variables particular to the Cray programming
environment to determine which systems are Cray platforms. These
environment variables may be propagated into containers that are not
using the Cray programming environment.
To ensure that Spack does not autodetect the Cray programming
environment, unset the environment variable ``MODULEPATH``. This
will cause Spack to treat a linux container on a Cray system as a base
linux distro.
.. _windows_support:
----------------

View File

@@ -476,3 +476,9 @@ implemented using Python's built-in `sys.path
:py:mod:`spack.repo` module implements a custom `Python importer
<https://docs.python.org/2/library/imp.html>`_.
.. warning::
The mechanism for extending packages is not yet extensively tested,
and extending packages across repositories imposes inter-repo
dependencies, which may be hard to manage. Use this feature at your
own risk, but let us know if you have a use case for it.

View File

@@ -4,9 +4,9 @@ sphinx_design==0.5.0
sphinx-rtd-theme==2.0.0
python-levenshtein==0.25.1
docutils==0.20.1
pygments==2.18.0
pygments==2.17.2
urllib3==2.2.1
pytest==8.2.1
pytest==8.2.0
isort==5.13.2
black==24.4.2
flake8==7.0.0

View File

@@ -98,10 +98,3 @@ def path_filter_caller(*args, **kwargs):
if _func:
return holder_func(_func)
return holder_func
def sanitize_win_longpath(path: str) -> str:
"""Strip Windows extended path prefix from strings
Returns sanitized string.
no-op if extended path prefix is not present"""
return path.lstrip("\\\\?\\")

View File

@@ -187,18 +187,12 @@ def polite_filename(filename: str) -> str:
return _polite_antipattern().sub("_", filename)
def getuid() -> Union[str, int]:
"""Returns os getuid on non Windows
On Windows returns 0 for admin users, login string otherwise
This is in line with behavior from get_owner_uid which
always returns the login string on Windows
"""
def getuid():
if sys.platform == "win32":
import ctypes
# If not admin, use the string name of the login as a unique ID
if ctypes.windll.shell32.IsUserAnAdmin() == 0:
return os.getlogin()
return 1
return 0
else:
return os.getuid()
@@ -219,15 +213,6 @@ def _win_rename(src, dst):
os.replace(src, dst)
@system_path_filter
def msdos_escape_parens(path):
"""MS-DOS interprets parens as grouping parameters even in a quoted string"""
if sys.platform == "win32":
return path.replace("(", "^(").replace(")", "^)")
else:
return path
@system_path_filter
def rename(src, dst):
# On Windows, os.rename will fail if the destination file already exists
@@ -568,13 +553,7 @@ def exploding_archive_handler(tarball_container, stage):
@system_path_filter(arg_slice=slice(1))
def get_owner_uid(path, err_msg=None) -> Union[str, int]:
"""Returns owner UID of path destination
On non Windows this is the value of st_uid
On Windows this is the login string associated with the
owning user.
"""
def get_owner_uid(path, err_msg=None):
if not os.path.exists(path):
mkdirp(path, mode=stat.S_IRWXU)
@@ -766,6 +745,7 @@ def copy_tree(
src: str,
dest: str,
symlinks: bool = True,
allow_broken_symlinks: bool = sys.platform != "win32",
ignore: Optional[Callable[[str], bool]] = None,
_permissions: bool = False,
):
@@ -788,6 +768,8 @@ def copy_tree(
src (str): the directory to copy
dest (str): the destination directory
symlinks (bool): whether or not to preserve symlinks
allow_broken_symlinks (bool): whether or not to allow broken (dangling) symlinks,
On Windows, setting this to True will raise an exception. Defaults to true on unix.
ignore (typing.Callable): function indicating which files to ignore
_permissions (bool): for internal use only
@@ -795,6 +777,8 @@ def copy_tree(
IOError: if *src* does not match any files or directories
ValueError: if *src* is a parent directory of *dest*
"""
if allow_broken_symlinks and sys.platform == "win32":
raise llnl.util.symlink.SymlinkError("Cannot allow broken symlinks on Windows!")
if _permissions:
tty.debug("Installing {0} to {1}".format(src, dest))
else:
@@ -838,7 +822,7 @@ def copy_tree(
if islink(s):
link_target = resolve_link_target_relative_to_the_link(s)
if symlinks:
target = readlink(s)
target = os.readlink(s)
if os.path.isabs(target):
def escaped_path(path):
@@ -867,14 +851,16 @@ def escaped_path(path):
copy_mode(s, d)
for target, d, s in links:
symlink(target, d)
symlink(target, d, allow_broken_symlinks=allow_broken_symlinks)
if _permissions:
set_install_permissions(d)
copy_mode(s, d)
@system_path_filter
def install_tree(src, dest, symlinks=True, ignore=None):
def install_tree(
src, dest, symlinks=True, ignore=None, allow_broken_symlinks=sys.platform != "win32"
):
"""Recursively install an entire directory tree rooted at *src*.
Same as :py:func:`copy_tree` with the addition of setting proper
@@ -885,12 +871,21 @@ def install_tree(src, dest, symlinks=True, ignore=None):
dest (str): the destination directory
symlinks (bool): whether or not to preserve symlinks
ignore (typing.Callable): function indicating which files to ignore
allow_broken_symlinks (bool): whether or not to allow broken (dangling) symlinks,
On Windows, setting this to True will raise an exception.
Raises:
IOError: if *src* does not match any files or directories
ValueError: if *src* is a parent directory of *dest*
"""
copy_tree(src, dest, symlinks=symlinks, ignore=ignore, _permissions=True)
copy_tree(
src,
dest,
symlinks=symlinks,
allow_broken_symlinks=allow_broken_symlinks,
ignore=ignore,
_permissions=True,
)
@system_path_filter
@@ -2434,10 +2429,9 @@ def add_library_dependent(self, *dest):
"""
for pth in dest:
if os.path.isfile(pth):
new_pth = pathlib.Path(pth).parent
self._additional_library_dependents.add(pathlib.Path(pth).parent)
else:
new_pth = pathlib.Path(pth)
self._additional_library_dependents.add(new_pth)
self._additional_library_dependents.add(pathlib.Path(pth))
@property
def rpaths(self):
@@ -2515,14 +2509,8 @@ def establish_link(self):
# for each binary install dir in self.pkg (i.e. pkg.prefix.bin, pkg.prefix.lib)
# install a symlink to each dependent library
# do not rpath for system libraries included in the dag
# we should not be modifying libraries managed by the Windows system
# as this will negatively impact linker behavior and can result in permission
# errors if those system libs are not modifiable by Spack
if "windows-system" not in getattr(self.pkg, "tags", []):
for library, lib_dir in itertools.product(self.rpaths, self.library_dependents):
self._link(library, lib_dir)
for library, lib_dir in itertools.product(self.rpaths, self.library_dependents):
self._link(library, lib_dir)
@system_path_filter

View File

@@ -8,75 +8,100 @@
import subprocess
import sys
import tempfile
from typing import Union
from llnl.util import lang, tty
from ..path import sanitize_win_longpath, system_path_filter
from ..path import system_path_filter
if sys.platform == "win32":
from win32file import CreateHardLink
is_windows = sys.platform == "win32"
def _windows_symlink(
src: str, dst: str, target_is_directory: bool = False, *, dir_fd: Union[int, None] = None
):
"""On Windows with System Administrator privileges this will be a normal symbolic link via
os.symlink. On Windows without privledges the link will be a junction for a directory and a
hardlink for a file. On Windows the various link types are:
Symbolic Link: A link to a file or directory on the same or different volume (drive letter) or
even to a remote file or directory (using UNC in its path). Need System Administrator
privileges to make these.
def symlink(source_path: str, link_path: str, allow_broken_symlinks: bool = not is_windows):
"""
Create a link.
Hard Link: A link to a file on the same volume (drive letter) only. Every file (file's data)
has at least 1 hard link (file's name). But when this method creates a new hard link there will
be 2. Deleting all hard links effectively deletes the file. Don't need System Administrator
privileges.
On non-Windows and Windows with System Administrator
privleges this will be a normal symbolic link via
os.symlink.
Junction: A link to a directory on the same or different volume (drive letter) but not to a
remote directory. Don't need System Administrator privileges."""
source_path = os.path.normpath(src)
On Windows without privledges the link will be a
junction for a directory and a hardlink for a file.
On Windows the various link types are:
Symbolic Link: A link to a file or directory on the
same or different volume (drive letter) or even to
a remote file or directory (using UNC in its path).
Need System Administrator privileges to make these.
Hard Link: A link to a file on the same volume (drive
letter) only. Every file (file's data) has at least 1
hard link (file's name). But when this method creates
a new hard link there will be 2. Deleting all hard
links effectively deletes the file. Don't need System
Administrator privileges.
Junction: A link to a directory on the same or different
volume (drive letter) but not to a remote directory. Don't
need System Administrator privileges.
Parameters:
source_path (str): The real file or directory that the link points to.
Must be absolute OR relative to the link.
link_path (str): The path where the link will exist.
allow_broken_symlinks (bool): On Linux or Mac, don't raise an exception if the source_path
doesn't exist. This will still raise an exception on Windows.
"""
source_path = os.path.normpath(source_path)
win_source_path = source_path
link_path = os.path.normpath(dst)
link_path = os.path.normpath(link_path)
# Perform basic checks to make sure symlinking will succeed
if os.path.lexists(link_path):
raise AlreadyExistsError(f"Link path ({link_path}) already exists. Cannot create link.")
# Never allow broken links on Windows.
if sys.platform == "win32" and allow_broken_symlinks:
raise ValueError("allow_broken_symlinks parameter cannot be True on Windows.")
if not os.path.exists(source_path):
if os.path.isabs(source_path):
# An absolute source path that does not exist will result in a broken link.
raise SymlinkError(
f"Source path ({source_path}) is absolute but does not exist. Resulting "
f"link would be broken so not making link."
if not allow_broken_symlinks:
# Perform basic checks to make sure symlinking will succeed
if os.path.lexists(link_path):
raise AlreadyExistsError(
f"Link path ({link_path}) already exists. Cannot create link."
)
else:
# os.symlink can create a link when the given source path is relative to
# the link path. Emulate this behavior and check to see if the source exists
# relative to the link path ahead of link creation to prevent broken
# links from being made.
link_parent_dir = os.path.dirname(link_path)
relative_path = os.path.join(link_parent_dir, source_path)
if os.path.exists(relative_path):
# In order to work on windows, the source path needs to be modified to be
# relative because hardlink/junction dont resolve relative paths the same
# way as os.symlink. This is ignored on other operating systems.
win_source_path = relative_path
else:
if not os.path.exists(source_path):
if os.path.isabs(source_path) and not allow_broken_symlinks:
# An absolute source path that does not exist will result in a broken link.
raise SymlinkError(
f"The source path ({source_path}) is not relative to the link path "
f"({link_path}). Resulting link would be broken so not making link."
f"Source path ({source_path}) is absolute but does not exist. Resulting "
f"link would be broken so not making link."
)
else:
# os.symlink can create a link when the given source path is relative to
# the link path. Emulate this behavior and check to see if the source exists
# relative to the link path ahead of link creation to prevent broken
# links from being made.
link_parent_dir = os.path.dirname(link_path)
relative_path = os.path.join(link_parent_dir, source_path)
if os.path.exists(relative_path):
# In order to work on windows, the source path needs to be modified to be
# relative because hardlink/junction dont resolve relative paths the same
# way as os.symlink. This is ignored on other operating systems.
win_source_path = relative_path
elif not allow_broken_symlinks:
raise SymlinkError(
f"The source path ({source_path}) is not relative to the link path "
f"({link_path}). Resulting link would be broken so not making link."
)
# Create the symlink
if not _windows_can_symlink():
if sys.platform == "win32" and not _windows_can_symlink():
_windows_create_link(win_source_path, link_path)
else:
os.symlink(source_path, link_path, target_is_directory=os.path.isdir(source_path))
def _windows_islink(path: str) -> bool:
def islink(path: str) -> bool:
"""Override os.islink to give correct answer for spack logic.
For Non-Windows: a link can be determined with the os.path.islink method.
@@ -222,9 +247,9 @@ def _windows_create_junction(source: str, link: str):
out, err = proc.communicate()
tty.debug(out.decode())
if proc.returncode != 0:
err_str = err.decode()
tty.error(err_str)
raise SymlinkError("Make junction command returned a non-zero return code.", err_str)
err = err.decode()
tty.error(err)
raise SymlinkError("Make junction command returned a non-zero return code.", err)
def _windows_create_hard_link(path: str, link: str):
@@ -244,14 +269,14 @@ def _windows_create_hard_link(path: str, link: str):
CreateHardLink(link, path)
def _windows_readlink(path: str, *, dir_fd=None):
def readlink(path: str):
"""Spack utility to override of os.readlink method to work cross platform"""
if _windows_is_hardlink(path):
return _windows_read_hard_link(path)
elif _windows_is_junction(path):
return _windows_read_junction(path)
else:
return sanitize_win_longpath(os.readlink(path, dir_fd=dir_fd))
return os.readlink(path)
def _windows_read_hard_link(link: str) -> str:
@@ -313,16 +338,6 @@ def resolve_link_target_relative_to_the_link(link):
return os.path.join(link_dir, target)
if sys.platform == "win32":
symlink = _windows_symlink
readlink = _windows_readlink
islink = _windows_islink
else:
symlink = os.symlink
readlink = os.readlink
islink = os.path.islink
class SymlinkError(RuntimeError):
"""Exception class for errors raised while creating symlinks,
junctions and hard links

View File

@@ -4,7 +4,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
#: PEP440 canonical <major>.<minor>.<micro>.<devN> string
__version__ = "0.23.0.dev0"
__version__ = "0.22.0"
spack_version = __version__

View File

@@ -421,10 +421,6 @@ def _check_patch_urls(pkgs, error_cls):
r"^https?://(?:patch-diff\.)?github(?:usercontent)?\.com/"
r".+/.+/(?:commit|pull)/[a-fA-F0-9]+\.(?:patch|diff)"
)
github_pull_commits_re = (
r"^https?://(?:patch-diff\.)?github(?:usercontent)?\.com/"
r".+/.+/pull/\d+/commits/[a-fA-F0-9]+\.(?:patch|diff)"
)
# Only .diff URLs have stable/full hashes:
# https://forum.gitlab.com/t/patches-with-full-index/29313
gitlab_patch_url_re = (
@@ -440,24 +436,14 @@ def _check_patch_urls(pkgs, error_cls):
if not isinstance(patch, spack.patch.UrlPatch):
continue
if re.match(github_pull_commits_re, patch.url):
url = re.sub(r"/pull/\d+/commits/", r"/commit/", patch.url)
url = re.sub(r"^(.*)(?<!full_index=1)$", r"\1?full_index=1", url)
errors.append(
error_cls(
f"patch URL in package {pkg_cls.name} "
+ "must not be a pull request commit; "
+ f"instead use {url}",
[patch.url],
)
)
elif re.match(github_patch_url_re, patch.url):
if re.match(github_patch_url_re, patch.url):
full_index_arg = "?full_index=1"
if not patch.url.endswith(full_index_arg):
errors.append(
error_cls(
f"patch URL in package {pkg_cls.name} "
+ f"must end with {full_index_arg}",
"patch URL in package {0} must end with {1}".format(
pkg_cls.name, full_index_arg
),
[patch.url],
)
)
@@ -465,7 +451,9 @@ def _check_patch_urls(pkgs, error_cls):
if not patch.url.endswith(".diff"):
errors.append(
error_cls(
f"patch URL in package {pkg_cls.name} must end with .diff",
"patch URL in package {0} must end with .diff".format(
pkg_cls.name
),
[patch.url],
)
)

View File

@@ -29,7 +29,6 @@
import llnl.util.lang
import llnl.util.tty as tty
from llnl.util.filesystem import BaseDirectoryVisitor, mkdirp, visit_directory_tree
from llnl.util.symlink import readlink
import spack.caches
import spack.cmd
@@ -659,7 +658,7 @@ def get_buildfile_manifest(spec):
# 2. paths are used as strings.
for rel_path in visitor.symlinks:
abs_path = os.path.join(root, rel_path)
link = readlink(abs_path)
link = os.readlink(abs_path)
if os.path.isabs(link) and link.startswith(spack.store.STORE.layout.root):
data["link_to_relocate"].append(rel_path)
@@ -2002,7 +2001,6 @@ def install_root_node(spec, unsigned=False, force=False, sha256=None):
with spack.util.path.filter_padding():
tty.msg('Installing "{0}" from a buildcache'.format(spec.format()))
extract_tarball(spec, download_result, force)
spec.package.windows_establish_runtime_linkage()
spack.hooks.post_install(spec, False)
spack.store.STORE.db.add(spec, spack.store.STORE.layout)

View File

@@ -213,18 +213,15 @@ def _root_spec(spec_str: str) -> str:
Args:
spec_str: spec to be bootstrapped. Must be without compiler and target.
"""
# Add a compiler and platform requirement to the root spec.
# Add a compiler requirement to the root spec.
platform = str(spack.platforms.host())
if platform == "darwin":
spec_str += " %apple-clang"
elif platform == "windows":
spec_str += " %msvc"
elif platform == "linux":
spec_str += " %gcc"
elif platform == "freebsd":
spec_str += " %clang"
spec_str += f" platform={platform}"
target = archspec.cpu.host().family
spec_str += f" target={target}"

View File

@@ -43,7 +43,7 @@
from collections import defaultdict
from enum import Flag, auto
from itertools import chain
from typing import Dict, List, Set, Tuple
from typing import List, Set, Tuple
import llnl.util.tty as tty
from llnl.string import plural
@@ -91,7 +91,7 @@
)
from spack.util.executable import Executable
from spack.util.log_parse import make_log_context, parse_log_events
from spack.util.module_cmd import load_module, path_from_modules
from spack.util.module_cmd import load_module, module, path_from_modules
#
# This can be set by the user to globally disable parallel builds.
@@ -190,6 +190,14 @@ def __call__(self, *args, **kwargs):
return super().__call__(*args, **kwargs)
def _on_cray():
host_platform = spack.platforms.host()
host_os = host_platform.operating_system("default_os")
on_cray = str(host_platform) == "cray"
using_cnl = re.match(r"cnl\d+", str(host_os))
return on_cray, using_cnl
def clean_environment():
# Stuff in here sanitizes the build environment to eliminate
# anything the user has set that may interfere. We apply it immediately
@@ -233,6 +241,17 @@ def clean_environment():
if varname.endswith("_ROOT") and varname != "SPACK_ROOT":
env.unset(varname)
# On Cray "cluster" systems, unset CRAY_LD_LIBRARY_PATH to avoid
# interference with Spack dependencies.
# CNL requires these variables to be set (or at least some of them,
# depending on the CNL version).
on_cray, using_cnl = _on_cray()
if on_cray and not using_cnl:
env.unset("CRAY_LD_LIBRARY_PATH")
for varname in os.environ.keys():
if "PKGCONF" in varname:
env.unset(varname)
# Unset the following variables because they can affect installation of
# Autotools and CMake packages.
build_system_vars = [
@@ -362,7 +381,11 @@ def set_compiler_environment_variables(pkg, env):
_add_werror_handling(keep_werror, env)
# Set the target parameters that the compiler will add
isa_arg = spec.architecture.target.optimization_flags(compiler)
# Don't set on cray platform because the targeting module handles this
if spec.satisfies("platform=cray"):
isa_arg = ""
else:
isa_arg = spec.architecture.target.optimization_flags(compiler)
env.set("SPACK_TARGET_ARGS", isa_arg)
# Trap spack-tracked compiler flags as appropriate.
@@ -707,28 +730,12 @@ def _static_to_shared_library(arch, compiler, static_lib, shared_lib=None, **kwa
return compiler(*compiler_args, output=compiler_output)
def _get_rpath_deps_from_spec(
spec: spack.spec.Spec, transitive_rpaths: bool
) -> List[spack.spec.Spec]:
if not transitive_rpaths:
return spec.dependencies(deptype=dt.LINK)
by_name: Dict[str, spack.spec.Spec] = {}
for dep in spec.traverse(root=False, deptype=dt.LINK):
lookup = by_name.get(dep.name)
if lookup is None:
by_name[dep.name] = dep
elif lookup.version < dep.version:
by_name[dep.name] = dep
return list(by_name.values())
def get_rpath_deps(pkg: spack.package_base.PackageBase) -> List[spack.spec.Spec]:
"""Return immediate or transitive dependencies (depending on the package) that need to be
rpath'ed. If a package occurs multiple times, the newest version is kept."""
return _get_rpath_deps_from_spec(pkg.spec, pkg.transitive_rpaths)
def get_rpath_deps(pkg):
"""Return immediate or transitive RPATHs depending on the package."""
if pkg.transitive_rpaths:
return [d for d in pkg.spec.traverse(root=False, deptype=("link"))]
else:
return pkg.spec.dependencies(deptype="link")
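The transitive branch of the new helper deduplicates link dependencies by package name and keeps the highest version. A self-contained sketch of that policy (Dep is a hypothetical stand-in for spack.spec.Spec):

from typing import Dict, List, NamedTuple

class Dep(NamedTuple):  # hypothetical stand-in for spack.spec.Spec
    name: str
    version: tuple

def newest_by_name(deps: List[Dep]) -> List[Dep]:
    by_name: Dict[str, Dep] = {}
    for dep in deps:
        seen = by_name.get(dep.name)
        if seen is None or seen.version < dep.version:
            by_name[dep.name] = dep
    return list(by_name.values())

# Two occurrences of zlib in the DAG: only 1.3 is kept for RPATH purposes.
assert newest_by_name([Dep("zlib", (1, 2)), Dep("zlib", (1, 3))]) == [Dep("zlib", (1, 3))]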
def get_rpaths(pkg):
@@ -810,6 +817,14 @@ def setup_package(pkg, dirty, context: Context = Context.BUILD):
for mod in pkg.compiler.modules:
load_module(mod)
# kludge to handle cray mpich and libsci being automatically loaded by
# PrgEnv modules on cray platform. Module unload does no damage when
# unnecessary
on_cray, _ = _on_cray()
if on_cray and not dirty:
for mod in ["cray-mpich", "cray-libsci"]:
module("unload", mod)
if target and target.module_name:
load_module(target.module_name)


@@ -110,8 +110,9 @@ def cuda_flags(arch_list):
# From the NVIDIA install guide we know of conflicts for particular
# platforms (linux, darwin), architectures (x86, powerpc) and compilers
# (gcc, clang). We don't restrict %gcc and %clang conflicts to
# platform=linux, since they may apply to platform=darwin. We currently
# do not provide conflicts for platform=darwin with %apple-clang.
# platform=linux, since they should also apply to platform=cray, and may
# apply to platform=darwin. We currently do not provide conflicts for
# platform=darwin with %apple-clang.
# Linux x86_64 compiler conflicts from here:
# https://gist.github.com/ax3l/9489132
@@ -136,14 +137,11 @@ def cuda_flags(arch_list):
conflicts("%gcc@11.2:", when="+cuda ^cuda@:11.5")
conflicts("%gcc@12:", when="+cuda ^cuda@:11.8")
conflicts("%gcc@13:", when="+cuda ^cuda@:12.3")
conflicts("%gcc@14:", when="+cuda ^cuda@:12.4")
conflicts("%clang@12:", when="+cuda ^cuda@:11.4.0")
conflicts("%clang@13:", when="+cuda ^cuda@:11.5")
conflicts("%clang@14:", when="+cuda ^cuda@:11.7")
conflicts("%clang@15:", when="+cuda ^cuda@:12.0")
conflicts("%clang@16:", when="+cuda ^cuda@:12.1")
conflicts("%clang@17:", when="+cuda ^cuda@:12.3")
conflicts("%clang@18:", when="+cuda ^cuda@:12.4")
conflicts("%clang@16:", when="+cuda ^cuda@:12.3")
# https://gist.github.com/ax3l/9489132#gistcomment-3860114
conflicts("%gcc@10", when="+cuda ^cuda@:11.4.0")


@@ -846,7 +846,6 @@ def scalapack_libs(self):
"^mpich@2:" in spec_root
or "^cray-mpich" in spec_root
or "^mvapich2" in spec_root
or "^mvapich" in spec_root
or "^intel-mpi" in spec_root
or "^intel-oneapi-mpi" in spec_root
or "^intel-parallel-studio" in spec_root
@@ -937,15 +936,32 @@ def mpi_setup_dependent_build_environment(self, env, dependent_spec, compilers_o
"I_MPI_ROOT": self.normalize_path("mpi"),
}
compiler_wrapper_commands = self.mpi_compiler_wrappers
wrapper_vars.update(
{
"MPICC": compiler_wrapper_commands["MPICC"],
"MPICXX": compiler_wrapper_commands["MPICXX"],
"MPIF77": compiler_wrapper_commands["MPIF77"],
"MPIF90": compiler_wrapper_commands["MPIF90"],
}
)
# CAUTION - SIMILAR code in:
# var/spack/repos/builtin/packages/mpich/package.py
# var/spack/repos/builtin/packages/openmpi/package.py
# var/spack/repos/builtin/packages/mvapich2/package.py
#
# On Cray, the regular compiler wrappers *are* the MPI wrappers.
if "platform=cray" in self.spec:
# TODO: Confirm
wrapper_vars.update(
{
"MPICC": compilers_of_client["CC"],
"MPICXX": compilers_of_client["CXX"],
"MPIF77": compilers_of_client["F77"],
"MPIF90": compilers_of_client["F90"],
}
)
else:
compiler_wrapper_commands = self.mpi_compiler_wrappers
wrapper_vars.update(
{
"MPICC": compiler_wrapper_commands["MPICC"],
"MPICXX": compiler_wrapper_commands["MPICXX"],
"MPIF77": compiler_wrapper_commands["MPIF77"],
"MPIF90": compiler_wrapper_commands["MPIF90"],
}
)
# Ensure that the directory containing the compiler wrappers is in the
# PATH. Spack packages add `prefix.bin` to their dependents' paths,
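A condensed sketch of the branch above: on Cray the client compiler wrappers double as the MPI wrappers, while everywhere else the package's own wrapper commands are used (both dicts are assumed example inputs):

def mpi_wrapper_vars(on_cray: bool, compilers_of_client: dict, wrapper_commands: dict) -> dict:
    if on_cray:
        # On Cray, the regular compiler wrappers *are* the MPI wrappers.
        return {
            "MPICC": compilers_of_client["CC"],
            "MPICXX": compilers_of_client["CXX"],
            "MPIF77": compilers_of_client["F77"],
            "MPIF90": compilers_of_client["F90"],
        }
    return {key: wrapper_commands[key] for key in ("MPICC", "MPICXX", "MPIF77", "MPIF90")}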


@@ -24,6 +24,7 @@ class MSBuildPackage(spack.package_base.PackageBase):
build_system("msbuild")
conflicts("platform=linux", when="build_system=msbuild")
conflicts("platform=darwin", when="build_system=msbuild")
conflicts("platform=cray", when="build_system=msbuild")
@spack.builder.builder("msbuild")


@@ -24,6 +24,7 @@ class NMakePackage(spack.package_base.PackageBase):
build_system("nmake")
conflicts("platform=linux", when="build_system=nmake")
conflicts("platform=darwin", when="build_system=nmake")
conflicts("platform=cray", when="build_system=nmake")
@spack.builder.builder("nmake")
@@ -144,7 +145,7 @@ def install(self, pkg, spec, prefix):
opts += self.nmake_install_args()
if self.makefile_name:
opts.append("/F{}".format(self.makefile_name))
opts.append(self.define("PREFIX", fs.windows_sfn(prefix)))
opts.append(self.define("PREFIX", prefix))
with fs.working_dir(self.build_directory):
inspect.getmodule(self.pkg).nmake(
*opts, *self.install_targets, ignore_quotes=self.ignore_quotes


@@ -36,8 +36,9 @@ class IntelOneApiPackage(Package):
"target=ppc64:",
"target=ppc64le:",
"target=aarch64:",
"platform=darwin",
"platform=windows",
"platform=darwin:",
"platform=cray:",
"platform=windows:",
]:
conflicts(c, msg="This package is only available for x86_64 and Linux")


@@ -44,7 +44,6 @@
from spack import traverse
from spack.error import SpackError
from spack.reporters import CDash, CDashConfiguration
from spack.reporters.cdash import SPACK_CDASH_TIMEOUT
from spack.reporters.cdash import build_stamp as cdash_build_stamp
# See https://docs.gitlab.com/ee/ci/yaml/#retry for descriptions of conditions
@@ -684,22 +683,6 @@ def generate_gitlab_ci_yaml(
"instead.",
)
def ensure_expected_target_path(path):
"""Returns passed paths with all Windows path separators exchanged
for posix separators only if copy_only_pipeline is enabled
This is required as copy_only_pipelines are a unique scenario where
the generate job and child pipelines are run on different platforms.
To make this compatible w/ Windows, we cannot write Windows style path separators
that will be consumed on by the Posix copy job runner.
TODO (johnwparent): Refactor config + cli read/write to deal only in posix
style paths
"""
if copy_only_pipeline and path:
path = path.replace("\\", "/")
return path
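A minimal reproduction of the helper's behavior (the path is an assumed example):

def to_posix_if_copy_only(path: str, copy_only_pipeline: bool) -> str:
    # Windows generate jobs must not hand backslash paths to POSIX child pipelines.
    if copy_only_pipeline and path:
        path = path.replace("\\", "/")
    return path

assert to_posix_if_copy_only("jobs_scratch_dir\\cloud_ci", True) == "jobs_scratch_dir/cloud_ci"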
pipeline_mirrors = spack.mirror.MirrorCollection(binary=True)
deprecated_mirror_config = False
buildcache_destination = None
@@ -823,7 +806,7 @@ def ensure_expected_target_path(path):
if scope not in include_scopes and scope not in env_includes:
include_scopes.insert(0, scope)
env_includes.extend(include_scopes)
env_yaml_root["spack"]["include"] = [ensure_expected_target_path(i) for i in env_includes]
env_yaml_root["spack"]["include"] = env_includes
if "gitlab-ci" in env_yaml_root["spack"] and "ci" not in env_yaml_root["spack"]:
env_yaml_root["spack"]["ci"] = env_yaml_root["spack"].pop("gitlab-ci")
@@ -1244,9 +1227,6 @@ def main_script_replacements(cmd):
"SPACK_REBUILD_EVERYTHING": str(rebuild_everything),
"SPACK_REQUIRE_SIGNING": os.environ.get("SPACK_REQUIRE_SIGNING", "False"),
}
output_vars = output_object["variables"]
for item, val in output_vars.items():
output_vars[item] = ensure_expected_target_path(val)
# TODO: Remove this block in Spack 0.23
if deprecated_mirror_config and remote_mirror_override:
@@ -1303,6 +1283,7 @@ def main_script_replacements(cmd):
sorted_output = {}
for output_key, output_value in sorted(output_object.items()):
sorted_output[output_key] = output_value
if known_broken_specs_encountered:
tty.error("This pipeline generated hashes known to be broken on develop:")
display_broken_spec_messages(broken_specs_url, known_broken_specs_encountered)
@@ -1497,12 +1478,6 @@ def copy_test_logs_to_artifacts(test_stage, job_test_dir):
copy_files_to_artifacts(os.path.join(test_stage, "*", "*.txt"), job_test_dir)
def win_quote(quote_str: str) -> str:
if IS_WINDOWS:
quote_str = f'"{quote_str}"'
return quote_str
def download_and_extract_artifacts(url, work_dir):
"""Look for gitlab artifacts.zip at the given url, and attempt to download
and extract the contents into the given work_dir
@@ -1525,7 +1500,7 @@ def download_and_extract_artifacts(url, work_dir):
request = Request(url, headers=headers)
request.get_method = lambda: "GET"
response = opener.open(request, timeout=SPACK_CDASH_TIMEOUT)
response = opener.open(request)
response_code = response.getcode()
if response_code != 200:
@@ -1967,9 +1942,9 @@ def compose_command_err_handling(args):
# but we need to handle EXEs (git, etc) ourselves
catch_exe_failure = (
"""
if ($LASTEXITCODE -ne 0){{
throw 'Command {} has failed'
}}
if ($LASTEXITCODE -ne 0){
throw "Command {} has failed"
}
"""
if IS_WINDOWS
else ""
@@ -2201,13 +2176,13 @@ def __init__(self, ci_cdash):
def args(self):
return [
"--cdash-upload-url",
win_quote(self.upload_url),
self.upload_url,
"--cdash-build",
win_quote(self.build_name),
self.build_name,
"--cdash-site",
win_quote(self.site),
self.site,
"--cdash-buildstamp",
win_quote(self.build_stamp),
self.build_stamp,
]
@property # type: ignore
@@ -2273,7 +2248,7 @@ def create_buildgroup(self, opener, headers, url, group_name, group_type):
request = Request(url, data=enc_data, headers=headers)
response = opener.open(request, timeout=SPACK_CDASH_TIMEOUT)
response = opener.open(request)
response_code = response.getcode()
if response_code not in [200, 201]:
@@ -2319,7 +2294,7 @@ def populate_buildgroup(self, job_names):
request = Request(url, data=enc_data, headers=headers)
request.get_method = lambda: "PUT"
response = opener.open(request, timeout=SPACK_CDASH_TIMEOUT)
response = opener.open(request)
response_code = response.getcode()
if response_code != 200:


@@ -13,6 +13,7 @@
import shutil
import sys
import tempfile
import urllib.request
from typing import Dict, List, Optional, Tuple, Union
import llnl.util.tty as tty
@@ -53,7 +54,6 @@
from spack.oci.oci import (
copy_missing_layers_with_retry,
get_manifest_and_config_with_retry,
list_tags,
upload_blob_with_retry,
upload_manifest_with_retry,
)
@@ -856,7 +856,10 @@ def _config_from_tag(image_ref: ImageReference, tag: str) -> Optional[dict]:
def _update_index_oci(image_ref: ImageReference, tmpdir: str, pool: MaybePool) -> None:
tags = list_tags(image_ref)
request = urllib.request.Request(url=image_ref.tags_url())
response = spack.oci.opener.urlopen(request)
spack.oci.opener.ensure_status(request, response, 200)
tags = json.load(response)["tags"]
# Fetch all image config files in parallel
spec_dicts = pool.starmap(


@@ -31,6 +31,7 @@
level = "long"
SPACK_COMMAND = "spack"
MAKE_COMMAND = "make"
INSTALL_FAIL_CODE = 1
FAILED_CREATE_BUILDCACHE_CODE = 100
@@ -39,12 +40,6 @@ def deindent(desc):
return desc.replace("    ", "")
def unicode_escape(path: str) -> str:
"""Returns transformed path with any unicode
characters replaced with their corresponding escapes"""
return path.encode("unicode-escape").decode("utf-8")
def setup_parser(subparser):
setup_parser.parser = subparser
subparsers = subparser.add_subparsers(help="CI sub-commands")
@@ -556,35 +551,75 @@ def ci_rebuild(args):
# No hash match anywhere means we need to rebuild spec
# Start with spack arguments
spack_cmd = [SPACK_COMMAND, "--color=always", "--backtrace", "--verbose", "install"]
spack_cmd = [SPACK_COMMAND, "--color=always", "--backtrace", "--verbose"]
config = cfg.get("config")
if not config["verify_ssl"]:
spack_cmd.append("-k")
install_args = [f'--use-buildcache={spack_ci.win_quote("package:never,dependencies:only")}']
install_args = []
can_verify = spack_ci.can_verify_binaries()
verify_binaries = can_verify and spack_is_pr_pipeline is False
if not verify_binaries:
install_args.append("--no-check-signature")
slash_hash = spack_ci.win_quote("/" + job_spec.dag_hash())
slash_hash = "/{}".format(job_spec.dag_hash())
# Arguments when installing dependencies from cache
deps_install_args = install_args
# Arguments when installing the root from sources
deps_install_args = install_args + ["--only=dependencies"]
root_install_args = install_args + ["--keep-stage", "--only=package"]
root_install_args = install_args + [
"--keep-stage",
"--only=package",
"--use-buildcache=package:never,dependencies:only",
]
if cdash_handler:
# Add additional arguments to `spack install` for CDash reporting.
root_install_args.extend(cdash_handler.args())
root_install_args.append(slash_hash)
# ["x", "y"] -> "'x' 'y'"
args_to_string = lambda args: " ".join("'{}'".format(arg) for arg in args)
commands = [
# apparently there's a race when spack bootstraps? do it up front once
[SPACK_COMMAND, "-e", unicode_escape(env.path), "bootstrap", "now"],
spack_cmd + deps_install_args + [slash_hash],
spack_cmd + root_install_args + [slash_hash],
[SPACK_COMMAND, "-e", env.path, "bootstrap", "now"],
[
SPACK_COMMAND,
"-e",
env.path,
"env",
"depfile",
"-o",
"Makefile",
"--use-buildcache=package:never,dependencies:only",
slash_hash, # limit to spec we're building
],
[
# --output-sync requires GNU make 4.x.
# Old make errors when you pass it a flag it doesn't recognize,
# but it doesn't error or warn when you set unrecognized flags in
# this variable.
"export",
"GNUMAKEFLAGS=--output-sync=recurse",
],
[
MAKE_COMMAND,
"SPACK={}".format(args_to_string(spack_cmd)),
"SPACK_COLOR=always",
"SPACK_INSTALL_FLAGS={}".format(args_to_string(deps_install_args)),
"-j$(nproc)",
"install-deps/{}".format(
spack.environment.depfile.MakefileSpec(job_spec).safe_format(
"{name}-{version}-{hash}"
)
),
],
spack_cmd + ["install"] + root_install_args,
]
tty.debug("Installing {0} from source".format(job_spec.name))
install_exit_code = spack_ci.process_command("install", commands, repro_dir)
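The quoting lambda above flattens an argv list into single-quoted words so it can be interpolated into the Makefile-driven install. A sketch with an assumed command line:

def args_to_string(args):
    # ["x", "y"] -> "'x' 'y'", as the comment above notes
    return " ".join("'{}'".format(arg) for arg in args)

spack_cmd = ["spack", "--color=always", "--backtrace", "--verbose"]
print("SPACK={}".format(args_to_string(spack_cmd)))
# SPACK='spack' '--color=always' '--backtrace' '--verbose'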


@@ -106,8 +106,7 @@ def clean(parser, args):
# Then do the cleaning falling through the cases
if args.specs:
specs = spack.cmd.parse_specs(args.specs, concretize=False)
specs = list(spack.cmd.matching_spec_from_env(x) for x in specs)
specs = spack.cmd.parse_specs(args.specs, concretize=True)
for spec in specs:
msg = "Cleaning build stage [{0}]"
tty.msg(msg.format(spec.short_spec))


@@ -50,7 +50,7 @@
@B{++}, @r{--}, @r{~~}, @B{==} propagate variants to package dependencies
architecture variants:
@m{platform=platform} linux, darwin, freebsd, windows
@m{platform=platform} linux, darwin, cray, etc.
@m{os=operating_system} specific <operating_system>
@m{target=target} specific <target> processor
@m{arch=platform-os-target} shortcut for all three above


@@ -61,6 +61,7 @@ def install_kwargs_from_args(args):
"dependencies_use_cache": cache_opt(args.use_cache, dep_use_bc),
"dependencies_cache_only": cache_opt(args.cache_only, dep_use_bc),
"include_build_deps": args.include_build_deps,
"explicit": True, # Use true as a default for install command
"stop_at": args.until,
"unsigned": args.unsigned,
"install_deps": ("dependencies" in args.things_to_install),
@@ -472,7 +473,6 @@ def install_without_active_env(args, install_kwargs, reporter_factory):
require_user_confirmation_for_overwrite(concrete_specs, args)
install_kwargs["overwrite"] = [spec.dag_hash() for spec in concrete_specs]
installs = [s.package for s in concrete_specs]
install_kwargs["explicit"] = [s.dag_hash() for s in concrete_specs]
builder = PackageInstaller(installs, install_kwargs)
installs = [(s.package, install_kwargs) for s in concrete_specs]
builder = PackageInstaller(installs)
builder.install()


@@ -151,8 +151,7 @@ def is_installed(spec):
key=lambda s: s.dag_hash(),
)
with spack.store.STORE.db.read_transaction():
return [spec for spec in specs if is_installed(spec)]
return [spec for spec in specs if is_installed(spec)]
def dependent_environments(
@@ -240,8 +239,6 @@ def get_uninstall_list(args, specs: List[spack.spec.Spec], env: Optional[ev.Envi
print()
tty.info("The following environments still reference these specs:")
colify([e.name for e in other_dependent_envs.keys()], indent=4)
if env:
msgs.append("use `spack remove` to remove the spec from the current environment")
msgs.append("use `spack env remove` to remove environments")
msgs.append("use `spack uninstall --force` to override")
print()


@@ -695,6 +695,10 @@ def compiler_environment(self):
try:
# load modules and set env variables
for module in self.modules:
# On cray, mic-knl module cannot be loaded without cce module
# See: https://github.com/spack/spack/issues/3153
if os.environ.get("CRAY_CPU_TARGET") == "mic-knl":
spack.util.module_cmd.load_module("cce")
spack.util.module_cmd.load_module(module)
# apply other compiler environment changes


@@ -220,10 +220,10 @@ def _compiler_config_from_external(config):
operating_system = host_platform.operating_system("default_os")
target = host_platform.target("default_target").microarchitecture
else:
target = spec.architecture.target
target = spec.target
if not target:
target = spack.platforms.host().target("default_target")
target = target.microarchitecture
host_platform = spack.platforms.host()
target = host_platform.target("default_target").microarchitecture
operating_system = spec.os
if not operating_system:


@@ -96,8 +96,6 @@ def verbose_flag(self):
openmp_flag = "-fopenmp"
# C++ flags based on CMake Modules/Compiler/Clang.cmake
@property
def cxx11_flag(self):
if self.real_version < Version("3.3"):
@@ -122,24 +120,6 @@ def cxx17_flag(self):
return "-std=c++17"
@property
def cxx20_flag(self):
if self.real_version < Version("5.0"):
raise UnsupportedCompilerFlag(self, "the C++20 standard", "cxx20_flag", "< 5.0")
elif self.real_version < Version("11.0"):
return "-std=c++2a"
else:
return "-std=c++20"
@property
def cxx23_flag(self):
if self.real_version < Version("12.0"):
raise UnsupportedCompilerFlag(self, "the C++23 standard", "cxx23_flag", "< 12.0")
elif self.real_version < Version("17.0"):
return "-std=c++2b"
else:
return "-std=c++23"
@property
def c99_flag(self):
return "-std=c99"
@@ -162,10 +142,7 @@ def c17_flag(self):
def c23_flag(self):
if self.real_version < Version("9.0"):
raise UnsupportedCompilerFlag(self, "the C23 standard", "c23_flag", "< 9.0")
elif self.real_version < Version("18.0"):
return "-std=c2x"
else:
return "-std=c23"
return "-std=c2x"
@property
def cc_pic_flag(self):


@@ -15,7 +15,6 @@
import llnl.util.filesystem as fs
import llnl.util.tty as tty
from llnl.util.symlink import readlink
import spack.config
import spack.hash_types as ht
@@ -182,7 +181,7 @@ def deprecated_file_path(self, deprecated_spec, deprecator_spec=None):
base_dir = (
self.path_for_spec(deprecator_spec)
if deprecator_spec
else readlink(deprecated_spec.prefix)
else os.readlink(deprecated_spec.prefix)
)
yaml_path = os.path.join(


@@ -22,7 +22,7 @@
import llnl.util.tty as tty
import llnl.util.tty.color as clr
from llnl.util.link_tree import ConflictingSpecsError
from llnl.util.symlink import readlink, symlink
from llnl.util.symlink import symlink
import spack.compilers
import spack.concretize
@@ -662,7 +662,7 @@ def _current_root(self):
if not os.path.islink(self.root):
return None
root = readlink(self.root)
root = os.readlink(self.root)
if os.path.isabs(root):
return root
@@ -1948,19 +1948,13 @@ def install_specs(self, specs: Optional[List[Spec]] = None, **install_args):
specs = specs if specs is not None else roots
# Extend the set of specs to overwrite with modified dev specs and their parents
overwrite: Set[str] = set()
overwrite.update(install_args.get("overwrite", []), self._dev_specs_that_need_overwrite())
install_args["overwrite"] = overwrite
explicit: Set[str] = set()
explicit.update(
install_args.get("explicit", []),
(s.dag_hash() for s in specs),
(s.dag_hash() for s in roots),
install_args["overwrite"] = (
install_args.get("overwrite", []) + self._dev_specs_that_need_overwrite()
)
install_args["explicit"] = explicit
PackageInstaller([spec.package for spec in specs], install_args).install()
installs = [(spec.package, {**install_args, "explicit": spec in roots}) for spec in specs]
PackageInstaller(installs).install()
def all_specs_generator(self) -> Iterable[Spec]:
"""Returns a generator for all concrete specs"""
@@ -3042,56 +3036,54 @@ def included_config_scopes(self) -> List[spack.config.ConfigScope]:
for i, config_path in enumerate(reversed(includes)):
# allow paths to contain spack config/environment variables, etc.
config_path = substitute_path_variables(config_path)
include_url = urllib.parse.urlparse(config_path)
# If scheme is not valid, config_path is not a url
# of a type Spack is generally aware
if spack.util.url.validate_scheme(include_url.scheme):
# Transform file:// URLs to direct includes.
if include_url.scheme == "file":
config_path = urllib.request.url2pathname(include_url.path)
# Transform file:// URLs to direct includes.
if include_url.scheme == "file":
config_path = urllib.request.url2pathname(include_url.path)
# Any other URL should be fetched.
elif include_url.scheme in ("http", "https", "ftp"):
# Stage any remote configuration file(s)
staged_configs = (
os.listdir(self.config_stage_dir)
if os.path.exists(self.config_stage_dir)
else []
# Any other URL should be fetched.
elif include_url.scheme in ("http", "https", "ftp"):
# Stage any remote configuration file(s)
staged_configs = (
os.listdir(self.config_stage_dir)
if os.path.exists(self.config_stage_dir)
else []
)
remote_path = urllib.request.url2pathname(include_url.path)
basename = os.path.basename(remote_path)
if basename in staged_configs:
# Do NOT re-stage configuration files over existing
# ones with the same name since there is a risk of
# losing changes (e.g., from 'spack config update').
tty.warn(
"Will not re-stage configuration from {0} to avoid "
"losing changes to the already staged file of the "
"same name.".format(remote_path)
)
remote_path = urllib.request.url2pathname(include_url.path)
basename = os.path.basename(remote_path)
if basename in staged_configs:
# Do NOT re-stage configuration files over existing
# ones with the same name since there is a risk of
# losing changes (e.g., from 'spack config update').
tty.warn(
"Will not re-stage configuration from {0} to avoid "
"losing changes to the already staged file of the "
"same name.".format(remote_path)
)
# Recognize the configuration stage directory
# is flattened to ensure a single copy of each
# configuration file.
config_path = self.config_stage_dir
if basename.endswith(".yaml"):
config_path = os.path.join(config_path, basename)
else:
staged_path = spack.config.fetch_remote_configs(
config_path, str(self.config_stage_dir), skip_existing=True
)
if not staged_path:
raise SpackEnvironmentError(
"Unable to fetch remote configuration {0}".format(config_path)
)
config_path = staged_path
elif include_url.scheme:
raise ValueError(
f"Unsupported URL scheme ({include_url.scheme}) for "
f"environment include: {config_path}"
# Recognize the configuration stage directory
# is flattened to ensure a single copy of each
# configuration file.
config_path = self.config_stage_dir
if basename.endswith(".yaml"):
config_path = os.path.join(config_path, basename)
else:
staged_path = spack.config.fetch_remote_configs(
config_path, str(self.config_stage_dir), skip_existing=True
)
if not staged_path:
raise SpackEnvironmentError(
"Unable to fetch remote configuration {0}".format(config_path)
)
config_path = staged_path
elif include_url.scheme:
raise ValueError(
f"Unsupported URL scheme ({include_url.scheme}) for "
f"environment include: {config_path}"
)
# treat relative paths as relative to the environment
if not os.path.isabs(config_path):


@@ -13,6 +13,7 @@
import spack.config
import spack.relocate
from spack.util.elf import ElfParsingError, parse_elf
from spack.util.executable import Executable
def is_shared_library_elf(filepath):
@@ -140,7 +141,7 @@ def post_install(spec, explicit=None):
return
# Only enable on platforms using ELF.
if not spec.satisfies("platform=linux"):
if not spec.satisfies("platform=linux") and not spec.satisfies("platform=cray"):
return
# Disable this hook when bootstrapping, to avoid recursion.
@@ -148,9 +149,10 @@ def post_install(spec, explicit=None):
return
# Should failing to locate patchelf be a hard error?
patchelf = spack.relocate._patchelf()
if not patchelf:
patchelf_path = spack.relocate._patchelf()
if not patchelf_path:
return
patchelf = Executable(patchelf_path)
fixes = find_and_patch_sonames(spec.prefix, spec.package.non_bindable_shared_objects, patchelf)


@@ -117,7 +117,7 @@ def post_install(spec, explicit=None):
return
# Only enable on platforms using ELF.
if not spec.satisfies("platform=linux"):
if not spec.satisfies("platform=linux") and not spec.satisfies("platform=cray"):
return
visit_directory_tree(spec.prefix, ElfFilesWithRPathVisitor())


@@ -488,7 +488,6 @@ def _process_binary_cache_tarball(
with timer.measure("install"), spack.util.path.filter_padding():
binary_distribution.extract_tarball(pkg.spec, download_result, force=False, timer=timer)
pkg.windows_establish_runtime_linkage()
if hasattr(pkg, "_post_buildcache_install_hook"):
pkg._post_buildcache_install_hook()
@@ -600,7 +599,9 @@ def dump_packages(spec: "spack.spec.Spec", path: str) -> None:
if node is spec:
spack.repo.PATH.dump_provenance(node, dest_pkg_dir)
elif source_pkg_dir:
fs.install_tree(source_pkg_dir, dest_pkg_dir)
fs.install_tree(
source_pkg_dir, dest_pkg_dir, allow_broken_symlinks=(sys.platform != "win32")
)
def get_dependent_ids(spec: "spack.spec.Spec") -> List[str]:
@@ -759,8 +760,12 @@ def __init__(self, pkg: "spack.package_base.PackageBase", install_args: dict):
if not self.pkg.spec.concrete:
raise ValueError(f"{self.pkg.name} must have a concrete spec")
self.pkg.stop_before_phase = install_args.get("stop_before") # type: ignore[attr-defined] # noqa: E501
self.pkg.last_phase = install_args.get("stop_at") # type: ignore[attr-defined]
# Cache the package phase options with the explicit package,
# popping the options to ensure installation of associated
# dependencies is NOT affected by these options.
self.pkg.stop_before_phase = install_args.pop("stop_before", None) # type: ignore[attr-defined] # noqa: E501
self.pkg.last_phase = install_args.pop("stop_at", None) # type: ignore[attr-defined]
# Cache the package id for convenience
self.pkg_id = package_id(pkg.spec)
@@ -1070,17 +1075,19 @@ def flag_installed(self, installed: List[str]) -> None:
@property
def explicit(self) -> bool:
return self.pkg.spec.dag_hash() in self.request.install_args.get("explicit", [])
"""The package was explicitly requested by the user."""
return self.is_root and self.request.install_args.get("explicit", True)
@property
def is_build_request(self) -> bool:
"""The package was requested directly"""
def is_root(self) -> bool:
"""The package was requested directly, but may or may not be explicit
in an environment."""
return self.pkg == self.request.pkg
@property
def use_cache(self) -> bool:
_use_cache = True
if self.is_build_request:
if self.is_root:
return self.request.install_args.get("package_use_cache", _use_cache)
else:
return self.request.install_args.get("dependencies_use_cache", _use_cache)
@@ -1088,7 +1095,7 @@ def use_cache(self) -> bool:
@property
def cache_only(self) -> bool:
_cache_only = False
if self.is_build_request:
if self.is_root:
return self.request.install_args.get("package_cache_only", _cache_only)
else:
return self.request.install_args.get("dependencies_cache_only", _cache_only)
@@ -1114,17 +1121,24 @@ def priority(self):
class PackageInstaller:
"""
Class for managing the install process for a Spack instance based on a bottom-up DAG approach.
Class for managing the install process for a Spack instance based on a
bottom-up DAG approach.
This installer can coordinate concurrent batch and interactive, local and distributed (on a
shared file system) builds for the same Spack instance.
This installer can coordinate concurrent batch and interactive, local
and distributed (on a shared file system) builds for the same Spack
instance.
"""
def __init__(
self, packages: List["spack.package_base.PackageBase"], install_args: dict
) -> None:
def __init__(self, installs: List[Tuple["spack.package_base.PackageBase", dict]] = []) -> None:
"""Initialize the installer.
Args:
installs (list): list of tuples, where each
tuple consists of a package (PackageBase) and its associated
install arguments (dict)
"""
# List of build requests
self.build_requests = [BuildRequest(pkg, install_args) for pkg in packages]
self.build_requests = [BuildRequest(pkg, install_args) for pkg, install_args in installs]
# Priority queue of build tasks
self.build_pq: List[Tuple[Tuple[int, int], BuildTask]] = []
@@ -1547,7 +1561,7 @@ def _add_tasks(self, request: BuildRequest, all_deps):
#
# External and upstream packages need to get flagged as installed to
# ensure proper status tracking for environment build.
explicit = request.pkg.spec.dag_hash() in request.install_args.get("explicit", [])
explicit = request.install_args.get("explicit", True)
not_local = _handle_external_and_upstream(request.pkg, explicit)
if not_local:
self._flag_installed(request.pkg)
@@ -1668,6 +1682,10 @@ def _install_task(self, task: BuildTask, install_status: InstallStatus) -> None:
if not pkg.unit_test_check():
return
# Injecting information to know if this installation request is the root one
# to determine in BuildProcessInstaller whether installation is explicit or not
install_args["is_root"] = task.is_root
try:
self._setup_install_dir(pkg)
@@ -1979,8 +1997,8 @@ def install(self) -> None:
self._init_queue()
fail_fast_err = "Terminating after first install failure"
single_requested_spec = len(self.build_requests) == 1
failed_build_requests = []
single_explicit_spec = len(self.build_requests) == 1
failed_explicits = []
install_status = InstallStatus(len(self.build_pq))
@@ -2178,11 +2196,14 @@ def install(self) -> None:
if self.fail_fast:
raise InstallError(f"{fail_fast_err}: {str(exc)}", pkg=pkg)
# Terminate when a single build request has failed, or summarize errors later.
if task.is_build_request:
if single_requested_spec:
raise
failed_build_requests.append((pkg, pkg_id, str(exc)))
# Terminate at this point if the single explicit spec has
# failed to install.
if single_explicit_spec and task.explicit:
raise
# Track explicit spec id and error to summarize when done
if task.explicit:
failed_explicits.append((pkg, pkg_id, str(exc)))
finally:
# Remove the install prefix if anything went wrong during
@@ -2205,16 +2226,16 @@ def install(self) -> None:
if request.install_args.get("install_package") and request.pkg_id not in self.installed
]
if failed_build_requests or missing:
for _, pkg_id, err in failed_build_requests:
if failed_explicits or missing:
for _, pkg_id, err in failed_explicits:
tty.error(f"{pkg_id}: {err}")
for _, pkg_id in missing:
tty.error(f"{pkg_id}: Package was not installed")
if len(failed_build_requests) > 0:
pkg = failed_build_requests[0][0]
ids = [pkg_id for _, pkg_id, _ in failed_build_requests]
if len(failed_explicits) > 0:
pkg = failed_explicits[0][0]
ids = [pkg_id for _, pkg_id, _ in failed_explicits]
tty.debug(
"Associating installation failure with first failed "
f"explicit package ({ids[0]}) from {', '.join(ids)}"
@@ -2273,7 +2294,7 @@ def __init__(self, pkg: "spack.package_base.PackageBase", install_args: dict):
self.verbose = bool(install_args.get("verbose", False))
# whether installation was explicitly requested by the user
self.explicit = pkg.spec.dag_hash() in install_args.get("explicit", [])
self.explicit = install_args.get("is_root", False) and install_args.get("explicit", True)
# env before starting installation
self.unmodified_env = install_args.get("unmodified_env", {})
@@ -2358,7 +2379,9 @@ def _install_source(self) -> None:
src_target = os.path.join(pkg.spec.prefix, "share", pkg.name, "src")
tty.debug(f"{self.pre} Copying source to {src_target}")
fs.install_tree(pkg.stage.source_path, src_target)
fs.install_tree(
pkg.stage.source_path, src_target, allow_broken_symlinks=(sys.platform != "win32")
)
def _real_install(self) -> None:
import spack.builder


@@ -11,7 +11,7 @@
import urllib.parse
import urllib.request
from http.client import HTTPResponse
from typing import List, NamedTuple, Tuple
from typing import NamedTuple, Tuple
from urllib.request import Request
import llnl.util.tty as tty
@@ -27,7 +27,6 @@
import spack.stage
import spack.traverse
import spack.util.crypto
import spack.util.url
from .image import Digest, ImageReference
@@ -70,42 +69,6 @@ def with_query_param(url: str, param: str, value: str) -> str:
)
def list_tags(ref: ImageReference, _urlopen: spack.oci.opener.MaybeOpen = None) -> List[str]:
"""Retrieves the list of tags associated with an image, handling pagination."""
_urlopen = _urlopen or spack.oci.opener.urlopen
tags = set()
fetch_url = ref.tags_url()
while True:
# Fetch tags
request = Request(url=fetch_url)
response = _urlopen(request)
spack.oci.opener.ensure_status(request, response, 200)
tags.update(json.load(response)["tags"])
# Check for pagination
link_header = response.headers["Link"]
if link_header is None:
break
tty.debug(f"OCI tag pagination: {link_header}")
rel_next_value = spack.util.url.parse_link_rel_next(link_header)
if rel_next_value is None:
break
rel_next = urllib.parse.urlparse(rel_next_value)
if rel_next.scheme not in ("https", ""):
break
fetch_url = ref.endpoint(rel_next_value)
return sorted(tags)
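list_tags() follows RFC 5988 Link headers to page through large registries; the real parsing lives in spack.util.url.parse_link_rel_next. A simplified sketch with an assumed header value:

import re
from typing import Optional

def parse_rel_next(link_header: str) -> Optional[str]:
    # Matches entries like: <https://registry/v2/img/tags/list?last=v5>; rel="next"
    for target, params in re.findall(r"<([^>]+)>\s*((?:;[^,]*)*)", link_header):
        if re.search(r';\s*rel="?next"?', params):
            return target
    return None

assert (
    parse_rel_next('<https://r.io/v2/img/tags/list?last=v5>; rel="next"')
    == "https://r.io/v2/img/tags/list?last=v5"
)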
def upload_blob(
ref: ImageReference,
file: str,


@@ -3,12 +3,22 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from ._operating_system import OperatingSystem
from .cray_backend import CrayBackend
from .cray_frontend import CrayFrontend
from .freebsd import FreeBSDOs
from .linux_distro import LinuxDistro
from .mac_os import MacOs
from .windows_os import WindowsOs
__all__ = ["OperatingSystem", "LinuxDistro", "MacOs", "WindowsOs", "FreeBSDOs"]
__all__ = [
"OperatingSystem",
"LinuxDistro",
"MacOs",
"CrayFrontend",
"CrayBackend",
"WindowsOs",
"FreeBSDOs",
]
#: List of all the Operating Systems known to Spack
operating_systems = [LinuxDistro, MacOs, WindowsOs, FreeBSDOs]
operating_systems = [LinuxDistro, MacOs, CrayFrontend, CrayBackend, WindowsOs, FreeBSDOs]


@@ -0,0 +1,172 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import re
import llnl.util.tty as tty
import spack.error
import spack.version
from spack.util.module_cmd import module
from .linux_distro import LinuxDistro
#: Possible locations of the Cray CLE release file,
#: which we look at to get the CNL OS version.
_cle_release_file = "/etc/opt/cray/release/cle-release"
_clerelease_file = "/etc/opt/cray/release/clerelease"
def read_cle_release_file():
"""Read the CLE release file and return a dict with its attributes.
This file is present on newer versions of Cray.
The release file looks something like this::
RELEASE=6.0.UP07
BUILD=6.0.7424
...
The dictionary we produce looks like this::
{
"RELEASE": "6.0.UP07",
"BUILD": "6.0.7424",
...
}
Returns:
dict: dictionary of release attributes
"""
with open(_cle_release_file) as release_file:
result = {}
for line in release_file:
# use partition instead of split() to ensure we only split on
# the first '=' in the line.
key, _, value = line.partition("=")
result[key] = value.strip()
return result
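A worked example of the partition-based parsing (the file contents are assumed, following the docstring):

lines = ["RELEASE=6.0.UP07", "BUILD=6.0.7424", "NOTE=a=b"]
result = {}
for line in lines:
    # partition() splits on the first '=' only, so values may themselves contain '='.
    key, _, value = line.partition("=")
    result[key] = value.strip()
assert result == {"RELEASE": "6.0.UP07", "BUILD": "6.0.7424", "NOTE": "a=b"}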
def read_clerelease_file():
"""Read the CLE release file and return the Cray OS version.
This file is present on older versions of Cray.
The release file looks something like this::
5.2.UP04
Returns:
str: the Cray OS version
"""
with open(_clerelease_file) as release_file:
for line in release_file:
return line.strip()
class CrayBackend(LinuxDistro):
"""Compute Node Linux (CNL) is the operating system used for the Cray XC
series super computers. It is a very stripped down version of GNU/Linux.
Any compilers found through this operating system will be used with
modules. If updated, user must make sure that version and name are
updated to indicate that OS has been upgraded (or downgraded)
"""
def __init__(self):
name = "cnl"
version = self._detect_crayos_version()
if version:
# If we found a CrayOS version, we do not want the information
# from LinuxDistro. In order to skip the logic from
# distro.linux_distribution, while still calling __init__
# methods further up the MRO, we skip LinuxDistro in the MRO and
# call the OperatingSystem superclass __init__ method
super(LinuxDistro, self).__init__(name, version)
else:
super().__init__()
self.modulecmd = module
def __str__(self):
return self.name + str(self.version)
@classmethod
def _detect_crayos_version(cls):
if os.path.isfile(_cle_release_file):
release_attrs = read_cle_release_file()
if "RELEASE" not in release_attrs:
# This Cray system uses a base OS not CLE/CNL
return None
v = spack.version.Version(release_attrs["RELEASE"])
return v[0]
elif os.path.isfile(_clerelease_file):
v = read_clerelease_file()
return spack.version.Version(v)[0]
else:
# Not all Cray systems run CNL on the backend.
# Systems running in what Cray calls "cluster" mode run other
# linux OSs under the Cray PE.
# So if we don't detect any Cray OS version on the system,
# we return None. We can't ever be sure we will get a Cray OS
# version.
# Returning None allows the calling code to test for the value
# being "True-ish" rather than requiring a try/except block.
return None
def arguments_to_detect_version_fn(self, paths):
import spack.compilers
command_arguments = []
for compiler_name in spack.compilers.supported_compilers():
cmp_cls = spack.compilers.class_for_compiler_name(compiler_name)
# If the compiler doesn't have a corresponding
# Programming Environment, skip to the next
if cmp_cls.PrgEnv is None:
continue
if cmp_cls.PrgEnv_compiler is None:
tty.die("Must supply PrgEnv_compiler with PrgEnv")
compiler_id = spack.compilers.CompilerID(self, compiler_name, None)
detect_version_args = spack.compilers.DetectVersionArgs(
id=compiler_id, variation=(None, None), language="cc", path="cc"
)
command_arguments.append(detect_version_args)
return command_arguments
def detect_version(self, detect_version_args):
import spack.compilers
modulecmd = self.modulecmd
compiler_name = detect_version_args.id.compiler_name
compiler_cls = spack.compilers.class_for_compiler_name(compiler_name)
output = modulecmd("avail", compiler_cls.PrgEnv_compiler)
version_regex = r"({0})/([\d\.]+[\d]-?[\w]*)".format(compiler_cls.PrgEnv_compiler)
matches = re.findall(version_regex, output)
version = tuple(version for _, version in matches if "classic" not in version)
compiler_id = detect_version_args.id
value = detect_version_args._replace(id=compiler_id._replace(version=version))
return value, None
def make_compilers(self, compiler_id, paths):
import spack.spec
name = compiler_id.compiler_name
cmp_cls = spack.compilers.class_for_compiler_name(name)
compilers = []
for v in compiler_id.version:
comp = cmp_cls(
spack.spec.CompilerSpec(name + "@=" + v),
self,
"any",
["cc", "CC", "ftn"],
[cmp_cls.PrgEnv, name + "/" + v],
)
compilers.append(comp)
return compilers


@@ -0,0 +1,105 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import contextlib
import os
import re
import llnl.util.filesystem as fs
import llnl.util.lang
import llnl.util.tty as tty
from spack.util.environment import get_path
from spack.util.module_cmd import module
from .linux_distro import LinuxDistro
@contextlib.contextmanager
def unload_programming_environment():
"""Context manager that unloads Cray Programming Environments."""
env_bu = None
# We rely on the fact that the PrgEnv-* modules set the PE_ENV
# environment variable.
if "PE_ENV" in os.environ:
# Copy environment variables to restore them after the compiler
# detection. We expect that the only thing PrgEnv-* modules do is
# the environment variables modifications.
env_bu = os.environ.copy()
# Get the name of the module from the environment variable.
prg_env = "PrgEnv-" + os.environ["PE_ENV"].lower()
# Unload the PrgEnv-* module. By doing this we intentionally
# provoke errors when the Cray's compiler wrappers are executed
# (Error: A PrgEnv-* modulefile must be loaded.) so they will not
# be detected as valid compilers by the overridden method. We also
# expect that the modules that add the actual compilers' binaries
# into the PATH environment variable (i.e. the following modules:
# 'intel', 'cce', 'gcc', etc.) will also be unloaded since they are
# specified as prerequisites in the PrgEnv-* modulefiles.
module("unload", prg_env)
yield
# Restore the environment.
if env_bu is not None:
os.environ.clear()
os.environ.update(env_bu)
class CrayFrontend(LinuxDistro):
"""Represents OS that runs on login and service nodes of the Cray platform.
It acts as a regular Linux without Cray-specific modules and compiler
wrappers."""
@property
def compiler_search_paths(self):
"""Calls the default function but unloads Cray's programming
environments first.
This prevents detecting Cray's compiler wrappers and avoids
possible false detections.
"""
import spack.compilers
with unload_programming_environment():
search_paths = get_path("PATH")
extract_path_re = re.compile(r"prepend-path[\s]*PATH[\s]*([/\w\.:-]*)")
for compiler_cls in spack.compilers.all_compiler_types():
# Check if the compiler class is supported on Cray
prg_env = getattr(compiler_cls, "PrgEnv", None)
compiler_module = getattr(compiler_cls, "PrgEnv_compiler", None)
if not (prg_env and compiler_module):
continue
# It is supported, check which versions are available
output = module("avail", compiler_cls.PrgEnv_compiler)
version_regex = r"({0})/([\d\.]+[\d]-?[\w]*)".format(compiler_cls.PrgEnv_compiler)
matches = re.findall(version_regex, output)
versions = tuple(version for _, version in matches if "classic" not in version)
# Now inspect the modules and add to paths
msg = "[CRAY FE] Detected FE compiler [name={0}, versions={1}]"
tty.debug(msg.format(compiler_module, versions))
for v in versions:
try:
current_module = compiler_module + "/" + v
out = module("show", current_module)
match = extract_path_re.search(out)
search_paths += match.group(1).split(":")
except Exception as e:
msg = (
"[CRAY FE] An unexpected error occurred while "
"detecting FE compiler [compiler={0}, "
" version={1}, error={2}]"
)
tty.debug(msg.format(compiler_cls.name, v, str(e)))
search_paths = list(llnl.util.lang.dedupe(search_paths))
return fs.search_paths_for_executables(*search_paths)


@@ -161,11 +161,7 @@ def windows_establish_runtime_linkage(self):
Performs symlinking to incorporate rpath dependencies to Windows runtime search paths
"""
# If spec is an external, we should not be modifying its bin directory, as we would
# be doing in this method
# Spack should in general not modify things it has not installed
# we can reasonably expect externals to have their link interface properly established
if sys.platform == "win32" and not self.spec.external:
if sys.platform == "win32":
self.win_rpath.add_library_dependent(*self.win_add_library_dependent())
self.win_rpath.add_rpath(*self.win_add_rpath())
self.win_rpath.establish_link()
@@ -1881,10 +1877,7 @@ def do_install(self, **kwargs):
verbose (bool): Display verbose build output (by default,
suppresses it)
"""
explicit = kwargs.get("explicit", True)
if isinstance(explicit, bool):
kwargs["explicit"] = {self.spec.dag_hash()} if explicit else set()
PackageInstaller([self], kwargs).install()
PackageInstaller([(self, kwargs)]).install()
# TODO (post-34236): Update tests and all packages that use this as a
# TODO (post-34236): package method to the routine made available to
@@ -2453,18 +2446,9 @@ def rpath(self):
# on Windows, libraries of runtime interest are typically
# stored in the bin directory
# Do not include Windows system libraries in the rpath interface
# these libraries are handled automatically by VS/VCVARS and adding
# Spack derived system libs into the link path or address space of a program
# can result in conflicting versions, which makes Spack packages less usable
if sys.platform == "win32":
rpaths = [self.prefix.bin]
rpaths.extend(
d.prefix.bin
for d in deps
if os.path.isdir(d.prefix.bin)
and "windows-system" not in getattr(d.package, "tags", [])
)
rpaths.extend(d.prefix.bin for d in deps if os.path.isdir(d.prefix.bin))
else:
rpaths = [self.prefix.lib, self.prefix.lib64]
rpaths.extend(d.prefix.lib for d in deps if os.path.isdir(d.prefix.lib))


@@ -6,6 +6,7 @@
from ._functions import _host, by_name, platforms, prevent_cray_detection, reset
from ._platform import Platform
from .cray import Cray
from .darwin import Darwin
from .freebsd import FreeBSD
from .linux import Linux
@@ -14,6 +15,7 @@
__all__ = [
"Platform",
"Cray",
"Darwin",
"Linux",
"FreeBSD",


@@ -8,6 +8,7 @@
import spack.util.environment
from .cray import Cray
from .darwin import Darwin
from .freebsd import FreeBSD
from .linux import Linux
@@ -15,7 +16,7 @@
from .windows import Windows
#: List of all the platform classes known to Spack
platforms = [Darwin, Linux, Windows, FreeBSD, Test]
platforms = [Cray, Darwin, Linux, Windows, FreeBSD, Test]
@llnl.util.lang.memoized


@@ -2,10 +2,253 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import os.path
import platform
import re
import archspec.cpu
import llnl.util.tty as tty
import spack.target
import spack.version
from spack.operating_systems.cray_backend import CrayBackend
from spack.operating_systems.cray_frontend import CrayFrontend
from spack.paths import build_env_path
from spack.util.executable import Executable
from spack.util.module_cmd import module
from ._platform import NoPlatformError, Platform
_craype_name_to_target_name = {
"x86-cascadelake": "cascadelake",
"x86-naples": "zen",
"x86-rome": "zen2",
"x86-milan": "zen3",
"x86-skylake": "skylake_avx512",
"mic-knl": "mic_knl",
"interlagos": "bulldozer",
"abudhabi": "piledriver",
}
_ex_craype_dir = "/opt/cray/pe/cpe"
_xc_craype_dir = "/opt/cray/pe/cdt"
def slingshot_network():
return os.path.exists("/opt/cray/pe") and (
os.path.exists("/lib64/libcxi.so") or os.path.exists("/usr/lib64/libcxi.so")
)
def _target_name_from_craype_target_name(name):
return _craype_name_to_target_name.get(name, name)
class Cray(Platform):
priority = 10
def __init__(self):
"""Create a Cray system platform.
Target names should use craype target names but not include the
'craype-' prefix. Uses first viable target from:
self
envars [SPACK_FRONT_END, SPACK_BACK_END]
configuration file "targets.yaml" with keys 'front_end', 'back_end'
scanning /etc/bash/bashrc.local for back_end only
"""
super().__init__("cray")
# Make all craype targets available.
for target in self._avail_targets():
name = _target_name_from_craype_target_name(target)
self.add_target(name, spack.target.Target(name, "craype-%s" % target))
self.back_end = os.environ.get("SPACK_BACK_END", self._default_target_from_env())
self.default = self.back_end
if self.back_end not in self.targets:
# We didn't find a target module for the backend
raise NoPlatformError()
# Setup frontend targets
for name in archspec.cpu.TARGETS:
if name not in self.targets:
self.add_target(name, spack.target.Target(name))
self.front_end = os.environ.get("SPACK_FRONT_END", archspec.cpu.host().name)
if self.front_end not in self.targets:
self.add_target(self.front_end, spack.target.Target(self.front_end))
front_distro = CrayFrontend()
back_distro = CrayBackend()
self.default_os = str(back_distro)
self.back_os = self.default_os
self.front_os = str(front_distro)
self.add_operating_system(self.back_os, back_distro)
if self.front_os != self.back_os:
self.add_operating_system(self.front_os, front_distro)
def setup_platform_environment(self, pkg, env):
"""Change the linker to default dynamic to be more
similar to linux/standard linker behavior
"""
# Unload these modules to prevent any silent linking or unnecessary
# I/O profiling in the case of darshan.
modules_to_unload = ["cray-mpich", "darshan", "cray-libsci", "altd"]
for mod in modules_to_unload:
module("unload", mod)
env.set("CRAYPE_LINK_TYPE", "dynamic")
cray_wrapper_names = os.path.join(build_env_path, "cray")
if os.path.isdir(cray_wrapper_names):
env.prepend_path("PATH", cray_wrapper_names)
env.prepend_path("SPACK_ENV_PATH", cray_wrapper_names)
# Makes spack installed pkg-config work on Crays
env.append_path("PKG_CONFIG_PATH", "/usr/lib64/pkgconfig")
env.append_path("PKG_CONFIG_PATH", "/usr/local/lib64/pkgconfig")
# CRAY_LD_LIBRARY_PATH is used at build time by the cray compiler
# wrappers to augment LD_LIBRARY_PATH. This is to avoid long load
# times at runtime. This behavior is not always respected on cray
# "cluster" systems, so we reproduce it here.
if os.environ.get("CRAY_LD_LIBRARY_PATH"):
env.prepend_path("LD_LIBRARY_PATH", os.environ["CRAY_LD_LIBRARY_PATH"])
@classmethod
def craype_type_and_version(cls):
if os.path.isdir(_ex_craype_dir):
craype_dir = _ex_craype_dir
craype_type = "EX"
elif os.path.isdir(_xc_craype_dir):
craype_dir = _xc_craype_dir
craype_type = "XC"
else:
return (None, None)
# Take the default version from known symlink path
default_path = os.path.join(craype_dir, "default")
if os.path.islink(default_path):
version = spack.version.Version(os.readlink(default_path))
return (craype_type, version)
# If no default version, sort available versions and return latest
versions_available = [spack.version.Version(v) for v in os.listdir(craype_dir)]
versions_available.sort(reverse=True)
if not versions_available:
return (craype_type, None)
return (craype_type, versions_available[0])
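The lookup order above reduces to: prefer the "default" symlink, else the newest directory entry. A sketch (craype_default_version is hypothetical, and the naive string sort stands in for spack.version.Version ordering):

import os

def craype_default_version(craype_dir: str):
    default_path = os.path.join(craype_dir, "default")
    if os.path.islink(default_path):
        return os.readlink(default_path)
    versions = sorted(os.listdir(craype_dir), reverse=True)  # naive version sort
    return versions[0] if versions else None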
@classmethod
def detect(cls):
"""
Detect whether this system requires CrayPE module support.
Systems with newer CrayPE (21.10 for EX systems, future work for CS and
XC systems) have compilers and MPI wrappers that can be used directly
by path. These systems are considered ``linux`` platforms.
For systems running an older CrayPE, we detect the Cray platform based
on the availability through `module` of the Cray programming
environment. If this environment is available, we can use it to find
compilers, target modules, etc. If the Cray programming environment is
not available via modules, then we will treat it as a standard linux
system, as the Cray compiler wrappers and other components of the Cray
programming environment are irrelevant without module support.
"""
if "opt/cray" not in os.environ.get("MODULEPATH", ""):
return False
craype_type, craype_version = cls.craype_type_and_version()
if craype_type == "XC":
return True
if craype_type == "EX" and craype_version < spack.version.Version("21.10"):
return True
return False
def _default_target_from_env(self):
"""Set and return the default CrayPE target loaded in a clean login
session.
A bash subshell is launched with a wiped environment and the list of
loaded modules is parsed for the first acceptable CrayPE target.
"""
# env -i /bin/bash -lc echo $CRAY_CPU_TARGET 2> /dev/null
if getattr(self, "default", None) is None:
bash = Executable("/bin/bash")
output = bash(
"--norc",
"--noprofile",
"-lc",
"echo $CRAY_CPU_TARGET",
env={"TERM": os.environ.get("TERM", "")},
output=str,
error=os.devnull,
)
default_from_module = "".join(output.split()) # rm all whitespace
if default_from_module:
tty.debug("Found default module:%s" % default_from_module)
return default_from_module
else:
front_end = archspec.cpu.host()
# Look for the frontend architecture or closest ancestor
# available in cray target modules
avail = [_target_name_from_craype_target_name(x) for x in self._avail_targets()]
for front_end_possibility in [front_end] + front_end.ancestors:
if front_end_possibility.name in avail:
tty.debug("using front-end architecture or available ancestor")
return front_end_possibility.name
else:
tty.debug("using platform.machine as default")
return platform.machine()
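The login-shell probe above boils down to a single subprocess call; an equivalent sketch using only the standard library:

import os
import subprocess

def cray_cpu_target_from_login_shell() -> str:
    # env -i /bin/bash -lc 'echo $CRAY_CPU_TARGET', as in the comment above
    out = subprocess.run(
        ["/bin/bash", "--norc", "--noprofile", "-lc", "echo $CRAY_CPU_TARGET"],
        env={"TERM": os.environ.get("TERM", "")},
        capture_output=True,
        text=True,
    ).stdout
    return "".join(out.split())  # remove all whitespace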
def _avail_targets(self):
"""Return a list of available CrayPE CPU targets."""
def modules_in_output(output):
"""Returns a list of valid modules parsed from modulecmd output"""
return [i for i in re.split(r"\s\s+|\n", output)]
def target_names_from_modules(modules):
# Extract CPU target names from "craype-" module names.
targets = []
for mod in modules:
if "craype-" in mod:
name = mod[7:]
name = name.split()[0]
_n = name.replace("-", "_") # test for mic-knl/mic_knl
is_target_name = name in archspec.cpu.TARGETS or _n in archspec.cpu.TARGETS
is_cray_target_name = name in _craype_name_to_target_name
if is_target_name or is_cray_target_name:
targets.append(name)
return targets
def modules_from_listdir():
craype_default_path = "/opt/cray/pe/craype/default/modulefiles"
if os.path.isdir(craype_default_path):
return os.listdir(craype_default_path)
return []
if getattr(self, "_craype_targets", None) is None:
strategies = [
lambda: modules_in_output(module("avail", "-t", "craype-")),
modules_from_listdir,
]
for available_craype_modules in strategies:
craype_modules = available_craype_modules()
craype_targets = target_names_from_modules(craype_modules)
if craype_targets:
self._craype_targets = craype_targets
break
else:
# If nothing is found add platform.machine()
# to avoid Spack erroring out
self._craype_targets = [platform.machine()]
return self._craype_targets
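Putting the pieces together, craype module names are stripped of their prefix and renamed via the table at the top of the file (the module list is an assumed example, and the real code also filters against archspec.cpu.TARGETS):

_craype_to_archspec = {"x86-rome": "zen2", "mic-knl": "mic_knl"}  # excerpt of the table

def archspec_names(craype_modules):
    for mod in craype_modules:
        if "craype-" in mod:
            name = mod[7:].split()[0]  # drop the "craype-" prefix
            yield _craype_to_archspec.get(name, name)

assert list(archspec_names(["craype-x86-rome", "craype-mic-knl"])) == ["zen2", "mic_knl"]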


@@ -16,7 +16,7 @@
import llnl.util.lang
import llnl.util.tty as tty
from llnl.util.lang import memoized
from llnl.util.symlink import readlink, symlink
from llnl.util.symlink import symlink
import spack.paths
import spack.platforms
@@ -25,7 +25,6 @@
import spack.store
import spack.util.elf as elf
import spack.util.executable as executable
import spack.util.path
from .relocate_text import BinaryFilePrefixReplacer, TextFilePrefixReplacer
@@ -566,7 +565,7 @@ def make_link_relative(new_links, orig_links):
orig_links (list): original links
"""
for new_link, orig_link in zip(new_links, orig_links):
target = readlink(orig_link)
target = os.readlink(orig_link)
relative_target = os.path.relpath(target, os.path.dirname(orig_link))
os.unlink(new_link)
symlink(relative_target, new_link)
@@ -614,7 +613,7 @@ def relocate_links(links, prefix_to_prefix):
"""Relocate links to a new install prefix."""
regex = re.compile("|".join(re.escape(p) for p in prefix_to_prefix.keys()))
for link in links:
old_target = readlink(link)
old_target = os.readlink(link)
match = regex.match(old_target)
# No match.


@@ -241,7 +241,7 @@ def get_all_package_diffs(type, rev1="HEAD^1", rev2="HEAD"):
Arguments:
type (str): String containing one or more of 'A', 'R', 'C'
type (str): String containing one or more of 'A', 'B', 'C'
rev1 (str): Revision to compare against, default is 'HEAD^'
rev2 (str): Revision to compare to rev1, default is 'HEAD'
@@ -264,7 +264,7 @@ def get_all_package_diffs(type, rev1="HEAD^1", rev2="HEAD"):
lines = [] if not out else re.split(r"\s+", out)
changed = set()
for path in lines:
pkg_name, _, _ = path.partition("/")
pkg_name, _, _ = path.partition(os.sep)
if pkg_name not in added and pkg_name not in removed:
changed.add(pkg_name)
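The separator fix above matters because 'git diff --name-only' emits forward slashes on every platform, so splitting on os.sep breaks on Windows. A quick check (paths are assumed examples):

lines = ["zlib/package.py", "hdf5/package.py", "hdf5/patch1.patch"]
changed = {path.partition("/")[0] for path in lines}
assert changed == {"zlib", "hdf5"}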


@@ -58,8 +58,7 @@
# Initialize data structures common to each phase's report.
CDASH_PHASES = set(MAP_PHASES_TO_CDASH.values())
CDASH_PHASES.add("update")
# CDash request timeout in seconds
SPACK_CDASH_TIMEOUT = 45
CDashConfiguration = collections.namedtuple(
"CDashConfiguration", ["upload_url", "packages", "build", "site", "buildstamp", "track"]
@@ -448,7 +447,7 @@ def upload(self, filename):
# By default, urllib2 only support GET and POST.
# CDash expects this file to be uploaded via PUT.
request.get_method = lambda: "PUT"
response = opener.open(request, timeout=SPACK_CDASH_TIMEOUT)
response = opener.open(request)
if self.current_package_name not in self.buildIds:
resp_value = response.read()
if isinstance(resp_value, bytes):


@@ -9,7 +9,7 @@
import tempfile
from collections import OrderedDict
from llnl.util.symlink import readlink, symlink
from llnl.util.symlink import symlink
import spack.binary_distribution as bindist
import spack.error
@@ -26,7 +26,7 @@ def _relocate_spliced_links(links, orig_prefix, new_prefix):
in our case. This still needs to be called after the copy to destination
because it expects the new directory structure to be in place."""
for link in links:
link_target = readlink(os.path.join(orig_prefix, link))
link_target = os.readlink(os.path.join(orig_prefix, link))
link_target = re.sub("^" + orig_prefix, new_prefix, link_target)
new_link_path = os.path.join(new_prefix, link)
os.unlink(new_link_path)


@@ -44,7 +44,6 @@
},
]
},
"prefer_older": {"type": "boolean"},
"enable_node_namespace": {"type": "boolean"},
"targets": {
"type": "object",


@@ -314,10 +314,6 @@ def using_libc_compatibility() -> bool:
return spack.platforms.host().name == "linux"
def c_compiler_runs(compiler: spack.compiler.Compiler) -> bool:
return compiler.compiler_verbose_output is not None
def extend_flag_list(flag_list, new_flags):
"""Extend a list of flags, preserving order and precedence.
@@ -1025,7 +1021,7 @@ def __iter__(self):
class SpackSolverSetup:
"""Class to set up and run a Spack concretization solve."""
def __init__(self, tests: bool = False, prefer_older=False):
def __init__(self, tests: bool = False):
# these are all initialized in setup()
self.gen: "ProblemInstanceBuilder" = ProblemInstanceBuilder()
self.possible_virtuals: Set[str] = set()
@@ -1058,8 +1054,6 @@ def __init__(self, tests: bool = False, prefer_older=False):
# whether to add installed/binary hashes to the solve
self.tests = tests
self.prefer_older = prefer_older
# If False allows for input specs that are not solved
self.concretize_everything = True
@@ -1093,9 +1087,7 @@ def key_fn(version):
list(sorted(group, reverse=True, key=lambda x: vn.ver(x.version)))
)
for weight, declared_version in enumerate(
reversed(most_to_least_preferred) if self.prefer_older else most_to_least_preferred
):
for weight, declared_version in enumerate(most_to_least_preferred):
self.gen.fact(
fn.pkg_fact(
pkg.name,
@@ -1943,11 +1935,6 @@ def _spec_clauses(
for virtual in virtuals:
clauses.append(fn.attr("virtual_on_incoming_edges", spec.name, virtual))
# If the spec is external and concrete, we allow all the libcs on the system
if spec.external and spec.concrete and using_libc_compatibility():
for libc in self.libcs:
clauses.append(fn.attr("compatible_libc", spec.name, libc.name, libc.version))
# add all clauses from dependencies
if transitive:
# TODO: Eventually distinguish 2 deps on the same pkg (build and link)
@@ -2444,7 +2431,7 @@ def setup(
if using_libc_compatibility():
for libc in self.libcs:
self.gen.fact(fn.host_libc(libc.name, libc.version))
self.gen.fact(fn.allowed_libc(libc.name, libc.version))
if not allow_deprecated:
self.gen.fact(fn.deprecated_versions_not_allowed())
@@ -2988,13 +2975,6 @@ class CompilerParser:
def __init__(self, configuration) -> None:
self.compilers: Set[KnownCompiler] = set()
for c in all_compilers_in_config(configuration):
if using_libc_compatibility() and not c_compiler_runs(c):
tty.debug(
f"the C compiler {c.cc} does not exist, or does not run correctly."
f" The compiler {c.spec} will not be used during concretization."
)
continue
if using_libc_compatibility() and not c.default_libc:
warnings.warn(
f"cannot detect libc from {c.spec}. The compiler will not be used "
@@ -3803,8 +3783,12 @@ class Solver:
def __init__(self):
self.driver = PyclingoDriver()
self.selector = ReusableSpecsSelector(configuration=spack.config.CONFIG)
self.prefer_older = spack.config.get("concretizer:prefer_older", False)
if spack.platforms.host().name == "cray":
msg = (
"The Cray platform, i.e. 'platform=cray', will be removed in Spack v0.23. "
"All Cray machines will be then detected as 'platform=linux'."
)
warnings.warn(msg)
@staticmethod
def _check_input_and_extract_concrete_specs(specs):
@@ -3844,7 +3828,7 @@ def solve(
specs = [s.lookup_hash() for s in specs]
reusable_specs = self._check_input_and_extract_concrete_specs(specs)
reusable_specs.extend(self.selector.reusable_specs(specs))
setup = SpackSolverSetup(tests=tests, prefer_older=self.prefer_older)
setup = SpackSolverSetup(tests=tests)
output = OutputConfiguration(timers=timers, stats=stats, out=out, setup_only=setup_only)
result, _, _ = self.driver.solve(
setup, specs, reuse=reusable_specs, output=output, allow_deprecated=allow_deprecated
@@ -3873,7 +3857,7 @@ def solve_in_rounds(
specs = [s.lookup_hash() for s in specs]
reusable_specs = self._check_input_and_extract_concrete_specs(specs)
reusable_specs.extend(self.selector.reusable_specs(specs))
setup = SpackSolverSetup(tests=tests, prefer_older=self.prefer_older)
setup = SpackSolverSetup(tests=tests)
# Tell clingo that we don't have to solve all the inputs at once
setup.concretize_everything = False

View File
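`prefer_older` (present on only one side of this diff) works by reversing the order in which declared versions are enumerated, which inverts the weights the solver minimizes. A toy sketch of that weighting, with made-up version strings:

```python
versions = ["2.1", "2.0", "1.8"]  # most- to least-preferred

def version_weights(most_to_least_preferred, prefer_older=False):
    ordered = reversed(most_to_least_preferred) if prefer_older else most_to_least_preferred
    return {v: weight for weight, v in enumerate(ordered)}

print(version_weights(versions))                     # {'2.1': 0, '2.0': 1, '1.8': 2}
print(version_weights(versions, prefer_older=True))  # {'1.8': 0, '2.0': 1, '2.1': 2}
```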

@@ -1345,10 +1345,8 @@ build(PackageNode) :- not attr("hash", PackageNode, _), attr("node", PackageNode
% topmost-priority criterion to reuse what is installed.
%
% The priority ranges are:
% 1000+ Optimizations for concretization errors
% 300 - 1000 Highest priority optimizations for valid solutions
% 200 - 299 Shifted priorities for build nodes; correspond to priorities 0 - 99.
% 100 - 199 Unshifted priorities. Currently only includes minimizing #builds and minimizing dupes.
% 200+ Shifted priorities for build nodes; correspond to priorities 0 - 99.
% 100 - 199 Unshifted priorities. Currently only includes minimizing #builds.
% 0 - 99 Priorities for non-built nodes.
build_priority(PackageNode, 200) :- build(PackageNode), attr("node", PackageNode).
build_priority(PackageNode, 0) :- not build(PackageNode), attr("node", PackageNode).
@@ -1396,16 +1394,6 @@ build_priority(PackageNode, 0) :- not build(PackageNode), attr("node", Package
% 2. a `#minimize{ 0@2 : #true }.` statement that ensures the criterion
% is displayed (clingo doesn't display sums over empty sets by default)
% A condition group specifies one or more specs that must be satisfied.
% Specs declared first are preferred, so we assign increasing weights and
% minimize the weights.
opt_criterion(310, "requirement weight").
#minimize{ 0@310: #true }.
#minimize {
Weight@310,PackageNode,Group
: requirement_weight(PackageNode, Group, Weight)
}.
% Try hard to reuse installed packages (i.e., minimize the number built)
opt_criterion(110, "number of packages to build (vs. reuse)").
#minimize { 0@110: #true }.
@@ -1417,6 +1405,18 @@ opt_criterion(100, "number of nodes from the same package").
#minimize { ID@100,Package : attr("virtual_node", node(ID, Package)) }.
#defined optimize_for_reuse/0.
% A condition group specifies one or more specs that must be satisfied.
% Specs declared first are preferred, so we assign increasing weights and
% minimize the weights.
opt_criterion(75, "requirement weight").
#minimize{ 0@275: #true }.
#minimize{ 0@75: #true }.
#minimize {
Weight@75+Priority,PackageNode,Group
: requirement_weight(PackageNode, Group, Weight),
build_priority(PackageNode, Priority)
}.
% Minimize the number of deprecated versions being used
opt_criterion(73, "deprecated versions used").
#minimize{ 0@273: #true }.
@@ -1432,11 +1432,11 @@ opt_criterion(73, "deprecated versions used").
% 1. Version weight
% 2. Number of variants with a non default value, if not set
% for the root package.
opt_criterion(70, "version badness (roots)").
opt_criterion(70, "version weight").
#minimize{ 0@270: #true }.
#minimize{ 0@70: #true }.
#minimize {
Weight@70+Priority,PackageNode
Weight@70+Priority
: attr("root", PackageNode),
version_weight(PackageNode, Weight),
build_priority(PackageNode, Priority)
@@ -1526,14 +1526,13 @@ opt_criterion(30, "non-preferred OS's").
}.
% Choose more recent versions for nodes
opt_criterion(25, "version badness (non roots)").
opt_criterion(25, "version badness").
#minimize{ 0@225: #true }.
#minimize{ 0@25: #true }.
#minimize{
Weight@25+Priority,node(X, Package)
: version_weight(node(X, Package), Weight),
build_priority(node(X, Package), Priority),
not attr("root", node(X, Package)),
not runtime(Package)
}.

View File
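clingo optimizes `#minimize { W@P : ... }` statements lexicographically: all costs at a higher priority level `P` dominate any cost at a lower one, which is why moving the requirement-weight criterion between levels (310 versus 75/275 in the hunks above) changes which solutions win. A toy Python model of that comparison (not clingo itself):

```python
# Cost vectors ordered highest priority level first, as clingo compares them
LEVELS = (310, 110)

candidates = {
    "reuse-heavy": {310: 4, 110: 0},  # poor requirement weight, nothing to build
    "build-heavy": {310: 0, 110: 3},  # ideal requirements, three new builds
}

def cost_vector(costs):
    return tuple(costs.get(level, 0) for level in LEVELS)

best = min(candidates, key=lambda name: cost_vector(candidates[name]))
print(best)  # 'build-heavy': the higher level (310) dominates level 110
```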

@@ -10,13 +10,12 @@
%=============================================================================
% A package cannot be reused if the libc is not compatible with it
error(100, "Cannot reuse {0} since we cannot determine libc compatibility", ReusedPackage)
:- provider(node(X, LibcPackage), node(0, "libc")),
attr("version", node(X, LibcPackage), LibcVersion),
attr("hash", node(R, ReusedPackage), Hash),
% Libc packages can be reused without the "compatible_libc" attribute
ReusedPackage != LibcPackage,
not attr("compatible_libc", node(R, ReusedPackage), LibcPackage, LibcVersion).
:- provider(node(X, LibcPackage), node(0, "libc")),
attr("version", node(X, LibcPackage), LibcVersion),
attr("hash", node(R, ReusedPackage), Hash),
% Libc packages can be reused without the "compatible_libc" attribute
ReusedPackage != LibcPackage,
not attr("compatible_libc", node(R, ReusedPackage), LibcPackage, LibcVersion).
% Check whether the DAG has any built package
has_built_packages() :- build(X), not external(X).
@@ -24,20 +23,12 @@ has_built_packages() :- build(X), not external(X).
% A libc is needed in the DAG
:- has_built_packages(), not provider(_, node(0, "libc")).
% Non-libc reused specs must be host libc compatible. In case we build packages, we get a
% host compatible libc provider from other rules. If nothing is built, there is no libc provider,
% since it's pruned from reusable specs, meaning we have to explicitly impose that reused
% specs are host compatible.
:- attr("hash", node(R, ReusedPackage), Hash),
not provider(node(R, ReusedPackage), node(0, "libc")),
not attr("compatible_libc", node(R, ReusedPackage), _, _).
% The libc provider must be one that a compiler can target
% The libc must be chosen among available ones
:- has_built_packages(),
provider(node(X, LibcPackage), node(0, "libc")),
attr("node", node(X, LibcPackage)),
attr("version", node(X, LibcPackage), LibcVersion),
not host_libc(LibcPackage, LibcVersion).
not allowed_libc(LibcPackage, LibcVersion).
% A built node must depend on libc
:- build(PackageNode),

View File

@@ -2816,7 +2816,9 @@ def _old_concretize(self, tests=False, deprecation_warning=True):
# Check if we can produce an optimized binary (will throw if
# there are declared inconsistencies)
self.architecture.target.optimization_flags(self.compiler)
# No need on platform=cray because of the targeting modules
if not self.satisfies("platform=cray"):
self.architecture.target.optimization_flags(self.compiler)
def _patches_assigned(self):
"""Whether patches have been assigned to this spec by the concretizer."""

View File

@@ -2,12 +2,16 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import platform
import sys
import pytest
import archspec.cpu
import llnl.util.filesystem as fs
import spack.compilers
import spack.concretize
import spack.operating_systems
@@ -21,8 +25,9 @@ def current_host_platform():
"""Return the platform of the current host as detected by the
'platform' stdlib package.
"""
current_platform = None
if "Linux" in platform.system():
if os.path.exists("/opt/cray/pe"):
current_platform = spack.platforms.Cray()
elif "Linux" in platform.system():
current_platform = spack.platforms.Linux()
elif "Darwin" in platform.system():
current_platform = spack.platforms.Darwin()
@@ -217,3 +222,28 @@ def test_concretize_target_ranges(root_target_range, dep_target_range, result, m
with spack.concretize.disable_compiler_existence_check():
spec.concretize()
assert spec.target == spec["b"].target == result
@pytest.mark.parametrize(
"versions,default,expected",
[
(["21.11", "21.9"], "21.11", False),
(["21.11", "21.9"], "21.9", True),
(["21.11", "21.9"], None, False),
],
)
@pytest.mark.skipif(sys.platform == "win32", reason="Cray does not use windows")
def test_cray_platform_detection(versions, default, expected, tmpdir, monkeypatch, working_env):
ex_path = str(tmpdir.join("fake_craype_dir"))
fs.mkdirp(ex_path)
with fs.working_dir(ex_path):
for version in versions:
fs.touch(version)
if default:
os.symlink(default, "default")
monkeypatch.setattr(spack.platforms.cray, "_ex_craype_dir", ex_path)
os.environ["MODULEPATH"] = "/opt/cray/pe"
assert spack.platforms.cray.Cray.detect() == expected

View File

@@ -19,8 +19,6 @@
(["missing-dependency"], ["PKG-DIRECTIVES", "PKG-PROPERTIES"]),
# The package use a non existing variant in a depends_on directive
(["wrong-variant-in-depends-on"], ["PKG-DIRECTIVES", "PKG-PROPERTIES"]),
# This package has a GitHub pull request commit patch URL
(["invalid-github-pull-commits-patch-url"], ["PKG-DIRECTIVES", "PKG-PROPERTIES"]),
# This package has a GitHub patch URL without full_index=1
(["invalid-github-patch-url"], ["PKG-DIRECTIVES", "PKG-PROPERTIES"]),
# This package has invalid GitLab patch URLs

View File

@@ -22,7 +22,6 @@
import archspec.cpu
from llnl.util.filesystem import join_path, visit_directory_tree
from llnl.util.symlink import readlink
import spack.binary_distribution as bindist
import spack.caches
@@ -1063,10 +1062,10 @@ def test_tarball_common_prefix(dummy_prefix, tmpdir):
assert set(os.listdir(os.path.join("prefix2", "share"))) == {"file"}
# Relative symlink should still be correct
assert readlink(os.path.join("prefix2", "bin", "relative_app_link")) == "app"
assert os.readlink(os.path.join("prefix2", "bin", "relative_app_link")) == "app"
# Absolute symlink should remain absolute -- this is for relocation to fix up.
assert readlink(os.path.join("prefix2", "bin", "absolute_app_link")) == os.path.join(
assert os.readlink(os.path.join("prefix2", "bin", "absolute_app_link")) == os.path.join(
dummy_prefix, "bin", "app"
)

View File

@@ -14,7 +14,6 @@
import spack.build_environment
import spack.config
import spack.deptypes as dt
import spack.package_base
import spack.spec
import spack.util.spack_yaml as syaml
@@ -556,6 +555,24 @@ def test_build_jobs_defaults():
)
def test_dirty_disable_module_unload(config, mock_packages, working_env, mock_module_cmd):
"""Test that on CRAY platform 'module unload' is not called if the 'dirty'
option is on.
"""
s = spack.spec.Spec("a").concretized()
# If called with "dirty" we don't unload modules, so no calls to the
# `module` function on Cray
spack.build_environment.setup_package(s.package, dirty=True)
assert not mock_module_cmd.calls
# If called without "dirty" we unload modules on Cray
spack.build_environment.setup_package(s.package, dirty=False)
assert mock_module_cmd.calls
assert any(("unload", "cray-libsci") == item[0] for item in mock_module_cmd.calls)
assert any(("unload", "cray-mpich") == item[0] for item in mock_module_cmd.calls)
class TestModuleMonkeyPatcher:
def test_getting_attributes(self, default_mock_concretization):
s = default_mock_concretization("libelf")
@@ -699,21 +716,3 @@ def test_build_system_globals_only_set_on_root_during_build(default_mock_concret
for depth, spec in root.traverse(depth=True, root=True):
for variable in build_variables:
assert hasattr(spec.package.module, variable) == should_be_set(depth)
def test_rpath_with_duplicate_link_deps():
"""If we have two instances of one package in the same link sub-dag, only the newest version is
rpath'ed. This is for runtime support without splicing."""
runtime_1 = spack.spec.Spec("runtime@=1.0")
runtime_2 = spack.spec.Spec("runtime@=2.0")
child = spack.spec.Spec("child@=1.0")
root = spack.spec.Spec("root@=1.0")
root.add_dependency_edge(child, depflag=dt.LINK, virtuals=())
root.add_dependency_edge(runtime_2, depflag=dt.LINK, virtuals=())
child.add_dependency_edge(runtime_1, depflag=dt.LINK, virtuals=())
rpath_deps = spack.build_environment._get_rpath_deps_from_spec(root, transitive_rpaths=True)
assert child in rpath_deps
assert runtime_2 in rpath_deps
assert runtime_1 not in rpath_deps

View File

@@ -12,21 +12,21 @@
def test_build_task_errors(install_mockery):
with pytest.raises(ValueError, match="must be a package"):
inst.BuildTask("abc", None, False, 0, 0, 0, set())
inst.BuildTask("abc", None, False, 0, 0, 0, [])
spec = spack.spec.Spec("trivial-install-test-package")
pkg_cls = spack.repo.PATH.get_pkg_class(spec.name)
with pytest.raises(ValueError, match="must have a concrete spec"):
inst.BuildTask(pkg_cls(spec), None, False, 0, 0, 0, set())
inst.BuildTask(pkg_cls(spec), None, False, 0, 0, 0, [])
spec.concretize()
assert spec.concrete
with pytest.raises(ValueError, match="must have a build request"):
inst.BuildTask(spec.package, None, False, 0, 0, 0, set())
inst.BuildTask(spec.package, None, False, 0, 0, 0, [])
request = inst.BuildRequest(spec.package, {})
with pytest.raises(inst.InstallError, match="Cannot create a build task"):
inst.BuildTask(spec.package, request, False, 0, 0, inst.STATUS_REMOVED, set())
inst.BuildTask(spec.package, request, False, 0, 0, inst.STATUS_REMOVED, [])
def test_build_task_basics(install_mockery):
@@ -36,8 +36,8 @@ def test_build_task_basics(install_mockery):
# Ensure key properties match expectations
request = inst.BuildRequest(spec.package, {})
task = inst.BuildTask(spec.package, request, False, 0, 0, inst.STATUS_ADDED, set())
assert not task.explicit
task = inst.BuildTask(spec.package, request, False, 0, 0, inst.STATUS_ADDED, [])
assert task.explicit # package was "explicitly" requested
assert task.priority == len(task.uninstalled_deps)
assert task.key == (task.priority, task.sequence)
@@ -58,7 +58,7 @@ def test_build_task_strings(install_mockery):
# Ensure key properties match expectations
request = inst.BuildRequest(spec.package, {})
task = inst.BuildTask(spec.package, request, False, 0, 0, inst.STATUS_ADDED, set())
task = inst.BuildTask(spec.package, request, False, 0, 0, inst.STATUS_ADDED, [])
# Cover __repr__
irep = task.__repr__()

View File

@@ -51,7 +51,7 @@ def __init__(self, response_code=200, content_to_read=[]):
self._content = content_to_read
self._read = [False for c in content_to_read]
def open(self, request, data=None, timeout=object()):
def open(self, request):
return self
def getcode(self):

View File

@@ -760,6 +760,7 @@ def test_ci_rebuild_mock_success(
rebuild_env = create_rebuild_env(tmpdir, pkg_name, broken_tests)
monkeypatch.setattr(spack.cmd.ci, "SPACK_COMMAND", "echo")
monkeypatch.setattr(spack.cmd.ci, "MAKE_COMMAND", "echo")
with rebuild_env.env_dir.as_cwd():
activate_rebuild_env(tmpdir, pkg_name, rebuild_env)
@@ -842,6 +843,7 @@ def test_ci_rebuild(
ci_cmd("rebuild", "--tests", fail_on_error=False)
monkeypatch.setattr(spack.cmd.ci, "SPACK_COMMAND", "notcommand")
monkeypatch.setattr(spack.cmd.ci, "MAKE_COMMAND", "notcommand")
monkeypatch.setattr(spack.cmd.ci, "INSTALL_FAIL_CODE", 127)
with rebuild_env.env_dir.as_cwd():

View File

@@ -11,7 +11,6 @@
import spack.caches
import spack.cmd.clean
import spack.environment as ev
import spack.main
import spack.package_base
import spack.stage
@@ -69,20 +68,6 @@ def test_function_calls(command_line, effects, mock_calls_for_clean):
assert mock_calls_for_clean[name] == (1 if name in effects else 0)
def test_env_aware_clean(mock_stage, install_mockery, mutable_mock_env_path, monkeypatch):
e = ev.create("test", with_view=False)
e.add("mpileaks")
e.concretize()
def fail(*args, **kwargs):
raise Exception("This should not have been called")
monkeypatch.setattr(spack.spec.Spec, "concretize", fail)
with e:
clean("mpileaks")
def test_remove_python_cache(tmpdir, monkeypatch):
cache_files = ["file1.pyo", "file2.pyc"]
source_file = "file1.py"

View File

@@ -125,8 +125,18 @@ def print_spack_cc(*args):
print(os.environ.get("CC", ""))
# `module unload cray-libsci` in the test environment causes a failure;
# it does not fail for actual installs
# build_environment.py imports module directly, so we monkeypatch it there
# rather than in module_cmd
def mock_module_noop(*args):
pass
def test_dev_build_drop_in(tmpdir, mock_packages, monkeypatch, install_mockery, working_env):
monkeypatch.setattr(os, "execvp", print_spack_cc)
monkeypatch.setattr(spack.build_environment, "module", mock_module_noop)
with tmpdir.as_cwd():
output = dev_build("-b", "edit", "--drop-in", "sh", "dev-build-test-install@0.0.0")
assert "lib/spack/env" in output

View File

@@ -15,7 +15,6 @@
import llnl.util.filesystem as fs
import llnl.util.link_tree
import llnl.util.tty as tty
from llnl.util.symlink import readlink
import spack.cmd.env
import spack.config
@@ -4415,8 +4414,8 @@ def test_env_view_resolves_identical_file_conflicts(tmp_path, install_mockery, m
# view-file/bin/
# x # expect this x to be linked
assert readlink(tmp_path / "view" / "bin" / "x") == bottom.bin.x
assert readlink(tmp_path / "view" / "bin" / "y") == top.bin.y
assert os.readlink(tmp_path / "view" / "bin" / "x") == bottom.bin.x
assert os.readlink(tmp_path / "view" / "bin" / "y") == top.bin.y
def test_env_view_ignores_different_file_conflicts(tmp_path, install_mockery, mock_fetch):
@@ -4427,4 +4426,4 @@ def test_env_view_ignores_different_file_conflicts(tmp_path, install_mockery, mo
install()
prefix_dependent = e.matching_spec("view-ignore-conflict").prefix
# The dependent's file is linked into the view
assert readlink(tmp_path / "view" / "bin" / "x") == prefix_dependent.bin.x
assert os.readlink(tmp_path / "view" / "bin" / "x") == prefix_dependent.bin.x

View File

@@ -384,18 +384,9 @@ def test_clang_flags():
unsupported_flag_test("cxx17_flag", "clang@3.4")
supported_flag_test("cxx17_flag", "-std=c++1z", "clang@3.5")
supported_flag_test("cxx17_flag", "-std=c++17", "clang@5.0")
unsupported_flag_test("cxx20_flag", "clang@4.0")
supported_flag_test("cxx20_flag", "-std=c++2a", "clang@5.0")
supported_flag_test("cxx20_flag", "-std=c++20", "clang@11.0")
unsupported_flag_test("cxx23_flag", "clang@11.0")
supported_flag_test("cxx23_flag", "-std=c++2b", "clang@12.0")
supported_flag_test("cxx23_flag", "-std=c++23", "clang@17.0")
supported_flag_test("c99_flag", "-std=c99", "clang@3.3")
unsupported_flag_test("c11_flag", "clang@2.0")
supported_flag_test("c11_flag", "-std=c11", "clang@6.1.0")
unsupported_flag_test("c23_flag", "clang@8.0")
supported_flag_test("c23_flag", "-std=c2x", "clang@9.0")
supported_flag_test("c23_flag", "-std=c23", "clang@18.0")
supported_flag_test("cc_pic_flag", "-fPIC", "clang@3.3")
supported_flag_test("cxx_pic_flag", "-fPIC", "clang@3.3")
supported_flag_test("f77_pic_flag", "-fPIC", "clang@3.3")

View File

@@ -3,8 +3,12 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Test detection of compiler version"""
import os
import pytest
import llnl.util.filesystem as fs
import spack.compilers.aocc
import spack.compilers.arm
import spack.compilers.cce
@@ -19,6 +23,7 @@
import spack.compilers.xl
import spack.compilers.xl_r
import spack.util.module_cmd
from spack.operating_systems.cray_frontend import CrayFrontend
@pytest.mark.parametrize(
@@ -408,6 +413,48 @@ def test_xl_version_detection(version_str, expected_version):
assert version == expected_version
@pytest.mark.not_on_windows("Not supported on Windows (yet)")
@pytest.mark.parametrize(
"compiler,version",
[
("gcc", "8.1.0"),
("gcc", "1.0.0-foo"),
("pgi", "19.1"),
("pgi", "19.1a"),
("intel", "9.0.0"),
("intel", "0.0.0-foobar"),
# ('oneapi', '2021.1'),
# ('oneapi', '2021.1-foobar')
],
)
def test_cray_frontend_compiler_detection(compiler, version, tmpdir, monkeypatch, working_env):
"""Test that the Cray frontend properly finds compilers form modules"""
# setup the fake compiler directory
compiler_dir = tmpdir.join(compiler)
compiler_exe = compiler_dir.join("cc").ensure()
fs.set_executable(str(compiler_exe))
# mock modules
def _module(cmd, *args):
module_name = "%s/%s" % (compiler, version)
module_contents = "prepend-path PATH %s" % compiler_dir
if cmd == "avail":
return module_name if compiler in args[0] else ""
if cmd == "show":
return module_contents if module_name in args else ""
monkeypatch.setattr(spack.operating_systems.cray_frontend, "module", _module)
# remove PATH variable
os.environ.pop("PATH", None)
# get a CrayFrontend object
cray_fe_os = CrayFrontend()
paths = cray_fe_os.compiler_search_paths
assert paths == [str(compiler_dir)]
@pytest.mark.parametrize(
"version_str,expected_version",
[

View File

@@ -28,7 +28,7 @@
import spack.variant as vt
from spack.concretize import find_spec
from spack.spec import CompilerSpec, Spec
from spack.version import Version, VersionList, ver
from spack.version import Version, ver
def check_spec(abstract, concrete):
@@ -1914,11 +1914,11 @@ def test_version_weight_and_provenance(self):
libc_offset = 1 if spack.solver.asp.using_libc_compatibility() else 0
criteria = [
(num_specs - 1 - libc_offset, None, "number of packages to build (vs. reuse)"),
(2, 0, "version badness (non roots)"),
(2, 0, "version badness"),
]
for criterion in criteria:
assert criterion in result.criteria, criterion
assert criterion in result.criteria, result_spec
assert result_spec.satisfies("^b@1.0")
@pytest.mark.only_clingo("Use case not supported by the original concretizer")
@@ -2546,53 +2546,6 @@ def test_include_specs_from_externals_and_libcs(
assert result["deprecated-versions"].satisfies("@1.0.0")
@pytest.mark.regression("44085")
@pytest.mark.only_clingo("Use case not supported by the original concretizer")
def test_can_reuse_concrete_externals_for_dependents(self, mutable_config, tmp_path):
"""Test that external specs that are in the DB can be reused. This means they are
preferred to concretizing another external from packages.yaml
"""
packages_yaml = {
"externaltool": {"externals": [{"spec": "externaltool@2.0", "prefix": "/fake/path"}]}
}
mutable_config.set("packages", packages_yaml)
# Concretize with gcc@9 to get a suboptimal spec, since we have gcc@10 available
external_spec = Spec("externaltool@2 %gcc@9").concretized()
assert external_spec.external
root_specs = [Spec("sombrero")]
with spack.config.override("concretizer:reuse", True):
solver = spack.solver.asp.Solver()
setup = spack.solver.asp.SpackSolverSetup()
result, _, _ = solver.driver.solve(setup, root_specs, reuse=[external_spec])
assert len(result.specs) == 1
sombrero = result.specs[0]
assert sombrero["externaltool"].dag_hash() == external_spec.dag_hash()
@pytest.mark.only_clingo("Original concretizer cannot reuse")
def test_cannot_reuse_host_incompatible_libc(self):
"""Test whether reuse concretization correctly fails to reuse a spec with a host
incompatible libc."""
if not spack.solver.asp.using_libc_compatibility():
pytest.skip("This test requires libc nodes")
# We install b@1 ^glibc@2.30, and b@0 ^glibc@2.28. The former is not host compatible, the
# latter is.
fst = Spec("b@1").concretized()
fst._mark_concrete(False)
fst.dependencies("glibc")[0].versions = VersionList(["=2.30"])
fst._mark_concrete(True)
snd = Spec("b@0").concretized()
# The spec b@1 ^glibc@2.30 is "more optimal" than b@0 ^glibc@2.28, but due to glibc
# incompatibility, it should not be reused.
solver = spack.solver.asp.Solver()
setup = spack.solver.asp.SpackSolverSetup()
result, _, _ = solver.driver.solve(setup, [Spec("b")], reuse=[fst, snd])
assert len(result.specs) == 1
assert result.specs[0] == snd
@pytest.fixture()
def duplicates_test_repository():

View File

@@ -1176,46 +1176,3 @@ def test_forward_multi_valued_variant_using_requires(
for constraint in not_expected:
assert not s.satisfies(constraint)
def test_strong_preferences_higher_priority_than_reuse(concretize_scope, mock_packages):
"""Tests that strong preferences have a higher priority than reusing specs."""
reused_spec = Spec("adios2~bzip2").concretized()
reuse_nodes = list(reused_spec.traverse())
root_specs = [Spec("ascent+adios2")]
# Check that without further configuration adios2 is reused
with spack.config.override("concretizer:reuse", True):
solver = spack.solver.asp.Solver()
setup = spack.solver.asp.SpackSolverSetup()
result, _, _ = solver.driver.solve(setup, root_specs, reuse=reuse_nodes)
ascent = result.specs[0]
assert ascent["adios2"].dag_hash() == reused_spec.dag_hash(), ascent
# If we stick a preference, adios2 is not reused
update_packages_config(
"""
packages:
adios2:
prefer:
- "+bzip2"
"""
)
with spack.config.override("concretizer:reuse", True):
solver = spack.solver.asp.Solver()
setup = spack.solver.asp.SpackSolverSetup()
result, _, _ = solver.driver.solve(setup, root_specs, reuse=reuse_nodes)
ascent = result.specs[0]
assert ascent["adios2"].dag_hash() != reused_spec.dag_hash()
assert ascent["adios2"].satisfies("+bzip2")
# A preference is still a preference, so we can override it from input
with spack.config.override("concretizer:reuse", True):
solver = spack.solver.asp.Solver()
setup = spack.solver.asp.SpackSolverSetup()
result, _, _ = solver.driver.solve(
setup, [Spec("ascent+adios2 ^adios2~bzip2")], reuse=reuse_nodes
)
ascent = result.specs[0]
assert ascent["adios2"].dag_hash() == reused_spec.dag_hash(), ascent

View File

@@ -492,7 +492,7 @@ def test_substitute_date(mock_low_high_config):
],
)
def test_parse_install_tree(config_settings, expected, mutable_config):
expected_root = expected[0] or mutable_config.get("config:install_tree:root")
expected_root = expected[0] or spack.store.DEFAULT_INSTALL_TREE_ROOT
expected_unpadded_root = expected[1] or expected_root
expected_proj = expected[2] or spack.directory_layout.default_projections
@@ -575,7 +575,7 @@ def change_fn(self, section):
],
)
def test_parse_install_tree_padded(config_settings, expected, mutable_config):
expected_root = expected[0] or mutable_config.get("config:install_tree:root")
expected_root = expected[0] or spack.store.DEFAULT_INSTALL_TREE_ROOT
expected_unpadded_root = expected[1] or expected_root
expected_proj = expected[2] or spack.directory_layout.default_projections
@@ -761,20 +761,25 @@ def test_internal_config_from_data():
assert config.get("config:checksum", scope="higher") is True
def test_keys_are_ordered(configuration_dir):
def test_keys_are_ordered():
"""Test that keys in Spack YAML files retain their order from the file."""
expected_order = (
"./bin",
"./man",
"./share/man",
"./share/aclocal",
"./lib/pkgconfig",
"./lib64/pkgconfig",
"./share/pkgconfig",
"./",
"bin",
"man",
"share/man",
"share/aclocal",
"lib",
"lib64",
"include",
"lib/pkgconfig",
"lib64/pkgconfig",
"share/pkgconfig",
"",
)
config_scope = spack.config.ConfigScope("modules", configuration_dir.join("site"))
config_scope = spack.config.ConfigScope(
"modules", os.path.join(spack.paths.test_path, "data", "config")
)
data = config_scope.get_section("modules")

View File

@@ -12,7 +12,6 @@
import json
import os
import os.path
import pathlib
import re
import shutil
import stat
@@ -33,7 +32,6 @@
from llnl.util.filesystem import copy_tree, mkdirp, remove_linked_tree, touchp, working_dir
import spack.binary_distribution
import spack.bootstrap.core
import spack.caches
import spack.cmd.buildcache
import spack.compiler
@@ -684,34 +682,36 @@ def configuration_dir(tmpdir_factory, linux_os):
directory path.
"""
tmpdir = tmpdir_factory.mktemp("configurations")
install_tree_root = tmpdir_factory.mktemp("opt")
modules_root = tmpdir_factory.mktemp("share")
tcl_root = modules_root.ensure("modules", dir=True)
lmod_root = modules_root.ensure("lmod", dir=True)
# <test_path>/data/config has mock config yaml files in it
# copy these to the site config.
test_config = pathlib.Path(spack.paths.test_path) / "data" / "config"
shutil.copytree(test_config, tmpdir.join("site"))
test_config = py.path.local(spack.paths.test_path).join("data", "config")
test_config.copy(tmpdir.join("site"))
# Create temporary 'defaults', 'site' and 'user' folders
tmpdir.ensure("user", dir=True)
# Fill out config.yaml, compilers.yaml and modules.yaml templates.
# Slightly modify config.yaml and compilers.yaml
if sys.platform == "win32":
locks = False
else:
locks = True
solver = os.environ.get("SPACK_TEST_SOLVER", "clingo")
locks = sys.platform != "win32"
config = tmpdir.join("site", "config.yaml")
config_template = test_config / "config.yaml"
config.write(config_template.read_text().format(install_tree_root, solver, locks))
config_yaml = test_config.join("config.yaml")
modules_root = tmpdir_factory.mktemp("share")
tcl_root = modules_root.ensure("modules", dir=True)
lmod_root = modules_root.ensure("lmod", dir=True)
content = "".join(config_yaml.read()).format(solver, locks, str(tcl_root), str(lmod_root))
t = tmpdir.join("site", "config.yaml")
t.write(content)
target = str(archspec.cpu.host().family)
compilers = tmpdir.join("site", "compilers.yaml")
compilers_template = test_config / "compilers.yaml"
compilers.write(compilers_template.read_text().format(linux_os=linux_os, target=target))
modules = tmpdir.join("site", "modules.yaml")
modules_template = test_config / "modules.yaml"
modules.write(modules_template.read_text().format(tcl_root, lmod_root))
compilers_yaml = test_config.join("compilers.yaml")
content = "".join(compilers_yaml.read()).format(
linux_os=linux_os, target=str(archspec.cpu.host().family)
)
t = tmpdir.join("site", "compilers.yaml")
t.write(content)
yield tmpdir
@@ -1702,7 +1702,7 @@ def _factory(name, output, subdir=("bin",)):
executable_path = executable_dir / name
if sys.platform == "win32":
executable_path = executable_dir / (name + ".bat")
executable_path.write_text(f"{shebang}{output}\n")
executable_path.write_text(f"{ shebang }{ output }\n")
executable_path.chmod(0o755)
return executable_path
@@ -2053,11 +2053,3 @@ def _true(x):
@pytest.fixture()
def do_not_check_runtimes_on_reuse(monkeypatch):
monkeypatch.setattr(spack.solver.asp, "_has_runtime_dependencies", _true)
@pytest.fixture(autouse=True, scope="session")
def _c_compiler_always_exists():
fn = spack.solver.asp.c_compiler_runs
spack.solver.asp.c_compiler_runs = _true
yield
spack.solver.asp.c_compiler_runs = fn

View File

@@ -1,5 +1,5 @@
concretizer:
reuse: true
reuse: True
targets:
granularity: microarchitectures
host_compatible: false

View File

@@ -1,6 +1,6 @@
config:
install_tree:
root: {0}
root: $spack/opt/spack
template_dirs:
- $spack/share/spack/templates
- $spack/lib/spack/spack/test/data/templates
@@ -13,5 +13,5 @@ config:
ssl_certs: $SSL_CERT_FILE
checksum: true
dirty: false
concretizer: {1}
locks: {2}
concretizer: {0}
locks: {1}

View File

@@ -14,25 +14,29 @@
# ~/.spack/modules.yaml
# -------------------------------------------------------------------------
modules:
default: {}
prefix_inspections:
./bin: [PATH]
./man: [MANPATH]
./share/man: [MANPATH]
./share/aclocal: [ACLOCAL_PATH]
./lib/pkgconfig: [PKG_CONFIG_PATH]
./lib64/pkgconfig: [PKG_CONFIG_PATH]
./share/pkgconfig: [PKG_CONFIG_PATH]
./: [CMAKE_PREFIX_PATH]
default:
roots:
tcl: {0}
lmod: {1}
enable: []
tcl:
all:
autoload: direct
lmod:
all:
autoload: direct
hierarchy:
- mpi
bin:
- PATH
man:
- MANPATH
share/man:
- MANPATH
share/aclocal:
- ACLOCAL_PATH
lib:
- LIBRARY_PATH
- LD_LIBRARY_PATH
lib64:
- LIBRARY_PATH
- LD_LIBRARY_PATH
include:
- CPATH
lib/pkgconfig:
- PKG_CONFIG_PATH
lib64/pkgconfig:
- PKG_CONFIG_PATH
share/pkgconfig:
- PKG_CONFIG_PATH
'':
- CMAKE_PREFIX_PATH

View File

@@ -813,33 +813,3 @@ def test_deconcretize_then_concretize_does_not_error(mutable_mock_env_path, mock
assert len(e.concrete_roots()) == 3
all_root_hashes = set(x.dag_hash() for x in e.concrete_roots())
assert len(all_root_hashes) == 2
@pytest.mark.regression("44216")
@pytest.mark.only_clingo()
def test_root_version_weights_for_old_versions(mutable_mock_env_path, mock_packages):
"""Tests that, when we select two old versions of root specs that have the same version
optimization penalty, both are considered.
"""
mutable_mock_env_path.mkdir()
spack_yaml = mutable_mock_env_path / ev.manifest_name
spack_yaml.write_text(
"""spack:
specs:
# allow any version, but the most recent
- bowtie@:1.3
# allows only the third most recent, so penalty is 2
- gcc@1
concretizer:
unify: true
"""
)
e = ev.Environment(mutable_mock_env_path)
with e:
e.concretize()
bowtie = [x for x in e.concrete_roots() if x.name == "bowtie"][0]
gcc = [x for x in e.concrete_roots() if x.name == "gcc"][0]
assert bowtie.satisfies("@=1.3.0")
assert gcc.satisfies("@=1.0")

View File

@@ -7,7 +7,6 @@
import os
import shutil
import sys
from typing import List, Optional, Union
import py
import pytest
@@ -45,10 +44,12 @@ def _mock_repo(root, namespace):
repodir.ensure(spack.repo.packages_dir_name, dir=True)
yaml = repodir.join("repo.yaml")
yaml.write(
f"""
"""
repo:
namespace: {namespace}
"""
namespace: {0}
""".format(
namespace
)
)
@@ -72,21 +73,53 @@ def _true(*args, **kwargs):
return True
def create_build_task(
pkg: spack.package_base.PackageBase, install_args: Optional[dict] = None
) -> inst.BuildTask:
request = inst.BuildRequest(pkg, {} if install_args is None else install_args)
return inst.BuildTask(pkg, request, False, 0, 0, inst.STATUS_ADDED, set())
def create_build_task(pkg, install_args={}):
"""
Create a build task for the given (concretized) package
Args:
pkg (spack.package_base.PackageBase): concretized package associated with
the task
install_args (dict): dictionary of kwargs (or install args)
Return:
(BuildTask) A basic package build task
"""
request = inst.BuildRequest(pkg, install_args)
return inst.BuildTask(pkg, request, False, 0, 0, inst.STATUS_ADDED, [])
def create_installer(
specs: Union[List[str], List[spack.spec.Spec]], install_args: Optional[dict] = None
) -> inst.PackageInstaller:
"""Create an installer instance for a list of specs or package names that will be
concretized."""
_specs = [spack.spec.Spec(s).concretized() if isinstance(s, str) else s for s in specs]
_install_args = {} if install_args is None else install_args
return inst.PackageInstaller([spec.package for spec in _specs], _install_args)
def create_installer(installer_args):
"""
Create an installer using the concretized spec for each arg
Args:
installer_args (list): the list of (spec name, kwargs) tuples
Return:
spack.installer.PackageInstaller: the associated package installer
"""
const_arg = [(spec.package, kwargs) for spec, kwargs in installer_args]
return inst.PackageInstaller(const_arg)
def installer_args(spec_names, kwargs={}):
"""Return a the installer argument with each spec paired with kwargs
Args:
spec_names (list): list of spec names
kwargs (dict or None): install arguments to apply to all of the specs
Returns:
list: list of (spec, kwargs), the installer constructor argument
"""
arg = []
for name in spec_names:
spec = spack.spec.Spec(name)
spec.concretize()
assert spec.concrete
arg.append((spec, kwargs))
return arg
@pytest.mark.parametrize(
@@ -207,7 +240,8 @@ def test_try_install_from_binary_cache(install_mockery, mock_packages, monkeypat
def test_installer_repr(install_mockery):
installer = create_installer(["trivial-install-test-package"])
const_arg = installer_args(["trivial-install-test-package"], {})
installer = create_installer(const_arg)
irep = installer.__repr__()
assert irep.startswith(installer.__class__.__name__)
@@ -216,7 +250,8 @@ def test_installer_repr(install_mockery):
def test_installer_str(install_mockery):
installer = create_installer(["trivial-install-test-package"])
const_arg = installer_args(["trivial-install-test-package"], {})
installer = create_installer(const_arg)
istr = str(installer)
assert "#tasks=0" in istr
@@ -261,7 +296,8 @@ def _mock_installed(self):
builder.add_package("f")
with spack.repo.use_repositories(builder.root):
installer = create_installer(["a"])
const_arg = installer_args(["a"], {})
installer = create_installer(const_arg)
installer._init_queue()
@@ -295,7 +331,8 @@ def test_check_last_phase_error(install_mockery):
def test_installer_ensure_ready_errors(install_mockery, monkeypatch):
installer = create_installer(["trivial-install-test-package"])
const_arg = installer_args(["trivial-install-test-package"], {})
installer = create_installer(const_arg)
spec = installer.build_requests[0].pkg.spec
fmt = r"cannot be installed locally.*{0}"
@@ -329,7 +366,8 @@ def test_ensure_locked_err(install_mockery, monkeypatch, tmpdir, capsys):
def _raise(lock, timeout=None):
raise RuntimeError(mock_err_msg)
installer = create_installer(["trivial-install-test-package"])
const_arg = installer_args(["trivial-install-test-package"], {})
installer = create_installer(const_arg)
spec = installer.build_requests[0].pkg.spec
monkeypatch.setattr(ulk.Lock, "acquire_read", _raise)
@@ -344,7 +382,8 @@ def _raise(lock, timeout=None):
def test_ensure_locked_have(install_mockery, tmpdir, capsys):
"""Test _ensure_locked when already have lock."""
installer = create_installer(["trivial-install-test-package"], {})
const_arg = installer_args(["trivial-install-test-package"], {})
installer = create_installer(const_arg)
spec = installer.build_requests[0].pkg.spec
pkg_id = inst.package_id(spec)
@@ -380,7 +419,8 @@ def test_ensure_locked_have(install_mockery, tmpdir, capsys):
@pytest.mark.parametrize("lock_type,reads,writes", [("read", 1, 0), ("write", 0, 1)])
def test_ensure_locked_new_lock(install_mockery, tmpdir, lock_type, reads, writes):
pkg_id = "a"
installer = create_installer([pkg_id], {})
const_arg = installer_args([pkg_id], {})
installer = create_installer(const_arg)
spec = installer.build_requests[0].pkg.spec
with tmpdir.as_cwd():
ltype, lock = installer._ensure_locked(lock_type, spec.package)
@@ -399,7 +439,8 @@ def _pl(db, spec, timeout):
return lock
pkg_id = "a"
installer = create_installer([pkg_id], {})
const_arg = installer_args([pkg_id], {})
installer = create_installer(const_arg)
spec = installer.build_requests[0].pkg.spec
monkeypatch.setattr(spack.database.SpecLocker, "lock", _pl)
@@ -469,7 +510,7 @@ def _conc_spec(compiler):
def test_update_tasks_for_compiler_packages_as_compiler(mock_packages, config, monkeypatch):
spec = spack.spec.Spec("trivial-install-test-package").concretized()
installer = inst.PackageInstaller([spec.package], {})
installer = inst.PackageInstaller([(spec.package, {})])
# Add a task to the queue
installer._add_init_task(spec.package, installer.build_requests[0], False, {})
@@ -653,7 +694,8 @@ def test_check_deps_status_install_failure(install_mockery):
for dep in s.traverse(root=False):
spack.store.STORE.failure_tracker.mark(dep)
installer = create_installer(["a"], {})
const_arg = installer_args(["a"], {})
installer = create_installer(const_arg)
request = installer.build_requests[0]
with pytest.raises(inst.InstallError, match="install failure"):
@@ -661,7 +703,8 @@ def test_check_deps_status_install_failure(install_mockery):
def test_check_deps_status_write_locked(install_mockery, monkeypatch):
installer = create_installer(["a"], {})
const_arg = installer_args(["a"], {})
installer = create_installer(const_arg)
request = installer.build_requests[0]
# Ensure the lock is not acquired
@@ -672,7 +715,8 @@ def test_check_deps_status_write_locked(install_mockery, monkeypatch):
def test_check_deps_status_external(install_mockery, monkeypatch):
installer = create_installer(["a"], {})
const_arg = installer_args(["a"], {})
installer = create_installer(const_arg)
request = installer.build_requests[0]
# Mock the dependencies as external so assumed to be installed
@@ -684,7 +728,8 @@ def test_check_deps_status_external(install_mockery, monkeypatch):
def test_check_deps_status_upstream(install_mockery, monkeypatch):
installer = create_installer(["a"], {})
const_arg = installer_args(["a"], {})
installer = create_installer(const_arg)
request = installer.build_requests[0]
# Mock the known dependencies as installed upstream
@@ -702,7 +747,8 @@ def _pkgs(compiler, architecture, pkgs):
spec = spack.spec.Spec("mpi").concretized()
return [(spec.package, True)]
installer = create_installer(["trivial-install-test-package"], {})
const_arg = installer_args(["trivial-install-test-package"], {})
installer = create_installer(const_arg)
request = installer.build_requests[0]
all_deps = defaultdict(set)
@@ -717,7 +763,8 @@ def _pkgs(compiler, architecture, pkgs):
def test_prepare_for_install_on_installed(install_mockery, monkeypatch):
"""Test of _prepare_for_install's early return for installed task path."""
installer = create_installer(["dependent-install"], {})
const_arg = installer_args(["dependent-install"], {})
installer = create_installer(const_arg)
request = installer.build_requests[0]
install_args = {"keep_prefix": True, "keep_stage": True, "restage": False}
@@ -732,7 +779,8 @@ def test_installer_init_requests(install_mockery):
"""Test of installer initial requests."""
spec_name = "dependent-install"
with spack.config.override("config:install_missing_compilers", True):
installer = create_installer([spec_name], {})
const_arg = installer_args([spec_name], {})
installer = create_installer(const_arg)
# There is only one explicit request in this case
assert len(installer.build_requests) == 1
@@ -741,7 +789,8 @@ def test_installer_init_requests(install_mockery):
def test_install_task_use_cache(install_mockery, monkeypatch):
installer = create_installer(["trivial-install-test-package"], {})
const_arg = installer_args(["trivial-install-test-package"], {})
installer = create_installer(const_arg)
request = installer.build_requests[0]
task = create_build_task(request.pkg)
@@ -756,7 +805,8 @@ def test_install_task_add_compiler(install_mockery, monkeypatch, capfd):
def _add(_compilers):
tty.msg(config_msg)
installer = create_installer(["a"], {})
const_arg = installer_args(["a"], {})
installer = create_installer(const_arg)
task = create_build_task(installer.build_requests[0].pkg)
task.compiler = True
@@ -775,7 +825,8 @@ def _add(_compilers):
def test_release_lock_write_n_exception(install_mockery, tmpdir, capsys):
"""Test _release_lock for supposed write lock with exception."""
installer = create_installer(["trivial-install-test-package"], {})
const_arg = installer_args(["trivial-install-test-package"], {})
installer = create_installer(const_arg)
pkg_id = "test"
with tmpdir.as_cwd():
@@ -792,7 +843,8 @@ def test_release_lock_write_n_exception(install_mockery, tmpdir, capsys):
@pytest.mark.parametrize("installed", [True, False])
def test_push_task_skip_processed(install_mockery, installed):
"""Test to ensure skip re-queueing a processed package."""
installer = create_installer(["a"], {})
const_arg = installer_args(["a"], {})
installer = create_installer(const_arg)
assert len(list(installer.build_tasks)) == 0
# Mark the package as installed OR failed
@@ -809,7 +861,8 @@ def test_push_task_skip_processed(install_mockery, installed):
def test_requeue_task(install_mockery, capfd):
"""Test to ensure cover _requeue_task."""
installer = create_installer(["a"], {})
const_arg = installer_args(["a"], {})
installer = create_installer(const_arg)
task = create_build_task(installer.build_requests[0].pkg)
# temporarily set tty debug messages on so we can test output
@@ -839,7 +892,8 @@ def _mktask(pkg):
def _rmtask(installer, pkg_id):
raise RuntimeError("Raise an exception to test except path")
installer = create_installer(["a"], {})
const_arg = installer_args(["a"], {})
installer = create_installer(const_arg)
spec = installer.build_requests[0].pkg.spec
# Cover task removal happy path
@@ -868,7 +922,8 @@ def _chgrp(path, group, follow_symlinks=True):
monkeypatch.setattr(prefs, "get_package_group", _get_group)
monkeypatch.setattr(fs, "chgrp", _chgrp)
installer = create_installer(["trivial-install-test-package"], {})
const_arg = installer_args(["trivial-install-test-package"], {})
installer = create_installer(const_arg)
spec = installer.build_requests[0].pkg.spec
fs.touchp(spec.prefix)
@@ -894,7 +949,8 @@ def test_cleanup_failed_err(install_mockery, tmpdir, monkeypatch, capsys):
def _raise_except(lock):
raise RuntimeError(msg)
installer = create_installer(["trivial-install-test-package"], {})
const_arg = installer_args(["trivial-install-test-package"], {})
installer = create_installer(const_arg)
monkeypatch.setattr(lk.Lock, "release_write", _raise_except)
pkg_id = "test"
@@ -910,7 +966,8 @@ def _raise_except(lock):
def test_update_failed_no_dependent_task(install_mockery):
"""Test _update_failed with missing dependent build tasks."""
installer = create_installer(["dependent-install"], {})
const_arg = installer_args(["dependent-install"], {})
installer = create_installer(const_arg)
spec = installer.build_requests[0].pkg.spec
for dep in spec.traverse(root=False):
@@ -921,7 +978,8 @@ def test_update_failed_no_dependent_task(install_mockery):
def test_install_uninstalled_deps(install_mockery, monkeypatch, capsys):
"""Test install with uninstalled dependencies."""
installer = create_installer(["dependent-install"], {})
const_arg = installer_args(["dependent-install"], {})
installer = create_installer(const_arg)
# Skip the actual installation and any status updates
monkeypatch.setattr(inst.PackageInstaller, "_install_task", _noop)
@@ -938,7 +996,8 @@ def test_install_uninstalled_deps(install_mockery, monkeypatch, capsys):
def test_install_failed(install_mockery, monkeypatch, capsys):
"""Test install with failed install."""
installer = create_installer(["b"], {})
const_arg = installer_args(["b"], {})
installer = create_installer(const_arg)
# Make sure the package is identified as failed
monkeypatch.setattr(spack.database.FailureTracker, "has_failed", _true)
@@ -953,7 +1012,8 @@ def test_install_failed(install_mockery, monkeypatch, capsys):
def test_install_failed_not_fast(install_mockery, monkeypatch, capsys):
"""Test install with failed install."""
installer = create_installer(["a"], {"fail_fast": False})
const_arg = installer_args(["a"], {"fail_fast": False})
installer = create_installer(const_arg)
# Make sure the package is identified as failed
monkeypatch.setattr(spack.database.FailureTracker, "has_failed", _true)
@@ -977,7 +1037,8 @@ def _interrupt(installer, task, install_status, **kwargs):
else:
installer.installed.add(task.pkg.name)
installer = create_installer([spec_name], {})
const_arg = installer_args([spec_name], {})
installer = create_installer(const_arg)
# Raise a KeyboardInterrupt error to trigger early termination
monkeypatch.setattr(inst.PackageInstaller, "_install_task", _interrupt)
@@ -1003,7 +1064,8 @@ def _install(installer, task, install_status, **kwargs):
else:
installer.installed.add(task.pkg.name)
installer = create_installer([spec_name], {})
const_arg = installer_args([spec_name], {})
installer = create_installer(const_arg)
# Raise a KeyboardInterrupt error to trigger early termination
monkeypatch.setattr(inst.PackageInstaller, "_install_task", _install)
@@ -1029,7 +1091,8 @@ def _install(installer, task, install_status, **kwargs):
else:
installer.installed.add(task.pkg.name)
installer = create_installer([spec_name, "a"], {})
const_arg = installer_args([spec_name, "a"], {})
installer = create_installer(const_arg)
# Raise a KeyboardInterrupt error to trigger early termination
monkeypatch.setattr(inst.PackageInstaller, "_install_task", _install)
@@ -1043,21 +1106,25 @@ def _install(installer, task, install_status, **kwargs):
def test_install_fail_fast_on_detect(install_mockery, monkeypatch, capsys):
"""Test fail_fast install when an install failure is detected."""
b, c = spack.spec.Spec("b").concretized(), spack.spec.Spec("c").concretized()
b_id, c_id = inst.package_id(b), inst.package_id(c)
installer = create_installer([b, c], {"fail_fast": True})
const_arg = installer_args(["b"], {"fail_fast": False})
const_arg.extend(installer_args(["c"], {"fail_fast": True}))
installer = create_installer(const_arg)
pkg_ids = [inst.package_id(spec) for spec, _ in const_arg]
# Make sure all packages are identified as failed
# This will prevent b from installing, which will cause the build of c to be skipped.
#
# This will prevent b from installing, which will cause the build of a
# to be skipped.
monkeypatch.setattr(spack.database.FailureTracker, "has_failed", _true)
with pytest.raises(inst.InstallError, match="after first install failure"):
installer.install()
assert b_id in installer.failed, "Expected b to be marked as failed"
assert c_id not in installer.failed, "Expected no attempt to install c"
assert f"{b_id} failed to install" in capsys.readouterr().err
assert pkg_ids[0] in installer.failed, "Expected b to be marked as failed"
assert pkg_ids[1] not in installer.failed, "Expected no attempt to install c"
out = capsys.readouterr()[1]
assert "{0} failed to install".format(pkg_ids[0]) in out
def _test_install_fail_fast_on_except_patch(installer, **kwargs):
@@ -1070,7 +1137,8 @@ def _test_install_fail_fast_on_except_patch(installer, **kwargs):
@pytest.mark.disable_clean_stage_check
def test_install_fail_fast_on_except(install_mockery, monkeypatch, capsys):
"""Test fail_fast install when an install failure results from an error."""
installer = create_installer(["a"], {"fail_fast": True})
const_arg = installer_args(["a"], {"fail_fast": True})
installer = create_installer(const_arg)
# Raise a non-KeyboardInterrupt exception to trigger fast failure.
#
@@ -1093,7 +1161,8 @@ def test_install_lock_failures(install_mockery, monkeypatch, capfd):
def _requeued(installer, task, install_status):
tty.msg("requeued {0}".format(task.pkg.spec.name))
installer = create_installer(["b"], {})
const_arg = installer_args(["b"], {})
installer = create_installer(const_arg)
# Ensure never acquire a lock
monkeypatch.setattr(inst.PackageInstaller, "_ensure_locked", _not_locked)
@@ -1112,19 +1181,20 @@ def _requeued(installer, task, install_status):
def test_install_lock_installed_requeue(install_mockery, monkeypatch, capfd):
"""Cover basic install handling for installed package."""
b = spack.spec.Spec("b").concretized()
const_arg = installer_args(["b"], {})
b, _ = const_arg[0]
installer = create_installer(const_arg)
b_pkg_id = inst.package_id(b)
installer = create_installer([b])
def _prep(installer, task):
installer.installed.add(b_pkg_id)
tty.msg(f"{b_pkg_id} is installed")
tty.msg("{0} is installed".format(b_pkg_id))
# also do not allow the package to be locked again
monkeypatch.setattr(inst.PackageInstaller, "_ensure_locked", _not_locked)
def _requeued(installer, task, install_status):
tty.msg(f"requeued {inst.package_id(task.pkg.spec)}")
tty.msg("requeued {0}".format(inst.package_id(task.pkg.spec)))
# Flag the package as installed
monkeypatch.setattr(inst.PackageInstaller, "_prepare_for_install", _prep)
@@ -1137,8 +1207,9 @@ def _requeued(installer, task, install_status):
assert b_pkg_id not in installer.installed
out = capfd.readouterr()[0]
expected = ["is installed", "read locked", "requeued"]
for exp, ln in zip(expected, capfd.readouterr().out.splitlines()):
for exp, ln in zip(expected, out.split("\n")):
assert exp in ln
@@ -1166,7 +1237,8 @@ def _requeued(installer, task, install_status):
# Ensure don't continually requeue the task
monkeypatch.setattr(inst.PackageInstaller, "_requeue_task", _requeued)
installer = create_installer(["b"], {})
const_arg = installer_args(["b"], {})
installer = create_installer(const_arg)
with pytest.raises(inst.InstallError, match="request failed"):
installer.install()
@@ -1181,19 +1253,25 @@ def _requeued(installer, task, install_status):
def test_install_skip_patch(install_mockery, mock_fetch):
"""Test the path skip_patch install path."""
installer = create_installer(["b"], {"fake": False, "skip_patch": True})
spec_name = "b"
const_arg = installer_args([spec_name], {"fake": False, "skip_patch": True})
installer = create_installer(const_arg)
installer.install()
assert inst.package_id(installer.build_requests[0].pkg.spec) in installer.installed
spec, install_args = const_arg[0]
assert inst.package_id(spec) in installer.installed
def test_install_implicit(install_mockery, mock_fetch):
"""Test the path skip_patch install path."""
spec_name = "trivial-install-test-package"
installer = create_installer([spec_name], {"fake": False})
const_arg = installer_args([spec_name], {"fake": False})
installer = create_installer(const_arg)
pkg = installer.build_requests[0].pkg
assert not create_build_task(pkg, {"explicit": []}).explicit
assert create_build_task(pkg, {"explicit": [pkg.spec.dag_hash()]}).explicit
assert not create_build_task(pkg).explicit
assert not create_build_task(pkg, {"explicit": False}).explicit
assert create_build_task(pkg, {"explicit": True}).explicit
assert create_build_task(pkg).explicit
def test_overwrite_install_backup_success(temporary_store, config, mock_packages, tmpdir):
@@ -1202,7 +1280,8 @@ def test_overwrite_install_backup_success(temporary_store, config, mock_packages
of the original prefix, and leave the original spec marked installed.
"""
# Get a build task. TODO: refactor this to avoid calling internal methods
installer = create_installer(["b"])
const_arg = installer_args(["b"])
installer = create_installer(const_arg)
installer._init_queue()
task = installer._pop_task()
@@ -1262,7 +1341,8 @@ def remove(self, spec):
self.called = True
# Get a build task. TODO: refactor this to avoid calling internal methods
installer = create_installer(["b"])
const_arg = installer_args(["b"])
installer = create_installer(const_arg)
installer._init_queue()
task = installer._pop_task()
@@ -1295,20 +1375,22 @@ def test_term_status_line():
x.clear()
@pytest.mark.parametrize("explicit", [True, False])
def test_single_external_implicit_install(install_mockery, explicit):
@pytest.mark.parametrize(
"explicit_args,is_explicit",
[({"explicit": False}, False), ({"explicit": True}, True), ({}, True)],
)
def test_single_external_implicit_install(install_mockery, explicit_args, is_explicit):
pkg = "trivial-install-test-package"
s = spack.spec.Spec(pkg).concretized()
s.external_path = "/usr"
args = {"explicit": [s.dag_hash()] if explicit else []}
create_installer([s], args).install()
assert spack.store.STORE.db.get_record(pkg).explicit == explicit
create_installer([(s, explicit_args)]).install()
assert spack.store.STORE.db.get_record(pkg).explicit == is_explicit
def test_overwrite_install_does_install_build_deps(install_mockery, mock_fetch):
"""When overwrite installing something from sources, build deps should be installed."""
s = spack.spec.Spec("dtrun3").concretized()
create_installer([s]).install()
create_installer([(s, {})]).install()
# Verify there is a pure build dep
edge = s.edges_to_dependencies(name="dtbuild3").pop()
@@ -1319,7 +1401,7 @@ def test_overwrite_install_does_install_build_deps(install_mockery, mock_fetch):
build_dep.package.do_uninstall()
# Overwrite install the root dtrun3
create_installer([s], {"overwrite": [s.dag_hash()]}).install()
create_installer([(s, {"overwrite": [s.dag_hash()]})]).install()
# Verify that the build dep was also installed.
assert build_dep.installed


@@ -14,7 +14,7 @@
import pytest
import llnl.util.filesystem as fs
from llnl.util.symlink import islink, readlink, symlink
from llnl.util.symlink import islink, symlink
import spack.paths
@@ -181,7 +181,7 @@ def test_symlinks_true(self, stage):
assert os.path.exists("dest/a/b2")
with fs.working_dir("dest/a"):
assert os.path.exists(readlink("b2"))
assert os.path.exists(os.readlink("b2"))
assert os.path.realpath("dest/f/2") == os.path.abspath("dest/a/b/2")
assert os.path.realpath("dest/2") == os.path.abspath("dest/1")
@@ -278,10 +278,10 @@ def test_symlinks_false(self, stage):
def test_allow_broken_symlinks(self, stage):
"""Test installing with a broken symlink."""
with fs.working_dir(str(stage)):
symlink("nonexistant.txt", "source/broken")
fs.install_tree("source", "dest", symlinks=True)
symlink("nonexistant.txt", "source/broken", allow_broken_symlinks=True)
fs.install_tree("source", "dest", symlinks=True, allow_broken_symlinks=True)
assert os.path.islink("dest/broken")
assert not os.path.exists(readlink("dest/broken"))
assert not os.path.exists(os.readlink("dest/broken"))
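
Several hunks in this compare swap `llnl.util.symlink.readlink` for `os.readlink`. The helper's internals are not shown in this diff; as a hedged sketch, its POSIX path reduces to `os.readlink`, with the indirection existing so that Windows link emulation (junctions, hard links) can be handled separately:

```python
import os
import sys

def readlink(path):
    # Sketch only, not the library's actual code. On POSIX a plain
    # os.readlink suffices; the assumed Windows branch would resolve
    # junctions and hard links that os.readlink cannot.
    if sys.platform == "win32":
        raise NotImplementedError("junction and hard-link resolution elided")
    return os.readlink(path)
```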
def test_glob_src(self, stage):
"""Test using a glob as the source."""


@@ -20,7 +20,7 @@ def test_symlink_file(tmpdir):
fd, real_file = tempfile.mkstemp(prefix="real", suffix=".txt", dir=test_dir)
link_file = str(tmpdir.join("link.txt"))
assert os.path.exists(link_file) is False
symlink.symlink(real_file, link_file)
symlink.symlink(source_path=real_file, link_path=link_file)
assert os.path.exists(link_file)
assert symlink.islink(link_file)
@@ -32,12 +32,11 @@ def test_symlink_dir(tmpdir):
real_dir = os.path.join(test_dir, "real_dir")
link_dir = os.path.join(test_dir, "link_dir")
os.mkdir(real_dir)
symlink.symlink(real_dir, link_dir)
symlink.symlink(source_path=real_dir, link_path=link_dir)
assert os.path.exists(link_dir)
assert symlink.islink(link_dir)
@pytest.mark.skipif(sys.platform != "win32", reason="Test is only for Windows")
def test_symlink_source_not_exists(tmpdir):
"""Test the symlink.symlink method for the case where a source path does not exist"""
with tmpdir.as_cwd():
@@ -45,7 +44,7 @@ def test_symlink_source_not_exists(tmpdir):
real_dir = os.path.join(test_dir, "real_dir")
link_dir = os.path.join(test_dir, "link_dir")
with pytest.raises(symlink.SymlinkError):
symlink._windows_symlink(real_dir, link_dir)
symlink.symlink(source_path=real_dir, link_path=link_dir, allow_broken_symlinks=False)
def test_symlink_src_relative_to_link(tmpdir):
@@ -62,16 +61,18 @@ def test_symlink_src_relative_to_link(tmpdir):
fd, real_file = tempfile.mkstemp(prefix="real", suffix=".txt", dir=subdir_2)
link_file = os.path.join(subdir_1, "link.txt")
symlink.symlink(f"b/{os.path.basename(real_file)}", f"a/{os.path.basename(link_file)}")
symlink.symlink(
source_path=f"b/{os.path.basename(real_file)}",
link_path=f"a/{os.path.basename(link_file)}",
)
assert os.path.exists(link_file)
assert symlink.islink(link_file)
# Check dirs
assert not os.path.lexists(link_dir)
symlink.symlink("b", "a/c")
symlink.symlink(source_path="b", link_path="a/c")
assert os.path.lexists(link_dir)
@pytest.mark.skipif(sys.platform != "win32", reason="Test is only for Windows")
def test_symlink_src_not_relative_to_link(tmpdir):
"""Test the symlink.symlink functionality where the source value does not exist relative to
the link and not relative to the cwd. NOTE that this symlink api call is EXPECTED to raise
@@ -87,18 +88,19 @@ def test_symlink_src_not_relative_to_link(tmpdir):
link_file = str(tmpdir.join("link.txt"))
# Expected SymlinkError because source path does not exist relative to link path
with pytest.raises(symlink.SymlinkError):
symlink._windows_symlink(
f"d/{os.path.basename(real_file)}", f"a/{os.path.basename(link_file)}"
symlink.symlink(
source_path=f"d/{os.path.basename(real_file)}",
link_path=f"a/{os.path.basename(link_file)}",
allow_broken_symlinks=False,
)
assert not os.path.exists(link_file)
# Check dirs
assert not os.path.lexists(link_dir)
with pytest.raises(symlink.SymlinkError):
symlink._windows_symlink("d", "a/c")
symlink.symlink(source_path="d", link_path="a/c", allow_broken_symlinks=False)
assert not os.path.lexists(link_dir)
@pytest.mark.skipif(sys.platform != "win32", reason="Test is only for Windows")
def test_symlink_link_already_exists(tmpdir):
"""Test the symlink.symlink method for the case where a link already exists"""
with tmpdir.as_cwd():
@@ -106,10 +108,10 @@ def test_symlink_link_already_exists(tmpdir):
real_dir = os.path.join(test_dir, "real_dir")
link_dir = os.path.join(test_dir, "link_dir")
os.mkdir(real_dir)
symlink._windows_symlink(real_dir, link_dir)
symlink.symlink(real_dir, link_dir, allow_broken_symlinks=False)
assert os.path.exists(link_dir)
with pytest.raises(symlink.SymlinkError):
symlink._windows_symlink(real_dir, link_dir)
symlink.symlink(source_path=real_dir, link_path=link_dir, allow_broken_symlinks=False)
@pytest.mark.skipif(not symlink._windows_can_symlink(), reason="Test requires elevated privileges")
@@ -120,7 +122,7 @@ def test_symlink_win_file(tmpdir):
test_dir = str(tmpdir)
fd, real_file = tempfile.mkstemp(prefix="real", suffix=".txt", dir=test_dir)
link_file = str(tmpdir.join("link.txt"))
symlink.symlink(real_file, link_file)
symlink.symlink(source_path=real_file, link_path=link_file)
# Verify that all expected conditions are met
assert os.path.exists(link_file)
assert symlink.islink(link_file)
@@ -138,7 +140,7 @@ def test_symlink_win_dir(tmpdir):
real_dir = os.path.join(test_dir, "real")
link_dir = os.path.join(test_dir, "link")
os.mkdir(real_dir)
symlink.symlink(real_dir, link_dir)
symlink.symlink(source_path=real_dir, link_path=link_dir)
# Verify that all expected conditions are met
assert os.path.exists(link_dir)
assert symlink.islink(link_dir)
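
Taken together, the symlink-test hunks move between three calling styles. A short summary sketch (argument names come from the hunks; `allow_broken_symlinks` appears on only one side of this diff):

```python
from llnl.util.symlink import symlink, _windows_symlink

# Positional call through the public wrapper.
symlink("real_dir", "link_dir")

# Keyword call with explicit broken-link handling (one side of the diff).
symlink(source_path="real_dir", link_path="link_dir", allow_broken_symlinks=False)

# Direct call to the private Windows helper (the other side of the diff).
_windows_symlink("real_dir", "link_dir")
```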


@@ -7,8 +7,6 @@
import pytest
from llnl.util.symlink import readlink
import spack.cmd.modules
import spack.config
import spack.error
@@ -80,7 +78,7 @@ def test_modules_default_symlink(
link_path = os.path.join(os.path.dirname(mock_module_filename), "default")
assert os.path.islink(link_path)
assert readlink(link_path) == mock_module_filename
assert os.readlink(link_path) == mock_module_filename
generator.remove()
assert not os.path.lexists(link_path)


@@ -151,9 +151,7 @@ class InMemoryOCIRegistry(DummyServer):
A third option is the chunked upload, but it is not implemented here: it typically
incurs a major upload-speed penalty, which is why Spack does not use it."""
def __init__(
self, domain: str, allow_single_post: bool = True, tags_per_page: int = 100
) -> None:
def __init__(self, domain: str, allow_single_post: bool = True) -> None:
super().__init__(domain)
self.router.register("GET", r"/v2/", self.index)
self.router.register("HEAD", r"/v2/(?P<name>.+)/blobs/(?P<digest>.+)", self.head_blob)
@@ -167,9 +165,6 @@ def __init__(
# If True, allow single POST upload; not all registries support this
self.allow_single_post = allow_single_post
# How many tags are returned in a single request
self.tags_per_page = tags_per_page
# Used for POST + PUT upload. This is a map from session ID to image name
self.sessions: Dict[str, str] = {}
@@ -285,34 +280,10 @@ def handle_upload(self, req: Request, name: str, digest: Digest):
return MockHTTPResponse(201, "Created", headers={"Location": f"/v2/{name}/blobs/{digest}"})
def list_tags(self, req: Request, name: str):
# Paginate using Link headers; this was added to the spec in the following commit:
# https://github.com/opencontainers/distribution-spec/commit/2ed79d930ecec11dd755dc8190409a3b10f01ca9
# List all tags, exclude digests.
all_tags = sorted(
_tag for _name, _tag in self.manifests.keys() if _name == name and ":" not in _tag
)
query = urllib.parse.parse_qs(urllib.parse.urlparse(req.full_url).query)
n = int(query["n"][0]) if "n" in query else self.tags_per_page
if "last" in query:
try:
offset = all_tags.index(query["last"][0]) + 1
except ValueError:
return MockHTTPResponse(404, "Not found")
else:
offset = 0
tags = all_tags[offset : offset + n]
if offset + n < len(all_tags):
headers = {"Link": f'</v2/{name}/tags/list?last={tags[-1]}&n={n}>; rel="next"'}
else:
headers = None
return MockHTTPResponse.with_json(200, "OK", headers=headers, body={"tags": tags})
tags = [_tag for _name, _tag in self.manifests.keys() if _name == name and ":" not in _tag]
tags.sort()
return MockHTTPResponse.with_json(200, "OK", body={"tags": tags})
class DummyServerUrllibHandler(urllib.request.BaseHandler):


@@ -6,7 +6,6 @@
import hashlib
import json
import random
import urllib.error
import urllib.parse
import urllib.request
@@ -20,7 +19,6 @@
copy_missing_layers,
get_manifest_and_config,
image_from_mirror,
list_tags,
upload_blob,
upload_manifest,
)
@@ -672,31 +670,3 @@ def test_retry(url, max_retries, expect_failure, expect_requests):
assert len(server.requests) == expect_requests
assert sleep_time == [2**i for i in range(expect_requests - 1)]
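
The backoff assertion above fixes the retry schedule: one request per attempt, with sleeps of 2**0, 2**1, ... between failures. A sketch of a retrying urlopen wrapper consistent with that schedule (the name and the set of retryable exceptions are assumptions; note `HTTPError` is a subclass of `URLError`, so catching the latter covers both):

```python
import time
import urllib.error

def retrying_urlopen(urlopen, max_retries=5, sleep=time.sleep):
    # Sketch: retry transient urllib failures with exponential backoff.
    def wrapper(*args, **kwargs):
        for attempt in range(max_retries):
            try:
                return urlopen(*args, **kwargs)
            except (urllib.error.URLError, TimeoutError):
                if attempt + 1 == max_retries:
                    raise
                sleep(2 ** attempt)  # 1s, 2s, 4s, ...
    return wrapper
```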
def test_list_tags():
# Follows a relatively new rewording of the OCI distribution spec, which is not yet tagged.
# https://github.com/opencontainers/distribution-spec/commit/2ed79d930ecec11dd755dc8190409a3b10f01ca9
N = 20
urlopen = create_opener(InMemoryOCIRegistry("example.com", tags_per_page=5)).open
image = ImageReference.from_string("example.com/image")
to_tag = lambda i: f"tag-{i:02}"
# Create N tags in arbitrary order
_tags_to_create = [to_tag(i) for i in range(N)]
random.shuffle(_tags_to_create)
for tag in _tags_to_create:
upload_manifest(image.with_tag(tag), default_manifest(), tag=True, _urlopen=urlopen)
# list_tags should return all tags from all pages in order
tags = list_tags(image, urlopen)
assert len(tags) == N
assert [to_tag(i) for i in range(N)] == tags
# Test a single request, which should give the first 5 tags
assert json.loads(urlopen(image.tags_url()).read())["tags"] == [to_tag(i) for i in range(5)]
# Test response at an offset, which should exclude the `last` tag.
assert json.loads(urlopen(image.tags_url() + f"?last={to_tag(N - 3)}").read())["tags"] == [
to_tag(i) for i in range(N - 2, N)
]


@@ -0,0 +1,77 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import spack.operating_systems.cray_backend as cray_backend
def test_read_cle_release_file(tmpdir, monkeypatch):
"""test reading the Cray cle-release file"""
cle_release_path = tmpdir.join("cle-release")
with cle_release_path.open("w") as f:
f.write(
"""\
RELEASE=6.0.UP07
BUILD=6.0.7424
DATE=20190611
ARCH=noarch
NETWORK=ari
PATCHSET=35-201906112304
DUMMY=foo=bar
"""
)
monkeypatch.setattr(cray_backend, "_cle_release_file", str(cle_release_path))
attrs = cray_backend.read_cle_release_file()
assert attrs["RELEASE"] == "6.0.UP07"
assert attrs["BUILD"] == "6.0.7424"
assert attrs["DATE"] == "20190611"
assert attrs["ARCH"] == "noarch"
assert attrs["NETWORK"] == "ari"
assert attrs["PATCHSET"] == "35-201906112304"
assert attrs["DUMMY"] == "foo=bar"
assert cray_backend.CrayBackend._detect_crayos_version() == 6
def test_read_clerelease_file(tmpdir, monkeypatch):
"""test reading the Cray clerelease file"""
clerelease_path = tmpdir.join("clerelease")
with clerelease_path.open("w") as f:
f.write("5.2.UP04\n")
monkeypatch.setattr(cray_backend, "_clerelease_file", str(clerelease_path))
v = cray_backend.read_clerelease_file()
assert v == "5.2.UP04"
assert cray_backend.CrayBackend._detect_crayos_version() == 5
def test_cle_release_precedence(tmpdir, monkeypatch):
"""test that cle-release file takes precedence over clerelease file."""
cle_release_path = tmpdir.join("cle-release")
clerelease_path = tmpdir.join("clerelease")
with cle_release_path.open("w") as f:
f.write(
"""\
RELEASE=6.0.UP07
BUILD=6.0.7424
DATE=20190611
ARCH=noarch
NETWORK=ari
PATCHSET=35-201906112304
DUMMY=foo=bar
"""
)
with clerelease_path.open("w") as f:
f.write("5.2.UP04\n")
monkeypatch.setattr(cray_backend, "_clerelease_file", str(clerelease_path))
monkeypatch.setattr(cray_backend, "_cle_release_file", str(cle_release_path))
assert cray_backend.CrayBackend._detect_crayos_version() == 6
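
All three tests rely on the same parsing rule for `cle-release`: one `KEY=VALUE` pair per line, split on the first `=` only, which is why `DUMMY=foo=bar` yields the value `foo=bar`. A minimal sketch of such a parser (the default path is a placeholder, not necessarily the real location):

```python
def read_cle_release_file(path="/etc/opt/cray/release/cle-release"):
    # Sketch: parse KEY=VALUE lines into a dict, splitting on the first '='
    # so values may themselves contain '=' (DUMMY=foo=bar -> "foo=bar").
    attrs = {}
    with open(path) as f:
        for line in f:
            if "=" not in line:
                continue
            key, _, value = line.partition("=")
            attrs[key.strip()] = value.strip()
    return attrs
```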


@@ -16,7 +16,7 @@
import pytest
from llnl.util import filesystem as fs
from llnl.util.symlink import readlink, symlink
from llnl.util.symlink import symlink
import spack.binary_distribution as bindist
import spack.cmd.buildcache as buildcache
@@ -181,12 +181,12 @@ def test_relocate_links(tmpdir):
relocate_links(["to_self", "to_dependency", "to_system"], prefix_to_prefix)
# These two are relocated
assert readlink("to_self") == str(tmpdir.join("new_prefix_a", "file"))
assert readlink("to_dependency") == str(tmpdir.join("new_prefix_b", "file"))
assert os.readlink("to_self") == str(tmpdir.join("new_prefix_a", "file"))
assert os.readlink("to_dependency") == str(tmpdir.join("new_prefix_b", "file"))
# These two are not.
assert readlink("to_system") == system_path
assert readlink("to_self_but_relative") == "relative"
assert os.readlink("to_system") == system_path
assert os.readlink("to_self_but_relative") == "relative"
def test_needs_relocation():


@@ -15,7 +15,6 @@
import pytest
from llnl.util.filesystem import getuid, mkdirp, partition_path, touch, working_dir
from llnl.util.symlink import readlink
import spack.error
import spack.paths
@@ -873,7 +872,7 @@ def _create_files_from_tree(base, tree):
def _create_tree_from_dir_recursive(path):
if os.path.islink(path):
return readlink(path)
return os.readlink(path)
elif os.path.isdir(path):
tree = {}
for name in os.listdir(path):


@@ -272,29 +272,6 @@ def test_breadth_first_versus_depth_first_tree(abstract_specs_chain):
]
@pytest.mark.parametrize("cover", ["nodes", "edges"])
@pytest.mark.parametrize("depth_first", [True, False])
def test_tree_traversal_with_key(cover, depth_first, abstract_specs_chain):
"""Compare two multisource traversals of the same DAG. In one case the DAG consists of unique
Spec instances, in the second case there are identical copies of nodes and edges. Traversal
should be equivalent when nodes are identified by dag_hash."""
a = abstract_specs_chain["chain-a"]
c = abstract_specs_chain["chain-c"]
kwargs = {"cover": cover, "depth_first": depth_first}
dag_hash = lambda s: s.dag_hash()
# Traverse DAG spanned by a unique set of Spec instances
first = traverse.traverse_tree([a, c], key=id, **kwargs)
# Traverse equivalent DAG with copies of Spec instances included, keyed by dag hash.
second = traverse.traverse_tree([a, c.copy()], key=dag_hash, **kwargs)
# Check that the same nodes are discovered at the same depth
node_at_depth_first = [(depth, dag_hash(edge.spec)) for (depth, edge) in first]
node_at_depth_second = [(depth, dag_hash(edge.spec)) for (depth, edge) in second]
assert node_at_depth_first == node_at_depth_second
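
The point of the removed test is that the node identity used for deduplication is pluggable: keyed by `id`, a copied Spec is a distinct node; keyed by `dag_hash`, copies collapse, so both traversals visit the same `(depth, hash)` sequence. In miniature:

```python
# Sketch restating the removed test's core comparison.
dag_hash = lambda s: s.dag_hash()
first = traverse.traverse_tree([a, c], key=id, cover="nodes", depth_first=True)
second = traverse.traverse_tree([a, c.copy()], key=dag_hash, cover="nodes", depth_first=True)
assert [(d, dag_hash(e.spec)) for d, e in first] == [
    (d, dag_hash(e.spec)) for d, e in second
]
```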
def test_breadth_first_versus_depth_first_printing(abstract_specs_chain):
"""Test breadth-first versus depth-first tree printing."""
s = abstract_specs_chain["chain-a"]


@@ -87,13 +87,6 @@ def test_url_strip_name_suffixes(url, version, expected):
59,
"https://github.com/nextflow-io/nextflow/releases/download/v0.20.1/nextflow",
),
(
"hpcviewer",
30,
"2024.02",
51,
"https://gitlab.com/hpctoolkit/hpcviewer/-/releases/2024.02/downloads/hpcviewer.tgz",
),
# Version in stem
("zlib", 24, "1.2.10", 29, "http://zlib.net/fossils/zlib-1.2.10.tar.gz"),
(


@@ -207,29 +207,3 @@ def test_default_download_name_dot_dot():
assert url_util.default_download_filename("https://example.com/.") == "_"
assert url_util.default_download_filename("https://example.com/..") == "_."
assert url_util.default_download_filename("https://example.com/.abcdef") == "_abcdef"
def test_parse_link_rel_next():
parse = url_util.parse_link_rel_next
assert parse(r'</abc>; rel="next"') == "/abc"
assert parse(r'</abc>; x=y; rel="next", </def>; x=y; rel="prev"') == "/abc"
assert parse(r'</abc>; rel="prev"; x=y, </def>; x=y; rel="next"') == "/def"
# example from RFC5988
assert (
parse(
r"""</TheBook/chapter2>; title*=UTF-8'de'letztes%20Kapitel; rel="previous","""
r"""</TheBook/chapter4>; title*=UTF-8'de'n%c3%a4chstes%20Kapitel; rel="next" """
)
== "/TheBook/chapter4"
)
assert (
parse(r"""<https://example.com/example>; key=";a=b, </c/d>; e=f"; rel="next" """)
== "https://example.com/example"
)
assert parse("https://example.com/example") is None
assert parse("<https://example.com/example; broken=broken") is None
assert parse("https://example.com/example; rel=prev") is None
assert parse("https://example.com/example; a=b; c=d; g=h") is None
