Compare commits

1 commit: albestro/o ... bugfix/com

| Author | SHA1 | Date |
|---|---|---|
|  | 55ac6fb542 |  |
8  .github/ISSUE_TEMPLATE/build_error.yml (vendored)
@@ -9,7 +9,7 @@ body:
Thanks for taking the time to report this build failure. To proceed with the report please:
1. Title the issue `Installation issue: <name-of-the-package>`.
2. Provide the information required below.

We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively!
- type: textarea
id: reproduce
@@ -29,9 +29,7 @@ body:
description: |
Please post the error message from spack inside the `<details>` tag below:
value: |
<details><summary>Error message</summary>

<pre>
<details><summary>Error message</summary><pre>
...
</pre></details>
validations:
@@ -55,7 +53,7 @@ body:
Please upload the following files:
* **`spack-build-out.txt`**
* **`spack-build-env.txt`**

They should be present in the stage directory of the failing build. Also upload any `config.log` or similar file if one exists.
- type: markdown
attributes:
6  .github/ISSUE_TEMPLATE/feature_request.yml (vendored)
@@ -1,4 +1,4 @@
name: "\U0001F38A Feature request"
name: "\U0001F38A Feature request"
description: Suggest adding a feature that is not yet in Spack
labels: [feature]
body:
@@ -29,11 +29,13 @@ body:
attributes:
label: General information
options:
- label: I have run `spack --version` and reported the version of Spack
required: true
- label: I have searched the issues of this repo and believe this is not a duplicate
required: true
- type: markdown
attributes:
value: |
If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on [our Slack](https://slack.spack.io/) first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.

Other than that, thanks for taking the time to contribute to Spack!
4  .github/ISSUE_TEMPLATE/test_error.yml (vendored)
@@ -21,9 +21,7 @@ body:
description: |
Please post the error message from spack inside the `<details>` tag below:
value: |
<details><summary>Error message</summary>

<pre>
<details><summary>Error message</summary><pre>
...
</pre></details>
validations:
8  .github/workflows/audit.yaml (vendored)
@@ -19,13 +19,13 @@ jobs:
package-audits:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
- uses: actions/setup-python@57ded4d7d5e986d7296eab16560982c6dd7c923b # @v2
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435 # @v2
with:
python-version: ${{inputs.python_version}}
- name: Install Python packages
run: |
pip install --upgrade pip setuptools pytest coverage[toml]
pip install --upgrade pip six setuptools pytest codecov coverage[toml]
- name: Package audits (with coverage)
if: ${{ inputs.with_coverage == 'true' }}
run: |
@@ -38,7 +38,7 @@ jobs:
run: |
. share/spack/setup-env.sh
$(which spack) audit packages
- uses: codecov/codecov-action@894ff025c7b54547a9a2a1e9f228beae737ad3c2 # @v2.1.0
- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70 # @v2.1.0
if: ${{ inputs.with_coverage == 'true' }}
with:
flags: unittests,linux,audits
22  .github/workflows/bootstrap.yml (vendored)
@@ -24,7 +24,7 @@ jobs:
make patch unzip which xz python3 python3-devel tree \
cmake bison bison-devel libstdc++-static
- name: Checkout
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
with:
fetch-depth: 0
- name: Setup non-root user
@@ -62,7 +62,7 @@ jobs:
make patch unzip xz-utils python3 python3-dev tree \
cmake bison
- name: Checkout
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
with:
fetch-depth: 0
- name: Setup non-root user
@@ -99,7 +99,7 @@ jobs:
bzip2 curl file g++ gcc gfortran git gnupg2 gzip \
make patch unzip xz-utils python3 python3-dev tree
- name: Checkout
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
with:
fetch-depth: 0
- name: Setup non-root user
@@ -133,7 +133,7 @@ jobs:
make patch unzip which xz python3 python3-devel tree \
cmake bison
- name: Checkout
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
with:
fetch-depth: 0
- name: Setup repo
@@ -158,7 +158,7 @@ jobs:
run: |
brew install cmake bison@2.7 tree
- name: Checkout
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
- name: Bootstrap clingo
run: |
source share/spack/setup-env.sh
@@ -179,7 +179,7 @@ jobs:
run: |
brew install tree
- name: Checkout
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
- name: Bootstrap clingo
run: |
set -ex
@@ -204,7 +204,7 @@ jobs:
runs-on: ubuntu-20.04
steps:
- name: Checkout
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
with:
fetch-depth: 0
- name: Setup repo
@@ -247,7 +247,7 @@ jobs:
bzip2 curl file g++ gcc patchelf gfortran git gzip \
make patch unzip xz-utils python3 python3-dev tree
- name: Checkout
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
with:
fetch-depth: 0
- name: Setup non-root user
@@ -283,7 +283,7 @@ jobs:
make patch unzip xz-utils python3 python3-dev tree \
gawk
- name: Checkout
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
with:
fetch-depth: 0
- name: Setup non-root user
@@ -316,7 +316,7 @@ jobs:
# Remove GnuPG since we want to bootstrap it
sudo rm -rf /usr/local/bin/gpg
- name: Checkout
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
- name: Bootstrap GnuPG
run: |
source share/spack/setup-env.sh
@@ -333,7 +333,7 @@ jobs:
# Remove GnuPG since we want to bootstrap it
sudo rm -rf /usr/local/bin/gpg
- name: Checkout
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
- name: Bootstrap GnuPG
run: |
source share/spack/setup-env.sh
12  .github/workflows/build-containers.yml (vendored)
@@ -45,18 +45,12 @@ jobs:
[leap15, 'linux/amd64,linux/arm64,linux/ppc64le', 'opensuse/leap:15'],
[ubuntu-bionic, 'linux/amd64,linux/arm64,linux/ppc64le', 'ubuntu:18.04'],
[ubuntu-focal, 'linux/amd64,linux/arm64,linux/ppc64le', 'ubuntu:20.04'],
[ubuntu-jammy, 'linux/amd64,linux/arm64,linux/ppc64le', 'ubuntu:22.04'],
[almalinux8, 'linux/amd64,linux/arm64,linux/ppc64le', 'almalinux:8'],
[almalinux9, 'linux/amd64,linux/arm64,linux/ppc64le', 'almalinux:9'],
[rockylinux8, 'linux/amd64,linux/arm64', 'rockylinux:8'],
[rockylinux9, 'linux/amd64,linux/arm64,linux/ppc64le', 'rockylinux:9'],
[fedora37, 'linux/amd64,linux/arm64,linux/ppc64le', 'fedora:37'],
[fedora38, 'linux/amd64,linux/arm64,linux/ppc64le', 'fedora:38']]
[ubuntu-jammy, 'linux/amd64,linux/arm64,linux/ppc64le', 'ubuntu:22.04']]
name: Build ${{ matrix.dockerfile[0] }}
if: github.repository == 'spack/spack'
steps:
- name: Checkout
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2

- name: Set Container Tag Normal (Nightly)
run: |
@@ -95,7 +89,7 @@ jobs:
uses: docker/setup-qemu-action@e81a89b1732b9c48d79cd809d8d81d79c4647a18 # @v1

- name: Set up Docker Buildx
uses: docker/setup-buildx-action@4b4e9c3e2d4531116a6f8ba8e71fc6e2cb6e6c8c # @v1
uses: docker/setup-buildx-action@f03ac48505955848960e80bbb68046aa35c7b9e7 # @v1

- name: Log in to GitHub Container Registry
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # @v1
2  .github/workflows/ci.yaml (vendored)
@@ -35,7 +35,7 @@ jobs:
core: ${{ steps.filter.outputs.core }}
packages: ${{ steps.filter.outputs.packages }}
steps:
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2
if: ${{ github.event_name == 'push' }}
with:
fetch-depth: 0
36  .github/workflows/unit_tests.yaml (vendored)
@@ -47,10 +47,10 @@ jobs:
|
||||
on_develop: false
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
|
||||
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2
|
||||
with:
|
||||
fetch-depth: 0
|
||||
- uses: actions/setup-python@57ded4d7d5e986d7296eab16560982c6dd7c923b # @v2
|
||||
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435 # @v2
|
||||
with:
|
||||
python-version: ${{ matrix.python-version }}
|
||||
- name: Install System packages
|
||||
@@ -62,7 +62,7 @@ jobs:
|
||||
cmake bison libbison-dev kcov
|
||||
- name: Install Python packages
|
||||
run: |
|
||||
pip install --upgrade pip setuptools pytest pytest-xdist pytest-cov
|
||||
pip install --upgrade pip six setuptools pytest codecov[toml] pytest-xdist pytest-cov
|
||||
pip install --upgrade flake8 "isort>=4.3.5" "mypy>=0.900" "click" "black"
|
||||
- name: Setup git configuration
|
||||
run: |
|
||||
@@ -87,17 +87,17 @@ jobs:
|
||||
UNIT_TEST_COVERAGE: ${{ matrix.python-version == '3.11' }}
|
||||
run: |
|
||||
share/spack/qa/run-unit-tests
|
||||
- uses: codecov/codecov-action@894ff025c7b54547a9a2a1e9f228beae737ad3c2
|
||||
- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
|
||||
with:
|
||||
flags: unittests,linux,${{ matrix.concretizer }}
|
||||
# Test shell integration
|
||||
shell:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
|
||||
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2
|
||||
with:
|
||||
fetch-depth: 0
|
||||
- uses: actions/setup-python@57ded4d7d5e986d7296eab16560982c6dd7c923b # @v2
|
||||
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435 # @v2
|
||||
with:
|
||||
python-version: '3.11'
|
||||
- name: Install System packages
|
||||
@@ -107,7 +107,7 @@ jobs:
|
||||
sudo apt-get install -y coreutils kcov csh zsh tcsh fish dash bash
|
||||
- name: Install Python packages
|
||||
run: |
|
||||
pip install --upgrade pip setuptools pytest coverage[toml] pytest-xdist
|
||||
pip install --upgrade pip six setuptools pytest codecov coverage[toml] pytest-xdist
|
||||
- name: Setup git configuration
|
||||
run: |
|
||||
# Need this for the git tests to succeed.
|
||||
@@ -118,7 +118,7 @@ jobs:
|
||||
COVERAGE: true
|
||||
run: |
|
||||
share/spack/qa/run-shell-tests
|
||||
- uses: codecov/codecov-action@894ff025c7b54547a9a2a1e9f228beae737ad3c2
|
||||
- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
|
||||
with:
|
||||
flags: shelltests,linux
|
||||
|
||||
@@ -133,7 +133,7 @@ jobs:
|
||||
dnf install -y \
|
||||
bzip2 curl file gcc-c++ gcc gcc-gfortran git gnupg2 gzip \
|
||||
make patch tcl unzip which xz
|
||||
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
|
||||
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2
|
||||
- name: Setup repo and non-root user
|
||||
run: |
|
||||
git --version
|
||||
@@ -151,10 +151,10 @@ jobs:
|
||||
clingo-cffi:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
|
||||
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2
|
||||
with:
|
||||
fetch-depth: 0
|
||||
- uses: actions/setup-python@57ded4d7d5e986d7296eab16560982c6dd7c923b # @v2
|
||||
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435 # @v2
|
||||
with:
|
||||
python-version: '3.11'
|
||||
- name: Install System packages
|
||||
@@ -163,7 +163,7 @@ jobs:
|
||||
sudo apt-get -y install coreutils cvs gfortran graphviz gnupg2 mercurial ninja-build kcov
|
||||
- name: Install Python packages
|
||||
run: |
|
||||
pip install --upgrade pip setuptools pytest coverage[toml] pytest-cov clingo pytest-xdist
|
||||
pip install --upgrade pip six setuptools pytest codecov coverage[toml] pytest-cov clingo pytest-xdist
|
||||
- name: Setup git configuration
|
||||
run: |
|
||||
# Need this for the git tests to succeed.
|
||||
@@ -175,7 +175,7 @@ jobs:
|
||||
SPACK_TEST_SOLVER: clingo
|
||||
run: |
|
||||
share/spack/qa/run-unit-tests
|
||||
- uses: codecov/codecov-action@894ff025c7b54547a9a2a1e9f228beae737ad3c2 # @v2.1.0
|
||||
- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70 # @v2.1.0
|
||||
with:
|
||||
flags: unittests,linux,clingo
|
||||
# Run unit tests on MacOS
|
||||
@@ -185,16 +185,16 @@ jobs:
|
||||
matrix:
|
||||
python-version: ["3.10"]
|
||||
steps:
|
||||
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
|
||||
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2
|
||||
with:
|
||||
fetch-depth: 0
|
||||
- uses: actions/setup-python@57ded4d7d5e986d7296eab16560982c6dd7c923b # @v2
|
||||
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435 # @v2
|
||||
with:
|
||||
python-version: ${{ matrix.python-version }}
|
||||
- name: Install Python packages
|
||||
run: |
|
||||
pip install --upgrade pip setuptools
|
||||
pip install --upgrade pytest coverage[toml] pytest-xdist pytest-cov
|
||||
pip install --upgrade pip six setuptools
|
||||
pip install --upgrade pytest codecov coverage[toml] pytest-xdist pytest-cov
|
||||
- name: Setup Homebrew packages
|
||||
run: |
|
||||
brew install dash fish gcc gnupg2 kcov
|
||||
@@ -210,6 +210,6 @@ jobs:
|
||||
$(which spack) solve zlib
|
||||
common_args=(--dist loadfile --tx '4*popen//python=./bin/spack-tmpconfig python -u ./bin/spack python' -x)
|
||||
$(which spack) unit-test --cov --cov-config=pyproject.toml --cov-report=xml:coverage.xml "${common_args[@]}"
|
||||
- uses: codecov/codecov-action@894ff025c7b54547a9a2a1e9f228beae737ad3c2
|
||||
- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
|
||||
with:
|
||||
flags: unittests,macos
|
||||
|
||||
35  .github/workflows/valid-style.yml (vendored)
@@ -18,8 +18,8 @@ jobs:
|
||||
validate:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
|
||||
- uses: actions/setup-python@57ded4d7d5e986d7296eab16560982c6dd7c923b # @v2
|
||||
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2
|
||||
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435 # @v2
|
||||
with:
|
||||
python-version: '3.11'
|
||||
cache: 'pip'
|
||||
@@ -35,16 +35,16 @@ jobs:
|
||||
style:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
|
||||
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2
|
||||
with:
|
||||
fetch-depth: 0
|
||||
- uses: actions/setup-python@57ded4d7d5e986d7296eab16560982c6dd7c923b # @v2
|
||||
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435 # @v2
|
||||
with:
|
||||
python-version: '3.11'
|
||||
cache: 'pip'
|
||||
- name: Install Python packages
|
||||
run: |
|
||||
python3 -m pip install --upgrade pip setuptools types-six black==23.1.0 mypy isort clingo flake8
|
||||
python3 -m pip install --upgrade pip six setuptools types-six black==23.1.0 mypy isort clingo flake8
|
||||
- name: Setup git configuration
|
||||
run: |
|
||||
# Need this for the git tests to succeed.
|
||||
@@ -58,28 +58,3 @@ jobs:
|
||||
with:
|
||||
with_coverage: ${{ inputs.with_coverage }}
|
||||
python_version: '3.11'
|
||||
# Check that spack can bootstrap the development environment on Python 3.6 - RHEL8
|
||||
bootstrap-dev-rhel8:
|
||||
runs-on: ubuntu-latest
|
||||
container: registry.access.redhat.com/ubi8/ubi
|
||||
steps:
|
||||
- name: Install dependencies
|
||||
run: |
|
||||
dnf install -y \
|
||||
bzip2 curl file gcc-c++ gcc gcc-gfortran git gnupg2 gzip \
|
||||
make patch tcl unzip which xz
|
||||
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
|
||||
- name: Setup repo and non-root user
|
||||
run: |
|
||||
git --version
|
||||
git fetch --unshallow
|
||||
. .github/workflows/setup_git.sh
|
||||
useradd spack-test
|
||||
chown -R spack-test .
|
||||
- name: Bootstrap Spack development environment
|
||||
shell: runuser -u spack-test -- bash {0}
|
||||
run: |
|
||||
source share/spack/setup-env.sh
|
||||
spack -d bootstrap now --dev
|
||||
spack style -t black
|
||||
spack unit-test -V
|
||||
|
||||
97  .github/workflows/windows_python.yml (vendored)
@@ -15,15 +15,15 @@ jobs:
|
||||
unit-tests:
|
||||
runs-on: windows-latest
|
||||
steps:
|
||||
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
|
||||
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
|
||||
with:
|
||||
fetch-depth: 0
|
||||
- uses: actions/setup-python@57ded4d7d5e986d7296eab16560982c6dd7c923b
|
||||
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435
|
||||
with:
|
||||
python-version: 3.9
|
||||
- name: Install Python packages
|
||||
run: |
|
||||
python -m pip install --upgrade pip pywin32 setuptools pytest-cov clingo
|
||||
python -m pip install --upgrade pip six pywin32 setuptools codecov pytest-cov clingo
|
||||
- name: Create local develop
|
||||
run: |
|
||||
./.github/workflows/setup_git.ps1
|
||||
@@ -33,21 +33,21 @@ jobs:
|
||||
./share/spack/qa/validate_last_exit.ps1
|
||||
coverage combine -a
|
||||
coverage xml
|
||||
- uses: codecov/codecov-action@894ff025c7b54547a9a2a1e9f228beae737ad3c2
|
||||
- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
|
||||
with:
|
||||
flags: unittests,windows
|
||||
unit-tests-cmd:
|
||||
runs-on: windows-latest
|
||||
steps:
|
||||
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
|
||||
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
|
||||
with:
|
||||
fetch-depth: 0
|
||||
- uses: actions/setup-python@57ded4d7d5e986d7296eab16560982c6dd7c923b
|
||||
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435
|
||||
with:
|
||||
python-version: 3.9
|
||||
- name: Install Python packages
|
||||
run: |
|
||||
python -m pip install --upgrade pip pywin32 setuptools coverage pytest-cov clingo
|
||||
python -m pip install --upgrade pip six pywin32 setuptools codecov coverage pytest-cov clingo
|
||||
- name: Create local develop
|
||||
run: |
|
||||
./.github/workflows/setup_git.ps1
|
||||
@@ -57,24 +57,99 @@ jobs:
|
||||
./share/spack/qa/validate_last_exit.ps1
|
||||
coverage combine -a
|
||||
coverage xml
|
||||
- uses: codecov/codecov-action@894ff025c7b54547a9a2a1e9f228beae737ad3c2
|
||||
- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
|
||||
with:
|
||||
flags: unittests,windows
|
||||
build-abseil:
|
||||
runs-on: windows-latest
|
||||
steps:
|
||||
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
|
||||
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
|
||||
with:
|
||||
fetch-depth: 0
|
||||
- uses: actions/setup-python@57ded4d7d5e986d7296eab16560982c6dd7c923b
|
||||
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435
|
||||
with:
|
||||
python-version: 3.9
|
||||
- name: Install Python packages
|
||||
run: |
|
||||
python -m pip install --upgrade pip pywin32 setuptools coverage
|
||||
python -m pip install --upgrade pip six pywin32 setuptools codecov coverage
|
||||
- name: Build Test
|
||||
run: |
|
||||
spack compiler find
|
||||
spack external find cmake
|
||||
spack external find ninja
|
||||
spack -d install abseil-cpp
|
||||
# TODO: johnwparent - reduce the size of the installer operations
|
||||
# make-installer:
|
||||
# runs-on: windows-latest
|
||||
# steps:
|
||||
# - name: Disable Windows Symlinks
|
||||
# run: |
|
||||
# git config --global core.symlinks false
|
||||
# shell:
|
||||
# powershell
|
||||
# - uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
|
||||
# with:
|
||||
# fetch-depth: 0
|
||||
# - uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435
|
||||
# with:
|
||||
# python-version: 3.9
|
||||
# - name: Install Python packages
|
||||
# run: |
|
||||
# python -m pip install --upgrade pip six pywin32 setuptools
|
||||
# - name: Add Light and Candle to Path
|
||||
# run: |
|
||||
# $env:WIX >> $GITHUB_PATH
|
||||
# - name: Run Installer
|
||||
# run: |
|
||||
# ./share/spack/qa/setup_spack_installer.ps1
|
||||
# spack make-installer -s . -g SILENT pkg
|
||||
# echo "installer_root=$((pwd).Path)" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf8 -Append
|
||||
# env:
|
||||
# ProgressPreference: SilentlyContinue
|
||||
# - uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb
|
||||
# with:
|
||||
# name: Windows Spack Installer Bundle
|
||||
# path: ${{ env.installer_root }}\pkg\Spack.exe
|
||||
# - uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb
|
||||
# with:
|
||||
# name: Windows Spack Installer
|
||||
# path: ${{ env.installer_root}}\pkg\Spack.msi
|
||||
# execute-installer:
|
||||
# needs: make-installer
|
||||
# runs-on: windows-latest
|
||||
# defaults:
|
||||
# run:
|
||||
# shell: pwsh
|
||||
# steps:
|
||||
# - uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435
|
||||
# with:
|
||||
# python-version: 3.9
|
||||
# - name: Install Python packages
|
||||
# run: |
|
||||
# python -m pip install --upgrade pip six pywin32 setuptools
|
||||
# - name: Setup installer directory
|
||||
# run: |
|
||||
# mkdir -p spack_installer
|
||||
# echo "spack_installer=$((pwd).Path)\spack_installer" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf8 -Append
|
||||
# - uses: actions/download-artifact@v3
|
||||
# with:
|
||||
# name: Windows Spack Installer Bundle
|
||||
# path: ${{ env.spack_installer }}
|
||||
# - name: Execute Bundled Installer
|
||||
# run: |
|
||||
# $proc = Start-Process ${{ env.spack_installer }}\spack.exe "/install /quiet" -Passthru
|
||||
# $handle = $proc.Handle # cache proc.Handle
|
||||
# $proc.WaitForExit();
|
||||
# $LASTEXITCODE
|
||||
# env:
|
||||
# ProgressPreference: SilentlyContinue
|
||||
# - uses: actions/download-artifact@v3
|
||||
# with:
|
||||
# name: Windows Spack Installer
|
||||
# path: ${{ env.spack_installer }}
|
||||
# - name: Execute MSI
|
||||
# run: |
|
||||
# $proc = Start-Process ${{ env.spack_installer }}\spack.msi "/quiet" -Passthru
|
||||
# $handle = $proc.Handle # cache proc.Handle
|
||||
# $proc.WaitForExit();
|
||||
# $LASTEXITCODE
|
||||
|
||||
109  bin/spack.bat
@@ -50,69 +50,24 @@ setlocal enabledelayedexpansion
|
||||
:: flags will always start with '-', e.g. --help or -V
|
||||
:: subcommands will never start with '-'
|
||||
:: everything after the subcommand is an arg
|
||||
|
||||
:: we cannot allow batch "for" loop to directly process CL args
|
||||
:: a number of batch reserved characters are commonly passed to
|
||||
:: spack and allowing batch's "for" method to process the raw inputs
|
||||
:: results in a large number of formatting issues
|
||||
:: instead, treat the entire CLI as one string
|
||||
:: and split by space manually
|
||||
:: capture cl args in variable named cl_args
|
||||
set cl_args=%*
|
||||
:process_cl_args
|
||||
rem tokens=1* returns the first processed token produced
|
||||
rem by tokenizing the input string cl_args on spaces into
|
||||
rem the named variable %%g
|
||||
rem While this make look like a for loop, it only
|
||||
rem executes a single time for each of the cl args
|
||||
rem the actual iterative loop is performed by the
|
||||
rem goto process_cl_args stanza
|
||||
rem we are simply leveraging the "for" method's string
|
||||
rem tokenization
|
||||
for /f "tokens=1*" %%g in ("%cl_args%") do (
|
||||
set t=%%~g
|
||||
rem remainder of string is composed into %%h
|
||||
rem these are the cl args yet to be processed
|
||||
rem assign cl_args var to only the args to be processed
|
||||
rem effectively discarding the current arg %%g
|
||||
rem this will be nul when we have no further tokens to process
|
||||
set cl_args=%%h
|
||||
rem process the first space delineated cl arg
|
||||
rem of this iteration
|
||||
for %%x in (%*) do (
|
||||
set t="%%~x"
|
||||
if "!t:~0,1!" == "-" (
|
||||
if defined _sp_subcommand (
|
||||
rem We already have a subcommand, processing args now
|
||||
if not defined _sp_args (
|
||||
set "_sp_args=!t!"
|
||||
) else (
|
||||
set "_sp_args=!_sp_args! !t!"
|
||||
)
|
||||
:: We already have a subcommand, processing args now
|
||||
set "_sp_args=!_sp_args! !t!"
|
||||
) else (
|
||||
if not defined _sp_flags (
|
||||
set "_sp_flags=!t!"
|
||||
shift
|
||||
) else (
|
||||
set "_sp_flags=!_sp_flags! !t!"
|
||||
shift
|
||||
)
|
||||
set "_sp_flags=!_sp_flags! !t!"
|
||||
shift
|
||||
)
|
||||
) else if not defined _sp_subcommand (
|
||||
set "_sp_subcommand=!t!"
|
||||
shift
|
||||
) else (
|
||||
if not defined _sp_args (
|
||||
set "_sp_args=!t!"
|
||||
shift
|
||||
) else (
|
||||
set "_sp_args=!_sp_args! !t!"
|
||||
shift
|
||||
)
|
||||
set "_sp_args=!_sp_args! !t!"
|
||||
shift
|
||||
)
|
||||
)
|
||||
rem if this is not nil, we have more tokens to process
|
||||
rem start above process again with remaining unprocessed cl args
|
||||
if defined cl_args goto :process_cl_args
|
||||
|
||||
|
||||
:: --help, -h and -V flags don't require further output parsing.
|
||||
:: If we encounter, execute and exit
|
||||
@@ -140,21 +95,31 @@ if not defined _sp_subcommand (
|
||||
|
||||
:: pass parsed variables outside of local scope. Need to do
|
||||
:: this because delayedexpansion can only be set by setlocal
|
||||
endlocal & (
|
||||
set "_sp_flags=%_sp_flags%"
|
||||
set "_sp_args=%_sp_args%"
|
||||
set "_sp_subcommand=%_sp_subcommand%"
|
||||
)
|
||||
|
||||
echo %_sp_flags%>flags
|
||||
echo %_sp_args%>args
|
||||
echo %_sp_subcommand%>subcmd
|
||||
endlocal
|
||||
set /p _sp_subcommand=<subcmd
|
||||
set /p _sp_flags=<flags
|
||||
set /p _sp_args=<args
|
||||
if "%_sp_subcommand%"=="ECHO is off." (set "_sp_subcommand=")
|
||||
if "%_sp_subcommand%"=="ECHO is on." (set "_sp_subcommand=")
|
||||
if "%_sp_flags%"=="ECHO is off." (set "_sp_flags=")
|
||||
if "%_sp_flags%"=="ECHO is on." (set "_sp_flags=")
|
||||
if "%_sp_args%"=="ECHO is off." (set "_sp_args=")
|
||||
if "%_sp_args%"=="ECHO is on." (set "_sp_args=")
|
||||
del subcmd
|
||||
del flags
|
||||
del args
|
||||
|
||||
:: Filter out some commands. For any others, just run the command.
|
||||
if "%_sp_subcommand%" == "cd" (
|
||||
if %_sp_subcommand% == "cd" (
|
||||
goto :case_cd
|
||||
) else if "%_sp_subcommand%" == "env" (
|
||||
) else if %_sp_subcommand% == "env" (
|
||||
goto :case_env
|
||||
) else if "%_sp_subcommand%" == "load" (
|
||||
) else if %_sp_subcommand% == "load" (
|
||||
goto :case_load
|
||||
) else if "%_sp_subcommand%" == "unload" (
|
||||
) else if %_sp_subcommand% == "unload" (
|
||||
goto :case_load
|
||||
) else (
|
||||
goto :default_case
|
||||
@@ -189,20 +154,20 @@ goto :end_switch
|
||||
if NOT defined _sp_args (
|
||||
goto :default_case
|
||||
)
|
||||
|
||||
if NOT "%_sp_args%"=="%_sp_args:--help=%" (
|
||||
set args_no_quote=%_sp_args:"=%
|
||||
if NOT "%args_no_quote%"=="%args_no_quote:--help=%" (
|
||||
goto :default_case
|
||||
) else if NOT "%_sp_args%"=="%_sp_args: -h=%" (
|
||||
) else if NOT "%args_no_quote%"=="%args_no_quote: -h=%" (
|
||||
goto :default_case
|
||||
) else if NOT "%_sp_args%"=="%_sp_args:--bat=%" (
|
||||
) else if NOT "%args_no_quote%"=="%args_no_quote:--bat=%" (
|
||||
goto :default_case
|
||||
) else if NOT "%_sp_args%"=="%_sp_args:deactivate=%" (
|
||||
) else if NOT "%args_no_quote%"=="%args_no_quote:deactivate=%" (
|
||||
for /f "tokens=* USEBACKQ" %%I in (
|
||||
`call python %spack% %_sp_flags% env deactivate --bat %_sp_args:deactivate=%`
|
||||
`call python %spack% %_sp_flags% env deactivate --bat %args_no_quote:deactivate=%`
|
||||
) do %%I
|
||||
) else if NOT "%_sp_args%"=="%_sp_args:activate=%" (
|
||||
) else if NOT "%args_no_quote%"=="%args_no_quote:activate=%" (
|
||||
for /f "tokens=* USEBACKQ" %%I in (
|
||||
`python %spack% %_sp_flags% env activate --bat %_sp_args:activate=%`
|
||||
`python %spack% %_sp_flags% env activate --bat %args_no_quote:activate=%`
|
||||
) do %%I
|
||||
) else (
|
||||
goto :default_case
|
||||
@@ -223,7 +188,7 @@ if defined _sp_args (
|
||||
|
||||
for /f "tokens=* USEBACKQ" %%I in (
|
||||
`python "%spack%" %_sp_flags% %_sp_subcommand% --bat %_sp_args%`) do %%I
|
||||
|
||||
)
|
||||
goto :end_switch
|
||||
|
||||
:case_unload
|
||||
|
||||
@@ -13,9 +13,8 @@ concretizer:
|
||||
# Whether to consider installed packages or packages from buildcaches when
|
||||
# concretizing specs. If `true`, we'll try to use as many installs/binaries
|
||||
# as possible, rather than building. If `false`, we'll always give you a fresh
|
||||
# concretization. If `dependencies`, we'll only reuse dependencies but
|
||||
# give you a fresh concretization for your root specs.
|
||||
reuse: dependencies
|
||||
# concretization.
|
||||
reuse: true
|
||||
# Options that tune which targets are considered for concretization. The
|
||||
# concretization process is very sensitive to the number targets, and the time
|
||||
# needed to reach a solution increases noticeably with the number of targets
|
||||
|
||||
@@ -23,20 +23,8 @@ packages:
|
||||
providers:
|
||||
elf: [libelf]
|
||||
fuse: [macfuse]
|
||||
gl: [apple-gl]
|
||||
glu: [apple-glu]
|
||||
unwind: [apple-libunwind]
|
||||
uuid: [apple-libuuid]
|
||||
apple-gl:
|
||||
buildable: false
|
||||
externals:
|
||||
- spec: apple-gl@4.1.0
|
||||
prefix: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk
|
||||
apple-glu:
|
||||
buildable: false
|
||||
externals:
|
||||
- spec: apple-glu@1.3.0
|
||||
prefix: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk
|
||||
apple-libunwind:
|
||||
buildable: false
|
||||
externals:
|
||||
|
||||
@@ -40,12 +40,13 @@ modules:
|
||||
roots:
|
||||
tcl: $spack/share/spack/modules
|
||||
lmod: $spack/share/spack/lmod
|
||||
# What type of modules to use ("tcl" and/or "lmod")
|
||||
enable: []
|
||||
# What type of modules to use
|
||||
enable:
|
||||
- tcl
|
||||
|
||||
tcl:
|
||||
all:
|
||||
autoload: direct
|
||||
autoload: none
|
||||
|
||||
# Default configurations if lmod is enabled
|
||||
lmod:
|
||||
|
||||
@@ -28,7 +28,7 @@ packages:
|
||||
gl: [glx, osmesa]
|
||||
glu: [mesa-glu, openglu]
|
||||
golang: [go, gcc]
|
||||
go-or-gccgo-bootstrap: [go-bootstrap, gcc]
|
||||
go-external-or-gccgo-bootstrap: [go-bootstrap, gcc]
|
||||
iconv: [libiconv]
|
||||
ipp: [intel-ipp]
|
||||
java: [openjdk, jdk, ibm-java]
|
||||
|
||||
@@ -3,4 +3,3 @@ config:
|
||||
concretizer: clingo
|
||||
build_stage::
|
||||
- '$spack/.staging'
|
||||
stage_name: '{name}-{version}-{hash:7}'
|
||||
|
||||
@@ -19,4 +19,3 @@ packages:
|
||||
- msvc
|
||||
providers:
|
||||
mpi: [msmpi]
|
||||
gl: [wgl]
|
||||
|
||||
@@ -942,7 +942,7 @@ first ``libelf`` above, you would run:
|
||||
|
||||
$ spack load /qmm4kso
|
||||
|
||||
To see which packages that you have loaded to your environment you would
|
||||
To see which packages that you have loaded to your enviornment you would
|
||||
use ``spack find --loaded``.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
@@ -18,7 +18,7 @@ your Spack mirror and then downloaded and installed by others.
|
||||
|
||||
Whenever a mirror provides prebuilt packages, Spack will take these packages
|
||||
into account during concretization and installation, making ``spack install``
|
||||
significantly faster.
|
||||
signficantly faster.
|
||||
|
||||
|
||||
.. note::
|
||||
|
||||
@@ -28,14 +28,11 @@ This package provides the following variants:
|
||||
|
||||
* **cuda_arch**
|
||||
|
||||
This variant supports the optional specification of one or multiple architectures.
|
||||
This variant supports the optional specification of the architecture.
|
||||
Valid values are maintained in the ``cuda_arch_values`` property and
|
||||
are the numeric character equivalent of the compute capability version
|
||||
(e.g., '10' for version 1.0). Each provided value affects associated
|
||||
``CUDA`` dependencies and compiler conflicts.
|
||||
|
||||
The variant builds both PTX code for the _virtual_ architecture
|
||||
(e.g. ``compute_10``) and binary code for the _real_ architecture (e.g. ``sm_10``).
|
||||
|
||||
GPUs and their compute capability versions are listed at
|
||||
https://developer.nvidia.com/cuda-gpus .
|
||||
|
||||
@@ -124,7 +124,7 @@ Using oneAPI Tools Installed by Spack
|
||||
=====================================
|
||||
|
||||
Spack can be a convenient way to install and configure compilers and
|
||||
libraries, even if you do not intend to build a Spack package. If you
|
||||
libaries, even if you do not intend to build a Spack package. If you
|
||||
want to build a Makefile project using Spack-installed oneAPI compilers,
|
||||
then use spack to configure your environment::
|
||||
|
||||
|
||||
@@ -397,7 +397,7 @@ for specifics and examples for ``packages.yaml`` files.
|
||||
|
||||
.. If your system administrator did not provide modules for pre-installed Intel
|
||||
tools, you could do well to ask for them, because installing multiple copies
|
||||
of the Intel tools, as is won't to happen once Spack is in the picture, is
|
||||
of the Intel tools, as is wont to happen once Spack is in the picture, is
|
||||
bound to stretch disk space and patience thin. If you *are* the system
|
||||
administrator and are still new to modules, then perhaps it's best to follow
|
||||
the `next section <Installing Intel tools within Spack_>`_ and install the tools
|
||||
@@ -653,7 +653,7 @@ follow `the next section <intel-install-libs_>`_ instead.
|
||||
* If you specified a custom variant (for example ``+vtune``) you may want to add this as your
|
||||
preferred variant in the packages configuration for the ``intel-parallel-studio`` package
|
||||
as described in :ref:`package-preferences`. Otherwise you will have to specify
|
||||
the variant every time ``intel-parallel-studio`` is being used as ``mkl``, ``fftw`` or ``mpi``
|
||||
the variant everytime ``intel-parallel-studio`` is being used as ``mkl``, ``fftw`` or ``mpi``
|
||||
implementation to avoid pulling in a different variant.
|
||||
|
||||
* To set the Intel compilers for default use in Spack, instead of the usual ``%gcc``,
|
||||
|
||||
@@ -582,7 +582,7 @@ libraries. Make sure not to add modules/packages containing the word
|
||||
"test", as these likely won't end up in the installation directory,
|
||||
or may require test dependencies like pytest to be installed.
|
||||
|
||||
Instead of defining the ``import_modules`` explicitly, only the subset
|
||||
Instead of defining the ``import_modules`` explicity, only the subset
|
||||
of module names to be skipped can be defined by using ``skip_modules``.
|
||||
If a defined module has submodules, they are skipped as well, e.g.,
|
||||
in case the ``plotting`` modules should be excluded from the
|
||||
|
||||
@@ -227,9 +227,6 @@ You can get the name to use for ``<platform>`` by running ``spack arch
|
||||
--platform``. The system config scope has a ``<platform>`` section for
|
||||
sites at which ``/etc`` is mounted on multiple heterogeneous machines.
|
||||
|
||||
|
||||
.. _config-scope-precedence:
|
||||
|
||||
----------------
|
||||
Scope Precedence
|
||||
----------------
|
||||
@@ -242,11 +239,6 @@ lower-precedence settings. Completely ignoring higher-level configuration
|
||||
options is supported with the ``::`` notation for keys (see
|
||||
:ref:`config-overrides` below).
|
||||
|
||||
There are also special notations for string concatenation and precendense override.
|
||||
Using the ``+:`` notation can be used to force *prepending* strings or lists. For lists, this is identical
|
||||
to the default behavior. Using the ``-:`` works similarly, but for *appending* values.
|
||||
:ref:`config-prepend-append`
|
||||
|
||||
^^^^^^^^^^^
|
||||
Simple keys
|
||||
^^^^^^^^^^^
|
||||
@@ -287,47 +279,6 @@ command:
|
||||
- ~/.spack/stage
|
||||
|
||||
|
||||
.. _config-prepend-append:
|
||||
|
||||
^^^^^^^^^^^^^^^^^^^^
|
||||
String Concatenation
|
||||
^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Above, the user ``config.yaml`` *completely* overrides specific settings in the
|
||||
default ``config.yaml``. Sometimes, it is useful to add a suffix/prefix
|
||||
to a path or name. To do this, you can use the ``-:`` notation for *append*
|
||||
string concatenation at the end of a key in a configuration file. For example:
|
||||
|
||||
.. code-block:: yaml
|
||||
:emphasize-lines: 1
|
||||
:caption: ~/.spack/config.yaml
|
||||
|
||||
config:
|
||||
install_tree-: /my/custom/suffix/
|
||||
|
||||
Spack will then append to the lower-precedence configuration under the
|
||||
``install_tree-:`` section:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ spack config get config
|
||||
config:
|
||||
install_tree: /some/other/directory/my/custom/suffix
|
||||
build_stage:
|
||||
- $tempdir/$user/spack-stage
|
||||
- ~/.spack/stage
|
||||
|
||||
|
||||
Similarly, ``+:`` can be used to *prepend* to a path or name:
|
||||
|
||||
.. code-block:: yaml
|
||||
:emphasize-lines: 1
|
||||
:caption: ~/.spack/config.yaml
|
||||
|
||||
config:
|
||||
install_tree+: /my/custom/suffix/
|
||||
|
||||
|
||||
.. _config-overrides:
|
||||
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
@@ -444,120 +444,6 @@ attribute:
|
||||
The minimum version of Singularity required to build a SIF (Singularity Image Format)
|
||||
image from the recipes generated by Spack is ``3.5.3``.
|
||||
|
||||
------------------------------
|
||||
Extending the Jinja2 Templates
|
||||
------------------------------
|
||||
|
||||
The Dockerfile and the Singularity definition file that Spack can generate are based on
|
||||
a few Jinja2 templates that are rendered according to the environment being containerized.
|
||||
Even though Spack allows a great deal of customization by just setting appropriate values for
|
||||
the configuration options, sometimes that is not enough.
|
||||
|
||||
In those cases, a user can directly extend the template that Spack uses to render the image
|
||||
to e.g. set additional environment variables or perform specific operations either before or
|
||||
after a given stage of the build. Let's consider as an example the following structure:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ tree /opt/environment
|
||||
/opt/environment
|
||||
├── data
|
||||
│ └── data.csv
|
||||
├── spack.yaml
|
||||
├── data
|
||||
└── templates
|
||||
└── container
|
||||
└── CustomDockerfile
|
||||
|
||||
containing both the custom template extension and the environment manifest file. To use a custom
|
||||
template, the environment must register the directory containing it, and declare its use under the
|
||||
``container`` configuration:
|
||||
|
||||
.. code-block:: yaml
|
||||
:emphasize-lines: 7-8,12
|
||||
|
||||
spack:
|
||||
specs:
|
||||
- hdf5~mpi
|
||||
concretizer:
|
||||
unify: true
|
||||
config:
|
||||
template_dirs:
|
||||
- /opt/environment/templates
|
||||
container:
|
||||
format: docker
|
||||
depfile: true
|
||||
template: container/CustomDockerfile
|
||||
|
||||
The template extension can override two blocks, named ``build_stage`` and ``final_stage``, similarly to
|
||||
the example below:
|
||||
|
||||
.. code-block::
|
||||
:emphasize-lines: 3,8
|
||||
|
||||
{% extends "container/Dockerfile" %}
|
||||
{% block build_stage %}
|
||||
RUN echo "Start building"
|
||||
{{ super() }}
|
||||
{% endblock %}
|
||||
{% block final_stage %}
|
||||
{{ super() }}
|
||||
COPY data /share/myapp/data
|
||||
{% endblock %}
|
||||
|
||||
The recipe that gets generated contains the two extra instruction that we added in our template extension:
|
||||
|
||||
.. code-block:: Dockerfile
|
||||
:emphasize-lines: 4,43
|
||||
|
||||
# Build stage with Spack pre-installed and ready to be used
|
||||
FROM spack/ubuntu-jammy:latest as builder
|
||||
|
||||
RUN echo "Start building"
|
||||
|
||||
# What we want to install and how we want to install it
|
||||
# is specified in a manifest file (spack.yaml)
|
||||
RUN mkdir /opt/spack-environment \
|
||||
&& (echo "spack:" \
|
||||
&& echo " specs:" \
|
||||
&& echo " - hdf5~mpi" \
|
||||
&& echo " concretizer:" \
|
||||
&& echo " unify: true" \
|
||||
&& echo " config:" \
|
||||
&& echo " template_dirs:" \
|
||||
&& echo " - /tmp/environment/templates" \
|
||||
&& echo " install_tree: /opt/software" \
|
||||
&& echo " view: /opt/view") > /opt/spack-environment/spack.yaml
|
||||
|
||||
# Install the software, remove unnecessary deps
|
||||
RUN cd /opt/spack-environment && spack env activate . && spack concretize && spack env depfile -o Makefile && make -j $(nproc) && spack gc -y
|
||||
|
||||
# Strip all the binaries
|
||||
RUN find -L /opt/view/* -type f -exec readlink -f '{}' \; | \
|
||||
xargs file -i | \
|
||||
grep 'charset=binary' | \
|
||||
grep 'x-executable\|x-archive\|x-sharedlib' | \
|
||||
awk -F: '{print $1}' | xargs strip -s
|
||||
|
||||
# Modifications to the environment that are necessary to run
|
||||
RUN cd /opt/spack-environment && \
|
||||
spack env activate --sh -d . >> /etc/profile.d/z10_spack_environment.sh
|
||||
|
||||
# Bare OS image to run the installed executables
|
||||
FROM ubuntu:22.04
|
||||
|
||||
COPY --from=builder /opt/spack-environment /opt/spack-environment
|
||||
COPY --from=builder /opt/software /opt/software
|
||||
COPY --from=builder /opt/._view /opt/._view
|
||||
COPY --from=builder /opt/view /opt/view
|
||||
COPY --from=builder /etc/profile.d/z10_spack_environment.sh /etc/profile.d/z10_spack_environment.sh
|
||||
|
||||
COPY data /share/myapp/data
|
||||
|
||||
ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l", "-c", "$*", "--" ]
|
||||
CMD [ "/bin/bash" ]
|
||||
|
||||
|
||||
.. _container_config_options:
|
||||
|
||||
-----------------------
|
||||
@@ -578,10 +464,6 @@ to customize the generation of container recipes:
|
||||
- The format of the recipe
|
||||
- ``docker`` or ``singularity``
|
||||
- Yes
|
||||
* - ``depfile``
|
||||
- Whether to use a depfile for installation, or not
|
||||
- True or False (default)
|
||||
- No
|
||||
* - ``images:os``
|
||||
- Operating system used as a base for the image
|
||||
- See :ref:`containers-supported-os`
|
||||
@@ -630,6 +512,14 @@ to customize the generation of container recipes:
|
||||
- System packages needed at run-time
|
||||
- Valid packages for the current OS
|
||||
- No
|
||||
* - ``extra_instructions:build``
|
||||
- Extra instructions (e.g. `RUN`, `COPY`, etc.) at the end of the ``build`` stage
|
||||
- Anything understood by the current ``format``
|
||||
- No
|
||||
* - ``extra_instructions:final``
|
||||
- Extra instructions (e.g. `RUN`, `COPY`, etc.) at the end of the ``final`` stage
|
||||
- Anything understood by the current ``format``
|
||||
- No
|
||||
* - ``labels``
|
||||
- Labels to tag the image
|
||||
- Pairs of key-value strings
|
||||
|
||||
@@ -472,7 +472,7 @@ use my new hook as follows:
|
||||
.. code-block:: python
|
||||
|
||||
def post_log_write(message, level):
|
||||
"""Do something custom with the message and level every time we write
|
||||
"""Do something custom with the messsage and level every time we write
|
||||
to the log
|
||||
"""
|
||||
print('running post_log_write!')
|
||||
|
||||
@@ -368,8 +368,7 @@ Manual compiler configuration
|
||||
|
||||
If auto-detection fails, you can manually configure a compiler by
|
||||
editing your ``~/.spack/<platform>/compilers.yaml`` file. You can do this by running
|
||||
``spack config edit compilers``, which will open the file in
|
||||
:ref:`your favorite editor <controlling-the-editor>`.
|
||||
``spack config edit compilers``, which will open the file in your ``$EDITOR``.
|
||||
|
||||
Each compiler configuration in the file looks like this:
|
||||
|
||||
@@ -1598,8 +1597,8 @@ in a Windows CMD prompt.
|
||||
|
||||
.. note::
|
||||
If you chose to install Spack into a directory on Windows that is set up to require Administrative
|
||||
Privileges, Spack will require elevated privileges to run.
|
||||
Administrative Privileges can be denoted either by default such as
|
||||
Privleges, Spack will require elevated privleges to run.
|
||||
Administrative Privleges can be denoted either by default such as
|
||||
``C:\Program Files``, or aministrator applied administrative restrictions
|
||||
on a directory that spack installs files to such as ``C:\Users``
|
||||
|
||||
@@ -1695,7 +1694,7 @@ Spack console via:
|
||||
|
||||
spack install cpuinfo
|
||||
|
||||
If in the previous step, you did not have CMake or Ninja installed, running the command above should bootstrap both packages
|
||||
If in the previous step, you did not have CMake or Ninja installed, running the command above should boostrap both packages
|
||||
|
||||
"""""""""""""""""""""""""""
|
||||
Windows Compatible Packages
|
||||
|
||||
@@ -13,7 +13,7 @@ The use of module systems to manage user environment in a controlled way
|
||||
is a common practice at HPC centers that is often embraced also by
|
||||
individual programmers on their development machines. To support this
|
||||
common practice Spack integrates with `Environment Modules
|
||||
<http://modules.sourceforge.net/>`_ and `Lmod
|
||||
<http://modules.sourceforge.net/>`_ and `LMod
|
||||
<http://lmod.readthedocs.io/en/latest/>`_ by providing post-install hooks
|
||||
that generate module files and commands to manipulate them.
|
||||
|
||||
@@ -26,8 +26,8 @@ Using module files via Spack
|
||||
----------------------------
|
||||
|
||||
If you have installed a supported module system you should be able to
|
||||
run ``module avail`` to see what module
|
||||
files have been installed. Here is sample output of those programs,
|
||||
run either ``module avail`` or ``use -l spack`` to see what module
|
||||
files have been installed. Here is sample output of those programs,
|
||||
showing lots of installed packages:
|
||||
|
||||
.. code-block:: console
|
||||
@@ -51,7 +51,12 @@ showing lots of installed packages:
|
||||
help2man-1.47.4-gcc-4.8-kcnqmau lua-luaposix-33.4.0-gcc-4.8-mdod2ry netlib-scalapack-2.0.2-gcc-6.3.0-rgqfr6d py-scipy-0.19.0-gcc-6.3.0-kr7nat4 zlib-1.2.11-gcc-6.3.0-7cqp6cj
|
||||
|
||||
The names should look familiar, as they resemble the output from ``spack find``.
|
||||
For example, you could type the following command to load the ``cmake`` module:
|
||||
You *can* use the modules here directly. For example, you could type either of these commands
|
||||
to load the ``cmake`` module:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ use cmake-3.7.2-gcc-6.3.0-fowuuby
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
@@ -88,9 +93,9 @@ the different file formats that can be generated by Spack:
|
||||
+-----------------------------+--------------------+-------------------------------+----------------------------------------------+----------------------+
|
||||
| | **Hook name** | **Default root directory** | **Default template file** | **Compatible tools** |
|
||||
+=============================+====================+===============================+==============================================+======================+
|
||||
| **Tcl - Non-Hierarchical** | ``tcl`` | share/spack/modules | share/spack/templates/modules/modulefile.tcl | Env. Modules/Lmod |
|
||||
| **TCL - Non-Hierarchical** | ``tcl`` | share/spack/modules | share/spack/templates/modules/modulefile.tcl | Env. Modules/LMod |
|
||||
+-----------------------------+--------------------+-------------------------------+----------------------------------------------+----------------------+
|
||||
| **Lua - Hierarchical** | ``lmod`` | share/spack/lmod | share/spack/templates/modules/modulefile.lua | Lmod |
|
||||
| **Lua - Hierarchical** | ``lmod`` | share/spack/lmod | share/spack/templates/modules/modulefile.lua | LMod |
|
||||
+-----------------------------+--------------------+-------------------------------+----------------------------------------------+----------------------+
|
||||
|
||||
|
||||
@@ -391,13 +396,13 @@ name and version for all packages that depend on mpi.
|
||||
|
||||
When specifying module names by projection for Lmod modules, we
|
||||
recommend NOT including names of dependencies (e.g., MPI, compilers)
|
||||
that are already in the Lmod hierarchy.
|
||||
that are already in the LMod hierarchy.
|
||||
|
||||
|
||||
|
||||
.. note::
|
||||
Tcl modules
|
||||
Tcl modules also allow for explicit conflicts between modulefiles.
|
||||
TCL modules
|
||||
TCL modules also allow for explicit conflicts between modulefiles.
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
@@ -421,9 +426,9 @@ that are already in the Lmod hierarchy.
|
||||
|
||||
|
||||
.. note::
|
||||
Lmod hierarchical module files
|
||||
LMod hierarchical module files
|
||||
When ``lmod`` is activated Spack will generate a set of hierarchical lua module
|
||||
files that are understood by Lmod. The hierarchy will always contain the
|
||||
files that are understood by LMod. The hierarchy will always contain the
|
||||
two layers ``Core`` / ``Compiler`` but can be further extended to
|
||||
any of the virtual dependencies present in Spack. A case that could be useful in
|
||||
practice is for instance:
|
||||
@@ -445,7 +450,7 @@ that are already in the Lmod hierarchy.
|
||||
|
||||
that will generate a hierarchy in which the ``lapack`` and ``mpi`` layer can be switched
|
||||
independently. This allows a site to build the same libraries or applications against different
|
||||
implementations of ``mpi`` and ``lapack``, and let Lmod switch safely from one to the
|
||||
implementations of ``mpi`` and ``lapack``, and let LMod switch safely from one to the
|
||||
other.
|
||||
|
||||
All packages built with a compiler in ``core_compilers`` and all
|
||||
@@ -455,12 +460,12 @@ that are already in the Lmod hierarchy.
|
||||
.. warning::
|
||||
Consistency of Core packages
|
||||
The user is responsible for maintining consistency among core packages, as ``core_specs``
|
||||
bypasses the hierarchy that allows Lmod to safely switch between coherent software stacks.
|
||||
bypasses the hierarchy that allows LMod to safely switch between coherent software stacks.
|
||||
|
||||
.. warning::
|
||||
Deep hierarchies and ``lmod spider``
|
||||
For hierarchies that are deeper than three layers ``lmod spider`` may have some issues.
|
||||
See `this discussion on the Lmod project <https://github.com/TACC/Lmod/issues/114>`_.
|
||||
See `this discussion on the LMod project <https://github.com/TACC/Lmod/issues/114>`_.
|
||||
|
||||
""""""""""""""""""""""
|
||||
Select default modules
|
||||
@@ -529,7 +534,7 @@ installed to ``/spack/prefix/foo``, if ``foo`` installs executables to
|
||||
update ``MANPATH``.
|
||||
|
||||
The default list of environment variables in this config section
|
||||
includes ``PATH``, ``MANPATH``, ``ACLOCAL_PATH``, ``PKG_CONFIG_PATH``
|
||||
inludes ``PATH``, ``MANPATH``, ``ACLOCAL_PATH``, ``PKG_CONFIG_PATH``
|
||||
and ``CMAKE_PREFIX_PATH``, as well as ``DYLD_FALLBACK_LIBRARY_PATH``
|
||||
on macOS. On Linux however, the corresponding ``LD_LIBRARY_PATH``
|
||||
variable is *not* set, because it affects the behavior of
|
||||
@@ -629,9 +634,8 @@ by its dependency; when the dependency is autoloaded, the executable will be in
|
||||
PATH. Similarly for scripting languages such as Python, packages and their dependencies
|
||||
have to be loaded together.
|
||||
|
||||
Autoloading is enabled by default for Lmod and Environment Modules. The former
|
||||
has builtin support for through the ``depends_on`` function. The latter uses
|
||||
``module load`` statement to load and track dependencies.
|
||||
Autoloading is enabled by default for LMod, as it has great builtin support for through
|
||||
the ``depends_on`` function. For Environment Modules it is disabled by default.
|
||||
|
||||
Autoloading can also be enabled conditionally:
|
||||
|
||||
@@ -651,14 +655,12 @@ The allowed values for the ``autoload`` statement are either ``none``,
|
||||
``direct`` or ``all``.
|
||||
|
||||
.. note::
|
||||
Tcl prerequisites
|
||||
TCL prerequisites
|
||||
In the ``tcl`` section of the configuration file it is possible to use
|
||||
the ``prerequisites`` directive that accepts the same values as
|
||||
``autoload``. It will produce module files that have a ``prereq``
|
||||
statement, which autoloads dependencies on Environment Modules when its
|
||||
``auto_handling`` configuration option is enabled. If Environment Modules
|
||||
is installed with Spack, ``auto_handling`` is enabled by default starting
|
||||
version 4.2. Otherwise it is enabled by default since version 5.0.
|
||||
statement, which can be used to autoload dependencies in some versions
|
||||
of Environment Modules.
|
||||
|
||||
------------------------
|
||||
Maintaining Module Files
|
||||
|
||||
@@ -229,8 +229,7 @@ always choose to download just one tarball initially, and run
|
||||
|
||||
Spack automatically creates a directory in the appropriate repository,
|
||||
generates a boilerplate template for your package, and opens up the new
|
||||
``package.py`` in your favorite ``$EDITOR`` (see :ref:`controlling-the-editor`
|
||||
for details):
|
||||
``package.py`` in your favorite ``$EDITOR``:
|
||||
|
||||
.. code-block:: python
|
||||
:linenos:
|
||||
@@ -336,31 +335,6 @@ The rest of the tasks you need to do are as follows:
:ref:`installation_process` is
covered in detail later.


.. _controlling-the-editor:

^^^^^^^^^^^^^^^^^^^^^^^^^
Controlling the editor
^^^^^^^^^^^^^^^^^^^^^^^^^

When Spack needs to open an editor for you (e.g., for commands like
:ref:`cmd-spack-create` or :ref:`cmd-spack-edit`), it looks at several environment variables
to figure out what to use. The order of precedence is:

* ``SPACK_EDITOR``: highest precedence, in case you want something specific for Spack;
* ``VISUAL``: standard environment variable for full-screen editors like ``vim`` or ``emacs``;
* ``EDITOR``: older environment variable for your editor.

You can set any of these to the command you want to run, e.g., in ``bash`` you might run
one of these::

   export VISUAL=vim
   export EDITOR="emacs -nw"
   export SPACK_EDITOR=nano

If Spack finds none of these variables set, it will look for ``vim``, ``vi``, ``emacs``,
``nano``, and ``notepad``, in that order.

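To make the precedence concrete, here is a minimal, hypothetical sketch (not
Spack's actual implementation) of how an editor command could be resolved
following the order described above; the function name and fallback handling
are illustrative only:

.. code-block:: python

   import os
   import shutil

   def pick_editor():
       # Honor the documented precedence: SPACK_EDITOR, then VISUAL, then EDITOR.
       for var in ("SPACK_EDITOR", "VISUAL", "EDITOR"):
           command = os.environ.get(var)
           if command:
               return command
       # Otherwise fall back to the first well-known editor found on PATH.
       for fallback in ("vim", "vi", "emacs", "nano", "notepad"):
           if shutil.which(fallback):
               return fallback
       raise RuntimeError("No editor found; set SPACK_EDITOR, VISUAL, or EDITOR")
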
^^^^^^^^^^^^^^^^^^^^^^^^^
Non-downloadable software
^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -604,9 +578,9 @@ add a line like this in the package class:

   url = "http://example.com/foo-1.0.tar.gz"

   version("8.2.1", md5="4136d7b4c04df68b686570afa26988ac")
   version("8.2.0", md5="1c9f62f0778697a09d36121ead88e08e")
   version("8.1.2", md5="d47dd09ed7ae6e7fd6f9a816d7f5fdf6")
   version("8.2.1", "4136d7b4c04df68b686570afa26988ac")
   version("8.2.0", "1c9f62f0778697a09d36121ead88e08e")
   version("8.1.2", "d47dd09ed7ae6e7fd6f9a816d7f5fdf6")

Versions should be listed in descending order, from newest to oldest.

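The hunks above show the same checksums written both positionally and as
explicit ``md5=`` keyword arguments. As a rough, hypothetical sketch (the
package name and homepage are placeholders; the hashes and URLs are taken from
the examples above), a package combining the keyword form with a per-version
URL could look like:

.. code-block:: python

   # Hypothetical example; the package name is a placeholder.
   from spack.package import *

   class Foo(Package):
       """Illustrative package using keyword checksums."""

       homepage = "http://example.com"
       url = "http://example.com/foo-1.0.tar.gz"

       version("8.2.1", md5="4136d7b4c04df68b686570afa26988ac",
               url="http://example.com/foo-8.2.1-special-version.tar.gz")
       version("8.2.0", md5="1c9f62f0778697a09d36121ead88e08e")
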
@@ -695,7 +669,7 @@ of its versions, you can add an explicit URL for a particular version:

.. code-block:: python

   version("8.2.1", md5="4136d7b4c04df68b686570afa26988ac",
   version("8.2.1", "4136d7b4c04df68b686570afa26988ac",
           url="http://example.com/foo-8.2.1-special-version.tar.gz")


@@ -757,7 +731,7 @@ executables and other custom archive types), you can add ``expand=False`` to a

.. code-block:: python

   version("8.2.1", md5="4136d7b4c04df68b686570afa26988ac",
   version("8.2.1", "4136d7b4c04df68b686570afa26988ac",
           url="http://example.com/foo-8.2.1-special-version.sh", expand=False)

When ``expand`` is set to ``False``, Spack sets the current working
@@ -955,10 +929,10 @@ file:
.. code-block:: console

   ==> Checksummed new versions of libelf:
       version("0.8.13", md5="4136d7b4c04df68b686570afa26988ac")
       version("0.8.12", md5="e21f8273d9f5f6d43a59878dc274fec7")
       version("0.8.11", md5="e931910b6d100f6caa32239849947fbf")
       version("0.8.10", md5="9db4d36c283d9790d8fa7df1f4d7b4d9")
       version("0.8.13", "4136d7b4c04df68b686570afa26988ac")
       version("0.8.12", "e21f8273d9f5f6d43a59878dc274fec7")
       version("0.8.11", "e931910b6d100f6caa32239849947fbf")
       version("0.8.10", "9db4d36c283d9790d8fa7df1f4d7b4d9")

By default, Spack will search for new tarball downloads by scraping
the parent directory of the tarball you gave it. So, if your tarball
@@ -1112,8 +1086,8 @@ class-level tarball URL and VCS. For example:

   version("develop", branch="develop")
   version("master", branch="master")
   version("12.12.1", md5="ecd4606fa332212433c98bf950a69cc7")
   version("12.10.1", md5="667333dbd7c0f031d47d7c5511fd0810")
   version("12.12.1", "ecd4606fa332212433c98bf950a69cc7")
   version("12.10.1", "667333dbd7c0f031d47d7c5511fd0810")
   version("12.8.1", "9f37f683ee2b427b5540db8a20ed6b15")

If a package contains both a ``url`` and ``git`` class-level attribute,
@@ -1276,7 +1250,7 @@ checksum.

.. code-block:: python

   version("1.9.5.1.1", md5="d035e4bc704d136db79b43ab371b27d2",
   version("1.9.5.1.1", "d035e4bc704d136db79b43ab371b27d2",
           url="https://www.github.com/jswhit/pyproj/tarball/0be612cc9f972e38b50a90c946a9b353e2ab140f")

.. _hg-fetch:
@@ -1813,7 +1787,7 @@ for instructions on setting up a mirror.
After running ``spack install pgi``, the first thing that will happen is
Spack will create a global license file located at
``$SPACK_ROOT/etc/spack/licenses/pgi/license.dat``. It will then open up the
file using :ref:`your favorite editor <controlling-the-editor>`. It will look like
file using the editor set in ``$EDITOR``, or vi if unset. It will look like
this:

.. code-block:: sh
@@ -2212,7 +2186,7 @@ looks like this:
       homepage = "http://www.openssl.org"
       url = "http://www.openssl.org/source/openssl-1.0.1h.tar.gz"

       version("1.0.1h", md5="8d6d684a9430d5cc98a62a5d8fbda8cf")
       version("1.0.1h", "8d6d684a9430d5cc98a62a5d8fbda8cf")
       depends_on("zlib")

       parallel = False
@@ -2309,7 +2283,7 @@ Spack makes this relatively easy. Let's take a look at the
       url = "http://www.prevanders.net/libdwarf-20130729.tar.gz"
       list_url = homepage

       version("20130729", md5="4cc5e48693f7b93b7aa0261e63c0e21d")
       version("20130729", "4cc5e48693f7b93b7aa0261e63c0e21d")
       ...

       depends_on("libelf")
@@ -2752,6 +2726,29 @@ variant(s) are selected. This may be accomplished with conditional
       extends("python", when="+python")
       ...
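For context, a fuller (purely hypothetical) package illustrating this
conditional ``extends`` pattern could look like the sketch below; the package
name, variant, and extra dependency are made up for illustration:

.. code-block:: python

   # Hypothetical sketch of a conditional extension; names are placeholders.
   from spack.package import *

   class Libbaz(Package):
       """Library with optional Python bindings."""

       variant("python", default=False, description="Build Python bindings")

       # Only behave as a Python extension when the bindings are enabled.
       extends("python", when="+python")
       depends_on("py-setuptools", type="build", when="+python")
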

Sometimes, certain files in one package will conflict with those in
another, which means they cannot both be used in a view at the
same time. In this case, you can tell Spack to ignore those files:

.. code-block:: python

   class PySncosmo(Package):
       ...
       # py-sncosmo binaries are duplicates of those from py-astropy
       extends("python", ignore=r"bin/.*")
       depends_on("py-astropy")
       ...

The code above will prevent everything in the ``$prefix/bin/`` directory
from being linked in a view.

.. note::

   You can call *either* ``depends_on`` or ``extends`` on any one
   package, but not both. For example you cannot both
   ``depends_on("python")`` and ``extends("python")`` in the same
   package. ``extends`` implies ``depends_on``.

-----
Views
-----
@@ -2808,7 +2805,7 @@ supplying a ``depends_on`` call in the package definition. For example:
       homepage = "https://github.com/hpc/mpileaks"
       url = "https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz"

       version("1.0", md5="8838c574b39202a57d7c2d68692718aa")
       version("1.0", "8838c574b39202a57d7c2d68692718aa")

       depends_on("mpi")
       depends_on("adept-utils")
@@ -3001,12 +2998,12 @@ follows:
       def libs(self):
           return find_libraries("libFoo", root=self.home, recursive=True)

       # The header provided by the bar virtual package
       # The header provided by the bar virutal package
       @property
       def bar_headers(self):
           return find_headers("bar/bar.h", root=self.home.include, recursive=False)

       # The library provided by the bar virtual package
       # The libary provided by the bar virtual package
       @property
       def bar_libs(self):
           return find_libraries("libFooBar", root=self.home, recursive=True)

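On the consuming side, a dependent package could then query these properties
through its spec. The following is a hypothetical sketch only (the surrounding
package and flag usage are made up; ``ld_flags`` and ``include_flags`` are the
attributes provided by Spack's library and header list helpers):

.. code-block:: python

   # Hypothetical dependent package consuming the virtual "bar" provider.
   def install(self, spec, prefix):
       bar_link_flags = spec["bar"].libs.ld_flags            # e.g. "-L<prefix>/lib -lFooBar"
       bar_include_flags = spec["bar"].headers.include_flags  # e.g. "-I<prefix>/include"
       # ... pass these flags on to the package's build system ...
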
@@ -9,32 +9,27 @@
CI Pipelines
============

Spack provides commands that support generating and running automated build pipelines in CI instances. At the highest
level it works like this: provide a spack environment describing the set of packages you care about, and include a
description of how those packages should be mapped to Gitlab runners. Spack can then generate a ``.gitlab-ci.yml``
file containing job descriptions for all your packages that can be run by a properly configured CI instance. When
run, the generated pipeline will build and deploy binaries, and it can optionally report to a CDash instance
Spack provides commands that support generating and running automated build
pipelines designed for Gitlab CI. At the highest level it works like this:
provide a spack environment describing the set of packages you care about,
and include within that environment file a description of how those packages
should be mapped to Gitlab runners. Spack can then generate a ``.gitlab-ci.yml``
file containing job descriptions for all your packages that can be run by a
properly configured Gitlab CI instance. When run, the generated pipeline will
build and deploy binaries, and it can optionally report to a CDash instance
regarding the health of the builds as they evolve over time.

------------------------------
Getting started with pipelines
------------------------------

To get started with automated build pipelines a Gitlab instance with version ``>= 12.9``
(more about Gitlab CI `here <https://about.gitlab.com/product/continuous-integration/>`_)
with at least one `runner <https://docs.gitlab.com/runner/>`_ configured is required. This
can be done quickly by setting up a local Gitlab instance.
It is fairly straightforward to get started with automated build pipelines. At
a minimum, you'll need to set up a Gitlab instance (more about Gitlab CI
`here <https://about.gitlab.com/product/continuous-integration/>`_) and configure
at least one `runner <https://docs.gitlab.com/runner/>`_. Then the basic steps
for setting up a build pipeline are as follows:

It is possible to set up pipelines on gitlab.com, but the builds there are limited to
60 minutes and generic hardware. It is possible to
`hook up <https://about.gitlab.com/blog/2018/04/24/getting-started-gitlab-ci-gcp>`_
Gitlab to Google Kubernetes Engine (`GKE <https://cloud.google.com/kubernetes-engine/>`_)
or Amazon Elastic Kubernetes Service (`EKS <https://aws.amazon.com/eks>`_), though those
topics are outside the scope of this document.

After setting up a Gitlab instance for running CI, the basic steps for setting up a build pipeline are as follows:

#. Create a repository in the Gitlab instance with CI and a runner enabled.
#. Create a repository on your gitlab instance
#. Add a ``spack.yaml`` at the root containing your pipeline environment
#. Add a ``.gitlab-ci.yml`` at the root containing two jobs (one to generate
   the pipeline dynamically, and one to run the generated jobs).
@@ -45,6 +40,13 @@ See the :ref:`functional_example` section for a minimal working example. See al
the :ref:`custom_Workflow` section for a link to an example of a custom workflow
based on spack pipelines.

While it is possible to set up pipelines on gitlab.com, as illustrated above, the
builds there are limited to 60 minutes and generic hardware. It is also possible to
`hook up <https://about.gitlab.com/blog/2018/04/24/getting-started-gitlab-ci-gcp>`_
Gitlab to Google Kubernetes Engine (`GKE <https://cloud.google.com/kubernetes-engine/>`_)
or Amazon Elastic Kubernetes Service (`EKS <https://aws.amazon.com/eks>`_), though those
topics are outside the scope of this document.

Spack's pipelines are now making use of the
`trigger <https://docs.gitlab.com/ee/ci/yaml/#trigger>`_ syntax to run
dynamically generated
@@ -130,35 +132,29 @@ And here's the spack environment built by the pipeline represented as a
|
||||
|
||||
mirrors: { "mirror": "s3://spack-public/mirror" }
|
||||
|
||||
ci:
|
||||
gitlab-ci:
|
||||
before_script:
|
||||
- git clone ${SPACK_REPO}
|
||||
- pushd spack && git checkout ${SPACK_CHECKOUT_VERSION} && popd
|
||||
- . "./spack/share/spack/setup-env.sh"
|
||||
script:
|
||||
- pushd ${SPACK_CONCRETE_ENV_DIR} && spack env activate --without-view . && popd
|
||||
- spack -d ci rebuild
|
||||
mappings:
|
||||
- match: ["os=ubuntu18.04"]
|
||||
runner-attributes:
|
||||
image:
|
||||
name: ghcr.io/scottwittenburg/ecpe4s-ubuntu18.04-runner-x86_64:2020-09-01
|
||||
entrypoint: [""]
|
||||
tags:
|
||||
- docker
|
||||
enable-artifacts-buildcache: True
|
||||
rebuild-index: False
|
||||
pipeline-gen:
|
||||
- any-job:
|
||||
before_script:
|
||||
- git clone ${SPACK_REPO}
|
||||
- pushd spack && git checkout ${SPACK_CHECKOUT_VERSION} && popd
|
||||
- . "./spack/share/spack/setup-env.sh"
|
||||
- build-job:
|
||||
tags: [docker]
|
||||
image:
|
||||
name: ghcr.io/scottwittenburg/ecpe4s-ubuntu18.04-runner-x86_64:2020-09-01
|
||||
entrypoint: [""]
|
||||
|
||||
|
||||
The elements of this file important to spack ci pipelines are described in more
|
||||
detail below, but there are a couple of things to note about the above working
|
||||
example:
|
||||
|
||||
.. note::
   There is no ``script`` attribute specified here. The reason for this is that
   Spack CI will automatically generate reasonable default scripts. More
   detail on what is in these scripts can be found below.

   Also notice the ``before_script`` section. It is required when using any of the
   default scripts to source the ``setup-env.sh`` script in order to inform
   the default scripts where to find the ``spack`` executable.
||||
Normally ``enable-artifacts-buildcache`` is not recommended in production as it
|
||||
results in large binary artifacts getting transferred back and forth between
|
||||
gitlab and the runners. But in this example on gitlab.com where there is no
|
||||
@@ -178,7 +174,7 @@ during subsequent pipeline runs.
|
||||
With the addition of reproducible builds (#22887) a previously working
|
||||
pipeline will require some changes:
|
||||
|
||||
* In the build-jobs, the environment location changed.
|
||||
* In the build jobs (``runner-attributes``), the environment location changed.
|
||||
This will typically show as a ``KeyError`` in the failing job. Be sure to
|
||||
point to ``${SPACK_CONCRETE_ENV_DIR}``.
|
||||
|
||||
@@ -200,9 +196,9 @@ ci pipelines. These commands are covered in more detail in this section.
|
||||
|
||||
.. _cmd-spack-ci:
|
||||
|
||||
^^^^^^^^^^^^
|
||||
^^^^^^^^^^^^^^^^^^
|
||||
``spack ci``
|
||||
^^^^^^^^^^^^
|
||||
^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Super-command for functionality related to generating pipelines and executing
|
||||
pipeline jobs.
|
||||
@@ -231,7 +227,7 @@ Using ``--prune-dag`` or ``--no-prune-dag`` configures whether or not jobs are
|
||||
generated for specs that are already up to date on the mirror. If enabling
|
||||
DAG pruning using ``--prune-dag``, more information may be required in your
|
||||
``spack.yaml`` file, see the :ref:`noop_jobs` section below regarding
|
||||
``noop-job``.
|
||||
``service-job-attributes``.
|
||||
|
||||
The optional ``--check-index-only`` argument can be used to speed up pipeline
|
||||
generation by telling spack to consider only remote buildcache indices when
|
||||
@@ -267,11 +263,11 @@ generated by jobs in the pipeline.
|
||||
|
||||
.. _cmd-spack-ci-rebuild:
|
||||
|
||||
^^^^^^^^^^^^^^^^^^^^
|
||||
^^^^^^^^^^^^^^^^^^^^^
|
||||
``spack ci rebuild``
|
||||
^^^^^^^^^^^^^^^^^^^^
|
||||
^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
The purpose of ``spack ci rebuild`` is to take an assigned
|
||||
The purpose of ``spack ci rebuild`` is straightforward: take its assigned
|
||||
spec and ensure a binary of a successful build exists on the target mirror.
|
||||
If the binary does not already exist, it is built from source and pushed
|
||||
to the mirror. The associated stand-alone tests are optionally run against
|
||||
@@ -284,7 +280,7 @@ directory. The script is run in a job to install the spec from source. The
|
||||
resulting binary package is pushed to the mirror. If ``cdash`` is configured
|
||||
for the environment, then the build results will be uploaded to the site.
|
||||
|
||||
Environment variables and values in the ``ci::pipeline-gen`` section of the
|
||||
Environment variables and values in the ``gitlab-ci`` section of the
|
||||
``spack.yaml`` environment file provide inputs to this process. The
|
||||
two main sources of environment variables are variables written into
|
||||
``.gitlab-ci.yml`` by ``spack ci generate`` and the GitLab CI runtime.
|
||||
@@ -302,23 +298,21 @@ A snippet from an example ``spack.yaml`` file illustrating use of this
|
||||
option *and* specification of a package with broken tests is given below.
|
||||
The inclusion of a spec for building ``gptune`` is not shown here. Note
|
||||
that ``--tests`` is passed to ``spack ci rebuild`` as part of the
|
||||
``build-job`` script.
|
||||
``gitlab-ci`` script.
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
ci:
|
||||
pipeline-gen:
|
||||
- build-job
|
||||
script:
|
||||
- . "./share/spack/setup-env.sh"
|
||||
- spack --version
|
||||
- cd ${SPACK_CONCRETE_ENV_DIR}
|
||||
- spack env activate --without-view .
|
||||
- spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
|
||||
- mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
|
||||
- if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
|
||||
- if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
|
||||
- spack -d ci rebuild --tests > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)
|
||||
gitlab-ci:
|
||||
script:
|
||||
- . "./share/spack/setup-env.sh"
|
||||
- spack --version
|
||||
- cd ${SPACK_CONCRETE_ENV_DIR}
|
||||
- spack env activate --without-view .
|
||||
- spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
|
||||
- mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
|
||||
- if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
|
||||
- if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
|
||||
- spack -d ci rebuild --tests > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)
|
||||
|
||||
broken-tests-packages:
|
||||
- gptune
|
||||
@@ -360,31 +354,113 @@ arguments you can pass to ``spack ci reproduce-build`` in order to reproduce
|
||||
a particular build locally.
|
||||
|
||||
------------------------------------
|
||||
Job Types
|
||||
A pipeline-enabled spack environment
|
||||
------------------------------------
|
||||
|
||||
^^^^^^^^^^^^^^^
|
||||
Rebuild (build)
|
||||
^^^^^^^^^^^^^^^
|
||||
Here's an example of a spack environment file that has been enhanced with
|
||||
sections describing a build pipeline:
|
||||
|
||||
Rebuild jobs, denoted as ``build-job`` entries in the ``pipeline-gen`` list, are jobs
associated with concrete specs that have been marked for rebuild. By default a simple
script for doing the rebuild is generated, but it may be modified as needed.
.. code-block:: yaml

The default script does three main steps: change directories to the pipeline's concrete
environment, activate the concrete environment, and run the ``spack ci rebuild`` command:
spack:
|
||||
definitions:
|
||||
- pkgs:
|
||||
- readline@7.0
|
||||
- compilers:
|
||||
- '%gcc@5.5.0'
|
||||
- oses:
|
||||
- os=ubuntu18.04
|
||||
- os=centos7
|
||||
specs:
|
||||
- matrix:
|
||||
- [$pkgs]
|
||||
- [$compilers]
|
||||
- [$oses]
|
||||
mirrors:
|
||||
cloud_gitlab: https://mirror.spack.io
|
||||
gitlab-ci:
|
||||
mappings:
|
||||
- match:
|
||||
- os=ubuntu18.04
|
||||
runner-attributes:
|
||||
tags:
|
||||
- spack-kube
|
||||
image: spack/ubuntu-bionic
|
||||
- match:
|
||||
- os=centos7
|
||||
runner-attributes:
|
||||
tags:
|
||||
- spack-kube
|
||||
image: spack/centos7
|
||||
cdash:
|
||||
build-group: Release Testing
|
||||
url: https://cdash.spack.io
|
||||
project: Spack
|
||||
site: Spack AWS Gitlab Instance
|
||||
|
||||
.. code-block:: bash
|
||||
Hopefully, the ``definitions``, ``specs``, ``mirrors``, etc. sections are already
|
||||
familiar, as they are part of spack :ref:`environments`. So let's take a more
|
||||
in-depth look some of the pipeline-related sections in that environment file
|
||||
that might not be as familiar.
|
||||
|
||||
cd ${concrete_environment_dir}
|
||||
spack env activate --without-view .
|
||||
spack ci rebuild
|
||||
The ``gitlab-ci`` section is used to configure how the pipeline workload should be
|
||||
generated, mainly how the jobs for building specs should be assigned to the
|
||||
configured runners on your instance. Each entry within the list of ``mappings``
|
||||
corresponds to a known gitlab runner, where the ``match`` section is used
|
||||
in assigning a release spec to one of the runners, and the ``runner-attributes``
|
||||
section is used to configure the spec/job for that particular runner.
|
||||
|
||||
Both the top-level ``gitlab-ci`` section as well as each ``runner-attributes``
|
||||
section can also contain the following keys: ``image``, ``tags``, ``variables``,
|
||||
``before_script``, ``script``, and ``after_script``. If any of these keys are
|
||||
provided at the ``gitlab-ci`` level, they will be used as the defaults for any
|
||||
``runner-attributes``, unless they are overridden in those sections. Specifying
|
||||
any of these keys at the ``runner-attributes`` level generally overrides the
|
||||
keys specified at the higher level, with a couple exceptions. Any ``variables``
|
||||
specified at both levels result in those dictionaries getting merged in the
|
||||
resulting generated job, and any duplicate variable names get assigned the value
|
||||
provided in the specific ``runner-attributes``. If ``tags`` are specified both
|
||||
at the ``gitlab-ci`` level as well as the ``runner-attributes`` level, then the
|
||||
lists of tags are combined, and any duplicates are removed.
|
||||
|
||||
See the section below on using a custom spack for an example of how these keys
|
||||
could be used.
|
||||
|
||||
There are other pipeline options you can configure within the ``gitlab-ci`` section
|
||||
as well.
|
||||
|
||||
The ``bootstrap`` section allows you to specify lists of specs from
|
||||
your ``definitions`` that should be staged ahead of the environment's ``specs`` (this
|
||||
section is described in more detail below). The ``enable-artifacts-buildcache`` key
|
||||
takes a boolean and determines whether the pipeline uses artifacts to store and
|
||||
pass along the buildcaches from one stage to the next (the default if you don't
|
||||
provide this option is ``False``).
|
||||
|
||||
The optional ``broken-specs-url`` key tells Spack to check against a list of
|
||||
specs that are known to be currently broken in ``develop``. If any such specs
|
||||
are found, the ``spack ci generate`` command will fail with an error message
|
||||
informing the user what broken specs were encountered. This allows the pipeline
|
||||
to fail early and avoid wasting compute resources attempting to build packages
|
||||
that will not succeed.
|
||||
|
||||
The optional ``cdash`` section provides information that will be used by the
|
||||
``spack ci generate`` command (invoked by ``spack ci start``) for reporting
|
||||
to CDash. All the jobs generated from this environment will belong to a
|
||||
"build group" within CDash that can be tracked over time. As the release
|
||||
progresses, this build group may have jobs added or removed. The url, project,
|
||||
and site are used to specify the CDash instance to which build results should
|
||||
be reported.
|
||||
|
||||
Take a look at the
|
||||
`schema <https://github.com/spack/spack/blob/develop/lib/spack/spack/schema/gitlab_ci.py>`_
|
||||
for the gitlab-ci section of the spack environment file, to see precisely what
|
||||
syntax is allowed there.
|
||||
|
||||
.. _rebuild_index:
|
||||
|
||||
^^^^^^^^^^^^^^^^^^^^^^
|
||||
Update Index (reindex)
|
||||
^^^^^^^^^^^^^^^^^^^^^^
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
Note about rebuilding buildcache index
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
By default, while a pipeline job may rebuild a package, create a buildcache
|
||||
entry, and push it to the mirror, it does not automatically re-generate the
|
||||
@@ -399,44 +475,21 @@ not correctly reflect the mirror's contents at the end of a pipeline.
|
||||
To make sure the buildcache index is up to date at the end of your pipeline,
|
||||
spack generates a job to update the buildcache index of the target mirror
|
||||
at the end of each pipeline by default. You can disable this behavior by
|
||||
adding ``rebuild-index: False`` inside the ``ci`` section of your
|
||||
spack environment.
|
||||
|
||||
Reindex jobs do not allow modifying the ``script`` attribute since it is automatically
|
||||
generated using the target mirror listed in the ``mirrors::mirror`` configuration.
|
||||
|
||||
^^^^^^^^^^^^^^^^^
|
||||
Signing (signing)
|
||||
^^^^^^^^^^^^^^^^^
|
||||
|
||||
This job is run after all of the rebuild jobs are completed and is intended to be used
|
||||
to sign the package binaries built by a protected CI run. Signing jobs are generated
|
||||
only if a signing job ``script`` is specified and the spack CI job type is protected.
|
||||
Note, if an ``any-job`` section contains a script, this will not implicitly create a
|
||||
``signing`` job, a signing job may only exist if it is explicitly specified in the
|
||||
configuration with a ``script`` attribute. Specifying a signing job without a script
|
||||
does not create a signing job and the job configuration attributes will be ignored.
|
||||
Signing jobs are always assigned the runner tags ``aws``, ``protected``, and ``notary``.
|
||||
|
||||
^^^^^^^^^^^^^^^^^
|
||||
Cleanup (cleanup)
|
||||
^^^^^^^^^^^^^^^^^
|
||||
|
||||
When using ``temporary-storage-url-prefix`` the cleanup job will destroy the mirror
|
||||
created for the associated Gitlab pipeline. Cleanup jobs do not allow modifying the
|
||||
script, but do expect that the spack command is in the path and require a
|
||||
``before_script`` to be specified that sources the ``setup-env.sh`` script.
|
||||
adding ``rebuild-index: False`` inside the ``gitlab-ci`` section of your
|
||||
spack environment. Spack will assign the job any runner attributes found
|
||||
on the ``service-job-attributes``, if you have provided that in your
|
||||
``spack.yaml``.
|
||||
|
||||
.. _noop_jobs:
|
||||
|
||||
^^^^^^^^^^^^
|
||||
No Op (noop)
|
||||
^^^^^^^^^^^^
|
||||
^^^^^^^^^^^^^^^^^^^^^^^
|
||||
Note about "no-op" jobs
|
||||
^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
If no specs in an environment need to be rebuilt during a given pipeline run
|
||||
(meaning all are already up to date on the mirror), a single successful job
|
||||
(a NO-OP) is still generated to avoid an empty pipeline (which GitLab
|
||||
considers to be an error). The ``noop-job*`` sections
|
||||
considers to be an error). An optional ``service-job-attributes`` section
|
||||
can be added to your ``spack.yaml`` where you can provide ``tags`` and
|
||||
``image`` or ``variables`` for the generated NO-OP job. This section also
|
||||
supports providing ``before_script``, ``script``, and ``after_script``, in
|
||||
@@ -446,100 +499,51 @@ Following is an example of this section added to a ``spack.yaml``:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
spack:
|
||||
ci:
|
||||
pipeline-gen:
|
||||
- noop-job:
|
||||
tags: ['custom', 'tag']
|
||||
image:
|
||||
name: 'some.image.registry/custom-image:latest'
|
||||
entrypoint: ['/bin/bash']
|
||||
script::
|
||||
- echo "Custom message in a custom script"
|
||||
spack:
|
||||
specs:
|
||||
- openmpi
|
||||
mirrors:
|
||||
cloud_gitlab: https://mirror.spack.io
|
||||
gitlab-ci:
|
||||
mappings:
|
||||
- match:
|
||||
- os=centos8
|
||||
runner-attributes:
|
||||
tags:
|
||||
- custom
|
||||
- tag
|
||||
image: spack/centos7
|
||||
service-job-attributes:
|
||||
tags: ['custom', 'tag']
|
||||
image:
|
||||
name: 'some.image.registry/custom-image:latest'
|
||||
entrypoint: ['/bin/bash']
|
||||
script:
|
||||
- echo "Custom message in a custom script"
|
||||
|
||||
The example above illustrates how you can provide the attributes used to run
|
||||
the NO-OP job in the case of an empty pipeline. The only field for the NO-OP
|
||||
job that might be generated for you is ``script``, but that will only happen
|
||||
if you do not provide one yourself. Notice in this example the ``script``
|
||||
uses the ``::`` notation to prescribe override behavior. Without this, the
|
||||
``echo`` command would have been prepended to the automatically generated script
|
||||
rather than replacing it.
|
||||
if you do not provide one yourself.
|
||||
|
||||
------------------------------------
|
||||
ci.yaml
|
||||
------------------------------------
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
Assignment of specs to runners
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Here's an example of a spack configuration file describing a build pipeline:
|
||||
The ``mappings`` section corresponds to a list of runners, and during assignment
|
||||
of specs to runners, the list is traversed in order looking for matches, the
|
||||
first runner that matches a release spec is assigned to build that spec. The
|
||||
``match`` section within each runner mapping section is a list of specs, and
|
||||
if any of those specs match the release spec (the ``spec.satisfies()`` method
|
||||
is used), then that runner is considered a match.
|
||||
|
||||
.. code-block:: yaml
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
Configuration of specs/jobs for a runner
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
ci:
|
||||
target: gitlab
|
||||
|
||||
rebuild_index: True
|
||||
|
||||
broken-specs-url: https://broken.specs.url
|
||||
|
||||
broken-tests-packages:
|
||||
- gptune
|
||||
|
||||
pipeline-gen:
|
||||
- submapping:
|
||||
- match:
|
||||
- os=ubuntu18.04
|
||||
build-job:
|
||||
tags:
|
||||
- spack-kube
|
||||
image: spack/ubuntu-bionic
|
||||
- match:
|
||||
- os=centos7
|
||||
build-job:
|
||||
tags:
|
||||
- spack-kube
|
||||
image: spack/centos7
|
||||
|
||||
cdash:
|
||||
build-group: Release Testing
|
||||
url: https://cdash.spack.io
|
||||
project: Spack
|
||||
site: Spack AWS Gitlab Instance
|
||||
|
||||
The ``ci`` config section is used to configure how the pipeline workload should be
|
||||
generated, mainly how the jobs for building specs should be assigned to the
|
||||
configured runners on your instance. The main section for configuring pipelines
|
||||
is ``pipeline-gen``, which is a list of job attribute sections that are merged,
|
||||
using the same rules as Spack configs (:ref:`config-scope-precedence`), from the bottom up.
|
||||
The order sections are applied is to be consistent with how spack orders scope precedence when merging lists.
|
||||
There are two main section types, ``<type>-job`` sections and ``submapping``
|
||||
sections.
|
||||
|
||||
|
||||
^^^^^^^^^^^^^^^^^^^^^^
|
||||
Job Attribute Sections
|
||||
^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Each type of job may have attributes added or removed via sections in the ``pipeline-gen``
|
||||
list. Job type specific attributes may be specified using the keys ``<type>-job`` to
|
||||
add attributes to all jobs of type ``<type>`` or ``<type>-job-remove`` to remove attributes
|
||||
of type ``<type>``. Each section may only contain one type of job attribute specification, ie. ,
|
||||
``build-job`` and ``noop-job`` may not coexist but ``build-job`` and ``build-job-remove`` may.
|
||||
|
||||
.. note::
|
||||
The ``*-remove`` specifications are applied before the additive attribute specification.
|
||||
For example, in the case where both ``build-job`` and ``build-job-remove`` are listed in
|
||||
the same ``pipeline-gen`` section, the value will still exist in the merged build-job after
|
||||
applying the section.
|
||||
|
||||
All of the attributes specified are forwarded to the generated CI jobs, however special
|
||||
treatment is applied to the attributes ``tags``, ``image``, ``variables``, ``script``,
|
||||
``before_script``, and ``after_script`` as they are components recognized explicitly by the
|
||||
Spack CI generator. For the ``tags`` attribute, Spack will remove reserved tags
|
||||
(:ref:`reserved_tags`) from all jobs specified in the config. In some cases, such as for
|
||||
``signing`` jobs, reserved tags will be added back based on the type of CI that is being run.
|
||||
|
||||
Once a runner has been chosen to build a release spec, the ``build-job*``
|
||||
sections provide information determining details of the job in the context of
|
||||
the runner. At least one of the ``build-job*`` sections must contain a ``tags`` key, which
|
||||
Once a runner has been chosen to build a release spec, the ``runner-attributes``
|
||||
section provides information determining details of the job in the context of
|
||||
the runner. The ``runner-attributes`` section must have a ``tags`` key, which
|
||||
is a list containing at least one tag used to select the runner from among the
|
||||
runners known to the gitlab instance. For Docker executor type runners, the
|
||||
``image`` key is used to specify the Docker image used to build the release spec
|
||||
@@ -550,7 +554,7 @@ information on to the runner that it needs to do its work (e.g. scheduler
|
||||
parameters, etc.). Any ``variables`` provided here will be added, verbatim, to
|
||||
each job.
|
||||
|
||||
The ``build-job`` section also allows users to supply custom ``script``,
|
||||
The ``runner-attributes`` section also allows users to supply custom ``script``,
|
||||
``before_script``, and ``after_script`` sections to be applied to every job
|
||||
scheduled on that runner. This allows users to do any custom preparation or
|
||||
cleanup tasks that fit their particular workflow, as well as completely
|
||||
@@ -561,45 +565,46 @@ environment directory is located within your ``--artifacts_root`` (or if not
|
||||
provided, within your ``$CI_PROJECT_DIR``), activates that environment for
|
||||
you, and invokes ``spack ci rebuild``.
|
||||
|
||||
Sections that specify scripts (``script``, ``before_script``, ``after_script``) are all
|
||||
read as lists of commands or lists of lists of commands. It is recommended to write scripts
|
||||
as lists of lists if scripts will be composed via merging. The default behavior of merging
|
||||
lists will remove duplicate commands and potentially apply unwanted reordering, whereas
|
||||
merging lists of lists will preserve the local ordering and never removes duplicate
|
||||
commands. When writing commands to the CI target script, all lists are expanded and
|
||||
flattened into a single list.
|
||||
.. _staging_algorithm:
|
||||
|
||||
^^^^^^^^^^^^^^^^^^^
|
||||
Submapping Sections
|
||||
^^^^^^^^^^^^^^^^^^^
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
Summary of ``.gitlab-ci.yml`` generation algorithm
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
A special case of attribute specification is the ``submapping`` section which may be used
|
||||
to apply job attributes to build jobs based on the package spec associated with the rebuild
|
||||
job. Submapping is specified as a list of spec ``match`` lists associated with
|
||||
``build-job``/``build-job-remove`` sections. There are two options for ``match_behavior``,
|
||||
either ``first`` or ``merge`` may be specified. In either case, the ``submapping`` list is
|
||||
processed from the bottom up, and then each ``match`` list is searched for a string that
|
||||
satisfies the check ``spec.satisfies({match_item})`` for each concrete spec.
|
||||
All specs yielded by the matrix (or all the specs in the environment) have their
|
||||
dependencies computed, and the entire resulting set of specs are staged together
|
||||
before being run through the ``gitlab-ci/mappings`` entries, where each staged
|
||||
spec is assigned a runner. "Staging" is the name given to the process of
|
||||
figuring out in what order the specs should be built, taking into consideration
|
||||
Gitlab CI rules about jobs/stages. In the staging process the goal is to maximize
|
||||
the number of jobs in any stage of the pipeline, while ensuring that the jobs in
|
||||
any stage only depend on jobs in previous stages (since those jobs are guaranteed
|
||||
to have completed already). As a runner is determined for a job, the information
|
||||
in the ``runner-attributes`` is used to populate various parts of the job
|
||||
description that will be used by Gitlab CI. Once all the jobs have been assigned
|
||||
a runner, the ``.gitlab-ci.yml`` is written to disk.
|
||||
|
||||
In the case of ``match_behavior: first``, the first ``match`` section in the list of
``submappings`` that contains a string that satisfies the spec will apply its
``build-job*`` attributes to the rebuild job associated with that spec. This is the
default behavior and will be the method used if no ``match_behavior`` is specified.
|
||||
The short example provided above would result in the ``readline``, ``ncurses``,
|
||||
and ``pkgconf`` packages getting staged and built on the runner chosen by the
|
||||
``spack-k8s`` tag. In this example, spack assumes the runner is a Docker executor
|
||||
type runner, and thus certain jobs will be run in the ``centos7`` container,
|
||||
and others in the ``ubuntu-18.04`` container. The resulting ``.gitlab-ci.yml``
|
||||
will contain 6 jobs in three stages. Once the jobs have been generated, the
|
||||
presence of a ``SPACK_CDASH_AUTH_TOKEN`` environment variable during the
|
||||
``spack ci generate`` command would result in all of the jobs being put in a
|
||||
build group on CDash called "Release Testing" (that group will be created if
|
||||
it didn't already exist).
|
||||
|
||||
In the case of a ``merge`` match, all of the ``match`` sections in the list of
``submappings`` that contain a string that satisfies the spec will have the associated
``build-job*`` attributes applied to the rebuild job associated with that spec. Again,
the attributes will be merged starting from the bottom match going up to the top match.
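As a rough illustration only (this is not Spack's actual implementation, and
the traversal order is an assumption based on the description above), the two
behaviors could be sketched like this:

.. code-block:: python

   # Hypothetical sketch of submapping resolution for one concrete spec.
   # `submappings` is assumed to be a list of dicts with "match" and "build-job" keys.
   def resolve_build_job_attributes(spec, submappings, match_behavior="first"):
       merged = {}
       # Assumption: entries are considered from the bottom of the list upward.
       for entry in reversed(submappings):
           if any(spec.satisfies(pattern) for pattern in entry.get("match", [])):
               # Later (higher) matches override values from earlier (lower) ones.
               merged.update(entry.get("build-job", {}))
               if match_behavior == "first":
                   break
       return merged
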
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
Optional compiler bootstrapping
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
In the case that no match is found in a submapping section, no additional attributes will be applied.
|
||||
|
||||
^^^^^^^^^^^^^
|
||||
Bootstrapping
|
||||
^^^^^^^^^^^^^
|
||||
|
||||
|
||||
The ``bootstrap`` section allows you to specify lists of specs from
|
||||
your ``definitions`` that should be staged ahead of the environment's ``specs``. At the moment
|
||||
Spack pipelines also have support for bootstrapping compilers on systems that
|
||||
may not already have the desired compilers installed. The idea here is that
|
||||
you can specify a list of things to bootstrap in your ``definitions``, and
|
||||
spack will guarantee those will be installed in a phase of the pipeline before
|
||||
your release specs, so that you can rely on those packages being available in
|
||||
the binary mirror when you need them later on in the pipeline. At the moment
|
||||
the only viable use-case for bootstrapping is to install compilers.
|
||||
|
||||
Here's an example of what bootstrapping some compilers might look like:
|
||||
@@ -632,18 +637,18 @@ Here's an example of what bootstrapping some compilers might look like:
|
||||
exclude:
|
||||
- '%gcc@7.3.0 os=centos7'
|
||||
- '%gcc@5.5.0 os=ubuntu18.04'
|
||||
ci:
|
||||
gitlab-ci:
|
||||
bootstrap:
|
||||
- name: compiler-pkgs
|
||||
compiler-agnostic: true
|
||||
pipeline-gen:
|
||||
# similar to the example higher up in this description
|
||||
mappings:
|
||||
# mappings similar to the example higher up in this description
|
||||
...
|
||||
|
||||
The example above adds a list to the ``definitions`` called ``compiler-pkgs``
|
||||
(you can add any number of these), which lists compiler packages that should
|
||||
be staged ahead of the full matrix of release specs (in this example, only
|
||||
readline). Then within the ``ci`` section, note the addition of a
|
||||
readline). Then within the ``gitlab-ci`` section, note the addition of a
|
||||
``bootstrap`` section, which can contain a list of items, each referring to
|
||||
a list in the ``definitions`` section. These items can either
|
||||
be a dictionary or a string. If you supply a dictionary, it must have a name
|
||||
@@ -675,86 +680,6 @@ environment/stack file, and in that case no bootstrapping will be done (only the
|
||||
specs will be staged for building) and the runners will be expected to already
|
||||
have all needed compilers installed and configured for spack to use.
|
||||
|
||||
^^^^^^^^^^^^^^^^^^^
|
||||
Pipeline Buildcache
|
||||
^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
The ``enable-artifacts-buildcache`` key
|
||||
takes a boolean and determines whether the pipeline uses artifacts to store and
|
||||
pass along the buildcaches from one stage to the next (the default if you don't
|
||||
provide this option is ``False``).
|
||||
|
||||
^^^^^^^^^^^^^^^^
|
||||
Broken Specs URL
|
||||
^^^^^^^^^^^^^^^^
|
||||
|
||||
The optional ``broken-specs-url`` key tells Spack to check against a list of
|
||||
specs that are known to be currently broken in ``develop``. If any such specs
|
||||
are found, the ``spack ci generate`` command will fail with an error message
|
||||
informing the user what broken specs were encountered. This allows the pipeline
|
||||
to fail early and avoid wasting compute resources attempting to build packages
|
||||
that will not succeed.
|
||||
|
||||
^^^^^
|
||||
CDash
|
||||
^^^^^
|
||||
|
||||
The optional ``cdash`` section provides information that will be used by the
|
||||
``spack ci generate`` command (invoked by ``spack ci start``) for reporting
|
||||
to CDash. All the jobs generated from this environment will belong to a
|
||||
"build group" within CDash that can be tracked over time. As the release
|
||||
progresses, this build group may have jobs added or removed. The url, project,
|
||||
and site are used to specify the CDash instance to which build results should
|
||||
be reported.
|
||||
|
||||
Take a look at the
|
||||
`schema <https://github.com/spack/spack/blob/develop/lib/spack/spack/schema/ci.py>`_
|
||||
for the ci section of the spack environment file, to see precisely what
|
||||
syntax is allowed there.
|
||||
|
||||
.. _reserved_tags:
|
||||
|
||||
^^^^^^^^^^^^^
|
||||
Reserved Tags
|
||||
^^^^^^^^^^^^^
|
||||
|
||||
Spack has a subset of tags (``public``, ``protected``, and ``notary``) that it reserves
|
||||
for classifying runners that may require special permissions or access. The tags
|
||||
``public`` and ``protected`` are used to distinguish between runners that use public
|
||||
permissions and runners with protected permissions. The ``notary`` tag is a special tag
|
||||
that is used to indicate runners that have access to the highly protected information
|
||||
used for signing binaries using the ``signing`` job.
|
||||
|
||||
.. _staging_algorithm:
|
||||
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
Summary of ``.gitlab-ci.yml`` generation algorithm
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
All specs yielded by the matrix (or all the specs in the environment) have their
|
||||
dependencies computed, and the entire resulting set of specs are staged together
|
||||
before being run through the ``ci/pipeline-gen`` entries, where each staged
|
||||
spec is assigned a runner. "Staging" is the name given to the process of
|
||||
figuring out in what order the specs should be built, taking into consideration
|
||||
Gitlab CI rules about jobs/stages. In the staging process the goal is to maximize
|
||||
the number of jobs in any stage of the pipeline, while ensuring that the jobs in
|
||||
any stage only depend on jobs in previous stages (since those jobs are guaranteed
|
||||
to have completed already). As a runner is determined for a job, the information
|
||||
in the merged ``any-job*`` and ``build-job*`` sections is used to populate various parts of the job
|
||||
description that will be used by the target CI pipelines. Once all the jobs have been assigned
|
||||
a runner, the ``.gitlab-ci.yml`` is written to disk.
|
||||
|
||||
The short example provided above would result in the ``readline``, ``ncurses``,
|
||||
and ``pkgconf`` packages getting staged and built on the runner chosen by the
|
||||
``spack-k8s`` tag. In this example, spack assumes the runner is a Docker executor
|
||||
type runner, and thus certain jobs will be run in the ``centos7`` container,
|
||||
and others in the ``ubuntu-18.04`` container. The resulting ``.gitlab-ci.yml``
|
||||
will contain 6 jobs in three stages. Once the jobs have been generated, the
|
||||
presence of a ``SPACK_CDASH_AUTH_TOKEN`` environment variable during the
|
||||
``spack ci generate`` command would result in all of the jobs being put in a
|
||||
build group on CDash called "Release Testing" (that group will be created if
|
||||
it didn't already exist).
|
||||
|
||||
-------------------------------------
|
||||
Using a custom spack in your pipeline
|
||||
-------------------------------------
|
||||
@@ -801,21 +726,23 @@ generated by ``spack ci generate``. You also want your generated rebuild jobs
|
||||
|
||||
spack:
|
||||
...
|
||||
ci:
|
||||
pipeline-gen:
|
||||
- build-job:
|
||||
tags:
|
||||
- spack-kube
|
||||
image: spack/ubuntu-bionic
|
||||
before_script:
|
||||
- git clone ${SPACK_REPO}
|
||||
- pushd spack && git checkout ${SPACK_REF} && popd
|
||||
- . "./spack/share/spack/setup-env.sh"
|
||||
script:
|
||||
- spack env activate --without-view ${SPACK_CONCRETE_ENV_DIR}
|
||||
- spack -d ci rebuild
|
||||
after_script:
|
||||
- rm -rf ./spack
|
||||
gitlab-ci:
|
||||
mappings:
|
||||
- match:
|
||||
- os=ubuntu18.04
|
||||
runner-attributes:
|
||||
tags:
|
||||
- spack-kube
|
||||
image: spack/ubuntu-bionic
|
||||
before_script:
|
||||
- git clone ${SPACK_REPO}
|
||||
- pushd spack && git checkout ${SPACK_REF} && popd
|
||||
- . "./spack/share/spack/setup-env.sh"
|
||||
script:
|
||||
- spack env activate --without-view ${SPACK_CONCRETE_ENV_DIR}
|
||||
- spack -d ci rebuild
|
||||
after_script:
|
||||
- rm -rf ./spack
|
||||
|
||||
Now all of the generated rebuild jobs will use the same shell script to clone
|
||||
spack before running their actual workload.
|
||||
@@ -904,4 +831,3 @@ verify binary packages (when installing or creating buildcaches). You could
|
||||
also have already trusted a key spack know about, or if no key is present anywhere,
|
||||
spack will install specs using ``--no-check-signature`` and create buildcaches
|
||||
using ``-u`` (for unsigned binaries).
|
||||
|
||||
|
||||
111
lib/spack/env/cc
vendored
@@ -430,37 +430,21 @@ other_args_list=""
|
||||
# Global state for keeping track of -Wl,-rpath -Wl,/path
|
||||
wl_expect_rpath=no
|
||||
|
||||
# Same, but for -Xlinker -rpath -Xlinker /path
|
||||
xlinker_expect_rpath=no
|
||||
|
||||
parse_Wl() {
|
||||
# drop -Wl
|
||||
shift
|
||||
while [ $# -ne 0 ]; do
|
||||
if [ "$wl_expect_rpath" = yes ]; then
|
||||
if system_dir "$1"; then
|
||||
append system_rpath_dirs_list "$1"
|
||||
else
|
||||
append rpath_dirs_list "$1"
|
||||
fi
|
||||
rp="$1"
|
||||
wl_expect_rpath=no
|
||||
else
|
||||
rp=""
|
||||
case "$1" in
|
||||
-rpath=*)
|
||||
arg="${1#-rpath=}"
|
||||
if system_dir "$arg"; then
|
||||
append system_rpath_dirs_list "$arg"
|
||||
else
|
||||
append rpath_dirs_list "$arg"
|
||||
fi
|
||||
rp="${1#-rpath=}"
|
||||
;;
|
||||
--rpath=*)
|
||||
arg="${1#--rpath=}"
|
||||
if system_dir "$arg"; then
|
||||
append system_rpath_dirs_list "$arg"
|
||||
else
|
||||
append rpath_dirs_list "$arg"
|
||||
fi
|
||||
rp="${1#--rpath=}"
|
||||
;;
|
||||
-rpath|--rpath)
|
||||
wl_expect_rpath=yes
|
||||
@@ -472,8 +456,17 @@ parse_Wl() {
|
||||
;;
|
||||
esac
|
||||
fi
|
||||
if [ -n "$rp" ]; then
|
||||
if system_dir "$rp"; then
|
||||
append system_rpath_dirs_list "$rp"
|
||||
else
|
||||
append rpath_dirs_list "$rp"
|
||||
fi
|
||||
fi
|
||||
shift
|
||||
done
|
||||
# By lack of local variables, always set this to empty string.
|
||||
rp=""
|
||||
}
|
||||
|
||||
|
||||
@@ -580,72 +573,38 @@ while [ $# -ne 0 ]; do
|
||||
unset IFS
|
||||
;;
|
||||
-Xlinker)
|
||||
shift
|
||||
if [ $# -eq 0 ]; then
|
||||
# -Xlinker without value: let the compiler error about it.
|
||||
append other_args_list -Xlinker
|
||||
xlinker_expect_rpath=no
|
||||
break
|
||||
elif [ "$xlinker_expect_rpath" = yes ]; then
|
||||
# Register the path of -Xlinker -rpath <other args> -Xlinker <path>
|
||||
if system_dir "$1"; then
|
||||
append system_rpath_dirs_list "$1"
|
||||
else
|
||||
append rpath_dirs_list "$1"
|
||||
if [ "$2" = "-rpath" ]; then
|
||||
if [ "$3" != "-Xlinker" ]; then
|
||||
die "-Xlinker,-rpath was not followed by -Xlinker,*"
|
||||
fi
|
||||
xlinker_expect_rpath=no
|
||||
shift 3;
|
||||
rp="$1"
|
||||
elif [ "$2" = "$dtags_to_strip" ]; then
|
||||
shift # We want to remove explicitly this flag
|
||||
else
|
||||
case "$1" in
|
||||
-rpath=*)
|
||||
arg="${1#-rpath=}"
|
||||
if system_dir "$arg"; then
|
||||
append system_rpath_dirs_list "$arg"
|
||||
else
|
||||
append rpath_dirs_list "$arg"
|
||||
fi
|
||||
;;
|
||||
--rpath=*)
|
||||
arg="${1#--rpath=}"
|
||||
if system_dir "$arg"; then
|
||||
append system_rpath_dirs_list "$arg"
|
||||
else
|
||||
append rpath_dirs_list "$arg"
|
||||
fi
|
||||
;;
|
||||
-rpath|--rpath)
|
||||
xlinker_expect_rpath=yes
|
||||
;;
|
||||
"$dtags_to_strip")
|
||||
;;
|
||||
*)
|
||||
append other_args_list -Xlinker
|
||||
append other_args_list "$1"
|
||||
;;
|
||||
esac
|
||||
append other_args_list "$1"
|
||||
fi
|
||||
;;
|
||||
"$dtags_to_strip")
|
||||
;;
|
||||
*)
|
||||
append other_args_list "$1"
|
||||
if [ "$1" = "$dtags_to_strip" ]; then
|
||||
: # We want to remove explicitly this flag
|
||||
else
|
||||
append other_args_list "$1"
|
||||
fi
|
||||
;;
|
||||
esac
|
||||
|
||||
# test rpaths against system directories in one place.
|
||||
if [ -n "$rp" ]; then
|
||||
if system_dir "$rp"; then
|
||||
append system_rpath_dirs_list "$rp"
|
||||
else
|
||||
append rpath_dirs_list "$rp"
|
||||
fi
|
||||
fi
|
||||
shift
|
||||
done
|
||||
|
||||
# We found `-Xlinker -rpath` but no matching value `-Xlinker /path`. Just append
|
||||
# `-Xlinker -rpath` again and let the compiler or linker handle the error during arg
|
||||
# parsing.
|
||||
if [ "$xlinker_expect_rpath" = yes ]; then
|
||||
append other_args_list -Xlinker
|
||||
append other_args_list -rpath
|
||||
fi
|
||||
|
||||
# Same, but for -Wl flags.
|
||||
if [ "$wl_expect_rpath" = yes ]; then
|
||||
append other_args_list -Wl,-rpath
|
||||
fi
|
||||
|
||||
#
|
||||
# Add flags from Spack's cppflags, cflags, cxxflags, fcflags, fflags, and
|
||||
# ldflags. We stick to the order that gmake puts the flags in by default.
|
||||
|
||||
2
lib/spack/external/__init__.py
vendored
@@ -18,7 +18,7 @@
|
||||
|
||||
* Homepage: https://pypi.python.org/pypi/archspec
|
||||
* Usage: Labeling, comparison and detection of microarchitectures
|
||||
* Version: 0.2.0-dev (commit d02dadbac4fa8f3a60293c4fbfd59feadaf546dc)
|
||||
* Version: 0.2.0 (commit e44bad9c7b6defac73696f64078b2fe634719b62)
|
||||
|
||||
astunparse
|
||||
----------------
|
||||
|
||||
8
lib/spack/external/archspec/__main__.py
vendored
@@ -1,8 +0,0 @@
|
||||
"""
|
||||
Run the `archspec` CLI as a module.
|
||||
"""
|
||||
|
||||
import sys
|
||||
from .cli import main
|
||||
|
||||
sys.exit(main())
|
||||
60
lib/spack/external/archspec/cli.py
vendored
@@ -6,61 +6,19 @@
|
||||
archspec command line interface
|
||||
"""
|
||||
|
||||
import argparse
|
||||
import typing
|
||||
import click
|
||||
|
||||
import archspec
|
||||
import archspec.cpu
|
||||
|
||||
|
||||
def _make_parser() -> argparse.ArgumentParser:
|
||||
parser = argparse.ArgumentParser(
|
||||
"archspec",
|
||||
description="archspec command line interface",
|
||||
add_help=False,
|
||||
)
|
||||
parser.add_argument(
|
||||
"--version",
|
||||
"-V",
|
||||
help="Show the version and exit.",
|
||||
action="version",
|
||||
version=f"archspec, version {archspec.__version__}",
|
||||
)
|
||||
parser.add_argument("--help", "-h", help="Show the help and exit.", action="help")
|
||||
|
||||
subcommands = parser.add_subparsers(
|
||||
title="command",
|
||||
metavar="COMMAND",
|
||||
dest="command",
|
||||
)
|
||||
|
||||
cpu_command = subcommands.add_parser(
|
||||
"cpu",
|
||||
help="archspec command line interface for CPU",
|
||||
description="archspec command line interface for CPU",
|
||||
)
|
||||
cpu_command.set_defaults(run=cpu)
|
||||
|
||||
return parser
|
||||
@click.group(name="archspec")
|
||||
@click.version_option(version=archspec.__version__)
|
||||
def main():
|
||||
"""archspec command line interface"""
|
||||
|
||||
|
||||
def cpu() -> int:
|
||||
"""Run the `archspec cpu` subcommand."""
|
||||
print(archspec.cpu.host())
|
||||
return 0
|
||||
|
||||
|
||||
def main(argv: typing.Optional[typing.List[str]] = None) -> int:
|
||||
"""Run the `archspec` command line interface."""
|
||||
parser = _make_parser()
|
||||
|
||||
try:
|
||||
args = parser.parse_args(argv)
|
||||
except SystemExit as err:
|
||||
return err.code
|
||||
|
||||
if args.command is None:
|
||||
parser.print_help()
|
||||
return 0
|
||||
|
||||
return args.run()
|
||||
@main.command()
|
||||
def cpu():
|
||||
"""archspec command line interface for CPU"""
|
||||
click.echo(archspec.cpu.host())
|
||||
|
||||
@@ -268,14 +268,15 @@ def tuplify(ver):
|
||||
return flags
|
||||
|
||||
msg = (
|
||||
"cannot produce optimized binary for micro-architecture '{0}' with {1}@{2}"
|
||||
"cannot produce optimized binary for micro-architecture '{0}'"
|
||||
" with {1}@{2} [supported compiler versions are {3}]"
|
||||
)
|
||||
msg = msg.format(
|
||||
self.name,
|
||||
compiler,
|
||||
version,
|
||||
", ".join([x["versions"] for x in compiler_info]),
|
||||
)
|
||||
if compiler_info:
|
||||
versions = [x["versions"] for x in compiler_info]
|
||||
msg += f' [supported compiler versions are {", ".join(versions)}]'
|
||||
else:
|
||||
msg += " [no supported compiler versions]"
|
||||
msg = msg.format(self.name, compiler, version)
|
||||
raise UnsupportedMicroarchitecture(msg)
|
||||
|
||||
|
||||
|
||||
@@ -102,8 +102,7 @@
|
||||
"name": "x86-64",
|
||||
"flags": "-march={name} -mtune=generic"
|
||||
}
|
||||
],
|
||||
"nvhpc": []
|
||||
]
|
||||
}
|
||||
},
|
||||
"x86_64_v2": {
|
||||
@@ -158,8 +157,7 @@
|
||||
"name": "x86-64-v2",
|
||||
"flags": "-march={name} -mtune=generic"
|
||||
}
|
||||
],
|
||||
"nvhpc": []
|
||||
]
|
||||
}
|
||||
},
|
||||
"x86_64_v3": {
|
||||
@@ -230,13 +228,6 @@
|
||||
"name": "x86-64-v3",
|
||||
"flags": "-march={name} -mtune=generic"
|
||||
}
|
||||
],
|
||||
"nvhpc" : [
|
||||
{
|
||||
"versions": ":",
|
||||
"name": "px",
|
||||
"flags": "-tp {name} -mpopcnt -msse3 -msse4.1 -msse4.2 -mssse3 -mavx -mavx2 -mbmi -mbmi2 -mf16c -mfma -mlzcnt -mxsave"
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
@@ -313,13 +304,6 @@
|
||||
"name": "x86-64-v4",
|
||||
"flags": "-march={name} -mtune=generic"
|
||||
}
|
||||
],
|
||||
"nvhpc": [
|
||||
{
|
||||
"versions": ":",
|
||||
"name": "px",
|
||||
"flags": "-tp {name} -mpopcnt -msse3 -msse4.1 -msse4.2 -mssse3 -mavx -mavx2 -mbmi -mbmi2 -mf16c -mfma -mlzcnt -mxsave -mavx512f -mavx512bw -mavx512cd -mavx512dq -mavx512vl"
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
@@ -374,8 +358,7 @@
|
||||
"versions": ":",
|
||||
"flags": "-march={name} -mtune={name}"
|
||||
}
|
||||
],
|
||||
"nvhpc": []
|
||||
]
|
||||
}
|
||||
},
|
||||
"core2": {
|
||||
@@ -429,8 +412,7 @@
|
||||
"versions": ":",
|
||||
"flags": "-march={name} -mtune={name}"
|
||||
}
|
||||
],
|
||||
"nvhpc": []
|
||||
]
|
||||
}
|
||||
},
|
||||
"nehalem": {
|
||||
@@ -495,8 +477,7 @@
|
||||
"name": "corei7",
|
||||
"flags": "-march={name} -mtune={name}"
|
||||
}
|
||||
],
|
||||
"nvhpc": []
|
||||
]
|
||||
}
|
||||
},
|
||||
"westmere": {
|
||||
@@ -558,8 +539,7 @@
|
||||
"name": "corei7",
|
||||
"flags": "-march={name} -mtune={name}"
|
||||
}
|
||||
],
|
||||
"nvhpc": []
|
||||
]
|
||||
}
|
||||
},
|
||||
"sandybridge": {
|
||||
@@ -629,12 +609,6 @@
|
||||
"versions": ":",
|
||||
"flags": "-march={name} -mtune={name}"
|
||||
}
|
||||
],
|
||||
"nvhpc": [
|
||||
{
|
||||
"versions": ":",
|
||||
"flags": "-tp {name}"
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
@@ -707,12 +681,6 @@
|
||||
"versions": ":",
|
||||
"flags": "-march={name} -mtune={name}"
|
||||
}
|
||||
],
|
||||
"nvhpc": [
|
||||
{
|
||||
"versions": ":",
|
||||
"flags": "-tp {name}"
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
@@ -790,12 +758,6 @@
|
||||
"versions": ":",
|
||||
"flags": "-march={name} -mtune={name}"
|
||||
}
|
||||
],
|
||||
"nvhpc": [
|
||||
{
|
||||
"versions": ":",
|
||||
"flags": "-tp {name}"
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
@@ -865,13 +827,6 @@
|
||||
"versions": ":",
|
||||
"flags": "-march={name} -mtune={name}"
|
||||
}
|
||||
],
|
||||
"nvhpc": [
|
||||
{
|
||||
"versions": ":",
|
||||
"name": "haswell",
|
||||
"flags": "-tp {name}"
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
@@ -944,13 +899,6 @@
|
||||
"versions": ":",
|
||||
"flags": "-march={name} -mtune={name}"
|
||||
}
|
||||
],
|
||||
"nvhpc": [
|
||||
{
|
||||
"versions": ":",
|
||||
"name": "haswell",
|
||||
"flags": "-tp {name}"
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
@@ -1115,13 +1063,6 @@
|
||||
"name": "skylake-avx512",
|
||||
"flags": "-march={name} -mtune={name}"
|
||||
}
|
||||
],
|
||||
"nvhpc": [
|
||||
{
|
||||
"versions": ":",
|
||||
"name": "skylake",
|
||||
"flags": "-tp {name}"
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
@@ -1202,13 +1143,6 @@
|
||||
"versions": ":",
|
||||
"flags": "-march={name} -mtune={name}"
|
||||
}
|
||||
],
|
||||
"nvhpc": [
|
||||
{
|
||||
"versions": ":",
|
||||
"name": "skylake",
|
||||
"flags": "-tp {name}"
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
@@ -1288,13 +1222,6 @@
|
||||
"versions": ":",
|
||||
"flags": "-march={name} -mtune={name}"
|
||||
}
|
||||
],
|
||||
"nvhpc": [
|
||||
{
|
||||
"versions": ":",
|
||||
"name": "skylake",
|
||||
"flags": "-tp {name}"
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
@@ -1402,13 +1329,6 @@
|
||||
"name": "icelake-client",
|
||||
"flags": "-march={name} -mtune={name}"
|
||||
}
|
||||
],
|
||||
"nvhpc": [
|
||||
{
|
||||
"versions": ":",
|
||||
"name": "skylake",
|
||||
"flags": "-tp {name}"
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
@@ -1467,8 +1387,7 @@
|
||||
"warnings": "Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
|
||||
"flags": "-msse2"
|
||||
}
|
||||
],
|
||||
"nvhpc": []
|
||||
]
|
||||
}
|
||||
},
|
||||
"bulldozer": {
|
||||
@@ -1532,12 +1451,6 @@
|
||||
"warnings": "Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
|
||||
"flags": "-msse3"
|
||||
}
|
||||
],
|
||||
"nvhpc": [
|
||||
{
|
||||
"versions": ":",
|
||||
"flags": "-tp {name}"
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
@@ -1606,12 +1519,6 @@
|
||||
"warnings": "Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
|
||||
"flags": "-msse3"
|
||||
}
|
||||
],
|
||||
"nvhpc": [
|
||||
{
|
||||
"versions": ":",
|
||||
"flags": "-tp {name}"
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
@@ -1681,13 +1588,6 @@
|
||||
"warnings": "Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
|
||||
"flags": "-msse4.2"
|
||||
}
|
||||
],
|
||||
"nvhpc": [
|
||||
{
|
||||
"versions": ":",
|
||||
"name": "piledriver",
|
||||
"flags": "-tp {name}"
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
@@ -1763,13 +1663,6 @@
|
||||
"name": "core-avx2",
|
||||
"flags": "-march={name} -mtune={name}"
|
||||
}
|
||||
],
|
||||
"nvhpc": [
|
||||
{
|
||||
"versions": ":",
|
||||
"name": "piledriver",
|
||||
"flags": "-tp {name}"
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
@@ -1848,12 +1741,6 @@
|
||||
"name": "core-avx2",
|
||||
"flags": "-march={name} -mtune={name}"
|
||||
}
|
||||
],
|
||||
"nvhpc": [
|
||||
{
|
||||
"versions": ":",
|
||||
"flags": "-tp {name}"
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
@@ -1933,12 +1820,6 @@
|
||||
"name": "core-avx2",
|
||||
"flags": "-march={name} -mtune={name}"
|
||||
}
|
||||
],
|
||||
"nvhpc": [
|
||||
{
|
||||
"versions": "20.5:",
|
||||
"flags": "-tp {name}"
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
@@ -2021,12 +1902,6 @@
|
||||
"name": "core-avx2",
|
||||
"flags": "-march={name} -mtune={name}"
|
||||
}
|
||||
],
|
||||
"nvhpc": [
|
||||
{
|
||||
"versions": "21.11:",
|
||||
"flags": "-tp {name}"
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
@@ -2107,15 +1982,7 @@
|
||||
"name": "znver4",
|
||||
"flags": "-march={name} -mtune={name}"
|
||||
}
|
||||
],
|
||||
"nvhpc": [
|
||||
{
|
||||
"versions": "21.11:",
|
||||
"name": "zen3",
|
||||
"flags": "-tp {name}",
|
||||
"warnings": "zen4 is not fully supported by nvhpc yet, falling back to zen3"
|
||||
}
|
||||
]
|
||||
]
|
||||
}
|
||||
},
|
||||
"ppc64": {
|
||||
@@ -2220,8 +2087,7 @@
|
||||
"versions": ":",
|
||||
"flags": "-mcpu={name} -mtune={name}"
|
||||
}
|
||||
],
|
||||
"nvhpc": []
|
||||
]
|
||||
}
|
||||
},
|
||||
"power8le": {
|
||||
@@ -2250,13 +2116,6 @@
|
||||
"name": "power8",
|
||||
"flags": "-mcpu={name} -mtune={name}"
|
||||
}
|
||||
],
|
||||
"nvhpc": [
|
||||
{
|
||||
"versions": ":",
|
||||
"name": "pwr8",
|
||||
"flags": "-tp {name}"
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
@@ -2280,13 +2139,6 @@
|
||||
"name": "power9",
|
||||
"flags": "-mcpu={name} -mtune={name}"
|
||||
}
|
||||
],
|
||||
"nvhpc": [
|
||||
{
|
||||
"versions": ":",
|
||||
"name": "pwr9",
|
||||
"flags": "-tp {name}"
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
@@ -2318,8 +2170,7 @@
|
||||
"versions": ":",
|
||||
"flags": "-march=armv8-a -mtune=generic"
|
||||
}
|
||||
],
|
||||
"nvhpc": []
|
||||
]
|
||||
}
|
||||
},
|
||||
"armv8.1a": {
|
||||
@@ -2701,13 +2552,6 @@
|
||||
"versions": "20:",
|
||||
"flags" : "-march=armv8.2-a+fp16+rcpc+dotprod+crypto"
|
||||
}
|
||||
],
|
||||
"nvhpc" : [
|
||||
{
|
||||
"versions": "22.5:",
|
||||
"name": "neoverse-n1",
|
||||
"flags": "-tp {name}"
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
@@ -2773,31 +2617,15 @@
|
||||
"flags" : "-march=armv8.2-a+crypto+fp16 -mtune=cortex-a72"
|
||||
},
|
||||
{
|
||||
"versions": "8.0:8.4",
|
||||
"versions": "8.0:8.9",
|
||||
"flags" : "-march=armv8.2-a+fp16+dotprod+crypto -mtune=cortex-a72"
|
||||
},
|
||||
{
|
||||
"versions": "8.5:8.9",
|
||||
"versions": "9.0:9.9",
|
||||
"flags" : "-mcpu=neoverse-v1"
|
||||
},
|
||||
{
|
||||
"versions": "9.0:9.3",
|
||||
"flags" : "-march=armv8.2-a+fp16+dotprod+crypto -mtune=cortex-a72"
|
||||
},
|
||||
{
|
||||
"versions": "9.4:9.9",
|
||||
"flags" : "-mcpu=neoverse-v1"
|
||||
},
|
||||
{
|
||||
"versions": "10.0:10.1",
|
||||
"flags" : "-march=armv8.2-a+fp16+dotprod+crypto -mtune=cortex-a72"
|
||||
},
|
||||
{
|
||||
"versions": "10.2",
|
||||
"flags" : "-mcpu=zeus"
|
||||
},
|
||||
{
|
||||
"versions": "10.3:",
|
||||
{
|
||||
"versions": "10.0:",
|
||||
"flags" : "-mcpu=neoverse-v1"
|
||||
}
|
||||
|
||||
@@ -2829,13 +2657,6 @@
|
||||
"versions": "22:",
|
||||
"flags" : "-march=armv8.4-a+sve+ssbs+fp16+bf16+crypto+i8mm+rng"
|
||||
}
|
||||
],
|
||||
"nvhpc" : [
|
||||
{
|
||||
"versions": "22.5:",
|
||||
"name": "neoverse-n1",
|
||||
"flags": "-tp {name}"
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
@@ -2961,10 +2782,6 @@
    {
        "versions": "13.0:",
        "flags" : "-mcpu=apple-m1"
    },
    {
        "versions": "16.0:",
        "flags" : "-mcpu=apple-m2"
    }
],
"apple-clang": [
@@ -2973,12 +2790,8 @@
        "flags" : "-march=armv8.5-a"
    },
    {
        "versions": "13.0:14.0.2",
        "flags" : "-mcpu=apple-m1"
    },
    {
        "versions": "14.0.2:",
        "flags" : "-mcpu=apple-m2"
        "versions": "13.0:",
        "flags" : "-mcpu=vortex"
    }
]
}

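The JSON hunks above all follow the same pattern: per micro-architecture, each compiler maps to a list of records with "versions", optional "name", "flags" and optional "warnings". A rough, hedged sketch of how such records could be matched against a compiler version; the field names are taken from the hunks, but the selection logic here is illustrative and simplified, not archspec's actual implementation:

def pick_flags(entries, target_name, compiler_version):
    """Return flags from the first entry whose "versions" range
    (e.g. "21.11:" or "8.0:8.9") contains compiler_version.
    Simplified: versions are compared as numeric tuples."""

    def version_key(v):
        return tuple(int(x) for x in v.split("."))

    def in_range(spec, version):
        lo, _, hi = spec.partition(":")
        if lo and version_key(version) < version_key(lo):
            return False
        if hi and version_key(version) > version_key(hi):
            return False
        return True

    for entry in entries:
        if in_range(entry["versions"], compiler_version):
            # "{name}" in "flags" is filled with the entry's own "name"
            # when present, otherwise with the target's name.
            return entry["flags"].format(name=entry.get("name", target_name))
    return None


nvhpc = [{"versions": "21.11:", "name": "zen3", "flags": "-tp {name}"}]
print(pick_flags(nvhpc, "zen4", "22.3"))  # -> -tp zen3
print(pick_flags(nvhpc, "zen4", "21.5"))  # -> None
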
@@ -5,20 +5,18 @@
|
||||
import collections
|
||||
import collections.abc
|
||||
import errno
|
||||
import fnmatch
|
||||
import glob
|
||||
import hashlib
|
||||
import itertools
|
||||
import numbers
|
||||
import os
|
||||
import posixpath
|
||||
import re
|
||||
import shutil
|
||||
import stat
|
||||
import sys
|
||||
import tempfile
|
||||
from contextlib import contextmanager
|
||||
from typing import Callable, Iterable, List, Match, Optional, Tuple, Union
|
||||
from typing import Callable, List, Match, Optional, Tuple, Union
|
||||
|
||||
from llnl.util import tty
|
||||
from llnl.util.lang import dedupe, memoized
|
||||
@@ -1673,38 +1671,6 @@ def fix_darwin_install_name(path):
|
||||
break
|
||||
|
||||
|
||||
def find_first(root: str, files: Union[Iterable[str], str], bfs_depth: int = 2) -> Optional[str]:
    """Find the first file matching a pattern.

    The following

    .. code-block:: console

       $ find /usr -name 'abc*' -o -name 'def*' -quit

    is equivalent to:

    >>> find_first("/usr", ["abc*", "def*"])

    Any glob pattern supported by fnmatch can be used.

    The search order of this method is breadth-first over directories,
    until depth bfs_depth, after which depth-first search is used.

    Parameters:
        root (str): The root directory to start searching from
        files (str or Iterable): File pattern(s) to search for
        bfs_depth (int): (advanced) parameter that specifies at which
            depth to switch to depth-first search.

    Returns:
        str or None: The matching file or None when no file is found.
    """
    if isinstance(files, str):
        files = [files]
    return FindFirstFile(root, *files, bfs_depth=bfs_depth).find()
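For context, a small usage sketch of the helper this hunk removes, assuming it is imported from llnl.util.filesystem as it existed before the change; the directory layout is made up:

import os
import tempfile

from llnl.util.filesystem import find_first  # pre-change API, per the hunk above

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "a", "b"))
open(os.path.join(root, "a", "b", "def.txt"), "w").close()

# Breadth-first up to bfs_depth, then depth-first, mirroring
#   find <root> -name 'abc*' -o -name 'def*' -quit
print(find_first(root, ["abc*", "def*"]))  # .../a/b/def.txt
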
|
||||
|
||||
|
||||
def find(root, files, recursive=True):
|
||||
"""Search for ``files`` starting from the ``root`` directory.
|
||||
|
||||
@@ -2441,7 +2407,7 @@ def _link(self, path, dest_dir):
        # For py2 compatibility, we have to catch the specific Windows error code
        # associated with trying to create a file that already exists (winerror 183)
        except OSError as e:
            if sys.platform == "win32" and (e.winerror == 183 or e.errno == errno.EEXIST):
            if sys.platform == "win32" and e.winerror == 183:
                # We have either already symlinked or we are encountering a naming clash;
                # either way, we don't want to overwrite existing libraries
                already_linked = islink(dest_file)
|
||||
@@ -2754,105 +2720,3 @@ def filesummary(path, print_bytes=16) -> Tuple[int, bytes]:
|
||||
return size, short_contents
|
||||
except OSError:
|
||||
return 0, b""
|
||||
|
||||
|
||||
class FindFirstFile:
    """Uses hybrid iterative deepening to locate the first matching
    file. Up to depth ``bfs_depth`` it uses iterative deepening, which
    mimics breadth-first with the same memory footprint as depth-first
    search, after which it switches to ordinary depth-first search using
    ``os.walk``."""

    def __init__(self, root: str, *file_patterns: str, bfs_depth: int = 2):
        """Create a new search object; searching does not error when
        ``root`` does not exist.

        Args:
            root (str): directory in which to recursively search
            file_patterns (str): glob file patterns understood by fnmatch
            bfs_depth (int): until this depth breadth-first traversal is used,
                when no match is found, the mode is switched to depth-first search.
        """
|
||||
self.root = root
|
||||
self.bfs_depth = bfs_depth
|
||||
self.match: Callable
|
||||
|
||||
# normcase is trivial on posix
|
||||
regex = re.compile("|".join(fnmatch.translate(os.path.normcase(p)) for p in file_patterns))
|
||||
|
||||
# On case sensitive filesystems match against normcase'd paths.
|
||||
if os.path is posixpath:
|
||||
self.match = regex.match
|
||||
else:
|
||||
self.match = lambda p: regex.match(os.path.normcase(p))
|
||||
|
||||
def find(self) -> Optional[str]:
|
||||
"""Run the file search
|
||||
|
||||
Returns:
|
||||
str or None: path of the matching file
|
||||
"""
|
||||
self.file = None
|
||||
|
||||
# First do iterative deepening (i.e. bfs through limited depth dfs)
|
||||
for i in range(self.bfs_depth + 1):
|
||||
if self._find_at_depth(self.root, i):
|
||||
return self.file
|
||||
|
||||
# Then fall back to depth-first search
|
||||
return self._find_dfs()
|
||||
|
||||
def _find_at_depth(self, path, max_depth, depth=0) -> bool:
|
||||
"""Returns True when done. Notice search can be done
|
||||
either because a file was found, or because it recursed
|
||||
through all directories."""
|
||||
try:
|
||||
entries = os.scandir(path)
|
||||
except OSError:
|
||||
return True
|
||||
|
||||
done = True
|
||||
|
||||
with entries:
|
||||
# At max depth we look for matching files.
|
||||
if depth == max_depth:
|
||||
for f in entries:
|
||||
# Exit on match
|
||||
if self.match(f.name):
|
||||
self.file = os.path.join(path, f.name)
|
||||
return True
|
||||
|
||||
# is_dir should not require a stat call, so it's a good optimization.
|
||||
if self._is_dir(f):
|
||||
done = False
|
||||
return done
|
||||
|
||||
# At lower depth only recurse into subdirs
|
||||
for f in entries:
|
||||
if not self._is_dir(f):
|
||||
continue
|
||||
|
||||
# If any subdir is not fully traversed, we're not done yet.
|
||||
if not self._find_at_depth(os.path.join(path, f.name), max_depth, depth + 1):
|
||||
done = False
|
||||
|
||||
# Early exit when we've found something.
|
||||
if self.file:
|
||||
return True
|
||||
|
||||
return done
|
||||
|
||||
def _is_dir(self, f: os.DirEntry) -> bool:
|
||||
"""Returns True when f is dir we can enter (and not a symlink)."""
|
||||
try:
|
||||
return f.is_dir(follow_symlinks=False)
|
||||
except OSError:
|
||||
return False
|
||||
|
||||
def _find_dfs(self) -> Optional[str]:
|
||||
"""Returns match or None"""
|
||||
for dirpath, _, filenames in os.walk(self.root):
|
||||
for file in filenames:
|
||||
if self.match(file):
|
||||
return os.path.join(dirpath, file)
|
||||
return None
|
||||
|
||||
@@ -30,15 +30,9 @@ def symlink(real_path, link_path):
        try:
            # Try to use junctions
            _win32_junction(real_path, link_path)
        except OSError as e:
            if e.errno == errno.EEXIST:
                # EEXIST error indicates that the file we're trying to "link"
                # is already present; don't bother trying to copy, which will also fail,
                # just raise
                raise
            else:
                # If all else fails, fall back to copying files
                shutil.copyfile(real_path, link_path)
        except OSError:
            # If all else fails, fall back to copying files
            shutil.copyfile(real_path, link_path)
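A condensed sketch of the fallback chain around this hunk: try a junction, re-raise when the target already exists, otherwise fall back to copying. This follows the pre-change shape; make_junction stands in for the _win32_junction helper named above and is an assumption of this sketch:

import errno
import shutil


def link_or_copy(real_path, link_path, make_junction):
    try:
        make_junction(real_path, link_path)
    except OSError as e:
        if e.errno == errno.EEXIST:
            # The target is already present; a copy would fail too, so re-raise.
            raise
        # Any other junction failure: degrade gracefully to a plain copy.
        shutil.copyfile(real_path, link_path)
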
|
||||
|
||||
|
||||
def islink(path):
|
||||
|
||||
@@ -695,11 +695,8 @@ def _ensure_variant_defaults_are_parsable(pkgs, error_cls):
|
||||
try:
|
||||
variant.validate_or_raise(vspec, pkg_cls=pkg_cls)
|
||||
except spack.variant.InvalidVariantValueError:
|
||||
error_msg = (
|
||||
"The default value of the variant '{}' in package '{}' failed validation"
|
||||
)
|
||||
question = "Is it among the allowed values?"
|
||||
errors.append(error_cls(error_msg.format(variant_name, pkg_name), [question]))
|
||||
error_msg = "The variant '{}' default value in package '{}' cannot be validated"
|
||||
errors.append(error_cls(error_msg.format(variant_name, pkg_name), []))
|
||||
|
||||
return errors
|
||||
|
||||
|
||||
@@ -24,7 +24,6 @@
|
||||
import warnings
|
||||
from contextlib import closing, contextmanager
|
||||
from gzip import GzipFile
|
||||
from typing import Union
|
||||
from urllib.error import HTTPError, URLError
|
||||
|
||||
import ruamel.yaml as yaml
|
||||
@@ -43,7 +42,6 @@
|
||||
import spack.platforms
|
||||
import spack.relocate as relocate
|
||||
import spack.repo
|
||||
import spack.stage
|
||||
import spack.store
|
||||
import spack.traverse as traverse
|
||||
import spack.util.crypto
|
||||
@@ -503,9 +501,7 @@ def _binary_index():
|
||||
|
||||
|
||||
#: Singleton binary_index instance
|
||||
binary_index: Union[BinaryCacheIndex, llnl.util.lang.Singleton] = llnl.util.lang.Singleton(
|
||||
_binary_index
|
||||
)
|
||||
binary_index = llnl.util.lang.Singleton(_binary_index)
|
||||
|
||||
|
||||
class NoOverwriteException(spack.error.SpackError):
|
||||
@@ -1222,37 +1218,15 @@ def _build_tarball(
    if not spec.concrete:
        raise ValueError("spec must be concrete to build tarball")

    with tempfile.TemporaryDirectory(dir=spack.stage.get_stage_root()) as tmpdir:
        _build_tarball_in_stage_dir(
            spec,
            out_url,
            stage_dir=tmpdir,
            force=force,
            relative=relative,
            unsigned=unsigned,
            allow_root=allow_root,
            key=key,
            regenerate_index=regenerate_index,
        )
    # set up some paths
    tmpdir = tempfile.mkdtemp()
    cache_prefix = build_cache_prefix(tmpdir)
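The two sides of this hunk differ mainly in how the scratch directory is managed: tempfile.TemporaryDirectory cleans up automatically when the with block exits, while a bare tempfile.mkdtemp() needs an explicit shutil.rmtree, typically in a finally clause. A minimal illustration, independent of Spack:

import shutil
import tempfile

# Context-managed: the directory is removed even if the body raises.
with tempfile.TemporaryDirectory() as tmpdir:
    ...  # stage files under tmpdir

# Manual variant: cleanup must be spelled out explicitly.
tmpdir = tempfile.mkdtemp()
try:
    ...  # stage files under tmpdir
finally:
    shutil.rmtree(tmpdir)
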
|
||||
|
||||
|
||||
def _build_tarball_in_stage_dir(
|
||||
spec,
|
||||
out_url,
|
||||
stage_dir,
|
||||
force=False,
|
||||
relative=False,
|
||||
unsigned=False,
|
||||
allow_root=False,
|
||||
key=None,
|
||||
regenerate_index=False,
|
||||
):
|
||||
cache_prefix = build_cache_prefix(stage_dir)
|
||||
tarfile_name = tarball_name(spec, ".spack")
|
||||
tarfile_dir = os.path.join(cache_prefix, tarball_directory_name(spec))
|
||||
tarfile_path = os.path.join(tarfile_dir, tarfile_name)
|
||||
spackfile_path = os.path.join(cache_prefix, tarball_path_name(spec, ".spack"))
|
||||
remote_spackfile_path = url_util.join(out_url, os.path.relpath(spackfile_path, stage_dir))
|
||||
remote_spackfile_path = url_util.join(out_url, os.path.relpath(spackfile_path, tmpdir))
|
||||
|
||||
mkdirp(tarfile_dir)
|
||||
if web_util.url_exists(remote_spackfile_path):
|
||||
@@ -1271,7 +1245,7 @@ def _build_tarball_in_stage_dir(
|
||||
signed_specfile_path = "{0}.sig".format(specfile_path)
|
||||
|
||||
remote_specfile_path = url_util.join(
|
||||
out_url, os.path.relpath(specfile_path, os.path.realpath(stage_dir))
|
||||
out_url, os.path.relpath(specfile_path, os.path.realpath(tmpdir))
|
||||
)
|
||||
remote_signed_specfile_path = "{0}.sig".format(remote_specfile_path)
|
||||
|
||||
@@ -1287,7 +1261,7 @@ def _build_tarball_in_stage_dir(
|
||||
raise NoOverwriteException(url_util.format(remote_specfile_path))
|
||||
|
||||
pkg_dir = os.path.basename(spec.prefix.rstrip(os.path.sep))
|
||||
workdir = os.path.join(stage_dir, pkg_dir)
|
||||
workdir = os.path.join(tmpdir, pkg_dir)
|
||||
|
||||
# TODO: We generally don't want to mutate any files, but when using relative
|
||||
# mode, Spack unfortunately *does* mutate rpaths and links ahead of time.
|
||||
@@ -1311,10 +1285,14 @@ def _build_tarball_in_stage_dir(
|
||||
|
||||
# optionally make the paths in the binaries relative to each other
|
||||
# in the spack install tree before creating tarball
|
||||
if relative:
|
||||
make_package_relative(workdir, spec, buildinfo, allow_root)
|
||||
elif not allow_root:
|
||||
ensure_package_relocatable(buildinfo, binaries_dir)
|
||||
try:
|
||||
if relative:
|
||||
make_package_relative(workdir, spec, buildinfo, allow_root)
|
||||
elif not allow_root:
|
||||
ensure_package_relocatable(buildinfo, binaries_dir)
|
||||
except Exception as e:
|
||||
shutil.rmtree(tmpdir)
|
||||
tty.die(e)
|
||||
|
||||
_do_create_tarball(tarfile_path, binaries_dir, pkg_dir, buildinfo)
|
||||
|
||||
@@ -1346,11 +1324,7 @@ def _build_tarball_in_stage_dir(
    spec_dict["buildinfo"] = buildinfo

    with open(specfile_path, "w") as outfile:
        # Note: when using gpg clear sign, we need to avoid long lines (19995 chars).
        # If lines are longer, they are truncated without error. Thanks GPG!
        # So, here we still add newlines, but use no indent, to save on file size and
        # line length.
        json.dump(spec_dict, outfile, indent=0, separators=(",", ":"))
        outfile.write(sjson.dump(spec_dict))
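The comment above motivates the json.dump call: GPG clear-signing silently truncates lines longer than 19995 characters, so the spec file is written with one element per line but no indentation. A small illustration with invented data:

import json

spec_dict = {"spec": {"name": "zlib", "version": "1.2.13"}}  # invented sample

# indent=0 still emits a newline per element, keeping every line short for
# clear-signing, while separators=(",", ":") drops the padding spaces.
compact = json.dumps(spec_dict, indent=0, separators=(",", ":"))
assert all(len(line) < 19995 for line in compact.splitlines())
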
|
||||
|
||||
# sign the tarball and spec file with gpg
|
||||
if not unsigned:
|
||||
@@ -1367,15 +1341,18 @@ def _build_tarball_in_stage_dir(
|
||||
|
||||
tty.debug('Buildcache for "{0}" written to \n {1}'.format(spec, remote_spackfile_path))
|
||||
|
||||
# push the key to the build cache's _pgp directory so it can be
|
||||
# imported
|
||||
if not unsigned:
|
||||
push_keys(out_url, keys=[key], regenerate_index=regenerate_index, tmpdir=stage_dir)
|
||||
try:
|
||||
# push the key to the build cache's _pgp directory so it can be
|
||||
# imported
|
||||
if not unsigned:
|
||||
push_keys(out_url, keys=[key], regenerate_index=regenerate_index, tmpdir=tmpdir)
|
||||
|
||||
# create an index.json for the build_cache directory so specs can be
|
||||
# found
|
||||
if regenerate_index:
|
||||
generate_package_index(url_util.join(out_url, os.path.relpath(cache_prefix, stage_dir)))
|
||||
# create an index.json for the build_cache directory so specs can be
|
||||
# found
|
||||
if regenerate_index:
|
||||
generate_package_index(url_util.join(out_url, os.path.relpath(cache_prefix, tmpdir)))
|
||||
finally:
|
||||
shutil.rmtree(tmpdir)
|
||||
|
||||
return None
|
||||
|
||||
@@ -1799,15 +1776,7 @@ def is_backup_file(file):
|
||||
relocate.relocate_text(text_names, prefix_to_prefix_text)
|
||||
|
||||
# relocate the install prefixes in binary files including dependencies
|
||||
changed_files = relocate.relocate_text_bin(files_to_relocate, prefix_to_prefix_bin)
|
||||
|
||||
# Add ad-hoc signatures to patched macho files when on macOS.
|
||||
if "macho" in platform.binary_formats and sys.platform == "darwin":
|
||||
codesign = which("codesign")
|
||||
if not codesign:
|
||||
return
|
||||
for binary in changed_files:
|
||||
codesign("-fs-", binary)
|
||||
relocate.relocate_text_bin(files_to_relocate, prefix_to_prefix_bin)
|
||||
|
||||
# If we are installing back to the same location
|
||||
# relocate the sbang location if the spack directory changed
|
||||
@@ -2026,7 +1995,7 @@ def install_root_node(spec, allow_root, unsigned=False, force=False, sha256=None
|
||||
with spack.util.path.filter_padding():
|
||||
tty.msg('Installing "{0}" from a buildcache'.format(spec.format()))
|
||||
extract_tarball(spec, download_result, allow_root, unsigned, force)
|
||||
spack.hooks.post_install(spec, False)
|
||||
spack.hooks.post_install(spec)
|
||||
spack.store.db.add(spec, spack.store.layout)
|
||||
|
||||
|
||||
|
||||
@@ -9,7 +9,6 @@
|
||||
import sys
|
||||
import sysconfig
|
||||
import warnings
|
||||
from typing import Dict, Optional, Sequence, Union
|
||||
|
||||
import archspec.cpu
|
||||
|
||||
@@ -22,10 +21,8 @@
|
||||
|
||||
from .config import spec_for_current_python
|
||||
|
||||
QueryInfo = Dict[str, "spack.spec.Spec"]
|
||||
|
||||
|
||||
def _python_import(module: str) -> bool:
|
||||
def _python_import(module):
|
||||
try:
|
||||
__import__(module)
|
||||
except ImportError:
|
||||
@@ -33,9 +30,7 @@ def _python_import(module: str) -> bool:
|
||||
return True
|
||||
|
||||
|
||||
def _try_import_from_store(
|
||||
module: str, query_spec: Union[str, "spack.spec.Spec"], query_info: Optional[QueryInfo] = None
|
||||
) -> bool:
|
||||
def _try_import_from_store(module, query_spec, query_info=None):
|
||||
"""Return True if the module can be imported from an already
|
||||
installed spec, False otherwise.
|
||||
|
||||
@@ -57,7 +52,7 @@ def _try_import_from_store(
|
||||
module_paths = [
|
||||
os.path.join(candidate_spec.prefix, pkg.purelib),
|
||||
os.path.join(candidate_spec.prefix, pkg.platlib),
|
||||
]
|
||||
] # type: list[str]
|
||||
path_before = list(sys.path)
|
||||
|
||||
# NOTE: try module_paths first and last, last allows an existing version in path
|
||||
@@ -94,7 +89,7 @@ def _try_import_from_store(
|
||||
return False
|
||||
|
||||
|
||||
def _fix_ext_suffix(candidate_spec: "spack.spec.Spec"):
|
||||
def _fix_ext_suffix(candidate_spec):
|
||||
"""Fix the external suffixes of Python extensions on the fly for
|
||||
platforms that may need it
|
||||
|
||||
@@ -162,11 +157,7 @@ def _fix_ext_suffix(candidate_spec: "spack.spec.Spec"):
|
||||
os.symlink(abs_path, link_name)
|
||||
|
||||
|
||||
def _executables_in_store(
|
||||
executables: Sequence[str],
|
||||
query_spec: Union["spack.spec.Spec", str],
|
||||
query_info: Optional[QueryInfo] = None,
|
||||
) -> bool:
|
||||
def _executables_in_store(executables, query_spec, query_info=None):
|
||||
"""Return True if at least one of the executables can be retrieved from
|
||||
a spec in store, False otherwise.
|
||||
|
||||
@@ -202,7 +193,7 @@ def _executables_in_store(
|
||||
return False
|
||||
|
||||
|
||||
def _root_spec(spec_str: str) -> str:
|
||||
def _root_spec(spec_str):
|
||||
"""Add a proper compiler and target to a spec used during bootstrapping.
|
||||
|
||||
Args:
|
||||
|
||||
@@ -7,7 +7,6 @@
|
||||
import contextlib
|
||||
import os.path
|
||||
import sys
|
||||
from typing import Any, Dict, Generator, MutableSequence, Sequence
|
||||
|
||||
from llnl.util import tty
|
||||
|
||||
@@ -25,12 +24,12 @@
|
||||
_REF_COUNT = 0
|
||||
|
||||
|
||||
def is_bootstrapping() -> bool:
|
||||
def is_bootstrapping():
|
||||
"""Return True if we are in a bootstrapping context, False otherwise."""
|
||||
return _REF_COUNT > 0
|
||||
|
||||
|
||||
def spec_for_current_python() -> str:
|
||||
def spec_for_current_python():
|
||||
"""For bootstrapping purposes we are just interested in the Python
|
||||
minor version (all patches are ABI compatible with the same minor).
|
||||
|
||||
@@ -42,14 +41,14 @@ def spec_for_current_python() -> str:
|
||||
return f"python@{version_str}"
|
||||
|
||||
|
||||
def root_path() -> str:
|
||||
def root_path():
|
||||
"""Root of all the bootstrap related folders"""
|
||||
return spack.util.path.canonicalize_path(
|
||||
spack.config.get("bootstrap:root", spack.paths.default_user_bootstrap_path)
|
||||
)
|
||||
|
||||
|
||||
def store_path() -> str:
|
||||
def store_path():
|
||||
"""Path to the store used for bootstrapped software"""
|
||||
enabled = spack.config.get("bootstrap:enable", True)
|
||||
if not enabled:
|
||||
@@ -60,7 +59,7 @@ def store_path() -> str:
|
||||
|
||||
|
||||
@contextlib.contextmanager
|
||||
def spack_python_interpreter() -> Generator:
|
||||
def spack_python_interpreter():
|
||||
"""Override the current configuration to set the interpreter under
|
||||
which Spack is currently running as the only Python external spec
|
||||
available.
|
||||
@@ -77,18 +76,18 @@ def spack_python_interpreter() -> Generator:
|
||||
yield
|
||||
|
||||
|
||||
def _store_path() -> str:
|
||||
def _store_path():
|
||||
bootstrap_root_path = root_path()
|
||||
return spack.util.path.canonicalize_path(os.path.join(bootstrap_root_path, "store"))
|
||||
|
||||
|
||||
def _config_path() -> str:
|
||||
def _config_path():
|
||||
bootstrap_root_path = root_path()
|
||||
return spack.util.path.canonicalize_path(os.path.join(bootstrap_root_path, "config"))
|
||||
|
||||
|
||||
@contextlib.contextmanager
|
||||
def ensure_bootstrap_configuration() -> Generator:
|
||||
def ensure_bootstrap_configuration():
|
||||
"""Swap the current configuration for the one used to bootstrap Spack.
|
||||
|
||||
The context manager is reference counted to ensure we don't swap multiple
|
||||
@@ -108,7 +107,7 @@ def ensure_bootstrap_configuration() -> Generator:
|
||||
_REF_COUNT -= 1
|
||||
|
||||
|
||||
def _read_and_sanitize_configuration() -> Dict[str, Any]:
|
||||
def _read_and_sanitize_configuration():
|
||||
"""Read the user configuration that needs to be reused for bootstrapping
|
||||
and remove the entries that should not be copied over.
|
||||
"""
|
||||
@@ -121,11 +120,9 @@ def _read_and_sanitize_configuration() -> Dict[str, Any]:
|
||||
return user_configuration
|
||||
|
||||
|
||||
def _bootstrap_config_scopes() -> Sequence["spack.config.ConfigScope"]:
|
||||
def _bootstrap_config_scopes():
|
||||
tty.debug("[BOOTSTRAP CONFIG SCOPE] name=_builtin")
|
||||
config_scopes: MutableSequence["spack.config.ConfigScope"] = [
|
||||
spack.config.InternalConfigScope("_builtin", spack.config.config_defaults)
|
||||
]
|
||||
config_scopes = [spack.config.InternalConfigScope("_builtin", spack.config.config_defaults)]
|
||||
configuration_paths = (spack.config.configuration_defaults_path, ("bootstrap", _config_path()))
|
||||
for name, path in configuration_paths:
|
||||
platform = spack.platforms.host().name
|
||||
@@ -140,7 +137,7 @@ def _bootstrap_config_scopes() -> Sequence["spack.config.ConfigScope"]:
|
||||
return config_scopes
|
||||
|
||||
|
||||
def _add_compilers_if_missing() -> None:
|
||||
def _add_compilers_if_missing():
|
||||
arch = spack.spec.ArchSpec.frontend_arch()
|
||||
if not spack.compilers.compilers_for_arch(arch):
|
||||
new_compilers = spack.compilers.find_new_compilers()
|
||||
@@ -149,7 +146,7 @@ def _add_compilers_if_missing() -> None:
|
||||
|
||||
|
||||
@contextlib.contextmanager
|
||||
def _ensure_bootstrap_configuration() -> Generator:
|
||||
def _ensure_bootstrap_configuration():
|
||||
bootstrap_store_path = store_path()
|
||||
user_configuration = _read_and_sanitize_configuration()
|
||||
with spack.environment.no_active_environment():
|
||||
|
||||
@@ -29,7 +29,7 @@
|
||||
import os.path
|
||||
import sys
|
||||
import uuid
|
||||
from typing import Any, Callable, Dict, List, Optional, Tuple
|
||||
from typing import Callable, List, Optional
|
||||
|
||||
from llnl.util import tty
|
||||
from llnl.util.lang import GroupedExceptionHandler
|
||||
@@ -66,9 +66,6 @@
|
||||
_bootstrap_methods = {}
|
||||
|
||||
|
||||
ConfigDictionary = Dict[str, Any]
|
||||
|
||||
|
||||
def bootstrapper(bootstrapper_type: str):
|
||||
"""Decorator to register classes implementing bootstrapping
|
||||
methods.
|
||||
@@ -89,7 +86,7 @@ class Bootstrapper:
|
||||
|
||||
config_scope_name = ""
|
||||
|
||||
def __init__(self, conf: ConfigDictionary) -> None:
|
||||
def __init__(self, conf):
|
||||
self.conf = conf
|
||||
self.name = conf["name"]
|
||||
self.metadata_dir = spack.util.path.canonicalize_path(conf["metadata"])
|
||||
@@ -103,7 +100,7 @@ def __init__(self, conf: ConfigDictionary) -> None:
|
||||
self.url = url
|
||||
|
||||
@property
|
||||
def mirror_scope(self) -> spack.config.InternalConfigScope:
|
||||
def mirror_scope(self):
|
||||
"""Mirror scope to be pushed onto the bootstrapping configuration when using
|
||||
this bootstrapper.
|
||||
"""
|
||||
@@ -124,7 +121,7 @@ def try_import(self, module: str, abstract_spec_str: str) -> bool:
|
||||
"""
|
||||
return False
|
||||
|
||||
def try_search_path(self, executables: Tuple[str], abstract_spec_str: str) -> bool:
|
||||
def try_search_path(self, executables: List[str], abstract_spec_str: str) -> bool:
|
||||
"""Try to search some executables in the prefix of specs satisfying the abstract
|
||||
spec passed as argument.
|
||||
|
||||
@@ -142,15 +139,13 @@ def try_search_path(self, executables: Tuple[str], abstract_spec_str: str) -> bo
|
||||
class BuildcacheBootstrapper(Bootstrapper):
|
||||
"""Install the software needed during bootstrapping from a buildcache."""
|
||||
|
||||
def __init__(self, conf) -> None:
|
||||
def __init__(self, conf):
|
||||
super().__init__(conf)
|
||||
self.last_search: Optional[ConfigDictionary] = None
|
||||
self.last_search = None
|
||||
self.config_scope_name = f"bootstrap_buildcache-{uuid.uuid4()}"
|
||||
|
||||
@staticmethod
|
||||
def _spec_and_platform(
|
||||
abstract_spec_str: str,
|
||||
) -> Tuple[spack.spec.Spec, spack.platforms.Platform]:
|
||||
def _spec_and_platform(abstract_spec_str):
|
||||
"""Return the spec object and platform we need to use when
|
||||
querying the buildcache.
|
||||
|
||||
@@ -163,7 +158,7 @@ def _spec_and_platform(
|
||||
bincache_platform = spack.platforms.real_host()
|
||||
return abstract_spec, bincache_platform
|
||||
|
||||
def _read_metadata(self, package_name: str) -> Any:
|
||||
def _read_metadata(self, package_name):
|
||||
"""Return metadata about the given package."""
|
||||
json_filename = f"{package_name}.json"
|
||||
json_dir = self.metadata_dir
|
||||
@@ -172,13 +167,7 @@ def _read_metadata(self, package_name: str) -> Any:
|
||||
data = json.load(stream)
|
||||
return data
|
||||
|
||||
def _install_by_hash(
|
||||
self,
|
||||
pkg_hash: str,
|
||||
pkg_sha256: str,
|
||||
index: List[spack.spec.Spec],
|
||||
bincache_platform: spack.platforms.Platform,
|
||||
) -> None:
|
||||
def _install_by_hash(self, pkg_hash, pkg_sha256, index, bincache_platform):
|
||||
index_spec = next(x for x in index if x.dag_hash() == pkg_hash)
|
||||
# Reconstruct the compiler that we need to use for bootstrapping
|
||||
compiler_entry = {
|
||||
@@ -203,13 +192,7 @@ def _install_by_hash(
|
||||
match, allow_root=True, unsigned=True, force=True, sha256=pkg_sha256
|
||||
)
|
||||
|
||||
def _install_and_test(
|
||||
self,
|
||||
abstract_spec: spack.spec.Spec,
|
||||
bincache_platform: spack.platforms.Platform,
|
||||
bincache_data,
|
||||
test_fn,
|
||||
) -> bool:
|
||||
def _install_and_test(self, abstract_spec, bincache_platform, bincache_data, test_fn):
|
||||
# Ensure we see only the buildcache being used to bootstrap
|
||||
with spack.config.override(self.mirror_scope):
|
||||
# This index is currently needed to get the compiler used to build some
|
||||
@@ -234,14 +217,13 @@ def _install_and_test(
|
||||
for _, pkg_hash, pkg_sha256 in item["binaries"]:
|
||||
self._install_by_hash(pkg_hash, pkg_sha256, index, bincache_platform)
|
||||
|
||||
info: ConfigDictionary = {}
|
||||
info = {}
|
||||
if test_fn(query_spec=abstract_spec, query_info=info):
|
||||
self.last_search = info
|
||||
return True
|
||||
return False
|
||||
|
||||
def try_import(self, module: str, abstract_spec_str: str) -> bool:
|
||||
info: ConfigDictionary
|
||||
def try_import(self, module, abstract_spec_str):
|
||||
test_fn, info = functools.partial(_try_import_from_store, module), {}
|
||||
if test_fn(query_spec=abstract_spec_str, query_info=info):
|
||||
return True
|
||||
@@ -253,8 +235,7 @@ def try_import(self, module: str, abstract_spec_str: str) -> bool:
|
||||
data = self._read_metadata(module)
|
||||
return self._install_and_test(abstract_spec, bincache_platform, data, test_fn)
|
||||
|
||||
def try_search_path(self, executables: Tuple[str], abstract_spec_str: str) -> bool:
|
||||
info: ConfigDictionary
|
||||
def try_search_path(self, executables, abstract_spec_str):
|
||||
test_fn, info = functools.partial(_executables_in_store, executables), {}
|
||||
if test_fn(query_spec=abstract_spec_str, query_info=info):
|
||||
self.last_search = info
|
||||
@@ -270,13 +251,13 @@ def try_search_path(self, executables: Tuple[str], abstract_spec_str: str) -> bo
|
||||
class SourceBootstrapper(Bootstrapper):
|
||||
"""Install the software needed during bootstrapping from sources."""
|
||||
|
||||
def __init__(self, conf) -> None:
|
||||
def __init__(self, conf):
|
||||
super().__init__(conf)
|
||||
self.last_search: Optional[ConfigDictionary] = None
|
||||
self.last_search = None
|
||||
self.config_scope_name = f"bootstrap_source-{uuid.uuid4()}"
|
||||
|
||||
def try_import(self, module: str, abstract_spec_str: str) -> bool:
|
||||
info: ConfigDictionary = {}
|
||||
def try_import(self, module, abstract_spec_str):
|
||||
info = {}
|
||||
if _try_import_from_store(module, abstract_spec_str, query_info=info):
|
||||
self.last_search = info
|
||||
return True
|
||||
@@ -312,8 +293,8 @@ def try_import(self, module: str, abstract_spec_str: str) -> bool:
|
||||
return True
|
||||
return False
|
||||
|
||||
def try_search_path(self, executables: Tuple[str], abstract_spec_str: str) -> bool:
|
||||
info: ConfigDictionary = {}
|
||||
def try_search_path(self, executables, abstract_spec_str):
|
||||
info = {}
|
||||
if _executables_in_store(executables, abstract_spec_str, query_info=info):
|
||||
self.last_search = info
|
||||
return True
|
||||
@@ -342,13 +323,13 @@ def try_search_path(self, executables: Tuple[str], abstract_spec_str: str) -> bo
|
||||
return False
|
||||
|
||||
|
||||
def create_bootstrapper(conf: ConfigDictionary):
|
||||
def create_bootstrapper(conf):
|
||||
"""Return a bootstrap object built according to the configuration argument"""
|
||||
btype = conf["type"]
|
||||
return _bootstrap_methods[btype](conf)
|
||||
|
||||
|
||||
def source_is_enabled_or_raise(conf: ConfigDictionary):
|
||||
def source_is_enabled_or_raise(conf):
|
||||
"""Raise ValueError if the source is not enabled for bootstrapping"""
|
||||
trusted, name = spack.config.get("bootstrap:trusted"), conf["name"]
|
||||
if not trusted.get(name, False):
|
||||
@@ -473,7 +454,7 @@ def ensure_executables_in_path_or_raise(
|
||||
raise RuntimeError(msg)
|
||||
|
||||
|
||||
def _add_externals_if_missing() -> None:
|
||||
def _add_externals_if_missing():
|
||||
search_list = [
|
||||
# clingo
|
||||
spack.repo.path.get_pkg_class("cmake"),
|
||||
@@ -487,41 +468,41 @@ def _add_externals_if_missing() -> None:
|
||||
spack.detection.update_configuration(detected_packages, scope="bootstrap")
|
||||
|
||||
|
||||
def clingo_root_spec() -> str:
|
||||
def clingo_root_spec():
|
||||
"""Return the root spec used to bootstrap clingo"""
|
||||
return _root_spec("clingo-bootstrap@spack+python")
|
||||
|
||||
|
||||
def ensure_clingo_importable_or_raise() -> None:
|
||||
def ensure_clingo_importable_or_raise():
|
||||
"""Ensure that the clingo module is available for import."""
|
||||
ensure_module_importable_or_raise(module="clingo", abstract_spec=clingo_root_spec())
|
||||
|
||||
|
||||
def gnupg_root_spec() -> str:
|
||||
def gnupg_root_spec():
|
||||
"""Return the root spec used to bootstrap GnuPG"""
|
||||
return _root_spec("gnupg@2.3:")
|
||||
|
||||
|
||||
def ensure_gpg_in_path_or_raise() -> None:
|
||||
def ensure_gpg_in_path_or_raise():
|
||||
"""Ensure gpg or gpg2 are in the PATH or raise."""
|
||||
return ensure_executables_in_path_or_raise(
|
||||
executables=["gpg2", "gpg"], abstract_spec=gnupg_root_spec()
|
||||
)
|
||||
|
||||
|
||||
def patchelf_root_spec() -> str:
|
||||
def patchelf_root_spec():
|
||||
"""Return the root spec used to bootstrap patchelf"""
|
||||
# 0.13.1 is the last version not to require C++17.
|
||||
return _root_spec("patchelf@0.13.1:")
|
||||
|
||||
|
||||
def verify_patchelf(patchelf: "spack.util.executable.Executable") -> bool:
def verify_patchelf(patchelf):
    """Older patchelf versions can produce broken binaries, so we
    verify the version here.

    Arguments:

        patchelf: patchelf executable
        patchelf (spack.util.executable.Executable): patchelf executable
    """
    out = patchelf("--version", output=str, error=os.devnull, fail_on_error=False).strip()
    if patchelf.returncode != 0:
@@ -536,7 +517,7 @@ def verify_patchelf(patchelf: "spack.util.executable.Executable") -> bool:
    return version >= spack.version.Version("0.13.1")
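Outside of Spack's Executable wrapper, the same check can be sketched with the standard library; the 0.13.1 threshold comes from the code above, while the parsing details here are an assumption:

import re
import subprocess


def patchelf_is_new_enough(patchelf_path="patchelf"):
    """Run `patchelf --version` and require at least 0.13.1."""
    proc = subprocess.run([patchelf_path, "--version"], capture_output=True, text=True)
    if proc.returncode != 0:
        return False
    match = re.search(r"(\d+(?:\.\d+)+)", proc.stdout)
    if not match:
        return False
    version = tuple(int(x) for x in match.group(1).split("."))
    return version >= (0, 13, 1)
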
|
||||
|
||||
|
||||
def ensure_patchelf_in_path_or_raise() -> None:
|
||||
def ensure_patchelf_in_path_or_raise():
|
||||
"""Ensure patchelf is in the PATH or raise."""
|
||||
# The old concretizer is not smart and we're doing its job: if the latest patchelf
|
||||
# does not concretize because the compiler doesn't support C++17, we try to
|
||||
@@ -553,7 +534,7 @@ def ensure_patchelf_in_path_or_raise() -> None:
|
||||
)
|
||||
|
||||
|
||||
def ensure_core_dependencies() -> None:
|
||||
def ensure_core_dependencies():
|
||||
"""Ensure the presence of all the core dependencies."""
|
||||
if sys.platform.lower() == "linux":
|
||||
ensure_patchelf_in_path_or_raise()
|
||||
@@ -562,7 +543,7 @@ def ensure_core_dependencies() -> None:
|
||||
ensure_clingo_importable_or_raise()
|
||||
|
||||
|
||||
def all_core_root_specs() -> List[str]:
|
||||
def all_core_root_specs():
|
||||
"""Return a list of all the core root specs that may be used to bootstrap Spack"""
|
||||
return [clingo_root_spec(), gnupg_root_spec(), patchelf_root_spec()]
|
||||
|
||||
|
||||
@@ -9,7 +9,6 @@
|
||||
import pathlib
|
||||
import sys
|
||||
import warnings
|
||||
from typing import List
|
||||
|
||||
import archspec.cpu
|
||||
|
||||
@@ -19,7 +18,6 @@
|
||||
import spack.environment
|
||||
import spack.tengine
|
||||
import spack.util.executable
|
||||
from spack.environment import depfile
|
||||
|
||||
from ._common import _root_spec
|
||||
from .config import root_path, spec_for_current_python, store_path
|
||||
@@ -29,7 +27,7 @@ class BootstrapEnvironment(spack.environment.Environment):
|
||||
"""Environment to install dependencies of Spack for a given interpreter and architecture"""
|
||||
|
||||
@classmethod
|
||||
def spack_dev_requirements(cls) -> List[str]:
|
||||
def spack_dev_requirements(cls):
|
||||
"""Spack development requirements"""
|
||||
return [
|
||||
isort_root_spec(),
|
||||
@@ -40,7 +38,7 @@ def spack_dev_requirements(cls) -> List[str]:
|
||||
]
|
||||
|
||||
@classmethod
|
||||
def environment_root(cls) -> pathlib.Path:
|
||||
def environment_root(cls):
|
||||
"""Environment root directory"""
|
||||
bootstrap_root_path = root_path()
|
||||
python_part = spec_for_current_python().replace("@", "")
|
||||
@@ -54,12 +52,12 @@ def environment_root(cls) -> pathlib.Path:
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def view_root(cls) -> pathlib.Path:
|
||||
def view_root(cls):
|
||||
"""Location of the view"""
|
||||
return cls.environment_root().joinpath("view")
|
||||
|
||||
@classmethod
|
||||
def pythonpaths(cls) -> List[str]:
|
||||
def pythonpaths(cls):
|
||||
"""Paths to be added to sys.path or PYTHONPATH"""
|
||||
python_dir_part = f"python{'.'.join(str(x) for x in sys.version_info[:2])}"
|
||||
glob_expr = str(cls.view_root().joinpath("**", python_dir_part, "**"))
|
||||
@@ -70,21 +68,21 @@ def pythonpaths(cls) -> List[str]:
|
||||
return result
|
||||
|
||||
@classmethod
|
||||
def bin_dirs(cls) -> List[pathlib.Path]:
|
||||
def bin_dirs(cls):
|
||||
"""Paths to be added to PATH"""
|
||||
return [cls.view_root().joinpath("bin")]
|
||||
|
||||
@classmethod
|
||||
def spack_yaml(cls) -> pathlib.Path:
|
||||
def spack_yaml(cls):
|
||||
"""Environment spack.yaml file"""
|
||||
return cls.environment_root().joinpath("spack.yaml")
|
||||
|
||||
def __init__(self) -> None:
|
||||
def __init__(self):
|
||||
if not self.spack_yaml().exists():
|
||||
self._write_spack_yaml_file()
|
||||
super().__init__(self.environment_root())
|
||||
|
||||
def update_installations(self) -> None:
|
||||
def update_installations(self):
|
||||
"""Update the installations of this environment.
|
||||
|
||||
The update is done using a depfile on Linux and macOS, and using the ``install_all``
|
||||
@@ -105,7 +103,7 @@ def update_installations(self) -> None:
|
||||
self._install_with_depfile()
|
||||
self.write(regenerate=True)
|
||||
|
||||
def update_syspath_and_environ(self) -> None:
|
||||
def update_syspath_and_environ(self):
|
||||
"""Update ``sys.path`` and the PATH, PYTHONPATH environment variables to point to
|
||||
the environment view.
|
||||
"""
|
||||
@@ -121,13 +119,16 @@ def update_syspath_and_environ(self) -> None:
|
||||
+ [str(x) for x in self.pythonpaths()]
|
||||
)
|
||||
|
||||
def _install_with_depfile(self) -> None:
|
||||
model = depfile.MakefileModel.from_env(self)
|
||||
template = spack.tengine.make_environment().get_template(
|
||||
os.path.join("depfile", "Makefile")
|
||||
def _install_with_depfile(self):
|
||||
spackcmd = spack.util.executable.which("spack")
|
||||
spackcmd(
|
||||
"-e",
|
||||
str(self.environment_root()),
|
||||
"env",
|
||||
"depfile",
|
||||
"-o",
|
||||
str(self.environment_root().joinpath("Makefile")),
|
||||
)
|
||||
makefile = self.environment_root() / "Makefile"
|
||||
makefile.write_text(template.render(model.to_dict()))
|
||||
make = spack.util.executable.which("make")
|
||||
kwargs = {}
|
||||
if not tty.is_debug():
|
||||
@@ -140,7 +141,7 @@ def _install_with_depfile(self) -> None:
|
||||
**kwargs,
|
||||
)
|
||||
|
||||
def _write_spack_yaml_file(self) -> None:
|
||||
def _write_spack_yaml_file(self):
|
||||
tty.msg(
|
||||
"[BOOTSTRAPPING] Spack has missing dependencies, creating a bootstrapping environment"
|
||||
)
|
||||
@@ -158,32 +159,32 @@ def _write_spack_yaml_file(self) -> None:
|
||||
self.spack_yaml().write_text(template.render(context), encoding="utf-8")
|
||||
|
||||
|
||||
def isort_root_spec() -> str:
|
||||
def isort_root_spec():
|
||||
"""Return the root spec used to bootstrap isort"""
|
||||
return _root_spec("py-isort@4.3.5:")
|
||||
|
||||
|
||||
def mypy_root_spec() -> str:
|
||||
def mypy_root_spec():
|
||||
"""Return the root spec used to bootstrap mypy"""
|
||||
return _root_spec("py-mypy@0.900:")
|
||||
|
||||
|
||||
def black_root_spec() -> str:
|
||||
def black_root_spec():
|
||||
"""Return the root spec used to bootstrap black"""
|
||||
return _root_spec("py-black@:23.1.0")
|
||||
|
||||
|
||||
def flake8_root_spec() -> str:
|
||||
def flake8_root_spec():
|
||||
"""Return the root spec used to bootstrap flake8"""
|
||||
return _root_spec("py-flake8")
|
||||
|
||||
|
||||
def pytest_root_spec() -> str:
|
||||
def pytest_root_spec():
|
||||
"""Return the root spec used to bootstrap flake8"""
|
||||
return _root_spec("py-pytest")
|
||||
|
||||
|
||||
def ensure_environment_dependencies() -> None:
|
||||
def ensure_environment_dependencies():
|
||||
"""Ensure Spack dependencies from the bootstrap environment are installed and ready to use"""
|
||||
with BootstrapEnvironment() as env:
|
||||
env.update_installations()
|
||||
|
||||
@@ -4,7 +4,6 @@
|
||||
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
|
||||
"""Query the status of bootstrapping on this machine"""
|
||||
import platform
|
||||
from typing import List, Optional, Sequence, Tuple, Union
|
||||
|
||||
import spack.util.executable
|
||||
|
||||
@@ -20,12 +19,8 @@
|
||||
pytest_root_spec,
|
||||
)
|
||||
|
||||
ExecutablesType = Union[str, Sequence[str]]
|
||||
RequiredResponseType = Tuple[bool, Optional[str]]
|
||||
SpecLike = Union["spack.spec.Spec", str]
|
||||
|
||||
|
||||
def _required_system_executable(exes: ExecutablesType, msg: str) -> RequiredResponseType:
|
||||
def _required_system_executable(exes, msg):
|
||||
"""Search for an executable is the system path only."""
|
||||
if isinstance(exes, str):
|
||||
exes = (exes,)
|
||||
@@ -34,9 +29,7 @@ def _required_system_executable(exes: ExecutablesType, msg: str) -> RequiredResp
|
||||
return False, msg
|
||||
|
||||
|
||||
def _required_executable(
|
||||
exes: ExecutablesType, query_spec: SpecLike, msg: str
|
||||
) -> RequiredResponseType:
|
||||
def _required_executable(exes, query_spec, msg):
|
||||
"""Search for an executable in the system path or in the bootstrap store."""
|
||||
if isinstance(exes, str):
|
||||
exes = (exes,)
|
||||
@@ -45,7 +38,7 @@ def _required_executable(
|
||||
return False, msg
|
||||
|
||||
|
||||
def _required_python_module(module: str, query_spec: SpecLike, msg: str) -> RequiredResponseType:
|
||||
def _required_python_module(module, query_spec, msg):
|
||||
"""Check if a Python module is available in the current interpreter or
|
||||
if it can be loaded from the bootstrap store
|
||||
"""
|
||||
@@ -54,7 +47,7 @@ def _required_python_module(module: str, query_spec: SpecLike, msg: str) -> Requ
|
||||
return False, msg
|
||||
|
||||
|
||||
def _missing(name: str, purpose: str, system_only: bool = True) -> str:
|
||||
def _missing(name, purpose, system_only=True):
|
||||
"""Message to be printed if an executable is not found"""
|
||||
msg = '[{2}] MISSING "{0}": {1}'
|
||||
if not system_only:
|
||||
@@ -62,7 +55,7 @@ def _missing(name: str, purpose: str, system_only: bool = True) -> str:
|
||||
return msg.format(name, purpose, "@*y{{-}}")
|
||||
|
||||
|
||||
def _core_requirements() -> List[RequiredResponseType]:
|
||||
def _core_requirements():
|
||||
_core_system_exes = {
|
||||
"make": _missing("make", "required to build software from sources"),
|
||||
"patch": _missing("patch", "required to patch source code before building"),
|
||||
@@ -87,7 +80,7 @@ def _core_requirements() -> List[RequiredResponseType]:
|
||||
return result
|
||||
|
||||
|
||||
def _buildcache_requirements() -> List[RequiredResponseType]:
|
||||
def _buildcache_requirements():
|
||||
_buildcache_exes = {
|
||||
"file": _missing("file", "required to analyze files for buildcaches"),
|
||||
("gpg2", "gpg"): _missing("gpg2", "required to sign/verify buildcaches", False),
|
||||
@@ -110,7 +103,7 @@ def _buildcache_requirements() -> List[RequiredResponseType]:
|
||||
return result
|
||||
|
||||
|
||||
def _optional_requirements() -> List[RequiredResponseType]:
|
||||
def _optional_requirements():
|
||||
_optional_exes = {
|
||||
"zstd": _missing("zstd", "required to compress/decompress code archives"),
|
||||
"svn": _missing("svn", "required to manage subversion repositories"),
|
||||
@@ -121,7 +114,7 @@ def _optional_requirements() -> List[RequiredResponseType]:
|
||||
return result
|
||||
|
||||
|
||||
def _development_requirements() -> List[RequiredResponseType]:
|
||||
def _development_requirements():
|
||||
# Ensure we trigger environment modifications if we have an environment
|
||||
if BootstrapEnvironment.spack_yaml().exists():
|
||||
with BootstrapEnvironment() as env:
|
||||
@@ -146,7 +139,7 @@ def _development_requirements() -> List[RequiredResponseType]:
|
||||
]
|
||||
|
||||
|
||||
def status_message(section) -> Tuple[str, bool]:
|
||||
def status_message(section):
|
||||
"""Return a status message to be printed to screen that refers to the
|
||||
section passed as argument and a bool which is True if there are missing
|
||||
dependencies.
|
||||
@@ -168,7 +161,7 @@ def status_message(section) -> Tuple[str, bool]:
|
||||
with ensure_bootstrap_configuration():
|
||||
missing_software = False
|
||||
for found, err_msg in required_software():
|
||||
if not found and err_msg:
|
||||
if not found:
|
||||
missing_software = True
|
||||
msg += "\n " + err_msg
|
||||
msg += "\n"
|
||||
|
||||
@@ -423,6 +423,14 @@ def set_wrapper_variables(pkg, env):
|
||||
compiler = pkg.compiler
|
||||
env.extend(spack.schema.environment.parse(compiler.environment))
|
||||
|
||||
# Before setting up PATH to Spack compiler wrappers, make sure compiler is in PATH
|
||||
# This ensures that non-wrapped executables from the compiler bin directory are available
|
||||
bindirs = dedupe(
|
||||
[os.path.dirname(c) for c in [compiler.cc, compiler.cxx, compiler.fc, compiler.f77]]
|
||||
)
|
||||
for bindir in bindirs:
|
||||
env.prepend_path("PATH", bindir)
|
||||
|
||||
if compiler.extra_rpaths:
|
||||
extra_rpaths = ":".join(compiler.extra_rpaths)
|
||||
env.set("SPACK_COMPILER_EXTRA_RPATHS", extra_rpaths)
|
||||
|
||||
@@ -8,7 +8,7 @@
|
||||
import platform
|
||||
import re
|
||||
import sys
|
||||
from typing import List, Optional, Tuple
|
||||
from typing import List, Tuple
|
||||
|
||||
import llnl.util.filesystem as fs
|
||||
|
||||
@@ -16,7 +16,7 @@
|
||||
import spack.builder
|
||||
import spack.package_base
|
||||
import spack.util.path
|
||||
from spack.directives import build_system, conflicts, depends_on, variant
|
||||
from spack.directives import build_system, depends_on, variant
|
||||
from spack.multimethod import when
|
||||
|
||||
from ._checks import BaseBuilder, execute_build_time_tests
|
||||
@@ -35,43 +35,6 @@ def _extract_primary_generator(generator):
|
||||
return primary_generator
|
||||
|
||||
|
||||
def generator(*names: str, default: Optional[str] = None):
|
||||
"""The build system generator to use.
|
||||
|
||||
See ``cmake --help`` for a list of valid generators.
|
||||
Currently, "Unix Makefiles" and "Ninja" are the only generators
|
||||
that Spack supports. Defaults to "Unix Makefiles".
|
||||
|
||||
See https://cmake.org/cmake/help/latest/manual/cmake-generators.7.html
|
||||
for more information.
|
||||
|
||||
Args:
|
||||
names: allowed generators for this package
|
||||
default: default generator
|
||||
"""
|
||||
allowed_values = ("make", "ninja")
|
||||
if any(x not in allowed_values for x in names):
|
||||
msg = "only 'make' and 'ninja' are allowed for CMake's 'generator' directive"
|
||||
raise ValueError(msg)
|
||||
|
||||
default = default or names[0]
|
||||
not_used = [x for x in allowed_values if x not in names]
|
||||
|
||||
def _values(x):
|
||||
return x in allowed_values
|
||||
|
||||
_values.__doc__ = f"{','.join(names)}"
|
||||
|
||||
variant(
|
||||
"generator",
|
||||
default=default,
|
||||
values=_values,
|
||||
description="the build system generator to use",
|
||||
)
|
||||
for x in not_used:
|
||||
conflicts(f"generator={x}")
|
||||
|
||||
|
||||
class CMakePackage(spack.package_base.PackageBase):
|
||||
"""Specialized class for packages built using CMake
|
||||
|
||||
@@ -104,15 +67,8 @@ class CMakePackage(spack.package_base.PackageBase):
|
||||
when="^cmake@3.9:",
|
||||
description="CMake interprocedural optimization",
|
||||
)
|
||||
|
||||
if sys.platform == "win32":
|
||||
generator("ninja")
|
||||
else:
|
||||
generator("ninja", "make", default="make")
|
||||
|
||||
depends_on("cmake", type="build")
|
||||
depends_on("gmake", type="build", when="generator=make")
|
||||
depends_on("ninja", type="build", when="generator=ninja")
|
||||
depends_on("ninja", type="build", when="platform=windows")
|
||||
|
||||
def flags_to_build_system_args(self, flags):
|
||||
"""Return a list of all command line arguments to pass the specified
|
||||
@@ -182,6 +138,18 @@ class CMakeBuilder(BaseBuilder):
|
||||
| :py:meth:`~.CMakeBuilder.build_directory` | Directory where to |
|
||||
| | build the package |
|
||||
+-----------------------------------------------+--------------------+
|
||||
|
||||
The generator used by CMake can be specified by providing the ``generator``
|
||||
attribute. Per
|
||||
https://cmake.org/cmake/help/git-master/manual/cmake-generators.7.html,
|
||||
the format is: [<secondary-generator> - ]<primary_generator>.
|
||||
|
||||
The full list of primary and secondary generators supported by CMake may be found
|
||||
in the documentation for the version of CMake used; however, at this time Spack
|
||||
supports only the primary generators "Unix Makefiles" and "Ninja." Spack's CMake
|
||||
support is agnostic with respect to primary generators. Spack will generate a
|
||||
runtime error if the generator string does not follow the prescribed format, or if
|
||||
the primary generator is not supported.
|
||||
"""
|
||||
|
||||
#: Phases of a CMake package
|
||||
@@ -192,6 +160,7 @@ class CMakeBuilder(BaseBuilder):
|
||||
|
||||
#: Names associated with package attributes in the old build-system format
|
||||
legacy_attributes: Tuple[str, ...] = (
|
||||
"generator",
|
||||
"build_targets",
|
||||
"install_targets",
|
||||
"build_time_test_callbacks",
|
||||
@@ -202,6 +171,16 @@ class CMakeBuilder(BaseBuilder):
|
||||
"build_directory",
|
||||
)
|
||||
|
||||
#: The build system generator to use.
|
||||
#:
|
||||
#: See ``cmake --help`` for a list of valid generators.
|
||||
#: Currently, "Unix Makefiles" and "Ninja" are the only generators
|
||||
#: that Spack supports. Defaults to "Unix Makefiles".
|
||||
#:
|
||||
#: See https://cmake.org/cmake/help/latest/manual/cmake-generators.7.html
|
||||
#: for more information.
|
||||
generator = "Ninja" if sys.platform == "win32" else "Unix Makefiles"
|
||||
|
||||
#: Targets to be used during the build phase
|
||||
build_targets: List[str] = []
|
||||
#: Targets to be used during the install phase
|
||||
@@ -223,20 +202,12 @@ def root_cmakelists_dir(self):
|
||||
"""
|
||||
return self.pkg.stage.source_path
|
||||
|
||||
    @property
    def generator(self):
        if self.spec.satisfies("generator=make"):
            return "Unix Makefiles"
        if self.spec.satisfies("generator=ninja"):
            return "Ninja"
        msg = f'{self.spec.format()} has an unsupported value for the "generator" variant'
        raise ValueError(msg)

    @property
    def std_cmake_args(self):
        """Standard cmake arguments provided as a property for
        convenience of package writers
        """
        # standard CMake arguments
        std_cmake_args = CMakeBuilder.std_args(self.pkg, generator=self.generator)
        std_cmake_args += getattr(self.pkg, "cmake_flag_args", [])
        return std_cmake_args
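The generator property above resolves the generator= variant to a CMake generator name, and downstream that name ends up as CMake's -G option. A rough illustration of the final command shape; the paths and the extra option are placeholders, not Spack's exact argument list:

def cmake_command(source_dir, generator, install_prefix, extra_args=()):
    """Assemble an illustrative cmake invocation for a given generator."""
    return [
        "cmake",
        "-G", generator,  # e.g. "Unix Makefiles" or "Ninja"
        f"-DCMAKE_INSTALL_PREFIX:PATH={install_prefix}",
        *extra_args,
        source_dir,
    ]


print(cmake_command("/path/to/src", "Ninja", "/path/to/prefix"))
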
|
||||
|
||||
@@ -244,8 +244,7 @@ def __new__(mcs, name, bases, attr_dict):
|
||||
callbacks_from_base = getattr(base, temporary_stage.attribute_name, None)
|
||||
if callbacks_from_base:
|
||||
break
|
||||
else:
|
||||
callbacks_from_base = []
|
||||
callbacks_from_base = callbacks_from_base or []
|
||||
|
||||
# Set the callbacks in this class and flush the temporary stage
|
||||
attr_dict[temporary_stage.attribute_name] = staged_callbacks[:] + callbacks_from_base
|
||||
|
||||
@@ -5,7 +5,6 @@
|
||||
|
||||
"""Caches used by Spack to store data"""
|
||||
import os
|
||||
from typing import Union
|
||||
|
||||
import llnl.util.lang
|
||||
from llnl.util.filesystem import mkdirp
|
||||
@@ -35,9 +34,7 @@ def _misc_cache():
|
||||
|
||||
|
||||
#: Spack's cache for small data
|
||||
misc_cache: Union[
|
||||
spack.util.file_cache.FileCache, llnl.util.lang.Singleton
|
||||
] = llnl.util.lang.Singleton(_misc_cache)
|
||||
misc_cache = llnl.util.lang.Singleton(_misc_cache)
|
||||
|
||||
|
||||
def fetch_cache_location():
|
||||
@@ -91,6 +88,4 @@ def symlink(self, mirror_ref):
|
||||
|
||||
|
||||
#: Spack's local cache for downloaded source archives
|
||||
fetch_cache: Union[
|
||||
spack.fetch_strategy.FsCache, llnl.util.lang.Singleton
|
||||
] = llnl.util.lang.Singleton(_fetch_cache)
|
||||
fetch_cache = llnl.util.lang.Singleton(_fetch_cache)
|
||||
|
||||
@@ -38,7 +38,6 @@
|
||||
import spack.util.spack_yaml as syaml
|
||||
import spack.util.url as url_util
|
||||
import spack.util.web as web_util
|
||||
from spack import traverse
|
||||
from spack.error import SpackError
|
||||
from spack.reporters import CDash, CDashConfiguration
|
||||
from spack.reporters.cdash import build_stamp as cdash_build_stamp
|
||||
@@ -365,6 +364,59 @@ def _spec_matches(spec, match_string):
|
||||
return spec.intersects(match_string)
|
||||
|
||||
|
||||
def _remove_attributes(src_dict, dest_dict):
|
||||
if "tags" in src_dict and "tags" in dest_dict:
|
||||
# For 'tags', we remove any tags that are listed for removal
|
||||
for tag in src_dict["tags"]:
|
||||
while tag in dest_dict["tags"]:
|
||||
dest_dict["tags"].remove(tag)
|
||||
|
||||
|
||||
def _copy_attributes(attrs_list, src_dict, dest_dict):
|
||||
for runner_attr in attrs_list:
|
||||
if runner_attr in src_dict:
|
||||
if runner_attr in dest_dict and runner_attr == "tags":
|
||||
# For 'tags', we combine the lists of tags, while
|
||||
# avoiding duplicates
|
||||
for tag in src_dict[runner_attr]:
|
||||
if tag not in dest_dict[runner_attr]:
|
||||
dest_dict[runner_attr].append(tag)
|
||||
elif runner_attr in dest_dict and runner_attr == "variables":
|
||||
# For 'variables', we merge the dictionaries. Any conflicts
|
||||
# (i.e. 'runner-attributes' has same variable key as the
|
||||
# higher level) we resolve by keeping the more specific
|
||||
# 'runner-attributes' version.
|
||||
for src_key, src_val in src_dict[runner_attr].items():
|
||||
dest_dict[runner_attr][src_key] = copy.deepcopy(src_dict[runner_attr][src_key])
|
||||
else:
|
||||
dest_dict[runner_attr] = copy.deepcopy(src_dict[runner_attr])
|
||||
|
||||
|
||||
def _find_matching_config(spec, gitlab_ci):
|
||||
runner_attributes = {}
|
||||
overridable_attrs = ["image", "tags", "variables", "before_script", "script", "after_script"]
|
||||
|
||||
_copy_attributes(overridable_attrs, gitlab_ci, runner_attributes)
|
||||
|
||||
matched = False
|
||||
only_first = gitlab_ci.get("match_behavior", "first") == "first"
|
||||
for ci_mapping in gitlab_ci["mappings"]:
|
||||
for match_string in ci_mapping["match"]:
|
||||
if _spec_matches(spec, match_string):
|
||||
matched = True
|
||||
if "remove-attributes" in ci_mapping:
|
||||
_remove_attributes(ci_mapping["remove-attributes"], runner_attributes)
|
||||
if "runner-attributes" in ci_mapping:
|
||||
_copy_attributes(
|
||||
overridable_attrs, ci_mapping["runner-attributes"], runner_attributes
|
||||
)
|
||||
break
|
||||
if matched and only_first:
|
||||
break
|
||||
|
||||
return runner_attributes if matched else None
|
||||
|
||||
|
||||
def _format_job_needs(
|
||||
phase_name,
|
||||
strip_compilers,
|
||||
@@ -472,237 +524,18 @@ def get_spec_filter_list(env, affected_pkgs, dependent_traverse_depth=None):
|
||||
tty.debug("All concrete environment specs:")
|
||||
for s in all_concrete_specs:
|
||||
tty.debug(" {0}/{1}".format(s.name, s.dag_hash()[:7]))
|
||||
affected_pkgs = frozenset(affected_pkgs)
|
||||
env_matches = [s for s in all_concrete_specs if s.name in affected_pkgs]
|
||||
env_matches = [s for s in all_concrete_specs if s.name in frozenset(affected_pkgs)]
|
||||
visited = set()
|
||||
dag_hash = lambda s: s.dag_hash()
|
||||
for depth, parent in traverse.traverse_nodes(
|
||||
env_matches, direction="parents", key=dag_hash, depth=True, order="breadth"
|
||||
):
|
||||
if dependent_traverse_depth is not None and depth > dependent_traverse_depth:
|
||||
break
|
||||
affected_specs.update(parent.traverse(direction="children", visited=visited, key=dag_hash))
|
||||
for match in env_matches:
|
||||
for dep_level, parent in match.traverse(direction="parents", key=dag_hash, depth=True):
|
||||
if dependent_traverse_depth is None or dep_level <= dependent_traverse_depth:
|
||||
affected_specs.update(
|
||||
parent.traverse(direction="children", visited=visited, key=dag_hash)
|
||||
)
|
||||
return affected_specs
|
||||
|
||||
|
||||
def _build_jobs(phases, staged_phases):
|
||||
for phase in phases:
|
||||
phase_name = phase["name"]
|
||||
spec_labels, dependencies, stages = staged_phases[phase_name]
|
||||
|
||||
for stage_jobs in stages:
|
||||
for spec_label in stage_jobs:
|
||||
spec_record = spec_labels[spec_label]
|
||||
release_spec = spec_record["spec"]
|
||||
release_spec_dag_hash = release_spec.dag_hash()
|
||||
yield release_spec, release_spec_dag_hash
|
||||
|
||||
|
||||
def _noop(x):
|
||||
return x
|
||||
|
||||
|
||||
def _unpack_script(script_section, op=_noop):
|
||||
script = []
|
||||
for cmd in script_section:
|
||||
if isinstance(cmd, list):
|
||||
for subcmd in cmd:
|
||||
script.append(op(subcmd))
|
||||
else:
|
||||
script.append(op(cmd))
|
||||
|
||||
return script
|
||||
|
||||
|
||||
class SpackCI:
|
||||
"""Spack CI object used to generate intermediate representation
|
||||
used by the CI generator(s).
|
||||
"""
|
||||
|
||||
def __init__(self, ci_config, phases, staged_phases):
|
||||
"""Given the information from the ci section of the config
|
||||
and the job phases setup meta data needed for generating Spack
|
||||
CI IR.
|
||||
"""
|
||||
|
||||
self.ci_config = ci_config
|
||||
self.named_jobs = ["any", "build", "cleanup", "noop", "reindex", "signing"]
|
||||
|
||||
self.ir = {
|
||||
"jobs": {},
|
||||
"temporary-storage-url-prefix": self.ci_config.get(
|
||||
"temporary-storage-url-prefix", None
|
||||
),
|
||||
"enable-artifacts-buildcache": self.ci_config.get(
|
||||
"enable-artifacts-buildcache", False
|
||||
),
|
||||
"bootstrap": self.ci_config.get(
|
||||
"bootstrap", []
|
||||
), # This is deprecated and should be removed
|
||||
"rebuild-index": self.ci_config.get("rebuild-index", True),
|
||||
"broken-specs-url": self.ci_config.get("broken-specs-url", None),
|
||||
"broken-tests-packages": self.ci_config.get("broken-tests-packages", []),
|
||||
"target": self.ci_config.get("target", "gitlab"),
|
||||
}
|
||||
jobs = self.ir["jobs"]
|
||||
|
||||
for spec, dag_hash in _build_jobs(phases, staged_phases):
|
||||
jobs[dag_hash] = self.__init_job(spec)
|
||||
|
||||
for name in self.named_jobs:
|
||||
# Skip the special named jobs
|
||||
if name not in ["any", "build"]:
|
||||
jobs[name] = self.__init_job("")
|
||||
|
||||
def __init_job(self, spec):
|
||||
"""Initialize job object"""
|
||||
return {"spec": spec, "attributes": {}}
|
||||
|
||||
def __is_named(self, section):
|
||||
"""Check if a pipeline-gen configuration section is for a named job,
|
||||
and if so return the name otherwise return none.
|
||||
"""
|
||||
for _name in self.named_jobs:
|
||||
keys = ["{0}-job".format(_name), "{0}-job-remove".format(_name)]
|
||||
if any([key for key in keys if key in section]):
|
||||
return _name
|
||||
|
||||
return None
|
||||
|
||||
@staticmethod
|
||||
def __job_name(name, suffix=""):
|
||||
"""Compute the name of a named job with appropriate suffix.
|
||||
Valid suffixes are either '-remove' or empty string or None
|
||||
"""
|
||||
assert type(name) == str
|
||||
|
||||
jname = name
|
||||
if suffix:
|
||||
jname = "{0}-job{1}".format(name, suffix)
|
||||
else:
|
||||
jname = "{0}-job".format(name)
|
||||
|
||||
return jname
|
||||
|
||||
def __apply_submapping(self, dest, spec, section):
|
||||
"""Apply submapping setion to the IR dict"""
|
||||
matched = False
|
||||
only_first = section.get("match_behavior", "first") == "first"
|
||||
|
||||
for match_attrs in reversed(section["submapping"]):
|
||||
attrs = cfg.InternalConfigScope._process_dict_keyname_overrides(match_attrs)
|
||||
for match_string in match_attrs["match"]:
|
||||
if _spec_matches(spec, match_string):
|
||||
matched = True
|
||||
if "build-job-remove" in match_attrs:
|
||||
spack.config.remove_yaml(dest, attrs["build-job-remove"])
|
||||
if "build-job" in match_attrs:
|
||||
spack.config.merge_yaml(dest, attrs["build-job"])
|
||||
break
|
||||
if matched and only_first:
|
||||
break
|
||||
|
||||
return dest
|
||||
|
||||
# Generate IR from the configs
|
||||
def generate_ir(self):
|
||||
"""Generate the IR from the Spack CI configurations."""
|
||||
|
||||
jobs = self.ir["jobs"]
|
||||
|
||||
# Implicit job defaults
|
||||
defaults = [
|
||||
{
|
||||
"build-job": {
|
||||
"script": [
|
||||
"cd {env_dir}",
|
||||
"spack env activate --without-view .",
|
||||
"spack ci rebuild",
|
||||
]
|
||||
}
|
||||
},
|
||||
{"noop-job": {"script": ['echo "All specs already up to date, nothing to rebuild."']}},
|
||||
]
|
||||
|
||||
# Job overrides
|
||||
overrides = [
|
||||
# Reindex script
|
||||
{
|
||||
"reindex-job": {
|
||||
"script:": [
|
||||
"spack buildcache update-index --keys --mirror-url {index_target_mirror}"
|
||||
]
|
||||
}
|
||||
},
|
||||
# Cleanup script
|
||||
{
|
||||
"cleanup-job": {
|
||||
"script:": [
|
||||
"spack -d mirror destroy --mirror-url {mirror_prefix}/$CI_PIPELINE_ID"
|
||||
]
|
||||
}
|
||||
},
|
||||
# Add signing job tags
|
||||
{"signing-job": {"tags": ["aws", "protected", "notary"]}},
|
||||
# Remove reserved tags
|
||||
{"any-job-remove": {"tags": SPACK_RESERVED_TAGS}},
|
||||
]
|
||||
|
||||
pipeline_gen = overrides + self.ci_config.get("pipeline-gen", []) + defaults
|
||||
|
||||
for section in reversed(pipeline_gen):
|
||||
name = self.__is_named(section)
|
||||
has_submapping = "submapping" in section
|
||||
section = cfg.InternalConfigScope._process_dict_keyname_overrides(section)
|
||||
|
||||
if name:
|
||||
remove_job_name = self.__job_name(name, suffix="-remove")
|
||||
merge_job_name = self.__job_name(name)
|
||||
do_remove = remove_job_name in section
|
||||
do_merge = merge_job_name in section
|
||||
|
||||
def _apply_section(dest, src):
|
||||
if do_remove:
|
||||
dest = spack.config.remove_yaml(dest, src[remove_job_name])
|
||||
if do_merge:
|
||||
dest = copy.copy(spack.config.merge_yaml(dest, src[merge_job_name]))
|
||||
|
||||
if name == "build":
|
||||
# Apply attributes to all build jobs
|
||||
for _, job in jobs.items():
|
||||
if job["spec"]:
|
||||
_apply_section(job["attributes"], section)
|
||||
elif name == "any":
|
||||
# Apply section attributes too all jobs
|
||||
for _, job in jobs.items():
|
||||
_apply_section(job["attributes"], section)
|
||||
else:
|
||||
# Create a signing job if there is script and the job hasn't
|
||||
# been initialized yet
|
||||
if name == "signing" and name not in jobs:
|
||||
if "signing-job" in section:
|
||||
if "script" not in section["signing-job"]:
|
||||
continue
|
||||
else:
|
||||
jobs[name] = self.__init_job("")
|
||||
# Apply attributes to named job
|
||||
_apply_section(jobs[name]["attributes"], section)
|
||||
|
||||
elif has_submapping:
|
||||
# Apply section jobs with specs to match
|
||||
for _, job in jobs.items():
|
||||
if job["spec"]:
|
||||
job["attributes"] = self.__apply_submapping(
|
||||
job["attributes"], job["spec"], section
|
||||
)
|
||||
|
||||
for _, job in jobs.items():
|
||||
if job["spec"]:
|
||||
job["spec"] = job["spec"].name
|
||||
|
||||
return self.ir
|
||||
|
||||
|
||||
def generate_gitlab_ci_yaml(
|
||||
env,
|
||||
print_summary,
|
||||
@@ -750,30 +583,14 @@ def generate_gitlab_ci_yaml(
|
||||
env.concretize()
|
||||
env.write()
|
||||
|
||||
yaml_root = ev.config_dict(env.manifest)
|
||||
yaml_root = ev.config_dict(env.yaml)
|
||||
|
||||
# Get the joined "ci" config with all of the current scopes resolved
|
||||
ci_config = cfg.get("ci")
|
||||
if "gitlab-ci" not in yaml_root:
|
||||
tty.die('Environment yaml does not have "gitlab-ci" section')
|
||||
|
||||
if not ci_config:
|
||||
tty.warn("Environment does not have `ci` a configuration")
|
||||
gitlabci_config = yaml_root.get("gitlab-ci")
|
||||
if not gitlabci_config:
|
||||
tty.die("Environment yaml does not have `gitlab-ci` config section. Cannot recover.")
|
||||
gitlab_ci = yaml_root["gitlab-ci"]
|
||||
|
||||
tty.warn(
|
||||
"The `gitlab-ci` configuration is deprecated in favor of `ci`.\n",
|
||||
"To update run \n\t$ spack env update /path/to/ci/spack.yaml",
|
||||
)
|
||||
translate_deprecated_config(gitlabci_config)
|
||||
ci_config = gitlabci_config
|
||||
|
||||
# Default target is gitlab...and only target is gitlab
|
||||
if not ci_config.get("target", "gitlab") == "gitlab":
|
||||
tty.die('Spack CI module only generates target "gitlab"')
|
||||
|
||||
cdash_config = cfg.get("cdash")
|
||||
cdash_handler = CDashHandler(cdash_config) if "build-group" in cdash_config else None
|
||||
cdash_handler = CDashHandler(yaml_root.get("cdash")) if "cdash" in yaml_root else None
|
||||
build_group = cdash_handler.build_group if cdash_handler else None
|
||||
|
||||
dependent_depth = os.environ.get("SPACK_PRUNE_UNTOUCHED_DEPENDENT_DEPTH", None)
|
||||
@@ -782,9 +599,9 @@ def generate_gitlab_ci_yaml(
|
||||
dependent_depth = int(dependent_depth)
|
||||
except (TypeError, ValueError):
|
||||
tty.warn(
|
||||
f"Unrecognized value ({dependent_depth}) "
|
||||
"provided for SPACK_PRUNE_UNTOUCHED_DEPENDENT_DEPTH, "
|
||||
"ignoring it."
|
||||
"Unrecognized value ({0}) ".format(dependent_depth),
|
||||
"provide forSPACK_PRUNE_UNTOUCHED_DEPENDENT_DEPTH, ",
|
||||
"ignoring it.",
|
||||
)
|
||||
dependent_depth = None
|
||||
|
||||
@@ -847,25 +664,25 @@ def generate_gitlab_ci_yaml(
|
||||
# trying to build.
|
||||
broken_specs_url = ""
|
||||
known_broken_specs_encountered = []
|
||||
if "broken-specs-url" in ci_config:
|
||||
broken_specs_url = ci_config["broken-specs-url"]
|
||||
if "broken-specs-url" in gitlab_ci:
|
||||
broken_specs_url = gitlab_ci["broken-specs-url"]
|
||||
|
||||
enable_artifacts_buildcache = False
|
||||
if "enable-artifacts-buildcache" in ci_config:
|
||||
enable_artifacts_buildcache = ci_config["enable-artifacts-buildcache"]
|
||||
if "enable-artifacts-buildcache" in gitlab_ci:
|
||||
enable_artifacts_buildcache = gitlab_ci["enable-artifacts-buildcache"]
|
||||
|
||||
rebuild_index_enabled = True
|
||||
if "rebuild-index" in ci_config and ci_config["rebuild-index"] is False:
|
||||
if "rebuild-index" in gitlab_ci and gitlab_ci["rebuild-index"] is False:
|
||||
rebuild_index_enabled = False
|
||||
|
||||
temp_storage_url_prefix = None
|
||||
if "temporary-storage-url-prefix" in ci_config:
|
||||
temp_storage_url_prefix = ci_config["temporary-storage-url-prefix"]
|
||||
if "temporary-storage-url-prefix" in gitlab_ci:
|
||||
temp_storage_url_prefix = gitlab_ci["temporary-storage-url-prefix"]
|
||||
|
||||
bootstrap_specs = []
|
||||
phases = []
|
||||
if "bootstrap" in ci_config:
|
||||
for phase in ci_config["bootstrap"]:
|
||||
if "bootstrap" in gitlab_ci:
|
||||
for phase in gitlab_ci["bootstrap"]:
|
||||
try:
|
||||
phase_name = phase.get("name")
|
||||
strip_compilers = phase.get("compiler-agnostic")
|
||||
@@ -930,31 +747,6 @@ def generate_gitlab_ci_yaml(
|
||||
shutil.copyfile(env.manifest_path, os.path.join(concrete_env_dir, "spack.yaml"))
|
||||
shutil.copyfile(env.lock_path, os.path.join(concrete_env_dir, "spack.lock"))
|
||||
|
||||
with open(env.manifest_path, "r") as env_fd:
|
||||
env_yaml_root = syaml.load(env_fd)
|
||||
# Add config scopes to environment
|
||||
env_includes = env_yaml_root["spack"].get("include", [])
|
||||
cli_scopes = [
|
||||
os.path.abspath(s.path)
|
||||
for s in cfg.scopes().values()
|
||||
if type(s) == cfg.ImmutableConfigScope
|
||||
and s.path not in env_includes
|
||||
and os.path.exists(s.path)
|
||||
]
|
||||
include_scopes = []
|
||||
for scope in cli_scopes:
|
||||
if scope not in include_scopes and scope not in env_includes:
|
||||
include_scopes.insert(0, scope)
|
||||
env_includes.extend(include_scopes)
|
||||
env_yaml_root["spack"]["include"] = env_includes
|
||||
|
||||
if "gitlab-ci" in env_yaml_root["spack"] and "ci" not in env_yaml_root["spack"]:
|
||||
env_yaml_root["spack"]["ci"] = env_yaml_root["spack"].pop("gitlab-ci")
|
||||
translate_deprecated_config(env_yaml_root["spack"]["ci"])
|
||||
|
||||
with open(os.path.join(concrete_env_dir, "spack.yaml"), "w") as fd:
|
||||
fd.write(syaml.dump_config(env_yaml_root, default_flow_style=False))
|
||||
|
||||
job_log_dir = os.path.join(pipeline_artifacts_dir, "logs")
|
||||
job_repro_dir = os.path.join(pipeline_artifacts_dir, "reproduction")
|
||||
job_test_dir = os.path.join(pipeline_artifacts_dir, "tests")
|
||||
@@ -966,7 +758,7 @@ def generate_gitlab_ci_yaml(
|
||||
# generation job and the rebuild jobs. This can happen when gitlab
|
||||
# checks out the project into a runner-specific directory, for example,
|
||||
# and different runners are picked for generate and rebuild jobs.
|
||||
ci_project_dir = os.environ.get("CI_PROJECT_DIR", os.getcwd())
|
||||
ci_project_dir = os.environ.get("CI_PROJECT_DIR")
|
||||
rel_artifacts_root = os.path.relpath(pipeline_artifacts_dir, ci_project_dir)
|
||||
rel_concrete_env_dir = os.path.relpath(concrete_env_dir, ci_project_dir)
|
||||
rel_job_log_dir = os.path.relpath(job_log_dir, ci_project_dir)
|
||||
@@ -980,7 +772,7 @@ def generate_gitlab_ci_yaml(
|
||||
try:
|
||||
bindist.binary_index.update()
|
||||
except bindist.FetchCacheError as e:
|
||||
tty.warn(e)
|
||||
tty.error(e)
|
||||
|
||||
staged_phases = {}
|
||||
try:
|
||||
@@ -1037,9 +829,7 @@ def generate_gitlab_ci_yaml(
|
||||
else:
|
||||
broken_spec_urls = web_util.list_url(broken_specs_url)
|
||||
|
||||
spack_ci = SpackCI(ci_config, phases, staged_phases)
|
||||
spack_ci_ir = spack_ci.generate_ir()
|
||||
|
||||
before_script, after_script = None, None
|
||||
for phase in phases:
|
||||
phase_name = phase["name"]
|
||||
strip_compilers = phase["strip-compilers"]
|
||||
@@ -1066,35 +856,54 @@ def generate_gitlab_ci_yaml(
|
||||
spec_record["needs_rebuild"] = False
|
||||
continue
|
||||
|
||||
job_object = spack_ci_ir["jobs"][release_spec_dag_hash]["attributes"]
|
||||
runner_attribs = _find_matching_config(release_spec, gitlab_ci)
|
||||
|
||||
if not job_object:
|
||||
if not runner_attribs:
|
||||
tty.warn("No match found for {0}, skipping it".format(release_spec))
|
||||
continue
|
||||
|
||||
tags = [tag for tag in runner_attribs["tags"]]
|
||||
|
||||
if spack_pipeline_type is not None:
|
||||
# For spack pipelines "public" and "protected" are reserved tags
|
||||
job_object["tags"] = _remove_reserved_tags(job_object.get("tags", []))
|
||||
tags = _remove_reserved_tags(tags)
|
||||
if spack_pipeline_type == "spack_protected_branch":
|
||||
job_object["tags"].extend(["protected"])
|
||||
tags.extend(["protected"])
|
||||
elif spack_pipeline_type == "spack_pull_request":
|
||||
job_object["tags"].extend(["public"])
|
||||
tags.extend(["public"])
|
||||
|
||||
if "script" not in job_object:
|
||||
raise AttributeError
|
||||
variables = {}
|
||||
if "variables" in runner_attribs:
|
||||
variables.update(runner_attribs["variables"])
|
||||
|
||||
def main_script_replacements(cmd):
|
||||
return cmd.replace("{env_dir}", concrete_env_dir)
|
||||
image_name = None
|
||||
image_entry = None
|
||||
if "image" in runner_attribs:
|
||||
build_image = runner_attribs["image"]
|
||||
try:
|
||||
image_name = build_image.get("name")
|
||||
entrypoint = build_image.get("entrypoint")
|
||||
image_entry = [p for p in entrypoint]
|
||||
except AttributeError:
|
||||
image_name = build_image
|
||||
|
||||
job_object["script"] = _unpack_script(
|
||||
job_object["script"], op=main_script_replacements
|
||||
)
|
||||
job_script = ["spack env activate --without-view ."]
|
||||
|
||||
if "before_script" in job_object:
|
||||
job_object["before_script"] = _unpack_script(job_object["before_script"])
|
||||
if artifacts_root:
|
||||
job_script.insert(0, "cd {0}".format(concrete_env_dir))
|
||||
|
||||
if "after_script" in job_object:
|
||||
job_object["after_script"] = _unpack_script(job_object["after_script"])
|
||||
job_script.extend(["spack ci rebuild"])
|
||||
|
||||
if "script" in runner_attribs:
|
||||
job_script = [s for s in runner_attribs["script"]]
|
||||
|
||||
before_script = None
|
||||
if "before_script" in runner_attribs:
|
||||
before_script = [s for s in runner_attribs["before_script"]]
|
||||
|
||||
after_script = None
|
||||
if "after_script" in runner_attribs:
|
||||
after_script = [s for s in runner_attribs["after_script"]]
|
||||
|
||||
osname = str(release_spec.architecture)
|
||||
job_name = get_job_name(
|
||||
@@ -1107,12 +916,13 @@ def main_script_replacements(cmd):
|
||||
if _is_main_phase(phase_name):
|
||||
compiler_action = "INSTALL_MISSING"
|
||||
|
||||
job_vars = job_object.setdefault("variables", {})
|
||||
job_vars["SPACK_JOB_SPEC_DAG_HASH"] = release_spec_dag_hash
|
||||
job_vars["SPACK_JOB_SPEC_PKG_NAME"] = release_spec.name
|
||||
job_vars["SPACK_COMPILER_ACTION"] = compiler_action
|
||||
job_vars = {
|
||||
"SPACK_JOB_SPEC_DAG_HASH": release_spec_dag_hash,
|
||||
"SPACK_JOB_SPEC_PKG_NAME": release_spec.name,
|
||||
"SPACK_COMPILER_ACTION": compiler_action,
|
||||
}
|
||||
|
||||
job_object["needs"] = []
|
||||
job_dependencies = []
|
||||
if spec_label in dependencies:
|
||||
if enable_artifacts_buildcache:
|
||||
# Get dependencies transitively, so they're all
|
||||
@@ -1125,7 +935,7 @@ def main_script_replacements(cmd):
|
||||
for dep_label in dependencies[spec_label]:
|
||||
dep_jobs.append(spec_labels[dep_label]["spec"])
|
||||
|
||||
job_object["needs"].extend(
|
||||
job_dependencies.extend(
|
||||
_format_job_needs(
|
||||
phase_name,
|
||||
strip_compilers,
|
||||
@@ -1182,7 +992,7 @@ def main_script_replacements(cmd):
|
||||
if enable_artifacts_buildcache:
|
||||
dep_jobs = [d for d in c_spec.traverse(deptype=all)]
|
||||
|
||||
job_object["needs"].extend(
|
||||
job_dependencies.extend(
|
||||
_format_job_needs(
|
||||
bs["phase-name"],
|
||||
bs["strip-compilers"],
|
||||
@@ -1248,7 +1058,7 @@ def main_script_replacements(cmd):
|
||||
]
|
||||
|
||||
if artifacts_root:
|
||||
job_object["needs"].append(
|
||||
job_dependencies.append(
|
||||
{"job": generate_job_name, "pipeline": "{0}".format(parent_pipeline_id)}
|
||||
)
|
||||
|
||||
@@ -1263,22 +1073,18 @@ def main_script_replacements(cmd):
|
||||
build_stamp = cdash_handler.build_stamp
|
||||
job_vars["SPACK_CDASH_BUILD_STAMP"] = build_stamp
|
||||
|
||||
job_object["artifacts"] = spack.config.merge_yaml(
|
||||
job_object.get("artifacts", {}),
|
||||
{
|
||||
"when": "always",
|
||||
"paths": [
|
||||
rel_job_log_dir,
|
||||
rel_job_repro_dir,
|
||||
rel_job_test_dir,
|
||||
rel_user_artifacts_dir,
|
||||
],
|
||||
},
|
||||
)
|
||||
variables.update(job_vars)
|
||||
|
||||
artifact_paths = [
|
||||
rel_job_log_dir,
|
||||
rel_job_repro_dir,
|
||||
rel_job_test_dir,
|
||||
rel_user_artifacts_dir,
|
||||
]
|
||||
|
||||
if enable_artifacts_buildcache:
|
||||
bc_root = os.path.join(local_mirror_dir, "build_cache")
|
||||
job_object["artifacts"]["paths"].extend(
|
||||
artifact_paths.extend(
|
||||
[
|
||||
os.path.join(bc_root, p)
|
||||
for p in [
|
||||
@@ -1288,15 +1094,33 @@ def main_script_replacements(cmd):
|
||||
]
|
||||
)
|
||||
|
||||
job_object["stage"] = stage_name
|
||||
job_object["retry"] = {"max": 2, "when": JOB_RETRY_CONDITIONS}
|
||||
job_object["interruptible"] = True
|
||||
job_object = {
|
||||
"stage": stage_name,
|
||||
"variables": variables,
|
||||
"script": job_script,
|
||||
"tags": tags,
|
||||
"artifacts": {"paths": artifact_paths, "when": "always"},
|
||||
"needs": sorted(job_dependencies, key=lambda d: d["job"]),
|
||||
"retry": {"max": 2, "when": JOB_RETRY_CONDITIONS},
|
||||
"interruptible": True,
|
||||
}
|
||||
|
||||
length_needs = len(job_object["needs"])
|
||||
length_needs = len(job_dependencies)
|
||||
if length_needs > max_length_needs:
|
||||
max_length_needs = length_needs
|
||||
max_needs_job = job_name
|
||||
|
||||
if before_script:
|
||||
job_object["before_script"] = before_script
|
||||
|
||||
if after_script:
|
||||
job_object["after_script"] = after_script
|
||||
|
||||
if image_name:
|
||||
job_object["image"] = image_name
|
||||
if image_entry is not None:
|
||||
job_object["image"] = {"name": image_name, "entrypoint": image_entry}
|
||||
|
||||
output_object[job_name] = job_object
|
||||
job_id += 1
|
||||
|
||||
@@ -1323,6 +1147,19 @@ def main_script_replacements(cmd):
|
||||
else:
|
||||
tty.warn("Unable to populate buildgroup without CDash credentials")
|
||||
|
||||
service_job_config = None
|
||||
if "service-job-attributes" in gitlab_ci:
|
||||
service_job_config = gitlab_ci["service-job-attributes"]
|
||||
|
||||
default_attrs = [
|
||||
"image",
|
||||
"tags",
|
||||
"variables",
|
||||
"before_script",
|
||||
# 'script',
|
||||
"after_script",
|
||||
]
|
||||
|
||||
service_job_retries = {
|
||||
"max": 2,
|
||||
"when": ["runner_system_failure", "stuck_or_timeout_failure", "script_failure"],
|
||||
@@ -1334,29 +1171,55 @@ def main_script_replacements(cmd):
|
||||
# schedule a job to clean up the temporary storage location
|
||||
# associated with this pipeline.
|
||||
stage_names.append("cleanup-temp-storage")
|
||||
cleanup_job = copy.deepcopy(spack_ci_ir["jobs"]["cleanup"]["attributes"])
|
||||
cleanup_job = {}
|
||||
|
||||
if service_job_config:
|
||||
_copy_attributes(default_attrs, service_job_config, cleanup_job)
|
||||
|
||||
if "tags" in cleanup_job:
|
||||
service_tags = _remove_reserved_tags(cleanup_job["tags"])
|
||||
cleanup_job["tags"] = service_tags
|
||||
|
||||
cleanup_job["stage"] = "cleanup-temp-storage"
|
||||
cleanup_job["script"] = [
|
||||
"spack -d mirror destroy --mirror-url {0}/$CI_PIPELINE_ID".format(
|
||||
temp_storage_url_prefix
|
||||
)
|
||||
]
|
||||
cleanup_job["when"] = "always"
|
||||
cleanup_job["retry"] = service_job_retries
|
||||
cleanup_job["interruptible"] = True
|
||||
|
||||
cleanup_job["script"] = _unpack_script(
|
||||
cleanup_job["script"],
|
||||
op=lambda cmd: cmd.replace("mirror_prefix", temp_storage_url_prefix),
|
||||
)
|
||||
|
||||
output_object["cleanup"] = cleanup_job
|
||||
|
||||
if (
|
||||
"script" in spack_ci_ir["jobs"]["signing"]["attributes"]
|
||||
"signing-job-attributes" in gitlab_ci
|
||||
and spack_pipeline_type == "spack_protected_branch"
|
||||
):
|
||||
# External signing: generate a job to check and sign binary pkgs
|
||||
stage_names.append("stage-sign-pkgs")
|
||||
signing_job = spack_ci_ir["jobs"]["signing"]["attributes"]
|
||||
signing_job_config = gitlab_ci["signing-job-attributes"]
|
||||
signing_job = {}
|
||||
|
||||
signing_job["script"] = _unpack_script(signing_job["script"])
|
||||
signing_job_attrs_to_copy = [
|
||||
"image",
|
||||
"tags",
|
||||
"variables",
|
||||
"before_script",
|
||||
"script",
|
||||
"after_script",
|
||||
]
|
||||
|
||||
_copy_attributes(signing_job_attrs_to_copy, signing_job_config, signing_job)
|
||||
|
||||
signing_job_tags = []
|
||||
if "tags" in signing_job:
|
||||
signing_job_tags = _remove_reserved_tags(signing_job["tags"])
|
||||
|
||||
for tag in ["aws", "protected", "notary"]:
|
||||
if tag not in signing_job_tags:
|
||||
signing_job_tags.append(tag)
|
||||
signing_job["tags"] = signing_job_tags
|
||||
|
||||
signing_job["stage"] = "stage-sign-pkgs"
|
||||
signing_job["when"] = "always"
|
||||
@@ -1368,17 +1231,23 @@ def main_script_replacements(cmd):
|
||||
if rebuild_index_enabled:
|
||||
# Add a final job to regenerate the index
|
||||
stage_names.append("stage-rebuild-index")
|
||||
final_job = spack_ci_ir["jobs"]["reindex"]["attributes"]
|
||||
final_job = {}
|
||||
|
||||
if service_job_config:
|
||||
_copy_attributes(default_attrs, service_job_config, final_job)
|
||||
|
||||
if "tags" in final_job:
|
||||
service_tags = _remove_reserved_tags(final_job["tags"])
|
||||
final_job["tags"] = service_tags
|
||||
|
||||
index_target_mirror = mirror_urls[0]
|
||||
if remote_mirror_override:
|
||||
index_target_mirror = remote_mirror_override
|
||||
final_job["stage"] = "stage-rebuild-index"
|
||||
final_job["script"] = _unpack_script(
|
||||
final_job["script"],
|
||||
op=lambda cmd: cmd.replace("{index_target_mirror}", index_target_mirror),
|
||||
)
|
||||
|
||||
final_job["stage"] = "stage-rebuild-index"
|
||||
final_job["script"] = [
|
||||
"spack buildcache update-index --keys --mirror-url {0}".format(index_target_mirror)
|
||||
]
|
||||
final_job["when"] = "always"
|
||||
final_job["retry"] = service_job_retries
|
||||
final_job["interruptible"] = True
|
||||
@@ -1426,9 +1295,6 @@ def main_script_replacements(cmd):
|
||||
if spack_stack_name:
|
||||
output_object["variables"]["SPACK_CI_STACK_NAME"] = spack_stack_name
|
||||
|
||||
# Ensure the child pipeline always runs
|
||||
output_object["workflow"] = {"rules": [{"when": "always"}]}
|
||||
|
||||
if spack_buildcache_copy:
|
||||
# Write out the file describing specs that should be copied
|
||||
copy_specs_dir = os.path.join(pipeline_artifacts_dir, "specs_to_copy")
|
||||
@@ -1462,7 +1328,13 @@ def main_script_replacements(cmd):
|
||||
else:
|
||||
# No jobs were generated
|
||||
tty.debug("No specs to rebuild, generating no-op job")
|
||||
noop_job = spack_ci_ir["jobs"]["noop"]["attributes"]
|
||||
noop_job = {}
|
||||
|
||||
if service_job_config:
|
||||
_copy_attributes(default_attrs, service_job_config, noop_job)
|
||||
|
||||
if "script" not in noop_job:
|
||||
noop_job["script"] = ['echo "All specs already up to date, nothing to rebuild."']
|
||||
|
||||
noop_job["retry"] = service_job_retries
|
||||
|
||||
@@ -1476,7 +1348,7 @@ def main_script_replacements(cmd):
|
||||
sys.exit(1)
|
||||
|
||||
with open(output_file, "w") as outf:
|
||||
outf.write(syaml.dump(sorted_output, default_flow_style=True))
|
||||
outf.write(syaml.dump_config(sorted_output, default_flow_style=True))
|
||||
|
||||
|
||||
def _url_encode_string(input_string):
|
||||
@@ -1656,22 +1528,19 @@ def copy_files_to_artifacts(src, artifacts_dir):
|
||||
try:
|
||||
fs.copy(src, artifacts_dir)
|
||||
except Exception as err:
|
||||
msg = ("Unable to copy files ({0}) to artifacts {1} due to " "exception: {2}").format(
|
||||
src, artifacts_dir, str(err)
|
||||
)
|
||||
tty.warn(msg)
|
||||
tty.warn(f"Unable to copy files ({src}) to artifacts {artifacts_dir} due to: {err}")
|
||||
|
||||
|
||||
def copy_stage_logs_to_artifacts(job_spec: spack.spec.Spec, job_log_dir: str) -> None:
|
||||
def copy_stage_logs_to_artifacts(job_spec, job_log_dir):
|
||||
"""Copy selected build stage file(s) to the given artifacts directory
|
||||
|
||||
Looks for build logs in the stage directory of the given
|
||||
job_spec, and attempts to copy the files into the directory given
|
||||
Looks for spack-build-out.txt in the stage directory of the given
|
||||
job_spec, and attempts to copy the file into the directory given
|
||||
by job_log_dir.
|
||||
|
||||
Args:
|
||||
job_spec: spec associated with spack install log
|
||||
job_log_dir: path into which build log should be copied
|
||||
Parameters:
|
||||
job_spec (spack.spec.Spec): spec associated with spack install log
|
||||
job_log_dir (str): path into which build log should be copied
|
||||
"""
|
||||
tty.debug("job spec: {0}".format(job_spec))
|
||||
if not job_spec:
|
||||
@@ -1690,8 +1559,8 @@ def copy_stage_logs_to_artifacts(job_spec: spack.spec.Spec, job_log_dir: str) ->
|
||||
|
||||
stage_dir = job_pkg.stage.path
|
||||
tty.debug("stage dir: {0}".format(stage_dir))
|
||||
for file in [job_pkg.log_path, job_pkg.env_mods_path, *job_pkg.builder.archive_files]:
|
||||
copy_files_to_artifacts(file, job_log_dir)
|
||||
build_out_src = os.path.join(stage_dir, "spack-build-out.txt")
|
||||
copy_files_to_artifacts(build_out_src, job_log_dir)
|
||||
|
||||
|
||||
def copy_test_logs_to_artifacts(test_stage, job_test_dir):
|
||||
@@ -1879,7 +1748,6 @@ def reproduce_ci_job(url, work_dir):
|
||||
function is a set of printed instructions for running docker and then
|
||||
commands to run to reproduce the build once inside the container.
|
||||
"""
|
||||
work_dir = os.path.realpath(work_dir)
|
||||
download_and_extract_artifacts(url, work_dir)
|
||||
|
||||
lock_file = fs.find(work_dir, "spack.lock")[0]
|
||||
@@ -2044,9 +1912,7 @@ def reproduce_ci_job(url, work_dir):
|
||||
if job_image:
|
||||
inst_list.append("\nRun the following command:\n\n")
|
||||
inst_list.append(
|
||||
" $ docker run --rm --name spack_reproducer -v {0}:{1}:Z -ti {2}\n".format(
|
||||
work_dir, mount_as_dir, job_image
|
||||
)
|
||||
" $ docker run --rm -v {0}:{1} -ti {2}\n".format(work_dir, mount_as_dir, job_image)
|
||||
)
|
||||
inst_list.append("\nOnce inside the container:\n\n")
|
||||
else:
|
||||
@@ -2097,16 +1963,13 @@ def process_command(name, commands, repro_dir):
|
||||
# Create a string [command 1] && [command 2] && ... && [command n] with commands
|
||||
# quoted using double quotes.
|
||||
args_to_string = lambda args: " ".join('"{}"'.format(arg) for arg in args)
|
||||
full_command = " \n ".join(map(args_to_string, commands))
|
||||
full_command = " && ".join(map(args_to_string, commands))
|
||||
|
||||
# Write the command to a shell script
|
||||
script = "{0}.sh".format(name)
|
||||
with open(script, "w") as fd:
|
||||
fd.write("#!/bin/sh\n\n")
|
||||
fd.write("\n# spack {0} command\n".format(name))
|
||||
fd.write("set -e\n")
|
||||
if os.environ.get("SPACK_VERBOSE_SCRIPT"):
|
||||
fd.write("set -x\n")
|
||||
fd.write(full_command)
|
||||
fd.write("\n")
|
||||
|
||||
@@ -2455,66 +2318,3 @@ def report_skipped(self, spec, directory_name, reason):
|
||||
)
|
||||
reporter = CDash(configuration=configuration)
|
||||
reporter.test_skipped_report(directory_name, spec, reason)
|
||||
|
||||
|
||||
def translate_deprecated_config(config):
|
||||
# Remove all deprecated keys from config
|
||||
mappings = config.pop("mappings", [])
|
||||
match_behavior = config.pop("match_behavior", "first")
|
||||
|
||||
build_job = {}
|
||||
if "image" in config:
|
||||
build_job["image"] = config.pop("image")
|
||||
if "tags" in config:
|
||||
build_job["tags"] = config.pop("tags")
|
||||
if "variables" in config:
|
||||
build_job["variables"] = config.pop("variables")
|
||||
if "before_script" in config:
|
||||
build_job["before_script"] = config.pop("before_script")
|
||||
if "script" in config:
|
||||
build_job["script"] = config.pop("script")
|
||||
if "after_script" in config:
|
||||
build_job["after_script"] = config.pop("after_script")
|
||||
|
||||
signing_job = None
|
||||
if "signing-job-attributes" in config:
|
||||
signing_job = {"signing-job": config.pop("signing-job-attributes")}
|
||||
|
||||
service_job_attributes = None
|
||||
if "service-job-attributes" in config:
|
||||
service_job_attributes = config.pop("service-job-attributes")
|
||||
|
||||
# If this config already has pipeline-gen do not more
|
||||
if "pipeline-gen" in config:
|
||||
return True if mappings or build_job or signing_job or service_job_attributes else False
|
||||
|
||||
config["target"] = "gitlab"
|
||||
|
||||
config["pipeline-gen"] = []
|
||||
pipeline_gen = config["pipeline-gen"]
|
||||
|
||||
# Build Job
|
||||
submapping = []
|
||||
for section in mappings:
|
||||
submapping_section = {"match": section["match"]}
|
||||
if "runner-attributes" in section:
|
||||
submapping_section["build-job"] = section["runner-attributes"]
|
||||
if "remove-attributes" in section:
|
||||
submapping_section["build-job-remove"] = section["remove-attributes"]
|
||||
submapping.append(submapping_section)
|
||||
pipeline_gen.append({"submapping": submapping, "match_behavior": match_behavior})
|
||||
|
||||
if build_job:
|
||||
pipeline_gen.append({"build-job": build_job})
|
||||
|
||||
# Signing Job
|
||||
if signing_job:
|
||||
pipeline_gen.append(signing_job)
|
||||
|
||||
# Service Jobs
|
||||
if service_job_attributes:
|
||||
pipeline_gen.append({"reindex-job": service_job_attributes})
|
||||
pipeline_gen.append({"noop-job": service_job_attributes})
|
||||
pipeline_gen.append({"cleanup-job": service_job_attributes})
|
||||
|
||||
return True
|
||||
|
||||
@@ -33,6 +33,12 @@ def deindent(desc):
|
||||
return desc.replace(" ", "")
|
||||
|
||||
|
||||
def get_env_var(variable_name):
|
||||
if variable_name in os.environ:
|
||||
return os.environ.get(variable_name)
|
||||
return None
|
||||
|
||||
|
||||
def setup_parser(subparser):
|
||||
setup_parser.parser = subparser
|
||||
subparsers = subparser.add_subparsers(help="CI sub-commands")
|
||||
@@ -227,7 +233,7 @@ def ci_reindex(args):
|
||||
Use the active, gitlab-enabled environment to rebuild the buildcache
|
||||
index for the associated mirror."""
|
||||
env = spack.cmd.require_active_env(cmd_name="ci rebuild-index")
|
||||
yaml_root = ev.config_dict(env.manifest)
|
||||
yaml_root = ev.config_dict(env.yaml)
|
||||
|
||||
if "mirrors" not in yaml_root or len(yaml_root["mirrors"].values()) < 1:
|
||||
tty.die("spack ci rebuild-index requires an env containing a mirror")
|
||||
@@ -249,9 +255,10 @@ def ci_rebuild(args):
|
||||
|
||||
# Make sure the environment is "gitlab-enabled", or else there's nothing
|
||||
# to do.
|
||||
ci_config = cfg.get("ci")
|
||||
if not ci_config:
|
||||
tty.die("spack ci rebuild requires an env containing ci cfg")
|
||||
yaml_root = ev.config_dict(env.yaml)
|
||||
gitlab_ci = yaml_root["gitlab-ci"] if "gitlab-ci" in yaml_root else None
|
||||
if not gitlab_ci:
|
||||
tty.die("spack ci rebuild requires an env containing gitlab-ci cfg")
|
||||
|
||||
tty.msg(
|
||||
"SPACK_BUILDCACHE_DESTINATION={0}".format(
|
||||
@@ -262,27 +269,27 @@ def ci_rebuild(args):
|
||||
# Grab the environment variables we need. These either come from the
|
||||
# pipeline generation step ("spack ci generate"), where they were written
|
||||
# out as variables, or else provided by GitLab itself.
|
||||
pipeline_artifacts_dir = os.environ.get("SPACK_ARTIFACTS_ROOT")
|
||||
job_log_dir = os.environ.get("SPACK_JOB_LOG_DIR")
|
||||
job_test_dir = os.environ.get("SPACK_JOB_TEST_DIR")
|
||||
repro_dir = os.environ.get("SPACK_JOB_REPRO_DIR")
|
||||
local_mirror_dir = os.environ.get("SPACK_LOCAL_MIRROR_DIR")
|
||||
concrete_env_dir = os.environ.get("SPACK_CONCRETE_ENV_DIR")
|
||||
ci_pipeline_id = os.environ.get("CI_PIPELINE_ID")
|
||||
ci_job_name = os.environ.get("CI_JOB_NAME")
|
||||
signing_key = os.environ.get("SPACK_SIGNING_KEY")
|
||||
job_spec_pkg_name = os.environ.get("SPACK_JOB_SPEC_PKG_NAME")
|
||||
job_spec_dag_hash = os.environ.get("SPACK_JOB_SPEC_DAG_HASH")
|
||||
compiler_action = os.environ.get("SPACK_COMPILER_ACTION")
|
||||
spack_pipeline_type = os.environ.get("SPACK_PIPELINE_TYPE")
|
||||
remote_mirror_override = os.environ.get("SPACK_REMOTE_MIRROR_OVERRIDE")
|
||||
remote_mirror_url = os.environ.get("SPACK_REMOTE_MIRROR_URL")
|
||||
spack_ci_stack_name = os.environ.get("SPACK_CI_STACK_NAME")
|
||||
shared_pr_mirror_url = os.environ.get("SPACK_CI_SHARED_PR_MIRROR_URL")
|
||||
rebuild_everything = os.environ.get("SPACK_REBUILD_EVERYTHING")
|
||||
pipeline_artifacts_dir = get_env_var("SPACK_ARTIFACTS_ROOT")
|
||||
job_log_dir = get_env_var("SPACK_JOB_LOG_DIR")
|
||||
job_test_dir = get_env_var("SPACK_JOB_TEST_DIR")
|
||||
repro_dir = get_env_var("SPACK_JOB_REPRO_DIR")
|
||||
local_mirror_dir = get_env_var("SPACK_LOCAL_MIRROR_DIR")
|
||||
concrete_env_dir = get_env_var("SPACK_CONCRETE_ENV_DIR")
|
||||
ci_pipeline_id = get_env_var("CI_PIPELINE_ID")
|
||||
ci_job_name = get_env_var("CI_JOB_NAME")
|
||||
signing_key = get_env_var("SPACK_SIGNING_KEY")
|
||||
job_spec_pkg_name = get_env_var("SPACK_JOB_SPEC_PKG_NAME")
|
||||
job_spec_dag_hash = get_env_var("SPACK_JOB_SPEC_DAG_HASH")
|
||||
compiler_action = get_env_var("SPACK_COMPILER_ACTION")
|
||||
spack_pipeline_type = get_env_var("SPACK_PIPELINE_TYPE")
|
||||
remote_mirror_override = get_env_var("SPACK_REMOTE_MIRROR_OVERRIDE")
|
||||
remote_mirror_url = get_env_var("SPACK_REMOTE_MIRROR_URL")
|
||||
spack_ci_stack_name = get_env_var("SPACK_CI_STACK_NAME")
|
||||
shared_pr_mirror_url = get_env_var("SPACK_CI_SHARED_PR_MIRROR_URL")
|
||||
rebuild_everything = get_env_var("SPACK_REBUILD_EVERYTHING")
|
||||
|
||||
# Construct absolute paths relative to current $CI_PROJECT_DIR
|
||||
ci_project_dir = os.environ.get("CI_PROJECT_DIR")
|
||||
ci_project_dir = get_env_var("CI_PROJECT_DIR")
|
||||
pipeline_artifacts_dir = os.path.join(ci_project_dir, pipeline_artifacts_dir)
|
||||
job_log_dir = os.path.join(ci_project_dir, job_log_dir)
|
||||
job_test_dir = os.path.join(ci_project_dir, job_test_dir)
|
||||
@@ -299,10 +306,8 @@ def ci_rebuild(args):
|
||||
# Query the environment manifest to find out whether we're reporting to a
|
||||
# CDash instance, and if so, gather some information from the manifest to
|
||||
# support that task.
|
||||
cdash_config = cfg.get("cdash")
|
||||
cdash_handler = None
|
||||
if "build-group" in cdash_config:
|
||||
cdash_handler = spack_ci.CDashHandler(cdash_config)
|
||||
cdash_handler = spack_ci.CDashHandler(yaml_root.get("cdash")) if "cdash" in yaml_root else None
|
||||
if cdash_handler:
|
||||
tty.debug("cdash url = {0}".format(cdash_handler.url))
|
||||
tty.debug("cdash project = {0}".format(cdash_handler.project))
|
||||
tty.debug("cdash project_enc = {0}".format(cdash_handler.project_enc))
|
||||
@@ -335,13 +340,13 @@ def ci_rebuild(args):
|
||||
pipeline_mirror_url = None
|
||||
|
||||
temp_storage_url_prefix = None
|
||||
if "temporary-storage-url-prefix" in ci_config:
|
||||
temp_storage_url_prefix = ci_config["temporary-storage-url-prefix"]
|
||||
if "temporary-storage-url-prefix" in gitlab_ci:
|
||||
temp_storage_url_prefix = gitlab_ci["temporary-storage-url-prefix"]
|
||||
pipeline_mirror_url = url_util.join(temp_storage_url_prefix, ci_pipeline_id)
|
||||
|
||||
enable_artifacts_mirror = False
|
||||
if "enable-artifacts-buildcache" in ci_config:
|
||||
enable_artifacts_mirror = ci_config["enable-artifacts-buildcache"]
|
||||
if "enable-artifacts-buildcache" in gitlab_ci:
|
||||
enable_artifacts_mirror = gitlab_ci["enable-artifacts-buildcache"]
|
||||
if enable_artifacts_mirror or (
|
||||
spack_is_pr_pipeline and not enable_artifacts_mirror and not temp_storage_url_prefix
|
||||
):
|
||||
@@ -588,8 +593,8 @@ def ci_rebuild(args):
|
||||
# avoid wasting compute cycles attempting to build those hashes.
|
||||
if install_exit_code == INSTALL_FAIL_CODE and spack_is_develop_pipeline:
|
||||
tty.debug("Install failed on develop")
|
||||
if "broken-specs-url" in ci_config:
|
||||
broken_specs_url = ci_config["broken-specs-url"]
|
||||
if "broken-specs-url" in gitlab_ci:
|
||||
broken_specs_url = gitlab_ci["broken-specs-url"]
|
||||
dev_fail_hash = job_spec.dag_hash()
|
||||
broken_spec_path = url_util.join(broken_specs_url, dev_fail_hash)
|
||||
tty.msg("Reporting broken develop build as: {0}".format(broken_spec_path))
|
||||
@@ -597,8 +602,8 @@ def ci_rebuild(args):
|
||||
broken_spec_path,
|
||||
job_spec_pkg_name,
|
||||
spack_ci_stack_name,
|
||||
os.environ.get("CI_JOB_URL"),
|
||||
os.environ.get("CI_PIPELINE_URL"),
|
||||
get_env_var("CI_JOB_URL"),
|
||||
get_env_var("CI_PIPELINE_URL"),
|
||||
job_spec.to_dict(hash=ht.dag_hash),
|
||||
)
|
||||
|
||||
@@ -610,14 +615,17 @@ def ci_rebuild(args):
|
||||
# the package, run them and copy the output. Failures of any kind should
|
||||
# *not* terminate the build process or preclude creating the build cache.
|
||||
broken_tests = (
|
||||
"broken-tests-packages" in ci_config
|
||||
and job_spec.name in ci_config["broken-tests-packages"]
|
||||
"broken-tests-packages" in gitlab_ci
|
||||
and job_spec.name in gitlab_ci["broken-tests-packages"]
|
||||
)
|
||||
reports_dir = fs.join_path(os.getcwd(), "cdash_report")
|
||||
if args.tests and broken_tests:
|
||||
tty.warn("Unable to run stand-alone tests since listed in " "ci's 'broken-tests-packages'")
|
||||
tty.warn(
|
||||
"Unable to run stand-alone tests since listed in "
|
||||
"gitlab-ci's 'broken-tests-packages'"
|
||||
)
|
||||
if cdash_handler:
|
||||
msg = "Package is listed in ci's broken-tests-packages"
|
||||
msg = "Package is listed in gitlab-ci's broken-tests-packages"
|
||||
cdash_handler.report_skipped(job_spec, reports_dir, reason=msg)
|
||||
cdash_handler.copy_test_results(reports_dir, job_test_dir)
|
||||
elif args.tests:
|
||||
@@ -680,8 +688,8 @@ def ci_rebuild(args):
|
||||
|
||||
# If this is a develop pipeline, check if the spec that we just built is
|
||||
# on the broken-specs list. If so, remove it.
|
||||
if spack_is_develop_pipeline and "broken-specs-url" in ci_config:
|
||||
broken_specs_url = ci_config["broken-specs-url"]
|
||||
if spack_is_develop_pipeline and "broken-specs-url" in gitlab_ci:
|
||||
broken_specs_url = gitlab_ci["broken-specs-url"]
|
||||
just_built_hash = job_spec.dag_hash()
|
||||
broken_spec_path = url_util.join(broken_specs_url, just_built_hash)
|
||||
if web_util.url_exists(broken_spec_path):
|
||||
@@ -698,9 +706,9 @@ def ci_rebuild(args):
|
||||
else:
|
||||
tty.debug("spack install exited non-zero, will not create buildcache")
|
||||
|
||||
api_root_url = os.environ.get("CI_API_V4_URL")
|
||||
ci_project_id = os.environ.get("CI_PROJECT_ID")
|
||||
ci_job_id = os.environ.get("CI_JOB_ID")
|
||||
api_root_url = get_env_var("CI_API_V4_URL")
|
||||
ci_project_id = get_env_var("CI_PROJECT_ID")
|
||||
ci_job_id = get_env_var("CI_JOB_ID")
|
||||
|
||||
repro_job_url = "{0}/projects/{1}/jobs/{2}/artifacts".format(
|
||||
api_root_url, ci_project_id, ci_job_id
|
||||
|
||||
@@ -514,15 +514,7 @@ def add_concretizer_args(subparser):
|
||||
dest="concretizer:reuse",
|
||||
const=True,
|
||||
default=None,
|
||||
help="reuse installed packages/buildcaches when possible",
|
||||
)
|
||||
subgroup.add_argument(
|
||||
"--reuse-deps",
|
||||
action=ConfigSetAction,
|
||||
dest="concretizer:reuse",
|
||||
const="dependencies",
|
||||
default=None,
|
||||
help="reuse installed dependencies only",
|
||||
help="reuse installed dependencies/buildcaches when possible",
|
||||
)
|
||||
|
||||
|
||||
|
||||
@@ -7,7 +7,6 @@
|
||||
import collections
|
||||
import os
|
||||
import shutil
|
||||
from typing import List
|
||||
|
||||
import llnl.util.filesystem as fs
|
||||
import llnl.util.tty as tty
|
||||
@@ -245,35 +244,30 @@ def config_remove(args):
|
||||
spack.config.set(path, existing, scope)
|
||||
|
||||
|
||||
def _can_update_config_file(scope: spack.config.ConfigScope, cfg_file):
|
||||
if isinstance(scope, spack.config.SingleFileScope):
|
||||
return fs.can_access(cfg_file)
|
||||
return fs.can_write_to_dir(scope.path) and fs.can_access(cfg_file)
|
||||
def _can_update_config_file(scope_dir, cfg_file):
|
||||
dir_ok = fs.can_write_to_dir(scope_dir)
|
||||
cfg_ok = fs.can_access(cfg_file)
|
||||
return dir_ok and cfg_ok
|
||||
|
||||
|
||||
def config_update(args):
|
||||
# Read the configuration files
|
||||
spack.config.config.get_config(args.section, scope=args.scope)
|
||||
updates: List[spack.config.ConfigScope] = list(
|
||||
filter(
|
||||
lambda s: not isinstance(
|
||||
s, (spack.config.InternalConfigScope, spack.config.ImmutableConfigScope)
|
||||
),
|
||||
spack.config.config.format_updates[args.section],
|
||||
)
|
||||
)
|
||||
updates = spack.config.config.format_updates[args.section]
|
||||
|
||||
cannot_overwrite, skip_system_scope = [], False
|
||||
for scope in updates:
|
||||
cfg_file = spack.config.config.get_config_filename(scope.name, args.section)
|
||||
can_be_updated = _can_update_config_file(scope, cfg_file)
|
||||
scope_dir = scope.path
|
||||
can_be_updated = _can_update_config_file(scope_dir, cfg_file)
|
||||
if not can_be_updated:
|
||||
if scope.name == "system":
|
||||
skip_system_scope = True
|
||||
tty.warn(
|
||||
msg = (
|
||||
'Not enough permissions to write to "system" scope. '
|
||||
f"Skipping update at that location [cfg={cfg_file}]"
|
||||
"Skipping update at that location [cfg={0}]"
|
||||
)
|
||||
tty.warn(msg.format(cfg_file))
|
||||
continue
|
||||
cannot_overwrite.append((scope, cfg_file))
|
||||
|
||||
@@ -321,14 +315,18 @@ def config_update(args):
|
||||
# Get a function to update the format
|
||||
update_fn = spack.config.ensure_latest_format_fn(args.section)
|
||||
for scope in updates:
|
||||
data = scope.get_section(args.section).pop(args.section)
|
||||
cfg_file = spack.config.config.get_config_filename(scope.name, args.section)
|
||||
with open(cfg_file) as f:
|
||||
data = syaml.load_config(f) or {}
|
||||
data = data.pop(args.section, {})
|
||||
update_fn(data)
|
||||
|
||||
# Make a backup copy and rewrite the file
|
||||
bkp_file = cfg_file + ".bkp"
|
||||
shutil.copy(cfg_file, bkp_file)
|
||||
spack.config.config.update_config(args.section, data, scope=scope.name, force=True)
|
||||
tty.msg(f'File "{cfg_file}" update [backup={bkp_file}]')
|
||||
msg = 'File "{0}" updated [backup={1}]'
|
||||
tty.msg(msg.format(cfg_file, bkp_file))
|
||||
|
||||
|
||||
def _can_revert_update(scope_dir, cfg_file, bkp_file):
|
||||
|
||||
@@ -807,7 +807,7 @@ def get_versions(args, name):
|
||||
# Default version with hash
|
||||
hashed_versions = """\
|
||||
# FIXME: Add proper versions and checksums here.
|
||||
# version("1.2.3", md5="0123456789abcdef0123456789abcdef")"""
|
||||
# version("1.2.3", "0123456789abcdef0123456789abcdef")"""
|
||||
|
||||
# Default version without hash
|
||||
unhashed_versions = """\
|
||||
|
||||
@@ -4,6 +4,7 @@
|
||||
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
|
||||
|
||||
import argparse
|
||||
import io
|
||||
import os
|
||||
import shutil
|
||||
import sys
|
||||
@@ -23,11 +24,10 @@
|
||||
import spack.cmd.uninstall
|
||||
import spack.config
|
||||
import spack.environment as ev
|
||||
import spack.environment.depfile as depfile
|
||||
import spack.environment.shell
|
||||
import spack.schema.env
|
||||
import spack.spec
|
||||
import spack.tengine
|
||||
import spack.traverse as traverse
|
||||
import spack.util.string as string
|
||||
from spack.util.environment import EnvironmentModifications
|
||||
|
||||
@@ -163,7 +163,7 @@ def env_activate(args):
|
||||
env = create_temp_env_directory()
|
||||
env_path = os.path.abspath(env)
|
||||
short_name = os.path.basename(env_path)
|
||||
ev.create_in_dir(env).write(regenerate=False)
|
||||
ev.Environment(env).write(regenerate=False)
|
||||
|
||||
# Managed environment
|
||||
elif ev.exists(env_name_or_dir) and not args.dir:
|
||||
@@ -301,17 +301,16 @@ def env_create(args):
|
||||
# object could choose to enable a view by default. False means that
|
||||
# the environment should not include a view.
|
||||
with_view = None
|
||||
|
||||
_env_create(
|
||||
args.create_env,
|
||||
init_file=args.envfile,
|
||||
dir=args.dir,
|
||||
with_view=with_view,
|
||||
keep_relative=args.keep_relative,
|
||||
)
|
||||
if args.envfile:
|
||||
with open(args.envfile) as f:
|
||||
_env_create(
|
||||
args.create_env, f, args.dir, with_view=with_view, keep_relative=args.keep_relative
|
||||
)
|
||||
else:
|
||||
_env_create(args.create_env, None, args.dir, with_view=with_view)
|
||||
|
||||
|
||||
def _env_create(name_or_path, *, init_file=None, dir=False, with_view=None, keep_relative=False):
|
||||
def _env_create(name_or_path, init_file=None, dir=False, with_view=None, keep_relative=False):
|
||||
"""Create a new environment, with an optional yaml description.
|
||||
|
||||
Arguments:
|
||||
@@ -324,21 +323,18 @@ def _env_create(name_or_path, *, init_file=None, dir=False, with_view=None, keep
|
||||
the new environment file, otherwise they may be made absolute if the
|
||||
new environment is in a different location
|
||||
"""
|
||||
if not dir:
|
||||
env = ev.create(
|
||||
name_or_path, init_file=init_file, with_view=with_view, keep_relative=keep_relative
|
||||
)
|
||||
if dir:
|
||||
env = ev.Environment(name_or_path, init_file, with_view, keep_relative)
|
||||
env.write()
|
||||
tty.msg("Created environment in %s" % env.path)
|
||||
tty.msg("You can activate this environment with:")
|
||||
tty.msg(" spack env activate %s" % env.path)
|
||||
else:
|
||||
env = ev.create(name_or_path, init_file, with_view, keep_relative)
|
||||
env.write()
|
||||
tty.msg("Created environment '%s' in %s" % (name_or_path, env.path))
|
||||
tty.msg("You can activate this environment with:")
|
||||
tty.msg(" spack env activate %s" % (name_or_path))
|
||||
return env
|
||||
|
||||
env = ev.create_in_dir(
|
||||
name_or_path, init_file=init_file, with_view=with_view, keep_relative=keep_relative
|
||||
)
|
||||
tty.msg("Created environment in %s" % env.path)
|
||||
tty.msg("You can activate this environment with:")
|
||||
tty.msg(" spack env activate %s" % env.path)
|
||||
return env
|
||||
|
||||
|
||||
@@ -435,22 +431,21 @@ def env_view_setup_parser(subparser):
|
||||
def env_view(args):
|
||||
env = ev.active_environment()
|
||||
|
||||
if not env:
|
||||
if env:
|
||||
if args.action == ViewAction.regenerate:
|
||||
env.regenerate_views()
|
||||
elif args.action == ViewAction.enable:
|
||||
if args.view_path:
|
||||
view_path = args.view_path
|
||||
else:
|
||||
view_path = env.view_path_default
|
||||
env.update_default_view(view_path)
|
||||
env.write()
|
||||
elif args.action == ViewAction.disable:
|
||||
env.update_default_view(None)
|
||||
env.write()
|
||||
else:
|
||||
tty.msg("No active environment")
|
||||
return
|
||||
|
||||
if args.action == ViewAction.regenerate:
|
||||
env.regenerate_views()
|
||||
elif args.action == ViewAction.enable:
|
||||
if args.view_path:
|
||||
view_path = args.view_path
|
||||
else:
|
||||
view_path = env.view_path_default
|
||||
env.update_default_view(view_path)
|
||||
env.write()
|
||||
elif args.action == ViewAction.disable:
|
||||
env.update_default_view(path_or_bool=False)
|
||||
env.write()
|
||||
|
||||
|
||||
#
|
||||
@@ -642,23 +637,161 @@ def env_depfile_setup_parser(subparser):
|
||||
)
|
||||
|
||||
|
||||
def _deptypes(use_buildcache):
|
||||
"""What edges should we follow for a given node? If it's a cache-only
|
||||
node, then we can drop build type deps."""
|
||||
return ("link", "run") if use_buildcache == "only" else ("build", "link", "run")
|
||||
|
||||
|
||||
class MakeTargetVisitor(object):
|
||||
"""This visitor produces an adjacency list of a (reduced) DAG, which
|
||||
is used to generate Makefile targets with their prerequisites."""
|
||||
|
||||
def __init__(self, target, pkg_buildcache, deps_buildcache):
|
||||
"""
|
||||
Args:
|
||||
target: function that maps dag_hash -> make target string
|
||||
pkg_buildcache (str): "only", "never", "auto": when "only",
|
||||
redundant build deps of roots are dropped
|
||||
deps_buildcache (str): same as pkg_buildcache, but for non-root specs.
|
||||
"""
|
||||
self.adjacency_list = []
|
||||
self.target = target
|
||||
self.pkg_buildcache = pkg_buildcache
|
||||
self.deps_buildcache = deps_buildcache
|
||||
self.deptypes_root = _deptypes(pkg_buildcache)
|
||||
self.deptypes_deps = _deptypes(deps_buildcache)
|
||||
|
||||
def neighbors(self, node):
|
||||
"""Produce a list of spec to follow from node"""
|
||||
deptypes = self.deptypes_root if node.depth == 0 else self.deptypes_deps
|
||||
return traverse.sort_edges(node.edge.spec.edges_to_dependencies(deptype=deptypes))
|
||||
|
||||
def build_cache_flag(self, depth):
|
||||
setting = self.pkg_buildcache if depth == 0 else self.deps_buildcache
|
||||
if setting == "only":
|
||||
return "--use-buildcache=only"
|
||||
elif setting == "never":
|
||||
return "--use-buildcache=never"
|
||||
return ""
|
||||
|
||||
def accept(self, node):
|
||||
fmt = "{name}-{version}-{hash}"
|
||||
tgt = node.edge.spec.format(fmt)
|
||||
spec_str = node.edge.spec.format(
|
||||
"{name}{@version}{%compiler}{variants}{arch=architecture}"
|
||||
)
|
||||
buildcache_flag = self.build_cache_flag(node.depth)
|
||||
prereqs = " ".join([self.target(dep.spec.format(fmt)) for dep in self.neighbors(node)])
|
||||
self.adjacency_list.append(
|
||||
(tgt, prereqs, node.edge.spec.dag_hash(), spec_str, buildcache_flag)
|
||||
)
|
||||
|
||||
# We already accepted this
|
||||
return True
|
||||
|
||||
|
||||
def env_depfile(args):
|
||||
# Currently only make is supported.
|
||||
spack.cmd.require_active_env(cmd_name="env depfile")
|
||||
env = ev.active_environment()
|
||||
|
||||
# Special make targets are useful when including a makefile in another, and you
|
||||
# need to "namespace" the targets to avoid conflicts.
|
||||
if args.make_prefix is None:
|
||||
prefix = os.path.join(env.env_subdir_path, "makedeps")
|
||||
else:
|
||||
prefix = args.make_prefix
|
||||
|
||||
def get_target(name):
|
||||
# The `all` and `clean` targets are phony. It doesn't make sense to
|
||||
# have /abs/path/to/env/metadir/{all,clean} targets. But it *does* make
|
||||
# sense to have a prefix like `env/all`, `env/clean` when they are
|
||||
# supposed to be included
|
||||
if name in ("all", "clean") and os.path.isabs(prefix):
|
||||
return name
|
||||
else:
|
||||
return os.path.join(prefix, name)
|
||||
|
||||
def get_install_target(name):
|
||||
return os.path.join(prefix, "install", name)
|
||||
|
||||
def get_install_deps_target(name):
|
||||
return os.path.join(prefix, "install-deps", name)
|
||||
|
||||
# What things do we build when running make? By default, we build the
|
||||
# root specs. If specific specs are provided as input, we build those.
|
||||
filter_specs = spack.cmd.parse_specs(args.specs) if args.specs else None
|
||||
template = spack.tengine.make_environment().get_template(os.path.join("depfile", "Makefile"))
|
||||
model = depfile.MakefileModel.from_env(
|
||||
ev.active_environment(),
|
||||
filter_specs=filter_specs,
|
||||
pkg_buildcache=depfile.UseBuildCache.from_string(args.use_buildcache[0]),
|
||||
dep_buildcache=depfile.UseBuildCache.from_string(args.use_buildcache[1]),
|
||||
make_prefix=args.make_prefix,
|
||||
jobserver=args.jobserver,
)
|
||||
if args.specs:
|
||||
abstract_specs = spack.cmd.parse_specs(args.specs)
|
||||
roots = [env.matching_spec(s) for s in abstract_specs]
|
||||
else:
|
||||
roots = [s for _, s in env.concretized_specs()]
|
||||
|
||||
# We produce a sub-DAG from the DAG induced by roots, where we drop build
|
||||
# edges for those specs that are installed through a binary cache.
|
||||
pkg_buildcache, dep_buildcache = args.use_buildcache
|
||||
make_targets = MakeTargetVisitor(get_install_target, pkg_buildcache, dep_buildcache)
|
||||
traverse.traverse_breadth_first_with_visitor(
|
||||
roots, traverse.CoverNodesVisitor(make_targets, key=lambda s: s.dag_hash())
|
||||
)
|
||||
makefile = template.render(model.to_dict())
|
||||
|
||||
# Root specs without deps are the prereqs for the environment target
|
||||
root_install_targets = [get_install_target(h.format("{name}-{version}-{hash}")) for h in roots]
|
||||
|
||||
all_pkg_identifiers = []
|
||||
|
||||
# The SPACK_PACKAGE_IDS variable is "exported", which can be used when including
|
||||
# generated makefiles to add post-install hooks, like pushing to a buildcache,
|
||||
# running tests, etc.
|
||||
# NOTE: GNU Make allows directory separators in variable names, so for consistency
|
||||
# we can namespace this variable with the same prefix as targets.
|
||||
if args.make_prefix is None:
|
||||
pkg_identifier_variable = "SPACK_PACKAGE_IDS"
|
||||
else:
|
||||
pkg_identifier_variable = os.path.join(prefix, "SPACK_PACKAGE_IDS")
|
||||
|
||||
# All install and install-deps targets
|
||||
all_install_related_targets = []
|
||||
|
||||
# Convenience shortcuts: ensure that `make install/pkg-version-hash` triggers
|
||||
# <absolute path to env>/.spack-env/makedeps/install/pkg-version-hash in case
|
||||
# we don't have a custom make target prefix.
|
||||
phony_convenience_targets = []
|
||||
|
||||
for tgt, _, _, _, _ in make_targets.adjacency_list:
|
||||
all_pkg_identifiers.append(tgt)
|
||||
all_install_related_targets.append(get_install_target(tgt))
|
||||
all_install_related_targets.append(get_install_deps_target(tgt))
|
||||
if args.make_prefix is None:
|
||||
phony_convenience_targets.append(os.path.join("install", tgt))
|
||||
phony_convenience_targets.append(os.path.join("install-deps", tgt))
|
||||
|
||||
buf = io.StringIO()
|
||||
|
||||
template = spack.tengine.make_environment().get_template(os.path.join("depfile", "Makefile"))
|
||||
|
||||
rendered = template.render(
|
||||
{
|
||||
"all_target": get_target("all"),
|
||||
"env_target": get_target("env"),
|
||||
"clean_target": get_target("clean"),
|
||||
"all_install_related_targets": " ".join(all_install_related_targets),
|
||||
"root_install_targets": " ".join(root_install_targets),
|
||||
"dirs_target": get_target("dirs"),
|
||||
"environment": env.path,
|
||||
"install_target": get_target("install"),
|
||||
"install_deps_target": get_target("install-deps"),
|
||||
"any_hash_target": get_target("%"),
|
||||
"jobserver_support": "+" if args.jobserver else "",
|
||||
"adjacency_list": make_targets.adjacency_list,
|
||||
"phony_convenience_targets": " ".join(phony_convenience_targets),
|
||||
"pkg_ids_variable": pkg_identifier_variable,
|
||||
"pkg_ids": " ".join(all_pkg_identifiers),
|
||||
}
|
||||
)
|
||||
|
||||
buf.write(rendered)
|
||||
makefile = buf.getvalue()
|
||||
|
||||
# Finally write to stdout/file.
|
||||
if args.output:
|
||||
|
||||
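For clarity, here is a small, self-contained sketch of the edge selection described above; the helper mirrors the string-based `_deptypes` shown earlier, and the policy values are only illustrative:

# Sketch of the edge-type selection used when generating make targets: a node
# that is installed from a build cache only needs no build-type edges, because
# nothing is compiled for it locally.
def _deptypes(use_buildcache):
    return ("link", "run") if use_buildcache == "only" else ("build", "link", "run")

# Illustrative policies for root specs and for their dependencies
pkg_buildcache, deps_buildcache = "only", "auto"
print(_deptypes(pkg_buildcache))   # ('link', 'run')
print(_deptypes(deps_buildcache))  # ('build', 'link', 'run')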
@@ -263,6 +263,146 @@ def report_filename(args: argparse.Namespace, specs: List[spack.spec.Spec]) -> s
|
||||
return result
|
||||
|
||||
|
||||
def install_specs(specs, install_kwargs, cli_args):
|
||||
try:
|
||||
if ev.active_environment():
|
||||
install_specs_inside_environment(specs, install_kwargs, cli_args)
|
||||
else:
|
||||
install_specs_outside_environment(specs, install_kwargs)
|
||||
except spack.build_environment.InstallError as e:
|
||||
if cli_args.show_log_on_error:
|
||||
e.print_context()
|
||||
assert e.pkg, "Expected InstallError to include the associated package"
|
||||
if not os.path.exists(e.pkg.build_log_path):
|
||||
tty.error("'spack install' created no log.")
|
||||
else:
|
||||
sys.stderr.write("Full build log:\n")
|
||||
with open(e.pkg.build_log_path) as log:
|
||||
shutil.copyfileobj(log, sys.stderr)
|
||||
raise
|
||||
|
||||
|
||||
def install_specs_inside_environment(specs, install_kwargs, cli_args):
|
||||
specs_to_install, specs_to_add = [], []
|
||||
env = ev.active_environment()
|
||||
for abstract, concrete in specs:
|
||||
# This won't find specs added to the env since last
|
||||
# concretize, therefore should we consider enforcing
|
||||
# concretization of the env before allowing to install
|
||||
# specs?
|
||||
m_spec = env.matching_spec(abstract)
|
||||
|
||||
# If there is any ambiguity in the above call to matching_spec
|
||||
# (i.e. if more than one spec in the environment matches), then
|
||||
# SpackEnvironmentError is raised, with a message listing the
|
||||
# matches. Getting to this point means there were either
|
||||
# no matches or exactly one match.
|
||||
|
||||
if not m_spec and not cli_args.add:
|
||||
msg = (
|
||||
"Cannot install '{0}' because it is not in the current environment."
|
||||
" You can add it to the environment with 'spack add {0}', or as part"
|
||||
" of the install command with 'spack install --add {0}'"
|
||||
).format(str(abstract))
|
||||
tty.die(msg)
|
||||
|
||||
if not m_spec:
|
||||
tty.debug("adding {0} as a root".format(abstract.name))
|
||||
specs_to_add.append((abstract, concrete))
|
||||
continue
|
||||
|
||||
tty.debug("exactly one match for {0} in env -> {1}".format(m_spec.name, m_spec.dag_hash()))
|
||||
|
||||
if m_spec in env.roots() or not cli_args.add:
|
||||
# either the single match is a root spec (in which case
|
||||
# the spec is not added to the env again), or the user did
|
||||
# not specify --add (in which case it is assumed we are
|
||||
# installing already-concretized specs in the env)
|
||||
tty.debug("just install {0}".format(m_spec.name))
|
||||
specs_to_install.append(m_spec)
|
||||
else:
|
||||
# the single match is not a root (i.e. it's a dependency),
|
||||
# and --add was specified, so we'll add it as a
|
||||
# root before installing
|
||||
tty.debug("add {0} then install it".format(m_spec.name))
|
||||
specs_to_add.append((abstract, concrete))
|
||||
if specs_to_add:
|
||||
tty.debug("Adding the following specs as roots:")
|
||||
for abstract, concrete in specs_to_add:
|
||||
tty.debug(" {0}".format(abstract.name))
|
||||
with env.write_transaction():
|
||||
specs_to_install.append(env.concretize_and_add(abstract, concrete))
|
||||
env.write(regenerate=False)
|
||||
# Install the validated list of cli specs
|
||||
if specs_to_install:
|
||||
tty.debug("Installing the following cli specs:")
|
||||
for s in specs_to_install:
|
||||
tty.debug(" {0}".format(s.name))
|
||||
env.install_specs(specs_to_install, **install_kwargs)
|
||||
|
||||
|
||||
def install_specs_outside_environment(specs, install_kwargs):
|
||||
installs = [(concrete.package, install_kwargs) for _, concrete in specs]
|
||||
builder = PackageInstaller(installs)
|
||||
builder.install()
|
||||
|
||||
|
||||
def install_all_specs_from_active_environment(
|
||||
install_kwargs, only_concrete, cli_test_arg, reporter_factory
|
||||
):
|
||||
"""Install all specs from the active environment
|
||||
|
||||
Args:
|
||||
install_kwargs (dict): dictionary of options to be passed to the installer
|
||||
only_concrete (bool): if true don't concretize the environment, but install
|
||||
only the specs that are already concrete
|
||||
cli_test_arg (bool or str): command line argument to select which test to run
|
||||
reporter_factory: callable that builds the reporter used for the installations
|
||||
"""
|
||||
env = ev.active_environment()
|
||||
if not env:
|
||||
msg = "install requires a package argument or active environment"
|
||||
if "spack.yaml" in os.listdir(os.getcwd()):
|
||||
# There's a spack.yaml file in the working dir, the user may
|
||||
# have intended to use that
|
||||
msg += "\n\n"
|
||||
msg += "Did you mean to install using the `spack.yaml`"
|
||||
msg += " in this directory? Try: \n"
|
||||
msg += " spack env activate .\n"
|
||||
msg += " spack install\n"
|
||||
msg += " OR\n"
|
||||
msg += " spack --env . install"
|
||||
tty.die(msg)
|
||||
|
||||
install_kwargs["tests"] = compute_tests_install_kwargs(env.user_specs, cli_test_arg)
|
||||
if not only_concrete:
|
||||
with env.write_transaction():
|
||||
concretized_specs = env.concretize(tests=install_kwargs["tests"])
|
||||
ev.display_specs(concretized_specs)
|
||||
|
||||
# save view regeneration for later, so that we only do it
|
||||
# once, as it can be slow.
|
||||
env.write(regenerate=False)
|
||||
|
||||
specs = env.all_specs()
|
||||
if not specs:
|
||||
msg = "{0} environment has no specs to install".format(env.name)
|
||||
tty.msg(msg)
|
||||
return
|
||||
|
||||
reporter = reporter_factory(specs) or lang.nullcontext()
|
||||
|
||||
tty.msg("Installing environment {0}".format(env.name))
|
||||
with reporter:
|
||||
env.install_all(**install_kwargs)
|
||||
|
||||
tty.debug("Regenerating environment views for {0}".format(env.name))
|
||||
with env.write_transaction():
|
||||
# write env to trigger view generation and modulefile
|
||||
# generation
|
||||
env.write()
|
||||
|
||||
|
||||
def compute_tests_install_kwargs(specs, cli_test_arg):
|
||||
"""Translate the test cli argument into the proper install argument"""
|
||||
if cli_test_arg == "all":
|
||||
@@ -272,6 +412,43 @@ def compute_tests_install_kwargs(specs, cli_test_arg):
|
||||
return False
|
||||
|
||||
|
||||
def specs_from_cli(args, install_kwargs):
|
||||
"""Return abstract and concrete spec parsed from the command line."""
|
||||
abstract_specs = spack.cmd.parse_specs(args.spec)
|
||||
install_kwargs["tests"] = compute_tests_install_kwargs(abstract_specs, args.test)
|
||||
try:
|
||||
concrete_specs = spack.cmd.parse_specs(
|
||||
args.spec, concretize=True, tests=install_kwargs["tests"]
|
||||
)
|
||||
except SpackError as e:
|
||||
tty.debug(e)
|
||||
if args.log_format is not None:
|
||||
reporter = args.reporter()
|
||||
reporter.concretization_report(report_filename(args, abstract_specs), e.message)
|
||||
raise
|
||||
return abstract_specs, concrete_specs
|
||||
|
||||
|
||||
def concrete_specs_from_file(args):
|
||||
"""Return the list of concrete specs read from files."""
|
||||
result = []
|
||||
for file in args.specfiles:
|
||||
with open(file, "r") as f:
|
||||
if file.endswith("yaml") or file.endswith("yml"):
|
||||
s = spack.spec.Spec.from_yaml(f)
|
||||
else:
|
||||
s = spack.spec.Spec.from_json(f)
|
||||
|
||||
concretized = s.concretized()
|
||||
if concretized.dag_hash() != s.dag_hash():
|
||||
msg = 'skipped invalid file "{0}". '
|
||||
msg += "The file does not contain a concrete spec."
|
||||
tty.warn(msg.format(file))
|
||||
continue
|
||||
result.append(concretized)
|
||||
return result
|
||||
|
||||
|
||||
def require_user_confirmation_for_overwrite(concrete_specs, args):
|
||||
if args.yes_to_all:
|
||||
return
|
||||
@@ -298,40 +475,12 @@ def require_user_confirmation_for_overwrite(concrete_specs, args):
|
||||
tty.die("Reinstallation aborted.")
|
||||
|
||||
|
||||
def _dump_log_on_error(e: spack.build_environment.InstallError):
|
||||
e.print_context()
|
||||
assert e.pkg, "Expected InstallError to include the associated package"
|
||||
if not os.path.exists(e.pkg.build_log_path):
|
||||
tty.error("'spack install' created no log.")
|
||||
else:
|
||||
sys.stderr.write("Full build log:\n")
|
||||
with open(e.pkg.build_log_path, errors="replace") as log:
|
||||
shutil.copyfileobj(log, sys.stderr)
|
||||
|
||||
|
||||
def _die_require_env():
|
||||
msg = "install requires a package argument or active environment"
|
||||
if "spack.yaml" in os.listdir(os.getcwd()):
|
||||
# There's a spack.yaml file in the working dir, the user may
|
||||
# have intended to use that
|
||||
msg += (
|
||||
"\n\n"
|
||||
"Did you mean to install using the `spack.yaml`"
|
||||
" in this directory? Try: \n"
|
||||
" spack env activate .\n"
|
||||
" spack install\n"
|
||||
" OR\n"
|
||||
" spack --env . install"
|
||||
)
|
||||
tty.die(msg)
|
||||
|
||||
|
||||
def install(parser, args):
|
||||
# TODO: unify args.verbose?
|
||||
tty.set_verbose(args.verbose or args.install_verbose)
|
||||
|
||||
if args.help_cdash:
|
||||
arguments.print_cdash_help()
|
||||
spack.cmd.common.arguments.print_cdash_help()
|
||||
return
|
||||
|
||||
if args.no_checksum:
|
||||
@@ -340,154 +489,43 @@ def install(parser, args):
|
||||
if args.deprecated:
|
||||
spack.config.set("config:deprecated", True, scope="command_line")
|
||||
|
||||
if args.log_file and not args.log_format:
|
||||
msg = "the '--log-format' must be specified when using '--log-file'"
|
||||
tty.die(msg)
|
||||
|
||||
arguments.sanitize_reporter_options(args)
|
||||
spack.cmd.common.arguments.sanitize_reporter_options(args)
|
||||
|
||||
def reporter_factory(specs):
|
||||
if args.log_format is None:
|
||||
return lang.nullcontext()
|
||||
return None
|
||||
|
||||
return spack.report.build_context_manager(
|
||||
context_manager = spack.report.build_context_manager(
|
||||
reporter=args.reporter(), filename=report_filename(args, specs=specs), specs=specs
|
||||
)
|
||||
return context_manager
|
||||
|
||||
install_kwargs = install_kwargs_from_args(args)
|
||||
|
||||
env = ev.active_environment()
|
||||
|
||||
if not env and not args.spec and not args.specfiles:
|
||||
_die_require_env()
|
||||
|
||||
try:
|
||||
if env:
|
||||
install_with_active_env(env, args, install_kwargs, reporter_factory)
|
||||
else:
|
||||
install_without_active_env(args, install_kwargs, reporter_factory)
|
||||
except spack.build_environment.InstallError as e:
|
||||
if args.show_log_on_error:
|
||||
_dump_log_on_error(e)
|
||||
raise
|
||||
|
||||
|
||||
def _maybe_add_and_concretize(args, env, specs):
|
||||
"""Handle the overloaded spack install behavior of adding
|
||||
and automatically concretizing specs"""
|
||||
|
||||
# Users can opt out of accidental concretizations with --only-concrete
|
||||
if args.only_concrete:
|
||||
if not args.spec and not args.specfiles:
|
||||
# If there are no args but an active environment then install the packages from it.
|
||||
install_all_specs_from_active_environment(
|
||||
install_kwargs=install_kwargs,
|
||||
only_concrete=args.only_concrete,
|
||||
cli_test_arg=args.test,
|
||||
reporter_factory=reporter_factory,
|
||||
)
|
||||
return
|
||||
|
||||
# Otherwise, we will modify the environment.
|
||||
with env.write_transaction():
|
||||
# `spack add` adds these specs.
|
||||
if args.add:
|
||||
for spec in specs:
|
||||
env.add(spec)
|
||||
# Specs from CLI
|
||||
abstract_specs, concrete_specs = specs_from_cli(args, install_kwargs)
|
||||
|
||||
# `spack concretize`
|
||||
tests = compute_tests_install_kwargs(env.user_specs, args.test)
|
||||
concretized_specs = env.concretize(tests=tests)
|
||||
ev.display_specs(concretized_specs)
|
||||
|
||||
# save view regeneration for later, so that we only do it
|
||||
# once, as it can be slow.
|
||||
env.write(regenerate=False)
|
||||
|
||||
|
||||
def install_with_active_env(env: ev.Environment, args, install_kwargs, reporter_factory):
|
||||
specs = spack.cmd.parse_specs(args.spec)
|
||||
|
||||
# The following two commands are equivalent:
|
||||
# 1. `spack install --add x y z`
|
||||
# 2. `spack add x y z && spack concretize && spack install --only-concrete`
|
||||
# here we do the `add` and `concretize` part.
|
||||
_maybe_add_and_concretize(args, env, specs)
|
||||
|
||||
# Now we're doing `spack install --only-concrete`.
|
||||
if args.add or not specs:
|
||||
specs_to_install = env.concrete_roots()
|
||||
if not specs_to_install:
|
||||
tty.msg(f"{env.name} environment has no specs to install")
|
||||
return
|
||||
|
||||
# `spack install x y z` without --add is installing matching specs in the env.
|
||||
else:
|
||||
specs_to_install = env.all_matching_specs(*specs)
|
||||
if not specs_to_install:
|
||||
msg = (
|
||||
"Cannot install '{0}' because no matching specs are in the current environment."
|
||||
" You can add specs to the environment with 'spack add {0}', or as part"
|
||||
" of the install command with 'spack install --add {0}'"
|
||||
).format(" ".join(args.spec))
|
||||
tty.die(msg)
|
||||
|
||||
install_kwargs["tests"] = compute_tests_install_kwargs(specs_to_install, args.test)
|
||||
|
||||
if args.overwrite:
|
||||
require_user_confirmation_for_overwrite(specs_to_install, args)
|
||||
install_kwargs["overwrite"] = [spec.dag_hash() for spec in specs_to_install]
|
||||
|
||||
try:
|
||||
with reporter_factory(specs_to_install):
|
||||
env.install_specs(specs_to_install, **install_kwargs)
|
||||
finally:
|
||||
# TODO: this is doing way too much to trigger
|
||||
# views and modules to be generated.
|
||||
with env.write_transaction():
|
||||
env.write(regenerate=True)
|
||||
|
||||
|
||||
def concrete_specs_from_cli(args, install_kwargs):
|
||||
"""Return abstract and concrete spec parsed from the command line."""
|
||||
abstract_specs = spack.cmd.parse_specs(args.spec)
|
||||
install_kwargs["tests"] = compute_tests_install_kwargs(abstract_specs, args.test)
|
||||
try:
|
||||
concrete_specs = spack.cmd.parse_specs(
|
||||
args.spec, concretize=True, tests=install_kwargs["tests"]
|
||||
)
|
||||
except SpackError as e:
|
||||
tty.debug(e)
|
||||
if args.log_format is not None:
|
||||
reporter = args.reporter()
|
||||
reporter.concretization_report(report_filename(args, abstract_specs), e.message)
|
||||
raise
|
||||
return concrete_specs
|
||||
|
||||
|
||||
def concrete_specs_from_file(args):
|
||||
"""Return the list of concrete specs read from files."""
|
||||
result = []
|
||||
for file in args.specfiles:
|
||||
with open(file, "r") as f:
|
||||
if file.endswith("yaml") or file.endswith("yml"):
|
||||
s = spack.spec.Spec.from_yaml(f)
|
||||
else:
|
||||
s = spack.spec.Spec.from_json(f)
|
||||
|
||||
concretized = s.concretized()
|
||||
if concretized.dag_hash() != s.dag_hash():
|
||||
msg = 'skipped invalid file "{0}". '
|
||||
msg += "The file does not contain a concrete spec."
|
||||
tty.warn(msg.format(file))
|
||||
continue
|
||||
result.append(concretized)
|
||||
return result
|
||||
|
||||
|
||||
def install_without_active_env(args, install_kwargs, reporter_factory):
|
||||
concrete_specs = concrete_specs_from_cli(args, install_kwargs) + concrete_specs_from_file(args)
|
||||
# Concrete specs from YAML or JSON files
|
||||
specs_from_file = concrete_specs_from_file(args)
|
||||
abstract_specs.extend(specs_from_file)
|
||||
concrete_specs.extend(specs_from_file)
|
||||
|
||||
if len(concrete_specs) == 0:
|
||||
tty.die("The `spack install` command requires a spec to install.")
|
||||
|
||||
with reporter_factory(concrete_specs):
|
||||
reporter = reporter_factory(concrete_specs) or lang.nullcontext()
|
||||
with reporter:
|
||||
if args.overwrite:
|
||||
require_user_confirmation_for_overwrite(concrete_specs, args)
|
||||
install_kwargs["overwrite"] = [spec.dag_hash() for spec in concrete_specs]
|
||||
|
||||
installs = [(s.package, install_kwargs) for s in concrete_specs]
|
||||
builder = PackageInstaller(installs)
|
||||
builder.install()
|
||||
install_specs(zip(abstract_specs, concrete_specs), install_kwargs, args)
|
||||
|
||||
@@ -38,6 +38,6 @@ def remove(parser, args):
|
||||
env.clear()
|
||||
else:
|
||||
for spec in spack.cmd.parse_specs(args.specs):
|
||||
tty.msg("Removing %s from environment %s" % (spec, env.name))
|
||||
env.remove(spec, args.list_name, force=args.force)
|
||||
tty.msg(f"{spec} has been removed from {env.manifest}")
|
||||
env.write()
|
||||
|
||||
@@ -61,7 +61,7 @@ def is_clang_based(self):
|
||||
return version >= ver("9.0") and "classic" not in str(version)
|
||||
|
||||
version_argument = "--version"
|
||||
version_regex = r"[Cc]ray (?:clang|C :|C\+\+ :|Fortran :) [Vv]ersion.*?(\d+(\.\d+)+)"
|
||||
version_regex = r"[Vv]ersion.*?(\d+(\.\d+)+)"
|
||||
|
||||
@property
|
||||
def verbose_flag(self):
|
||||
|
||||
@@ -122,19 +122,7 @@ def platform_toolset_ver(self):
|
||||
@property
|
||||
def cl_version(self):
|
||||
"""Cl toolset version"""
|
||||
return Version(
|
||||
re.search(
|
||||
Msvc.version_regex,
|
||||
spack.compiler.get_compiler_version_output(self.cc, version_arg=None),
|
||||
).group(1)
|
||||
)
|
||||
|
||||
@property
|
||||
def vs_root(self):
|
||||
# The MSVC install root is located at a fixed level above the compiler
|
||||
# and is referenceable idiomatically via the pattern below
|
||||
# this should be consistent across versions
|
||||
return os.path.abspath(os.path.join(self.cc, "../../../../../../../.."))
|
||||
return spack.compiler.get_compiler_version_output(self.cc)
|
||||
|
||||
def setup_custom_environment(self, pkg, env):
|
||||
"""Set environment variables for MSVC using the
|
||||
@@ -164,16 +152,15 @@ def setup_custom_environment(self, pkg, env):
|
||||
out = out.decode("utf-16le", errors="replace") # novermin
|
||||
|
||||
int_env = dict(
|
||||
(key, value)
|
||||
(key.lower(), value)
|
||||
for key, _, value in (line.partition("=") for line in out.splitlines())
|
||||
if key and value
|
||||
)
|
||||
|
||||
for env_var in int_env:
|
||||
if os.pathsep not in int_env[env_var]:
|
||||
env.set(env_var, int_env[env_var])
|
||||
else:
|
||||
env.set_path(env_var, int_env[env_var].split(os.pathsep))
|
||||
if "path" in int_env:
|
||||
env.set_path("PATH", int_env["path"].split(";"))
|
||||
env.set_path("INCLUDE", int_env.get("include", "").split(";"))
|
||||
env.set_path("LIB", int_env.get("lib", "").split(";"))
|
||||
|
||||
env.set("CC", self.cc)
|
||||
env.set("CXX", self.cxx)
|
||||
|
||||
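As a hedged, standalone illustration of the environment-block parsing above (the sample vcvars-style output below is made up):

# Split "NAME=value" lines into a dict with lower-cased keys, skipping anything
# that is not a well-formed assignment -- the same idiom as in the code above.
out = "PATH=C:\\msvc\\bin;C:\\Windows\nINCLUDE=C:\\msvc\\include\nnot an assignment\n"

int_env = dict(
    (key.lower(), value)
    for key, _, value in (line.partition("=") for line in out.splitlines())
    if key and value
)

print(int_env["path"].split(";"))  # ['C:\\msvc\\bin', 'C:\\Windows']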
@@ -21,7 +21,6 @@
|
||||
import tempfile
|
||||
from contextlib import contextmanager
|
||||
from itertools import chain
|
||||
from typing import Union
|
||||
|
||||
import archspec.cpu
|
||||
|
||||
@@ -44,9 +43,7 @@
|
||||
from spack.version import Version, VersionList, VersionRange, ver
|
||||
|
||||
#: implements rudimentary logic for ABI compatibility
|
||||
_abi: Union[spack.abi.ABI, llnl.util.lang.Singleton] = llnl.util.lang.Singleton(
|
||||
lambda: spack.abi.ABI()
|
||||
)
|
||||
_abi = llnl.util.lang.Singleton(lambda: spack.abi.ABI())
|
||||
|
||||
|
||||
@functools.total_ordering
|
||||
|
||||
@@ -36,10 +36,9 @@
|
||||
import re
|
||||
import sys
|
||||
from contextlib import contextmanager
|
||||
from typing import Dict, List, Optional, Union
|
||||
from typing import Dict, List, Optional
|
||||
|
||||
import ruamel.yaml as yaml
|
||||
from ruamel.yaml.comments import Comment
|
||||
from ruamel.yaml.error import MarkedYAMLError
|
||||
|
||||
import llnl.util.lang
|
||||
@@ -78,8 +77,6 @@
|
||||
"config": spack.schema.config.schema,
|
||||
"upstreams": spack.schema.upstreams.schema,
|
||||
"bootstrap": spack.schema.bootstrap.schema,
|
||||
"ci": spack.schema.ci.schema,
|
||||
"cdash": spack.schema.cdash.schema,
|
||||
}
|
||||
|
||||
# Same as above, but including keys for environments
|
||||
@@ -363,12 +360,6 @@ def _process_dict_keyname_overrides(data):
|
||||
if sk.endswith(":"):
|
||||
key = syaml.syaml_str(sk[:-1])
|
||||
key.override = True
|
||||
elif sk.endswith("+"):
|
||||
key = syaml.syaml_str(sk[:-1])
|
||||
key.prepend = True
|
||||
elif sk.endswith("-"):
|
||||
key = syaml.syaml_str(sk[:-1])
|
||||
key.append = True
|
||||
else:
|
||||
key = sk
|
||||
|
||||
@@ -544,14 +535,16 @@ def update_config(
|
||||
scope = self._validate_scope(scope) # get ConfigScope object
|
||||
|
||||
# manually preserve comments
|
||||
need_comment_copy = section in scope.sections and scope.sections[section]
|
||||
need_comment_copy = section in scope.sections and scope.sections[section] is not None
|
||||
if need_comment_copy:
|
||||
comments = getattr(scope.sections[section][section], Comment.attrib, None)
|
||||
comments = getattr(
|
||||
scope.sections[section][section], yaml.comments.Comment.attrib, None
|
||||
)
|
||||
|
||||
# read only the requested section's data.
|
||||
scope.sections[section] = syaml.syaml_dict({section: update_data})
|
||||
if need_comment_copy and comments:
|
||||
setattr(scope.sections[section][section], Comment.attrib, comments)
|
||||
setattr(scope.sections[section][section], yaml.comments.Comment.attrib, comments)
|
||||
|
||||
scope._write_section(section)
|
||||
|
||||
@@ -837,7 +830,7 @@ def _config():
|
||||
|
||||
|
||||
#: This is the singleton configuration instance for Spack.
|
||||
config: Union[Configuration, llnl.util.lang.Singleton] = llnl.util.lang.Singleton(_config)
|
||||
config = llnl.util.lang.Singleton(_config)
|
||||
|
||||
|
||||
def add_from_file(filename, scope=None):
|
||||
@@ -1047,33 +1040,6 @@ def _override(string):
|
||||
return hasattr(string, "override") and string.override
|
||||
|
||||
|
||||
def _append(string):
|
||||
"""Test if a spack YAML string is an override.
|
||||
|
||||
See ``spack_yaml`` for details. Keys in Spack YAML can end in `-:`,
|
||||
and if they do, their values append lower-precedence
|
||||
configs.
|
||||
|
||||
str, str : concatenate strings.
|
||||
[obj], [obj] : append lists.
|
||||
|
||||
"""
|
||||
return getattr(string, "append", False)
|
||||
|
||||
|
||||
def _prepend(string):
|
||||
"""Test if a spack YAML string is an override.
|
||||
|
||||
See ``spack_yaml`` for details. Keys in Spack YAML can end in `+:`,
|
||||
and if they do, their values prepend lower-precedence
|
||||
configs.
|
||||
|
||||
str, str : concatenate strings.
|
||||
[obj], [obj] : prepend lists. (default behavior)
|
||||
"""
|
||||
return getattr(string, "prepend", False)
|
||||
|
||||
|
||||
def _mark_internal(data, name):
|
||||
"""Add a simple name mark to raw YAML/JSON data.
|
||||
|
||||
@@ -1136,57 +1102,7 @@ def get_valid_type(path):
|
||||
raise ConfigError("Cannot determine valid type for path '%s'." % path)
|
||||
|
||||
|
||||
def remove_yaml(dest, source):
|
||||
"""UnMerges source from dest; entries in source take precedence over dest.
|
||||
|
||||
This routine may modify dest and should be assigned to dest, in
|
||||
case dest was None to begin with, e.g.:
|
||||
|
||||
dest = remove_yaml(dest, source)
|
||||
|
||||
In the result, elements from lists from ``source`` will not appear
|
||||
as elements of lists from ``dest``. Likewise, when iterating over keys
|
||||
or items in merged ``OrderedDict`` objects, keys from ``source`` will not
|
||||
appear as keys in ``dest``.
|
||||
|
||||
Config file authors can optionally end any attribute in a dict
|
||||
with `::` instead of `:`, and the key will remove the entire section
|
||||
from ``dest``
|
||||
"""
|
||||
|
||||
def they_are(t):
|
||||
return isinstance(dest, t) and isinstance(source, t)
|
||||
|
||||
# If source is None, there is nothing to remove from dest.
|
||||
if source is None:
|
||||
return dest
|
||||
|
||||
# Source list is prepended (for precedence)
|
||||
if they_are(list):
|
||||
# Make sure to copy ruamel comments
|
||||
dest[:] = [x for x in dest if x not in source]
|
||||
return dest
|
||||
|
||||
# Source dict is merged into dest.
|
||||
elif they_are(dict):
|
||||
for sk, sv in source.items():
|
||||
# always remove the dest items. Python dicts do not overwrite
|
||||
# keys on insert, so this ensures that source keys are copied
|
||||
# into dest along with mark provenance (i.e., file/line info).
|
||||
unmerge = sk in dest
|
||||
old_dest_value = dest.pop(sk, None)
|
||||
|
||||
if unmerge and not spack.config._override(sk):
|
||||
dest[sk] = remove_yaml(old_dest_value, sv)
|
||||
|
||||
return dest
|
||||
|
||||
# If we reach here source and dest are either different types or are
|
||||
# not both lists or dicts: replace with source.
|
||||
return dest
|
||||
|
||||
|
||||
def merge_yaml(dest, source, prepend=False, append=False):
|
||||
def merge_yaml(dest, source):
|
||||
"""Merges source into dest; entries in source take precedence over dest.
|
||||
|
||||
This routine may modify dest and should be assigned to dest, in
|
||||
@@ -1202,9 +1118,6 @@ def merge_yaml(dest, source, prepend=False, append=False):
|
||||
Config file authors can optionally end any attribute in a dict
|
||||
with `::` instead of `:`, and the key will override that of the
|
||||
parent instead of merging.
|
||||
|
||||
`+:` will extend the default prepend merge strategy to include string concatenation
|
||||
`-:` will change the merge strategy to append; it also includes string concatenation
|
||||
"""
|
||||
|
||||
def they_are(t):
|
||||
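A minimal sketch of the precedence rules described in the docstring, using plain dicts and lists rather than Spack's syaml types (the scope names and keys below are made up):

# Toy version of the merge semantics: entries from `source` win, and list items
# from `source` are prepended so they take precedence when read in order.
def toy_merge(dest, source):
    if isinstance(dest, list) and isinstance(source, list):
        return source + [x for x in dest if x not in source]
    if isinstance(dest, dict) and isinstance(source, dict):
        merged = dict(dest)
        for key, value in source.items():
            merged[key] = toy_merge(dest[key], value) if key in dest else value
        return merged
    return source  # scalars: source replaces dest

site_scope = {"config": {"build_jobs": 4, "extra_rpaths": ["/opt/site/lib"]}}
user_scope = {"config": {"build_jobs": 16, "extra_rpaths": ["/home/me/lib"]}}
print(toy_merge(site_scope, user_scope))
# {'config': {'build_jobs': 16, 'extra_rpaths': ['/home/me/lib', '/opt/site/lib']}}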
@@ -1216,12 +1129,8 @@ def they_are(t):
|
||||
|
||||
# Source list is prepended (for precedence)
|
||||
if they_are(list):
|
||||
if append:
|
||||
# Make sure to copy ruamel comments
|
||||
dest[:] = [x for x in dest if x not in source] + source
|
||||
else:
|
||||
# Make sure to copy ruamel comments
|
||||
dest[:] = source + [x for x in dest if x not in source]
|
||||
# Make sure to copy ruamel comments
|
||||
dest[:] = source + [x for x in dest if x not in source]
|
||||
return dest
|
||||
|
||||
# Source dict is merged into dest.
|
||||
@@ -1238,7 +1147,7 @@ def they_are(t):
|
||||
old_dest_value = dest.pop(sk, None)
|
||||
|
||||
if merge and not _override(sk):
|
||||
dest[sk] = merge_yaml(old_dest_value, sv, _prepend(sk), _append(sk))
|
||||
dest[sk] = merge_yaml(old_dest_value, sv)
|
||||
else:
|
||||
# if sk ended with ::, or if it's new, completely override
|
||||
dest[sk] = copy.deepcopy(sv)
|
||||
@@ -1249,13 +1158,6 @@ def they_are(t):
|
||||
|
||||
return dest
|
||||
|
||||
elif they_are(str):
|
||||
# Concatenate strings in prepend mode
|
||||
if prepend:
|
||||
return source + dest
|
||||
elif append:
|
||||
return dest + source
|
||||
|
||||
# If we reach here source and dest are either different types or are
|
||||
# not both lists or dicts: replace with source.
|
||||
return copy.copy(source)
|
||||
@@ -1281,17 +1183,6 @@ def process_config_path(path):
|
||||
front = syaml.syaml_str(front)
|
||||
front.override = True
|
||||
seen_override_in_path = True
|
||||
|
||||
elif front.endswith("+"):
|
||||
front = front.rstrip("+")
|
||||
front = syaml.syaml_str(front)
|
||||
front.prepend = True
|
||||
|
||||
elif front.endswith("-"):
|
||||
front = front.rstrip("-")
|
||||
front = syaml.syaml_str(front)
|
||||
front.append = True
|
||||
|
||||
result.append(front)
|
||||
return result
|
||||
|
||||
|
||||
@@ -39,10 +39,10 @@ def validate(configuration_file):
|
||||
# Ensure we have a "container" attribute with sensible defaults set
|
||||
env_dict = ev.config_dict(config)
|
||||
env_dict.setdefault(
|
||||
"container", {"format": "docker", "images": {"os": "ubuntu:22.04", "spack": "develop"}}
|
||||
"container", {"format": "docker", "images": {"os": "ubuntu:18.04", "spack": "develop"}}
|
||||
)
|
||||
env_dict["container"].setdefault("format", "docker")
|
||||
env_dict["container"].setdefault("images", {"os": "ubuntu:22.04", "spack": "develop"})
|
||||
env_dict["container"].setdefault("images", {"os": "ubuntu:18.04", "spack": "develop"})
|
||||
|
||||
# Remove attributes that are not needed / allowed in the
|
||||
# container recipe
|
||||
|
||||
@@ -12,90 +12,6 @@
|
||||
},
|
||||
"os_package_manager": "yum_amazon"
|
||||
},
|
||||
"fedora:38": {
|
||||
"bootstrap": {
|
||||
"template": "container/fedora_38.dockerfile",
|
||||
"image": "docker.io/fedora:38"
|
||||
},
|
||||
"os_package_manager": "yum",
|
||||
"build": "spack/fedora38",
|
||||
"build_tags": {
|
||||
"develop": "latest"
|
||||
},
|
||||
"final": {
|
||||
"image": "docker.io/fedora:38"
|
||||
}
|
||||
},
|
||||
"fedora:37": {
|
||||
"bootstrap": {
|
||||
"template": "container/fedora_37.dockerfile",
|
||||
"image": "docker.io/fedora:37"
|
||||
},
|
||||
"os_package_manager": "yum",
|
||||
"build": "spack/fedora37",
|
||||
"build_tags": {
|
||||
"develop": "latest"
|
||||
},
|
||||
"final": {
|
||||
"image": "docker.io/fedora:37"
|
||||
}
|
||||
},
|
||||
"rockylinux:9": {
|
||||
"bootstrap": {
|
||||
"template": "container/rockylinux_9.dockerfile",
|
||||
"image": "docker.io/rockylinux:9"
|
||||
},
|
||||
"os_package_manager": "yum",
|
||||
"build": "spack/rockylinux9",
|
||||
"build_tags": {
|
||||
"develop": "latest"
|
||||
},
|
||||
"final": {
|
||||
"image": "docker.io/rockylinux:9"
|
||||
}
|
||||
},
|
||||
"rockylinux:8": {
|
||||
"bootstrap": {
|
||||
"template": "container/rockylinux_8.dockerfile",
|
||||
"image": "docker.io/rockylinux:8"
|
||||
},
|
||||
"os_package_manager": "yum",
|
||||
"build": "spack/rockylinux8",
|
||||
"build_tags": {
|
||||
"develop": "latest"
|
||||
},
|
||||
"final": {
|
||||
"image": "docker.io/rockylinux:8"
|
||||
}
|
||||
},
|
||||
"almalinux:9": {
|
||||
"bootstrap": {
|
||||
"template": "container/almalinux_9.dockerfile",
|
||||
"image": "quay.io/almalinux/almalinux:9"
|
||||
},
|
||||
"os_package_manager": "yum",
|
||||
"build": "spack/almalinux9",
|
||||
"build_tags": {
|
||||
"develop": "latest"
|
||||
},
|
||||
"final": {
|
||||
"image": "quay.io/almalinux/almalinux:9"
|
||||
}
|
||||
},
|
||||
"almalinux:8": {
|
||||
"bootstrap": {
|
||||
"template": "container/almalinux_8.dockerfile",
|
||||
"image": "quay.io/almalinux/almalinux:8"
|
||||
},
|
||||
"os_package_manager": "yum",
|
||||
"build": "spack/almalinux8",
|
||||
"build_tags": {
|
||||
"develop": "latest"
|
||||
},
|
||||
"final": {
|
||||
"image": "quay.io/almalinux/almalinux:8"
|
||||
}
|
||||
},
|
||||
"centos:stream": {
|
||||
"bootstrap": {
|
||||
"template": "container/centos_stream.dockerfile",
|
||||
|
||||
@@ -7,7 +7,6 @@
|
||||
"""
|
||||
import collections
|
||||
import copy
|
||||
from typing import Optional
|
||||
|
||||
import spack.environment as ev
|
||||
import spack.schema.env
|
||||
@@ -132,9 +131,6 @@ class PathContext(tengine.Context):
|
||||
directly via PATH.
|
||||
"""
|
||||
|
||||
# Must be set by derived classes
|
||||
template_name: Optional[str] = None
|
||||
|
||||
def __init__(self, config, last_phase):
|
||||
self.config = ev.config_dict(config)
|
||||
self.container_config = self.config["container"]
|
||||
@@ -150,10 +146,6 @@ def __init__(self, config, last_phase):
|
||||
# Record the last phase
|
||||
self.last_phase = last_phase
|
||||
|
||||
@tengine.context_property
|
||||
def depfile(self):
|
||||
return self.container_config.get("depfile", False)
|
||||
|
||||
@tengine.context_property
|
||||
def run(self):
|
||||
"""Information related to the run image."""
|
||||
@@ -288,8 +280,7 @@ def render_phase(self):
|
||||
def __call__(self):
|
||||
"""Returns the recipe as a string"""
|
||||
env = tengine.make_environment()
|
||||
template_name = self.container_config.get("template", self.template_name)
|
||||
t = env.get_template(template_name)
|
||||
t = env.get_template(self.template_name)
|
||||
return t.render(**self.to_dict())
|
||||
|
||||
|
||||
|
||||
@@ -32,7 +32,7 @@ class OpenMpi(Package):
|
||||
import functools
|
||||
import os.path
|
||||
import re
|
||||
from typing import List, Optional, Set
|
||||
from typing import List, Set
|
||||
|
||||
import llnl.util.lang
|
||||
import llnl.util.tty.color
|
||||
@@ -317,100 +317,34 @@ def remove_directives(arg):
|
||||
|
||||
|
||||
@directive("versions")
|
||||
def version(
|
||||
ver: str,
|
||||
# this positional argument is deprecated, use sha256=... instead
|
||||
checksum: Optional[str] = None,
|
||||
*,
|
||||
# generic version options
|
||||
preferred: Optional[bool] = None,
|
||||
deprecated: Optional[bool] = None,
|
||||
no_cache: Optional[bool] = None,
|
||||
# url fetch options
|
||||
url: Optional[str] = None,
|
||||
extension: Optional[str] = None,
|
||||
expand: Optional[bool] = None,
|
||||
fetch_options: Optional[dict] = None,
|
||||
# url archive verification options
|
||||
md5: Optional[str] = None,
|
||||
sha1: Optional[str] = None,
|
||||
sha224: Optional[str] = None,
|
||||
sha256: Optional[str] = None,
|
||||
sha384: Optional[str] = None,
|
||||
sha512: Optional[str] = None,
|
||||
# git fetch options
|
||||
git: Optional[str] = None,
|
||||
commit: Optional[str] = None,
|
||||
tag: Optional[str] = None,
|
||||
branch: Optional[str] = None,
|
||||
get_full_repo: Optional[bool] = None,
|
||||
submodules: Optional[bool] = None,
|
||||
submodules_delete: Optional[bool] = None,
|
||||
# other version control
|
||||
svn: Optional[str] = None,
|
||||
hg: Optional[str] = None,
|
||||
cvs: Optional[str] = None,
|
||||
revision: Optional[str] = None,
|
||||
date: Optional[str] = None,
|
||||
):
|
||||
def version(ver, checksum=None, **kwargs):
|
||||
"""Adds a version and, if appropriate, metadata for fetching its code.
|
||||
|
||||
The ``version`` directives are aggregated into a ``versions`` dictionary
|
||||
attribute with ``Version`` keys and metadata values, where the metadata
|
||||
is stored as a dictionary of ``kwargs``.
|
||||
|
||||
The (keyword) arguments are turned into a valid fetch strategy for
|
||||
The ``dict`` of arguments is turned into a valid fetch strategy for
|
||||
code packages later. See ``spack.fetch_strategy.for_package_version()``.
|
||||
|
||||
Keyword Arguments:
|
||||
deprecated (bool): whether or not this version is deprecated
|
||||
"""
|
||||
|
||||
def _execute_version(pkg):
|
||||
if (
|
||||
any((sha256, sha384, sha512, md5, sha1, sha224, checksum))
|
||||
and hasattr(pkg, "has_code")
|
||||
and not pkg.has_code
|
||||
):
|
||||
raise VersionChecksumError(
|
||||
"{0}: Checksums not allowed in no-code packages "
|
||||
"(see '{1}' version).".format(pkg.name, ver)
|
||||
)
|
||||
if checksum is not None:
|
||||
if hasattr(pkg, "has_code") and not pkg.has_code:
|
||||
raise VersionChecksumError(
|
||||
"{0}: Checksums not allowed in no-code packages"
|
||||
"(see '{1}' version).".format(pkg.name, ver)
|
||||
)
|
||||
|
||||
kwargs = {
|
||||
key: value
|
||||
for key, value in (
|
||||
("sha256", sha256),
|
||||
("sha384", sha384),
|
||||
("sha512", sha512),
|
||||
("preferred", preferred),
|
||||
("deprecated", deprecated),
|
||||
("expand", expand),
|
||||
("url", url),
|
||||
("extension", extension),
|
||||
("no_cache", no_cache),
|
||||
("fetch_options", fetch_options),
|
||||
("git", git),
|
||||
("svn", svn),
|
||||
("hg", hg),
|
||||
("cvs", cvs),
|
||||
("get_full_repo", get_full_repo),
|
||||
("branch", branch),
|
||||
("submodules", submodules),
|
||||
("submodules_delete", submodules_delete),
|
||||
("commit", commit),
|
||||
("tag", tag),
|
||||
("revision", revision),
|
||||
("date", date),
|
||||
("md5", md5),
|
||||
("sha1", sha1),
|
||||
("sha224", sha224),
|
||||
("checksum", checksum),
|
||||
)
|
||||
if value is not None
|
||||
}
|
||||
kwargs["checksum"] = checksum
|
||||
|
||||
# Store kwargs for the package to use later with a fetch_strategy.
|
||||
version = Version(ver)
|
||||
if isinstance(version, GitVersion):
|
||||
if git is None and not hasattr(pkg, "git"):
|
||||
if not hasattr(pkg, "git") and "git" not in kwargs:
|
||||
msg = "Spack version directives cannot include git hashes fetched from"
|
||||
msg += " URLs. Error in package '%s'\n" % pkg.name
|
||||
msg += " version('%s', " % version.string
|
||||
|
||||
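As a hedged illustration of how these keyword arguments are aggregated, a hypothetical package might look like the following (the package name, URL, and checksums are invented, and `from spack.package import *` is assumed to be the usual preamble for package files):

from spack.package import *  # assumed standard preamble for Spack package files


class Libexample(Package):
    """Hypothetical package, used only to illustrate the version() directive."""

    homepage = "https://example.org/libexample"
    url = "https://example.org/libexample-1.0.tar.gz"

    # Each call adds a Version key to Libexample.versions, with the remaining
    # keyword arguments stored as metadata that is later turned into a fetch strategy.
    version("1.1", sha256="0" * 64, preferred=True)
    version("1.0", sha256="f" * 64, deprecated=True)
    version("develop", git="https://example.org/libexample.git", branch="main")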
@@ -340,14 +340,11 @@
|
||||
all_environments,
|
||||
config_dict,
|
||||
create,
|
||||
create_in_dir,
|
||||
deactivate,
|
||||
default_manifest_yaml,
|
||||
default_view_name,
|
||||
display_specs,
|
||||
environment_dir_from_name,
|
||||
exists,
|
||||
initialize_environment_dir,
|
||||
installed_specs,
|
||||
is_env_dir,
|
||||
is_latest_format,
|
||||
@@ -372,14 +369,11 @@
|
||||
"all_environments",
|
||||
"config_dict",
|
||||
"create",
|
||||
"create_in_dir",
|
||||
"deactivate",
|
||||
"default_manifest_yaml",
|
||||
"default_view_name",
|
||||
"display_specs",
|
||||
"environment_dir_from_name",
|
||||
"exists",
|
||||
"initialize_environment_dir",
|
||||
"installed_specs",
|
||||
"is_env_dir",
|
||||
"is_latest_format",
|
||||
|
||||
@@ -1,239 +0,0 @@
|
||||
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
|
||||
# Spack Project Developers. See the top-level COPYRIGHT file for details.
|
||||
#
|
||||
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
|
||||
"""
|
||||
This module contains the traversal logic and models that can be used to generate
|
||||
depfiles from an environment.
|
||||
"""
|
||||
|
||||
import os
|
||||
from enum import Enum
|
||||
from typing import List, Optional
|
||||
|
||||
import spack.environment.environment as ev
|
||||
import spack.spec
|
||||
import spack.traverse as traverse
|
||||
|
||||
|
||||
class UseBuildCache(Enum):
|
||||
ONLY = 1
|
||||
NEVER = 2
|
||||
AUTO = 3
|
||||
|
||||
@staticmethod
|
||||
def from_string(s: str) -> "UseBuildCache":
|
||||
if s == "only":
|
||||
return UseBuildCache.ONLY
|
||||
elif s == "never":
|
||||
return UseBuildCache.NEVER
|
||||
elif s == "auto":
|
||||
return UseBuildCache.AUTO
|
||||
raise ValueError(f"invalid value for UseBuildCache: {s}")
|
||||
|
||||
|
||||
def _deptypes(use_buildcache: UseBuildCache):
|
||||
"""What edges should we follow for a given node? If it's a cache-only
|
||||
node, then we can drop build type deps."""
|
||||
return ("link", "run") if use_buildcache == UseBuildCache.ONLY else ("build", "link", "run")
|
||||
|
||||
|
||||
class DepfileNode:
|
||||
"""Contains a spec, a subset of its dependencies, and a flag whether it should be
|
||||
buildcache only/never/auto."""
|
||||
|
||||
def __init__(
|
||||
self, target: spack.spec.Spec, prereqs: List[spack.spec.Spec], buildcache: UseBuildCache
|
||||
):
|
||||
self.target = target
|
||||
self.prereqs = prereqs
|
||||
if buildcache == UseBuildCache.ONLY:
|
||||
self.buildcache_flag = "--use-buildcache=only"
|
||||
elif buildcache == UseBuildCache.NEVER:
|
||||
self.buildcache_flag = "--use-buildcache=never"
|
||||
else:
|
||||
self.buildcache_flag = ""
|
||||
|
||||
|
||||
class DepfileSpecVisitor:
|
||||
"""This visitor produces an adjacency list of a (reduced) DAG, which
|
||||
is used to generate depfile targets with their prerequisites. Currently
|
||||
it only drops build deps when using buildcache only mode.
|
||||
|
||||
Note that the DAG could be reduced even more by dropping build edges of specs
|
||||
installed at the moment the depfile is generated, but that would produce
|
||||
stateful depfiles that would not fail when the database is wiped later."""
|
||||
|
||||
def __init__(self, pkg_buildcache: UseBuildCache, deps_buildcache: UseBuildCache):
|
||||
self.adjacency_list: List[DepfileNode] = []
|
||||
self.pkg_buildcache = pkg_buildcache
|
||||
self.deps_buildcache = deps_buildcache
|
||||
self.deptypes_root = _deptypes(pkg_buildcache)
|
||||
self.deptypes_deps = _deptypes(deps_buildcache)
|
||||
|
||||
def neighbors(self, node):
|
||||
"""Produce a list of spec to follow from node"""
|
||||
deptypes = self.deptypes_root if node.depth == 0 else self.deptypes_deps
|
||||
return traverse.sort_edges(node.edge.spec.edges_to_dependencies(deptype=deptypes))
|
||||
|
||||
def accept(self, node):
|
||||
self.adjacency_list.append(
|
||||
DepfileNode(
|
||||
target=node.edge.spec,
|
||||
prereqs=[edge.spec for edge in self.neighbors(node)],
|
||||
buildcache=self.pkg_buildcache if node.depth == 0 else self.deps_buildcache,
|
||||
)
|
||||
)
|
||||
|
||||
# We already accepted this
|
||||
return True
|
||||
|
||||
|
||||
class MakefileModel:
|
||||
"""This class produces all data to render a makefile for specs of an environment."""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
env: ev.Environment,
|
||||
roots: List[spack.spec.Spec],
|
||||
adjacency_list: List[DepfileNode],
|
||||
make_prefix: Optional[str],
|
||||
jobserver: bool,
|
||||
):
|
||||
"""
|
||||
Args:
|
||||
env: environment to generate the makefile for
|
||||
roots: specs that get built in the default target
|
||||
adjacency_list: list of DepfileNode, mapping specs to their dependencies
|
||||
make_prefix: prefix for makefile targets
|
||||
jobserver: when enabled, make will invoke Spack with jobserver support. For
|
||||
dry-run this should be disabled.
|
||||
"""
|
||||
# Currently we can only use depfile with an environment since Spack needs to
|
||||
# find the concrete specs somewhere.
|
||||
self.env_path = env.path
|
||||
|
||||
# These specs are built in the default target.
|
||||
self.roots = roots
|
||||
|
||||
# The SPACK_PACKAGE_IDS variable is "exported", which can be used when including
|
||||
# generated makefiles to add post-install hooks, like pushing to a buildcache,
|
||||
# running tests, etc.
|
||||
if make_prefix is None:
|
||||
self.make_prefix = os.path.join(env.env_subdir_path, "makedeps")
|
||||
self.pkg_identifier_variable = "SPACK_PACKAGE_IDS"
|
||||
else:
|
||||
# NOTE: GNU Make allows directory separators in variable names, so for consistency
|
||||
# we can namespace this variable with the same prefix as targets.
|
||||
self.make_prefix = make_prefix
|
||||
self.pkg_identifier_variable = os.path.join(make_prefix, "SPACK_PACKAGE_IDS")
|
||||
|
||||
# And here we collect a tuple of (target, prereqs, dag_hash, nice_name, buildcache_flag)
|
||||
self.make_adjacency_list = [
|
||||
(
|
||||
self._safe_name(item.target),
|
||||
" ".join(self._install_target(self._safe_name(s)) for s in item.prereqs),
|
||||
item.target.dag_hash(),
|
||||
item.target.format("{name}{@version}{%compiler}{variants}{arch=architecture}"),
|
||||
item.buildcache_flag,
|
||||
)
|
||||
for item in adjacency_list
|
||||
]
|
||||
|
||||
# Root specs without deps are the prereqs for the environment target
|
||||
self.root_install_targets = [self._install_target(self._safe_name(s)) for s in roots]
|
||||
|
||||
self.jobserver_support = "+" if jobserver else ""
|
||||
|
||||
# All package identifiers, used to generate the SPACK_PACKAGE_IDS variable
|
||||
self.all_pkg_identifiers: List[str] = []
|
||||
|
||||
# All install and install-deps targets
|
||||
self.all_install_related_targets: List[str] = []
|
||||
|
||||
# Convenience shortcuts: ensure that `make install/pkg-version-hash` triggers
|
||||
# <absolute path to env>/.spack-env/makedeps/install/pkg-version-hash in case
|
||||
# we don't have a custom make target prefix.
|
||||
self.phony_convenience_targets: List[str] = []
|
||||
|
||||
for node in adjacency_list:
|
||||
tgt = self._safe_name(node.target)
|
||||
self.all_pkg_identifiers.append(tgt)
|
||||
self.all_install_related_targets.append(self._install_target(tgt))
|
||||
self.all_install_related_targets.append(self._install_deps_target(tgt))
|
||||
if make_prefix is None:
|
||||
self.phony_convenience_targets.append(os.path.join("install", tgt))
|
||||
self.phony_convenience_targets.append(os.path.join("install-deps", tgt))
|
||||
|
||||
def _safe_name(self, spec: spack.spec.Spec) -> str:
|
||||
return spec.format("{name}-{version}-{hash}")
|
||||
|
||||
def _target(self, name: str) -> str:
|
||||
# The `all` and `clean` targets are phony. It doesn't make sense to
|
||||
# have /abs/path/to/env/metadir/{all,clean} targets. But it *does* make
|
||||
# sense to have a prefix like `env/all`, `env/clean` when they are
|
||||
# supposed to be included
|
||||
if name in ("all", "clean") and os.path.isabs(self.make_prefix):
|
||||
return name
|
||||
else:
|
||||
return os.path.join(self.make_prefix, name)
|
||||
|
||||
def _install_target(self, name: str) -> str:
|
||||
return os.path.join(self.make_prefix, "install", name)
|
||||
|
||||
def _install_deps_target(self, name: str) -> str:
|
||||
return os.path.join(self.make_prefix, "install-deps", name)
|
||||
|
||||
def to_dict(self):
|
||||
return {
|
||||
"all_target": self._target("all"),
|
||||
"env_target": self._target("env"),
|
||||
"clean_target": self._target("clean"),
|
||||
"all_install_related_targets": " ".join(self.all_install_related_targets),
|
||||
"root_install_targets": " ".join(self.root_install_targets),
|
||||
"dirs_target": self._target("dirs"),
|
||||
"environment": self.env_path,
|
||||
"install_target": self._target("install"),
|
||||
"install_deps_target": self._target("install-deps"),
|
||||
"any_hash_target": self._target("%"),
|
||||
"jobserver_support": self.jobserver_support,
|
||||
"adjacency_list": self.make_adjacency_list,
|
||||
"phony_convenience_targets": " ".join(self.phony_convenience_targets),
|
||||
"pkg_ids_variable": self.pkg_identifier_variable,
|
||||
"pkg_ids": " ".join(self.all_pkg_identifiers),
|
||||
}
|
||||
|
||||
@staticmethod
|
||||
def from_env(
|
||||
env: ev.Environment,
|
||||
*,
|
||||
filter_specs: Optional[List[spack.spec.Spec]] = None,
|
||||
pkg_buildcache: UseBuildCache = UseBuildCache.AUTO,
|
||||
dep_buildcache: UseBuildCache = UseBuildCache.AUTO,
|
||||
make_prefix: Optional[str] = None,
|
||||
jobserver: bool = True,
|
||||
) -> "MakefileModel":
|
||||
"""Produces a MakefileModel from an environment and a list of specs.
|
||||
|
||||
Args:
|
||||
env: the environment to use
|
||||
filter_specs: if provided, only these specs will be built from the environment,
|
||||
otherwise the environment roots are used.
|
||||
pkg_buildcache: whether to only use the buildcache for top-level specs.
|
||||
dep_buildcache: whether to only use the buildcache for non-top-level specs.
|
||||
make_prefix: the prefix for the makefile targets
|
||||
jobserver: when enabled, make will invoke Spack with jobserver support. For
|
||||
dry-run this should be disabled.
|
||||
"""
|
||||
# If no specs are provided as a filter, build all the specs in the environment.
|
||||
if filter_specs:
|
||||
entrypoints = [env.matching_spec(s) for s in filter_specs]
|
||||
else:
|
||||
entrypoints = [s for _, s in env.concretized_specs()]
|
||||
|
||||
visitor = DepfileSpecVisitor(pkg_buildcache, dep_buildcache)
|
||||
traverse.traverse_breadth_first_with_visitor(
|
||||
entrypoints, traverse.CoverNodesVisitor(visitor, key=lambda s: s.dag_hash())
|
||||
)
|
||||
|
||||
return MakefileModel(env, entrypoints, visitor.adjacency_list, make_prefix, jobserver)
|
||||
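A hedged usage sketch for the model above, assuming an active environment and that the module is importable as `spack.environment.depfile` (this mirrors what the `env depfile` command does):

import os

import spack.environment as ev
import spack.environment.depfile as depfile
import spack.tengine

# Build the model from the active environment and render the Makefile template,
# much like `spack env depfile` does.
env = ev.active_environment()
model = depfile.MakefileModel.from_env(
    env,
    pkg_buildcache=depfile.UseBuildCache.AUTO,
    dep_buildcache=depfile.UseBuildCache.AUTO,
)
template = spack.tengine.make_environment().get_template(os.path.join("depfile", "Makefile"))
print(template.render(model.to_dict()))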
@@ -12,7 +12,7 @@
|
||||
Currently the following hooks are supported (a rough sketch of a hook module follows the list):
|
||||
|
||||
* pre_install(spec)
|
||||
* post_install(spec, explicit)
|
||||
* post_install(spec)
|
||||
* pre_uninstall(spec)
|
||||
* post_uninstall(spec)
|
||||
* on_install_start(spec)
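A rough sketch of what such a hook module looks like (this is not one of the real hook files; only the function names and the single `spec` argument follow the list above):

import llnl.util.tty as tty


def pre_install(spec):
    # Called right before a package starts building.
    tty.debug("about to install {0}".format(spec.name))


def post_install(spec):
    # Called after the package is installed into its prefix.
    tty.debug("finished installing {0}".format(spec.name))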
|
||||
|
||||
@@ -131,7 +131,7 @@ def find_and_patch_sonames(prefix, exclude_list, patchelf):
|
||||
return patch_sonames(patchelf, prefix, relative_paths)
|
||||
|
||||
|
||||
def post_install(spec, explicit=None):
|
||||
def post_install(spec):
|
||||
# Skip if disabled
|
||||
if not spack.config.get("config:shared_linking:bind", False):
|
||||
return
|
||||
|
||||
@@ -9,7 +9,8 @@
|
||||
from llnl.util.filesystem import mkdirp
|
||||
from llnl.util.symlink import symlink
|
||||
|
||||
import spack.util.editor as ed
|
||||
from spack.util.editor import editor
|
||||
from spack.util.executable import Executable, which
|
||||
|
||||
|
||||
def pre_install(spec):
|
||||
@@ -37,9 +38,29 @@ def set_up_license(pkg):
|
||||
if not os.path.exists(license_path):
|
||||
# Create a new license file
|
||||
write_license_file(pkg, license_path)
|
||||
# Open up file in user's favorite $EDITOR for editing
|
||||
editor_exe = None
|
||||
if "VISUAL" in os.environ:
|
||||
editor_exe = Executable(os.environ["VISUAL"])
|
||||
# gvim runs in the background by default so we force it to run
|
||||
# in the foreground to make sure the license file is updated
|
||||
# before we try to install
|
||||
if "gvim" in os.environ["VISUAL"]:
|
||||
editor_exe.add_default_arg("-f")
|
||||
elif "EDITOR" in os.environ:
|
||||
editor_exe = Executable(os.environ["EDITOR"])
|
||||
else:
|
||||
editor_exe = which("vim", "vi", "emacs", "nano")
|
||||
if editor_exe is None:
|
||||
raise EnvironmentError(
|
||||
"No text editor found! Please set the VISUAL and/or EDITOR"
|
||||
" environment variable(s) to your preferred text editor."
|
||||
)
|
||||
|
||||
# use spack.util.executable so the editor does not hang on return here
|
||||
ed.editor(license_path, exec_fn=ed.executable)
|
||||
def editor_wrapper(exe, args):
|
||||
editor_exe(license_path)
|
||||
|
||||
editor(license_path, _exec_func=editor_wrapper)
|
||||
else:
|
||||
# Use already existing license file
|
||||
tty.msg("Found already existing license %s" % license_path)
|
||||
@@ -148,7 +169,7 @@ def write_license_file(pkg, license_path):
|
||||
f.close()
|
||||
|
||||
|
||||
def post_install(spec, explicit=None):
|
||||
def post_install(spec):
|
||||
"""This hook symlinks local licenses to the global license for
|
||||
licensed software.
|
||||
"""
|
||||
|
||||
@@ -3,24 +3,31 @@
|
||||
#
|
||||
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
|
||||
|
||||
from llnl.util import tty
|
||||
import llnl.util.tty as tty
|
||||
|
||||
import spack.config
|
||||
import spack.modules
|
||||
import spack.modules.common
|
||||
|
||||
|
||||
def _for_each_enabled(spec, method_name, explicit=None):
|
||||
def _for_each_enabled(spec, method_name):
|
||||
"""Calls a method for each enabled module"""
|
||||
spack.modules.ensure_modules_are_enabled_or_warn()
|
||||
set_names = set(spack.config.get("modules", {}).keys())
|
||||
# If we have old-style modules enabled, we put those in the default set
|
||||
old_default_enabled = spack.config.get("modules:enable")
|
||||
if old_default_enabled:
|
||||
set_names.add("default")
|
||||
for name in set_names:
|
||||
enabled = spack.config.get("modules:%s:enable" % name)
|
||||
if name == "default":
|
||||
# combine enabled modules from default and old format
|
||||
enabled = spack.config.merge_yaml(old_default_enabled, enabled)
|
||||
if not enabled:
|
||||
tty.debug("NO MODULE WRITTEN: list of enabled module files is empty")
|
||||
continue
|
||||
|
||||
for module_type in enabled:
|
||||
generator = spack.modules.module_types[module_type](spec, name, explicit)
|
||||
for type in enabled:
|
||||
generator = spack.modules.module_types[type](spec, name)
|
||||
try:
|
||||
getattr(generator, method_name)()
|
||||
except RuntimeError as e:
|
||||
@@ -29,7 +36,7 @@ def _for_each_enabled(spec, method_name, explicit=None):
|
||||
tty.warn(msg.format(method_name, str(e)))
|
||||
|
||||
|
||||
def post_install(spec, explicit):
|
||||
def post_install(spec):
|
||||
import spack.environment as ev # break import cycle
|
||||
|
||||
if ev.active_environment():
|
||||
@@ -38,7 +45,7 @@ def post_install(spec, explicit):
|
||||
# can manage interactions between env views and modules
|
||||
return
|
||||
|
||||
_for_each_enabled(spec, "write", explicit)
|
||||
_for_each_enabled(spec, "write")
|
||||
|
||||
|
||||
def post_uninstall(spec):
|
||||
|
||||
@@ -8,7 +8,7 @@
|
||||
import spack.util.file_permissions as fp
|
||||
|
||||
|
||||
def post_install(spec, explicit=None):
|
||||
def post_install(spec):
|
||||
if not spec.external:
|
||||
fp.set_permissions_by_spec(spec.prefix, spec)
|
||||
|
||||
|
||||
@@ -224,7 +224,7 @@ def install_sbang():
|
||||
os.rename(sbang_tmp_path, sbang_path)
|
||||
|
||||
|
||||
def post_install(spec, explicit=None):
|
||||
def post_install(spec):
|
||||
"""This hook edits scripts so that they call /bin/bash
|
||||
$spack_prefix/bin/sbang instead of something longer than the
|
||||
shebang limit.
|
||||
|
||||
@@ -6,6 +6,6 @@
|
||||
import spack.verify
|
||||
|
||||
|
||||
def post_install(spec, explicit=None):
|
||||
def post_install(spec):
|
||||
if not spec.external:
|
||||
spack.verify.write_manifest(spec)
|
||||
|
||||
@@ -315,7 +315,7 @@ def _install_from_cache(pkg, cache_only, explicit, unsigned=False):
|
||||
tty.debug("Successfully extracted {0} from binary cache".format(pkg_id))
|
||||
_print_timer(pre=_log_prefix(pkg.name), pkg_id=pkg_id, timer=t)
|
||||
_print_installed_pkg(pkg.spec.prefix)
|
||||
spack.hooks.post_install(pkg.spec, explicit)
|
||||
spack.hooks.post_install(pkg.spec)
|
||||
return True
|
||||
|
||||
|
||||
@@ -353,7 +353,7 @@ def _process_external_package(pkg, explicit):
|
||||
# For external packages we just need to run
|
||||
# post-install hooks to generate module files.
|
||||
tty.debug("{0} generating module file".format(pre))
|
||||
spack.hooks.post_install(spec, explicit)
|
||||
spack.hooks.post_install(spec)
|
||||
|
||||
# Add to the DB
|
||||
tty.debug("{0} registering into DB".format(pre))
|
||||
@@ -1260,10 +1260,6 @@ def _install_task(self, task):
|
||||
if not pkg.unit_test_check():
|
||||
return
|
||||
|
||||
# Injecting information to know if this installation request is the root one
|
||||
# to determine in BuildProcessInstaller whether installation is explicit or not
|
||||
install_args["is_root"] = task.is_root
|
||||
|
||||
try:
|
||||
self._setup_install_dir(pkg)
|
||||
|
||||
@@ -1883,9 +1879,6 @@ def __init__(self, pkg, install_args):
|
||||
# whether to enable echoing of build output initially or not
|
||||
self.verbose = install_args.get("verbose", False)
|
||||
|
||||
# whether installation was explicitly requested by the user
|
||||
self.explicit = install_args.get("is_root", False) and install_args.get("explicit", True)
|
||||
|
||||
# env before starting installation
|
||||
self.unmodified_env = install_args.get("unmodified_env", {})
|
||||
|
||||
@@ -1946,7 +1939,7 @@ def run(self):
|
||||
self.timer.write_json(timelog)
|
||||
|
||||
# Run post install hooks before build stage is removed.
|
||||
spack.hooks.post_install(self.pkg.spec, self.explicit)
|
||||
spack.hooks.post_install(self.pkg.spec)
|
||||
|
||||
_print_timer(pre=self.pre, pkg_id=self.pkg_id, timer=self.timer)
|
||||
_print_installed_pkg(self.pkg.prefix)
|
||||
@@ -2420,10 +2413,7 @@ def get_deptypes(self, pkg):
|
||||
else:
|
||||
cache_only = self.install_args.get("dependencies_cache_only")
|
||||
|
||||
# Include build dependencies if pkg is not installed and cache_only
|
||||
# is False, or if build depdencies are explicitly called for
|
||||
# by include_build_deps.
|
||||
if include_build_deps or not (cache_only or pkg.spec.installed):
|
||||
if not cache_only or include_build_deps:
|
||||
deptypes.append("build")
|
||||
if self.run_tests(pkg):
|
||||
deptypes.append("test")
|
||||
|
||||
@@ -59,10 +59,9 @@ def filter_compiler_wrappers(*files, **kwargs):
|
||||
|
||||
find_kwargs = {"recursive": kwargs.get("recursive", False)}
|
||||
|
||||
def _filter_compiler_wrappers_impl(pkg_or_builder):
|
||||
pkg = getattr(pkg_or_builder, "pkg", pkg_or_builder)
|
||||
def _filter_compiler_wrappers_impl(self):
|
||||
# Compute the absolute path of the search root
|
||||
root = os.path.join(pkg.prefix, relative_root) if relative_root else pkg.prefix
|
||||
root = os.path.join(self.prefix, relative_root) if relative_root else self.prefix
|
||||
|
||||
# Compute the absolute path of the files to be filtered and
|
||||
# remove links from the list.
|
||||
@@ -72,10 +71,10 @@ def _filter_compiler_wrappers_impl(pkg_or_builder):
|
||||
x = llnl.util.filesystem.FileFilter(*abs_files)
|
||||
|
||||
compiler_vars = [
|
||||
("CC", pkg.compiler.cc),
|
||||
("CXX", pkg.compiler.cxx),
|
||||
("F77", pkg.compiler.f77),
|
||||
("FC", pkg.compiler.fc),
|
||||
("CC", self.compiler.cc),
|
||||
("CXX", self.compiler.cxx),
|
||||
("F77", self.compiler.f77),
|
||||
("FC", self.compiler.fc),
|
||||
]
|
||||
|
||||
# Some paths to the compiler wrappers might be substrings of the others.
|
||||
@@ -104,11 +103,11 @@ def _filter_compiler_wrappers_impl(pkg_or_builder):
|
||||
x.filter(wrapper_path, compiler_path, **filter_kwargs)
|
||||
|
||||
# Remove this linking flag if present (it turns RPATH into RUNPATH)
|
||||
x.filter("{0}--enable-new-dtags".format(pkg.compiler.linker_arg), "", **filter_kwargs)
|
||||
x.filter("{0}--enable-new-dtags".format(self.compiler.linker_arg), "", **filter_kwargs)
|
||||
|
||||
# NAG compiler is usually mixed with GCC, which has a different
|
||||
# prefix for linker arguments.
|
||||
if pkg.compiler.name == "nag":
|
||||
if self.compiler.name == "nag":
|
||||
x.filter("-Wl,--enable-new-dtags", "", **filter_kwargs)
|
||||
|
||||
spack.builder.run_after(after)(_filter_compiler_wrappers_impl)
|
||||
|
||||
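For context, a hedged example of how a package would opt in to the directive changed in the hunk above (the package and wrapper names are illustrative; `relative_root` is the keyword handled in the code):

# Illustrative only -- a build-system package filtering its installed wrapper scripts.
class MyMpi(AutotoolsPackage):
    ...
    filter_compiler_wrappers("mpicc", "mpicxx", "mpif90", relative_root="bin")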
@@ -4,20 +4,15 @@
|
||||
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
|
||||
|
||||
"""This package contains code for creating environment modules, which can
|
||||
include Tcl non-hierarchical modules, Lua hierarchical modules, and others.
|
||||
include TCL non-hierarchical modules, LUA hierarchical modules, and others.
|
||||
"""
|
||||
|
||||
from __future__ import absolute_import
|
||||
|
||||
from .common import disable_modules, ensure_modules_are_enabled_or_warn
|
||||
from .common import disable_modules
|
||||
from .lmod import LmodModulefileWriter
|
||||
from .tcl import TclModulefileWriter
|
||||
|
||||
__all__ = [
|
||||
"TclModulefileWriter",
|
||||
"LmodModulefileWriter",
|
||||
"disable_modules",
|
||||
"ensure_modules_are_enabled_or_warn",
|
||||
]
|
||||
__all__ = ["TclModulefileWriter", "LmodModulefileWriter", "disable_modules"]
|
||||
|
||||
module_types = {"tcl": TclModulefileWriter, "lmod": LmodModulefileWriter}
|
||||
|
||||
@@ -33,9 +33,7 @@
|
||||
import datetime
|
||||
import inspect
|
||||
import os.path
|
||||
import pathlib
|
||||
import re
|
||||
import warnings
|
||||
from typing import Optional
|
||||
|
||||
import llnl.util.filesystem
|
||||
@@ -430,17 +428,12 @@ class BaseConfiguration(object):
|
||||
|
||||
default_projections = {"all": "{name}-{version}-{compiler.name}-{compiler.version}"}
|
||||
|
||||
def __init__(self, spec, module_set_name, explicit=None):
|
||||
def __init__(self, spec, module_set_name):
|
||||
# Module where type(self) is defined
|
||||
self.module = inspect.getmodule(self)
|
||||
# Spec for which we want to generate a module file
|
||||
self.spec = spec
|
||||
self.name = module_set_name
|
||||
# Software installation has been explicitly asked (get this information from
|
||||
# db when querying an existing module, like during a refresh or rm operations)
|
||||
if explicit is None:
|
||||
explicit = spec._installed_explicitly()
|
||||
self.explicit = explicit
|
||||
# Dictionary of configuration options that should be applied
|
||||
# to the spec
|
||||
self.conf = merge_config_rules(self.module.configuration(self.name), self.spec)
|
||||
@@ -526,7 +519,8 @@ def excluded(self):
|
||||
# Should I exclude the module because it's implicit?
|
||||
# DEPRECATED: remove 'blacklist_implicits' in v0.20
|
||||
exclude_implicits = get_deprecated(conf, "exclude_implicits", "blacklist_implicits", None)
|
||||
excluded_as_implicit = exclude_implicits and not self.explicit
|
||||
installed_implicitly = not spec._installed_explicitly()
|
||||
excluded_as_implicit = exclude_implicits and installed_implicitly
|
||||
|
||||
def debug_info(line_header, match_list):
|
||||
if match_list:
|
||||
@@ -705,7 +699,7 @@ def configure_options(self):
|
||||
|
||||
if os.path.exists(pkg.install_configure_args_path):
|
||||
with open(pkg.install_configure_args_path, "r") as args_file:
|
||||
return spack.util.path.padding_filter(args_file.read())
|
||||
return args_file.read()
|
||||
|
||||
# Returning a false-like value makes the default templates skip
|
||||
# the configure option section
|
||||
@@ -794,8 +788,7 @@ def autoload(self):
|
||||
def _create_module_list_of(self, what):
|
||||
m = self.conf.module
|
||||
name = self.conf.name
|
||||
explicit = self.conf.explicit
|
||||
return [m.make_layout(x, name, explicit).use_name for x in getattr(self.conf, what)]
|
||||
return [m.make_layout(x, name).use_name for x in getattr(self.conf, what)]
|
||||
|
||||
@tengine.context_property
|
||||
def verbose(self):
|
||||
@@ -803,45 +796,8 @@ def verbose(self):
|
||||
return self.conf.verbose
|
||||
|
||||
|
||||
def ensure_modules_are_enabled_or_warn():
    """Ensures that, if a custom configuration file is found with custom configuration for the
    default tcl module set, then tcl module file generation is enabled. Otherwise, a warning
    is emitted.
    """

    # TODO (v0.21 - Remove this function)
    # Check if TCL module generation is enabled, return early if it is
    enabled = spack.config.get("modules:default:enable", [])
    if "tcl" in enabled:
        return

    # Check if we have custom TCL module sections
    for scope in spack.config.config.file_scopes:
        # Skip default configuration
        if scope.name.startswith("default"):
            continue

        data = spack.config.get("modules:default:tcl", scope=scope.name)
        if data:
            config_file = pathlib.Path(scope.path)
            if not scope.name.startswith("env"):
                config_file = config_file / "modules.yaml"
            break
    else:
        return

    # If we are here we have a custom "modules" section in "config_file"
    msg = (
        f"detected custom TCL modules configuration in {config_file}, while TCL module file "
        f"generation for the default module set is disabled. "
        f"In Spack v0.20 module file generation has been disabled by default. To enable "
        f"it run:\n\n\t$ spack config add 'modules:default:enable:[tcl]'\n"
    )
    warnings.warn(msg)

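As a hedged illustration of the situation the function above warns about (plain dicts stand in for the YAML scopes; the key names mirror the code):

# Hypothetical user scope: a custom tcl section exists ...
user_modules_yaml = {"modules": {"default": {"tcl": {"hash_length": 0}}}}
# ... but tcl generation is not enabled for the default set:
enabled = spack.config.get("modules:default:enable", [])   # -> []
# ensure_modules_are_enabled_or_warn() then suggests running:
#     spack config add 'modules:default:enable:[tcl]'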
class BaseModuleFileWriter(object):
|
||||
def __init__(self, spec, module_set_name, explicit=None):
|
||||
def __init__(self, spec, module_set_name):
|
||||
self.spec = spec
|
||||
|
||||
# This class is meant to be derived. Get the module of the
|
||||
@@ -850,9 +806,9 @@ def __init__(self, spec, module_set_name, explicit=None):
|
||||
m = self.module
|
||||
|
||||
# Create the triplet of configuration/layout/context
|
||||
self.conf = m.make_configuration(spec, module_set_name, explicit)
|
||||
self.layout = m.make_layout(spec, module_set_name, explicit)
|
||||
self.context = m.make_context(spec, module_set_name, explicit)
|
||||
self.conf = m.make_configuration(spec, module_set_name)
|
||||
self.layout = m.make_layout(spec, module_set_name)
|
||||
self.context = m.make_context(spec, module_set_name)
|
||||
|
||||
# Check if a default template has been defined,
|
||||
# throw if not found
|
||||
@@ -974,7 +930,6 @@ def remove(self):
|
||||
if os.path.exists(mod_file):
|
||||
try:
|
||||
os.remove(mod_file) # Remove the module file
|
||||
self.remove_module_defaults() # Remove default targeting module file
|
||||
os.removedirs(
|
||||
os.path.dirname(mod_file)
|
||||
) # Remove all the empty directories from the leaf up
|
||||
@@ -982,18 +937,6 @@ def remove(self):
|
||||
# removedirs throws OSError on first non-empty directory found
|
||||
pass
|
||||
|
||||
def remove_module_defaults(self):
|
||||
if not any(self.spec.satisfies(default) for default in self.conf.defaults):
|
||||
return
|
||||
|
||||
# This spec matches a default, symlink needs to be removed as we remove the module
|
||||
# file it targets.
|
||||
default_symlink = os.path.join(os.path.dirname(self.layout.filename), "default")
|
||||
try:
|
||||
os.unlink(default_symlink)
|
||||
except OSError:
|
||||
pass
|
||||
|
||||
|
||||
@contextlib.contextmanager
|
||||
def disable_modules():
|
||||
|
||||
@@ -33,26 +33,24 @@ def configuration(module_set_name):
|
||||
configuration_registry: Dict[str, Any] = {}
|
||||
|
||||
|
||||
def make_configuration(spec, module_set_name, explicit):
|
||||
def make_configuration(spec, module_set_name):
|
||||
"""Returns the lmod configuration for spec"""
|
||||
key = (spec.dag_hash(), module_set_name, explicit)
|
||||
key = (spec.dag_hash(), module_set_name)
|
||||
try:
|
||||
return configuration_registry[key]
|
||||
except KeyError:
|
||||
return configuration_registry.setdefault(
|
||||
key, LmodConfiguration(spec, module_set_name, explicit)
|
||||
)
|
||||
return configuration_registry.setdefault(key, LmodConfiguration(spec, module_set_name))
|
||||
|
||||
|
||||
def make_layout(spec, module_set_name, explicit):
|
||||
def make_layout(spec, module_set_name):
|
||||
"""Returns the layout information for spec"""
|
||||
conf = make_configuration(spec, module_set_name, explicit)
|
||||
conf = make_configuration(spec, module_set_name)
|
||||
return LmodFileLayout(conf)
|
||||
|
||||
|
||||
def make_context(spec, module_set_name, explicit):
|
||||
def make_context(spec, module_set_name):
|
||||
"""Returns the context information for spec"""
|
||||
conf = make_configuration(spec, module_set_name, explicit)
|
||||
conf = make_configuration(spec, module_set_name)
|
||||
return LmodContext(conf)
|
||||
|
||||
|
||||
@@ -126,11 +124,6 @@ def core_specs(self):
|
||||
"""Returns the list of "Core" specs"""
|
||||
return configuration(self.name).get("core_specs", [])
|
||||
|
||||
@property
|
||||
def filter_hierarchy_specs(self):
|
||||
"""Returns the dict of specs with modified hierarchies"""
|
||||
return configuration(self.name).get("filter_hierarchy_specs", {})
|
||||
|
||||
@property
|
||||
def hierarchy_tokens(self):
|
||||
"""Returns the list of tokens that are part of the modulefile
|
||||
@@ -165,21 +158,11 @@ def requires(self):
|
||||
if any(self.spec.satisfies(core_spec) for core_spec in self.core_specs):
|
||||
return {"compiler": self.core_compilers[0]}
|
||||
|
||||
hierarchy_filter_list = []
|
||||
for spec, filter_list in self.filter_hierarchy_specs.items():
|
||||
if self.spec.satisfies(spec):
|
||||
hierarchy_filter_list = filter_list
|
||||
break
|
||||
|
||||
# Keep track of the requirements that this package has in terms
|
||||
# of virtual packages that participate in the hierarchical structure
|
||||
requirements = {"compiler": self.spec.compiler}
|
||||
# For each virtual dependency in the hierarchy
|
||||
for x in self.hierarchy_tokens:
|
||||
# Skip anything filtered for this spec
|
||||
if x in hierarchy_filter_list:
|
||||
continue
|
||||
|
||||
# If I depend on it
|
||||
if x in self.spec and not self.spec.package.provides(x):
|
||||
requirements[x] = self.spec[x] # record the actual provider
|
||||
@@ -426,7 +409,7 @@ def missing(self):
|
||||
@tengine.context_property
|
||||
def unlocked_paths(self):
|
||||
"""Returns the list of paths that are unlocked unconditionally."""
|
||||
layout = make_layout(self.spec, self.conf.name, self.conf.explicit)
|
||||
layout = make_layout(self.spec, self.conf.name)
|
||||
return [os.path.join(*parts) for parts in layout.unlocked_paths[None]]
|
||||
|
||||
@tengine.context_property
|
||||
@@ -434,7 +417,7 @@ def conditionally_unlocked_paths(self):
|
||||
"""Returns the list of paths that are unlocked conditionally.
|
||||
Each item in the list is a tuple with the structure (condition, path).
|
||||
"""
|
||||
layout = make_layout(self.spec, self.conf.name, self.conf.explicit)
|
||||
layout = make_layout(self.spec, self.conf.name)
|
||||
value = []
|
||||
conditional_paths = layout.unlocked_paths
|
||||
conditional_paths.pop(None)
|
||||
|
||||
@@ -3,7 +3,7 @@
|
||||
#
|
||||
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
|
||||
|
||||
"""This module implements the classes necessary to generate Tcl
|
||||
"""This module implements the classes necessary to generate TCL
|
||||
non-hierarchical modules.
|
||||
"""
|
||||
import posixpath
|
||||
@@ -19,7 +19,7 @@
|
||||
from .common import BaseConfiguration, BaseContext, BaseFileLayout, BaseModuleFileWriter
|
||||
|
||||
|
||||
#: Tcl specific part of the configuration
|
||||
#: TCL specific part of the configuration
|
||||
def configuration(module_set_name):
|
||||
config_path = "modules:%s:tcl" % module_set_name
|
||||
config = spack.config.get(config_path, {})
|
||||
@@ -30,26 +30,24 @@ def configuration(module_set_name):
|
||||
configuration_registry: Dict[str, Any] = {}
|
||||
|
||||
|
||||
def make_configuration(spec, module_set_name, explicit):
|
||||
def make_configuration(spec, module_set_name):
|
||||
"""Returns the tcl configuration for spec"""
|
||||
key = (spec.dag_hash(), module_set_name, explicit)
|
||||
key = (spec.dag_hash(), module_set_name)
|
||||
try:
|
||||
return configuration_registry[key]
|
||||
except KeyError:
|
||||
return configuration_registry.setdefault(
|
||||
key, TclConfiguration(spec, module_set_name, explicit)
|
||||
)
|
||||
return configuration_registry.setdefault(key, TclConfiguration(spec, module_set_name))
|
||||
|
||||
|
||||
def make_layout(spec, module_set_name, explicit):
|
||||
def make_layout(spec, module_set_name):
|
||||
"""Returns the layout information for spec"""
|
||||
conf = make_configuration(spec, module_set_name, explicit)
|
||||
conf = make_configuration(spec, module_set_name)
|
||||
return TclFileLayout(conf)
|
||||
|
||||
|
||||
def make_context(spec, module_set_name, explicit):
|
||||
def make_context(spec, module_set_name):
|
||||
"""Returns the context information for spec"""
|
||||
conf = make_configuration(spec, module_set_name, explicit)
|
||||
conf = make_configuration(spec, module_set_name)
|
||||
return TclContext(conf)
|
||||
|
||||
|
||||
|
||||
@@ -36,7 +36,7 @@
|
||||
cmake_cache_path,
|
||||
cmake_cache_string,
|
||||
)
|
||||
from spack.build_systems.cmake import CMakePackage, generator
|
||||
from spack.build_systems.cmake import CMakePackage
|
||||
from spack.build_systems.cuda import CudaPackage
|
||||
from spack.build_systems.generic import Package
|
||||
from spack.build_systems.gnu import GNUMirrorPackage
|
||||
|
||||
@@ -57,7 +57,7 @@
|
||||
from spack.filesystem_view import YamlFilesystemView
|
||||
from spack.install_test import TestFailure, TestSuite
|
||||
from spack.installer import InstallError, PackageInstaller
|
||||
from spack.stage import ResourceStage, Stage, StageComposite, compute_stage_name
|
||||
from spack.stage import ResourceStage, Stage, StageComposite, stage_prefix
|
||||
from spack.util.executable import ProcessError, which
|
||||
from spack.util.package_hash import package_hash
|
||||
from spack.util.prefix import Prefix
|
||||
@@ -1022,7 +1022,8 @@ def _make_root_stage(self, fetcher):
|
||||
)
|
||||
# Construct a path where the stage should build..
|
||||
s = self.spec
|
||||
stage_name = compute_stage_name(s)
|
||||
stage_name = "{0}{1}-{2}-{3}".format(stage_prefix, s.name, s.version, s.dag_hash())
|
||||
|
||||
stage = Stage(
|
||||
fetcher,
|
||||
mirror_paths=mirror_paths,
|
||||
|
||||
@@ -675,7 +675,7 @@ def relocate_text_bin(binaries, prefixes):
|
||||
Raises:
|
||||
spack.relocate_text.BinaryTextReplaceError: when the new path is longer than the old path
|
||||
"""
|
||||
return BinaryFilePrefixReplacer.from_strings_or_bytes(prefixes).apply(binaries)
|
||||
BinaryFilePrefixReplacer.from_strings_or_bytes(prefixes).apply(binaries)
|
||||
|
||||
|
||||
def is_relocatable(spec):
|
||||
|
||||
@@ -73,28 +73,24 @@ def is_noop(self) -> bool:
        """Returns true when the prefix to prefix map
        is mapping everything to the same location (identity)
        or there are no prefixes to replace."""
        return not self.prefix_to_prefix
        return not bool(self.prefix_to_prefix)

    def apply(self, filenames: list):
        """Returns a list of files that were modified"""
        changed_files = []
        if self.is_noop:
            return []
            return
        for filename in filenames:
            if self.apply_to_filename(filename):
                changed_files.append(filename)
        return changed_files
            self.apply_to_filename(filename)

    def apply_to_filename(self, filename):
        if self.is_noop:
            return False
            return
        with open(filename, "rb+") as f:
            return self.apply_to_file(f)
            self.apply_to_file(f)

    def apply_to_file(self, f):
        if self.is_noop:
            return False
        return self._apply_to_file(f)
            return
        self._apply_to_file(f)


class TextFilePrefixReplacer(PrefixReplacer):
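A hedged usage sketch of what the hunk above changes for callers: one variant of apply() reports which files were actually modified, the other returns nothing. Class and method names are taken from the diff; the mapping and paths are made up.

# Sketch only -- `prefix_to_prefix` is a stand-in old-prefix -> new-prefix mapping.
replacer = BinaryFilePrefixReplacer.from_strings_or_bytes(prefix_to_prefix)
changed = replacer.apply(["/opt/view/bin/tool", "/opt/view/lib/libfoo.so"])
if changed:  # list of patched files on one side of the diff, None on the other
    print("relocated:", ", ".join(changed))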
@@ -126,11 +122,10 @@ def _apply_to_file(self, f):
|
||||
data = f.read()
|
||||
new_data = re.sub(self.regex, replacement, data)
|
||||
if id(data) == id(new_data):
|
||||
return False
|
||||
return
|
||||
f.seek(0)
|
||||
f.write(new_data)
|
||||
f.truncate()
|
||||
return True
|
||||
|
||||
|
||||
class BinaryFilePrefixReplacer(PrefixReplacer):
|
||||
@@ -199,9 +194,6 @@ def _apply_to_file(self, f):
|
||||
|
||||
Arguments:
|
||||
f: file opened in rb+ mode
|
||||
|
||||
Returns:
|
||||
bool: True if file was modified
|
||||
"""
|
||||
assert f.tell() == 0
|
||||
|
||||
@@ -209,8 +201,6 @@ def _apply_to_file(self, f):
|
||||
# but it's nasty to deal with matches across boundaries, so let's stick to
|
||||
# something simple.
|
||||
|
||||
modified = True
|
||||
|
||||
for match in self.regex.finditer(f.read()):
|
||||
# The matching prefix (old) and its replacement (new)
|
||||
old = match.group(1)
|
||||
@@ -253,9 +243,6 @@ def _apply_to_file(self, f):
|
||||
|
||||
f.seek(match.start())
|
||||
f.write(replacement)
|
||||
modified = True
|
||||
|
||||
return modified
|
||||
|
||||
|
||||
class BinaryStringReplacementError(spack.error.SpackError):
|
||||
|
||||
@@ -1368,7 +1368,7 @@ def create(configuration):
|
||||
|
||||
|
||||
#: Singleton repo path instance
|
||||
path: Union[RepoPath, llnl.util.lang.Singleton] = llnl.util.lang.Singleton(_path)
|
||||
path = llnl.util.lang.Singleton(_path)
|
||||
|
||||
# Add the finder to sys.meta_path
|
||||
REPOS_FINDER = ReposFinder()
|
||||
|
||||
@@ -119,7 +119,7 @@ def rewire_node(spec, explicit):
|
||||
spack.store.db.add(spec, spack.store.layout, explicit=explicit)
|
||||
|
||||
# run post install hooks
|
||||
spack.hooks.post_install(spec, explicit)
|
||||
spack.hooks.post_install(spec)
|
||||
|
||||
|
||||
class RewireError(spack.error.SpackError):
|
||||
|
||||
@@ -15,8 +15,7 @@
|
||||
"cdash": {
|
||||
"type": "object",
|
||||
"additionalProperties": False,
|
||||
# "required": ["build-group", "url", "project", "site"],
|
||||
"required": ["build-group"],
|
||||
"required": ["build-group", "url", "project", "site"],
|
||||
"patternProperties": {
|
||||
r"build-group": {"type": "string"},
|
||||
r"url": {"type": "string"},
|
||||
|
||||
@@ -1,213 +0,0 @@
|
||||
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
|
||||
# Spack Project Developers. See the top-level COPYRIGHT file for details.
|
||||
#
|
||||
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
|
||||
|
||||
"""Schema for gitlab-ci.yaml configuration file.
|
||||
|
||||
.. literalinclude:: ../spack/schema/ci.py
|
||||
:lines: 13-
|
||||
"""
|
||||
|
||||
from llnl.util.lang import union_dicts
|
||||
|
||||
import spack.schema.gitlab_ci
|
||||
|
||||
# Schema for script fields
|
||||
# List of lists and/or strings
|
||||
# This is similar to what is allowed in
|
||||
# the gitlab schema
|
||||
script_schema = {
|
||||
"type": "array",
|
||||
"items": {"anyOf": [{"type": "string"}, {"type": "array", "items": {"type": "string"}}]},
|
||||
}
|
||||
|
||||
# Schema for CI image
|
||||
image_schema = {
|
||||
"oneOf": [
|
||||
{"type": "string"},
|
||||
{
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"name": {"type": "string"},
|
||||
"entrypoint": {"type": "array", "items": {"type": "string"}},
|
||||
},
|
||||
},
|
||||
]
|
||||
}
|
||||
|
||||
# Additional attributes are allow
|
||||
# and will be forwarded directly to the
|
||||
# CI target YAML for each job.
|
||||
attributes_schema = {
|
||||
"type": "object",
|
||||
"additionalProperties": True,
|
||||
"properties": {
|
||||
"image": image_schema,
|
||||
"tags": {"type": "array", "items": {"type": "string"}},
|
||||
"variables": {
|
||||
"type": "object",
|
||||
"patternProperties": {r"[\w\d\-_\.]+": {"type": "string"}},
|
||||
},
|
||||
"before_script": script_schema,
|
||||
"script": script_schema,
|
||||
"after_script": script_schema,
|
||||
},
|
||||
}
|
||||
|
||||
submapping_schema = {
|
||||
"type": "object",
|
||||
"additionalProperties": False,
|
||||
"required": ["submapping"],
|
||||
"properties": {
|
||||
"match_behavior": {"type": "string", "enum": ["first", "merge"], "default": "first"},
|
||||
"submapping": {
|
||||
"type": "array",
|
||||
"items": {
|
||||
"type": "object",
|
||||
"additionalProperties": False,
|
||||
"required": ["match"],
|
||||
"properties": {
|
||||
"match": {"type": "array", "items": {"type": "string"}},
|
||||
"build-job": attributes_schema,
|
||||
"build-job-remove": attributes_schema,
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
named_attributes_schema = {
|
||||
"oneOf": [
|
||||
{
|
||||
"type": "object",
|
||||
"additionalProperties": False,
|
||||
"properties": {"noop-job": attributes_schema, "noop-job-remove": attributes_schema},
|
||||
},
|
||||
{
|
||||
"type": "object",
|
||||
"additionalProperties": False,
|
||||
"properties": {"build-job": attributes_schema, "build-job-remove": attributes_schema},
|
||||
},
|
||||
{
|
||||
"type": "object",
|
||||
"additionalProperties": False,
|
||||
"properties": {
|
||||
"reindex-job": attributes_schema,
|
||||
"reindex-job-remove": attributes_schema,
|
||||
},
|
||||
},
|
||||
{
|
||||
"type": "object",
|
||||
"additionalProperties": False,
|
||||
"properties": {
|
||||
"signing-job": attributes_schema,
|
||||
"signing-job-remove": attributes_schema,
|
||||
},
|
||||
},
|
||||
{
|
||||
"type": "object",
|
||||
"additionalProperties": False,
|
||||
"properties": {
|
||||
"cleanup-job": attributes_schema,
|
||||
"cleanup-job-remove": attributes_schema,
|
||||
},
|
||||
},
|
||||
{
|
||||
"type": "object",
|
||||
"additionalProperties": False,
|
||||
"properties": {"any-job": attributes_schema, "any-job-remove": attributes_schema},
|
||||
},
|
||||
]
|
||||
}
|
||||
|
||||
pipeline_gen_schema = {
|
||||
"type": "array",
|
||||
"items": {"oneOf": [submapping_schema, named_attributes_schema]},
|
||||
}
|
||||
|
||||
core_shared_properties = union_dicts(
|
||||
{
|
||||
"pipeline-gen": pipeline_gen_schema,
|
||||
"bootstrap": {
|
||||
"type": "array",
|
||||
"items": {
|
||||
"anyOf": [
|
||||
{"type": "string"},
|
||||
{
|
||||
"type": "object",
|
||||
"additionalProperties": False,
|
||||
"required": ["name"],
|
||||
"properties": {
|
||||
"name": {"type": "string"},
|
||||
"compiler-agnostic": {"type": "boolean", "default": False},
|
||||
},
|
||||
},
|
||||
]
|
||||
},
|
||||
},
|
||||
"rebuild-index": {"type": "boolean"},
|
||||
"broken-specs-url": {"type": "string"},
|
||||
"broken-tests-packages": {"type": "array", "items": {"type": "string"}},
|
||||
"target": {"type": "string", "enum": ["gitlab"], "default": "gitlab"},
|
||||
}
|
||||
)
|
||||
|
||||
ci_properties = {
|
||||
"anyOf": [
|
||||
{
|
||||
"type": "object",
|
||||
"additionalProperties": False,
|
||||
# "required": ["mappings"],
|
||||
"properties": union_dicts(
|
||||
core_shared_properties, {"enable-artifacts-buildcache": {"type": "boolean"}}
|
||||
),
|
||||
},
|
||||
{
|
||||
"type": "object",
|
||||
"additionalProperties": False,
|
||||
# "required": ["mappings"],
|
||||
"properties": union_dicts(
|
||||
core_shared_properties, {"temporary-storage-url-prefix": {"type": "string"}}
|
||||
),
|
||||
},
|
||||
]
|
||||
}
|
||||
|
||||
#: Properties for inclusion in other schemas
|
||||
properties = {
|
||||
"ci": {
|
||||
"oneOf": [
|
||||
ci_properties,
|
||||
# Allow legacy format under `ci` for `config update ci`
|
||||
spack.schema.gitlab_ci.gitlab_ci_properties,
|
||||
]
|
||||
}
|
||||
}
|
||||
|
||||
#: Full schema with metadata
|
||||
schema = {
|
||||
"$schema": "http://json-schema.org/draft-07/schema#",
|
||||
"title": "Spack CI configuration file schema",
|
||||
"type": "object",
|
||||
"additionalProperties": False,
|
||||
"properties": properties,
|
||||
}
|
||||
|
||||
|
||||
def update(data):
|
||||
import llnl.util.tty as tty
|
||||
|
||||
import spack.ci
|
||||
import spack.environment as ev
|
||||
|
||||
# Warn if deprecated section is still in the environment
|
||||
ci_env = ev.active_environment()
|
||||
if ci_env:
|
||||
env_config = ev.config_dict(ci_env.manifest)
|
||||
if "gitlab-ci" in env_config:
|
||||
tty.die("Error: `gitlab-ci` section detected with `ci`, these are not compatible")
|
||||
|
||||
# Detect if the ci section is using the new pipeline-gen
|
||||
# If it is, assume it has already been converted
|
||||
return spack.ci.translate_deprecated_config(data)
|
||||
@@ -14,9 +14,7 @@
|
||||
"type": "object",
|
||||
"additionalProperties": False,
|
||||
"properties": {
|
||||
"reuse": {
|
||||
"oneOf": [{"type": "boolean"}, {"type": "string", "enum": ["dependencies"]}]
|
||||
},
|
||||
"reuse": {"type": "boolean"},
|
||||
"enable_node_namespace": {"type": "boolean"},
|
||||
"targets": {
|
||||
"type": "object",
|
||||
|
||||
@@ -61,7 +61,6 @@
|
||||
"build_stage": {
|
||||
"oneOf": [{"type": "string"}, {"type": "array", "items": {"type": "string"}}]
|
||||
},
|
||||
"stage_name": {"type": "string"},
|
||||
"test_stage": {"type": "string"},
|
||||
"extensions": {"type": "array", "items": {"type": "string"}},
|
||||
"template_dirs": {"type": "array", "items": {"type": "string"}},
|
||||
|
||||
@@ -63,8 +63,6 @@
|
||||
},
|
||||
# Add labels to the image
|
||||
"labels": {"type": "object"},
|
||||
# Use a custom template to render the recipe
|
||||
"template": {"type": "string", "default": None},
|
||||
# Add a custom extra section at the bottom of a stage
|
||||
"extra_instructions": {
|
||||
"type": "object",
|
||||
@@ -84,16 +82,6 @@
|
||||
},
|
||||
},
|
||||
"docker": {"type": "object", "additionalProperties": False, "default": {}},
|
||||
"depfile": {"type": "boolean", "default": False},
|
||||
},
|
||||
"deprecatedProperties": {
|
||||
"properties": ["extra_instructions"],
|
||||
"message": (
|
||||
"container:extra_instructions has been deprecated and will be removed "
|
||||
"in Spack v0.21. Set container:template appropriately to use custom Jinja2 "
|
||||
"templates instead."
|
||||
),
|
||||
"error": False,
|
||||
},
|
||||
}
|
||||
|
||||
|
||||
@@ -10,7 +10,6 @@
|
||||
"""
|
||||
from llnl.util.lang import union_dicts
|
||||
|
||||
import spack.schema.gitlab_ci # DEPRECATED
|
||||
import spack.schema.merged
|
||||
import spack.schema.packages
|
||||
import spack.schema.projections
|
||||
@@ -53,8 +52,6 @@
|
||||
"default": {},
|
||||
"additionalProperties": False,
|
||||
"properties": union_dicts(
|
||||
# Include deprecated "gitlab-ci" section
|
||||
spack.schema.gitlab_ci.properties,
|
||||
# merged configuration scope schemas
|
||||
spack.schema.merged.properties,
|
||||
# extra environment schema properties
|
||||
@@ -133,15 +130,6 @@ def update(data):
|
||||
Returns:
|
||||
True if data was changed, False otherwise
|
||||
"""
|
||||
|
||||
import spack.ci
|
||||
|
||||
if "gitlab-ci" in data:
|
||||
data["ci"] = data.pop("gitlab-ci")
|
||||
|
||||
if "ci" in data:
|
||||
return spack.ci.translate_deprecated_config(data["ci"])
|
||||
|
||||
# There are not currently any deprecated attributes in this section
|
||||
# that have not been removed
|
||||
return False
|
||||
|
||||
@@ -12,11 +12,11 @@
|
||||
|
||||
import spack.schema.bootstrap
|
||||
import spack.schema.cdash
|
||||
import spack.schema.ci
|
||||
import spack.schema.compilers
|
||||
import spack.schema.concretizer
|
||||
import spack.schema.config
|
||||
import spack.schema.container
|
||||
import spack.schema.gitlab_ci
|
||||
import spack.schema.mirrors
|
||||
import spack.schema.modules
|
||||
import spack.schema.packages
|
||||
@@ -31,7 +31,7 @@
|
||||
spack.schema.concretizer.properties,
|
||||
spack.schema.config.properties,
|
||||
spack.schema.container.properties,
|
||||
spack.schema.ci.properties,
|
||||
spack.schema.gitlab_ci.properties,
|
||||
spack.schema.mirrors.properties,
|
||||
spack.schema.modules.properties,
|
||||
spack.schema.packages.properties,
|
||||
|
||||
@@ -17,7 +17,7 @@
|
||||
#: THIS NEEDS TO BE UPDATED FOR EVERY NEW KEYWORD THAT
|
||||
#: IS ADDED IMMEDIATELY BELOW THE MODULE TYPE ATTRIBUTE
|
||||
spec_regex = (
|
||||
r"(?!hierarchy|core_specs|verbose|hash_length|defaults|filter_hierarchy_specs|"
|
||||
r"(?!hierarchy|core_specs|verbose|hash_length|defaults|"
|
||||
r"whitelist|blacklist|" # DEPRECATED: remove in 0.20.
|
||||
r"include|exclude|" # use these more inclusive/consistent options
|
||||
r"projections|naming_scheme|core_compilers|all)(^\w[\w-]*)"
|
||||
@@ -127,10 +127,6 @@
|
||||
"core_compilers": array_of_strings,
|
||||
"hierarchy": array_of_strings,
|
||||
"core_specs": array_of_strings,
|
||||
"filter_hierarchy_specs": {
|
||||
"type": "object",
|
||||
"patternProperties": {spec_regex: array_of_strings},
|
||||
},
|
||||
},
|
||||
}, # Specific lmod extensions
|
||||
]
|
||||
|
||||
@@ -32,17 +32,11 @@
|
||||
{
|
||||
"type": "array",
|
||||
"items": {
|
||||
"oneOf": [
|
||||
{
|
||||
"type": "object",
|
||||
"additionalProperties": False,
|
||||
"properties": {
|
||||
"one_of": {"type": "array"},
|
||||
"any_of": {"type": "array"},
|
||||
},
|
||||
},
|
||||
{"type": "string"},
|
||||
]
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"one_of": {"type": "array"},
|
||||
"any_of": {"type": "array"},
|
||||
},
|
||||
},
|
||||
},
|
||||
# Shorthand for a single requirement group with
|
||||
|
||||
@@ -103,7 +103,6 @@ def getter(node):
|
||||
"dev_spec",
|
||||
"external",
|
||||
"packages_yaml",
|
||||
"package_requirements",
|
||||
"package_py",
|
||||
"installed",
|
||||
]
|
||||
@@ -116,10 +115,9 @@ def getter(node):
|
||||
"VersionProvenance", version_origin_fields
|
||||
)(**{name: i for i, name in enumerate(version_origin_fields)})
|
||||
|
||||
|
||||
#: Named tuple to contain information on declared versions
|
||||
DeclaredVersion = collections.namedtuple("DeclaredVersion", ["version", "idx", "origin"])
|
||||
|
||||
|
||||
# Below numbers are used to map names of criteria to the order
|
||||
# they appear in the solution. See concretize.lp
|
||||
|
||||
@@ -503,7 +501,7 @@ def _compute_specs_from_answer_set(self):
|
||||
key = providers[0]
|
||||
candidate = answer.get(key)
|
||||
|
||||
if candidate and candidate.satisfies(input_spec):
|
||||
if candidate and candidate.intersects(input_spec):
|
||||
self._concrete_specs.append(answer[key])
|
||||
self._concrete_specs_by_input[input_spec] = answer[key]
|
||||
else:
|
||||
@@ -586,54 +584,6 @@ def extract_args(model, predicate_name):
|
||||
return [stringify(sym.arguments) for sym in model if sym.name == predicate_name]
|
||||
|
||||
|
||||
class ErrorHandler:
    def __init__(self, model):
        self.model = model
        self.error_args = extract_args(model, "error")

    def multiple_values_error(self, attribute, pkg):
        return f'Cannot select a single "{attribute}" for package "{pkg}"'

    def no_value_error(self, attribute, pkg):
        return f'Cannot select a single "{attribute}" for package "{pkg}"'

    def handle_error(self, msg, *args):
        """Handle an error state derived by the solver."""
        if msg == "multiple_values_error":
            return self.multiple_values_error(*args)

        if msg == "no_value_error":
            return self.no_value_error(*args)

        # For variant formatting, we sometimes have to construct specs
        # to format values properly. Find/replace all occurances of
        # Spec(...) with the string representation of the spec mentioned
        msg = msg.format(*args)
        specs_to_construct = re.findall(r"Spec\(([^)]*)\)", msg)
        for spec_str in specs_to_construct:
            msg = msg.replace("Spec(%s)" % spec_str, str(spack.spec.Spec(spec_str)))

        return msg

    def message(self, errors) -> str:
        messages = [
            f" {idx+1: 2}. {self.handle_error(msg, *args)}"
            for idx, (_, msg, args) in enumerate(errors)
        ]
        header = "concretization failed for the following reasons:\n"
        return "\n".join([header] + messages)

    def raise_if_errors(self):
        if not self.error_args:
            return

        errors = sorted(
            [(int(priority), msg, args) for priority, msg, *args in self.error_args], reverse=True
        )
        msg = self.message(errors)
        raise UnsatisfiableSpecError(msg)

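For orientation, the class above is driven in two lines from the solver loop; the same call site appears further down in this diff once the best model has been selected.

# Usage as it appears later in this diff (best_model is the winning clingo model):
error_handler = ErrorHandler(best_model)
error_handler.raise_if_errors()  # raises UnsatisfiableSpecError, or returns if there are no error() facts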
class PyclingoDriver(object):
|
||||
def __init__(self, cores=True):
|
||||
"""Driver for the Python clingo interface.
|
||||
@@ -689,6 +639,20 @@ def fact(self, head):
|
||||
if choice:
|
||||
self.assumptions.append(atom)
|
||||
|
||||
def handle_error(self, msg, *args):
|
||||
"""Handle an error state derived by the solver."""
|
||||
msg = msg.format(*args)
|
||||
|
||||
# For variant formatting, we sometimes have to construct specs
|
||||
# to format values properly. Find/replace all occurances of
|
||||
# Spec(...) with the string representation of the spec mentioned
|
||||
specs_to_construct = re.findall(r"Spec\(([^)]*)\)", msg)
|
||||
for spec_str in specs_to_construct:
|
||||
msg = msg.replace("Spec(%s)" % spec_str, str(spack.spec.Spec(spec_str)))
|
||||
|
||||
# TODO: this raises early -- we should handle multiple errors if there are any.
|
||||
raise UnsatisfiableSpecError(msg)
|
||||
|
||||
def solve(self, setup, specs, reuse=None, output=None, control=None):
|
||||
"""Set up the input and solve for dependencies of ``specs``.
|
||||
|
||||
@@ -790,8 +754,10 @@ def on_model(model):
|
||||
min_cost, best_model = min(models)
|
||||
|
||||
# first check for errors
|
||||
error_handler = ErrorHandler(best_model)
|
||||
error_handler.raise_if_errors()
|
||||
error_args = extract_args(best_model, "error")
|
||||
errors = sorted((int(priority), msg, args) for priority, msg, *args in error_args)
|
||||
for _, msg, args in errors:
|
||||
self.handle_error(msg, *args)
|
||||
|
||||
# build specs from spec attributes in the model
|
||||
spec_attrs = [(name, tuple(rest)) for name, *rest in extract_args(best_model, "attr")]
|
||||
@@ -852,9 +818,6 @@ def __init__(self, tests=False):
|
||||
self.compiler_version_constraints = set()
|
||||
self.post_facts = []
|
||||
|
||||
# (ID, CompilerSpec) -> dictionary of attributes
|
||||
self.compiler_info = collections.defaultdict(dict)
|
||||
|
||||
# hashes we've already added facts for
|
||||
self.seen_hashes = set()
|
||||
self.reusable_and_possible = {}
|
||||
@@ -906,6 +869,30 @@ def key_fn(version):
|
||||
)
|
||||
)
|
||||
|
||||
for v in most_to_least_preferred:
|
||||
# There are two paths for creating the ref_version in GitVersions.
|
||||
# The first uses a lookup to supply a tag and distance as a version.
|
||||
# The second is user specified and can be resolved as a standard version.
|
||||
# This second option is constrained such that the user version must be known to Spack
|
||||
if (
|
||||
isinstance(v.version, spack.version.GitVersion)
|
||||
and v.version.user_supplied_reference
|
||||
):
|
||||
ref_version = spack.version.Version(v.version.ref_version_str)
|
||||
self.gen.fact(fn.version_equivalent(pkg.name, v.version, ref_version))
|
||||
# disqualify any git supplied version from user if they weren't already known
|
||||
# versions in spack
|
||||
if not any(ref_version == dv.version for dv in most_to_least_preferred if v != dv):
|
||||
msg = (
|
||||
"The reference version '{version}' for package '{package}' is not defined."
|
||||
" Either choose another reference version or define '{version}' in your"
|
||||
" version preferences or package.py file for {package}.".format(
|
||||
package=pkg.name, version=str(ref_version)
|
||||
)
|
||||
)
|
||||
|
||||
raise UnsatisfiableSpecError(msg)
|
||||
|
||||
# Declare deprecated versions for this package, if any
|
||||
deprecated = self.deprecated_versions[pkg.name]
|
||||
for v in sorted(deprecated):
|
||||
@@ -955,41 +942,54 @@ def conflict_rules(self, pkg):
|
||||
self.gen.fact(fn.conflict(pkg.name, trigger_id, constraint_id, conflict_msg))
|
||||
self.gen.newline()
|
||||
|
||||
def compiler_facts(self):
|
||||
def available_compilers(self):
|
||||
"""Facts about available compilers."""
|
||||
|
||||
self.gen.h2("Available compilers")
|
||||
indexed_possible_compilers = list(enumerate(self.possible_compilers))
|
||||
for compiler_id, compiler in indexed_possible_compilers:
|
||||
self.gen.fact(fn.compiler_id(compiler_id))
|
||||
self.gen.fact(fn.compiler_name(compiler_id, compiler.spec.name))
|
||||
self.gen.fact(fn.compiler_version(compiler_id, compiler.spec.version))
|
||||
compilers = self.possible_compilers
|
||||
|
||||
if compiler.operating_system:
|
||||
self.gen.fact(fn.compiler_os(compiler_id, compiler.operating_system))
|
||||
compiler_versions = collections.defaultdict(lambda: set())
|
||||
for compiler in compilers:
|
||||
compiler_versions[compiler.name].add(compiler.version)
|
||||
|
||||
if compiler.target == "any":
|
||||
compiler.target = None
|
||||
|
||||
if compiler.target is not None:
|
||||
self.gen.fact(fn.compiler_target(compiler_id, compiler.target))
|
||||
|
||||
for flag_type, flags in compiler.flags.items():
|
||||
for flag in flags:
|
||||
self.gen.fact(fn.compiler_flag(compiler_id, flag_type, flag))
|
||||
for compiler in sorted(compiler_versions):
|
||||
for v in sorted(compiler_versions[compiler]):
|
||||
self.gen.fact(fn.compiler_version(compiler, v))
|
||||
|
||||
self.gen.newline()
|
||||
|
||||
# Set compiler defaults, given a list of possible compilers
|
||||
self.gen.h2("Default compiler preferences (CompilerID, Weight)")
|
||||
def compiler_defaults(self):
|
||||
"""Set compiler defaults, given a list of possible compilers."""
|
||||
self.gen.h2("Default compiler preferences")
|
||||
|
||||
compiler_list = self.possible_compilers.copy()
|
||||
compiler_list = sorted(compiler_list, key=lambda x: (x.name, x.version), reverse=True)
|
||||
ppk = spack.package_prefs.PackagePrefs("all", "compiler", all=False)
|
||||
matches = sorted(indexed_possible_compilers, key=lambda x: ppk(x[1].spec))
|
||||
matches = sorted(compiler_list, key=ppk)
|
||||
|
||||
for weight, (compiler_id, cspec) in enumerate(matches):
|
||||
f = fn.default_compiler_preference(compiler_id, weight)
|
||||
for i, cspec in enumerate(matches):
|
||||
f = fn.default_compiler_preference(cspec.name, cspec.version, i)
|
||||
self.gen.fact(f)
|
||||
|
||||
# Enumerate target families. This may be redundant, but compilers with
|
||||
# custom versions will be able to concretize properly.
|
||||
for entry in spack.compilers.all_compilers_config():
|
||||
compiler_entry = entry["compiler"]
|
||||
cspec = spack.spec.CompilerSpec(compiler_entry["spec"])
|
||||
if not compiler_entry.get("target", None):
|
||||
continue
|
||||
|
||||
self.gen.fact(
|
||||
fn.compiler_supports_target(cspec.name, cspec.version, compiler_entry["target"])
|
||||
)
|
||||
|
||||
def compiler_supports_os(self):
|
||||
compilers_yaml = spack.compilers.all_compilers_config()
|
||||
for entry in compilers_yaml:
|
||||
c = spack.spec.CompilerSpec(entry["compiler"]["spec"])
|
||||
operating_system = entry["compiler"]["operating_system"]
|
||||
self.gen.fact(fn.compiler_supports_os(c.name, c.version, operating_system))
|
||||
|
||||
def package_compiler_defaults(self, pkg):
|
||||
"""Facts about packages' compiler prefs."""
|
||||
|
||||
@@ -998,16 +998,14 @@ def package_compiler_defaults(self, pkg):
|
||||
if not pkg_prefs or "compiler" not in pkg_prefs:
|
||||
return
|
||||
|
||||
compiler_list = self.possible_compilers
|
||||
compiler_list = self.possible_compilers.copy()
|
||||
compiler_list = sorted(compiler_list, key=lambda x: (x.name, x.version), reverse=True)
|
||||
ppk = spack.package_prefs.PackagePrefs(pkg.name, "compiler", all=False)
|
||||
matches = sorted(compiler_list, key=lambda x: ppk(x.spec))
|
||||
matches = sorted(compiler_list, key=ppk)
|
||||
|
||||
for i, compiler in enumerate(reversed(matches)):
|
||||
for i, cspec in enumerate(reversed(matches)):
|
||||
self.gen.fact(
|
||||
fn.node_compiler_preference(
|
||||
pkg.name, compiler.spec.name, compiler.spec.version, -i * 100
|
||||
)
|
||||
fn.node_compiler_preference(pkg.name, cspec.name, cspec.version, -i * 100)
|
||||
)
|
||||
|
||||
def package_requirement_rules(self, pkg):
|
||||
@@ -1030,14 +1028,9 @@ def _rules_from_requirements(self, pkg_name, requirements):
|
||||
else:
|
||||
rules = []
|
||||
for requirement in requirements:
|
||||
if isinstance(requirement, str):
|
||||
# A string represents a spec that must be satisfied. It is
|
||||
# equivalent to a one_of group with a single element
|
||||
rules.append((pkg_name, "one_of", [requirement]))
|
||||
else:
|
||||
for policy in ("one_of", "any_of"):
|
||||
if policy in requirement:
|
||||
rules.append((pkg_name, policy, requirement[policy]))
|
||||
for policy in ("one_of", "any_of"):
|
||||
if policy in requirement:
|
||||
rules.append((pkg_name, policy, requirement[policy]))
|
||||
return rules
|
||||
|
||||
def pkg_rules(self, pkg, tests):
|
||||
@@ -1399,6 +1392,28 @@ def target_preferences(self, pkg_name):
|
||||
fn.target_weight(pkg_name, str(preferred.architecture.target), i + offset)
|
||||
)
|
||||
|
||||
def flag_defaults(self):
|
||||
self.gen.h2("Compiler flag defaults")
|
||||
|
||||
# types of flags that can be on specs
|
||||
for flag in spack.spec.FlagMap.valid_compiler_flags():
|
||||
self.gen.fact(fn.flag_type(flag))
|
||||
self.gen.newline()
|
||||
|
||||
# flags from compilers.yaml
|
||||
compilers = all_compilers_in_config()
|
||||
seen = set()
|
||||
for compiler in compilers:
|
||||
# if there are multiple with the same spec, only use the first
|
||||
if compiler.spec in seen:
|
||||
continue
|
||||
seen.add(compiler.spec)
|
||||
for name, flags in compiler.flags.items():
|
||||
for flag in flags:
|
||||
self.gen.fact(
|
||||
fn.compiler_version_flag(compiler.name, compiler.version, name, flag)
|
||||
)
|
||||
|
||||
def spec_clauses(self, *args, **kwargs):
|
||||
"""Wrap a call to `_spec_clauses()` into a try/except block that
|
||||
raises a comprehensible error message in case of failure.
|
||||
@@ -1448,7 +1463,6 @@ class Head(object):
|
||||
node_compiler = fn.attr("node_compiler_set")
|
||||
node_compiler_version = fn.attr("node_compiler_version_set")
|
||||
node_flag = fn.attr("node_flag_set")
|
||||
node_flag_source = fn.attr("node_flag_source")
|
||||
node_flag_propagate = fn.attr("node_flag_propagate")
|
||||
variant_propagate = fn.attr("variant_propagate")
|
||||
|
||||
@@ -1462,7 +1476,6 @@ class Body(object):
|
||||
node_compiler = fn.attr("node_compiler")
|
||||
node_compiler_version = fn.attr("node_compiler_version")
|
||||
node_flag = fn.attr("node_flag")
|
||||
node_flag_source = fn.attr("node_flag_source")
|
||||
node_flag_propagate = fn.attr("node_flag_propagate")
|
||||
variant_propagate = fn.attr("variant_propagate")
|
||||
|
||||
@@ -1544,7 +1557,6 @@ class Body(object):
|
||||
for flag_type, flags in spec.compiler_flags.items():
|
||||
for flag in flags:
|
||||
clauses.append(f.node_flag(spec.name, flag_type, flag))
|
||||
clauses.append(f.node_flag_source(spec.name, flag_type, spec.name))
|
||||
if not spec.concrete and flag.propagate is True:
|
||||
clauses.append(f.node_flag_propagate(spec.name, flag_type))
|
||||
|
||||
@@ -1614,12 +1626,7 @@ def key_fn(item):
|
||||
# When COMPARING VERSIONS, the '@develop' version is always
|
||||
# larger than other versions. BUT when CONCRETIZING, the largest
|
||||
# NON-develop version is selected by default.
|
||||
return (
|
||||
info.get("preferred", False),
|
||||
not info.get("deprecated", False),
|
||||
not version.isdevelop(),
|
||||
version,
|
||||
)
|
||||
return info.get("preferred", False), not version.isdevelop(), version
|
||||
|
||||
for idx, item in enumerate(sorted(pkg_cls.versions.items(), key=key_fn, reverse=True)):
|
||||
v, version_info = item
|
||||
@@ -1669,15 +1676,6 @@ def add_concrete_versions_from_specs(self, specs, origin):
|
||||
DeclaredVersion(version=dep.version, idx=0, origin=origin)
|
||||
)
|
||||
self.possible_versions[dep.name].add(dep.version)
|
||||
if (
|
||||
isinstance(dep.version, spack.version.GitVersion)
|
||||
and dep.version.user_supplied_reference
|
||||
):
|
||||
defined_version = spack.version.Version(dep.version.ref_version_str)
|
||||
self.declared_versions[dep.name].append(
|
||||
DeclaredVersion(version=defined_version, idx=1, origin=origin)
|
||||
)
|
||||
self.possible_versions[dep.name].add(defined_version)
|
||||
|
||||
def _supported_targets(self, compiler_name, compiler_version, targets):
|
||||
"""Get a list of which targets are supported by the compiler.
|
||||
@@ -1769,6 +1767,8 @@ def target_defaults(self, specs):
|
||||
if granularity == "generic":
|
||||
candidate_targets = [t for t in candidate_targets if t.vendor == "generic"]
|
||||
|
||||
compilers = self.possible_compilers
|
||||
|
||||
# Add targets explicitly requested from specs
|
||||
for spec in specs:
|
||||
if not spec.architecture or not spec.architecture.target:
|
||||
@@ -1785,14 +1785,8 @@ def target_defaults(self, specs):
|
||||
if ancestor not in candidate_targets:
|
||||
candidate_targets.append(ancestor)
|
||||
|
||||
best_targets = {uarch.family.name}
|
||||
for compiler_id, compiler in enumerate(self.possible_compilers):
|
||||
# Stub support for cross-compilation, to be expanded later
|
||||
if compiler.target is not None and compiler.target != str(uarch.family):
|
||||
self.gen.fact(fn.compiler_supports_target(compiler_id, compiler.target))
|
||||
self.gen.newline()
|
||||
continue
|
||||
|
||||
best_targets = set([uarch.family.name])
|
||||
for compiler in sorted(compilers):
|
||||
supported = self._supported_targets(compiler.name, compiler.version, candidate_targets)
|
||||
|
||||
# If we can't find supported targets it may be due to custom
|
||||
@@ -1800,8 +1794,10 @@ def target_defaults(self, specs):
|
||||
# real_version from the compiler object to get more accurate
|
||||
# results.
|
||||
if not supported:
|
||||
compiler_obj = spack.compilers.compilers_for_spec(compiler)
|
||||
compiler_obj = compiler_obj[0]
|
||||
supported = self._supported_targets(
|
||||
compiler.name, compiler.real_version, candidate_targets
|
||||
compiler.name, compiler_obj.real_version, candidate_targets
|
||||
)
|
||||
|
||||
if not supported:
|
||||
@@ -1809,19 +1805,20 @@ def target_defaults(self, specs):
|
||||
|
||||
for target in supported:
|
||||
best_targets.add(target.name)
|
||||
self.gen.fact(fn.compiler_supports_target(compiler_id, target.name))
|
||||
self.gen.fact(
|
||||
fn.compiler_supports_target(compiler.name, compiler.version, target.name)
|
||||
)
|
||||
|
||||
self.gen.fact(fn.compiler_supports_target(compiler_id, uarch.family.name))
|
||||
self.gen.newline()
|
||||
self.gen.fact(
|
||||
fn.compiler_supports_target(compiler.name, compiler.version, uarch.family.name)
|
||||
)
|
||||
|
||||
i = 0 # TODO compute per-target offset?
|
||||
for target in candidate_targets:
|
||||
self.gen.fact(fn.target(target.name))
|
||||
self.gen.fact(fn.target_family(target.name, target.family.name))
|
||||
self.gen.fact(fn.target_compatible(target.name, target.name))
|
||||
# Code for ancestor can run on target
|
||||
for ancestor in target.ancestors:
|
||||
self.gen.fact(fn.target_compatible(target.name, ancestor.name))
|
||||
for parent in sorted(target.parents):
|
||||
self.gen.fact(fn.target_parent(target.name, parent.name))
|
||||
|
||||
# prefer best possible targets; weight others poorly so
|
||||
# they're not used unless set explicitly
|
||||
@@ -1832,9 +1829,9 @@ def target_defaults(self, specs):
|
||||
i += 1
|
||||
else:
|
||||
self.default_targets.append((100, target.name))
|
||||
self.gen.newline()
|
||||
|
||||
self.default_targets = list(sorted(set(self.default_targets)))
|
||||
self.default_targets = list(sorted(set(self.default_targets)))
|
||||
self.gen.newline()
|
||||
|
||||
def virtual_providers(self):
|
||||
self.gen.h2("Virtual providers")
|
||||
@@ -1851,22 +1848,6 @@ def virtual_providers(self):
|
||||
|
||||
def generate_possible_compilers(self, specs):
|
||||
compilers = all_compilers_in_config()
|
||||
|
||||
# Search for compilers which differs only by aspects that are
|
||||
# not selectable by users using the spec syntax
|
||||
seen, sanitized_list = set(), []
|
||||
for compiler in compilers:
|
||||
key = compiler.spec, compiler.operating_system, compiler.target
|
||||
if key in seen:
|
||||
warnings.warn(
|
||||
f"duplicate found for {compiler.spec} on "
|
||||
f"{compiler.operating_system}/{compiler.target}. "
|
||||
f"Edit your compilers.yaml configuration to remove it."
|
||||
)
|
||||
continue
|
||||
sanitized_list.append(compiler)
|
||||
seen.add(key)
|
||||
|
||||
cspecs = set([c.spec for c in compilers])
|
||||
|
||||
# add compiler specs from the input line to possibilities if we
|
||||
@@ -1887,38 +1868,22 @@ def generate_possible_compilers(self, specs):
|
||||
# Allow unknown compilers to exist if the associated spec
|
||||
# is already built
|
||||
else:
|
||||
compiler_cls = spack.compilers.class_for_compiler_name(s.compiler.name)
|
||||
compilers.append(
|
||||
compiler_cls(
|
||||
s.compiler, operating_system=None, target=None, paths=[None] * 4
|
||||
)
|
||||
)
|
||||
cspecs.add(s.compiler)
|
||||
self.gen.fact(fn.allow_compiler(s.compiler.name, s.compiler.version))
|
||||
|
||||
return list(
|
||||
sorted(
|
||||
compilers,
|
||||
key=lambda compiler: (compiler.spec.name, compiler.spec.version),
|
||||
reverse=True,
|
||||
)
|
||||
)
|
||||
return cspecs
|
||||
|
||||
def define_version_constraints(self):
|
||||
"""Define what version_satisfies(...) means in ASP logic."""
|
||||
for pkg_name, versions in sorted(self.version_constraints):
|
||||
# version must be *one* of the ones the spec allows.
|
||||
# Also, "possible versions" contain only concrete versions, so satisfies is appropriate
|
||||
allowed_versions = [
|
||||
v for v in sorted(self.possible_versions[pkg_name]) if v.satisfies(versions)
|
||||
v for v in sorted(self.possible_versions[pkg_name]) if v.intersects(versions)
|
||||
]
|
||||
|
||||
# This is needed to account for a variable number of
|
||||
# numbers e.g. if both 1.0 and 1.0.2 are possible versions
|
||||
exact_match = [
|
||||
v
|
||||
for v in allowed_versions
|
||||
if v == versions and not isinstance(v, spack.version.GitVersion)
|
||||
]
|
||||
exact_match = [v for v in allowed_versions if v == versions]
|
||||
if exact_match:
|
||||
allowed_versions = exact_match
|
||||
|
||||
@@ -1963,12 +1928,14 @@ def versions_for(v):
|
||||
self.possible_versions[pkg_name].add(version)
|
||||
|
||||
def define_compiler_version_constraints(self):
|
||||
compiler_list = spack.compilers.all_compiler_specs()
|
||||
compiler_list = list(sorted(set(compiler_list)))
|
||||
for constraint in sorted(self.compiler_version_constraints):
|
||||
for compiler_id, compiler in enumerate(self.possible_compilers):
|
||||
if compiler.spec.satisfies(constraint):
|
||||
for compiler in compiler_list:
|
||||
if compiler.satisfies(constraint):
|
||||
self.gen.fact(
|
||||
fn.compiler_version_satisfies(
|
||||
constraint.name, constraint.versions, compiler_id
|
||||
constraint.name, constraint.versions, compiler.version
|
||||
)
|
||||
)
|
||||
self.gen.newline()
|
||||
@@ -2120,11 +2087,6 @@ def setup(self, driver, specs, reuse=None):
|
||||
self.add_concrete_versions_from_specs(specs, version_provenance.spec)
|
||||
self.add_concrete_versions_from_specs(dev_specs, version_provenance.dev_spec)
|
||||
|
||||
req_version_specs = _get_versioned_specs_from_pkg_requirements()
|
||||
self.add_concrete_versions_from_specs(
|
||||
req_version_specs, version_provenance.package_requirements
|
||||
)
|
||||
|
||||
self.gen.h1("Concrete input spec definitions")
|
||||
self.define_concrete_input_specs(specs, possible)
|
||||
|
||||
@@ -2134,13 +2096,10 @@ def setup(self, driver, specs, reuse=None):
|
||||
for reusable_spec in reuse:
|
||||
self._facts_from_concrete_spec(reusable_spec, possible)
|
||||
|
||||
self.gen.h1("Possible flags on nodes")
|
||||
for flag in spack.spec.FlagMap.valid_compiler_flags():
|
||||
self.gen.fact(fn.flag_type(flag))
|
||||
self.gen.newline()
|
||||
|
||||
self.gen.h1("General Constraints")
|
||||
self.compiler_facts()
|
||||
self.available_compilers()
|
||||
self.compiler_defaults()
|
||||
self.compiler_supports_os()
|
||||
|
||||
# architecture defaults
|
||||
self.platform_defaults()
|
||||
@@ -2151,6 +2110,7 @@ def setup(self, driver, specs, reuse=None):
|
||||
self.provider_defaults()
|
||||
self.provider_requirements()
|
||||
self.external_packages()
|
||||
self.flag_defaults()
|
||||
|
||||
self.gen.h1("Package Constraints")
|
||||
for pkg in sorted(self.pkgs):
|
||||
@@ -2199,55 +2159,6 @@ def literal_specs(self, specs):
|
||||
self.gen.fact(fn.concretize_everything())
|
||||
|
||||
|
||||
def _get_versioned_specs_from_pkg_requirements():
    """If package requirements mention versions that are not mentioned
    elsewhere, then we need to collect those to mark them as possible
    versions.
    """
    req_version_specs = list()
    config = spack.config.get("packages")
    for pkg_name, d in config.items():
        if pkg_name == "all":
            continue
        if "require" in d:
            req_version_specs.extend(_specs_from_requires(pkg_name, d["require"]))
    return req_version_specs


def _specs_from_requires(pkg_name, section):
    if isinstance(section, str):
        spec = spack.spec.Spec(section)
        if not spec.name:
            spec.name = pkg_name
        extracted_specs = [spec]
    else:
        spec_strs = []
        for spec_group in section:
            if isinstance(spec_group, str):
                spec_strs.append(spec_group)
            else:
                # Otherwise it is a one_of or any_of: get the values
                (x,) = spec_group.values()
                spec_strs.extend(x)

        extracted_specs = []
        for spec_str in spec_strs:
            spec = spack.spec.Spec(spec_str)
            if not spec.name:
                spec.name = pkg_name
            extracted_specs.append(spec)

    version_specs = []
    for spec in extracted_specs:
        try:
            spec.version
            version_specs.append(spec)
        except spack.error.SpecError:
            pass

    return version_specs
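For illustration only (not code from this diff): a minimal standalone sketch, using plain strings and hypothetical data instead of spack.spec.Spec objects, of how a packages.yaml require section is flattened into candidate spec strings by the helpers above.

def spec_strings_from_requires(pkg_name, section):
    if isinstance(section, str):
        strings = [section]
    else:
        strings = []
        for group in section:
            if isinstance(group, str):
                strings.append(group)
            else:
                (values,) = group.values()  # a one_of / any_of entry
                strings.extend(values)
    # anonymous constraints such as "@1.12" apply to the package itself
    return [pkg_name + s if s[0] in "@+~%" else s for s in strings]

print(spec_strings_from_requires("hdf5", [{"any_of": ["@1.12", "@1.10+mpi"]}, "%gcc"]))
# ['hdf5@1.12', 'hdf5@1.10+mpi', 'hdf5%gcc']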


class SpecBuilder(object):
"""Class with actions to rebuild a spec from ASP results."""

@@ -2270,7 +2181,6 @@ def __init__(self, specs, hash_lookup=None):
self._specs = {}
self._result = None
self._command_line_specs = specs
self._hash_specs = []
self._flag_sources = collections.defaultdict(lambda: set())
self._flag_compiler_defaults = set()

@@ -2281,7 +2191,6 @@ def __init__(self, specs, hash_lookup=None):
def hash(self, pkg, h):
if pkg not in self._specs:
self._specs[pkg] = self._hash_lookup[h]
self._hash_specs.append(pkg)

def node(self, pkg):
if pkg not in self._specs:

@@ -2322,8 +2231,10 @@ def variant_value(self, pkg, name, value):
def version(self, pkg, version):
self._specs[pkg].versions = spack.version.ver([version])

def node_compiler_version(self, pkg, compiler, version):
def node_compiler(self, pkg, compiler):
self._specs[pkg].compiler = spack.spec.CompilerSpec(compiler)

def node_compiler_version(self, pkg, compiler, version):
self._specs[pkg].compiler.versions = spack.version.VersionList([version])

def node_flag_compiler_default(self, pkg):

@@ -2378,7 +2289,7 @@ def reorder_flags(self):
flags will appear last on the compile line, in the order they
were specified.

The solver determines which flags are on nodes; this routine
The solver determines wihch flags are on nodes; this routine
imposes order afterwards.
"""
# reverse compilers so we get highest priority compilers that share a spec
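For illustration only (not code from this diff): a minimal standalone sketch, with hypothetical flag lists, of the ordering described in the docstring above: flags are merged from lowest to highest precedence and keep the order in which they were written.

def ordered_flags(sources_low_to_high):
    seen, result = set(), []
    for flags in sources_low_to_high:
        for flag in flags:
            if flag not in seen:  # simplified de-duplication across sources
                seen.add(flag)
                result.append(flag)
    return result

print(ordered_flags([["-O2"], ["-O2", "-g"], ["-march=native"]]))
# ['-O2', '-g', '-march=native']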
@@ -2405,8 +2316,8 @@ def reorder_flags(self):
)

# add flags from each source, lowest to highest precedence
for name in sorted_sources:
source = self._specs[name] if name in self._hash_specs else cmd_specs[name]
for source_name in sorted_sources:
source = cmd_specs[source_name]
extend_flag_list(from_sources, source.compiler_flags.get(flag_type, []))

# compiler flags from compilers config are lowest precedence

@@ -2428,6 +2339,7 @@ def sort_fn(function_tuple):

hash attributes are handled first, since they imply entire concrete specs
node attributes are handled next, since they instantiate nodes
node_compiler attributes are handled next to ensure they come before node_compiler_version
external_spec_selected attributes are handled last, so that external extensions can find
the concrete specs on which they depend because all nodes are fully constructed before we
consider which ones are external.

@@ -2437,6 +2349,8 @@ def sort_fn(function_tuple):
return (-5, 0)
elif name == "node":
return (-4, 0)
elif name == "node_compiler":
return (-3, 0)
elif name == "node_flag":
return (-2, 0)
elif name == "external_spec_selected":
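For illustration only (not code from this diff): a minimal standalone sketch, using hypothetical result tuples, of the ordering the priorities above impose on solver results: hash first, then node, node_compiler, node_flag, and external_spec_selected last.

def sort_key(function_tuple):
    name = function_tuple[0]
    priorities = {"hash": -5, "node": -4, "node_compiler": -3, "node_flag": -2}
    if name == "external_spec_selected":
        return (0, 0)
    return (priorities.get(name, -1), 0)

results = [("node_flag", "zlib"), ("external_spec_selected", "cmake"), ("node", "zlib"), ("hash", "openssl")]
print([name for name, _ in sorted(results, key=sort_key)])
# ['hash', 'node', 'node_flag', 'external_spec_selected']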
@@ -2478,12 +2392,10 @@ def build_specs(self, function_tuples):
continue

# if we've already gotten a concrete spec for this pkg,
# do not bother calling actions on it except for node_flag_source,
# since node_flag_source is tracking information not in the spec itself
# do not bother calling actions on it.
spec = self._specs.get(pkg)
if spec and spec.concrete:
if name != "node_flag_source":
continue
continue

action(*args)

@@ -2583,7 +2495,7 @@ def _check_input_and_extract_concrete_specs(specs):
spack.spec.Spec.ensure_valid_variants(s)
return reusable

def _reusable_specs(self, specs):
def _reusable_specs(self):
reusable_specs = []
if self.reuse:
# Specs from the local Database

@@ -2605,13 +2517,6 @@ def _reusable_specs(self, specs):
# TODO: update mirror configuration so it can indicate that the
# TODO: source cache (or any mirror really) doesn't have binaries.
pass

# If we only want to reuse dependencies, remove the root specs
if self.reuse == "dependencies":
reusable_specs = [
spec for spec in reusable_specs if not any(root in spec for root in specs)
]

return reusable_specs

def solve(self, specs, out=None, timers=False, stats=False, tests=False, setup_only=False):

@@ -2628,7 +2533,7 @@ def solve(self, specs, out=None, timers=False, stats=False, tests=False, setup_o
"""
# Check upfront that the variants are admissible
reusable_specs = self._check_input_and_extract_concrete_specs(specs)
reusable_specs.extend(self._reusable_specs(specs))
reusable_specs.extend(self._reusable_specs())
setup = SpackSolverSetup(tests=tests)
output = OutputConfiguration(timers=timers, stats=stats, out=out, setup_only=setup_only)
result, _, _ = self.driver.solve(setup, specs, reuse=reusable_specs, output=output)

@@ -2651,7 +2556,7 @@ def solve_in_rounds(self, specs, out=None, timers=False, stats=False, tests=Fals
tests (bool): add test dependencies to the solve
"""
reusable_specs = self._check_input_and_extract_concrete_specs(specs)
reusable_specs.extend(self._reusable_specs(specs))
reusable_specs.extend(self._reusable_specs())
setup = SpackSolverSetup(tests=tests)

# Tell clingo that we don't have to solve all the inputs at once

@@ -40,24 +40,6 @@ attr(Name, A1, A2, A3, A4) :- literal(LiteralID, Name, A1, A2, A3, A4), literal_
#defined literal/5.
#defined literal/6.

% Attributes for node packages which must have a single value
attr_single_value("version").
attr_single_value("node_platform").
attr_single_value("node_os").
attr_single_value("node_target").

% Error when no attribute is selected
error(100, no_value_error, Attribute, Package)
:- attr("node", Package),
attr_single_value(Attribute),
not attr(Attribute, Package, _).

% Error when multiple attr need to be selected
error(100, multiple_values_error, Attribute, Package)
:- attr("node", Package),
attr_single_value(Attribute),
2 { attr(Attribute, Package, Version) }.

%-----------------------------------------------------------------------------
% Version semantics
%-----------------------------------------------------------------------------

@@ -95,11 +77,21 @@ version_satisfies(Package, Constraint, HashVersion) :- version_satisfies(Package
% possible
{ attr("version", Package, Version) : version_declared(Package, Version) }
:- attr("node", Package).
error(2, "No version for '{0}' satisfies '@{1}' and '@{2}'", Package, Version1, Version2)
:- attr("node", Package),
attr("version", Package, Version1),
attr("version", Package, Version2),
Version1 < Version2. % see[1]

error(2, "No versions available for package '{0}'", Package)
:- attr("node", Package), not attr("version", Package, _).

% A virtual package may or may not have a version, but never has more than one
error(100, "Cannot select a single version for virtual '{0}'", Virtual)
error(2, "No version for '{0}' satisfies '@{1}' and '@{2}'", Virtual, Version1, Version2)
:- attr("virtual_node", Virtual),
2 { attr("version", Virtual, Version) }.
attr("version", Virtual, Version1),
attr("version", Virtual, Version2),
Version1 < Version2. % see[1]

% If we select a deprecated version, mark the package as deprecated
attr("deprecated", Package, Version) :-

@@ -152,10 +144,10 @@ possible_version_weight(Package, Weight)

% More specific error message if the version cannot satisfy some constraint
% Otherwise covered by `no_version_error` and `versions_conflict_error`.
error(10, "Cannot satisfy '{0}@{1}'", Package, Constraint)
error(1, "No valid version for '{0}' satisfies '@{1}'", Package, Constraint)
:- attr("node_version_satisfies", Package, Constraint),
attr("version", Package, Version),
not version_satisfies(Package, Constraint, Version).
C = #count{ Version : attr("version", Package, Version), version_satisfies(Package, Constraint, Version)},
C < 1.

attr("node_version_satisfies", Package, Constraint)
:- attr("version", Package, Version), version_satisfies(Package, Constraint, Version).
@@ -261,7 +253,7 @@ attr("node", Dependency) :- attr("node", Package), depends_on(Package, Dependenc
% dependencies) and get a two-node unconnected graph
needed(Package) :- attr("root", Package).
needed(Dependency) :- needed(Package), depends_on(Package, Dependency).
error(10, "'{0}' is not a valid dependency for any package in the DAG", Package)
error(1, "'{0}' is not a valid dependency for any package in the DAG", Package)
:- attr("node", Package),
not needed(Package).

@@ -270,7 +262,7 @@ error(10, "'{0}' is not a valid dependency for any package in the DAG", Package)
% this ensures that we solve around them
path(Parent, Child) :- depends_on(Parent, Child).
path(Parent, Descendant) :- path(Parent, A), depends_on(A, Descendant).
error(100, "Cyclic dependency detected between '{0}' and '{1}' (consider changing variants to avoid the cycle)", A, B)
error(2, "Cyclic dependency detected between '{0}' and '{1}'\n Consider changing variants to avoid the cycle", A, B)
:- path(A, B),
path(B, A).

@@ -280,7 +272,7 @@ error(100, "Cyclic dependency detected between '{0}' and '{1}' (consider changin
%-----------------------------------------------------------------------------
% Conflicts
%-----------------------------------------------------------------------------
error(1, Msg) :- attr("node", Package),
error(0, Msg) :- attr("node", Package),
conflict(Package, TriggerID, ConstraintID, Msg),
condition_holds(TriggerID),
condition_holds(ConstraintID),

@@ -309,14 +301,15 @@ attr("virtual_node", Virtual)
% The provider must be selected among the possible providers.
{ provider(Package, Virtual) : possible_provider(Package, Virtual) }
:- attr("virtual_node", Virtual).

error(100, "Cannot find valid provider for virtual {0}", Virtual)
error(2, "Cannot find valid provider for virtual {0}", Virtual)
:- attr("virtual_node", Virtual),
not provider(_, Virtual).

error(100, "Cannot select a single provider for virtual '{0}'", Virtual)
P = #count{ Package : provider(Package, Virtual)},
P < 1.
error(2, "Spec cannot include multiple providers for virtual '{0}'\n Requested '{1}' and '{2}'", Virtual, P1, P2)
:- attr("virtual_node", Virtual),
2 { provider(P, Virtual) }.
provider(P1, Virtual),
provider(P2, Virtual),
P1 < P2.

% virtual roots imply virtual nodes, and that one provider is a root
attr("virtual_node", Virtual) :- attr("virtual_root", Virtual).

@@ -405,13 +398,14 @@ possible_provider_weight(Dependency, Virtual, 100, "fallback") :- provider(Depen
{ external_version(Package, Version, Weight):
version_declared(Package, Version, Weight, "external") }
:- external(Package).
error(100, "Attempted to use external for '{0}' which does not satisfy any configured external spec", Package)
error(2, "Attempted to use external for '{0}' which does not satisfy any configured external spec", Package)
:- external(Package),
not external_version(Package, _, _).

error(100, "Attempted to use external for '{0}' which does not satisfy any configured external spec", Package)
error(2, "Attempted to use external for '{0}' which does not satisfy any configured external spec", Package)
:- external(Package),
2 { external_version(Package, Version, Weight) }.
external_version(Package, Version1, Weight1),
external_version(Package, Version2, Weight2),
(Version1, Weight1) < (Version2, Weight2). % see[1]

version_weight(Package, Weight) :- external_version(Package, Version, Weight).
attr("version", Package, Version) :- external_version(Package, Version, Weight).

@@ -446,7 +440,7 @@ external_conditions_hold(Package, LocalIndex) :-

% it cannot happen that a spec is external, but none of the external specs
% conditions hold.
error(100, "Attempted to use external for '{0}' which does not satisfy any configured external spec", Package)
error(2, "Attempted to use external for '{0}' which does not satisfy any configured external spec", Package)
:- external(Package),
not external_conditions_hold(Package, _).

@@ -494,7 +488,7 @@ requirement_weight(Package, Group, W) :-
requirement_policy(Package, Group, "any_of"),
requirement_group_satisfied(Package, Group).

error(100, "Cannot satisfy the requirements in packages.yaml for package '{0}'. You may want to delete them to proceed with concretization. To check where the requirements are defined run 'spack config blame packages'", Package) :-
error(2, "Cannot satisfy the requirements in packages.yaml for the '{0}' package. You may want to delete them to proceed with concretization. To check where the requirements are defined run 'spack config blame packages'", Package) :-
activate_requirement_rules(Package),
requirement_group(Package, X),
not requirement_group_satisfied(Package, X).
@@ -524,20 +518,20 @@ attr("variant_value", Package, Variant, Value) :-
attr("variant_propagate", Package, Variant, Value, _),
variant_possible_value(Package, Variant, Value).

error(100, "{0} and {1} cannot both propagate variant '{2}' to package {3} with values '{4}' and '{5}'", Source1, Source2, Variant, Package, Value1, Value2) :-
error(2, "{0} and {1} cannot both propagate variant '{2}' to package {3} with values '{4}' and '{5}'", Source1, Source2, Variant, Package, Value1, Value2) :-
attr("variant_propagate", Package, Variant, Value1, Source1),
attr("variant_propagate", Package, Variant, Value2, Source2),
variant(Package, Variant),
Value1 < Value2.
Value1 != Value2.

% a variant cannot be set if it is not a variant on the package
error(100, "Cannot set variant '{0}' for package '{1}' because the variant condition cannot be satisfied for the given spec", Variant, Package)
error(2, "Cannot set variant '{0}' for package '{1}' because the variant condition cannot be satisfied for the given spec", Variant, Package)
:- attr("variant_set", Package, Variant),
not variant(Package, Variant),
build(Package).

% a variant cannot take on a value if it is not a variant of the package
error(100, "Cannot set variant '{0}' for package '{1}' because the variant condition cannot be satisfied for the given spec", Variant, Package)
error(2, "Cannot set variant '{0}' for package '{1}' because the variant condition cannot be satisfied for the given spec", Variant, Package)
:- attr("variant_value", Package, Variant, _),
not variant(Package, Variant),
build(Package).

@@ -560,18 +554,20 @@ attr("variant_value", Package, Variant, Value) :-
build(Package).


error(100, "'{0}' required multiple values for single-valued variant '{1}'", Package, Variant)
error(2, "'{0}' required multiple values for single-valued variant '{1}'\n Requested 'Spec({1}={2})' and 'Spec({1}={3})'", Package, Variant, Value1, Value2)
:- attr("node", Package),
variant(Package, Variant),
variant_single_value(Package, Variant),
build(Package),
2 { attr("variant_value", Package, Variant, Value) }.

error(100, "No valid value for variant '{1}' of package '{0}'", Package, Variant)
attr("variant_value", Package, Variant, Value1),
attr("variant_value", Package, Variant, Value2),
Value1 < Value2. % see[1]
error(2, "No valid value for variant '{1}' of package '{0}'", Package, Variant)
:- attr("node", Package),
variant(Package, Variant),
build(Package),
not attr("variant_value", Package, Variant, _).
C = #count{ Value : attr("variant_value", Package, Variant, Value) },
C < 1.

% if a variant is set to anything, it is considered 'set'.
attr("variant_set", Package, Variant) :- attr("variant_set", Package, Variant, _).

@@ -579,7 +575,7 @@ attr("variant_set", Package, Variant) :- attr("variant_set", Package, Variant, _
% A variant cannot have a value that is not also a possible value
% This only applies to packages we need to build -- concrete packages may
% have been built w/different variants from older/different package versions.
error(10, "'Spec({1}={2})' is not a valid value for '{0}' variant '{1}'", Package, Variant, Value)
error(1, "'Spec({1}={2})' is not a valid value for '{0}' variant '{1}'", Package, Variant, Value)
:- attr("variant_value", Package, Variant, Value),
not variant_possible_value(Package, Variant, Value),
build(Package).

@@ -587,7 +583,7 @@ error(10, "'Spec({1}={2})' is not a valid value for '{0}' variant '{1}'", Packag
% Some multi valued variants accept multiple values from disjoint sets.
% Ensure that we respect that constraint and we don't pick values from more
% than one set at once
error(100, "{0} variant '{1}' cannot have values '{2}' and '{3}' as they come from disjoint value sets", Package, Variant, Value1, Value2)
error(2, "{0} variant '{1}' cannot have values '{2}' and '{3}' as they come from disjoing value sets", Package, Variant, Value1, Value2)
:- attr("variant_value", Package, Variant, Value1),
attr("variant_value", Package, Variant, Value2),
variant_value_from_disjoint_sets(Package, Variant, Value1, Set1),

@@ -622,7 +618,6 @@ variant_not_default(Package, Variant, Value)
% A default variant value that is not used
variant_default_not_used(Package, Variant, Value)
:- variant_default_value(Package, Variant, Value),
variant(Package, Variant),
not attr("variant_value", Package, Variant, Value),
attr("node", Package).

@@ -655,7 +650,7 @@ variant_default_value(Package, Variant, Value) :-

% Treat 'none' in a special way - it cannot be combined with other
% values even if the variant is multi-valued
error(100, "{0} variant '{1}' cannot have values '{2}' and 'none'", Package, Variant, Value)
error(2, "{0} variant '{1}' cannot have values '{2}' and 'none'", Package, Variant, Value)
:- attr("variant_value", Package, Variant, Value),
attr("variant_value", Package, Variant, "none"),
Value != "none",
@@ -703,6 +698,18 @@ attr("node_platform", Package, Platform)
% platform is set if set to anything
attr("node_platform_set", Package) :- attr("node_platform_set", Package, _).

% each node must have a single platform
error(2, "No valid platform found for {0}", Package)
:- attr("node", Package),
C = #count{ Platform : attr("node_platform", Package, Platform)},
C < 1.

error(2, "Cannot concretize {0} with multiple platforms\n Requested 'platform={1}' and 'platform={2}'", Package, Platform1, Platform2)
:- attr("node", Package),
attr("node_platform", Package, Platform1),
attr("node_platform", Package, Platform2),
Platform1 < Platform2. % see[1]

%-----------------------------------------------------------------------------
% OS semantics
%-----------------------------------------------------------------------------

@@ -712,14 +719,25 @@ os(OS) :- os(OS, _).
% one os per node
{ attr("node_os", Package, OS) : os(OS) } :- attr("node", Package).

error(2, "Cannot find valid operating system for '{0}'", Package)
:- attr("node", Package),
C = #count{ OS : attr("node_os", Package, OS)},
C < 1.

error(2, "Cannot concretize {0} with multiple operating systems\n Requested 'os={1}' and 'os={2}'", Package, OS1, OS2)
:- attr("node", Package),
attr("node_os", Package, OS1),
attr("node_os", Package, OS2),
OS1 < OS2. %see [1]

% can't have a non-buildable OS on a node we need to build
error(100, "Cannot select '{0} os={1}' (operating system '{1}' is not buildable)", Package, OS)
error(2, "Cannot concretize '{0} os={1}'. Operating system '{1}' is not buildable", Package, OS)
:- build(Package),
attr("node_os", Package, OS),
not buildable_os(OS).

% can't have dependencies on incompatible OS's
error(100, "{0} and dependency {1} have incompatible operating systems 'os={2}' and 'os={3}'", Package, Dependency, PackageOS, DependencyOS)
error(2, "{0} and dependency {1} have incompatible operating systems 'os={2}' and 'os={3}'", Package, Dependency, PackageOS, DependencyOS)
:- depends_on(Package, Dependency),
attr("node_os", Package, PackageOS),
attr("node_os", Dependency, DependencyOS),
@@ -763,8 +781,19 @@ attr("node_os", Package, OS) :- attr("node_os_set", Package, OS), attr("node", P
% Each node has only one target chosen among the known targets
{ attr("node_target", Package, Target) : target(Target) } :- attr("node", Package).

error(2, "Cannot find valid target for '{0}'", Package)
:- attr("node", Package),
C = #count{Target : attr("node_target", Package, Target)},
C < 1.

error(2, "Cannot concretize '{0}' with multiple targets\n Requested 'target={1}' and 'target={2}'", Package, Target1, Target2)
:- attr("node", Package),
attr("node_target", Package, Target1),
attr("node_target", Package, Target2),
Target1 < Target2. % see[1]

% If a node must satisfy a target constraint, enforce it
error(10, "'{0} target={1}' cannot satisfy constraint 'target={2}'", Package, Target, Constraint)
error(1, "'{0} target={1}' cannot satisfy constraint 'target={2}'", Package, Target, Constraint)
:- attr("node_target", Package, Target),
attr("node_target_satisfies", Package, Constraint),
not target_satisfies(Constraint, Target).

@@ -775,7 +804,7 @@ attr("node_target_satisfies", Package, Constraint)
:- attr("node_target", Package, Target), target_satisfies(Constraint, Target).

% If a node has a target, all of its dependencies must be compatible with that target
error(100, "Cannot find compatible targets for {0} and {1}", Package, Dependency)
error(2, "Cannot find compatible targets for {0} and {1}", Package, Dependency)
:- depends_on(Package, Dependency),
attr("node_target", Package, Target),
not node_target_compatible(Dependency, Target).

@@ -787,15 +816,25 @@ node_target_compatible(Package, Target)
:- attr("node_target", Package, MyTarget),
target_compatible(Target, MyTarget).

% target_compatible(T1, T2) means code for T2 can run on T1
% This order is dependent -> dependency in the node DAG, which
% is contravariant with the target DAG.
target_compatible(Target, Target) :- target(Target).
target_compatible(Child, Parent) :- target_parent(Child, Parent).
target_compatible(Descendent, Ancestor)
:- target_parent(Target, Ancestor),
target_compatible(Descendent, Target),
target(Target).

#defined target_satisfies/2.
#defined target_parent/2.
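For illustration only (not code from this diff): a minimal standalone sketch, with hypothetical microarchitecture names, of the target_compatible closure above: a dependency built for an ancestor target can be used by a dependent built for one of its descendants, but not the other way around.

target_parent = {"haswell": "x86_64", "skylake": "haswell"}  # hypothetical target DAG

def target_compatible(node_target, dependency_target):
    # walk from the dependent's target up through its ancestors
    current = node_target
    while current is not None:
        if current == dependency_target:
            return True
        current = target_parent.get(current)
    return False

print(target_compatible("skylake", "x86_64"))  # True: generic code runs on skylake
print(target_compatible("x86_64", "skylake"))  # False: skylake code needs a newer ISA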

% can't use targets on node if the compiler for the node doesn't support them
error(100, "{0} compiler '{2}@{3}' incompatible with 'target={1}'", Package, Target, Compiler, Version)
error(2, "{0} compiler '{2}@{3}' incompatible with 'target={1}'", Package, Target, Compiler, Version)
:- attr("node_target", Package, Target),
node_compiler(Package, CompilerID),
not compiler_supports_target(CompilerID, Target),
compiler_name(CompilerID, Compiler),
compiler_version(CompilerID, Version),
not compiler_supports_target(Compiler, Version, Target),
attr("node_compiler", Package, Compiler),
attr("node_compiler_version", Package, Compiler, Version),
build(Package).

% if a target is set explicitly, respect it

@@ -819,7 +858,7 @@ node_target_mismatch(Parent, Dependency)
not node_target_match(Parent, Dependency).

% disallow reusing concrete specs that don't have a compatible target
error(100, "'{0} target={1}' is not compatible with this machine", Package, Target)
error(2, "'{0} target={1}' is not compatible with this machine", Package, Target)
:- attr("node", Package),
attr("node_target", Package, Target),
not target(Target).
@@ -829,88 +868,73 @@ error(100, "'{0} target={1}' is not compatible with this machine", Package, Targ
%-----------------------------------------------------------------------------
% Compiler semantics
%-----------------------------------------------------------------------------
% There must be only one compiler set per built node.
{ node_compiler(Package, CompilerID) : compiler_id(CompilerID) } :-
compiler(Compiler) :- compiler_version(Compiler, _).

% There must be only one compiler set per built node. The compiler
% is chosen among available versions.
{ attr("node_compiler_version", Package, Compiler, Version) : compiler_version(Compiler, Version) } :-
attr("node", Package),
build(Package).

% Infer the compiler that matches a reused node
node_compiler(Package, CompilerID)
:- attr("node_compiler_version", Package, CompilerName, CompilerVersion),
attr("node", Package),
compiler_name(CompilerID, CompilerName),
compiler_version(CompilerID, CompilerVersion),
concrete(Package).

% Expand the internal attribute into "attr("node_compiler_version")
attr("node_compiler_version", Package, CompilerName, CompilerVersion)
:- node_compiler(Package, CompilerID),
compiler_name(CompilerID, CompilerName),
compiler_version(CompilerID, CompilerVersion),
build(Package).

attr("node_compiler", Package, CompilerName)
:- attr("node_compiler_version", Package, CompilerName, CompilerVersion).

error(100, "No valid compiler version found for '{0}'", Package)
error(2, "No valid compiler version found for '{0}'", Package)
:- attr("node", Package),
not node_compiler(Package, _).
C = #count{ Version : attr("node_compiler_version", Package, _, Version)},
C < 1.
error(2, "'{0}' compiler constraints '%{1}@{2}' and '%{3}@{4}' are incompatible", Package, Compiler1, Version1, Compiler2, Version2)
:- attr("node", Package),
attr("node_compiler_version", Package, Compiler1, Version1),
attr("node_compiler_version", Package, Compiler2, Version2),
(Compiler1, Version1) < (Compiler2, Version2). % see[1]

% Sometimes we just need to know the compiler and not the version
attr("node_compiler", Package, Compiler) :- attr("node_compiler_version", Package, Compiler, _).

% We can't have a compiler be enforced and select the version from another compiler
error(100, "Cannot select a single compiler for package {0}", Package)
:- attr("node", Package),
2 { attr("node_compiler_version", Package, C, V) }.
error(2, "Cannot concretize {0} with two compilers {1}@{2} and {3}@{4}", Package, C1, V1, C2, V2)
:- attr("node_compiler_version", Package, C1, V1),
attr("node_compiler_version", Package, C2, V2),
(C1, V1) != (C2, V2).

error(100, "Cannot concretize {0} with two compilers {1} and {2}@{3}", Package, Compiler1, Compiler2, Version)
error(2, "Cannot concretize {0} with two compilers {1} and {2}@{3}", Package, Compiler1, Compiler2, Version)
:- attr("node_compiler", Package, Compiler1),
attr("node_compiler_version", Package, Compiler2, Version),
Compiler1 != Compiler2.

% If the compiler of a node cannot be satisfied, raise
error(10, "No valid compiler for {0} satisfies '%{1}'", Package, Compiler)
error(1, "No valid compiler for {0} satisfies '%{1}'", Package, Compiler)
:- attr("node", Package),
attr("node_compiler_version_satisfies", Package, Compiler, ":"),
not compiler_version_satisfies(Compiler, ":", _).
C = #count{ Version : attr("node_compiler_version", Package, Compiler, Version), compiler_version_satisfies(Compiler, ":", Version) },
C < 1.

% If the compiler of a node must satisfy a constraint, then its version
% must be chosen among the ones that satisfy said constraint
error(100, "No valid version for '{0}' compiler '{1}' satisfies '@{2}'", Package, Compiler, Constraint)
error(2, "No valid version for '{0}' compiler '{1}' satisfies '@{2}'", Package, Compiler, Constraint)
:- attr("node", Package),
attr("node_compiler_version_satisfies", Package, Compiler, Constraint),
not compiler_version_satisfies(Compiler, Constraint, _).

error(100, "No valid version for '{0}' compiler '{1}' satisfies '@{2}'", Package, Compiler, Constraint)
:- attr("node", Package),
attr("node_compiler_version_satisfies", Package, Compiler, Constraint),
not compiler_version_satisfies(Compiler, Constraint, ID),
node_compiler(Package, ID).
C = #count{ Version : attr("node_compiler_version", Package, Compiler, Version), compiler_version_satisfies(Compiler, Constraint, Version) },
C < 1.

% If the node is associated with a compiler and the compiler satisfy a constraint, then
% the compiler associated with the node satisfy the same constraint
attr("node_compiler_version_satisfies", Package, Compiler, Constraint)
:- node_compiler(Package, CompilerID),
compiler_name(CompilerID, Compiler),
compiler_version_satisfies(Compiler, Constraint, CompilerID).
:- attr("node_compiler_version", Package, Compiler, Version),
compiler_version_satisfies(Compiler, Constraint, Version).

#defined compiler_version_satisfies/3.

% If the compiler version was set from the command line,
% respect it verbatim
:- attr("node_compiler_version_set", Package, Compiler, Version),
not attr("node_compiler_version", Package, Compiler, Version).

:- attr("node_compiler_set", Package, Compiler),
not attr("node_compiler_version", Package, Compiler, _).
attr("node_compiler_version", Package, Compiler, Version) :-
attr("node_compiler_version_set", Package, Compiler, Version).

% Cannot select a compiler if it is not supported on the OS
% Compilers that are explicitly marked as allowed
% are excluded from this check
error(100, "{0} compiler '%{1}@{2}' incompatible with 'os={3}'", Package, Compiler, Version, OS)
:- attr("node_os", Package, OS),
node_compiler(Package, CompilerID),
compiler_name(CompilerID, Compiler),
compiler_version(CompilerID, Version),
not compiler_os(CompilerID, OS),
error(2, "{0} compiler '%{1}@{2}' incompatible with 'os={3}'", Package, Compiler, Version, OS)
:- attr("node_compiler_version", Package, Compiler, Version),
attr("node_os", Package, OS),
not compiler_supports_os(Compiler, Version, OS),
not allow_compiler(Compiler, Version),
build(Package).

@@ -918,8 +942,8 @@ error(100, "{0} compiler '%{1}@{2}' incompatible with 'os={3}'", Package, Compil
% same compiler there's a mismatch.
compiler_match(Package, Dependency)
:- depends_on(Package, Dependency),
node_compiler(Package, CompilerID),
node_compiler(Dependency, CompilerID).
attr("node_compiler_version", Package, Compiler, Version),
attr("node_compiler_version", Dependency, Compiler, Version).

compiler_mismatch(Package, Dependency)
:- depends_on(Package, Dependency),

@@ -931,32 +955,25 @@ compiler_mismatch_required(Package, Dependency)
attr("node_compiler_set", Dependency, _),
not compiler_match(Package, Dependency).

#defined compiler_os/3.
#defined compiler_supports_os/3.
#defined allow_compiler/2.

% compilers weighted by preference according to packages.yaml
compiler_weight(Package, Weight)
:- node_compiler(Package, CompilerID),
compiler_name(CompilerID, Compiler),
compiler_version(CompilerID, V),
:- attr("node_compiler_version", Package, Compiler, V),
node_compiler_preference(Package, Compiler, V, Weight).
compiler_weight(Package, Weight)
:- node_compiler(Package, CompilerID),
compiler_name(CompilerID, Compiler),
compiler_version(CompilerID, V),
:- attr("node_compiler_version", Package, Compiler, V),
not node_compiler_preference(Package, Compiler, V, _),
default_compiler_preference(CompilerID, Weight).
default_compiler_preference(Compiler, V, Weight).
compiler_weight(Package, 100)
:- node_compiler(Package, CompilerID),
compiler_name(CompilerID, Compiler),
compiler_version(CompilerID, V),
not node_compiler_preference(Package, Compiler, V, _),
not default_compiler_preference(CompilerID, _).
:- attr("node_compiler_version", Package, Compiler, Version),
not node_compiler_preference(Package, Compiler, Version, _),
not default_compiler_preference(Compiler, Version, _).

% For the time being, be strict and reuse only if the compiler match one we have on the system
error(100, "Compiler {1}@{2} requested for {0} cannot be found. Set install_missing_compilers:true if intended.", Package, Compiler, Version)
:- attr("node_compiler_version", Package, Compiler, Version),
not node_compiler(Package, _).
error(2, "Compiler {1}@{2} requested for {0} cannot be found. Set install_missing_compilers:true if intended.", Package, Compiler, Version)
:- attr("node_compiler_version", Package, Compiler, Version), not compiler_version(Compiler, Version).

#defined node_compiler_preference/4.
#defined default_compiler_preference/3.

@@ -968,11 +985,10 @@ error(100, "Compiler {1}@{2} requested for {0} cannot be found. Set install_miss
% propagate flags when compiler match
can_inherit_flags(Package, Dependency, FlagType)
:- depends_on(Package, Dependency),
node_compiler(Package, CompilerID),
node_compiler(Dependency, CompilerID),
attr("node_compiler", Package, Compiler),
attr("node_compiler", Dependency, Compiler),
not attr("node_flag_set", Dependency, FlagType, _),
compiler_id(CompilerID),
flag_type(FlagType).
compiler(Compiler), flag_type(FlagType).

node_flag_inherited(Dependency, FlagType, Flag)
:- attr("node_flag_set", Package, FlagType, Flag), can_inherit_flags(Package, Dependency, FlagType),

@@ -982,14 +998,14 @@ node_flag_inherited(Dependency, FlagType, Flag)
can_inherit_flags(Package, Dependency, FlagType),
attr("node_flag_propagate", Package, FlagType).

error(100, "{0} and {1} cannot both propagate compiler flags '{2}' to {3}", Source1, Source2, Package, FlagType) :-
error(2, "{0} and {1} cannot both propagate compiler flags '{2}' to {3}", Source1, Source2, Package, FlagType) :-
depends_on(Source1, Package),
depends_on(Source2, Package),
attr("node_flag_propagate", Source1, FlagType),
attr("node_flag_propagate", Source2, FlagType),
can_inherit_flags(Source1, Package, FlagType),
can_inherit_flags(Source2, Package, FlagType),
Source1 < Source2.
Source1 != Source2.

% remember where flags came from
attr("node_flag_source", Package, FlagType, Package) :- attr("node_flag_set", Package, FlagType, _).

@@ -999,21 +1015,19 @@ attr("node_flag_source", Dependency, FlagType, Q)

% compiler flags from compilers.yaml are put on nodes if compiler matches
attr("node_flag", Package, FlagType, Flag)
:- compiler_flag(CompilerID, FlagType, Flag),
node_compiler(Package, CompilerID),
:- compiler_version_flag(Compiler, Version, FlagType, Flag),
attr("node_compiler_version", Package, Compiler, Version),
flag_type(FlagType),
compiler_id(CompilerID),
compiler_name(CompilerID, CompilerName),
compiler_version(CompilerID, Version).
compiler(Compiler),
compiler_version(Compiler, Version).

attr("node_flag_compiler_default", Package)
:- not attr("node_flag_set", Package, FlagType, _),
compiler_flag(CompilerID, FlagType, Flag),
node_compiler(Package, CompilerID),
compiler_version_flag(Compiler, Version, FlagType, Flag),
attr("node_compiler_version", Package, Compiler, Version),
flag_type(FlagType),
compiler_id(CompilerID),
compiler_name(CompilerID, CompilerName),
compiler_version(CompilerID, Version).
compiler(Compiler),
compiler_version(Compiler, Version).

% if a flag is set to something or inherited, it's included
attr("node_flag", Package, FlagType, Flag) :- attr("node_flag_set", Package, FlagType, Flag).

@@ -1024,7 +1038,7 @@ attr("node_flag", Package, FlagType, Flag)
attr("no_flags", Package, FlagType)
:- not attr("node_flag", Package, FlagType, _), attr("node", Package), flag_type(FlagType).

#defined compiler_flag/3.
#defined compiler_version_flag/4.

%-----------------------------------------------------------------------------

@@ -1040,7 +1054,7 @@ attr("no_flags", Package, FlagType)
% You can't install a hash, if it is not installed
:- attr("hash", Package, Hash), not installed_hash(Package, Hash).
% This should be redundant given the constraint above
:- attr("node", Package), 2 { attr("hash", Package, Hash) }.
:- attr("hash", Package, Hash1), attr("hash", Package, Hash2), Hash1 != Hash2.

% if a hash is selected, we impose all the constraints that implies
impose(Hash) :- attr("hash", Package, Hash).

@@ -1091,14 +1105,18 @@ build_priority(Package, 0) :- attr("node", Package), not optimize_for_reuse().
% Optimization to avoid errors
%-----------------------------------------------------------------
% Some errors are handled as rules instead of constraints because
% it allows us to explain why something failed.
% it allows us to explain why something failed. Here we optimize
% HEAVILY against the facts generated by those rules.
#minimize{ 0@1000: #true}.
#minimize{ 0@1001: #true}.
#minimize{ 0@1002: #true}.

#minimize{ Weight@1000,Msg: error(Weight, Msg) }.
#minimize{ Weight@1000,Msg,Arg1: error(Weight, Msg, Arg1) }.
#minimize{ Weight@1000,Msg,Arg1,Arg2: error(Weight, Msg, Arg1, Arg2) }.
#minimize{ Weight@1000,Msg,Arg1,Arg2,Arg3: error(Weight, Msg, Arg1, Arg2, Arg3) }.
#minimize{ Weight@1000,Msg,Arg1,Arg2,Arg3,Arg4: error(Weight, Msg, Arg1, Arg2, Arg3, Arg4) }.
#minimize{ 1000@1000+Priority,Msg: error(Priority, Msg) }.
#minimize{ 1000@1000+Priority,Msg,Arg1: error(Priority, Msg, Arg1) }.
#minimize{ 1000@1000+Priority,Msg,Arg1,Arg2: error(Priority, Msg, Arg1, Arg2) }.
#minimize{ 1000@1000+Priority,Msg,Arg1,Arg2,Arg3: error(Priority, Msg, Arg1, Arg2, Arg3) }.
#minimize{ 1000@1000+Priority,Msg,Arg1,Arg2,Arg3,Arg4: error(Priority, Msg, Arg1, Arg2, Arg3, Arg4) }.
#minimize{ 1000@1000+Priority,Msg,Arg1,Arg2,Arg3,Arg4,Arg5: error(Priority, Msg, Arg1, Arg2, Arg3, Arg4, Arg5) }.
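For illustration only (not code from this diff): a minimal standalone sketch of how the replacement #minimize statements rank errors, on the usual clingo reading that higher optimization levels dominate lower ones: each error adds a fixed weight of 1000 at level 1000 plus its priority, so a single priority-2 error outweighs any number of priority-1 errors.

from collections import Counter

def error_penalties(errors):
    # errors: list of (priority, message); one bucket per optimization level
    levels = Counter()
    for priority, _message in errors:
        levels[1000 + priority] += 1000
    return dict(levels)

print(error_penalties([(1, "bad target"), (1, "bad os"), (2, "no provider")]))
# {1001: 2000, 1002: 1000}  -- the level-1002 bucket is compared first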

%-----------------------------------------------------------------------------
% How to optimize the spec (high to low priority)

@@ -1293,29 +1311,13 @@ opt_criterion(5, "non-preferred targets").
%-----------------
% Domain heuristic
%-----------------
#heuristic literal_solved(ID) : literal(ID). [1, sign]
#heuristic literal_solved(ID) : literal(ID). [50, init]
#heuristic attr("hash", Package, Hash) : attr("root", Package). [45, init]

#heuristic attr("version", Package, Version) : version_declared(Package, Version, 0), attr("root", Package). [40, true]
#heuristic version_weight(Package, 0) : version_declared(Package, Version, 0), attr("root", Package). [40, true]
#heuristic attr("variant_value", Package, Variant, Value) : variant_default_value(Package, Variant, Value), attr("root", Package). [40, true]
#heuristic attr("node_target", Package, Target) : package_target_weight(Target, Package, 0), attr("root", Package). [40, true]
#heuristic node_target_weight(Package, 0) : attr("root", Package). [40, true]
#heuristic node_compiler(Package, CompilerID) : default_compiler_preference(ID, 0), compiler_id(ID), attr("root", Package). [40, true]

#heuristic provider(Package, Virtual) : possible_provider_weight(Package, Virtual, 0, _), attr("virtual_node", Virtual). [30, true]
#heuristic provider_weight(Package, Virtual, 0, R) : possible_provider_weight(Package, Virtual, 0, R), attr("virtual_node", Virtual). [30, true]
#heuristic attr("node", Package) : possible_provider_weight(Package, Virtual, 0, _), attr("virtual_node", Virtual). [30, true]

#heuristic attr("version", Package, Version) : version_declared(Package, Version, 0), attr("node", Package). [20, true]
#heuristic version_weight(Package, 0) : version_declared(Package, Version, 0), attr("node", Package). [20, true]

#heuristic attr("node_target", Package, Target) : package_target_weight(Target, Package, 0), attr("node", Package). [20, true]
#heuristic node_target_weight(Package, 0) : attr("node", Package). [20, true]
#heuristic node_compiler(Package, CompilerID) : default_compiler_preference(ID, 0), compiler_id(ID), attr("node", Package). [15, true]

#heuristic attr("version", Package, Version) : version_declared(Package, Version, 0), attr("node", Package). [10, true]
#heuristic version_weight(Package, 0) : version_declared(Package, Version, 0), attr("node", Package). [10, true]
#heuristic attr("node_target", Package, Target) : package_target_weight(Target, Package, 0), attr("node", Package). [10, true]
#heuristic node_target_weight(Package, 0) : attr("node", Package). [10, true]
#heuristic attr("variant_value", Package, Variant, Value) : variant_default_value(Package, Variant, Value), attr("node", Package). [10, true]
#heuristic provider(Package, Virtual) : possible_provider_weight(Package, Virtual, 0, _), attr("virtual_node", Virtual). [10, true]
#heuristic attr("node", Package) : possible_provider_weight(Package, Virtual, 0, _), attr("virtual_node", Virtual). [10, true]
#heuristic attr("node_os", Package, OS) : buildable_os(OS). [10, true]

%-----------
@@ -23,5 +23,6 @@
#show error/4.
#show error/5.
#show error/6.
#show error/7.

% debug

@@ -4115,17 +4115,6 @@ def write(s, c=None):
clr.cwrite(f, stream=out, color=color)

def write_attribute(spec, attribute, color):
attribute = attribute.lower()

sig = ""
if attribute.startswith(("@", "%", "/")):
# color sigils that are inside braces
sig = attribute[0]
attribute = attribute[1:]
elif attribute.startswith("arch="):
sig = " arch=" # include space as separator
attribute = attribute[5:]

current = spec
if attribute.startswith("^"):
attribute = attribute[1:]

@@ -4134,6 +4123,16 @@ def write_attribute(spec, attribute, color):

if attribute == "":
raise SpecFormatStringError("Format string attributes must be non-empty")
attribute = attribute.lower()

sig = ""
if attribute[0] in "@%/":
# color sigils that are inside braces
sig = attribute[0]
attribute = attribute[1:]
elif attribute.startswith("arch="):
sig = " arch=" # include space as separator
attribute = attribute[5:]

parts = attribute.split(".")
assert parts

@@ -4163,9 +4162,9 @@ def write_attribute(spec, attribute, color):
col = "#"
if ":" in attribute:
_, length = attribute.split(":")
write(sig + morph(spec, current.dag_hash(int(length))), col)
write(sig + morph(spec, spec.dag_hash(int(length))), col)
else:
write(sig + morph(spec, current.dag_hash()), col)
write(sig + morph(spec, spec.dag_hash()), col)
return
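For illustration only (not code from this diff): a minimal standalone sketch, with a hypothetical hash string, of the "{hash:N}" handling above: an optional ":length" suffix truncates the DAG hash used in format strings such as "{name}-{version}-{hash:7}".

def format_hash(full_hash, attribute):
    # attribute is either "hash" or "hash:<length>"
    if ":" in attribute:
        _, length = attribute.split(":")
        return full_hash[: int(length)]
    return full_hash

print(format_hash("abcdef1234567890", "hash:7"))  # abcdef1
print(format_hash("abcdef1234567890", "hash"))    # abcdef1234567890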

# Iterate over components using getattr to get next element

Some files were not shown because too many files have changed in this diff