Compare commits


6 Commits

Author          SHA1        Message                                                                  Date
Gregory Becker  5a23819165  style                                                                    2023-03-23 12:48:02 -07:00
Gregory Becker  9df3b57f1f  update broadcast test to test excludes as well                           2023-03-23 11:24:47 -07:00
Gregory Becker  788ad561bd  fix excludes with bcast                                                  2023-03-23 11:24:06 -07:00
becker33        c8c025215d  [@spackbot] updating style on behalf of becker33                         2023-03-13 17:14:28 +00:00
Gregory Becker  1df6a3196a  matrix broadcast: more robust test                                       2023-02-23 15:47:11 -08:00
Gregory Becker  7014eb3236  matrices: broadcast key combinatorially applies to all nodes in matrix  2023-02-23 15:40:34 -08:00
1905 changed files with 13227 additions and 27218 deletions

View File

@@ -9,7 +9,7 @@ body:
Thanks for taking the time to report this build failure. To proceed with the report please:
1. Title the issue `Installation issue: <name-of-the-package>`.
2. Provide the information required below.
We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively!
- type: textarea
id: reproduce
@@ -29,9 +29,7 @@ body:
description: |
Please post the error message from spack inside the `<details>` tag below:
value: |
<details><summary>Error message</summary>
<pre>
<details><summary>Error message</summary><pre>
...
</pre></details>
validations:
@@ -55,7 +53,7 @@ body:
Please upload the following files:
* **`spack-build-out.txt`**
* **`spack-build-env.txt`**
They should be present in the stage directory of the failing build. Also upload any `config.log` or similar file if one exists.
- type: markdown
attributes:

View File

@@ -1,4 +1,4 @@
name: "\U0001F38A Feature request"
name: "\U0001F38A Feature request"
description: Suggest adding a feature that is not yet in Spack
labels: [feature]
body:
@@ -29,11 +29,13 @@ body:
attributes:
label: General information
options:
- label: I have run `spack --version` and reported the version of Spack
required: true
- label: I have searched the issues of this repo and believe this is not a duplicate
required: true
- type: markdown
attributes:
value: |
If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on [our Slack](https://slack.spack.io/) first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.
Other than that, thanks for taking the time to contribute to Spack!

View File

@@ -21,9 +21,7 @@ body:
description: |
Please post the error message from spack inside the `<details>` tag below:
value: |
<details><summary>Error message</summary>
<pre>
<details><summary>Error message</summary><pre>
...
</pre></details>
validations:

View File

@@ -19,13 +19,13 @@ jobs:
package-audits:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
- uses: actions/setup-python@57ded4d7d5e986d7296eab16560982c6dd7c923b # @v2
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435 # @v2
with:
python-version: ${{inputs.python_version}}
- name: Install Python packages
run: |
pip install --upgrade pip setuptools pytest coverage[toml]
pip install --upgrade pip six setuptools pytest codecov coverage[toml]
- name: Package audits (with coverage)
if: ${{ inputs.with_coverage == 'true' }}
run: |
@@ -38,7 +38,7 @@ jobs:
run: |
. share/spack/setup-env.sh
$(which spack) audit packages
- uses: codecov/codecov-action@894ff025c7b54547a9a2a1e9f228beae737ad3c2 # @v2.1.0
- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70 # @v2.1.0
if: ${{ inputs.with_coverage == 'true' }}
with:
flags: unittests,linux,audits

View File

@@ -24,7 +24,7 @@ jobs:
make patch unzip which xz python3 python3-devel tree \
cmake bison bison-devel libstdc++-static
- name: Checkout
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
with:
fetch-depth: 0
- name: Setup non-root user
@@ -62,7 +62,7 @@ jobs:
make patch unzip xz-utils python3 python3-dev tree \
cmake bison
- name: Checkout
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
with:
fetch-depth: 0
- name: Setup non-root user
@@ -99,7 +99,7 @@ jobs:
bzip2 curl file g++ gcc gfortran git gnupg2 gzip \
make patch unzip xz-utils python3 python3-dev tree
- name: Checkout
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
with:
fetch-depth: 0
- name: Setup non-root user
@@ -133,7 +133,7 @@ jobs:
make patch unzip which xz python3 python3-devel tree \
cmake bison
- name: Checkout
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
with:
fetch-depth: 0
- name: Setup repo
@@ -158,7 +158,7 @@ jobs:
run: |
brew install cmake bison@2.7 tree
- name: Checkout
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
- name: Bootstrap clingo
run: |
source share/spack/setup-env.sh
@@ -179,7 +179,7 @@ jobs:
run: |
brew install tree
- name: Checkout
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
- name: Bootstrap clingo
run: |
set -ex
@@ -204,7 +204,7 @@ jobs:
runs-on: ubuntu-20.04
steps:
- name: Checkout
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
with:
fetch-depth: 0
- name: Setup repo
@@ -247,7 +247,7 @@ jobs:
bzip2 curl file g++ gcc patchelf gfortran git gzip \
make patch unzip xz-utils python3 python3-dev tree
- name: Checkout
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
with:
fetch-depth: 0
- name: Setup non-root user
@@ -283,7 +283,7 @@ jobs:
make patch unzip xz-utils python3 python3-dev tree \
gawk
- name: Checkout
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
with:
fetch-depth: 0
- name: Setup non-root user
@@ -316,7 +316,7 @@ jobs:
# Remove GnuPG since we want to bootstrap it
sudo rm -rf /usr/local/bin/gpg
- name: Checkout
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
- name: Bootstrap GnuPG
run: |
source share/spack/setup-env.sh
@@ -333,7 +333,7 @@ jobs:
# Remove GnuPG since we want to bootstrap it
sudo rm -rf /usr/local/bin/gpg
- name: Checkout
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
- name: Bootstrap GnuPG
run: |
source share/spack/setup-env.sh

View File

@@ -45,18 +45,12 @@ jobs:
[leap15, 'linux/amd64,linux/arm64,linux/ppc64le', 'opensuse/leap:15'],
[ubuntu-bionic, 'linux/amd64,linux/arm64,linux/ppc64le', 'ubuntu:18.04'],
[ubuntu-focal, 'linux/amd64,linux/arm64,linux/ppc64le', 'ubuntu:20.04'],
[ubuntu-jammy, 'linux/amd64,linux/arm64,linux/ppc64le', 'ubuntu:22.04'],
[almalinux8, 'linux/amd64,linux/arm64,linux/ppc64le', 'almalinux:8'],
[almalinux9, 'linux/amd64,linux/arm64,linux/ppc64le', 'almalinux:9'],
[rockylinux8, 'linux/amd64,linux/arm64', 'rockylinux:8'],
[rockylinux9, 'linux/amd64,linux/arm64,linux/ppc64le', 'rockylinux:9'],
[fedora37, 'linux/amd64,linux/arm64,linux/ppc64le', 'fedora:37'],
[fedora38, 'linux/amd64,linux/arm64,linux/ppc64le', 'fedora:38']]
[ubuntu-jammy, 'linux/amd64,linux/arm64,linux/ppc64le', 'ubuntu:22.04']]
name: Build ${{ matrix.dockerfile[0] }}
if: github.repository == 'spack/spack'
steps:
- name: Checkout
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2
- name: Set Container Tag Normal (Nightly)
run: |
@@ -95,7 +89,7 @@ jobs:
uses: docker/setup-qemu-action@e81a89b1732b9c48d79cd809d8d81d79c4647a18 # @v1
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@4b4e9c3e2d4531116a6f8ba8e71fc6e2cb6e6c8c # @v1
uses: docker/setup-buildx-action@f03ac48505955848960e80bbb68046aa35c7b9e7 # @v1
- name: Log in to GitHub Container Registry
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # @v1

View File

@@ -35,7 +35,7 @@ jobs:
core: ${{ steps.filter.outputs.core }}
packages: ${{ steps.filter.outputs.packages }}
steps:
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2
if: ${{ github.event_name == 'push' }}
with:
fetch-depth: 0

View File

@@ -47,10 +47,10 @@ jobs:
on_develop: false
steps:
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2
with:
fetch-depth: 0
- uses: actions/setup-python@57ded4d7d5e986d7296eab16560982c6dd7c923b # @v2
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435 # @v2
with:
python-version: ${{ matrix.python-version }}
- name: Install System packages
@@ -62,7 +62,7 @@ jobs:
cmake bison libbison-dev kcov
- name: Install Python packages
run: |
pip install --upgrade pip setuptools pytest pytest-xdist pytest-cov
pip install --upgrade pip six setuptools pytest codecov[toml] pytest-xdist pytest-cov
pip install --upgrade flake8 "isort>=4.3.5" "mypy>=0.900" "click" "black"
- name: Setup git configuration
run: |
@@ -87,17 +87,17 @@ jobs:
UNIT_TEST_COVERAGE: ${{ matrix.python-version == '3.11' }}
run: |
share/spack/qa/run-unit-tests
- uses: codecov/codecov-action@894ff025c7b54547a9a2a1e9f228beae737ad3c2
- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
with:
flags: unittests,linux,${{ matrix.concretizer }}
# Test shell integration
shell:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2
with:
fetch-depth: 0
- uses: actions/setup-python@57ded4d7d5e986d7296eab16560982c6dd7c923b # @v2
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435 # @v2
with:
python-version: '3.11'
- name: Install System packages
@@ -107,7 +107,7 @@ jobs:
sudo apt-get install -y coreutils kcov csh zsh tcsh fish dash bash
- name: Install Python packages
run: |
pip install --upgrade pip setuptools pytest coverage[toml] pytest-xdist
pip install --upgrade pip six setuptools pytest codecov coverage[toml] pytest-xdist
- name: Setup git configuration
run: |
# Need this for the git tests to succeed.
@@ -118,7 +118,7 @@ jobs:
COVERAGE: true
run: |
share/spack/qa/run-shell-tests
- uses: codecov/codecov-action@894ff025c7b54547a9a2a1e9f228beae737ad3c2
- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
with:
flags: shelltests,linux
@@ -133,7 +133,7 @@ jobs:
dnf install -y \
bzip2 curl file gcc-c++ gcc gcc-gfortran git gnupg2 gzip \
make patch tcl unzip which xz
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2
- name: Setup repo and non-root user
run: |
git --version
@@ -151,10 +151,10 @@ jobs:
clingo-cffi:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2
with:
fetch-depth: 0
- uses: actions/setup-python@57ded4d7d5e986d7296eab16560982c6dd7c923b # @v2
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435 # @v2
with:
python-version: '3.11'
- name: Install System packages
@@ -163,7 +163,7 @@ jobs:
sudo apt-get -y install coreutils cvs gfortran graphviz gnupg2 mercurial ninja-build kcov
- name: Install Python packages
run: |
pip install --upgrade pip setuptools pytest coverage[toml] pytest-cov clingo pytest-xdist
pip install --upgrade pip six setuptools pytest codecov coverage[toml] pytest-cov clingo pytest-xdist
- name: Setup git configuration
run: |
# Need this for the git tests to succeed.
@@ -175,7 +175,7 @@ jobs:
SPACK_TEST_SOLVER: clingo
run: |
share/spack/qa/run-unit-tests
- uses: codecov/codecov-action@894ff025c7b54547a9a2a1e9f228beae737ad3c2 # @v2.1.0
- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70 # @v2.1.0
with:
flags: unittests,linux,clingo
# Run unit tests on MacOS
@@ -185,16 +185,16 @@ jobs:
matrix:
python-version: ["3.10"]
steps:
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2
with:
fetch-depth: 0
- uses: actions/setup-python@57ded4d7d5e986d7296eab16560982c6dd7c923b # @v2
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435 # @v2
with:
python-version: ${{ matrix.python-version }}
- name: Install Python packages
run: |
pip install --upgrade pip setuptools
pip install --upgrade pytest coverage[toml] pytest-xdist pytest-cov
pip install --upgrade pip six setuptools
pip install --upgrade pytest codecov coverage[toml] pytest-xdist pytest-cov
- name: Setup Homebrew packages
run: |
brew install dash fish gcc gnupg2 kcov
@@ -210,6 +210,6 @@ jobs:
$(which spack) solve zlib
common_args=(--dist loadfile --tx '4*popen//python=./bin/spack-tmpconfig python -u ./bin/spack python' -x)
$(which spack) unit-test --cov --cov-config=pyproject.toml --cov-report=xml:coverage.xml "${common_args[@]}"
- uses: codecov/codecov-action@894ff025c7b54547a9a2a1e9f228beae737ad3c2
- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
with:
flags: unittests,macos

View File

@@ -18,8 +18,8 @@ jobs:
validate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
- uses: actions/setup-python@57ded4d7d5e986d7296eab16560982c6dd7c923b # @v2
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435 # @v2
with:
python-version: '3.11'
cache: 'pip'
@@ -35,16 +35,16 @@ jobs:
style:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2
with:
fetch-depth: 0
- uses: actions/setup-python@57ded4d7d5e986d7296eab16560982c6dd7c923b # @v2
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435 # @v2
with:
python-version: '3.11'
cache: 'pip'
- name: Install Python packages
run: |
python3 -m pip install --upgrade pip setuptools types-six black==23.1.0 mypy isort clingo flake8
python3 -m pip install --upgrade pip six setuptools types-six black==23.1.0 mypy isort clingo flake8
- name: Setup git configuration
run: |
# Need this for the git tests to succeed.
@@ -58,28 +58,3 @@ jobs:
with:
with_coverage: ${{ inputs.with_coverage }}
python_version: '3.11'
# Check that spack can bootstrap the development environment on Python 3.6 - RHEL8
bootstrap-dev-rhel8:
runs-on: ubuntu-latest
container: registry.access.redhat.com/ubi8/ubi
steps:
- name: Install dependencies
run: |
dnf install -y \
bzip2 curl file gcc-c++ gcc gcc-gfortran git gnupg2 gzip \
make patch tcl unzip which xz
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
- name: Setup repo and non-root user
run: |
git --version
git fetch --unshallow
. .github/workflows/setup_git.sh
useradd spack-test
chown -R spack-test .
- name: Bootstrap Spack development environment
shell: runuser -u spack-test -- bash {0}
run: |
source share/spack/setup-env.sh
spack -d bootstrap now --dev
spack style -t black
spack unit-test -V

View File

@@ -15,15 +15,15 @@ jobs:
unit-tests:
runs-on: windows-latest
steps:
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
with:
fetch-depth: 0
- uses: actions/setup-python@57ded4d7d5e986d7296eab16560982c6dd7c923b
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435
with:
python-version: 3.9
- name: Install Python packages
run: |
python -m pip install --upgrade pip pywin32 setuptools pytest-cov clingo
python -m pip install --upgrade pip six pywin32 setuptools codecov pytest-cov clingo
- name: Create local develop
run: |
./.github/workflows/setup_git.ps1
@@ -33,21 +33,21 @@ jobs:
./share/spack/qa/validate_last_exit.ps1
coverage combine -a
coverage xml
- uses: codecov/codecov-action@894ff025c7b54547a9a2a1e9f228beae737ad3c2
- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
with:
flags: unittests,windows
unit-tests-cmd:
runs-on: windows-latest
steps:
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
with:
fetch-depth: 0
- uses: actions/setup-python@57ded4d7d5e986d7296eab16560982c6dd7c923b
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435
with:
python-version: 3.9
- name: Install Python packages
run: |
python -m pip install --upgrade pip pywin32 setuptools coverage pytest-cov clingo
python -m pip install --upgrade pip six pywin32 setuptools codecov coverage pytest-cov clingo
- name: Create local develop
run: |
./.github/workflows/setup_git.ps1
@@ -57,24 +57,99 @@ jobs:
./share/spack/qa/validate_last_exit.ps1
coverage combine -a
coverage xml
- uses: codecov/codecov-action@894ff025c7b54547a9a2a1e9f228beae737ad3c2
- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
with:
flags: unittests,windows
build-abseil:
runs-on: windows-latest
steps:
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
with:
fetch-depth: 0
- uses: actions/setup-python@57ded4d7d5e986d7296eab16560982c6dd7c923b
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435
with:
python-version: 3.9
- name: Install Python packages
run: |
python -m pip install --upgrade pip pywin32 setuptools coverage
python -m pip install --upgrade pip six pywin32 setuptools codecov coverage
- name: Build Test
run: |
spack compiler find
spack external find cmake
spack external find ninja
spack -d install abseil-cpp
# TODO: johnwparent - reduce the size of the installer operations
# make-installer:
# runs-on: windows-latest
# steps:
# - name: Disable Windows Symlinks
# run: |
# git config --global core.symlinks false
# shell:
# powershell
# - uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
# with:
# fetch-depth: 0
# - uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435
# with:
# python-version: 3.9
# - name: Install Python packages
# run: |
# python -m pip install --upgrade pip six pywin32 setuptools
# - name: Add Light and Candle to Path
# run: |
# $env:WIX >> $GITHUB_PATH
# - name: Run Installer
# run: |
# ./share/spack/qa/setup_spack_installer.ps1
# spack make-installer -s . -g SILENT pkg
# echo "installer_root=$((pwd).Path)" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf8 -Append
# env:
# ProgressPreference: SilentlyContinue
# - uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb
# with:
# name: Windows Spack Installer Bundle
# path: ${{ env.installer_root }}\pkg\Spack.exe
# - uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb
# with:
# name: Windows Spack Installer
# path: ${{ env.installer_root}}\pkg\Spack.msi
# execute-installer:
# needs: make-installer
# runs-on: windows-latest
# defaults:
# run:
# shell: pwsh
# steps:
# - uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435
# with:
# python-version: 3.9
# - name: Install Python packages
# run: |
# python -m pip install --upgrade pip six pywin32 setuptools
# - name: Setup installer directory
# run: |
# mkdir -p spack_installer
# echo "spack_installer=$((pwd).Path)\spack_installer" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf8 -Append
# - uses: actions/download-artifact@v3
# with:
# name: Windows Spack Installer Bundle
# path: ${{ env.spack_installer }}
# - name: Execute Bundled Installer
# run: |
# $proc = Start-Process ${{ env.spack_installer }}\spack.exe "/install /quiet" -Passthru
# $handle = $proc.Handle # cache proc.Handle
# $proc.WaitForExit();
# $LASTEXITCODE
# env:
# ProgressPreference: SilentlyContinue
# - uses: actions/download-artifact@v3
# with:
# name: Windows Spack Installer
# path: ${{ env.spack_installer }}
# - name: Execute MSI
# run: |
# $proc = Start-Process ${{ env.spack_installer }}\spack.msi "/quiet" -Passthru
# $handle = $proc.Handle # cache proc.Handle
# $proc.WaitForExit();
# $LASTEXITCODE

View File

@@ -50,69 +50,24 @@ setlocal enabledelayedexpansion
:: flags will always start with '-', e.g. --help or -V
:: subcommands will never start with '-'
:: everything after the subcommand is an arg
:: we cannot allow batch "for" loop to directly process CL args
:: a number of batch reserved characters are commonly passed to
:: spack and allowing batch's "for" method to process the raw inputs
:: results in a large number of formatting issues
:: instead, treat the entire CLI as one string
:: and split by space manually
:: capture cl args in variable named cl_args
set cl_args=%*
:process_cl_args
rem tokens=1* returns the first processed token produced
rem by tokenizing the input string cl_args on spaces into
rem the named variable %%g
rem While this make look like a for loop, it only
rem executes a single time for each of the cl args
rem the actual iterative loop is performed by the
rem goto process_cl_args stanza
rem we are simply leveraging the "for" method's string
rem tokenization
for /f "tokens=1*" %%g in ("%cl_args%") do (
set t=%%~g
rem remainder of string is composed into %%h
rem these are the cl args yet to be processed
rem assign cl_args var to only the args to be processed
rem effectively discarding the current arg %%g
rem this will be nul when we have no further tokens to process
set cl_args=%%h
rem process the first space delineated cl arg
rem of this iteration
for %%x in (%*) do (
set t="%%~x"
if "!t:~0,1!" == "-" (
if defined _sp_subcommand (
rem We already have a subcommand, processing args now
if not defined _sp_args (
set "_sp_args=!t!"
) else (
set "_sp_args=!_sp_args! !t!"
)
:: We already have a subcommand, processing args now
set "_sp_args=!_sp_args! !t!"
) else (
if not defined _sp_flags (
set "_sp_flags=!t!"
shift
) else (
set "_sp_flags=!_sp_flags! !t!"
shift
)
set "_sp_flags=!_sp_flags! !t!"
shift
)
) else if not defined _sp_subcommand (
set "_sp_subcommand=!t!"
shift
) else (
if not defined _sp_args (
set "_sp_args=!t!"
shift
) else (
set "_sp_args=!_sp_args! !t!"
shift
)
set "_sp_args=!_sp_args! !t!"
shift
)
)
rem if this is not nil, we have more tokens to process
rem start above process again with remaining unprocessed cl args
if defined cl_args goto :process_cl_args
:: --help, -h and -V flags don't require further output parsing.
:: If we encounter, execute and exit
@@ -140,21 +95,31 @@ if not defined _sp_subcommand (
:: pass parsed variables outside of local scope. Need to do
:: this because delayedexpansion can only be set by setlocal
endlocal & (
set "_sp_flags=%_sp_flags%"
set "_sp_args=%_sp_args%"
set "_sp_subcommand=%_sp_subcommand%"
)
echo %_sp_flags%>flags
echo %_sp_args%>args
echo %_sp_subcommand%>subcmd
endlocal
set /p _sp_subcommand=<subcmd
set /p _sp_flags=<flags
set /p _sp_args=<args
if "%_sp_subcommand%"=="ECHO is off." (set "_sp_subcommand=")
if "%_sp_subcommand%"=="ECHO is on." (set "_sp_subcommand=")
if "%_sp_flags%"=="ECHO is off." (set "_sp_flags=")
if "%_sp_flags%"=="ECHO is on." (set "_sp_flags=")
if "%_sp_args%"=="ECHO is off." (set "_sp_args=")
if "%_sp_args%"=="ECHO is on." (set "_sp_args=")
del subcmd
del flags
del args
:: Filter out some commands. For any others, just run the command.
if "%_sp_subcommand%" == "cd" (
if %_sp_subcommand% == "cd" (
goto :case_cd
) else if "%_sp_subcommand%" == "env" (
) else if %_sp_subcommand% == "env" (
goto :case_env
) else if "%_sp_subcommand%" == "load" (
) else if %_sp_subcommand% == "load" (
goto :case_load
) else if "%_sp_subcommand%" == "unload" (
) else if %_sp_subcommand% == "unload" (
goto :case_load
) else (
goto :default_case
@@ -189,20 +154,20 @@ goto :end_switch
if NOT defined _sp_args (
goto :default_case
)
if NOT "%_sp_args%"=="%_sp_args:--help=%" (
set args_no_quote=%_sp_args:"=%
if NOT "%args_no_quote%"=="%args_no_quote:--help=%" (
goto :default_case
) else if NOT "%_sp_args%"=="%_sp_args: -h=%" (
) else if NOT "%args_no_quote%"=="%args_no_quote: -h=%" (
goto :default_case
) else if NOT "%_sp_args%"=="%_sp_args:--bat=%" (
) else if NOT "%args_no_quote%"=="%args_no_quote:--bat=%" (
goto :default_case
) else if NOT "%_sp_args%"=="%_sp_args:deactivate=%" (
) else if NOT "%args_no_quote%"=="%args_no_quote:deactivate=%" (
for /f "tokens=* USEBACKQ" %%I in (
`call python %spack% %_sp_flags% env deactivate --bat %_sp_args:deactivate=%`
`call python %spack% %_sp_flags% env deactivate --bat %args_no_quote:deactivate=%`
) do %%I
) else if NOT "%_sp_args%"=="%_sp_args:activate=%" (
) else if NOT "%args_no_quote%"=="%args_no_quote:activate=%" (
for /f "tokens=* USEBACKQ" %%I in (
`python %spack% %_sp_flags% env activate --bat %_sp_args:activate=%`
`python %spack% %_sp_flags% env activate --bat %args_no_quote:activate=%`
) do %%I
) else (
goto :default_case
@@ -223,7 +188,7 @@ if defined _sp_args (
for /f "tokens=* USEBACKQ" %%I in (
`python "%spack%" %_sp_flags% %_sp_subcommand% --bat %_sp_args%`) do %%I
)
goto :end_switch
:case_unload

View File

@@ -13,18 +13,16 @@ concretizer:
# Whether to consider installed packages or packages from buildcaches when
# concretizing specs. If `true`, we'll try to use as many installs/binaries
# as possible, rather than building. If `false`, we'll always give you a fresh
# concretization. If `dependencies`, we'll only reuse dependencies but
# give you a fresh concretization for your root specs.
reuse: dependencies
# concretization.
reuse: true
# Options that tune which targets are considered for concretization. The
# concretization process is very sensitive to the number targets, and the time
# needed to reach a solution increases noticeably with the number of targets
# considered.
targets:
# Determine whether we want to target specific or generic
# microarchitectures. Valid values are: "microarchitectures" or "generic".
# An example of "microarchitectures" would be "skylake" or "bulldozer",
# while an example of "generic" would be "aarch64" or "x86_64_v4".
# Determine whether we want to target specific or generic microarchitectures.
# An example of the first kind might be for instance "skylake" or "bulldozer",
# while generic microarchitectures are for instance "aarch64" or "x86_64_v4".
granularity: microarchitectures
# If "false" allow targets that are incompatible with the current host (for
# instance concretize with target "icelake" while running on "haswell").
@@ -35,4 +33,4 @@ concretizer:
# environments can always be activated. When "false" perform concretization separately
# on each root spec, allowing different versions and variants of the same package in
# an environment.
unify: true
unify: true
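
For orientation, a user-level concretizer.yaml that overrides these defaults might look like the sketch below (the values shown are illustrative and not part of this diff):

concretizer:
  # Prefer reusing installed packages and buildcache binaries over fresh builds
  reuse: true
  targets:
    # "generic" trades peak performance for portable targets such as x86_64_v4
    granularity: generic
    # Reject targets the current host cannot execute
    host_compatible: true
  # Concretize all root specs of an environment together
  unify: true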

View File

@@ -23,20 +23,8 @@ packages:
providers:
elf: [libelf]
fuse: [macfuse]
gl: [apple-gl]
glu: [apple-glu]
unwind: [apple-libunwind]
uuid: [apple-libuuid]
apple-gl:
buildable: false
externals:
- spec: apple-gl@4.1.0
prefix: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk
apple-glu:
buildable: false
externals:
- spec: apple-glu@1.3.0
prefix: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk
apple-libunwind:
buildable: false
externals:

View File

@@ -40,12 +40,13 @@ modules:
roots:
tcl: $spack/share/spack/modules
lmod: $spack/share/spack/lmod
# What type of modules to use ("tcl" and/or "lmod")
enable: []
# What type of modules to use
enable:
- tcl
tcl:
all:
autoload: direct
autoload: none
# Default configurations if lmod is enabled
lmod:
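
To make the two settings touched here concrete: a site that wants Tcl module files generated with direct autoloading could override these defaults roughly as follows (a sketch assuming the per-module-set layout under "default" used by newer Spack):

modules:
  default:
    # Generate Tcl (non-hierarchical) module files
    enable:
    - tcl
    tcl:
      all:
        # Autoload direct run-time dependencies of each module
        autoload: direct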

View File

@@ -28,7 +28,7 @@ packages:
gl: [glx, osmesa]
glu: [mesa-glu, openglu]
golang: [go, gcc]
go-or-gccgo-bootstrap: [go-bootstrap, gcc]
go-external-or-gccgo-bootstrap: [go-bootstrap, gcc]
iconv: [libiconv]
ipp: [intel-ipp]
java: [openjdk, jdk, ibm-java]

View File

@@ -3,4 +3,3 @@ config:
concretizer: clingo
build_stage::
- '$spack/.staging'
stage_name: '{name}-{version}-{hash:7}'

View File

@@ -19,4 +19,3 @@ packages:
- msvc
providers:
mpi: [msmpi]
gl: [wgl]

View File

@@ -942,7 +942,7 @@ first ``libelf`` above, you would run:
$ spack load /qmm4kso
To see which packages that you have loaded to your environment you would
To see which packages that you have loaded to your enviornment you would
use ``spack find --loaded``.
.. code-block:: console

View File

@@ -18,7 +18,7 @@ your Spack mirror and then downloaded and installed by others.
Whenever a mirror provides prebuilt packages, Spack will take these packages
into account during concretization and installation, making ``spack install``
significantly faster.
signficantly faster.
.. note::

View File

@@ -28,14 +28,11 @@ This package provides the following variants:
* **cuda_arch**
This variant supports the optional specification of one or multiple architectures.
This variant supports the optional specification of the architecture.
Valid values are maintained in the ``cuda_arch_values`` property and
are the numeric character equivalent of the compute capability version
(e.g., '10' for version 1.0). Each provided value affects associated
``CUDA`` dependencies and compiler conflicts.
The variant builds both PTX code for the _virtual_ architecture
(e.g. ``compute_10``) and binary code for the _real_ architecture (e.g. ``sm_10``).
GPUs and their compute capability versions are listed at
https://developer.nvidia.com/cuda-gpus .
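
As an illustration, a package exposing this variant can be asked for one or several compute capabilities directly on the spec; in an environment manifest that might look like the following sketch (package names and values are illustrative):

spack:
  specs:
  # Single architecture: PTX and binary code for compute/sm_70
  - kokkos +cuda cuda_arch=70
  # Multiple architectures can be requested as a comma-separated list
  - raja +cuda cuda_arch=70,80
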

View File

@@ -124,7 +124,7 @@ Using oneAPI Tools Installed by Spack
=====================================
Spack can be a convenient way to install and configure compilers and
libraries, even if you do not intend to build a Spack package. If you
libaries, even if you do not intend to build a Spack package. If you
want to build a Makefile project using Spack-installed oneAPI compilers,
then use spack to configure your environment::

View File

@@ -397,7 +397,7 @@ for specifics and examples for ``packages.yaml`` files.
.. If your system administrator did not provide modules for pre-installed Intel
tools, you could do well to ask for them, because installing multiple copies
of the Intel tools, as is won't to happen once Spack is in the picture, is
of the Intel tools, as is wont to happen once Spack is in the picture, is
bound to stretch disk space and patience thin. If you *are* the system
administrator and are still new to modules, then perhaps it's best to follow
the `next section <Installing Intel tools within Spack_>`_ and install the tools
@@ -653,7 +653,7 @@ follow `the next section <intel-install-libs_>`_ instead.
* If you specified a custom variant (for example ``+vtune``) you may want to add this as your
preferred variant in the packages configuration for the ``intel-parallel-studio`` package
as described in :ref:`package-preferences`. Otherwise you will have to specify
the variant every time ``intel-parallel-studio`` is being used as ``mkl``, ``fftw`` or ``mpi``
the variant everytime ``intel-parallel-studio`` is being used as ``mkl``, ``fftw`` or ``mpi``
implementation to avoid pulling in a different variant.
* To set the Intel compilers for default use in Spack, instead of the usual ``%gcc``,

View File

@@ -582,7 +582,7 @@ libraries. Make sure not to add modules/packages containing the word
"test", as these likely won't end up in the installation directory,
or may require test dependencies like pytest to be installed.
Instead of defining the ``import_modules`` explicitly, only the subset
Instead of defining the ``import_modules`` explicity, only the subset
of module names to be skipped can be defined by using ``skip_modules``.
If a defined module has submodules, they are skipped as well, e.g.,
in case the ``plotting`` modules should be excluded from the

View File

@@ -227,9 +227,6 @@ You can get the name to use for ``<platform>`` by running ``spack arch
--platform``. The system config scope has a ``<platform>`` section for
sites at which ``/etc`` is mounted on multiple heterogeneous machines.
.. _config-scope-precedence:
----------------
Scope Precedence
----------------
@@ -242,11 +239,6 @@ lower-precedence settings. Completely ignoring higher-level configuration
options is supported with the ``::`` notation for keys (see
:ref:`config-overrides` below).
There are also special notations for string concatenation and precendense override.
Using the ``+:`` notation can be used to force *prepending* strings or lists. For lists, this is identical
to the default behavior. Using the ``-:`` works similarly, but for *appending* values.
:ref:`config-prepend-append`
^^^^^^^^^^^
Simple keys
^^^^^^^^^^^
@@ -287,47 +279,6 @@ command:
- ~/.spack/stage
.. _config-prepend-append:
^^^^^^^^^^^^^^^^^^^^
String Concatenation
^^^^^^^^^^^^^^^^^^^^
Above, the user ``config.yaml`` *completely* overrides specific settings in the
default ``config.yaml``. Sometimes, it is useful to add a suffix/prefix
to a path or name. To do this, you can use the ``-:`` notation for *append*
string concatenation at the end of a key in a configuration file. For example:
.. code-block:: yaml
:emphasize-lines: 1
:caption: ~/.spack/config.yaml
config:
install_tree-: /my/custom/suffix/
Spack will then append to the lower-precedence configuration under the
``install_tree-:`` section:
.. code-block:: console
$ spack config get config
config:
install_tree: /some/other/directory/my/custom/suffix
build_stage:
- $tempdir/$user/spack-stage
- ~/.spack/stage
Similarly, ``+:`` can be used to *prepend* to a path or name:
.. code-block:: yaml
:emphasize-lines: 1
:caption: ~/.spack/config.yaml
config:
install_tree+: /my/custom/suffix/
.. _config-overrides:
^^^^^^^^^^^^^^^^^^^^^^^^^^

View File

@@ -444,120 +444,6 @@ attribute:
The minimum version of Singularity required to build a SIF (Singularity Image Format)
image from the recipes generated by Spack is ``3.5.3``.
------------------------------
Extending the Jinja2 Templates
------------------------------
The Dockerfile and the Singularity definition file that Spack can generate are based on
a few Jinja2 templates that are rendered according to the environment being containerized.
Even though Spack allows a great deal of customization by just setting appropriate values for
the configuration options, sometimes that is not enough.
In those cases, a user can directly extend the template that Spack uses to render the image
to e.g. set additional environment variables or perform specific operations either before or
after a given stage of the build. Let's consider as an example the following structure:
.. code-block:: console
$ tree /opt/environment
/opt/environment
├── data
│ └── data.csv
├── spack.yaml
├── data
└── templates
└── container
└── CustomDockerfile
containing both the custom template extension and the environment manifest file. To use a custom
template, the environment must register the directory containing it, and declare its use under the
``container`` configuration:
.. code-block:: yaml
:emphasize-lines: 7-8,12
spack:
specs:
- hdf5~mpi
concretizer:
unify: true
config:
template_dirs:
- /opt/environment/templates
container:
format: docker
depfile: true
template: container/CustomDockerfile
The template extension can override two blocks, named ``build_stage`` and ``final_stage``, similarly to
the example below:
.. code-block::
:emphasize-lines: 3,8
{% extends "container/Dockerfile" %}
{% block build_stage %}
RUN echo "Start building"
{{ super() }}
{% endblock %}
{% block final_stage %}
{{ super() }}
COPY data /share/myapp/data
{% endblock %}
The recipe that gets generated contains the two extra instruction that we added in our template extension:
.. code-block:: Dockerfile
:emphasize-lines: 4,43
# Build stage with Spack pre-installed and ready to be used
FROM spack/ubuntu-jammy:latest as builder
RUN echo "Start building"
# What we want to install and how we want to install it
# is specified in a manifest file (spack.yaml)
RUN mkdir /opt/spack-environment \
&& (echo "spack:" \
&& echo " specs:" \
&& echo " - hdf5~mpi" \
&& echo " concretizer:" \
&& echo " unify: true" \
&& echo " config:" \
&& echo " template_dirs:" \
&& echo " - /tmp/environment/templates" \
&& echo " install_tree: /opt/software" \
&& echo " view: /opt/view") > /opt/spack-environment/spack.yaml
# Install the software, remove unnecessary deps
RUN cd /opt/spack-environment && spack env activate . && spack concretize && spack env depfile -o Makefile && make -j $(nproc) && spack gc -y
# Strip all the binaries
RUN find -L /opt/view/* -type f -exec readlink -f '{}' \; | \
xargs file -i | \
grep 'charset=binary' | \
grep 'x-executable\|x-archive\|x-sharedlib' | \
awk -F: '{print $1}' | xargs strip -s
# Modifications to the environment that are necessary to run
RUN cd /opt/spack-environment && \
spack env activate --sh -d . >> /etc/profile.d/z10_spack_environment.sh
# Bare OS image to run the installed executables
FROM ubuntu:22.04
COPY --from=builder /opt/spack-environment /opt/spack-environment
COPY --from=builder /opt/software /opt/software
COPY --from=builder /opt/._view /opt/._view
COPY --from=builder /opt/view /opt/view
COPY --from=builder /etc/profile.d/z10_spack_environment.sh /etc/profile.d/z10_spack_environment.sh
COPY data /share/myapp/data
ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l", "-c", "$*", "--" ]
CMD [ "/bin/bash" ]
.. _container_config_options:
-----------------------
@@ -578,10 +464,6 @@ to customize the generation of container recipes:
- The format of the recipe
- ``docker`` or ``singularity``
- Yes
* - ``depfile``
- Whether to use a depfile for installation, or not
- True or False (default)
- No
* - ``images:os``
- Operating system used as a base for the image
- See :ref:`containers-supported-os`
@@ -630,6 +512,14 @@ to customize the generation of container recipes:
- System packages needed at run-time
- Valid packages for the current OS
- No
* - ``extra_instructions:build``
- Extra instructions (e.g. `RUN`, `COPY`, etc.) at the end of the ``build`` stage
- Anything understood by the current ``format``
- No
* - ``extra_instructions:final``
- Extra instructions (e.g. `RUN`, `COPY`, etc.) at the end of the ``final`` stage
- Anything understood by the current ``format``
- No
* - ``labels``
- Labels to tag the image
- Pairs of key-value strings

View File

@@ -472,7 +472,7 @@ use my new hook as follows:
.. code-block:: python
def post_log_write(message, level):
"""Do something custom with the message and level every time we write
"""Do something custom with the messsage and level every time we write
to the log
"""
print('running post_log_write!')

View File

@@ -368,8 +368,7 @@ Manual compiler configuration
If auto-detection fails, you can manually configure a compiler by
editing your ``~/.spack/<platform>/compilers.yaml`` file. You can do this by running
``spack config edit compilers``, which will open the file in
:ref:`your favorite editor <controlling-the-editor>`.
``spack config edit compilers``, which will open the file in your ``$EDITOR``.
Each compiler configuration in the file looks like this:
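
(The sample entry itself falls outside the context shown in this diff; for reference, an entry in compilers.yaml typically has roughly the shape sketched below, with the compiler version, operating system and paths being illustrative only.)

compilers:
- compiler:
    spec: gcc@10.2.0
    operating_system: ubuntu20.04
    target: x86_64
    modules: []
    paths:
      cc: /usr/bin/gcc-10
      cxx: /usr/bin/g++-10
      f77: /usr/bin/gfortran-10
      fc: /usr/bin/gfortran-10
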
@@ -1598,8 +1597,8 @@ in a Windows CMD prompt.
.. note::
If you chose to install Spack into a directory on Windows that is set up to require Administrative
Privileges, Spack will require elevated privileges to run.
Administrative Privileges can be denoted either by default such as
Privleges, Spack will require elevated privleges to run.
Administrative Privleges can be denoted either by default such as
``C:\Program Files``, or administrator applied administrative restrictions
on a directory that spack installs files to such as ``C:\Users``
@@ -1695,7 +1694,7 @@ Spack console via:
spack install cpuinfo
If in the previous step, you did not have CMake or Ninja installed, running the command above should bootstrap both packages
If in the previous step, you did not have CMake or Ninja installed, running the command above should boostrap both packages
"""""""""""""""""""""""""""
Windows Compatible Packages

View File

@@ -13,7 +13,7 @@ The use of module systems to manage user environment in a controlled way
is a common practice at HPC centers that is often embraced also by
individual programmers on their development machines. To support this
common practice Spack integrates with `Environment Modules
<http://modules.sourceforge.net/>`_ and `Lmod
<http://modules.sourceforge.net/>`_ and `LMod
<http://lmod.readthedocs.io/en/latest/>`_ by providing post-install hooks
that generate module files and commands to manipulate them.
@@ -26,8 +26,8 @@ Using module files via Spack
----------------------------
If you have installed a supported module system you should be able to
run ``module avail`` to see what module
files have been installed. Here is sample output of those programs,
run either ``module avail`` or ``use -l spack`` to see what module
files have been installed. Here is sample output of those programs,
showing lots of installed packages:
.. code-block:: console
@@ -51,7 +51,12 @@ showing lots of installed packages:
help2man-1.47.4-gcc-4.8-kcnqmau lua-luaposix-33.4.0-gcc-4.8-mdod2ry netlib-scalapack-2.0.2-gcc-6.3.0-rgqfr6d py-scipy-0.19.0-gcc-6.3.0-kr7nat4 zlib-1.2.11-gcc-6.3.0-7cqp6cj
The names should look familiar, as they resemble the output from ``spack find``.
For example, you could type the following command to load the ``cmake`` module:
You *can* use the modules here directly. For example, you could type either of these commands
to load the ``cmake`` module:
.. code-block:: console
$ use cmake-3.7.2-gcc-6.3.0-fowuuby
.. code-block:: console
@@ -88,9 +93,9 @@ the different file formats that can be generated by Spack:
+-----------------------------+--------------------+-------------------------------+----------------------------------------------+----------------------+
| | **Hook name** | **Default root directory** | **Default template file** | **Compatible tools** |
+=============================+====================+===============================+==============================================+======================+
| **Tcl - Non-Hierarchical** | ``tcl`` | share/spack/modules | share/spack/templates/modules/modulefile.tcl | Env. Modules/Lmod |
| **TCL - Non-Hierarchical** | ``tcl`` | share/spack/modules | share/spack/templates/modules/modulefile.tcl | Env. Modules/LMod |
+-----------------------------+--------------------+-------------------------------+----------------------------------------------+----------------------+
| **Lua - Hierarchical** | ``lmod`` | share/spack/lmod | share/spack/templates/modules/modulefile.lua | Lmod |
| **Lua - Hierarchical** | ``lmod`` | share/spack/lmod | share/spack/templates/modules/modulefile.lua | LMod |
+-----------------------------+--------------------+-------------------------------+----------------------------------------------+----------------------+
@@ -391,13 +396,13 @@ name and version for all packages that depend on mpi.
When specifying module names by projection for Lmod modules, we
recommend NOT including names of dependencies (e.g., MPI, compilers)
that are already in the Lmod hierarchy.
that are already in the LMod hierarchy.
.. note::
Tcl modules
Tcl modules also allow for explicit conflicts between modulefiles.
TCL modules
TCL modules also allow for explicit conflicts between modulefiles.
.. code-block:: yaml
@@ -421,9 +426,9 @@ that are already in the Lmod hierarchy.
.. note::
Lmod hierarchical module files
LMod hierarchical module files
When ``lmod`` is activated Spack will generate a set of hierarchical lua module
files that are understood by Lmod. The hierarchy will always contain the
files that are understood by LMod. The hierarchy will always contain the
two layers ``Core`` / ``Compiler`` but can be further extended to
any of the virtual dependencies present in Spack. A case that could be useful in
practice is for instance:
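
(The example itself lies beyond the context shown here; a configuration of that kind is sketched below, with the core compiler version being illustrative.)

modules:
  default:
    lmod:
      core_compilers:
      - gcc@12.2.0
      # Extend the Core/Compiler hierarchy with virtual dependencies
      hierarchy:
      - mpi
      - lapack
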
@@ -445,7 +450,7 @@ that are already in the Lmod hierarchy.
that will generate a hierarchy in which the ``lapack`` and ``mpi`` layer can be switched
independently. This allows a site to build the same libraries or applications against different
implementations of ``mpi`` and ``lapack``, and let Lmod switch safely from one to the
implementations of ``mpi`` and ``lapack``, and let LMod switch safely from one to the
other.
All packages built with a compiler in ``core_compilers`` and all
@@ -455,12 +460,12 @@ that are already in the Lmod hierarchy.
.. warning::
Consistency of Core packages
The user is responsible for maintaining consistency among core packages, as ``core_specs``
bypasses the hierarchy that allows Lmod to safely switch between coherent software stacks.
bypasses the hierarchy that allows LMod to safely switch between coherent software stacks.
.. warning::
Deep hierarchies and ``lmod spider``
For hierarchies that are deeper than three layers ``lmod spider`` may have some issues.
See `this discussion on the Lmod project <https://github.com/TACC/Lmod/issues/114>`_.
See `this discussion on the LMod project <https://github.com/TACC/Lmod/issues/114>`_.
""""""""""""""""""""""
Select default modules
@@ -529,7 +534,7 @@ installed to ``/spack/prefix/foo``, if ``foo`` installs executables to
update ``MANPATH``.
The default list of environment variables in this config section
includes ``PATH``, ``MANPATH``, ``ACLOCAL_PATH``, ``PKG_CONFIG_PATH``
inludes ``PATH``, ``MANPATH``, ``ACLOCAL_PATH``, ``PKG_CONFIG_PATH``
and ``CMAKE_PREFIX_PATH``, as well as ``DYLD_FALLBACK_LIBRARY_PATH``
on macOS. On Linux however, the corresponding ``LD_LIBRARY_PATH``
variable is *not* set, because it affects the behavior of
@@ -629,9 +634,8 @@ by its dependency; when the dependency is autoloaded, the executable will be in
PATH. Similarly for scripting languages such as Python, packages and their dependencies
have to be loaded together.
Autoloading is enabled by default for Lmod and Environment Modules. The former
has builtin support for through the ``depends_on`` function. The latter uses
``module load`` statement to load and track dependencies.
Autoloading is enabled by default for LMod, as it has great builtin support for through
the ``depends_on`` function. For Environment Modules it is disabled by default.
Autoloading can also be enabled conditionally:
@@ -651,14 +655,12 @@ The allowed values for the ``autoload`` statement are either ``none``,
``direct`` or ``all``.
.. note::
Tcl prerequisites
TCL prerequisites
In the ``tcl`` section of the configuration file it is possible to use
the ``prerequisites`` directive that accepts the same values as
``autoload``. It will produce module files that have a ``prereq``
statement, which autoloads dependencies on Environment Modules when its
``auto_handling`` configuration option is enabled. If Environment Modules
is installed with Spack, ``auto_handling`` is enabled by default starting
version 4.2. Otherwise it is enabled by default since version 5.0.
statement, which can be used to autoload dependencies in some versions
of Environment Modules.
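
A minimal sketch of such a prerequisites configuration (illustrative, not part of this diff):

modules:
  default:
    tcl:
      all:
        # Emit "prereq" statements for direct dependencies
        prerequisites: direct
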
------------------------
Maintaining Module Files

File diff suppressed because it is too large

View File

@@ -9,32 +9,27 @@
CI Pipelines
============
Spack provides commands that support generating and running automated build pipelines in CI instances. At the highest
level it works like this: provide a spack environment describing the set of packages you care about, and include a
description of how those packages should be mapped to Gitlab runners. Spack can then generate a ``.gitlab-ci.yml``
file containing job descriptions for all your packages that can be run by a properly configured CI instance. When
run, the generated pipeline will build and deploy binaries, and it can optionally report to a CDash instance
Spack provides commands that support generating and running automated build
pipelines designed for Gitlab CI. At the highest level it works like this:
provide a spack environment describing the set of packages you care about,
and include within that environment file a description of how those packages
should be mapped to Gitlab runners. Spack can then generate a ``.gitlab-ci.yml``
file containing job descriptions for all your packages that can be run by a
properly configured Gitlab CI instance. When run, the generated pipeline will
build and deploy binaries, and it can optionally report to a CDash instance
regarding the health of the builds as they evolve over time.
------------------------------
Getting started with pipelines
------------------------------
To get started with automated build pipelines a Gitlab instance with version ``>= 12.9``
(more about Gitlab CI `here <https://about.gitlab.com/product/continuous-integration/>`_)
with at least one `runner <https://docs.gitlab.com/runner/>`_ configured is required. This
can be done quickly by setting up a local Gitlab instance.
It is fairly straightforward to get started with automated build pipelines. At
a minimum, you'll need to set up a Gitlab instance (more about Gitlab CI
`here <https://about.gitlab.com/product/continuous-integration/>`_) and configure
at least one `runner <https://docs.gitlab.com/runner/>`_. Then the basic steps
for setting up a build pipeline are as follows:
It is possible to set up pipelines on gitlab.com, but the builds there are limited to
60 minutes and generic hardware. It is possible to
`hook up <https://about.gitlab.com/blog/2018/04/24/getting-started-gitlab-ci-gcp>`_
Gitlab to Google Kubernetes Engine (`GKE <https://cloud.google.com/kubernetes-engine/>`_)
or Amazon Elastic Kubernetes Service (`EKS <https://aws.amazon.com/eks>`_), though those
topics are outside the scope of this document.
After setting up a Gitlab instance for running CI, the basic steps for setting up a build pipeline are as follows:
#. Create a repository in the Gitlab instance with CI and a runner enabled.
#. Create a repository on your gitlab instance
#. Add a ``spack.yaml`` at the root containing your pipeline environment
#. Add a ``.gitlab-ci.yml`` at the root containing two jobs (one to generate
the pipeline dynamically, and one to run the generated jobs).
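
To make the two-job layout described in the last step concrete, a rough sketch of such a .gitlab-ci.yml is shown below (job and stage names are illustrative, and it assumes spack is already available on the runner; the trigger/child-pipeline mechanism it relies on is discussed further below):

stages: [generate, build]

generate-pipeline:
  stage: generate
  script:
    # Activate the environment at the repository root and generate the child pipeline
    - spack env activate --without-view .
    - spack ci generate --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/pipeline.yml"
  artifacts:
    paths:
      - jobs_scratch_dir

run-generated-pipeline:
  stage: build
  trigger:
    include:
      - artifact: jobs_scratch_dir/pipeline.yml
        job: generate-pipeline
    strategy: depend
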
@@ -45,6 +40,13 @@ See the :ref:`functional_example` section for a minimal working example. See al
the :ref:`custom_Workflow` section for a link to an example of a custom workflow
based on spack pipelines.
While it is possible to set up pipelines on gitlab.com, as illustrated above, the
builds there are limited to 60 minutes and generic hardware. It is also possible to
`hook up <https://about.gitlab.com/blog/2018/04/24/getting-started-gitlab-ci-gcp>`_
Gitlab to Google Kubernetes Engine (`GKE <https://cloud.google.com/kubernetes-engine/>`_)
or Amazon Elastic Kubernetes Service (`EKS <https://aws.amazon.com/eks>`_), though those
topics are outside the scope of this document.
Spack's pipelines are now making use of the
`trigger <https://docs.gitlab.com/ee/ci/yaml/#trigger>`_ syntax to run
dynamically generated
@@ -130,35 +132,29 @@ And here's the spack environment built by the pipeline represented as a
mirrors: { "mirror": "s3://spack-public/mirror" }
ci:
gitlab-ci:
before_script:
- git clone ${SPACK_REPO}
- pushd spack && git checkout ${SPACK_CHECKOUT_VERSION} && popd
- . "./spack/share/spack/setup-env.sh"
script:
- pushd ${SPACK_CONCRETE_ENV_DIR} && spack env activate --without-view . && popd
- spack -d ci rebuild
mappings:
- match: ["os=ubuntu18.04"]
runner-attributes:
image:
name: ghcr.io/scottwittenburg/ecpe4s-ubuntu18.04-runner-x86_64:2020-09-01
entrypoint: [""]
tags:
- docker
enable-artifacts-buildcache: True
rebuild-index: False
pipeline-gen:
- any-job:
before_script:
- git clone ${SPACK_REPO}
- pushd spack && git checkout ${SPACK_CHECKOUT_VERSION} && popd
- . "./spack/share/spack/setup-env.sh"
- build-job:
tags: [docker]
image:
name: ghcr.io/scottwittenburg/ecpe4s-ubuntu18.04-runner-x86_64:2020-09-01
entrypoint: [""]
The elements of this file important to spack ci pipelines are described in more
detail below, but there are a couple of things to note about the above working
example:
.. note::
There is no ``script`` attribute specified for here. The reason for this is
Spack CI will automatically generate reasonable default scripts. More
detail on what is in these scripts can be found below.
Also notice the ``before_script`` section. It is required when using any of the
default scripts to source the ``setup-env.sh`` script in order to inform
the default scripts where to find the ``spack`` executable.
Normally ``enable-artifacts-buildcache`` is not recommended in production as it
results in large binary artifacts getting transferred back and forth between
gitlab and the runners. But in this example on gitlab.com where there is no
@@ -178,7 +174,7 @@ during subsequent pipeline runs.
With the addition of reproducible builds (#22887) a previously working
pipeline will require some changes:
* In the build-jobs, the environment location changed.
* In the build jobs (``runner-attributes``), the environment location changed.
This will typically show as a ``KeyError`` in the failing job. Be sure to
point to ``${SPACK_CONCRETE_ENV_DIR}``.
@@ -200,9 +196,9 @@ ci pipelines. These commands are covered in more detail in this section.
.. _cmd-spack-ci:
^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^
``spack ci``
^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^
Super-command for functionality related to generating pipelines and executing
pipeline jobs.
@@ -231,7 +227,7 @@ Using ``--prune-dag`` or ``--no-prune-dag`` configures whether or not jobs are
generated for specs that are already up to date on the mirror. If enabling
DAG pruning using ``--prune-dag``, more information may be required in your
``spack.yaml`` file, see the :ref:`noop_jobs` section below regarding
``noop-job``.
``service-job-attributes``.
The optional ``--check-index-only`` argument can be used to speed up pipeline
generation by telling spack to consider only remote buildcache indices when
@@ -267,11 +263,11 @@ generated by jobs in the pipeline.
.. _cmd-spack-ci-rebuild:
^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^
``spack ci rebuild``
^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^
The purpose of ``spack ci rebuild`` is to take an assigned
The purpose of ``spack ci rebuild`` is straightforward: take its assigned
spec and ensure a binary of a successful build exists on the target mirror.
If the binary does not already exist, it is built from source and pushed
to the mirror. The associated stand-alone tests are optionally run against
@@ -284,7 +280,7 @@ directory. The script is run in a job to install the spec from source. The
resulting binary package is pushed to the mirror. If ``cdash`` is configured
for the environment, then the build results will be uploaded to the site.
Environment variables and values in the ``ci::pipeline-gen`` section of the
Environment variables and values in the ``gitlab-ci`` section of the
``spack.yaml`` environment file provide inputs to this process. The
two main sources of environment variables are variables written into
``.gitlab-ci.yml`` by ``spack ci generate`` and the GitLab CI runtime.
@@ -302,23 +298,21 @@ A snippet from an example ``spack.yaml`` file illustrating use of this
option *and* specification of a package with broken tests is given below.
The inclusion of a spec for building ``gptune`` is not shown here. Note
that ``--tests`` is passed to ``spack ci rebuild`` as part of the
``build-job`` script.
``gitlab-ci`` script.
.. code-block:: yaml
ci:
pipeline-gen:
- build-job
script:
- . "./share/spack/setup-env.sh"
- spack --version
- cd ${SPACK_CONCRETE_ENV_DIR}
- spack env activate --without-view .
- spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
- mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
- if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
- if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
- spack -d ci rebuild --tests > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)
gitlab-ci:
script:
- . "./share/spack/setup-env.sh"
- spack --version
- cd ${SPACK_CONCRETE_ENV_DIR}
- spack env activate --without-view .
- spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
- mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
- if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
- if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
- spack -d ci rebuild --tests > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)
broken-tests-packages:
- gptune
@@ -360,31 +354,113 @@ arguments you can pass to ``spack ci reproduce-build`` in order to reproduce
a particular build locally.
------------------------------------
Job Types
A pipeline-enabled spack environment
------------------------------------
^^^^^^^^^^^^^^^
Rebuild (build)
^^^^^^^^^^^^^^^
Here's an example of a spack environment file that has been enhanced with
sections describing a build pipeline:
Rebuild jobs, denoted as ``build-job`` entries in the ``pipeline-gen`` list, are jobs
associated with concrete specs that have been marked for rebuild. By default a simple
rebuild script is generated, but it may be modified as needed.
.. code-block:: yaml
The default script performs three main steps: change directories to the pipeline's concrete
environment, activate the concrete environment, and run the ``spack ci rebuild`` command:
spack:
definitions:
- pkgs:
- readline@7.0
- compilers:
- '%gcc@5.5.0'
- oses:
- os=ubuntu18.04
- os=centos7
specs:
- matrix:
- [$pkgs]
- [$compilers]
- [$oses]
mirrors:
cloud_gitlab: https://mirror.spack.io
gitlab-ci:
mappings:
- match:
- os=ubuntu18.04
runner-attributes:
tags:
- spack-kube
image: spack/ubuntu-bionic
- match:
- os=centos7
runner-attributes:
tags:
- spack-kube
image: spack/centos7
cdash:
build-group: Release Testing
url: https://cdash.spack.io
project: Spack
site: Spack AWS Gitlab Instance
.. code-block:: bash
Hopefully, the ``definitions``, ``specs``, ``mirrors``, etc. sections are already
familiar, as they are part of spack :ref:`environments`. So let's take a more
in-depth look at some of the pipeline-related sections in that environment file
that might not be as familiar.
cd ${concrete_environment_dir}
spack env activate --without-view .
spack ci rebuild
The ``gitlab-ci`` section is used to configure how the pipeline workload should be
generated, mainly how the jobs for building specs should be assigned to the
configured runners on your instance. Each entry within the list of ``mappings``
corresponds to a known gitlab runner, where the ``match`` section is used
in assigning a release spec to one of the runners, and the ``runner-attributes``
section is used to configure the spec/job for that particular runner.
Both the top-level ``gitlab-ci`` section as well as each ``runner-attributes``
section can also contain the following keys: ``image``, ``tags``, ``variables``,
``before_script``, ``script``, and ``after_script``. If any of these keys are
provided at the ``gitlab-ci`` level, they will be used as the defaults for any
``runner-attributes``, unless they are overridden in those sections. Specifying
any of these keys at the ``runner-attributes`` level generally overrides the
keys specified at the higher level, with a couple exceptions. Any ``variables``
specified at both levels result in those dictionaries getting merged in the
resulting generated job, and any duplicate variable names get assigned the value
provided in the specific ``runner-attributes``. If ``tags`` are specified both
at the ``gitlab-ci`` level as well as the ``runner-attributes`` level, then the
lists of tags are combined, and any duplicates are removed.
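For illustration, here is a minimal sketch (the tags and variable name are hypothetical) of how
top-level defaults combine with per-runner overrides:

.. code-block:: yaml

   gitlab-ci:
     tags: [shared-tag]
     variables:
       COMMON_VAR: "default"
     mappings:
       - match:
           - os=ubuntu18.04
         runner-attributes:
           tags: [ubuntu-runner]        # combined with shared-tag, duplicates removed
           variables:
             COMMON_VAR: "override"     # this value wins for jobs on this runner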
See the section below on using a custom spack for an example of how these keys
could be used.
There are other pipeline options you can configure within the ``gitlab-ci`` section
as well.
The ``bootstrap`` section allows you to specify lists of specs from
your ``definitions`` that should be staged ahead of the environment's ``specs`` (this
section is described in more detail below). The ``enable-artifacts-buildcache`` key
takes a boolean and determines whether the pipeline uses artifacts to store and
pass along the buildcaches from one stage to the next (the default if you don't
provide this option is ``False``).
The optional ``broken-specs-url`` key tells Spack to check against a list of
specs that are known to be currently broken in ``develop``. If any such specs
are found, the ``spack ci generate`` command will fail with an error message
informing the user what broken specs were encountered. This allows the pipeline
to fail early and avoid wasting compute resources attempting to build packages
that will not succeed.
The optional ``cdash`` section provides information that will be used by the
``spack ci generate`` command (invoked by ``spack ci start``) for reporting
to CDash. All the jobs generated from this environment will belong to a
"build group" within CDash that can be tracked over time. As the release
progresses, this build group may have jobs added or removed. The url, project,
and site are used to specify the CDash instance to which build results should
be reported.
Take a look at the
`schema <https://github.com/spack/spack/blob/develop/lib/spack/spack/schema/gitlab_ci.py>`_
for the gitlab-ci section of the spack environment file, to see precisely what
syntax is allowed there.
.. _rebuild_index:
^^^^^^^^^^^^^^^^^^^^^^
Update Index (reindex)
^^^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Note about rebuilding buildcache index
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
By default, while a pipeline job may rebuild a package, create a buildcache
entry, and push it to the mirror, it does not automatically re-generate the
@@ -399,44 +475,21 @@ not correctly reflect the mirror's contents at the end of a pipeline.
To make sure the buildcache index is up to date at the end of your pipeline,
spack generates a job to update the buildcache index of the target mirror
at the end of each pipeline by default. You can disable this behavior by
adding ``rebuild-index: False`` inside the ``ci`` section of your
spack environment.
Reindex jobs do not allow modifying the ``script`` attribute since it is automatically
generated using the target mirror listed in the ``mirrors::mirror`` configuration.
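For example, a minimal sketch disabling the reindex job:

.. code-block:: yaml

   spack:
     ci:
       rebuild-index: False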
^^^^^^^^^^^^^^^^^
Signing (signing)
^^^^^^^^^^^^^^^^^
This job is run after all of the rebuild jobs are completed and is intended to be used
to sign the package binaries built by a protected CI run. Signing jobs are generated
only if a signing job ``script`` is specified and the spack CI job type is protected.
Note that if an ``any-job`` section contains a script, this will not implicitly create a
``signing`` job; a signing job may only exist if it is explicitly specified in the
configuration with a ``script`` attribute. Specifying a signing job without a script
does not create a signing job, and the job configuration attributes will be ignored.
Signing jobs are always assigned the runner tags ``aws``, ``protected``, and ``notary``.
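A minimal sketch of an explicitly specified signing job (the image name and script are hypothetical):

.. code-block:: yaml

   ci:
     pipeline-gen:
       - signing-job:
           image: some.image.registry/notary-image:latest
           script:
             - sign-binaries.sh   # hypothetical signing command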
^^^^^^^^^^^^^^^^^
Cleanup (cleanup)
^^^^^^^^^^^^^^^^^
When using ``temporary-storage-url-prefix`` the cleanup job will destroy the mirror
created for the associated Gitlab pipeline. Cleanup jobs do not allow modifying the
script, but do expect that the spack command is in the path and require a
``before_script`` to be specified that sources the ``setup-env.sh`` script.
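A sketch of supplying the required ``before_script`` for the cleanup job (the bucket URL is
hypothetical, and this assumes ``temporary-storage-url-prefix`` is accepted directly under ``ci``):

.. code-block:: yaml

   ci:
     temporary-storage-url-prefix: s3://my-bucket/tmp-storage
     pipeline-gen:
       - cleanup-job:
           before_script:
             - . "./share/spack/setup-env.sh"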
adding ``rebuild-index: False`` inside the ``gitlab-ci`` section of your
spack environment. Spack will assign the job any runner attributes found
on the ``service-job-attributes``, if you have provided that in your
``spack.yaml``.
.. _noop_jobs:
^^^^^^^^^^^^
No Op (noop)
^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^
Note about "no-op" jobs
^^^^^^^^^^^^^^^^^^^^^^^
If no specs in an environment need to be rebuilt during a given pipeline run
(meaning all are already up to date on the mirror), a single successful job
(a NO-OP) is still generated to avoid an empty pipeline (which GitLab
considers to be an error). The ``noop-job*`` sections
considers to be an error). An optional ``service-job-attributes`` section
can be added to your ``spack.yaml`` where you can provide ``tags`` and
``image`` or ``variables`` for the generated NO-OP job. This section also
supports providing ``before_script``, ``script``, and ``after_script``, in
@@ -446,100 +499,51 @@ Following is an example of this section added to a ``spack.yaml``:
.. code-block:: yaml
spack:
ci:
pipeline-gen:
- noop-job:
tags: ['custom', 'tag']
image:
name: 'some.image.registry/custom-image:latest'
entrypoint: ['/bin/bash']
script::
- echo "Custom message in a custom script"
spack:
specs:
- openmpi
mirrors:
cloud_gitlab: https://mirror.spack.io
gitlab-ci:
mappings:
- match:
- os=centos8
runner-attributes:
tags:
- custom
- tag
image: spack/centos7
service-job-attributes:
tags: ['custom', 'tag']
image:
name: 'some.image.registry/custom-image:latest'
entrypoint: ['/bin/bash']
script:
- echo "Custom message in a custom script"
The example above illustrates how you can provide the attributes used to run
the NO-OP job in the case of an empty pipeline. The only field for the NO-OP
job that might be generated for you is ``script``, but that will only happen
if you do not provide one yourself. Notice in this example the ``script``
uses the ``::`` notation to prescribe override behavior. Without this, the
``echo`` command would have been prepended to the automatically generated script
rather than replacing it.
if you do not provide one yourself.
------------------------------------
ci.yaml
------------------------------------
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Assignment of specs to runners
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Here's an example of a spack configuration file describing a build pipeline:
The ``mappings`` section corresponds to a list of runners, and during assignment
of specs to runners, the list is traversed in order looking for matches; the
first runner that matches a release spec is assigned to build that spec. The
``match`` section within each runner mapping section is a list of specs, and
if any of those specs match the release spec (the ``spec.satisfies()`` method
is used), then that runner is considered a match.
.. code-block:: yaml
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Configuration of specs/jobs for a runner
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ci:
target: gitlab
rebuild_index: True
broken-specs-url: https://broken.specs.url
broken-tests-packages:
- gptune
pipeline-gen:
- submapping:
- match:
- os=ubuntu18.04
build-job:
tags:
- spack-kube
image: spack/ubuntu-bionic
- match:
- os=centos7
build-job:
tags:
- spack-kube
image: spack/centos7
cdash:
build-group: Release Testing
url: https://cdash.spack.io
project: Spack
site: Spack AWS Gitlab Instance
The ``ci`` config section is used to configure how the pipeline workload should be
generated, mainly how the jobs for building specs should be assigned to the
configured runners on your instance. The main section for configuring pipelines
is ``pipeline-gen``, which is a list of job attribute sections that are merged,
using the same rules as Spack configs (:ref:`config-scope-precedence`), from the bottom up.
Sections are applied in an order consistent with how Spack orders scope precedence when merging lists.
There are two main section types, ``<type>-job`` sections and ``submapping``
sections.
^^^^^^^^^^^^^^^^^^^^^^
Job Attribute Sections
^^^^^^^^^^^^^^^^^^^^^^
Each type of job may have attributes added or removed via sections in the ``pipeline-gen``
list. Job type specific attributes may be specified using the keys ``<type>-job`` to
add attributes to all jobs of type ``<type>`` or ``<type>-job-remove`` to remove attributes
of type ``<type>``. Each section may only contain one type of job attribute specification, i.e.,
``build-job`` and ``noop-job`` may not coexist but ``build-job`` and ``build-job-remove`` may.
.. note::
The ``*-remove`` specifications are applied before the additive attribute specification.
For example, if both ``build-job`` and ``build-job-remove`` are listed in
the same ``pipeline-gen`` section, an attribute value named in both will still exist in the
merged ``build-job`` after the section is applied, since the removal happens before the addition.
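For instance, a sketch (with a hypothetical tag) where a removal and an addition appear in the
same section:

.. code-block:: yaml

   ci:
     pipeline-gen:
       - build-job-remove:
           tags: [large-mem]    # removed first
         build-job:
           tags: [large-mem]    # then added back, so it persists in the merged job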
All of the attributes specified are forwarded to the generated CI jobs; however, special
treatment is applied to the attributes ``tags``, ``image``, ``variables``, ``script``,
``before_script``, and ``after_script`` as they are components recognized explicitly by the
Spack CI generator. For the ``tags`` attribute, Spack will remove reserved tags
(:ref:`reserved_tags`) from all jobs specified in the config. In some cases, such as for
``signing`` jobs, reserved tags will be added back based on the type of CI that is being run.
Once a runner has been chosen to build a release spec, the ``build-job*``
sections provide information determining details of the job in the context of
the runner. At least one of the ``build-job*`` sections must contain a ``tags`` key, which
Once a runner has been chosen to build a release spec, the ``runner-attributes``
section provides information determining details of the job in the context of
the runner. The ``runner-attributes`` section must have a ``tags`` key, which
is a list containing at least one tag used to select the runner from among the
runners known to the gitlab instance. For Docker executor type runners, the
``image`` key is used to specify the Docker image used to build the release spec
@@ -550,7 +554,7 @@ information on to the runner that it needs to do its work (e.g. scheduler
parameters, etc.). Any ``variables`` provided here will be added, verbatim, to
each job.
The ``build-job`` section also allows users to supply custom ``script``,
The ``runner-attributes`` section also allows users to supply custom ``script``,
``before_script``, and ``after_script`` sections to be applied to every job
scheduled on that runner. This allows users to do any custom preparation or
cleanup tasks that fit their particular workflow, as well as completely
@@ -561,45 +565,46 @@ environment directory is located within your ``--artifacts_root`` (or if not
provided, within your ``$CI_PROJECT_DIR``), activates that environment for
you, and invokes ``spack ci rebuild``.
Sections that specify scripts (``script``, ``before_script``, ``after_script``) are all
read as lists of commands or lists of lists of commands. It is recommended to write scripts
as lists of lists if scripts will be composed via merging. The default behavior of merging
lists will remove duplicate commands and potentially apply unwanted reordering, whereas
merging lists of lists will preserve the local ordering and never remove duplicate
commands. When writing commands to the CI target script, all lists are expanded and
flattened into a single list.
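A sketch of a script written as a list of lists, which preserves ordering when composed via
merging (the commands shown are illustrative only):

.. code-block:: yaml

   ci:
     pipeline-gen:
       - build-job:
           script:
             - - . "./share/spack/setup-env.sh"   # first command group
               - spack --version
             - - spack -d ci rebuild              # second command group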
.. _staging_algorithm:
^^^^^^^^^^^^^^^^^^^
Submapping Sections
^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Summary of ``.gitlab-ci.yml`` generation algorithm
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A special case of attribute specification is the ``submapping`` section which may be used
to apply job attributes to build jobs based on the package spec associated with the rebuild
job. Submapping is specified as a list of spec ``match`` lists associated with
``build-job``/``build-job-remove`` sections. There are two options for ``match_behavior``:
either ``first`` or ``merge`` may be specified. In either case, the ``submapping`` list is
processed from the bottom up, and then each ``match`` list is searched for a string that
satisfies the check ``spec.satisfies({match_item})`` for each concrete spec.
All specs yielded by the matrix (or all the specs in the environment) have their
dependencies computed, and the entire resulting set of specs are staged together
before being run through the ``gitlab-ci/mappings`` entries, where each staged
spec is assigned a runner. "Staging" is the name given to the process of
figuring out in what order the specs should be built, taking into consideration
Gitlab CI rules about jobs/stages. In the staging process the goal is to maximize
the number of jobs in any stage of the pipeline, while ensuring that the jobs in
any stage only depend on jobs in previous stages (since those jobs are guaranteed
to have completed already). As a runner is determined for a job, the information
in the ``runner-attributes`` is used to populate various parts of the job
description that will be used by Gitlab CI. Once all the jobs have been assigned
a runner, the ``.gitlab-ci.yml`` is written to disk.
In the case of ``match_behavior: first``, the first ``match`` section in the list of
``submappings`` that contains a string that satisfies the spec will apply its
``build-job*`` attributes to the rebuild job associated with that spec. This is the
default behavior, and it is used when no ``match_behavior`` is specified.
The short example provided above would result in the ``readline``, ``ncurses``,
and ``pkgconf`` packages getting staged and built on the runner chosen by the
``spack-k8s`` tag. In this example, spack assumes the runner is a Docker executor
type runner, and thus certain jobs will be run in the ``centos7`` container,
and others in the ``ubuntu-18.04`` container. The resulting ``.gitlab-ci.yml``
will contain 6 jobs in three stages. Once the jobs have been generated, the
presence of a ``SPACK_CDASH_AUTH_TOKEN`` environment variable during the
``spack ci generate`` command would result in all of the jobs being put in a
build group on CDash called "Release Testing" (that group will be created if
it didn't already exist).
In the case of ``match_behavior: merge``, all of the ``match`` sections in the list of
``submappings`` that contain a string that satisfies the spec will have the associated
``build-job*`` attributes applied to the rebuild job associated with that spec. Again,
the attributes are merged starting from the bottom match and going up to the top match.
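A sketch of a submapping section using ``match_behavior: merge`` (the runner tag and variable
name are hypothetical, and this assumes ``match_behavior`` sits alongside ``submapping`` in the
same ``pipeline-gen`` entry):

.. code-block:: yaml

   ci:
     pipeline-gen:
       - match_behavior: merge
         submapping:
           - match:
               - os=ubuntu18.04
             build-job:
               tags: [ubuntu-runner]
           - match:
               - readline
             build-job:
               variables:
                 EXTRA_VAR: "1"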
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Optional compiler bootstrapping
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In the case that no match is found in a submapping section, no additional attributes will be applied.
^^^^^^^^^^^^^
Bootstrapping
^^^^^^^^^^^^^
The ``bootstrap`` section allows you to specify lists of specs from
your ``definitions`` that should be staged ahead of the environment's ``specs``. At the moment
Spack pipelines also have support for bootstrapping compilers on systems that
may not already have the desired compilers installed. The idea here is that
you can specify a list of things to bootstrap in your ``definitions``, and
spack will guarantee those will be installed in a phase of the pipeline before
your release specs, so that you can rely on those packages being available in
the binary mirror when you need them later on in the pipeline. At the moment
the only viable use-case for bootstrapping is to install compilers.
Here's an example of what bootstrapping some compilers might look like:
@@ -632,18 +637,18 @@ Here's an example of what bootstrapping some compilers might look like:
exclude:
- '%gcc@7.3.0 os=centos7'
- '%gcc@5.5.0 os=ubuntu18.04'
ci:
gitlab-ci:
bootstrap:
- name: compiler-pkgs
compiler-agnostic: true
pipeline-gen:
# similar to the example higher up in this description
mappings:
# mappings similar to the example higher up in this description
...
The example above adds a list to the ``definitions`` called ``compiler-pkgs``
(you can add any number of these), which lists compiler packages that should
be staged ahead of the full matrix of release specs (in this example, only
readline). Then within the ``ci`` section, note the addition of a
readline). Then within the ``gitlab-ci`` section, note the addition of a
``bootstrap`` section, which can contain a list of items, each referring to
a list in the ``definitions`` section. These items can either
be a dictionary or a string. If you supply a dictionary, it must have a name
@@ -675,86 +680,6 @@ environment/stack file, and in that case no bootstrapping will be done (only the
specs will be staged for building) and the runners will be expected to already
have all needed compilers installed and configured for spack to use.
^^^^^^^^^^^^^^^^^^^
Pipeline Buildcache
^^^^^^^^^^^^^^^^^^^
The ``enable-artifacts-buildcache`` key
takes a boolean and determines whether the pipeline uses artifacts to store and
pass along the buildcaches from one stage to the next (the default if you don't
provide this option is ``False``).
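For example, a minimal sketch enabling it:

.. code-block:: yaml

   ci:
     enable-artifacts-buildcache: True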
^^^^^^^^^^^^^^^^
Broken Specs URL
^^^^^^^^^^^^^^^^
The optional ``broken-specs-url`` key tells Spack to check against a list of
specs that are known to be currently broken in ``develop``. If any such specs
are found, the ``spack ci generate`` command will fail with an error message
informing the user what broken specs were encountered. This allows the pipeline
to fail early and avoid wasting compute resources attempting to build packages
that will not succeed.
^^^^^
CDash
^^^^^
The optional ``cdash`` section provides information that will be used by the
``spack ci generate`` command (invoked by ``spack ci start``) for reporting
to CDash. All the jobs generated from this environment will belong to a
"build group" within CDash that can be tracked over time. As the release
progresses, this build group may have jobs added or removed. The url, project,
and site are used to specify the CDash instance to which build results should
be reported.
Take a look at the
`schema <https://github.com/spack/spack/blob/develop/lib/spack/spack/schema/ci.py>`_
for the ci section of the spack environment file, to see precisely what
syntax is allowed there.
.. _reserved_tags:
^^^^^^^^^^^^^
Reserved Tags
^^^^^^^^^^^^^
Spack has a subset of tags (``public``, ``protected``, and ``notary``) that it reserves
for classifying runners that may require special permissions or access. The tags
``public`` and ``protected`` are used to distinguish between runners that use public
permissions and runners with protected permissions. The ``notary`` tag is a special tag
that is used to indicate runners that have access to the highly protected information
used for signing binaries using the ``signing`` job.
.. _staging_algorithm:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Summary of ``.gitlab-ci.yml`` generation algorithm
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
All specs yielded by the matrix (or all the specs in the environment) have their
dependencies computed, and the entire resulting set of specs are staged together
before being run through the ``ci/pipeline-gen`` entries, where each staged
spec is assigned a runner. "Staging" is the name given to the process of
figuring out in what order the specs should be built, taking into consideration
Gitlab CI rules about jobs/stages. In the staging process the goal is to maximize
the number of jobs in any stage of the pipeline, while ensuring that the jobs in
any stage only depend on jobs in previous stages (since those jobs are guaranteed
to have completed already). As a runner is determined for a job, the information
in the merged ``any-job*`` and ``build-job*`` sections is used to populate various parts of the job
description that will be used by the target CI pipelines. Once all the jobs have been assigned
a runner, the ``.gitlab-ci.yml`` is written to disk.
The short example provided above would result in the ``readline``, ``ncurses``,
and ``pkgconf`` packages getting staged and built on the runner chosen by the
``spack-k8s`` tag. In this example, spack assumes the runner is a Docker executor
type runner, and thus certain jobs will be run in the ``centos7`` container,
and others in the ``ubuntu-18.04`` container. The resulting ``.gitlab-ci.yml``
will contain 6 jobs in three stages. Once the jobs have been generated, the
presence of a ``SPACK_CDASH_AUTH_TOKEN`` environment variable during the
``spack ci generate`` command would result in all of the jobs being put in a
build group on CDash called "Release Testing" (that group will be created if
it didn't already exist).
-------------------------------------
Using a custom spack in your pipeline
-------------------------------------
@@ -801,21 +726,23 @@ generated by ``spack ci generate``. You also want your generated rebuild jobs
spack:
...
ci:
pipeline-gen:
- build-job:
tags:
- spack-kube
image: spack/ubuntu-bionic
before_script:
- git clone ${SPACK_REPO}
- pushd spack && git checkout ${SPACK_REF} && popd
- . "./spack/share/spack/setup-env.sh"
script:
- spack env activate --without-view ${SPACK_CONCRETE_ENV_DIR}
- spack -d ci rebuild
after_script:
- rm -rf ./spack
gitlab-ci:
mappings:
- match:
- os=ubuntu18.04
runner-attributes:
tags:
- spack-kube
image: spack/ubuntu-bionic
before_script:
- git clone ${SPACK_REPO}
- pushd spack && git checkout ${SPACK_REF} && popd
- . "./spack/share/spack/setup-env.sh"
script:
- spack env activate --without-view ${SPACK_CONCRETE_ENV_DIR}
- spack -d ci rebuild
after_script:
- rm -rf ./spack
Now all of the generated rebuild jobs will use the same shell script to clone
spack before running their actual workload.
@@ -904,4 +831,3 @@ verify binary packages (when installing or creating buildcaches). You could
also have already trusted a key spack know about, or if no key is present anywhere,
spack will install specs using ``--no-check-signature`` and create buildcaches
using ``-u`` (for unsigned binaries).

lib/spack/env/cc

@@ -427,55 +427,6 @@ isystem_include_dirs_list=""
libs_list=""
other_args_list=""
# Global state for keeping track of -Wl,-rpath -Wl,/path
wl_expect_rpath=no
# Same, but for -Xlinker -rpath -Xlinker /path
xlinker_expect_rpath=no
parse_Wl() {
# drop -Wl
shift
while [ $# -ne 0 ]; do
if [ "$wl_expect_rpath" = yes ]; then
if system_dir "$1"; then
append system_rpath_dirs_list "$1"
else
append rpath_dirs_list "$1"
fi
wl_expect_rpath=no
else
case "$1" in
-rpath=*)
arg="${1#-rpath=}"
if system_dir "$arg"; then
append system_rpath_dirs_list "$arg"
else
append rpath_dirs_list "$arg"
fi
;;
--rpath=*)
arg="${1#--rpath=}"
if system_dir "$arg"; then
append system_rpath_dirs_list "$arg"
else
append rpath_dirs_list "$arg"
fi
;;
-rpath|--rpath)
wl_expect_rpath=yes
;;
"$dtags_to_strip")
;;
*)
append other_args_list "-Wl,$1"
;;
esac
fi
shift
done
}
while [ $# -ne 0 ]; do
@@ -575,77 +526,88 @@ while [ $# -ne 0 ]; do
append other_args_list "-l$arg"
;;
-Wl,*)
IFS=,
parse_Wl $1
unset IFS
arg="${1#-Wl,}"
if [ -z "$arg" ]; then shift; arg="$1"; fi
case "$arg" in
-rpath=*) rp="${arg#-rpath=}" ;;
--rpath=*) rp="${arg#--rpath=}" ;;
-rpath,*) rp="${arg#-rpath,}" ;;
--rpath,*) rp="${arg#--rpath,}" ;;
-rpath|--rpath)
shift; arg="$1"
case "$arg" in
-Wl,*)
rp="${arg#-Wl,}"
;;
*)
die "-Wl,-rpath was not followed by -Wl,*"
;;
esac
;;
"$dtags_to_strip")
: # We want to remove explicitly this flag
;;
*)
append other_args_list "-Wl,$arg"
;;
esac
;;
-Xlinker,*)
arg="${1#-Xlinker,}"
if [ -z "$arg" ]; then shift; arg="$1"; fi
case "$arg" in
-rpath=*) rp="${arg#-rpath=}" ;;
--rpath=*) rp="${arg#--rpath=}" ;;
-rpath|--rpath)
shift; arg="$1"
case "$arg" in
-Xlinker,*)
rp="${arg#-Xlinker,}"
;;
*)
die "-Xlinker,-rpath was not followed by -Xlinker,*"
;;
esac
;;
*)
append other_args_list "-Xlinker,$arg"
;;
esac
;;
-Xlinker)
shift
if [ $# -eq 0 ]; then
# -Xlinker without value: let the compiler error about it.
append other_args_list -Xlinker
xlinker_expect_rpath=no
break
elif [ "$xlinker_expect_rpath" = yes ]; then
# Register the path of -Xlinker -rpath <other args> -Xlinker <path>
if system_dir "$1"; then
append system_rpath_dirs_list "$1"
else
append rpath_dirs_list "$1"
if [ "$2" = "-rpath" ]; then
if [ "$3" != "-Xlinker" ]; then
die "-Xlinker,-rpath was not followed by -Xlinker,*"
fi
xlinker_expect_rpath=no
shift 3;
rp="$1"
elif [ "$2" = "$dtags_to_strip" ]; then
shift # We want to remove explicitly this flag
else
case "$1" in
-rpath=*)
arg="${1#-rpath=}"
if system_dir "$arg"; then
append system_rpath_dirs_list "$arg"
else
append rpath_dirs_list "$arg"
fi
;;
--rpath=*)
arg="${1#--rpath=}"
if system_dir "$arg"; then
append system_rpath_dirs_list "$arg"
else
append rpath_dirs_list "$arg"
fi
;;
-rpath|--rpath)
xlinker_expect_rpath=yes
;;
"$dtags_to_strip")
;;
*)
append other_args_list -Xlinker
append other_args_list "$1"
;;
esac
append other_args_list "$1"
fi
;;
"$dtags_to_strip")
;;
*)
append other_args_list "$1"
if [ "$1" = "$dtags_to_strip" ]; then
: # We want to remove explicitly this flag
else
append other_args_list "$1"
fi
;;
esac
# test rpaths against system directories in one place.
if [ -n "$rp" ]; then
if system_dir "$rp"; then
append system_rpath_dirs_list "$rp"
else
append rpath_dirs_list "$rp"
fi
fi
shift
done
# We found `-Xlinker -rpath` but no matching value `-Xlinker /path`. Just append
# `-Xlinker -rpath` again and let the compiler or linker handle the error during arg
# parsing.
if [ "$xlinker_expect_rpath" = yes ]; then
append other_args_list -Xlinker
append other_args_list -rpath
fi
# Same, but for -Wl flags.
if [ "$wl_expect_rpath" = yes ]; then
append other_args_list -Wl,-rpath
fi
#
# Add flags from Spack's cppflags, cflags, cxxflags, fcflags, fflags, and
# ldflags. We stick to the order that gmake puts the flags in by default.


@@ -18,7 +18,7 @@
* Homepage: https://pypi.python.org/pypi/archspec
* Usage: Labeling, comparison and detection of microarchitectures
* Version: 0.2.0-dev (commit d02dadbac4fa8f3a60293c4fbfd59feadaf546dc)
* Version: 0.2.0 (commit e44bad9c7b6defac73696f64078b2fe634719b62)
astunparse
----------------


@@ -1,8 +0,0 @@
"""
Run the `archspec` CLI as a module.
"""
import sys
from .cli import main
sys.exit(main())


@@ -6,61 +6,19 @@
archspec command line interface
"""
import argparse
import typing
import click
import archspec
import archspec.cpu
def _make_parser() -> argparse.ArgumentParser:
parser = argparse.ArgumentParser(
"archspec",
description="archspec command line interface",
add_help=False,
)
parser.add_argument(
"--version",
"-V",
help="Show the version and exit.",
action="version",
version=f"archspec, version {archspec.__version__}",
)
parser.add_argument("--help", "-h", help="Show the help and exit.", action="help")
subcommands = parser.add_subparsers(
title="command",
metavar="COMMAND",
dest="command",
)
cpu_command = subcommands.add_parser(
"cpu",
help="archspec command line interface for CPU",
description="archspec command line interface for CPU",
)
cpu_command.set_defaults(run=cpu)
return parser
@click.group(name="archspec")
@click.version_option(version=archspec.__version__)
def main():
"""archspec command line interface"""
def cpu() -> int:
"""Run the `archspec cpu` subcommand."""
print(archspec.cpu.host())
return 0
def main(argv: typing.Optional[typing.List[str]] = None) -> int:
"""Run the `archspec` command line interface."""
parser = _make_parser()
try:
args = parser.parse_args(argv)
except SystemExit as err:
return err.code
if args.command is None:
parser.print_help()
return 0
return args.run()
@main.command()
def cpu():
"""archspec command line interface for CPU"""
click.echo(archspec.cpu.host())


@@ -268,14 +268,15 @@ def tuplify(ver):
return flags
msg = (
"cannot produce optimized binary for micro-architecture '{0}' with {1}@{2}"
"cannot produce optimized binary for micro-architecture '{0}'"
" with {1}@{2} [supported compiler versions are {3}]"
)
msg = msg.format(
self.name,
compiler,
version,
", ".join([x["versions"] for x in compiler_info]),
)
if compiler_info:
versions = [x["versions"] for x in compiler_info]
msg += f' [supported compiler versions are {", ".join(versions)}]'
else:
msg += " [no supported compiler versions]"
msg = msg.format(self.name, compiler, version)
raise UnsupportedMicroarchitecture(msg)


@@ -102,8 +102,7 @@
"name": "x86-64",
"flags": "-march={name} -mtune=generic"
}
],
"nvhpc": []
]
}
},
"x86_64_v2": {
@@ -158,8 +157,7 @@
"name": "x86-64-v2",
"flags": "-march={name} -mtune=generic"
}
],
"nvhpc": []
]
}
},
"x86_64_v3": {
@@ -230,13 +228,6 @@
"name": "x86-64-v3",
"flags": "-march={name} -mtune=generic"
}
],
"nvhpc" : [
{
"versions": ":",
"name": "px",
"flags": "-tp {name} -mpopcnt -msse3 -msse4.1 -msse4.2 -mssse3 -mavx -mavx2 -mbmi -mbmi2 -mf16c -mfma -mlzcnt -mxsave"
}
]
}
},
@@ -313,13 +304,6 @@
"name": "x86-64-v4",
"flags": "-march={name} -mtune=generic"
}
],
"nvhpc": [
{
"versions": ":",
"name": "px",
"flags": "-tp {name} -mpopcnt -msse3 -msse4.1 -msse4.2 -mssse3 -mavx -mavx2 -mbmi -mbmi2 -mf16c -mfma -mlzcnt -mxsave -mavx512f -mavx512bw -mavx512cd -mavx512dq -mavx512vl"
}
]
}
},
@@ -374,8 +358,7 @@
"versions": ":",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": []
]
}
},
"core2": {
@@ -429,8 +412,7 @@
"versions": ":",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": []
]
}
},
"nehalem": {
@@ -495,8 +477,7 @@
"name": "corei7",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": []
]
}
},
"westmere": {
@@ -558,8 +539,7 @@
"name": "corei7",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": []
]
}
},
"sandybridge": {
@@ -629,12 +609,6 @@
"versions": ":",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": ":",
"flags": "-tp {name}"
}
]
}
},
@@ -707,12 +681,6 @@
"versions": ":",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": ":",
"flags": "-tp {name}"
}
]
}
},
@@ -790,12 +758,6 @@
"versions": ":",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": ":",
"flags": "-tp {name}"
}
]
}
},
@@ -865,13 +827,6 @@
"versions": ":",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": ":",
"name": "haswell",
"flags": "-tp {name}"
}
]
}
},
@@ -944,13 +899,6 @@
"versions": ":",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": ":",
"name": "haswell",
"flags": "-tp {name}"
}
]
}
},
@@ -1115,13 +1063,6 @@
"name": "skylake-avx512",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": ":",
"name": "skylake",
"flags": "-tp {name}"
}
]
}
},
@@ -1202,13 +1143,6 @@
"versions": ":",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": ":",
"name": "skylake",
"flags": "-tp {name}"
}
]
}
},
@@ -1288,13 +1222,6 @@
"versions": ":",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": ":",
"name": "skylake",
"flags": "-tp {name}"
}
]
}
},
@@ -1402,13 +1329,6 @@
"name": "icelake-client",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": ":",
"name": "skylake",
"flags": "-tp {name}"
}
]
}
},
@@ -1467,8 +1387,7 @@
"warnings": "Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags": "-msse2"
}
],
"nvhpc": []
]
}
},
"bulldozer": {
@@ -1532,12 +1451,6 @@
"warnings": "Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags": "-msse3"
}
],
"nvhpc": [
{
"versions": ":",
"flags": "-tp {name}"
}
]
}
},
@@ -1606,12 +1519,6 @@
"warnings": "Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags": "-msse3"
}
],
"nvhpc": [
{
"versions": ":",
"flags": "-tp {name}"
}
]
}
},
@@ -1681,13 +1588,6 @@
"warnings": "Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags": "-msse4.2"
}
],
"nvhpc": [
{
"versions": ":",
"name": "piledriver",
"flags": "-tp {name}"
}
]
}
},
@@ -1763,13 +1663,6 @@
"name": "core-avx2",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": ":",
"name": "piledriver",
"flags": "-tp {name}"
}
]
}
},
@@ -1848,12 +1741,6 @@
"name": "core-avx2",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": ":",
"flags": "-tp {name}"
}
]
}
},
@@ -1933,12 +1820,6 @@
"name": "core-avx2",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": "20.5:",
"flags": "-tp {name}"
}
]
}
},
@@ -2021,12 +1902,6 @@
"name": "core-avx2",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": "21.11:",
"flags": "-tp {name}"
}
]
}
},
@@ -2107,15 +1982,7 @@
"name": "znver4",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": "21.11:",
"name": "zen3",
"flags": "-tp {name}",
"warnings": "zen4 is not fully supported by nvhpc yet, falling back to zen3"
}
]
]
}
},
"ppc64": {
@@ -2220,8 +2087,7 @@
"versions": ":",
"flags": "-mcpu={name} -mtune={name}"
}
],
"nvhpc": []
]
}
},
"power8le": {
@@ -2250,13 +2116,6 @@
"name": "power8",
"flags": "-mcpu={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": ":",
"name": "pwr8",
"flags": "-tp {name}"
}
]
}
},
@@ -2280,13 +2139,6 @@
"name": "power9",
"flags": "-mcpu={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": ":",
"name": "pwr9",
"flags": "-tp {name}"
}
]
}
},
@@ -2318,8 +2170,7 @@
"versions": ":",
"flags": "-march=armv8-a -mtune=generic"
}
],
"nvhpc": []
]
}
},
"armv8.1a": {
@@ -2701,13 +2552,6 @@
"versions": "20:",
"flags" : "-march=armv8.2-a+fp16+rcpc+dotprod+crypto"
}
],
"nvhpc" : [
{
"versions": "22.5:",
"name": "neoverse-n1",
"flags": "-tp {name}"
}
]
}
},
@@ -2773,31 +2617,15 @@
"flags" : "-march=armv8.2-a+crypto+fp16 -mtune=cortex-a72"
},
{
"versions": "8.0:8.4",
"versions": "8.0:8.9",
"flags" : "-march=armv8.2-a+fp16+dotprod+crypto -mtune=cortex-a72"
},
{
"versions": "8.5:8.9",
"versions": "9.0:9.9",
"flags" : "-mcpu=neoverse-v1"
},
{
"versions": "9.0:9.3",
"flags" : "-march=armv8.2-a+fp16+dotprod+crypto -mtune=cortex-a72"
},
{
"versions": "9.4:9.9",
"flags" : "-mcpu=neoverse-v1"
},
{
"versions": "10.0:10.1",
"flags" : "-march=armv8.2-a+fp16+dotprod+crypto -mtune=cortex-a72"
},
{
"versions": "10.2",
"flags" : "-mcpu=zeus"
},
{
"versions": "10.3:",
{
"versions": "10.0:",
"flags" : "-mcpu=neoverse-v1"
}
@@ -2829,13 +2657,6 @@
"versions": "22:",
"flags" : "-march=armv8.4-a+sve+ssbs+fp16+bf16+crypto+i8mm+rng"
}
],
"nvhpc" : [
{
"versions": "22.5:",
"name": "neoverse-n1",
"flags": "-tp {name}"
}
]
}
},
@@ -2961,10 +2782,6 @@
{
"versions": "13.0:",
"flags" : "-mcpu=apple-m1"
},
{
"versions": "16.0:",
"flags" : "-mcpu=apple-m2"
}
],
"apple-clang": [
@@ -2973,12 +2790,8 @@
"flags" : "-march=armv8.5-a"
},
{
"versions": "13.0:14.0.2",
"flags" : "-mcpu=apple-m1"
},
{
"versions": "14.0.2:",
"flags" : "-mcpu=apple-m2"
"versions": "13.0:",
"flags" : "-mcpu=vortex"
}
]
}


@@ -5,20 +5,19 @@
import collections
import collections.abc
import errno
import fnmatch
import glob
import hashlib
import itertools
import numbers
import os
import posixpath
import re
import shutil
import stat
import sys
import tempfile
from contextlib import contextmanager
from typing import Callable, Iterable, List, Match, Optional, Tuple, Union
from sys import platform as _platform
from typing import Callable, List, Match, Optional, Tuple, Union
from llnl.util import tty
from llnl.util.lang import dedupe, memoized
@@ -27,7 +26,9 @@
from spack.util.executable import Executable, which
from spack.util.path import path_to_os_path, system_path_filter
if sys.platform != "win32":
is_windows = _platform == "win32"
if not is_windows:
import grp
import pwd
else:
@@ -153,7 +154,7 @@ def lookup(name):
def getuid():
if sys.platform == "win32":
if is_windows:
import ctypes
if ctypes.windll.shell32.IsUserAnAdmin() == 0:
@@ -166,7 +167,7 @@ def getuid():
@system_path_filter
def rename(src, dst):
# On Windows, os.rename will fail if the destination file already exists
if sys.platform == "win32":
if is_windows:
# Windows path existence checks will sometimes fail on junctions/links/symlinks
# so check for that case
if os.path.exists(dst) or os.path.islink(dst):
@@ -195,7 +196,7 @@ def _get_mime_type():
"""Generate method to call `file` system command to aquire mime type
for a specified path
"""
if sys.platform == "win32":
if is_windows:
# -h option (no-dereference) does not exist in Windows
return file_command("-b", "--mime-type")
else:
@@ -550,7 +551,7 @@ def get_owner_uid(path, err_msg=None):
else:
p_stat = os.stat(path)
if sys.platform != "win32":
if _platform != "win32":
owner_uid = p_stat.st_uid
else:
sid = win32security.GetFileSecurity(
@@ -583,7 +584,7 @@ def group_ids(uid=None):
Returns:
(list of int): gids of groups the user is a member of
"""
if sys.platform == "win32":
if is_windows:
tty.warn("Function is not supported on Windows")
return []
@@ -603,7 +604,7 @@ def group_ids(uid=None):
@system_path_filter(arg_slice=slice(1))
def chgrp(path, group, follow_symlinks=True):
"""Implement the bash chgrp function on a single path"""
if sys.platform == "win32":
if is_windows:
raise OSError("Function 'chgrp' is not supported on Windows")
if isinstance(group, str):
@@ -1130,7 +1131,7 @@ def open_if_filename(str_or_file, mode="r"):
@system_path_filter
def touch(path):
"""Creates an empty file at the specified path."""
if sys.platform == "win32":
if is_windows:
perms = os.O_WRONLY | os.O_CREAT
else:
perms = os.O_WRONLY | os.O_CREAT | os.O_NONBLOCK | os.O_NOCTTY
@@ -1192,7 +1193,7 @@ def temp_cwd():
yield tmp_dir
finally:
kwargs = {}
if sys.platform == "win32":
if is_windows:
kwargs["ignore_errors"] = False
kwargs["onerror"] = readonly_file_handler(ignore_errors=True)
shutil.rmtree(tmp_dir, **kwargs)
@@ -1437,7 +1438,7 @@ def visit_directory_tree(root, visitor, rel_path="", depth=0):
try:
isdir = f.is_dir()
except OSError as e:
if sys.platform == "win32" and hasattr(e, "winerror") and e.winerror == 5 and islink:
if is_windows and hasattr(e, "winerror") and e.winerror == 5 and islink:
# if path is a symlink, determine destination and
# evaluate file vs directory
link_target = resolve_link_target_relative_to_the_link(f)
@@ -1546,11 +1547,11 @@ def readonly_file_handler(ignore_errors=False):
"""
def error_remove_readonly(func, path, exc):
if sys.platform != "win32":
if not is_windows:
raise RuntimeError("This method should only be invoked on Windows")
excvalue = exc[1]
if (
sys.platform == "win32"
is_windows
and func in (os.rmdir, os.remove, os.unlink)
and excvalue.errno == errno.EACCES
):
@@ -1580,7 +1581,7 @@ def remove_linked_tree(path):
# Windows readonly files cannot be removed by Python
# directly.
if sys.platform == "win32":
if is_windows:
kwargs["ignore_errors"] = False
kwargs["onerror"] = readonly_file_handler(ignore_errors=True)
@@ -1673,38 +1674,6 @@ def fix_darwin_install_name(path):
break
def find_first(root: str, files: Union[Iterable[str], str], bfs_depth: int = 2) -> Optional[str]:
"""Find the first file matching a pattern.
The following
.. code-block:: console
$ find /usr -name 'abc*' -o -name 'def*' -quit
is equivalent to:
>>> find_first("/usr", ["abc*", "def*"])
Any glob pattern supported by fnmatch can be used.
The search order of this method is breadth-first over directories,
until depth bfs_depth, after which depth-first search is used.
Parameters:
root (str): The root directory to start searching from
files (str or Iterable): File pattern(s) to search for
bfs_depth (int): (advanced) parameter that specifies at which
depth to switch to depth-first search.
Returns:
str or None: The matching file or None when no file is found.
"""
if isinstance(files, str):
files = [files]
return FindFirstFile(root, *files, bfs_depth=bfs_depth).find()
def find(root, files, recursive=True):
"""Search for ``files`` starting from the ``root`` directory.
@@ -2126,7 +2095,7 @@ def names(self):
# on non Windows platform
# Windows valid library extensions are:
# ['.dll', '.lib']
valid_exts = [".dll", ".lib"] if sys.platform == "win32" else [".dylib", ".so", ".a"]
valid_exts = [".dll", ".lib"] if is_windows else [".dylib", ".so", ".a"]
for ext in valid_exts:
i = name.rfind(ext)
if i != -1:
@@ -2274,7 +2243,7 @@ def find_libraries(libraries, root, shared=True, recursive=False, runtime=True):
message = message.format(find_libraries.__name__, type(libraries))
raise TypeError(message)
if sys.platform == "win32":
if is_windows:
static_ext = "lib"
# For linking (runtime=False) you need the .lib files regardless of
# whether you are doing a shared or static link
@@ -2306,7 +2275,7 @@ def find_libraries(libraries, root, shared=True, recursive=False, runtime=True):
# finally search all of root recursively. The search stops when the first
# match is found.
common_lib_dirs = ["lib", "lib64"]
if sys.platform == "win32":
if is_windows:
common_lib_dirs.extend(["bin", "Lib"])
for subdir in common_lib_dirs:
@@ -2441,7 +2410,7 @@ def _link(self, path, dest_dir):
# For py2 compatibility, we have to catch the specific Windows error code
# associated with trying to create a file that already exists (winerror 183)
except OSError as e:
if sys.platform == "win32" and (e.winerror == 183 or e.errno == errno.EEXIST):
if e.winerror == 183:
# We have either already symlinked or we are encountering a naming clash
# either way, we don't want to overwrite existing libraries
already_linked = islink(dest_file)
@@ -2754,105 +2723,3 @@ def filesummary(path, print_bytes=16) -> Tuple[int, bytes]:
return size, short_contents
except OSError:
return 0, b""
class FindFirstFile:
"""Uses hybrid iterative deepening to locate the first matching
file. Up to depth ``bfs_depth`` it uses iterative deepening, which
mimics breadth-first with the same memory footprint as depth-first
search, after which it switches to ordinary depth-first search using
``os.walk``."""
def __init__(self, root: str, *file_patterns: str, bfs_depth: int = 2):
"""Create a small summary of the given file. Does not error
when file does not exist.
Args:
root (str): directory in which to recursively search
file_patterns (str): glob file patterns understood by fnmatch
bfs_depth (int): until this depth breadth-first traversal is used,
when no match is found, the mode is switched to depth-first search.
"""
self.root = root
self.bfs_depth = bfs_depth
self.match: Callable
# normcase is trivial on posix
regex = re.compile("|".join(fnmatch.translate(os.path.normcase(p)) for p in file_patterns))
# On case sensitive filesystems match against normcase'd paths.
if os.path is posixpath:
self.match = regex.match
else:
self.match = lambda p: regex.match(os.path.normcase(p))
def find(self) -> Optional[str]:
"""Run the file search
Returns:
str or None: path of the matching file
"""
self.file = None
# First do iterative deepening (i.e. bfs through limited depth dfs)
for i in range(self.bfs_depth + 1):
if self._find_at_depth(self.root, i):
return self.file
# Then fall back to depth-first search
return self._find_dfs()
def _find_at_depth(self, path, max_depth, depth=0) -> bool:
"""Returns True when done. Notice search can be done
either because a file was found, or because it recursed
through all directories."""
try:
entries = os.scandir(path)
except OSError:
return True
done = True
with entries:
# At max depth we look for matching files.
if depth == max_depth:
for f in entries:
# Exit on match
if self.match(f.name):
self.file = os.path.join(path, f.name)
return True
# is_dir should not require a stat call, so it's a good optimization.
if self._is_dir(f):
done = False
return done
# At lower depth only recurse into subdirs
for f in entries:
if not self._is_dir(f):
continue
# If any subdir is not fully traversed, we're not done yet.
if not self._find_at_depth(os.path.join(path, f.name), max_depth, depth + 1):
done = False
# Early exit when we've found something.
if self.file:
return True
return done
def _is_dir(self, f: os.DirEntry) -> bool:
"""Returns True when f is dir we can enter (and not a symlink)."""
try:
return f.is_dir(follow_symlinks=False)
except OSError:
return False
def _find_dfs(self) -> Optional[str]:
"""Returns match or None"""
for dirpath, _, filenames in os.walk(self.root):
for file in filenames:
if self.match(file):
return os.path.join(dirpath, file)
return None


@@ -5,13 +5,15 @@
import errno
import os
import shutil
import sys
import tempfile
from os.path import exists, join
from sys import platform as _platform
from llnl.util import lang
if sys.platform == "win32":
is_windows = _platform == "win32"
if is_windows:
from win32file import CreateHardLink
@@ -21,7 +23,7 @@ def symlink(real_path, link_path):
On Windows, use junctions if os.symlink fails.
"""
if sys.platform != "win32":
if not is_windows:
os.symlink(real_path, link_path)
elif _win32_can_symlink():
# Windows requires target_is_directory=True when the target is a dir.
@@ -30,15 +32,9 @@ def symlink(real_path, link_path):
try:
# Try to use junctions
_win32_junction(real_path, link_path)
except OSError as e:
if e.errno == errno.EEXIST:
# EEXIST error indicates that file we're trying to "link"
# is already present, don't bother trying to copy which will also fail
# just raise
raise
else:
# If all else fails, fall back to copying files
shutil.copyfile(real_path, link_path)
except OSError:
# If all else fails, fall back to copying files
shutil.copyfile(real_path, link_path)
def islink(path):
@@ -103,7 +99,7 @@ def _win32_is_junction(path):
if os.path.islink(path):
return False
if sys.platform == "win32":
if is_windows:
import ctypes.wintypes
GetFileAttributes = ctypes.windll.kernel32.GetFileAttributesW


@@ -25,7 +25,7 @@ def architecture_compatible(self, target, constraint):
return (
not target.architecture
or not constraint.architecture
or target.architecture.intersects(constraint.architecture)
or target.architecture.satisfies(constraint.architecture)
)
@memoized
@@ -104,7 +104,7 @@ def compiler_compatible(self, parent, child, **kwargs):
for cversion in child.compiler.versions:
# For a few compilers use specialized comparisons.
# Otherwise match on version match.
if pversion.intersects(cversion):
if pversion.satisfies(cversion):
return True
elif parent.compiler.name == "gcc" and self._gcc_compiler_compare(
pversion, cversion


@@ -695,11 +695,8 @@ def _ensure_variant_defaults_are_parsable(pkgs, error_cls):
try:
variant.validate_or_raise(vspec, pkg_cls=pkg_cls)
except spack.variant.InvalidVariantValueError:
error_msg = (
"The default value of the variant '{}' in package '{}' failed validation"
)
question = "Is it among the allowed values?"
errors.append(error_cls(error_msg.format(variant_name, pkg_name), [question]))
error_msg = "The variant '{}' default value in package '{}' cannot be validated"
errors.append(error_cls(error_msg.format(variant_name, pkg_name), []))
return errors
@@ -724,7 +721,7 @@ def _version_constraints_are_satisfiable_by_some_version_in_repo(pkgs, error_cls
dependency_pkg_cls = None
try:
dependency_pkg_cls = spack.repo.path.get_pkg_class(s.name)
assert any(v.intersects(s.versions) for v in list(dependency_pkg_cls.versions))
assert any(v.satisfies(s.versions) for v in list(dependency_pkg_cls.versions))
except Exception:
summary = (
"{0}: dependency on {1} cannot be satisfied " "by known versions of {1.name}"


@@ -6,8 +6,6 @@
import codecs
import collections
import hashlib
import io
import itertools
import json
import multiprocessing.pool
import os
@@ -22,9 +20,7 @@
import urllib.parse
import urllib.request
import warnings
from contextlib import closing, contextmanager
from gzip import GzipFile
from typing import Union
from contextlib import closing
from urllib.error import HTTPError, URLError
import ruamel.yaml as yaml
@@ -43,7 +39,6 @@
import spack.platforms
import spack.relocate as relocate
import spack.repo
import spack.stage
import spack.store
import spack.traverse as traverse
import spack.util.crypto
@@ -503,9 +498,7 @@ def _binary_index():
#: Singleton binary_index instance
binary_index: Union[BinaryCacheIndex, llnl.util.lang.Singleton] = llnl.util.lang.Singleton(
_binary_index
)
binary_index = llnl.util.lang.Singleton(_binary_index)
class NoOverwriteException(spack.error.SpackError):
@@ -746,31 +739,34 @@ def get_buildfile_manifest(spec):
return data
def prefixes_to_hashes(spec):
return {
str(s.prefix): s.dag_hash()
for s in itertools.chain(
spec.traverse(root=True, deptype="link"), spec.dependencies(deptype="run")
)
}
def get_buildinfo_dict(spec, rel=False):
"""Create metadata for a tarball"""
def write_buildinfo_file(spec, workdir, rel=False):
"""
Create a cache file containing information
required for the relocation
"""
manifest = get_buildfile_manifest(spec)
return {
"sbang_install_path": spack.hooks.sbang.sbang_install_path(),
"relative_rpaths": rel,
"buildpath": spack.store.layout.root,
"spackprefix": spack.paths.prefix,
"relative_prefix": os.path.relpath(spec.prefix, spack.store.layout.root),
"relocate_textfiles": manifest["text_to_relocate"],
"relocate_binaries": manifest["binary_to_relocate"],
"relocate_links": manifest["link_to_relocate"],
"hardlinks_deduped": manifest["hardlinks_deduped"],
"prefix_to_hash": prefixes_to_hashes(spec),
}
prefix_to_hash = dict()
prefix_to_hash[str(spec.package.prefix)] = spec.dag_hash()
deps = spack.build_environment.get_rpath_deps(spec.package)
for d in deps + spec.dependencies(deptype="run"):
prefix_to_hash[str(d.prefix)] = d.dag_hash()
# Create buildinfo data and write it to disk
buildinfo = {}
buildinfo["sbang_install_path"] = spack.hooks.sbang.sbang_install_path()
buildinfo["relative_rpaths"] = rel
buildinfo["buildpath"] = spack.store.layout.root
buildinfo["spackprefix"] = spack.paths.prefix
buildinfo["relative_prefix"] = os.path.relpath(spec.prefix, spack.store.layout.root)
buildinfo["relocate_textfiles"] = manifest["text_to_relocate"]
buildinfo["relocate_binaries"] = manifest["binary_to_relocate"]
buildinfo["relocate_links"] = manifest["link_to_relocate"]
buildinfo["hardlinks_deduped"] = manifest["hardlinks_deduped"]
buildinfo["prefix_to_hash"] = prefix_to_hash
filename = buildinfo_file_name(workdir)
with open(filename, "w") as outfile:
outfile.write(syaml.dump(buildinfo, default_flow_style=True))
def tarball_directory_name(spec):
@@ -1143,68 +1139,6 @@ def generate_key_index(key_prefix, tmpdir=None):
shutil.rmtree(tmpdir)
@contextmanager
def gzip_compressed_tarfile(path):
"""Create a reproducible, compressed tarfile"""
# Create gzip compressed tarball of the install prefix
# 1) Use explicit empty filename and mtime 0 for gzip header reproducibility.
# If the filename="" is dropped, Python will use fileobj.name instead.
# This should effectively mimic `gzip --no-name`.
# 2) On AMD Ryzen 3700X and an SSD disk, we have the following on compression speed:
# compresslevel=6 gzip default: llvm takes 4mins, roughly 2.1GB
# compresslevel=9 python default: llvm takes 12mins, roughly 2.1GB
# So we follow gzip.
with open(path, "wb") as fileobj, closing(
GzipFile(filename="", mode="wb", compresslevel=6, mtime=0, fileobj=fileobj)
) as gzip_file, tarfile.TarFile(name="", mode="w", fileobj=gzip_file) as tar:
yield tar
def deterministic_tarinfo(tarinfo: tarfile.TarInfo):
# We only add files, symlinks, hardlinks, and directories
# No character devices, block devices and FIFOs should ever enter a tarball.
if tarinfo.isdev():
return None
# For distribution, it makes no sense to keep user/group data, since (a) they don't exist
# on other machines, and (b) they lead to surprises as `tar x` run as root will change
# ownership if it can. We want to extract as the current user. By setting owner to root,
# root will extract as root, and non-privileged user will extract as themselves.
tarinfo.uid = 0
tarinfo.gid = 0
tarinfo.uname = ""
tarinfo.gname = ""
# Reset mtime to epoch time, our prefixes are not truly immutable, so files may get
# touched; as long as the content does not change, this ensures we get stable tarballs.
tarinfo.mtime = 0
# Normalize mode
if tarinfo.isfile() or tarinfo.islnk():
# If user can execute, use 0o755; else 0o644
# This is to avoid potentially unsafe world writable & executable files that may get
# extracted when Python or tar is run with privileges
tarinfo.mode = 0o644 if tarinfo.mode & 0o100 == 0 else 0o755
else: # symbolic link and directories
tarinfo.mode = 0o755
return tarinfo
def tar_add_metadata(tar: tarfile.TarFile, path: str, data: dict):
# Serialize buildinfo for the tarball
bstring = syaml.dump(data, default_flow_style=True).encode("utf-8")
tarinfo = tarfile.TarInfo(name=path)
tarinfo.size = len(bstring)
tar.addfile(deterministic_tarinfo(tarinfo), io.BytesIO(bstring))
def _do_create_tarball(tarfile_path, binaries_dir, pkg_dir, buildinfo):
with gzip_compressed_tarfile(tarfile_path) as tar:
tar.add(name=binaries_dir, arcname=pkg_dir, filter=deterministic_tarinfo)
tar_add_metadata(tar, buildinfo_file_name(pkg_dir), buildinfo)
def _build_tarball(
spec,
out_url,
@@ -1222,37 +1156,15 @@ def _build_tarball(
if not spec.concrete:
raise ValueError("spec must be concrete to build tarball")
with tempfile.TemporaryDirectory(dir=spack.stage.get_stage_root()) as tmpdir:
_build_tarball_in_stage_dir(
spec,
out_url,
stage_dir=tmpdir,
force=force,
relative=relative,
unsigned=unsigned,
allow_root=allow_root,
key=key,
regenerate_index=regenerate_index,
)
# set up some paths
tmpdir = tempfile.mkdtemp()
cache_prefix = build_cache_prefix(tmpdir)
def _build_tarball_in_stage_dir(
spec,
out_url,
stage_dir,
force=False,
relative=False,
unsigned=False,
allow_root=False,
key=None,
regenerate_index=False,
):
cache_prefix = build_cache_prefix(stage_dir)
tarfile_name = tarball_name(spec, ".spack")
tarfile_dir = os.path.join(cache_prefix, tarball_directory_name(spec))
tarfile_path = os.path.join(tarfile_dir, tarfile_name)
spackfile_path = os.path.join(cache_prefix, tarball_path_name(spec, ".spack"))
remote_spackfile_path = url_util.join(out_url, os.path.relpath(spackfile_path, stage_dir))
remote_spackfile_path = url_util.join(out_url, os.path.relpath(spackfile_path, tmpdir))
mkdirp(tarfile_dir)
if web_util.url_exists(remote_spackfile_path):
@@ -1271,7 +1183,7 @@ def _build_tarball_in_stage_dir(
signed_specfile_path = "{0}.sig".format(specfile_path)
remote_specfile_path = url_util.join(
out_url, os.path.relpath(specfile_path, os.path.realpath(stage_dir))
out_url, os.path.relpath(specfile_path, os.path.realpath(tmpdir))
)
remote_signed_specfile_path = "{0}.sig".format(remote_specfile_path)
@@ -1287,7 +1199,7 @@ def _build_tarball_in_stage_dir(
raise NoOverwriteException(url_util.format(remote_specfile_path))
pkg_dir = os.path.basename(spec.prefix.rstrip(os.path.sep))
workdir = os.path.join(stage_dir, pkg_dir)
workdir = os.path.join(tmpdir, pkg_dir)
# TODO: We generally don't want to mutate any files, but when using relative
# mode, Spack unfortunately *does* mutate rpaths and links ahead of time.
@@ -1305,22 +1217,39 @@ def _build_tarball_in_stage_dir(
os.remove(temp_tarfile_path)
else:
binaries_dir = spec.prefix
mkdirp(os.path.join(workdir, ".spack"))
# create info for later relocation and create tar
buildinfo = get_buildinfo_dict(spec, relative)
write_buildinfo_file(spec, workdir, relative)
# optionally make the paths in the binaries relative to each other
# in the spack install tree before creating tarball
if relative:
make_package_relative(workdir, spec, buildinfo, allow_root)
elif not allow_root:
ensure_package_relocatable(buildinfo, binaries_dir)
try:
if relative:
make_package_relative(workdir, spec, allow_root)
elif not allow_root:
ensure_package_relocatable(workdir, binaries_dir)
except Exception as e:
shutil.rmtree(workdir)
shutil.rmtree(tarfile_dir)
shutil.rmtree(tmpdir)
tty.die(e)
_do_create_tarball(tarfile_path, binaries_dir, pkg_dir, buildinfo)
# create gzip compressed tarball of the install prefix
# On an AMD Ryzen 3700X with an SSD disk, we measured the following compression speeds:
# compresslevel=6 gzip default: llvm takes 4mins, roughly 2.1GB
# compresslevel=9 python default: llvm takes 12mins, roughly 2.1GB
# So we follow gzip.
with closing(tarfile.open(tarfile_path, "w:gz", compresslevel=6)) as tar:
tar.add(name=binaries_dir, arcname=pkg_dir)
if not relative:
# Add buildinfo file
buildinfo_path = buildinfo_file_name(workdir)
buildinfo_arcname = buildinfo_file_name(pkg_dir)
tar.add(name=buildinfo_path, arcname=buildinfo_arcname)
# remove copy of install directory
if relative:
shutil.rmtree(workdir)
shutil.rmtree(workdir)
# get the sha256 checksum of the tarball
checksum = checksum_tarball(tarfile_path)
@@ -1346,11 +1275,7 @@ def _build_tarball_in_stage_dir(
spec_dict["buildinfo"] = buildinfo
with open(specfile_path, "w") as outfile:
# Note: when using gpg clear sign, we need to avoid long lines (19995 chars).
# If lines are longer, they are truncated without error. Thanks GPG!
# So, here we still add newlines, but no indent, to save on file size and
# line length.
json.dump(spec_dict, outfile, indent=0, separators=(",", ":"))
outfile.write(sjson.dump(spec_dict))
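# Hedged sketch of the line-length workaround above (toy payload, not a real spec
# dict): with indent=0 the serializer still breaks after every item, so no single
# line can grow past GPG's clear-sign limit, while the custom separators avoid the
# extra spaces of the default pretty-printer.
import json
print(json.dumps({"a": 1, "b": [1, 2]}, indent=0, separators=(",", ":")))
# {
# "a":1,
# "b":[
# 1,
# 2
# ]
# }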
# sign the tarball and spec file with gpg
if not unsigned:
@@ -1367,15 +1292,18 @@ def _build_tarball_in_stage_dir(
tty.debug('Buildcache for "{0}" written to \n {1}'.format(spec, remote_spackfile_path))
# push the key to the build cache's _pgp directory so it can be
# imported
if not unsigned:
push_keys(out_url, keys=[key], regenerate_index=regenerate_index, tmpdir=stage_dir)
try:
# push the key to the build cache's _pgp directory so it can be
# imported
if not unsigned:
push_keys(out_url, keys=[key], regenerate_index=regenerate_index, tmpdir=tmpdir)
# create an index.json for the build_cache directory so specs can be
# found
if regenerate_index:
generate_package_index(url_util.join(out_url, os.path.relpath(cache_prefix, stage_dir)))
# create an index.json for the build_cache directory so specs can be
# found
if regenerate_index:
generate_package_index(url_util.join(out_url, os.path.relpath(cache_prefix, tmpdir)))
finally:
shutil.rmtree(tmpdir)
return None
@@ -1608,12 +1536,13 @@ def download_tarball(spec, unsigned=False, mirrors_for_spec=None):
return None
def make_package_relative(workdir, spec, buildinfo, allow_root):
def make_package_relative(workdir, spec, allow_root):
"""
Change paths in binaries to relative paths. Change absolute symlinks
to relative symlinks.
"""
prefix = spec.prefix
buildinfo = read_buildinfo_file(workdir)
old_layout_root = buildinfo["buildpath"]
orig_path_names = list()
cur_path_names = list()
@@ -1637,8 +1566,9 @@ def make_package_relative(workdir, spec, buildinfo, allow_root):
relocate.make_link_relative(cur_path_names, orig_path_names)
def ensure_package_relocatable(buildinfo, binaries_dir):
def ensure_package_relocatable(workdir, binaries_dir):
"""Check if package binaries are relocatable."""
buildinfo = read_buildinfo_file(workdir)
binaries = [os.path.join(binaries_dir, f) for f in buildinfo["relocate_binaries"]]
relocate.ensure_binaries_are_relocatable(binaries)
@@ -1799,15 +1729,7 @@ def is_backup_file(file):
relocate.relocate_text(text_names, prefix_to_prefix_text)
# relocate the install prefixes in binary files including dependencies
changed_files = relocate.relocate_text_bin(files_to_relocate, prefix_to_prefix_bin)
# Add ad-hoc signatures to patched macho files when on macOS.
if "macho" in platform.binary_formats and sys.platform == "darwin":
codesign = which("codesign")
if not codesign:
return
for binary in changed_files:
codesign("-fs-", binary)
relocate.relocate_text_bin(files_to_relocate, prefix_to_prefix_bin)
# If we are installing back to the same location
# relocate the sbang location if the spack directory changed
@@ -2026,7 +1948,7 @@ def install_root_node(spec, allow_root, unsigned=False, force=False, sha256=None
with spack.util.path.filter_padding():
tty.msg('Installing "{0}" from a buildcache'.format(spec.format()))
extract_tarball(spec, download_result, allow_root, unsigned, force)
spack.hooks.post_install(spec, False)
spack.hooks.post_install(spec)
spack.store.db.add(spec, spack.store.layout)
View File
@@ -9,7 +9,6 @@
import sys
import sysconfig
import warnings
from typing import Dict, Optional, Sequence, Union
import archspec.cpu
@@ -22,10 +21,8 @@
from .config import spec_for_current_python
QueryInfo = Dict[str, "spack.spec.Spec"]
def _python_import(module: str) -> bool:
def _python_import(module):
try:
__import__(module)
except ImportError:
@@ -33,9 +30,7 @@ def _python_import(module: str) -> bool:
return True
def _try_import_from_store(
module: str, query_spec: Union[str, "spack.spec.Spec"], query_info: Optional[QueryInfo] = None
) -> bool:
def _try_import_from_store(module, query_spec, query_info=None):
"""Return True if the module can be imported from an already
installed spec, False otherwise.
@@ -57,7 +52,7 @@ def _try_import_from_store(
module_paths = [
os.path.join(candidate_spec.prefix, pkg.purelib),
os.path.join(candidate_spec.prefix, pkg.platlib),
]
] # type: list[str]
path_before = list(sys.path)
# NOTE: try module_paths first and last, last allows an existing version in path
@@ -94,7 +89,7 @@ def _try_import_from_store(
return False
def _fix_ext_suffix(candidate_spec: "spack.spec.Spec"):
def _fix_ext_suffix(candidate_spec):
"""Fix the external suffixes of Python extensions on the fly for
platforms that may need it
@@ -162,11 +157,7 @@ def _fix_ext_suffix(candidate_spec: "spack.spec.Spec"):
os.symlink(abs_path, link_name)
def _executables_in_store(
executables: Sequence[str],
query_spec: Union["spack.spec.Spec", str],
query_info: Optional[QueryInfo] = None,
) -> bool:
def _executables_in_store(executables, query_spec, query_info=None):
"""Return True if at least one of the executables can be retrieved from
a spec in store, False otherwise.
@@ -202,7 +193,7 @@ def _executables_in_store(
return False
def _root_spec(spec_str: str) -> str:
def _root_spec(spec_str):
"""Add a proper compiler and target to a spec used during bootstrapping.
Args:
View File
@@ -7,7 +7,6 @@
import contextlib
import os.path
import sys
from typing import Any, Dict, Generator, MutableSequence, Sequence
from llnl.util import tty
@@ -25,12 +24,12 @@
_REF_COUNT = 0
def is_bootstrapping() -> bool:
def is_bootstrapping():
"""Return True if we are in a bootstrapping context, False otherwise."""
return _REF_COUNT > 0
def spec_for_current_python() -> str:
def spec_for_current_python():
"""For bootstrapping purposes we are just interested in the Python
minor version (all patches are ABI compatible with the same minor).
@@ -42,14 +41,14 @@ def spec_for_current_python() -> str:
return f"python@{version_str}"
def root_path() -> str:
def root_path():
"""Root of all the bootstrap related folders"""
return spack.util.path.canonicalize_path(
spack.config.get("bootstrap:root", spack.paths.default_user_bootstrap_path)
)
def store_path() -> str:
def store_path():
"""Path to the store used for bootstrapped software"""
enabled = spack.config.get("bootstrap:enable", True)
if not enabled:
@@ -60,7 +59,7 @@ def store_path() -> str:
@contextlib.contextmanager
def spack_python_interpreter() -> Generator:
def spack_python_interpreter():
"""Override the current configuration to set the interpreter under
which Spack is currently running as the only Python external spec
available.
@@ -77,18 +76,18 @@ def spack_python_interpreter() -> Generator:
yield
def _store_path() -> str:
def _store_path():
bootstrap_root_path = root_path()
return spack.util.path.canonicalize_path(os.path.join(bootstrap_root_path, "store"))
def _config_path() -> str:
def _config_path():
bootstrap_root_path = root_path()
return spack.util.path.canonicalize_path(os.path.join(bootstrap_root_path, "config"))
@contextlib.contextmanager
def ensure_bootstrap_configuration() -> Generator:
def ensure_bootstrap_configuration():
"""Swap the current configuration for the one used to bootstrap Spack.
The context manager is reference counted to ensure we don't swap multiple
@@ -108,7 +107,7 @@ def ensure_bootstrap_configuration() -> Generator:
_REF_COUNT -= 1
def _read_and_sanitize_configuration() -> Dict[str, Any]:
def _read_and_sanitize_configuration():
"""Read the user configuration that needs to be reused for bootstrapping
and remove the entries that should not be copied over.
"""
@@ -121,11 +120,9 @@ def _read_and_sanitize_configuration() -> Dict[str, Any]:
return user_configuration
def _bootstrap_config_scopes() -> Sequence["spack.config.ConfigScope"]:
def _bootstrap_config_scopes():
tty.debug("[BOOTSTRAP CONFIG SCOPE] name=_builtin")
config_scopes: MutableSequence["spack.config.ConfigScope"] = [
spack.config.InternalConfigScope("_builtin", spack.config.config_defaults)
]
config_scopes = [spack.config.InternalConfigScope("_builtin", spack.config.config_defaults)]
configuration_paths = (spack.config.configuration_defaults_path, ("bootstrap", _config_path()))
for name, path in configuration_paths:
platform = spack.platforms.host().name
@@ -140,7 +137,7 @@ def _bootstrap_config_scopes() -> Sequence["spack.config.ConfigScope"]:
return config_scopes
def _add_compilers_if_missing() -> None:
def _add_compilers_if_missing():
arch = spack.spec.ArchSpec.frontend_arch()
if not spack.compilers.compilers_for_arch(arch):
new_compilers = spack.compilers.find_new_compilers()
@@ -149,7 +146,7 @@ def _add_compilers_if_missing() -> None:
@contextlib.contextmanager
def _ensure_bootstrap_configuration() -> Generator:
def _ensure_bootstrap_configuration():
bootstrap_store_path = store_path()
user_configuration = _read_and_sanitize_configuration()
with spack.environment.no_active_environment():
View File
@@ -29,7 +29,7 @@
import os.path
import sys
import uuid
from typing import Any, Callable, Dict, List, Optional, Tuple
from typing import Callable, List, Optional
from llnl.util import tty
from llnl.util.lang import GroupedExceptionHandler
@@ -66,9 +66,6 @@
_bootstrap_methods = {}
ConfigDictionary = Dict[str, Any]
def bootstrapper(bootstrapper_type: str):
"""Decorator to register classes implementing bootstrapping
methods.
@@ -89,7 +86,7 @@ class Bootstrapper:
config_scope_name = ""
def __init__(self, conf: ConfigDictionary) -> None:
def __init__(self, conf):
self.conf = conf
self.name = conf["name"]
self.metadata_dir = spack.util.path.canonicalize_path(conf["metadata"])
@@ -103,7 +100,7 @@ def __init__(self, conf: ConfigDictionary) -> None:
self.url = url
@property
def mirror_scope(self) -> spack.config.InternalConfigScope:
def mirror_scope(self):
"""Mirror scope to be pushed onto the bootstrapping configuration when using
this bootstrapper.
"""
@@ -124,7 +121,7 @@ def try_import(self, module: str, abstract_spec_str: str) -> bool:
"""
return False
def try_search_path(self, executables: Tuple[str], abstract_spec_str: str) -> bool:
def try_search_path(self, executables: List[str], abstract_spec_str: str) -> bool:
"""Try to search some executables in the prefix of specs satisfying the abstract
spec passed as argument.
@@ -142,15 +139,13 @@ def try_search_path(self, executables: Tuple[str], abstract_spec_str: str) -> bo
class BuildcacheBootstrapper(Bootstrapper):
"""Install the software needed during bootstrapping from a buildcache."""
def __init__(self, conf) -> None:
def __init__(self, conf):
super().__init__(conf)
self.last_search: Optional[ConfigDictionary] = None
self.last_search = None
self.config_scope_name = f"bootstrap_buildcache-{uuid.uuid4()}"
@staticmethod
def _spec_and_platform(
abstract_spec_str: str,
) -> Tuple[spack.spec.Spec, spack.platforms.Platform]:
def _spec_and_platform(abstract_spec_str):
"""Return the spec object and platform we need to use when
querying the buildcache.
@@ -163,7 +158,7 @@ def _spec_and_platform(
bincache_platform = spack.platforms.real_host()
return abstract_spec, bincache_platform
def _read_metadata(self, package_name: str) -> Any:
def _read_metadata(self, package_name):
"""Return metadata about the given package."""
json_filename = f"{package_name}.json"
json_dir = self.metadata_dir
@@ -172,13 +167,7 @@ def _read_metadata(self, package_name: str) -> Any:
data = json.load(stream)
return data
def _install_by_hash(
self,
pkg_hash: str,
pkg_sha256: str,
index: List[spack.spec.Spec],
bincache_platform: spack.platforms.Platform,
) -> None:
def _install_by_hash(self, pkg_hash, pkg_sha256, index, bincache_platform):
index_spec = next(x for x in index if x.dag_hash() == pkg_hash)
# Reconstruct the compiler that we need to use for bootstrapping
compiler_entry = {
@@ -203,13 +192,7 @@ def _install_by_hash(
match, allow_root=True, unsigned=True, force=True, sha256=pkg_sha256
)
def _install_and_test(
self,
abstract_spec: spack.spec.Spec,
bincache_platform: spack.platforms.Platform,
bincache_data,
test_fn,
) -> bool:
def _install_and_test(self, abstract_spec, bincache_platform, bincache_data, test_fn):
# Ensure we see only the buildcache being used to bootstrap
with spack.config.override(self.mirror_scope):
# This index is currently needed to get the compiler used to build some
@@ -225,7 +208,7 @@ def _install_and_test(
# This will be None for things that don't depend on python
python_spec = item.get("python", None)
# Skip specs which are not compatible
if not abstract_spec.intersects(candidate_spec):
if not abstract_spec.satisfies(candidate_spec):
continue
if python_spec is not None and python_spec not in abstract_spec:
@@ -234,14 +217,13 @@ def _install_and_test(
for _, pkg_hash, pkg_sha256 in item["binaries"]:
self._install_by_hash(pkg_hash, pkg_sha256, index, bincache_platform)
info: ConfigDictionary = {}
info = {}
if test_fn(query_spec=abstract_spec, query_info=info):
self.last_search = info
return True
return False
def try_import(self, module: str, abstract_spec_str: str) -> bool:
info: ConfigDictionary
def try_import(self, module, abstract_spec_str):
test_fn, info = functools.partial(_try_import_from_store, module), {}
if test_fn(query_spec=abstract_spec_str, query_info=info):
return True
@@ -253,8 +235,7 @@ def try_import(self, module: str, abstract_spec_str: str) -> bool:
data = self._read_metadata(module)
return self._install_and_test(abstract_spec, bincache_platform, data, test_fn)
def try_search_path(self, executables: Tuple[str], abstract_spec_str: str) -> bool:
info: ConfigDictionary
def try_search_path(self, executables, abstract_spec_str):
test_fn, info = functools.partial(_executables_in_store, executables), {}
if test_fn(query_spec=abstract_spec_str, query_info=info):
self.last_search = info
@@ -270,13 +251,13 @@ def try_search_path(self, executables: Tuple[str], abstract_spec_str: str) -> bo
class SourceBootstrapper(Bootstrapper):
"""Install the software needed during bootstrapping from sources."""
def __init__(self, conf) -> None:
def __init__(self, conf):
super().__init__(conf)
self.last_search: Optional[ConfigDictionary] = None
self.last_search = None
self.config_scope_name = f"bootstrap_source-{uuid.uuid4()}"
def try_import(self, module: str, abstract_spec_str: str) -> bool:
info: ConfigDictionary = {}
def try_import(self, module, abstract_spec_str):
info = {}
if _try_import_from_store(module, abstract_spec_str, query_info=info):
self.last_search = info
return True
@@ -312,8 +293,8 @@ def try_import(self, module: str, abstract_spec_str: str) -> bool:
return True
return False
def try_search_path(self, executables: Tuple[str], abstract_spec_str: str) -> bool:
info: ConfigDictionary = {}
def try_search_path(self, executables, abstract_spec_str):
info = {}
if _executables_in_store(executables, abstract_spec_str, query_info=info):
self.last_search = info
return True
@@ -342,13 +323,13 @@ def try_search_path(self, executables: Tuple[str], abstract_spec_str: str) -> bo
return False
def create_bootstrapper(conf: ConfigDictionary):
def create_bootstrapper(conf):
"""Return a bootstrap object built according to the configuration argument"""
btype = conf["type"]
return _bootstrap_methods[btype](conf)
def source_is_enabled_or_raise(conf: ConfigDictionary):
def source_is_enabled_or_raise(conf):
"""Raise ValueError if the source is not enabled for bootstrapping"""
trusted, name = spack.config.get("bootstrap:trusted"), conf["name"]
if not trusted.get(name, False):
@@ -473,7 +454,7 @@ def ensure_executables_in_path_or_raise(
raise RuntimeError(msg)
def _add_externals_if_missing() -> None:
def _add_externals_if_missing():
search_list = [
# clingo
spack.repo.path.get_pkg_class("cmake"),
@@ -487,41 +468,41 @@ def _add_externals_if_missing() -> None:
spack.detection.update_configuration(detected_packages, scope="bootstrap")
def clingo_root_spec() -> str:
def clingo_root_spec():
"""Return the root spec used to bootstrap clingo"""
return _root_spec("clingo-bootstrap@spack+python")
def ensure_clingo_importable_or_raise() -> None:
def ensure_clingo_importable_or_raise():
"""Ensure that the clingo module is available for import."""
ensure_module_importable_or_raise(module="clingo", abstract_spec=clingo_root_spec())
def gnupg_root_spec() -> str:
def gnupg_root_spec():
"""Return the root spec used to bootstrap GnuPG"""
return _root_spec("gnupg@2.3:")
def ensure_gpg_in_path_or_raise() -> None:
def ensure_gpg_in_path_or_raise():
"""Ensure gpg or gpg2 are in the PATH or raise."""
return ensure_executables_in_path_or_raise(
executables=["gpg2", "gpg"], abstract_spec=gnupg_root_spec()
)
def patchelf_root_spec() -> str:
def patchelf_root_spec():
"""Return the root spec used to bootstrap patchelf"""
# 0.13.1 is the last version not to require C++17.
return _root_spec("patchelf@0.13.1:")
def verify_patchelf(patchelf: "spack.util.executable.Executable") -> bool:
def verify_patchelf(patchelf):
"""Older patchelf versions can produce broken binaries, so we
verify the version here.
Arguments:
patchelf: patchelf executable
patchelf (spack.util.executable.Executable): patchelf executable
"""
out = patchelf("--version", output=str, error=os.devnull, fail_on_error=False).strip()
if patchelf.returncode != 0:
@@ -536,7 +517,7 @@ def verify_patchelf(patchelf: "spack.util.executable.Executable") -> bool:
return version >= spack.version.Version("0.13.1")
def ensure_patchelf_in_path_or_raise() -> None:
def ensure_patchelf_in_path_or_raise():
"""Ensure patchelf is in the PATH or raise."""
# The old concretizer is not smart and we're doing its job: if the latest patchelf
# does not concretize because the compiler doesn't support C++17, we try to
@@ -553,7 +534,7 @@ def ensure_patchelf_in_path_or_raise() -> None:
)
def ensure_core_dependencies() -> None:
def ensure_core_dependencies():
"""Ensure the presence of all the core dependencies."""
if sys.platform.lower() == "linux":
ensure_patchelf_in_path_or_raise()
@@ -562,7 +543,7 @@ def ensure_core_dependencies() -> None:
ensure_clingo_importable_or_raise()
def all_core_root_specs() -> List[str]:
def all_core_root_specs():
"""Return a list of all the core root specs that may be used to bootstrap Spack"""
return [clingo_root_spec(), gnupg_root_spec(), patchelf_root_spec()]
View File
@@ -9,7 +9,6 @@
import pathlib
import sys
import warnings
from typing import List
import archspec.cpu
@@ -19,7 +18,6 @@
import spack.environment
import spack.tengine
import spack.util.executable
from spack.environment import depfile
from ._common import _root_spec
from .config import root_path, spec_for_current_python, store_path
@@ -29,7 +27,7 @@ class BootstrapEnvironment(spack.environment.Environment):
"""Environment to install dependencies of Spack for a given interpreter and architecture"""
@classmethod
def spack_dev_requirements(cls) -> List[str]:
def spack_dev_requirements(cls):
"""Spack development requirements"""
return [
isort_root_spec(),
@@ -40,7 +38,7 @@ def spack_dev_requirements(cls) -> List[str]:
]
@classmethod
def environment_root(cls) -> pathlib.Path:
def environment_root(cls):
"""Environment root directory"""
bootstrap_root_path = root_path()
python_part = spec_for_current_python().replace("@", "")
@@ -54,12 +52,12 @@ def environment_root(cls) -> pathlib.Path:
)
@classmethod
def view_root(cls) -> pathlib.Path:
def view_root(cls):
"""Location of the view"""
return cls.environment_root().joinpath("view")
@classmethod
def pythonpaths(cls) -> List[str]:
def pythonpaths(cls):
"""Paths to be added to sys.path or PYTHONPATH"""
python_dir_part = f"python{'.'.join(str(x) for x in sys.version_info[:2])}"
glob_expr = str(cls.view_root().joinpath("**", python_dir_part, "**"))
@@ -70,21 +68,21 @@ def pythonpaths(cls) -> List[str]:
return result
@classmethod
def bin_dirs(cls) -> List[pathlib.Path]:
def bin_dirs(cls):
"""Paths to be added to PATH"""
return [cls.view_root().joinpath("bin")]
@classmethod
def spack_yaml(cls) -> pathlib.Path:
def spack_yaml(cls):
"""Environment spack.yaml file"""
return cls.environment_root().joinpath("spack.yaml")
def __init__(self) -> None:
def __init__(self):
if not self.spack_yaml().exists():
self._write_spack_yaml_file()
super().__init__(self.environment_root())
def update_installations(self) -> None:
def update_installations(self):
"""Update the installations of this environment.
The update is done using a depfile on Linux and macOS, and using the ``install_all``
@@ -105,7 +103,7 @@ def update_installations(self) -> None:
self._install_with_depfile()
self.write(regenerate=True)
def update_syspath_and_environ(self) -> None:
def update_syspath_and_environ(self):
"""Update ``sys.path`` and the PATH, PYTHONPATH environment variables to point to
the environment view.
"""
@@ -121,13 +119,16 @@ def update_syspath_and_environ(self) -> None:
+ [str(x) for x in self.pythonpaths()]
)
def _install_with_depfile(self) -> None:
model = depfile.MakefileModel.from_env(self)
template = spack.tengine.make_environment().get_template(
os.path.join("depfile", "Makefile")
def _install_with_depfile(self):
spackcmd = spack.util.executable.which("spack")
spackcmd(
"-e",
str(self.environment_root()),
"env",
"depfile",
"-o",
str(self.environment_root().joinpath("Makefile")),
)
makefile = self.environment_root() / "Makefile"
makefile.write_text(template.render(model.to_dict()))
make = spack.util.executable.which("make")
kwargs = {}
if not tty.is_debug():
@@ -140,7 +141,7 @@ def _install_with_depfile(self) -> None:
**kwargs,
)
def _write_spack_yaml_file(self) -> None:
def _write_spack_yaml_file(self):
tty.msg(
"[BOOTSTRAPPING] Spack has missing dependencies, creating a bootstrapping environment"
)
@@ -158,32 +159,32 @@ def _write_spack_yaml_file(self) -> None:
self.spack_yaml().write_text(template.render(context), encoding="utf-8")
def isort_root_spec() -> str:
def isort_root_spec():
"""Return the root spec used to bootstrap isort"""
return _root_spec("py-isort@4.3.5:")
def mypy_root_spec() -> str:
def mypy_root_spec():
"""Return the root spec used to bootstrap mypy"""
return _root_spec("py-mypy@0.900:")
def black_root_spec() -> str:
def black_root_spec():
"""Return the root spec used to bootstrap black"""
return _root_spec("py-black@:23.1.0")
def flake8_root_spec() -> str:
def flake8_root_spec():
"""Return the root spec used to bootstrap flake8"""
return _root_spec("py-flake8")
def pytest_root_spec() -> str:
def pytest_root_spec():
"""Return the root spec used to bootstrap flake8"""
return _root_spec("py-pytest")
def ensure_environment_dependencies() -> None:
def ensure_environment_dependencies():
"""Ensure Spack dependencies from the bootstrap environment are installed and ready to use"""
with BootstrapEnvironment() as env:
env.update_installations()
View File
@@ -4,7 +4,6 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Query the status of bootstrapping on this machine"""
import platform
from typing import List, Optional, Sequence, Tuple, Union
import spack.util.executable
@@ -20,12 +19,8 @@
pytest_root_spec,
)
ExecutablesType = Union[str, Sequence[str]]
RequiredResponseType = Tuple[bool, Optional[str]]
SpecLike = Union["spack.spec.Spec", str]
def _required_system_executable(exes: ExecutablesType, msg: str) -> RequiredResponseType:
def _required_system_executable(exes, msg):
"""Search for an executable is the system path only."""
if isinstance(exes, str):
exes = (exes,)
@@ -34,9 +29,7 @@ def _required_system_executable(exes: ExecutablesType, msg: str) -> RequiredResp
return False, msg
def _required_executable(
exes: ExecutablesType, query_spec: SpecLike, msg: str
) -> RequiredResponseType:
def _required_executable(exes, query_spec, msg):
"""Search for an executable in the system path or in the bootstrap store."""
if isinstance(exes, str):
exes = (exes,)
@@ -45,7 +38,7 @@ def _required_executable(
return False, msg
def _required_python_module(module: str, query_spec: SpecLike, msg: str) -> RequiredResponseType:
def _required_python_module(module, query_spec, msg):
"""Check if a Python module is available in the current interpreter or
if it can be loaded from the bootstrap store
"""
@@ -54,7 +47,7 @@ def _required_python_module(module: str, query_spec: SpecLike, msg: str) -> Requ
return False, msg
def _missing(name: str, purpose: str, system_only: bool = True) -> str:
def _missing(name, purpose, system_only=True):
"""Message to be printed if an executable is not found"""
msg = '[{2}] MISSING "{0}": {1}'
if not system_only:
@@ -62,7 +55,7 @@ def _missing(name: str, purpose: str, system_only: bool = True) -> str:
return msg.format(name, purpose, "@*y{{-}}")
def _core_requirements() -> List[RequiredResponseType]:
def _core_requirements():
_core_system_exes = {
"make": _missing("make", "required to build software from sources"),
"patch": _missing("patch", "required to patch source code before building"),
@@ -87,7 +80,7 @@ def _core_requirements() -> List[RequiredResponseType]:
return result
def _buildcache_requirements() -> List[RequiredResponseType]:
def _buildcache_requirements():
_buildcache_exes = {
"file": _missing("file", "required to analyze files for buildcaches"),
("gpg2", "gpg"): _missing("gpg2", "required to sign/verify buildcaches", False),
@@ -110,7 +103,7 @@ def _buildcache_requirements() -> List[RequiredResponseType]:
return result
def _optional_requirements() -> List[RequiredResponseType]:
def _optional_requirements():
_optional_exes = {
"zstd": _missing("zstd", "required to compress/decompress code archives"),
"svn": _missing("svn", "required to manage subversion repositories"),
@@ -121,7 +114,7 @@ def _optional_requirements() -> List[RequiredResponseType]:
return result
def _development_requirements() -> List[RequiredResponseType]:
def _development_requirements():
# Ensure we trigger environment modifications if we have an environment
if BootstrapEnvironment.spack_yaml().exists():
with BootstrapEnvironment() as env:
@@ -146,7 +139,7 @@ def _development_requirements() -> List[RequiredResponseType]:
]
def status_message(section) -> Tuple[str, bool]:
def status_message(section):
"""Return a status message to be printed to screen that refers to the
section passed as argument and a bool which is True if there are missing
dependencies.
@@ -168,7 +161,7 @@ def status_message(section) -> Tuple[str, bool]:
with ensure_bootstrap_configuration():
missing_software = False
for found, err_msg in required_software():
if not found and err_msg:
if not found:
missing_software = True
msg += "\n " + err_msg
msg += "\n"
View File
@@ -69,13 +69,13 @@
from spack.installer import InstallError
from spack.util.cpus import cpus_available
from spack.util.environment import (
SYSTEM_DIRS,
EnvironmentModifications,
env_flag,
filter_system_paths,
get_path,
inspect_path,
is_system_path,
system_dirs,
validate,
)
from spack.util.executable import Executable
@@ -397,7 +397,7 @@ def set_compiler_environment_variables(pkg, env):
env.set("SPACK_COMPILER_SPEC", str(spec.compiler))
env.set("SPACK_SYSTEM_DIRS", ":".join(SYSTEM_DIRS))
env.set("SPACK_SYSTEM_DIRS", ":".join(system_dirs))
compiler.setup_custom_environment(pkg, env)
@@ -485,13 +485,7 @@ def update_compiler_args_for_dep(dep):
query = pkg.spec[dep.name]
dep_link_dirs = list()
try:
# In some circumstances (particularly for externals) finding
# libraries for packages can be time-consuming, so indicate that
# we are performing this operation (and also report when it
# finishes).
tty.debug("Collecting libraries for {0}".format(dep.name))
dep_link_dirs.extend(query.libs.directories)
tty.debug("Libraries for {0} have been collected.".format(dep.name))
except NoLibrariesError:
tty.debug("No libraries found for {0}".format(dep.name))
@@ -778,9 +772,7 @@ def setup_package(pkg, dirty, context="build"):
set_compiler_environment_variables(pkg, env_mods)
set_wrapper_variables(pkg, env_mods)
tty.debug("setup_package: grabbing modifications from dependencies")
env_mods.extend(modifications_from_dependencies(pkg.spec, context, custom_mods_only=False))
tty.debug("setup_package: collected all modifications from dependencies")
# architecture specific setup
platform = spack.platforms.by_name(pkg.spec.architecture.platform)
@@ -788,7 +780,6 @@ def setup_package(pkg, dirty, context="build"):
platform.setup_platform_environment(pkg, env_mods)
if context == "build":
tty.debug("setup_package: setup build environment for root")
builder = spack.builder.create(pkg)
builder.setup_build_environment(env_mods)
@@ -799,7 +790,6 @@ def setup_package(pkg, dirty, context="build"):
" includes and omit it when invoked with '--cflags'."
)
elif context == "test":
tty.debug("setup_package: setup test environment for root")
env_mods.extend(
inspect_path(
pkg.spec.prefix,
@@ -816,7 +806,6 @@ def setup_package(pkg, dirty, context="build"):
# Load modules on an already clean environment, just before applying Spack's
# own environment modifications. This ensures Spack controls CC/CXX/... variables.
if need_compiler:
tty.debug("setup_package: loading compiler modules")
for mod in pkg.compiler.modules:
load_module(mod)
@@ -954,7 +943,6 @@ def default_modifications_for_dep(dep):
_make_runnable(dep, env)
def add_modifications_for_dep(dep):
tty.debug("Adding env modifications for {0}".format(dep.name))
# Some callers of this function only want the custom modifications.
# For callers that want both custom and default modifications, we want
# to perform the default modifications here (this groups custom
@@ -980,7 +968,6 @@ def add_modifications_for_dep(dep):
builder.setup_dependent_build_environment(env, spec)
else:
dpkg.setup_dependent_run_environment(env, spec)
tty.debug("Added env modifications for {0}".format(dep.name))
# Note that we want to perform environment modifications in a fixed order.
# The Spec.traverse method provides this: i.e. in addition to
View File
@@ -8,7 +8,7 @@
import platform
import re
import sys
from typing import List, Optional, Tuple
from typing import List, Tuple
import llnl.util.filesystem as fs
@@ -16,7 +16,7 @@
import spack.builder
import spack.package_base
import spack.util.path
from spack.directives import build_system, conflicts, depends_on, variant
from spack.directives import build_system, depends_on, variant
from spack.multimethod import when
from ._checks import BaseBuilder, execute_build_time_tests
@@ -35,43 +35,6 @@ def _extract_primary_generator(generator):
return primary_generator
def generator(*names: str, default: Optional[str] = None):
"""The build system generator to use.
See ``cmake --help`` for a list of valid generators.
Currently, "Unix Makefiles" and "Ninja" are the only generators
that Spack supports. Defaults to "Unix Makefiles".
See https://cmake.org/cmake/help/latest/manual/cmake-generators.7.html
for more information.
Args:
names: allowed generators for this package
default: default generator
"""
allowed_values = ("make", "ninja")
if any(x not in allowed_values for x in names):
msg = "only 'make' and 'ninja' are allowed for CMake's 'generator' directive"
raise ValueError(msg)
default = default or names[0]
not_used = [x for x in allowed_values if x not in names]
def _values(x):
return x in allowed_values
_values.__doc__ = f"{','.join(names)}"
variant(
"generator",
default=default,
values=_values,
description="the build system generator to use",
)
for x in not_used:
conflicts(f"generator={x}")
class CMakePackage(spack.package_base.PackageBase):
"""Specialized class for packages built using CMake
@@ -104,15 +67,8 @@ class CMakePackage(spack.package_base.PackageBase):
when="^cmake@3.9:",
description="CMake interprocedural optimization",
)
if sys.platform == "win32":
generator("ninja")
else:
generator("ninja", "make", default="make")
depends_on("cmake", type="build")
depends_on("gmake", type="build", when="generator=make")
depends_on("ninja", type="build", when="generator=ninja")
depends_on("ninja", type="build", when="platform=windows")
def flags_to_build_system_args(self, flags):
"""Return a list of all command line arguments to pass the specified
@@ -182,6 +138,18 @@ class CMakeBuilder(BaseBuilder):
| :py:meth:`~.CMakeBuilder.build_directory` | Directory where to |
| | build the package |
+-----------------------------------------------+--------------------+
The generator used by CMake can be specified by providing the ``generator``
attribute. Per
https://cmake.org/cmake/help/git-master/manual/cmake-generators.7.html,
the format is: [<secondary-generator> - ]<primary_generator>.
The full list of primary and secondary generators supported by CMake may be found
in the documentation for the version of CMake used; however, at this time Spack
supports only the primary generators "Unix Makefiles" and "Ninja." Spack's CMake
support is agnostic with respect to secondary generators. Spack will generate a
runtime error if the generator string does not follow the prescribed format, or if
the primary generator is not supported.
"""
#: Phases of a CMake package
@@ -192,6 +160,7 @@ class CMakeBuilder(BaseBuilder):
#: Names associated with package attributes in the old build-system format
legacy_attributes: Tuple[str, ...] = (
"generator",
"build_targets",
"install_targets",
"build_time_test_callbacks",
@@ -202,6 +171,16 @@ class CMakeBuilder(BaseBuilder):
"build_directory",
)
#: The build system generator to use.
#:
#: See ``cmake --help`` for a list of valid generators.
#: Currently, "Unix Makefiles" and "Ninja" are the only generators
#: that Spack supports. Defaults to "Unix Makefiles".
#:
#: See https://cmake.org/cmake/help/latest/manual/cmake-generators.7.html
#: for more information.
generator = "Ninja" if sys.platform == "win32" else "Unix Makefiles"
#: Targets to be used during the build phase
build_targets: List[str] = []
#: Targets to be used during the install phase
@@ -223,20 +202,12 @@ def root_cmakelists_dir(self):
"""
return self.pkg.stage.source_path
@property
def generator(self):
if self.spec.satisfies("generator=make"):
return "Unix Makefiles"
if self.spec.satisfies("generator=ninja"):
return "Ninja"
msg = f'{self.spec.format()} has an unsupported value for the "generator" variant'
raise ValueError(msg)
@property
def std_cmake_args(self):
"""Standard cmake arguments provided as a property for
convenience of package writers
"""
# standard CMake arguments
std_cmake_args = CMakeBuilder.std_args(self.pkg, generator=self.generator)
std_cmake_args += getattr(self.pkg, "cmake_flag_args", [])
return std_cmake_args
View File
@@ -244,8 +244,7 @@ def __new__(mcs, name, bases, attr_dict):
callbacks_from_base = getattr(base, temporary_stage.attribute_name, None)
if callbacks_from_base:
break
else:
callbacks_from_base = []
callbacks_from_base = callbacks_from_base or []
# Set the callbacks in this class and flush the temporary stage
attr_dict[temporary_stage.attribute_name] = staged_callbacks[:] + callbacks_from_base
View File
@@ -5,7 +5,6 @@
"""Caches used by Spack to store data"""
import os
from typing import Union
import llnl.util.lang
from llnl.util.filesystem import mkdirp
@@ -35,9 +34,7 @@ def _misc_cache():
#: Spack's cache for small data
misc_cache: Union[
spack.util.file_cache.FileCache, llnl.util.lang.Singleton
] = llnl.util.lang.Singleton(_misc_cache)
misc_cache = llnl.util.lang.Singleton(_misc_cache)
def fetch_cache_location():
@@ -91,6 +88,4 @@ def symlink(self, mirror_ref):
#: Spack's local cache for downloaded source archives
fetch_cache: Union[
spack.fetch_strategy.FsCache, llnl.util.lang.Singleton
] = llnl.util.lang.Singleton(_fetch_cache)
fetch_cache = llnl.util.lang.Singleton(_fetch_cache)
View File
@@ -38,7 +38,6 @@
import spack.util.spack_yaml as syaml
import spack.util.url as url_util
import spack.util.web as web_util
from spack import traverse
from spack.error import SpackError
from spack.reporters import CDash, CDashConfiguration
from spack.reporters.cdash import build_stamp as cdash_build_stamp
@@ -362,7 +361,60 @@ def append_dep(s, d):
def _spec_matches(spec, match_string):
return spec.intersects(match_string)
return spec.satisfies(match_string)
def _remove_attributes(src_dict, dest_dict):
if "tags" in src_dict and "tags" in dest_dict:
# For 'tags', we remove any tags that are listed for removal
for tag in src_dict["tags"]:
while tag in dest_dict["tags"]:
dest_dict["tags"].remove(tag)
def _copy_attributes(attrs_list, src_dict, dest_dict):
for runner_attr in attrs_list:
if runner_attr in src_dict:
if runner_attr in dest_dict and runner_attr == "tags":
# For 'tags', we combine the lists of tags, while
# avoiding duplicates
for tag in src_dict[runner_attr]:
if tag not in dest_dict[runner_attr]:
dest_dict[runner_attr].append(tag)
elif runner_attr in dest_dict and runner_attr == "variables":
# For 'variables', we merge the dictionaries. Any conflicts
# (i.e. 'runner-attributes' has the same variable key as the
# higher level) we resolve by keeping the more specific
# 'runner-attributes' version.
for src_key, src_val in src_dict[runner_attr].items():
dest_dict[runner_attr][src_key] = copy.deepcopy(src_dict[runner_attr][src_key])
else:
dest_dict[runner_attr] = copy.deepcopy(src_dict[runner_attr])
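# Minimal sketch of the merge rules above (hypothetical values): tags are unioned
# without duplicates, variables are merged per key with the more specific source
# winning, and any other attribute is a straight deep copy.
src = {"tags": ["x86_64", "public"], "variables": {"CPUS": "8"}}
dest = {"tags": ["public"], "variables": {"CPUS": "4", "OS": "linux"}}
_copy_attributes(["tags", "variables"], src, dest)
# dest == {"tags": ["public", "x86_64"], "variables": {"CPUS": "8", "OS": "linux"}}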
def _find_matching_config(spec, gitlab_ci):
runner_attributes = {}
overridable_attrs = ["image", "tags", "variables", "before_script", "script", "after_script"]
_copy_attributes(overridable_attrs, gitlab_ci, runner_attributes)
matched = False
only_first = gitlab_ci.get("match_behavior", "first") == "first"
for ci_mapping in gitlab_ci["mappings"]:
for match_string in ci_mapping["match"]:
if _spec_matches(spec, match_string):
matched = True
if "remove-attributes" in ci_mapping:
_remove_attributes(ci_mapping["remove-attributes"], runner_attributes)
if "runner-attributes" in ci_mapping:
_copy_attributes(
overridable_attrs, ci_mapping["runner-attributes"], runner_attributes
)
break
if matched and only_first:
break
return runner_attributes if matched else None
def _format_job_needs(
@@ -438,28 +490,16 @@ def compute_affected_packages(rev1="HEAD^", rev2="HEAD"):
return spack.repo.get_all_package_diffs("ARC", rev1=rev1, rev2=rev2)
def get_spec_filter_list(env, affected_pkgs, dependent_traverse_depth=None):
def get_spec_filter_list(env, affected_pkgs):
"""Given a list of package names and an active/concretized
environment, return the set of all concrete specs from the
environment that could have been affected by changing the
list of packages.
If a ``dependent_traverse_depth`` is given, it is used to limit
upward (in the parent direction) traversal of specs of touched
packages. E.g. if 1 is provided, then only direct dependents
of touched package specs are traversed to produce specs that
could have been affected by changing the package, while if 0 is
provided, only the changed specs themselves are traversed. If ``None``
is given, upward traversal of touched package specs is done all
the way to the environment roots. Providing a negative number
results in no traversals at all, yielding an empty set.
Arguments:
env (spack.environment.Environment): Active concrete environment
affected_pkgs (List[str]): Affected package names
dependent_traverse_depth: Optional integer to limit dependent
traversal, or None to disable the limit.
Returns:
@@ -472,237 +512,17 @@ def get_spec_filter_list(env, affected_pkgs, dependent_traverse_depth=None):
tty.debug("All concrete environment specs:")
for s in all_concrete_specs:
tty.debug(" {0}/{1}".format(s.name, s.dag_hash()[:7]))
affected_pkgs = frozenset(affected_pkgs)
env_matches = [s for s in all_concrete_specs if s.name in affected_pkgs]
env_matches = [s for s in all_concrete_specs if s.name in frozenset(affected_pkgs)]
visited = set()
dag_hash = lambda s: s.dag_hash()
for depth, parent in traverse.traverse_nodes(
env_matches, direction="parents", key=dag_hash, depth=True, order="breadth"
):
if dependent_traverse_depth is not None and depth > dependent_traverse_depth:
break
affected_specs.update(parent.traverse(direction="children", visited=visited, key=dag_hash))
for match in env_matches:
for parent in match.traverse(direction="parents", key=dag_hash):
affected_specs.update(
parent.traverse(direction="children", visited=visited, key=dag_hash)
)
return affected_specs
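# Hedged usage sketch of the depth limit described above (hypothetical package
# name and environment):
#   get_spec_filter_list(env, ["zlib"], dependent_traverse_depth=1)
# only walks to direct dependents of the touched zlib specs, while
# dependent_traverse_depth=0 keeps just the zlib specs themselves and None walks
# all the way up to the environment roots.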
def _build_jobs(phases, staged_phases):
for phase in phases:
phase_name = phase["name"]
spec_labels, dependencies, stages = staged_phases[phase_name]
for stage_jobs in stages:
for spec_label in stage_jobs:
spec_record = spec_labels[spec_label]
release_spec = spec_record["spec"]
release_spec_dag_hash = release_spec.dag_hash()
yield release_spec, release_spec_dag_hash
def _noop(x):
return x
def _unpack_script(script_section, op=_noop):
script = []
for cmd in script_section:
if isinstance(cmd, list):
for subcmd in cmd:
script.append(op(subcmd))
else:
script.append(op(cmd))
return script
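# Minimal sketch of the flattening above (hypothetical commands): nested lists are
# expanded in order and `op` is applied to every command string.
flattened = _unpack_script(
    ["cd {env_dir}", ["spack env activate --without-view .", "spack ci rebuild"]],
    op=lambda cmd: cmd.replace("{env_dir}", "/tmp/concrete_env"),
)
# flattened == ["cd /tmp/concrete_env",
#               "spack env activate --without-view .",
#               "spack ci rebuild"]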
class SpackCI:
"""Spack CI object used to generate intermediate representation
used by the CI generator(s).
"""
def __init__(self, ci_config, phases, staged_phases):
"""Given the information from the ci section of the config
and the job phases setup meta data needed for generating Spack
CI IR.
"""
self.ci_config = ci_config
self.named_jobs = ["any", "build", "cleanup", "noop", "reindex", "signing"]
self.ir = {
"jobs": {},
"temporary-storage-url-prefix": self.ci_config.get(
"temporary-storage-url-prefix", None
),
"enable-artifacts-buildcache": self.ci_config.get(
"enable-artifacts-buildcache", False
),
"bootstrap": self.ci_config.get(
"bootstrap", []
), # This is deprecated and should be removed
"rebuild-index": self.ci_config.get("rebuild-index", True),
"broken-specs-url": self.ci_config.get("broken-specs-url", None),
"broken-tests-packages": self.ci_config.get("broken-tests-packages", []),
"target": self.ci_config.get("target", "gitlab"),
}
jobs = self.ir["jobs"]
for spec, dag_hash in _build_jobs(phases, staged_phases):
jobs[dag_hash] = self.__init_job(spec)
for name in self.named_jobs:
# Skip the special named jobs
if name not in ["any", "build"]:
jobs[name] = self.__init_job("")
def __init_job(self, spec):
"""Initialize job object"""
return {"spec": spec, "attributes": {}}
def __is_named(self, section):
"""Check if a pipeline-gen configuration section is for a named job,
and if so return the name, otherwise return None.
"""
for _name in self.named_jobs:
keys = ["{0}-job".format(_name), "{0}-job-remove".format(_name)]
if any([key for key in keys if key in section]):
return _name
return None
@staticmethod
def __job_name(name, suffix=""):
"""Compute the name of a named job with appropriate suffix.
Valid suffixes are '-remove', the empty string, or None.
"""
assert type(name) == str
jname = name
if suffix:
jname = "{0}-job{1}".format(name, suffix)
else:
jname = "{0}-job".format(name)
return jname
def __apply_submapping(self, dest, spec, section):
"""Apply submapping setion to the IR dict"""
matched = False
only_first = section.get("match_behavior", "first") == "first"
for match_attrs in reversed(section["submapping"]):
attrs = cfg.InternalConfigScope._process_dict_keyname_overrides(match_attrs)
for match_string in match_attrs["match"]:
if _spec_matches(spec, match_string):
matched = True
if "build-job-remove" in match_attrs:
spack.config.remove_yaml(dest, attrs["build-job-remove"])
if "build-job" in match_attrs:
spack.config.merge_yaml(dest, attrs["build-job"])
break
if matched and only_first:
break
return dest
# Generate IR from the configs
def generate_ir(self):
"""Generate the IR from the Spack CI configurations."""
jobs = self.ir["jobs"]
# Implicit job defaults
defaults = [
{
"build-job": {
"script": [
"cd {env_dir}",
"spack env activate --without-view .",
"spack ci rebuild",
]
}
},
{"noop-job": {"script": ['echo "All specs already up to date, nothing to rebuild."']}},
]
# Job overrides
overrides = [
# Reindex script
{
"reindex-job": {
"script:": [
"spack buildcache update-index --keys --mirror-url {index_target_mirror}"
]
}
},
# Cleanup script
{
"cleanup-job": {
"script:": [
"spack -d mirror destroy --mirror-url {mirror_prefix}/$CI_PIPELINE_ID"
]
}
},
# Add signing job tags
{"signing-job": {"tags": ["aws", "protected", "notary"]}},
# Remove reserved tags
{"any-job-remove": {"tags": SPACK_RESERVED_TAGS}},
]
pipeline_gen = overrides + self.ci_config.get("pipeline-gen", []) + defaults
for section in reversed(pipeline_gen):
name = self.__is_named(section)
has_submapping = "submapping" in section
section = cfg.InternalConfigScope._process_dict_keyname_overrides(section)
if name:
remove_job_name = self.__job_name(name, suffix="-remove")
merge_job_name = self.__job_name(name)
do_remove = remove_job_name in section
do_merge = merge_job_name in section
def _apply_section(dest, src):
if do_remove:
dest = spack.config.remove_yaml(dest, src[remove_job_name])
if do_merge:
dest = copy.copy(spack.config.merge_yaml(dest, src[merge_job_name]))
if name == "build":
# Apply attributes to all build jobs
for _, job in jobs.items():
if job["spec"]:
_apply_section(job["attributes"], section)
elif name == "any":
# Apply section attributes to all jobs
for _, job in jobs.items():
_apply_section(job["attributes"], section)
else:
# Create a signing job if there is script and the job hasn't
# been initialized yet
if name == "signing" and name not in jobs:
if "signing-job" in section:
if "script" not in section["signing-job"]:
continue
else:
jobs[name] = self.__init_job("")
# Apply attributes to named job
_apply_section(jobs[name]["attributes"], section)
elif has_submapping:
# Apply section jobs with specs to match
for _, job in jobs.items():
if job["spec"]:
job["attributes"] = self.__apply_submapping(
job["attributes"], job["spec"], section
)
for _, job in jobs.items():
if job["spec"]:
job["spec"] = job["spec"].name
return self.ir
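# Hedged sketch of the IR shape produced above (hash and attribute values are made
# up): build jobs are keyed by the spec's DAG hash, the special jobs by their name,
# and each entry carries the merged pipeline-gen attributes.
# spack_ci.generate_ir() -> {
#     "jobs": {
#         "abc1234...": {"spec": "zlib", "attributes": {"tags": [...], "script": [...]}},
#         "cleanup":    {"spec": "",     "attributes": {...}},
#     },
#     "rebuild-index": True,
#     "target": "gitlab",
#     ...
# }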
def generate_gitlab_ci_yaml(
env,
print_summary,
@@ -750,44 +570,16 @@ def generate_gitlab_ci_yaml(
env.concretize()
env.write()
yaml_root = ev.config_dict(env.manifest)
yaml_root = ev.config_dict(env.yaml)
# Get the joined "ci" config with all of the current scopes resolved
ci_config = cfg.get("ci")
if "gitlab-ci" not in yaml_root:
tty.die('Environment yaml does not have "gitlab-ci" section')
if not ci_config:
tty.warn("Environment does not have `ci` a configuration")
gitlabci_config = yaml_root.get("gitlab-ci")
if not gitlabci_config:
tty.die("Environment yaml does not have `gitlab-ci` config section. Cannot recover.")
gitlab_ci = yaml_root["gitlab-ci"]
tty.warn(
"The `gitlab-ci` configuration is deprecated in favor of `ci`.\n",
"To update run \n\t$ spack env update /path/to/ci/spack.yaml",
)
translate_deprecated_config(gitlabci_config)
ci_config = gitlabci_config
# Default target is gitlab...and only target is gitlab
if not ci_config.get("target", "gitlab") == "gitlab":
tty.die('Spack CI module only generates target "gitlab"')
cdash_config = cfg.get("cdash")
cdash_handler = CDashHandler(cdash_config) if "build-group" in cdash_config else None
cdash_handler = CDashHandler(yaml_root.get("cdash")) if "cdash" in yaml_root else None
build_group = cdash_handler.build_group if cdash_handler else None
dependent_depth = os.environ.get("SPACK_PRUNE_UNTOUCHED_DEPENDENT_DEPTH", None)
if dependent_depth is not None:
try:
dependent_depth = int(dependent_depth)
except (TypeError, ValueError):
tty.warn(
f"Unrecognized value ({dependent_depth}) "
"provided for SPACK_PRUNE_UNTOUCHED_DEPENDENT_DEPTH, "
"ignoring it."
)
dependent_depth = None
prune_untouched_packages = False
spack_prune_untouched = os.environ.get("SPACK_PRUNE_UNTOUCHED", None)
if spack_prune_untouched is not None and spack_prune_untouched.lower() == "true":
@@ -803,9 +595,7 @@ def generate_gitlab_ci_yaml(
tty.debug("affected pkgs:")
for p in affected_pkgs:
tty.debug(" {0}".format(p))
affected_specs = get_spec_filter_list(
env, affected_pkgs, dependent_traverse_depth=dependent_depth
)
affected_specs = get_spec_filter_list(env, affected_pkgs)
tty.debug("all affected specs:")
for s in affected_specs:
tty.debug(" {0}/{1}".format(s.name, s.dag_hash()[:7]))
@@ -847,25 +637,25 @@ def generate_gitlab_ci_yaml(
# trying to build.
broken_specs_url = ""
known_broken_specs_encountered = []
if "broken-specs-url" in ci_config:
broken_specs_url = ci_config["broken-specs-url"]
if "broken-specs-url" in gitlab_ci:
broken_specs_url = gitlab_ci["broken-specs-url"]
enable_artifacts_buildcache = False
if "enable-artifacts-buildcache" in ci_config:
enable_artifacts_buildcache = ci_config["enable-artifacts-buildcache"]
if "enable-artifacts-buildcache" in gitlab_ci:
enable_artifacts_buildcache = gitlab_ci["enable-artifacts-buildcache"]
rebuild_index_enabled = True
if "rebuild-index" in ci_config and ci_config["rebuild-index"] is False:
if "rebuild-index" in gitlab_ci and gitlab_ci["rebuild-index"] is False:
rebuild_index_enabled = False
temp_storage_url_prefix = None
if "temporary-storage-url-prefix" in ci_config:
temp_storage_url_prefix = ci_config["temporary-storage-url-prefix"]
if "temporary-storage-url-prefix" in gitlab_ci:
temp_storage_url_prefix = gitlab_ci["temporary-storage-url-prefix"]
bootstrap_specs = []
phases = []
if "bootstrap" in ci_config:
for phase in ci_config["bootstrap"]:
if "bootstrap" in gitlab_ci:
for phase in gitlab_ci["bootstrap"]:
try:
phase_name = phase.get("name")
strip_compilers = phase.get("compiler-agnostic")
@@ -930,31 +720,6 @@ def generate_gitlab_ci_yaml(
shutil.copyfile(env.manifest_path, os.path.join(concrete_env_dir, "spack.yaml"))
shutil.copyfile(env.lock_path, os.path.join(concrete_env_dir, "spack.lock"))
with open(env.manifest_path, "r") as env_fd:
env_yaml_root = syaml.load(env_fd)
# Add config scopes to environment
env_includes = env_yaml_root["spack"].get("include", [])
cli_scopes = [
os.path.abspath(s.path)
for s in cfg.scopes().values()
if type(s) == cfg.ImmutableConfigScope
and s.path not in env_includes
and os.path.exists(s.path)
]
include_scopes = []
for scope in cli_scopes:
if scope not in include_scopes and scope not in env_includes:
include_scopes.insert(0, scope)
env_includes.extend(include_scopes)
env_yaml_root["spack"]["include"] = env_includes
if "gitlab-ci" in env_yaml_root["spack"] and "ci" not in env_yaml_root["spack"]:
env_yaml_root["spack"]["ci"] = env_yaml_root["spack"].pop("gitlab-ci")
translate_deprecated_config(env_yaml_root["spack"]["ci"])
with open(os.path.join(concrete_env_dir, "spack.yaml"), "w") as fd:
fd.write(syaml.dump_config(env_yaml_root, default_flow_style=False))
job_log_dir = os.path.join(pipeline_artifacts_dir, "logs")
job_repro_dir = os.path.join(pipeline_artifacts_dir, "reproduction")
job_test_dir = os.path.join(pipeline_artifacts_dir, "tests")
@@ -966,7 +731,7 @@ def generate_gitlab_ci_yaml(
# generation job and the rebuild jobs. This can happen when gitlab
# checks out the project into a runner-specific directory, for example,
# and different runners are picked for generate and rebuild jobs.
ci_project_dir = os.environ.get("CI_PROJECT_DIR", os.getcwd())
ci_project_dir = os.environ.get("CI_PROJECT_DIR")
rel_artifacts_root = os.path.relpath(pipeline_artifacts_dir, ci_project_dir)
rel_concrete_env_dir = os.path.relpath(concrete_env_dir, ci_project_dir)
rel_job_log_dir = os.path.relpath(job_log_dir, ci_project_dir)
@@ -980,7 +745,7 @@ def generate_gitlab_ci_yaml(
try:
bindist.binary_index.update()
except bindist.FetchCacheError as e:
tty.warn(e)
tty.error(e)
staged_phases = {}
try:
@@ -1037,9 +802,7 @@ def generate_gitlab_ci_yaml(
else:
broken_spec_urls = web_util.list_url(broken_specs_url)
spack_ci = SpackCI(ci_config, phases, staged_phases)
spack_ci_ir = spack_ci.generate_ir()
before_script, after_script = None, None
for phase in phases:
phase_name = phase["name"]
strip_compilers = phase["strip-compilers"]
@@ -1066,35 +829,54 @@ def generate_gitlab_ci_yaml(
spec_record["needs_rebuild"] = False
continue
job_object = spack_ci_ir["jobs"][release_spec_dag_hash]["attributes"]
runner_attribs = _find_matching_config(release_spec, gitlab_ci)
if not job_object:
if not runner_attribs:
tty.warn("No match found for {0}, skipping it".format(release_spec))
continue
tags = [tag for tag in runner_attribs["tags"]]
if spack_pipeline_type is not None:
# For spack pipelines "public" and "protected" are reserved tags
job_object["tags"] = _remove_reserved_tags(job_object.get("tags", []))
tags = _remove_reserved_tags(tags)
if spack_pipeline_type == "spack_protected_branch":
job_object["tags"].extend(["protected"])
tags.extend(["protected"])
elif spack_pipeline_type == "spack_pull_request":
job_object["tags"].extend(["public"])
tags.extend(["public"])
if "script" not in job_object:
raise AttributeError
variables = {}
if "variables" in runner_attribs:
variables.update(runner_attribs["variables"])
def main_script_replacements(cmd):
return cmd.replace("{env_dir}", concrete_env_dir)
image_name = None
image_entry = None
if "image" in runner_attribs:
build_image = runner_attribs["image"]
try:
image_name = build_image.get("name")
entrypoint = build_image.get("entrypoint")
image_entry = [p for p in entrypoint]
except AttributeError:
image_name = build_image
job_object["script"] = _unpack_script(
job_object["script"], op=main_script_replacements
)
job_script = ["spack env activate --without-view ."]
if "before_script" in job_object:
job_object["before_script"] = _unpack_script(job_object["before_script"])
if artifacts_root:
job_script.insert(0, "cd {0}".format(concrete_env_dir))
if "after_script" in job_object:
job_object["after_script"] = _unpack_script(job_object["after_script"])
job_script.extend(["spack ci rebuild"])
if "script" in runner_attribs:
job_script = [s for s in runner_attribs["script"]]
before_script = None
if "before_script" in runner_attribs:
before_script = [s for s in runner_attribs["before_script"]]
after_script = None
if "after_script" in runner_attribs:
after_script = [s for s in runner_attribs["after_script"]]
osname = str(release_spec.architecture)
job_name = get_job_name(
@@ -1107,12 +889,13 @@ def main_script_replacements(cmd):
if _is_main_phase(phase_name):
compiler_action = "INSTALL_MISSING"
job_vars = job_object.setdefault("variables", {})
job_vars["SPACK_JOB_SPEC_DAG_HASH"] = release_spec_dag_hash
job_vars["SPACK_JOB_SPEC_PKG_NAME"] = release_spec.name
job_vars["SPACK_COMPILER_ACTION"] = compiler_action
job_vars = {
"SPACK_JOB_SPEC_DAG_HASH": release_spec_dag_hash,
"SPACK_JOB_SPEC_PKG_NAME": release_spec.name,
"SPACK_COMPILER_ACTION": compiler_action,
}
job_object["needs"] = []
job_dependencies = []
if spec_label in dependencies:
if enable_artifacts_buildcache:
# Get dependencies transitively, so they're all
@@ -1125,7 +908,7 @@ def main_script_replacements(cmd):
for dep_label in dependencies[spec_label]:
dep_jobs.append(spec_labels[dep_label]["spec"])
job_object["needs"].extend(
job_dependencies.extend(
_format_job_needs(
phase_name,
strip_compilers,
@@ -1155,7 +938,7 @@ def main_script_replacements(cmd):
bs_arch = c_spec.architecture
bs_arch_family = bs_arch.target.microarchitecture.family
if (
c_spec.intersects(compiler_pkg_spec)
c_spec.satisfies(compiler_pkg_spec)
and bs_arch_family == spec_arch_family
):
# We found the bootstrap compiler this release spec
@@ -1182,7 +965,7 @@ def main_script_replacements(cmd):
if enable_artifacts_buildcache:
dep_jobs = [d for d in c_spec.traverse(deptype=all)]
job_object["needs"].extend(
job_dependencies.extend(
_format_job_needs(
bs["phase-name"],
bs["strip-compilers"],
@@ -1248,7 +1031,7 @@ def main_script_replacements(cmd):
]
if artifacts_root:
job_object["needs"].append(
job_dependencies.append(
{"job": generate_job_name, "pipeline": "{0}".format(parent_pipeline_id)}
)
@@ -1263,22 +1046,18 @@ def main_script_replacements(cmd):
build_stamp = cdash_handler.build_stamp
job_vars["SPACK_CDASH_BUILD_STAMP"] = build_stamp
job_object["artifacts"] = spack.config.merge_yaml(
job_object.get("artifacts", {}),
{
"when": "always",
"paths": [
rel_job_log_dir,
rel_job_repro_dir,
rel_job_test_dir,
rel_user_artifacts_dir,
],
},
)
variables.update(job_vars)
artifact_paths = [
rel_job_log_dir,
rel_job_repro_dir,
rel_job_test_dir,
rel_user_artifacts_dir,
]
if enable_artifacts_buildcache:
bc_root = os.path.join(local_mirror_dir, "build_cache")
job_object["artifacts"]["paths"].extend(
artifact_paths.extend(
[
os.path.join(bc_root, p)
for p in [
@@ -1288,15 +1067,33 @@ def main_script_replacements(cmd):
]
)
job_object["stage"] = stage_name
job_object["retry"] = {"max": 2, "when": JOB_RETRY_CONDITIONS}
job_object["interruptible"] = True
job_object = {
"stage": stage_name,
"variables": variables,
"script": job_script,
"tags": tags,
"artifacts": {"paths": artifact_paths, "when": "always"},
"needs": sorted(job_dependencies, key=lambda d: d["job"]),
"retry": {"max": 2, "when": JOB_RETRY_CONDITIONS},
"interruptible": True,
}
length_needs = len(job_object["needs"])
length_needs = len(job_dependencies)
if length_needs > max_length_needs:
max_length_needs = length_needs
max_needs_job = job_name
if before_script:
job_object["before_script"] = before_script
if after_script:
job_object["after_script"] = after_script
if image_name:
job_object["image"] = image_name
if image_entry is not None:
job_object["image"] = {"name": image_name, "entrypoint": image_entry}
output_object[job_name] = job_object
job_id += 1
@@ -1323,6 +1120,19 @@ def main_script_replacements(cmd):
else:
tty.warn("Unable to populate buildgroup without CDash credentials")
service_job_config = None
if "service-job-attributes" in gitlab_ci:
service_job_config = gitlab_ci["service-job-attributes"]
default_attrs = [
"image",
"tags",
"variables",
"before_script",
# 'script',
"after_script",
]
service_job_retries = {
"max": 2,
"when": ["runner_system_failure", "stuck_or_timeout_failure", "script_failure"],
@@ -1334,29 +1144,55 @@ def main_script_replacements(cmd):
# schedule a job to clean up the temporary storage location
# associated with this pipeline.
stage_names.append("cleanup-temp-storage")
cleanup_job = copy.deepcopy(spack_ci_ir["jobs"]["cleanup"]["attributes"])
cleanup_job = {}
if service_job_config:
_copy_attributes(default_attrs, service_job_config, cleanup_job)
if "tags" in cleanup_job:
service_tags = _remove_reserved_tags(cleanup_job["tags"])
cleanup_job["tags"] = service_tags
cleanup_job["stage"] = "cleanup-temp-storage"
cleanup_job["script"] = [
"spack -d mirror destroy --mirror-url {0}/$CI_PIPELINE_ID".format(
temp_storage_url_prefix
)
]
cleanup_job["when"] = "always"
cleanup_job["retry"] = service_job_retries
cleanup_job["interruptible"] = True
cleanup_job["script"] = _unpack_script(
cleanup_job["script"],
op=lambda cmd: cmd.replace("mirror_prefix", temp_storage_url_prefix),
)
output_object["cleanup"] = cleanup_job
if (
"script" in spack_ci_ir["jobs"]["signing"]["attributes"]
"signing-job-attributes" in gitlab_ci
and spack_pipeline_type == "spack_protected_branch"
):
# External signing: generate a job to check and sign binary pkgs
stage_names.append("stage-sign-pkgs")
signing_job = spack_ci_ir["jobs"]["signing"]["attributes"]
signing_job_config = gitlab_ci["signing-job-attributes"]
signing_job = {}
signing_job["script"] = _unpack_script(signing_job["script"])
signing_job_attrs_to_copy = [
"image",
"tags",
"variables",
"before_script",
"script",
"after_script",
]
_copy_attributes(signing_job_attrs_to_copy, signing_job_config, signing_job)
signing_job_tags = []
if "tags" in signing_job:
signing_job_tags = _remove_reserved_tags(signing_job["tags"])
for tag in ["aws", "protected", "notary"]:
if tag not in signing_job_tags:
signing_job_tags.append(tag)
signing_job["tags"] = signing_job_tags
signing_job["stage"] = "stage-sign-pkgs"
signing_job["when"] = "always"
@@ -1368,17 +1204,23 @@ def main_script_replacements(cmd):
if rebuild_index_enabled:
# Add a final job to regenerate the index
stage_names.append("stage-rebuild-index")
final_job = spack_ci_ir["jobs"]["reindex"]["attributes"]
final_job = {}
if service_job_config:
_copy_attributes(default_attrs, service_job_config, final_job)
if "tags" in final_job:
service_tags = _remove_reserved_tags(final_job["tags"])
final_job["tags"] = service_tags
index_target_mirror = mirror_urls[0]
if remote_mirror_override:
index_target_mirror = remote_mirror_override
final_job["stage"] = "stage-rebuild-index"
final_job["script"] = _unpack_script(
final_job["script"],
op=lambda cmd: cmd.replace("{index_target_mirror}", index_target_mirror),
)
final_job["stage"] = "stage-rebuild-index"
final_job["script"] = [
"spack buildcache update-index --keys --mirror-url {0}".format(index_target_mirror)
]
final_job["when"] = "always"
final_job["retry"] = service_job_retries
final_job["interruptible"] = True
@@ -1426,9 +1268,6 @@ def main_script_replacements(cmd):
if spack_stack_name:
output_object["variables"]["SPACK_CI_STACK_NAME"] = spack_stack_name
# Ensure the child pipeline always runs
output_object["workflow"] = {"rules": [{"when": "always"}]}
if spack_buildcache_copy:
# Write out the file describing specs that should be copied
copy_specs_dir = os.path.join(pipeline_artifacts_dir, "specs_to_copy")
@@ -1462,7 +1301,13 @@ def main_script_replacements(cmd):
else:
# No jobs were generated
tty.debug("No specs to rebuild, generating no-op job")
noop_job = spack_ci_ir["jobs"]["noop"]["attributes"]
noop_job = {}
if service_job_config:
_copy_attributes(default_attrs, service_job_config, noop_job)
if "script" not in noop_job:
noop_job["script"] = ['echo "All specs already up to date, nothing to rebuild."']
noop_job["retry"] = service_job_retries
@@ -1476,7 +1321,7 @@ def main_script_replacements(cmd):
sys.exit(1)
with open(output_file, "w") as outf:
outf.write(syaml.dump(sorted_output, default_flow_style=True))
outf.write(syaml.dump_config(sorted_output, default_flow_style=True))
def _url_encode_string(input_string):
@@ -1656,22 +1501,19 @@ def copy_files_to_artifacts(src, artifacts_dir):
try:
fs.copy(src, artifacts_dir)
except Exception as err:
msg = ("Unable to copy files ({0}) to artifacts {1} due to " "exception: {2}").format(
src, artifacts_dir, str(err)
)
tty.warn(msg)
tty.warn(f"Unable to copy files ({src}) to artifacts {artifacts_dir} due to: {err}")
def copy_stage_logs_to_artifacts(job_spec: spack.spec.Spec, job_log_dir: str) -> None:
def copy_stage_logs_to_artifacts(job_spec, job_log_dir):
"""Copy selected build stage file(s) to the given artifacts directory
Looks for build logs in the stage directory of the given
job_spec, and attempts to copy the files into the directory given
Looks for spack-build-out.txt in the stage directory of the given
job_spec, and attempts to copy the file into the directory given
by job_log_dir.
Args:
job_spec: spec associated with spack install log
job_log_dir: path into which build log should be copied
Parameters:
job_spec (spack.spec.Spec): spec associated with spack install log
job_log_dir (str): path into which build log should be copied
"""
tty.debug("job spec: {0}".format(job_spec))
if not job_spec:
@@ -1690,8 +1532,8 @@ def copy_stage_logs_to_artifacts(job_spec: spack.spec.Spec, job_log_dir: str) ->
stage_dir = job_pkg.stage.path
tty.debug("stage dir: {0}".format(stage_dir))
for file in [job_pkg.log_path, job_pkg.env_mods_path, *job_pkg.builder.archive_files]:
copy_files_to_artifacts(file, job_log_dir)
build_out_src = os.path.join(stage_dir, "spack-build-out.txt")
copy_files_to_artifacts(build_out_src, job_log_dir)
def copy_test_logs_to_artifacts(test_stage, job_test_dir):
@@ -1879,7 +1721,6 @@ def reproduce_ci_job(url, work_dir):
function is a set of printed instructions for running docker and then
commands to run to reproduce the build once inside the container.
"""
work_dir = os.path.realpath(work_dir)
download_and_extract_artifacts(url, work_dir)
lock_file = fs.find(work_dir, "spack.lock")[0]
@@ -2044,9 +1885,7 @@ def reproduce_ci_job(url, work_dir):
if job_image:
inst_list.append("\nRun the following command:\n\n")
inst_list.append(
" $ docker run --rm --name spack_reproducer -v {0}:{1}:Z -ti {2}\n".format(
work_dir, mount_as_dir, job_image
)
" $ docker run --rm -v {0}:{1} -ti {2}\n".format(work_dir, mount_as_dir, job_image)
)
inst_list.append("\nOnce inside the container:\n\n")
else:
@@ -2097,16 +1936,13 @@ def process_command(name, commands, repro_dir):
# Create a string [command 1] && [command 2] && ... && [command n] with commands
# quoted using double quotes.
args_to_string = lambda args: " ".join('"{}"'.format(arg) for arg in args)
full_command = " \n ".join(map(args_to_string, commands))
full_command = " && ".join(map(args_to_string, commands))
# Write the command to a shell script
script = "{0}.sh".format(name)
with open(script, "w") as fd:
fd.write("#!/bin/sh\n\n")
fd.write("\n# spack {0} command\n".format(name))
fd.write("set -e\n")
if os.environ.get("SPACK_VERBOSE_SCRIPT"):
fd.write("set -x\n")
fd.write(full_command)
fd.write("\n")
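As a quick illustration of the quoting and joining described in the comment above, here is a standalone toy run of the "&&"-joined form; the command lists are invented, not taken from real reproduction artifacts:

args_to_string = lambda args: " ".join('"{}"'.format(arg) for arg in args)
commands = [["spack", "env", "activate", "."], ["spack", "ci", "rebuild"]]  # invented input
full_command = " && ".join(map(args_to_string, commands))
# full_command == '"spack" "env" "activate" "." && "spack" "ci" "rebuild"'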
@@ -2455,66 +2291,3 @@ def report_skipped(self, spec, directory_name, reason):
)
reporter = CDash(configuration=configuration)
reporter.test_skipped_report(directory_name, spec, reason)
def translate_deprecated_config(config):
# Remove all deprecated keys from config
mappings = config.pop("mappings", [])
match_behavior = config.pop("match_behavior", "first")
build_job = {}
if "image" in config:
build_job["image"] = config.pop("image")
if "tags" in config:
build_job["tags"] = config.pop("tags")
if "variables" in config:
build_job["variables"] = config.pop("variables")
if "before_script" in config:
build_job["before_script"] = config.pop("before_script")
if "script" in config:
build_job["script"] = config.pop("script")
if "after_script" in config:
build_job["after_script"] = config.pop("after_script")
signing_job = None
if "signing-job-attributes" in config:
signing_job = {"signing-job": config.pop("signing-job-attributes")}
service_job_attributes = None
if "service-job-attributes" in config:
service_job_attributes = config.pop("service-job-attributes")
# If this config already has pipeline-gen, there is nothing more to translate
if "pipeline-gen" in config:
return True if mappings or build_job or signing_job or service_job_attributes else False
config["target"] = "gitlab"
config["pipeline-gen"] = []
pipeline_gen = config["pipeline-gen"]
# Build Job
submapping = []
for section in mappings:
submapping_section = {"match": section["match"]}
if "runner-attributes" in section:
submapping_section["build-job"] = section["runner-attributes"]
if "remove-attributes" in section:
submapping_section["build-job-remove"] = section["remove-attributes"]
submapping.append(submapping_section)
pipeline_gen.append({"submapping": submapping, "match_behavior": match_behavior})
if build_job:
pipeline_gen.append({"build-job": build_job})
# Signing Job
if signing_job:
pipeline_gen.append(signing_job)
# Service Jobs
if service_job_attributes:
pipeline_gen.append({"reindex-job": service_job_attributes})
pipeline_gen.append({"noop-job": service_job_attributes})
pipeline_gen.append({"cleanup-job": service_job_attributes})
return True
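To make the helper above concrete, here is a small, made-up deprecated section and the rough shape the translation leaves it in; every key and value below is invented for illustration and is not taken from this changeset.

deprecated = {
    "mappings": [{"match": ["os=ubuntu22.04"], "runner-attributes": {"tags": ["docker"]}}],
    "image": "some/builder-image:latest",
}
# After translate_deprecated_config(deprecated), the dict would look roughly like:
# {
#     "target": "gitlab",
#     "pipeline-gen": [
#         {
#             "submapping": [
#                 {"match": ["os=ubuntu22.04"], "build-job": {"tags": ["docker"]}}
#             ],
#             "match_behavior": "first",
#         },
#         {"build-job": {"image": "some/builder-image:latest"}},
#     ],
# }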


@@ -498,11 +498,11 @@ def list_fn(args):
if not args.allarch:
arch = spack.spec.Spec.default_arch()
specs = [s for s in specs if s.intersects(arch)]
specs = [s for s in specs if s.satisfies(arch)]
if args.specs:
constraints = set(args.specs)
specs = [s for s in specs if any(s.intersects(c) for c in constraints)]
specs = [s for s in specs if any(s.satisfies(c) for c in constraints)]
if sys.stdout.isatty():
builds = len(specs)
tty.msg("%s." % plural(builds, "cached build"))


@@ -33,6 +33,12 @@ def deindent(desc):
return desc.replace(" ", "")
def get_env_var(variable_name):
if variable_name in os.environ:
return os.environ.get(variable_name)
return None
def setup_parser(subparser):
setup_parser.parser = subparser
subparsers = subparser.add_subparsers(help="CI sub-commands")
@@ -227,7 +233,7 @@ def ci_reindex(args):
Use the active, gitlab-enabled environment to rebuild the buildcache
index for the associated mirror."""
env = spack.cmd.require_active_env(cmd_name="ci rebuild-index")
yaml_root = ev.config_dict(env.manifest)
yaml_root = ev.config_dict(env.yaml)
if "mirrors" not in yaml_root or len(yaml_root["mirrors"].values()) < 1:
tty.die("spack ci rebuild-index requires an env containing a mirror")
@@ -249,9 +255,10 @@ def ci_rebuild(args):
# Make sure the environment is "gitlab-enabled", or else there's nothing
# to do.
ci_config = cfg.get("ci")
if not ci_config:
tty.die("spack ci rebuild requires an env containing ci cfg")
yaml_root = ev.config_dict(env.yaml)
gitlab_ci = yaml_root["gitlab-ci"] if "gitlab-ci" in yaml_root else None
if not gitlab_ci:
tty.die("spack ci rebuild requires an env containing gitlab-ci cfg")
tty.msg(
"SPACK_BUILDCACHE_DESTINATION={0}".format(
@@ -262,27 +269,27 @@ def ci_rebuild(args):
# Grab the environment variables we need. These either come from the
# pipeline generation step ("spack ci generate"), where they were written
# out as variables, or else provided by GitLab itself.
pipeline_artifacts_dir = os.environ.get("SPACK_ARTIFACTS_ROOT")
job_log_dir = os.environ.get("SPACK_JOB_LOG_DIR")
job_test_dir = os.environ.get("SPACK_JOB_TEST_DIR")
repro_dir = os.environ.get("SPACK_JOB_REPRO_DIR")
local_mirror_dir = os.environ.get("SPACK_LOCAL_MIRROR_DIR")
concrete_env_dir = os.environ.get("SPACK_CONCRETE_ENV_DIR")
ci_pipeline_id = os.environ.get("CI_PIPELINE_ID")
ci_job_name = os.environ.get("CI_JOB_NAME")
signing_key = os.environ.get("SPACK_SIGNING_KEY")
job_spec_pkg_name = os.environ.get("SPACK_JOB_SPEC_PKG_NAME")
job_spec_dag_hash = os.environ.get("SPACK_JOB_SPEC_DAG_HASH")
compiler_action = os.environ.get("SPACK_COMPILER_ACTION")
spack_pipeline_type = os.environ.get("SPACK_PIPELINE_TYPE")
remote_mirror_override = os.environ.get("SPACK_REMOTE_MIRROR_OVERRIDE")
remote_mirror_url = os.environ.get("SPACK_REMOTE_MIRROR_URL")
spack_ci_stack_name = os.environ.get("SPACK_CI_STACK_NAME")
shared_pr_mirror_url = os.environ.get("SPACK_CI_SHARED_PR_MIRROR_URL")
rebuild_everything = os.environ.get("SPACK_REBUILD_EVERYTHING")
pipeline_artifacts_dir = get_env_var("SPACK_ARTIFACTS_ROOT")
job_log_dir = get_env_var("SPACK_JOB_LOG_DIR")
job_test_dir = get_env_var("SPACK_JOB_TEST_DIR")
repro_dir = get_env_var("SPACK_JOB_REPRO_DIR")
local_mirror_dir = get_env_var("SPACK_LOCAL_MIRROR_DIR")
concrete_env_dir = get_env_var("SPACK_CONCRETE_ENV_DIR")
ci_pipeline_id = get_env_var("CI_PIPELINE_ID")
ci_job_name = get_env_var("CI_JOB_NAME")
signing_key = get_env_var("SPACK_SIGNING_KEY")
job_spec_pkg_name = get_env_var("SPACK_JOB_SPEC_PKG_NAME")
job_spec_dag_hash = get_env_var("SPACK_JOB_SPEC_DAG_HASH")
compiler_action = get_env_var("SPACK_COMPILER_ACTION")
spack_pipeline_type = get_env_var("SPACK_PIPELINE_TYPE")
remote_mirror_override = get_env_var("SPACK_REMOTE_MIRROR_OVERRIDE")
remote_mirror_url = get_env_var("SPACK_REMOTE_MIRROR_URL")
spack_ci_stack_name = get_env_var("SPACK_CI_STACK_NAME")
shared_pr_mirror_url = get_env_var("SPACK_CI_SHARED_PR_MIRROR_URL")
rebuild_everything = get_env_var("SPACK_REBUILD_EVERYTHING")
# Construct absolute paths relative to current $CI_PROJECT_DIR
ci_project_dir = os.environ.get("CI_PROJECT_DIR")
ci_project_dir = get_env_var("CI_PROJECT_DIR")
pipeline_artifacts_dir = os.path.join(ci_project_dir, pipeline_artifacts_dir)
job_log_dir = os.path.join(ci_project_dir, job_log_dir)
job_test_dir = os.path.join(ci_project_dir, job_test_dir)
@@ -299,10 +306,8 @@ def ci_rebuild(args):
# Query the environment manifest to find out whether we're reporting to a
# CDash instance, and if so, gather some information from the manifest to
# support that task.
cdash_config = cfg.get("cdash")
cdash_handler = None
if "build-group" in cdash_config:
cdash_handler = spack_ci.CDashHandler(cdash_config)
cdash_handler = spack_ci.CDashHandler(yaml_root.get("cdash")) if "cdash" in yaml_root else None
if cdash_handler:
tty.debug("cdash url = {0}".format(cdash_handler.url))
tty.debug("cdash project = {0}".format(cdash_handler.project))
tty.debug("cdash project_enc = {0}".format(cdash_handler.project_enc))
@@ -335,13 +340,13 @@ def ci_rebuild(args):
pipeline_mirror_url = None
temp_storage_url_prefix = None
if "temporary-storage-url-prefix" in ci_config:
temp_storage_url_prefix = ci_config["temporary-storage-url-prefix"]
if "temporary-storage-url-prefix" in gitlab_ci:
temp_storage_url_prefix = gitlab_ci["temporary-storage-url-prefix"]
pipeline_mirror_url = url_util.join(temp_storage_url_prefix, ci_pipeline_id)
enable_artifacts_mirror = False
if "enable-artifacts-buildcache" in ci_config:
enable_artifacts_mirror = ci_config["enable-artifacts-buildcache"]
if "enable-artifacts-buildcache" in gitlab_ci:
enable_artifacts_mirror = gitlab_ci["enable-artifacts-buildcache"]
if enable_artifacts_mirror or (
spack_is_pr_pipeline and not enable_artifacts_mirror and not temp_storage_url_prefix
):
@@ -588,8 +593,8 @@ def ci_rebuild(args):
# avoid wasting compute cycles attempting to build those hashes.
if install_exit_code == INSTALL_FAIL_CODE and spack_is_develop_pipeline:
tty.debug("Install failed on develop")
if "broken-specs-url" in ci_config:
broken_specs_url = ci_config["broken-specs-url"]
if "broken-specs-url" in gitlab_ci:
broken_specs_url = gitlab_ci["broken-specs-url"]
dev_fail_hash = job_spec.dag_hash()
broken_spec_path = url_util.join(broken_specs_url, dev_fail_hash)
tty.msg("Reporting broken develop build as: {0}".format(broken_spec_path))
@@ -597,8 +602,8 @@ def ci_rebuild(args):
broken_spec_path,
job_spec_pkg_name,
spack_ci_stack_name,
os.environ.get("CI_JOB_URL"),
os.environ.get("CI_PIPELINE_URL"),
get_env_var("CI_JOB_URL"),
get_env_var("CI_PIPELINE_URL"),
job_spec.to_dict(hash=ht.dag_hash),
)
@@ -610,14 +615,17 @@ def ci_rebuild(args):
# the package, run them and copy the output. Failures of any kind should
# *not* terminate the build process or preclude creating the build cache.
broken_tests = (
"broken-tests-packages" in ci_config
and job_spec.name in ci_config["broken-tests-packages"]
"broken-tests-packages" in gitlab_ci
and job_spec.name in gitlab_ci["broken-tests-packages"]
)
reports_dir = fs.join_path(os.getcwd(), "cdash_report")
if args.tests and broken_tests:
tty.warn("Unable to run stand-alone tests since listed in " "ci's 'broken-tests-packages'")
tty.warn(
"Unable to run stand-alone tests since listed in "
"gitlab-ci's 'broken-tests-packages'"
)
if cdash_handler:
msg = "Package is listed in ci's broken-tests-packages"
msg = "Package is listed in gitlab-ci's broken-tests-packages"
cdash_handler.report_skipped(job_spec, reports_dir, reason=msg)
cdash_handler.copy_test_results(reports_dir, job_test_dir)
elif args.tests:
@@ -680,8 +688,8 @@ def ci_rebuild(args):
# If this is a develop pipeline, check if the spec that we just built is
# on the broken-specs list. If so, remove it.
if spack_is_develop_pipeline and "broken-specs-url" in ci_config:
broken_specs_url = ci_config["broken-specs-url"]
if spack_is_develop_pipeline and "broken-specs-url" in gitlab_ci:
broken_specs_url = gitlab_ci["broken-specs-url"]
just_built_hash = job_spec.dag_hash()
broken_spec_path = url_util.join(broken_specs_url, just_built_hash)
if web_util.url_exists(broken_spec_path):
@@ -698,9 +706,9 @@ def ci_rebuild(args):
else:
tty.debug("spack install exited non-zero, will not create buildcache")
api_root_url = os.environ.get("CI_API_V4_URL")
ci_project_id = os.environ.get("CI_PROJECT_ID")
ci_job_id = os.environ.get("CI_JOB_ID")
api_root_url = get_env_var("CI_API_V4_URL")
ci_project_id = get_env_var("CI_PROJECT_ID")
ci_job_id = get_env_var("CI_JOB_ID")
repro_job_url = "{0}/projects/{1}/jobs/{2}/artifacts".format(
api_root_url, ci_project_id, ci_job_id


@@ -514,15 +514,7 @@ def add_concretizer_args(subparser):
dest="concretizer:reuse",
const=True,
default=None,
help="reuse installed packages/buildcaches when possible",
)
subgroup.add_argument(
"--reuse-deps",
action=ConfigSetAction,
dest="concretizer:reuse",
const="dependencies",
default=None,
help="reuse installed dependencies only",
help="reuse installed dependencies/buildcaches when possible",
)


@@ -7,7 +7,6 @@
import collections
import os
import shutil
from typing import List
import llnl.util.filesystem as fs
import llnl.util.tty as tty
@@ -245,35 +244,30 @@ def config_remove(args):
spack.config.set(path, existing, scope)
def _can_update_config_file(scope: spack.config.ConfigScope, cfg_file):
if isinstance(scope, spack.config.SingleFileScope):
return fs.can_access(cfg_file)
return fs.can_write_to_dir(scope.path) and fs.can_access(cfg_file)
def _can_update_config_file(scope_dir, cfg_file):
dir_ok = fs.can_write_to_dir(scope_dir)
cfg_ok = fs.can_access(cfg_file)
return dir_ok and cfg_ok
def config_update(args):
# Read the configuration files
spack.config.config.get_config(args.section, scope=args.scope)
updates: List[spack.config.ConfigScope] = list(
filter(
lambda s: not isinstance(
s, (spack.config.InternalConfigScope, spack.config.ImmutableConfigScope)
),
spack.config.config.format_updates[args.section],
)
)
updates = spack.config.config.format_updates[args.section]
cannot_overwrite, skip_system_scope = [], False
for scope in updates:
cfg_file = spack.config.config.get_config_filename(scope.name, args.section)
can_be_updated = _can_update_config_file(scope, cfg_file)
scope_dir = scope.path
can_be_updated = _can_update_config_file(scope_dir, cfg_file)
if not can_be_updated:
if scope.name == "system":
skip_system_scope = True
tty.warn(
msg = (
'Not enough permissions to write to "system" scope. '
f"Skipping update at that location [cfg={cfg_file}]"
"Skipping update at that location [cfg={0}]"
)
tty.warn(msg.format(cfg_file))
continue
cannot_overwrite.append((scope, cfg_file))
@@ -321,14 +315,18 @@ def config_update(args):
# Get a function to update the format
update_fn = spack.config.ensure_latest_format_fn(args.section)
for scope in updates:
data = scope.get_section(args.section).pop(args.section)
cfg_file = spack.config.config.get_config_filename(scope.name, args.section)
with open(cfg_file) as f:
data = syaml.load_config(f) or {}
data = data.pop(args.section, {})
update_fn(data)
# Make a backup copy and rewrite the file
bkp_file = cfg_file + ".bkp"
shutil.copy(cfg_file, bkp_file)
spack.config.config.update_config(args.section, data, scope=scope.name, force=True)
tty.msg(f'File "{cfg_file}" updated [backup={bkp_file}]')
msg = 'File "{0}" updated [backup={1}]'
tty.msg(msg.format(cfg_file, bkp_file))
def _can_revert_update(scope_dir, cfg_file, bkp_file):


@@ -807,7 +807,7 @@ def get_versions(args, name):
# Default version with hash
hashed_versions = """\
# FIXME: Add proper versions and checksums here.
# version("1.2.3", md5="0123456789abcdef0123456789abcdef")"""
# version("1.2.3", "0123456789abcdef0123456789abcdef")"""
# Default version without hash
unhashed_versions = """\


@@ -4,6 +4,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import argparse
import io
import os
import shutil
import sys
@@ -23,11 +24,10 @@
import spack.cmd.uninstall
import spack.config
import spack.environment as ev
import spack.environment.depfile as depfile
import spack.environment.shell
import spack.schema.env
import spack.spec
import spack.tengine
import spack.traverse as traverse
import spack.util.string as string
from spack.util.environment import EnvironmentModifications
@@ -163,7 +163,7 @@ def env_activate(args):
env = create_temp_env_directory()
env_path = os.path.abspath(env)
short_name = os.path.basename(env_path)
ev.create_in_dir(env).write(regenerate=False)
ev.Environment(env).write(regenerate=False)
# Managed environment
elif ev.exists(env_name_or_dir) and not args.dir:
@@ -301,17 +301,16 @@ def env_create(args):
# object could choose to enable a view by default. False means that
# the environment should not include a view.
with_view = None
_env_create(
args.create_env,
init_file=args.envfile,
dir=args.dir,
with_view=with_view,
keep_relative=args.keep_relative,
)
if args.envfile:
with open(args.envfile) as f:
_env_create(
args.create_env, f, args.dir, with_view=with_view, keep_relative=args.keep_relative
)
else:
_env_create(args.create_env, None, args.dir, with_view=with_view)
def _env_create(name_or_path, *, init_file=None, dir=False, with_view=None, keep_relative=False):
def _env_create(name_or_path, init_file=None, dir=False, with_view=None, keep_relative=False):
"""Create a new environment, with an optional yaml description.
Arguments:
@@ -324,21 +323,18 @@ def _env_create(name_or_path, *, init_file=None, dir=False, with_view=None, keep
the new environment file, otherwise they may be made absolute if the
new environment is in a different location
"""
if not dir:
env = ev.create(
name_or_path, init_file=init_file, with_view=with_view, keep_relative=keep_relative
)
if dir:
env = ev.Environment(name_or_path, init_file, with_view, keep_relative)
env.write()
tty.msg("Created environment in %s" % env.path)
tty.msg("You can activate this environment with:")
tty.msg(" spack env activate %s" % env.path)
else:
env = ev.create(name_or_path, init_file, with_view, keep_relative)
env.write()
tty.msg("Created environment '%s' in %s" % (name_or_path, env.path))
tty.msg("You can activate this environment with:")
tty.msg(" spack env activate %s" % (name_or_path))
return env
env = ev.create_in_dir(
name_or_path, init_file=init_file, with_view=with_view, keep_relative=keep_relative
)
tty.msg("Created environment in %s" % env.path)
tty.msg("You can activate this environment with:")
tty.msg(" spack env activate %s" % env.path)
return env
@@ -435,22 +431,21 @@ def env_view_setup_parser(subparser):
def env_view(args):
env = ev.active_environment()
if not env:
if env:
if args.action == ViewAction.regenerate:
env.regenerate_views()
elif args.action == ViewAction.enable:
if args.view_path:
view_path = args.view_path
else:
view_path = env.view_path_default
env.update_default_view(view_path)
env.write()
elif args.action == ViewAction.disable:
env.update_default_view(None)
env.write()
else:
tty.msg("No active environment")
return
if args.action == ViewAction.regenerate:
env.regenerate_views()
elif args.action == ViewAction.enable:
if args.view_path:
view_path = args.view_path
else:
view_path = env.view_path_default
env.update_default_view(view_path)
env.write()
elif args.action == ViewAction.disable:
env.update_default_view(path_or_bool=False)
env.write()
#
@@ -642,23 +637,161 @@ def env_depfile_setup_parser(subparser):
)
def _deptypes(use_buildcache):
"""What edges should we follow for a given node? If it's a cache-only
node, then we can drop build type deps."""
return ("link", "run") if use_buildcache == "only" else ("build", "link", "run")
class MakeTargetVisitor(object):
"""This visitor produces an adjacency list of a (reduced) DAG, which
is used to generate Makefile targets with their prerequisites."""
def __init__(self, target, pkg_buildcache, deps_buildcache):
"""
Args:
target: function that maps dag_hash -> make target string
pkg_buildcache (str): "only", "never", "auto": when "only",
redundant build deps of roots are dropped
deps_buildcache (str): same as pkg_buildcache, but for non-root specs.
"""
self.adjacency_list = []
self.target = target
self.pkg_buildcache = pkg_buildcache
self.deps_buildcache = deps_buildcache
self.deptypes_root = _deptypes(pkg_buildcache)
self.deptypes_deps = _deptypes(deps_buildcache)
def neighbors(self, node):
"""Produce a list of spec to follow from node"""
deptypes = self.deptypes_root if node.depth == 0 else self.deptypes_deps
return traverse.sort_edges(node.edge.spec.edges_to_dependencies(deptype=deptypes))
def build_cache_flag(self, depth):
setting = self.pkg_buildcache if depth == 0 else self.deps_buildcache
if setting == "only":
return "--use-buildcache=only"
elif setting == "never":
return "--use-buildcache=never"
return ""
def accept(self, node):
fmt = "{name}-{version}-{hash}"
tgt = node.edge.spec.format(fmt)
spec_str = node.edge.spec.format(
"{name}{@version}{%compiler}{variants}{arch=architecture}"
)
buildcache_flag = self.build_cache_flag(node.depth)
prereqs = " ".join([self.target(dep.spec.format(fmt)) for dep in self.neighbors(node)])
self.adjacency_list.append(
(tgt, prereqs, node.edge.spec.dag_hash(), spec_str, buildcache_flag)
)
# We already accepted this
return True
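For orientation, the adjacency list this visitor accumulates is simply a list of 5-tuples consumed by the Makefile template; a hand-written example of its shape (package names, hashes, paths and flags below are all invented):

adjacency_list = [
    # (make target, prerequisites, dag hash, spec string, buildcache flag)
    ("zlib-1.2.13-abcdefg", "", "abcdefgxyz", "zlib@1.2.13%gcc arch=linux-ubuntu22.04-x86_64", ""),
    (
        "curl-7.85.0-hijklmn",
        "/path/to/env/.spack-env/makedeps/install/zlib-1.2.13-abcdefg",
        "hijklmnxyz",
        "curl@7.85.0%gcc arch=linux-ubuntu22.04-x86_64",
        "--use-buildcache=only",
    ),
]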
def env_depfile(args):
# Currently only make is supported.
spack.cmd.require_active_env(cmd_name="env depfile")
env = ev.active_environment()
# Special make targets are useful when including a makefile in another, and you
# need to "namespace" the targets to avoid conflicts.
if args.make_prefix is None:
prefix = os.path.join(env.env_subdir_path, "makedeps")
else:
prefix = args.make_prefix
def get_target(name):
# The `all` and `clean` targets are phony. It doesn't make sense to
# have /abs/path/to/env/metadir/{all,clean} targets. But it *does* make
# sense to have a prefix like `env/all`, `env/clean` when they are
# supposed to be included
if name in ("all", "clean") and os.path.isabs(prefix):
return name
else:
return os.path.join(prefix, name)
def get_install_target(name):
return os.path.join(prefix, "install", name)
def get_install_deps_target(name):
return os.path.join(prefix, "install-deps", name)
# What things do we build when running make? By default, we build the
# root specs. If specific specs are provided as input, we build those.
filter_specs = spack.cmd.parse_specs(args.specs) if args.specs else None
template = spack.tengine.make_environment().get_template(os.path.join("depfile", "Makefile"))
model = depfile.MakefileModel.from_env(
ev.active_environment(),
filter_specs=filter_specs,
pkg_buildcache=depfile.UseBuildCache.from_string(args.use_buildcache[0]),
dep_buildcache=depfile.UseBuildCache.from_string(args.use_buildcache[1]),
make_prefix=args.make_prefix,
jobserver=args.jobserver,
if args.specs:
abstract_specs = spack.cmd.parse_specs(args.specs)
roots = [env.matching_spec(s) for s in abstract_specs]
else:
roots = [s for _, s in env.concretized_specs()]
# We produce a sub-DAG from the DAG induced by roots, where we drop build
# edges for those specs that are installed through a binary cache.
pkg_buildcache, dep_buildcache = args.use_buildcache
make_targets = MakeTargetVisitor(get_install_target, pkg_buildcache, dep_buildcache)
traverse.traverse_breadth_first_with_visitor(
roots, traverse.CoverNodesVisitor(make_targets, key=lambda s: s.dag_hash())
)
makefile = template.render(model.to_dict())
# Root specs without deps are the prereqs for the environment target
root_install_targets = [get_install_target(h.format("{name}-{version}-{hash}")) for h in roots]
all_pkg_identifiers = []
# The SPACK_PACKAGE_IDS variable is "exported", which can be used when including
# generated makefiles to add post-install hooks, like pushing to a buildcache,
# running tests, etc.
# NOTE: GNU Make allows directory separators in variable names, so for consistency
# we can namespace this variable with the same prefix as targets.
if args.make_prefix is None:
pkg_identifier_variable = "SPACK_PACKAGE_IDS"
else:
pkg_identifier_variable = os.path.join(prefix, "SPACK_PACKAGE_IDS")
# All install and install-deps targets
all_install_related_targets = []
# Convenience shortcuts: ensure that `make install/pkg-version-hash` triggers
# <absolute path to env>/.spack-env/makedeps/install/pkg-version-hash in case
# we don't have a custom make target prefix.
phony_convenience_targets = []
for tgt, _, _, _, _ in make_targets.adjacency_list:
all_pkg_identifiers.append(tgt)
all_install_related_targets.append(get_install_target(tgt))
all_install_related_targets.append(get_install_deps_target(tgt))
if args.make_prefix is None:
phony_convenience_targets.append(os.path.join("install", tgt))
phony_convenience_targets.append(os.path.join("install-deps", tgt))
buf = io.StringIO()
template = spack.tengine.make_environment().get_template(os.path.join("depfile", "Makefile"))
rendered = template.render(
{
"all_target": get_target("all"),
"env_target": get_target("env"),
"clean_target": get_target("clean"),
"all_install_related_targets": " ".join(all_install_related_targets),
"root_install_targets": " ".join(root_install_targets),
"dirs_target": get_target("dirs"),
"environment": env.path,
"install_target": get_target("install"),
"install_deps_target": get_target("install-deps"),
"any_hash_target": get_target("%"),
"jobserver_support": "+" if args.jobserver else "",
"adjacency_list": make_targets.adjacency_list,
"phony_convenience_targets": " ".join(phony_convenience_targets),
"pkg_ids_variable": pkg_identifier_variable,
"pkg_ids": " ".join(all_pkg_identifiers),
}
)
buf.write(rendered)
makefile = buf.getvalue()
# Finally write to stdout/file.
if args.output:


@@ -39,14 +39,19 @@
compiler flags:
@g{cflags="flags"} cppflags, cflags, cxxflags,
fflags, ldflags, ldlibs
@g{==} propagate flags to package dependencies
@g{cflags=="flags"} propagate flags to package dependencies
cppflags, cflags, cxxflags, fflags,
ldflags, ldlibs
variants:
@B{+variant} enable <variant>
@B{++variant} propagate enable <variant>
@r{-variant} or @r{~variant} disable <variant>
@r{--variant} or @r{~~variant} propagate disable <variant>
@B{variant=value} set non-boolean <variant> to <value>
@B{variant==value} propagate non-boolean <variant> to <value>
@B{variant=value1,value2,value3} set multi-value <variant> values
@B{++}, @r{--}, @r{~~}, @B{==} propagate variants to package dependencies
@B{variant==value1,value2,value3} propagate multi-value <variant> values
architecture variants:
@m{platform=platform} linux, darwin, cray, etc.


@@ -283,7 +283,7 @@ def print_tests(pkg):
c_names = ("gcc", "intel", "intel-parallel-studio", "pgi")
if pkg.name in c_names:
v_names.extend(["c", "cxx", "fortran"])
if pkg.spec.intersects("llvm+clang"):
if pkg.spec.satisfies("llvm+clang"):
v_names.extend(["c", "cxx"])
# TODO Refactor END


@@ -263,6 +263,146 @@ def report_filename(args: argparse.Namespace, specs: List[spack.spec.Spec]) -> s
return result
def install_specs(specs, install_kwargs, cli_args):
try:
if ev.active_environment():
install_specs_inside_environment(specs, install_kwargs, cli_args)
else:
install_specs_outside_environment(specs, install_kwargs)
except spack.build_environment.InstallError as e:
if cli_args.show_log_on_error:
e.print_context()
assert e.pkg, "Expected InstallError to include the associated package"
if not os.path.exists(e.pkg.build_log_path):
tty.error("'spack install' created no log.")
else:
sys.stderr.write("Full build log:\n")
with open(e.pkg.build_log_path) as log:
shutil.copyfileobj(log, sys.stderr)
raise
def install_specs_inside_environment(specs, install_kwargs, cli_args):
specs_to_install, specs_to_add = [], []
env = ev.active_environment()
for abstract, concrete in specs:
# This won't find specs added to the env since last
# concretize, therefore should we consider enforcing
# concretization of the env before allowing to install
# specs?
m_spec = env.matching_spec(abstract)
# If there is any ambiguity in the above call to matching_spec
# (i.e. if more than one spec in the environment matches), then
# SpackEnvironmentError is raised, with a message listing the
# matches. Getting to this point means there were either
# no matches or exactly one match.
if not m_spec and not cli_args.add:
msg = (
"Cannot install '{0}' because it is not in the current environment."
" You can add it to the environment with 'spack add {0}', or as part"
" of the install command with 'spack install --add {0}'"
).format(str(abstract))
tty.die(msg)
if not m_spec:
tty.debug("adding {0} as a root".format(abstract.name))
specs_to_add.append((abstract, concrete))
continue
tty.debug("exactly one match for {0} in env -> {1}".format(m_spec.name, m_spec.dag_hash()))
if m_spec in env.roots() or not cli_args.add:
# either the single match is a root spec (in which case
# the spec is not added to the env again), or the user did
# not specify --add (in which case it is assumed we are
# installing already-concretized specs in the env)
tty.debug("just install {0}".format(m_spec.name))
specs_to_install.append(m_spec)
else:
# the single match is not a root (i.e. it's a dependency),
# and --add was specified, so we'll add it as a
# root before installing
tty.debug("add {0} then install it".format(m_spec.name))
specs_to_add.append((abstract, concrete))
if specs_to_add:
tty.debug("Adding the following specs as roots:")
for abstract, concrete in specs_to_add:
tty.debug(" {0}".format(abstract.name))
with env.write_transaction():
specs_to_install.append(env.concretize_and_add(abstract, concrete))
env.write(regenerate=False)
# Install the validated list of cli specs
if specs_to_install:
tty.debug("Installing the following cli specs:")
for s in specs_to_install:
tty.debug(" {0}".format(s.name))
env.install_specs(specs_to_install, **install_kwargs)
def install_specs_outside_environment(specs, install_kwargs):
installs = [(concrete.package, install_kwargs) for _, concrete in specs]
builder = PackageInstaller(installs)
builder.install()
def install_all_specs_from_active_environment(
install_kwargs, only_concrete, cli_test_arg, reporter_factory
):
"""Install all specs from the active environment
Args:
install_kwargs (dict): dictionary of options to be passed to the installer
only_concrete (bool): if true don't concretize the environment, but install
only the specs that are already concrete
cli_test_arg (bool or str): command line argument to select which test to run
reporter_factory: callable producing a reporter object for the installations
"""
env = ev.active_environment()
if not env:
msg = "install requires a package argument or active environment"
if "spack.yaml" in os.listdir(os.getcwd()):
# There's a spack.yaml file in the working dir, the user may
# have intended to use that
msg += "\n\n"
msg += "Did you mean to install using the `spack.yaml`"
msg += " in this directory? Try: \n"
msg += " spack env activate .\n"
msg += " spack install\n"
msg += " OR\n"
msg += " spack --env . install"
tty.die(msg)
install_kwargs["tests"] = compute_tests_install_kwargs(env.user_specs, cli_test_arg)
if not only_concrete:
with env.write_transaction():
concretized_specs = env.concretize(tests=install_kwargs["tests"])
ev.display_specs(concretized_specs)
# save view regeneration for later, so that we only do it
# once, as it can be slow.
env.write(regenerate=False)
specs = env.all_specs()
if not specs:
msg = "{0} environment has no specs to install".format(env.name)
tty.msg(msg)
return
reporter = reporter_factory(specs) or lang.nullcontext()
tty.msg("Installing environment {0}".format(env.name))
with reporter:
env.install_all(**install_kwargs)
tty.debug("Regenerating environment views for {0}".format(env.name))
with env.write_transaction():
# write env to trigger view generation and modulefile
# generation
env.write()
def compute_tests_install_kwargs(specs, cli_test_arg):
"""Translate the test cli argument into the proper install argument"""
if cli_test_arg == "all":
@@ -272,6 +412,43 @@ def compute_tests_install_kwargs(specs, cli_test_arg):
return False
def specs_from_cli(args, install_kwargs):
"""Return abstract and concrete spec parsed from the command line."""
abstract_specs = spack.cmd.parse_specs(args.spec)
install_kwargs["tests"] = compute_tests_install_kwargs(abstract_specs, args.test)
try:
concrete_specs = spack.cmd.parse_specs(
args.spec, concretize=True, tests=install_kwargs["tests"]
)
except SpackError as e:
tty.debug(e)
if args.log_format is not None:
reporter = args.reporter()
reporter.concretization_report(report_filename(args, abstract_specs), e.message)
raise
return abstract_specs, concrete_specs
def concrete_specs_from_file(args):
"""Return the list of concrete specs read from files."""
result = []
for file in args.specfiles:
with open(file, "r") as f:
if file.endswith("yaml") or file.endswith("yml"):
s = spack.spec.Spec.from_yaml(f)
else:
s = spack.spec.Spec.from_json(f)
concretized = s.concretized()
if concretized.dag_hash() != s.dag_hash():
msg = 'skipped invalid file "{0}". '
msg += "The file does not contain a concrete spec."
tty.warn(msg.format(file))
continue
result.append(concretized)
return result
def require_user_confirmation_for_overwrite(concrete_specs, args):
if args.yes_to_all:
return
@@ -298,40 +475,12 @@ def require_user_confirmation_for_overwrite(concrete_specs, args):
tty.die("Reinstallation aborted.")
def _dump_log_on_error(e: spack.build_environment.InstallError):
e.print_context()
assert e.pkg, "Expected InstallError to include the associated package"
if not os.path.exists(e.pkg.build_log_path):
tty.error("'spack install' created no log.")
else:
sys.stderr.write("Full build log:\n")
with open(e.pkg.build_log_path, errors="replace") as log:
shutil.copyfileobj(log, sys.stderr)
def _die_require_env():
msg = "install requires a package argument or active environment"
if "spack.yaml" in os.listdir(os.getcwd()):
# There's a spack.yaml file in the working dir, the user may
# have intended to use that
msg += (
"\n\n"
"Did you mean to install using the `spack.yaml`"
" in this directory? Try: \n"
" spack env activate .\n"
" spack install\n"
" OR\n"
" spack --env . install"
)
tty.die(msg)
def install(parser, args):
# TODO: unify args.verbose?
tty.set_verbose(args.verbose or args.install_verbose)
if args.help_cdash:
arguments.print_cdash_help()
spack.cmd.common.arguments.print_cdash_help()
return
if args.no_checksum:
@@ -340,154 +489,43 @@ def install(parser, args):
if args.deprecated:
spack.config.set("config:deprecated", True, scope="command_line")
if args.log_file and not args.log_format:
msg = "the '--log-format' must be specified when using '--log-file'"
tty.die(msg)
arguments.sanitize_reporter_options(args)
spack.cmd.common.arguments.sanitize_reporter_options(args)
def reporter_factory(specs):
if args.log_format is None:
return lang.nullcontext()
return None
return spack.report.build_context_manager(
context_manager = spack.report.build_context_manager(
reporter=args.reporter(), filename=report_filename(args, specs=specs), specs=specs
)
return context_manager
install_kwargs = install_kwargs_from_args(args)
env = ev.active_environment()
if not env and not args.spec and not args.specfiles:
_die_require_env()
try:
if env:
install_with_active_env(env, args, install_kwargs, reporter_factory)
else:
install_without_active_env(args, install_kwargs, reporter_factory)
except spack.build_environment.InstallError as e:
if args.show_log_on_error:
_dump_log_on_error(e)
raise
def _maybe_add_and_concretize(args, env, specs):
"""Handle the overloaded spack install behavior of adding
and automatically concretizing specs"""
# Users can opt out of accidental concretizations with --only-concrete
if args.only_concrete:
if not args.spec and not args.specfiles:
# If there are no args but an active environment then install the packages from it.
install_all_specs_from_active_environment(
install_kwargs=install_kwargs,
only_concrete=args.only_concrete,
cli_test_arg=args.test,
reporter_factory=reporter_factory,
)
return
# Otherwise, we will modify the environment.
with env.write_transaction():
# `spack add` adds these specs.
if args.add:
for spec in specs:
env.add(spec)
# Specs from CLI
abstract_specs, concrete_specs = specs_from_cli(args, install_kwargs)
# `spack concretize`
tests = compute_tests_install_kwargs(env.user_specs, args.test)
concretized_specs = env.concretize(tests=tests)
ev.display_specs(concretized_specs)
# save view regeneration for later, so that we only do it
# once, as it can be slow.
env.write(regenerate=False)
def install_with_active_env(env: ev.Environment, args, install_kwargs, reporter_factory):
specs = spack.cmd.parse_specs(args.spec)
# The following two commands are equivalent:
# 1. `spack install --add x y z`
# 2. `spack add x y z && spack concretize && spack install --only-concrete`
# here we do the `add` and `concretize` part.
_maybe_add_and_concretize(args, env, specs)
# Now we're doing `spack install --only-concrete`.
if args.add or not specs:
specs_to_install = env.concrete_roots()
if not specs_to_install:
tty.msg(f"{env.name} environment has no specs to install")
return
# `spack install x y z` without --add is installing matching specs in the env.
else:
specs_to_install = env.all_matching_specs(*specs)
if not specs_to_install:
msg = (
"Cannot install '{0}' because no matching specs are in the current environment."
" You can add specs to the environment with 'spack add {0}', or as part"
" of the install command with 'spack install --add {0}'"
).format(" ".join(args.spec))
tty.die(msg)
install_kwargs["tests"] = compute_tests_install_kwargs(specs_to_install, args.test)
if args.overwrite:
require_user_confirmation_for_overwrite(specs_to_install, args)
install_kwargs["overwrite"] = [spec.dag_hash() for spec in specs_to_install]
try:
with reporter_factory(specs_to_install):
env.install_specs(specs_to_install, **install_kwargs)
finally:
# TODO: this is doing way too much to trigger
# views and modules to be generated.
with env.write_transaction():
env.write(regenerate=True)
def concrete_specs_from_cli(args, install_kwargs):
"""Return abstract and concrete spec parsed from the command line."""
abstract_specs = spack.cmd.parse_specs(args.spec)
install_kwargs["tests"] = compute_tests_install_kwargs(abstract_specs, args.test)
try:
concrete_specs = spack.cmd.parse_specs(
args.spec, concretize=True, tests=install_kwargs["tests"]
)
except SpackError as e:
tty.debug(e)
if args.log_format is not None:
reporter = args.reporter()
reporter.concretization_report(report_filename(args, abstract_specs), e.message)
raise
return concrete_specs
def concrete_specs_from_file(args):
"""Return the list of concrete specs read from files."""
result = []
for file in args.specfiles:
with open(file, "r") as f:
if file.endswith("yaml") or file.endswith("yml"):
s = spack.spec.Spec.from_yaml(f)
else:
s = spack.spec.Spec.from_json(f)
concretized = s.concretized()
if concretized.dag_hash() != s.dag_hash():
msg = 'skipped invalid file "{0}". '
msg += "The file does not contain a concrete spec."
tty.warn(msg.format(file))
continue
result.append(concretized)
return result
def install_without_active_env(args, install_kwargs, reporter_factory):
concrete_specs = concrete_specs_from_cli(args, install_kwargs) + concrete_specs_from_file(args)
# Concrete specs from YAML or JSON files
specs_from_file = concrete_specs_from_file(args)
abstract_specs.extend(specs_from_file)
concrete_specs.extend(specs_from_file)
if len(concrete_specs) == 0:
tty.die("The `spack install` command requires a spec to install.")
with reporter_factory(concrete_specs):
reporter = reporter_factory(concrete_specs) or lang.nullcontext()
with reporter:
if args.overwrite:
require_user_confirmation_for_overwrite(concrete_specs, args)
install_kwargs["overwrite"] = [spec.dag_hash() for spec in concrete_specs]
installs = [(s.package, install_kwargs) for s in concrete_specs]
builder = PackageInstaller(installs)
builder.install()
install_specs(zip(abstract_specs, concrete_specs), install_kwargs, args)


@@ -335,7 +335,7 @@ def not_excluded_fn(args):
exclude_specs.extend(spack.cmd.parse_specs(str(args.exclude_specs).split()))
def not_excluded(x):
return not any(x.satisfies(y) for y in exclude_specs)
return not any(x.satisfies(y, strict=True) for y in exclude_specs)
return not_excluded


@@ -38,6 +38,6 @@ def remove(parser, args):
env.clear()
else:
for spec in spack.cmd.parse_specs(args.specs):
tty.msg("Removing %s from environment %s" % (spec, env.name))
env.remove(spec, args.list_name, force=args.force)
tty.msg(f"{spec} has been removed from {env.manifest}")
env.write()


@@ -26,6 +26,7 @@
description = "run spack's unit tests (wrapper around pytest)"
section = "developer"
level = "long"
is_windows = sys.platform == "win32"
def setup_parser(subparser):
@@ -211,7 +212,7 @@ def unit_test(parser, args, unknown_args):
# mock configuration used by unit tests
# Note: skip on windows here because for the moment,
# clingo is wholly unsupported from bootstrap
if sys.platform != "win32":
if not is_windows:
with spack.bootstrap.ensure_bootstrap_configuration():
spack.bootstrap.ensure_core_dependencies()
if pytest is None:


@@ -28,6 +28,8 @@
__all__ = ["Compiler"]
is_windows = sys.platform == "win32"
@llnl.util.lang.memoized
def _get_compiler_version_output(compiler_path, version_arg, ignore_errors=()):
@@ -596,7 +598,7 @@ def search_regexps(cls, language):
suffixes = [""]
# Windows compilers generally have an extension of some sort
# as do most files on Windows, handle that case here
if sys.platform == "win32":
if is_windows:
ext = r"\.(?:exe|bat)"
cls_suf = [suf + ext for suf in cls.suffixes]
ext_suf = [ext]


@@ -84,7 +84,7 @@ def _to_dict(compiler):
d = {}
d["spec"] = str(compiler.spec)
d["paths"] = dict((attr, getattr(compiler, attr, None)) for attr in _path_instance_vars)
d["flags"] = dict((fname, " ".join(fvals)) for fname, fvals in compiler.flags.items())
d["flags"] = dict((fname, fvals) for fname, fvals in compiler.flags)
d["flags"].update(
dict(
(attr, getattr(compiler, attr, None))


@@ -61,7 +61,7 @@ def is_clang_based(self):
return version >= ver("9.0") and "classic" not in str(version)
version_argument = "--version"
version_regex = r"[Cc]ray (?:clang|C :|C\+\+ :|Fortran :) [Vv]ersion.*?(\d+(\.\d+)+)"
version_regex = r"[Vv]ersion.*?(\d+(\.\d+)+)"
@property
def verbose_flag(self):


@@ -122,19 +122,7 @@ def platform_toolset_ver(self):
@property
def cl_version(self):
"""Cl toolset version"""
return Version(
re.search(
Msvc.version_regex,
spack.compiler.get_compiler_version_output(self.cc, version_arg=None),
).group(1)
)
@property
def vs_root(self):
# The MSVC install root is located at a fixed level above the compiler
# and is referenceable idiomatically via the pattern below
# this should be consistent across versions
return os.path.abspath(os.path.join(self.cc, "../../../../../../../.."))
return spack.compiler.get_compiler_version_output(self.cc)
def setup_custom_environment(self, pkg, env):
"""Set environment variables for MSVC using the
@@ -164,16 +152,15 @@ def setup_custom_environment(self, pkg, env):
out = out.decode("utf-16le", errors="replace") # novermin
int_env = dict(
(key, value)
(key.lower(), value)
for key, _, value in (line.partition("=") for line in out.splitlines())
if key and value
)
for env_var in int_env:
if os.pathsep not in int_env[env_var]:
env.set(env_var, int_env[env_var])
else:
env.set_path(env_var, int_env[env_var].split(os.pathsep))
if "path" in int_env:
env.set_path("PATH", int_env["path"].split(";"))
env.set_path("INCLUDE", int_env.get("include", "").split(";"))
env.set_path("LIB", int_env.get("lib", "").split(";"))
env.set("CC", self.cc)
env.set("CXX", self.cxx)


@@ -21,7 +21,6 @@
import tempfile
from contextlib import contextmanager
from itertools import chain
from typing import Union
import archspec.cpu
@@ -44,9 +43,7 @@
from spack.version import Version, VersionList, VersionRange, ver
#: implements rudimentary logic for ABI compatibility
_abi: Union[spack.abi.ABI, llnl.util.lang.Singleton] = llnl.util.lang.Singleton(
lambda: spack.abi.ABI()
)
_abi = llnl.util.lang.Singleton(lambda: spack.abi.ABI())
@functools.total_ordering
@@ -137,7 +134,7 @@ def _valid_virtuals_and_externals(self, spec):
externals = spec_externals(cspec)
for ext in externals:
if ext.intersects(spec):
if ext.satisfies(spec):
usable.append(ext)
# If nothing is in the usable list now, it's because we aren't
@@ -203,7 +200,7 @@ def concretize_version(self, spec):
# List of versions we could consider, in sorted order
pkg_versions = spec.package_class.versions
usable = [v for v in pkg_versions if any(v.intersects(sv) for sv in spec.versions)]
usable = [v for v in pkg_versions if any(v.satisfies(sv) for sv in spec.versions)]
yaml_prefs = PackagePrefs(spec.name, "version")
@@ -347,7 +344,7 @@ def concretize_architecture(self, spec):
new_target_arch = spack.spec.ArchSpec((None, None, str(new_target)))
curr_target_arch = spack.spec.ArchSpec((None, None, str(curr_target)))
if not new_target_arch.intersects(curr_target_arch):
if not new_target_arch.satisfies(curr_target_arch):
# new_target is an incorrect guess based on preferences
# and/or default
valid_target_ranges = str(curr_target).split(",")


@@ -36,10 +36,9 @@
import re
import sys
from contextlib import contextmanager
from typing import Dict, List, Optional, Union
from typing import Dict, List, Optional
import ruamel.yaml as yaml
from ruamel.yaml.comments import Comment
from ruamel.yaml.error import MarkedYAMLError
import llnl.util.lang
@@ -78,8 +77,6 @@
"config": spack.schema.config.schema,
"upstreams": spack.schema.upstreams.schema,
"bootstrap": spack.schema.bootstrap.schema,
"ci": spack.schema.ci.schema,
"cdash": spack.schema.cdash.schema,
}
# Same as above, but including keys for environments
@@ -363,12 +360,6 @@ def _process_dict_keyname_overrides(data):
if sk.endswith(":"):
key = syaml.syaml_str(sk[:-1])
key.override = True
elif sk.endswith("+"):
key = syaml.syaml_str(sk[:-1])
key.prepend = True
elif sk.endswith("-"):
key = syaml.syaml_str(sk[:-1])
key.append = True
else:
key = sk
@@ -544,14 +535,16 @@ def update_config(
scope = self._validate_scope(scope) # get ConfigScope object
# manually preserve comments
need_comment_copy = section in scope.sections and scope.sections[section]
need_comment_copy = section in scope.sections and scope.sections[section] is not None
if need_comment_copy:
comments = getattr(scope.sections[section][section], Comment.attrib, None)
comments = getattr(
scope.sections[section][section], yaml.comments.Comment.attrib, None
)
# read only the requested section's data.
scope.sections[section] = syaml.syaml_dict({section: update_data})
if need_comment_copy and comments:
setattr(scope.sections[section][section], Comment.attrib, comments)
setattr(scope.sections[section][section], yaml.comments.Comment.attrib, comments)
scope._write_section(section)
@@ -837,7 +830,7 @@ def _config():
#: This is the singleton configuration instance for Spack.
config: Union[Configuration, llnl.util.lang.Singleton] = llnl.util.lang.Singleton(_config)
config = llnl.util.lang.Singleton(_config)
def add_from_file(filename, scope=None):
@@ -1047,33 +1040,6 @@ def _override(string):
return hasattr(string, "override") and string.override
def _append(string):
"""Test if a spack YAML string is marked for appending.
See ``spack_yaml`` for details. Keys in Spack YAML can end in `-:`,
and if they do, their values are appended to lower-precedence
configs.
str, str : concatenate strings.
[obj], [obj] : append lists.
"""
return getattr(string, "append", False)
def _prepend(string):
"""Test if a spack YAML string is marked for prepending.
See ``spack_yaml`` for details. Keys in Spack YAML can end in `+:`,
and if they do, their values are prepended to lower-precedence
configs.
str, str : concatenate strings.
[obj], [obj] : prepend lists. (default behavior)
"""
return getattr(string, "prepend", False)
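In short, these predicates only read flags that the key-parsing code attaches to YAML key strings; a minimal standalone sketch, with a plain str subclass standing in for syaml_str and an invented key name:

class MarkedStr(str):
    """Stand-in for syaml_str: a str that can carry extra attributes."""

key = MarkedStr("definitions")   # what a YAML key written as "definitions+" becomes
key.prepend = True               # a trailing "+" sets prepend; "-" would set append
getattr(key, "prepend", False)   # True  -> merge_yaml prepends / concatenates
getattr(key, "append", False)    # False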
def _mark_internal(data, name):
"""Add a simple name mark to raw YAML/JSON data.
@@ -1136,57 +1102,7 @@ def get_valid_type(path):
raise ConfigError("Cannot determine valid type for path '%s'." % path)
def remove_yaml(dest, source):
"""UnMerges source from dest; entries in source take precedence over dest.
This routine may modify dest and should be assigned to dest, in
case dest was None to begin with, e.g.:
dest = remove_yaml(dest, source)
In the result, elements from lists from ``source`` will not appear
as elements of lists from ``dest``. Likewise, when iterating over keys
or items in merged ``OrderedDict`` objects, keys from ``source`` will not
appear as keys in ``dest``.
Config file authors can optionally end any attribute in a dict
with `::` instead of `:`, and the key will remove the entire section
from ``dest``
"""
def they_are(t):
return isinstance(dest, t) and isinstance(source, t)
# If source is None, overwrite with source.
if source is None:
return dest
# Source list is prepended (for precedence)
if they_are(list):
# Make sure to copy ruamel comments
dest[:] = [x for x in dest if x not in source]
return dest
# Source dict is merged into dest.
elif they_are(dict):
for sk, sv in source.items():
# always remove the dest items. Python dicts do not overwrite
# keys on insert, so this ensures that source keys are copied
# into dest along with mark provenance (i.e., file/line info).
unmerge = sk in dest
old_dest_value = dest.pop(sk, None)
if unmerge and not spack.config._override(sk):
dest[sk] = remove_yaml(old_dest_value, sv)
return dest
# If we reach here source and dest are either different types or are
# not both lists or dicts: replace with source.
return dest
def merge_yaml(dest, source, prepend=False, append=False):
def merge_yaml(dest, source):
"""Merges source into dest; entries in source take precedence over dest.
This routine may modify dest and should be assigned to dest, in
@@ -1202,9 +1118,6 @@ def merge_yaml(dest, source, prepend=False, append=False):
Config file authors can optionally end any attribute in a dict
with `::` instead of `:`, and the key will override that of the
parent instead of merging.
`+:` will extend the default prepend merge strategy to include string concatenation
`-:` will change the merge strategy to append; it also includes string concatenation
"""
def they_are(t):
@@ -1216,12 +1129,8 @@ def they_are(t):
# Source list is prepended (for precedence)
if they_are(list):
if append:
# Make sure to copy ruamel comments
dest[:] = [x for x in dest if x not in source] + source
else:
# Make sure to copy ruamel comments
dest[:] = source + [x for x in dest if x not in source]
# Make sure to copy ruamel comments
dest[:] = source + [x for x in dest if x not in source]
return dest
# Source dict is merged into dest.
@@ -1238,7 +1147,7 @@ def they_are(t):
old_dest_value = dest.pop(sk, None)
if merge and not _override(sk):
dest[sk] = merge_yaml(old_dest_value, sv, _prepend(sk), _append(sk))
dest[sk] = merge_yaml(old_dest_value, sv)
else:
# if sk ended with ::, or if it's new, completely override
dest[sk] = copy.deepcopy(sv)
@@ -1249,13 +1158,6 @@ def they_are(t):
return dest
elif they_are(str):
# Concatenate strings in prepend mode
if prepend:
return source + dest
elif append:
return dest + source
# If we reach here source and dest are either different types or are
# not both lists or dicts: replace with source.
return copy.copy(source)
@@ -1281,17 +1183,6 @@ def process_config_path(path):
front = syaml.syaml_str(front)
front.override = True
seen_override_in_path = True
elif front.endswith("+"):
front = front.rstrip("+")
front = syaml.syaml_str(front)
front.prepend = True
elif front.endswith("-"):
front = front.rstrip("-")
front = syaml.syaml_str(front)
front.append = True
result.append(front)
return result
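
For reference, the hunks above change how `merge_yaml` handles the `+:` (prepend) and `-:` (append) key suffixes. The following standalone sketch is a hypothetical, simplified illustration of the merge semantics described in the docstrings above; it is not Spack's implementation and uses made-up data.

def toy_merge(dest, source, prepend=False, append=False):
    # Lists: source is prepended by default, appended when append=True.
    if isinstance(dest, list) and isinstance(source, list):
        if append:
            return [x for x in dest if x not in source] + source
        return source + [x for x in dest if x not in source]
    # Strings: concatenate only in prepend/append mode, otherwise replace.
    if isinstance(dest, str) and isinstance(source, str):
        if prepend:
            return source + dest
        if append:
            return dest + source
    return source

print(toy_merge(["a", "b"], ["c"]))               # ['c', 'a', 'b'] (default prepend)
print(toy_merge(["a", "b"], ["c"], append=True))  # ['a', 'b', 'c']
print(toy_merge("-O2", " -g", append=True))       # '-O2 -g'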


@@ -39,10 +39,10 @@ def validate(configuration_file):
# Ensure we have a "container" attribute with sensible defaults set
env_dict = ev.config_dict(config)
env_dict.setdefault(
"container", {"format": "docker", "images": {"os": "ubuntu:22.04", "spack": "develop"}}
"container", {"format": "docker", "images": {"os": "ubuntu:18.04", "spack": "develop"}}
)
env_dict["container"].setdefault("format", "docker")
env_dict["container"].setdefault("images", {"os": "ubuntu:22.04", "spack": "develop"})
env_dict["container"].setdefault("images", {"os": "ubuntu:18.04", "spack": "develop"})
# Remove attributes that are not needed / allowed in the
# container recipe


@@ -12,90 +12,6 @@
},
"os_package_manager": "yum_amazon"
},
"fedora:38": {
"bootstrap": {
"template": "container/fedora_38.dockerfile",
"image": "docker.io/fedora:38"
},
"os_package_manager": "yum",
"build": "spack/fedora38",
"build_tags": {
"develop": "latest"
},
"final": {
"image": "docker.io/fedora:38"
}
},
"fedora:37": {
"bootstrap": {
"template": "container/fedora_37.dockerfile",
"image": "docker.io/fedora:37"
},
"os_package_manager": "yum",
"build": "spack/fedora37",
"build_tags": {
"develop": "latest"
},
"final": {
"image": "docker.io/fedora:37"
}
},
"rockylinux:9": {
"bootstrap": {
"template": "container/rockylinux_9.dockerfile",
"image": "docker.io/rockylinux:9"
},
"os_package_manager": "yum",
"build": "spack/rockylinux9",
"build_tags": {
"develop": "latest"
},
"final": {
"image": "docker.io/rockylinux:9"
}
},
"rockylinux:8": {
"bootstrap": {
"template": "container/rockylinux_8.dockerfile",
"image": "docker.io/rockylinux:8"
},
"os_package_manager": "yum",
"build": "spack/rockylinux8",
"build_tags": {
"develop": "latest"
},
"final": {
"image": "docker.io/rockylinux:8"
}
},
"almalinux:9": {
"bootstrap": {
"template": "container/almalinux_9.dockerfile",
"image": "quay.io/almalinux/almalinux:9"
},
"os_package_manager": "yum",
"build": "spack/almalinux9",
"build_tags": {
"develop": "latest"
},
"final": {
"image": "quay.io/almalinux/almalinux:9"
}
},
"almalinux:8": {
"bootstrap": {
"template": "container/almalinux_8.dockerfile",
"image": "quay.io/almalinux/almalinux:8"
},
"os_package_manager": "yum",
"build": "spack/almalinux8",
"build_tags": {
"develop": "latest"
},
"final": {
"image": "quay.io/almalinux/almalinux:8"
}
},
"centos:stream": {
"bootstrap": {
"template": "container/centos_stream.dockerfile",


@@ -7,7 +7,6 @@
"""
import collections
import copy
from typing import Optional
import spack.environment as ev
import spack.schema.env
@@ -132,9 +131,6 @@ class PathContext(tengine.Context):
directly via PATH.
"""
# Must be set by derived classes
template_name: Optional[str] = None
def __init__(self, config, last_phase):
self.config = ev.config_dict(config)
self.container_config = self.config["container"]
@@ -150,10 +146,6 @@ def __init__(self, config, last_phase):
# Record the last phase
self.last_phase = last_phase
@tengine.context_property
def depfile(self):
return self.container_config.get("depfile", False)
@tengine.context_property
def run(self):
"""Information related to the run image."""
@@ -288,8 +280,7 @@ def render_phase(self):
def __call__(self):
"""Returns the recipe as a string"""
env = tengine.make_environment()
template_name = self.container_config.get("template", self.template_name)
t = env.get_template(template_name)
t = env.get_template(self.template_name)
return t.render(**self.to_dict())


@@ -1525,7 +1525,7 @@ def _query(
if not (start_date < inst_date < end_date):
continue
if query_spec is any or rec.spec.satisfies(query_spec):
if query_spec is any or rec.spec.satisfies(query_spec, strict=True):
results.append(rec.spec)
return results
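
The hunk above only changes how installed records are matched against the query spec. As a hedged usage sketch (hypothetical, assuming a Spack checkout on sys.path and an existing install database; `spack.store.db` was the module-level database handle around the time of this diff):

import spack.store

# Returns the list of installed specs whose version satisfies the range.
matches = spack.store.db.query("zlib@1.2:")
for spec in matches:
    print(spec.name, spec.version, spec.dag_hash(7))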


@@ -29,6 +29,7 @@
import spack.util.spack_yaml
import spack.util.windows_registry
is_windows = sys.platform == "win32"
#: Information on a package that has been detected
DetectedPackage = collections.namedtuple("DetectedPackage", ["spec", "prefix"])
@@ -183,7 +184,7 @@ def library_prefix(library_dir):
elif "lib" in lowered_components:
idx = lowered_components.index("lib")
return os.sep.join(components[:idx])
elif sys.platform == "win32" and "bin" in lowered_components:
elif is_windows and "bin" in lowered_components:
idx = lowered_components.index("bin")
return os.sep.join(components[:idx])
else:
@@ -259,13 +260,13 @@ def find_windows_compiler_bundled_packages():
class WindowsKitExternalPaths(object):
if sys.platform == "win32":
if is_windows:
plat_major_ver = str(winOs.windows_version()[0])
@staticmethod
def find_windows_kit_roots():
"""Return Windows kit root, typically %programfiles%\\Windows Kits\\10|11\\"""
if sys.platform != "win32":
if not is_windows:
return []
program_files = os.environ["PROGRAMFILES(x86)"]
kit_base = os.path.join(
@@ -358,7 +359,7 @@ def compute_windows_program_path_for_package(pkg):
pkg (spack.package_base.PackageBase): package for which
Program Files location is to be computed
"""
if sys.platform != "win32":
if not is_windows:
return []
# note windows paths are fine here as this method should only ever be invoked
# to interact with Windows
@@ -378,7 +379,7 @@ def compute_windows_user_path_for_package(pkg):
installs see:
https://learn.microsoft.com/en-us/dotnet/api/system.environment.specialfolder?view=netframework-4.8
"""
if sys.platform != "win32":
if not is_windows:
return []
# Current user directory


@@ -31,6 +31,8 @@
path_to_dict,
)
is_windows = sys.platform == "win32"
def common_windows_package_paths():
paths = WindowsCompilerExternalPaths.find_windows_compiler_bundled_packages()
@@ -55,7 +57,7 @@ def executables_in_path(path_hints):
path_hints (list): list of paths to be searched. If None the list will be
constructed based on the PATH environment variable.
"""
if sys.platform == "win32":
if is_windows:
path_hints.extend(common_windows_package_paths())
search_paths = llnl.util.filesystem.search_paths_for_executables(*path_hints)
return path_to_dict(search_paths)
@@ -147,7 +149,7 @@ def by_library(packages_to_check, path_hints=None):
path_to_lib_name = (
libraries_in_ld_and_system_library_path(path_hints=path_hints)
if sys.platform != "win32"
if not is_windows
else libraries_in_windows_paths(path_hints)
)


@@ -32,7 +32,7 @@ class OpenMpi(Package):
import functools
import os.path
import re
from typing import List, Optional, Set
from typing import List, Set
import llnl.util.lang
import llnl.util.tty.color
@@ -317,100 +317,34 @@ def remove_directives(arg):
@directive("versions")
def version(
ver: str,
# this positional argument is deprecated, use sha256=... instead
checksum: Optional[str] = None,
*,
# generic version options
preferred: Optional[bool] = None,
deprecated: Optional[bool] = None,
no_cache: Optional[bool] = None,
# url fetch options
url: Optional[str] = None,
extension: Optional[str] = None,
expand: Optional[bool] = None,
fetch_options: Optional[dict] = None,
# url archive verification options
md5: Optional[str] = None,
sha1: Optional[str] = None,
sha224: Optional[str] = None,
sha256: Optional[str] = None,
sha384: Optional[str] = None,
sha512: Optional[str] = None,
# git fetch options
git: Optional[str] = None,
commit: Optional[str] = None,
tag: Optional[str] = None,
branch: Optional[str] = None,
get_full_repo: Optional[bool] = None,
submodules: Optional[bool] = None,
submodules_delete: Optional[bool] = None,
# other version control
svn: Optional[str] = None,
hg: Optional[str] = None,
cvs: Optional[str] = None,
revision: Optional[str] = None,
date: Optional[str] = None,
):
def version(ver, checksum=None, **kwargs):
"""Adds a version and, if appropriate, metadata for fetching its code.
The ``version`` directives are aggregated into a ``versions`` dictionary
attribute with ``Version`` keys and metadata values, where the metadata
is stored as a dictionary of ``kwargs``.
The (keyword) arguments are turned into a valid fetch strategy for
The ``dict`` of arguments is turned into a valid fetch strategy for
code packages later. See ``spack.fetch_strategy.for_package_version()``.
Keyword Arguments:
deprecated (bool): whether or not this version is deprecated
"""
def _execute_version(pkg):
if (
any((sha256, sha384, sha512, md5, sha1, sha224, checksum))
and hasattr(pkg, "has_code")
and not pkg.has_code
):
raise VersionChecksumError(
"{0}: Checksums not allowed in no-code packages "
"(see '{1}' version).".format(pkg.name, ver)
)
if checksum is not None:
if hasattr(pkg, "has_code") and not pkg.has_code:
raise VersionChecksumError(
"{0}: Checksums not allowed in no-code packages"
"(see '{1}' version).".format(pkg.name, ver)
)
kwargs = {
key: value
for key, value in (
("sha256", sha256),
("sha384", sha384),
("sha512", sha512),
("preferred", preferred),
("deprecated", deprecated),
("expand", expand),
("url", url),
("extension", extension),
("no_cache", no_cache),
("fetch_options", fetch_options),
("git", git),
("svn", svn),
("hg", hg),
("cvs", cvs),
("get_full_repo", get_full_repo),
("branch", branch),
("submodules", submodules),
("submodules_delete", submodules_delete),
("commit", commit),
("tag", tag),
("revision", revision),
("date", date),
("md5", md5),
("sha1", sha1),
("sha224", sha224),
("checksum", checksum),
)
if value is not None
}
kwargs["checksum"] = checksum
# Store kwargs for the package to later with a fetch_strategy.
version = Version(ver)
if isinstance(version, GitVersion):
if git is None and not hasattr(pkg, "git"):
if not hasattr(pkg, "git") and "git" not in kwargs:
msg = "Spack version directives cannot include git hashes fetched from"
msg += " URLs. Error in package '%s'\n" % pkg.name
msg += " version('%s', " % version.string


@@ -21,6 +21,7 @@
import spack.util.spack_json as sjson
from spack.error import SpackError
is_windows = sys.platform == "win32"
# Note: Posixpath is used here as opposed to
# os.path.join due to spack.spec.Spec.format
# requiring forward slash path separators at this stage
@@ -345,7 +346,7 @@ def remove_install_directory(self, spec, deprecated=False):
# Windows readonly files cannot be removed by Python
# directly, change permissions before attempting to remove
if sys.platform == "win32":
if is_windows:
kwargs = {
"ignore_errors": False,
"onerror": fs.readonly_file_handler(ignore_errors=False),


@@ -340,14 +340,11 @@
all_environments,
config_dict,
create,
create_in_dir,
deactivate,
default_manifest_yaml,
default_view_name,
display_specs,
environment_dir_from_name,
exists,
initialize_environment_dir,
installed_specs,
is_env_dir,
is_latest_format,
@@ -372,14 +369,11 @@
"all_environments",
"config_dict",
"create",
"create_in_dir",
"deactivate",
"default_manifest_yaml",
"default_view_name",
"display_specs",
"environment_dir_from_name",
"exists",
"initialize_environment_dir",
"installed_specs",
"is_env_dir",
"is_latest_format",


@@ -1,239 +0,0 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""
This module contains the traversal logic and models that can be used to generate
depfiles from an environment.
"""
import os
from enum import Enum
from typing import List, Optional
import spack.environment.environment as ev
import spack.spec
import spack.traverse as traverse
class UseBuildCache(Enum):
ONLY = 1
NEVER = 2
AUTO = 3
@staticmethod
def from_string(s: str) -> "UseBuildCache":
if s == "only":
return UseBuildCache.ONLY
elif s == "never":
return UseBuildCache.NEVER
elif s == "auto":
return UseBuildCache.AUTO
raise ValueError(f"invalid value for UseBuildCache: {s}")
def _deptypes(use_buildcache: UseBuildCache):
"""What edges should we follow for a given node? If it's a cache-only
node, then we can drop build type deps."""
return ("link", "run") if use_buildcache == UseBuildCache.ONLY else ("build", "link", "run")
class DepfileNode:
"""Contains a spec, a subset of its dependencies, and a flag whether it should be
buildcache only/never/auto."""
def __init__(
self, target: spack.spec.Spec, prereqs: List[spack.spec.Spec], buildcache: UseBuildCache
):
self.target = target
self.prereqs = prereqs
if buildcache == UseBuildCache.ONLY:
self.buildcache_flag = "--use-buildcache=only"
elif buildcache == UseBuildCache.NEVER:
self.buildcache_flag = "--use-buildcache=never"
else:
self.buildcache_flag = ""
class DepfileSpecVisitor:
"""This visitor produces an adjacency list of a (reduced) DAG, which
is used to generate depfile targets with their prerequisites. Currently
it only drops build deps when using buildcache only mode.
Note that the DAG could be reduced even more by dropping build edges of specs
installed at the moment the depfile is generated, but that would produce
stateful depfiles that would not fail when the database is wiped later."""
def __init__(self, pkg_buildcache: UseBuildCache, deps_buildcache: UseBuildCache):
self.adjacency_list: List[DepfileNode] = []
self.pkg_buildcache = pkg_buildcache
self.deps_buildcache = deps_buildcache
self.deptypes_root = _deptypes(pkg_buildcache)
self.deptypes_deps = _deptypes(deps_buildcache)
def neighbors(self, node):
"""Produce a list of spec to follow from node"""
deptypes = self.deptypes_root if node.depth == 0 else self.deptypes_deps
return traverse.sort_edges(node.edge.spec.edges_to_dependencies(deptype=deptypes))
def accept(self, node):
self.adjacency_list.append(
DepfileNode(
target=node.edge.spec,
prereqs=[edge.spec for edge in self.neighbors(node)],
buildcache=self.pkg_buildcache if node.depth == 0 else self.deps_buildcache,
)
)
# We already accepted this
return True
class MakefileModel:
"""This class produces all data to render a makefile for specs of an environment."""
def __init__(
self,
env: ev.Environment,
roots: List[spack.spec.Spec],
adjacency_list: List[DepfileNode],
make_prefix: Optional[str],
jobserver: bool,
):
"""
Args:
env: environment to generate the makefile for
roots: specs that get built in the default target
adjacency_list: list of DepfileNode, mapping specs to their dependencies
make_prefix: prefix for makefile targets
jobserver: when enabled, make will invoke Spack with jobserver support. For
dry-run this should be disabled.
"""
# Currently we can only use depfile with an environment since Spack needs to
# find the concrete specs somewhere.
self.env_path = env.path
# These specs are built in the default target.
self.roots = roots
# The SPACK_PACKAGE_IDS variable is "exported", which can be used when including
# generated makefiles to add post-install hooks, like pushing to a buildcache,
# running tests, etc.
if make_prefix is None:
self.make_prefix = os.path.join(env.env_subdir_path, "makedeps")
self.pkg_identifier_variable = "SPACK_PACKAGE_IDS"
else:
# NOTE: GNU Make allows directory separators in variable names, so for consistency
# we can namespace this variable with the same prefix as targets.
self.make_prefix = make_prefix
self.pkg_identifier_variable = os.path.join(make_prefix, "SPACK_PACKAGE_IDS")
# And here we collect a tuple of (target, prereqs, dag_hash, nice_name, buildcache_flag)
self.make_adjacency_list = [
(
self._safe_name(item.target),
" ".join(self._install_target(self._safe_name(s)) for s in item.prereqs),
item.target.dag_hash(),
item.target.format("{name}{@version}{%compiler}{variants}{arch=architecture}"),
item.buildcache_flag,
)
for item in adjacency_list
]
# Root specs without deps are the prereqs for the environment target
self.root_install_targets = [self._install_target(self._safe_name(s)) for s in roots]
self.jobserver_support = "+" if jobserver else ""
# All package identifiers, used to generate the SPACK_PACKAGE_IDS variable
self.all_pkg_identifiers: List[str] = []
# All install and install-deps targets
self.all_install_related_targets: List[str] = []
# Convenience shortcuts: ensure that `make install/pkg-version-hash` triggers
# <absolute path to env>/.spack-env/makedeps/install/pkg-version-hash in case
# we don't have a custom make target prefix.
self.phony_convenience_targets: List[str] = []
for node in adjacency_list:
tgt = self._safe_name(node.target)
self.all_pkg_identifiers.append(tgt)
self.all_install_related_targets.append(self._install_target(tgt))
self.all_install_related_targets.append(self._install_deps_target(tgt))
if make_prefix is None:
self.phony_convenience_targets.append(os.path.join("install", tgt))
self.phony_convenience_targets.append(os.path.join("install-deps", tgt))
def _safe_name(self, spec: spack.spec.Spec) -> str:
return spec.format("{name}-{version}-{hash}")
def _target(self, name: str) -> str:
# The `all` and `clean` targets are phony. It doesn't make sense to
# have /abs/path/to/env/metadir/{all,clean} targets. But it *does* make
# sense to have a prefix like `env/all`, `env/clean` when they are
# supposed to be included
if name in ("all", "clean") and os.path.isabs(self.make_prefix):
return name
else:
return os.path.join(self.make_prefix, name)
def _install_target(self, name: str) -> str:
return os.path.join(self.make_prefix, "install", name)
def _install_deps_target(self, name: str) -> str:
return os.path.join(self.make_prefix, "install-deps", name)
def to_dict(self):
return {
"all_target": self._target("all"),
"env_target": self._target("env"),
"clean_target": self._target("clean"),
"all_install_related_targets": " ".join(self.all_install_related_targets),
"root_install_targets": " ".join(self.root_install_targets),
"dirs_target": self._target("dirs"),
"environment": self.env_path,
"install_target": self._target("install"),
"install_deps_target": self._target("install-deps"),
"any_hash_target": self._target("%"),
"jobserver_support": self.jobserver_support,
"adjacency_list": self.make_adjacency_list,
"phony_convenience_targets": " ".join(self.phony_convenience_targets),
"pkg_ids_variable": self.pkg_identifier_variable,
"pkg_ids": " ".join(self.all_pkg_identifiers),
}
@staticmethod
def from_env(
env: ev.Environment,
*,
filter_specs: Optional[List[spack.spec.Spec]] = None,
pkg_buildcache: UseBuildCache = UseBuildCache.AUTO,
dep_buildcache: UseBuildCache = UseBuildCache.AUTO,
make_prefix: Optional[str] = None,
jobserver: bool = True,
) -> "MakefileModel":
"""Produces a MakefileModel from an environment and a list of specs.
Args:
env: the environment to use
filter_specs: if provided, only these specs will be built from the environment,
otherwise the environment roots are used.
pkg_buildcache: whether to only use the buildcache for top-level specs.
dep_buildcache: whether to only use the buildcache for non-top-level specs.
make_prefix: the prefix for the makefile targets
jobserver: when enabled, make will invoke Spack with jobserver support. For
dry-run this should be disabled.
"""
# If no specs are provided as a filter, build all the specs in the environment.
if filter_specs:
entrypoints = [env.matching_spec(s) for s in filter_specs]
else:
entrypoints = [s for _, s in env.concretized_specs()]
visitor = DepfileSpecVisitor(pkg_buildcache, dep_buildcache)
traverse.traverse_breadth_first_with_visitor(
entrypoints, traverse.CoverNodesVisitor(visitor, key=lambda s: s.dag_hash())
)
return MakefileModel(env, entrypoints, visitor.adjacency_list, make_prefix, jobserver)
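
The module above (removed in this diff) is driven roughly as follows. A hedged sketch, assuming a Spack checkout on sys.path, that the module lives at spack.environment.depfile, and that an already-concretized environment is active:

import spack.environment as ev
from spack.environment.depfile import MakefileModel, UseBuildCache

env = ev.active_environment()  # requires `spack env activate` beforehand
model = MakefileModel.from_env(
    env,
    pkg_buildcache=UseBuildCache.AUTO,
    dep_buildcache=UseBuildCache.AUTO,
    jobserver=True,
)
# The resulting dictionary feeds a Makefile template: targets, prerequisites,
# DAG hashes and the exported SPACK_PACKAGE_IDS variable.
print(model.to_dict()["pkg_ids_variable"])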

File diff suppressed because it is too large.


@@ -28,6 +28,7 @@
import os.path
import re
import shutil
import sys
import urllib.parse
from typing import List, Optional
@@ -52,6 +53,7 @@
#: List of all fetch strategies, created by FetchStrategy metaclass.
all_strategies = []
is_windows = sys.platform == "win32"
CONTENT_TYPE_MISMATCH_WARNING_TEMPLATE = (
"The contents of {subject} look like {content_type}. Either the URL"
@@ -1501,7 +1503,7 @@ def _from_merged_attrs(fetcher, pkg, version):
return fetcher(**attrs)
def for_package_version(pkg, version=None):
def for_package_version(pkg, version):
"""Determine a fetch strategy based on the arguments supplied to
version() in the package description."""
@@ -1512,18 +1514,8 @@ def for_package_version(pkg, version=None):
check_pkg_attributes(pkg)
if version is not None:
assert not pkg.spec.concrete, "concrete specs should not pass the 'version=' argument"
# Specs are initialized with the universe range if no version information is given,
# so here we make sure we always match the version passed as an argument
if not isinstance(version, spack.version.VersionBase):
version = spack.version.Version(version)
version_list = spack.version.VersionList()
version_list.add(version)
pkg.spec.versions = version_list
else:
version = pkg.version
if not isinstance(version, spack.version.VersionBase):
version = spack.version.Version(version)
# if it's a commit, we must use a GitFetchStrategy
if isinstance(version, spack.version.GitVersion):


@@ -12,7 +12,7 @@
Currently the following hooks are supported:
* pre_install(spec)
* post_install(spec, explicit)
* post_install(spec)
* pre_uninstall(spec)
* post_uninstall(spec)
* on_install_start(spec)
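
A hook module is just a Python file in Spack's hooks directory that defines whichever of the functions above it needs. A minimal hypothetical sketch, following the single-argument post_install(spec) form used on this side of the diff:

# hypothetical_hook.py -- would live alongside the other hook modules
def post_install(spec):
    print("installed {0} into {1}".format(spec.name, spec.prefix))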


@@ -131,7 +131,7 @@ def find_and_patch_sonames(prefix, exclude_list, patchelf):
return patch_sonames(patchelf, prefix, relative_paths)
def post_install(spec, explicit=None):
def post_install(spec):
# Skip if disabled
if not spack.config.get("config:shared_linking:bind", False):
return


@@ -9,7 +9,8 @@
from llnl.util.filesystem import mkdirp
from llnl.util.symlink import symlink
import spack.util.editor as ed
from spack.util.editor import editor
from spack.util.executable import Executable, which
def pre_install(spec):
@@ -37,9 +38,29 @@ def set_up_license(pkg):
if not os.path.exists(license_path):
# Create a new license file
write_license_file(pkg, license_path)
# Open up file in user's favorite $EDITOR for editing
editor_exe = None
if "VISUAL" in os.environ:
editor_exe = Executable(os.environ["VISUAL"])
# gvim runs in the background by default so we force it to run
# in the foreground to make sure the license file is updated
# before we try to install
if "gvim" in os.environ["VISUAL"]:
editor_exe.add_default_arg("-f")
elif "EDITOR" in os.environ:
editor_exe = Executable(os.environ["EDITOR"])
else:
editor_exe = which("vim", "vi", "emacs", "nano")
if editor_exe is None:
raise EnvironmentError(
"No text editor found! Please set the VISUAL and/or EDITOR"
" environment variable(s) to your preferred text editor."
)
# use spack.util.executable so the editor does not hang on return here
ed.editor(license_path, exec_fn=ed.executable)
def editor_wrapper(exe, args):
editor_exe(license_path)
editor(license_path, _exec_func=editor_wrapper)
else:
# Use already existing license file
tty.msg("Found already existing license %s" % license_path)
@@ -148,7 +169,7 @@ def write_license_file(pkg, license_path):
f.close()
def post_install(spec, explicit=None):
def post_install(spec):
"""This hook symlinks local licenses to the global license for
licensed software.
"""


@@ -3,24 +3,31 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from llnl.util import tty
import llnl.util.tty as tty
import spack.config
import spack.modules
import spack.modules.common
def _for_each_enabled(spec, method_name, explicit=None):
def _for_each_enabled(spec, method_name):
"""Calls a method for each enabled module"""
spack.modules.ensure_modules_are_enabled_or_warn()
set_names = set(spack.config.get("modules", {}).keys())
# If we have old-style modules enabled, we put those in the default set
old_default_enabled = spack.config.get("modules:enable")
if old_default_enabled:
set_names.add("default")
for name in set_names:
enabled = spack.config.get("modules:%s:enable" % name)
if name == "default":
# combine enabled modules from default and old format
enabled = spack.config.merge_yaml(old_default_enabled, enabled)
if not enabled:
tty.debug("NO MODULE WRITTEN: list of enabled module files is empty")
continue
for module_type in enabled:
generator = spack.modules.module_types[module_type](spec, name, explicit)
for type in enabled:
generator = spack.modules.module_types[type](spec, name)
try:
getattr(generator, method_name)()
except RuntimeError as e:
@@ -29,7 +36,7 @@ def _for_each_enabled(spec, method_name, explicit=None):
tty.warn(msg.format(method_name, str(e)))
def post_install(spec, explicit):
def post_install(spec):
import spack.environment as ev # break import cycle
if ev.active_environment():
@@ -38,7 +45,7 @@ def post_install(spec, explicit):
# can manage interactions between env views and modules
return
_for_each_enabled(spec, "write", explicit)
_for_each_enabled(spec, "write")
def post_uninstall(spec):
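
The old-format/named-set combination performed by `_for_each_enabled` above can be pictured with a small standalone toy (hypothetical data, not Spack's implementation): the legacy `modules:enable` list is folded into the "default" named set, taking precedence as a prepend.

config_sets = {
    "default": {"enable": ["lmod"]},
    "site_hpc": {"enable": ["tcl"]},
}
legacy_enable = ["tcl"]  # old-style top-level modules:enable

for name, section in config_sets.items():
    enabled = list(section.get("enable", []))
    if name == "default":
        enabled = legacy_enable + [m for m in enabled if m not in legacy_enable]
    print(name, enabled)
# default ['tcl', 'lmod']
# site_hpc ['tcl']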


@@ -8,7 +8,7 @@
import spack.util.file_permissions as fp
def post_install(spec, explicit=None):
def post_install(spec):
if not spec.external:
fp.set_permissions_by_spec(spec.prefix, spec)


@@ -30,7 +30,8 @@
#: Groupdb does not exist on Windows, prevent imports
#: on supported systems
if sys.platform != "win32":
is_windows = sys.platform == "win32"
if not is_windows:
import grp
#: Spack itself also limits the shebang line to at most 4KB, which should be plenty.
@@ -224,7 +225,7 @@ def install_sbang():
os.rename(sbang_tmp_path, sbang_path)
def post_install(spec, explicit=None):
def post_install(spec):
"""This hook edits scripts so that they call /bin/bash
$spack_prefix/bin/sbang instead of something longer than the
shebang limit.


@@ -6,6 +6,6 @@
import spack.verify
def post_install(spec, explicit=None):
def post_install(spec):
if not spec.external:
spack.verify.write_manifest(spec)


@@ -84,6 +84,9 @@
#: queue invariants).
STATUS_REMOVED = "removed"
is_windows = sys.platform == "win32"
is_osx = sys.platform == "darwin"
class InstallAction(object):
#: Don't perform an install
@@ -166,9 +169,9 @@ def _do_fake_install(pkg):
if not pkg.name.startswith("lib"):
library = "lib" + library
plat_shared = ".dll" if sys.platform == "win32" else ".so"
plat_static = ".lib" if sys.platform == "win32" else ".a"
dso_suffix = ".dylib" if sys.platform == "darwin" else plat_shared
plat_shared = ".dll" if is_windows else ".so"
plat_static = ".lib" if is_windows else ".a"
dso_suffix = ".dylib" if is_osx else plat_shared
# Install fake command
fs.mkdirp(pkg.prefix.bin)
@@ -315,7 +318,7 @@ def _install_from_cache(pkg, cache_only, explicit, unsigned=False):
tty.debug("Successfully extracted {0} from binary cache".format(pkg_id))
_print_timer(pre=_log_prefix(pkg.name), pkg_id=pkg_id, timer=t)
_print_installed_pkg(pkg.spec.prefix)
spack.hooks.post_install(pkg.spec, explicit)
spack.hooks.post_install(pkg.spec)
return True
@@ -353,7 +356,7 @@ def _process_external_package(pkg, explicit):
# For external packages we just need to run
# post-install hooks to generate module files.
tty.debug("{0} generating module file".format(pre))
spack.hooks.post_install(spec, explicit)
spack.hooks.post_install(spec)
# Add to the DB
tty.debug("{0} registering into DB".format(pre))
@@ -1260,10 +1263,6 @@ def _install_task(self, task):
if not pkg.unit_test_check():
return
# Injecting information to know if this installation request is the root one
# to determine in BuildProcessInstaller whether installation is explicit or not
install_args["is_root"] = task.is_root
try:
self._setup_install_dir(pkg)
@@ -1883,9 +1882,6 @@ def __init__(self, pkg, install_args):
# whether to enable echoing of build output initially or not
self.verbose = install_args.get("verbose", False)
# whether installation was explicitly requested by the user
self.explicit = install_args.get("is_root", False) and install_args.get("explicit", True)
# env before starting installation
self.unmodified_env = install_args.get("unmodified_env", {})
@@ -1946,7 +1942,7 @@ def run(self):
self.timer.write_json(timelog)
# Run post install hooks before build stage is removed.
spack.hooks.post_install(self.pkg.spec, self.explicit)
spack.hooks.post_install(self.pkg.spec)
_print_timer(pre=self.pre, pkg_id=self.pkg_id, timer=self.timer)
_print_installed_pkg(self.pkg.prefix)
@@ -2420,10 +2416,7 @@ def get_deptypes(self, pkg):
else:
cache_only = self.install_args.get("dependencies_cache_only")
# Include build dependencies if pkg is not installed and cache_only
# is False, or if build dependencies are explicitly called for
# by include_build_deps.
if include_build_deps or not (cache_only or pkg.spec.installed):
if not cache_only or include_build_deps:
deptypes.append("build")
if self.run_tests(pkg):
deptypes.append("test")


@@ -575,7 +575,7 @@ def setup_main_options(args):
if args.debug:
spack.util.debug.register_interrupt_handler()
spack.config.set("config:debug", True, scope="command_line")
spack.util.environment.TRACING_ENABLED = True
spack.util.environment.tracing_enabled = True
if args.timestamp:
tty.set_timestamp(True)


@@ -492,7 +492,7 @@ def get_matching_versions(specs, num_versions=1):
break
# Generate only versions that satisfy the spec.
if spec.concrete or v.intersects(spec.versions):
if spec.concrete or v.satisfies(spec.versions):
s = spack.spec.Spec(pkg.name)
s.versions = VersionList([v])
s.variants = spec.variants.copy()


@@ -59,10 +59,9 @@ def filter_compiler_wrappers(*files, **kwargs):
find_kwargs = {"recursive": kwargs.get("recursive", False)}
def _filter_compiler_wrappers_impl(pkg_or_builder):
pkg = getattr(pkg_or_builder, "pkg", pkg_or_builder)
def _filter_compiler_wrappers_impl(self):
# Compute the absolute path of the search root
root = os.path.join(pkg.prefix, relative_root) if relative_root else pkg.prefix
root = os.path.join(self.prefix, relative_root) if relative_root else self.prefix
# Compute the absolute path of the files to be filtered and
# remove links from the list.
@@ -72,10 +71,10 @@ def _filter_compiler_wrappers_impl(pkg_or_builder):
x = llnl.util.filesystem.FileFilter(*abs_files)
compiler_vars = [
("CC", pkg.compiler.cc),
("CXX", pkg.compiler.cxx),
("F77", pkg.compiler.f77),
("FC", pkg.compiler.fc),
("CC", self.compiler.cc),
("CXX", self.compiler.cxx),
("F77", self.compiler.f77),
("FC", self.compiler.fc),
]
# Some paths to the compiler wrappers might be substrings of the others.
@@ -104,11 +103,11 @@ def _filter_compiler_wrappers_impl(pkg_or_builder):
x.filter(wrapper_path, compiler_path, **filter_kwargs)
# Remove this linking flag if present (it turns RPATH into RUNPATH)
x.filter("{0}--enable-new-dtags".format(pkg.compiler.linker_arg), "", **filter_kwargs)
x.filter("{0}--enable-new-dtags".format(self.compiler.linker_arg), "", **filter_kwargs)
# NAG compiler is usually mixed with GCC, which has a different
# prefix for linker arguments.
if pkg.compiler.name == "nag":
if self.compiler.name == "nag":
x.filter("-Wl,--enable-new-dtags", "", **filter_kwargs)
spack.builder.run_after(after)(_filter_compiler_wrappers_impl)


@@ -4,20 +4,15 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""This package contains code for creating environment modules, which can
include Tcl non-hierarchical modules, Lua hierarchical modules, and others.
include TCL non-hierarchical modules, LUA hierarchical modules, and others.
"""
from __future__ import absolute_import
from .common import disable_modules, ensure_modules_are_enabled_or_warn
from .common import disable_modules
from .lmod import LmodModulefileWriter
from .tcl import TclModulefileWriter
__all__ = [
"TclModulefileWriter",
"LmodModulefileWriter",
"disable_modules",
"ensure_modules_are_enabled_or_warn",
]
__all__ = ["TclModulefileWriter", "LmodModulefileWriter", "disable_modules"]
module_types = {"tcl": TclModulefileWriter, "lmod": LmodModulefileWriter}


@@ -33,9 +33,7 @@
import datetime
import inspect
import os.path
import pathlib
import re
import warnings
from typing import Optional
import llnl.util.filesystem
@@ -209,7 +207,7 @@ def merge_config_rules(configuration, spec):
# evaluated in order of appearance in the module file
spec_configuration = module_specific_configuration.pop("all", {})
for constraint, action in module_specific_configuration.items():
if spec.satisfies(constraint):
if spec.satisfies(constraint, strict=True):
if hasattr(constraint, "override") and constraint.override:
spec_configuration = {}
update_dictionary_extending_lists(spec_configuration, action)
@@ -430,17 +428,12 @@ class BaseConfiguration(object):
default_projections = {"all": "{name}-{version}-{compiler.name}-{compiler.version}"}
def __init__(self, spec, module_set_name, explicit=None):
def __init__(self, spec, module_set_name):
# Module where type(self) is defined
self.module = inspect.getmodule(self)
# Spec for which we want to generate a module file
self.spec = spec
self.name = module_set_name
# Software installation has been explicitly asked (get this information from
# db when querying an existing module, like during a refresh or rm operations)
if explicit is None:
explicit = spec._installed_explicitly()
self.explicit = explicit
# Dictionary of configuration options that should be applied
# to the spec
self.conf = merge_config_rules(self.module.configuration(self.name), self.spec)
@@ -526,7 +519,8 @@ def excluded(self):
# Should I exclude the module because it's implicit?
# DEPRECATED: remove 'blacklist_implicits' in v0.20
exclude_implicits = get_deprecated(conf, "exclude_implicits", "blacklist_implicits", None)
excluded_as_implicit = exclude_implicits and not self.explicit
installed_implicitly = not spec._installed_explicitly()
excluded_as_implicit = exclude_implicits and installed_implicitly
def debug_info(line_header, match_list):
if match_list:
@@ -705,7 +699,7 @@ def configure_options(self):
if os.path.exists(pkg.install_configure_args_path):
with open(pkg.install_configure_args_path, "r") as args_file:
return spack.util.path.padding_filter(args_file.read())
return args_file.read()
# Returning a false-like value makes the default templates skip
# the configure option section
@@ -794,8 +788,7 @@ def autoload(self):
def _create_module_list_of(self, what):
m = self.conf.module
name = self.conf.name
explicit = self.conf.explicit
return [m.make_layout(x, name, explicit).use_name for x in getattr(self.conf, what)]
return [m.make_layout(x, name).use_name for x in getattr(self.conf, what)]
@tengine.context_property
def verbose(self):
@@ -803,45 +796,8 @@ def verbose(self):
return self.conf.verbose
def ensure_modules_are_enabled_or_warn():
"""Ensures that, if a custom configuration file is found with custom configuration for the
default tcl module set, then tcl module file generation is enabled. Otherwise, a warning
is emitted.
"""
# TODO (v0.21 - Remove this function)
# Check if TCL module generation is enabled, return early if it is
enabled = spack.config.get("modules:default:enable", [])
if "tcl" in enabled:
return
# Check if we have custom TCL module sections
for scope in spack.config.config.file_scopes:
# Skip default configuration
if scope.name.startswith("default"):
continue
data = spack.config.get("modules:default:tcl", scope=scope.name)
if data:
config_file = pathlib.Path(scope.path)
if not scope.name.startswith("env"):
config_file = config_file / "modules.yaml"
break
else:
return
# If we are here we have a custom "modules" section in "config_file"
msg = (
f"detected custom TCL modules configuration in {config_file}, while TCL module file "
f"generation for the default module set is disabled. "
f"In Spack v0.20 module file generation has been disabled by default. To enable "
f"it run:\n\n\t$ spack config add 'modules:default:enable:[tcl]'\n"
)
warnings.warn(msg)
class BaseModuleFileWriter(object):
def __init__(self, spec, module_set_name, explicit=None):
def __init__(self, spec, module_set_name):
self.spec = spec
# This class is meant to be derived. Get the module of the
@@ -850,9 +806,9 @@ def __init__(self, spec, module_set_name, explicit=None):
m = self.module
# Create the triplet of configuration/layout/context
self.conf = m.make_configuration(spec, module_set_name, explicit)
self.layout = m.make_layout(spec, module_set_name, explicit)
self.context = m.make_context(spec, module_set_name, explicit)
self.conf = m.make_configuration(spec, module_set_name)
self.layout = m.make_layout(spec, module_set_name)
self.context = m.make_context(spec, module_set_name)
# Check if a default template has been defined,
# throw if not found
@@ -974,7 +930,6 @@ def remove(self):
if os.path.exists(mod_file):
try:
os.remove(mod_file) # Remove the module file
self.remove_module_defaults() # Remove default targeting module file
os.removedirs(
os.path.dirname(mod_file)
) # Remove all the empty directories from the leaf up
@@ -982,18 +937,6 @@ def remove(self):
# removedirs throws OSError on first non-empty directory found
pass
def remove_module_defaults(self):
if not any(self.spec.satisfies(default) for default in self.conf.defaults):
return
# This spec matches a default, symlink needs to be removed as we remove the module
# file it targets.
default_symlink = os.path.join(os.path.dirname(self.layout.filename), "default")
try:
os.unlink(default_symlink)
except OSError:
pass
@contextlib.contextmanager
def disable_modules():


@@ -33,26 +33,24 @@ def configuration(module_set_name):
configuration_registry: Dict[str, Any] = {}
def make_configuration(spec, module_set_name, explicit):
def make_configuration(spec, module_set_name):
"""Returns the lmod configuration for spec"""
key = (spec.dag_hash(), module_set_name, explicit)
key = (spec.dag_hash(), module_set_name)
try:
return configuration_registry[key]
except KeyError:
return configuration_registry.setdefault(
key, LmodConfiguration(spec, module_set_name, explicit)
)
return configuration_registry.setdefault(key, LmodConfiguration(spec, module_set_name))
def make_layout(spec, module_set_name, explicit):
def make_layout(spec, module_set_name):
"""Returns the layout information for spec"""
conf = make_configuration(spec, module_set_name, explicit)
conf = make_configuration(spec, module_set_name)
return LmodFileLayout(conf)
def make_context(spec, module_set_name, explicit):
def make_context(spec, module_set_name):
"""Returns the context information for spec"""
conf = make_configuration(spec, module_set_name, explicit)
conf = make_configuration(spec, module_set_name)
return LmodContext(conf)
@@ -73,7 +71,7 @@ def guess_core_compilers(name, store=False):
# A compiler is considered to be a core compiler if any of the
# C, C++ or Fortran compilers reside in a system directory
is_system_compiler = any(
os.path.dirname(x) in spack.util.environment.SYSTEM_DIRS
os.path.dirname(x) in spack.util.environment.system_dirs
for x in compiler["paths"].values()
if x is not None
)
@@ -126,11 +124,6 @@ def core_specs(self):
"""Returns the list of "Core" specs"""
return configuration(self.name).get("core_specs", [])
@property
def filter_hierarchy_specs(self):
"""Returns the dict of specs with modified hierarchies"""
return configuration(self.name).get("filter_hierarchy_specs", {})
@property
def hierarchy_tokens(self):
"""Returns the list of tokens that are part of the modulefile
@@ -165,21 +158,11 @@ def requires(self):
if any(self.spec.satisfies(core_spec) for core_spec in self.core_specs):
return {"compiler": self.core_compilers[0]}
hierarchy_filter_list = []
for spec, filter_list in self.filter_hierarchy_specs.items():
if self.spec.satisfies(spec):
hierarchy_filter_list = filter_list
break
# Keep track of the requirements that this package has in terms
# of virtual packages that participate in the hierarchical structure
requirements = {"compiler": self.spec.compiler}
# For each virtual dependency in the hierarchy
for x in self.hierarchy_tokens:
# Skip anything filtered for this spec
if x in hierarchy_filter_list:
continue
# If I depend on it
if x in self.spec and not self.spec.package.provides(x):
requirements[x] = self.spec[x] # record the actual provider
@@ -426,7 +409,7 @@ def missing(self):
@tengine.context_property
def unlocked_paths(self):
"""Returns the list of paths that are unlocked unconditionally."""
layout = make_layout(self.spec, self.conf.name, self.conf.explicit)
layout = make_layout(self.spec, self.conf.name)
return [os.path.join(*parts) for parts in layout.unlocked_paths[None]]
@tengine.context_property
@@ -434,7 +417,7 @@ def conditionally_unlocked_paths(self):
"""Returns the list of paths that are unlocked conditionally.
Each item in the list is a tuple with the structure (condition, path).
"""
layout = make_layout(self.spec, self.conf.name, self.conf.explicit)
layout = make_layout(self.spec, self.conf.name)
value = []
conditional_paths = layout.unlocked_paths
conditional_paths.pop(None)


@@ -3,7 +3,7 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""This module implements the classes necessary to generate Tcl
"""This module implements the classes necessary to generate TCL
non-hierarchical modules.
"""
import posixpath
@@ -19,7 +19,7 @@
from .common import BaseConfiguration, BaseContext, BaseFileLayout, BaseModuleFileWriter
#: Tcl specific part of the configuration
#: TCL specific part of the configuration
def configuration(module_set_name):
config_path = "modules:%s:tcl" % module_set_name
config = spack.config.get(config_path, {})
@@ -30,26 +30,24 @@ def configuration(module_set_name):
configuration_registry: Dict[str, Any] = {}
def make_configuration(spec, module_set_name, explicit):
def make_configuration(spec, module_set_name):
"""Returns the tcl configuration for spec"""
key = (spec.dag_hash(), module_set_name, explicit)
key = (spec.dag_hash(), module_set_name)
try:
return configuration_registry[key]
except KeyError:
return configuration_registry.setdefault(
key, TclConfiguration(spec, module_set_name, explicit)
)
return configuration_registry.setdefault(key, TclConfiguration(spec, module_set_name))
def make_layout(spec, module_set_name, explicit):
def make_layout(spec, module_set_name):
"""Returns the layout information for spec"""
conf = make_configuration(spec, module_set_name, explicit)
conf = make_configuration(spec, module_set_name)
return TclFileLayout(conf)
def make_context(spec, module_set_name, explicit):
def make_context(spec, module_set_name):
"""Returns the context information for spec"""
conf = make_configuration(spec, module_set_name, explicit)
conf = make_configuration(spec, module_set_name)
return TclContext(conf)


@@ -36,7 +36,7 @@
cmake_cache_path,
cmake_cache_string,
)
from spack.build_systems.cmake import CMakePackage, generator
from spack.build_systems.cmake import CMakePackage
from spack.build_systems.cuda import CudaPackage
from spack.build_systems.generic import Package
from spack.build_systems.gnu import GNUMirrorPackage


@@ -57,7 +57,7 @@
from spack.filesystem_view import YamlFilesystemView
from spack.install_test import TestFailure, TestSuite
from spack.installer import InstallError, PackageInstaller
from spack.stage import ResourceStage, Stage, StageComposite, compute_stage_name
from spack.stage import ResourceStage, Stage, StageComposite, stage_prefix
from spack.util.executable import ProcessError, which
from spack.util.package_hash import package_hash
from spack.util.prefix import Prefix
@@ -92,6 +92,9 @@
_spack_configure_argsfile = "spack-configure-args.txt"
is_windows = sys.platform == "win32"
def deprecated_version(pkg, version):
"""Return True if the version is deprecated, False otherwise.
@@ -162,7 +165,7 @@ def windows_establish_runtime_linkage(self):
Performs symlinking to incorporate rpath dependencies to Windows runtime search paths
"""
if sys.platform == "win32":
if is_windows:
self.win_rpath.add_library_dependent(*self.win_add_library_dependent())
self.win_rpath.add_rpath(*self.win_add_rpath())
self.win_rpath.establish_link()
@@ -207,7 +210,7 @@ def to_windows_exe(exe):
plat_exe = []
if hasattr(cls, "executables"):
for exe in cls.executables:
if sys.platform == "win32":
if is_windows:
exe = to_windows_exe(exe)
plat_exe.append(exe)
return plat_exe
@@ -1022,7 +1025,8 @@ def _make_root_stage(self, fetcher):
)
# Construct a path where the stage should build..
s = self.spec
stage_name = compute_stage_name(s)
stage_name = "{0}{1}-{2}-{3}".format(stage_prefix, s.name, s.version, s.dag_hash())
stage = Stage(
fetcher,
mirror_paths=mirror_paths,
@@ -1196,7 +1200,7 @@ def _make_fetcher(self):
# one element (the root package). In case there are resources
# associated with the package, append their fetcher to the
# composite.
root_fetcher = fs.for_package_version(self)
root_fetcher = fs.for_package_version(self, self.version)
fetcher = fs.FetchStrategyComposite() # Composite fetcher
fetcher.append(root_fetcher) # Root fetcher is always present
resources = self._get_needed_resources()
@@ -1307,7 +1311,7 @@ def provides(self, vpkg_name):
True if this package provides a virtual package with the specified name
"""
return any(
any(self.spec.intersects(c) for c in constraints)
any(self.spec.satisfies(c) for c in constraints)
for s, constraints in self.provided.items()
if s.name == vpkg_name
)
@@ -1613,7 +1617,7 @@ def content_hash(self, content=None):
# TODO: resources
if self.spec.versions.concrete:
try:
source_id = fs.for_package_version(self).source_id()
source_id = fs.for_package_version(self, self.version).source_id()
except (fs.ExtrapolationError, fs.InvalidArgsError):
# ExtrapolationError happens if the package has no fetchers defined.
# InvalidArgsError happens when there are version directives with args,
@@ -1776,7 +1780,7 @@ def _get_needed_resources(self):
# conflict with the spec, so we need to invoke
# when_spec.satisfies(self.spec) vs.
# self.spec.satisfies(when_spec)
if when_spec.intersects(self.spec):
if when_spec.satisfies(self.spec, strict=False):
resources.extend(resource_list)
# Sorts the resources by the length of the string representing their
# destination. Since any nested resource must contain another
@@ -2397,7 +2401,7 @@ def rpath(self):
# on Windows, libraries of runtime interest are typically
# stored in the bin directory
if sys.platform == "win32":
if is_windows:
rpaths = [self.prefix.bin]
rpaths.extend(d.prefix.bin for d in deps if os.path.isdir(d.prefix.bin))
else:


@@ -73,7 +73,7 @@ def __call__(self, spec):
# integer is the index of the first spec in order that satisfies
# spec, or it's a number larger than any position in the order.
match_index = next(
(i for i, s in enumerate(spec_order) if spec.intersects(s)), len(spec_order)
(i for i, s in enumerate(spec_order) if spec.satisfies(s)), len(spec_order)
)
if match_index < len(spec_order) and spec_order[match_index] == spec:
# If this is called with multiple specs that all satisfy the same
@@ -185,7 +185,7 @@ def _package(maybe_abstract_spec):
),
extra_attributes=entry.get("extra_attributes", {}),
)
if external_spec.intersects(spec):
if external_spec.satisfies(spec):
external_specs.append(external_spec)
# Defensively copy returned specs


@@ -37,7 +37,7 @@
def slingshot_network():
return os.path.exists("/opt/cray/pe") and os.path.exists("/lib64/libcxi.so")
return os.path.exists("/lib64/libcxi.so")
def _target_name_from_craype_target_name(name):

Some files were not shown because too many files have changed in this diff.