Compare commits

1 commit

Author:  kwryankrattiger
SHA1:    4583161224
Date:    2022-12-02 12:13:44 -06:00
Message: Revert "Revert "Revert "gitlab: Add shared PR mirror to places pipelines look for binaries. (#33746)" (#34087)" (#34153)"
         This reverts commit efa1dba9e4.

19579 changed files with 485363 additions and 681758 deletions


@@ -5,7 +5,7 @@ coverage:
   status:
     project:
       default:
-        threshold: 2.0%
+        threshold: 0.2%
   ignore:
     - lib/spack/spack/test/.*


@@ -1,20 +0,0 @@
#!/bin/bash
# Load spack environment at terminal startup
cat <<EOF >> /root/.bashrc
. /workspaces/spack/share/spack/setup-env.sh
EOF
# Load spack environment in this script
. /workspaces/spack/share/spack/setup-env.sh
# Ensure generic targets for maximum matching with buildcaches
spack config --scope site add "packages:all:require:[target=x86_64_v3]"
spack config --scope site add "concretizer:targets:granularity:generic"
# Find compiler and install gcc-runtime
spack compiler find --scope site
# Setup buildcaches
spack mirror add --scope site develop https://binaries.spack.io/develop
spack buildcache keys --install --trust


@@ -1,5 +0,0 @@
{
"name": "Ubuntu 20.04",
"image": "ghcr.io/spack/ubuntu20.04-runner-amd64-gcc-11.4:2023.08.01",
"postCreateCommand": "./.devcontainer/postCreateCommand.sh"
}


@@ -1,5 +0,0 @@
{
"name": "Ubuntu 22.04",
"image": "ghcr.io/spack/ubuntu-22.04:v2024-05-07",
"postCreateCommand": "./.devcontainer/postCreateCommand.sh"
}


@@ -28,7 +28,7 @@ max-line-length = 99
 # - F821: undefined name `name`
 #
 per-file-ignores =
-  var/spack/*/package.py:F403,F405,F821
+  var/spack/repos/*/package.py:F403,F405,F821
   *-ci-package.py:F403,F405,F821
 # exclude things we usually do not want linting for.


@@ -1,5 +1,3 @@
 # .git-blame-ignore-revs
-# Formatted entire codebase with black 23
-603569e321013a1a63a637813c94c2834d0a0023
-# Formatted entire codebase with black 22
+# Formatted entire codebase with black
 f52f6e99dbf1131886a80112b8c79dfc414afb7c

.gitattributes

@@ -1,3 +1,3 @@
 *.py diff=python
+*.lp linguist-language=Prolog
 lib/spack/external/* linguist-vendored
-*.bat text eol=crlf


@@ -9,7 +9,7 @@ body:
     Thanks for taking the time to report this build failure. To proceed with the report please:
     1. Title the issue `Installation issue: <name-of-the-package>`.
     2. Provide the information required below.
     We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively!
 - type: textarea
   id: reproduce
@@ -29,9 +29,7 @@
     description: |
       Please post the error message from spack inside the `<details>` tag below:
     value: |
-      <details><summary>Error message</summary>
-
-      <pre>
+      <details><summary>Error message</summary><pre>
       ...
       </pre></details>
   validations:
@@ -55,7 +53,7 @@
     Please upload the following files:
     * **`spack-build-out.txt`**
     * **`spack-build-env.txt`**
     They should be present in the stage directory of the failing build. Also upload any `config.log` or similar file if one exists.
 - type: markdown
   attributes:


@@ -1,4 +1,4 @@
 name: "\U0001F38A Feature request"
 description: Suggest adding a feature that is not yet in Spack
 labels: [feature]
 body:
@@ -29,11 +29,13 @@
     attributes:
       label: General information
       options:
+        - label: I have run `spack --version` and reported the version of Spack
+          required: true
         - label: I have searched the issues of this repo and believe this is not a duplicate
           required: true
 - type: markdown
   attributes:
     value: |
       If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on [our Slack](https://slack.spack.io/) first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.
       Other than that, thanks for taking the time to contribute to Spack!


@@ -21,9 +21,7 @@ body:
     description: |
       Please post the error message from spack inside the `<details>` tag below:
     value: |
-      <details><summary>Error message</summary>
-
-      <pre>
+      <details><summary>Error message</summary><pre>
       ...
       </pre></details>
   validations:


@@ -5,10 +5,3 @@ updates:
     directory: "/"
     schedule:
       interval: "daily"
-  # Requirements to run style checks and build documentation
-  - package-ecosystem: "pip"
-    directories:
-      - "/.github/workflows/requirements/style/*"
-      - "/lib/spack/docs"
-    schedule:
-      interval: "daily"


@@ -1,6 +0,0 @@
<!--
Remember that `spackbot` can help with your PR in multiple ways:
- `@spackbot help` shows all the commands that are currently available
- `@spackbot fix style` tries to push a commit to fix style issues in this PR
- `@spackbot re-run pipeline` runs the pipelines again, if you have write access to the repository
-->


@@ -17,57 +17,28 @@ concurrency:
 jobs:
   # Run audits on all the packages in the built-in repository
   package-audits:
-    runs-on: ${{ matrix.system.os }}
+    runs-on: ubuntu-latest
-    strategy:
-      matrix:
-        system:
-          - { os: windows-latest, shell: 'powershell Invoke-Expression -Command "./share/spack/qa/windows_test_setup.ps1"; {0}' }
-          - { os: ubuntu-latest, shell: bash }
-          - { os: macos-latest, shell: bash }
-    defaults:
-      run:
-        shell: ${{ matrix.system.shell }}
     steps:
-      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
+      - uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8 # @v2
-      - uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
+      - uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984 # @v2
        with:
          python-version: ${{inputs.python_version}}
      - name: Install Python packages
        run: |
-          pip install --upgrade pip setuptools pytest coverage[toml]
+          pip install --upgrade pip six setuptools pytest codecov coverage[toml]
-      - name: Setup for Windows run
-        if: runner.os == 'Windows'
-        run: |
-          python -m pip install --upgrade pywin32
       - name: Package audits (with coverage)
-        env:
+        if: ${{ inputs.with_coverage == 'true' }}
-          COVERAGE_FILE: coverage/.coverage-audits-${{ matrix.system.os }}
-        if: ${{ inputs.with_coverage == 'true' && runner.os != 'Windows' }}
        run: |
          . share/spack/setup-env.sh
          coverage run $(which spack) audit packages
-          coverage run $(which spack) audit configs
-          coverage run $(which spack) -d audit externals
          coverage combine
+          coverage xml
       - name: Package audits (without coverage)
-        if: ${{ inputs.with_coverage == 'false' && runner.os != 'Windows' }}
+        if: ${{ inputs.with_coverage == 'false' }}
        run: |
          . share/spack/setup-env.sh
-          spack -d audit packages
+          $(which spack) audit packages
-          spack -d audit configs
+      - uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70 # @v2.1.0
-          spack -d audit externals
+        if: ${{ inputs.with_coverage == 'true' }}
-      - name: Package audits (without coverage)
-        if: ${{ runner.os == 'Windows' }}
-        run: |
-          spack -d audit packages
-          ./share/spack/qa/validate_last_exit.ps1
-          spack -d audit configs
-          ./share/spack/qa/validate_last_exit.ps1
-          spack -d audit externals
-          ./share/spack/qa/validate_last_exit.ps1
-      - uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b
-        if: ${{ inputs.with_coverage == 'true' && runner.os != 'Windows' }}
        with:
-          name: coverage-audits-${{ matrix.system.os }}
+          flags: unittests,linux,audits
-          path: coverage
-          include-hidden-files: true


@@ -1,8 +0,0 @@
#!/bin/bash
set -e
source share/spack/setup-env.sh
$PYTHON bin/spack bootstrap disable github-actions-v0.5
$PYTHON bin/spack bootstrap disable spack-install
$PYTHON bin/spack $SPACK_FLAGS solve zlib
tree $BOOTSTRAP/store
exit 0


@@ -1,8 +0,0 @@
git config --global user.email "spack@example.com"
git config --global user.name "Test User"
git config --global core.longpaths true
if ($(git branch --show-current) -ne "develop")
{
git branch develop origin/develop
}


@@ -1,8 +0,0 @@
#!/bin/bash -e
git config --global user.email "spack@example.com"
git config --global user.name "Test User"
# create a local pr base branch
if [[ -n $GITHUB_BASE_REF ]]; then
git fetch origin "${GITHUB_BASE_REF}:${GITHUB_BASE_REF}"
fi

.github/workflows/bootstrap-test.sh (new executable file)

@@ -0,0 +1,7 @@
#!/bin/bash
set -ex
source share/spack/setup-env.sh
$PYTHON bin/spack bootstrap disable spack-install
$PYTHON bin/spack -d solve zlib
tree $BOOTSTRAP/store
exit 0


@@ -13,22 +13,118 @@
   cancel-in-progress: true
 jobs:
-  distros-clingo-sources:
+  fedora-clingo-sources:
     runs-on: ubuntu-latest
-    container: ${{ matrix.image }}
+    container: "fedora:latest"
-    strategy:
-      matrix:
-        image: ["fedora:latest", "opensuse/leap:latest"]
     steps:
-      - name: Setup Fedora
+      - name: Install dependencies
-        if: ${{ matrix.image == 'fedora:latest' }}
        run: |
          dnf install -y \
-            bzip2 curl file gcc-c++ gcc gcc-gfortran git gzip \
+            bzip2 curl file gcc-c++ gcc gcc-gfortran git gnupg2 gzip \
            make patch unzip which xz python3 python3-devel tree \
-            cmake bison bison-devel libstdc++-static gawk
+            cmake bison bison-devel libstdc++-static
-      - name: Setup OpenSUSE
+      - name: Checkout
-        if: ${{ matrix.image == 'opensuse/leap:latest' }}
+        uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
+        with:
+          fetch-depth: 0
+      - name: Setup non-root user
+        run: |
+          # See [1] below
+          git config --global --add safe.directory /__w/spack/spack
+          useradd spack-test && mkdir -p ~spack-test
+          chown -R spack-test . ~spack-test
+      - name: Setup repo
+        shell: runuser -u spack-test -- bash {0}
+        run: |
+          git --version
+          . .github/workflows/setup_git.sh
+      - name: Bootstrap clingo
+        shell: runuser -u spack-test -- bash {0}
+        run: |
+          source share/spack/setup-env.sh
+          spack bootstrap disable github-actions-v0.4
+          spack bootstrap disable github-actions-v0.3
+          spack external find cmake bison
+          spack -d solve zlib
+          tree ~/.spack/bootstrap/store/
+  ubuntu-clingo-sources:
+    runs-on: ubuntu-latest
+    container: "ubuntu:latest"
+    steps:
+      - name: Install dependencies
+        env:
+          DEBIAN_FRONTEND: noninteractive
+        run: |
+          apt-get update -y && apt-get upgrade -y
+          apt-get install -y \
+            bzip2 curl file g++ gcc gfortran git gnupg2 gzip \
+            make patch unzip xz-utils python3 python3-dev tree \
+            cmake bison
+      - name: Checkout
+        uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
+        with:
+          fetch-depth: 0
+      - name: Setup non-root user
+        run: |
+          # See [1] below
+          git config --global --add safe.directory /__w/spack/spack
+          useradd spack-test && mkdir -p ~spack-test
+          chown -R spack-test . ~spack-test
+      - name: Setup repo
+        shell: runuser -u spack-test -- bash {0}
+        run: |
+          git --version
+          . .github/workflows/setup_git.sh
+      - name: Bootstrap clingo
+        shell: runuser -u spack-test -- bash {0}
+        run: |
+          source share/spack/setup-env.sh
+          spack bootstrap disable github-actions-v0.4
+          spack bootstrap disable github-actions-v0.3
+          spack external find cmake bison
+          spack -d solve zlib
+          tree ~/.spack/bootstrap/store/
+  ubuntu-clingo-binaries-and-patchelf:
+    runs-on: ubuntu-latest
+    container: "ubuntu:latest"
+    steps:
+      - name: Install dependencies
+        env:
+          DEBIAN_FRONTEND: noninteractive
+        run: |
+          apt-get update -y && apt-get upgrade -y
+          apt-get install -y \
+            bzip2 curl file g++ gcc gfortran git gnupg2 gzip \
+            make patch unzip xz-utils python3 python3-dev tree
+      - name: Checkout
+        uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
+        with:
+          fetch-depth: 0
+      - name: Setup non-root user
+        run: |
+          # See [1] below
+          git config --global --add safe.directory /__w/spack/spack
+          useradd spack-test && mkdir -p ~spack-test
+          chown -R spack-test . ~spack-test
+      - name: Setup repo
+        shell: runuser -u spack-test -- bash {0}
+        run: |
+          git --version
+          . .github/workflows/setup_git.sh
+      - name: Bootstrap clingo
+        shell: runuser -u spack-test -- bash {0}
+        run: |
+          source share/spack/setup-env.sh
+          spack -d solve zlib
+          tree ~/.spack/bootstrap/store/
+  opensuse-clingo-sources:
+    runs-on: ubuntu-latest
+    container: "opensuse/leap:latest"
+    steps:
+      - name: Install dependencies
        run: |
          # Harden CI by applying the workaround described here: https://www.suse.com/support/kb/doc/?id=000019505
          zypper update -y || zypper update -y
@@ -37,117 +133,98 @@ jobs:
            make patch unzip which xz python3 python3-devel tree \
            cmake bison
      - name: Checkout
-        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
+        uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
        with:
          fetch-depth: 0
+      - name: Setup repo
+        run: |
+          # See [1] below
+          git config --global --add safe.directory /__w/spack/spack
+          git --version
+          . .github/workflows/setup_git.sh
      - name: Bootstrap clingo
        run: |
          source share/spack/setup-env.sh
-          spack bootstrap disable github-actions-v0.6
+          spack bootstrap disable github-actions-v0.4
-          spack bootstrap disable github-actions-v0.5
+          spack bootstrap disable github-actions-v0.3
          spack external find cmake bison
          spack -d solve zlib
          tree ~/.spack/bootstrap/store/
-  clingo-sources:
+  macos-clingo-sources:
-    runs-on: ${{ matrix.runner }}
+    runs-on: macos-latest
-    strategy:
-      matrix:
-        runner: ['macos-13', 'macos-14', "ubuntu-latest"]
    steps:
-      - name: Setup macOS
+      - name: Install dependencies
-        if: ${{ matrix.runner != 'ubuntu-latest' }}
        run: |
-          brew install cmake bison tree
+          brew install cmake bison@2.7 tree
      - name: Checkout
-        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
+        uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
-        with:
-          fetch-depth: 0
-      - uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
-        with:
-          python-version: "3.12"
      - name: Bootstrap clingo
        run: |
          source share/spack/setup-env.sh
-          spack bootstrap disable github-actions-v0.6
+          export PATH=/usr/local/opt/bison@2.7/bin:$PATH
-          spack bootstrap disable github-actions-v0.5
+          spack bootstrap disable github-actions-v0.4
+          spack bootstrap disable github-actions-v0.3
          spack external find --not-buildable cmake bison
          spack -d solve zlib
-          tree $HOME/.spack/bootstrap/store/
-  gnupg-sources:
-    runs-on: ${{ matrix.runner }}
-    strategy:
-      matrix:
-        runner: [ 'macos-13', 'macos-14', "ubuntu-latest" ]
-    steps:
-      - name: Setup macOS
-        if: ${{ matrix.runner != 'ubuntu-latest' }}
-        run: brew install tree gawk
-      - name: Remove system executables
-        run: |
-          while [ -n "$(command -v gpg gpg2 patchelf)" ]; do
-            sudo rm $(command -v gpg gpg2 patchelf)
-          done
-      - name: Checkout
-        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
-        with:
-          fetch-depth: 0
-      - name: Bootstrap GnuPG
-        run: |
-          source share/spack/setup-env.sh
-          spack solve zlib
-          spack bootstrap disable github-actions-v0.6
-          spack bootstrap disable github-actions-v0.5
-          spack -d gpg list
          tree ~/.spack/bootstrap/store/
-  from-binaries:
+  macos-clingo-binaries:
-    runs-on: ${{ matrix.runner }}
+    runs-on: ${{ matrix.macos-version }}
    strategy:
      matrix:
-        runner: ['macos-13', 'macos-14', "ubuntu-latest"]
+        macos-version: ['macos-11', 'macos-12']
    steps:
-      - name: Setup macOS
+      - name: Install dependencies
-        if: ${{ matrix.runner != 'ubuntu-latest' }}
-        run: brew install tree
-      - name: Remove system executables
        run: |
-          while [ -n "$(command -v gpg gpg2 patchelf)" ]; do
+          brew install tree
-            sudo rm $(command -v gpg gpg2 patchelf)
-          done
      - name: Checkout
-        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
+        uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
-        with:
-          fetch-depth: 0
-      - uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
-        with:
-          python-version: |
-            3.8
-            3.9
-            3.10
-            3.11
-            3.12
-            3.13
-      - name: Set bootstrap sources
-        run: |
-          source share/spack/setup-env.sh
-          spack bootstrap disable github-actions-v0.5
-          spack bootstrap disable spack-install
      - name: Bootstrap clingo
        run: |
-          set -e
+          set -ex
-          for ver in '3.8' '3.9' '3.10' '3.11' '3.12' '3.13'; do
+          for ver in '3.6' '3.7' '3.8' '3.9' '3.10' ; do
            not_found=1
            ver_dir="$(find $RUNNER_TOOL_CACHE/Python -wholename "*/${ver}.*/*/bin" | grep . || true)"
-            echo "Testing $ver_dir"
            if [[ -d "$ver_dir" ]] ; then
+              echo "Testing $ver_dir"
              if $ver_dir/python --version ; then
                export PYTHON="$ver_dir/python"
                not_found=0
                old_path="$PATH"
                export PATH="$ver_dir:$PATH"
-                ./bin/spack-tmpconfig -b ./.github/workflows/bin/bootstrap-test.sh
+                ./bin/spack-tmpconfig -b ./.github/workflows/bootstrap-test.sh
+                export PATH="$old_path"
+              fi
+            fi
+            # NOTE: test all pythons that exist, not all do on 12
+            done
+  ubuntu-clingo-binaries:
+    runs-on: ubuntu-20.04
+    steps:
+      - name: Checkout
+        uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
+        with:
+          fetch-depth: 0
+      - name: Setup repo
+        run: |
+          git --version
+          . .github/workflows/setup_git.sh
+      - name: Bootstrap clingo
+        run: |
+          set -ex
+          for ver in '3.6' '3.7' '3.8' '3.9' '3.10' ; do
+            not_found=1
+            ver_dir="$(find $RUNNER_TOOL_CACHE/Python -wholename "*/${ver}.*/*/bin" | grep . || true)"
+            echo "Testing $ver_dir"
+            if [[ -d "$ver_dir" ]] ; then
+              if $ver_dir/python --version ; then
+                export PYTHON="$ver_dir/python"
+                not_found=0
+                old_path="$PATH"
+                export PATH="$ver_dir:$PATH"
+                ./bin/spack-tmpconfig -b ./.github/workflows/bootstrap-test.sh
                export PATH="$old_path"
              fi
            fi
@@ -156,39 +233,120 @@
              exit 1
            fi
          done
+  ubuntu-gnupg-binaries:
+    runs-on: ubuntu-latest
+    container: "ubuntu:latest"
+    steps:
+      - name: Install dependencies
+        env:
+          DEBIAN_FRONTEND: noninteractive
+        run: |
+          apt-get update -y && apt-get upgrade -y
+          apt-get install -y \
+            bzip2 curl file g++ gcc patchelf gfortran git gzip \
+            make patch unzip xz-utils python3 python3-dev tree
+      - name: Checkout
+        uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
+        with:
+          fetch-depth: 0
+      - name: Setup non-root user
+        run: |
+          # See [1] below
+          git config --global --add safe.directory /__w/spack/spack
+          useradd spack-test && mkdir -p ~spack-test
+          chown -R spack-test . ~spack-test
+      - name: Setup repo
+        shell: runuser -u spack-test -- bash {0}
+        run: |
+          git --version
+          . .github/workflows/setup_git.sh
+      - name: Bootstrap GnuPG
+        shell: runuser -u spack-test -- bash {0}
+        run: |
+          source share/spack/setup-env.sh
+          spack bootstrap disable spack-install
+          spack -d gpg list
+          tree ~/.spack/bootstrap/store/
+  ubuntu-gnupg-sources:
+    runs-on: ubuntu-latest
+    container: "ubuntu:latest"
+    steps:
+      - name: Install dependencies
+        env:
+          DEBIAN_FRONTEND: noninteractive
+        run: |
+          apt-get update -y && apt-get upgrade -y
+          apt-get install -y \
+            bzip2 curl file g++ gcc patchelf gfortran git gzip \
+            make patch unzip xz-utils python3 python3-dev tree \
+            gawk
+      - name: Checkout
+        uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
+        with:
+          fetch-depth: 0
+      - name: Setup non-root user
+        run: |
+          # See [1] below
+          git config --global --add safe.directory /__w/spack/spack
+          useradd spack-test && mkdir -p ~spack-test
+          chown -R spack-test . ~spack-test
+      - name: Setup repo
+        shell: runuser -u spack-test -- bash {0}
+        run: |
+          git --version
+          . .github/workflows/setup_git.sh
+      - name: Bootstrap GnuPG
+        shell: runuser -u spack-test -- bash {0}
+        run: |
+          source share/spack/setup-env.sh
+          spack solve zlib
+          spack bootstrap disable github-actions-v0.4
+          spack bootstrap disable github-actions-v0.3
+          spack -d gpg list
+          tree ~/.spack/bootstrap/store/
+  macos-gnupg-binaries:
+    runs-on: macos-latest
+    steps:
+      - name: Install dependencies
+        run: |
+          brew install tree
+          # Remove GnuPG since we want to bootstrap it
+          sudo rm -rf /usr/local/bin/gpg
+      - name: Checkout
+        uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
      - name: Bootstrap GnuPG
        run: |
          source share/spack/setup-env.sh
+          spack bootstrap disable spack-install
          spack -d gpg list
-          tree $HOME/.spack/bootstrap/store/
+          tree ~/.spack/bootstrap/store/
+  macos-gnupg-sources:
-  windows:
+    runs-on: macos-latest
-    runs-on: "windows-latest"
    steps:
+      - name: Install dependencies
+        run: |
+          brew install gawk tree
+          # Remove GnuPG since we want to bootstrap it
+          sudo rm -rf /usr/local/bin/gpg
      - name: Checkout
-        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
+        uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
-        with:
-          fetch-depth: 0
-      - uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
-        with:
-          python-version: "3.12"
-      - name: Setup Windows
-        run: |
-          Remove-Item -Path (Get-Command gpg).Path
-          Remove-Item -Path (Get-Command file).Path
-      - name: Bootstrap clingo
-        run: |
-          ./share/spack/setup-env.ps1
-          spack bootstrap disable github-actions-v0.6
-          spack bootstrap disable github-actions-v0.5
-          spack external find --not-buildable cmake bison
-          spack -d solve zlib
-          ./share/spack/qa/validate_last_exit.ps1
-          tree $env:userprofile/.spack/bootstrap/store/
      - name: Bootstrap GnuPG
        run: |
-          ./share/spack/setup-env.ps1
+          source share/spack/setup-env.sh
+          spack solve zlib
+          spack bootstrap disable github-actions-v0.4
+          spack bootstrap disable github-actions-v0.3
          spack -d gpg list
-          ./share/spack/qa/validate_last_exit.ps1
+          tree ~/.spack/bootstrap/store/
-          tree $env:userprofile/.spack/bootstrap/store/
+# [1] Distros that have patched git to resolve CVE-2022-24765 (e.g. Ubuntu patching v2.25.1)
+# introduce breaking behaviorso we have to set `safe.directory` in gitconfig ourselves.
+# See:
+# - https://github.blog/2022-04-12-git-security-vulnerability-announced/
+# - https://github.com/actions/checkout/issues/760
+# - http://changelogs.ubuntu.com/changelogs/pool/main/g/git/git_2.25.1-1ubuntu3.3/changelog


@@ -38,52 +38,38 @@
        # Meaning of the various items in the matrix list
        #  0: Container name (e.g. ubuntu-bionic)
        #  1: Platforms to build for
-        #  2: Base image (e.g. ubuntu:22.04)
+        #  2: Base image (e.g. ubuntu:18.04)
        dockerfile: [[amazon-linux, 'linux/amd64,linux/arm64', 'amazonlinux:2'],
-                     [centos-stream9, 'linux/amd64,linux/arm64', 'centos:stream9'],
+                     [centos7, 'linux/amd64,linux/arm64,linux/ppc64le', 'centos:7'],
-                     [leap15, 'linux/amd64,linux/arm64', 'opensuse/leap:15'],
+                     [centos-stream, 'linux/amd64,linux/arm64,linux/ppc64le', 'centos:stream'],
-                     [ubuntu-focal, 'linux/amd64,linux/arm64', 'ubuntu:20.04'],
+                     [leap15, 'linux/amd64,linux/arm64,linux/ppc64le', 'opensuse/leap:15'],
-                     [ubuntu-jammy, 'linux/amd64,linux/arm64', 'ubuntu:22.04'],
+                     [ubuntu-bionic, 'linux/amd64,linux/arm64,linux/ppc64le', 'ubuntu:18.04'],
-                     [ubuntu-noble, 'linux/amd64,linux/arm64', 'ubuntu:24.04'],
+                     [ubuntu-focal, 'linux/amd64,linux/arm64,linux/ppc64le', 'ubuntu:20.04'],
-                     [almalinux8, 'linux/amd64,linux/arm64', 'almalinux:8'],
+                     [ubuntu-jammy, 'linux/amd64,linux/arm64,linux/ppc64le', 'ubuntu:22.04']]
-                     [almalinux9, 'linux/amd64,linux/arm64', 'almalinux:9'],
-                     [rockylinux8, 'linux/amd64,linux/arm64', 'rockylinux:8'],
-                     [rockylinux9, 'linux/amd64,linux/arm64', 'rockylinux:9'],
-                     [fedora39, 'linux/amd64,linux/arm64', 'fedora:39'],
-                     [fedora40, 'linux/amd64,linux/arm64', 'fedora:40']]
    name: Build ${{ matrix.dockerfile[0] }}
    if: github.repository == 'spack/spack'
    steps:
      - name: Checkout
-        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
+        uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8 # @v2
-      - name: Determine latest release tag
+      - name: Set Container Tag Normal (Nightly)
-        id: latest
        run: |
-          git fetch --quiet --tags
+          container="${{ matrix.dockerfile[0] }}:latest"
-          echo "tag=$(git tag --list --sort=-v:refname | grep -E '^v[0-9]+\.[0-9]+\.[0-9]+$' | head -n 1)" | tee -a $GITHUB_OUTPUT
+          echo "container=${container}" >> $GITHUB_ENV
+          echo "versioned=${container}" >> $GITHUB_ENV
-      - uses: docker/metadata-action@369eb591f429131d6889c46b94e711f089e6ca96
+      # On a new release create a container with the same tag as the release.
-        id: docker_meta
+      - name: Set Container Tag on Release
-        with:
+        if: github.event_name == 'release'
-          images: |
+        run: |
-            ghcr.io/${{ github.repository_owner }}/${{ matrix.dockerfile[0] }}
+          versioned="${{matrix.dockerfile[0]}}:${GITHUB_REF##*/}"
-            ${{ github.repository_owner }}/${{ matrix.dockerfile[0] }}
+          echo "versioned=${versioned}" >> $GITHUB_ENV
-          tags: |
-            type=schedule,pattern=nightly
-            type=schedule,pattern=develop
-            type=semver,pattern={{version}}
-            type=semver,pattern={{major}}.{{minor}}
-            type=semver,pattern={{major}}
-            type=ref,event=branch
-            type=ref,event=pr
-            type=raw,value=latest,enable=${{ github.ref == format('refs/tags/{0}', steps.latest.outputs.tag) }}
      - name: Generate the Dockerfile
        env:
          SPACK_YAML_OS: "${{ matrix.dockerfile[2] }}"
        run: |
-          .github/workflows/bin/generate_spack_yaml_containerize.sh
+          .github/workflows/generate_spack_yaml_containerize.sh
          . share/spack/setup-env.sh
          mkdir -p dockerfiles/${{ matrix.dockerfile[0] }}
          spack containerize --last-stage=bootstrap | tee dockerfiles/${{ matrix.dockerfile[0] }}/Dockerfile
@@ -94,19 +80,19 @@
          fi
      - name: Upload Dockerfile
-        uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b
+        uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb
        with:
-          name: dockerfiles_${{ matrix.dockerfile[0] }}
+          name: dockerfiles
          path: dockerfiles
      - name: Set up QEMU
-        uses: docker/setup-qemu-action@49b3bc8e6bdd4a60e6116a5414239cba5943d3cf
+        uses: docker/setup-qemu-action@e81a89b1732b9c48d79cd809d8d81d79c4647a18 # @v1
      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@6524bf65af31da8d45b59e8c27de4bd072b392f5
+        uses: docker/setup-buildx-action@8c0edbc76e98fa90f69d9a2c020dcb50019dc325 # @v1
      - name: Log in to GitHub Container Registry
-        uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567
+        uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # @v1
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
@@ -114,27 +100,21 @@
      - name: Log in to DockerHub
        if: github.event_name != 'pull_request'
-        uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567
+        uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # @v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build & Deploy ${{ matrix.dockerfile[0] }}
-        uses: docker/build-push-action@48aba3b46d1b1fec4febb7c5d0c644b249a11355
+        uses: docker/build-push-action@c56af957549030174b10d6867f20e78cfd7debc5 # @v2
        with:
          context: dockerfiles/${{ matrix.dockerfile[0] }}
          platforms: ${{ matrix.dockerfile[1] }}
          push: ${{ github.event_name != 'pull_request' }}
-          tags: ${{ steps.docker_meta.outputs.tags }}
+          cache-from: type=gha
-          labels: ${{ steps.docker_meta.outputs.labels }}
+          cache-to: type=gha,mode=max
+          tags: |
-  merge-dockerfiles:
+            spack/${{ env.container }}
-    runs-on: ubuntu-latest
+            spack/${{ env.versioned }}
-    needs: deploy-images
+            ghcr.io/spack/${{ env.container }}
-    steps:
+            ghcr.io/spack/${{ env.versioned }}
-      - name: Merge Artifacts
-        uses: actions/upload-artifact/merge@6f51ac03b9356f520e9adb1b1b7802705f340c2b
-        with:
-          name: dockerfiles
-          pattern: dockerfiles_*
-          delete-merged: true


@@ -9,13 +9,23 @@
    branches:
      - develop
      - releases/**
-  merge_group:
concurrency:
  group: ci-${{github.ref}}-${{github.event.pull_request.number || github.run_number}}
  cancel-in-progress: true
jobs:
+  prechecks:
+    needs: [ changes ]
+    uses: ./.github/workflows/valid-style.yml
+    with:
+      with_coverage: ${{ needs.changes.outputs.core }}
+  all-prechecks:
+    needs: [ prechecks ]
+    runs-on: ubuntu-latest
+    steps:
+      - name: Success
+        run: "true"
  # Check which files have been updated by the PR
  changes:
    runs-on: ubuntu-latest
@@ -25,34 +35,23 @@
      core: ${{ steps.filter.outputs.core }}
      packages: ${{ steps.filter.outputs.packages }}
    steps:
-      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
+      - uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8 # @v2
-        if: ${{ github.event_name == 'push' || github.event_name == 'merge_group' }}
+        if: ${{ github.event_name == 'push' }}
        with:
          fetch-depth: 0
      # For pull requests it's not necessary to checkout the code
-      - uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36
+      - uses: dorny/paths-filter@4512585405083f25c027a35db413c2b3b9006d50
        id: filter
        with:
-          # For merge group events, compare against the target branch (main)
-          base: ${{ github.event_name == 'merge_group' && github.event.merge_group.base_ref || '' }}
-          # For merge group events, use the merge group head ref
-          ref: ${{ github.event_name == 'merge_group' && github.event.merge_group.head_sha || github.ref }}
          # See https://github.com/dorny/paths-filter/issues/56 for the syntax used below
          # Don't run if we only modified packages in the
          # built-in repository or documentation
          filters: |
            bootstrap:
-              - 'var/spack/repos/spack_repo/builtin/packages/clingo-bootstrap/**'
+              - 'var/spack/repos/builtin/packages/clingo-bootstrap/**'
-              - 'var/spack/repos/spack_repo/builtin/packages/clingo/**'
+              - 'var/spack/repos/builtin/packages/clingo/**'
-              - 'var/spack/repos/spack_repo/builtin/packages/python/**'
+              - 'var/spack/repos/builtin/packages/python/**'
-              - 'var/spack/repos/spack_repo/builtin/packages/re2c/**'
+              - 'var/spack/repos/builtin/packages/re2c/**'
-              - 'var/spack/repos/spack_repo/builtin/packages/gnupg/**'
-              - 'var/spack/repos/spack_repo/builtin/packages/libassuan/**'
-              - 'var/spack/repos/spack_repo/builtin/packages/libgcrypt/**'
-              - 'var/spack/repos/spack_repo/builtin/packages/libgpg-error/**'
-              - 'var/spack/repos/spack_repo/builtin/packages/libksba/**'
-              - 'var/spack/repos/spack_repo/builtin/packages/npth/**'
-              - 'var/spack/repos/spack_repo/builtin/packages/pinentry/**'
              - 'lib/spack/**'
              - 'share/spack/**'
              - '.github/workflows/bootstrap.yml'
@@ -71,60 +70,17 @@
    if: ${{ github.repository == 'spack/spack' && needs.changes.outputs.bootstrap == 'true' }}
    needs: [ prechecks, changes ]
    uses: ./.github/workflows/bootstrap.yml
-    secrets: inherit
  unit-tests:
    if: ${{ github.repository == 'spack/spack' && needs.changes.outputs.core == 'true' }}
    needs: [ prechecks, changes ]
    uses: ./.github/workflows/unit_tests.yaml
-    secrets: inherit
+  windows:
+    if: ${{ github.repository == 'spack/spack' && needs.changes.outputs.core == 'true' }}
-  prechecks:
-    needs: [ changes ]
-    uses: ./.github/workflows/prechecks.yml
-    secrets: inherit
-    with:
-      with_coverage: ${{ needs.changes.outputs.core }}
-      with_packages: ${{ needs.changes.outputs.packages }}
-  import-check:
-    needs: [ changes ]
-    uses: ./.github/workflows/import-check.yaml
-  all-prechecks:
    needs: [ prechecks ]
-    if: ${{ always() }}
+    uses: ./.github/workflows/windows_python.yml
+  all:
+    needs: [ windows, unit-tests, bootstrap ]
    runs-on: ubuntu-latest
    steps:
      - name: Success
-        run: |
+        run: "true"
-          if [ "${{ needs.prechecks.result }}" == "failure" ] || [ "${{ needs.prechecks.result }}" == "canceled" ]; then
-            echo "Unit tests failed."
-            exit 1
-          else
-            exit 0
-          fi
-  coverage:
-    needs: [ unit-tests, prechecks ]
-    if: ${{ needs.changes.outputs.core }}
-    uses: ./.github/workflows/coverage.yml
-    secrets: inherit
-  all:
-    needs: [ unit-tests, coverage, bootstrap ]
-    if: ${{ always() }}
-    runs-on: ubuntu-latest
-    # See https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/accessing-contextual-information-about-workflow-runs#needs-context
-    steps:
-      - name: Status summary
-        run: |
-          if [ "${{ needs.unit-tests.result }}" == "failure" ] || [ "${{ needs.unit-tests.result }}" == "canceled" ]; then
-            echo "Unit tests failed."
-            exit 1
-          elif [ "${{ needs.bootstrap.result }}" == "failure" ] || [ "${{ needs.bootstrap.result }}" == "canceled" ]; then
-            echo "Bootstrap tests failed."
-            exit 1
-          else
-            exit 0
-          fi


@@ -1,36 +0,0 @@
name: coverage
on:
workflow_call:
jobs:
# Upload coverage reports to codecov once as a single bundle
upload:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
with:
python-version: '3.11'
cache: 'pip'
- name: Install python dependencies
run: pip install -r .github/workflows/requirements/coverage/requirements.txt
- name: Download coverage artifact files
uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16
with:
pattern: coverage-*
path: coverage
merge-multiple: true
- run: ls -la coverage
- run: coverage combine -a coverage/.coverage*
- run: coverage xml
- name: "Upload coverage report to CodeCov"
uses: codecov/codecov-action@1e68e06f1dbfde0e4cefc87efeba9e4643565303
with:
verbose: true
fail_ci_if_error: false
token: ${{ secrets.CODECOV_TOKEN }}


@@ -1,49 +0,0 @@
name: import-check
on:
workflow_call:
jobs:
# Check we don't make the situation with circular imports worse
import-check:
runs-on: ubuntu-latest
steps:
- uses: julia-actions/setup-julia@v2
with:
version: '1.10'
- uses: julia-actions/cache@v2
# PR: use the base of the PR as the old commit
- name: Checkout PR base commit
if: github.event_name == 'pull_request'
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
ref: ${{ github.event.pull_request.base.sha }}
path: old
# not a PR: use the previous commit as the old commit
- name: Checkout previous commit
if: github.event_name != 'pull_request'
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 2
path: old
- name: Checkout previous commit
if: github.event_name != 'pull_request'
run: git -C old reset --hard HEAD^
- name: Checkout new commit
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
path: new
- name: Install circular import checker
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
repository: haampie/circular-import-fighter
ref: 4cdb0bf15f04ab6b49041d5ef1bfd9644cce7f33
path: circular-import-fighter
- name: Install dependencies
working-directory: circular-import-fighter
run: make -j dependencies
- name: Circular import check
working-directory: circular-import-fighter
run: make -j compare "SPACK_ROOT=../old ../new"

.github/workflows/install_spack.sh (new executable file)

@@ -0,0 +1,8 @@
#!/usr/bin/env sh
. share/spack/setup-env.sh
echo -e "config:\n build_jobs: 2" > etc/spack/config.yaml
spack config add "packages:all:target:[x86_64]"
spack compiler find
spack compiler info apple-clang
spack debug report
spack solve zlib


@@ -1,31 +0,0 @@
name: Windows Paraview Nightly
on:
schedule:
- cron: '0 2 * * *' # Run at 2 am
defaults:
run:
shell:
powershell Invoke-Expression -Command "./share/spack/qa/windows_test_setup.ps1"; {0}
jobs:
build-paraview-deps:
runs-on: windows-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
with:
python-version: 3.9
- name: Install Python packages
run: |
python -m pip install --upgrade pip six pywin32 setuptools coverage
- name: Build Test
run: |
spack compiler find
spack external find cmake ninja win-sdk win-wdk wgl msmpi
spack -d install -y --cdash-upload-url https://cdash.spack.io/submit.php?project=Spack+on+Windows --cdash-track Nightly --only dependencies paraview
exit 0


@@ -1,108 +0,0 @@
name: prechecks
on:
workflow_call:
inputs:
with_coverage:
required: true
type: string
with_packages:
required: true
type: string
concurrency:
group: style-${{github.ref}}-${{github.event.pull_request.number || github.run_number}}
cancel-in-progress: true
jobs:
# Validate that the code can be run on all the Python versions supported by Spack
validate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
with:
python-version: '3.13'
cache: 'pip'
cache-dependency-path: '.github/workflows/requirements/style/requirements.txt'
- name: Install Python Packages
run: |
pip install -r .github/workflows/requirements/style/requirements.txt
- name: vermin (Spack's Core)
run: |
vermin --backport importlib --backport argparse --violations --backport typing -t=3.6- -vvv lib/spack/spack/ lib/spack/llnl/ bin/
- name: vermin (Repositories)
run: |
vermin --backport importlib --backport argparse --violations --backport typing -t=3.6- -vvv var/spack/repos var/spack/test_repos
# Run style checks on the files that have been changed
style:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 2
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
with:
python-version: '3.13'
cache: 'pip'
cache-dependency-path: '.github/workflows/requirements/style/requirements.txt'
- name: Install Python packages
run: |
pip install -r .github/workflows/requirements/style/requirements.txt
- name: Run style tests
run: |
bin/spack style --base HEAD^1
bin/spack license verify
pylint -j $(nproc) --disable=all --enable=unspecified-encoding --ignore-paths=lib/spack/external lib
audit:
uses: ./.github/workflows/audit.yaml
secrets: inherit
with:
with_coverage: ${{ inputs.with_coverage }}
python_version: '3.13'
verify-checksums:
# do not run if the commit message or PR description contains [skip-verify-checksums]
if: >-
${{ inputs.with_packages == 'true' &&
!contains(github.event.pull_request.body, '[skip-verify-checksums]') &&
!contains(github.event.head_commit.message, '[skip-verify-checksums]') }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
with:
fetch-depth: 2
- name: Verify Added Checksums
run: |
bin/spack ci verify-versions HEAD^1 HEAD
# Check that spack can bootstrap the development environment on Python 3.6 - RHEL8
bootstrap-dev-rhel8:
runs-on: ubuntu-latest
container: registry.access.redhat.com/ubi8/ubi
steps:
- name: Install dependencies
run: |
dnf install -y \
bzip2 curl file gcc-c++ gcc gcc-gfortran git gnupg2 gzip \
make patch tcl unzip which xz
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- name: Setup repo and non-root user
run: |
git --version
git config --global --add safe.directory '*'
git fetch --unshallow
. .github/workflows/bin/setup_git.sh
useradd spack-test
chown -R spack-test .
- name: Bootstrap Spack development environment
shell: runuser -u spack-test -- bash {0}
run: |
source share/spack/setup-env.sh
spack debug report
spack -d bootstrap now --dev
spack -d style -t black
spack unit-test -V


@@ -1 +0,0 @@
coverage==7.6.1


@@ -1,8 +0,0 @@
black==25.1.0
clingo==5.8.0
flake8==7.2.0
isort==6.0.1
mypy==1.15.0
types-six==1.17.0.20250403
vermin==1.6.0
pylint==3.3.7
.github/workflows/setup_git.ps1 (new file)

@@ -0,0 +1,16 @@
# (c) 2021 Lawrence Livermore National Laboratory
Set-Location spack
git config --global user.email "spack@example.com"
git config --global user.name "Test User"
git config --global core.longpaths true
# See https://github.com/git/git/security/advisories/GHSA-3wp6-j8xr-qw85 (CVE-2022-39253)
# This is needed to let some fixture in our unit-test suite run
git config --global protocol.file.allow always
if ($(git branch --show-current) -ne "develop")
{
git branch develop origin/develop
}

.github/workflows/setup_git.sh (new executable file)

@@ -0,0 +1,12 @@
#!/bin/bash -e
git config --global user.email "spack@example.com"
git config --global user.name "Test User"
# See https://github.com/git/git/security/advisories/GHSA-3wp6-j8xr-qw85 (CVE-2022-39253)
# This is needed to let some fixture in our unit-test suite run
git config --global protocol.file.allow always
# create a local pr base branch
if [[ -n $GITHUB_BASE_REF ]]; then
git fetch origin "${GITHUB_BASE_REF}:${GITHUB_BASE_REF}"
fi


@@ -15,32 +15,42 @@
    strategy:
      matrix:
        os: [ubuntu-latest]
-        python-version: ['3.8', '3.9', '3.10', '3.11', '3.12']
+        python-version: ['3.7', '3.8', '3.9', '3.10', '3.11']
+        concretizer: ['clingo']
        on_develop:
        - ${{ github.ref == 'refs/heads/develop' }}
        include:
-        - python-version: '3.7'
+        - python-version: '3.11'
-          os: ubuntu-22.04
+          os: ubuntu-latest
+          concretizer: original
+          on_develop: ${{ github.ref == 'refs/heads/develop' }}
+        - python-version: '3.6'
+          os: ubuntu-20.04
+          concretizer: clingo
          on_develop: ${{ github.ref == 'refs/heads/develop' }}
        exclude:
+        - python-version: '3.7'
+          os: ubuntu-latest
+          concretizer: 'clingo'
+          on_develop: false
        - python-version: '3.8'
          os: ubuntu-latest
+          concretizer: 'clingo'
          on_develop: false
        - python-version: '3.9'
          os: ubuntu-latest
+          concretizer: 'clingo'
          on_develop: false
        - python-version: '3.10'
          os: ubuntu-latest
-          on_develop: false
+          concretizer: 'clingo'
-        - python-version: '3.11'
-          os: ubuntu-latest
          on_develop: false
    steps:
-      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
+      - uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8 # @v2
        with:
          fetch-depth: 0
-      - uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
+      - uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984 # @v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install System packages
@@ -49,22 +59,16 @@
          # Needed for unit tests
          sudo apt-get -y install \
            coreutils cvs gfortran graphviz gnupg2 mercurial ninja-build \
-            cmake bison libbison-dev subversion
+            cmake bison libbison-dev kcov
-          # On ubuntu 24.04, kcov was removed. It may come back in some future Ubuntu
-      - name: Set up Homebrew
-        id: set-up-homebrew
-        uses: Homebrew/actions/setup-homebrew@40e9946c182a64b3db1bf51be0dcb915f7802aa9
-      - name: Install kcov with brew
-        run: "brew install kcov"
      - name: Install Python packages
        run: |
-          pip install --upgrade pip setuptools pytest pytest-xdist pytest-cov
+          pip install --upgrade pip six setuptools pytest codecov[toml] pytest-xdist pytest-cov
          pip install --upgrade flake8 "isort>=4.3.5" "mypy>=0.900" "click" "black"
      - name: Setup git configuration
        run: |
          # Need this for the git tests to succeed.
          git --version
-          . .github/workflows/bin/setup_git.sh
+          . .github/workflows/setup_git.sh
      - name: Bootstrap clingo
        if: ${{ matrix.concretizer == 'clingo' }}
        env:
@@ -77,56 +81,46 @@
      - name: Run unit tests
        env:
          SPACK_PYTHON: python
+          SPACK_TEST_SOLVER: ${{ matrix.concretizer }}
          SPACK_TEST_PARALLEL: 2
          COVERAGE: true
-          COVERAGE_FILE: coverage/.coverage-${{ matrix.os }}-python${{ matrix.python-version }}
          UNIT_TEST_COVERAGE: ${{ matrix.python-version == '3.11' }}
        run: |
          share/spack/qa/run-unit-tests
-      - uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b
+      - uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
        with:
-          name: coverage-${{ matrix.os }}-python${{ matrix.python-version }}
+          flags: unittests,linux,${{ matrix.concretizer }}
-          path: coverage
-          include-hidden-files: true
  # Test shell integration
  shell:
    runs-on: ubuntu-latest
    steps:
-      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
+      - uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8 # @v2
        with:
          fetch-depth: 0
-      - uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
+      - uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984 # @v2
        with:
          python-version: '3.11'
      - name: Install System packages
        run: |
          sudo apt-get -y update
          # Needed for shell tests
-          sudo apt-get install -y coreutils csh zsh tcsh fish dash bash subversion
+          sudo apt-get install -y coreutils kcov csh zsh tcsh fish dash bash
-          # On ubuntu 24.04, kcov was removed. It may come back in some future Ubuntu
-      - name: Set up Homebrew
-        id: set-up-homebrew
-        uses: Homebrew/actions/setup-homebrew@40e9946c182a64b3db1bf51be0dcb915f7802aa9
-      - name: Install kcov with brew
-        run: "brew install kcov"
      - name: Install Python packages
        run: |
-          pip install --upgrade pip setuptools pytest coverage[toml] pytest-xdist
+          pip install --upgrade pip six setuptools pytest codecov coverage[toml] pytest-xdist
      - name: Setup git configuration
        run: |
          # Need this for the git tests to succeed.
          git --version
-          . .github/workflows/bin/setup_git.sh
+          . .github/workflows/setup_git.sh
      - name: Run shell tests
        env:
          COVERAGE: true
        run: |
          share/spack/qa/run-shell-tests
-      - uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b
+      - uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
        with:
-          name: coverage-shell
+          flags: shelltests,linux
-          path: coverage
-          include-hidden-files: true
  # Test RHEL8 UBI with platform Python. This job is run
  # only on PRs modifying core Spack
@@ -137,124 +131,85 @@
      - name: Install dependencies
        run: |
          dnf install -y \
-            bzip2 curl gcc-c++ gcc gcc-gfortran git gnupg2 gzip \
+            bzip2 curl file gcc-c++ gcc gcc-gfortran git gnupg2 gzip \
            make patch tcl unzip which xz
-      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
+      - uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8 # @v2
      - name: Setup repo and non-root user
        run: |
          git --version
-          git config --global --add safe.directory '*'
          git fetch --unshallow
-          . .github/workflows/bin/setup_git.sh
+          . .github/workflows/setup_git.sh
          useradd spack-test
          chown -R spack-test .
      - name: Run unit tests
        shell: runuser -u spack-test -- bash {0}
        run: |
          source share/spack/setup-env.sh
-          spack -d bootstrap now --dev
+          spack -d solve zlib
          spack unit-test -k 'not cvs and not svn and not hg' -x --verbose
  # Test for the clingo based solver (using clingo-cffi)
  clingo-cffi:
    runs-on: ubuntu-latest
    steps:
-      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
+      - uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8 # @v2
        with:
          fetch-depth: 0
-      - uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
+      - uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984 # @v2
        with:
-          python-version: '3.13'
+          python-version: '3.11'
      - name: Install System packages
        run: |
          sudo apt-get -y update
-          sudo apt-get -y install coreutils gfortran graphviz gnupg2
+          sudo apt-get -y install coreutils cvs gfortran graphviz gnupg2 mercurial ninja-build kcov
      - name: Install Python packages
        run: |
-          pip install --upgrade pip setuptools pytest coverage[toml] pytest-cov clingo
+          pip install --upgrade pip six setuptools pytest codecov coverage[toml] pytest-cov clingo pytest-xdist
-          pip install --upgrade flake8 "isort>=4.3.5" "mypy>=0.900" "click" "black"
+      - name: Setup git configuration
+        run: |
+          # Need this for the git tests to succeed.
+          git --version
+          . .github/workflows/setup_git.sh
      - name: Run unit tests (full suite with coverage)
        env:
          COVERAGE: true
-          COVERAGE_FILE: coverage/.coverage-clingo-cffi
+          SPACK_TEST_SOLVER: clingo
        run: |
-          . share/spack/setup-env.sh
+          share/spack/qa/run-unit-tests
-          spack bootstrap disable spack-install
+      - uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70 # @v2.1.0
-          spack bootstrap disable github-actions-v0.5
-          spack bootstrap disable github-actions-v0.6
-          spack bootstrap status
-          spack solve zlib
-          spack unit-test --verbose --cov --cov-config=pyproject.toml --cov-report=xml:coverage.xml lib/spack/spack/test/concretization/core.py
-      - uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b
        with:
-          name: coverage-clingo-cffi
+          flags: unittests,linux,clingo
-          path: coverage
-          include-hidden-files: true
  # Run unit tests on MacOS
  macos:
-    runs-on: ${{ matrix.os }}
+    runs-on: macos-latest
    strategy:
      matrix:
-        os: [macos-13, macos-14]
+        python-version: ["3.10"]
-        python-version: ["3.11"]
    steps:
-      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
+      - uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8 # @v2
        with:
          fetch-depth: 0
-      - uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
+      - uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984 # @v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install Python packages
        run: |
-          pip install --upgrade pip setuptools
+          pip install --upgrade pip six setuptools
-          pip install --upgrade pytest coverage[toml] pytest-xdist pytest-cov
+          pip install --upgrade pytest codecov coverage[toml] pytest-xdist pytest-cov
      - name: Setup Homebrew packages
        run: |
-          brew install dash fish gcc gnupg kcov
+          brew install dash fish gcc gnupg2 kcov
      - name: Run unit tests
        env:
+          SPACK_TEST_SOLVER: clingo
          SPACK_TEST_PARALLEL: 4
-          COVERAGE_FILE: coverage/.coverage-${{ matrix.os }}-python${{ matrix.python-version }}
        run: |
          git --version
-          . .github/workflows/bin/setup_git.sh
+          . .github/workflows/setup_git.sh
          . share/spack/setup-env.sh
          $(which spack) bootstrap disable spack-install
          $(which spack) solve zlib
          common_args=(--dist loadfile --tx '4*popen//python=./bin/spack-tmpconfig python -u ./bin/spack python' -x)
-          $(which spack) unit-test --verbose --cov --cov-config=pyproject.toml --cov-report=xml:coverage.xml "${common_args[@]}"
+          $(which spack) unit-test --cov --cov-config=pyproject.toml --cov-report=xml:coverage.xml "${common_args[@]}"
-      - uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b
+      - uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
        with:
-          name: coverage-${{ matrix.os }}-python${{ matrix.python-version }}
+          flags: unittests,macos
-          path: coverage
-          include-hidden-files: true
-  # Run unit tests on Windows
-  windows:
-    defaults:
-      run:
-        shell:
-          powershell Invoke-Expression -Command "./share/spack/qa/windows_test_setup.ps1"; {0}
-    runs-on: windows-latest
-    steps:
-      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
-        with:
-          fetch-depth: 0
-      - uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
-        with:
-          python-version: 3.9
-      - name: Install Python packages
-        run: |
-          python -m pip install --upgrade pip pywin32 setuptools pytest-cov clingo
-      - name: Create local develop
-        run: |
-          ./.github/workflows/bin/setup_git.ps1
-      - name: Unit Test
-        env:
-          COVERAGE_FILE: coverage/.coverage-windows
-        run: |
-          spack unit-test -x --verbose --cov --cov-config=pyproject.toml
-          ./share/spack/qa/validate_last_exit.ps1
-      - uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b
-        with:
-          name: coverage-windows
-          path: coverage
-          include-hidden-files: true

.github/workflows/valid-style.yml (new file)

@@ -0,0 +1,60 @@
name: style
on:
workflow_call:
inputs:
with_coverage:
required: true
type: string
concurrency:
group: style-${{github.ref}}-${{github.event.pull_request.number || github.run_number}}
cancel-in-progress: true
jobs:
# Validate that the code can be run on all the Python versions
# supported by Spack
validate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8 # @v2
- uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984 # @v2
with:
python-version: '3.11'
cache: 'pip'
- name: Install Python Packages
run: |
pip install --upgrade pip
pip install --upgrade vermin
- name: vermin (Spack's Core)
run: vermin --backport importlib --backport argparse --violations --backport typing -t=3.6- -vvv lib/spack/spack/ lib/spack/llnl/ bin/
- name: vermin (Repositories)
run: vermin --backport importlib --backport argparse --violations --backport typing -t=3.6- -vvv var/spack/repos
# Run style checks on the files that have been changed
style:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8 # @v2
with:
fetch-depth: 0
- uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984 # @v2
with:
python-version: '3.11'
cache: 'pip'
- name: Install Python packages
run: |
python3 -m pip install --upgrade pip six setuptools types-six black mypy isort clingo flake8
- name: Setup git configuration
run: |
# Need this for the git tests to succeed.
git --version
. .github/workflows/setup_git.sh
- name: Run style tests
run: |
share/spack/qa/run-style-tests
audit:
uses: ./.github/workflows/audit.yaml
with:
with_coverage: ${{ inputs.with_coverage }}
python_version: '3.11'

.github/workflows/windows_python.yml

@@ -0,0 +1,158 @@
name: windows
on:
workflow_call:
concurrency:
group: windows-${{github.ref}}-${{github.event.pull_request.number || github.run_number}}
cancel-in-progress: true
defaults:
run:
shell:
powershell Invoke-Expression -Command ".\share\spack\qa\windows_test_setup.ps1"; {0}
jobs:
unit-tests:
runs-on: windows-latest
steps:
- uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
with:
fetch-depth: 0
- uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984
with:
python-version: 3.9
- name: Install Python packages
run: |
python -m pip install --upgrade pip six pywin32 setuptools codecov pytest-cov clingo
- name: Create local develop
run: |
.\spack\.github\workflows\setup_git.ps1
- name: Unit Test
run: |
echo F|xcopy .\spack\share\spack\qa\configuration\windows_config.yaml $env:USERPROFILE\.spack\windows\config.yaml
cd spack
dir
spack unit-test -x --verbose --cov --cov-config=pyproject.toml --ignore=lib/spack/spack/test/cmd
coverage combine -a
coverage xml
- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
with:
flags: unittests,windows
unit-tests-cmd:
runs-on: windows-latest
steps:
- uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
with:
fetch-depth: 0
- uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984
with:
python-version: 3.9
- name: Install Python packages
run: |
python -m pip install --upgrade pip six pywin32 setuptools codecov coverage pytest-cov clingo
- name: Create local develop
run: |
.\spack\.github\workflows\setup_git.ps1
- name: Command Unit Test
run: |
echo F|xcopy .\spack\share\spack\qa\configuration\windows_config.yaml $env:USERPROFILE\.spack\windows\config.yaml
cd spack
spack unit-test -x --verbose --cov --cov-config=pyproject.toml lib/spack/spack/test/cmd
coverage combine -a
coverage xml
- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
with:
flags: unittests,windows
build-abseil:
runs-on: windows-latest
steps:
- uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
with:
fetch-depth: 0
- uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984
with:
python-version: 3.9
- name: Install Python packages
run: |
python -m pip install --upgrade pip six pywin32 setuptools codecov coverage
- name: Build Test
run: |
spack compiler find
echo F|xcopy .\spack\share\spack\qa\configuration\windows_config.yaml $env:USERPROFILE\.spack\windows\config.yaml
spack external find cmake
spack external find ninja
spack -d install abseil-cpp
make-installer:
runs-on: windows-latest
steps:
- name: Disable Windows Symlinks
run: |
git config --global core.symlinks false
shell:
powershell
- uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
with:
fetch-depth: 0
- uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984
with:
python-version: 3.9
- name: Install Python packages
run: |
python -m pip install --upgrade pip six pywin32 setuptools
- name: Add Light and Candle to Path
run: |
$env:WIX >> $GITHUB_PATH
- name: Run Installer
run: |
.\spack\share\spack\qa\setup_spack.ps1
spack make-installer -s spack -g SILENT pkg
echo "installer_root=$((pwd).Path)" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf8 -Append
env:
ProgressPreference: SilentlyContinue
- uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb
with:
name: Windows Spack Installer Bundle
path: ${{ env.installer_root }}\pkg\Spack.exe
- uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb
with:
name: Windows Spack Installer
path: ${{ env.installer_root}}\pkg\Spack.msi
execute-installer:
needs: make-installer
runs-on: windows-latest
defaults:
run:
shell: pwsh
steps:
- uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984
with:
python-version: 3.9
- name: Install Python packages
run: |
python -m pip install --upgrade pip six pywin32 setuptools
- name: Setup installer directory
run: |
mkdir -p spack_installer
echo "spack_installer=$((pwd).Path)\spack_installer" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf8 -Append
- uses: actions/download-artifact@v3
with:
name: Windows Spack Installer Bundle
path: ${{ env.spack_installer }}
- name: Execute Bundled Installer
run: |
$proc = Start-Process ${{ env.spack_installer }}\spack.exe "/install /quiet" -Passthru
$handle = $proc.Handle # cache proc.Handle
$proc.WaitForExit();
$LASTEXITCODE
env:
ProgressPreference: SilentlyContinue
- uses: actions/download-artifact@v3
with:
name: Windows Spack Installer
path: ${{ env.spack_installer }}
- name: Execute MSI
run: |
$proc = Start-Process ${{ env.spack_installer }}\spack.msi "/quiet" -Passthru
$handle = $proc.Handle # cache proc.Handle
$proc.WaitForExit();
$LASTEXITCODE

.gitignore

@@ -201,6 +201,7 @@ tramp
# Org-mode # Org-mode
.org-id-locations .org-id-locations
*_archive
# flymake-mode # flymake-mode
*_flymake.* *_flymake.*


@@ -1,39 +1,10 @@
version: 2 version: 2
build:
os: "ubuntu-22.04"
apt_packages:
- graphviz
tools:
python: "3.11"
sphinx: sphinx:
configuration: lib/spack/docs/conf.py configuration: lib/spack/docs/conf.py
fail_on_warning: true fail_on_warning: true
python: python:
version: 3.7
install: install:
- requirements: lib/spack/docs/requirements.txt - requirements: lib/spack/docs/requirements.txt
search:
ranking:
spack.html: -10
spack.*.html: -10
llnl.html: -10
llnl.*.html: -10
_modules/*: -10
command_index.html: -9
basic_usage.html: 5
configuration.html: 5
config_yaml.html: 5
packages_yaml.html: 5
build_settings.html: 5
environments.html: 5
containers.html: 5
mirrors.html: 5
module_file_support.html: 5
repositories.html: 5
binary_caches.html: 5
chain.html: 5
pipelines.html: 5
packaging_guide.html: 5

File diff suppressed because it is too large

@@ -27,57 +27,12 @@
# And here's the CITATION.cff format: # And here's the CITATION.cff format:
# #
cff-version: 1.2.0 cff-version: 1.2.0
type: software
message: "If you are referencing Spack in a publication, please cite the paper below." message: "If you are referencing Spack in a publication, please cite the paper below."
title: "The Spack Package Manager: Bringing Order to HPC Software Chaos"
abstract: >-
Large HPC centers spend considerable time supporting software for thousands of users, but the
complexity of HPC software is quickly outpacing the capabilities of existing software management
tools. Scientific applications require specific versions of compilers, MPI, and other dependency
libraries, so using a single, standard software stack is infeasible. However, managing many
configurations is difficult because the configuration space is combinatorial in size. We
introduce Spack, a tool used at Lawrence Livermore National Laboratory to manage this complexity.
Spack provides a novel, recursive specification syntax to invoke parametric builds of packages
and dependencies. It allows any number of builds to coexist on the same system, and it ensures
that installed packages can find their dependencies, regardless of the environment. We show
through real-world use cases that Spack supports diverse and demanding applications, bringing
order to HPC software chaos.
preferred-citation: preferred-citation:
title: "The Spack Package Manager: Bringing Order to HPC Software Chaos"
type: conference-paper type: conference-paper
url: "https://tgamblin.github.io/pubs/spack-sc15.pdf" doi: "10.1145/2807591.2807623"
url: "https://github.com/spack/spack"
authors: authors:
- family-names: "Gamblin"
given-names: "Todd"
- family-names: "LeGendre"
given-names: "Matthew"
- family-names: "Collette"
given-names: "Michael R."
- family-names: "Lee"
given-names: "Gregory L."
- family-names: "Moody"
given-names: "Adam"
- family-names: "de Supinski"
given-names: "Bronis R."
- family-names: "Futral"
given-names: "Scott"
conference:
name: "Supercomputing 2015 (SC15)"
city: "Austin"
region: "Texas"
country: "US"
date-start: 2015-11-15
date-end: 2015-11-20
month: 11
year: 2015
identifiers:
- description: "The concept DOI of the work."
type: doi
value: 10.1145/2807591.2807623
- description: "The DOE Document Release Number of the work"
type: other
value: "LLNL-CONF-669890"
authors:
- family-names: "Gamblin" - family-names: "Gamblin"
given-names: "Todd" given-names: "Todd"
- family-names: "LeGendre" - family-names: "LeGendre"
@@ -92,3 +47,12 @@ authors:
given-names: "Bronis R." given-names: "Bronis R."
- family-names: "Futral" - family-names: "Futral"
given-names: "Scott" given-names: "Scott"
title: "The Spack Package Manager: Bringing Order to HPC Software Chaos"
conference:
name: "Supercomputing 2015 (SC15)"
city: "Austin"
region: "Texas"
country: "USA"
month: November 15-20
year: 2015
notes: LLNL-CONF-669890


@@ -8,9 +8,8 @@ or http://www.apache.org/licenses/LICENSE-2.0) or the MIT license,
Copyrights and patents in the Spack project are retained by contributors. Copyrights and patents in the Spack project are retained by contributors.
No copyright assignment is required to contribute to Spack. No copyright assignment is required to contribute to Spack.
Spack was originally developed in 2013 by Lawrence Livermore National Spack was originally distributed under the LGPL-2.1 license. Consent from
Security, LLC. It was originally distributed under the LGPL-2.1 license. contributors to relicense to Apache-2.0/MIT is documented at
Consent from contributors to relicense to Apache-2.0/MIT is documented at
https://github.com/spack/spack/issues/9137. https://github.com/spack/spack/issues/9137.
@@ -103,6 +102,6 @@ PackageName: sbang
PackageHomePage: https://github.com/spack/sbang PackageHomePage: https://github.com/spack/sbang
PackageLicenseDeclared: Apache-2.0 OR MIT PackageLicenseDeclared: Apache-2.0 OR MIT
PackageName: typing_extensions PackageName: six
PackageHomePage: https://pypi.org/project/typing-extensions/ PackageHomePage: https://pypi.python.org/pypi/six
PackageLicenseDeclared: Python-2.0 PackageLicenseDeclared: MIT


@@ -1,6 +1,6 @@
MIT License MIT License
Copyright (c) Spack Project Developers. Copyright (c) 2013-2022 LLNS, LLC and other Spack Project Developers.
Permission is hereby granted, free of charge, to any person obtaining a copy Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal of this software and associated documentation files (the "Software"), to deal


@@ -1,38 +1,16 @@
<div align="left"> # <img src="https://cdn.rawgit.com/spack/spack/develop/share/spack/logo/spack-logo.svg" width="64" valign="middle" alt="Spack"/> Spack
<h2> [![Unit Tests](https://github.com/spack/spack/workflows/linux%20tests/badge.svg)](https://github.com/spack/spack/actions)
<picture> [![Bootstrapping](https://github.com/spack/spack/actions/workflows/bootstrap.yml/badge.svg)](https://github.com/spack/spack/actions/workflows/bootstrap.yml)
<source media="(prefers-color-scheme: dark)" srcset="https://cdn.rawgit.com/spack/spack/develop/share/spack/logo/spack-logo-white-text.svg" width="250"> [![codecov](https://codecov.io/gh/spack/spack/branch/develop/graph/badge.svg)](https://codecov.io/gh/spack/spack)
<source media="(prefers-color-scheme: light)" srcset="https://cdn.rawgit.com/spack/spack/develop/share/spack/logo/spack-logo-text.svg" width="250"> [![Containers](https://github.com/spack/spack/actions/workflows/build-containers.yml/badge.svg)](https://github.com/spack/spack/actions/workflows/build-containers.yml)
<img alt="Spack" src="https://cdn.rawgit.com/spack/spack/develop/share/spack/logo/spack-logo-text.svg" width="250"> [![Read the Docs](https://readthedocs.org/projects/spack/badge/?version=latest)](https://spack.readthedocs.io)
</picture> [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
[![Slack](https://slack.spack.io/badge.svg)](https://slack.spack.io)
<br>
<br clear="all">
<a href="https://github.com/spack/spack/actions/workflows/ci.yml"><img src="https://github.com/spack/spack/workflows/ci/badge.svg" alt="CI Status"></a>
<a href="https://github.com/spack/spack/actions/workflows/bootstrapping.yml"><img src="https://github.com/spack/spack/actions/workflows/bootstrap.yml/badge.svg" alt="Bootstrap Status"></a>
<a href="https://github.com/spack/spack/actions/workflows/build-containers.yml"><img src="https://github.com/spack/spack/actions/workflows/build-containers.yml/badge.svg" alt="Containers Status"></a>
<a href="https://spack.readthedocs.io"><img src="https://readthedocs.org/projects/spack/badge/?version=latest" alt="Documentation Status"></a>
<a href="https://codecov.io/gh/spack/spack"><img src="https://codecov.io/gh/spack/spack/branch/develop/graph/badge.svg" alt="Code coverage"/></a>
<a href="https://slack.spack.io"><img src="https://slack.spack.io/badge.svg" alt="Slack"/></a>
<a href="https://matrix.to/#/#spack-space:matrix.org"><img src="https://img.shields.io/matrix/spack-space%3Amatrix.org?label=matrix" alt="Matrix"/></a>
</h2>
**[Getting Started] &nbsp;&nbsp; [Config] &nbsp;&nbsp; [Community] &nbsp;&nbsp; [Contributing] &nbsp;&nbsp; [Packaging Guide]**
[Getting Started]: https://spack.readthedocs.io/en/latest/getting_started.html
[Config]: https://spack.readthedocs.io/en/latest/configuration.html
[Community]: #community
[Contributing]: https://spack.readthedocs.io/en/latest/contribution_guide.html
[Packaging Guide]: https://spack.readthedocs.io/en/latest/packaging_guide.html
</div>
Spack is a multi-platform package manager that builds and installs Spack is a multi-platform package manager that builds and installs
multiple versions and configurations of software. It works on Linux, multiple versions and configurations of software. It works on Linux,
macOS, Windows, and many supercomputers. Spack is non-destructive: installing a macOS, and many supercomputers. Spack is non-destructive: installing a
new version of a package does not break existing installations, so many new version of a package does not break existing installations, so many
configurations of the same package can coexist. configurations of the same package can coexist.
@@ -46,41 +24,12 @@ See the
[Feature Overview](https://spack.readthedocs.io/en/latest/features.html) [Feature Overview](https://spack.readthedocs.io/en/latest/features.html)
for examples and highlights. for examples and highlights.
Installation To install spack and your first package, make sure you have Python.
----------------
To install spack, first make sure you have Python & Git.
Then: Then:
```bash $ git clone -c feature.manyFiles=true https://github.com/spack/spack.git
git clone -c feature.manyFiles=true --depth=2 https://github.com/spack/spack.git $ cd spack/bin
``` $ ./spack install zlib
<details>
<summary>What are <code>manyFiles=true</code> and <code>--depth=2</code>?</summary>
<br>
> `-c feature.manyFiles=true` improves git's performance on repositories with 1,000+ files.
>
> `--depth=2` prunes the git history to reduce the size of the Spack installation.
</details>
```bash
# For bash/zsh/sh
. spack/share/spack/setup-env.sh
# For tcsh/csh
source spack/share/spack/setup-env.csh
# For fish
. spack/share/spack/setup-env.fish
```
```bash
# Now you're ready to install a package!
spack install zlib-ng
```
Documentation Documentation
---------------- ----------------
@@ -94,7 +43,7 @@ Tutorial
---------------- ----------------
We maintain a We maintain a
[**hands-on tutorial**](https://spack-tutorial.readthedocs.io/). [**hands-on tutorial**](https://spack.readthedocs.io/en/latest/tutorial.html).
It covers basic to advanced usage, packaging, developer features, and large HPC It covers basic to advanced usage, packaging, developer features, and large HPC
deployments. You can do all of the exercises on your own laptop using a deployments. You can do all of the exercises on your own laptop using a
Docker container. Docker container.
@@ -113,14 +62,10 @@ Resources:
* **Slack workspace**: [spackpm.slack.com](https://spackpm.slack.com). * **Slack workspace**: [spackpm.slack.com](https://spackpm.slack.com).
To get an invitation, visit [slack.spack.io](https://slack.spack.io). To get an invitation, visit [slack.spack.io](https://slack.spack.io).
* **Matrix space**: [#spack-space:matrix.org](https://matrix.to/#/#spack-space:matrix.org): * [**Github Discussions**](https://github.com/spack/spack/discussions): not just for discussions, also Q&A.
[bridged](https://github.com/matrix-org/matrix-appservice-slack#matrix-appservice-slack) to Slack. * **Mailing list**: [groups.google.com/d/forum/spack](https://groups.google.com/d/forum/spack)
* [**Github Discussions**](https://github.com/spack/spack/discussions): * **Twitter**: [@spackpm](https://twitter.com/spackpm). Be sure to
for Q&A and discussions. Note the pinned discussions for announcements.
* **X**: [@spackpm](https://twitter.com/spackpm). Be sure to
`@mention` us! `@mention` us!
* **Mailing list**: [groups.google.com/d/forum/spack](https://groups.google.com/d/forum/spack):
only for announcements. Please use other venues for discussions.
Contributing Contributing
------------------------ ------------------------


@@ -2,26 +2,24 @@
## Supported Versions ## Supported Versions
We provide security updates for `develop` and for the last two We provide security updates for the following releases.
stable (`0.x`) release series of Spack. Security updates will be
made available as patch (`0.x.1`, `0.x.2`, etc.) releases.
For more on Spack's release structure, see For more on Spack's release structure, see
[`README.md`](https://github.com/spack/spack#releases). [`README.md`](https://github.com/spack/spack#releases).
| Version | Supported |
| ------- | ------------------ |
| develop | :white_check_mark: |
| 0.19.x | :white_check_mark: |
| 0.18.x | :white_check_mark: |
## Reporting a Vulnerability ## Reporting a Vulnerability
You can report a vulnerability using GitHub's private reporting To report a vulnerability or other security
feature: issue, email maintainers@spack.io.
1. Go to [github.com/spack/spack/security](https://github.com/spack/spack/security). You can expect to hear back within two days.
2. Click "Report a vulnerability" in the upper right corner of that page. If your security issue is accepted, we will do
3. Fill out the form and submit your draft security advisory. our best to release a fix within a week. If
fixing the issue will take longer than this,
More details are available in we will discuss timeline options with you.
[GitHub's docs](https://docs.github.com/en/code-security/security-advisories/guidance-on-reporting-and-writing/privately-reporting-a-security-vulnerability).
You can expect to hear back about security issues within two days.
If your security issue is accepted, we will do our best to release
a fix within a week. If fixing the issue will take longer than
this, we will discuss timeline options with you.


@@ -1,4 +1,5 @@
# Copyright Spack Project Developers. See COPYRIGHT file for details. # Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
# #
# SPDX-License-Identifier: (Apache-2.0 OR MIT) # SPDX-License-Identifier: (Apache-2.0 OR MIT)
import subprocess import subprocess
@@ -9,7 +10,6 @@ def getpywin():
try: try:
import win32con # noqa: F401 import win32con # noqa: F401
except ImportError: except ImportError:
print("pyWin32 not installed but is required...\nInstalling via pip:")
subprocess.check_call([sys.executable, "-m", "pip", "-q", "install", "--upgrade", "pip"]) subprocess.check_call([sys.executable, "-m", "pip", "-q", "install", "--upgrade", "pip"])
subprocess.check_call([sys.executable, "-m", "pip", "-q", "install", "pywin32"]) subprocess.check_call([sys.executable, "-m", "pip", "-q", "install", "pywin32"])


@@ -1,6 +1,7 @@
#!/bin/sh #!/bin/sh
# #
# Copyright sbang project developers. See COPYRIGHT file for details. # Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# sbang project developers. See the top-level COPYRIGHT file for details.
# #
# SPDX-License-Identifier: (Apache-2.0 OR MIT) # SPDX-License-Identifier: (Apache-2.0 OR MIT)


@@ -1,7 +1,8 @@
#!/bin/sh #!/bin/sh
# -*- python -*- # -*- python -*-
# #
# Copyright Spack Project Developers. See COPYRIGHT file for details. # Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
# #
# SPDX-License-Identifier: (Apache-2.0 OR MIT) # SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -24,7 +25,10 @@ exit 1
# Line above is a shell no-op, and ends a python multi-line comment. # Line above is a shell no-op, and ends a python multi-line comment.
# The code above runs this file with our preferred python interpreter. # The code above runs this file with our preferred python interpreter.
from __future__ import print_function
import os import os
import os.path
import sys import sys
min_python3 = (3, 6) min_python3 = (3, 6)


@@ -1,6 +1,7 @@
#!/bin/sh #!/bin/sh
# #
# Copyright Spack Project Developers. See COPYRIGHT file for details. # Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
# #
# SPDX-License-Identifier: (Apache-2.0 OR MIT) # SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -21,4 +22,4 @@
# #
# This is compatible across platforms. # This is compatible across platforms.
# #
exec spack python "$@" exec /usr/bin/env spack python "$@"


@@ -72,7 +72,6 @@ config:
root: $TMP_DIR/install root: $TMP_DIR/install
misc_cache: $$user_cache_path/cache misc_cache: $$user_cache_path/cache
source_cache: $$user_cache_path/source source_cache: $$user_cache_path/source
environments_root: $TMP_DIR/envs
EOF EOF
cat >"$SPACK_USER_CONFIG_PATH/bootstrap.yaml" <<EOF cat >"$SPACK_USER_CONFIG_PATH/bootstrap.yaml" <<EOF
bootstrap: bootstrap:


@@ -1,4 +1,5 @@
:: Copyright Spack Project Developers. See COPYRIGHT file for details. :: Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
:: Spack Project Developers. See the top-level COPYRIGHT file for details.
:: ::
:: SPDX-License-Identifier: (Apache-2.0 OR MIT) :: SPDX-License-Identifier: (Apache-2.0 OR MIT)
::####################################################################### ::#######################################################################
@@ -13,7 +14,7 @@
:: ::
@echo off @echo off
set spack="%SPACK_ROOT%"\bin\spack set spack=%SPACK_ROOT%\bin\spack
::####################################################################### ::#######################################################################
:: This is a wrapper around the spack command that forwards calls to :: This is a wrapper around the spack command that forwards calls to
@@ -49,48 +50,25 @@ setlocal enabledelayedexpansion
:: flags will always start with '-', e.g. --help or -V :: flags will always start with '-', e.g. --help or -V
:: subcommands will never start with '-' :: subcommands will never start with '-'
:: everything after the subcommand is an arg :: everything after the subcommand is an arg
for %%x in (%*) do (
set t="%%~x"
:process_cl_args if "!t:~0,1!" == "-" (
rem Set first cl argument (denoted by %1) to be processed if defined _sp_subcommand (
set t=%1 :: We already have a subcommand, processing args now
rem shift moves all cl positional arguments left by one
rem meaning %2 is now %1, this allows us to iterate over each
rem argument
shift
rem assign next "first" cl argument to cl_args, will be null when
rem there are now further arguments to process
set cl_args=%1
if "!t:~0,1!" == "-" (
if defined _sp_subcommand (
rem We already have a subcommand, processing args now
if not defined _sp_args (
set "_sp_args=!t!"
) else (
set "_sp_args=!_sp_args! !t!" set "_sp_args=!_sp_args! !t!"
)
) else (
if not defined _sp_flags (
set "_sp_flags=!t!"
) else ( ) else (
set "_sp_flags=!_sp_flags! !t!" set "_sp_flags=!_sp_flags! !t!"
shift
) )
) ) else if not defined _sp_subcommand (
) else if not defined _sp_subcommand ( set "_sp_subcommand=!t!"
set "_sp_subcommand=!t!" shift
) else (
if not defined _sp_args (
set "_sp_args=!t!"
) else ( ) else (
set "_sp_args=!_sp_args! !t!" set "_sp_args=!_sp_args! !t!"
shift
) )
) )
rem if this is not null, we have more tokens to process
rem start above process again with remaining unprocessed cl args
if defined cl_args goto :process_cl_args
:: --help, -h and -V flags don't require further output parsing. :: --help, -h and -V flags don't require further output parsing.
:: If we encounter, execute and exit :: If we encounter, execute and exit
if defined _sp_flags ( if defined _sp_flags (
@@ -105,24 +83,24 @@ if defined _sp_flags (
exit /B 0 exit /B 0
) )
) )
if not defined _sp_subcommand (
if not defined _sp_args (
if not defined _sp_flags (
python "%spack%" --help
exit /B 0
)
)
)
:: pass parsed variables outside of local scope. Need to do :: pass parsed variables outside of local scope. Need to do
:: this because delayedexpansion can only be set by setlocal :: this because delayedexpansion can only be set by setlocal
endlocal & ( echo %_sp_flags%>flags
set "_sp_flags=%_sp_flags%" echo %_sp_args%>args
set "_sp_args=%_sp_args%" echo %_sp_subcommand%>subcmd
set "_sp_subcommand=%_sp_subcommand%" endlocal
) set /p _sp_subcommand=<subcmd
set /p _sp_flags=<flags
set /p _sp_args=<args
set str_subcommand=%_sp_subcommand:"='%
set str_flags=%_sp_flags:"='%
set str_args=%_sp_args:"='%
if "%str_subcommand%"=="ECHO is off." (set "_sp_subcommand=")
if "%str_flags%"=="ECHO is off." (set "_sp_flags=")
if "%str_args%"=="ECHO is off." (set "_sp_args=")
del subcmd
del flags
del args
:: Filter out some commands. For any others, just run the command. :: Filter out some commands. For any others, just run the command.
if "%_sp_subcommand%" == "cd" ( if "%_sp_subcommand%" == "cd" (
@@ -165,9 +143,7 @@ goto :end_switch
:: If no args or args contain --bat or -h/--help: just execute. :: If no args or args contain --bat or -h/--help: just execute.
if NOT defined _sp_args ( if NOT defined _sp_args (
goto :default_case goto :default_case
) )else if NOT "%_sp_args%"=="%_sp_args:--help=%" (
if NOT "%_sp_args%"=="%_sp_args:--help=%" (
goto :default_case goto :default_case
) else if NOT "%_sp_args%"=="%_sp_args: -h=%" ( ) else if NOT "%_sp_args%"=="%_sp_args: -h=%" (
goto :default_case goto :default_case
@@ -175,11 +151,11 @@ if NOT "%_sp_args%"=="%_sp_args:--help=%" (
goto :default_case goto :default_case
) else if NOT "%_sp_args%"=="%_sp_args:deactivate=%" ( ) else if NOT "%_sp_args%"=="%_sp_args:deactivate=%" (
for /f "tokens=* USEBACKQ" %%I in ( for /f "tokens=* USEBACKQ" %%I in (
`call python %spack% %_sp_flags% env deactivate --bat %_sp_args:deactivate=%` `call python "%spack%" %_sp_flags% env deactivate --bat %_sp_args:deactivate=%`
) do %%I ) do %%I
) else if NOT "%_sp_args%"=="%_sp_args:activate=%" ( ) else if NOT "%_sp_args%"=="%_sp_args:activate=%" (
for /f "tokens=* USEBACKQ" %%I in ( for /f "tokens=* USEBACKQ" %%I in (
`python %spack% %_sp_flags% env activate --bat %_sp_args:activate=%` `call python "%spack%" %_sp_flags% env activate --bat %_sp_args:activate=%`
) do %%I ) do %%I
) else ( ) else (
goto :default_case goto :default_case
@@ -187,27 +163,25 @@ if NOT "%_sp_args%"=="%_sp_args:--help=%" (
goto :end_switch goto :end_switch
:case_load :case_load
if NOT defined _sp_args ( :: If args contain --sh, --csh, or -h/--help: just execute.
exit /B 0 if defined _sp_args (
) if NOT "%_sp_args%"=="%_sp_args:--help=%" (
goto :default_case
:: If args contain --bat, or -h/--help: just execute. ) else if NOT "%_sp_args%"=="%_sp_args: -h=%" (
if NOT "%_sp_args%"=="%_sp_args:--help=%" ( goto :default_case
goto :default_case ) else if NOT "%_sp_args%"=="%_sp_args:--bat=%" (
) else if NOT "%_sp_args%"=="%_sp_args:-h=%" ( goto :default_case
goto :default_case )
) else if NOT "%_sp_args%"=="%_sp_args:--bat=%" (
goto :default_case
) else if NOT "%_sp_args%"=="%_sp_args:--list=%" (
goto :default_case
) )
for /f "tokens=* USEBACKQ" %%I in ( for /f "tokens=* USEBACKQ" %%I in (
`python "%spack%" %_sp_flags% %_sp_subcommand% --bat %_sp_args%` `python "%spack%" %_sp_flags% %_sp_subcommand% --bat %_sp_args%`) do %%I
) do %%I )
goto :end_switch goto :end_switch
:case_unload
goto :case_load
:default_case :default_case
python "%spack%" %_sp_flags% %_sp_subcommand% %_sp_args% python "%spack%" %_sp_flags% %_sp_subcommand% %_sp_args%
goto :end_switch goto :end_switch
@@ -240,10 +214,10 @@ for %%Z in ("%_pa_new_path%") do if EXIST %%~sZ\NUL (
exit /b 0 exit /b 0
:: set module system roots :: set module system roots
:_sp_multi_pathadd :_sp_multi_pathadd
for %%I in (%~2) do ( for %%I in (%~2) do (
for %%Z in (%_sp_compatible_sys_types%) do ( for %%Z in (%_sp_compatible_sys_types%) do (
:pathadd "%~1" "%%I\%%Z" :pathadd "%~1" "%%I\%%Z"
) )
) )
exit /B %ERRORLEVEL% exit /B %ERRORLEVEL%


@@ -1,147 +0,0 @@
# Copyright Spack Project Developers. See COPYRIGHT file for details.
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
# #######################################################################
function Compare-CommonArgs {
$CMDArgs = $args[0]
# These arguments take precedence and call for no further parsing of arguments
# invoke actual Spack entrypoint with that context and exit after
"--help", "-h", "--version", "-V" | ForEach-Object {
$arg_opt = $_
if(($CMDArgs) -and ([bool]($CMDArgs.Where({$_ -eq $arg_opt})))) {
return $true
}
}
return $false
}
function Read-SpackArgs {
$SpackCMD_params = @()
$SpackSubCommand = $NULL
$SpackSubCommandArgs = @()
$args_ = $args[0]
$args_ | ForEach-Object {
if (!$SpackSubCommand) {
if($_.SubString(0,1) -eq "-")
{
$SpackCMD_params += $_
}
else{
$SpackSubCommand = $_
}
}
else{
$SpackSubCommandArgs += $_
}
}
return $SpackCMD_params, $SpackSubCommand, $SpackSubCommandArgs
}
function Set-SpackEnv {
# This method is responsible
# for processing the return from $(spack <command>)
# which are returned as System.Object[]'s containing
# a list of env commands
# Invoke-Expression can only handle one command at a time
# so we iterate over the list to invoke the env modification
# expressions one at a time
foreach($envop in $args[0]){
Invoke-Expression $envop
}
}
function Invoke-SpackCD {
if (Compare-CommonArgs $SpackSubCommandArgs) {
python "$Env:SPACK_ROOT/bin/spack" cd -h
}
else {
$LOC = $(python "$Env:SPACK_ROOT/bin/spack" location $SpackSubCommandArgs)
if (($NULL -ne $LOC)){
if ( Test-Path -Path $LOC){
Set-Location $LOC
}
else{
exit 1
}
}
else {
exit 1
}
}
}
function Invoke-SpackEnv {
if (Compare-CommonArgs $SpackSubCommandArgs[0]) {
python "$Env:SPACK_ROOT/bin/spack" env -h
}
else {
$SubCommandSubCommand = $SpackSubCommandArgs[0]
$SubCommandSubCommandArgs = $SpackSubCommandArgs[1..$SpackSubCommandArgs.Count]
switch ($SubCommandSubCommand) {
"activate" {
if (Compare-CommonArgs $SubCommandSubCommandArgs) {
python "$Env:SPACK_ROOT/bin/spack" env activate $SubCommandSubCommandArgs
}
elseif ([bool]($SubCommandSubCommandArgs.Where({$_ -eq "--pwsh"}))) {
python "$Env:SPACK_ROOT/bin/spack" env activate $SubCommandSubCommandArgs
}
elseif (!$SubCommandSubCommandArgs) {
python "$Env:SPACK_ROOT/bin/spack" env activate $SubCommandSubCommandArgs
}
else {
$SpackEnv = $(python "$Env:SPACK_ROOT/bin/spack" $SpackCMD_params env activate "--pwsh" $SubCommandSubCommandArgs)
Set-SpackEnv $SpackEnv
}
}
"deactivate" {
if ([bool]($SubCommandSubCommandArgs.Where({$_ -eq "--pwsh"}))) {
python"$Env:SPACK_ROOT/bin/spack" env deactivate $SubCommandSubCommandArgs
}
elseif($SubCommandSubCommandArgs) {
python "$Env:SPACK_ROOT/bin/spack" env deactivate -h
}
else {
$SpackEnv = $(python "$Env:SPACK_ROOT/bin/spack" $SpackCMD_params env deactivate "--pwsh")
Set-SpackEnv $SpackEnv
}
}
default {python "$Env:SPACK_ROOT/bin/spack" $SpackCMD_params $SpackSubCommand $SpackSubCommandArgs}
}
}
}
function Invoke-SpackLoad {
if (Compare-CommonArgs $SpackSubCommandArgs) {
python "$Env:SPACK_ROOT/bin/spack" $SpackCMD_params $SpackSubCommand $SpackSubCommandArgs
}
elseif ([bool]($SpackSubCommandArgs.Where({($_ -eq "--pwsh") -or ($_ -eq "--list")}))) {
python "$Env:SPACK_ROOT/bin/spack" $SpackCMD_params $SpackSubCommand $SpackSubCommandArgs
}
else {
$SpackEnv = $(python "$Env:SPACK_ROOT/bin/spack" $SpackCMD_params $SpackSubCommand "--pwsh" $SpackSubCommandArgs)
Set-SpackEnv $SpackEnv
}
}
$SpackCMD_params, $SpackSubCommand, $SpackSubCommandArgs = Read-SpackArgs $args
if (Compare-CommonArgs $SpackCMD_params) {
python "$Env:SPACK_ROOT/bin/spack" $SpackCMD_params $SpackSubCommand $SpackSubCommandArgs
exit $LASTEXITCODE
}
# Process Spack commands with special conditions
# all other commands are piped directly to Spack
switch($SpackSubCommand)
{
"cd" {Invoke-SpackCD}
"env" {Invoke-SpackEnv}
"load" {Invoke-SpackLoad}
"unload" {Invoke-SpackLoad}
default {python "$Env:SPACK_ROOT/bin/spack" $SpackCMD_params $SpackSubCommand $SpackSubCommandArgs}
}
exit $LASTEXITCODE


@@ -1,11 +1,72 @@
@ECHO OFF @ECHO OFF
setlocal EnableDelayedExpansion
:: (c) 2021 Lawrence Livermore National Laboratory :: (c) 2021 Lawrence Livermore National Laboratory
:: To use this file independently of Spack's installer, execute this script in its directory, or add the :: To use this file independently of Spack's installer, execute this script in its directory, or add the
:: associated bin directory to your PATH. Invoke to launch Spack Shell. :: associated bin directory to your PATH. Invoke to launch Spack Shell.
:: ::
:: source_dir/spack/bin/spack_cmd.bat :: source_dir/spack/bin/spack_cmd.bat
:: ::
pushd %~dp0..
set SPACK_ROOT=%CD%
pushd %CD%\..
set spackinstdir=%CD%
popd
call "%~dp0..\share\spack\setup-env.bat"
pushd %SPACK_ROOT% :: Check if Python is on the PATH
%comspec% /K if not defined python_pf_ver (
(for /f "delims=" %%F in ('where python.exe') do (
set "python_pf_ver=%%F"
goto :found_python
) ) 2> NUL
)
:found_python
if not defined python_pf_ver (
:: If not, look for Python from the Spack installer
:get_builtin
(for /f "tokens=*" %%g in ('dir /b /a:d "!spackinstdir!\Python*"') do (
set "python_ver=%%g")) 2> NUL
if not defined python_ver (
echo Python was not found on your system.
echo Please install Python or add Python to your PATH.
) else (
set "py_path=!spackinstdir!\!python_ver!"
set "py_exe=!py_path!\python.exe"
)
goto :exitpoint
) else (
:: Python is already on the path
set "py_exe=!python_pf_ver!"
(for /F "tokens=* USEBACKQ" %%F in (
`"!py_exe!" --version`) do (set "output=%%F")) 2>NUL
if not "!output:Microsoft Store=!"=="!output!" goto :get_builtin
goto :exitpoint
)
:exitpoint
set "PATH=%SPACK_ROOT%\bin\;%PATH%"
if defined py_path (
set "PATH=%py_path%;%PATH%"
)
if defined py_exe (
"%py_exe%" "%SPACK_ROOT%\bin\haspywin.py"
"%py_exe%" "%SPACK_ROOT%\bin\spack" external find python >NUL
)
set "EDITOR=notepad"
DOSKEY spacktivate=spack env activate $*
@echo **********************************************************************
@echo ** Spack Package Manager
@echo **********************************************************************
IF "%1"=="" GOTO CONTINUE
set
GOTO:EOF
:continue
set PROMPT=[spack] %PROMPT%
%comspec% /k


@@ -1,4 +1,5 @@
# Copyright Spack Project Developers. See COPYRIGHT file for details. # Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
# #
# SPDX-License-Identifier: (Apache-2.0 OR MIT) # SPDX-License-Identifier: (Apache-2.0 OR MIT)


@@ -9,15 +9,15 @@ bootstrap:
# may not be able to bootstrap all the software that Spack needs, # may not be able to bootstrap all the software that Spack needs,
# depending on its type. # depending on its type.
sources: sources:
- name: github-actions-v0.6 - name: 'github-actions-v0.4'
metadata: $spack/share/spack/bootstrap/github-actions-v0.6 metadata: $spack/share/spack/bootstrap/github-actions-v0.4
- name: github-actions-v0.5 - name: 'github-actions-v0.3'
metadata: $spack/share/spack/bootstrap/github-actions-v0.5 metadata: $spack/share/spack/bootstrap/github-actions-v0.3
- name: spack-install - name: 'spack-install'
metadata: $spack/share/spack/bootstrap/spack-install metadata: $spack/share/spack/bootstrap/spack-install
trusted: trusted:
# By default we trust bootstrapping from sources and from binaries # By default we trust bootstrapping from sources and from binaries
# produced on Github via the workflow # produced on Github via the workflow
github-actions-v0.6: true github-actions-v0.4: true
github-actions-v0.5: true github-actions-v0.3: true
spack-install: true spack-install: true
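These defaults can be overridden at the user scope. A minimal sketch of a hypothetical ~/.spack/bootstrap.yaml that keeps the GitHub Actions binaries trusted but disables the build-from-source fallback (same trusted name-to-boolean mapping as above, source names as in the right-hand column) would be:

    bootstrap:
      trusted:
        # keep using the prebuilt binaries from the GitHub Actions workflow
        github-actions-v0.4: true
        # do not fall back to building the bootstrap tools from source
        spack-install: false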


@@ -13,18 +13,16 @@ concretizer:
# Whether to consider installed packages or packages from buildcaches when # Whether to consider installed packages or packages from buildcaches when
# concretizing specs. If `true`, we'll try to use as many installs/binaries # concretizing specs. If `true`, we'll try to use as many installs/binaries
# as possible, rather than building. If `false`, we'll always give you a fresh # as possible, rather than building. If `false`, we'll always give you a fresh
# concretization. If `dependencies`, we'll only reuse dependencies but # concretization.
# give you a fresh concretization for your root specs.
reuse: true reuse: true
# Options that tune which targets are considered for concretization. The # Options that tune which targets are considered for concretization. The
# concretization process is very sensitive to the number targets, and the time # concretization process is very sensitive to the number targets, and the time
# needed to reach a solution increases noticeably with the number of targets # needed to reach a solution increases noticeably with the number of targets
# considered. # considered.
targets: targets:
# Determine whether we want to target specific or generic # Determine whether we want to target specific or generic microarchitectures.
# microarchitectures. Valid values are: "microarchitectures" or "generic". # An example of the first kind might be for instance "skylake" or "bulldozer",
# An example of "microarchitectures" would be "skylake" or "bulldozer", # while generic microarchitectures are for instance "aarch64" or "x86_64_v4".
# while an example of "generic" would be "aarch64" or "x86_64_v4".
granularity: microarchitectures granularity: microarchitectures
# If "false" allow targets that are incompatible with the current host (for # If "false" allow targets that are incompatible with the current host (for
# instance concretize with target "icelake" while running on "haswell"). # instance concretize with target "icelake" while running on "haswell").
@@ -35,57 +33,4 @@ concretizer:
# environments can always be activated. When "false" perform concretization separately # environments can always be activated. When "false" perform concretization separately
# on each root spec, allowing different versions and variants of the same package in # on each root spec, allowing different versions and variants of the same package in
# an environment. # an environment.
unify: true unify: true
# Option to deal with possible duplicate nodes (i.e. different nodes from the same package) in the DAG.
duplicates:
# "none": allows a single node for any package in the DAG.
# "minimal": allows the duplication of 'build-tools' nodes only
# (e.g. py-setuptools, cmake etc.)
# "full" (experimental): allows separation of the entire build-tool stack (e.g. the entire "cmake" subDAG)
strategy: minimal
# Maximum number of duplicates in a DAG, when using a strategy that allows duplicates. "default" is the
# number used if there isn't a more specific alternative
max_dupes:
default: 1
# Virtuals
c: 2
cxx: 2
fortran: 1
# Regular packages
cmake: 2
gmake: 2
python: 2
python-venv: 2
py-cython: 2
py-flit-core: 2
py-pip: 2
py-setuptools: 2
py-wheel: 2
xcb-proto: 2
# Compilers
gcc: 2
llvm: 2
# Option to specify compatibility between operating systems for reuse of compilers and packages
# Specified as a key: [list] where the key is the os that is being targeted, and the list contains the OS's
# it can reuse. Note this is a directional compatibility so mutual compatibility between two OS's
# requires two entries i.e. os_compatible: {sonoma: [monterey], monterey: [sonoma]}
os_compatible: {}
# Option to specify whether to support splicing. Splicing allows for
# the relinking of concrete package dependencies in order to better
# reuse already built packages with ABI compatible dependencies
splice:
explicit: []
automatic: false
# Maximum time, in seconds, allowed for the 'solve' phase. If set to 0, there is no time limit.
timeout: 0
# If set to true, exceeding the timeout will always result in a concretization error. If false,
# the best (suboptimal) model computed before the timeout is used.
#
# Setting this to false yields unreproducible results, so we advise to use that value only
# for debugging purposes (e.g. check which constraints can help Spack concretize faster).
error_on_timeout: true
# Static analysis may reduce the concretization time by generating smaller ASP problems, in
# cases where there are requirements that prevent part of the search space to be explored.
static_analysis: false
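For reference, a per-user ~/.spack/concretizer.yaml overriding a few of the settings documented above might look like the following minimal sketch (keys as in the defaults above; the values are only illustrative):

    concretizer:
      # always compute a fresh concretization instead of reusing installed packages
      reuse: false
      targets:
        # prefer generic targets (e.g. x86_64_v4) over specific microarchitectures
        granularity: generic
      # concretize each root spec separately rather than unifying the whole environment
      unify: false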


@@ -19,7 +19,7 @@ config:
install_tree: install_tree:
root: $spack/opt/spack root: $spack/opt/spack
projections: projections:
all: "{architecture.platform}-{architecture.target}/{name}-{version}-{hash}" all: "{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}"
# install_tree can include an optional padded length (int or boolean) # install_tree can include an optional padded length (int or boolean)
# default is False (do not pad) # default is False (do not pad)
# if padded_length is True, Spack will pad as close to the system max path # if padded_length is True, Spack will pad as close to the system max path
@@ -54,11 +54,6 @@ config:
# are that it precludes its use as a system package and its ability to be # are that it precludes its use as a system package and its ability to be
# pip installable. # pip installable.
# #
# In Spack environment files, chaining onto existing system Spack
# installations, the $env variable can be used to download, cache and build
# into user-writable paths that are relative to the currently active
# environment.
#
# In any case, if the username is not already in the path, Spack will append # In any case, if the username is not already in the path, Spack will append
# the value of `$user` in an attempt to avoid potential conflicts between # the value of `$user` in an attempt to avoid potential conflicts between
# users in shared temporary spaces. # users in shared temporary spaces.
@@ -81,18 +76,15 @@ config:
source_cache: $spack/var/spack/cache source_cache: $spack/var/spack/cache
## Directory where spack managed environments are created and stored
# environments_root: $spack/var/spack/environments
# Cache directory for miscellaneous files, like the package index. # Cache directory for miscellaneous files, like the package index.
# This can be purged with `spack clean --misc-cache` # This can be purged with `spack clean --misc-cache`
misc_cache: $user_cache_path/cache misc_cache: $user_cache_path/cache
# Abort downloads after this many seconds if no data is received. # Timeout in seconds used for downloading sources etc. This only applies
# Setting this to 0 will disable the timeout. # to the connection phase and can be increased for slow connections or
connect_timeout: 30 # servers. 0 means no timeout.
connect_timeout: 10
# If this is false, tools like curl that use SSL will not verify # If this is false, tools like curl that use SSL will not verify
@@ -100,12 +92,6 @@ config:
verify_ssl: true verify_ssl: true
# This is where custom certs for proxy/firewall are stored.
# It can be a path or environment variable. To match ssl env configuration
# the default is the environment variable SSL_CERT_FILE
ssl_certs: $SSL_CERT_FILE
# Suppress gpg warnings from binary package verification # Suppress gpg warnings from binary package verification
# Only suppresses warnings, gpg failure will still fail the install # Only suppresses warnings, gpg failure will still fail the install
# Potential rationale to set True: users have already explicitly trusted the # Potential rationale to set True: users have already explicitly trusted the
@@ -114,6 +100,12 @@ config:
suppress_gpg_warnings: false suppress_gpg_warnings: false
# If set to true, Spack will attempt to build any compiler on the spec
# that is not already available. If set to False, Spack will only use
# compilers already configured in compilers.yaml
install_missing_compilers: false
# If set to true, Spack will always check checksums after downloading # If set to true, Spack will always check checksums after downloading
# archives. If false, Spack skips the checksum step. # archives. If false, Spack skips the checksum step.
checksum: true checksum: true
@@ -163,11 +155,28 @@ config:
# If set to true, Spack will use ccache to cache C compiles. # If set to true, Spack will use ccache to cache C compiles.
ccache: false ccache: false
# The concretization algorithm to use in Spack. Options are:
#
# 'clingo': Uses a logic solver under the hood to solve DAGs with full
# backtracking and optimization for user preferences. Spack will
# try to bootstrap the logic solver, if not already available.
#
# 'original': Spack's original greedy, fixed-point concretizer. This
# algorithm can make decisions too early and will not backtrack
# sufficiently for many specs. This will soon be deprecated in
# favor of clingo.
#
# See `concretizer.yaml` for more settings you can fine-tune when
# using clingo.
concretizer: clingo
# How long to wait to lock the Spack installation database. This lock is used # How long to wait to lock the Spack installation database. This lock is used
# when Spack needs to manage its own package metadata and all operations are # when Spack needs to manage its own package metadata and all operations are
# expected to complete within the default time limit. The timeout should # expected to complete within the default time limit. The timeout should
# therefore generally be left untouched. # therefore generally be left untouched.
db_lock_timeout: 60 db_lock_timeout: 3
# How long to wait when attempting to modify a package (e.g. to install it). # How long to wait when attempting to modify a package (e.g. to install it).
@@ -193,22 +202,15 @@ config:
# executables with many dependencies, in particular on slow filesystems. # executables with many dependencies, in particular on slow filesystems.
bind: false bind: false
# Controls the handling of missing dynamic libraries after installation.
# Options are ignore (default), warn, or error. If set to error, the
# installation fails if installed binaries reference dynamic libraries that
# are not found in their specified rpaths.
missing_library_policy: ignore
# Set to 'false' to allow installation on filesystems that don't allow setgid bit # Set to 'false' to allow installation on filesystems that don't allow setgid bit
# manipulation by unprivileged user (e.g. AFS) # manipulation by unprivileged user (e.g. AFS)
allow_sgid: true allow_sgid: true
# Whether to show status information during building and installing packages. # Whether to set the terminal title to display status information during
# This gives information about Spack's current progress as well as the current # building and installing packages. This gives information about Spack's
# and total number of packages. Information is shown both in the terminal # current progress as well as the current and total number of packages.
# title and inline. terminal_title: false
install_status: true
# Number of seconds a buildcache's index.json is cached locally before probing # Number of seconds a buildcache's index.json is cached locally before probing
# for updates, within a single Spack invocation. Defaults to 10 minutes. # for updates, within a single Spack invocation. Defaults to 10 minutes.
@@ -217,11 +219,3 @@ config:
flags: flags:
# Whether to keep -Werror flags active in package builds. # Whether to keep -Werror flags active in package builds.
keep_werror: 'none' keep_werror: 'none'
# A mapping of aliases that can be used to define new commands. For instance,
# `sp: spec -I` will define a new command `sp` that will execute `spec` with
# the `-I` argument. Aliases cannot override existing commands.
aliases:
concretise: concretize
containerise: containerize
rm: remove
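As an illustration, a user-scope ~/.spack/config.yaml that tunes a few of the settings above could be as small as the following sketch (all keys appear in the defaults above; the values are hypothetical):

    config:
      # allow slower mirrors more time before a download is aborted
      connect_timeout: 60
      # cache C compiles with ccache
      ccache: true
      # wait longer for the database lock on busy shared filesystems
      db_lock_timeout: 120
      aliases:
        # shorthand from the aliases comment above: `sp` runs `spec -I`
        sp: spec -I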


@@ -0,0 +1,16 @@
# -------------------------------------------------------------------------
# This is the default configuration for Spack's module file generation.
#
# Settings here are versioned with Spack and are intended to provide
# sensible defaults out of the box. Spack maintainers should edit this
# file to keep it current.
#
# Users can override these settings by editing the following files.
#
# Per-spack-instance settings (overrides defaults):
# $SPACK_ROOT/etc/spack/modules.yaml
#
# Per-user settings (overrides default and site settings):
# ~/.spack/modules.yaml
# -------------------------------------------------------------------------
modules: {}


@@ -15,28 +15,16 @@
# ------------------------------------------------------------------------- # -------------------------------------------------------------------------
packages: packages:
all: all:
compiler:
- apple-clang
- clang
- gcc
- intel
providers: providers:
c: [apple-clang, llvm, gcc]
cxx: [apple-clang, llvm, gcc]
elf: [libelf] elf: [libelf]
fortran: [gcc]
fuse: [macfuse] fuse: [macfuse]
gl: [apple-gl]
glu: [apple-glu]
unwind: [apple-libunwind] unwind: [apple-libunwind]
uuid: [apple-libuuid] uuid: [apple-libuuid]
apple-clang:
buildable: false
apple-gl:
buildable: false
externals:
- spec: apple-gl@4.1.0
prefix: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk
apple-glu:
buildable: false
externals:
- spec: apple-glu@1.3.0
prefix: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk
apple-libunwind: apple-libunwind:
buildable: false buildable: false
externals: externals:
@@ -50,13 +38,4 @@ packages:
# Apple bundles libuuid in libsystem_c version 1353.100.2, # Apple bundles libuuid in libsystem_c version 1353.100.2,
# although the version number used here isn't critical # although the version number used here isn't critical
- spec: apple-libuuid@1353.100.2 - spec: apple-libuuid@1353.100.2
prefix: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk prefix: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk
c:
prefer:
- apple-clang
cxx:
prefer:
- apple-clang
fortran:
prefer:
- gcc


@@ -1,4 +1,2 @@
mirrors: mirrors:
spack-public: spack-public: https://mirror.spack.io
binary: false
url: https://mirror.spack.io
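Additional mirrors follow the same name-to-URL format; a hypothetical user-level entry for a local source mirror would look like:

    mirrors:
      # hypothetical local mirror of pre-fetched sources
      local-sources: file:///data/spack-mirror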


@@ -40,12 +40,13 @@ modules:
roots: roots:
tcl: $spack/share/spack/modules tcl: $spack/share/spack/modules
lmod: $spack/share/spack/lmod lmod: $spack/share/spack/lmod
# What type of modules to use ("tcl" and/or "lmod") # What type of modules to use
enable: [] enable:
- tcl
tcl: tcl:
all: all:
autoload: direct autoload: none
# Default configurations if lmod is enabled # Default configurations if lmod is enabled
lmod: lmod:


@@ -15,38 +15,32 @@
# ------------------------------------------------------------------------- # -------------------------------------------------------------------------
packages: packages:
all: all:
compiler: [gcc, intel, pgi, clang, xl, nag, fj, aocc]
providers: providers:
awk: [gawk] awk: [gawk]
armci: [armcimpi]
blas: [openblas, amdblis] blas: [openblas, amdblis]
c: [gcc, llvm, intel-oneapi-compilers]
cxx: [gcc, llvm, intel-oneapi-compilers]
D: [ldc] D: [ldc]
daal: [intel-oneapi-daal] daal: [intel-daal]
elf: [elfutils] elf: [elfutils]
fftw-api: [fftw, amdfftw] fftw-api: [fftw, amdfftw]
flame: [libflame, amdlibflame] flame: [libflame, amdlibflame]
fortran: [gcc, llvm, intel-oneapi-compilers]
fortran-rt: [gcc-runtime, intel-oneapi-runtime]
fuse: [libfuse] fuse: [libfuse]
gl: [glx, osmesa] gl: [glx, osmesa]
glu: [mesa-glu, openglu] glu: [mesa-glu, openglu]
golang: [go, gcc] golang: [go, gcc]
go-or-gccgo-bootstrap: [go-bootstrap, gcc] go-external-or-gccgo-bootstrap: [go-bootstrap, gcc]
iconv: [libiconv] iconv: [libiconv]
ipp: [intel-oneapi-ipp] ipp: [intel-ipp]
java: [openjdk, jdk] java: [openjdk, jdk, ibm-java]
jpeg: [libjpeg-turbo, libjpeg] jpeg: [libjpeg-turbo, libjpeg]
lapack: [openblas, amdlibflame] lapack: [openblas, amdlibflame]
libc: [glibc, musl] libglx: [mesa+glx, mesa18+glx]
libgfortran: [gcc-runtime]
libglx: [mesa+glx]
libifcore: [intel-oneapi-runtime]
libllvm: [llvm] libllvm: [llvm]
libosmesa: [mesa+osmesa, mesa18+osmesa]
lua-lang: [lua, lua-luajit-openresty, lua-luajit] lua-lang: [lua, lua-luajit-openresty, lua-luajit]
luajit: [lua-luajit-openresty, lua-luajit] luajit: [lua-luajit-openresty, lua-luajit]
mariadb-client: [mariadb-c-client, mariadb] mariadb-client: [mariadb-c-client, mariadb]
mkl: [intel-oneapi-mkl] mkl: [intel-mkl]
mpe: [mpe2] mpe: [mpe2]
mpi: [openmpi, mpich] mpi: [openmpi, mpich]
mysql-client: [mysql, mariadb-c-client] mysql-client: [mysql, mariadb-c-client]
@@ -55,7 +49,6 @@ packages:
pbs: [openpbs, torque] pbs: [openpbs, torque]
pil: [py-pillow] pil: [py-pillow]
pkgconfig: [pkgconf, pkg-config] pkgconfig: [pkgconf, pkg-config]
qmake: [qt-base, qt]
rpc: [libtirpc] rpc: [libtirpc]
scalapack: [netlib-scalapack, amdscalapack] scalapack: [netlib-scalapack, amdscalapack]
sycl: [hipsycl] sycl: [hipsycl]
@@ -63,48 +56,9 @@ packages:
tbb: [intel-tbb] tbb: [intel-tbb]
unwind: [libunwind] unwind: [libunwind]
uuid: [util-linux-uuid, libuuid] uuid: [util-linux-uuid, libuuid]
wasi-sdk: [wasi-sdk-prebuilt]
xkbdata-api: [xkeyboard-config, xkbdata]
xxd: [xxd-standalone, vim] xxd: [xxd-standalone, vim]
yacc: [bison, byacc] yacc: [bison, byacc]
ziglang: [zig] ziglang: [zig]
zlib-api: [zlib-ng+compat, zlib]
permissions: permissions:
read: world read: world
write: user write: user
cce:
buildable: false
cray-fftw:
buildable: false
cray-libsci:
buildable: false
cray-mpich:
buildable: false
cray-mvapich2:
buildable: false
cray-pmi:
buildable: false
egl:
buildable: false
essl:
buildable: false
fj:
buildable: false
fujitsu-mpi:
buildable: false
fujitsu-ssl2:
buildable: false
glibc:
buildable: false
hpcx-mpi:
buildable: false
iconv:
prefer: [libiconv]
mpt:
buildable: false
musl:
buildable: false
spectrum-mpi:
buildable: false
xl:
buildable: false
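Site and user scopes typically narrow these defaults. A minimal, hypothetical packages.yaml override reusing the keys shown above (providers, buildable and externals) might read:

    packages:
      all:
        providers:
          # try mpich before openmpi for the mpi virtual
          mpi: [mpich, openmpi]
      openmpi:
        # never build openmpi; only use the external installation below
        buildable: false
        externals:
        - spec: openmpi@4.1.4   # hypothetical version and prefix
          prefix: /usr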


@@ -11,4 +11,4 @@
# ~/.spack/repos.yaml # ~/.spack/repos.yaml
# ------------------------------------------------------------------------- # -------------------------------------------------------------------------
repos: repos:
- $spack/var/spack/repos/spack_repo/builtin - $spack/var/spack/repos/builtin
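A hypothetical user-level ~/.spack/repos.yaml that searches a personal repository before the builtin one keeps the same list format (builtin path as in the right-hand column):

    repos:
    # hypothetical local package repository, searched first
    - /home/user/my-spack-repo
    - $spack/var/spack/repos/builtin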


@@ -1,5 +1,5 @@
config: config:
locks: false locks: false
concretizer: clingo
build_stage:: build_stage::
- '$user_cache_path/stage' - '$spack/.staging'
stage_name: '{name}-{version}-{hash:7}'


@@ -1,27 +0,0 @@
# -------------------------------------------------------------------------
# This file controls default concretization preferences for Spack.
#
# Settings here are versioned with Spack and are intended to provide
# sensible defaults out of the box. Spack maintainers should edit this
# file to keep it current.
#
# Users can override these settings by editing the following files.
#
# Per-spack-instance settings (overrides defaults):
# $SPACK_ROOT/etc/spack/packages.yaml
#
# Per-user settings (overrides default and site settings):
# ~/.spack/packages.yaml
# -------------------------------------------------------------------------
packages:
all:
providers:
c : [msvc]
cxx: [msvc]
mpi: [msmpi]
gl: [wgl]
mpi:
require:
- one_of: [msmpi]
msvc:
buildable: false

View File

@@ -1,7 +1,7 @@
package_list.html
command_index.rst command_index.rst
spack*.rst spack*.rst
llnl*.rst llnl*.rst
_build _build
.spack-env .spack-env
spack.lock spack.lock
_spack_root

View File

@@ -1,15 +0,0 @@
# Copyright Spack Project Developers. See COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
# The name of the Pygments (syntax highlighting) style to use.
# We use our own extension of the default style with a few modifications
from pygments.styles.default import DefaultStyle
from pygments.token import Generic
class SpackStyle(DefaultStyle):
styles = DefaultStyle.styles.copy()
background_color = "#f4f4f8"
styles[Generic.Output] = "#355"
styles[Generic.Prompt] = "bold #346ec9"

View File

@@ -1,12 +0,0 @@
{% extends "!layout.html" %}
{%- block extrahead %}
<!-- Google tag (gtag.js) -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-S0PQ7WV75K"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'G-S0PQ7WV75K');
</script>
{% endblock %}

162
lib/spack/docs/analyze.rst Normal file
View File

@@ -0,0 +1,162 @@
.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _analyze:
=======
Analyze
=======
The analyze command is a front-end to various tools that let us analyze
package installations. Each analyzer is a module for a different kind
of analysis that can be done on a package installation, including (but not
limited to) binary, log, or text analysis. Thus, the analyze command group
allows you to take an existing package install, choose an analyzer,
and extract some output for the package using it.
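For example, a single analyzer can be run against an installed spec (an illustrative
sketch; the analyzer and package names are simply the ones used in the examples below):
.. code-block:: console
$ spack analyze run --analyzer install_files zlib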
-----------------
Analyzer Metadata
-----------------
For all analyzers, we write to an ``analyzers`` folder in ``~/.spack``, or the
value that you specify in your spack config at ``config:analyzers_dir``.
For example, here we see the results of running an analysis on zlib:
.. code-block:: console
$ tree ~/.spack/analyzers/
└── linux-ubuntu20.04-skylake
└── gcc-9.3.0
└── zlib-1.2.11-sl7m27mzkbejtkrajigj3a3m37ygv4u2
├── environment_variables
│   └── spack-analyzer-environment-variables.json
├── install_files
│   └── spack-analyzer-install-files.json
└── libabigail
└── spack-analyzer-libabigail-libz.so.1.2.11.xml
This means that you can always find analyzer output in this folder, and it
is organized with the same logic as the package install it was run for.
If you want to customize this top level folder, simply provide the ``--path``
argument to ``spack analyze run``. The nested organization will be maintained
within your custom root.
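For instance, to keep analyzer output under a custom root (the path here is purely
illustrative):
.. code-block:: console
$ spack analyze run --path /tmp/my-analyzer-results zlib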
-----------------
Listing Analyzers
-----------------
If you aren't familiar with Spack's analyzers, you can quickly list those that
are available:
.. code-block:: console
$ spack analyze list-analyzers
install_files : install file listing read from install_manifest.json
environment_variables : environment variables parsed from spack-build-env.txt
config_args : config args loaded from spack-configure-args.txt
libabigail : Application Binary Interface (ABI) features for objects
In the above, the first three are fairly simple - parsing metadata files from
a package install directory and saving the results as JSON analyzer output.
-------------------
Analyzing a Package
-------------------
The analyze command, akin to install, will accept a package spec to perform
an analysis for. The package must be installed. Let's walk through an example
with zlib. We first ask to analyze it. However, since we have more than one
install, we are asked to disambiguate:
.. code-block:: console
$ spack analyze run zlib
==> Error: zlib matches multiple packages.
Matching packages:
fz2bs56 zlib@1.2.11%gcc@7.5.0 arch=linux-ubuntu18.04-skylake
sl7m27m zlib@1.2.11%gcc@9.3.0 arch=linux-ubuntu20.04-skylake
Use a more specific spec.
We can then specify the spec version that we want to analyze:
.. code-block:: console
$ spack analyze run zlib/fz2bs56
If you don't provide any specific analyzer names, by default all analyzers
(shown in the ``list-analyzers`` subcommand list) will be run. If an analyzer does not
have any result, it will be skipped. For example, here is a result running for
zlib:
.. code-block:: console
$ ls ~/.spack/analyzers/linux-ubuntu20.04-skylake/gcc-9.3.0/zlib-1.2.11-sl7m27mzkbejtkrajigj3a3m37ygv4u2/
spack-analyzer-environment-variables.json
spack-analyzer-install-files.json
spack-analyzer-libabigail-libz.so.1.2.11.xml
If you want to run a specific analyzer, ask for it with ``--analyzer``. Here we run
spack analyze on libabigail (already installed) using the libabigail analyzer:
.. code-block:: console
$ spack analyze run --analyzer abigail libabigail
.. _analyze_monitoring:
----------------------
Monitoring An Analysis
----------------------
For any kind of analysis, you can
use a `spack monitor <https://github.com/spack/spack-monitor>`_ "Spackmon"
as a server to upload the same run metadata to. You can
follow the instructions in the `spack monitor documentation <https://spack-monitor.readthedocs.org>`_
to first create a server along with a username and token for yourself.
You can then use this guide to interact with the server.
You should first export our spack monitor token and username to the environment:
.. code-block:: console
$ export SPACKMON_TOKEN=50445263afd8f67e59bd79bff597836ee6c05438
$ export SPACKMON_USER=spacky
By default, the host for your server is expected to be at ``http://127.0.0.1``
with a prefix of ``ms1``, and if this is the case, you can simply add the
``--monitor`` flag to the install command:
.. code-block:: console
$ spack analyze run --monitor wget
If you need to customize the host or the prefix, you can do that as well:
.. code-block:: console
$ spack analyze run --monitor --monitor-prefix monitor --monitor-host https://monitor-service.io wget
If your server doesn't have authentication, you can skip it:
.. code-block:: console
$ spack analyze run --monitor --monitor-disable-auth wget
Regardless of your choice, when you run analyze on an installed package (whether
it was installed with ``--monitor`` or not), you'll see the results generated as they did
before, and a message that the monitor server was pinged:
.. code-block:: console
$ spack analyze --monitor wget
...
==> Sending result for wget bin/wget to monitor.

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details. .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -44,8 +45,7 @@ Listing available packages
To install software with Spack, you need to know what software is To install software with Spack, you need to know what software is
available. You can see a list of available package names at the available. You can see a list of available package names at the
`packages.spack.io <https://packages.spack.io>`_ website, or :ref:`package-list` webpage, or using the ``spack list`` command.
using the ``spack list`` command.
.. _cmd-spack-list: .. _cmd-spack-list:
@@ -60,7 +60,7 @@ can install:
:ellipsis: 10 :ellipsis: 10
There are thousands of them, so we've truncated the output above, but you There are thousands of them, so we've truncated the output above, but you
can find a `full list here <https://packages.spack.io>`_. can find a :ref:`full list here <package-list>`.
Packages are listed by name in alphabetical order. Packages are listed by name in alphabetical order.
A pattern to match with no wildcards, ``*`` or ``?``, A pattern to match with no wildcards, ``*`` or ``?``,
will be treated as though it started and ended with will be treated as though it started and ended with
@@ -864,7 +864,7 @@ There are several different ways to use Spack packages once you have
installed them. As you've seen, spack packages are installed into long installed them. As you've seen, spack packages are installed into long
paths with hashes, and you need a way to get them into your path. The paths with hashes, and you need a way to get them into your path. The
easiest way is to use :ref:`spack load <cmd-spack-load>`, which is easiest way is to use :ref:`spack load <cmd-spack-load>`, which is
described in this section. described in the next section.
Some more advanced ways to use Spack packages include: Some more advanced ways to use Spack packages include:
@@ -942,7 +942,7 @@ first ``libelf`` above, you would run:
$ spack load /qmm4kso $ spack load /qmm4kso
To see which packages that you have loaded to your environment you would To see which packages that you have loaded to your enviornment you would
use ``spack find --loaded``. use ``spack find --loaded``.
.. code-block:: console .. code-block:: console
@@ -958,86 +958,7 @@ use ``spack find --loaded``.
You can also use ``spack load --list`` to get the same output, but it You can also use ``spack load --list`` to get the same output, but it
does not have the full set of query options that ``spack find`` offers. does not have the full set of query options that ``spack find`` offers.
We'll learn more about Spack's spec syntax in :ref:`a later section <sec-specs>`. We'll learn more about Spack's spec syntax in the next section.
.. _extensions:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Python packages and virtual environments
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Spack can install a large number of Python packages. Their names are
typically prefixed with ``py-``. Installing and using them is no
different from any other package:
.. code-block:: console
$ spack install py-numpy
$ spack load py-numpy
$ python3
>>> import numpy
The ``spack load`` command sets the ``PATH`` variable so that the right Python
executable is used, and makes sure that ``numpy`` and its dependencies can be
located in the ``PYTHONPATH``.
Spack is different from other Python package managers in that it installs
every package into its *own* prefix. This is in contrast to ``pip``, which
installs all packages into the same prefix, be it in a virtual environment
or not.
For many users, **virtual environments** are more convenient than repeated
``spack load`` commands, particularly when working with multiple Python
packages. Fortunately Spack supports environments itself, which together
with a view are no different from Python virtual environments.
The recommended way of working with Python extensions such as ``py-numpy``
is through :ref:`Environments <environments>`. The following example creates
a Spack environment with ``numpy`` in the current working directory. It also
puts a filesystem view in ``./view``, which is a more traditional combined
prefix for all packages in the environment.
.. code-block:: console
$ spack env create --with-view view --dir .
$ spack -e . add py-numpy
$ spack -e . concretize
$ spack -e . install
Now you can activate the environment and start using the packages:
.. code-block:: console
$ spack env activate .
$ python3
>>> import numpy
The environment view is also a virtual environment, which is useful if you are
sharing the environment with others who are unfamiliar with Spack. They can
either use the Python executable directly:
.. code-block:: console
$ ./view/bin/python3
>>> import numpy
or use the activation script:
.. code-block:: console
$ source ./view/bin/activate
$ python3
>>> import numpy
In general, there should not be much difference between ``spack env activate``
and using the virtual environment. The main advantage of ``spack env activate``
is that it knows about more packages than just Python packages, and it may set
additional runtime variables that are not covered by the virtual environment
activation script.
See :ref:`environments` for a more in-depth description of Spack
environments and customizations to views.
.. _sec-specs: .. _sec-specs:
@@ -1174,17 +1095,6 @@ unspecified version, but packages can depend on other packages with
could depend on ``mpich@1.2:`` if it can only build with version could depend on ``mpich@1.2:`` if it can only build with version
``1.2`` or higher of ``mpich``. ``1.2`` or higher of ``mpich``.
.. note:: Windows Spec Syntax Caveats
Windows has a few idiosyncrasies when it comes to the Spack spec syntax and the use of certain shells
Spack's spec dependency syntax uses the carat (``^``) character, however this is an escape string in CMD
so it must be escaped with an additional carat (i.e. ``^^``).
CMD also will attempt to interpret strings with ``=`` characters in them. Any spec including this symbol
must double quote the string.
Note: All of these issues are unique to CMD, they can be avoided by using Powershell.
For more context on these caveats see the related issues: `carat <https://github.com/spack/spack/issues/42833>`_ and `equals <https://github.com/spack/spack/issues/43348>`_
Below are more details about the specifiers that you can add to specs. Below are more details about the specifiers that you can add to specs.
.. _version-specifier: .. _version-specifier:
@@ -1193,38 +1103,16 @@ Below are more details about the specifiers that you can add to specs.
Version specifier Version specifier
^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^
A version specifier ``pkg@<specifier>`` comes after a package name A version specifier comes somewhere after a package name and starts
and starts with ``@``. It can be something abstract that matches with ``@``. It can be a single version, e.g. ``@1.0``, ``@3``, or
multiple known versions, or a specific version. During concretization, ``@1.2a7``. Or, it can be a range of versions, such as ``@1.0:1.5``
Spack will pick the optimal version within the spec's constraints (all versions between ``1.0`` and ``1.5``, inclusive). Version ranges
according to policies set for the particular Spack installation. can be open, e.g. ``:3`` means any version up to and including ``3``.
This would include ``3.4`` and ``3.4.2``. ``4.2:`` means any version
The version specifier can be *a specific version*, such as ``@=1.0.0`` or above and including ``4.2``. Finally, a version specifier can be a
``@=1.2a7``. Or, it can be *a range of versions*, such as ``@1.0:1.5``. set of arbitrary versions, such as ``@1.0,1.5,1.7`` (``1.0``, ``1.5``,
Version ranges are inclusive, so this example includes both ``1.0`` or ``1.7``). When you supply such a specifier to ``spack install``,
and any ``1.5.x`` version. Version ranges can be unbounded, e.g. ``@:3`` it constrains the set of versions that Spack will install.
means any version up to and including ``3``. This would include ``3.4``
and ``3.4.2``. Similarly, ``@4.2:`` means any version above and including
``4.2``. As a short-hand, ``@3`` is equivalent to the range ``@3:3`` and
includes any version with major version ``3``.
Versions are ordered lexicographically by their components. For more details
on the order, see :ref:`the packaging guide <version-comparison>`.
Notice that you can distinguish between the specific version ``@=3.2`` and
the range ``@3.2``. This is useful for packages that follow a versioning
scheme that omits the zero patch version number: ``3.2``, ``3.2.1``,
``3.2.2``, etc. In general it is preferable to use the range syntax
``@3.2``, since ranges also match versions with one-off suffixes, such as
``3.2-custom``.
A version specifier can also be a list of ranges and specific versions,
separated by commas. For example, ``@1.0:1.5,=1.7.1`` matches any version
in the range ``1.0:1.5`` and the specific version ``1.7.1``.
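A few illustrative requests (the package and version numbers are hypothetical):
.. code-block:: console
$ spack install zlib@=1.2.13         # exactly version 1.2.13
$ spack install zlib@1.2:1.3         # any version in the inclusive range
$ spack install zlib@1.2:1.3,=2.0.1  # the range, or exactly version 2.0.1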
^^^^^^^^^^^^
Git versions
^^^^^^^^^^^^
For packages with a ``git`` attribute, ``git`` references For packages with a ``git`` attribute, ``git`` references
may be specified instead of a numerical version i.e. branches, tags may be specified instead of a numerical version i.e. branches, tags
@@ -1233,35 +1121,36 @@ reference provided. Acceptable syntaxes for this are:
.. code-block:: sh .. code-block:: sh
# commit hashes
foo@abcdef1234abcdef1234abcdef1234abcdef1234 # 40 character hashes are automatically treated as git commits
foo@git.abcdef1234abcdef1234abcdef1234abcdef1234
# branches and tags # branches and tags
foo@git.develop # use the develop branch foo@git.develop # use the develop branch
foo@git.0.19 # use the 0.19 tag foo@git.0.19 # use the 0.19 tag
Spack always needs to associate a Spack version with the git reference, # commit hashes
which is used for version comparison. This Spack version is heuristically foo@abcdef1234abcdef1234abcdef1234abcdef1234 # 40 character hashes are automatically treated as git commits
taken from the closest valid git tag among ancestors of the git ref. foo@git.abcdef1234abcdef1234abcdef1234abcdef1234
Once a Spack version is associated with a git ref, it always printed with Spack versions from git reference either have an associated version supplied by the user,
the git ref. For example, if the commit ``@git.abcdefg`` is tagged or infer a relationship to known versions from the structure of the git repository. If an
``0.19``, then the spec will be shown as ``@git.abcdefg=0.19``. associated version is supplied by the user, Spack treats the git version as equivalent to that
version for all version comparisons in the package logic (e.g. ``depends_on('foo', when='@1.5')``).
If the git ref is not exactly a tag, then the distance to the nearest tag The associated version can be assigned with ``[git ref]=[version]`` syntax, with the caveat that the specified version is known to Spack from either the package definition, or in the configuration preferences (i.e. ``packages.yaml``).
is also part of the resolved version. ``@git.abcdefg=0.19.git.8`` means
that the commit is 8 commits away from the ``0.19`` tag.
In cases where Spack cannot resolve a sensible version from a git ref,
users can specify the Spack version to use for the git ref. This is done
by appending ``=`` and the Spack version to the git ref. For example:
.. code-block:: sh .. code-block:: sh
foo@git.my_ref=3.2 # use the my_ref tag or branch, but treat it as version 3.2 for version comparisons foo@git.my_ref=3.2 # use the my_ref tag or branch, but treat it as version 3.2 for version comparisons
foo@git.abcdef1234abcdef1234abcdef1234abcdef1234=develop # use the given commit, but treat it as develop for version comparisons foo@git.abcdef1234abcdef1234abcdef1234abcdef1234=develop # use the given commit, but treat it as develop for version comparisons
If an associated version is not supplied then the tags in the git repo are used to determine
the most recent previous version known to Spack. Details about how versions are compared
and how Spack determines if one version is less than another are discussed in the developer guide.
If the version spec is not provided, then Spack will choose one
according to policies set for the particular spack installation. If
the spec is ambiguous, i.e. it could match multiple versions, Spack
will choose a version within the spec's constraints according to
policies set for the particular Spack installation.
Details about how versions are compared and how Spack determines if Details about how versions are compared and how Spack determines if
one version is less than another are discussed in the developer guide. one version is less than another are discussed in the developer guide.
@@ -1291,61 +1180,55 @@ based on site policies.
Variants Variants
^^^^^^^^ ^^^^^^^^
Variants are named options associated with a particular package and are Variants are named options associated with a particular package. They are
typically used to enable or disable certain features at build time. They optional, as each package must provide default values for each variant it
are optional, as each package must provide default values for each variant makes available. Variants can be specified using
it makes available. a flexible parameter syntax ``name=<value>``. For example,
``spack install mercury debug=True`` will install mercury built with debug
The names of variants available for a particular package depend on flags. The names of particular variants available for a package depend on
what was provided by the package author. ``spack info <package>`` will what was provided by the package author. ``spack info <package>`` will
provide information on what build variants are available. provide information on what build variants are available.
There are different types of variants: For compatibility with earlier versions, variants which happen to be
boolean in nature can be specified by a syntax that represents turning
options on and off. For example, in the previous spec we could have
supplied ``mercury +debug`` with the same effect of enabling the debug
compile time option for the libelf package.
1. Boolean variants. Typically used to enable or disable a feature at Depending on the package a variant may have any default value. For
compile time. For example, a package might have a ``debug`` variant that ``mercury`` here, ``debug`` is ``False`` by default, and we turned it on
can be explicitly enabled with ``+debug`` and disabled with ``~debug``. with ``debug=True`` or ``+debug``. If a variant is ``True`` by default
2. Single-valued variants. Often used to set defaults. For example, a package you can turn it off by either adding ``-name`` or ``~name`` to the spec.
might have a ``compression`` variant that determines the default
compression algorithm, which users could set to ``compression=gzip`` or
``compression=zstd``.
3. Multi-valued variants. A package might have a ``fabrics`` variant that
determines which network fabrics to support. Users could set this to
``fabrics=verbs,ofi`` to enable both InfiniBand verbs and OpenFabrics
interfaces. The values are separated by commas.
The meaning of ``fabrics=verbs,ofi`` is to enable *at least* the specified There are two syntaxes here because, depending on context, ``~`` and
fabrics, but other fabrics may be enabled as well. If the intent is to ``-`` may mean different things. In most shells, the following will
enable *only* the specified fabrics, then the ``fabrics:=verbs,ofi`` result in the shell performing home directory substitution:
syntax should be used with the ``:=`` operator.
.. note:: .. code-block:: sh
In certain shells, the ``~`` character is expanded to the home mpileaks ~debug # shell may try to substitute this!
directory. To avoid these issues, avoid whitespace between the package mpileaks~debug # use this instead
name and the variant:
.. code-block:: sh If there is a user called ``debug``, the ``~`` will be incorrectly
expanded. In this situation, you would want to write ``libelf
-debug``. However, ``-`` can be ambiguous when included after a
package name without spaces:
mpileaks ~debug # shell may try to substitute this! .. code-block:: sh
mpileaks~debug # use this instead
Alternatively, you can use the ``-`` character to disable a variant, mpileaks-debug # wrong!
but be aware that this requires a space between the package name and mpileaks -debug # right
the variant:
.. code-block:: sh Spack allows the ``-`` character to be part of package names, so the
above will be interpreted as a request for the ``mpileaks-debug``
package, not a request for ``mpileaks`` built without ``debug``
options. In this scenario, you should write ``mpileaks~debug`` to
avoid ambiguity.
mpileaks-debug # wrong: refers to a package named "mpileaks-debug" When spack normalizes specs, it prints them out with no spaces boolean
mpileaks -debug # right: refers to a package named mpileaks with debug disabled variants using the backwards compatibility syntax and uses only ``~``
for disabled boolean variants. The ``-`` and spaces on the command
As a last resort, ``debug=False`` can also be used to disable a boolean variant. line are provided for convenience and legibility.
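To summarize with a sketch (the packages and variant values are illustrative and may
not all exist):
.. code-block:: console
$ spack install mercury+debug              # enable a boolean variant
$ spack install mercury~debug              # disable a boolean variant
$ spack install openmpi fabrics=verbs,ofi  # enable at least these values
$ spack install openmpi fabrics:=verbs,ofi # enable exactly these values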
"""""""""""""""""""""""""""""""""""
Variant propagation to dependencies
"""""""""""""""""""""""""""""""""""
Spack allows variants to propagate their value to the package's Spack allows variants to propagate their value to the package's
dependency by using ``++``, ``--``, and ``~~`` for boolean variants. dependency by using ``++``, ``--``, and ``~~`` for boolean variants.
@@ -1364,10 +1247,6 @@ For example, for the ``stackstart`` variant:
mpileaks stackstart==4 # variant will be propagated to dependencies mpileaks stackstart==4 # variant will be propagated to dependencies
mpileaks stackstart=4 # only mpileaks will have this variant value mpileaks stackstart=4 # only mpileaks will have this variant value
Spack also allows variants to be propagated from a package that does
not have that variant.
^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^
Compiler Flags Compiler Flags
^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^
@@ -1415,29 +1294,27 @@ that executables will run without the need to set ``LD_LIBRARY_PATH``.
.. code-block:: yaml .. code-block:: yaml
packages: compilers:
gcc: - compiler:
externals: spec: gcc@4.9.3
- spec: gcc@4.9.3 paths:
prefix: /opt/gcc cc: /opt/gcc/bin/gcc
extra_attributes: c++: /opt/gcc/bin/g++
compilers: f77: /opt/gcc/bin/gfortran
c: /opt/gcc/bin/gcc fc: /opt/gcc/bin/gfortran
cxx: /opt/gcc/bin/g++ environment:
fortran: /opt/gcc/bin/gfortran unset:
environment: - BAD_VARIABLE
unset: set:
- BAD_VARIABLE GOOD_VARIABLE_NUM: 1
set: GOOD_VARIABLE_STR: good
GOOD_VARIABLE_NUM: 1 prepend_path:
GOOD_VARIABLE_STR: good PATH: /path/to/binutils
prepend_path: append_path:
PATH: /path/to/binutils LD_LIBRARY_PATH: /opt/gcc/lib
append_path: extra_rpaths:
LD_LIBRARY_PATH: /opt/gcc/lib - /path/to/some/compiler/runtime/directory
extra_rpaths: - /path/to/some/other/compiler/runtime/directory
- /path/to/some/compiler/runtime/directory
- /path/to/some/other/compiler/runtime/directory
^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^
@@ -1455,12 +1332,22 @@ the reserved keywords ``platform``, ``os`` and ``target``:
$ spack install libelf os=ubuntu18.04 $ spack install libelf os=ubuntu18.04
$ spack install libelf target=broadwell $ spack install libelf target=broadwell
or together by using the reserved keyword ``arch``:
.. code-block:: console
$ spack install libelf arch=cray-CNL10-haswell
Normally users don't have to bother specifying the architecture if they Normally users don't have to bother specifying the architecture if they
are installing software for their current host, as in that case the are installing software for their current host, as in that case the
values will be detected automatically. If you need fine-grained control values will be detected automatically. If you need fine-grained control
over which packages use which targets (or over *all* packages' default over which packages use which targets (or over *all* packages' default
target), see :ref:`package-preferences`. target), see :ref:`package-preferences`.
.. admonition:: Cray machines
The situation is a little bit different for Cray machines and a detailed
explanation on how the architecture can be set on them can be found at :ref:`cray-support`
.. _support-for-microarchitectures: .. _support-for-microarchitectures:
@@ -1624,30 +1511,6 @@ any MPI implementation will do. If another package depends on
error. Likewise, if you try to plug in some package that doesn't error. Likewise, if you try to plug in some package that doesn't
provide MPI, Spack will raise an error. provide MPI, Spack will raise an error.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Explicit binding of virtual dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
There are packages that provide more than just one virtual dependency. When interacting with them, users
might want to utilize just a subset of what they could provide, and use other providers for virtuals they
need.
It is possible to be more explicit and tell Spack which dependency should provide which virtual, using a
special syntax:
.. code-block:: console
$ spack spec strumpack ^[virtuals=mpi] intel-parallel-studio+mkl ^[virtuals=lapack] openblas
Concretizing the spec above produces the following DAG:
.. figure:: images/strumpack_virtuals.svg
:scale: 60 %
:align: center
where ``intel-parallel-studio`` *could* provide ``mpi``, ``lapack``, and ``blas`` but is used only for the former. The ``lapack``
and ``blas`` dependencies are satisfied by ``openblas``.
^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^
Specifying Specs by Hash Specifying Specs by Hash
^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^
@@ -1769,24 +1632,19 @@ Verifying installations
The ``spack verify`` command can be used to verify the validity of The ``spack verify`` command can be used to verify the validity of
Spack-installed packages any time after installation. Spack-installed packages any time after installation.
^^^^^^^^^^^^^^^^^^^^^^^^^
``spack verify manifest``
^^^^^^^^^^^^^^^^^^^^^^^^^
At installation time, Spack creates a manifest of every file in the At installation time, Spack creates a manifest of every file in the
installation prefix. For links, Spack tracks the mode, ownership, and installation prefix. For links, Spack tracks the mode, ownership, and
destination. For directories, Spack tracks the mode, and destination. For directories, Spack tracks the mode, and
ownership. For files, Spack tracks the mode, ownership, modification ownership. For files, Spack tracks the mode, ownership, modification
time, hash, and size. The ``spack verify manifest`` command will check, time, hash, and size. The Spack verify command will check, for every
for every file in each package, whether any of those attributes have file in each package, whether any of those attributes have changed. It
changed. It will also check for newly added files or deleted files from will also check for newly added files or deleted files from the
the installation prefix. Spack can either check all installed packages installation prefix. Spack can either check all installed packages
using the `-a,--all` or accept specs listed on the command line to using the `-a,--all` or accept specs listed on the command line to
verify. verify.
The ``spack verify manifest`` command can also verify for individual files The ``spack verify`` command can also verify for individual files that
that they haven't been altered since installation time. If the given file they haven't been altered since installation time. If the given file
is not in a Spack installation prefix, Spack will report that it is is not in a Spack installation prefix, Spack will report that it is
not owned by any package. To check individual files instead of specs, not owned by any package. To check individual files instead of specs,
use the ``-f,--files`` option. use the ``-f,--files`` option.
@@ -1801,21 +1659,164 @@ check only local packages (as opposed to those used transparently from
``upstream`` spack instances) and the ``-j,--json`` option to output ``upstream`` spack instances) and the ``-j,--json`` option to output
machine-readable json data for any errors. machine-readable json data for any errors.
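A few illustrative invocations based on the options above (the spec and file path are
hypothetical):
.. code-block:: console
$ spack verify manifest --all                 # check every installed package
$ spack verify manifest libelf                # check a single spec
$ spack verify manifest --files /path/to/file # check an individual file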
^^^^^^^^^^^^^^^^^^^^^^^^^^
``spack verify libraries``
^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``spack verify libraries`` command can be used to verify that packages .. _extensions:
do not have accidental system dependencies. This command scans the install
prefixes of packages for executables and shared libraries, and resolves
their needed libraries in their RPATHs. When needed libraries cannot be
located, an error is reported. This typically indicates that a package
was linked against a system library, instead of a library provided by
a Spack package.
This verification can also be enabled as a post-install hook by setting ---------------------------
``config:shared_linking:missing_library_policy`` to ``error`` or ``warn`` Extensions & Python support
in :ref:`config.yaml <config-yaml>`. ---------------------------
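A minimal configuration sketch that turns this check into a hard error at install time,
using the configuration key named above:
.. code-block:: yaml
config:
  shared_linking:
    missing_library_policy: error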
Spack's installation model assumes that each package will live in its
own install prefix. However, certain packages are typically installed
*within* the directory hierarchy of other packages. For example,
`Python <https://www.python.org>`_ packages are typically installed in the
``$prefix/lib/python-2.7/site-packages`` directory.
In Spack, installation prefixes are immutable, so this type of installation
is not directly supported. However, it is possible to create views that
allow you to merge install prefixes of multiple packages into a single new prefix.
Views are a convenient way to get a more traditional filesystem structure.
Using *extensions*, you can ensure that Python packages always share the
same prefix in the view as Python itself. Suppose you have
Python installed like so:
.. code-block:: console
$ spack find python
==> 1 installed packages.
-- linux-debian7-x86_64 / gcc@4.4.7 --------------------------------
python@2.7.8
.. _cmd-spack-extensions:
^^^^^^^^^^^^^^^^^^^^
``spack extensions``
^^^^^^^^^^^^^^^^^^^^
You can find extensions for your Python installation like this:
.. code-block:: console
$ spack extensions python
==> python@2.7.8%gcc@4.4.7 arch=linux-debian7-x86_64-703c7a96
==> 36 extensions:
geos py-ipython py-pexpect py-pyside py-sip
py-basemap py-libxml2 py-pil py-pytz py-six
py-biopython py-mako py-pmw py-rpy2 py-sympy
py-cython py-matplotlib py-pychecker py-scientificpython py-virtualenv
py-dateutil py-mpi4py py-pygments py-scikit-learn
py-epydoc py-mx py-pylint py-scipy
py-gnuplot py-nose py-pyparsing py-setuptools
py-h5py py-numpy py-pyqt py-shiboken
==> 12 installed:
-- linux-debian7-x86_64 / gcc@4.4.7 --------------------------------
py-dateutil@2.4.0 py-nose@1.3.4 py-pyside@1.2.2
py-dateutil@2.4.0 py-numpy@1.9.1 py-pytz@2014.10
py-ipython@2.3.1 py-pygments@2.0.1 py-setuptools@11.3.1
py-matplotlib@1.4.2 py-pyparsing@2.0.3 py-six@1.9.0
The extensions are a subset of what's returned by ``spack list``, and
they are packages like any other. They are installed into their own
prefixes, and you can see this with ``spack find --paths``:
.. code-block:: console
$ spack find --paths py-numpy
==> 1 installed packages.
-- linux-debian7-x86_64 / gcc@4.4.7 --------------------------------
py-numpy@1.9.1 ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/py-numpy@1.9.1-66733244
However, even though this package is installed, you cannot use it
directly when you run ``python``:
.. code-block:: console
$ spack load python
$ python
Python 2.7.8 (default, Feb 17 2015, 01:35:25)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-11)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named numpy
>>>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Using Extensions in Environments
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The recommended way of working with extensions such as ``py-numpy``
above is through :ref:`Environments <environments>`. For example,
the following creates an environment in the current working directory
with a filesystem view in the ``./view`` directory:
.. code-block:: console
$ spack env create --with-view view --dir .
$ spack -e . add py-numpy
$ spack -e . concretize
$ spack -e . install
We recommend environments for two reasons. Firstly, environments
can be activated (requires :ref:`shell-support`):
.. code-block:: console
$ spack env activate .
which sets all the right environment variables such as ``PATH`` and
``PYTHONPATH``. This ensures that
.. code-block:: console
$ python
>>> import numpy
works. Secondly, even without shell support, the view ensures
that Python can locate its extensions:
.. code-block:: console
$ ./view/bin/python
>>> import numpy
See :ref:`environments` for a more in-depth description of Spack
environments and customizations to views.
^^^^^^^^^^^^^^^^^^^^
Using ``spack load``
^^^^^^^^^^^^^^^^^^^^
A more traditional way of using Spack and extensions is ``spack load``
(requires :ref:`shell-support`). This will add the extension to ``PYTHONPATH``
in your current shell, and Python itself will be available in the ``PATH``:
.. code-block:: console
$ spack load py-numpy
$ python
>>> import numpy
The loaded packages can be checked using ``spack find --loaded``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Loading Extensions via Modules
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Apart from ``spack env activate`` and ``spack load``, you can load numpy
through your environment modules (using ``environment-modules`` or
``lmod``). This will also add the extension to the ``PYTHONPATH`` in
your current shell.
.. code-block:: console
$ module load <name of numpy module>
If you do not know the name of the specific numpy module you wish to
load, you can use the ``spack module tcl|lmod loads`` command to get
the name of the module from the Spack spec.
----------------------- -----------------------
Filesystem requirements Filesystem requirements
@@ -1916,7 +1917,7 @@ diagnostics. Issues, if found, are reported to stdout:
PKG-DIRECTIVES: 1 issue found PKG-DIRECTIVES: 1 issue found
1. lammps: wrong variant in "conflicts" directive 1. lammps: wrong variant in "conflicts" directive
the variant 'adios' does not exist the variant 'adios' does not exist
in /home/spack/spack/var/spack/repos/spack_repo/builtin/packages/lammps/package.py in /home/spack/spack/var/spack/repos/builtin/packages/lammps/package.py
------------ ------------

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details. .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -12,47 +13,49 @@ Some sites may encourage users to set up their own test environments
before carrying out central installations, or some users may prefer to set before carrying out central installations, or some users may prefer to set
up these environments on their own motivation. To reduce the load of up these environments on their own motivation. To reduce the load of
recompiling otherwise identical package specs in different installations, recompiling otherwise identical package specs in different installations,
installed packages can be put into build cache tarballs, pushed to installed packages can be put into build cache tarballs, uploaded to
your Spack mirror and then downloaded and installed by others. your Spack mirror and then downloaded and installed by others.
Whenever a mirror provides prebuilt packages, Spack will take these packages
into account during concretization and installation, making ``spack install``
significantly faster.
--------------------------
Creating build cache files
--------------------------
.. note:: A compressed tarball of an installed package is created. Tarballs are created
for all of its link and run dependency packages as well. Compressed tarballs are
We use the terms "build cache" and "mirror" often interchangeably. Mirrors signed with gpg and signature and tarball and put in a ``.spack`` file. Optionally,
are used during installation both for sources and prebuilt packages. Build the rpaths (and ids and deps on macOS) can be changed to paths relative to
caches refer to mirrors that provide prebuilt packages. the Spack install tree before the tarball is created.
----------------------
Creating a build cache
----------------------
Build caches are created via: Build caches are created via:
.. code-block:: console .. code-block:: console
$ spack buildcache push <path/url/mirror name> <spec> $ spack buildcache create <spec>
This command takes the locally installed spec and its dependencies, and
creates tarballs of their install prefixes. It also generates metadata files,
signed with GPG. These tarballs and metadata files are then pushed to the
provided binary cache, which can be a local directory or a remote URL.
Here is an example where a build cache is created in a local directory named If you wanted to create a build cache in a local directory, you would provide
"spack-cache", to which we push the "ninja" spec: the ``-d`` argument to target that directory, again also specifying the spec.
Here is an example creating a local directory, "spack-cache" and creating
build cache files for the "ninja" spec:
.. code-block:: console .. code-block:: console
$ spack buildcache push ./spack-cache ninja $ mkdir -p ./spack-cache
==> Pushing binary packages to file:///home/spackuser/spack/spack-cache/build_cache $ spack buildcache create -d ./spack-cache ninja
==> Buildcache files will be output to file:///home/spackuser/spack/spack-cache/build_cache
gpgconf: socketdir is '/run/user/1000/gnupg'
gpg: using "E6DF6A8BD43208E4D6F392F23777740B7DBD643D" as default secret key for signing
Note that ``ninja`` must be installed locally for this to work. Note that the targeted spec must already be installed. Once you have a build cache,
you can add it as a mirror, discussed next.
Once you have a build cache, you can add it as a mirror, discussed next. .. warning::
Spack improved the format used for binary caches in v0.18. The entire v0.18 series
will be able to verify and install binary caches both in the new and in the old format.
Support for using the old format is expected to end in v0.19, so we advise users to
recreate relevant buildcaches using Spack v0.18 or higher.
--------------------------------------- ---------------------------------------
Finding or installing build cache files Finding or installing build cache files
@@ -63,10 +66,10 @@ with:
.. code-block:: console .. code-block:: console
$ spack mirror add <name> <url or path> $ spack mirror add <name> <url>
Both web URLs and local paths on the filesystem can be specified. In the previous Note that the url can be a web url _or_ a local filesystem location. In the previous
example, you might add the directory "spack-cache" and call it ``mymirror``: example, you might add the directory "spack-cache" and call it ``mymirror``:
@@ -91,7 +94,7 @@ this new build cache as follows:
.. code-block:: console .. code-block:: console
$ spack buildcache update-index ./spack-cache $ spack buildcache update-index -d spack-cache/
Now you can use list: Now you can use list:
@@ -102,38 +105,46 @@ Now you can use list:
-- linux-ubuntu20.04-skylake / gcc@9.3.0 ------------------------ -- linux-ubuntu20.04-skylake / gcc@9.3.0 ------------------------
ninja@1.10.2 ninja@1.10.2
With ``mymirror`` configured and an index available, Spack will automatically
use it during concretization and installation. That means that you can expect Great! So now let's say you have a different spack installation, or perhaps just
``spack install ninja`` to fetch prebuilt packages from the mirror. Let's a different environment for the same one, and you want to install a package from
verify by re-installing ninja: that build cache. Let's first uninstall the actual library "ninja" to see if we can
re-install it from the cache.
.. code-block:: console .. code-block:: console
$ spack uninstall ninja $ spack uninstall ninja
$ spack install ninja
==> Installing ninja-1.11.1-yxferyhmrjkosgta5ei6b4lqf6bxbscz
==> Fetching file:///home/spackuser/spack/spack-cache/build_cache/linux-ubuntu20.04-skylake-gcc-9.3.0-ninja-1.10.2-yxferyhmrjkosgta5ei6b4lqf6bxbscz.spec.json.sig
gpg: Signature made Do 12 Jan 2023 16:01:04 CET
gpg: using RSA key 61B82B2B2350E171BD17A1744E3A689061D57BF6
gpg: Good signature from "example (GPG created for Spack) <example@example.com>" [ultimate]
==> Fetching file:///home/spackuser/spack/spack-cache/build_cache/linux-ubuntu20.04-skylake/gcc-9.3.0/ninja-1.10.2/linux-ubuntu20.04-skylake-gcc-9.3.0-ninja-1.10.2-yxferyhmrjkosgta5ei6b4lqf6bxbscz.spack
==> Extracting ninja-1.10.2-yxferyhmrjkosgta5ei6b4lqf6bxbscz from binary cache
==> ninja: Successfully installed ninja-1.11.1-yxferyhmrjkosgta5ei6b4lqf6bxbscz
Search: 0.00s. Fetch: 0.17s. Install: 0.12s. Total: 0.29s
[+] /home/harmen/spack/opt/spack/linux-ubuntu20.04-skylake/gcc-9.3.0/ninja-1.11.1-yxferyhmrjkosgta5ei6b4lqf6bxbscz
It worked! You've just completed a full example of creating a build cache with And now reinstall from the buildcache
a spec of interest, adding it as a mirror, updating its index, listing the contents,
and finally, installing from it.
By default Spack falls back to building from sources when the mirror is not available
or when the package is simply not already available. To force Spack to only install
prebuilt packages, you can use
.. code-block:: console .. code-block:: console
$ spack install --use-buildcache only <package> $ spack buildcache install ninja
==> buildcache spec(s) matching ninja
==> Fetching file:///home/spackuser/spack/spack-cache/build_cache/linux-ubuntu20.04-skylake/gcc-9.3.0/ninja-1.10.2/linux-ubuntu20.04-skylake-gcc-9.3.0-ninja-1.10.2-i4e5luour7jxdpc3bkiykd4imke3mkym.spack
####################################################################################################################################### 100.0%
==> Installing buildcache for spec ninja@1.10.2%gcc@9.3.0 arch=linux-ubuntu20.04-skylake
gpgconf: socketdir is '/run/user/1000/gnupg'
gpg: Signature made Tue 23 Mar 2021 10:16:29 PM MDT
gpg: using RSA key E6DF6A8BD43208E4D6F392F23777740B7DBD643D
gpg: Good signature from "spackuser (GPG created for Spack) <spackuser@noreply.users.github.com>" [ultimate]
It worked! You've just completed a full example of creating a build cache with
a spec of interest, adding it as a mirror, updating it's index, listing the contents,
and finally, installing from it.
Note that the above command is intended to install a particular package to a
build cache you have created, and not to install a package from a build cache.
For the latter, once a mirror is added, by default when you do ``spack install`` the ``--use-cache``
flag is set, and you will install a package from a build cache if it is available.
If you want to always use the cache, you can do:
.. code-block:: console
$ spack install --cache-only <package>
For example, to combine all of the commands above to add the E4S build cache For example, to combine all of the commands above to add the E4S build cache
and then install from it exclusively, you would do: and then install from it exclusively, you would do:
@@ -142,7 +153,7 @@ and then install from it exclusively, you would do:
$ spack mirror add E4S https://cache.e4s.io $ spack mirror add E4S https://cache.e4s.io
$ spack buildcache keys --install --trust $ spack buildcache keys --install --trust
$ spack install --use-buildcache only <package> $ spack install --cache-only <package>
We use ``--install`` and ``--trust`` to say that we are installing keys to our We use ``--install`` and ``--trust`` to say that we are installing keys to our
keyring, and trusting all downloaded keys. keyring, and trusting all downloaded keys.
@@ -152,186 +163,18 @@ keyring, and trusting all downloaded keys.
List of popular build caches List of popular build caches
^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* `Extreme-scale Scientific Software Stack (E4S) <https://e4s-project.github.io/>`_: `build cache <https://oaciss.uoregon.edu/e4s/inventory.html>`_ * `Extreme-scale Scientific Software Stack (E4S) <https://e4s-project.github.io/>`_: `build cache <https://oaciss.uoregon.edu/e4s/inventory.html>`_
-------------------
Build cache signing
-------------------
By default, Spack will add a cryptographic signature to each package pushed to
a build cache, and verifies the signature when installing from a build cache.
Keys for signing can be managed with the :ref:`spack gpg <cmd-spack-gpg>` command,
as well as ``spack buildcache keys`` as mentioned above.
You can disable signing when pushing with ``spack buildcache push --unsigned``,
and disable verification when installing from any build cache with
``spack install --no-check-signature``.
Alternatively, signing and verification can be enabled or disabled on a per build cache
basis:
.. code-block:: console
$ spack mirror add --signed <name> <url> # enable signing and verification
$ spack mirror add --unsigned <name> <url> # disable signing and verification
$ spack mirror set --signed <name> # enable signing and verification for an existing mirror
$ spack mirror set --unsigned <name> # disable signing and verification for an existing mirror
Or you can directly edit the ``mirrors.yaml`` configuration file:
.. code-block:: yaml
mirrors:
<name>:
url: <url>
signed: false # disable signing and verification
See also :ref:`mirrors`.
---------- ----------
Relocation Relocation
---------- ----------
When using buildcaches across different machines, it is likely that the install Initial build and later installation do not necessarily happen at the same
root will be different from the one used to build the binaries. location. Spack provides a relocation capability and corrects for RPATHs and
non-relocatable scripts. However, many packages compile paths into binary
To address this issue, Spack automatically relocates all paths encoded in binaries artifacts directly. In such cases, the build instructions of this package would
and scripts to their new location upon install. need to be adjusted for better re-locatability.
Note that there are some cases where this is not possible: if binaries are built in
a relatively short path, and then installed to a longer path, there may not be enough
space in the binary to encode the new path. In this case, Spack will fail to install
the package from the build cache, and a source build is required.
To reduce the likelihood of this happening, it is highly recommended to add padding to
the install root during the build, as specified in the :ref:`config <config-yaml>`
section of the configuration:
.. code-block:: yaml
config:
install_tree:
root: /opt/spack
padded_length: 128
.. _binary_caches_oci:
---------------------------------
Automatic push to a build cache
---------------------------------
Sometimes it is convenient to push packages to a build cache as soon as they are installed. Spack can do this by setting autopush flag when adding a mirror:
.. code-block:: console
$ spack mirror add --autopush <name> <url or path>
Or the autopush flag can be set for an existing mirror:
.. code-block:: console
$ spack mirror set --autopush <name> # enable automatic push for an existing mirror
$ spack mirror set --no-autopush <name> # disable automatic push for an existing mirror
Then after installing a package it is automatically pushed to all mirrors with ``autopush: true``. The command
.. code-block:: console
$ spack install <package>
will have the same effect as
.. code-block:: console
$ spack install <package>
$ spack buildcache push <cache> <package> # for all caches with autopush: true
.. note::
Packages are automatically pushed to a build cache only if they are built from source.
-----------------------------------------
OCI / Docker V2 registries as build cache
-----------------------------------------
Spack can also use OCI or Docker V2 registries such as Dockerhub, Quay.io,
Github Packages, GitLab Container Registry, JFrog Artifactory, and others
as build caches. This is a convenient way to share binaries using public
infrastructure, or to cache Spack built binaries in Github Actions and
GitLab CI.
To get started, configure an OCI mirror using ``oci://`` as the scheme,
and optionally specify variables that hold the username and password (or
personal access token) for the registry:
.. code-block:: console
$ spack mirror add --oci-username-variable REGISTRY_USER \
--oci-password-variable REGISTRY_TOKEN \
my_registry oci://example.com/my_image
Spack follows the naming conventions of Docker, with Dockerhub as the default
registry. To use Dockerhub, you can omit the registry domain:
.. code-block:: console
$ spack mirror add ... my_registry oci://username/my_image
From here, you can use the mirror as any other build cache:
.. code-block:: console
$ export REGISTRY_USER=...
$ export REGISTRY_TOKEN=...
$ spack buildcache push my_registry <specs...> # push to the registry
$ spack install <specs...> # or install from the registry
A unique feature of buildcaches on top of OCI registries is that it's incredibly
easy to generate a runnable container image with the binaries installed. This
is a great way to make applications available to users without requiring them to
install Spack -- all you need is Docker, Podman or any other OCI-compatible container
runtime.
To produce container images, all you need to do is add the ``--base-image`` flag
when pushing to the build cache:
.. code-block:: console
$ spack buildcache push --base-image ubuntu:20.04 my_registry ninja
Pushed to example.com/my_image:ninja-1.11.1-yxferyhmrjkosgta5ei6b4lqf6bxbscz.spack
$ docker run -it example.com/my_image:ninja-1.11.1-yxferyhmrjkosgta5ei6b4lqf6bxbscz.spack
root@e4c2b6f6b3f4:/# ninja --version
1.11.1
If ``--base-image`` is not specified, distroless images are produced. In practice,
you won't be able to run these as containers, since they don't come with libc and
other system dependencies. However, they are still compatible with tools like
``skopeo``, ``podman``, and ``docker`` for pulling and pushing.
.. note::
The docker ``overlayfs2`` storage driver is limited to 128 layers, above which a
``max depth exceeded`` error may be produced when pulling the image. There
are `alternative drivers <https://docs.docker.com/storage/storagedriver/>`_.
------------------------------------
Spack build cache for GitHub Actions
------------------------------------
To significantly speed up Spack in GitHub Actions, binaries can be cached in
GitHub Packages. This service is an OCI registry that can be linked to a GitHub
repository.
Spack offers a public build cache for GitHub Actions with a set of common packages,
which lets you get started quickly. See the following resources for more information:
* `spack/setup-spack <https://github.com/spack/setup-spack>`_ for setting up Spack in GitHub
Actions
* `spack/github-actions-buildcache <https://github.com/spack/github-actions-buildcache>`_ for
more details on the public build cache
.. _cmd-spack-buildcache: .. _cmd-spack-buildcache:
@@ -340,7 +183,7 @@ which lets you get started quickly. See the following resources for more informa
-------------------- --------------------
^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^
``spack buildcache push`` ``spack buildcache create``
^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^
Create tarball of installed Spack package and all dependencies. Create tarball of installed Spack package and all dependencies.

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details. .. Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -31,14 +32,9 @@ can't be found. You can readily check if any prerequisite for using Spack is mis
Spack will take care of bootstrapping any missing dependency marked as [B]. Dependencies marked as [-] are instead required to be found on the system. Spack will take care of bootstrapping any missing dependency marked as [B]. Dependencies marked as [-] are instead required to be found on the system.
% echo $?
1
In the case of the output shown above Spack detected that both ``clingo`` and ``gnupg`` In the case of the output shown above Spack detected that both ``clingo`` and ``gnupg``
are missing and it's giving detailed information on why they are needed and whether are missing and it's giving detailed information on why they are needed and whether
they can be bootstrapped. The return code of this command summarizes the results: if any they can be bootstrapped. Running a command that concretize a spec, like:
dependencies are missing, the return code is ``1``; otherwise it is ``0``. Running a command that
concretizes a spec, like:
.. code-block:: console .. code-block:: console
@@ -48,7 +44,7 @@ concretizes a spec, like:
==> Installing "clingo-bootstrap@spack%apple-clang@12.0.0~docs~ipo+python build_type=Release arch=darwin-catalina-x86_64" from a buildcache ==> Installing "clingo-bootstrap@spack%apple-clang@12.0.0~docs~ipo+python build_type=Release arch=darwin-catalina-x86_64" from a buildcache
[ ... ] [ ... ]
automatically triggers the bootstrapping of clingo from pre-built binaries as expected. triggers the bootstrapping of clingo from pre-built binaries as expected.
Users can also bootstrap all the dependencies needed by Spack in a single command, which Users can also bootstrap all the dependencies needed by Spack in a single command, which
might be useful to setup containers or other similar environments: might be useful to setup containers or other similar environments:
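A minimal sketch of such a command is ``spack bootstrap now`` (output omitted here):

.. code-block:: console

   % spack bootstrap now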
@@ -86,7 +82,7 @@ You can check what is installed in the bootstrapping store at any time using:
.. code-block:: console .. code-block:: console
% spack -b find % spack find -b
==> Showing internal bootstrap store at "/Users/spack/.spack/bootstrap/store" ==> Showing internal bootstrap store at "/Users/spack/.spack/bootstrap/store"
==> 11 installed packages ==> 11 installed packages
-- darwin-catalina-x86_64 / apple-clang@12.0.0 ------------------ -- darwin-catalina-x86_64 / apple-clang@12.0.0 ------------------
@@ -100,7 +96,7 @@ In case it is needed you can remove all the software in the current bootstrappin
% spack clean -b % spack clean -b
==> Removing bootstrapped software and configuration in "/Users/spack/.spack/bootstrap" ==> Removing bootstrapped software and configuration in "/Users/spack/.spack/bootstrap"
% spack -b find % spack find -b
==> Showing internal bootstrap store at "/Users/spack/.spack/bootstrap/store" ==> Showing internal bootstrap store at "/Users/spack/.spack/bootstrap/store"
==> 0 installed packages ==> 0 installed packages
@@ -170,8 +166,8 @@ bootstrapping.
To register the mirror on the platform where it's supposed to be used run the following command(s): To register the mirror on the platform where it's supposed to be used run the following command(s):
% spack bootstrap add --trust local-sources /opt/bootstrap/metadata/sources % spack bootstrap add --trust local-sources /opt/bootstrap/metadata/sources
% spack bootstrap add --trust local-binaries /opt/bootstrap/metadata/binaries % spack bootstrap add --trust local-binaries /opt/bootstrap/metadata/binaries
% spack buildcache update-index /opt/bootstrap/bootstrap_cache
This command needs to be run on a machine with internet access and the resulting folder This command needs to be run on a machine with internet access and the resulting folder
has to be moved over to the air-gapped system. Once the local sources are added using the has to be moved over to the air-gapped system. Once the local sources are added using the
commands suggested at the prompt, they can be used to bootstrap Spack. commands suggested at the prompt, they can be used to bootstrap Spack.
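For context, the mirror folder referenced above is typically produced beforehand with a command along these lines (the path is illustrative):

.. code-block:: console

   % spack bootstrap mirror --binary-packages /opt/bootstrap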

View File

@@ -1,116 +1,278 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details. .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _build-settings:
================================
Package Settings (packages.yaml)
================================
Spack allows you to customize how your software is built through the
``packages.yaml`` file. Using it, you can make Spack prefer particular
implementations of virtual dependencies (e.g., MPI or BLAS/LAPACK),
or you can make it prefer to build with particular compilers. You can
also tell Spack to use *external* software installations already
present on your system.
At a high level, the ``packages.yaml`` file is structured like this:
.. code-block:: yaml
packages:
package1:
# settings for package1
package2:
# settings for package2
# ...
all:
# settings that apply to all packages.
So you can either set build preferences specifically for *one* package,
or you can specify that certain settings should apply to *all* packages.
The types of settings you can customize are described in detail below.
Spack's build defaults are in the default
``etc/spack/defaults/packages.yaml`` file. You can override them in
``~/.spack/packages.yaml`` or ``etc/spack/packages.yaml``. For more
details on how this works, see :ref:`configuration-scopes`.
.. _sec-external-packages:
-----------------
External Packages
-----------------
Spack can be configured to use externally-installed
packages rather than building its own packages. This may be desirable
if machines ship with system packages, such as a customized MPI
that should be used instead of Spack building its own MPI.
External packages are configured through the ``packages.yaml`` file.
Here's an example of an external configuration:
.. code-block:: yaml
packages:
openmpi:
externals:
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.4.3
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
prefix: /opt/openmpi-1.4.3-debug
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.6.5-intel
This example lists three installations of OpenMPI, one built with GCC,
one built with GCC and debug information, and another built with Intel.
If Spack is asked to build a package that uses one of these MPIs as a
dependency, it will use the pre-installed OpenMPI in
the given directory. Note that the specified path is the top-level
install prefix, not the ``bin`` subdirectory.
``packages.yaml`` can also be used to specify modules to load instead
of the installation prefixes. The following example says that module
``CMake/3.7.2`` provides cmake version 3.7.2.
.. code-block:: yaml
cmake:
externals:
- spec: cmake@3.7.2
modules:
- CMake/3.7.2
Each ``packages.yaml`` begins with a ``packages:`` attribute, followed
by a list of package names. To specify externals, add an ``externals:``
attribute under the package name, which lists externals.
Each external should specify a ``spec:`` string that should be as
well-defined as reasonably possible. If a
package lacks a spec component, such as missing a compiler or
package version, then Spack will guess the missing component based
on its most-favored packages, and it may guess incorrectly.
Each package version and compiler listed in an external should
have entries in Spack's packages and compiler configuration, even
though the package and compiler may not ever be built.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Prevent packages from being built from sources
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Adding an external spec in ``packages.yaml`` allows Spack to use an external location,
but it does not prevent Spack from building packages from sources. In the above example,
Spack might choose for many valid reasons to start building and linking with the
latest version of OpenMPI rather than continue using the pre-installed OpenMPI versions.
To prevent this, the ``packages.yaml`` configuration also allows packages
to be flagged as non-buildable. The previous example could be modified to
be:
.. code-block:: yaml
packages:
openmpi:
externals:
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.4.3
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
prefix: /opt/openmpi-1.4.3-debug
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.6.5-intel
buildable: False
The addition of the ``buildable`` flag tells Spack that it should never build
its own version of OpenMPI from sources, and it will instead always rely on a pre-built
OpenMPI.
.. note::
If ``concretizer:reuse`` is on (see :ref:`concretizer-options` for more information on that flag)
pre-built specs include specs already available from a local store, an upstream store, a registered
buildcache or specs marked as externals in ``packages.yaml``. If ``concretizer:reuse`` is off, only
external specs in ``packages.yaml`` are included in the list of pre-built specs.
If an external module is specified as not buildable, then Spack will load the
external module into the build environment, which can be used for linking.
The ``buildable`` flag does not need to be paired with external packages.
It can also be used on its own to forbid packages that may be
buggy or otherwise undesirable.
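As a minimal sketch, a package can be forbidden without declaring any externals (the package name is only an example):

.. code-block:: yaml

   packages:
     openssl:
       buildable: False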
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Non-buildable virtual packages
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Virtual packages in Spack can also be specified as not buildable, and
external implementations can be provided. In the example above,
OpenMPI is configured as not buildable, but Spack will often prefer
other MPI implementations over the externally available OpenMPI. Spack
can be configured with every MPI provider not buildable individually,
but more conveniently:
.. code-block:: yaml
packages:
mpi:
buildable: False
openmpi:
externals:
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.4.3
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
prefix: /opt/openmpi-1.4.3-debug
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.6.5-intel
Spack can then use any of the listed external implementations of MPI
to satisfy a dependency, and will choose depending on the compiler and
architecture.
In cases where the concretizer is configured to reuse specs, and other ``mpi`` providers
(available via stores or buildcaches) are not wanted, Spack can be configured to require
specs matching only the available externals:
.. code-block:: yaml
packages:
mpi:
buildable: False
require:
- one_of: [
"openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64",
"openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug",
"openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
]
openmpi:
externals:
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.4.3
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
prefix: /opt/openmpi-1.4.3-debug
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.6.5-intel
This configuration prevents any spec that uses MPI and originates from stores or buildcaches from being reused,
unless it matches the requirements under ``packages:mpi:require``. For more information on requirements, see
:ref:`package-requirements`.
.. _cmd-spack-external-find:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Automatically Find External Packages
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can run the :ref:`spack external find <spack-external-find>` command
to search for system-provided packages and add them to ``packages.yaml``.
After running this command your ``packages.yaml`` may include new entries:
.. code-block:: yaml
packages:
cmake:
externals:
- spec: cmake@3.17.2
prefix: /usr
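The search can also be restricted to specific packages by naming them on the command line; a minimal sketch (the package name is only an example):

.. code-block:: console

   % spack external find cmake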
Generally this is useful for detecting a small set of commonly-used packages;
for now, detection is mostly limited to finding build-only dependencies.
Specific limitations include:
* Packages are not discoverable by default: For a package to be
discoverable with ``spack external find``, it needs to add special
logic. See :ref:`here <make-package-findable>` for more details.
* The logic does not search through module files, it can only detect
packages with executables defined in ``PATH``; you can help Spack locate
externals which use module files by loading any associated modules for
packages that you want Spack to know about before running
``spack external find``.
* Spack does not overwrite existing entries in the package configuration:
If there is an external defined for a spec at any configuration scope,
then Spack will not add a new external entry (``spack config blame packages``
can help locate all external entries).
.. _concretizer-options: .. _concretizer-options:
========================================== ----------------------
Concretization Settings (concretizer.yaml) Concretizer options
========================================== ----------------------
The ``concretizer.yaml`` configuration file allows to customize aspects of the ``packages.yaml`` gives the concretizer preferences for specific packages,
algorithm used to select the dependencies you install. The default configuration but you can also use ``concretizer.yaml`` to customize aspects of the
is the following: algorithm it uses to select the dependencies you install:
.. literalinclude:: _spack_root/etc/spack/defaults/concretizer.yaml .. literalinclude:: _spack_root/etc/spack/defaults/concretizer.yaml
:language: yaml :language: yaml
-------------------------------- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Reuse already installed packages Reuse already installed packages
-------------------------------- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``reuse`` attribute controls how aggressively Spack reuses binary packages during concretization. The The ``reuse`` attribute controls whether Spack will prefer to use installed packages (``true``), or
attribute can either be a single value, or an object for more complex configurations. whether it will do a "fresh" installation and prefer the latest settings from
``package.py`` files and ``packages.yaml`` (``false``).
In the former case ("single value") it allows Spack to: You can use:
1. Reuse installed packages and buildcaches for all the specs to be concretized, when ``true``
2. Reuse installed packages and buildcaches only for the dependencies of the root specs, when ``dependencies``
3. Disregard reusing installed packages and buildcaches, when ``false``
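As a minimal sketch, the single-value form in ``concretizer.yaml`` looks like this (the value shown is only an example):

.. code-block:: yaml

   concretizer:
     reuse: dependencies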
In case a finer control over which specs are reused is needed, then the value of this attribute can be
an object, with the following keys:
1. ``roots``: if ``true`` root specs are reused, if ``false`` only dependencies of root specs are reused
2. ``from``: list of sources from which reused specs are taken
Each source in ``from`` is itself an object:
.. list-table:: Attributes for a source or reusable specs
:header-rows: 1
* - Attribute name
- Description
* - type (mandatory, string)
- Can be ``local``, ``buildcache``, or ``external``
* - include (optional, list of specs)
- If present, reusable specs must match at least one of the constraints in the list
* - exclude (optional, list of specs)
- If present, reusable specs must not match any of the constraints in the list.
For instance, the following configuration:
.. code-block:: yaml
concretizer:
reuse:
roots: true
from:
- type: local
include:
- "%gcc"
- "%clang"
tells the concretizer to reuse all specs compiled with either ``gcc`` or ``clang`` that are installed
in the local store. Any spec from remote buildcaches is disregarded.
To reduce the boilerplate in configuration files, default values for the ``include`` and
``exclude`` options can be pushed up one level:
.. code-block:: yaml
concretizer:
reuse:
roots: true
include:
- "%gcc"
from:
- type: local
- type: buildcache
- type: local
include:
- "foo %oneapi"
In the example above, we reuse all specs compiled with ``gcc`` from the local store
and remote buildcaches, and we also reuse ``foo %oneapi``. Note that the last source of
specs overrides the default ``include`` attribute.
For one-off concretizations, there are command line arguments for each of the simple "single value"
configurations. This means a user can:
.. code-block:: console .. code-block:: console
% spack install --reuse <spec> % spack install --reuse <spec>
to enable reuse for a single installation, or: to enable reuse for a single installation, and you can use:
.. code-block:: console .. code-block:: console
spack install --fresh <spec> spack install --fresh <spec>
to do a fresh install if ``reuse`` is enabled by default. to do a fresh install if ``reuse`` is enabled by default.
``reuse: true`` is the default.
.. seealso:: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FAQ: :ref:`Why does Spack pick particular versions and variants? <faq-concretizer-precedence>`
------------------------------------------
Selection of the target microarchitectures Selection of the target microarchitectures
------------------------------------------ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The options under the ``targets`` attribute control which targets are considered during a solve. The options under the ``targets`` attribute control which targets are considered during a solve.
Currently the options in this section are only configurable from the ``concretizer.yaml`` file Currently the options in this section are only configurable from the ``concretization.yaml`` file
and there are no corresponding command line arguments to enable them for a single solve. and there are no corresponding command line arguments to enable them for a single solve.
The ``granularity`` option can take two possible values: ``microarchitectures`` and ``generic``. The ``granularity`` option can take two possible values: ``microarchitectures`` and ``generic``.
@@ -140,131 +302,257 @@ microarchitectures considered during the solve are constrained to be compatible
host Spack is currently running on. For instance, if this option is set to ``true``, a host Spack is currently running on. For instance, if this option is set to ``true``, a
user cannot concretize for ``target=icelake`` while running on a Haswell node.
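As a minimal sketch, both options live under ``concretizer:targets`` (the values shown are only an example):

.. code-block:: yaml

   concretizer:
     targets:
       granularity: generic
       host_compatible: true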
--------------- .. _package-requirements:
Duplicate nodes
---------------
The ``duplicates`` attribute controls whether the DAG can contain multiple configurations of --------------------
the same package. This is mainly relevant for build dependencies, which may have their version Package Requirements
pinned by some nodes, and thus be required at different versions by different nodes in the same --------------------
DAG.
The ``strategy`` option controls how the solver deals with duplicates. If the value is ``none``, Spack can be configured to always use certain compilers, package
then a single configuration per package is allowed in the DAG. This means, for instance, that only versions, and variants during concretization through package
a single ``cmake`` or a single ``py-setuptools`` version is allowed. The result would be a slightly requirements.
faster concretization, at the expense of making a few specs unsolvable.
If the value is ``minimal`` Spack will allow packages tagged as ``build-tools`` to have duplicates. Package requirements are useful when you find yourself repeatedly
This allows, for instance, to concretize specs whose nodes require different, and incompatible, ranges specifying the same constraints on the command line, and wish that
of some build tool. For instance, in the figure below the latest `py-shapely` requires a newer `py-setuptools`, Spack respects these constraints whether you mention them explicitly
while `py-numpy` still needs an older version: or not. Another use case is specifying constraints that should apply
to all root specs in an environment, without having to repeat the
constraint everywhere.
.. figure:: images/shapely_duplicates.svg Apart from that, requirements config is more flexible than constraints
:scale: 70 % on the command line, because it can specify constraints on packages
:align: center *when they occur* as a dependency. In contrast, on the command line it
is not possible to specify constraints on dependencies while also keeping
those dependencies optional.
Up to Spack v0.20 ``duplicates:strategy:none`` was the default (and only) behavior. From Spack v0.21 the The package requirements configuration is specified in ``packages.yaml``
default behavior is ``duplicates:strategy:minimal``. keyed by package name:
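As a minimal sketch, the strategy for duplicate nodes is set in ``concretizer.yaml`` like this:

.. code-block:: yaml

   concretizer:
     duplicates:
       strategy: minimal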
--------
Splicing
--------
The ``splice`` key covers config attributes for splicing specs in the solver.
"Splicing" is a method for replacing a dependency with another spec
that provides the same package or virtual. There are two types of
splices, referring to different behaviors for shared dependencies
between the root spec and the new spec replacing a dependency:
"transitive" and "intransitive". A "transitive" splice is one that
resolves all conflicts by taking the dependency from the new node. An
"intransitive" splice is one that resolves all conflicts by taking the
dependency from the original root. From a theory perspective, hybrid
splices are possible but are not modeled by Spack.
All spliced specs retain a ``build_spec`` attribute that points to the
original Spec before any splice occurred. The ``build_spec`` for a
non-spliced spec is itself.
The figure below shows examples of transitive and intransitive splices:
.. figure:: images/splices.png
:align: center
The concretizer can be configured to explicitly splice particular
replacements for a target spec. Splicing allows the user to make
use of generically built public binary caches, while swapping in
highly optimized local builds for performance-critical components
and/or components that interact closely with the specific hardware
details of the system. The most prominent candidates for splicing are
MPI providers. MPI packages have relatively well-understood ABI
characteristics, and most High Performance Computing facilities deploy
highly optimized MPI packages tailored to their particular
hardware. The following config block configures Spack to replace
whatever MPI provider each spec was concretized to use with the
particular package of ``mpich`` with the hash that begins ``abcdef``.
.. code-block:: yaml .. code-block:: yaml
concretizer: packages:
splice: libfabric:
explicit: require: "@1.13.2"
- target: mpi openmpi:
replacement: mpich/abcdef require:
transitive: false - any_of: ["~cuda", "%gcc"]
mpich:
require:
- one_of: ["+cuda", "+rocm"]
.. warning:: Requirements are expressed using Spec syntax (the same as what is provided
to ``spack install``). In the simplest case, you can specify attributes
that you always want the package to have by providing a single spec to
``require``; in the above example, ``libfabric`` will always build
with version 1.13.2.
When configuring an explicit splice, you as the user take on the You can provide a more-relaxed constraint and allow the concretizer to
responsibility for ensuring ABI compatibility between the specs choose between a set of options using ``any_of`` or ``one_of``:
matched by the target and the replacement you provide. If they are
not compatible, Spack will not warn you and your application will
fail to run.
The ``target`` field of an explicit splice can be any abstract * ``any_of`` is a list of specs. One of those specs must be satisfied
spec. The ``replacement`` field must be a spec that includes the hash and it is also allowed for the concretized spec to match more than one.
of a concrete spec, and the replacement must either be the same In the above example, that means you could build ``openmpi+cuda%gcc``,
package as the target, provide the virtual that is the target, or ``openmpi~cuda%clang`` or ``openmpi~cuda%gcc`` (in the last case,
provide a virtual that the target provides. The ``transitive`` field note that both specs in the ``any_of`` for ``openmpi`` are
is optional -- by default, splices will be transitive. satisfied).
* ``one_of`` is also a list of specs, and the final concretized spec
must match exactly one of them. In the above example, that means
you could build ``mpich+cuda`` or ``mpich+rocm`` but not
``mpich+cuda+rocm`` (note the current package definition for
``mpich`` already includes a conflict, so this is redundant but
still demonstrates the concept).
.. note:: .. note::
With explicit splices configured, it is possible for Spack to For ``any_of`` and ``one_of``, the order of specs indicates a
concretize to a spec that does not satisfy the input. For example, preference: items that appear earlier in the list are preferred
with the config above ``hdf5 ^mvapich2`` will concretize to user (note that these preferences can be ignored in favor of others).
``mpich/abcdef`` instead of ``mvapich2`` as the MPI provider. Spack
will warn the user in this case, but will not fail the
concretization.
.. _automatic_splicing: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting default requirements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^ You can also set default requirements for all packages under ``all``
Automatic Splicing like this:
^^^^^^^^^^^^^^^^^^
The Spack solver can be configured to do automatic splicing for
ABI-compatible packages. Automatic splices are enabled in the concretizer
config section
.. code-block:: yaml .. code-block:: yaml
concretizer: packages:
splice: all:
automatic: True require: '%clang'
Packages can include ABI-compatibility information using the which means every spec will be required to use ``clang`` as a compiler.
``can_splice`` directive. See :ref:`the packaging
guide<abi_compatibility>` for instructions on specifying ABI
compatibility using the ``can_splice`` directive.
.. note:: Note that in this case ``all`` represents a *default set of requirements* -
if there are specific package requirements, then the default requirements
under ``all`` are disregarded. For example, with a configuration like this:
The ``can_splice`` directive is experimental and may be changed in .. code-block:: yaml
future versions.
When automatic splicing is enabled, the concretizer will combine any packages:
number of ABI-compatible specs if possible to reuse installed packages all:
and packages available from binary caches. The end result of these require: '%clang'
specs is equivalent to a series of transitive/intransitive splices, cmake:
but the series may be non-obvious. require: '%gcc'
Spack requires ``cmake`` to use ``gcc`` and all other nodes (including ``cmake``
dependencies) to use ``clang``.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting requirements on virtual specs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A requirement on a virtual spec applies whenever that virtual is present in the DAG.
This can be useful for fixing which virtual provider you want to use:
.. code-block:: yaml
packages:
mpi:
require: 'mvapich2 %gcc'
With the configuration above the only allowed ``mpi`` provider is ``mvapich2 %gcc``.
Requirements on the virtual spec and on the specific provider are both applied, if
present. For instance with a configuration like:
.. code-block:: yaml
packages:
mpi:
require: 'mvapich2 %gcc'
mvapich2:
require: '~cuda'
you will use ``mvapich2~cuda %gcc`` as an ``mpi`` provider.
.. _package-preferences:
-------------------
Package Preferences
-------------------
In some cases package requirements can be too strong, and package
preferences are the better option. Package preferences do not impose
constraints on packages for particular versions or variant values;
rather, they only set defaults -- the concretizer is free to change
them if it must due to other constraints. Also note that package
preferences are of lower priority than reuse of already installed
packages.
Here's an example ``packages.yaml`` file that sets preferred packages:
.. code-block:: yaml
packages:
opencv:
compiler: [gcc@4.9]
variants: +debug
gperftools:
version: [2.2, 2.4, 2.3]
all:
compiler: [gcc@4.4.7, 'gcc@4.6:', intel, clang, pgi]
target: [sandybridge]
providers:
mpi: [mvapich2, mpich, openmpi]
At a high level, this example is specifying how packages are preferably
concretized. The opencv package should prefer using GCC 4.9 and
be built with debug options. The gperftools package should prefer version
2.2 over 2.4. Every package on the system should prefer mvapich2 for
its MPI and GCC 4.4.7 (except for opencv, which overrides this by preferring GCC 4.9).
These options are used to fill in implicit defaults. Any of them can be overwritten
on the command line if explicitly requested.
Package preferences accept the following keys or components under
the specific package (or ``all``) section: ``compiler``, ``variants``,
``version``, ``providers``, and ``target``. Each component has an
ordered list of spec ``constraints``, with earlier entries in the
list being preferred over later entries.
Sometimes a package installation may have constraints that forbid
the first concretization rule, in which case Spack will use the first
legal concretization rule. Going back to the example, if a user
requests gperftools 2.3 or later, then Spack will install version 2.4
as the 2.4 version of gperftools is preferred over 2.3.
An explicit concretization rule in the preferred section will always
take preference over unlisted concretizations. In the above example,
xlc isn't listed in the compiler list. Every listed compiler from
gcc to pgi will thus be preferred over the xlc compiler.
The syntax for the ``provider`` section differs slightly from other
concretization rules. A provider lists a value that packages may
``depends_on`` (e.g., MPI) and a list of rules for fulfilling that
dependency.
.. _package_permissions:
-------------------
Package Permissions
-------------------
Spack can be configured to assign permissions to the files installed
by a package.
In the ``packages.yaml`` file under ``permissions``, the attributes
``read``, ``write``, and ``group`` control the package
permissions. These attributes can be set per-package, or for all
packages under ``all``. If permissions are set under ``all`` and for a
specific package, the package-specific settings take precedence.
The ``read`` and ``write`` attributes take one of ``user``, ``group``,
and ``world``.
.. code-block:: yaml
packages:
all:
permissions:
write: group
group: spack
my_app:
permissions:
read: group
group: my_team
The permissions settings describe the broadest level of access to
installations of the specified packages. Execute permissions are set to
the same level as read permissions for those files that are
executable. The default setting for ``read`` is ``world``,
and for ``write`` is ``user``. In the example above, installations of
``my_app`` will be installed with user and group permissions but no
world permissions, and owned by the group ``my_team``. All other
packages will be installed with user and group write privileges, and
world read privileges. Those packages will be owned by the group
``spack``.
The ``group`` attribute assigns a Unix-style group to a package. All
files installed by the package will be owned by the assigned group,
and the sticky group bit will be set on the install prefix and all
directories inside the install prefix. This will ensure that even
manually placed files within the install prefix are owned by the
assigned group. If no group is assigned, Spack will allow the OS
default behavior to go as expected.
----------------------------
Assigning Package Attributes
----------------------------
You can assign class-level attributes in the configuration:
.. code-block:: yaml
packages:
mpileaks:
# Override existing attributes
url: http://www.somewhereelse.com/mpileaks-1.0.tar.gz
# ... or add new ones
x: 1
Attributes set this way will be accessible to any method executed
in the ``package.py`` file (e.g., the ``install()`` method). Values for these
attributes may be any value parseable by YAML.
These can only be applied to specific packages, not "all" or
virtual packages.

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details. .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -63,6 +64,7 @@ on these ideas for each distinct build system that Spack supports:
build_systems/cudapackage build_systems/cudapackage
build_systems/custompackage build_systems/custompackage
build_systems/inteloneapipackage build_systems/inteloneapipackage
build_systems/intelpackage
build_systems/rocmpackage build_systems/rocmpackage
build_systems/sourceforgepackage build_systems/sourceforgepackage
@@ -83,7 +85,7 @@ packages. You can quickly find examples by running:
.. code-block:: console .. code-block:: console
$ cd var/spack/repos/spack_repo/builtin/packages $ cd var/spack/repos/builtin/packages
$ grep -l QMakePackage */package.py $ grep -l QMakePackage */package.py

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details. .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -126,9 +127,9 @@ check out a commit from the ``master`` branch, you would want to add:
.. code-block:: python .. code-block:: python
depends_on("autoconf", type="build", when="@master") depends_on('autoconf', type='build', when='@master')
depends_on("automake", type="build", when="@master") depends_on('automake', type='build', when='@master')
depends_on("libtool", type="build", when="@master") depends_on('libtool', type='build', when='@master')
It is typically redundant to list the ``m4`` macro processor package as a It is typically redundant to list the ``m4`` macro processor package as a
dependency, since ``autoconf`` already depends on it. dependency, since ``autoconf`` already depends on it.
@@ -144,16 +145,7 @@ example, the ``bash`` shell is used to run the ``autogen.sh`` script.
.. code-block:: python .. code-block:: python
def autoreconf(self, spec, prefix): def autoreconf(self, spec, prefix):
which("bash")("autogen.sh") which('bash')('autogen.sh')
If the ``package.py`` has build instructions in a separate
:ref:`builder class <multiple_build_systems>`, the signature for a phase changes slightly:
.. code-block:: python
class AutotoolsBuilder(AutotoolsBuilder):
def autoreconf(self, pkg, spec, prefix):
which("bash")("autogen.sh")
""""""""""""""""""""""""""""""""""""""" """""""""""""""""""""""""""""""""""""""
patching configure or Makefile.in files patching configure or Makefile.in files
@@ -194,9 +186,9 @@ To opt out of this feature, use the following setting:
To enable it conditionally on different architectures, define a property and To enable it conditionally on different architectures, define a property and
make the package depend on ``gnuconfig`` as a build dependency: make the package depend on ``gnuconfig`` as a build dependency:
.. code-block:: python .. code-block
depends_on("gnuconfig", when="@1.0:") depends_on('gnuconfig', when='@1.0:')
@property @property
def patch_config_files(self): def patch_config_files(self):
@@ -238,7 +230,7 @@ version, this can be done like so:
@property @property
def force_autoreconf(self): def force_autoreconf(self):
return self.version == Version("1.2.3") return self.version == Version('1.2.3')
^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^
Finding configure flags Finding configure flags
@@ -272,9 +264,9 @@ often lists dependencies and the flags needed to locate them. The
"environment variables" section lists environment variables that the "environment variables" section lists environment variables that the
build system uses to pass flags to the compiler and linker. build system uses to pass flags to the compiler and linker.
^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^
Adding flags to configure Addings flags to configure
^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^
For most of the flags you encounter, you will want a variant to For most of the flags you encounter, you will want a variant to
optionally enable/disable them. You can then optionally pass these optionally enable/disable them. You can then optionally pass these
@@ -285,26 +277,14 @@ function like so:
def configure_args(self): def configure_args(self):
args = [] args = []
...
if self.spec.satisfies("+mpi"): if '+mpi' in self.spec:
args.append("--enable-mpi") args.append('--enable-mpi')
else: else:
args.append("--disable-mpi") args.append('--disable-mpi')
return args return args
Alternatively, you can use the :ref:`enable_or_disable <autotools_enable_or_disable>` helper:
.. code-block:: python
def configure_args(self):
args = []
...
args.extend(self.enable_or_disable("mpi"))
return args
Note that we are explicitly disabling MPI support if it is not Note that we are explicitly disabling MPI support if it is not
requested. This is important, as many Autotools packages will enable requested. This is important, as many Autotools packages will enable
options by default if the dependencies are found, and disable them options by default if the dependencies are found, and disable them
@@ -315,11 +295,9 @@ and `here <https://wiki.gentoo.org/wiki/Project:Quality_Assurance/Automagic_depe
for a rationale as to why these so-called "automagic" dependencies for a rationale as to why these so-called "automagic" dependencies
are a problem. are a problem.
.. note:: By default, Autotools installs packages to ``/usr``. We don't want this,
so Spack automatically adds ``--prefix=/path/to/installation/prefix``
By default, Autotools installs packages to ``/usr``. We don't want this, to your list of ``configure_args``. You don't need to add this yourself.
so Spack automatically adds ``--prefix=/path/to/installation/prefix``
to your list of ``configure_args``. You don't need to add this yourself.
^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^
Helper functions Helper functions
@@ -330,8 +308,6 @@ You may have noticed that most of the Autotools flags are of the form
``--without-baz``. Since these flags are so common, Spack provides a ``--without-baz``. Since these flags are so common, Spack provides a
couple of helper functions to make your life easier. couple of helper functions to make your life easier.
.. _autotools_enable_or_disable:
""""""""""""""""" """""""""""""""""
enable_or_disable enable_or_disable
""""""""""""""""" """""""""""""""""
@@ -343,18 +319,11 @@ typically used to enable or disable some feature within the package.
.. code-block:: python .. code-block:: python
variant( variant(
"memchecker", 'memchecker',
default=False, default=False,
description="Memchecker support for debugging [degrades performance]" description='Memchecker support for debugging [degrades performance]'
) )
... config_args.extend(self.enable_or_disable('memchecker'))
def configure_args(self):
args = []
...
args.extend(self.enable_or_disable("memchecker"))
return args
In this example, specifying the variant ``+memchecker`` will generate In this example, specifying the variant ``+memchecker`` will generate
the following configuration options: the following configuration options:
@@ -374,15 +343,15 @@ the ``with_or_without`` method.
.. code-block:: python .. code-block:: python
variant( variant(
"schedulers", 'schedulers',
values=disjoint_sets( values=disjoint_sets(
("auto",), ("alps", "lsf", "tm", "slurm", "sge", "loadleveler") ('auto',), ('alps', 'lsf', 'tm', 'slurm', 'sge', 'loadleveler')
).with_non_feature_values("auto", "none"), ).with_non_feature_values('auto', 'none'),
description="List of schedulers for which support is enabled; " description="List of schedulers for which support is enabled; "
"'auto' lets openmpi determine", "'auto' lets openmpi determine",
) )
if not spec.satisfies("schedulers=auto"): if 'schedulers=auto' not in spec:
config_args.extend(self.with_or_without("schedulers")) config_args.extend(self.with_or_without('schedulers'))
In this example, specifying the variant ``schedulers=slurm,sge`` will In this example, specifying the variant ``schedulers=slurm,sge`` will
generate the following configuration options: generate the following configuration options:
@@ -407,16 +376,16 @@ generated, using the ``activation_value`` argument to
.. code-block:: python .. code-block:: python
variant( variant(
"fabrics", 'fabrics',
values=disjoint_sets( values=disjoint_sets(
("auto",), ("psm", "psm2", "verbs", "mxm", "ucx", "libfabric") ('auto',), ('psm', 'psm2', 'verbs', 'mxm', 'ucx', 'libfabric')
).with_non_feature_values("auto", "none"), ).with_non_feature_values('auto', 'none'),
description="List of fabrics that are enabled; " description="List of fabrics that are enabled; "
"'auto' lets openmpi determine", "'auto' lets openmpi determine",
) )
if not spec.satisfies("fabrics=auto"): if 'fabrics=auto' not in spec:
config_args.extend(self.with_or_without("fabrics", config_args.extend(self.with_or_without('fabrics',
activation_value="prefix")) activation_value='prefix'))
``activation_value`` accepts a callable that generates the configure ``activation_value`` accepts a callable that generates the configure
parameter value given the variant value; but the special value parameter value given the variant value; but the special value
@@ -440,16 +409,16 @@ When Spack variants and configure flags do not correspond one-to-one, the
.. code-block:: python .. code-block:: python
variant("debug_tools", default=False) variant('debug_tools', default=False)
config_args += self.enable_or_disable("debug-tools", variant="debug_tools") config_args += self.enable_or_disable('debug-tools', variant='debug_tools')
Or when one variant controls multiple flags: Or when one variant controls multiple flags:
.. code-block:: python .. code-block:: python
variant("debug_tools", default=False) variant('debug_tools', default=False)
config_args += self.with_or_without("memchecker", variant="debug_tools") config_args += self.with_or_without('memchecker', variant='debug_tools')
config_args += self.with_or_without("profiler", variant="debug_tools") config_args += self.with_or_without('profiler', variant='debug_tools')
"""""""""""""""""""" """"""""""""""""""""
@@ -463,8 +432,8 @@ For example:
.. code-block:: python .. code-block:: python
variant("profiler", when="@2.0:") variant('profiler', when='@2.0:')
config_args += self.with_or_without("profiler") config_args += self.with_or_without('profiler')
will neither add ``--with-profiler`` nor ``--without-profiler`` when the version is will neither add ``--with-profiler`` nor ``--without-profiler`` when the version is
below ``2.0``. below ``2.0``.
@@ -483,10 +452,10 @@ the variant values require atypical behavior.
def with_or_without_verbs(self, activated): def with_or_without_verbs(self, activated):
# Up through version 1.6, this option was named --with-openib. # Up through version 1.6, this option was named --with-openib.
# In version 1.7, it was renamed to be --with-verbs. # In version 1.7, it was renamed to be --with-verbs.
opt = "verbs" if self.spec.satisfies("@1.7:") else "openib" opt = 'verbs' if self.spec.satisfies('@1.7:') else 'openib'
if not activated: if not activated:
return f"--without-{opt}" return '--without-{0}'.format(opt)
return f"--with-{opt}={self.spec['rdma-core'].prefix}" return '--with-{0}={1}'.format(opt, self.spec['rdma-core'].prefix)
Defining ``with_or_without_verbs`` overrides the behavior of a Defining ``with_or_without_verbs`` overrides the behavior of a
``fabrics=verbs`` variant, changing the configure-time option to ``fabrics=verbs`` variant, changing the configure-time option to
@@ -510,7 +479,7 @@ do this like so:
.. code-block:: python .. code-block:: python
configure_directory = "src" configure_directory = 'src'
^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^
Building out of source Building out of source
@@ -522,7 +491,7 @@ This can be done using the ``build_directory`` variable:
.. code-block:: python .. code-block:: python
build_directory = "spack-build" build_directory = 'spack-build'
By default, Spack will build the package in the same directory that By default, Spack will build the package in the same directory that
contains the ``configure`` script contains the ``configure`` script
@@ -545,8 +514,8 @@ library or build the documentation, you can add these like so:
.. code-block:: python .. code-block:: python
build_targets = ["all", "docs"] build_targets = ['all', 'docs']
install_targets = ["install", "docs"] install_targets = ['install', 'docs']
^^^^^^^ ^^^^^^^
Testing Testing

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details. .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -8,32 +9,9 @@
Bundle Bundle
------ ------
``BundlePackage`` represents a set of packages that are expected to work ``BundlePackage`` represents a set of packages that are expected to work well
well together, such as a collection of commonly used software libraries. together, such as a collection of commonly used software libraries. The
The associated software is specified as dependencies. associated software is specified as bundle dependencies.
If it makes sense, variants, conflicts, and requirements can be added to
the package. :ref:`Variants <variants>` ensure that common build options
are consistent across the packages supporting them. :ref:`Conflicts
and requirements <packaging_conflicts>` prevent attempts to build with known
bugs or limitations.
For example, if ``MyBundlePackage`` is known to only build on ``linux``,
it could use the ``require`` directive as follows:
.. code-block:: python
require("platform=linux", msg="MyBundlePackage only builds on linux")
Spack has a number of built-in bundle packages, such as:
* `AmdAocl <https://github.com/spack/spack/blob/develop/var/spack/repos/spack_repo/builtin/packages/amd_aocl/package.py>`_
* `EcpProxyApps <https://github.com/spack/spack/blob/develop/var/spack/repos/spack_repo/builtin/packages/ecp_proxy_apps/package.py>`_
* `Libc <https://github.com/spack/spack/blob/develop/var/spack/repos/spack_repo/builtin/packages/libc/package.py>`_
* `Xsdk <https://github.com/spack/spack/blob/develop/var/spack/repos/spack_repo/builtin/packages/xsdk/package.py>`_
where ``Xsdk`` also inherits from ``CudaPackage`` and ``RocmPackage`` and
``Libc`` is a virtual bundle package for the C standard library.
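For illustration, a hypothetical bundle package might look like the following sketch (the class name, homepage, and dependencies are made up, and the exact import may differ depending on the repository layout):

.. code-block:: python

   from spack.package import *


   class MyBundle(BundlePackage):
       """Hypothetical bundle that groups a few commonly used libraries."""

       homepage = "https://example.com/my-bundle"

       # Bundle packages have no sources, so versions carry no URL or checksum
       version("1.0")

       depends_on("zlib")
       depends_on("bzip2")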
^^^^^^^^ ^^^^^^^^

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details. .. Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -86,7 +87,7 @@ A typical usage of these methods may look something like this:
.. code-block:: python .. code-block:: python
def initconfig_mpi_entries(self): def initconfig_mpi_entries(self)
# Get existing MPI configurations # Get existing MPI configurations
entries = super(self, Foo).initconfig_mpi_entries() entries = super(self, Foo).initconfig_mpi_entries()
@@ -94,25 +95,25 @@ A typical usage of these methods may look something like this:
# This spec has an MPI variant, and we need to enable MPI when it is on. # This spec has an MPI variant, and we need to enable MPI when it is on.
# This hypothetical package controls MPI with the ``FOO_MPI`` option to # This hypothetical package controls MPI with the ``FOO_MPI`` option to
# cmake. # cmake.
if self.spec.satisfies("+mpi"): if '+mpi' in self.spec:
entries.append(cmake_cache_option("FOO_MPI", True, "enable mpi")) entries.append(cmake_cache_option('FOO_MPI', True, "enable mpi"))
else: else:
entries.append(cmake_cache_option("FOO_MPI", False, "disable mpi")) entries.append(cmake_cache_option('FOO_MPI', False, "disable mpi"))
def initconfig_package_entries(self): def initconfig_package_entries(self):
# Package specific options # Package specific options
entries = [] entries = []
entries.append("#Entries for build options") entries.append('#Entries for build options')
bar_on = self.spec.satisfies("+bar") bar_on = '+bar' in self.spec
entries.append(cmake_cache_option("FOO_BAR", bar_on, "toggle bar")) entries.append(cmake_cache_option('FOO_BAR', bar_on, 'toggle bar'))
entries.append("#Entries for dependencies") entries.append('#Entries for dependencies')
if self.spec["blas"].name == "baz": # baz is our blas provider if self.spec['blas'].name == 'baz': # baz is our blas provider
entries.append(cmake_cache_string("FOO_BLAS", "baz", "Use baz")) entries.append(cmake_cache_string('FOO_BLAS', 'baz', 'Use baz'))
entries.append(cmake_cache_path("BAZ_PREFIX", self.spec["baz"].prefix)) entries.append(cmake_cache_path('BAZ_PREFIX', self.spec['baz'].prefix))
^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^
External documentation External documentation

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details. .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -81,7 +82,7 @@ class already contains:
.. code-block:: python .. code-block:: python
depends_on("cmake", type="build") depends_on('cmake', type='build')
If you need to specify a particular version requirement, you can If you need to specify a particular version requirement, you can
@@ -89,7 +90,7 @@ override this in your package:
.. code-block:: python .. code-block:: python
depends_on("cmake@2.8.12:", type="build") depends_on('cmake@2.8.12:', type='build')
^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^
@@ -136,10 +137,10 @@ and without the :meth:`~spack.build_systems.cmake.CMakeBuilder.define` and
def cmake_args(self): def cmake_args(self):
args = [ args = [
"-DWHATEVER:STRING=somevalue", '-DWHATEVER:STRING=somevalue',
self.define("ENABLE_BROKEN_FEATURE", False), self.define('ENABLE_BROKEN_FEATURE', False),
self.define_from_variant("DETECT_HDF5", "hdf5"), self.define_from_variant('DETECT_HDF5', 'hdf5'),
self.define_from_variant("THREADS"), # True if +threads self.define_from_variant('THREADS'), # True if +threads
] ]
return args return args
@@ -150,10 +151,10 @@ and CMake simply ignores the empty command line argument. For example the follow
.. code-block:: python .. code-block:: python
variant("example", default=True, when="@2.0:") variant('example', default=True, when='@2.0:')
def cmake_args(self): def cmake_args(self):
return [self.define_from_variant("EXAMPLE", "example")] return [self.define_from_variant('EXAMPLE', 'example')]
will generate ``'cmake' '-DEXAMPLE=ON' ...`` when `@2.0: +example` is met, but will will generate ``'cmake' '-DEXAMPLE=ON' ...`` when `@2.0: +example` is met, but will
result in ``'cmake' '' ...`` when the spec version is below ``2.0``. result in ``'cmake' '' ...`` when the spec version is below ``2.0``.
@@ -192,21 +193,21 @@ a variant to control this:
.. code-block:: python .. code-block:: python
variant("build_type", default="RelWithDebInfo", variant('build_type', default='RelWithDebInfo',
description="CMake build type", description='CMake build type',
values=("Debug", "Release", "RelWithDebInfo", "MinSizeRel")) values=('Debug', 'Release', 'RelWithDebInfo', 'MinSizeRel'))
However, not every CMake package accepts all four of these options. However, not every CMake package accepts all four of these options.
Grep the ``CMakeLists.txt`` file to see if the default values are Grep the ``CMakeLists.txt`` file to see if the default values are
missing or replaced. For example, the missing or replaced. For example, the
`dealii <https://github.com/spack/spack/blob/develop/var/spack/repos/spack_repo/builtin/packages/dealii/package.py>`_ `dealii <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/dealii/package.py>`_
package overrides the default variant with: package overrides the default variant with:
.. code-block:: python .. code-block:: python
variant("build_type", default="DebugRelease", variant('build_type', default='DebugRelease',
description="The build type to build", description='The build type to build',
values=("Debug", "Release", "DebugRelease")) values=('Debug', 'Release', 'DebugRelease'))
For more information on ``CMAKE_BUILD_TYPE``, see: For more information on ``CMAKE_BUILD_TYPE``, see:
https://cmake.org/cmake/help/latest/variable/CMAKE_BUILD_TYPE.html https://cmake.org/cmake/help/latest/variable/CMAKE_BUILD_TYPE.html
@@ -249,7 +250,7 @@ generator is Ninja. To switch to the Ninja generator, simply add:
.. code-block:: python .. code-block:: python
generator("ninja") generator = 'Ninja'
``CMakePackage`` defaults to "Unix Makefiles". If you switch to the ``CMakePackage`` defaults to "Unix Makefiles". If you switch to the
@@ -257,7 +258,7 @@ Ninja generator, make sure to add:
.. code-block:: python .. code-block:: python
depends_on("ninja", type="build") depends_on('ninja', type='build')
to the package as well. Aside from that, you shouldn't need to do to the package as well. Aside from that, you shouldn't need to do
anything else. Spack will automatically detect that you are using anything else. Spack will automatically detect that you are using
@@ -287,7 +288,7 @@ like so:
.. code-block:: python .. code-block:: python
root_cmakelists_dir = "src" root_cmakelists_dir = 'src'
Note that this path is relative to the root of the extracted tarball, Note that this path is relative to the root of the extracted tarball,
@@ -303,7 +304,7 @@ different sub-directory, simply override ``build_directory`` like so:
.. code-block:: python .. code-block:: python
build_directory = "my-build" build_directory = 'my-build'
^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^
Build and install targets Build and install targets
@@ -323,8 +324,8 @@ library or build the documentation, you can add these like so:
.. code-block:: python .. code-block:: python
build_targets = ["all", "docs"] build_targets = ['all', 'docs']
install_targets = ["install", "docs"] install_targets = ['install', 'docs']
^^^^^^^ ^^^^^^^
Testing Testing

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details. .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -27,14 +28,11 @@ This package provides the following variants:
* **cuda_arch** * **cuda_arch**
This variant supports the optional specification of one or multiple architectures. This variant supports the optional specification of the architecture.
Valid values are maintained in the ``cuda_arch_values`` property and Valid values are maintained in the ``cuda_arch_values`` property and
are the numeric character equivalent of the compute capability version are the numeric character equivalent of the compute capability version
(e.g., '10' for version 1.0). Each provided value affects associated (e.g., '10' for version 1.0). Each provided value affects associated
``CUDA`` dependencies and compiler conflicts. ``CUDA`` dependencies and compiler conflicts.
The variant builds both PTX code for the _virtual_ architecture
(e.g. ``compute_10``) and binary code for the _real_ architecture (e.g. ``sm_10``).
GPUs and their compute capability versions are listed at GPUs and their compute capability versions are listed at
https://developer.nvidia.com/cuda-gpus . https://developer.nvidia.com/cuda-gpus .
@@ -53,8 +51,8 @@ to terminate such build attempts with a suitable message:
.. code-block:: python .. code-block:: python
conflicts("cuda_arch=none", when="+cuda", conflicts('cuda_arch=none', when='+cuda',
msg="CUDA architecture is required") msg='CUDA architecture is required')
Similarly, if your software does not support all versions of the property, Similarly, if your software does not support all versions of the property,
you could add ``conflicts`` to your package for those versions. For example, you could add ``conflicts`` to your package for those versions. For example,
@@ -65,13 +63,13 @@ custom message should a user attempt such a build:
.. code-block:: python .. code-block:: python
unsupported_cuda_archs = [ unsupported_cuda_archs = [
"10", "11", "12", "13", '10', '11', '12', '13',
"20", "21", '20', '21',
"30", "32", "35", "37" '30', '32', '35', '37'
] ]
for value in unsupported_cuda_archs: for value in unsupported_cuda_archs:
conflicts(f"cuda_arch={value}", when="+cuda", conflicts('cuda_arch={0}'.format(value), when='+cuda',
msg=f"CUDA architecture {value} is not supported") msg='CUDA architecture {0} is not supported'.format(value))
^^^^^^^ ^^^^^^^
Methods Methods
@@ -106,16 +104,16 @@ class of your package. For example, you can add it to your
spec = self.spec spec = self.spec
args = [] args = []
... ...
if spec.satisfies("+cuda"): if '+cuda' in spec:
# Set up the cuda macros needed by the build # Set up the cuda macros needed by the build
args.append("-DWITH_CUDA=ON") args.append('-DWITH_CUDA=ON')
cuda_arch_list = spec.variants["cuda_arch"].value cuda_arch_list = spec.variants['cuda_arch'].value
cuda_arch = cuda_arch_list[0] cuda_arch = cuda_arch_list[0]
if cuda_arch != "none": if cuda_arch != 'none':
args.append(f"-DCUDA_FLAGS=-arch=sm_{cuda_arch}") args.append('-DCUDA_FLAGS=-arch=sm_{0}'.format(cuda_arch))
else: else:
# Ensure build with cuda is disabled # Ensure build with cuda is disabled
args.append("-DWITH_CUDA=OFF") args.append('-DWITH_CUDA=OFF')
... ...
return args return args
@@ -124,7 +122,7 @@ You will need to customize options as needed for your build.
This example also illustrates how to check for the ``cuda`` variant using This example also illustrates how to check for the ``cuda`` variant using
``self.spec`` and how to retrieve the ``cuda_arch`` variant's value, which ``self.spec`` and how to retrieve the ``cuda_arch`` variant's value, which
is a list, using ``self.spec.variants["cuda_arch"].value``. is a list, using ``self.spec.variants['cuda_arch'].value``.
With over 70 packages using ``CudaPackage`` as of January 2021 there are With over 70 packages using ``CudaPackage`` as of January 2021 there are
lots of examples to choose from to get more ideas for using this package. lots of examples to choose from to get more ideas for using this package.
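Putting these pieces together, a minimal sketch of a CMake-based package that mixes in ``CudaPackage`` might look as follows. The class name ``MyCudaPackage`` and the ``-DWITH_CUDA``/``-DCUDA_FLAGS`` options are hypothetical placeholders, not taken from a real package:

.. code-block:: python

    from spack.package import *


    class MyCudaPackage(CMakePackage, CudaPackage):
        """Hypothetical package illustrating the CudaPackage mix-in."""

        # Require an explicit architecture whenever CUDA support is enabled
        conflicts("cuda_arch=none", when="+cuda", msg="CUDA architecture is required")

        def cmake_args(self):
            args = []
            if self.spec.satisfies("+cuda"):
                args.append("-DWITH_CUDA=ON")  # hypothetical project option
                cuda_arch = self.spec.variants["cuda_arch"].value[0]
                args.append(f"-DCUDA_FLAGS=-arch=sm_{cuda_arch}")
            else:
                args.append("-DWITH_CUDA=OFF")
            return args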

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details. .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -20,8 +21,8 @@ start is to look at the definitions of other build systems. This guide
focuses mostly on how Spack's build systems work. focuses mostly on how Spack's build systems work.
In this guide, we will be using the In this guide, we will be using the
`perl <https://github.com/spack/spack/blob/develop/var/spack/repos/spack_repo/builtin/packages/perl/package.py>`_ and `perl <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl/package.py>`_ and
`cmake <https://github.com/spack/spack/blob/develop/var/spack/repos/spack_repo/builtin/packages/cmake/package.py>`_ `cmake <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cmake/package.py>`_
packages as examples. ``perl``'s build system is a hand-written packages as examples. ``perl``'s build system is a hand-written
``Configure`` shell script, while ``cmake`` bootstraps itself during ``Configure`` shell script, while ``cmake`` bootstraps itself during
installation. Both of these packages require custom build systems. installation. Both of these packages require custom build systems.
@@ -56,13 +57,13 @@ If you look at the ``perl`` package, you'll see:
.. code-block:: python .. code-block:: python
phases = ("configure", "build", "install") phases = ['configure', 'build', 'install']
Similarly, ``cmake`` defines: Similarly, ``cmake`` defines:
.. code-block:: python .. code-block:: python
phases = ("bootstrap", "build", "install") phases = ['bootstrap', 'build', 'install']
If we look at the ``cmake`` example, this tells Spack's ``PackageBase`` If we look at the ``cmake`` example, this tells Spack's ``PackageBase``
class to run the ``bootstrap``, ``build``, and ``install`` functions class to run the ``bootstrap``, ``build``, and ``install`` functions
@@ -77,7 +78,7 @@ If we look at ``perl``, we see that it defines a ``configure`` method:
.. code-block:: python .. code-block:: python
def configure(self, spec, prefix): def configure(self, spec, prefix):
configure = Executable("./Configure") configure = Executable('./Configure')
configure(*self.configure_args()) configure(*self.configure_args())
There is also a corresponding ``configure_args`` function that handles There is also a corresponding ``configure_args`` function that handles
@@ -91,7 +92,7 @@ phases are pretty simple:
make() make()
def install(self, spec, prefix): def install(self, spec, prefix):
make("install") make('install')
The ``cmake`` package looks very similar, but with a ``bootstrap`` The ``cmake`` package looks very similar, but with a ``bootstrap``
function instead of ``configure``: function instead of ``configure``:
@@ -99,14 +100,14 @@ function instead of ``configure``:
.. code-block:: python .. code-block:: python
def bootstrap(self, spec, prefix): def bootstrap(self, spec, prefix):
bootstrap = Executable("./bootstrap") bootstrap = Executable('./bootstrap')
bootstrap(*self.bootstrap_args()) bootstrap(*self.bootstrap_args())
def build(self, spec, prefix): def build(self, spec, prefix):
make() make()
def install(self, spec, prefix): def install(self, spec, prefix):
make("install") make('install')
Again, there is a ``bootstrap_args`` function that determines the Again, there is a ``bootstrap_args`` function that determines the
correct bootstrap flags to use. correct bootstrap flags to use.
@@ -127,21 +128,16 @@ before or after a particular phase. For example, in ``perl``, we see:
.. code-block:: python .. code-block:: python
@run_after("install") @run_after('install')
def install_cpanm(self): def install_cpanm(self):
spec = self.spec spec = self.spec
maker = make
cpan_dir = join_path("cpanm", "cpanm") if '+cpanm' in spec:
if sys.platform == "win32": with working_dir(join_path('cpanm', 'cpanm')):
maker = nmake perl = spec['perl'].command
cpan_dir = join_path(self.stage.source_path, cpan_dir) perl('Makefile.PL')
cpan_dir = windows_sfn(cpan_dir) make()
if "+cpanm" in spec: make('install')
with working_dir(cpan_dir):
perl = spec["perl"].command
perl("Makefile.PL")
maker()
maker("install")
This extra step automatically installs ``cpanm`` in addition to the This extra step automatically installs ``cpanm`` in addition to the
base Perl installation. base Perl installation.
@@ -178,16 +174,10 @@ In the ``perl`` package, we can see:
.. code-block:: python .. code-block:: python
@run_after("build") @run_after('build')
@on_package_attributes(run_tests=True) @on_package_attributes(run_tests=True)
def build_test(self): def test(self):
if sys.platform == "win32": make('test')
win32_dir = os.path.join(self.stage.source_path, "win32")
win32_dir = windows_sfn(win32_dir)
with working_dir(win32_dir):
nmake("test", ignore_quotes=True)
else:
make("test")
As you can guess, this runs ``make test`` *after* building the package, As you can guess, this runs ``make test`` *after* building the package,
if and only if testing is requested. Again, this is not specific to if and only if testing is requested. Again, this is not specific to
@@ -199,7 +189,7 @@ custom build systems, it can be added to existing build systems as well.
.. code-block:: python .. code-block:: python
@run_after("install") @run_after('install')
@on_package_attributes(run_tests=True) @on_package_attributes(run_tests=True)
works as expected. However, if you reverse the ordering: works as expected. However, if you reverse the ordering:
@@ -207,7 +197,7 @@ custom build systems, it can be added to existing build systems as well.
.. code-block:: python .. code-block:: python
@on_package_attributes(run_tests=True) @on_package_attributes(run_tests=True)
@run_after("install") @run_after('install')
the tests will always be run regardless of whether or not the tests will always be run regardless of whether or not
``--test=root`` is requested. See https://github.com/spack/spack/issues/3833 ``--test=root`` is requested. See https://github.com/spack/spack/issues/3833
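To summarize, here is a minimal sketch of a package with a hand-written build system, following the ``perl`` example above. The project is assumed to ship its own ``./Configure`` script and the ``-Dprefix`` flag is a hypothetical example; note the decorator order on the test hook:

.. code-block:: python

    from spack.package import *


    class Example(Package):
        """Hypothetical package with a hand-written ./Configure script."""

        phases = ("configure", "build", "install")

        def configure(self, spec, prefix):
            configure = Executable("./Configure")
            configure(f"-Dprefix={prefix}")  # hypothetical flag

        def build(self, spec, prefix):
            make()

        def install(self, spec, prefix):
            make("install")

        # @run_after must come before @on_package_attributes so the check
        # only runs when --test=root is requested
        @run_after("build")
        @on_package_attributes(run_tests=True)
        def build_test(self):
            make("test")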

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details. .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -24,8 +25,8 @@ use Spack to build packages with the tools.
The Spack Python class ``IntelOneapiPackage`` is a base class that is The Spack Python class ``IntelOneapiPackage`` is a base class that is
used by ``IntelOneapiCompilers``, ``IntelOneapiMkl``, used by ``IntelOneapiCompilers``, ``IntelOneapiMkl``,
``IntelOneapiTbb`` and other classes to implement the oneAPI ``IntelOneapiTbb`` and other classes to implement the oneAPI
packages. Search for ``oneAPI`` at `packages.spack.io <https://packages.spack.io>`_ for the full packages. See the :ref:`package-list` for the full list of available
list of available oneAPI packages, or use:: oneAPI packages or use::
spack list -d oneAPI spack list -d oneAPI
@@ -33,6 +34,9 @@ For more information on a specific package, do::
spack info --all <package-name> spack info --all <package-name>
Intel no longer releases new versions of Parallel Studio, which can be
used in Spack via the :ref:`intelpackage`. All of its components can
now be found in oneAPI.
Examples Examples
======== ========
@@ -47,51 +51,31 @@ Install the oneAPI compilers::
spack install intel-oneapi-compilers spack install intel-oneapi-compilers
Add the compilers to your ``compilers.yaml`` so spack can use them::
To build the ``patchelf`` Spack package with ``icx``, do:: spack compiler add `spack location -i intel-oneapi-compilers`/compiler/latest/linux/bin/intel64
spack compiler add `spack location -i intel-oneapi-compilers`/compiler/latest/linux/bin
Verify that the compilers are available::
spack compiler list
The ``intel-oneapi-compilers`` package includes 2 families of
compilers:
* ``intel``: ``icc``, ``icpc``, ``ifort``. Intel's *classic*
compilers.
* ``oneapi``: ``icx``, ``icpx``, ``ifx``. Intel's new generation of
compilers based on LLVM.
To build the ``patchelf`` Spack package with ``icc``, do::
spack install patchelf%intel
To build with with ``icx``, do ::
spack install patchelf%oneapi spack install patchelf%oneapi
Using oneAPI Spack environment
-------------------------------
In this example, we build lammps with ``icx`` using a Spack environment for oneAPI packages created by Intel. The
compilers are installed with Spack as in the example above.
Install the oneAPI compilers::
spack install intel-oneapi-compilers
Clone the `spack-configs <https://github.com/spack/spack-configs>`_ repo and activate the Intel oneAPI CPU environment::
git clone https://github.com/spack/spack-configs
spack env activate spack-configs/INTEL/CPU
spack concretize -f
The `Intel oneAPI CPU environment <https://github.com/spack/spack-configs/blob/main/INTEL/CPU/spack.yaml>`_ contains applications tested and validated by Intel; the list is constantly being extended and currently includes:
- `Devito <https://www.devitoproject.org/>`_
- `GROMACS <https://www.gromacs.org/>`_
- `HPCG <https://www.hpcg-benchmark.org/>`_
- `HPL <https://netlib.org/benchmark/hpl/>`_
- `LAMMPS <https://www.lammps.org/#gsc.tab=0>`_
- `OpenFOAM <https://www.openfoam.com/>`_
- `Quantum Espresso <https://www.quantum-espresso.org/>`_
- `STREAM <https://www.cs.virginia.edu/stream/>`_
- `WRF <https://github.com/wrf-model/WRF>`_
To build lammps with the oneAPI compiler from this environment, just run::
spack install lammps
Compiled binaries can be found using::
spack cd -i lammps
You can do the same for all other applications from this environment.
Using oneAPI MPI to Satisfy a Virtual Dependence Using oneAPI MPI to Satisfy a Virtual Dependence
------------------------------------------------------ ------------------------------------------------------
@@ -111,23 +95,18 @@ Compilers
--------- ---------
To use the compilers, add some information about the installation to To use the compilers, add some information about the installation to
``packages.yaml``. For most users, it is sufficient to do:: ``compilers.yaml``. For most users, it is sufficient to do::
spack compiler add /opt/intel/oneapi/compiler/latest/bin spack compiler add /opt/intel/oneapi/compiler/latest/linux/bin/intel64
spack compiler add /opt/intel/oneapi/compiler/latest/linux/bin
Adapt the paths above if you did not install the tools in the default Adapt the paths above if you did not install the tools in the default
location. After adding the compilers, using them is the same location. After adding the compilers, using them is the same
as if you had installed the ``intel-oneapi-compilers`` package. as if you had installed the ``intel-oneapi-compilers`` package.
Another option is to manually add the configuration to Another option is to manually add the configuration to
``packages.yaml`` as described in :ref:`Compiler configuration ``compilers.yaml`` as described in :ref:`Compiler configuration
<compiler-config>`. <compiler-config>`.
Before 2024, the directory structure was different::
spack compiler add /opt/intel/oneapi/compiler/latest/linux/bin/intel64
spack compiler add /opt/intel/oneapi/compiler/latest/linux/bin
Libraries Libraries
--------- ---------
@@ -145,7 +124,7 @@ Using oneAPI Tools Installed by Spack
===================================== =====================================
Spack can be a convenient way to install and configure compilers and Spack can be a convenient way to install and configure compilers and
libraries, even if you do not intend to build a Spack package. If you libaries, even if you do not intend to build a Spack package. If you
want to build a Makefile project using Spack-installed oneAPI compilers, want to build a Makefile project using Spack-installed oneAPI compilers,
then use spack to configure your environment:: then use spack to configure your environment::
@@ -162,5 +141,15 @@ You can also use Spack-installed libraries. For example::
Will update your environment CPATH, LIBRARY_PATH, and other Will update your environment CPATH, LIBRARY_PATH, and other
environment variables for building an application with oneMKL. environment variables for building an application with oneMKL.
More information
================
This section describes basic use of oneAPI, especially if it has
changed compared to Parallel Studio. See :ref:`intelpackage` for more
information on :ref:`intel-virtual-packages`,
:ref:`intel-unrelated-packages`,
:ref:`intel-integrating-external-libraries`, and
:ref:`using-mkl-tips`.
.. _`Intel installers`: https://software.intel.com/content/www/us/en/develop/documentation/installation-guide-for-intel-oneapi-toolkits-linux/top.html .. _`Intel installers`: https://software.intel.com/content/www/us/en/develop/documentation/installation-guide-for-intel-oneapi-toolkits-linux/top.html

File diff suppressed because it is too large

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details. .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -87,7 +88,7 @@ override the ``luarocks_args`` method like so:
.. code-block:: python .. code-block:: python
def luarocks_args(self): def luarocks_args(self):
return ["flag1", "flag2"] return ['flag1', 'flag2']
One common use of this is to override warnings or flags for newer compilers, as in: One common use of this is to override warnings or flags for newer compilers, as in:

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details. .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -58,7 +59,7 @@ using GNU Make, you should add a dependency on ``gmake``:
.. code-block:: python .. code-block:: python
depends_on("gmake", type="build") depends_on('gmake', type='build')
^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -87,18 +88,18 @@ command-line. However, Makefiles that use ``?=`` for assignment honor
environment variables. Since Spack already sets ``CC``, ``CXX``, ``F77``, environment variables. Since Spack already sets ``CC``, ``CXX``, ``F77``,
and ``FC``, you won't need to worry about setting these variables. If and ``FC``, you won't need to worry about setting these variables. If
there are any other variables you need to set, you can do this in the there are any other variables you need to set, you can do this in the
``setup_build_environment`` method: ``edit`` method:
.. code-block:: python .. code-block:: python
def setup_build_environment(self, env: EnvironmentModifications) -> None: def edit(self, spec, prefix):
env.set("PREFIX", prefix) env['PREFIX'] = prefix
env.set("BLASLIB", spec["blas"].libs.ld_flags) env['BLASLIB'] = spec['blas'].libs.ld_flags
`cbench <https://github.com/spack/spack/blob/develop/var/spack/repos/spack_repo/builtin/packages/cbench/package.py>`_ `cbench <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cbench/package.py>`_
is a good example of a simple package that does this, while is a good example of a simple package that does this, while
`esmf <https://github.com/spack/spack/blob/develop/var/spack/repos/spack_repo/builtin/packages/esmf/package.py>`_ `esmf <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/esmf/package.py>`_
is a good example of a more complex package. is a good example of a more complex package.
"""""""""""""""""""""" """"""""""""""""""""""
@@ -112,7 +113,7 @@ you can do this like so:
.. code-block:: python .. code-block:: python
build_targets = ["CC=cc"] build_targets = ['CC=cc']
If you do need access to the spec, you can create a property like so: If you do need access to the spec, you can create a property like so:
@@ -124,12 +125,12 @@ If you do need access to the spec, you can create a property like so:
spec = self.spec spec = self.spec
return [ return [
"CC=cc", 'CC=cc',
f"BLASLIB={spec['blas'].libs.ld_flags}", 'BLASLIB={0}'.format(spec['blas'].libs.ld_flags),
] ]
`cloverleaf <https://github.com/spack/spack/blob/develop/var/spack/repos/spack_repo/builtin/packages/cloverleaf/package.py>`_ `cloverleaf <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cloverleaf/package.py>`_
is a good example of a package that uses this strategy. is a good example of a package that uses this strategy.
""""""""""""" """""""""""""
@@ -139,20 +140,20 @@ Edit Makefile
Some Makefiles are just plain stubborn and will ignore command-line Some Makefiles are just plain stubborn and will ignore command-line
variables. The only way to ensure that these packages build correctly variables. The only way to ensure that these packages build correctly
is to directly edit the Makefile. Spack provides a ``FileFilter`` class is to directly edit the Makefile. Spack provides a ``FileFilter`` class
and a ``filter`` method to help with this. For example: and a ``filter_file`` method to help with this. For example:
.. code-block:: python .. code-block:: python
def edit(self, spec, prefix): def edit(self, spec, prefix):
makefile = FileFilter("Makefile") makefile = FileFilter('Makefile')
makefile.filter(r"^\s*CC\s*=.*", f"CC = {spack_cc}") makefile.filter(r'^\s*CC\s*=.*', 'CC = ' + spack_cc)
makefile.filter(r"^\s*CXX\s*=.*", f"CXX = {spack_cxx}") makefile.filter(r'^\s*CXX\s*=.*', 'CXX = ' + spack_cxx)
makefile.filter(r"^\s*F77\s*=.*", f"F77 = {spack_f77}") makefile.filter(r'^\s*F77\s*=.*', 'F77 = ' + spack_f77)
makefile.filter(r"^\s*FC\s*=.*", f"FC = {spack_fc}") makefile.filter(r'^\s*FC\s*=.*', 'FC = ' + spack_fc)
`stream <https://github.com/spack/spack/blob/develop/var/spack/repos/spack_repo/builtin/packages/stream/package.py>`_ `stream <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/stream/package.py>`_
is a good example of a package that involves editing a Makefile to set is a good example of a package that involves editing a Makefile to set
the appropriate variables. the appropriate variables.
@@ -180,19 +181,19 @@ well for storing variables:
def edit(self, spec, prefix): def edit(self, spec, prefix):
config = { config = {
"CC": "cc", 'CC': 'cc',
"MAKE": "make", 'MAKE': 'make',
} }
if spec.satisfies("+blas"): if '+blas' in spec:
config["BLAS_LIBS"] = spec["blas"].libs.joined() config['BLAS_LIBS'] = spec['blas'].libs.joined()
with open("make.inc", "w") as inc: with open('make.inc', 'w') as inc:
for key in config: for key in config:
inc.write(f"{key} = {config[key]}\n") inc.write('{0} = {1}\n'.format(key, config[key]))
`elk <https://github.com/spack/spack/blob/develop/var/spack/repos/spack_repo/builtin/packages/elk/package.py>`_ `elk <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/elk/package.py>`_
is a good example of a package that uses a dictionary to store is a good example of a package that uses a dictionary to store
configuration variables. configuration variables.
@@ -203,17 +204,17 @@ them in a list:
def edit(self, spec, prefix): def edit(self, spec, prefix):
config = [ config = [
f"INSTALL_DIR = {prefix}", 'INSTALL_DIR = {0}'.format(prefix),
"INCLUDE_DIR = $(INSTALL_DIR)/include", 'INCLUDE_DIR = $(INSTALL_DIR)/include',
"LIBRARY_DIR = $(INSTALL_DIR)/lib", 'LIBRARY_DIR = $(INSTALL_DIR)/lib',
] ]
with open("make.inc", "w") as inc: with open('make.inc', 'w') as inc:
for var in config: for var in config:
inc.write(f"{var}\n") inc.write('{0}\n'.format(var))
`hpl <https://github.com/spack/spack/blob/develop/var/spack/repos/spack_repo/builtin/packages/hpl/package.py>`_ `hpl <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hpl/package.py>`_
is a good example of a package that uses a list to store is a good example of a package that uses a list to store
configuration variables. configuration variables.
@@ -283,7 +284,7 @@ can tell Spack where to locate it like so:
.. code-block:: python .. code-block:: python
build_directory = "src" build_directory = 'src'
^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^
@@ -298,8 +299,8 @@ install the package:
def install(self, spec, prefix): def install(self, spec, prefix):
mkdir(prefix.bin) mkdir(prefix.bin)
install("foo", prefix.bin) install('foo', prefix.bin)
install_tree("lib", prefix.lib) install_tree('lib', prefix.lib)
^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details. .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -47,8 +48,8 @@ class automatically adds the following dependencies:
.. code-block:: python .. code-block:: python
depends_on("java", type=("build", "run")) depends_on('java', type=('build', 'run'))
depends_on("maven", type="build") depends_on('maven', type='build')
In the ``pom.xml`` file, you may see sections like: In the ``pom.xml`` file, you may see sections like:
@@ -71,8 +72,8 @@ should add:
.. code-block:: python .. code-block:: python
depends_on("java@7:", type="build") depends_on('java@7:', type='build')
depends_on("maven@3.5.4:", type="build") depends_on('maven@3.5.4:', type='build')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -87,9 +88,9 @@ the build phase. For example:
def build_args(self): def build_args(self):
return [ return [
"-Pdist,native", '-Pdist,native',
"-Dtar", '-Dtar',
"-Dmaven.javadoc.skip=true" '-Dmaven.javadoc.skip=true'
] ]
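For reference, a complete Maven package is usually quite small. A minimal sketch, with hypothetical version requirements and build flags, could look like:

.. code-block:: python

    from spack.package import *


    class Example(MavenPackage):
        """Hypothetical Maven-based package."""

        # Tighten the defaults added by MavenPackage to match pom.xml
        depends_on("java@7:", type="build")
        depends_on("maven@3.5.4:", type="build")

        def build_args(self):
            # Hypothetical profiles/properties passed during the build phase
            return ["-Pdist,native", "-Dmaven.javadoc.skip=true"]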

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details. .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -85,8 +86,8 @@ the ``MesonPackage`` base class already contains:
.. code-block:: python .. code-block:: python
depends_on("meson", type="build") depends_on('meson', type='build')
depends_on("ninja", type="build") depends_on('ninja', type='build')
If you need to specify a particular version requirement, you can If you need to specify a particular version requirement, you can
@@ -94,8 +95,8 @@ override this in your package:
.. code-block:: python .. code-block:: python
depends_on("meson@0.43.0:", type="build") depends_on('meson@0.43.0:', type='build')
depends_on("ninja", type="build") depends_on('ninja', type='build')
^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^
@@ -120,7 +121,7 @@ override the ``meson_args`` method like so:
.. code-block:: python .. code-block:: python
def meson_args(self): def meson_args(self):
return ["--warnlevel=3"] return ['--warnlevel=3']
This method can be used to pass flags as well as variables. This method can be used to pass flags as well as variables.
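A variant can be translated into a Meson project option in the same method. A minimal sketch, assuming the project defines a boolean ``docs`` option in its ``meson_options.txt``:

.. code-block:: python

    from spack.package import *


    class Example(MesonPackage):
        """Hypothetical Meson-based package."""

        variant("docs", default=False, description="Build the documentation")

        depends_on("meson@0.43.0:", type="build")
        depends_on("ninja", type="build")

        def meson_args(self):
            args = ["--warnlevel=3"]
            # -Ddocs is a hypothetical option defined by the project
            if self.spec.satisfies("+docs"):
                args.append("-Ddocs=true")
            else:
                args.append("-Ddocs=false")
            return args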

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details. .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details. .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -117,7 +118,7 @@ so ``PerlPackage`` contains:
.. code-block:: python .. code-block:: python
extends("perl") extends('perl')
If your package requires a specific version of Perl, you should If your package requires a specific version of Perl, you should
@@ -131,14 +132,14 @@ properly. If your package uses ``Makefile.PL`` to build, add:
.. code-block:: python .. code-block:: python
depends_on("perl-extutils-makemaker", type="build") depends_on('perl-extutils-makemaker', type='build')
If your package uses ``Build.PL`` to build, add: If your package uses ``Build.PL`` to build, add:
.. code-block:: python .. code-block:: python
depends_on("perl-module-build", type="build") depends_on('perl-module-build', type='build')
^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^
@@ -164,80 +165,14 @@ arguments to ``Makefile.PL`` or ``Build.PL`` by overriding
.. code-block:: python .. code-block:: python
def configure_args(self): def configure_args(self):
expat = self.spec["expat"].prefix expat = self.spec['expat'].prefix
return [ return [
"EXPATLIBPATH={0}".format(expat.lib), 'EXPATLIBPATH={0}'.format(expat.lib),
"EXPATINCPATH={0}".format(expat.include), 'EXPATINCPATH={0}'.format(expat.include),
] ]
^^^^^^^
Testing
^^^^^^^
``PerlPackage`` provides a simple stand-alone test of the successfully
installed package to confirm that installed perl module(s) can be used.
These tests can be performed any time after the installation using
``spack -v test run``. (For more information on the command, see
:ref:`cmd-spack-test-run`.)
The base class automatically detects perl modules based on the presence
of ``*.pm`` files under the package's library directory. For example,
the files under ``perl-bignum``'s perl library are:
.. code-block:: console
$ find . -name "*.pm"
./bigfloat.pm
./bigrat.pm
./Math/BigFloat/Trace.pm
./Math/BigInt/Trace.pm
./Math/BigRat/Trace.pm
./bigint.pm
./bignum.pm
which results in the package having the ``use_modules`` property containing:
.. code-block:: python
use_modules = [
"bigfloat",
"bigrat",
"Math::BigFloat::Trace",
"Math::BigInt::Trace",
"Math::BigRat::Trace",
"bigint",
"bignum",
]
.. note::
This list can often be used to catch missing dependencies.
If the list is somehow wrong, you can provide the names of the modules
yourself by overriding ``use_modules`` like so:
.. code-block:: python
use_modules = ["bigfloat", "bigrat", "bigint", "bignum"]
If you only want a subset of the automatically detected modules to be
tested, you could instead define the ``skip_modules`` property on the
package. So, instead of overriding ``use_modules`` as shown above, you
could define the following:
.. code-block:: python
skip_modules = [
"Math::BigFloat::Trace",
"Math::BigInt::Trace",
"Math::BigRat::Trace",
]
for the same use tests.
^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^
Alternatives to Spack Alternatives to Spack
^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details. .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -151,16 +152,16 @@ set. Once set, ``pypi`` will be used to define the ``homepage``,
.. code-block:: python .. code-block:: python
homepage = "https://pypi.org/project/setuptools/" homepage = 'https://pypi.org/project/setuptools/'
url = "https://pypi.org/packages/source/s/setuptools/setuptools-49.2.0.zip" url = 'https://pypi.org/packages/source/s/setuptools/setuptools-49.2.0.zip'
list_url = "https://pypi.org/simple/setuptools/" list_url = 'https://pypi.org/simple/setuptools/'
is equivalent to: is equivalent to:
.. code-block:: python .. code-block:: python
pypi = "setuptools/setuptools-49.2.0.zip" pypi = 'setuptools/setuptools-49.2.0.zip'
If a package has a different homepage listed on PyPI, you can If a package has a different homepage listed on PyPI, you can
@@ -207,7 +208,7 @@ dependencies to your package:
.. code-block:: python .. code-block:: python
depends_on("py-setuptools@42:", type="build") depends_on('py-setuptools@42:', type='build')
Note that ``py-wheel`` is already listed as a build dependency in the Note that ``py-wheel`` is already listed as a build dependency in the
@@ -231,7 +232,7 @@ Look for dependencies under the following keys:
* ``dependencies`` under ``[project]`` * ``dependencies`` under ``[project]``
These packages are required for building and installation. You can These packages are required for building and installation. You can
add them with ``type=("build", "run")``. add them with ``type=('build', 'run')``.
* ``[project.optional-dependencies]`` * ``[project.optional-dependencies]``
@@ -278,12 +279,12 @@ distutils library, and has almost the exact same API. In addition to
* ``setup_requires`` * ``setup_requires``
These packages are usually only needed at build-time, so you can These packages are usually only needed at build-time, so you can
add them with ``type="build"``. add them with ``type='build'``.
* ``install_requires`` * ``install_requires``
These packages are required for building and installation. You can These packages are required for building and installation. You can
add them with ``type=("build", "run")``. add them with ``type=('build', 'run')``.
* ``extras_require`` * ``extras_require``
@@ -295,7 +296,7 @@ distutils library, and has almost the exact same API. In addition to
These are packages that are required to run the unit tests for the These are packages that are required to run the unit tests for the
package. These dependencies can be specified using the package. These dependencies can be specified using the
``type="test"`` dependency type. However, the PyPI tarballs rarely ``type='test'`` dependency type. However, the PyPI tarballs rarely
contain unit tests, so there is usually no reason to add these. contain unit tests, so there is usually no reason to add these.
See https://setuptools.pypa.io/en/latest/userguide/dependency_management.html See https://setuptools.pypa.io/en/latest/userguide/dependency_management.html
@@ -320,7 +321,7 @@ older versions of flit may use the following keys:
* ``requires`` under ``[tool.flit.metadata]`` * ``requires`` under ``[tool.flit.metadata]``
These packages are required for building and installation. You can These packages are required for building and installation. You can
add them with ``type=("build", "run")``. add them with ``type=('build', 'run')``.
* ``[tool.flit.metadata.requires-extra]`` * ``[tool.flit.metadata.requires-extra]``
@@ -365,7 +366,7 @@ If the ``pyproject.toml`` lists ``mesonpy`` as the ``build-backend``,
it uses the meson build system. Meson uses the default it uses the meson build system. Meson uses the default
``pyproject.toml`` keys to list dependencies. ``pyproject.toml`` keys to list dependencies.
See https://meson-python.readthedocs.io/en/latest/tutorials/introduction.html See https://meson-python.readthedocs.io/en/latest/usage/start.html
for more information. for more information.
""" """
@@ -433,12 +434,12 @@ the BLAS/LAPACK library you want pkg-config to search for:
.. code-block:: python .. code-block:: python
depends_on("py-pip@22.1:", type="build") depends_on('py-pip@22.1:', type='build')
def config_settings(self, spec, prefix): def config_settings(self, spec, prefix):
return { return {
"blas": spec["blas"].libs.names[0], 'blas': spec['blas'].libs.names[0],
"lapack": spec["lapack"].libs.names[0], 'lapack': spec['lapack'].libs.names[0],
} }
@@ -462,10 +463,10 @@ has an optional dependency on ``libyaml`` that can be enabled like so:
def global_options(self, spec, prefix): def global_options(self, spec, prefix):
options = [] options = []
if spec.satisfies("+libyaml"): if '+libyaml' in spec:
options.append("--with-libyaml") options.append('--with-libyaml')
else: else:
options.append("--without-libyaml") options.append('--without-libyaml')
return options return options
@@ -491,10 +492,10 @@ allows you to specify the directories to search for ``libyaml``:
def install_options(self, spec, prefix): def install_options(self, spec, prefix):
options = [] options = []
if spec.satisfies("+libyaml"): if '+libyaml' in spec:
options.extend([ options.extend([
spec["libyaml"].libs.search_flags, spec['libyaml'].libs.search_flags,
spec["libyaml"].headers.include_flags, spec['libyaml'].headers.include_flags,
]) ])
return options return options
@@ -555,7 +556,7 @@ detected are wrong, you can provide the names yourself by overriding
.. code-block:: python .. code-block:: python
import_modules = ["six"] import_modules = ['six']
Sometimes the list of module names to import depends on how the Sometimes the list of module names to import depends on how the
@@ -570,9 +571,9 @@ This can be expressed like so:
@property @property
def import_modules(self): def import_modules(self):
modules = ["yaml"] modules = ['yaml']
if self.spec.satisfies("+libyaml"): if '+libyaml' in self.spec:
modules.append("yaml.cyaml") modules.append('yaml.cyaml')
return modules return modules
@@ -581,18 +582,18 @@ libraries. Make sure not to add modules/packages containing the word
"test", as these likely won't end up in the installation directory, "test", as these likely won't end up in the installation directory,
or may require test dependencies like pytest to be installed. or may require test dependencies like pytest to be installed.
Instead of defining the ``import_modules`` explicitly, only the subset Instead of defining the ``import_modules`` explicity, only the subset
of module names to be skipped can be defined by using ``skip_modules``. of module names to be skipped can be defined by using ``skip_modules``.
If a defined module has submodules, they are skipped as well, e.g., If a defined module has submodules, they are skipped as well, e.g.,
in case the ``plotting`` modules should be excluded from the in case the ``plotting`` modules should be excluded from the
automatically detected ``import_modules`` ``["nilearn", "nilearn.surface", automatically detected ``import_modules`` ``['nilearn', 'nilearn.surface',
"nilearn.plotting", "nilearn.plotting.data"]`` set: 'nilearn.plotting', 'nilearn.plotting.data']`` set:
.. code-block:: python .. code-block:: python
skip_modules = ["nilearn.plotting"] skip_modules = ['nilearn.plotting']
This will set ``import_modules`` to ``["nilearn", "nilearn.surface"]`` This will set ``import_modules`` to ``['nilearn', 'nilearn.surface']``
Import tests can be run during the installation using ``spack install Import tests can be run during the installation using ``spack install
--test=root`` or at any time after the installation using --test=root`` or at any time after the installation using
@@ -611,11 +612,11 @@ after the ``install`` phase:
.. code-block:: python .. code-block:: python
@run_after("install") @run_after('install')
@on_package_attributes(run_tests=True) @on_package_attributes(run_tests=True)
def install_test(self): def install_test(self):
with working_dir("spack-test", create=True): with working_dir('spack-test', create=True):
python("-c", "import numpy; numpy.test('full', verbose=2)") python('-c', 'import numpy; numpy.test("full", verbose=2)')
when testing is enabled during the installation (i.e., ``spack install when testing is enabled during the installation (i.e., ``spack install
@@ -637,7 +638,7 @@ provides Python bindings in a ``python`` directory, you can use:
.. code-block:: python .. code-block:: python
build_directory = "python" build_directory = 'python'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -717,45 +718,23 @@ command-line tool, or C/C++/Fortran program with optional Python
modules? The former should be prepended with ``py-``, while the modules? The former should be prepended with ``py-``, while the
latter should not. latter should not.
"""""""""""""""""""""""""""""" """"""""""""""""""""""
``extends`` vs. ``depends_on`` extends vs. depends_on
"""""""""""""""""""""""""""""" """"""""""""""""""""""
This is very similar to the naming dilemma above, with a slight twist.
As mentioned in the :ref:`Packaging Guide <packaging_extensions>`, As mentioned in the :ref:`Packaging Guide <packaging_extensions>`,
``extends`` and ``depends_on`` are very similar, but ``extends`` ensures ``extends`` and ``depends_on`` are very similar, but ``extends`` ensures
that the extension and extendee share the same prefix in views. that the extension and extendee share the same prefix in views.
This allows the user to import a Python module without This allows the user to import a Python module without
having to add that module to ``PYTHONPATH``. having to add that module to ``PYTHONPATH``.
Additionally, ``extends("python")`` adds a dependency on the package When deciding between ``extends`` and ``depends_on``, the best rule of
``python-venv``. This improves isolation from the system, whether thumb is to check the installation prefix. If Python libraries are
it's during the build or at runtime: user and system site packages installed to ``<prefix>/lib/pythonX.Y/site-packages``, then you
cannot accidentally be used by any package that ``extends("python")``. should use ``extends``. If Python libraries are installed elsewhere
or the only files that get installed reside in ``<prefix>/bin``, then
As a rule of thumb: if a package does not install any Python modules don't use ``extends``.
of its own, and merely puts a Python script in the ``bin`` directory,
then there is no need for ``extends``. If the package installs modules
in the ``site-packages`` directory, it requires ``extends``.
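The contrast can be sketched with two hypothetical packages; the class names, variant, and dependencies below are illustrative only:

.. code-block:: python

    from spack.package import *


    class ExampleWithBindings(CMakePackage):
        """Hypothetical C++ library that also installs Python modules."""

        variant("python", default=True, description="Build Python bindings")

        # Modules land in <prefix>/lib/pythonX.Y/site-packages, so use extends
        extends("python", when="+python")


    class ExampleTool(MakefilePackage):
        """Hypothetical tool that only installs a Python script into bin/."""

        # Nothing ends up in site-packages, so a plain dependency is enough
        depends_on("python", type="run")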
"""""""""""""""""""""""""""""""""""""
Executing ``python`` during the build
"""""""""""""""""""""""""""""""""""""
Whenever you need to execute a Python command or pass the path of the
Python interpreter to the build system, it is best to use the global
variable ``python`` directly. For example:
.. code-block:: python
@run_before("install")
def recythonize(self):
python("setup.py", "clean") # use the `python` global
As mentioned in the previous section, ``extends("python")`` adds an
automatic dependency on ``python-venv``, which is a virtual environment
that guarantees build isolation. The ``python`` global always refers to
the correct Python interpreter, whether the package uses ``extends("python")``
or ``depends_on("python")``.
^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^
Alternatives to Spack Alternatives to Spack

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details. .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -24,14 +25,6 @@ QMake does not appear to have a standardized way of specifying
the installation directory, so you may have to set environment the installation directory, so you may have to set environment
variables or edit ``*.pro`` files to get things working properly. variables or edit ``*.pro`` files to get things working properly.
QMake packages will depend on the virtual ``qmake`` package which
is provided by multiple versions of Qt: ``qt`` provides Qt up to
Qt5, and ``qt-base`` provides Qt from version Qt6 onwards. This
split was motivated by the desire to split the single Qt package
into its components to allow for more fine-grained installation.
To depend on a specific version, refer to the documentation on
:ref:`virtual-dependencies`.
^^^^^^ ^^^^^^
Phases Phases
^^^^^^ ^^^^^^
@@ -90,7 +83,7 @@ base class already contains:
.. code-block:: python .. code-block:: python
depends_on("qt", type="build") depends_on('qt', type='build')
If you want to specify a particular version requirement, or need to If you want to specify a particular version requirement, or need to
@@ -98,7 +91,7 @@ link to the ``qt`` libraries, you can override this in your package:
.. code-block:: python .. code-block:: python
depends_on("qt@5.6.0:") depends_on('qt@5.6.0:')
^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^
Passing arguments to qmake Passing arguments to qmake
@@ -110,7 +103,7 @@ override the ``qmake_args`` method like so:
.. code-block:: python .. code-block:: python
def qmake_args(self): def qmake_args(self):
return ["-recursive"] return ['-recursive']
This method can be used to pass flags as well as variables. This method can be used to pass flags as well as variables.
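A minimal sketch of a qmake-based package, passing a hypothetical project-specific ``PREFIX`` variable alongside the flag shown above:

.. code-block:: python

    from spack.package import *


    class Example(QMakePackage):
        """Hypothetical qmake-based package."""

        depends_on("qt@5.6.0:")

        def qmake_args(self):
            # PREFIX is a hypothetical variable read by the project's .pro files
            return ["-recursive", f"PREFIX={self.prefix}"]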
@@ -125,7 +118,7 @@ sub-directory by adding the following to the package:
.. code-block:: python .. code-block:: python
build_directory = "src" build_directory = 'src'
^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details. .. Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details. .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -12,7 +13,8 @@ The ``ROCmPackage`` is not a build system but a helper package. Like ``CudaPacka
it provides standard variants, dependencies, and conflicts to facilitate building it provides standard variants, dependencies, and conflicts to facilitate building
packages using GPUs though for AMD in this case. packages using GPUs though for AMD in this case.
You can find the source for this package (and suggestions for setting up your ``packages.yaml`` file) at You can find the source for this package (and suggestions for setting up your
``compilers.yaml`` and ``packages.yaml`` files) at
`<https://github.com/spack/spack/blob/develop/lib/spack/spack/build_systems/rocm.py>`__. `<https://github.com/spack/spack/blob/develop/lib/spack/spack/build_systems/rocm.py>`__.
^^^^^^^^ ^^^^^^^^
@@ -79,27 +81,28 @@ class of your package. For example, you can add it to your
class MyRocmPackage(CMakePackage, ROCmPackage): class MyRocmPackage(CMakePackage, ROCmPackage):
... ...
# Ensure +rocm and amdgpu_targets are passed to dependencies # Ensure +rocm and amdgpu_targets are passed to dependencies
depends_on("mydeppackage", when="+rocm") depends_on('mydeppackage', when='+rocm')
for val in ROCmPackage.amdgpu_targets: for val in ROCmPackage.amdgpu_targets:
depends_on(f"mydeppackage amdgpu_target={val}", depends_on('mydeppackage amdgpu_target={0}'.format(val),
when=f"amdgpu_target={val}") when='amdgpu_target={0}'.format(val))
... ...
def cmake_args(self): def cmake_args(self):
spec = self.spec spec = self.spec
args = [] args = []
... ...
if spec.satisfies("+rocm"): if '+rocm' in spec:
# Set up the hip macros needed by the build # Set up the hip macros needed by the build
args.extend([ args.extend([
"-DENABLE_HIP=ON", '-DENABLE_HIP=ON',
f"-DHIP_ROOT_DIR={spec['hip'].prefix}"]) '-DHIP_ROOT_DIR={0}'.format(spec['hip'].prefix)])
rocm_archs = spec.variants["amdgpu_target"].value rocm_archs = spec.variants['amdgpu_target'].value
if "none" not in rocm_archs: if 'none' not in rocm_archs:
args.append(f"-DHIP_HIPCC_FLAGS=--amdgpu-target={','.join(rocm_archs)}") args.append('-DHIP_HIPCC_FLAGS=--amdgpu-target={0}'
.format(",".join(rocm_archs)))
else: else:
# Ensure build with hip is disabled # Ensure build with hip is disabled
args.append("-DENABLE_HIP=OFF") args.append('-DENABLE_HIP=OFF')
... ...
return args return args
... ...
@@ -111,7 +114,7 @@ build.
This example also illustrates how to check for the ``rocm`` variant using This example also illustrates how to check for the ``rocm`` variant using
``self.spec`` and how to retrieve the ``amdgpu_target`` variant's value ``self.spec`` and how to retrieve the ``amdgpu_target`` variant's value
using ``self.spec.variants["amdgpu_target"].value``. using ``self.spec.variants['amdgpu_target'].value``.
All five packages using ``ROCmPackage`` as of January 2021 also use the All five packages using ``ROCmPackage`` as of January 2021 also use the
:ref:`CudaPackage <cudapackage>`. So it is worth looking at those packages :ref:`CudaPackage <cudapackage>`. So it is worth looking at those packages

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details. .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -39,7 +40,7 @@ for "CRAN <package-name>" and you should quickly find what you want.
If it isn't on CRAN, try Bioconductor, another common R repository. If it isn't on CRAN, try Bioconductor, another common R repository.
For the purposes of this tutorial, we will be walking through For the purposes of this tutorial, we will be walking through
`r-caret <https://github.com/spack/spack/blob/develop/var/spack/repos/spack_repo/builtin/packages/r_caret/package.py>`_ `r-caret <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-caret/package.py>`_
as an example. If you search for "CRAN caret", you will quickly find what as an example. If you search for "CRAN caret", you will quickly find what
you are looking for at https://cran.r-project.org/package=caret. you are looking for at https://cran.r-project.org/package=caret.
https://cran.r-project.org is the main CRAN website. However, CRAN also https://cran.r-project.org is the main CRAN website. However, CRAN also
@@ -162,28 +163,28 @@ attributes that can be used to set ``homepage``, ``url``, ``list_url``, and
.. code-block:: python .. code-block:: python
cran = "caret" cran = 'caret'
is equivalent to: is equivalent to:
.. code-block:: python .. code-block:: python
homepage = "https://cloud.r-project.org/package=caret" homepage = 'https://cloud.r-project.org/package=caret'
url = "https://cloud.r-project.org/src/contrib/caret_6.0-86.tar.gz" url = 'https://cloud.r-project.org/src/contrib/caret_6.0-86.tar.gz'
list_url = "https://cloud.r-project.org/src/contrib/Archive/caret" list_url = 'https://cloud.r-project.org/src/contrib/Archive/caret'
Likewise, the following ``bioc`` attribute: Likewise, the following ``bioc`` attribute:
.. code-block:: python .. code-block:: python
bioc = "BiocVersion" bioc = 'BiocVersion'
is equivalent to: is equivalent to:
.. code-block:: python .. code-block:: python
homepage = "https://bioconductor.org/packages/BiocVersion/" homepage = 'https://bioconductor.org/packages/BiocVersion/'
git = "https://git.bioconductor.org/packages/BiocVersion" git = 'https://git.bioconductor.org/packages/BiocVersion'
^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -199,7 +200,7 @@ base class contains:
.. code-block:: python .. code-block:: python
extends("r") extends('r')
Take a close look at the homepage for ``caret``. If you look at the Take a close look at the homepage for ``caret``. If you look at the
@@ -208,7 +209,7 @@ You should add this to your package like so:
.. code-block:: python .. code-block:: python
depends_on("r@3.2.0:", type=("build", "run")) depends_on('r@3.2.0:', type=('build', 'run'))
^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^
@@ -226,7 +227,7 @@ and list all of their dependencies in the following sections:
* LinkingTo * LinkingTo
As far as Spack is concerned, all 3 of these dependency types As far as Spack is concerned, all 3 of these dependency types
correspond to ``type=("build", "run")``, so you don't have to worry correspond to ``type=('build', 'run')``, so you don't have to worry
about the details. If you are curious what they mean, about the details. If you are curious what they mean,
https://github.com/spack/spack/issues/2951 has a pretty good summary: https://github.com/spack/spack/issues/2951 has a pretty good summary:
@@ -329,7 +330,7 @@ the dependency:
.. code-block:: python .. code-block:: python
depends_on("r-lattice@0.20:", type=("build", "run")) depends_on('r-lattice@0.20:', type=('build', 'run'))
^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^
@@ -337,7 +338,7 @@ Non-R dependencies
^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^
Some packages depend on non-R libraries for linking. Check out the Some packages depend on non-R libraries for linking. Check out the
`r-stringi <https://github.com/spack/spack/blob/develop/var/spack/repos/spack_repo/builtin/packages/r_stringi/package.py>`_ `r-stringi <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-stringi/package.py>`_
package for an example: https://cloud.r-project.org/package=stringi. package for an example: https://cloud.r-project.org/package=stringi.
If you search for the text "SystemRequirements", you will see: If you search for the text "SystemRequirements", you will see:
@@ -352,7 +353,7 @@ Passing arguments to the installation
Some R packages provide additional flags that can be passed to Some R packages provide additional flags that can be passed to
``R CMD INSTALL``, often to locate non-R dependencies. ``R CMD INSTALL``, often to locate non-R dependencies.
`r-rmpi <https://github.com/spack/spack/blob/develop/var/spack/repos/spack_repo/builtin/packages/r_rmpi/package.py>`_ `r-rmpi <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rmpi/package.py>`_
is an example of this, and flags for linking to an MPI library. To pass is an example of this, and flags for linking to an MPI library. To pass
these to the installation command, you can override ``configure_args`` these to the installation command, you can override ``configure_args``
like so: like so:
@@ -360,20 +361,20 @@ like so:
.. code-block:: python .. code-block:: python
def configure_args(self): def configure_args(self):
mpi_name = self.spec["mpi"].name mpi_name = self.spec['mpi'].name
# The type of MPI. Supported values are: # The type of MPI. Supported values are:
# OPENMPI, LAM, MPICH, MPICH2, or CRAY # OPENMPI, LAM, MPICH, MPICH2, or CRAY
if mpi_name == "openmpi": if mpi_name == 'openmpi':
Rmpi_type = "OPENMPI" Rmpi_type = 'OPENMPI'
elif mpi_name == "mpich": elif mpi_name == 'mpich':
Rmpi_type = "MPICH2" Rmpi_type = 'MPICH2'
else: else:
raise InstallError("Unsupported MPI type") raise InstallError('Unsupported MPI type')
return [ return [
"--with-Rmpi-type={0}".format(Rmpi_type), '--with-Rmpi-type={0}'.format(Rmpi_type),
"--with-mpi={0}".format(spec["mpi"].prefix), '--with-mpi={0}'.format(spec['mpi'].prefix),
] ]
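As a summary, a minimal sketch of a CRAN package that links against a non-R library might look like the following; the package name and configure flag are hypothetical:

.. code-block:: python

    from spack.package import *


    class RExample(RPackage):
        """Hypothetical CRAN package."""

        # Expands into homepage, url and list_url on cloud.r-project.org
        cran = "example"

        depends_on("r@3.2.0:", type=("build", "run"))
        depends_on("r-lattice@0.20:", type=("build", "run"))
        depends_on("zlib")

        def configure_args(self):
            # Hypothetical flag forwarded to R CMD INSTALL --configure-args
            return [f"--with-zlib={self.spec['zlib'].prefix}"]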

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details. .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -83,8 +84,8 @@ The ``*.gemspec`` file may contain something like:
.. code-block:: ruby .. code-block:: ruby
summary = "An implementation of the AsciiDoc text processor and publishing toolchain" summary = 'An implementation of the AsciiDoc text processor and publishing toolchain'
description = "A fast, open source text processor and publishing toolchain for converting AsciiDoc content to HTML 5, DocBook 5, and other formats." description = 'A fast, open source text processor and publishing toolchain for converting AsciiDoc content to HTML 5, DocBook 5, and other formats.'
Either of these can be used for the description of the Spack package. Either of these can be used for the description of the Spack package.
@@ -97,7 +98,7 @@ The ``*.gemspec`` file may contain something like:
.. code-block:: ruby .. code-block:: ruby
homepage = "https://asciidoctor.org" homepage = 'https://asciidoctor.org'
This should be used as the official homepage of the Spack package. This should be used as the official homepage of the Spack package.
@@ -111,21 +112,21 @@ the base class contains:
.. code-block:: python .. code-block:: python
extends("ruby") extends('ruby')
The ``*.gemspec`` file may contain something like: The ``*.gemspec`` file may contain something like:
.. code-block:: ruby .. code-block:: ruby
required_ruby_version = ">= 2.3.0" required_ruby_version = '>= 2.3.0'
This can be added to the Spack package using: This can be added to the Spack package using:
.. code-block:: python .. code-block:: python
depends_on("ruby@2.3.0:", type=("build", "run")) depends_on('ruby@2.3.0:', type=('build', 'run'))
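Putting it together, a gem package is typically just a few lines. A minimal sketch with hypothetical metadata:

.. code-block:: python

    from spack.package import *


    class RubyExample(RubyPackage):
        """Hypothetical gem-based package."""

        homepage = "https://example.com"
        url = "https://example.com/example-1.0.0.tar.gz"

        # Mirrors required_ruby_version from the gemspec
        depends_on("ruby@2.3.0:", type=("build", "run"))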
^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details. .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -48,15 +49,15 @@ following phases:
#. ``install`` - install the package #. ``install`` - install the package
Package developers often add unit tests that can be invoked with Package developers often add unit tests that can be invoked with
``scons test`` or ``scons check``. Spack provides a ``build_test`` method ``scons test`` or ``scons check``. Spack provides a ``test`` method
to handle this. Since we don't know which one the package developer to handle this. Since we don't know which one the package developer
chose, the ``build_test`` method does nothing by default, but can be easily chose, the ``test`` method does nothing by default, but can be easily
overridden like so: overridden like so:
.. code-block:: python .. code-block:: python
def build_test(self): def test(self):
scons("check") scons('check')
^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^
@@ -87,7 +88,7 @@ base class already contains:
.. code-block:: python .. code-block:: python
depends_on("scons", type="build") depends_on('scons', type='build')
If you want to specify a particular version requirement, you can override If you want to specify a particular version requirement, you can override
@@ -95,7 +96,7 @@ this in your package:
.. code-block:: python .. code-block:: python
depends_on("scons@2.3.0:", type="build") depends_on('scons@2.3.0:', type='build')
^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -104,10 +105,10 @@ Finding available options
The first place to start when looking for a list of valid options to The first place to start when looking for a list of valid options to
build a package is ``scons --help``. Some packages like build a package is ``scons --help``. Some packages like
`kahip <https://github.com/spack/spack/blob/develop/var/spack/repos/spack_repo/builtin/packages/kahip/package.py>`_ `kahip <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/kahip/package.py>`_
don't bother overwriting the default SCons help message, so this isn't don't bother overwriting the default SCons help message, so this isn't
very useful, but other packages like very useful, but other packages like
`serf <https://github.com/spack/spack/blob/develop/var/spack/repos/spack_repo/builtin/packages/serf/package.py>`_ `serf <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/serf/package.py>`_
print a list of valid command-line variables: print a list of valid command-line variables:
.. code-block:: console .. code-block:: console
@@ -177,7 +178,7 @@ print a list of valid command-line variables:
More advanced packages like More advanced packages like
`cantera <https://github.com/spack/spack/blob/develop/var/spack/repos/spack_repo/builtin/packages/cantera/package.py>`_ `cantera <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cantera/package.py>`_
use ``scons --help`` to print a list of subcommands: use ``scons --help`` to print a list of subcommands:
.. code-block:: console .. code-block:: console
@@ -237,14 +238,14 @@ the package build phase. This is done by overriding ``build_args`` like so:
def build_args(self, spec, prefix): def build_args(self, spec, prefix):
args = [ args = [
f"PREFIX={prefix}", 'PREFIX={0}'.format(prefix),
f"ZLIB={spec['zlib'].prefix}", 'ZLIB={0}'.format(spec['zlib'].prefix),
] ]
if spec.satisfies("+debug"): if '+debug' in spec:
args.append("DEBUG=yes") args.append('DEBUG=yes')
else: else:
args.append("DEBUG=no") args.append('DEBUG=no')
return args return args
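A minimal sketch of a complete SCons package combining a variant-driven build flag with the optional test hook; the ``PREFIX``, ``ZLIB``, and ``DEBUG`` variables are hypothetical:

.. code-block:: python

    from spack.package import *


    class Example(SConsPackage):
        """Hypothetical SCons-based package."""

        variant("debug", default=False, description="Build with debugging symbols")

        depends_on("scons@2.3.0:", type="build")
        depends_on("zlib")

        def build_args(self, spec, prefix):
            # Hypothetical variables; check `scons --help` for the real ones
            args = [f"PREFIX={prefix}", f"ZLIB={spec['zlib'].prefix}"]
            args.append("DEBUG=yes" if spec.satisfies("+debug") else "DEBUG=no")
            return args

        def build_test(self):
            scons("check")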
@@ -274,8 +275,8 @@ environment variables. For example, cantera has the following option:
* env_vars: [ string ] * env_vars: [ string ]
Environment variables to propagate through to SCons. Either the Environment variables to propagate through to SCons. Either the
string "all" or a comma separated list of variable names, e.g. string "all" or a comma separated list of variable names, e.g.
"LD_LIBRARY_PATH,HOME". 'LD_LIBRARY_PATH,HOME'.
- default: "LD_LIBRARY_PATH,PYTHONPATH" - default: 'LD_LIBRARY_PATH,PYTHONPATH'
In the case of cantera, using ``env_vars=all`` allows us to use In the case of cantera, using ``env_vars=all`` allows us to use


@@ -1,4 +1,5 @@
-.. Copyright Spack Project Developers. See COPYRIGHT file for details.
+.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
+   Spack Project Developers. See the top-level COPYRIGHT file for details.
    SPDX-License-Identifier: (Apache-2.0 OR MIT)

@@ -31,7 +32,7 @@ By default, these phases run:
 .. code-block:: console
-   $ sip-build --verbose --target-dir ...
+   $ python configure.py --bindir ... --destdir ...
    $ make
    $ make install

@@ -40,30 +41,30 @@ By default, these phases run:
 Important files
 ^^^^^^^^^^^^^^^
-Each SIP package comes with a custom configuration file written in Python.
-For newer packages, this is called ``project.py``, while in older packages,
-it may be called ``configure.py``. This script contains instructions to build
-the project.
+Each SIP package comes with a custom ``configure.py`` build script,
+written in Python. This script contains instructions to build the project.
 ^^^^^^^^^^^^^^^^^^^^^^^^^
 Build system dependencies
 ^^^^^^^^^^^^^^^^^^^^^^^^^
-``SIPPackage`` requires several dependencies. Python and SIP are needed at build-time
-to run the aforementioned configure script. Python is also needed at run-time to
-actually use the installed Python library. And as we are building Python bindings
-for C/C++ libraries, Python is also needed as a link dependency. All of these
-dependencies are automatically added via the base class.
+``SIPPackage`` requires several dependencies. Python is needed to run
+the ``configure.py`` build script, and to run the resulting Python
+libraries. Qt is needed to provide the ``qmake`` command. SIP is also
+needed to build the package. All of these dependencies are automatically
+added via the base class
 .. code-block:: python
-   extends("python", type=("build", "link", "run"))
-   depends_on("py-sip", type="build")
+   extends('python')
+   depends_on('qt', type='build')
+   depends_on('py-sip', type='build')
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Passing arguments to ``sip-build``
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Passing arguments to ``configure.py``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 Each phase comes with a ``<phase_args>`` function that can be used to pass
 arguments to that particular phase. For example, if you need to pass

@@ -71,11 +72,11 @@ arguments to the configure phase, you can use:
 .. code-block:: python
-   def configure_args(self):
-       return ["--no-python-dbus"]
+   def configure_args(self, spec, prefix):
+       return ['--no-python-dbus']
-A list of valid options can be found by running ``sip-build --help``.
+A list of valid options can be found by running ``python configure.py --help``.
 ^^^^^^^
 Testing

@@ -123,7 +124,7 @@ are wrong, you can provide the names yourself by overriding
 .. code-block:: python
-   import_modules = ["PyQt5"]
+   import_modules = ['PyQt5']
 These tests often catch missing dependencies and non-RPATHed


@@ -1,4 +1,5 @@
-.. Copyright Spack Project Developers. See COPYRIGHT file for details.
+.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
+   Spack Project Developers. See the top-level COPYRIGHT file for details.
    SPDX-License-Identifier: (Apache-2.0 OR MIT)


@@ -1,4 +1,5 @@
-.. Copyright Spack Project Developers. See COPYRIGHT file for details.
+.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
+   Spack Project Developers. See the top-level COPYRIGHT file for details.
    SPDX-License-Identifier: (Apache-2.0 OR MIT)

@@ -57,13 +58,15 @@ Testing
 ``WafPackage`` also provides ``test`` and ``installtest`` methods,
 which are run after the ``build`` and ``install`` phases, respectively.
 By default, these phases do nothing, but you can override them to
-run package-specific unit tests.
+run package-specific unit tests. For example, the
+`py-py2cairo <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-py2cairo/package.py>`_
+package uses:
 .. code-block:: python
    def installtest(self):
-       with working_dir("test"):
-           pytest = which("py.test")
+       with working_dir('test'):
+           pytest = which('py.test')
            pytest()

@@ -92,7 +95,7 @@ the following dependency automatically:
 .. code-block:: python
-   depends_on("python@2.5:", type="build")
+   depends_on('python@2.5:', type='build')
 Waf only supports Python 2.5 and up.

@@ -112,7 +115,7 @@ phase, you can use:
        args = []
        if self.run_tests:
-           args.append("--test")
+           args.append('--test')
        return args


@@ -1,17 +1,17 @@
-.. Copyright Spack Project Developers. See COPYRIGHT file for details.
+.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
+   Spack Project Developers. See the top-level COPYRIGHT file for details.
    SPDX-License-Identifier: (Apache-2.0 OR MIT)
 .. chain:
-=============================================
-Chaining Spack Installations (upstreams.yaml)
-=============================================
+============================
+Chaining Spack Installations
+============================
 You can point your Spack installation to another installation to use any
 packages that are installed there. To register the other Spack instance,
-you can add it as an entry to ``upstreams.yaml`` at any of the
-:ref:`configuration-scopes`:
+you can add it as an entry to ``upstreams.yaml``:
 .. code-block:: yaml

@@ -22,8 +22,7 @@ you can add it as an entry to ``upstreams.yaml`` at any of the
       install_tree: /path/to/another/spack/opt/spack
 ``install_tree`` must point to the ``opt/spack`` directory inside of the
-Spack base directory, or the location of the ``install_tree`` defined
-in :ref:`config.yaml <config-yaml>`.
+Spack base directory.
 Once the upstream Spack instance has been added, ``spack find`` will
 automatically check the upstream instance when querying installed packages,
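Only the ``install_tree`` line of the ``upstreams.yaml`` example survives in the hunk above; for context, a minimal sketch of the full file along the lines described here might look as follows (the instance name ``spack-instance-1`` is an arbitrary label, not something mandated by Spack):

.. code-block:: yaml

   upstreams:
     spack-instance-1:
       install_tree: /path/to/another/spack/opt/spack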


@@ -1,4 +1,5 @@
-# Copyright Spack Project Developers. See COPYRIGHT file for details.
+# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
+# Spack Project Developers. See the top-level COPYRIGHT file for details.
 #
 # SPDX-License-Identifier: (Apache-2.0 OR MIT)

@@ -35,7 +36,7 @@
 if not os.path.exists(link_name):
     os.symlink(os.path.abspath("../../.."), link_name, target_is_directory=True)
 sys.path.insert(0, os.path.abspath("_spack_root/lib/spack/external"))
-sys.path.insert(0, os.path.abspath("_spack_root/lib/spack/external/_vendoring"))
+sys.path.insert(0, os.path.abspath("_spack_root/lib/spack/external/pytest-fallback"))
 sys.path.append(os.path.abspath("_spack_root/lib/spack/"))
 # Add the Spack bin directory to the path so that we can use its output in docs.

@@ -47,6 +48,9 @@
 os.environ["COLIFY_SIZE"] = "25x120"
 os.environ["COLUMNS"] = "120"
+# Generate full package list if needed
+subprocess.call(["spack", "list", "--format=html", "--update=package_list.html"])
 # Generate a command index if an update is needed
 subprocess.call(
     [

@@ -70,22 +74,13 @@
         "--force",  # Overwrite existing files
         "--no-toc",  # Don't create a table of contents file
         "--output-dir=.",  # Directory to place all output
-        "--module-first",  # emit module docs before submodule docs
     ]
-sphinx_apidoc(
-    apidoc_args
-    + [
-        "_spack_root/lib/spack/spack",
-        "_spack_root/lib/spack/spack/test/*.py",
-        "_spack_root/lib/spack/spack/test/cmd/*.py",
-    ]
-)
+sphinx_apidoc(apidoc_args + ["_spack_root/lib/spack/spack"])
 sphinx_apidoc(apidoc_args + ["_spack_root/lib/spack/llnl"])
 # Enable todo items
 todo_include_todos = True
 #
 # Disable duplicate cross-reference warnings.
 #

@@ -93,7 +88,9 @@ class PatchedPythonDomain(PythonDomain):
     def resolve_xref(self, env, fromdocname, builder, typ, target, node, contnode):
         if "refspecific" in node:
             del node["refspecific"]
-        return super().resolve_xref(env, fromdocname, builder, typ, target, node, contnode)
+        return super(PatchedPythonDomain, self).resolve_xref(
+            env, fromdocname, builder, typ, target, node, contnode
+        )
 #

@@ -143,6 +140,7 @@ def setup(sphinx):
 # Get nice vector graphics
 graphviz_output_format = "svg"
 # Add any paths that contain templates here, relative to this directory.
 templates_path = ["_templates"]

@@ -157,7 +155,7 @@ def setup(sphinx):
 # General information about the project.
 project = "Spack"
-copyright = "2013-2023, Lawrence Livermore National Laboratory."
+copyright = "2013-2021, Lawrence Livermore National Laboratory."
 # The version info for the project you're documenting, acts as replacement for
 # |version| and |release|, also used in various other places throughout the

@@ -198,41 +196,16 @@ def setup(sphinx):
     ("py:class", "contextlib.contextmanager"),
     ("py:class", "module"),
     ("py:class", "_io.BufferedReader"),
-    ("py:class", "_io.BytesIO"),
     ("py:class", "unittest.case.TestCase"),
     ("py:class", "_frozen_importlib_external.SourceFileLoader"),
     ("py:class", "clingo.Control"),
     ("py:class", "six.moves.urllib.parse.ParseResult"),
-    ("py:class", "TextIO"),
-    ("py:class", "hashlib._Hash"),
-    ("py:class", "concurrent.futures._base.Executor"),
     # Spack classes that are private and we don't want to expose
     ("py:class", "spack.provider_index._IndexBase"),
     ("py:class", "spack.repo._PrependFileLoader"),
-    ("py:class", "spack.build_systems._checks.BuilderWithDefaults"),
+    ("py:class", "spack.build_systems._checks.BaseBuilder"),
     # Spack classes that intersphinx is unable to resolve
-    ("py:class", "spack.version.StandardVersion"),
-    ("py:class", "spack.spec.DependencySpec"),
-    ("py:class", "spack.spec.ArchSpec"),
-    ("py:class", "spack.spec.InstallStatus"),
-    ("py:class", "spack.spec.SpecfileReaderBase"),
-    ("py:class", "spack.install_test.Pb"),
-    ("py:class", "spack.filesystem_view.SimpleFilesystemView"),
-    ("py:class", "spack.traverse.EdgeAndDepth"),
-    ("py:class", "archspec.cpu.microarchitecture.Microarchitecture"),
-    ("py:class", "spack.compiler.CompilerCache"),
-    # TypeVar that is not handled correctly
-    ("py:class", "llnl.util.lang.T"),
-    ("py:class", "llnl.util.lang.KT"),
-    ("py:class", "llnl.util.lang.VT"),
-    ("py:class", "llnl.util.lang.K"),
-    ("py:class", "llnl.util.lang.V"),
-    ("py:class", "llnl.util.lang.ClassPropertyType"),
-    ("py:obj", "llnl.util.lang.KT"),
-    ("py:obj", "llnl.util.lang.VT"),
-    ("py:obj", "llnl.util.lang.ClassPropertyType"),
-    ("py:obj", "llnl.util.lang.K"),
-    ("py:obj", "llnl.util.lang.V"),
+    ("py:class", "spack.version.VersionBase"),
 ]
 # The reST default role (used for this markup: `text`) to use for all documents.

@@ -248,8 +221,30 @@ def setup(sphinx):
 # If true, sectionauthor and moduleauthor directives will be shown in the
 # output. They are ignored by default.
 # show_authors = False
-sys.path.append("./_pygments")
-pygments_style = "style.SpackStyle"
+# The name of the Pygments (syntax highlighting) style to use.
+# We use our own extension of the default style with a few modifications
+from pygments.style import Style
+from pygments.styles.default import DefaultStyle
+from pygments.token import Comment, Generic, Text
+class SpackStyle(DefaultStyle):
+    styles = DefaultStyle.styles.copy()
+    background_color = "#f4f4f8"
+    styles[Generic.Output] = "#355"
+    styles[Generic.Prompt] = "bold #346ec9"
+import pkg_resources
+dist = pkg_resources.Distribution(__file__)
+sys.path.append(".")  # make 'conf' module findable
+ep = pkg_resources.EntryPoint.parse("spack = conf:SpackStyle", dist=dist)
+dist._ep_map = {"pygments.styles": {"plugin1": ep}}
+pkg_resources.working_set.add(dist)
+pygments_style = "spack"
 # A list of ignored prefixes for module index sorting.
 # modindex_common_prefix = []

@@ -334,20 +329,23 @@ def setup(sphinx):
 # Output file base name for HTML help builder.
 htmlhelp_basename = "Spackdoc"
 # -- Options for LaTeX output --------------------------------------------------
 latex_elements = {
     # The paper size ('letterpaper' or 'a4paper').
-    # 'papersize': 'letterpaper',
+    #'papersize': 'letterpaper',
     # The font size ('10pt', '11pt' or '12pt').
-    # 'pointsize': '10pt',
+    #'pointsize': '10pt',
     # Additional stuff for the LaTeX preamble.
-    # 'preamble': '',
+    #'preamble': '',
 }
 # Grouping the document tree into LaTeX files. List of tuples
 # (source start file, target name, title, author, documentclass [howto/manual]).
-latex_documents = [("index", "Spack.tex", "Spack Documentation", "Todd Gamblin", "manual")]
+latex_documents = [
+    ("index", "Spack.tex", "Spack Documentation", "Todd Gamblin", "manual"),
+]
 # The name of an image file (relative to this directory) to place at the top of
 # the title page.

@@ -394,7 +392,7 @@ def setup(sphinx):
         "Spack",
         "One line description of project.",
         "Miscellaneous",
-    )
+    ),
 ]
 # Documents to append as an appendix to all manuals.

@@ -410,4 +408,6 @@ def setup(sphinx):
 # -- Extension configuration -------------------------------------------------
 # sphinx.ext.intersphinx
-intersphinx_mapping = {"python": ("https://docs.python.org/3", None)}
+intersphinx_mapping = {
+    "python": ("https://docs.python.org/3", None),
+}


@@ -1,4 +1,5 @@
-.. Copyright Spack Project Developers. See COPYRIGHT file for details.
+.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
+   Spack Project Developers. See the top-level COPYRIGHT file for details.
    SPDX-License-Identifier: (Apache-2.0 OR MIT)

@@ -25,23 +26,14 @@ These settings can be overridden in ``etc/spack/config.yaml`` or
 The location where Spack will install packages and their dependencies.
 Default is ``$spack/opt/spack``.
----------------
-``projections``
----------------
-.. warning::
-   Modifying projections of the install tree is strongly discouraged.
-By default Spack installs all packages into a unique directory relative to the install
-tree root with the following layout:
-.. code-block::
-   {architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}
-In very rare cases, it may be necessary to reduce the length of this path. For example,
-very old versions of the Intel compiler are known to segfault when input paths are too long:
+---------------------------------------------------
+``install_hash_length`` and ``install_path_scheme``
+---------------------------------------------------
+The default Spack installation path can be very long and can create problems
+for scripts with hardcoded shebangs. Additionally, when using the Intel
+compiler, and if there is also a long list of dependencies, the compiler may
+segfault. If you see the following:
 .. code-block:: console

@@ -49,25 +41,36 @@ very old versions of the Intel compiler are known to segfault when input paths a
    ** Segmentation violation signal raised. **
    Access violation or stack overflow. Please contact Intel Support for assistance.
-Another case is Python and R packages with many runtime dependencies, which can result
-in very large ``PYTHONPATH`` and ``R_LIBS`` environment variables. This can cause the
-``execve`` system call to fail with ``E2BIG``, preventing processes from starting.
-For this reason, Spack allows users to modify the installation layout through custom
-projections. For example
+it may be because variables containing dependency specs may be too long. There
+are two parameters to help with long path names. Firstly, the
+``install_hash_length`` parameter can set the length of the hash in the
+installation path from 1 to 32. The default path uses the full 32 characters.
+Secondly, it is also possible to modify the entire installation
+scheme. By default Spack uses
+``{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}``
+where the tokens that are available for use in this directive are the
+same as those understood by the :meth:`~spack.spec.Spec.format`
+method. Using this parameter it is possible to use a different package
+layout or reduce the depth of the installation paths. For example
 .. code-block:: yaml
    config:
-     install_tree:
-       root: $spack/opt/spack
-       projections:
-         all: "{name}/{version}/{hash:16}"
+     install_path_scheme: '{name}/{version}/{hash:7}'
-would install packages into sub-directories using only the package name, version and a
-hash length of 16 characters.
-Notice that reducing the hash length increases the likelihood of hash collisions.
+would install packages into sub-directories using only the package
+name, version and a hash length of 7 characters.
+When using either parameter to set the hash length it only affects the
+representation of the hash in the installation directory. You
+should be aware that the smaller the hash length the more likely
+naming conflicts will occur. These parameters are independent of those
+used to configure module names.
+.. warning:: Modifying the installation hash length or path scheme after
+             packages have been installed will prevent Spack from being
+             able to find the old installation directories.
 --------------------
 ``build_stage``

@@ -125,8 +128,6 @@ are stored in ``$spack/var/spack/cache``. These are stored indefinitely
 by default. Can be purged with :ref:`spack clean --downloads
 <cmd-spack-clean>`.
-.. _Misc Cache:
 --------------------
 ``misc_cache``
 --------------------

@@ -144,26 +145,6 @@ hosts when making ``ssl`` connections. Set to ``false`` to disable, and
 tools like ``curl`` will use their ``--insecure`` options. Disabling
 this can expose you to attacks. Use at your own risk.
---------------------
-``ssl_certs``
---------------------
-Path to custom certificats for SSL verification. The value can be a
-filesytem path, or an environment variable that expands to an absolute file path.
-The default value is set to the environment variable ``SSL_CERT_FILE``
-to use the same syntax used by many other applications that automatically
-detect custom certificates.
-When ``url_fetch_method:curl`` the ``config:ssl_certs`` should resolve to
-a single file. Spack will then set the environment variable ``CURL_CA_BUNDLE``
-in the subprocess calling ``curl``. If additional ``curl`` arguments are required,
-they can be set in the config, e.g. ``url_fetch_method:'curl -k -q'``.
-If ``url_fetch_method:urllib`` then files and directories are supported i.e.
-``config:ssl_certs:$SSL_CERT_FILE`` or ``config:ssl_certs:$SSL_CERT_DIR``
-will work.
-In all cases the expanded path must be absolute for Spack to use the certificates.
-Certificates relative to an environment can be created by prepending the path variable
-with the Spack configuration variable``$env``.
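The removed ``ssl_certs`` text above describes the key only in prose; a minimal sketch of how it would sit in ``config.yaml``, using the environment-variable default named in that text, might be:

.. code-block:: yaml

   config:
     ssl_certs: $SSL_CERT_FILE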
 --------------------
 ``checksum``
 --------------------

@@ -241,7 +222,7 @@ and location. (See the *Configuration settings* section of ``man
 ccache`` to learn more about the default settings and how to change
 them). Please note that we currently disable ccache's ``hash_dir``
 feature to avoid an issue with the stage directory (see
-https://github.com/spack/spack/pull/3761#issuecomment-294352232).
+https://github.com/LLNL/spack/pull/3761#issuecomment-294352232).
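The ``ccache`` behaviour this hunk refers to is controlled by a single boolean under ``config``; a minimal sketch of enabling it (illustrative, not part of the diff) would be:

.. code-block:: yaml

   config:
     ccache: true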
 -----------------------
 ``shared_linking:type``

@@ -311,78 +292,14 @@ It is also worth noting that:
      non_bindable_shared_objects = ["libinterface.so"]
 ----------------------
-``install_status``
+``terminal_title``
 ----------------------
-When set to ``true``, Spack will show information about its current progress
-as well as the current and total package numbers. Progress is shown both
-in the terminal title and inline. Setting it to ``false`` will not show any
-progress information.
+By setting this option to ``true``, Spack will update the terminal's title to
+provide information about its current progress as well as the current and
+total package numbers.
 To work properly, this requires your terminal to reset its title after
 Spack has finished its work, otherwise Spack's status information will
 remain in the terminal's title indefinitely. Most terminals should already
 be set up this way and clear Spack's status information.
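Whichever key applies to your side of this change (``install_status`` in the newer docs on the left, ``terminal_title`` in the older ones on the right), both are described as plain booleans under ``config``; a minimal sketch assuming that layout:

.. code-block:: yaml

   config:
     install_status: true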
------------
-``aliases``
------------
-Aliases can be used to define new Spack commands. They can be either shortcuts
-for longer commands or include specific arguments for convenience. For instance,
-if users want to use ``spack install``'s ``-v`` argument all the time, they can
-create a new alias called ``inst`` that will always call ``install -v``:
-.. code-block:: yaml
-   aliases:
-     inst: install -v
--------------------------------
-``concretization_cache:enable``
--------------------------------
-When set to ``true``, Spack will utilize a cache of solver outputs from
-successful concretization runs. When enabled, Spack will check the concretization
-cache prior to running the solver. If a previous request to solve a given
-problem is present in the cache, Spack will load the concrete specs and other
-solver data from the cache rather than running the solver. Specs not previously
-concretized will be added to the cache on a successful solve. The cache additionally
-holds solver statistics, so commands like ``spack solve`` will still return information
-about the run that produced a given solver result.
-This cache is a subcache of the :ref:`Misc Cache` and as such will be cleaned when the Misc
-Cache is cleaned.
-When ``false`` or ommitted, all concretization requests will be performed from scatch
-----------------------------
-``concretization_cache:url``
-----------------------------
-Path to the location where Spack will root the concretization cache. Currently this only supports
-paths on the local filesystem.
-Default location is under the :ref:`Misc Cache` at: ``$misc_cache/concretization``
-------------------------------------
-``concretization_cache:entry_limit``
-------------------------------------
-Sets a limit on the number of concretization results that Spack will cache. The limit is evaluated
-after each concretization run; if Spack has stored more results than the limit allows, the
-oldest concretization results are pruned until 10% of the limit has been removed.
-Setting this value to 0 disables the automatic pruning. It is expected users will be
-responsible for maintaining this cache.
------------------------------------
-``concretization_cache:size_limit``
------------------------------------
-Sets a limit on the size of the concretization cache in bytes. The limit is evaluated
-after each concretization run; if Spack has stored more results than the limit allows, the
-oldest concretization results are pruned until 10% of the limit has been removed.
-Setting this value to 0 disables the automatic pruning. It is expected users will be
-responsible for maintaining this cache.
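The removed sections describe the ``concretization_cache`` keys only in prose; the sketch below shows how they would plausibly combine in ``config.yaml``. The nesting under ``config:`` and the limit values are assumptions inferred from the section titles, not taken from the diff:

.. code-block:: yaml

   config:
     concretization_cache:
       enable: true
       entry_limit: 1000      # illustrative; 0 disables automatic pruning
       size_limit: 300000000  # illustrative, in bytes; 0 disables automatic pruning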


@@ -1,4 +1,5 @@
-.. Copyright Spack Project Developers. See COPYRIGHT file for details.
+.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
+   Spack Project Developers. See the top-level COPYRIGHT file for details.
    SPDX-License-Identifier: (Apache-2.0 OR MIT)

@@ -11,18 +12,16 @@ Configuration Files
 Spack has many configuration files. Here is a quick list of them, in
 case you want to skip directly to specific docs:
-* :ref:`packages.yaml <compiler-config>`
+* :ref:`compilers.yaml <compiler-config>`
 * :ref:`concretizer.yaml <concretizer-options>`
 * :ref:`config.yaml <config-yaml>`
-* :ref:`include.yaml <include-yaml>`
 * :ref:`mirrors.yaml <mirrors>`
 * :ref:`modules.yaml <modules>`
-* :ref:`packages.yaml <packages-config>`
+* :ref:`packages.yaml <build-settings>`
 * :ref:`repos.yaml <repositories>`
-You can also add any of these as inline configuration in the YAML
-manifest file (``spack.yaml``) describing an :ref:`environment
-<environment-configuration>`.
+You can also add any of these as inline configuration in ``spack.yaml``
+in an :ref:`environment <environment-configuration>`.
 -----------
 YAML Format

@@ -46,12 +45,6 @@ Each Spack configuration file is nested under a top-level section
 corresponding to its name. So, ``config.yaml`` starts with ``config:``,
 ``mirrors.yaml`` starts with ``mirrors:``, etc.
-.. tip::
-   Validation and autocompletion of Spack config files can be enabled in
-   your editor with the YAML language server. See `spack/schemas
-   <https://github.com/spack/schemas>`_ for more information.
 .. _configuration-scopes:
 --------------------

@@ -79,12 +72,9 @@ are six configuration scopes. From lowest to highest:
    Spack instance per project) or for site-wide settings on a multi-user
    machine (e.g., for a common Spack instance).
-#. **plugin**: Read from a Python project's entry points. Settings here affect
-   all instances of Spack running with the same Python installation. This scope takes higher precedence than site, system, and default scopes.
 #. **user**: Stored in the home directory: ``~/.spack/``. These settings
    affect all instances of Spack and take higher precedence than site,
-   system, plugin, or defaults scopes.
+   system, or defaults scopes.
 #. **custom**: Stored in a custom directory specified by ``--config-scope``.
    If multiple scopes are listed on the command line, they are ordered

@@ -101,7 +91,7 @@ are six configuration scopes. From lowest to highest:
 precedence over all other scopes.
 Each configuration directory may contain several configuration files,
-such as ``config.yaml``, ``packages.yaml``, or ``mirrors.yaml``. When
+such as ``config.yaml``, ``compilers.yaml``, or ``mirrors.yaml``. When
 configurations conflict, settings from higher-precedence scopes override
 lower-precedence settings.

@@ -205,45 +195,6 @@ with MPICH. You can create different configuration scopes for use with
      mpi: [mpich]
-.. _plugin-scopes:
-^^^^^^^^^^^^^
-Plugin scopes
-^^^^^^^^^^^^^
-.. note::
-   Python version >= 3.8 is required to enable plugin configuration.
-Spack can be made aware of configuration scopes that are installed as part of a python package. To do so, register a function that returns the scope's path to the ``"spack.config"`` entry point. Consider the Python package ``my_package`` that includes Spack configurations:
-.. code-block:: console
-   my-package/
-   ├── src
-   │   ├── my_package
-   │   │   ├── __init__.py
-   │   │   └── spack/
-   │   │   │   └── config.yaml
-   └── pyproject.toml
-adding the following to ``my_package``'s ``pyproject.toml`` will make ``my_package``'s ``spack/`` configurations visible to Spack when ``my_package`` is installed:
-.. code-block:: toml
-   [project.entry_points."spack.config"]
-   my_package = "my_package:get_config_path"
-The function ``my_package.get_extension_path`` in ``my_package/__init__.py`` might look like
-.. code-block:: python
-   import importlib.resources
-   def get_config_path():
-       dirname = importlib.resources.files("my_package").joinpath("spack")
-       if dirname.exists():
-           return str(dirname)
 .. _platform-scopes:
 ------------------------

@@ -276,9 +227,6 @@ You can get the name to use for ``<platform>`` by running ``spack arch
 --platform``. The system config scope has a ``<platform>`` section for
 sites at which ``/etc`` is mounted on multiple heterogeneous machines.
-.. _config-scope-precedence:
 ----------------
 Scope Precedence
 ----------------

@@ -287,17 +235,10 @@ When spack queries for configuration parameters, it searches in
 higher-precedence scopes first. So, settings in a higher-precedence file
 can override those with the same key in a lower-precedence one. For
 list-valued settings, Spack *prepends* higher-precedence settings to
-lower-precedence settings. Completely ignoring lower-precedence configuration
+lower-precedence settings. Completely ignoring higher-level configuration
 options is supported with the ``::`` notation for keys (see
 :ref:`config-overrides` below).
-There are also special notations for string concatenation and precendense override:
-* ``+:`` will force *prepending* strings or lists. For lists, this is the default behavior.
-* ``-:`` works similarly, but for *appending* values.
-:ref:`config-prepend-append`
 ^^^^^^^^^^^
 Simple keys
 ^^^^^^^^^^^

@@ -338,47 +279,6 @@ command:
      - ~/.spack/stage
-.. _config-prepend-append:
-^^^^^^^^^^^^^^^^^^^^
-String Concatenation
-^^^^^^^^^^^^^^^^^^^^
-Above, the user ``config.yaml`` *completely* overrides specific settings in the
-default ``config.yaml``. Sometimes, it is useful to add a suffix/prefix
-to a path or name. To do this, you can use the ``-:`` notation for *append*
-string concatenation at the end of a key in a configuration file. For example:
-.. code-block:: yaml
-   :emphasize-lines: 1
-   :caption: ~/.spack/config.yaml
-   config:
-     install_tree-: /my/custom/suffix/
-Spack will then append to the lower-precedence configuration under the
-``install_tree-:`` section:
-.. code-block:: console
-   $ spack config get config
-   config:
-     install_tree: /some/other/directory/my/custom/suffix
-     build_stage:
-       - $tempdir/$user/spack-stage
-       - ~/.spack/stage
-Similarly, ``+:`` can be used to *prepend* to a path or name:
-.. code-block:: yaml
-   :emphasize-lines: 1
-   :caption: ~/.spack/config.yaml
-   config:
-     install_tree+: /my/custom/suffix/
 .. _config-overrides:
 ^^^^^^^^^^^^^^^^^^^^^^^^^^

@@ -494,7 +394,7 @@ are indicated at the start of the path with ``~`` or ``~user``.
 Spack-specific variables
 ^^^^^^^^^^^^^^^^^^^^^^^^
-Spack understands over a dozen special variables. These are:
+Spack understands several special variables. These are:
 * ``$env``: name of the currently active :ref:`environment <environments>`
 * ``$spack``: path to the prefix of this Spack installation

@@ -517,7 +417,6 @@ Spack understands over a dozen special variables. These are:
 * ``$target_family``. The target family for the current host, as
   detected by ArchSpec. E.g. ``x86_64`` or ``aarch64``.
 * ``$date``: the current date in the format YYYY-MM-DD
-* ``$spack_short_version``: the Spack version truncated to the first components.
 Note that, as with shell variables, you can write these as ``$varname``
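To make the variable list concrete, here is a small ``config.yaml`` sketch combining two variables that already appear in the hunks above (``$tempdir``/``$user`` in ``build_stage`` and ``$spack`` in the download cache path); the ``source_cache`` key itself is an assumption added for illustration rather than something quoted in this diff:

.. code-block:: yaml

   config:
     build_stage:
       - $tempdir/$user/spack-stage
     source_cache: $spack/var/spack/cache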

Some files were not shown because too many files have changed in this diff.